\section{Introduction} \label{sec:introduction} In the past seven years, tremendous progress in understanding the cosmic gamma-ray bursts (GRBs) has been achieved from the detections of long-lived GRB embers (or afterglows) in low frequency bands (for reviews see van Paradijs, Kouveliotou, \& Wijers 2000; Cheng \& Lu 2001; M\'{e}sz\'{a}ros\ 2002; Zhang \& M\'{e}sz\'{a}ros\ 2004; Piran 2004). The simple standard shock model (M\'{e}sz\'{a}ros\ \& Rees 1997; Sari, Piran, \& Narayan 1998) responsible for the afterglow has been successfully established (Wijers, M\'{e}sz\'{a}ros, \& Rees 1997; Vietri 1997; Waxman 1997; Galama et al. 1998). As more and more afterglows are detected, the energetics of GRB remnants and the shock microphysics have been inferred within the context of the standard model. Although the standard model is in rough agreement with afterglow observations, problems still exist, since the standard scenario is oversimplified in at least two aspects. First, in the standard afterglow model the relativistic shock is usually assumed to be quasi-adiabatic. However, the shock may in fact be partially radiative. Fits to observed afterglows reveal that the shock imparts a near-equipartition amount of energy to the electrons, which is responsible for both the afterglow emission and the shock energy loss (Panaitescu \& Kumar 2001, 2002). The temporal evolution of the shock energy would affect the estimation of the GRB energetics from late time afterglows, as well as the profiles of afterglow light curves (Lloyd-Ronning \& Zhang 2004; B\"{o}ttcher \& Dermer 2000). Second, multi-wavelength fits to several afterglows also indicate that the post-shock energy density imparted to electrons is statistically much larger than that of the magnetic fields (Panaitescu \& Kumar 2001). This implies that inverse Compton (IC) scattering plays an important role in GRB afterglows. The IC scattering has two effects.
One is to enhance the cooling rate of the shock-accelerated electrons and hence delay the transition from the early fast-cooling phase to the late slow-cooling phase, which in turn influences the observed flux density. The other is to produce a high energy spectral component, typically above the soft X-ray band. These IC effects have been taken into account by numerous authors (Waxman 1997; Wei \& Lu 1998, 2000; Panaitescu \& Kumar 2000; Dermer, B\"{o}ttcher, \& Chiang 2000; Sari \& Esin 2001; Bj\"{o}rnsson 2001; Wang, Dai, \& Lu 2001; Zhang \& M\'{e}sz\'{a}ros\ 2001; Li, Dai, \& Lu 2002). As for the circum-burst environment, the standard model once assumed that the surroundings of GRBs are the homogeneous interstellar medium (ISM). Over the past several years, a lot of evidence has been collected linking GRBs to the core collapse of massive stars (Woosley 1993; Paczy\'{n}ski 1998). The most important evidence came from the direct association between GRB $030329$ and the supernova SN $2003$dh (Stanek et al. 2003; Hjorth et al. 2003), as well as the previous tentative association between GRB $980425$ and SN $1998$bw (Kulkarni et al. 1998; Galama et al. 1998). These associated supernovae have been confirmed to be type Ib/c SNe, whose progenitors are commonly recognized as massive Wolf-Rayet stars. Throughout their lives, these massive progenitors eject their envelope material into the surroundings through line-driven winds, and stellar wind environments are thus formed. This means that the circum-burst medium for GRB afterglows may be the stellar wind (Dai \& Lu 1998; M\'{e}sz\'{a}ros, Rees, \& Wijers 1998; Chevalier \& Li 1999, 2000; Panaitescu \& Kumar 2000). In this paper, we study the afterglow properties of realistic GRB shocks, considering the effect of energy losses. The circum-burst environment is assumed to be either the ISM type or the stellar wind type.
We present an analytical solution for the realistic blast wave during the fast-cooling phase of GRB afterglows in \S \ref{sec:fastcooling}. This semi-radiative hydrodynamics is applied to the late slow-cooling phase with reasonable arguments in \S \ref{sec:slowcooling}. Constraints on the IC components in the soft X-ray afterglows are given in both sections. In \S \ref{sec:lightcurves} we illustrate typical analytical light curves for the realistic model in detail. Conclusions and discussion are presented in \S \ref{sec:conclusion}. \section{The early fast-cooling phase} \label{sec:fastcooling} The realistic model for GRB remnants has been extensively investigated in the past few years (Huang, Dai, \& Lu 1999; Huang et al. 2000). It has been shown that this model is correct for both adiabatic and radiative fireballs, and in both ultra-relativistic and non-relativistic phases. The basic hydrodynamic equation of this model can be derived as follows. In the fixed frame, which is at rest with respect to the circum-burst environment, the total kinetic energy of the fireball is $E_{\rm{K}}=(\gamma-1)(M_{\rm{ej}}+M_{\rm{sw}})c^{2}+(1-\epsilon)\gamma U$, where $\gamma$ is the Lorentz factor of the blast wave, $M_{\rm{ej}}$ is the initial mass of the blast wave ejected from the central engine, $M_{\rm{sw}}$ is the mass of the swept-up ambient medium, $c$ is the speed of light, and $\epsilon$ is the total radiation efficiency (Huang et al. 1999). In the comoving frame of the blast wave, the total internal energy instantaneously heated by the shock is $U=(\gamma-1)M_{\rm{sw}}c^2$, which follows from the relativistic jump conditions (Blandford \& McKee 1976).
The differential loss of the kinetic energy $E_{\rm{K}}$, when the blast wave sweeps up an infinitesimal mass $\rm{d}\it{M_{\rm{sw}}}$, can be formulated as \begin{equation} \rm{d}[(\gamma-1)(\textit{M}_{\rm{ej}}+\textit{M}_{\rm{sw}})\textit{c}^{2}+(1-\epsilon)\gamma \textit{U}]=-\epsilon\gamma(\gamma-1)\rm{d}\textit{M}_{\rm{sw}}\textit{c}^{2}. \end{equation} Assuming a constant $\epsilon$ and inserting the expression for $U$, it is then easy to obtain the hydrodynamic equation of the realistic model (Huang et al. 1999, 2000) \begin{equation} \frac{\rm{d}\gamma}{\rm{d}\it{M_{\rm{sw}}}}=-\frac{\gamma^2-1}{M_{\rm{ej}}+\epsilon M_{\rm{sw}}+2(1-\epsilon)\gamma M_{\rm{sw}}}. \label{eqn:hydro} \end{equation} Feng et al. (2002) relaxed the assumption of a constant $\epsilon$, and found that the results differ little from the above equation. As we show later, the epoch of a constant radiation efficiency will not end at the transition from the fast cooling phase to the slow cooling phase of the fireball evolution. In fact, it lasts for a time much longer than that transition time, because the radiation efficiency of the shock-accelerated electrons in the slow cooling phase decreases very slowly for typical values of the electron energy distribution index, e.g. $p\approx2.2$, as indicated both by observations of afterglow spectra and by the shock acceleration theory (Achterberg et al. 2001). Throughout this paper, we focus on the early epoch when the afterglow of a relativistic jet is spherical-like, which requires the Lorentz factor of the jet ($\gamma$) to be larger than the inverse of the half-opening angle. The swept-up mass is given by $M_{\rm{sw}}=\displaystyle\frac{4\pi}{3-k}m_{p}nR^{3}$, where $m_{p}$ is the proton mass, and the ambient density is \begin{equation} n=AR^{-k}, \label{eqn:density} \end{equation} where $k=0$ with $n=A=\rm{const}$ for the ISM case, and $k=2$ with $A=3\times 10^{35}A_{\ast}$ cm$^{-1}$ for the stellar wind case.
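As a sanity check, equation (\ref{eqn:hydro}) can be integrated numerically; deep in the deceleration regime, where the term $2(1-\epsilon)\gamma M_{\rm{sw}}$ dominates the denominator, the solution should approach $\gamma\propto M_{\rm{sw}}^{-1/[2(1-\epsilon)]}$. The following sketch (illustrative parameter values, masses in units of $M_{\rm{ej}}$) verifies this behavior:

```python
import math

def dgamma_dM(gamma, M_sw, M_ej, eps):
    # Hydrodynamic equation of the realistic model (Huang et al. 1999, 2000)
    return -(gamma**2 - 1) / (M_ej + eps * M_sw + 2 * (1 - eps) * gamma * M_sw)

eps, M_ej = 1.0 / 3.0, 1.0          # illustrative values
gamma, M = 100.0, 1.0e3 * M_ej      # start deep in the deceleration regime

g0, M0 = gamma, M
while gamma > 20.0:                 # stay ultra-relativistic
    dM = 1.0e-3 * M                 # small logarithmic steps in M_sw
    gamma += dgamma_dM(gamma, M, M_ej, eps) * dM
    M += dM

slope = math.log(gamma / g0) / math.log(M / M0)
print(slope)                        # close to -1/[2(1-eps)] = -0.75
```

With $M_{\rm{sw}}\propto R^{3-k}$, this power law is equivalent to the self-similar scaling $\gamma^2\propto R^{-m}$ quoted below.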
Such a blast wave begins to decelerate at the radius $R_{\rm{dec}}$, when the swept-up mass reaches $M_{\rm{ej}}/\gamma_0$, where $\gamma_0$ is the initial Lorentz factor. The corresponding deceleration time measured by an observer is $t_{\rm{dec}}=R_{\rm{dec}}(1+z)/2\gamma_0^2 c$, where $z$ is the cosmological redshift of the GRB. We neglect the effect of reverse shocks in the early afterglow for simplicity. At early times, electrons cool rapidly. The blast wave is therefore semi-radiative with a constant radiation efficiency $\epsilon=\epsilon_{e}$. The typical energy equipartition factor of electrons is $\epsilon_{e}\sim 1/3$. Equation (\ref{eqn:hydro}) can then be analytically integrated by neglecting the first two terms in the denominator on the right-hand side when $t>t_{\rm{dec}}$. The scaling laws for the hydrodynamics are $\gamma^2\propto R^{-m}$ and $R\propto t^{1/(m+1)}$, where the hydrodynamic self-similarity index \begin{equation} m=\displaystyle\frac{3-k}{1-\epsilon}, \label{eqn:m} \end{equation} and $t$ is the observed time since the burst. The $\epsilon$ term in the denominator in equation (\ref{eqn:m}) shows the deviation of the hydrodynamics of a semi-radiative blast wave from that of an adiabatic one of Blandford \& McKee (1976). Since the isotropic-equivalent energy $E$ is proportional to $M_{\rm{sw}}\gamma^2$, it decreases as \begin{equation} E=E_{\rm{dec}}(\frac{t}{t_{\rm{dec}}})^{-(3-k)\epsilon/(4-k-\epsilon)}, \label{eqn:energy} \end{equation} where $E_{\rm{dec}}$ is the initial isotropic energy at $R_{\rm{dec}}$. The minimum Lorentz factor of the shock-accelerated electrons is evaluated by \begin{equation} \gamma_{e,\rm{min}}=\frac{\epsilon_{e}}{6}\frac{m_{p}}{m_{e}}\zeta_{1/6}\gamma_{0}(\frac{t}{t_{\rm{dec}}})^{-m/[2(m+1)]}, \label{eqn:gm} \end{equation} where $\zeta_{1/6}=6\displaystyle\frac{p-2}{p-1}$ and $m_{e}$ is the electron mass.
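The exponent in equation (\ref{eqn:energy}) follows directly from the scaling laws: $E\propto M_{\rm{sw}}\gamma^2\propto R^{3-k-m}$ together with $R\propto t^{1/(m+1)}$. A minimal exact-arithmetic check (an illustrative sketch using Python's `fractions` module):

```python
from fractions import Fraction as F

for k in (0, 2):                       # ISM and wind cases
    for eps in (F(1, 10), F(1, 3)):    # sample radiation efficiencies
        m = F(3 - k) / (1 - eps)       # self-similarity index, eq. (m)
        # E ∝ M_sw γ² ∝ R^{3-k} · R^{-m}, with R ∝ t^{1/(m+1)}
        x = (3 - k - m) / (m + 1)
        # must reproduce the quoted index -(3-k)ε/(4-k-ε)
        assert x == -F(3 - k) * eps / (4 - k - eps)
print("energy-decay exponent verified")
```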
Conventionally, assuming that a constant fraction $\epsilon_{B}$ of the post-shock thermal energy density is contained in the post-shock magnetic fields, the magnetic field intensity in the comoving frame is \begin{equation} B=B_{\rm{dec}}(\frac{t}{t_{\rm{dec}}})^{-(m+k)/[2(m+1)]}, \label{eqn:B} \end{equation} where the initial value is $B_{\rm{dec}}=(32\pi \epsilon_{B}A R_{\rm{dec}}^{-k}m_{p}c^2)^{1/2}\gamma_{0}$. The maximum Lorentz factor of the shock-accelerated electrons, $\gamma_{e,\rm{max}}\approx 10^{8}(B/\rm{G})^{-1/2}$, is obtained by assuming that the acceleration timescale, which is typically the gyration period in the magnetic field, equals the hydrodynamical timescale. The cooling Lorentz factor of electrons is determined by considering both synchrotron radiation and inverse Compton scattering, i.e. (Sari et al. 1998; Panaitescu \& Kumar 2000) \begin{equation} \gamma_{c}=\frac{6\pi m_{e}c(1+z)}{\sigma_{\rm{T}}B_{\rm{dec}}^2\gamma_0 t_{\rm{dec}}(1+Y)}(\frac{t}{t_{\rm{dec}}})^{(m+2k-2)/[2(m+1)]}, \label{eqn:fc:gc} \end{equation} where $\sigma_{\rm{T}}$ is the Thomson cross section, and the Compton parameter $Y=(\displaystyle\frac{\epsilon_{e}}{\epsilon_{B}})^{1/2}\gg 1$ is a constant during the fast cooling phase (Sari \& Esin 2001). The transition from the fast-cooling phase to the slow-cooling phase happens at $t_{cm}$, when $\gamma_{c}=\gamma_{e,\rm{min}}$. Combining equations (\ref{eqn:m})--(\ref{eqn:fc:gc}) with the definition of the deceleration time, we obtain \begin{equation} t_{cm}=\frac{(1+z)\sigma_{\rm{T}}^{1/2}}{2c}A\sigma_{\rm{T}}^{(3-k)/2}[\frac{(3-k)E_{cm}}{4\pi m_{p}c^2}]^{(2-k)/2}(\frac{2m_{p}}{3m_{e}}\epsilon_{e}^{3/4}\epsilon_{B}^{1/4}\zeta_{1/6}^{1/2})^{4-k}. \label{eqn:t0-1} \end{equation} Here the subscript ``cm'' denotes the physical quantity at the time when $\gamma_{c}=\gamma_{e,\rm{min}}$.
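Equation (\ref{eqn:t0-1}) can be checked by direct evaluation in cgs units. A sketch with standard constants, fiducial parameter values, and $z=0$:

```python
import math

# cgs constants
m_p, m_e = 1.6726e-24, 9.1094e-28     # proton, electron mass [g]
c, sigma_T = 2.9979e10, 6.6524e-25    # speed of light, Thomson cross section
DAY = 86400.0

def t_cm(k, A, eps_e=10**-0.5, eps_B=10**-2.5, zeta=1.0, E=1e53, z=0.0):
    """Fast-to-slow cooling transition time [s]; k=0: ISM, k=2: wind."""
    X = 2 * m_p / (3 * m_e) * eps_e**0.75 * eps_B**0.25 * zeta**0.5
    return ((1 + z) * sigma_T**0.5 / (2 * c) * A * sigma_T**((3 - k) / 2)
            * ((3 - k) * E / (4 * math.pi * m_p * c**2))**((2 - k) / 2)
            * X**(4 - k))

print(t_cm(0, A=1.0) / DAY)    # ISM, n = 1:    ≈ 0.30 day
print(t_cm(2, A=3e35) / DAY)   # wind, A* = 1:  ≈ 0.58 day
```

Note that for $k=2$ the energy term drops out, so $t_{cm}$ in the wind case is independent of $E_{cm}$.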
Throughout this paper we are especially interested in the usual case of $\epsilon_{e}\gg\epsilon_{B}$, in which the inverse Compton scattering has a dominant effect on the evaluation of $\gamma_{c}$. We denote the isotropic energy at $t_{cm}$ as $E_{cm}=E(t_{cm})$. Note that $t_{cm}$ is independent of the radiation efficiency $\epsilon$ and the initial Lorentz factor $\gamma_{0}$. The value of $t_{cm}$ can be further calculated to be \begin{equation} t_{cm}=\left\{ \begin{array}{l} 0.30(1+z)\epsilon_{e,-0.5}^3 \epsilon_{B,-2.5}\zeta_{1/6}^{2}E_{cm,53}n\,\,\rm{day}, \phantom{sss} \rm{ISM,} \\ 0.58(1+z)\epsilon_{e,-0.5}^{3/2} \epsilon_{B,-2.5}^{1/2}\zeta_{1/6}A_{\ast}\,\,\rm{day}, \;\phantom{sssssss} \rm{wind.}\\ \end{array} \right. \label{eqn:t0-2} \end{equation} Here we adopt the conventional definition of $Q=Q_{x}10^{x}$. The radius of the blast wave at $t_{cm}$ is \begin{equation} R_{cm}=\frac{2m_{p}}{3m_{e}}\epsilon_{e}^{3/4}\epsilon_{B}^{1/4}\zeta_{1/6}^{1/2}[\frac{(3-k)E_{cm}}{4\pi m_{p}c^2}]^{1/2}\sigma_{\rm{T}}^{1/2}, \label{eqn:Radius-1} \end{equation} or equivalently \begin{equation} R_{cm}=\left\{ \begin{array}{l} 3.98\times10^{17}\epsilon_{e,-0.5}^{3/4} \epsilon_{B,-2.5}^{1/4}\zeta_{1/6}^{1/2}E_{cm,53}^{1/2}\,\,\rm{cm}, \phantom{sss} \rm{ISM,} \\ 2.30\times10^{17}\epsilon_{e,-0.5}^{3/4} \epsilon_{B,-2.5}^{1/4}\zeta_{1/6}^{1/2}E_{cm,53}^{1/2}\,\,\rm{cm}, \phantom{sss} \rm{wind.} \\ \end{array} \right. \label{eqn:Radius-2} \end{equation} The evolution of the radius is $R=R_{cm}\displaystyle(\frac{t}{t_{cm}})^{(1-\epsilon)/(4-k-\epsilon)}$. 
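The coefficients in equation (\ref{eqn:Radius-2}) can likewise be reproduced by evaluating equation (\ref{eqn:Radius-1}) in cgs units (an illustrative sketch, fiducial parameters):

```python
import math

# cgs constants
m_p, m_e = 1.6726e-24, 9.1094e-28
c, sigma_T = 2.9979e10, 6.6524e-25

def R_cm(k, eps_e=10**-0.5, eps_B=10**-2.5, zeta=1.0, E=1e53):
    """Blast-wave radius at t_cm in cm (k=0: ISM, k=2: wind)."""
    return (2 * m_p / (3 * m_e) * eps_e**0.75 * eps_B**0.25 * zeta**0.5
            * math.sqrt((3 - k) * E / (4 * math.pi * m_p * c**2))
            * math.sqrt(sigma_T))

print('%.2e %.2e' % (R_cm(0), R_cm(2)))  # ≈ 3.98e+17 2.30e+17 (cm)
```

Interestingly, $R_{cm}$ depends on the ambient medium only through the factor $(3-k)^{1/2}$, not on $n$ or $A_{\ast}$ explicitly.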
The magnetic field at $t_{cm}$ is \begin{equation} B_{cm}=3(\frac{3}{2\pi})^{1/4}\frac{m_{e}^{2}c^{4}}{e^{3}}\frac{3m_{e}}{2m_{p}}\epsilon_{e}^{-9/8}\epsilon_{B}^{1/8}\zeta_{1/6}^{-3/4}[\frac{(3-k)E_{cm}}{4\pi m_{p}c^2}]^{-1/4}, \label{eqn:B-1} \end{equation} or numerically \begin{equation} B_{cm}=\left\{ \begin{array}{l} 0.35\epsilon_{e,-0.5}^{-9/8} \epsilon_{B,-2.5}^{1/8}\zeta_{1/6}^{-3/4}E_{cm,53}^{-1/4}\,\,\rm{G}, \phantom{sss} \rm{ISM,} \\ 0.46\epsilon_{e,-0.5}^{-9/8} \epsilon_{B,-2.5}^{1/8}\zeta_{1/6}^{-3/4}E_{cm,53}^{-1/4}\,\,\rm{G}, \phantom{sss} \rm{wind.} \\ \end{array} \right. \label{eqn:B-2} \end{equation} The magnetic field evolves as $B=B_{cm}\displaystyle(\frac{t}{t_{cm}})^{-(3-k\epsilon)/[2(4-k-\epsilon)]}$. \subsection{Properties of the synchrotron radiation} The characteristic synchrotron frequencies corresponding to the $\gamma_{c}$, $\gamma_{e,\rm{min}}$ and $\gamma_{e,\rm{max}}$ electrons are denoted as $\nu_{c},\nu_{m}$ and $\nu_{M}$ respectively. They can be easily calculated according to $\nu =\displaystyle\gamma\gamma_{e}^2\frac{eB}{2\pi(1+z)m_{e}c}$, with $e$ being the electron charge. The peak flux density of the afterglow is $F_{\nu,\rm{max}}=\displaystyle\frac{N_{\rm{e}}P_{\nu,\rm{max}}}{4\pi D_{\rm{L}}^{2}}(1+z)$, where $N_{\rm{e}}=\displaystyle\frac{4\pi}{3-k}AR^{3-k}$ is the total number of shock-accelerated electrons, $P_{\nu,\rm{max}}=\displaystyle\frac{\sigma_{\rm{T}}m_{e}c^2}{3e}\gamma B$ is the peak spectral power of a single electron and $D_{\rm{L}}$ is the luminosity distance (Sari et al. 1998). This peak flux density is at the cooling frequency $\nu_{c}$ in the fast-cooling phase, and the flux density at $\nu_{c}$ would be reduced if the synchrotron-self-absorption (SSA) frequency $\nu_{a}$ is above $\nu_{c}$. It is convenient to re-scale the physical quantities to the values at the time $t_{cm}$, because physical variables such as $\gamma_{c}$ and $\gamma_{e,\rm{min}}$ at $t_{cm}$ are independent of $\epsilon$.
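The numerical coefficients in equation (\ref{eqn:B-2}) above can also be reproduced directly from equation (\ref{eqn:B-1}) in cgs units (an illustrative sketch, fiducial parameters):

```python
import math

# cgs constants
m_p, m_e = 1.6726e-24, 9.1094e-28     # proton, electron mass [g]
c, e = 2.9979e10, 4.8032e-10          # speed of light, electron charge [esu]

def B_cm(k, eps_e=10**-0.5, eps_B=10**-2.5, zeta=1.0, E=1e53):
    """Comoving magnetic field at t_cm in Gauss (k=0: ISM, k=2: wind)."""
    return (3 * (3 / (2 * math.pi))**0.25 * m_e**2 * c**4 / e**3
            * 3 * m_e / (2 * m_p) * eps_e**-1.125 * eps_B**0.125
            * zeta**-0.75
            * ((3 - k) * E / (4 * math.pi * m_p * c**2))**-0.25)

print('%.2f %.2f' % (B_cm(0), B_cm(2)))  # ≈ 0.35 0.46 (G)
```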
The minimum electron Lorentz factor equals the cooling Lorentz factor at $t_{cm}$, $\gamma_{e,cm}\equiv\gamma_{e,\rm{min}}(t_{cm})=\gamma_{c}(t_{cm})$, \begin{equation} \gamma_{e,cm}=\displaystyle\frac{1}{4}(\frac{2m_{p}}{3m_{e}})^{(k-1)/2}\epsilon_{e}^{(3k-1)/8}\epsilon_{B}^{(k-3)/8} \zeta_{1/6}^{(k+1)/4}(A\sigma_{\rm{T}}^{(3-k)/2})^{-1/2}[\frac{(3-k)E_{cm}}{4\pi m_{p}c^{2}}]^{(k-1)/4}, \label{eqn:gamma_cm-1} \end{equation} which can be evaluated to be \begin{equation} \gamma_{e,cm}=\left\{ \begin{array}{l} 1536\epsilon_{e,-0.5}^{-1/8} \epsilon_{B,-2.5}^{-3/8}\zeta_{1/6}^{1/4}E_{cm,53}^{-1/4}n^{-1/2}, \phantom{sss} \rm{ISM,} \\ 849\epsilon_{e,-0.5}^{5/8} \epsilon_{B,-2.5}^{-1/8}\zeta_{1/6}^{3/4}E_{cm,53}^{1/4}A_{\ast}^{-1/2}, \phantom{ssss} \rm{wind.}\\ \end{array} \right. \label{eqn:gamma_cm-2} \end{equation} The typical frequency $\nu_{m}$ also equals the cooling frequency $\nu_{c}$ at $t_{cm}$, i.e. $\nu_{cm}\equiv\nu_{m}(t_{cm})=\nu_{c}(t_{cm})$, which is \begin{eqnarray} \nu_{cm}=&&\displaystyle\frac{1}{16}(\frac{3}{2\pi})^{5/4}\frac{m_{e}c^3}{e^2(1+z)}(\frac{2m_{p}}{3m_{e}})^{(3k-7)/2}\epsilon_{e}^{(9k-20)/8}\epsilon_{B}^{(3k-8)/8} \zeta_{1/6}^{(3k-4)/4}\nonumber\\ &&\times(A\sigma_{\rm{T}}^{(3-k)/2})^{-3/2}[\frac{(3-k)E_{cm}}{4\pi m_{p}c^{2}}]^{(3k-4)/4}, \label{eqn:nu_0-1} \end{eqnarray} and can be further deduced to be \begin{equation} \nu_{cm}=\left\{ \begin{array}{l} 3.65\times 10^{13}(1+z)^{-1}\epsilon_{e,-0.5}^{-5/2}\epsilon_{B,-2.5}^{-1}\zeta_{1/6}^{-1}E_{cm,53}^{-1}n^{-3/2}\,\,\rm{Hz}, \phantom{sss} \rm{ISM,} \\ 8.10\times 10^{12}(1+z)^{-1}\epsilon_{e,-0.5}^{-1/4}\epsilon_{B,-2.5}^{-1/4}\zeta_{1/6}^{1/2}E_{cm,53}^{1/2}A_{\ast}^{-3/2}\,\,\rm{Hz}, \phantom{ss} \rm{wind.}\\ \end{array} \right.
\label{eqn:nu_0-2} \end{equation} The maximum frequency of the synchrotron radiation at $t_{cm}$ is \begin{equation} \nu_{M}(t_{cm})=\left\{ \begin{array}{l} 4.3\times 10^{25}(1+z)^{-1}\epsilon_{e,-0.5}^{-1/8}\epsilon_{B,-2.5}^{-3/8}\zeta_{1/6}^{1/4}E_{cm,53}^{-1/4}n^{-1/2}\,\,\rm{Hz}, \phantom{ss} \rm{ISM,} \\ 2.4\times 10^{25}(1+z)^{-1}\epsilon_{e,-0.5}^{5/8}\epsilon_{B,-2.5}^{-1/8}\zeta_{1/6}^{3/4}E_{cm,53}^{1/4}A_{\ast}^{-1/2}\,\,\rm{Hz}, \phantom{ss} \rm{wind,}\\ \end{array} \right. \label{eqn:nu_max} \end{equation} which corresponds to $\sim 100$ GeV photons. This ensures that the synchrotron spectrum can be extrapolated to a very high energy band, which will be useful in the next subsection. The peak flux density of the synchrotron radiation at $t_{cm}$ is \begin{eqnarray} F_{\nu,\rm{max}}(t_{cm})=&&\displaystyle(\frac{2\pi}{3})^{3/4}\frac{4m_{e}c^{2}(1+z)}{(3-k)D_{\rm{L}}^{2}}(\frac{3m_{e}}{2m_{p}})^{(k-1)/2}\epsilon_{e}^{-3k/8}\epsilon_{B}^{(4-k)/8} \zeta_{1/6}^{-k/4}\nonumber\\ &&\times(A\sigma_{\rm{T}}^{(3-k)/2})^{1/2}[\frac{(3-k)E_{cm}}{4\pi m_{p}c^{2}}]^{(4-k)/4}, \label{eqn:Fnu_max-1} \end{eqnarray} which is numerically expressed as \begin{equation} F_{\nu,\rm{max}}(t_{cm})=\left\{ \begin{array}{l} 44(1+z)\epsilon_{B,-2.5}^{1/2}E_{cm,53}n^{1/2}D_{\rm{L},28}^{-2}\,\,\rm{mJy}, \;\phantom{ssssssssssss} \rm{ISM,} \\ 104(1+z)\epsilon_{e,-0.5}^{-3/4}\epsilon_{B,-2.5}^{1/4}\zeta_{1/6}^{-1/2}E_{cm,53}^{1/2}A_{\ast}^{1/2}D_{\rm{L},28}^{-2}\,\,\rm{mJy}, \phantom{s} \rm{wind.}\\ \end{array} \right.
\label{eqn:Fnu_max-2} \end{equation} We obtain the temporal evolutions of these characteristic frequencies and the peak flux density during the fast-cooling phase as follows, \begin{eqnarray} &&\nu_{c}=\nu_{cm}(\frac{t}{t_{cm}})^{(3k-4)/[2(m+1)]},\phantom{sssssssss} \nu_{m}=\nu_{cm}(\frac{t}{t_{cm}})^{-(4m+k)/[2(m+1)]},\nonumber\\ &&\nu_{M}=\nu_{M}(t_{cm})(\frac{t}{t_{cm}})^{-m/[2(m+1)]},\phantom{ss} F_{\nu,\rm{max}}=F_{\nu,\rm{max}}(t_{cm})(\frac{t}{t_{cm}})^{(6-3k-2m)/[2(m+1)]}. \label{eqn:fc:spectrum} \end{eqnarray} The synchrotron-self-absorption frequency in the fast cooling phase can be evaluated by \begin{equation} \nu_{a}=\left\{ \begin{array}{l} \nu_{a,<}\equiv\nu_{c}\displaystyle[\frac{c_{0}}{(3-k)}\frac{enR}{B\gamma_{c}^{5}}]^{3/5}, \phantom{ss} \rm{if} \phantom{ss} \nu_{a}<\nu_{c}, \\ \nu_{a,>}\equiv\nu_{c}\displaystyle[\frac{c_{0}}{(3-k)}\frac{enR}{B\gamma_{c}^{5}}]^{1/3}, \phantom{ss} \rm{if} \phantom{ss} \nu_{c}<\nu_{a}<\nu_{m},\\ \end{array} \right. \label{eqn:fc:nu_a-1} \end{equation} where $c_{0}\approx10.4\displaystyle\frac{p+2}{p+2/3}$ (see the appendix of Wu et al. 2003). When $t<t_{cm}$, the distribution of the cooled electrons with $\gamma_{c}<\gamma_{\rm{e}}<\gamma_{e,\rm{min}}$ corresponds to an effective $p=2$, and the resulting coefficient is $c_{0}=15.6$. For $2<p<3$, the value of $c_{0}$ is nearly a constant, $\sim 15$, which is about $3$ times larger than the coefficient in equation (52) of Panaitescu \& Kumar (2000). The SSA frequency can be determined straightforwardly by \begin{equation} \nu_{a}=\rm{min}\{\it{\nu_{a,<},\nu_{a,>}}\} \label{eqn:fc:nu_a-2} \end{equation} without judging whether $\nu_{a}<\nu_{c}$ or not. The case for $\nu_{a}>\nu_{m}$ can be neglected.
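The min prescription of equation (\ref{eqn:fc:nu_a-2}) works because both branches are the same absorption factor raised to different powers. Writing $q$ for the bracketed factor $\frac{c_{0}}{3-k}\frac{enR}{B\gamma_{c}^{5}}$ (a shorthand introduced only for this illustration), a minimal sketch:

```python
def nu_a_fast(nu_c, q):
    # nu_{a,<} = nu_c * q^{3/5} and nu_{a,>} = nu_c * q^{1/3};
    # min() automatically selects the physically consistent branch.
    return min(nu_c * q**0.6, nu_c * q**(1.0 / 3.0))

# q < 1: q^{3/5} < q^{1/3}, so min picks the nu_a < nu_c branch
assert nu_a_fast(1.0, 0.1) == 0.1**0.6
# q > 1: q^{1/3} < q^{3/5}, so min picks the nu_c < nu_a < nu_m branch
assert nu_a_fast(1.0, 10.0) == 10.0**(1.0 / 3.0)
```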
The numerical expression for $\nu_{a,<}$ is \begin{equation} \nu_{a,<}=\left\{ \begin{array}{l} 4.75(1+z)^{-1}\epsilon_{e,-0.5}^{-1}\epsilon_{B,-2.5}^{1/5}\zeta_{1/6}^{-1}E_{cm,53}^{1/5}n^{3/5}(\displaystyle\frac{t}{t_{cm}})^{-(10+8\epsilon)/[5(4-\epsilon)]}\,\,\rm{GHz}, \,\phantom{sss} \rm{ISM,} \\ 20.9(1+z)^{-1}\epsilon_{e,-0.5}^{-19/10}\epsilon_{B,-2.5}^{-1/10}\zeta_{1/6}^{-8/5}E_{cm,53}^{-2/5}A_{\ast}^{3/5}(\displaystyle\frac{t}{t_{cm}})^{-(16-10\epsilon)/[5(2-\epsilon)]}\,\,\rm{GHz}, \phantom{} \rm{wind,}\\ \end{array} \right. \label{eqn:fc:nu_a<} \end{equation} while the expression for $\nu_{a,>}$ is \begin{equation} \nu_{a,>}=\left\{ \begin{array}{l} 2.53\times10^{11}(1+z)^{-1}\epsilon_{e,-0.5}^{-5/3}\epsilon_{B,-2.5}^{-1/3}\zeta_{1/6}^{-1}E_{cm,53}^{-1/3}n^{-1/3}(\displaystyle\frac{t}{t_{cm}})^{-2/(4-\epsilon)}\,\,\rm{Hz}, \phantom{sss} \rm{ISM,} \\ 2.96\times10^{11}(1+z)^{-1}\epsilon_{e,-0.5}^{-7/6}\epsilon_{B,-2.5}^{-1/6}\zeta_{1/6}^{-2/3}A_{\ast}^{-1/3}(\displaystyle\frac{t}{t_{cm}})^{-2/3}\,\,\rm{Hz}, \phantom{ssssssssss} \rm{wind.}\\ \end{array} \right. \label{eqn:fc:nu_a>} \end{equation} For the ISM case, the transition from initially $\nu_{a}=\nu_{a,>}$ to the later $\nu_{a}=\nu_{a,<}$ happens when $\nu_{a}=\nu_{c}$, \begin{equation} t_{ac}=\left\{ \begin{array}{l} 6.03\times10^{-51}\epsilon_{e,-1}^{16.25}\epsilon_{B,-2.5}^{13}E_{cm,53}^{13}n^{22.75}t_{cm}, \phantom{ss} \,\rm{if} \phantom{ss} \epsilon=0.1, \\ 2.67\times10^{-13}\epsilon_{e,-0.5}^{4.85}\epsilon_{B,-2.5}^{3.88}E_{cm,53}^{3.88}n^{6.8}t_{cm}, \phantom{sss} \rm{if} \phantom{ss} \epsilon=0.32,\\ \end{array} \right. \label{eqn:fc:ISM:t_ac} \end{equation} which indicates that the epoch when $\nu_{a}=\nu_{a,>}$ is very short, unless the ISM is very dense, e.g. $n\gtrsim10^2$ (Dai \& Lu 1999, 2000). 
For the stellar wind case, the transition from $\nu_{a}=\nu_{a,>}$ to $\nu_{a}=\nu_{a,<}$ takes place when $\nu_{a}=\nu_{c}$, \begin{equation} t_{ac}=\left\{ \begin{array}{l} 0.14\epsilon_{e,-1}^{-0.80}\epsilon_{B,-2.5}^{0.07}\zeta_{1/6}^{-1.02}E_{cm,53}^{-0.44}A_{\ast}^{1.02}t_{cm}, \phantom{sss} \rm{if} \phantom{ss} \epsilon=0.1, \\ 0.046\epsilon_{e,-0.5}^{-0.85}\epsilon_{B,-2.5}^{0.08}\zeta_{1/6}^{-1.09}E_{cm,53}^{-0.47}A_{\ast}^{1.09}t_{cm}, \;\phantom{s} \rm{if} \phantom{ss} \epsilon=0.32.\\ \end{array} \right. \label{eqn:fc:wind:t_ac} \end{equation} The flux density at the observed frequency $\nu$ from the synchrotron component for $t<t_{ac}$ is \begin{equation} F_{\nu}=\left\{ \begin{array}{l} \displaystyle(\frac{\nu}{\nu_{a}})^{2}(\frac{\nu_{c}}{\nu_{a}})F_{\nu,\rm{max}}\propto t^{(1+k+m)/(m+1)}, \phantom{sssssssssssssssss} \nu<\nu_{c}, \\ \displaystyle(\frac{\nu}{\nu_{a}})^{5/2}(\frac{\nu_{a}}{\nu_{c}})^{-1/2}F_{\nu,\rm{max}}\propto t^{(8+k+4m)/[4(m+1)]}, \phantom{sssssssss} \nu_{c}<\nu<\nu_{a}, \\ \displaystyle(\frac{\nu}{\nu_{c}})^{-1/2}F_{\nu,\rm{max}}\propto t^{(8-3k-4m)/[4(m+1)]}, \phantom{sssssssssssssss} \nu_{a}<\nu<\nu_{m},\\ \displaystyle(\frac{\nu}{\nu_{m}})^{-p/2}(\frac{\nu_{m}}{\nu_{c}})^{-1/2}F_{\nu,\rm{max}}\propto t^{[8-2k-p(4m+k)]/[4(m+1)]}, \phantom{ss}\, \nu_{m}<\nu,\\ \end{array} \right. 
\label{eqn:fc:syn-spectrum-1} \end{equation} while for $t_{ac}<t<t_{cm}$, the flux density is \begin{equation} F_{\nu}=\left\{ \begin{array}{l} \displaystyle(\frac{\nu}{\nu_{a}})^{2}(\frac{\nu_{a}}{\nu_{c}})^{1/3}F_{\nu,\rm{max}}\propto t^{(1+k+m)/(m+1)}, \phantom{sssssssssssssss} \nu<\nu_{a}, \\ \displaystyle(\frac{\nu}{\nu_{c}})^{1/3}F_{\nu,\rm{max}}\propto t^{(11-6k-3m)/[3(m+1)]}, \phantom{ssssssssssssssss} \nu_{a}<\nu<\nu_{c}, \\ \displaystyle(\frac{\nu}{\nu_{c}})^{-1/2}F_{\nu,\rm{max}}\propto t^{(8-3k-4m)/[4(m+1)]}, \,\phantom{sssssssssssssss} \nu_{c}<\nu<\nu_{m},\\ \displaystyle(\frac{\nu}{\nu_{m}})^{-p/2}(\frac{\nu_{m}}{\nu_{c}})^{-1/2}F_{\nu,\rm{max}}\propto t^{[8-2k-p(4m+k)]/[4(m+1)]}, \,\phantom{ss}\, \nu_{m}<\nu.\\ \end{array} \right. \label{eqn:fc:syn-spectrum-2} \end{equation} \subsection{Constraint on the IC component in an X-ray afterglow} The synchrotron-self-Compton (SSC) spectrum is characterized by the characteristic IC frequencies, i.e., $\nu_{a}^{\rm{IC}}\approx 2\gamma_{c}^2\nu_{a}$, $\nu_{m}^{\rm{IC}}\approx 2\gamma_{e,\rm{min}}^2\nu_{m}$ and $\nu_{c}^{\rm{IC}}\approx 2\gamma_{c}^2\nu_{c}$. The IC frequency $\nu_{a}^{\rm{IC}}$ can be directly determined by \begin{equation} \nu_{a}^{\rm{IC}}=\rm{min}\{\it{\nu_{a,<}^{\rm{IC}},\nu_{a,>}^{\rm{IC}}}\}, \label{eqn:fc:nu_a-ic} \end{equation} where $\nu_{a,<}^{\rm{IC}}\approx 2\gamma_{c}^2\nu_{a,<}$ and $\nu_{a,>}^{\rm{IC}}\approx 2\gamma_{c}^2\nu_{a,>}$.
Inserting equations (\ref{eqn:fc:gc}), (\ref{eqn:gamma_cm-2}), (\ref{eqn:fc:nu_a<}) and (\ref{eqn:fc:nu_a>}) into the above equation, we obtain \begin{equation} \nu_{a,<}^{\rm{IC}}=\left\{ \begin{array}{l} 2.24\times10^{16}(1+z)^{-1}\epsilon_{e,-0.5}^{-5/4}\epsilon_{B,-2.5}^{-11/20}\zeta_{1/6}^{-1/2}E_{cm,53}^{-3/10}n^{-2/5}(\displaystyle\frac{t}{t_{cm}})^{-(5-2\epsilon)/[5(4-\epsilon)]}\,\,\rm{Hz}, \phantom{ss} \rm{ISM,} \\ 3.01\times10^{16}(1+z)^{-1}\epsilon_{e,-0.5}^{-13/20}\epsilon_{B,-2.5}^{-7/20}\zeta_{1/6}^{-1/10}E_{cm,53}^{1/10}A_{\ast}^{-2/5}(\displaystyle\frac{t}{t_{cm}})^{-(1+30\epsilon)/[5(2-\epsilon)]}\,\,\rm{Hz}, \; \rm{wind,}\\ \end{array} \right. \label{eqn:fc:nu_a-ic<} \end{equation} while the expression for $\nu_{a,>}^{\rm{IC}}$ is \begin{equation} \nu_{a,>}^{\rm{IC}}=\left\{ \begin{array}{l} 1.19\times10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-23/12}\epsilon_{B,-2.5}^{-13/12}\zeta_{1/6}^{-1/2}E_{cm,53}^{-5/6}n^{-4/3}(\displaystyle\frac{t}{t_{cm}})^{-(1-2\epsilon)/(4-\epsilon)}\,\,\rm{Hz}, \phantom{s} \rm{ISM,} \\ 4.27\times10^{17}(1+z)^{-1}\epsilon_{e,-0.5}^{1/12}\epsilon_{B,-2.5}^{-5/12}\zeta_{1/6}^{5/6}E_{cm,53}^{1/2}A_{\ast}^{-4/3}(\displaystyle\frac{t}{t_{cm}})^{(5-14\epsilon)/[3(2-\epsilon)]}\,\,\rm{Hz}, \phantom{ss} \rm{wind.}\\ \end{array} \right. \label{eqn:fc:nu_a-ic>} \end{equation} As we can see, $\nu_{a}^{\rm{IC}}$ is below the X-ray frequency $\nu\sim10^{18}$ Hz for typical parameters during most of the fast-cooling phase. For simplicity, we do not consider this frequency for our estimation of the IC component in the X-ray light curve. The SSC frequency $\nu_{m}^{\rm{IC}}$ equals $\nu_{c}^{\rm{IC}}$ when $t=t_{cm}$, i.e. $\nu_{cm}^{\rm{IC}}\equiv 2\gamma_{e,cm}^2\nu_{cm}$.
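Since $\nu_{cm}^{\rm{IC}}\equiv 2\gamma_{e,cm}^2\nu_{cm}$, its coefficients can be cross-checked against those of equations (\ref{eqn:gamma_cm-2}) and (\ref{eqn:nu_0-2}); a quick sketch with the fiducial numbers already quoted in the text ($z=0$, all dimensionless parameters at unity):

```python
# nu_cm^IC = 2 * gamma_{e,cm}^2 * nu_cm, using the fiducial
# ISM and wind coefficients quoted earlier in the text
gamma_ism, nu_ism = 1536.0, 3.65e13     # ISM: gamma_{e,cm}, nu_cm [Hz]
gamma_wind, nu_wind = 849.0, 8.10e12    # wind: gamma_{e,cm}, nu_cm [Hz]

nu_ic_ism = 2 * gamma_ism**2 * nu_ism
nu_ic_wind = 2 * gamma_wind**2 * nu_wind
print('%.2e %.2e' % (nu_ic_ism, nu_ic_wind))  # ≈ 1.72e+20 1.17e+19 (Hz)
```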
According to equations (\ref{eqn:gamma_cm-1}) and (\ref{eqn:nu_0-1}), we obtain \begin{eqnarray} \nu_{cm}^{\rm{IC}}=&&\displaystyle\frac{1}{128}(\frac{3}{2\pi})^{5/4}\frac{m_{e}c^3}{e^2(1+z)}(\frac{2m_{p}}{3m_{e}})^{(5k-9)/2}\epsilon_{e}^{(15k-22)/8}\epsilon_{B}^{(5k-14)/8} \zeta_{1/6}^{(5k-2)/4}\nonumber\\ &&\times(A\sigma_{\rm{T}}^{(3-k)/2})^{-5/2}[\frac{(3-k)E_{cm}}{4\pi m_{p}c^{2}}]^{(5k-6)/4}, \label{eqn:nu_0-ic-1} \end{eqnarray} which can be further evaluated numerically as \begin{equation} \nu_{cm}^{\rm{IC}}=\left\{ \begin{array}{l} 1.72\times 10^{20}(1+z)^{-1}\epsilon_{e,-0.5}^{-11/4}\epsilon_{B,-2.5}^{-7/4}\zeta_{1/6}^{-1/2}E_{cm,53}^{-3/2}n^{-5/2}\,\,\rm{Hz}, \phantom{ss} \rm{ISM,} \\ 1.17\times 10^{19}(1+z)^{-1}\epsilon_{e,-0.5}\epsilon_{B,-2.5}^{-1/2}\zeta_{1/6}^{2}E_{cm,53}A_{\ast}^{-5/2}\,\,\rm{Hz}, \phantom{sss} \rm{wind.}\\ \end{array} \right. \label{eqn:nu_0-ic-2} \end{equation} The characteristic SSC frequencies $\nu_{c}^{\rm{IC}}$ and $\nu_{m}^{\rm{IC}}$ evolve with time as \begin{equation} \nu_{c}^{\rm{IC}}=\nu_{cm}^{\rm{IC}}(\frac{t}{t_{cm}})^{(2m+7k-8)/[2(m+1)]},\phantom{ss} \nu_{m}^{\rm{IC}}=\nu_{cm}^{\rm{IC}}(\frac{t}{t_{cm}})^{-(6m+k)/[2(m+1)]}. \end{equation} The peak flux density of the SSC spectrum, $F_{\nu,\rm{max}}^{\rm{IC}}$, is roughly the product of the peak flux density of the synchrotron spectrum and the Thomson optical depth. Considering some numerical factors of order unity, the exact expression is (Sari \& Esin 2001) \begin{equation} F_{\nu,\rm{max}}^{\rm{IC}}\approx\frac{28}{45}x_{0}(\sigma_{\rm{T}}Rn)F_{\nu,\rm{max}}\propto t^{(8-5k-2m)/[2(m+1)]}, \end{equation} where $x_{0}\approx0.5$. The inverse Compton spectrum above $\nu_{a}^{\rm{IC}}$ is similar to the synchrotron one, which can be approximated by several power law segments, i.e.
\begin{equation} F_{\nu}^{\rm{IC}}=\left\{ \begin{array}{l} \displaystyle(\frac{\nu}{\nu_{c}^{\rm{IC}}})^{1/3}F_{\nu,\rm{max}}^{\rm{IC}}\propto t^{(16-11k-4m)/[3(m+1)]}, \phantom{sssssssssssssssss} \nu<\nu_{c}^{\rm{IC}}, \\ \displaystyle(\frac{\nu}{\nu_{c}^{\rm{IC}}})^{-1/2}F_{\nu,\rm{max}}^{\rm{IC}}\propto t^{(8-3k-2m)/[4(m+1)]}, \phantom{sssssssssssssssss} \nu_{c}^{\rm{IC}}<\nu<\nu_{m}^{\rm{IC}},\\ \displaystyle(\frac{\nu}{\nu_{m}^{\rm{IC}}})^{-p/2}(\frac{\nu_{m}^{\rm{IC}}}{\nu_{c}^{\rm{IC}}})^{-1/2}F_{\nu,\rm{max}}^{\rm{IC}}\propto t^{[8-2k+4m-p(6m+k)]/[4(m+1)]}, \phantom{s}\, \nu_{m}^{\rm{IC}}<\nu,\\ \end{array} \right. \label{eqn:fc:SSC-spectrum} \end{equation} where we have neglected the logarithmic term for $\nu>\nu_{m}^{\rm{IC}}$. The SSC flux density begins to dominate over that of the synchrotron radiation in the overall synchrotron $+$ SSC spectrum at the crossing point, which corresponds to $\nu_{\times}^{\rm{IC}}$ (Sari \& Esin 2001). Using equation (\ref{eqn:fc:SSC-spectrum}) and the standard synchrotron spectrum, and assuming $\nu_{\times}^{\rm{IC}}>\nu_{m}>\nu_{c}$, we obtain the crossing point frequency for two cases, $\nu_{\times}^{\rm{IC}}<\nu_{c}^{\rm{IC}}$ and $\nu_{c}^{\rm{IC}}<\nu_{\times}^{\rm{IC}}<\nu_{m}^{\rm{IC}}$, i.e. \begin{equation} \nu_{\times}^{\rm{IC}}=\left\{ \begin{array}{l} \nu_{\times,<}^{\rm{IC}}\equiv\nu_{c}^{\rm{IC}}[c_1\displaystyle\frac{\epsilon_{B}}{\epsilon_{e}}(\frac{\gamma_{e,\rm{min}}}{\gamma_{c}})^{3p-2}(2\gamma_{c}\gamma_{e,\rm{min}})^{2-p}]^{3/(2+3p)}, \phantom{s} \rm{if} \phantom{ss} \nu_{\times}^{\rm{IC}}<\nu_{c}^{\rm{IC}}, \\ \nu_{\times,>}^{\rm{IC}}\equiv\nu_{c}^{\rm{IC}}[c_1\displaystyle\frac{\epsilon_{B}}{\epsilon_{e}}(\frac{\gamma_{e,\rm{min}}}{\gamma_{c}})^{3p-2}(2\gamma_{c}\gamma_{e,\rm{min}})^{2-p}]^{1/(p-1)}, \phantom{ss} \rm{if} \phantom{ss} \nu_{c}^{\rm{IC}}<\nu_{\times}^{\rm{IC}}<\nu_{m}^{\rm{IC}}, \\ \end{array} \right. 
\label{eqn:fc:nu-ic-1} \end{equation} where the coefficient is $c_1=\displaystyle\frac{225(1-\epsilon)^2(p-1)^2}{49x_{0}^2(4-k-\epsilon)^2(p-2)^2}$. To derive the above equation we have used the relation \begin{equation} \gamma_{c}\gamma_{e,\rm{min}}=\displaystyle\frac{3(4-k-\epsilon)(p-2)}{8(1-\epsilon)(p-1)(1+Y)}\frac{\epsilon_{e}}{\epsilon_{B}}\frac{1}{\sigma_{\rm{T}}nR}. \end{equation} Since $1/(p-1)$ is always larger than $3/(2+3p)$, one can determine $\nu_{\times}^{\rm{IC}}$ directly by \begin{equation} \nu_{\times}^{\rm{IC}}=\rm{max}\{\nu_{\times,<}^{\rm{IC}},\nu_{\times,>}^{\rm{IC}}\}, \label{eqn:fc:nu-ic-2} \end{equation} without judging whether $\nu_{\times}^{\rm{IC}}<\nu_{c}^{\rm{IC}}$ or not. We have calculated numerically the temporal evolution of $\nu_{\times,<}^{\rm{IC}}$ and $\nu_{\times,>}^{\rm{IC}}$ for typical physical parameters. {\em In the ISM case}, the expression for $\nu_{\times,<}^{\rm{IC}}$ is \begin{equation} \nu_{\times,<}^{\rm{IC}}=\left\{ \begin{array}{l} 3.5\times 10^{19}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.08}\epsilon_{B,-2.5}^{-1.35}E_{cm,53}^{-1.47}n^{-2.43}(\displaystyle\frac{t}{t_{cm}})^{(10\epsilon-17.8)/[4.3(4-\epsilon)]}\,{\rm{Hz}},\, p=2.2, \\ 9.1\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.04}\epsilon_{B,-2.5}^{-1.33}E_{cm,53}^{-1.43}n^{-2.37}(\displaystyle\frac{t}{t_{cm}})^{(5\epsilon-9.8)/[2.3(4-\epsilon)]}\,{\rm{Hz}},\phantom{ss} p=2.4, \\ \end{array} \right. \label{eqn:fc:ism:nu<-ic} \end{equation} while the expression for $\nu_{\times,>}^{\rm{IC}}$ is \begin{equation} \nu_{\times,>}^{\rm{IC}}=\left\{ \begin{array}{l} 3.9\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.54}\epsilon_{B,-2.5}^{-0.79}E_{cm,53}^{-1.42}n^{-2.33}(\displaystyle\frac{t}{t_{cm}})^{-8.5/(4-\epsilon)}\,{\rm{Hz}},\phantom{ss} p=2.2, \\ 3.0\times 10^{17}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.39}\epsilon_{B,-2.5}^{-0.82}E_{cm,53}^{-1.36}n^{-2.21}(\displaystyle\frac{t}{t_{cm}})^{-57/[7(4-\epsilon)]}\,{\rm{Hz}},\phantom{s} p=2.4. \\ \end{array} \right. 
\label{eqn:fc:ism:nu>-ic} \end{equation} The crossing point frequency $\nu_{\times}^{\rm{IC}}$ decreases throughout the fast cooling phase in the ISM case. The transition from the initial $\nu_{\times}^{\rm{IC}}=\nu_{\times,>}^{\rm{IC}}$ to the late $\nu_{\times}^{\rm{IC}}=\nu_{\times,<}^{\rm{IC}}$ happens when $\nu_{\times}^{\rm{IC}}=\nu_{c}^{\rm{IC}}$, i.e. at \begin{equation} t_{\times,c}^{\rm{IC}}=\left\{ \begin{array}{l} 0.24\epsilon_{e,-1}^{-0.39}\epsilon_{B,-2.5}^{0.48}E_{cm,53}^{0.04}n^{0.08}t_{cm},\,\phantom{ss} \rm{if}\,\,\epsilon=0.1, \\ 0.20\epsilon_{e,-0.5}^{-0.33}\epsilon_{B,-2.5}^{0.40}E_{cm,53}^{0.04}n^{0.07}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.2$, and \begin{equation} t_{\times,c}^{\rm{IC}}=\left\{ \begin{array}{l} 0.057\epsilon_{e,-1}^{-0.33}\epsilon_{B,-2.5}^{0.49}E_{cm,53}^{0.07}n^{0.15}t_{cm},\,\phantom{ss} \rm{if}\,\,\epsilon=0.1, \\ 0.064\epsilon_{e,-0.5}^{-0.28}\epsilon_{B,-2.5}^{0.41}E_{cm,53}^{0.06}n^{0.13}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.4$. The condition for the appearance of the IC component in the soft X-ray afterglow is that $\nu_{\times}^{\rm{IC}}$ at $t_{cm}$ must be much less than $\nu=10^{18}\nu_{18}$ Hz, which leads to a lower limit on the ambient density $n$. Using equation (\ref{eqn:fc:ism:nu<-ic}), we obtain the lower limit of $n$ as \begin{equation} n\gtrsim\left\{ \begin{array}{l} 4.3(1+z)^{-0.41}\nu_{18}^{-0.41}\epsilon_{e,-0.5}^{-1.27}\epsilon_{B,-2.5}^{-0.56}E_{cm,53}^{-0.60}\,{\rm{cm}^{-3}},\phantom{ss} p=2.2, \\ 2.5(1+z)^{-0.42}\nu_{18}^{-0.42}\epsilon_{e,-0.5}^{-1.28}\epsilon_{B,-2.5}^{-0.56}E_{cm,53}^{-0.60}\,{\rm{cm}^{-3}},\phantom{ss} p=2.4. \\ \end{array} \right. \end{equation} The lower limit of $n$ for the emergence of the IC component in the X-ray afterglow in the fast cooling phase is typically $\sim1$--$10$ cm$^{-3}$ (Panaitescu \& Kumar 2000).
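The density limits follow from inverting equation (\ref{eqn:fc:ism:nu<-ic}) at $t=t_{cm}$, with the other parameters held at their fiducial values and $z=0$; a minimal sketch using the ISM coefficients quoted above:

```python
def n_min(coef, n_exponent, nu18=1.0):
    # Solve coef * n^{-n_exponent} < 1e18 * nu18 Hz for the density n
    # (other parameters fixed at fiducial values, z = 0)
    return (coef / (1e18 * nu18)) ** (1.0 / n_exponent)

print(round(n_min(3.5e19, 2.43), 1))   # p = 2.2: n ≳ 4.3 cm^-3
print(round(n_min(9.1e18, 2.37), 1))   # p = 2.4: n ≳ 2.5 cm^-3
```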
{\em In the stellar wind case}, the expression for $\nu_{\times,<}^{\rm{IC}}$ is \begin{equation} \nu_{\times,<}^{\rm{IC}}=\left\{ \begin{array}{l} 4.5\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{0.56}\epsilon_{B,-2.5}^{-0.13}E_{cm,53}^{0.97}A_{\ast}^{-2.43}(\displaystyle\frac{t}{t_{cm}})^{(3.1-5.7\epsilon)/[4.3(2-\epsilon)]}\,{\rm{Hz}},\phantom{s} p=2.2, \\ 1.4\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{0.48}\epsilon_{B,-2.5}^{-0.12}E_{cm,53}^{0.93}A_{\ast}^{-2.36}(\displaystyle\frac{t}{t_{cm}})^{(2-9\epsilon)/[8.6(2-\epsilon)]}\,{\rm{Hz}},\,\phantom{sss} p=2.4, \\ \end{array} \right. \label{eqn:fc:wind:nu<-ic} \end{equation} while the expression for $\nu_{\times,>}^{\rm{IC}}$ is \begin{equation} \nu_{\times,>}^{\rm{IC}}=\left\{ \begin{array}{l} 1.2\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-0.04}\epsilon_{B,-2.5}^{0.38}E_{cm,53}^{0.92}A_{\ast}^{-2.33}(\displaystyle\frac{t}{t_{cm}})^{(6\epsilon-23)/[6(2-\epsilon)]}\,{\rm{Hz}},\phantom{ss} p=2.2, \\ 1.1\times 10^{17}(1+z)^{-1}\epsilon_{e,-0.5}^{-0.07}\epsilon_{B,-2.5}^{0.29}E_{cm,53}^{0.86}A_{\ast}^{-2.21}(\displaystyle\frac{t}{t_{cm}})^{(7\epsilon-26)/[7(2-\epsilon)]}\,{\rm{Hz}},\phantom{ss} p=2.4. \\ \end{array} \right. \label{eqn:fc:wind:nu>-ic} \end{equation} The crossing point frequency decreases with $\nu_{\times}^{\rm{IC}}=\nu_{\times,>}^{\rm{IC}}$ initially. However, the temporal behavior of the crossing point frequency at late times, $\nu_{\times}^{\rm{IC}}=\nu_{\times,<}^{\rm{IC}}$, depends on $\epsilon$. The transition time when $\nu_{\times,<}^{\rm{IC}}=\nu_{\times,>}^{\rm{IC}}=\nu_{c}^{\rm{IC}}$, is \begin{equation} t_{\times,c}^{\rm{IC}}=\left\{ \begin{array}{l} 0.75\epsilon_{e,-1}^{-0.26}\epsilon_{B,-2.5}^{0.22}E_{cm,53}^{-0.02}A_{\ast}^{0.04}t_{cm},\,\phantom{ss} \rm{if}\,\,\epsilon=0.1, \\ 0.56\epsilon_{e,-0.5}^{-0.26}\epsilon_{B,-2.5}^{0.22}E_{cm,53}^{-0.02}A_{\ast}^{0.04}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. 
\end{equation} for $p=2.2$, and \begin{equation} t_{\times,c}^{\rm{IC}}=\left\{ \begin{array}{l} 0.38\epsilon_{e,-1}^{-0.28}\epsilon_{B,-2.5}^{0.21}E_{cm,53}^{-0.04}A_{\ast}^{0.08}t_{cm},\,\phantom{ss} \rm{if}\,\,\epsilon=0.1, \\ 0.27\epsilon_{e,-0.5}^{-0.28}\epsilon_{B,-2.5}^{0.21}E_{cm,53}^{-0.04}A_{\ast}^{0.08}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.4$. After this time, the crossing point frequency $\nu_{\times}^{\rm{IC}}=\nu_{\times,<}^{\rm{IC}}$ will increase for $\epsilon<31/57$ ($2/9$), or continue to decrease for $\epsilon>31/57$ ($2/9$) for $p=2.2$ ($p=2.4$). The emergence of the IC component in the soft X-ray afterglow requires that the minimum of $\nu_{\times}^{\rm{IC}}$ during the fast cooling phase is below $\nu=10^{18}\nu_{18}$ Hz, which leads to a lower limit of $A_{\ast}$ as \begin{equation} A_{\ast}\gtrsim\left\{ \begin{array}{l} 1.88(1+z)^{-0.41}\nu_{18}^{-0.41}\epsilon_{e,-0.5}^{0.20}\epsilon_{B,-2.5}^{-0.02}E_{cm,53}^{0.40},\phantom{ss} \rm{if}\,\,\epsilon<31/57, \\ 2.18(1+z)^{-0.41}\nu_{18}^{-0.41}\epsilon_{e,-0.2}^{0.23}\epsilon_{B,-2.5}^{-0.05}E_{cm,53}^{0.40},\phantom{ss} \rm{if}\,\,\epsilon>31/57, \\ \end{array} \right. \end{equation} for $p=2.2$, and \begin{equation} A_{\ast}\gtrsim\left\{ \begin{array}{l} 0.94(1+z)^{-0.43}\nu_{18}^{-0.43}\epsilon_{e,-1}^{0.20}\epsilon_{B,-2.5}^{-0.05}E_{cm,53}^{0.40},\;\;\phantom{ss} \rm{if}\,\,\epsilon<2/9, \\ 1.15(1+z)^{-0.42}\nu_{18}^{-0.42}\epsilon_{e,-0.5}^{0.20}\epsilon_{B,-2.5}^{-0.05}E_{cm,53}^{0.39},\phantom{ss} \rm{if}\,\,\epsilon>2/9, \\ \end{array} \right. \end{equation} for $p=2.4$. The above constraint on $A_{\ast}$ is insensitive to the other physical parameters. 
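The analogous wind-case constraint can be checked in the same way. A minimal sketch (Python; ours) of the $p=2.2$ limit above, noting that the two $\epsilon$ branches of the original equation carry different $\epsilon_{e}$ normalizations ($10^{-0.5}$ and $10^{-0.2}$):

```python
def A_star_min_wind_fast(z=0.0, nu18=1.0, eps_e=None, eps_B=10**-2.5,
                         E53=1.0, small_eps=True):
    """Lower limit on the wind parameter A_* for the IC component to appear
    in the soft X-ray afterglow during fast cooling, for p = 2.2 (wind case).
    small_eps=True selects the epsilon < 31/57 branch; the two branches use
    different eps_e normalizations, as in the text."""
    if small_eps:
        e_e = 10**-0.5 if eps_e is None else eps_e
        return (1.88 * ((1.0 + z) * nu18)**-0.41 * (e_e / 10**-0.5)**0.20
                * (eps_B / 10**-2.5)**-0.02 * E53**0.40)
    e_e = 10**-0.2 if eps_e is None else eps_e
    return (2.18 * ((1.0 + z) * nu18)**-0.41 * (e_e / 10**-0.2)**0.23
            * (eps_B / 10**-2.5)**-0.05 * E53**0.40)
```

Both branches give $A_{\ast}\gtrsim1.9$--$2.2$ for fiducial parameters, well above the typical $A_{\ast}\lesssim1$ discussed below.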
This implies that the contribution of the IC component to the X-ray afterglow can be neglected during the fast-cooling phase for typical values of $A_{\ast}\lesssim1$, as indicated by observations of Wolf-Rayet stars and by fittings to some GRB afterglows (Chevalier \& Li 1999; Chevalier, Li, \& Fransson 2004). \section{The late slow-cooling phase} \label{sec:slowcooling} The hydrodynamic evolution of the blast wave in the slow cooling phase can be approximated as that in the early fast cooling phase. This approximation is justified by the fact that the radiation efficiency of the blast wave in the slow cooling phase evolves as \begin{equation} \epsilon=\epsilon_{e}(\displaystyle\frac{\nu_{m}}{\nu_{c}})^{(p-2)/2}, \end{equation} which decreases very slowly with time as long as the electron power law index $p$ does not deviate far from $2$, e.g. $p\sim 2.2$ as expected from relativistic shock acceleration theory (Achterberg et al. 2001). This fact prolongs the semi-radiative phase to at least two orders of magnitude in time beyond $t_{cm}$ for typical $\epsilon_{e}\sim 1/3$ and $p=2.2$\footnotemark\footnotetext{\label{foot:slowcooling}The hydrodynamics will deviate significantly from that of the constant radiation efficiency $\epsilon=\epsilon_{e}$ at very late times in the slow cooling phase, when $\epsilon<e^{-1}\epsilon_{e}$ ($e\approx2.71828$). This happens when $t\gtrsim 1300 t_{cm}$ ($90 t_{cm}$) for the ISM (wind) case for $p=2.2$ and $\epsilon_{e}=0.32$ (see equations \ref{eqn:fc:spectrum} and \ref{eqn:sc:spectrum}). It would happen even later if we adopted the adiabatic relations for $\nu_{c}$ and $\nu_{m}$.}. Hereafter we assume $\epsilon\approx\epsilon_{e}$ for $t>t_{cm}$.
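The slowness of this decrease is easy to quantify. A minimal sketch (Python; ours), based directly on the efficiency relation above:

```python
import math

def efficiency_ratio(nu_m_over_nu_c, p=2.2):
    """epsilon / epsilon_e in the slow-cooling phase, from
    epsilon = epsilon_e * (nu_m / nu_c)**((p - 2) / 2)."""
    return nu_m_over_nu_c ** ((p - 2) / 2.0)

# Frequency ratio at which epsilon has dropped to e^-1 of epsilon_e
# for p = 2.2: exp(-2/(p-2)) = exp(-10), i.e. ~4.5e-5.
ratio_at_1_over_e = math.exp(-2.0 / (2.2 - 2.0))
```

For $p=2.2$, $\epsilon$ falls to $e^{-1}\epsilon_{e}$ only when $\nu_{m}/\nu_{c}$ has dropped to $e^{-10}\approx4.5\times10^{-5}$; even at $\nu_{m}/\nu_{c}=10^{-2}$ the efficiency is still $\sim0.6\,\epsilon_{e}$.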
The time when $\epsilon\ll\epsilon_{e}$ happens is expected to be very late, when the Lorentz factor of GRB conical ejecta has already dropped below the inverse of the initial opening angle and the resulting light curves deviate from the spherical-like ones, which is beyond the scope of this paper. \subsection{Properties of the synchrotron radiation} The temporal behaviors of the typical frequency $\nu_{m}$ and peak flux density $F_{\nu,\rm{max}}$ in the slow cooling phase are the same as in the fast cooling phase. However, the Compton parameter in the slow cooling case is no longer a constant but evolves as $Y=\displaystyle\sqrt{\frac{\epsilon_{e}}{\epsilon_{B}}}(\displaystyle\frac{\nu_{m}}{\nu_{c}})^{(p-2)/4}$. Since the cooling Lorentz factor and the cooling frequency behave as $\gamma_{c}\propto(1+Y)^{-1}$ and $\nu_{c}\propto\gamma_{c}^{2}$, we obtain \begin{eqnarray} \gamma_{c}&=&\displaystyle\gamma_{e,cm}(\frac{t}{t_{cm}})^{(mp+4k-4)/[2(4-p)(m+1)]},\nonumber\\ \nu_{c}&=&\displaystyle\nu_{cm}(\frac{t}{t_{cm}})^{[6k-8+(p-2)(4m+k)]/[2(4-p)(m+1)]},\nonumber\\ Y&=&\displaystyle\sqrt{\frac{\epsilon_{e}}{\epsilon_{B}}}(\frac{t}{t_{cm}})^{-[(p-2)(m+k-1)]/[(4-p)(m+1)]}. \label{eqn:sc:spectrum} \end{eqnarray} The SSA frequency in the slow cooling phase is \begin{equation} \nu_{a}=\left\{ \begin{array}{l} \nu_{a,<}\equiv\nu_{m}\displaystyle[\frac{c_{0}(p-1)}{(3-k)}\frac{enR}{B\gamma_{e,\rm{min}}^{5}}]^{3/5}, \phantom{sssss} \rm{if} \phantom{ss} \nu_{a}<\nu_{m}, \\ \nu_{a,>}\equiv\nu_{m}\displaystyle[\frac{c_{0}(p-1)}{(3-k)}\frac{enR}{B\gamma_{e,\rm{min}}^{5}}]^{2/(p+4)}, \phantom{ss} \rm{if} \phantom{ss} \nu_{m}<\nu_{a}<\nu_{c},\\ \end{array} \right. \label{eqn:sc:nu_a} \end{equation} which can be determined by $\nu_{a}=\rm{min}\{\it{\nu_{a,<},\nu_{a,>}}\}$ without judging whether $\nu_{a}<\nu_{m}$ or not. The case for $\nu_{a}>\nu_{c}$ can be neglected. 
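The exponents in equation (\ref{eqn:sc:spectrum}) depend on the hydrodynamic index $m$; the $k=0$ and $k=2$ light-curve scalings quoted in the text are reproduced by $m=(3-k)/(1-\epsilon)$, which the following sketch (Python; ours) adopts as a working assumption:

```python
def slow_cooling_exponents(k=0.0, p=2.2, eps=1.0 / 3.0):
    """Temporal power-law indices of gamma_c, nu_c and Y in the slow-cooling
    phase (equation sc:spectrum), assuming the hydrodynamic index
    m = (3 - k)/(1 - eps), consistent with the k = 0 and k = 2 scalings
    quoted in the text."""
    m = (3.0 - k) / (1.0 - eps)
    idx_gamma_c = (m * p + 4.0 * k - 4.0) / (2.0 * (4.0 - p) * (m + 1.0))
    idx_nu_c = ((6.0 * k - 8.0 + (p - 2.0) * (4.0 * m + k))
                / (2.0 * (4.0 - p) * (m + 1.0)))
    idx_Y = -((p - 2.0) * (m + k - 1.0)) / ((4.0 - p) * (m + 1.0))
    return idx_gamma_c, idx_nu_c, idx_Y
```

For $k=0$, $p=2.2$, $\epsilon=1/3$ this gives a $Y$ index of $-7/99\approx-0.07$, i.e. $Y$ is nearly constant, consistent with treating the efficiency as roughly constant in the slow-cooling phase.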
The numerical expression for $\nu_{a,<}$ is \begin{equation} \nu_{a,<}=\left\{ \begin{array}{l} 3.72\displaystyle\kappa_{p}^{3/5}(1+z)^{-1}\epsilon_{e,-0.5}^{-1}\epsilon_{B,-2.5}^{1/5}\zeta_{1/6}^{-1}E_{cm,53}^{1/5}n^{3/5}(\displaystyle\frac{t}{t_{cm}})^{-3\epsilon/[5(4-\epsilon)]}\,\,\rm{GHz}, \,\phantom{sssss} \rm{ISM,} \\ 16.4\displaystyle\kappa_{p}^{3/5}(1+z)^{-1}\epsilon_{e,-0.5}^{-19/10}\epsilon_{B,-2.5}^{-1/10}\zeta_{1/6}^{-8/5}E_{cm,53}^{-2/5}A_{\ast}^{3/5}(\displaystyle\frac{t}{t_{cm}})^{(5\epsilon-6)/[5(2-\epsilon)]}\,\,\rm{GHz}, \phantom{s} \rm{wind,}\\ \end{array} \right. \label{eqn:sc:nu_a<} \end{equation} where $\kappa_{p}=(p-1)(p+2)/(p+2/3)$, while the expression for $\nu_{a,>}$ is \begin{equation} \nu_{a,>}=\left\{ \begin{array}{l} 3.13\times10^{11}(1+z)^{-1}\epsilon_{e,-0.5}^{-1.69}\epsilon_{B,-2.5}^{-0.35}E_{cm,53}^{-0.35}n^{-0.37}(\displaystyle\frac{t}{t_{cm}})^{-(8.6+\epsilon)/[3.1(4-\epsilon)]}\,\,\rm{Hz}, \phantom{ssss} \rm{ISM,} \\ 3.46\times10^{11}(1+z)^{-1}\epsilon_{e,-0.5}^{-1.14}\epsilon_{B,-2.5}^{-0.17}E_{cm,53}^{0.02}A_{\ast}^{-0.37}(\displaystyle\frac{t}{t_{cm}})^{-(6.3-3.1\epsilon)/[3.1(2-\epsilon)]}\,\,\rm{Hz}, \phantom{ss} \rm{wind,}\\ \end{array} \right. \label{eqn:sc:nu_a>-1} \end{equation} for $p=2.2$, and \begin{equation} \nu_{a,>}=\left\{ \begin{array}{l} 2.20\times10^{11}(1+z)^{-1}\epsilon_{e,-0.5}^{-1.72}\epsilon_{B,-2.5}^{-0.38}E_{cm,53}^{-0.38}n^{-0.41}(\displaystyle\frac{t}{t_{cm}})^{-(9.2+\epsilon)/[3.2(4-\epsilon)]}\,\,\rm{Hz}, \phantom{ssss} \rm{ISM,} \\ 2.89\times10^{11}(1+z)^{-1}\epsilon_{e,-0.5}^{-1.10}\epsilon_{B,-2.5}^{-0.17}E_{cm,53}^{0.03}A_{\ast}^{-0.41}(\displaystyle\frac{t}{t_{cm}})^{-(3.3-1.6\epsilon)/[1.6(2-\epsilon)]}\,\,\rm{Hz}, \phantom{ss} \rm{wind,}\\ \end{array} \right. \label{eqn:sc:nu_a>-2} \end{equation} for $p=2.4$. 
In the ISM case, the transition from the earlier $\nu_{a}=\nu_{a,<}$ to the later $\nu_{a}=\nu_{a,>}$ happens when $\nu_{a}=\nu_{m}$, at \begin{equation} t_{am}=\left\{ \begin{array}{l} 1.3\times10^{3}\displaystyle\kappa_{p}^{0.39}\epsilon_{e,-1}^{-0.98}\epsilon_{B,-2.5}^{-0.79}E_{cm,53}^{-0.79}n^{-1.38}t_{cm}, \phantom{ss} \rm{if} \phantom{ss} \epsilon=0.1, \\ 3.4\times10^{2}\displaystyle\kappa_{p}^{0.38}\epsilon_{e,-0.5}^{-0.95}\epsilon_{B,-2.5}^{-0.76}E_{cm,53}^{-0.76}n^{-1.33}t_{cm}, \phantom{ss} \rm{if} \phantom{ss} \epsilon=0.32,\\ \end{array} \right. \end{equation} which indicates that the transition to $\nu_{a}=\nu_{a,>}$ occurs very late, unless the medium is very dense. In the stellar wind case, the transition from $\nu_{a}=\nu_{a,<}$ to $\nu_{a}=\nu_{a,>}$ also takes place when $\nu_{a}=\nu_{m}$, at \begin{equation} t_{am}=\left\{ \begin{array}{l} 93.7\displaystyle\kappa_{p}^{0.63}\epsilon_{e,-1}^{1.74}\epsilon_{B,-2.5}^{-0.16}\zeta_{1/6}^{2.22}E_{cm,53}^{0.95}A_{\ast}^{-2.22}t_{cm}, \phantom{ssss} \rm{if} \phantom{ss} \epsilon=0.1, \\ 330.5\displaystyle\kappa_{p}^{0.56}\epsilon_{e,-0.5}^{1.54}\epsilon_{B,-2.5}^{-0.14}\zeta_{1/6}^{1.96}E_{cm,53}^{0.84}A_{\ast}^{-1.96}t_{cm}, \phantom{ss} \rm{if} \phantom{ss} \epsilon=0.32,\\ \end{array} \right. \end{equation} which indicates that this transition likewise occurs very late in the stellar wind case. Therefore we neglect the case of $t>t_{am}$ below.
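As a numerical illustration (Python sketch; the function name and fiducial defaults are ours), the $\epsilon=0.32$ line of the ISM expression gives:

```python
def t_am_ism(eps_e=10**-0.5, eps_B=10**-2.5, E53=1.0, n=1.0, p=2.2):
    """Transition time t_am (in units of t_cm) at which nu_a = nu_m in the
    ISM case, evaluated from the eps = 0.32 line of the equation above."""
    kappa_p = (p - 1.0) * (p + 2.0) / (p + 2.0 / 3.0)
    return (3.4e2 * kappa_p**0.38 * (eps_e / 10**-0.5)**-0.95
            * (eps_B / 10**-2.5)**-0.76 * E53**-0.76 * n**-1.33)
```

With fiducial parameters this yields $t_{am}\approx4\times10^{2}\,t_{cm}$, confirming that the $\nu_{a}=\nu_{a,>}$ regime is reached only very late unless $n$ is large.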
The flux density at the observed frequency $\nu$ from the synchrotron component for $t_{cm}<t<t_{am}$ is \begin{equation} F_{\nu}=\left\{ \begin{array}{l} \displaystyle(\frac{\nu}{\nu_{a}})^{2}(\frac{\nu_{a}}{\nu_{m}})^{1/3}F_{\nu,\rm{max}}\propto t^{2/(m+1)}, \phantom{sssssssssssssssssssssssssssssssssssssssssss} \nu<\nu_{a}, \\ \displaystyle(\frac{\nu}{\nu_{m}})^{1/3}F_{\nu,\rm{max}}\propto t^{(9-4k-m)/[3(m+1)]}, \phantom{sssssssssssssssssssssssssssssssssssssss} \nu_{a}<\nu<\nu_{m}, \\ \displaystyle(\frac{\nu}{\nu_{m}})^{-(p-1)/2}F_{\nu,\rm{max}}\propto t^{(12-5k-p(4m+k))/[4(m+1)]}, \,\phantom{sssssssssssssssssssssssssssss} \nu_{m}<\nu<\nu_{c},\\ \displaystyle(\frac{\nu}{\nu_{c}})^{-p/2}(\frac{\nu_{c}}{\nu_{m}})^{-(p-1)/2}F_{\nu,\rm{max}}\propto t^{\{(4-p)[12-6k-4m-p(4m+k)]+8(k+m-1)\}/[4(4-p)(m+1)]}, \phantom{s} \nu_{c}<\nu.\\ \end{array} \right. \label{eqn:sc:syn-spectrum} \end{equation} \subsection{Constraint on the IC component in an X-ray afterglow} The temporal behaviors of the typical SSC frequency $\nu_{m}^{\rm{IC}}\approx 2\gamma_{e,\rm{min}}^2\nu_{m}$ and peak flux density $F_{\nu,\rm{max}}^{\rm{IC}}$ in the IC spectrum are the same in the slow cooling phase as in the fast cooling phase. The IC frequency corresponding to $\nu_{c}$ is \begin{equation} \nu_{c}^{\rm{IC}} \approx 2\gamma_{c}^2\nu_{c}=\displaystyle\nu_{cm}^{\rm{IC}}(\frac{t}{t_{cm}})^{(6mp+pk+12k-8m-16)/[2(4-p)(m+1)]}. \end{equation} The IC frequency $\nu_{a}^{\rm{IC}}$ can be directly determined by $\nu_{a}^{\rm{IC}}=\rm{min}\{\it{\nu_{a,<}^{\rm{IC}},\nu_{a,>}^{\rm{IC}}}\}$, where $\nu_{a,<}^{\rm{IC}}\approx 2\gamma_{e,\rm{min}}^2\nu_{a,<}$ and $\nu_{a,>}^{\rm{IC}}\approx 2\gamma_{e,\rm{min}}^2\nu_{a,>}$. 
Inserting equations (\ref{eqn:gm}), (\ref{eqn:gamma_cm-2}), and (\ref{eqn:sc:nu_a<}) -- (\ref{eqn:sc:nu_a>-2}) into the above equation, we obtain \begin{equation} \nu_{a,<}^{\rm{IC}}=\left\{ \begin{array}{l} 1.75\times10^{16}\displaystyle\kappa_{p}^{3/5}(1+z)^{-1}\epsilon_{e,-0.5}^{-5/4}\epsilon_{B,-2.5}^{-11/20}\zeta_{1/6}^{-1/2}E_{cm,53}^{-3/10}n^{-2/5}(\displaystyle\frac{t}{t_{cm}})^{-(15+3\epsilon)/[5(4-\epsilon)]}\,\,\rm{Hz}, \phantom{s}\rm{ISM,} \\ 2.36\times10^{16}\displaystyle\kappa_{p}^{3/5}(1+z)^{-1}\epsilon_{e,-0.5}^{-13/20}\epsilon_{B,-2.5}^{-7/20}\zeta_{1/6}^{-1/10}E_{cm,53}^{1/10}A_{\ast}^{-2/5}(\displaystyle\frac{t}{t_{cm}})^{-(11-5\epsilon)/[5(2-\epsilon)]}\,\,\rm{Hz}, \rm{wind,}\\ \end{array} \right. \label{eqn:sc:nu_a-ic<} \end{equation} while the expression for $\nu_{a,>}^{\rm{IC}}$ is \begin{equation} \nu_{a,>}^{\rm{IC}}=\left\{ \begin{array}{l} 1.48\times10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-1.94}\epsilon_{B,-2.5}^{-1.10}E_{cm,53}^{-0.85}n^{-1.37}(\displaystyle\frac{t}{t_{cm}})^{-(17.9+\epsilon)/[3.1(4-\epsilon)]}\,\,\rm{Hz}, \phantom{ss} \rm{ISM,} \\ 4.99\times10^{17}(1+z)^{-1}\epsilon_{e,-0.5}^{0.11}\epsilon_{B,-2.5}^{-0.42}E_{cm,53}^{0.52}A_{\ast}^{-1.37}(\displaystyle\frac{t}{t_{cm}})^{-(9.4-3.1\epsilon)/[3.1(2-\epsilon)]}\,\,\rm{Hz}, \phantom{s} \rm{wind,}\\ \end{array} \right. \label{eqn:sc:nu_a-ic>-1} \end{equation} for $p=2.2$, and \begin{equation} \nu_{a,>}^{\rm{IC}}=\left\{ \begin{array}{l} 1.35\times10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-1.97}\epsilon_{B,-2.5}^{-1.13}E_{cm,53}^{-0.88}n^{-1.41}(\displaystyle\frac{t}{t_{cm}})^{-(18.8+\epsilon)/[3.2(4-\epsilon)]}\,\,\rm{Hz}, \phantom{ss} \rm{ISM,} \\ 9.35\times10^{17}(1+z)^{-1}\epsilon_{e,-0.5}^{0.15}\epsilon_{B,-2.5}^{-0.42}E_{cm,53}^{0.53}A_{\ast}^{-1.41}(\displaystyle\frac{t}{t_{cm}})^{-(4.9-1.6\epsilon)/[1.6(2-\epsilon)]}\,\,\rm{Hz}, \phantom{s} \rm{wind,}\\ \end{array} \right. \label{eqn:sc:nu_a-ic>-2} \end{equation} for $p=2.4$.
As we can see, $\nu_{a}^{\rm{IC}}$ is below the X-ray frequency $\nu\sim10^{18}$ Hz for typical parameters. We thus do not consider this frequency in our estimation of the IC component in the X-ray light curve in the slow-cooling phase. The inverse Compton spectrum is \begin{equation} F_{\nu}^{\rm{IC}}=\left\{ \begin{array}{l} \displaystyle(\frac{\nu}{\nu_{m}^{\rm{IC}}})^{1/3}F_{\nu,\rm{max}}^{\rm{IC}}\propto(\frac{t}{t_{cm}})^{(12-7k)/[3(m+1)]}, \phantom{sssssssssssssssssssssssssssssssssss} \nu<\nu_{m}^{\rm{IC}}, \\ \displaystyle(\frac{\nu}{\nu_{m}^{\rm{IC}}})^{-(p-1)/2}F_{\nu,\rm{max}}^{\rm{IC}}\propto(\frac{t}{t_{cm}})^{[16-9k+2m-p(6m+k)]/[4(m+1)]}, \phantom{sssssssssssssssss} \nu_{m}^{\rm{IC}}<\nu<\nu_{c}^{\rm{IC}},\\ \displaystyle(\frac{\nu}{\nu_{c}^{\rm{IC}}})^{-p/2}(\frac{\nu_{c}^{\rm{IC}}}{\nu_{m}^{\rm{IC}}})^{-(p-1)/2}F_{\nu,\rm{max}}^{\rm{IC}}\propto(\frac{t}{t_{cm}})^{[6(2-k)(4-p)+p(6mp+pk-20m-4)]/[4(4-p)(m+1)]}, \phantom{}\, \nu_{c}^{\rm{IC}}<\nu,\\ \end{array} \right. \label{eqn:sc:SSC-spectrum} \end{equation} where we have neglected the logarithmic term for $\nu>\nu_{c}^{\rm{IC}}$ and also do not consider the lowest spectral segment below $\nu_{a}^{\rm{IC}}$ for simplicity. The relation between the peak flux density of the SSC spectral component and that of the synchrotron component is (Sari \& Esin 2001) \begin{equation} F_{\nu,\rm{max}}^{\rm{IC}}\approx4x_{0}\frac{(p-1)(p+1/3)}{(p-1/3)(p+1)^2}(\sigma_{\rm{T}}Rn)F_{\nu,\rm{max}}. 
\end{equation} The critical frequency corresponding to the crossing point of the synchrotron spectral component and the SSC component is \begin{equation} \nu_{\times}^{\rm{IC}}=\left\{ \begin{array}{l} \nu_{\times,<}^{\rm{IC}}\equiv\nu_{m}^{\rm{IC}}[c_2\displaystyle\frac{\epsilon_{B}}{\epsilon_{e}}(\frac{\gamma_{c}}{\gamma_{e,\rm{min}}})^{4}(2\gamma_{c}\gamma_{e,\rm{min}})^{2-p}]^{3/(2+3p)}, \phantom{s} \rm{if} \phantom{ss} \nu_{\times}^{\rm{IC}}<\nu_{m}^{\rm{IC}}, \\ \nu_{\times,>}^{\rm{IC}}\equiv\nu_{m}^{\rm{IC}}[c_2\displaystyle\frac{\epsilon_{B}}{\epsilon_{e}}(\frac{\gamma_{c}}{\gamma_{e,\rm{min}}})^{4}(2\gamma_{c}\gamma_{e,\rm{min}})^{2-p}], \phantom{ssssssss} \rm{if} \phantom{ss} \nu_{m}^{\rm{IC}}<\nu_{\times}^{\rm{IC}}<\nu_{c}^{\rm{IC}}, \\ \end{array} \right. \label{eqn:sc:nu-ic} \end{equation} where we include the coefficient $c_2=\displaystyle\frac{(1-\epsilon)^2(p-1/3)^2(p+1)^4}{9x_{0}^2(4-k-\epsilon)^2(p-2)^2(p+1/3)^2}$, which is much larger than unity (by at least one order of magnitude), but was neglected in equation (5.1) of Sari \& Esin (2001). Since $\displaystyle\frac{3}{2+3p}$ is always smaller than unity, $\nu_{\times}^{\rm{IC}}$ can be determined directly by $\nu_{\times}^{\rm{IC}}=\rm{max}\{\nu_{\times,<}^{\rm{IC}},\nu_{\times,>}^{\rm{IC}}\}$ without judging whether $\nu_{\times}^{\rm{IC}}<\nu_{m}^{\rm{IC}}$ or not. We have numerically calculated the temporal evolution of $\nu_{\times,<}^{\rm{IC}}$ and $\nu_{\times,>}^{\rm{IC}}$. 
{\em In the ISM case}, the expression for $\nu_{\times,<}^{\rm{IC}}$ is \begin{equation} \nu_{\times,<}^{\rm{IC}}=\left\{ \begin{array}{l} 3.5\times 10^{19}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.08}\epsilon_{B,-2.5}^{-1.35}E_{cm,53}^{-1.47}n^{-2.43}(\displaystyle\frac{t}{t_{cm}})^{(190\epsilon-754)/[129(4-\epsilon)]}\,{\rm{Hz}}, p=2.2, \\ 7.3\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.04}\epsilon_{B,-2.5}^{-1.33}E_{cm,53}^{-1.43}n^{-2.37}(\displaystyle\frac{t}{t_{cm}})^{(135\epsilon-522)/[92(4-\epsilon)]}\,{\rm{Hz}}, \phantom{s} p=2.4, \\ \end{array} \right. \label{eqn:sc:ism:nu<-ic} \end{equation} while the expression for $\nu_{\times,>}^{\rm{IC}}$ is \begin{equation} \nu_{\times,>}^{\rm{IC}}=\left\{ \begin{array}{l} 1.7\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.7}\epsilon_{B,-2.5}^{-0.6}E_{cm,53}^{-1.4}n^{-2.3}(\displaystyle\frac{t}{t_{cm}})^{(0.4+38\epsilon)/[9(4-\epsilon)]}\,{\rm{Hz}},\phantom{s} p=2.2, \\ 1.8\times 10^{16}(1+z)^{-1}\epsilon_{e,-0.5}^{-3.65}\epsilon_{B,-2.5}^{-0.45}E_{cm,53}^{-1.3}n^{-2.1}(\displaystyle\frac{t}{t_{cm}})^{(1.2+4.5\epsilon)/(4-\epsilon)}\,{\rm{Hz}},\phantom{ss} p=2.4. \\ \end{array} \right. \label{eqn:sc:ism:nu>-ic} \end{equation} We can see that $\nu_{\times}^{\rm{IC}}$ decreases first with $\nu_{\times}^{\rm{IC}}=\nu_{\times,<}^{\rm{IC}}$, then increases with $\nu_{\times}^{\rm{IC}}=\nu_{\times,>}^{\rm{IC}}$. The time when $\nu_{\times}^{\rm{IC}}$ reaches its minimum, $\nu_{\times,<}^{\rm{IC}}=\nu_{\times,>}^{\rm{IC}}=\nu_{m}^{\rm{IC}}$, is \begin{equation} t_{\times,m}^{\rm{IC}}=\left\{ \begin{array}{l} 4.3\epsilon_{e,-1}^{0.39}\epsilon_{B,-2.5}^{-0.47}E_{cm,53}^{-0.04}n^{-0.08}t_{cm},\phantom{sss} \rm{if}\,\,\epsilon=0.1, \\ 5.1\epsilon_{e,-0.5}^{0.34}\epsilon_{B,-2.5}^{-0.41}E_{cm,53}^{-0.04}n^{-0.07}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. 
\end{equation} for $p=2.2$, and \begin{equation} t_{\times,m}^{\rm{IC}}=\left\{ \begin{array}{l} 17.6\epsilon_{e,-1}^{0.33}\epsilon_{B,-2.5}^{-0.48}E_{cm,53}^{-0.07}n^{-0.15}t_{cm},\phantom{sss} \rm{if}\,\,\epsilon=0.1, \\ 16.7\epsilon_{e,-0.5}^{0.29}\epsilon_{B,-2.5}^{-0.41}E_{cm,53}^{-0.06}n^{-0.13}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.4$. The IC component could appear in the X-ray afterglow only if the minimum of $\nu_{\times}^{\rm{IC}}$ is less than $\nu=10^{18}\nu_{18}$ Hz, which leads to a lower limit on the ambient density $n$. We obtain the lower limit of $n$ as \begin{equation} n\gtrsim\left\{ \begin{array}{l} 8.6(1+z)^{-0.43}\nu_{18}^{-0.43}\epsilon_{e,-1}^{-1.58}\epsilon_{B,-2.5}^{-0.29}E_{cm,53}^{-0.61}\,\rm{cm}^{-3},\phantom{ss} \rm{if}\,\,\epsilon=0.1, \\ 1.6(1+z)^{-0.43}\nu_{18}^{-0.43}\epsilon_{e,-0.5}^{-1.54}\epsilon_{B,-2.5}^{-0.32}E_{cm,53}^{-0.61}\,\rm{cm}^{-3},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.2$, and \begin{equation} n\gtrsim\left\{ \begin{array}{l} 1.9(1+z)^{-0.46}\nu_{18}^{-0.46}\epsilon_{e,-1}^{-1.63}\epsilon_{B,-2.5}^{-0.30}E_{cm,53}^{-0.62}\,\rm{cm}^{-3},\phantom{sss} \rm{if}\,\,\epsilon=0.1, \\ 0.5(1+z)^{-0.46}\nu_{18}^{-0.46}\epsilon_{e,-0.5}^{-1.57}\epsilon_{B,-2.5}^{-0.33}E_{cm,53}^{-0.61}\,\rm{cm}^{-3},\phantom{sss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.4$. This lower limit of $n$ for the emergence of IC component in the X-ray afterglow in the slow cooling phase is typically in the range of $1-10$ cm$^{-3}$ (Sari \& Esin 2001; Panaitescu \& Kumar 2000; Zhang \& M\'{e}sz\'{a}ros 2001). However, the true lower limit of $n$ is even smaller than that given in the above equations, since we have neglected the case of $\nu_{\times}^{\rm{IC}}>\nu_{c}^{\rm{IC}}$. The spectral segment when $\nu_{\times}^{\rm{IC}}>\nu_{c}^{\rm{IC}}$ is oversimplified by a single power law approximation. 
In fact, the logarithmic term dominates at higher frequencies. The true evolution of $\nu_{\times}^{\rm{IC}}$ is always decreasing, although the decreasing rate is slowed at late times (Sari \& Esin 2001). {\em In the stellar wind case}, the expression for $\nu_{\times,<}^{\rm{IC}}$ is \begin{equation} \nu_{\times,<}^{\rm{IC}}=\left\{ \begin{array}{l} 4.4\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{0.56}\epsilon_{B,-2.5}^{-0.13}E_{cm,53}^{0.97}A_{\ast}^{-2.43}(\displaystyle\frac{t}{t_{cm}})^{-(127+61\epsilon)/[129(2-\epsilon)]}\,{\rm{Hz}},\phantom{s} p=2.2, \\ 3.5\times 10^{18}(1+z)^{-1}\epsilon_{e,-0.5}^{0.51}\epsilon_{B,-2.5}^{-0.14}E_{cm,53}^{0.93}A_{\ast}^{-2.37}(\displaystyle\frac{t}{t_{cm}})^{-[43(2+\epsilon)]/[92(2-\epsilon)]}\,{\rm{Hz}},\phantom{sss} p=2.4, \\ \end{array} \right. \label{eqn:sc:wind:nu<-ic} \end{equation} while the expression for $\nu_{\times,>}^{\rm{IC}}$ is \begin{equation} \nu_{\times,>}^{\rm{IC}}=\left\{ \begin{array}{l} 7.2\times 10^{17}(1+z)^{-1}\epsilon_{e,-0.5}^{-0.25}\epsilon_{B,-2.5}^{0.55}E_{cm,53}^{0.9}A_{\ast}^{-2.3}(\displaystyle\frac{t}{t_{cm}})^{(41.8-29\epsilon)/[9(2-\epsilon)]}\,{\rm{Hz}},\phantom{ss} p=2.2, \\ 3.0\times 10^{16}(1+z)^{-1}\epsilon_{e,-0.5}^{-0.5}\epsilon_{B,-2.5}^{0.6}E_{cm,53}^{0.8}A_{\ast}^{-2.1}(\displaystyle\frac{t}{t_{cm}})^{(5.4-3.5\epsilon)/(2-\epsilon)}\,{\rm{Hz}},\phantom{ssss} p=2.4. \\ \end{array} \right. \label{eqn:sc:wind:nu>-ic} \end{equation} The time when $\nu_{\times}^{\rm{IC}}$ reaches its minimum, $\nu_{\times,<}^{\rm{IC}}=\nu_{\times,>}^{\rm{IC}}=\nu_{m}^{\rm{IC}}$, is \begin{equation} t_{\times,m}^{\rm{IC}}=\left\{ \begin{array}{l} 1.4\epsilon_{e,-1}^{0.29}\epsilon_{B,-2.5}^{-0.24}E_{cm,53}^{0.02}A_{\ast}^{-0.05}t_{cm},\phantom{sss} \rm{if}\,\,\epsilon=0.1, \\ 1.9\epsilon_{e,-0.5}^{0.29}\epsilon_{B,-2.5}^{-0.24}E_{cm,53}^{0.02}A_{\ast}^{-0.05}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. 
\end{equation} for $p=2.2$, and \begin{equation} t_{\times,m}^{\rm{IC}}=\left\{ \begin{array}{l} 3.1\epsilon_{e,-1}^{0.32}\epsilon_{B,-2.5}^{-0.23}E_{cm,53}^{0.04}A_{\ast}^{-0.09}t_{cm},\phantom{sss} \rm{if}\,\,\epsilon=0.1, \\ 4.4\epsilon_{e,-0.5}^{0.32}\epsilon_{B,-2.5}^{-0.23}E_{cm,53}^{0.04}A_{\ast}^{-0.08}t_{cm},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.4$. The emergence of the IC component in the X-ray afterglow requires that the minimum of $\nu_{\times}^{\rm{IC}}$ is lower than the X-ray frequency $\nu=10^{18}\nu_{18}$ Hz, which leads to a lower limit on $A_{\ast}$, i.e. \begin{equation} A_{\ast}\gtrsim\left\{ \begin{array}{l} 1.32(1+z)^{-0.42}\nu_{18}^{-0.42}\epsilon_{e,-1}^{0.17}E_{cm,53}^{0.40},\phantom{sssssssss} \rm{if}\,\,\epsilon=0.1, \\ 1.55(1+z)^{-0.42}\nu_{18}^{-0.42}\epsilon_{e,-0.5}^{0.15}\epsilon_{B,-2.5}^{0.01}E_{cm,53}^{0.40},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.2$, and \begin{equation} A_{\ast}\gtrsim\left\{ \begin{array}{l} 1.03(1+z)^{-0.43}\nu_{18}^{-0.43}\epsilon_{e,-1}^{0.15}E_{cm,53}^{0.39},\phantom{sss} \rm{if}\,\,\epsilon=0.1, \\ 1.13(1+z)^{-0.43}\nu_{18}^{-0.43}\epsilon_{e,-0.5}^{0.14}E_{cm,53}^{0.39},\phantom{ss} \rm{if}\,\,\epsilon=0.32, \\ \end{array} \right. \end{equation} for $p=2.4$. The above constraint on $A_{\ast}$ is insensitive to other physical parameters. Together with the constraint on $A_{\ast}$ for $t<t_{cm}$, we conclude that the contribution of the IC component to the X-ray afterglow is insignificant and can be neglected for $A_{\ast}\lesssim1$, as indicated by observations of Wolf-Rayet stars and fittings to some GRB afterglows (Chevalier, Li, \& Fransson 2004; Panaitescu \& Kumar 2001, 2002). \section{Afterglow light curves of semi-radiative blast waves} \label{sec:lightcurves} We assume below that the physical parameters do not deviate significantly from those chosen in previous sections.
The contamination of the IC component in the high frequency afterglow (e.g. the soft X-ray afterglow) is not considered for simplicity; however, the IC emission can be inferred from equations (\ref{eqn:fc:SSC-spectrum}) and (\ref{eqn:sc:SSC-spectrum}). Under these assumptions, the light curve at an observing frequency $\nu$ can be determined by comparing this frequency with the critical frequencies $\nu_{cm}$ and $\nu_{ac}$, where $\nu_{ac}$ is the SSA/cooling frequency at $t_{ac}$ and can be calculated from equations (\ref{eqn:fc:spectrum}), (\ref{eqn:fc:ISM:t_ac}) and (\ref{eqn:fc:wind:t_ac}). Roughly, there are three types of afterglow light curves in the frequency ranges separated by these two critical frequencies. A careful inspection of the order of the transition time $t_{cm}$ and the crossing times $t_{a}$, $t_{c}$, and $t_{m}$ gives four types of light curves for both the ISM case and the stellar wind case. The crossing times $t_{a}$, $t_{c}$ and $t_{m}$ correspond to the times when the frequencies $\nu_{a}$, $\nu_{c}$ and $\nu_{m}$ equal the observing frequency, respectively. {\em In the case of ISM}, the orders of these times are (A) $t_{c}<t_{a}<t_{m}<t_{cm}$ for $\nu>\nu_{ac}$; (B) $t_{a}<t_{c}<t_{m}<t_{cm}$ for $\nu_{cm}<\nu<\nu_{ac}$; (C) $t_{a}<t_{cm}<t_{m}<t_{c}$ for $\nu_{a}(t_{cm})<\nu<\nu_{cm}$; (D) $t_{cm}<t_{a}<t_{m}<t_{c}$ for $\nu<\nu_{a}(t_{cm})$. Here $\nu_{a}(t_{cm})$ is the SSA frequency at $t_{cm}$.
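The bookkeeping of the four ISM cases can be summarized in a small helper (Python sketch; argument names are ours, and the ordering $\nu_{a}(t_{cm})<\nu_{cm}<\nu_{ac}$ is assumed):

```python
def ism_lightcurve_case(nu, nu_cm, nu_ac, nu_a_tcm):
    """Classify an observing frequency nu into the four ISM light-curve
    types (A-D) listed above, given the critical frequencies nu_cm, nu_ac,
    and nu_a(t_cm).  Assumes nu_a(t_cm) < nu_cm < nu_ac."""
    if nu > nu_ac:
        return "A"  # t_c < t_a < t_m < t_cm
    if nu > nu_cm:
        return "B"  # t_a < t_c < t_m < t_cm
    if nu > nu_a_tcm:
        return "C"  # t_a < t_cm < t_m < t_c
    return "D"      # t_cm < t_a < t_m < t_c
```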
Using the equations (\ref{eqn:fc:syn-spectrum-1}), (\ref{eqn:fc:syn-spectrum-2}), and (\ref{eqn:sc:syn-spectrum}) in the case of $k=0$, the light curves in each case can be constructed as (A) $F_{\nu}\propto t^{1}\nu^{2}$ ($t<t_{c}$), $F_{\nu}\propto t^{(5-2\epsilon)/(4-\epsilon)}\nu^{5/2}$ ($t_{c}$, $t_{a}$), $F_{\nu}\propto t^{-(1+2\epsilon)/(4-\epsilon)}\nu^{-1/2}$ ($t_{a}$, $t_{m}$), $F_{\nu}\propto t^{-(3p-2+2\epsilon)/(4-\epsilon)}\nu^{-p/2}$ ($t_{m}$, $t_{cm}$), $F_{\nu}\propto t^{-(3p-2+2\epsilon)/(4-\epsilon)+(2+\epsilon)(p-2)/(4+\epsilon)/(4-p)}\nu^{-p/2}$ ($t>t_{cm}$); (B) $F_{\nu}\propto t^{1}\nu^{2}$ ($t<t_{a}$), $F_{\nu}\propto t^{(2-11\epsilon)/3(4-\epsilon)}\nu^{1/3}$ ($t_{a}$, $t_{c}$), $F_{\nu}\propto t^{-(1+2\epsilon)/(4-\epsilon)}\nu^{-1/2}$ ($t_{c}$, $t_{m}$), $F_{\nu}\propto t^{-(3p-2+2\epsilon)/(4-\epsilon)}\nu^{-p/2}$ ($t_{m}$, $t_{cm}$), $F_{\nu}\propto t^{-(3p-2+2\epsilon)/(4-\epsilon)+(2+\epsilon)(p-2)/(4+\epsilon)/(4-p)}\nu^{-p/2}$ ($t>t_{cm}$); (C) $F_{\nu}\propto t^{1}\nu^{2}$ ($t<t_{a}$), $F_{\nu}\propto t^{(2-11\epsilon)/3(4-\epsilon)}\nu^{1/3}$ ($t_{a}$, $t_{cm}$), $F_{\nu}\propto t^{(2-3\epsilon)/(4-\epsilon)}\nu^{1/3}$ ($t_{cm}$, $t_{m}$), $F_{\nu}\propto t^{-3(p-1+\epsilon)/(4-\epsilon)}\nu^{-(p-1)/2}$ ($t_{m}$, $t_{c}$), $F_{\nu}\propto t^{-(3p-2+2\epsilon)/(4-\epsilon)+(2+\epsilon)(p-2)/(4+\epsilon)/(4-p)}\nu^{-p/2}$ ($t>t_{c}$); (D) $F_{\nu}\propto t^{1}\nu^{2}$ ($t<t_{cm}$), $F_{\nu}\propto t^{2(1-\epsilon)/(4-\epsilon)}\nu^{2}$ ($t_{cm}$, $t_{a}$), $F_{\nu}\propto t^{(2-3\epsilon)/(4-\epsilon)}\nu^{1/3}$ ($t_{a}$, $t_{m}$), $F_{\nu}\propto t^{-3(p-1+\epsilon)/(4-\epsilon)}\nu^{-(p-1)/2}$ ($t_{m}$, $t_{c}$), $F_{\nu}\propto t^{-(3p-2+2\epsilon)/(4-\epsilon)+(2+\epsilon)(p-2)/(4+\epsilon)/(4-p)}\nu^{-p/2}$ ($t>t_{c}$). The light curves in the ISM case are illustrated in Figure \ref{fig:ISM:lightcurves}. 
The crossing times $t_{c}$ and $t_{a}$ in case A and $t_{a}$ in case B occur very early at high observing frequencies, while $t_{c}$ in case C and $t_{a}$, $t_{m}$ and $t_{c}$ in case D occur very late at low observing frequencies. We thus neglect these crossing times in the figure. As indicated in Figure \ref{fig:ISM:lightcurves}, the radiation efficiency, $\epsilon$, has a marked effect on the afterglow light curves: it changes the temporal decay index $\alpha$ (defined by $F_{\nu}\propto t^{-\alpha}$) significantly. For illustration, we adopt $\epsilon=\epsilon_{e}=1/3$ and $p=2.2$ in the following. The initial slowly rising light-curve segment, $F_{\nu}\propto t^{1/6}$, predicted in the standard adiabatic blast wave model changes to a slowly decaying one, $F_{\nu}\propto t^{(2-11\epsilon)/3(4-\epsilon)}\sim t^{-0.15}$, as shown in the early segment of case C. This makes the sub-millimeter afterglow less effective at distinguishing between the ISM and the stellar wind, as proposed by Panaitescu \& Kumar (2000), if the observations are not frequent enough. At optical wavelengths, the early light curve typically behaves as $F_{\nu}\propto t^{-(1+2\epsilon)/(4-\epsilon)}\sim t^{-0.45}$ rather than $F_{\nu}\propto t^{-1/4}$ as in the adiabatic case. When $\nu_{m}$ crosses the observing frequency, i.e. for $t>t_{m}$, the optical and X-ray light curves decay as $F_{\nu}\propto t^{-(3p-2+2\epsilon)/(4-\epsilon)}\sim t^{-1.44}$, as shown in cases B and A. It should be noted that many X-ray afterglow light curves, and a considerable fraction of optical afterglow light curves, have temporal decay indices steeper than the value $(3p-2)/4\sim 1.15$ predicted by the standard adiabatic model (for $p\approx2.2$).
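The decay indices quoted above follow directly from the exponents of the case B and C segments; a minimal check (Python; ours) for $\epsilon=1/3$ and $p=2.2$:

```python
def ism_decay_indices(p=2.2, eps=1.0 / 3.0):
    """Temporal indices of the three ISM light-curve segments quoted in the
    text: the sub-millimeter (1/3-slope) branch, the early optical branch,
    and the post-t_m optical/X-ray branch."""
    submm = (2.0 - 11.0 * eps) / (3.0 * (4.0 - eps))
    optical = -(1.0 + 2.0 * eps) / (4.0 - eps)
    xray = -(3.0 * p - 2.0 + 2.0 * eps) / (4.0 - eps)
    return submm, optical, xray
```

For the fiducial values this returns approximately $(-0.15,\,-0.45,\,-1.44)$, matching the numbers quoted above.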
The observed decay indices of the X-ray afterglow light curves are $\langle\alpha_{X}\rangle=1.33\pm0.38$, while the median value of the observed X-ray spectral indices $\beta_{X}$, defined by $F_{\nu}\propto \nu^{-\beta_{X}}$, is $\sim1.05$ (Berger, Kulkarni, \& Frail 2003; De Pasquale et al. 2003). Assuming the X-ray frequency is above the cooling frequency, so that the spectral index is $\beta_{X}=p/2$, the measured $p$ is consistent with the standard value of the electron energy distribution index, $p=2.2-2.3$, predicted by the relativistic shock acceleration mechanism (see Achterberg et al. 2001 and references therein). However, the observed mean temporal decay index $\langle\alpha_{X}\rangle$ requires a relatively larger $\langle p\rangle\sim2.44$, provided the shock is adiabatic. There are several caveats regarding the observations of X-ray afterglows. First, the temporal behavior of an afterglow can be influenced by the equal arrival time surface effect, which mixes earlier light from high latitudes into the presently observed light. This effect is especially important at high observing frequencies, e.g. in the optical and X-ray bands, where the profile of the surface emissivity of the relativistic shock is ring-like (Sari 1998); it moderately slows down the post-$t_{m}$ decay of the theoretical light curve A. However, the X-ray afterglow is nearly immune to this effect, because its $t_{m}$ is very early and the observed decay index $\alpha_{X}$ is based on observations made typically several hours after the burst. Second, the measured $\beta_{X}$ is reliable, since the X-ray absorption in the medium along the line of sight takes place at photon energies $\lesssim 1$ keV while the observing window is $\sim2-10$ keV. Lastly, one should be cautious when interpreting the properties of X-ray afterglows with the synchrotron radiation mechanism alone, because the X-ray afterglow may be contaminated by the synchrotron self-Compton component.
However, there have so far been only a few X-ray afterglows confirmed to show IC components. Therefore, radiative corrections to the afterglow light curves must be invoked to account for the observations. As can be seen in Figure \ref{fig:ISM:lightcurves}, the light curve at high frequencies (types A and B) flattens when the afterglow enters the slow-cooling phase, $t>t_{cm}$. At the transition time $t_{cm}$, the spectrum near the observing frequency changes from $\nu_{c}<\nu_{m}<\nu$ to $\nu_{m}<\nu_{c}<\nu$, while the expression for the flux density is the same, i.e. $F_{\nu}=F_{\nu,\max}\nu_{m}^{(p-1)/2}\nu_{c}^{1/2}\nu^{-p/2}$. The flattening of the light curve results from the Compton parameter $Y$ in the flux density, i.e. $F_{\nu}\propto Y^{-1}$, since $\nu_{c}\propto(1+Y)^{-2}\approx Y^{-2}$. The Compton parameter $Y$ in the slow-cooling phase decreases slowly, in contrast to its constancy in the earlier fast-cooling phase. From equation (\ref{eqn:sc:spectrum}) for $Y$ in the slow-cooling phase and adopting $k=0$, the change of the temporal index around $t_{cm}$ is $\Delta\alpha=(2+\epsilon)(p-2)/[(4-\epsilon)(4-p)]$, which is shown in the last segments of panels A and B in Figure \ref{fig:ISM:lightcurves}. Note that since Sari et al. (1998) did not discuss IC cooling, there is no related segment in their $\epsilon=0$ light curves. {\em In the case of stellar wind}, the orders of the crossing times are (A) $t_{a}<t_{m}<t_{cm}<t_{c}$ for $\nu>\nu_{cm}$; (B) $t_{a}<t_{c}<t_{cm}<t_{m}$ for $\nu_{ac}<\nu<\nu_{cm}$; (C) $t_{c}<t_{a}<t_{cm}<t_{m}$ for $\nu_{a}(t_{cm})<\nu<\nu_{ac}$; (D) $t_{c}<t_{cm}<t_{a}<t_{m}$ for $\nu<\nu_{a}(t_{cm})$.
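The size of this flattening is easy to evaluate. The sketch below (Python; ours) implements the ISM expression for $\Delta\alpha$ quoted above and, for comparison, the wind-case form obtained in the same way with $k=2$, namely $(p-2)/(4-p)$:

```python
def delta_alpha_flattening(p=2.2, eps=1.0 / 3.0, k=0):
    """Change of the temporal index at t_cm caused by the slowly decaying
    Compton parameter Y: the ISM (k = 0) expression from the text, and the
    analogous wind-case (k = 2) form, (p - 2)/(4 - p)."""
    if k == 0:
        return (2.0 + eps) * (p - 2.0) / ((4.0 - eps) * (4.0 - p))
    return (p - 2.0) / (4.0 - p)
```

For $\epsilon=1/3$ and $p=2.2$ this gives $\Delta\alpha=7/99\approx0.07$ (ISM) and $1/9\approx0.11$ (wind), so the flattening is modest in both environments.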
Using the equations (\ref{eqn:fc:syn-spectrum-1}), (\ref{eqn:fc:syn-spectrum-2}), and (\ref{eqn:sc:syn-spectrum}) in the case of $k=2$, the light curves in each case can be constructed as (A) $F_{\nu}\propto t^{(7-5\epsilon)/2(2-\epsilon)}\nu^{5/2}$ ($t<t_{a}$), $F_{\nu}\propto t^{-(1+\epsilon)/2(2-\epsilon)}\nu^{-1/2}$ ($t_{a}$, $t_{m}$), $F_{\nu}\propto t^{-[3p-2-(p-2)\epsilon]/2(2-\epsilon)}\nu^{-p/2}$ ($t_{m}$, $t_{cm}$), $F_{\nu}\propto t^{-[3p-2-(p-2)\epsilon]/2(2-\epsilon)+(p-2)/(4-p)}\nu^{-p/2}$ ($t_{cm}$, $t_{c}$), $F_{\nu}\propto t^{-[3p-1-(p-1)\epsilon]/2(2-\epsilon)}\nu^{-(p-1)/2}$ ($t>t_{c}$); (B) $F_{\nu}\propto t^{(7-5\epsilon)/2(2-\epsilon)}\nu^{5/2}$ ($t<t_{a}$), $F_{\nu}\propto t^{-(1+\epsilon)/2(2-\epsilon)}\nu^{-1/2}$ ($t_{a}$, $t_{c}$), $F_{\nu}\propto t^{-(4-\epsilon)/3(2-\epsilon)}\nu^{1/3}$ ($t_{c}$, $t_{cm}$), $F_{\nu}\propto t^{-\epsilon/3(2-\epsilon)}\nu^{1/3}$ ($t_{cm}$, $t_{m}$), $F_{\nu}\propto t^{-[3p-1-(p-1)\epsilon]/2(2-\epsilon)}\nu^{-(p-1)/2}$ ($t>t_{m}$); (C) $F_{\nu}\propto t^{(7-5\epsilon)/2(2-\epsilon)}\nu^{5/2}$ ($t<t_{c}$), $F_{\nu}\propto t^{(4-3\epsilon)/(2-\epsilon)}\nu^{2}$ ($t_{c}$, $t_{a}$), $F_{\nu}\propto t^{-(4-\epsilon)/3(2-\epsilon)}\nu^{1/3}$ ($t_{a}$, $t_{cm}$), $F_{\nu}\propto t^{-\epsilon/3(2-\epsilon)}\nu^{1/3}$ ($t_{cm}$, $t_{m}$), $F_{\nu}\propto t^{-[3p-1-(p-1)\epsilon]/2(2-\epsilon)}\nu^{-(p-1)/2}$ ($t>t_{m}$); (D) $F_{\nu}\propto t^{(7-5\epsilon)/2(2-\epsilon)}\nu^{5/2}$ ($t<t_{c}$), $F_{\nu}\propto t^{(4-3\epsilon)/(2-\epsilon)}\nu^{2}$ ($t_{c}$, $t_{cm}$), $F_{\nu}\propto t^{2(1-\epsilon)/(2-\epsilon)}\nu^{2}$ ($t_{cm}$, $t_{a}$), $F_{\nu}\propto t^{-\epsilon/3(2-\epsilon)}\nu^{1/3}$ ($t_{a}$, $t_{m}$), $F_{\nu}\propto t^{-[3p-1-(p-1)\epsilon]/2(2-\epsilon)}\nu^{-(p-1)/2}$ ($t>t_{m}$). The light curves in the stellar wind case are illustrated in Figure \ref{fig:wind:lightcurves}. 
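As a quick numerical sanity check (a sketch, not part of the original text), the case A wind exponents above and the flattening formulas $\Delta\alpha$ quoted for the ISM ($k=0$) and wind ($k=2$) cases can be evaluated directly; with $p=2.2$ and $\epsilon=1/3$:

```python
p, eps = 2.2, 1 / 3

# case A (wind), t_a < t < t_m:   F_nu ~ t^{-(1+eps)/[2(2-eps)]}
early_optical = (1 + eps) / (2 * (2 - eps))
# case A (wind), t_m < t < t_cm:  F_nu ~ t^{-[3p-2-(p-2)eps]/[2(2-eps)]}
fast_xray = (3 * p - 2 - (p - 2) * eps) / (2 * (2 - eps))
# flattening at t_cm: ISM (k = 0) versus wind (k = 2)
d_alpha_ism = (2 + eps) * (p - 2) / ((4 - eps) * (4 - p))
d_alpha_wind = (p - 2) / (4 - p)

print(round(early_optical, 2), round(fast_xray, 2))   # 0.4 1.36
print(round(d_alpha_ism, 3), round(d_alpha_wind, 3))  # 0.071 0.111
```

These reproduce the representative values quoted in the surrounding discussion: decay indices $\sim t^{-0.4}$ and $\sim t^{-1.36}$, and a flattening $\Delta\alpha\sim0.1$ in the wind case.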
The crossing time $t_{c}$ in cases C and D occurs very early at low observing frequencies, while $t_{m}$ in case D occurs very late; we neglect these crossing times in the figure. The radiation efficiency has a significant effect on the light curves in the wind case. The flux density in the optical/infrared light curve initially decays as $t^{-(1+\epsilon)/2(2-\epsilon)}\sim t^{-0.4}$, rather than $t^{-1/4}$ as in the adiabatic case, as shown in case A. For the X-ray afterglow, the crossing time $t_{m}$ at which the typical frequency $\nu_{m}$ crosses the observing frequency is much earlier, so the light curve behaves as $t^{-[3p-2-(p-2)\epsilon]/2(2-\epsilon)}\sim t^{-1.36}$ during the whole fast-cooling phase, which is more consistent with observations than the adiabatic light curve (Berger, Kulkarni, \& Frail 2003). The light curve at high frequency flattens when the afterglow transits to the slow-cooling phase. In the same way as in the ISM case, from equation (\ref{eqn:sc:spectrum}) for $Y$ and adopting $k=2$, the change of the temporal index around $t_{cm}$ is $\Delta\alpha=(p-2)/(4-p)$ in the wind case. Since Chevalier \& Li (2000) did not include IC cooling, there is no corresponding flattening segment in their $\epsilon=0$ light curves. Although the flattening of the optical/X-ray light curve around $t_{cm}$ predicted in the inverse Compton dominated cooling regime is more obvious in the stellar wind case than in the ISM case, the change of the temporal decay index is only $\Delta\alpha\sim 0.1$ even in the former case. The smoothing of the theoretical optical light curves by the equal arrival time surface effect, together with the large error bars in X-ray afterglow observations, prevents the identification of such a flattening. \section{Conclusions and Discussion} \label{sec:conclusion} In this paper, we have investigated analytically the GRB afterglow hydrodynamics and constructed realistic semi-radiative light curves. 
We focus on the case in which the electron cooling is in the inverse Compton dominated regime, i.e. $\epsilon_{e}\gg\epsilon_{B}$ or $Y\gg1$. The realistic hydrodynamics applies to spherical blast waves under the assumption that the electron energy equipartition factor $\epsilon_{e}$ is not much larger than $1/3$, which seems reasonable based on both theoretical expectations and observations. In fact, the analytical solution for the afterglow hydrodynamics remains approximately valid throughout the relativistic stage when $\epsilon_{e}\lesssim 2/3$ (see equation \ref{eqn:hydro}). The only uncertainty is the actual evolution of the radiation efficiency in the late slow-cooling phase. Given $p\sim2.2-2.3$, we conclude that a constant radiation efficiency is a good approximation for a fairly long time in the slow-cooling phase. The transition from fast cooling to slow cooling happens much later in the IC dominated cooling regime than in the purely synchrotron cooling regime. Since the actual radiation efficiency decreases very slowly in the slow-cooling phase, the semi-radiative epoch is further prolonged, typically by two orders of magnitude in time, rather than ending with the fast-cooling phase as commonly assumed. As the GRB ejecta sweeps up more and more external medium, the Lorentz factor $\gamma$ of the shock decreases. When $\gamma$ equals the inverse of the initial half-opening angle $\theta_{0}$ of the GRB conical ejecta, or jet, the subsequent afterglow light curves deviate from the previous spherical-like ones, and the light curves in this paper are no longer applicable at this late stage. Generally, our semi-radiative afterglow light curves hold when the shock is relativistic, after the initial deceleration time and before the jet-like stage, with the radiation efficiency satisfying $\epsilon\approx\epsilon_{e}\lesssim\max\{\displaystyle\frac{2}{2+\theta_{0}},\frac{2}{3}\}$. 
The adiabatic afterglow light curves in the synchrotron dominated cooling regime have been well studied in previous works (Sari et al. 1998; Chevalier \& Li 2000; Granot \& Sari 2002). Sari et al. (1998) also considered the light curves of a fully radiative ($\epsilon=1$) blast wave. Our analytical results cannot be directly applied to this case. However, in the fully radiative case, equation (\ref{eqn:hydro}) can also be directly integrated, and the hydrodynamics and the resulting light curves are the same as those derived by Sari et al. (1998). The radiative corrections to the afterglow light curves in the wind model have been discussed by Chevalier \& Li (2000), who adopted the scaling laws of a semi-radiative shock given by B\"{o}ttcher \& Dermer (2000). The temporal exponents of the hydrodynamics and light curves in B\"{o}ttcher \& Dermer (2000) differ from ours: the radiative corrections to these exponents in their work are smaller than those in ours\footnotemark\footnotetext{\label{foot:lightcurve}The coefficient of the $\epsilon$ term in these temporal exponents in their work is smaller than ours by an exact factor of 2.}. Cohen, Piran, \& Sari (1998) studied the hydrodynamics of a semi-radiative relativistic blast wave considering a more complicated post-shock material distribution (similar to Blandford \& McKee 1976) than the simple thin-shell approximation used in our work. The hydrodynamic evolution differs among the three treatments of Cohen et al. (1998), B\"{o}ttcher \& Dermer (2000), and this paper; the hydrodynamic self-similarity index $m$ we obtain lies between those of the other two. Despite the differences in hydrodynamics due to the different approximations and treatments, we can see that the radiative corrections to afterglow light curves are significant. 
For example, the temporal decay index of X-ray afterglows at around $10$ hours after the main burst varies in the range $1.23-1.69$ in the ISM case and $1.21-1.58$ in the stellar wind case for $p=2.2-2.3$ and $\epsilon_{e}=0.1-0.5$, provided that the IC component is neglected. This range is consistent with the observations (Berger, Kulkarni, \& Frail 2003). Observationally, the values of $\epsilon_{e}$ and $p$ can be inferred for a particular X-ray afterglow from the so-called ``closure relation'' between the temporal index $\alpha_{X}$ and the spectral index $\beta_{X}$, provided that the IC component can be neglected. Such a closure relation can be obtained from case A described in \S \ref{sec:lightcurves} for either the ISM or the wind case. The application of this method to optical afterglows requires caution. The early optical light curve has a broken power-law profile around $t_{m}$. In contrast to the early X-ray afterglow, whose $t_{m}$ is much earlier and which can be regarded as a simple power law, the optical light curve may be affected moderately when the equal arrival time surface effect is included. Another reason is that the reddening in optical spectra is unknown. While the Galactic extinction can be empirically decoupled, the extinction and reddening within the circum-burst environment and the host galaxy are less constrained. These two facts prevent credible measurements of the temporal and spectral indices of optical afterglows, respectively. We have obtained criteria for the emergence of IC components in soft X-ray afterglows by giving lower limits on the external medium density. In the ISM case, the lower limit of $n$ is $\sim10$ cm$^{-3}$ in the fast-cooling phase, while it is $\sim 1$ cm$^{-3}$ in the slow-cooling phase (Panaitescu \& Kumar 2000; Sari \& Esin 2001; Zhang \& M\'{e}sz\'{a}ros 2001). These are typical densities of the interstellar medium in our Galaxy. 
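The quoted wind-case range can be reproduced from the wind-case fast-cooling exponent given earlier (the ISM-case exponent is not restated here, so only the wind range is checked); a quick sketch:

```python
def alpha_xray_wind(p, eps):
    """Fast-cooling X-ray decay index in the wind (k = 2) case."""
    return (3 * p - 2 - (p - 2) * eps) / (2 * (2 - eps))

# scan the quoted parameter ranges p = 2.2-2.3, eps_e ~ eps = 0.1-0.5
vals = [alpha_xray_wind(p, eps) for p in (2.2, 2.3) for eps in (0.1, 0.5)]
print(round(min(vals), 2), round(max(vals), 2))  # 1.21 1.58
```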
In the wind case, the lower limit of the wind parameter is always $A_{\ast}\sim 1$ (Panaitescu \& Kumar 2000). Such a critical $A_{\ast}$ is also typical for Wolf-Rayet stars in our Galaxy. It should be noted that the wind parameter obtained from fitting afterglows within the wind interaction model seems to be quite small (Chevalier \& Li 1999, 2000; Panaitescu \& Kumar 2001; Dai \& Wu 2003; Chevalier, Li, \& Fransson 2004). This contradiction may be due to the limitation of our knowledge about the mass loss of massive stars at the last stage before their collapse. Taking the low $A_{\ast}$ inferred from afterglows, we draw the tentative conclusion that the IC components in X-ray afterglows are insignificant in the stellar wind case. For such a low $A_{\ast}$, a wind bubble would be produced, surrounded by either an outer giant molecular cloud, a slow wind from a previous evolutionary stage, or an extremely high pressure star-burst environment (Dai \& Wu 2003; Chevalier, Li, \& Fransson 2004). The termination shock radius of the wind bubble would be reached by the GRB shock within hours in the observer's frame, after which the environment of the shock would change from wind type to uniform medium type. We have neglected such a complicated case in this work. Recently, Yost et al. (2003) relaxed the assumption of a constant magnetic equipartition factor $\epsilon_{B}$ and broadened the range of circum-burst medium types. They made a detailed comparison between the results of different assumptions as well as different circum-burst media, and found a degeneracy between different assumed evolutions of $\epsilon_{B}$ and different medium types. Here, we adopt the constant $\epsilon_{B}$ assumption and consider the most plausible medium types, i.e. the ISM and the stellar wind. The radiative corrections in modelling afterglows are important. 
Such an effect should be seriously taken into account in the analysis of afterglows, especially when a large number of afterglows are observed in the \emph{Swift} era. It will directly affect the energetics of GRB remnants and therefore the actual efficiency of the prompt GRBs (Wu et al., in preparation, which also includes the ICS effect). We thank the referee for his/her valuable suggestions and comments. This work was supported by the National Natural Science Foundation of China (grants 10233010, 10221001, 10003001, and 10473023), the Ministry of Science and Technology of China (NKBRSF G19990754), the Special Funds for Major State Basic Research Projects, and the Foundation for the Author of National Excellent Doctoral Dissertation of P. R. China (Project No. 200125).
\section{Introduction} \noindent A large majority of deployed methods from control theory require as a prerequisite a relatively precise dynamic model characterizing the temporal evolution of the state variables at stake. This model generally plays a central role, since the performance of the method is often directly related to the accuracy of the model \cite{weinmann2012uncertain,cheah2006adaptive,bauersfeld_neurobem_2021, busoniu_reinforcement_2018, bemporad2006model}. Consequently, modeling and identification of a dynamic system is an essential preliminary step, since it serves as the foundation for additional processing, such as controller or observer design. However, this is not a trivial task: in the general case, the system is complex, non-linear, and involves physical phenomena that are often difficult or even impossible to model correctly without strongly impacting the required computation time. Moreover, the identification of the parameters of a non-linear model is a non-convex problem, which can require many hours of calibration and experiments, and often calls for the intervention of domain experts and the ability to freely interact with the system. The development of data-driven techniques for the identification of non-linear systems has provided a promising response to these issues and has received great interest over the last decade (see for example \cite{LJUNG20201175}, \cite{MASTI2021}, \cite{PILLONETTO2014657}). Specifically, neural networks propose to remove the burden of modeling by replacing it with the collection of massive datasets from the system of interest. Modeling methods based on Deep Learning thus constitute an alternative to painstaking physical modeling. 
The main idea is based on the use of an extremely versatile model, capable of approximating most dynamics to a certain degree of precision, that can be directly identified from pairs of input-output measurements in the case of dynamic systems, provided that these measurements carry enough information to approximate the true dynamics. Nevertheless, the great flexibility of neural networks comes at the cost of a lack of mathematical structure, making it difficult to perform theoretical analysis in terms of robustness, precision and stability. Moreover, learning complex, high-dimensional dynamical systems is not straightforward: the general formulation leads to latent dynamic models lacking meaningful physical structure and requires large-dimensional state spaces. In this article, we propose a new identification structure for nonlinear state-space systems from a set of observation trajectories and associated inputs. We demonstrate the existence of a regressor, inspired by finite impulse response models, that maps a series of past observations to future outputs, and provide bounds derived from the prediction error during deployment. We then deduce a high-dimensional canonical state-space model discovered using an output-error based approach and propose to learn an auto-encoder projecting the dynamics into a smaller state space. We evaluate our proposal on different systems in simulation and in the real world. \section{Related Work} \noindent Data-driven dynamic models are widely studied in the community and receive a lot of attention. In particular, \cite{brunton2016discovering,Sahoo2018Learning,Chen2021physicsinformed} propose to find governing physical equations by performing a sparse regression on the data. At the junction between physical models and learning, \cite{yin2021augmenting,Long2018HybridNetIM,qi2019integrating,mehta2021neural} use neural networks to model complementary phenomena not described by the initial physical model. 
In particular, \cite{shi_neural_2019,bauersfeld_neurobem_2021,possamai2022learning} extend the dynamic model of an unmanned aerial vehicle with a neural network in charge of predicting aerodynamic disturbances, which are often very demanding and intractable for real-time physical simulation. Close to our work, a body of literature proposes to use deep learning for the identification of latent models, that is to say, models without direct physical meaning. This is notably the case for recent works around the Koopman operator \cite{lusch2018deep,janny2021deepkkl,peralez2020datadriven,rowley2009spectral,buissonfenet2022towards}. Another solution is to use an auto-encoder structure to model the latent dynamics of a system from past observations, following \cite{MASTI2021,beintema2021}. Our proposal differs from this line of work in three main points: (1) we provide theoretical results and conditions for the existence of the dynamical system that we identify, (2) we propose to use a high-dimensional regressor structure without explicit state representation, from which a compressed state is deduced by a dimensionality reduction operation, and (3) we evaluate our approach on challenging and unstable systems. \paragraph{Notation} $\Re$ and $\mathbb{N}$ denote the sets of reals and natural numbers respectively. $\left\|\cdot\right\|$ refers to a generic norm on some appropriate space. \section{Problem statement and preliminary results} \subsection{Problem statement} \noindent We consider a nonlinear discrete-time system of the general form \begin{equation} \label{eq_general_form} \left\{ \begin{array}{rcl} x_{t+1} &=& f^\circ(x_t, u_t) \\ y_t &=& h^\circ(x_t,u_t) +w_t, \end{array} \right. \end{equation} with $x_t\in \mathcal{X}\subset\Re^{n_x}$, $u_t\in \mathcal{U}\subset \Re^{n_u}$, $y_t\in \mathcal{Y}\subset \Re^{n_y}$ being the state, the input and the output of the system at discrete time $t\in \mathbb{N}$ respectively. 
$f^\circ:\Re^{n_x}\times \Re^{n_u}\rightarrow \Re^{n_x}$ and $h^\circ:\Re^{n_x}\times \Re^{n_u}\rightarrow \Re^{n_y}$ are some nonlinear vector-valued functions. As to $w_t\in \mathcal{W}\subset \Re^{n_y}$, it represents measurement noise. We will make the following important assumptions: \begin{enumerate} \item The external signals $u$ and $w$ take values in compact sets $\mathcal{U}$ and $\mathcal{W}$ respectively. \label{assump:Compacts} \item The state-space $\mathcal{X}$ is a known compact set containing the initial state $x_0$. \item $(\mathcal{X},\mathcal{U},\mathcal{W},\mathcal{Y})$ and $(f^\circ,h^\circ)$ satisfy the following invariance conditions: $$ \begin{aligned} & \forall (x,u)\in \mathcal{X}\times \mathcal{U}, f^\circ(x,u)\in \mathcal{X} \\ & \forall (x,u,w)\in \mathcal{X}\times \mathcal{U}\times \mathcal{W}, h^\circ(x,u)+w \in \mathcal{Y} \end{aligned} $$ \label{assump:invariance} \item $f^\circ$ (and $h^\circ$) are uniformly Lipschitz continuous on $\mathcal{X}\times \mathcal{U}\subset \Re^{n_x}\times \Re^{n_u}$ with respect to $\mathcal{U}$, i.e., there exists a constant $\gamma_f>0$ such that $\|f^\circ(x,u)-f^\circ(x',u)\|\leq \gamma_f\|x-x'\|$ for all $(x,x',u)\in \mathcal{X}\times \mathcal{X}\times \mathcal{U}$. \label{assump:Lipschitz} \end{enumerate} The assumptions \ref{assump:Compacts}--\ref{assump:Lipschitz} are required essentially to theoretically ensure the well-definedness of optimization problems that will be expressed later in the paper. Of course in the context of system identification, such types of assumptions are not intended to be checked prior to applying the method to be developed. 
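As a concrete toy example (hypothetical maps and constants, not from the paper) satisfying the compactness, invariance and Lipschitz assumptions above, one can simulate a contractive scalar system of the stated form to generate input-output data:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u):   # gamma_f = 0.9-Lipschitz in x; maps X = [-2, 2] into itself
    return 0.9 * np.tanh(x) + 0.1 * u

def h(x, u):   # output map
    return x + 0.5 * u

T, x = 200, 0.0
us = rng.uniform(-1.0, 1.0, size=T)   # inputs in the compact set U = [-1, 1]
ws = 0.01 * rng.standard_normal(T)    # small measurement noise w_t
ys = np.empty(T)
for t in range(T):
    ys[t] = h(x, us[t]) + ws[t]       # y_t = h(x_t, u_t) + w_t
    x = f(x, us[t])                   # x_{t+1} = f(x_t, u_t)
```

Since $|f(x,u)|\leq 1$ for all $(x,u)\in\mathcal{X}\times\mathcal{U}$, the set $\mathcal{X}=[-2,2]$ is invariant, and $f$ is uniformly $0.9$-Lipschitz in $x$.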
The problem of interest in this paper can be stated as follows: Given a finite number $N$ of input-output data pairs $\left\{(u_t,y_t):t=1,\ldots,N\right\}$ generated by a nonlinear system of the form \eqref{eq_general_form}, find an appropriate dimension $n_x$ of a state-space representation along with estimates of the associated functions $f^\circ$ and $h^\circ$. Here, the number $n_y$ of outputs and the number $n_u$ of inputs are known a priori. However the dimension $n_x$ of the state is a parameter of the model which needs to be estimated along with the maps $(f^\circ,h^\circ)$. We develop a solution in three steps: first, a nonlinear regression model is derived from the data-generating system equations in \eqref{eq_general_form}. The underlying nonlinear map is then modelled by a deep neural network structure and trained with the available data following an output-error framework. Given this map, one can readily form an equivalent canonical state-space representation of \eqref{eq_general_form} with, however, the drawback that its dimension may be high. Hence, the third and last step of the proposed procedure consists in model reduction\footnote{By state-space model reduction, we mean the reduction of the state dimension. The process aims at finding another state-space model which is as close as possible to the primary one but with a compressed state. }, an objective which is achieved through the design of an appropriate encoder-decoder. \subsection{Preliminary results} \label{subsec:preliminary_results} \noindent An important challenge concerning the identification of the system \eqref{eq_general_form} is the fact that the state $x_t$ is not entirely measured. We therefore need to express it first as a function of the available past input-output measurements $\left\{(u_\tau,y_\tau): \tau<t\right\}$. 
Indeed, if the noise $w_t$ in \eqref{eq_general_form} is assumed to be identically equal to zero, then under appropriate observability conditions on the system, there exists a time horizon $\ell$ and a map $\phi:\Re^{L}\rightarrow \Re^{n_x}$, with $L=\ell(n_u+n_y)$, such that the state $x_t$ can be written as \begin{equation}\label{eq:state-function} x_t = \phi(z_t) \end{equation} where \begin{equation}\label{eq:zt} z_t=\begin{pmatrix}u_{t-\ell}^{\top} & y_{t-\ell}^{\top}& \cdots & u_{t-1}^{\top} & y_{t-1}^{\top}\end{pmatrix}^\top \end{equation} is the so-called regressor vector. To show the existence of such a map $ \phi $, some observability conditions on the system to be identified \eqref{eq_general_form} are needed. For this purpose let us start by introducing some notations. For a positive integer $i$, let $F_i:\Re^{n_x}\times \Re^{in_u}\rightarrow \Re^{n_x}$ be the map defined recursively from the function $f^\circ$ in \eqref{eq_general_form} as follows: for $x\in \Re^{n_x}$ and $(u_1,\ldots,u_i)\in \Re^{in_u}$, $F_1(x,u_1)=f^\circ(x,u_1)$ and for all $i\geq 2$, \begin{equation}\label{eq:Fi} F_i(x,u_1,\ldots,u_i)=f^\circ\big(F_{i-1}(x,u_1,\ldots,u_{i-1}),u_i\big). \end{equation} Before proceeding further, let us mention a useful property of the maps $F_i$. \begin{lem}\label{lem:Lipschitz} Under Assumption \ref{assump:invariance}, if $f^\circ:\Re^{n_x} \times \Re^{n_u}\rightarrow \Re^{n_x}$ is uniformly $\gamma_f$-Lipschitz on $\mathcal{X}\times \mathcal{U}$ with respect to $\mathcal{U}$, then the map $F_i$ defined in \eqref{eq:Fi} is uniformly $\gamma_f^i-$Lipschitz on $\mathcal{X}\times\mathcal{U}^i$ with respect to $\mathcal{U}^i\subset \Re^{in_u}$. \end{lem} \begin{proof} The proof of this lemma is straightforward and is therefore omitted. 
\end{proof} Now consider the function $\mathcal{O}_i:\Re^{n_x}\times \Re^{in_u}\rightarrow \Re^{in_y}$ given by \begin{equation}\label{eq:Observability-Function} \mathcal{O}_i(x,u_1,\ldots,u_i) = \begin{pmatrix}h^\circ(x,u_1)\\ h^\circ\big(F_{1}(x,u_1),u_2\big)\\ \vdots \\ h^\circ\big(F_{i-1}(x,u_1,\ldots,u_{i-1}),u_i\big)\end{pmatrix}. \end{equation} For notational simplicity, let us pose $\bar{u}_{1|i}=(u_1,\ldots,u_i)$ so that $\mathcal{O}_i(x,u_1,\ldots,u_i)$ in the previous equality can be replaced by $\mathcal{O}_i(x,\bar{u}_{1|i})$. \begin{definition}\label{def:observability} The system \eqref{eq_general_form} is said to be finite-time observable over a time horizon $r\in \mathbb{N}$ if for each $\bar{u}\in \mathcal{U}^{r }$, the function $\mathcal{O}_r(\cdot,\bar{u})$, with $\mathcal{O}_r$ defined as in \eqref{eq:Observability-Function}, is injective. \end{definition} Note that if the observability property in Definition \ref{def:observability} holds for some $r\in \mathbb{N}$ then it holds as well for any $i\geq r$. \begin{prop}[Existence of the map $\phi$] If the nonlinear system \eqref{eq_general_form} (considered under the assumption that $w\equiv 0$) is finite-time observable in the sense of Definition \ref{def:observability}, then there exist $\ell\in \mathbb{N}$ and a (nonlinear) map $\phi:\Re^{L}\rightarrow \Re^{n_x}$ such that \eqref{eq:state-function} holds for all time $t\geq \ell$, any initial state in $\mathcal{X}$ and any input signal taking values in $\mathcal{U}$. \end{prop} \begin{proof} For discrete time indices $(i,j)$ with $i\leq j$, let $\bar{y}_{i|j}=\begin{pmatrix}y_i^\top & \cdots & y_j^\top\end{pmatrix}^\top$ be a vector of outputs from time $i$ to time $j$. Likewise define $\bar{u}_{i|j}=\begin{pmatrix}u_i^\top & \cdots & u_j^\top\end{pmatrix}^\top$. By iterating the system equations, it is easy to see that \begin{equation}\label{eq:DataEq} \bar{y}_{t-\ell|t-1} = \mathcal{O}_\ell(x_{t-\ell},\bar{u}_{t-\ell|t-1}). 
\end{equation} By the finite-time observability assumption of the system, $\mathcal{O}_\ell(\cdot,\bar{u})$ admits an inverse for any given $\bar{u}\in \mathcal{U}^{\ell}$. Denote with $\mathcal{O}_\ell^*(\cdot,\bar{u}):\Re^{\ell n_y}\rightarrow \Re^{n_x}$ the inverse map of $\mathcal{O}_\ell(\cdot,\bar{u})$ which is such that $\mathcal{O}_\ell^*\left(\mathcal{O}_\ell(x,\bar{u}),\bar{u}\right) = x.$ \noindent It hence follows from \eqref{eq:DataEq} that \begin{equation}\label{eq:Inverse-x} x_{t-\ell}=\mathcal{O}_\ell^*\left(\bar{y}_{t-\ell|t-1},\bar{u}_{t-\ell|t-1}\right) \end{equation} which, by recursively applying the first equation of \eqref{eq_general_form}, gives \begin{equation}\label{eq:xt=phi(zt)} x_t = F_\ell\left(\mathcal{O}_\ell^*\left(\bar{y}_{t-\ell|t-1},\bar{u}_{t-\ell|t-1}\right),\bar{u}_{t-\ell|t-1}\right) \triangleq \phi(z_t). \end{equation} \end{proof} Consider now the more realistic scenario where the (unknown) measurement noise sequence $\left\{w_t\right\}$ is nonzero. Then Eq. \eqref{eq:DataEq} becomes \begin{equation}\label{eq:DataEq-Noise} \bar{y}_{t-\ell|t-1} = \mathcal{O}_\ell(x_{t-\ell},\bar{u}_{t-\ell|t-1})+\bar{w}_{t-\ell|t-1}. \end{equation} As a consequence, the state can no longer be obtained exactly by Eq. \eqref{eq:Inverse-x} or \eqref{eq:xt=phi(zt)} since $\bar{y}_{t-\ell|t-1}$ does not lie in the range of $\mathcal{O}_\ell(\cdot,\bar{u}_{t-\ell|t-1})$. Let in this case the state ${x}_{t-\ell}$ and $x_t$ be estimated by \begin{align} &\hat{x}_{t-\ell} \in \operatornamewithlimits{arg\,min}_{x\in \mathcal{X}}\left\|\bar{y}_{t-\ell|t-1}-\mathcal{O}_\ell(x,\bar{u}_{t-\ell|t-1})\right\| \label{eq:xhatPast}\\ &\hat{x}_t = F_\ell\left(\hat{x}_{t-\ell},\bar{u}_{t-\ell|t-1}\right), \label{eq:xhat} \end{align} for some norm $\left\|\cdot \right\|$ on $\Re^{\ell n_y}$. 
The optimization problem in \eqref{eq:xhatPast} is well-defined since, by Assumptions \ref{assump:Compacts}-\ref{assump:Lipschitz}, the function $x \mapsto \left\|\bar{y}_{t-\ell|t-1}-\mathcal{O}_\ell(x,\bar{u}_{t-\ell|t-1})\right\|$ is defined on a compact set $\mathcal{X}$ and is continuous. These, by the extreme value theorem, are sufficient conditions for the existence of a minimum and for the existence of the minimizer $\hat{x}_{t-\ell}$ as defined above. In contrast, the estimates $\hat{x}_{t-\ell}$ and $\hat{x}_{t}$ need not be uniquely defined in a general setting. Uniqueness would require some more strict conditions on the system under consideration. Here, we will be content with a set-valued version $\hat{\phi}$ of $\phi$ in the noisy estimation scenario. Hence let $\hat{\phi}$ be defined by $$ \hat{\phi}(z_t) = \left\{F_\ell\left(\hat{x}_{t-\ell},\bar{u}_{t-\ell|t-1}\right) : \hat{x}_{t-\ell} \mbox{ as in \eqref{eq:xhatPast} }\right\}. $$ A question we ask now is how far the noisy estimate \eqref{eq:xhat} lies from the true state $x_t$. To study this, a stronger notion of observability is introduced as follows. \begin{definition}\label{def:Uniform-Observability} The system \eqref{eq_general_form} is called \textit{finite-time uniformly observable} over a time horizon $\ell\in \mathbb{N}$ if there exists a constant $\alpha_\ell>0$ such that for each $\bar{u}\in \mathcal{U}^{\ell }$, \begin{equation}\label{eq:Uniform-Observability} \left\|\mathcal{O}_\ell(x,\bar{u})-\mathcal{O}_\ell(x',\bar{u})\right\| \geq \alpha_\ell\left\|x-x'\right\| \end{equation} for all $(x,x')\in \Re^{n_x}\times \Re^{n_x}$. Here $\left\|\cdot \right\|$ denotes a generic norm defined on appropriate spaces. \end{definition} Based on this property, it is possible to bound the error between the noisy estimate \eqref{eq:xhat} and the true state. 
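A minimal numerical sketch (with toy, hypothetical $f^\circ$ and $h^\circ$) of this estimation procedure: build $\mathcal{O}_\ell$ by the recursion defining $F_i$, minimize over a grid covering the compact set $\mathcal{X}$, then propagate through $F_\ell$:

```python
import numpy as np

def f(x, u):   # toy dynamics (hypothetical, a contraction with gamma_f = 0.9)
    return 0.9 * np.tanh(x) + 0.1 * u

def h(x, u):   # toy output map
    return x + 0.5 * u

def O_ell(x, ubar):
    """Observability map: stack outputs along a rollout (F_i built recursively)."""
    out = []
    for u in ubar:
        out.append(h(x, u))
        x = f(x, u)
    return np.array(out)

def estimate_state(ybar, ubar, grid):
    """x_hat_{t-ell} = argmin over the grid of ||ybar - O_ell(x, ubar)||,
    then x_hat_t = F_ell(x_hat_{t-ell}, ubar)."""
    errs = [np.linalg.norm(ybar - O_ell(x, ubar)) for x in grid]
    x_hat = grid[int(np.argmin(errs))]
    for u in ubar:
        x_hat = f(x_hat, u)
    return x_hat

# noiseless sanity check: the estimate recovers the propagated true state
rng = np.random.default_rng(1)
x0, ell = 0.7, 5
ubar = rng.uniform(-1.0, 1.0, ell)
ybar = O_ell(x0, ubar)                 # w = 0 here
x_true = x0
for u in ubar:
    x_true = f(x_true, u)
err = abs(estimate_state(ybar, ubar, np.linspace(-2, 2, 4001)) - x_true)
print(err)  # small
```

The grid search is only for illustration; any minimizer over $\mathcal{X}$ would do, and with noise the recovered $\hat{x}_t$ is accurate up to the bound discussed next.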
\begin{prop}\label{prop:error-bound} Under Assumptions \ref{assump:Compacts}--\ref{assump:Lipschitz}, if the system \eqref{eq_general_form} is \textit{finite-time uniformly observable} over a time horizon $\ell\in \mathbb{N}$ in the sense of Definition \ref{def:Uniform-Observability}, then \begin{equation}\label{eq:bound} \left\|\hat{x}_{t}-x_{t} \right\|\leq 2 \gamma_f^\ell \alpha_\ell^{-1}\left\|\bar{w}_{t-\ell|t-1} \right\| \end{equation} where $\gamma_f$ is the Lipschitz constant of $f^\circ$ (see Assumption \ref{assump:Lipschitz}) and $\alpha_{\ell}$ is the constant appearing in \eqref{eq:Uniform-Observability}. \end{prop} \begin{proof} It follows from the definition \eqref{eq:xhatPast} of $\hat{x}_{t-\ell}$ that $$\begin{aligned} \left\|\bar{y}_{t-\ell|t-1}-\mathcal{O}_\ell(\hat{x}_{t-\ell},\bar{u}_{t-\ell|t-1})\right\|&\\ &\hspace{-2cm}\leq \left\|\bar{y}_{t-\ell|t-1}-\mathcal{O}_\ell(x,\bar{u}_{t-\ell|t-1})\right\|, \end{aligned}$$ for all $x\in \Re^{n_x}$. In particular this inequality holds for $x=x_{t-\ell}$. By then invoking \eqref{eq:DataEq-Noise} we get $$ \begin{aligned} &\left\|\mathcal{O}_\ell(x_{t-\ell},\bar{u}_{t-\ell|t-1}) -\mathcal{O}_\ell(\hat{x}_{t-\ell},\bar{u}_{t-\ell|t-1})+\bar{w}_{t-\ell|t-1}\right\|\\ &\hspace{4cm}\leq \left\|\bar{w}_{t-\ell|t-1}\right\|. 
\end{aligned} $$ By the triangle inequality property of norms, it follows that \begin{gather*} \left\|\mathcal{O}_\ell(x_{t-\ell},\bar{u}_{t-\ell|t-1}) -\mathcal{O}_\ell(\hat{x}_{t-\ell},\bar{u}_{t-\ell|t-1})+\bar{w}_{t-\ell|t-1}\right\|\\ \geq \left\|\mathcal{O}_\ell(x_{t-\ell},\bar{u}_{t-\ell|t-1}) -\mathcal{O}_\ell(\hat{x}_{t-\ell},\bar{u}_{t-\ell|t-1})\right\| \\ -\left\|\bar{w}_{t-\ell|t-1}\right\| \end{gather*} Combining with the previous inequality yields $$ \begin{aligned} \alpha_\ell \left\|\hat{x}_{t-\ell}-{x}_{t-\ell}\right\|&\\ &\hspace{-2cm}\leq \left\|\mathcal{O}_\ell(x_{t-\ell},\bar{u}_{t-\ell|t-1}) -\mathcal{O}_\ell(\hat{x}_{t-\ell},\bar{u}_{t-\ell|t-1})\right\|\\ &\hspace{-2cm}\leq 2\left\|\bar{w}_{t-\ell|t-1}\right\|. \end{aligned} $$ Here, the first inequality is a consequence of the assumption of uniform finite-time observability. As a consequence, $\left\|\hat{x}_{t-\ell}-{x}_{t-\ell}\right\|\leq 2\alpha_\ell^{-1}\left\|\bar{w}_{t-\ell|t-1}\right\| $. The result now follows by applying \eqref{eq:xhat}, the uniform Lipschitz assumption on $f^\circ$ stated in Assumption \ref{assump:Lipschitz} and Lemma \ref{lem:Lipschitz}. \end{proof} It can be observed from the expression of the error bound \eqref{eq:bound} that the more strongly the system is observable (that is, the larger the constant $\alpha_\ell$), the more robust the estimate $\hat{x}_t$. Indeed $\alpha_\ell$, when it exists, can be defined as $$\inf_{\substack{\bar{u},x,x'\in \mathcal{U}^{\ell}\times \mathcal{X}\times \mathcal{X}\\x\neq x'}}\dfrac{\left\|\mathcal{O}_\ell(x,\bar{u})-\mathcal{O}_\ell(x',\bar{u})\right\|}{\left\|x-x'\right\|}. 
$$ \section{Modeling and learning} \label{sec:modelingandlearning} \subsection{Nonlinear regression model} \label{subsec:nonlinear_regression_model} \begin{figure}[t] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{figures/model.pdf} \caption{} \label{fig:state_space_equation} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{figures/learning.pdf} \caption{} \label{fig:learning_models} \end{subfigure} \caption{(a) Block diagram of our canonical reduced state-space representation as defined in \eqref{eq:Final-State-Space-Estimate} (b) Training: the dynamics are modeled by $\hat H$, which acts as a regressor from a short history of previous observations to future outputs. The encoder-decoder model is used to reduce the size of the state $z_t$ to $x_t$. We train each network by minimizing the prediction error as well as the reconstruction error. \vspace{-5mm}} \label{fig:model} \end{figure} \noindent A starting point of our identification method for system \eqref{eq_general_form} is to solve a nonlinear regression problem. To formulate this problem, note by Proposition \ref{prop:error-bound} that the true state of the system can be written as $x_t=\hat{x}_t+\delta_t$ with $\left\|\delta_t\right\|\leq \gamma_f^\ell \alpha_\ell^{-1}\left\|\bar{w}_{t-\ell|t-1} \right\|$. Consider now plugging the state estimate \eqref{eq:xhat} into the output equation of \eqref{eq_general_form}, which gives $$ \begin{aligned} y_t&= h^\circ(\hat{x}_t+\delta_t,u_t)+w_t\\ & = h^\circ(\hat{x}_t,u_t)+\xi_t, \end{aligned} $$ with $\xi_t$ being an error component entirely due to the noise. It is indeed equal to zero whenever $w\equiv 0$. 
It can be shown that $\xi_t$ can be written as $\xi_t=w_t+\tilde{\delta}_t$ with $\|\tilde{\delta}_t\|\leq \gamma_h\gamma_f^{\ell} \alpha_\ell^{-1}\left\|\bar{w}_{t-\ell|t-1} \right\|$, where $\gamma_h$ is the Lipschitz constant of the measurement function $h^\circ$ of system \eqref{eq_general_form}. Since $\hat{x}_t$ is a function of $z_t$ we end up with \begin{equation}\label{eq:regression} y_t = H^\circ(z_t,u_t)+\xi_t \end{equation} for some nonlinear function $H^\circ$. \begin{remark} In the absence of noise, the exact expression of $ H^\circ$ is $ H^\circ(z_t,u_t) =h^\circ\Big( F_\ell\big(\mathcal{O}_\ell^*(z_t),\eta(z_t)\big),u_t\Big)$ with $\eta(z_t)=\bar{u}_{t-\ell|t-1}$. \end{remark} The first step of the identification method is to construct a high dimensional state-space representation whose state is the vector $z_t$ defined in \eqref{eq:zt}. More precisely, consider \begin{equation}\label{eq:state-space-Z} \left\{\begin{aligned} &z_{t+1}= \bar{A}z_t+\bar{B} u_t+\bar{S} H^\circ(z_t,u_t)+\bar{S} \xi_t\\ & y_t = H^\circ(z_t,u_t)+\xi_t, \end{aligned} \right. \end{equation} where $\bar{A} = A\otimes I_{n_u+n_y}$, $\bar{B} = e_{n_x-1}\otimes I_{n_u}$, $\bar{S} = e_{n_x}\otimes I_{n_y}$, $e_i\in \Re^{n_x}$ being the canonical basis vector which has $1$ in its $i$-th entry and zero everywhere else, $\otimes$ referring to the Kronecker product and $A\in \Re^{n_x\times n_x}$ given by the canonical form $$ A = \begin{pmatrix} 0 & 1 & 0& \cdots & 0\\ \vdots & \ddots &\ddots &\ddots& \vdots \\ 0 & \cdots &\ddots &1 & 0\\ 0& \cdots & \cdots & 0& 1\\ 0&\cdots &\cdots & 0 &0 \end{pmatrix}. $$ From \eqref{eq:regression} it can be seen that \eqref{eq:state-space-Z} constitutes a state-space representation for system \eqref{eq_general_form} since both models have the same input-output behavior for $t\geq \ell$. 
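The block-shift action of $\bar{A}=A\otimes I_{n_u+n_y}$ in \eqref{eq:state-space-Z} can be checked numerically. A minimal sketch with illustrative sizes $\ell=3$, $n_y=n_u=1$ (the ordering of $y$ and $u$ inside each block, and the toy values, are assumptions of this sketch):

```python
import numpy as np

# Illustrative sizes (assumptions of this sketch).
ell, n_y, n_u = 3, 1, 1
blk = n_y + n_u

# A: the ell x ell canonical shift matrix with ones on the superdiagonal.
A = np.diag(np.ones(ell - 1), k=1)
Abar = np.kron(A, np.eye(blk))          # Abar = A (x) I_{n_u + n_y}

# z stacks ell (y, u) blocks, oldest first: here [1,2 | 3,4 | 5,6].
z = np.arange(1.0, ell * blk + 1)
shifted = Abar @ z                      # drops the oldest block, zero-pads
print(shifted)                          # -> [3. 4. 5. 6. 0. 0.]

# The new pair (y_t, u_t) is then written into the freed last block,
# which is the role played by the selector matrices Bbar and Sbar.
y_t, u_t = 7.0, 8.0
z_next = shifted.copy()
z_next[-blk:] = [y_t, u_t]
```

This confirms that iterating \eqref{eq:state-space-Z} simply slides the window of past input-output pairs forward by one step.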
Given a finite set of input-output data points $\left\{(u_t,y_t)\right\}_{t=1}^{T+\ell}$, an estimate of the function $ H^\circ$ can be obtained in a certain nonlinear model class $\mathcal{H}$ as $$\hat{H}\in \operatornamewithlimits{arg\,min}_{H\in \mathcal{H}}{J(H)}, $$ where $J(H)$ is a regression loss given as \begin{align} J(H) = \frac{1}{T} \sum_{t=\ell+1}^{T+\ell} \alpha_t\| y_{t} - H(\hat{z}_{t}, u_{t})\|^2 \\ \text{ s.t. } \hat{z}_{t+1} = \bar{A}\hat{z}_t + \bar{B}u_t + \bar{S}H(\hat{z}_t, u_t),\; \hat{z}_{\ell+1}=z_{\ell+1}, \end{align} \noindent where $\alpha_t$ is a weighting coefficient set to $\alpha_t=1$ for all $t$ except $\alpha_{\ell+1}=10$. Regression starts after a burn-in phase of $\ell$ steps (i.e., the window size), which is needed to construct a full state representation. The choice of the model class $\mathcal{H}$ is fundamental for several reasons: it must be sufficiently large to contain a good approximation of $H^\circ$, and it should not be too large, in order to ensure learnability and generalization to unseen conditions \cite{BookShalev2004}. In other words, this class needs to be complex enough to capture the behavior of the unknown nonlinear function $H^\circ$ while remaining identifiable from a rather limited set of measurements. We rely on the well-documented approximation power of deep neural networks and propose to select $\mathcal{H}$ to be the class of multi-layer perceptrons (MLP) of limited capacity. In particular, the number of hidden layers and the number of neurons per layer are treated as hyper-parameters optimized over a validation set independent of the training and evaluation splits, as classically done in machine learning. \subsection{Model reduction} \noindent The last step of the proposed method consists in model reduction. In effect, the model described in \eqref{eq:state-space-Z}, although structurally simple, may suffer from a high-dimensional state vector $z_t$. 
This may be a concern for some applications. We therefore propose a second deep learning structure allowing nonlinear state-space model reduction by encoding the state variable $z_t$ into a low-dimensional state variable $\bar{x}_t\in \Re^{\bar{n}}$ for some user-defined dimension $\bar{n}\in \mathbb{N}$. Formally, we train an auto-encoder $(\mathcal{E}, \mathcal{D})$ such that $\bar{x}_t = \mathcal{E}(z_t)$ and $z_t \approx \mathcal{D}(\bar{x}_t)$. By applying these maps to Eq. \eqref{eq:state-space-Z} and neglecting the noise terms, we get an approximate representation of the initial system \eqref{eq_general_form} as follows: \begin{equation}\label{eq:Final-State-Space-Estimate} \left\{\begin{array}{l} \bar{x}_{t+1} = \mathcal{E}\Big(\bar A\mathcal{D}(\bar{x}_t) + \bar Bu_t + \bar S \hat{H}(\mathcal{D}(\bar{x}_t), u_t)\Big) \\ \bar{y}_t = \hat{H}(\mathcal{D}(\bar{x}_t), u_t). \end{array}\right. \end{equation} The state-space equation is summarized in Fig. \ref{fig:state_space_equation}. The parameters of the encoder $\mathcal{E}$ and decoder $\mathcal{D}$ are trained with the following reconstruction loss on data samples $\{z_t\}$ collected from the training set (see also Fig. \ref{fig:learning_models}): \begin{equation} \mathcal{(E,D)} = \arg \min_{\mathcal{E',D'}} \sum_t \left \| z_t - \mathcal{D'}(\mathcal{E'}(z_t)) \right \|^2 \end{equation} Note that the dimension $\bar{n}$ of the compressed state $\bar{x}_t$ in the estimated model \eqref{eq:Final-State-Space-Estimate} is potentially different from the true state dimension $n_x$. Another observation is that by going from \eqref{eq:state-space-Z} to \eqref{eq:Final-State-Space-Estimate}, one reduces the dimension of the state vector, but at the cost of introducing some structural complexity. Hence the computational price associated with simulating a model such as \eqref{eq:Final-State-Space-Estimate} may still be high, depending on the complexity of the auto-encoder $(\mathcal{E}, \mathcal{D})$. 
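As a minimal illustration of the reduction step, a linear auto-encoder (equivalent to PCA) already captures the idea when the window states are redundant; the dimensions and data below are illustrative assumptions, whereas the method above trains nonlinear networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Window states z_t in R^12 that actually live on a 3-dimensional
# subspace (a stand-in for the redundancy of overlapping windows).
latent = rng.standard_normal((500, 3))
Z = latent @ rng.standard_normal((3, 12))

# Linear auto-encoder via SVD: the encoder E projects onto the top
# principal directions, the decoder D lifts back.
mean = Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Z - mean, full_matrices=False)
V3 = Vt[:3]
E = lambda z: (z - mean) @ V3.T      # z -> xbar in R^3
D = lambda x: x @ V3 + mean          # xbar -> z

err = np.mean((Z - D(E(Z))) ** 2)
print(err)   # ~0: the 12-dim windows compress to 3 dims with no loss here
```

In practice the redundancy is only approximate, which is why a learned nonlinear $(\mathcal{E},\mathcal{D})$ pair and a tunable $\bar{n}$ are used instead.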
From a formal point of view, this reduced state has several advantages. The constructed high-dimensional state $z_t$ no longer appears in the update equation \eqref{eq:Final-State-Space-Estimate}. We also show experimentally (see Section \ref{sec:experiments}) that this approach can discover state representations of smaller size, while remaining generic in nature and applicable to a broad class of problems. \medskip \begin{comment} \paragraph{Analysis} Let $\mathcal{F}$ denote the set of all functions defined from $\mathcal{Y}^{\ell}\times \mathcal{U}^{\ell}\times \mathcal{U}$ to $\mathcal{Y}$. Then our model class $\mathcal{H}$ is a subset of $\mathcal{F}$. Let $H\in \mathcal{H}$ be a hypothesis and $H^\circ\in \mathcal{F}$ be the output function appearing in \eqref{eq:regression}. Given a sample $S=\big((z_{\ell+1},u_{\ell+1}),\cdots,(z_{\ell+T},u_{\ell+T})\big)$ of data samples generated by the system \eqref{eq_general_form}, define the vectors $$ \begin{aligned} &\mathbf{H}_S=\begin{pmatrix}H(z_{\ell+1},u_{\ell+1})^\top & \cdots &H(z_{\ell+T},u_{\ell+T})^\top\end{pmatrix}^\top \\ &\mathbf{H}_S^\circ=\begin{pmatrix}H^\circ(z_{\ell+1},u_{\ell+1})^\top & \cdots &H^\circ(z_{\ell+T},u_{\ell+T})^\top\end{pmatrix}^\top \end{aligned} $$ Consider now a bounded vector $\mathbf{e}\in \Re^{Tn_y}$ and define the function $\mathscr{R}_{\mathcal{H},H^\circ,S}$ by \begin{equation}\label{eq:R} \begin{aligned} \mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{e})= &\sup_{H\in \mathcal{H}} \dfrac{1}{T}\mathbf{e}^\top (\mathbf{H}_S-\mathbf{H}_S^\circ)\\ &\mbox{\: \: subject to \quad }\|\mathbf{H}_S-\mathbf{H}_S^\circ\|_2\leq 2\|\mathbf{e}\|_2 \end{aligned} \end{equation} The supremum is well-defined here because the vector $\mathbf{e}$ is assumed to be bounded. $\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{e})$ measures empirically the impact of the error $\mathbf{e}$ relatively to the data passed through hypothesis space $\mathcal{H}$. 
It measures roughly how much the function class $\mathcal{H}$ correlates with $\mathbf{e}$. Indeed, for a given $S$, the richer the class $\mathcal{H}$ the larger $\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{e})$ tends to be. The maximum value of $\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{e})$ is $2/T\|\mathbf{e}\|_2^2$. \\ \textcolor{red}{... The quantity $\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{e})$ above may have an interpretation which is close to Rademacher's measure of complexity ...}\\ Also, consider the function $V_S:\mathcal{F}\rightarrow \Re_+$ given by \begin{equation}\label{eq:VS} V_S(H)= \dfrac{1}{T}\sum_{t=\ell+1}^{\ell+T} \|H(z_t,u_t)\|_2^2=\dfrac{1}{T}\|\mathbf{H}_S\|_2^2 \end{equation} \begin{definition}[Data informativity]\label{def:informativity} $\: $\\ A data sample $S=\big((z_{\ell+1},u_{\ell+1}),\cdots,(z_{\ell+T},u_{\ell+T})\big)$ generated by the system \eqref{eq_general_form} is called sufficiently informative with respect to the model class $\mathcal{H}$ (assumed to be a normed space of finite dimension) if the function $V_S$ defined in \eqref{eq:VS} is positive definite on $\mathcal{H}$, i.e., for all $H\in\mathcal{H}$, $V_S(H)=0$ if and only if $H=0$. \end{definition} \begin{lem}\label{lem:lower-bound} Assume that the hypothesis space $\mathcal{H}$ is a normed functional space of finite dimension with norm denoted by $\|\cdot\|_{\mathcal{H}}$. If the data $S$ is sufficiently informative with respect to $\mathcal{H}$ in the sense of Definition \ref{def:informativity}, then \begin{equation}\label{eq:Positive-definiteness} d_1\|H\|_{\mathcal{H}}^2\leq V_S(H)\leq d_2\|H\|_{\mathcal{H}}^2 \quad \forall H\in \mathcal{H} \end{equation} for strictly positive constants $d_1$ and $d_2$ defined by $d_1=\min_{H\in \mathcal{H}, \|H\|_{\mathcal{H}}=1}V_S(H)$ and $d_2=\max_{H\in \mathcal{H}, \|H\|_{\mathcal{H}}=1}V_S(H)$. \end{lem} \begin{proof} If $H=0$, then \eqref{eq:Positive-definiteness} is trivially true. 
For all nonzero $H\in \mathcal{H}$, we observe that \begin{equation}\label{eq:VS-disk} V_S(H)= \|H\|_{\mathcal{H}}^2V_S\left(\dfrac{H}{\|H\|_{\mathcal{H}}}\right). \end{equation} Moreover, $\frac{H}{\|H\|_{\mathcal{H}}}$ lives in the compact set $\mathcal{S}=\left\{H\in \mathcal{H}:\|H\|_{\mathcal{H}}=1\right\}$ and $V_S$ is a continuous function. By the extreme value theorem [REF], $V_S$ admits minimum and maximum values on $\mathcal{S}$. Hence, with $d_1$ and $d_2$ defined as in the statement of the lemma, we have $$d_1\leq V_S\left(\dfrac{H}{\|H\|_{\mathcal{H}}}\right)\leq d_2. $$ Note that $d_1$ is strictly positive here as consequence of $S$ being sufficiently informative. Hence, \eqref{eq:Positive-definiteness} follows by invoking \eqref{eq:VS-disk}. \end{proof} Now consider a norm $\|\cdot\|$ on $\mathcal{F}$ whose restriction to $\mathcal{H}$ coincides with $\|\cdot\|_{\mathcal{H}}$. \begin{prop} Assume that $\mathcal{H}$ is a normed functional space of finite dimension. Consider the projection of $H^\circ$ onto $\mathcal{H}$ given by\footnote{\textcolor{red}{We implicitly assume the existence of such a projection.}} $$\pi_{\mathcal{H}}(H^\circ)\in \operatornamewithlimits{arg\,min}_{H\in \mathcal{H}}\left\|H^\circ-H\right\| $$ If the sample $S$ is sufficiently informative with respect to $\mathcal{H}$, then for all $\hat{H}\in \operatornamewithlimits{arg\,min}_{H\in \mathcal{H}}{J(H)}$, \\ \begin{equation}\label{eq:Functional-Error-Bound} \begin{aligned} \big\|\hat{H}-H^\circ\big\| &\leq \inf_{H\in \mathcal{H}}\left\|H^\circ-H\right\|+\cdots\\ &+\left(\dfrac{2}{d_1}\right)^{1/2}\Big[V_S\big(H^\circ-\pi_{\mathcal{H}}(H^\circ)\big)+ 2\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{\xi})\Big]^{1/2} \end{aligned} \end{equation} with $\mathbf{\xi}=\begin{pmatrix}\xi_{\ell+1}^\top & \cdots &\xi_{\ell+T}^\top\end{pmatrix}^\top $ (see Eq. 
\eqref{eq:regression}) and $\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{\xi})$ and $d_1$ defined as in \eqref{eq:R} and \eqref{eq:Positive-definiteness} respectively. \end{prop} \begin{proof} By the triangle inequality property of norms, we have \begin{equation}\label{eq:Ineq1} \begin{aligned} \big\|\hat{H}-H^\circ\big\|&\leq \left\|H^\circ-\pi_{\mathcal{H}}(H^\circ)\right\|+\big\|\hat{H}-\pi_{\mathcal{H}}(H^\circ)\big\|\\ &=\inf_{H\in \mathcal{H}}\left\|H^\circ-H\right\|+\big\|\hat{H}-\pi_{\mathcal{H}}(H^\circ)\big\|_\mathcal{H} \end{aligned} \end{equation} Since $S$ is sufficiently informative and $\hat{H}-\pi_{\mathcal{H}}(H^\circ)$ lies\footnote{\textcolor{red}{....this may not be true if $\mathcal{H}$ is a set of neural networks...}} in $\mathcal{H}$, it follows from Lemma \ref{lem:lower-bound} that $$ d_1\big\|\hat{H}-\pi_{\mathcal{H}}(H^\circ)\big\|_\mathcal{H}^2\leq V_S\big(\hat{H}-\pi_{\mathcal{H}}(H^\circ)\big). $$ As a consequence, we have \begin{equation}\label{eq:Ineq2} \big\|\hat{H}-\pi_{\mathcal{H}}(H^\circ)\big\|_\mathcal{H}\leq \dfrac{1}{\sqrt{d_1}}V_S^{1/2}\big(\hat{H}-\pi_{\mathcal{H}}(H^\circ)\big) \end{equation} with $V_S^{1/2}$ referring to the square-root of the function $V_S$. The rest of the proof consists in finding an upper-bound for $V_S\big(\hat{H}-\pi_{\mathcal{H}}(H^\circ)\big)$. 
For this purpose, we exploit the fact that $\hat{H}\in \operatornamewithlimits{arg\,min}_{H\in \mathcal{H}}{J(H)}$, which implies that $J(\hat{H})\leq J(H^\circ)$, i.e., $$\sum_{t=\ell+1}^{T+\ell} \left\|y_t - \hat{H}(z_t,u_t)\right\|_2^2\leq \sum_{t=\ell+1}^{T+\ell} \left\|y_t - H^\circ(z_t,u_t)\right\|_2^2$$ By using \eqref{eq:regression}, this takes the form $$\sum_{t=\ell+1}^{T+\ell} \left\|H^\circ(z_t,u_t)- \hat{H}(z_t,u_t)+\xi_t\right\|_2^2\leq \sum_{t=\ell+1}^{T+\ell} \left\|\xi_t\right\|_2^2.$$ Developing the term on the left hand side yields $$ \begin{aligned} V_S(\hat{H}-H^\circ)\leq \dfrac{2}{T}\sum_{t=\ell+1}^{T+\ell} \xi_t^\top \big(\hat{H}(z_t,u_t)-H^\circ(z_t,u_t)\big). \end{aligned} $$ From this last equation one can see that $$V_S(\hat{H}-H^\circ)\leq 2\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{\xi}). $$ On the other hand, let us observe that $$V_S(\hat{H}-H^\circ)\geq \dfrac{1}{2} V_S(\hat{H}-\pi_{\mathcal{H}}(H^\circ))- V_S(H^\circ-\pi_{\mathcal{H}}(H^\circ)).$$ Combining with the previous equation gives $$ V_S(\hat{H}-\pi_{\mathcal{H}}(H^\circ))\leq 2V_S(H^\circ-\pi_{\mathcal{H}}(H^\circ))+ 4\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{\xi})$$ Finally, by combining this last inequality with \eqref{eq:Ineq2} and then with \eqref{eq:Ineq1}, we obtain the desired result. \end{proof} In case the true function $H^\circ$ lies in the hypothesis space $\mathcal{H}$, the error bound in \eqref{eq:Functional-Error-Bound} reduces to $\Big[ \dfrac{4}{d_1}\mathscr{R}_{\mathcal{H},H^\circ,S}(\mathbf{\xi})\Big]^{1/2}$, a quantity which depends solely on the noise. 
On the other hand, in the absence of noise the bound reflects only the structural approximation error $$\inf_{H\in \mathcal{H}}\left\|H^\circ-H\right\|+ \Big[\dfrac{2}{d_1}V_S\big(H^\circ-\pi_{\mathcal{H}}(H^\circ)\big)\Big]^{1/2} $$ \end{comment} \section{Experimental results} \label{sec:experiments} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/rollout_demo.pdf} \caption{Visual example of the output prediction made by $\hat{H}$ (\textbf{Ours (MLP)}) for different values of $\ell$ on the three datasets. \vspace{-5mm} } \label{fig:rollout} \end{figure*} \noindent We illustrate and evaluate the proposed nonlinear dynamical model identification approach on the estimation and prediction of the state of a system with unknown dynamics. To demonstrate the practical feasibility of our model, we propose to study its behavior in three different scenarios. First, we demonstrate the capabilities of our learned regressor $\hat{H}$ for output prediction on simulated systems.\footnote{Code and datasets will be made publicly available upon acceptance.} We also study the influence of key parameters, namely the length $\ell$ of the time window, and the impact of the state reduction. 
\subsection{Dynamical systems and benchmarks} \noindent We use two simulated and one real system to validate our contributions. \begin{description}[leftmargin=0.4cm] \item[Tank] --- we test the proposed method on the cascaded tank system introduced in \cite{schoukens2017three}. This system relates the water levels in two connected tanks without consideration of overflow. It has the form \eqref{eq_general_form} of a discrete-time state-space model with $f^\circ$ and $h^\circ$ implicitly instantiated as follows \begin{equation} \left\{\begin{array}{l} x_{1,{t+1}} = x_{1,t} - k_1\sqrt{x_{1,t} } + k_2u_t \\ x_{2,t+1} = x_{2,t} +k_3\sqrt{x_{1,t} }- k_4\sqrt{x_{2,t} } \\ y_t = x_{2,t}+w_t, \end{array}\right. \end{equation} with $x_t=\begin{pmatrix} x_{1,t} & x_{2,t}\end{pmatrix}^\top\in \Re^2$ being the state and $k_i$, $i=1,\ldots,4$, known parameters. \item[2D Drone] --- we introduce a model of a 2-dimensional drone, i.e., an unmanned aerial vehicle which moves in a 2D plane. The drone is equipped with two propellers and its dynamics are modeled by: \begin{equation} \left\{\begin{array}{l} \ddot p_x = -\frac{k_T}{m}(\Omega_1^2+\Omega_2^2)\sin(\theta) - \frac{\gamma}{m}(\Omega_1+\Omega_2)\dot p_x \\ \ddot p_z = \frac{k_T}{m}(\Omega_1^2 + \Omega_2^2)\cos(\theta) - \frac{\gamma}{m}(\Omega_1+\Omega_2)\dot p_z - g \\ \ddot \theta = \frac{k_TL}{J}(\Omega_2^2 - \Omega_1^2) \\ y = (p_x\quad p_z \quad \theta)^T, \end{array}\right. \end{equation} where $(p_x, p_z)$ is the position, $k_T$ the thrust constant, $\Omega_i$ the rotational speed of the $i^{th}$ propeller, $L$ the length of the UAV, $m$ its mass, $J$ its inertia and $\gamma$ its friction coefficient. The main interest of such a system is its naturally unstable dynamics, which complicate the identification process. The system has been discretized. 
\item[3D Drone] --- We also evaluate on recordings of the Blackbird UAV flight dataset \cite{antonini_blackbird_2018}, which consists of 10 hours of aggressive quadrotor flights, measured with an accurate motion capture device. We use this real-world data to demonstrate that our observer discovers a state representation containing the same information as the physical state, without any supervision. We processed the raw data gathered from the on-board inertial measurement unit (IMU) and propeller rotation speeds as observation and command signals. The regressor is trained to simulate the IMU measurements, i.e., the acceleration and angular speed of the drone expressed in the local frame. \end{description} \noindent Noise has been added to the observations for the two simulated systems, Tank and 2D Drone. More details about the dataset generation are provided in the appendix. 
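For concreteness, the cascaded-tanks model above is cheap to simulate. A hedged sketch, in which the gains $k_i$, the input signal and the clipping of levels at zero are illustrative assumptions (the benchmark of \cite{schoukens2017three} fixes its own constants):

```python
import numpy as np

def simulate_tanks(us, k=(0.5, 0.4, 0.2, 0.3), noise_std=0.0, seed=0):
    """Discrete cascaded-tanks model; k and noise level are illustrative."""
    rng = np.random.default_rng(seed)
    k1, k2, k3, k4 = k
    x1 = x2 = 0.0
    ys = []
    for u in us:
        # Simultaneous update: both right-hand sides use the previous x1, x2.
        x1, x2 = (max(x1 - k1 * np.sqrt(x1) + k2 * u, 0.0),
                  max(x2 + k3 * np.sqrt(x1) - k4 * np.sqrt(x2), 0.0))
        ys.append(x2 + noise_std * rng.standard_normal())
    return np.array(ys)

y = simulate_tanks(np.ones(200))   # constant inflow
print(y[-1])   # settles near the equilibrium (~0.284 for these gains)
```

With constant input the levels converge to the point where inflow balances outflow in each tank, i.e., $k_1\sqrt{x_1^*}=k_2 u$ and $k_4\sqrt{x_2^*}=k_3\sqrt{x_1^*}$.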
In particular, the model has access to the same window of input/output pairs $[y_{t+k}, u_{t+k}]_{k=1..\ell}$ during the initial burn-in phase. However, these values do not explicitly make up the state, as in our model. This data-driven baseline is sufficiently general to be able to learn the same state representation in theory, but there is no guarantee that training will lead to this solution. We also experiment with the model introduced in \cite{MASTI2021} which consists of an auto-encoder with a learned latent dynamics that operates on the reduced state representation. This model has been evaluated on the same tank system, yet, with a different data collection technique. Train and test trajectory in the Tank dataset as proposed in \cite{MASTI2021} are generated from PRBS-like signals, which is a classical approach for system identification. Our version of Tank dataset is much more challenging: observations are collected from closed-loop simulations with targets generated in a procedural manner and PID control. In our dataset, we took care to explore a wide range of possible states with sparse measurements in the train set to prevent over-fitting on a specific command design. \subsection{Extension: a hybrid state-space model} \noindent We also introduce an extension of our model, which combines the advantages of both methodologies. It uses our proposed state representation $z_t$, but implements the mapping $H^\circ$ by a GRU in place of the MLP proposed in section \ref{sec:modelingandlearning}. Formally, the GRU updates a zero-initialized hidden vector using the previous observations and command. This vector is then decoded by a MLP to the desired observation. Equation \eqref{eq:state-space-Z} is then used for forward prediction. 
We refer to this model as \textit{Ours (GRU)}, it is given as follows: \begin{equation} \begin{array}{ll} z_{t+1} &= \bar A z_t + \bar B u_t + \bar S \ \text{MLP}(h_t) \\ h_{i+1} &= \text{GRU}([y_i;u_i], h_i), \quad h_{t-l} = 0 \\ y_t &= \text{MLP}(h_t). \end{array} \end{equation} \subsection{Output prediction and parameter analysis} \begin{table*}[t] \footnotesize \begin{tabular}{@{}c|cccc|cccc|cccc@{}} \toprule \multirow{2}{*}{\shortstack{Window\\ size}} & \multicolumn{4}{c|}{Tank ($\times 10^{-4}$)} & \multicolumn{4}{c|}{2D Drone ($\times 10^{-2}$)} & \multicolumn{4}{c}{3D Drone ($\times 10^{-2}$)} \\ \cmidrule(l){2-13} $\ell$ & \shortstack{Classic \\GRU$^\dagger$} & \shortstack{Masti \\et al.\cite{MASTI2021}} & \textbf{\shortstack{Ours \\(GRU)}} & \textbf{\shortstack{Ours \\(MLP)}} & \shortstack{Classic \\GRU$^\dagger$} & \shortstack{Masti \\et al.\cite{MASTI2021}} &\textbf{\shortstack{Ours \\(GRU)}} & \textbf{\shortstack{Ours \\(MLP)}} & \shortstack{Classic \\GRU$^\dagger$} & \shortstack{Masti \\et al.\cite{MASTI2021}} & \textbf{\shortstack{Ours \\(GRU)}} & \textbf{\shortstack{Ours \\(MLP)}}\\ \midrule 5 & 163 & 1030 & 138 & \textbf{7.14} & 62.8 & 60.5 & 106 & \textbf{31.4} & 24.8 & 14.6 & \textbf{6.44} & 15.2 \\ 10 & 41.7 & 1070 & 5.60 & \textbf{0.930} & 82.7 & 58.6 & 68.9 & \textbf{9.95} & 23.7 & 14.5 & \textbf{5.32} & 14.2 \\ 15 & 4.57 & 957 & 3.06 & \textbf{0.960} & 61.9 & 58.2 & 35.2 & \textbf{7.52} & 23.4 & 13.0 & \textbf{5.07} & 13.6 \\ 20 & 4.04 & 914 & 1.07 & \textbf{0.761} & 78.4 & 55.3 & 19.3 & \textbf{8.06} & 22.7 & 12.6 & \textbf{4.68} & 13.5 \\ 25 & 0.600 & 915 & \textbf{0.481} & 0.606 & 80.3 & 53.6 & 23.0 & \textbf{5.17} & 21.3 & 12.1 & \textbf{4.83} & 13.6 \\ 30 & 1.73 & 917 & \textbf{0.193} & 0.448 & 104 & 51.3 & 25.0 & \textbf{3.13} & 19.2 & 12.6 & \textbf{4.61} & 13.1 \\ \midrule \multicolumn{13}{l}{$^\dagger$ {\footnotesize The size of the hidden state of each GRU model is adapted to the window size s.t. 
fits the size of the equivalent regressor model.}}\\ \bottomrule \end{tabular} \caption{\label{tab:regression_rollout}Quantitative evaluation: we report MSE error over 100-step rollouts by the learned regression model and compare with baselines, for different windows sizes $\ell$. Our model consistently outperforms all baselines.\vspace{-5mm}} \end{table*} \textbf{Output forecasting} -- the identified dynamic model can be evaluated by performing open-loop forward prediction from initial conditions and the set of inputs applied to the real system. The model then forecasts outputs, which may be compared to actual ground truth measurements. We assessed the first stage of our method using this task, i.e. the resolution of the regression problem. Table \ref{tab:regression_rollout} reports the mean squared error on 100 step roll-out predictions for each baseline and different window sizes $\ell \in \{5, 10, 15, 20, 25, 30\}$. Our method shows excellent prediction error even for low window sizes, and consistently outperforms the closest competing method from the literature, Masti et al. \cite{MASTI2021}, by a large margin. We conjecture two key arguments to justify this difference : (1) the structure proposed by \cite{MASTI2021} suffers from complex interaction between the auto-encoder and the latent dynamics that penalizes learning, and (2) the model design process over-fitted on the simpler dataset used in the original paper. \textbf{Machine learning baseline} -- is competitive with our contribution. However, its structure forces to observe only one couple $(y_t, u_t)$ at a time. Relevant information needs to be stored in its memory, the vectorial hidden state, and this storage process is fully learned by gradient descent, a difficult process. In principle, these models can learn a state representation which is similar or even identical to our designed state-map, but there is no guarantee that this representation emerges. 
Our state map model can therefore be seen as a form of useful inductive bias for recurrent neural models. For moderate window sizes, our model benefits from the immediate availability of all the components of $z_t$ in its state. For very large window sizes or complex dynamical systems (such as 3D Drone), the GRU extension (\textit{Ours (GRU))} outperforms the MLP regressor. In this situation, the GRU takes advantage of its incrementally updated memory, and manages to manipulate the large dimension of $z_t$ by processing it piecewise, whereas the MLP must manipulate the entire vector. Figure \ref{fig:rollout} shows samples of predicted trajectory using the MLP regressor approach for each dataset. \subsection{Model reduction} \noindent The reduction step is performed downstream of the regression model training. Nevertheless, the difficulty of the reduction task is directly related to the initial size of the state representation $z_t$, that is, to the size of the window $\ell$. In order to accurately evaluate the compression capabilities of our approach, we trained several auto-encoders for each value of $\ell \in \{5, 10, 15, 20, 25, 30\}$ corresponding to different rates of compression increasing by steps of 15\%. Figure \ref{fig:autoencoder_heatmap} shows the compression capabilities of our encoder-decoder structure for different window sizes $\ell$. The results are consistent on the three datasets. The compression rate is more sensitive on small input dimensions, and conversely, a larger dimension can be reduced extensively with negligible loss of accuracy. Indeed, increasing the number of inputs arguably leads to an increase in the redundancies exploitable by the encoder to reduce the dimension of the state space and reconstruct it with limited deviation with respect to the initial vector. Yet such reduction introduces noise to the state representation that the regressor will have to cope with. 
We thus evaluate the impact of state-space reduction on the output forecasting capabilities of our model, and summarize the results in figure \ref{fig:state_compression}. Our reduction method manages to reduce the dimension of the state in a consistent way up to 60\% for the two datasets in simulation without sensible variation of the prediction error. The error bars reflect the double dependence of our approach both on the performance of the regression model $H^\circ$ but also on the quality of the encoding-decoding. We compare favorably to the baseline in \cite{MASTI2021}. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/state_compression.pdf} \caption{We studied the impact of state compression for multiple configurations of final latent state dimension and temporal window and aggregate the results by this two paramaters on the synthetic datasets. Specifically, we measure the MSE on observation prediction error for 100 step in the future (\textit{Ours (MLP)}). \vspace{-3mm}} \label{fig:state_compression} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/autoencoding_heatmap.pdf} \caption{Heatmap of MSE for the encoder-decoder structure depending on both the window size (which relates to the initial state dimension $\text{dim }z$) and the compression rate.\vspace{-5mm}} \label{fig:autoencoder_heatmap} \end{figure} \section{Conclusion} \noindent In this work, we take advantage of the power of high-capacity deep neural networks to design a new methodology for estimating nonlinear dynamical systems from a set of input/output data pairs. We show that the state can be expressed as a state map computed as a function of past inputs and outputs. We learn a mapping from this representation to model outputs from training data using deep networks and show that this approach is competitive. 
We tackled the problem of reducing the state space, showing that this way a state of similar size than the original problem can be obtained through machine learning with an auto-encoding solution. The proposed approach can be used to reduce the order of a given nonlinear model, such as infinite-dimensional discrete systems. The methodology was validated using three numerical examples and using a data-set from real experiments from the literature.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \IEEEPARstart{A}{nomaly} detection, with broad application in network intrusion detection, credit card fraud detection, sensor network fault detection and numerous other fields~\cite{chandola2009anomaly}, has received significant attention among the machine learning community. With the recent advances in deep neural networks, there is a heated topic on anomaly detection in multimedia, \emph{e.g.},~medical diagnosis, defect detection and intrusion detection. In this paper, we focus on anomaly detection of still images. Anomaly detection is a technique used to identify unusual patterns that do not conform to expected behavior. Considering the scarcity and diversity of anomalous data, anomaly detection is usually modeled as an unsupervised learning or one-class classification problem~\cite{ruff2018deep}, \emph{i.e.},~the training dataset contains only ``normal'' data and the anomalous data is not available during training. \begin{figure}[htbp] \centering \includegraphics[width=8.3cm]{img/ARNet_TMM.pdf} \caption{During training phases, to restore the original image, the ARNet is forced to learn high-level feature embeddings related to the erased attributes. During testing phases, restoration with the wrong attribute will enlarge the restoration loss (\emph{i.e.},~the car is restored with wrong color and orientation). } \label{img:model} \end{figure} Reconstruction-based methods~\cite{schlegl2017unsupervised, Akcay2018,Sabokrou2018Adversarially} have recently shown great promise for anomaly detection. Autoencoder~\cite{masci2011stacked} is adopted by most reconstruction-based methods which assume that normal and anomalous samples could lead to significantly different embeddings and thus differences in the corresponding reconstruction errors can be leveraged to differentiate the two types of samples~\cite{Sakurada2014Anomaly}. However, this assumption may fail for datasets with more complex texture and structure information like ImageNet. 
The MSE loss has been shown to force autoencoders to focus on reducing low-level pixel-wise errors that are not sensitive to human perception, rather than learning high-level semantic features~\cite{SimilarityMetricAutoencoding,dosovitskiy2016generating}. As data complexity grows, the extracted low-level features are more likely to be shared between normal and anomalous data, leading to mixed feature embeddings. Under this situation, both normal and anomalous data could be reconstructed properly~\cite{gong2019memorizing,zong2018deep}. To tackle this problem, various attempts have been made to introduce loss functions more effective than the pixel-wise MSE loss. Adversarial training is introduced by adding a discriminator after the autoencoder to judge whether an image is original or reconstructed~\cite{Sabokrou2018Adversarially,deecke2018image}. Akcay \emph{et al.}~\cite{Akcay2018} add an extra encoder after the autoencoder and leverage an extra MSE loss between the two different embeddings. Despite much progress made along this line, the improvement remains limited, especially for complex datasets. Recent works~\cite{wang2019effective,bergmann2019uninformed} indicate that reconstruction-based methods fail to extract high-level features effectively. We attribute this problem to ``information equivalence'', which refers to the equivalence between the input data and the supervision in reconstruction-based methods. Reconstruction-based methods typically adopt the original data as input and supervision simultaneously and force the network to output the same as the input under the MSE loss. However, this framework is likely to just compress the image content without learning a semantically meaningful representation~\cite{pathak2016context}. To achieve effective supervision and learn high-level feature embeddings, we start by breaking the information equivalence.
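The reconstruction-error criterion that these methods share can be made concrete with a small sketch. The toy linear ``autoencoder'' below (a principal-direction projector; the data, model, and all names are our own illustration, not from any cited method) reconstructs on-manifold normal data well and off-manifold anomalies poorly, so the per-sample MSE serves directly as an anomaly score:

```python
import numpy as np

def reconstruction_scores(model, batch):
    """Per-sample MSE reconstruction error, used as the anomaly score."""
    recon = model(batch)
    return ((recon - batch) ** 2).reshape(len(batch), -1).mean(axis=1)

# Toy stand-in "autoencoder": projects onto the dominant principal
# direction of the normal data, so off-manifold samples reconstruct badly.
rng = np.random.default_rng(0)
normal = rng.normal(size=(100, 8)) * np.array([3.0] + [0.1] * 7)
u = np.linalg.svd(normal, full_matrices=False)[2][0]   # dominant direction
model = lambda x: np.outer(x @ u, u)                   # project, then lift back

anomaly = rng.normal(size=(5, 8))   # isotropic, off the normal manifold
s_n = reconstruction_scores(model, normal)
s_a = reconstruction_scores(model, anomaly)
print(s_a.mean() > s_n.mean())      # True: anomalies score higher
```

This is exactly the regime where the criterion works; the paper's point is that for complex data the learned features become shared between classes and this separation collapses.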
Taking the original data as supervision, the input data is obtained by erasing selected information from the original data, creating an information gap between the input data and the target supervision. Considering that attributes of objects, such as the color distribution and the orientation, are compact high-level representations~\cite{long2017towards}, an Attribute Erasing Module (AEM) is proposed to remove certain attributes. To restore the input data to the original data, the restoration network is then forced to learn what is erased and how to restore it. In this way, we convert the task from reconstruction into restoration, in which the feature embedding can be controlled by the corresponding information erasing. Since anomalous and normal data need to be distinguished by restoration errors, we seek to obstruct the restoration process for anomalous data so as to enlarge the gap in restoration errors between normal and anomalous data. Normal data can be restored properly because the erased attributes and the features embedded by the restoration network are matched, which is ensured by the training process. However, this match is broken when normal data and anomalous data differ with regard to the erased attribute. In this case, anomalous data cannot be restored properly and suffers from high restoration error. Figure~\ref{img:model} provides a brief illustration of the proposed framework, in which cats and cars are considered as normal and anomalous data, respectively. During training, we first leverage an AEM to erase some attributes (\emph{e.g.},~color and orientation) from the normal data and obtain an attribute-erased image set. Then we feed images from the attribute-erased image set to the Attribute Restoration Network (ARNet). We optimize the ARNet by minimizing the restoration loss between the restoration results and the original images.
When testing, the anomalous image ``car'' is restored improperly, with the wrong orientation and color, leading to a large restoration error. We call this pipeline the attribute restoration framework for anomaly detection. To validate the effectiveness of ARNet, we conduct extensive experiments on several benchmarks and compare with state-of-the-art methods. Our experimental results show that ARNet outperforms state-of-the-art methods in terms of model accuracy and stability across different tasks. To further evaluate on more challenging tasks, we experiment with the large-scale dataset ImageNet~\cite{russakovsky2015imagenet} and show that ARNet improves the AUROC of the top-performing baseline by 10.1\%. Experiments on a real-world anomaly detection dataset MVTec AD~\cite{bergmann2019mvtec} and a recent video anomaly detection benchmark dataset ShanghaiTech~\cite{luo2017revisit} show that our ARNet is more adaptable to complex real-world environments. To summarize, we are the first to apply image restoration to the anomaly detection problem, with impressive performance. The rest of the paper is organized as follows: Section II briefly describes related works. The proposed framework is explained in Section III. Experiment settings and results are presented in Section IV. Section V concludes the paper. \section{Related Works} \subsection{Anomaly Detection} For anomaly detection on images and videos, a large variety of methods have been developed in recent years~\cite{Chalapathy2019Deep, Markou2003Novelty_V1, chandola2009anomaly, Pimentel2014A, Kiran2018An, chu2018sparse, xu2018anomaly, xu2019video}. In this paper, we focus on anomaly detection in still images. Owing to the scarcity and diversity of anomalous data, the vital challenge of anomaly detection is that the training dataset contains only normal data, leading to a lack of supervision.
Judging by whether the model can be directly used for anomaly detection, popular methods can be grouped into two categories: one-class classification-based approaches and surrogate supervision based approaches. \noindent\textbf{One-class classification-based approaches:} To distinguish anomalous data from normal data, some conventional methods~\cite{Eskin2000Anomaly, Yamanishi2000On, Rahmani2017Coherence, Xu2012Robust} tended to depict the normal data with statistical approaches. Through training, a distribution function was fitted to the features extracted from the normal data to represent them in a shared latent space. During testing, samples mapped to different statistical representations are considered anomalous. Some approaches tackled the anomaly detection problem by finding a hyperplane to separate normal data in the latent space. In OC-SVM~\cite{scholkopf2001estimating}, the normal samples are mapped to a high-dimensional feature space through a kernel function to be better aggregated. In the feature space, the coordinate origin is considered as the only anomalous data point, and a maximum-margin hyperplane is then found to separate the mapped data from the origin. To better aggregate the mapped data in the latent space, Ruff \emph{et al.}~\cite{ruff2018deep} optimized the neural network by minimizing the volume of a hyper-sphere which encloses the network representations of the data. Other researchers tried to find the hyperplane by generating or introducing extra anomalous data~\cite{lee2017training,hendrycks2018deep}. Lee \emph{et al.}~\cite{lee2017training} used Kullback-Leibler (KL) divergence to guide a GAN to generate anomalous data closer to normal data, leading to a better training set for the classification method. Hendrycks \emph{et al.}~\cite{hendrycks2018deep} introduced extra data to build a multi-class classification task.
The experiment revealed that even though the extra data was limited in quantity and weakly correlated with the normal data, the learned hyperplane was still effective in separating normal data. \noindent\textbf{Surrogate supervision based approaches:} Many approaches modeled anomaly detection as an unsupervised learning problem and remedied the lack of supervision by introducing surrogate supervision. The model was first trained to optimize the surrogate task-based objective function. Normal data can then be separated under the assumption that anomalous data will behave differently in the surrogate task. Reconstruction~\cite{an2015variational,xia2015learning,schlegl2017unsupervised,zong2018deep,deecke2018image} is the most popular surrogate supervision. Based on autoencoders or variational autoencoders, this kind of method compressed normal samples into a lower-dimensional latent space and then reconstructed them to approximate the original input data. It assumed that anomalous samples would be distinguished by relatively high reconstruction errors compared with normal samples. Sakurada \emph{et al.}~\cite{Sakurada2014Anomaly} were the first to apply the autoencoder to anomaly detection. This work further indicated that the features learned in the hidden layer of autoencoders were distinguishable between normal and anomalous data. Based on that, Nicolau \emph{et al.}~\cite{nicolau2016hybrid} introduced density estimation to estimate the different distributions in the latent space of the autoencoder, assuming that anomalous data would have lower density in the latent space.
\begin{figure}[tbp] \centering \includegraphics[width=7.5cm]{img/ARNet.pdf} \caption{Pipeline for anomaly detection with the attribute restoration framework, with mathematical expressions.} \label{img:ARNet} \end{figure} Some recent works~\cite{SimilarityMetricAutoencoding,dosovitskiy2016generating} indicated that the mean square error (MSE) loss function, adopted by most reconstruction-based methods, forces the network to focus on pixel-wise errors rather than learning high-level semantic features. When dealing with more complex data, the learned low-level features are more likely to be shared and lead to good reconstruction results on both normal and anomalous data. To tackle this problem, some recent approaches continued to follow the reconstruction-based method while introducing loss functions more effective than MSE. Sabokrou \emph{et al.}~\cite{Sabokrou2018Adversarially} and Akcay \emph{et al.}~\cite{akccay2019skip} employed adversarial training to optimize the autoencoder and leveraged its discriminator to further enlarge the reconstruction error gap between normal and anomalous data. Furthermore, Akcay \emph{et al.}~\cite{akccay2019skip} leveraged another encoder to embed the reconstruction results into the subspace where the reconstruction error is calculated. Similarly, Wang \emph{et al.}~\cite{wang2019advae} employed adversarial training under a variational autoencoder framework with the assumption that normal and anomalous data follow different Gaussian distributions. Zenati \emph{et al.}~\cite{zenati2018efficient} trained a BiGAN model and employed a discriminator to supervise the encoder and decoder simultaneously. Gong \emph{et al.}~\cite{gong2019memorizing} augmented the autoencoder with a memory module, developing a memory-augmented autoencoder to strengthen the reconstruction errors on anomalies.
Perera \emph{et al.}~\cite{perera2019ocgan} applied two adversarial discriminators and a classifier on a denoising autoencoder. By adding a constraint forcing each randomly drawn latent code to reconstruct examples like the normal data, it obtained high reconstruction errors for anomalous data. Other approaches tackled this problem by introducing a new classification-based surrogate supervision. Golan \emph{et al.}~\cite{golan2018deep} applied dozens of image geometric transforms and created a self-labeled dataset for transformation classification, assuming that the transformations of anomalous data cannot be classified properly. Besides geometric transforms, Wang \emph{et al.}~\cite{wang2019effective} introduced more self-labeling methods, such as patch re-arranging and irregular affine transformations, to further strengthen the surrogate supervision. \subsection{Restoration-Based Unsupervised Learning} Many works tackled the unsupervised learning problem through restoration. They assume that by restoring the damaged image, the network is forced to learn robust feature embeddings. The quality of the learned features is then validated through a variety of image understanding tasks, including classification, object detection, and semantic segmentation. Denoising autoencoders~\cite{vincent2008extracting} add corrupting noises to the original data and require the autoencoder network to undo the damage. Pathak \emph{et al.}~\cite{pathak2016context} randomly blanked out a region from the original image and employed an autoencoder to restore it. Jenni \emph{et al.}~\cite{Jenni_2018_CVPR} applied a random mask to blank out part of the features from the encoder and forced the decoder to repair them. Denton~\cite{denton2016semi} indicated that previous works had difficulty generating large image patches that look realistic. To address this, a low-resolution but intact version of the original image was additionally fed to the network to guide reconstruction.
Different from traditional restoration-based unsupervised learning, in this paper we erase certain object attributes from the image. To the best of our knowledge, we are the first to connect restoration with anomaly detection. \section{Attribute Restoration Framework} In this section, we first formulate the problem of anomaly detection. Let $\mathcal{X}$, $\mathcal{X_\mathrm{n}}$, and $\mathcal{X_\mathrm{an}}$ denote the entire dataset, the normal dataset and the anomalous dataset, respectively, where $\mathcal{X_\mathrm{n}} \cup \mathcal{X_\mathrm{an}} = \mathcal{X} $ and $\mathcal{X_\mathrm{n}} \cap \mathcal{X_\mathrm{an}} = \emptyset $. Given any image $x \in \mathcal{X}$, where $x \in \mathbb{R}^{C \times H \times W}$, and $C$, $H$ and $W$ denote the dimensions of image channels, height and width, the goal is to build a model $\mathcal{M}(\cdot)$ for discriminating whether $x \in \mathcal{X_\mathrm{n}} $ or $x \in \mathcal{X_\mathrm{an}}$. To solve the above problem, we propose the attribute restoration framework, which consists of three parts: (1) Attribute Erasing Module (AEM): erases certain attributes of images to create an image restoration task; (2) Attribute Restoration Network (ARNet): uses the original images as supervision and the images after erasing certain attributes as inputs to train a model for restoring the images against the attribute absence; (3) Anomaly measurement: establishes a link between the image restoration task and the image anomaly detection task. The corresponding structure is shown in Figure~\ref{img:ARNet}. The details of the above three modules are introduced in the following sections. \subsection{Attribute Erasing Module (AEM)}\label{method} The Attribute Erasing Module (AEM) is leveraged to erase a set of attributes from the objects, enforcing information inequivalence between the input and output data and changing the task from reconstruction into restoration.
Each erased attribute from the set can be effective for anomaly detection under three assumptions: \begin{itemize} \item The erased attribute should be shared among normal data; otherwise, the ARNet will not be able to converge to restore the normal data. \item The erased attribute should be different between normal and anomalous data; otherwise, it is hard for the ARNet to distinguish normal data from anomalous data through this attribute, as anomalous data can be restored properly using the shared features learned from the normal data. \item The attribute can be erased by a module that does not rely on extra datasets or labels; otherwise, an additional training process for attribute erasing would be required. \end{itemize} We take some cases from image datasets as illustrations to further detail the design of the Attribute Erasing Module. By human prior, the orientation of many objects is shared within a class but different between classes, which meets the first and second conditions, \emph{e.g.},~the wheels of cars are always at the bottom of the images while the circle of the digit ``9'' is always at the top. To erase the orientation of these objects, we can employ a random rotation operation, which rotates the images by a randomly selected angle. This orientation erasing operation does not introduce an additional training process, which also meets the third condition discussed above. The main challenge of unsupervised anomaly detection is that anomalous data is not available during training, leaving no guarantee that the second assumption is satisfied by all kinds of anomalous data. Fortunately, although the image restoration task on such an attribute cannot be used to distinguish between normal and anomalous data when the second condition is not satisfied, it only causes the anomaly detection performance of the image restoration task to degrade to that of the image reconstruction task.
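The random-rotation erasing operation discussed above can be sketched as follows (a minimal illustration with a NumPy array standing in for an image; the helper name is ours, not from the paper's code):

```python
import numpy as np

def erase_orientation(img, rng):
    """One attribute-erasing operation f_O: rotate by a random multiple of 90 degrees.

    Because the angle is drawn at random, the orientation attribute cannot be
    read off the input alone; a restoration network must learn the orientation
    shared by the normal class in order to undo the rotation.
    """
    k = int(rng.integers(0, 4))           # choose from {0, 90, 180, 270} degrees
    return np.rot90(img, k, axes=(0, 1)), k

rng = np.random.default_rng(7)
img = np.arange(16.0).reshape(4, 4)       # toy "image"
erased, k = erase_orientation(img, rng)
restored = np.rot90(erased, (4 - k) % 4, axes=(0, 1))  # oracle restoration
print(np.allclose(restored, img))         # True
```

The oracle restoration here just inverts the known rotation; in the actual framework the network never sees `k` and must infer the correct orientation from the normal class statistics.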
To alleviate this problem, we propose a set of attribute erasing operations to increase the probability that at least one attribute meets the second condition. Thus the Attribute Erasing Module works as follows: suppose we have a set of attribute erasing operations $\mathcal{O} = \{ f_{\mathrm{O}_k}(\cdot) | k=1,\dots,K \}$, where $f_{\mathrm{O}_{k}}(\cdot)$ denotes the \emph{k}-th attribute erasing operation. Given $x_{\rm n} \in \mathcal{X_\mathrm{n}}$, the data after AEM is $\tilde{x}_{\rm n} = F_{\mathcal{O}}(x_{\rm n}) := f_{\mathrm{O}_K}( f_{\mathrm{O}_{K-1}}( \cdots f_{\mathrm{O}_1}( x_{\rm n})))$. \subsection{Attribute Restoration Network} In this section, we present the \emph{Attribute Restoration Network} (ARNet) in detail. ARNet is based on an encoder-decoder framework to restore the original images. In the training phase, given $\tilde{x}_{\rm n}$ after AEM, the proposed ARNet takes $\tilde{x}_{\rm n}$ as the input and attempts to inversely restore the original training sample $x_{\rm n}$. Mathematically, given $\tilde{x}_{\rm n}$, the restored sample $\hat{x}_{\rm n}$ is formulated as \begin{equation} \label{eq:pipeline} \hat{x}_{\rm n}=\mathcal{M}(\tilde{x}_{\rm n})=Dec(Enc(\tilde{x}_{\rm n})), \end{equation} where $\mathcal{M}(\cdot)$ indicates the model of ARNet, while $Enc(\cdot)$ and $Dec(\cdot)$ indicate the encoder and decoder of ARNet. Note that while ARNet is employed for image restoration tasks, it is different from existing autoencoders in that the inputs and outputs are asymmetrical, \emph{i.e.},~ARNet needs to restore attributes erased by the Attribute Erasing Module. To train our ARNet for effective anomaly detection, a likelihood-based restoration loss is employed as the loss function. The $\ell_2$ loss is utilized to measure the distance between the restored samples and the targets since it is smoother and places larger penalties on the dimensions with larger generation errors.
Let the target image be $x_{\rm n}$; the training loss is formulated as \begin{equation} \begin{split} \mathcal{L}_{\rm train} = \mathbb{E}_{x_{\rm n}\sim p(\mathbf{x}_{\rm n})} \left \| \mathcal{M}(\tilde{x}_{\rm n})-x_{\rm n} \right \|_2^2, \end{split} \end{equation} where $\|\cdot\|_2$ denotes the $\ell_2$ norm and $p({\bf x}_{\rm n})$ indicates the distribution of normal data. We use Monte Carlo sampling to approximate the expectation by averaging the costs over samples and attribute erasing operations in each mini-batch. \subsection{Anomaly Measurement}\label{sec:score} To establish a link between the image restoration task and the image anomaly detection task, in the test phase we design a metric based on the restoration error to distinguish whether a sample belongs to the normal set. Both normal and anomalous data are fed into the model, and their restoration errors are used to determine whether a query sample is anomalous. In the test phase, we calculate the restoration error of each input image $x$ for anomaly detection. We expect that the restorations of normal samples show much smaller errors than those of anomalous samples due to the specific image restoration scheme. We note that the $\ell_1$ loss is more suitable to measure the distance between the outputs and the original images. Let the test sample be $x$; the anomaly score is formulated as \begin{equation} \begin{split} {\cal S}_{\rm test}(x) = \left \| \mathcal{M}(F_{\mathcal{O}}(x))-x \right \|_1, \end{split} \end{equation} where $\|\cdot\|_1$ denotes the $\ell_1$ norm. However, $f_{O_k}$ may function through randomization, in which the original fixed operation is reformulated as a random selection $f_{\hat{O}_k}$ from an operation set $\{f_{\hat{O}_{k,j_k}} | j_k=1,\dots,m_k \}$ of size $m_k$.
For example, we employ random rotation as the orientation erasing operation, where the rotation angle is randomly selected from a fixed set of discrete options, such as $\{\ang{0}, \ang{90},\ang{180},\ang{270}\}$. Accordingly, as $F_{\mathcal{O}}$ is the compound function of the $f_{O_k}$, $F_{\mathcal{O}}$ is reformulated as $F_{\hat{\mathcal{O}}}(\cdot) = f_{\hat{\mathrm{O}}_K}( f_{\hat{\mathrm{O}}_{K-1}}( \cdots f_{\hat{\mathrm{O}}_1}(\cdot)))$, where $F_{\hat{\mathcal{O}}}(\cdot)$ is a random selection from the set $\{ F_{\hat{\mathrm{O}}_i}(\cdot) | i=1,\dots,N \}$ of size $N=\prod_{k=1}^K m_k$. Note that when $m_k=1$, $f_{O_k}=f_{\hat{O}_k}$. During the test process, we need to traverse all selections $F_{\hat{\mathrm{O}}_i}(\cdot)$ and take the average restoration error as the anomaly score, which is reformulated as \begin{equation} \begin{split} {\cal S}_{\rm test}'(x) = \frac{1}{N} & \sum_{i=1}^{N} \left \| \mathcal{M}(F_{\mathcal{\hat{O}}_i}(x))-x \right \|_1. \end{split} \end{equation} We notice that the restoration errors under some $F_{\hat{\mathrm{O}}_i}(\cdot)$ may be naturally larger than the others since different tasks have different restoration difficulties. In this case, given the same input sample, different $F_{\hat{\mathrm{O}}_i}(\cdot)$ lead to different restoration errors, and the final anomaly score may be biased if we average these restoration errors naively.
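The traversal behind ${\cal S}_{\rm test}'$ can be sketched in a few lines (a toy one-operation setup; the stand-in ``model'' and all names are our own illustration, not the trained ARNet):

```python
import numpy as np

def s_test_naive(model, ops, x):
    """S'_test(x): average L1 restoration error over the N operation selections."""
    return float(np.mean([np.abs(model(op(x)) - x).mean() for op in ops]))

# Toy stand-in for a trained restoration model: normal images have their bright
# row on top, and the "model" restores that learned orientation.
model = lambda y: y if y[0].sum() >= y[-1].sum() else np.flipud(y)
ops = [np.flipud]                           # one orientation-erasing selection

normal = np.array([[1., 1.], [0., 0.]])     # bright-on-top: the normal attribute
anomaly = np.array([[0., 0.], [1., 1.]])    # bright-on-bottom: attribute differs
print(s_test_naive(model, ops, normal))     # 0.0
print(s_test_naive(model, ops, anomaly) > 0)  # True
```

The normal sample is restored exactly because the model's learned orientation matches it, while the anomaly is ``restored'' to the wrong orientation and accumulates error, which is the discrimination mechanism described above.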
To make each $F_{\hat{\mathrm{O}}_i}(\cdot)$ contribute equally to the final anomaly score, we use the original training data to calculate the mathematical expectation of the restoration error for each $F_{\hat{\mathrm{O}}_i}(\cdot)$ as a normalization, and set the final anomaly score as \begin{equation} \begin{split} {\cal S}_{\rm test}''(x) = \frac{1}{N} & \sum_{i=1}^{N} \frac{\left \| \mathcal{M}(F_{\mathcal{\hat{O}}_i}(x))-x \right \|_1} {\mathbb{E}_{x_{\rm n}\sim p({\bf x}_{\rm n})}\left \| \mathcal{M}(F_{\mathcal{\hat{O}}_i}(x_{\rm n}))-x_{\rm n} \right \|_1}, \end{split} \end{equation} where $p({\bf x}_{\rm n})$ indicates the distribution of normal data, which is consistent with the distribution of the training set. A normal sample leads to a low anomaly score; the higher the value of $\mathcal{S}_{\rm test}''(x)$, the higher the probability that the sample $x$ is anomalous. \subsection{Discussion: Restoration vs. Reconstruction} Both image reconstruction and image restoration tasks can be implemented with an encoder-decoder architecture. The differences are summarized in three aspects. First, different from reconstruction, the input and output (supervision) for ARNet are asymmetric, which is achieved with the Attribute Erasing Module. The erased information of anomalous data may not be restored properly through feature embeddings learned from the normal data, leading to high anomaly scores for anomalous data. Second, unlike the reconstruction-based methods, especially the vanilla AE, which blindly learn uncontrollable features from normal data, the restoration-based framework leverages attribute erasing to guide the feature embedding and thus enables the embedding of semantically meaningful high-level features. Third, in the final anomaly detection phase, the two methods differ in the way they obtain the final anomaly scores.
Different from the reconstruction-based method, for the restoration-based framework, multiple restoration losses produced by multiple attribute erasing operations are weighted and summed to obtain the anomaly scores. These weights can be obtained from the training data, as discussed in Section~\ref{sec:score}. \renewcommand \arraystretch{0.97} \begin{table*}[!htb] \centering \caption{Average area under the ROC curve (AUROC) in \% of anomaly detection methods. For every dataset, each model is trained on a single class and tested against all other classes. ``SD'' means standard deviation among classes. The best performing method is in bold.} \small \begin{minipage}[t]{0.95\textwidth} \begin{tabular}{cx{2.0cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.8cm}} \toprule Dataset & Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & \textbf{avg} & SD\\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} \cmidrule(lr){14-14} & VAE~\cite{kingma2013auto} & 92.1 & \textbf{99.9} & 81.5 & 81.4 & 87.9 & 81.1 & 94.3 & 88.6 & 78.0 & 92.0 & 87.7 & 7.05\\ & AnoGAN~\cite{schlegl2017unsupervised} & 99.0 & 99.8 & 88.8 & 91.3 & 94.4 & 91.2 & 92.5 & 96.4 & 88.3 & 95.8 & 93.7 & 4.00\\ & ADGAN~\cite{deecke2018image} & 99.5 & \textbf{99.9} & 93.6 & 92.1 & 94.9 & 93.6 & 96.7 & 96.8 & 85.4 & 95.7 & 94.7 & 4.15\\ MNIST& GANomaly~\cite{Akcay2018} & 97.2 & 99.6 & 85.1 & 90.6 & 94.5 & 94.9 & 97.1 & 93.9 & 79.7 & 95.4 & 92.8 & 6.12\\ & OCGAN~\cite{perera2019ocgan} & \textbf{99.8} & \textbf{99.9} & 94.2 & 96.3 & 97.5 & 98.0 & 99.1 & 98.1 & 93.9 & 98.1 & 97.5 & 2.10\\ & GeoTrans~\cite{golan2018deep} & 98.2 & 91.6 & \textbf{99.4} & 99.0 & \textbf{99.1} & \textbf{99.6} & \textbf{99.9} & 96.3 & \textbf{97.2} & \textbf{99.2} & 98.0 & 2.50\\
\cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} \cmidrule(lr){14-14} & AE & 98.8 & 99.3 & 91.7 & 88.5 & 86.2 & 85.8 & 95.4 & 94.0 & 82.3 & 96.5 & 91.9 & 5.90\\ & OURS & 98.6 & \textbf{99.9} & 99.0 & \textbf{99.1} & 98.1 & 98.1 & 99.7 & \textbf{99.0} & 93.6 & 97.8 & \textbf{98.3} & \textbf{1.78}\\ \cmidrule(lr){1-14} & DAGMM~\cite{zhai2016deep} & 42.1 & 55.1 & 50.4 & 57.0 & 26.9 & 70.5 & 48.3 & 83.5 & 49.9 & 34.0 & 51.8 & 16.47\\ & DSEBM~\cite{zong2018deep} & 91.6 & 71.8 & 88.3 & 87.3 & 85.2 & 87.1 & 73.4 & 98.1 & 86.0 & 97.1 & 86.6 & 8.61\\ Fashion- & ADGAN~\cite{deecke2018image} & 89.9 & 81.9 & 87.6 & 91.2 & 86.5 & 89.6 & 74.3 & 97.2 & 89.0 & 97.1 & 88.4 & 6.75\\ MNIST & GANomaly~\cite{Akcay2018} & 80.3 & 83.0 & 75.9 & 87.2 & 71.4 & 92.7 & 81.0 & 88.3 & 69.3 & 80.3 & 80.9 & 7.37\\ & GeoTrans~\cite{golan2018deep} & \textbf{99.4} & 97.6 & \textbf{91.1} & 89.9 & \textbf{92.1} & \textbf{93.4} & 83.3 & \textbf{98.9} & 90.8 & \textbf{99.2} & 93.5 & 5.22\\ \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} \cmidrule(lr){14-14} & AE & 71.6 & 96.9 & 72.9 & 78.5 & 82.9 & 93.1 & 66.7 & 95.4 & 70.0 & 80.7 & 80.9 & 11.03\\ & OURS & 92.7 & \textbf{99.3} & 89.1 & \textbf{93.6} & 90.8 & 93.1 & \textbf{85.0} & 98.4 & \textbf{97.8} & 98.4 & \textbf{93.9} & \textbf{4.70}\\ \cmidrule(lr){1-14} & VAE~\cite{kingma2013auto} & 62.0 & 66.4 & 38.2 & 58.6 & 38.6 & 58.6 & 56.5 & 62.2 & 66.3 & 73.7 & 58.1 & 11.50\\ & DAGMM~\cite{zhai2016deep} & 41.4 & 57.1 & 53.8 & 51.2 & 52.2 & 49.3 & 64.9 & 55.3 & 51.9 & 54.2 & 53.1 & 5.95\\ & DSEBM~\cite{zong2018deep} & 56.0 & 48.3 & 61.9 & 50.1 & 73.3 & 60.5 & 68.4 & 53.3 & 73.9 & 63.6 & 
60.9 & 9.10\\ CIFAR- & AnoGAN~\cite{schlegl2017unsupervised} & 61.0 & 56.5 & 64.8 & 52.8 & 67.0 & 59.2 & 62.5 & 57.6 & 72.3 & 58.2 & 61.2 & 5.68\\ 10& ADGAN~\cite{deecke2018image} & 63.2 & 52.9 & 58.0 & 60.6 & 60.7 & 65.9 & 61.1 & 63.0 & 74.4 & 64.4 & 62.4 & 5.56\\ & GANomaly~\cite{Akcay2018} & \textbf{93.5} & 60.8 & 59.1 & 58.2 & 72.4 & 62.2 & 88.6 & 56.0 & 76.0 & 68.1 & 69.5 & 13.08\\ & OCGAN~\cite{perera2019ocgan} & 75.7 & 53.1 & 64.0 & 62.0 & 72.3 & 62.0 & 72.3 & 57.5 & 82.0 & 55.4 & 65.6 & 9.52\\ & GeoTrans~\cite{golan2018deep} & 74.7 & \textbf{95.7} & 78.1 & 72.4 & 87.8 & \textbf{87.8} & 83.4 & \textbf{95.5} & \textbf{93.3} & \textbf{91.3} & 86.0 & 8.52\\ \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} \cmidrule(lr){14-14} & AE & 57.1 & 54.9 & 59.9 & 62.3 & 63.9 & 57.0 & 68.1 & 53.8 & 64.4 & 48.6 & 59.0 & 5.84\\ & OURS & 78.5 & 89.8 & \textbf{86.1} & \textbf{77.4} & \textbf{90.5} & 84.5 & \textbf{89.2} & 92.9 & 92.0 & 85.5 & \textbf{86.6} & \textbf{5.35}\\ \cmidrule(lr){1-14} & GANomaly~\cite{Akcay2018} & 58.9 & 57.5 & 55.7 & 57.9 & 47.9 & 61.2 & 56.8 & 58.2 & 49.7 & 48.8 & 55.3 & \textbf{4.46}\\ ImageNet & GeoTrans~\cite{golan2018deep} & \textbf{72.9} & 61.0 & 66.8 & \textbf{82.0} & 56.7 & 70.1 & 68.5 & \textbf{77.2} & 62.8 & 83.6 & 70.1 & 8.43\\ \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} \cmidrule(lr){14-14} & AE & 57.1 & 51.3 & 47.7 & 57.4 & 43.8 & 54.9 & 54.6 & 51.3 & 48.3 & 41.5 & 50.8 & 5.16 \\ & OURS & 71.9 & \textbf{85.8} & \textbf{70.7} & 78.8 & \textbf{69.5} & \textbf{83.3} & \textbf{80.6} & 72.4 & \textbf{74.9} & \textbf{84.3} & \textbf{77.2} & 5.77\\ \cmidrule(lr){1-14} 
\end{tabular} \end{minipage} \begin{minipage}[t]{0.92\textwidth} \small \begin{tabular}{cx{2.0cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.7cm}x{0.8cm}} Dataset & Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} & DAGMM~\cite{zhai2016deep} & 43.4 & 49.5 & 66.1 & 52.6 & 56.9 & 52.4 & 55.0 & 52.8 & 53.2 & 42.5 & 52.7\\ & DSEBM~\cite{zong2018deep} & 64.0 & 47.9 & 53.7 & 48.4 & 59.7 & 46.6 & 51.7 & 54.8 & 66.7 & 71.2 & 78.3 \\ & ADGAN~\cite{deecke2018image} & 63.1 & 54.9 & 41.3 & 50.0 & 40.6 & 42.8 & 51.1 & 55.4 & 59.2 & 62.7 & 79.8 \\ & GANomaly~\cite{Akcay2018} & 57.9 & 51.9 & 36.0 & 46.5 & 46.6 & 42.9 & 53.7 & 59.4 & 63.7 & 68.0 & 75.6\\ & GeoTrans~\cite{golan2018deep} & 74.7 & 68.5 & \textbf{74.0} & \textbf{81.0} & \textbf{78.4} & 59.1 & 81.8 & 65.0 & \textbf{85.5} & \textbf{90.6} & \textbf{87.6}\\ \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} & AE & 66.7 & 55.4 & 41.4 & 49.2 & 44.9 & 40.6 & 50.2 & 48.1 & 66.1 & 63.0 & 52.7 \\ CIFAR-& OURS & \textbf{77.5} & \textbf{70.0} & 62.4 & 76.2 & 77.7 & \textbf{64.0} & \textbf{86.9} & \textbf{65.6} & 82.7 & 90.2 & 85.9 \\ \cmidrule(lr){2-13} 100 & Method & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & \textbf{avg} & SD\\ \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} & DAGMM~\cite{zhai2016deep} & 46.4 & 42.7 & 45.4 & 57.2 & 48.8 & 54.4 & 36.4 & 52.4 & 50.3 & 50.5 & 
\textbf{6.55}\\ & DSEBM~\cite{zong2018deep} & 62.7 & 66.8 & 52.6 & 44.0 & 56.8 & 63.1 & 73.0 & 57.7 & 55.5 & 58.8 & 9.36\\ & ADGAN~\cite{deecke2018image} & 53.7 & 58.9 & 57.4 & 39.4 & 55.6 & 63.3 & 66.7 & 44.3 & 53.0 & 54.7 & 10.08\\ & GANomaly~\cite{Akcay2018} & 57.6 & 58.7 & 59.9 & 43.9 & 59.9 & 64.4 & 71.8 & 54.9 & 56.8 & 56.5 & 9.94\\ & GeoTrans~\cite{golan2018deep} & \textbf{83.9} & 83.2 & 58.0 & \textbf{92.1} & 68.3 & 73.5 & \textbf{93.8} & \textbf{90.7} & 85.0 & 78.7 & 10.76 \\ \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} & AE & 62.1 & 59.6 & 49.8 & 48.1 & 56.4 & 57.6 & 47.2 & 47.1 & 41.5 & 52.4 & 8.11\\ & OURS & 83.5 & \textbf{84.6} & \textbf{67.6} & 84.2 & \textbf{74.1} & \textbf{80.3} & 91.0 & 85.3 & \textbf{85.4} & \textbf{78.8} & 8.82\\ \bottomrule \end{tabular} \end{minipage} \label{tal:AUC1} \end{table*} \section{Experiments} In this section, we conduct extensive experiments to validate our method. ARNet is first evaluated on multiple commonly used benchmark datasets under unsupervised anomaly detection settings, as well as on the large-scale dataset ImageNet~\cite{russakovsky2015imagenet}, which has rarely been explored in previous anomaly detection studies. Next, we conduct experiments on real anomaly detection datasets to evaluate the performance in real-world environments. Then we present the respective effects of different designs (\emph{e.g.},~different types of image-level transformation and loss function design) through an ablation study. The stability of our models is validated by monitoring performance fluctuation during the training process and by comparing the final performance after convergence over multiple training attempts, all starting from random weights and using the same training configuration.
Finally, the visualization analysis illustrates the efficiency of the attribute restoration framework in anomaly detection from a more straightforward perspective. \subsection{Experiments on Popular Benchmarks} 1) Experimental Setups: \noindent\textbf{Datasets.} In this part, our experiments involve five popular image datasets: MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100 and ImageNet. For all datasets, the training and test partitions remain as default. In addition, pixel values of all images are normalized to $[-1, 1]$. We introduce these five datasets briefly as follows: \begin{itemize} \item MNIST~\cite{lecun1998mnist}: consists of 70,000 $28\times28$ handwritten grayscale digit images. \item Fashion-MNIST~\cite{xiao2017fashion}: a relatively new dataset comprising $28\times28$ grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. \item CIFAR-10~\cite{krizhevsky2009learning}: consists of 60,000 $32\times32$ RGB images of 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images, divided in a uniform proportion across all classes. \item CIFAR-100~\cite{krizhevsky2009learning}: consists of 100 classes, each of which contains 600 RGB images. The 100 classes in CIFAR-100 are grouped into 20 ``superclasses'' to make the experiments more concise and the data volume of each selected ``normal class'' larger. \item ImageNet~\cite{russakovsky2015imagenet}: We group the data from the ILSVRC 2012 classification dataset~\cite{russakovsky2015imagenet} into 10 superclasses by merging similar category labels using Latent Dirichlet Allocation (LDA)~\cite{blei2003latent}, a natural language processing method (see appendix for more details). We note that little anomaly detection research has been conducted on ImageNet since its images have higher resolution and more complex backgrounds.
\end{itemize} \noindent\textbf{Model configuration.} The detailed structure of the model we used can be found in the appendix. We follow the settings in~\cite{unet, akccay2019skip,isola2017image} and add skip-connections between some encoder layers and the corresponding decoder layers to facilitate the backpropagation of the gradient, in an attempt to improve the performance of image restoration. We use the stochastic gradient descent (SGD)~\cite{bottou2010large} optimizer with default hyperparameters in PyTorch. ARNet is trained with a batch size of 32 for $500/T$ epochs, where $T$ is the number of transformations used. The learning rate is initially set to 0.1, and is divided by 2 every $50/T$ epochs. In our experiments, we use an attribute erasing operation set which contains two cascaded operations: \begin{itemize} \item {\bf Graying:} This operation averages each pixel value along the channel dimension of images. \item {\bf Random rotation:} This operation rotates $x$ anticlockwise by angle $\alpha$ around the center of each image channel. The rotation angle $\alpha$ is randomly selected from the set $\{0^{\circ}, 90^{\circ},180^{\circ},270^{\circ}\}$. \end{itemize} The graying operation erases color information, and the random rotation operation erases objects' orientation. Both of them meet the assumptions we introduced in Section~\ref{method}. \noindent\textbf{Evaluation protocols.} In our experiments, we quantify the model performance using the area under the Receiver Operating Characteristic (ROC) curve metric (AUROC). It is commonly adopted as a performance measurement in anomaly detection tasks and eliminates the subjective choice of a threshold value to divide the ``normal'' samples from the anomalous ones. 2) Comparison with State-of-the-art Methods: For a dataset with $C$ classes, we conduct a batch of $C$ experiments, with each of the $C$ classes set as the ``normal'' class once.
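As a concrete illustration, the two attribute erasing operations above can be sketched with NumPy. This is a minimal sketch, not the authors' released code; the function names and the $(H, W, C)$ array layout are our own assumptions:

```python
import numpy as np

def graying(x):
    """Erase color: average pixel values along the channel axis, keeping shape (H, W, C)."""
    g = x.mean(axis=-1, keepdims=True)
    return np.repeat(g, x.shape[-1], axis=-1)

def random_rotation(x, rng):
    """Erase orientation: rotate anticlockwise by an angle drawn from {0, 90, 180, 270} degrees."""
    k = int(rng.integers(0, 4))  # alpha = 90 * k degrees
    return np.rot90(x, k=k, axes=(0, 1)), 90 * k
```

At training time the erased image is fed to the network, which must restore the original; the rotation angle is resampled each time an image is drawn.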
We then evaluate performance on an independent test set, which contains samples from all classes, including normal and anomalous data. As all classes have equal volumes of samples in our selected datasets, the overall number proportion of normal and anomalous samples is simply $1:(C-1)$. In Table~\ref{tal:AUC1}, we provide results on MNIST, Fashion-MNIST, CIFAR-10, ImageNet and CIFAR-100 in detail. Several popular methods are included in the comparison: VAE~\cite{kingma2013auto}, DAGMM~\cite{zhai2016deep}, DSEBM~\cite{zong2018deep}, AnoGAN~\cite{schlegl2017unsupervised}, ADGAN~\cite{deecke2018image}, GANomaly~\cite{Akcay2018}, OCGAN~\cite{perera2019ocgan}, GeoTrans~\cite{golan2018deep} and our baseline backbone AE. Results of VAE, AnoGAN and ADGAN are borrowed from~\cite{deecke2018image}. Results of DAGMM, DSEBM and GeoTrans are borrowed from~\cite{golan2018deep}. We use the officially released source code of GANomaly to fill in the incomplete results reported in~\cite{Akcay2018} with our experimental settings. For RGB datasets, such as CIFAR-10 and CIFAR-100, we use the graying and random rotation operations in tandem, together with some standard data augmentations (flipping / mirroring / shifting), as widely used in~\cite{he2016deep, huang2017densely}. For grayscale datasets, such as MNIST and Fashion-MNIST, we only use the random rotation transformation, without any data augmentation. \renewcommand \arraystretch{0.95} \begin{table*}[!htb] \centering \caption{Average area under the ROC curve (AUROC) in \% of anomaly detection methods on MVTec AD~\cite{bergmann2019mvtec} dataset.
The best performing method in each experiment is in bold.} \footnotesize \begin{tabular}{cx{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}x{0.54cm}} \toprule Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & avg\\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} \cmidrule(lr){14-14} \cmidrule(lr){15-15} \cmidrule(lr){16-16} \cmidrule(lr){17-17} GeoTrans~\cite{golan2018deep} & 74.4 & 67.0 & 61.9 & 84.1 & 63.0 & 41.7 & \textbf{86.9} & 82.0 & 78.3 & 43.7 & 35.9 & \textbf{81.3} & 50.0 & 97.2 & 61.1 & 67.2\\ GANomaly~\cite{Akcay2018} & 89.2 & \textbf{73.2} & 70.8 & 84.2 & 74.3 & \textbf{79.4} & 79.2 & 74.5 & 75.7 & 69.9 & 78.5 & 70.0 & 74.6 & 65.3 & 83.4 & 76.2\\ AE & 65.4 & 61.9 & 82.5 & 79.9 & 77.3 & 73.8 & 64.6 & 86.8 & 63.9 & 64.1 & 73.1 & 63.7 & 99.9 & 76.9 & \textbf{97.0} & 75.4\\ OURS & \textbf{94.1} & 68.1 & \textbf{88.3} & \textbf{86.2} & \textbf{78.6} & 73.5 & 84.3 & \textbf{87.6} & \textbf{83.2} & \textbf{70.6} & \textbf{85.5} & 66.7 & \textbf{100} & \textbf{100} & 92.3 & \textbf{83.9}\\ \bottomrule \end{tabular} \label{tal:MVTec} \end{table*} \begin{figure}[tbp] \centering \includegraphics[width=8.3cm]{img/speed.pdf} \caption{Comparison of frames per second (FPS, horizontal axis), GPU memory usage (circle size) and anomaly detection AUROC (vertical axis) of various methods tested on CIFAR-10. ARNet takes up relatively little GPU memory, and its FPS is relatively high.} \label{img:speed} \end{figure} On all involved datasets, the experimental results show that the average AUROC of ARNet outperforms all other methods to varying extents.
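The AUROC metric reported throughout these tables can be computed directly from raw anomaly scores via the rank-sum (Mann--Whitney) identity; the following is a minimal self-contained sketch of our own, not the authors' evaluation code, assuming higher scores mark more anomalous samples:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC from anomaly scores (higher = more anomalous) and labels (1 = anomalous, 0 = normal)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    # average ranks over ties so tied scores contribute 0.5 each
    for s in np.unique(scores):
        m = scores == s
        ranks[m] = ranks[m].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

In practice a library routine such as scikit-learn's `roc_auc_score` gives the same value; the rank formula just makes explicit that AUROC is threshold-free, which is why it suits anomaly detection evaluation.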
For each individual image class, we also obtain competitive performance, showing effectiveness for anomaly detection. To further validate the effectiveness of our method, we conduct experiments on a subset of the ILSVRC 2012 classification dataset~\cite{russakovsky2015imagenet}. Table~\ref{tal:AUC1} also shows the performance of GANomaly, GeoTrans, the baseline AE and our method on ImageNet. As can be seen, our method significantly outperforms the other three methods. Our method maintains stable performance on more difficult datasets. In addition, GeoTrans~\cite{golan2018deep} takes up more GPU memory and computation time. For testing on CIFAR-10 (10,000 images in total), GeoTrans needs 285.45s (35fps, NVIDIA GTX 1080Ti, averaged over 10 runs) and takes 1389MB of GPU memory. ARNet takes only 36.97s (270fps, same experimental environment) and 713MB of GPU memory (over $7\times$ faster than GeoTrans) thanks to its efficient pipeline and network structure. Figure~\ref{img:speed} shows the comparison of frames per second (FPS), GPU memory usage and AUROC of various anomaly detection methods tested on CIFAR-10. ARNet takes up relatively little GPU memory, and its FPS is relatively high. \subsection{Experiments on Real-world Anomaly Detection} \renewcommand \arraystretch{0.95} \begin{table}[!htb] \centering \caption{Average area under the ROC curve (AUROC) in \% of anomaly detection methods on ShanghaiTech~\cite{luo2017revisit} dataset. The best performing method in each experiment is in bold.} \small \begin{tabular}{cx{3cm}x{1.1cm}} \toprule Methods & Temporal Dependency?
& AUROC \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} TSC~\cite{luo2017revisit} & \checkmark & 67.9\\ StackRNN~\cite{luo2017revisit} & \checkmark & 68.0\\ AE-Conv3D~\cite{zhao2017spatio} & \checkmark & 69.7\\ MemAE~\cite{gong2019memorizing} & \checkmark & 71.2\\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} AE-Conv2D~\cite{hasan2016learning} & \ding{55} & 60.9\\ OURS & \ding{55} & \textbf{72.5}\\ \bottomrule \end{tabular} \label{tal:shanghai} \end{table} Previous works~\cite{golan2018deep,deecke2018image} experiment on multi-class classification datasets since there is a lack of comprehensive real-world datasets available for anomaly detection. By defining anomalous events as occurrences of different object classes and splitting the datasets based on unsupervised settings, the multi-class datasets can be used for anomaly detection experiments. However, the real anomalous data does not necessarily meet the above settings, \emph{e.g.},~damaged objects. In this section, we experiment on the most recent real-world anomaly detection benchmark dataset MVTec AD~\cite{bergmann2019mvtec}. \renewcommand \arraystretch{0.95} \begin{table*}[!h] \centering \caption{Average area under the ROC curve (AUROC) in \% of anomaly detection methods for \textbf{different components} on CIFAR-10. ``S'', ``G'' and ``R'' represent scaling, graying and random rotation operations. 
The best performing method in each experiment is in bold.} \small \begin{tabular}{cx{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}x{0.6cm}} \toprule Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & avg\\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} AE (reconstruction) & 57.1 & 54.9 & 59.9 & 62.3 & 63.9 & 57.0 & 68.1 & 53.8 & 64.4 & 48.6 & 59.3\\ ARNet+S & 72.8 & 41.8 & 66.4 & 57.5 & 71.0 & 62.8 & 68.4 & 48.5 & 56.8 & 31.9 & 57.8\\ ARNet+G & 67.4 & 60.9 & 60.5 & 67.1 & 67.0 & 65.5 & 70.7 & 69.3 & 69.7 & 61.0 & 65.6\\ ARNet+R & 76.1 & 80.0 & 83.6 & 77.1 & 89.2 & 83.0 & 82.6 & 85.0 & 90.0 & 75.9 & 82.2\\ ARNet+G+R & \textbf{78.5} & \textbf{89.8} & \textbf{86.1} & \textbf{77.4} & \textbf{90.5} & \textbf{84.5} & \textbf{89.2} & \textbf{92.9} & \textbf{92.0} & \textbf{85.5} & \textbf{86.6}\\ \bottomrule \end{tabular} \label{tal:AUC_ablation} \end{table*} \renewcommand \arraystretch{0.9} \begin{table*}[!htb] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \caption{Average area under the ROC curve (AUROC) in \% of anomaly detection methods for \textbf{different losses} on part of CIFAR-10. ``$\ell_1$'' means $\ell_1$ loss and ``$\ell_2$'' means $\ell_2$ loss. For example, $\ell_2\rightarrow \ell_1$ means using $\ell_2$ loss as training loss to train autoencoders and using $\ell_1$ loss to calculate restoration error when testing. 
The best performing method in each experiment is in bold.} \small \begin{tabular}{cx{1.5cm}x{1.5cm}x{1.5cm}x{2.5cm}x{0.4cm}x{0.4cm}x{0.4cm}x{0.4cm}x{0.4cm}x{0.4cm}x{0.4cm}x{0.4cm}} \toprule $c_i$ & $\ell_1\rightarrow \ell_1$ & $\ell_1\rightarrow \ell_2$ & $\ell_2\rightarrow \ell_2$ & $\ell_2\rightarrow \ell_1$(OURS)\\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} 0 & 74.2 & 74.1 & 77.8 & \textbf{78.5}\\ 1 & 82.0 & 80.7 & 86.8 & \textbf{89.8}\\ 2 & 82.6 & 81.9 & 85.2 & \textbf{86.1}\\ 3 & 77.2 & 77.1 & 76.0 & \textbf{77.4}\\ \bottomrule \end{tabular} \label{tal:AUC_loss} \end{table*} \renewcommand \arraystretch{0.95} \begin{table*}[!htb] \centering \caption{Average area under the ROC curve (AUROC) in \% of anomaly detection methods on MNIST for ten runs in which digit number ``1'' is taken as normal data. Our method is much more stable than GeoTrans.} \small \begin{tabular}{cx{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.8cm}x{0.7cm}} \toprule Methods & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 & \#8 & \#9 & \#10 & avg & SD\\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} GeoTrans~\cite{golan2018deep} & 91.55 & 72.38 & 81.26 & 82.94 & 87.04 & 87.95 & 87.24 & 81.77 & 85.51 & 85.68 & 84.33 & 5.22\\ OURS & 99.93 & 99.94 & 99.95 & 99.94 & 99.95 & 99.93 & 99.93 & 99.94 & 99.92 & 99.93 & 99.94 & 0.01\\ \bottomrule \end{tabular} \label{tal:stable2} \end{table*} \noindent\textbf{MVTec anomaly detection dataset.} The MVTec Anomaly Detection (MVTec AD) dataset~\cite{bergmann2019mvtec} contains 5354 high-resolution color images of different object and texture categories. It contains normal images intended for training and images with anomalies intended for testing.
The anomalies manifest themselves in the form of over 70 different types of defects such as scratches, dents, and various structural changes. In this paper, we conduct image-level anomaly detection tasks on the MVTec AD dataset to classify normal and anomalous objects. \noindent\textbf{Comparison with state-of-the-art methods.} Table~\ref{tal:MVTec} shows that our ARNet performs better than the baseline AE, GANomaly and GeoTrans. The advantage of ARNet over GeoTrans grows when moving from ideal datasets to the real-world MVTec AD dataset. We conclude that our ARNet is more adaptable to complex real-world environments. \subsection{Experiments on Video Anomaly Detection} Video anomaly detection, as distinguished from image-level anomaly detection, requires detecting anomalous objects and strenuous motions in video data. We here experiment on the recent video anomaly detection benchmark dataset ShanghaiTech~\cite{luo2017revisit}, comparing our method with other state-of-the-art methods. \noindent\textbf{ShanghaiTech.} ShanghaiTech~\cite{luo2017revisit} has $13$ scenes with complex light conditions and camera angles. It contains $130$ anomalous events and over $270,000$ training frames. In the dataset, objects other than pedestrians (\emph{e.g.},~vehicles) and strenuous motion (\emph{e.g.},~fighting and chasing) are treated as anomalies. \noindent\textbf{Comparison with state-of-the-art methods.} Since our ARNet is designed for image-level anomaly detection, unlike some state-of-the-art methods~\cite{luo2017revisit,zhao2017spatio,gong2019memorizing}, we use single frames rather than stacks of neighboring frames as inputs. In order to apply the random rotation transformation, we resize all the images to $480\times 480$. We here use ResNet34~\cite{he2016deep} as our encoder.
Following~\cite{hasan2016learning,luo2017revisit,gong2019memorizing}, we obtain the normality score $p_u$ of the $u$th frame by normalizing the errors to the range $[0, 1]$: \begin{equation} p_u = 1 - \frac{e_u-\min_u(e_u)}{\max_u(e_u)-\min_u(e_u)}, \end{equation} where $e_u$ denotes the restoration error of the $u$th frame in a video episode. A value of $p_u$ closer to $0$ indicates that the frame is more likely anomalous. Table~\ref{tal:shanghai} shows the AUROC values on the ShanghaiTech dataset. Results show that our ARNet outperforms all the state-of-the-art methods, including some temporally dependent ones~\cite{luo2017revisit,zhao2017spatio,gong2019memorizing}. \begin{figure}[tbp] \centering \vspace{-10pt} \includegraphics[width=7.3cm]{img/train_log/L1_auc_mnist_7.jpg} \caption{Training process under three methods. All logs are obtained on the MNIST dataset. It shows the case where the digit ``7'' is the normal class. We attach complete logs for the Fashion-MNIST and MNIST datasets in the appendix.} \label{fig:stable1} \end{figure} \subsection{Ablation Study and Discussion} In this part, we study the contribution of the proposed components of ARNet independently. Table~\ref{tal:AUC_ablation} shows the experimental results of the ablation study on CIFAR-10. It shows that both the graying and random rotation operations improve the performance significantly, especially the random rotation operation. Table~\ref{tal:AUC_loss} shows the ablation study on the selection of the restoration loss. It shows that using $\ell_2$ loss as the training loss and $\ell_1$ loss to calculate the restoration error performs best. Through the ablation study, we conclude that the attribute erasing operations, the network architecture and the loss function we used all contribute independently to boosting model performance. We use image scaling to study the degradation problem of ARNet caused by an ill-chosen attribute erasing operation. Downsampling of images can delete part of the image information.
However, the second assumption is not met, since the deleted pixel-level information can be inferred from neighboring pixels and this rule is the same for normal and anomalous data. We test on CIFAR-10 with $0.5\times$ scaling and obtain 58.8\% AUROC for ARNet, while that of AE is 59.3\%, showing that ARNet degenerates into a vanilla AE with an ill-chosen attribute erasing operation. \begin{figure}[tbph] \centering \includegraphics[width=7.3cm]{img/vis_single.pdf} \caption{Visualization analysis comparing with GANomaly on MNIST. ``Ori'', ``I'' and ``O'' represent original images, inputs and outputs, respectively. Cases with outputs similar to ``Ori'' are considered normal, otherwise anomalous. All visualization results are based on the number ``6'' as normal samples.} \label{img:color} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=7.7cm]{img/cifar10.pdf} \caption{Visualization analysis comparing with GANomaly on CIFAR-10. ``Ori'', ``I'' and ``O'' represent original images, inputs and outputs, respectively. Cases with outputs similar to ``Ori'' are considered normal, otherwise anomalous. All visualization results are based on the class ``horse'' as normal samples.} \vspace{-8pt} \label{img:cifar10_vis} \end{figure} \subsection{Model Stability} Anomaly detection puts higher demands on the stability of model performance than traditional classification tasks. This is because the lack of anomalous data makes validation during training impossible. Thus, model stability tends to be more important, since without validation there is no way to select the best checkpoint for an anomaly detection model during training.
The stability of model performance is mainly reflected in three aspects: 1) whether the model can stably reach convergence within an acceptable number of training epochs in one training attempt; 2) whether the model can reach a stable performance level over multiple independent training attempts under the same training configuration; 3) whether the model can stably achieve good performance across various datasets and training configurations. Figure~\ref{fig:stable1} shows how the AUROC changes during one run, revealing that our model performs stably in the late training phase instead of fluctuating. Thus, with ARNet a highly reliable model can be obtained within an acceptable number of training epochs in this task where validation is practically unavailable. In order to test the stability over multiple training attempts, we rerun GeoTrans~\cite{golan2018deep} and our method ten times on MNIST. Table~\ref{tal:stable2} shows that GeoTrans suffers a larger performance fluctuation compared with our method. For the third aspect, the standard deviation (SD) among classes provides a good measure; the SDs in Table~\ref{tal:AUC1} show that our method is also the most stable in this sense. \begin{figure}[tbp] \begin{minipage}[t]{0.114\textwidth} \centering \includegraphics[width=2cm]{img/shanghaitech-1.jpg} \footnotesize (a) Frame \end{minipage} \begin{minipage}[t]{0.114\textwidth} \centering \includegraphics[width=2cm]{img/shanghaitech-2.jpg} \footnotesize (b) AE-Conv2D \end{minipage} \begin{minipage}[t]{0.114\textwidth} \centering \includegraphics[width=2cm]{img/shanghaitech-3.jpg} \footnotesize (c) ARNet(G) \end{minipage} \begin{minipage}[t]{0.114\textwidth} \centering \includegraphics[width=2cm]{img/shanghaitech-4.jpg} \footnotesize (d) ARNet(G+R) \end{minipage} \caption{Restoration error maps of AE and ARNet on an anomalous frame of ShanghaiTech. Chasing is the anomalous event in this frame (red bounding box). ``G'' means graying and ``R'' means random rotation transformation.
ARNet can significantly highlight the anomalous parts in the scene.} \label{fig:shanghaitech} \end{figure} \begin{figure*}[tbph] \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/1vs7_1.pdf} \footnotesize (a) 1-Autoencoder \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/1vs7_2.pdf} \footnotesize (b) 1-GANomaly \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/1vs7_3.pdf} \footnotesize (c) 1-ARNet \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/4vs9_1.pdf} \footnotesize (d) 2-Autoencoder \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/4vs9_2.pdf} \footnotesize (e) 2-GANomaly \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/4vs9_3.pdf} \footnotesize (f) 2-ARNet \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/6vs5_1.pdf} \footnotesize (g) 3-Autoencoder \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/6vs5_2.pdf} \footnotesize (h) 3-GANomaly \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/6vs5_3pdf.pdf} \footnotesize (i) 3-ARNet \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/8vs2_1.pdf} \footnotesize (j) 4-Autoencoder \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/8vs2_2.pdf} \footnotesize (k) 4-GANomaly \end{minipage} \begin{minipage}[t]{0.158\textwidth} \centering \includegraphics[width=2.8cm]{img/tsne/8vs2_3.pdf} \footnotesize (l) 4-ARNet \end{minipage} \vspace{8pt} \caption{T-SNE visualization of latent spaces of autoencoder, GANomaly and ARNet on CIFAR-10. 
The corresponding AUROCs of anomaly detection are marked in the upper left corners.} \label{fig:tsne_cifar} \end{figure*} \begin{figure}[tbp] \centering \includegraphics[width=7.3cm]{img/tsne_mnist.pdf} \caption{T-SNE visualization of latent spaces of ARNet on numbers 6 and 9 in the handwritten dataset MNIST. Number 6 is set as the normal class.} \vspace{-8pt} \label{img:tsne_mnist} \end{figure} \subsection{Visualization Analysis}\label{Visualization Analysis} \noindent\textbf{Anomaly detection on images.} In order to demonstrate the effectiveness of the attribute restoration framework for anomaly detection in a simple and straightforward way, we visualize some restoration outputs of ARNet, compared with GANomaly, in Figure~\ref{img:color} and Figure~\ref{img:cifar10_vis}. For MNIST and CIFAR-10, all visualization results are based on the same experimental setting, in which the number ``6'' and the class ``horse'' are considered normal samples, respectively. The first column ``Ori'' represents the original images. ``I'' means images after the Attribute Erasing Module. Note that the restoration error is calculated between the outputs and the original images. Cases with low restoration error relative to the original images are considered normal, otherwise anomalous. For example, the bottom row in Figure~\ref{img:color} shows the testing results for the number ``9''. Intuitively, the four outputs are far different from ``Ori'' and are thus recognized as anomalous. Except for the number ``6'', the other numbers get either wrongly oriented or ambiguous restoration outputs from our ARNet. This enlarges the restoration error gap between normal and anomalous data. However, all the outputs from GANomaly are similar to the ground truth, meaning that it is less capable of distinguishing between normal and anomalous data. In addition, for the anomalous cases in Figure~\ref{img:cifar10_vis}, the restoration errors are even larger because ARNet uses the wrong colors for image restoration.
All the outputs show that ARNet attempts to restore the input images using the orientation or color distribution of the normal classes learned from the training set. \noindent\textbf{Anomaly detection on videos.} Figure~\ref{fig:shanghaitech} shows restoration error maps of AE and ARNet on an anomalous frame of ShanghaiTech, in which the highlighted regions (regions with high restoration error) are considered anomalous. In this frame, human chasing is the anomalous event (red bounding box in Figure~\ref{fig:shanghaitech} (a)). Due to good model generalization, AE reconstructs this frame properly, even including the anomalous event (human chasing), leading to a low reconstruction error (the reconstruction error map is almost all black in Figure~\ref{fig:shanghaitech} (b)). Thus, AE cannot correctly detect this anomalous event. On the contrary, ARNet cannot restore the anomalous region properly and significantly highlights the anomalous regions in the restoration error maps in Figure~\ref{fig:shanghaitech} (c and d). This is the reason why ARNet outperforms the state-of-the-art methods in video anomaly detection. \noindent\textbf{T-SNE visualization for latent spaces.} In this section, we first use CIFAR-10 to show more T-SNE visualization results, compared with the baseline AE and GANomaly. As shown in Figure~\ref{fig:tsne_cifar}, the latent-space features of ARNet are more discriminative than those of the other baselines. It should be pointed out that this result does not directly indicate that ARNet's anomaly detection performance will be higher than that of other anomaly detection methods, because performance also depends on the decoder and on how we link the surrogate task to the downstream anomaly detection task. However, the results in Figure~\ref{fig:tsne_cifar} at least show that ARNet can extract more meaningful features than the other two methods.
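The scoring mechanism behind these observations — training with $\ell_2$ loss while scoring anomalies by the per-sample $\ell_1$ restoration error, min-max normalized into the normality score for video episodes — can be sketched with the following minimal NumPy helpers (hypothetical names of ours, not the released code; images are assumed flattened per sample):

```python
import numpy as np

def restoration_error(restored, original):
    """Per-sample l1 restoration error, used at test time (training minimizes l2 loss)."""
    return np.abs(restored - original).reshape(len(original), -1).mean(axis=1)

def normality_scores(errors):
    """Min-max normalize errors within an episode: p_u = 1 - (e_u - min)/(max - min)."""
    e = np.asarray(errors, dtype=float)
    return 1.0 - (e - e.min()) / (e.max() - e.min())
```

Frames or images with a large restoration error (equivalently, a normality score near $0$) are the ones flagged as anomalous.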
To further illustrate the different mechanisms of the reconstruction based and restoration based methods, Figure~\ref{img:tsne_mnist} shows a more specific case. The data we used are the handwritten numbers 6 and 9 from the MNIST dataset, with number 6 set as the normal class. We use the random rotation operation in this task. As can be seen from Figure~\ref{img:tsne_mnist}, due to the random rotation operation, T-SNE clusters the data into four categories. For example, the number 6 without rotation and the number 9 rotated 180 degrees are classified into the same category. Since the feature maps become higher-dimensional in the decoder stage, they are more difficult to visualize. But it is not difficult to imagine that, in order to restore the handwritten number 6 to the original images with the correct orientation, the decoder simply needs to map these four categories to the single category with the correct orientation. This simple mapping causes a large image restoration error for the number 9. This is the biggest difference in mechanism between image restoration and image reconstruction. \section{Conclusion and Future Work} In this paper, we propose a novel technique named Attribute Restoration Network (ARNet) for anomaly detection. An Attribute Erasing Module is employed to erase certain attributes. ARNet is then forced to learn the attribute-related features to restore the original data. The restoration error is expected to be a good indicator of anomalous data. We experiment with two simple but effective attribute erasing operations, graying and random rotation, and show that our method not only outperforms state-of-the-art methods but also achieves high stability. Notably, there are still more operations to explore, which are likely to further improve the performance of ARNet for anomaly detection. We look forward to the addition of more operations and the exploration of a more intelligent operation selection strategy.
In addition, this way of learning feature embeddings can also be applied to more fields, opening avenues for future research. \bibliographystyle{IEEEtran}
\section{Introduction} A full understanding of the nature and evolutionary characteristics of the extragalactic sources discovered at mid-infrared wavelengths by the Spitzer Space Telescope ($Spitzer$) requires comparison of the spectroscopic and photometric characteristics of these sources. The astrophysical nature of individual sources can be understood from spectra obtained with The Infrared Spectrograph on $Spitzer$ (IRS, Houck et al. 2004). If characteristic mid-infrared spectra for a variety of the sources are known, flux distributions of sources within photometric surveys with the Multiband Imaging Photometer for $Spitzer$ (MIPS, Rieke et al. 2004) can be compared with source count models \citep[e.g. ][]{pap04} to determine the evolution in the universe of dusty sources, primarily starbursts and active galactic nuclei (AGN). The fundamental method to determine representative mid-infrared spectral characteristics is to obtain spectra with the IRS for unbiased samples. The primary limitation is that IRS spectra can only be obtained for significant numbers of sources with f$_{\nu}$(24$\mu$m) $\ga$ 1 mJy. Despite this limitation, it is crucial to determine as thoroughly as possible the spectral characteristics of well-defined, flux-limited samples. In the present paper, we take an initial step toward this determination using a sample of sources discovered in MIPS surveys and defined only by f$_{\nu}$(24$\mu$m) $>$ 10 mJy. Our objective in observing this 10 mJy sample is to determine the range of spectral characteristics among sources defined only by discovery with $Spitzer$ based on mid-infrared flux. The result allows a determination of how representative are spectra of the many sources observed with the IRS but chosen using various other selection criteria, such as optical spectral classification or prior knowledge of the source luminosities (e.g. the many available IRS spectra of Ultraluminous Infrared Galaxies). 
Previously, we published a catalog of 50 sources \citep{hou07} from the sample determined from the MIPS survey of 8.2 deg$^{2}$ within the Bootes field of the NOAO Deep Wide-Field Survey (NDWFS, Jannuzi et al. 1999), although only the spectra of the starburst sources were presented. In the present paper, we present the complete spectral sample of the 25 extragalactic sources having f$_{\nu}$(24$\mu$m) $>$ 10 mJy in the $Spitzer$ First Look Survey (FLS) point source catalog \citep{fad06}; we also present and discuss the spectra of the AGN within the Bootes sample. Within the combined Bootes and FLS surveys, there are a total of 100 extragalactic sources with f$_{\nu}$(24$\mu$m) $>$ 10 mJy. By combining new observations in our programs with existing observations in the $Spitzer$ archive from other programs, we have complete low-resolution IRS spectra for 35 of 50 Bootes sources having f$_{\nu}$(24$\mu$m) $>$ 10 mJy and for all 25 FLS point sources having f$_{\nu}$(24$\mu$m) $>$ 10 mJy. The present paper is an analysis of the resulting 60 spectra of sources having f$_{\nu}$(24$\mu$m) $>$ 10 mJy. Sources included within our sample were selected using only the flux limit criterion of f$_{\nu}$(24$\mu$m) $>$ 10 mJy with no other selection criteria. Our sample is not, however, a complete sample of all sources in Bootes and the FLS having f$_{\nu}$(24$\mu$m) $>$ 10 mJy. This is primarily because nearby, extended sources (z $\la$ 0.05) having previously known optical redshifts and optical spectral classifications (usually consistent with starbursts) were not observed with the IRS. In particular, no IRS spectra were obtained for the 25 extragalactic sources having f$_{\nu}$(24$\mu$m) $>$ 10 mJy in the FLS extended source catalog. Six of these 25 sources are within NGC or IC galaxies; 24 of the 25 sources have optical redshifts in the National Extragalactic Database, with an average redshift of 0.0362. 
Because of the difference between a flux limited sample and a complete sample, we do not use the present flux limited sample to determine luminosity functions or to determine the quantitative fractions of sources which fall within various spectroscopic categories, such as starbursts or AGN. Our primary objective is to obtain a census of sources covering a wide range of luminosities in order to determine mid-infrared spectral characteristics as a function of luminosity. For this objective, the most important goal is to include comparable numbers of sources within different luminosity bins. It was not efficient, therefore, to obtain numerous IRS spectra of nearby starbursts having similar luminosities because the similarity among IRS spectra of such starbursts is well established \citep{bra06}. \section{Observations} \subsection{FLS Sample and New IRS Observations} In the point source catalog of the FLS \citep{fad06}, there are 38 sources with f$_{\nu}$(24$\mu$m) $>$ 10 mJy. Of these, 13 are bright galactic stars. The remaining 25 sources have been observed with the IRS: 19 are in our program 40038, 4 in archival program 20128 (G. Lagache, P.I.), one in archival program 20083 (M. Lacy, P.I.), and one in archival program 40539 (G. Helou, P.I.). Characteristics and observational details of these 25 sources are given in Table 1. $Spitzer$ spectroscopic observations were made with the IRS\footnote{The IRS was a collaborative venture between Cornell University and Ball Aerospace Corporation funded by NASA through the Jet Propulsion Laboratory and the Ames Research Center.} Short Low module in orders 1 and 2 (SL1 and SL2) and with the Long Low module in orders 1 and 2 (LL1 and LL2), described in \citet{hou04}. These give low resolution spectral coverage from $\sim$5\,$\mu$m~ to $\sim$35\,$\mu$m. For our new observations, sources were placed on the slit by using the IRS peakup mode with the blue camera (13.5$\mu$m~ $<$ $\lambda$ $<$ 18.7$\mu$m). 
All images with the source in one of the two nod positions on each slit were coadded to obtain the image of the source spectrum. The subtracted background was determined from coadded background images combining both nod positions having the source in the other slit (i.e., both nods on the LL1 slit when the source is in the LL2 slit produce LL1 images of background only). The coadded source images minus the coadded background images were used for the spectral extraction, giving two independent extractions of the spectrum for each order. These independent spectra were compared to reject any highly outlying pixels in either spectrum, and a final mean spectrum was produced. The extraction of one dimensional spectra from the two dimensional images was done with the SMART analysis package \citep{hig04}, beginning with the Basic Calibrated Data products, version 15, of the $Spitzer$ flux calibration pipeline. Final spectra were boxcar-smoothed to the approximate resolution of the different IRS modules (0.2\,$\mu$m~ for SL1 and SL2, 0.3\,$\mu$m~ for LL2, and 0.4\,$\mu$m~ for LL1). Spectra for all FLS sources in Table 1 are illustrated among the spectra in Figures 1-4. Spectra for FLS starbursts are in Figure 1 and are truncated at 20\,$\mu$m~ in the rest frame because of the absence of any significant features in the low resolution spectra beyond that wavelength; most Bootes starburst spectra were illustrated in \citet{hou07}. Spectra for Bootes and FLS AGN in Tables 4 and 5 are illustrated in Figures 2-4. Measured spectral parameters for all starbursts with IRS spectra from FLS and Bootes are summarized in Tables 2 and 3, and spectral parameters for all AGN are in Tables 4 and 5. For purposes of classifying sources as starburst or AGN for inclusion in the Tables and Figures, sources with strong polycyclic aromatic hydrocarbon (PAH) features are defined as starbursts, and sources with silicate absorption or emission features are defined as AGN. 
While such classifications could be further quantified, and while there are certainly composite sources, the systematic differences between spectra defined by this simple classification are obvious when comparing the starbursts in Figure 1 with the AGN in Figures 2-4. \subsection{Sources with PAH features} All of the sources from the Bootes and FLS samples which are characterised primarily by PAH emission are listed in Tables 2 and 3. These Tables give measured fluxes and equivalent widths (EW) for PAH emission features and for the [NeIII] 15.56\,$\mu$m~ emission line; continuum flux densities at 5.5\,$\mu$m~ and 15\,$\mu$m~ are also given. Equivalent widths and fluxes are given for the 6.2$\mu$m and 11.3$\mu$m features; these are measured within SMART as single Gaussians on top of a linear continuum, using the continuum baseline between 5.5\,$\mu$m~ and 6.9\,$\mu$m~ for the 6.2\,$\mu$m~ feature, between 10.4\,$\mu$m~ and 12.2\,$\mu$m~ for the 11.3\,$\mu$m~ feature, and between 14.9\,$\mu$m~ and 16.2\,$\mu$m~ for the 15.6\,$\mu$m~ feature. As discussed by \citet{hou07} and by \citet{wee08}, the stronger 7.7$\mu$m feature is measured only by the flux density at the peak of the feature, which is a combination of feature strength and continuum strength, because of the large uncertainty in separating the underlying continuum from the PAH feature. Most values for the Bootes sources in Tables 2 and 3 are reproduced from \citet{hou07}. Results for all FLS sources and for archival Bootes sources in program 20113 (H. Dole, P.I.) derive from our new spectral extractions. The continuum strength which we choose for comparing source luminosities is measured by the flux density at rest frame 15$\mu$m. This wavelength is chosen because it falls between PAH features in starbursts, and between silicate absorption or emission features in AGN. As a result, it accurately represents the continuum of the underlying dust emission. 
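The Gaussian-on-linear-continuum measurement described above can be illustrated with a minimal numpy-only sketch. This is a stand-in for the SMART fits, not the actual pipeline: the continuum is a straight line fit to two side windows, and the equivalent width is the integral of the continuum-normalized excess over the feature window. The window edges and the synthetic feature below are illustrative assumptions.

```python
import numpy as np

def feature_ew(wave, flux, feature=(5.9, 6.5), cont=((5.5, 5.8), (6.6, 6.9))):
    """Equivalent width (microns) of an emission feature over a linear continuum.

    The continuum is a degree-1 polynomial fit to the two side windows in
    `cont`; EW is the sum of (flux - continuum)/continuum over the feature
    window times the grid spacing, giving EW in microns as in Tables 2 and 3.
    Window edges here are illustrative, not the SMART baselines.
    """
    in_cont = ((wave >= cont[0][0]) & (wave <= cont[0][1])) | \
              ((wave >= cont[1][0]) & (wave <= cont[1][1]))
    slope, intercept = np.polyfit(wave[in_cont], flux[in_cont], 1)
    continuum = slope * wave + intercept
    in_feat = (wave >= feature[0]) & (wave <= feature[1])
    dlam = wave[1] - wave[0]                      # uniform grid assumed
    return float(np.sum((flux[in_feat] - continuum[in_feat])
                        / continuum[in_feat]) * dlam)

# Synthetic 6.2 um feature: a Gaussian of integrated flux 2 * 0.05 * sqrt(2*pi)
# on a flat continuum of 1, so the EW should come out near 0.25 um.
wave = np.linspace(5.5, 6.9, 1401)
flux = 1.0 + 2.0 * np.exp(-0.5 * ((wave - 6.2) / 0.05) ** 2)
ew = feature_ew(wave, flux)
```

For a flat continuum the EW equals the integrated line flux divided by the continuum level, which is why the same fit yields both quantities reported in the Tables.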
Continuum fluxes, redshifts, and resulting continuum luminosities $\nu$L$_{\nu}$ (15$\mu$m) for all PAH sources are given in Table 3. This Table also includes a continuum measure at 5.5\,$\mu$m~, although for PAH sources some of this "continuum" may arise from broad wings of PAH emission. IRS redshifts in Table 3 are determined from PAH emission features, assuming rest wavelengths of 6.2$\mu$m, 7.7$\mu$m, 8.6$\mu$m, and 11.3$\mu$m. For the 29 sources also having optical redshifts from spectra in the Sloan Digital Sky Survey (SDSS, Gunn et al. 1998), the average difference in z between IRS and optical redshifts is 0.0013. The SDSS optical classifications all agree with the IRS starburst classification for the sources in Table 3, except for 3 sources noted, which are optically classified as Seyfert 2, and one as a narrow-line AGN. Spectra of PAH sources displayed in Figure 1 and Figures 7-11 are normalized by the peak flux density at 7.7$\mu$m. This is chosen because f$_{\nu}$(7.7$\mu$m) is a well defined parameter for comparing with PAH sources at high redshift, where f$_{\nu}$(7.7$\mu$m) is the brightest observed portion of the rest-frame spectrum and strongly influences the MIPS flux densities observed for faint sources \citep{wee08}. To provide a measure of spectral slope and of dispersion among spectral shapes, the ratios of continuum flux density at rest-frame 24\,$\mu$m~ and 15\,$\mu$m~ to the peak flux density f$_{\nu}$(7.7$\mu$m) are also given in Table 3. \subsection{Sources with Silicate features} Figures 2 and 3 show the 16 sources from Bootes and FLS with silicate emission, and Figure 4 shows the 8 sources with silicate absorption. Spectral parameters for these silicate sources, which we classify as AGN, are listed in Tables 4 and 5. The AGN classifications from the IRS spectra are consistent with the optical SDSS classifications (noted in Table 5) when optical spectra are available (16 of the 24 sources). 
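The IRS redshift determination from PAH features described above amounts to averaging $z = \lambda_{\rm obs}/\lambda_{\rm rest} - 1$ over whichever features are detected. A minimal sketch, with invented observed centroids for illustration:

```python
PAH_REST_UM = [6.2, 7.7, 8.6, 11.3]   # rest wavelengths used in the text

def redshift_from_pah(observed_um):
    """Average z = lambda_obs/lambda_rest - 1 over the detected PAH features.

    `observed_um` pairs each observed centroid (microns) with its rest
    wavelength by position; None marks a feature that was not detected.
    """
    zs = [obs / rest - 1.0
          for obs, rest in zip(observed_um, PAH_REST_UM)
          if obs is not None]
    if not zs:
        raise ValueError("no PAH features detected")
    return sum(zs) / len(zs)

# A source at z = 0.30 shows 6.2 um at 8.06 um, 7.7 um at 10.01 um,
# and 11.3 um at 14.69 um (8.6 um feature not detected here).
z = redshift_from_pah([8.06, 10.01, None, 14.69])
```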
IRS redshifts for the silicate sources in Table 5 are not as accurate as for the PAH sources in Table 3, because fewer narrow spectral features are available to measure. Only 8 silicate sources have confident redshifts from both IRS and SDSS, and the average difference in z for these is 0.0088. Because of the weakness of PAH features in these sources, we list in Table 4 measures (usually upper limits) only for the 6.2\,$\mu$m~ feature, and also give measures or limits for the [NeIII] 15.56\,$\mu$m~ emission line. Table 4 also includes optical depth $\tau_{si}$ as a measure of depth for the 9.7\,$\mu$m~ silicate absorption feature. This is defined as $\tau_{si}$ = ln[$f_{\nu}$(cont)/$f_{\nu}$(abs)], for $f_{\nu}$(abs) measured at the wavelength of maximum depth for absorption and $f_{\nu}$(cont) being an unabsorbed continuum at the same wavelength, extrapolated from either side of the absorption feature as in \citet{spo07}. Strength of silicate emission features is not measured because of uncertainty in defining the underlying continuum. The continuum flux density and luminosity at 15\,$\mu$m~ rest frame wavelength are given for the AGN sources in Table 5, as well as continuum flux density at 5.5$\mu$m. For the AGN sources, which do not have strong 7.7\,$\mu$m~ PAH emission, spectra are normalized using the continuum at 8$\mu$m rest frame. This wavelength is chosen because it defines a localized spectral peak of the continuum for sources with strong silicate absorption; this peak arises because of absorption features redward and blueward of the peak. The values of f$_{\nu}$(8$\mu$m) are given in Table 5, along with the ratios of continuum flux densities at 15\,$\mu$m~ and 24$\mu$m to f$_{\nu}$(8$\mu$m). Of the 24 AGN in Table 4, 17 have redshifts from SDSS, and 3 additional sources without SDSS redshifts have firm IRS redshifts from well defined spectral features (strong silicate absorption or atomic emission lines). 
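The silicate optical depth defined above, $\tau_{si}$ = ln[$f_{\nu}$(cont)/$f_{\nu}$(abs)], is a one-line computation; a small sketch with made-up flux densities:

```python
import math

def silicate_tau(f_cont, f_abs):
    """tau_si = ln[f_cont / f_abs] for the 9.7 um absorption feature.

    f_cont is the unabsorbed continuum extrapolated across the feature and
    f_abs the observed flux density at the wavelength of maximum depth.
    """
    if f_abs <= 0 or f_cont <= 0:
        raise ValueError("flux densities must be positive")
    return math.log(f_cont / f_abs)

# Hypothetical example: an observed depth of half the extrapolated
# continuum gives tau_si = ln(2) ~ 0.69.
tau = silicate_tau(f_cont=10.0, f_abs=5.0)
```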
The IRS spectra in Figures 2-4 confirm the SDSS redshifts for 8 sources; source AGN23 has an IRS spectrum that indicates z $\sim$ 1.1 compared to the SDSS redshift of 0.35. This source is shown in Figure 2 using the IRS redshift but, because of the redshift ambiguity, is not used in the average spectra discussed below. Sources AGN3, AGN5, AGN9 and AGN17 do not have SDSS redshifts and do not have strong silicate absorption or atomic emission features that enable a confident IRS redshift. Of these four, sources AGN5, AGN9, and AGN17 are shown in Figures 2-4 at the estimated IRS redshift. These estimates derive from comparisons to other spectra of known redshift by seeking the best match for the rest frame spectra of these 3 sources. Because of the uncertainty in z, however, sources AGN5, AGN9, and AGN17 are not used in deriving our average spectra. Source AGN3 is particularly interesting. \citet{bro06} present a redshift distribution of all type 1 QSOs in the Bootes survey field with f$_{\nu}$(24$\mu$m) $>$ 1 mJy having optical redshifts determined from the AGN and Galaxy Evolution Survey (AGES, Cool et al. 2006). Although individual AGES redshifts are not published, the distribution of redshifts and 24\,$\mu$m~ fluxes in Figure 8 of Brown et al. shows that a type 1 QSO is present in Bootes having f$_{\nu}$(24$\mu$m) $>$ 10 mJy and z = 2.4. This source must be in our 10 mJy Bootes sample because sources were chosen from the same survey, but it is not among the sources with either SDSS redshifts or firm IRS redshifts. A luminous type 1 QSO should show strong silicate emission \citep{hao05}. Of the three Bootes sources in Table 4 without firm SDSS or IRS redshifts, source AGN3 is the only source whose IRS spectrum is consistent with a strong silicate emission source at z = 2.4. As seen in Figure 3, the rest frame spectrum of this source shows an increasing continuum at $\sim$ 10\,$\mu$m~ that is just as expected for the onset of a strong 9.7\,$\mu$m~ silicate emission feature. 
We note also in Figure 3 the close similarity of the rest-frame spectrum of source AGN3 to that of source AGN21, a type 1 QSO with a known SDSS redshift and the most luminous source in our 10 mJy sample, as discussed below. The MIPS flux of 10.3 mJy for source AGN3 \citep{hou07} is also in agreement with the flux of 10.5 mJy plotted by \citet{bro06}. We conclude, therefore, that source AGN3 is the QSO at z = 2.4 identified by \citet{bro06}, and we include this source in our average spectra. \section{Discussion} \subsection{Distribution of Redshifts and Luminosities} Each source in Tables 2-5 is individually interesting, and many comparisons could be made among various properties because extensive multiwavelength data already exist for most of these objects. For the present, we are not undertaking such an overall multiwavelength analysis except for noting in Tables 3 and 5 the general agreement between optical and IRS spectral classifications. The primary result we discuss here is the relation between mid-infrared luminosity and the infrared spectral characteristics. This result is crucial to understanding the nature of sources which are seen in surveys of the mid-infrared sky, and in using these surveys to determine evolutionary characteristics of starbursts and AGN. This 10 mJy sample clearly shows that the most luminous sources are those with AGN characteristics (silicate features) rather than starburst characteristics (PAH features). In Figure 5, the redshifts and 24\,$\mu$m~ fluxes are compared for sources with and without measurable PAH features. It is seen that, while the flux distributions are similar, the redshifts are generally much higher for the AGN, implying greater luminosities. This result is made clear in Figure 6 where luminosity distributions are shown. Luminosities are compared using rest-frame $\nu$L$_{\nu}$(15$\mu$m) in ergs s$^{-1}$. (Log $\nu$L$_{\nu}$(15$\mu$m)(L$_{\odot}$) = log $\nu$L$_{\nu}$(15$\mu$m)(ergs s$^{-1}$) - 33.59.) 
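The parenthetical conversion above (the $-33.59$ offset) and the construction of $\nu$L$_{\nu}$ from a flux density can be sketched as follows. This is a minimal illustration, not the paper's pipeline: it assumes the rest-frame 15\,$\mu$m flux density and the luminosity distance are already in hand (no k-correction shown), and adopts L$_{\odot}$ = 3.846e33 ergs s$^{-1}$, which reproduces the quoted offset.

```python
import math

LSUN_ERG_S = 3.846e33   # erg/s; log10 is ~33.59, the offset quoted in the text

def log_nuLnu(f_nu_mjy, d_l_cm, lam_rest_um):
    """log10 of rest-frame nu*L_nu in erg/s from a flux density.

    nu*L_nu = 4*pi*d_L^2 * nu*f_nu, with 1 mJy = 1e-26 erg s^-1 cm^-2 Hz^-1
    and nu = c/lambda evaluated at the rest wavelength.
    """
    nu_hz = 2.998e14 / lam_rest_um            # c = 2.998e14 um/s
    nuLnu = 4.0 * math.pi * d_l_cm ** 2 * nu_hz * f_nu_mjy * 1e-26
    return math.log10(nuLnu)

def to_solar(log_erg_s):
    """Apply the -33.59 conversion from the text: log erg/s -> log L_sun."""
    return log_erg_s - math.log10(LSUN_ERG_S)

# Hypothetical example: 10 mJy at rest-frame 15 um for an assumed
# luminosity distance of 1e27 cm.
log_l = log_nuLnu(10.0, 1.0e27, 15.0)
```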
As mentioned previously, a wavelength of 15\,$\mu$m~ is used for comparison of intrinsic luminosities because the 15\,$\mu$m~ continuum is a pure dust continuum, without contamination by PAH or emission line spectral features and is between the 10\,$\mu$m~ and 17\,$\mu$m~ silicate features. Being an intermediate mid-infrared wavelength, it is not as dominated by luminosity from the hot dust associated with AGN or the cool dust associated with a starburst as would be a shorter or longer wavelength. In Figure 6, luminosities $\nu$L$_{\nu}$(15$\mu$m) are compared to the EW of the 6.2\,$\mu$m~ PAH feature. This Figure shows our most important results. It shows the dramatic range in dust continuum luminosities that is covered by the sample, a factor of 2.5 x 10$^{4}$ in luminosity, a range that encompasses virtually all extragalactic sources observed so far with $Spitzer$, regardless of selection criterion. The faintest source is source SB11 in Tables 2 and 3, a blue compact dwarf in Bootes with log $\nu$L$_{\nu}$ (15$\mu$m) = 41.85 (ergs s$^{-1}$); the most luminous sources are sources AGN11 and AGN21 in Tables 4 and 5, with AGN11 being an absorbed silicate source and AGN21 an emission silicate source, with log $\nu$L$_{\nu}$ (15$\mu$m) = 46.2. (Source AGN3 in Table 5, the QSO at z = 2.4 discussed in section 2.3, does not have a measurement of $\nu$L$_{\nu}$ (15$\mu$m) because the observed rest-frame spectrum does not extend to 15\,$\mu$m~. It certainly is among the sources with log $\nu$L$_{\nu}$(15$\mu$m) $>$ 45.0 (ergs s$^{-1}$) so is included in the average for this luminosity bin discussed below in section 3.2.) Figure 6 shows a well defined gap in the distribution of PAH strength at 0.4$\mu$m~ $<$ EW(6.2$\mu$m) $<$ 0.5$\mu$m. 21 of the 60 sources show EW(6.2$\mu$m) $>$ 0.47$\mu$m, and the remaining sources all have EW(6.2$\mu$m) $<$ 0.37$\mu$m. 
We also note that the "pure" starbursts in \citet{bra06} show a lower limit for EW(6.2$\mu$m) at about this value; 21 of 22 Brandl et al. starbursts have EW(6.2$\mu$m) $>$ 0.4$\mu$m. We interpret these empirical results for both our 10 mJy sample and the Brandl et al. sample to mean that sources with EW(6.2$\mu$m) $>$ 0.4$\mu$m~ are pure starbursts. Sources in which the strength of the PAH feature is diluted by additional mid-infrared continuum arising from an AGN would show EW(6.2$\mu$m) $<$ 0.4$\mu$m, and we consider such sources as composite starburst+AGN. Sources with the smallest values of EW(6.2$\mu$m) are dominated by the AGN component. These interpretations lead to the classifications shown in Figure 6. For starbursts with EW(6.2$\mu$m)$>$ 0.4$\mu$m, the median log $\nu$L$_{\nu}$ (15$\mu$m) = 43.1. For composite sources with 0.1$\mu$m~ $<$ EW(6.2$\mu$m)$<$ 0.4$\mu$m, the median log $\nu$L$_{\nu}$ (15$\mu$m) = 44.0. For AGN sources with EW(6.2$\mu$m)$<$ 0.1$\mu$m, the median log $\nu$L$_{\nu}$ (15$\mu$m) = 45.0. \subsection{Average Spectra for Different Luminosities} The change in the nature of mid-infrared spectra as a function of luminosity is also clearly shown in average spectra binned with luminosity. We consider 4 bins of luminosity: log $\nu$L$_{\nu}$(15$\mu$m) $>$ 45.0 (ergs s$^{-1}$), 45.0 $>$ log $\nu$L$_{\nu}$(15$\mu$m) $>$ 44.0, 44.0 $>$ log $\nu$L$_{\nu}$(15$\mu$m) $>$ 43.0, and 43.0 $>$ log $\nu$L$_{\nu}$(15$\mu$m) $>$ 42.0. The normalized average spectra within these bins are shown in Figure 7 and tabulated in Table 6. The most important conclusion from Figure 7 is the progressive increase in PAH strength as luminosity $\nu$L$_{\nu}$(15$\mu$m) decreases. While these average spectra can be used as empirical spectral templates for different luminosities, the dispersion among spectra illustrates the uncertainties which arise when adopting a specific template. 
To allow an estimate of this dispersion within a luminosity bin, Figures 8-11 show the continuum flux densities at 15\,$\mu$m~ and 24\,$\mu$m~ relative to the normalizing peak at 7.7$\mu$m~ (starbursts) or 8$\mu$m~ (AGN) for all of the individual sources which enter an average. The Figures also show the most extreme sources in each bin. Not all sources cover all rest frame wavelengths, because of varying redshifts; for example, only 3 of the most luminous sources are at sufficiently low redshift to allow a measure of rest frame f$_{\nu}$(24$\mu$m). Average spectra are determined within different wavelength ranges using only the sources whose rest-frame spectra include those wavelength ranges. \subsection{Star Formation Rate Indicators for Starbursts} We use the classification "starburst galaxies", or "starbursts", to mean galaxies whose spectra and luminosity are dominated by the consequences of on-going star formation. A fundamental measurement of importance for starbursts is the measure of star formation rate (SFR). Mid-infrared spectra of starbursts such as those in Figure 1 contain data for three indicators that give independent measures of SFR. These are the luminosity of the PAH emission features \citep[e.g. ][]{for04,bra06,hou07}, the luminosity of the dust continuum \citep[e.g. ][]{ken98,cal07}, and the luminosity of the Neon emission lines \citep{ho07}. These three indicators measure in different ways the luminosity arising from the young stars of the starburst. The PAH emission is excited by photons penetrating the photodissociation region (PDR) at the boundary between the HII region and the surrounding molecular cloud; the dust continuum arises from dust intermixed in the HII region and heated by the stars; and the emission lines arise within the HII region from ionizing photons arising in the young stars. 
If the geometry of the star-forming environment, the spectral distribution of stellar radiation, the temperature distribution and nature of the dust, and the gas to dust ratio were the same for all starbursts, any one of these parameters should give the same result for SFR. Of course, all starbursts are not the same among these many characteristics, so different indicators of SFR give different results depending on these characteristics. Although we do not attempt here an evaluation of the relative merits of various estimates of SFR, we can use the data for our 10 mJy sample of starbursts to estimate the dispersion that arises among these three different methods for measuring SFR. For this comparison, we utilize the 7.7\,$\mu$m~ PAH feature, the [NeIII] 15.56\,$\mu$m~ feature, and the strength of the dust continuum at 24\,$\mu$m~. The flux $\nu$f$_{\nu}$(7.7$\mu$m) at the peak of the PAH feature measures the luminosity in the photodissociation region, the flux of the Neon line measures the ionizing luminosity within the HII region, and the flux density of the continuum measures the emission from warm dust. The measures used for comparison are given in Tables 2 and 3 and illustrated in Figures 12 and 13. Parameters are compared to EW(6.2\,$\mu$m~) which determines if a source spectrum arises strictly from a starburst without AGN contamination, as discussed in section 3.1. In Figure 12, the distribution of the ratio $\nu$f$_{\nu}$(7.7$\mu$m) to f([NeIII]) is shown for the sources in Tables 2 and 3 with PAH features. This is a measure of luminosity arising in the PDR compared to that arising in the HII region. Including the limits which are shown, there is a weak trend in Figure 12 for PAH EW to increase as the ratio of PAH to [NeIII] increases. This trend is as expected if the weak PAH sources contain an AGN contribution to the [NeIII] luminosity. The comparison of SFRs is intended to apply only for sources which are pure starbursts, without an AGN contribution. 
These pure starbursts are taken as defined in Figure 6, based on a criterion that EW(6.2$\mu$m) $>$ 0.4$\mu$m. For these pure starbursts, the median and dispersion in the PAH to [NeIII] ratio in Figure 12 are log [$\nu$f$_{\nu}$(7.7$\mu$m)/f([NeIII])] = 2.8 $\pm$ 0.3. This result indicates that we should expect a dispersion by a factor of $\pm$ 2.0 between SFR estimates from PAH compared to those from [NeIII], even for pure starbursts. In Figure 13, the distribution of the ratio f$_{\nu}$(24$\mu$m) to f$_{\nu}$(7.7$\mu$m) is shown for the sources in Tables 2 and 3 with PAH features. This is a measure of luminosity arising from warm dust within both the HII region and PDR compared to luminosity arising in the PDR. There is a weak trend for this ratio to change with EW(6.2$\mu$m). This trend is expected, simply because EW(6.2$\mu$m) is also a measure of PAH strength compared to the continuum. For the pure starbursts with EW(6.2$\mu$m) $>$ 0.4$\mu$m, the median and dispersion of the ratio are log[f$_{\nu}$(24$\mu$m)/f$_{\nu}$(7.7$\mu$m)] = 0.3 $\pm$ 0.3; the dispersion is the same as for the dispersion in PAH to [NeIII]. This result indicates that either PAH measures or dust continuum measures would give the same value for SFR to within a factor of two. This analysis considers only the empirical scatter among the three mid-infrared parameters which can be used for SFR measures, but it does not explain the sources of this scatter. Given the many assumptions which enter the transformation from an observed spectral parameter to a SFR, the utility of our empirical result is primarily to conclude that it is possible to estimate the SFR to a factor of $\sim$ 2 using several indicators within a mid-infrared spectrum, but more detailed assumptions regarding the nature of the starburst would be necessary to improve such estimates. 
\subsection{Predicted MIPS Fluxes for Different Average Spectra} Average spectra are important for adopting spectral templates as a function of luminosity for use in modeling source counts \citep[e.g. ][]{cha04,lag04,lef05,cap07}. Such modeling is the fundamental source of conclusions regarding the evolution of extragalactic infrared sources, and different assumptions regarding templates give different answers. A utility of the average spectra in Figures 7 through 11 and Table 6 is that these comprise an empirically derived set of templates and dispersions among the templates which arise directly from a flux limited sample of sources selected only as infrared sources. These templates can be used with assumptions for luminosity functions and evolution of sources to predict source counts at mid-infrared wavelengths, although we do not at present undertake such predictions. For now, we use the four average spectra and corresponding luminosities only to illustrate the observed MIPS f$_{\nu}$(24$\mu$m) which would arise at different redshifts from the templates for the various luminosity bins shown in Figures 7 through 11. The determination of MIPS f$_{\nu}$(24$\mu$m) is made using the synthetic photometry tool in SMART and relating flux to luminosity with a cosmology having H$_0$ = 71 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{M}$=0.27 and $\Omega_{\Lambda}$=0.73. Results are in Figure 14. A particularly important implication of the results in Figure 14 is that the sources at the highest redshifts which are detected in MIPS surveys with f$_{\nu}$(24$\mu$m) $\ga$ 1 mJy should only be the luminous AGN without strong PAH features, sources like those in Figure 11 having log $\nu$L$_{\nu}$(15$\mu$m) $>$ 45.0. For sources in this luminosity bin, the observed MIPS f$_{\nu}$(24$\mu$m) remains above 1 mJy to a redshift of 3. 
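A monochromatic stand-in for that flux-redshift calculation can be sketched with the quoted cosmology (H$_0$ = 71, $\Omega_M$ = 0.27, $\Omega_\Lambda$ = 0.73): the observed 24\,$\mu$m flux density of a template of given rest-frame L$_\nu$ falls with redshift through the luminosity distance. The trapezoidal integrator and the single-wavelength sampling below are illustrative simplifications, not the SMART synthetic photometry tool.

```python
import math

H0 = 71.0                 # km/s/Mpc, the cosmology quoted in the text
OM, OL = 0.27, 0.73
C_KMS = 2.998e5
CM_PER_MPC = 3.086e24

def lum_dist_cm(z, n=2000):
    """Luminosity distance for flat LambdaCDM by trapezoidal integration."""
    zs = [i * z / n for i in range(n + 1)]
    inv_e = [1.0 / math.sqrt(OM * (1 + x) ** 3 + OL) for x in zs]
    dc = (C_KMS / H0) * (z / n) * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return (1 + z) * dc * CM_PER_MPC

def observed_mjy(template_lnu, z):
    """Observed f_nu(24um) in mJy for a template of rest-frame L_nu values.

    `template_lnu` maps rest wavelength (um) to L_nu (erg/s/Hz); the MIPS
    24 um band samples rest wavelength 24/(1+z).  Bandpass effects are
    ignored: a monochromatic stand-in for synthetic photometry.
    """
    lnu = template_lnu(24.0 / (1.0 + z))
    dl = lum_dist_cm(z)
    f_nu = lnu * (1.0 + z) / (4.0 * math.pi * dl ** 2)   # erg/s/cm^2/Hz
    return f_nu / 1e-26                                   # 1 mJy = 1e-26 cgs
```

With a flat (constant-L$_\nu$) toy template, the predicted flux declines monotonically with redshift, which is the behavior traced by the curves in Figure 14.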
By contrast, sources in luminosity bins whose average spectra show strong PAH features (log $\nu$L$_{\nu}$(15$\mu$m) $<$ 44.0) would drop to f$_{\nu}$(24$\mu$m) $<$ 0.1 mJy for z $>$ 1. In fact, however, this prediction is proven incorrect by the large numbers of PAH-dominated sources at z $\sim$ 2 and f$_{\nu}$(24$\mu$m) $\ga$ 1 mJy which have been discovered by $Spitzer$ \citep[e.g. ][]{yan07,far08,pop08}. Explaining such sources requires luminosity evolution for starbursts such that PAH dominated spectra can be found in sources with log $\nu$L$_{\nu}$(15$\mu$m) $>$ 45.0. By considering only the most luminous starbursts discovered so far using a PAH luminosity indicator, this evolution has been quantified to scale as (1+z)$^{2.5}$ for the most luminous starbursts \citep{wee08}. Applying evolution by this factor would raise the curve for log $\nu$L$_{\nu}$(15$\mu$m) = 44.0 in Figure 14 to fluxes more than a factor of 10 brighter than shown and would indeed predict PAH sources to be found at z $\sim$ 2 with f$_{\nu}$(24$\mu$m) $\sim$ 1 mJy. This result is illustrated in Figure 14 by taking the most extreme spectra in Figure 10 for 44.0 $<$ log $\nu$L$_{\nu}$(15$\mu$m) $<$ 45.0 and assigning luminosities log $\nu$L$_{\nu}$(15$\mu$m) = 45.0 to spectra with those extreme shapes. The lower spectrum, having strong PAH features, would reach f$_{\nu}$(24$\mu$m) = 1 mJy at z = 2 using luminosity log $\nu$L$_{\nu}$(15$\mu$m) = 45.0, although the actual luminosity of this source is log $\nu$L$_{\nu}$(15$\mu$m) = 44.3. \section{Summary and Conclusions} Spectra are presented and discussed for a flux-limited sample of 60 galaxies with f$_{\nu}$(24$\mu$m) $>$ 10\,mJy taken from within the $Spitzer$ First Look Survey and the NOAO Deep Wide-Field Survey region in Bootes. 36 sources are characterised as starbursts based on the presence of PAH features, and 24 sources as AGN based on the presence of silicate emission or absorption features. Sources have 0.01 $<$ z $<$ 2.4. 
The distribution of luminosities and the average mid-infrared spectra of sources as a function of luminosity are determined, with luminosity defined by the continuum luminosity at 15$\mu$m. Lower luminosity sources are dominated by PAH emission features, and higher luminosity sources are dominated by silicate absorption or emission. Source luminosity $\nu$L$_{\nu}$(15$\mu$m) increases as the equivalent width of the 6.2$\mu$m~ PAH feature decreases. For sources with EW(6.2$\mu$m)$>$ 0.4$\mu$m, the median log $\nu$L$_{\nu}$(15$\mu$m) = 43.1 (ergs s$^{-1}$). For sources with 0.1$\mu$m $<$ EW(6.2$\mu$m)$<$ 0.4$\mu$m, the median log $\nu$L$_{\nu}$(15$\mu$m) = 44.0. For sources with EW(6.2$\mu$m)$<$ 0.1$\mu$m, the median log $\nu$L$_{\nu}$(15$\mu$m) = 45.0. Average spectra are used to predict the $Spitzer$ MIPS flux densities f$_{\nu}$(24$\mu$m) that would be observed as a function of redshift for sources of different luminosities. Results show that without significant luminosity evolution, the sources that should be seen with f$_{\nu}$(24$\mu$m) $\ga$ 1 mJy and z $\sim$ 2 would have silicate absorption or emission features and no PAH features. Because PAH sources have been discovered by $Spitzer$ with f$_{\nu}$(24$\mu$m) $\ga$ 1 mJy and z $\sim$ 2, substantial luminosity evolution is required for sources with PAH-dominated spectra. For the pure starbursts, dispersions among PAH strength, [NeIII] emission line strength, and dust continuum strength are illustrated to estimate the resulting dispersions among star formation rates that would be derived from these three independent indicators of SFR. It is found that the dispersions indicate an uncertainty of about a factor of two in deriving the SFR from these different indicators. \acknowledgments We thank P. Hall for help in improving our IRS spectral analysis with SMART. 
This work is based primarily on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407. Support for this work by the IRS GTO team at Cornell University was provided by NASA through Contract Number 1257184 issued by JPL/Caltech. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction} Consider an ultrasonic wave, or signal, described by the wave equation \begin{equation}\label{waveequation} u_{tt} - c^2(x,y,z)\Delta{u}=0 \end{equation} where $c(x,y,z)>0$ is the variable speed of sound in $\Omega(t)=\Omega_0 \backslash \overline{\Omega_1}(t)$ and $$ u |_{\partial{\Omega_1}(t)} = 0 $$ for $t>0$ and $\overline{\Omega_1(t)} \subset \Omega_0 \subset \mathbb{R}^3$, where $\Omega_1(t)$ is a convex moving obstacle with a smooth boundary in a bounded domain $\Omega_0$. We consider an environment without caustics and look for solutions of the form \begin{equation}\label{solution_wave_eq_2} u(x,y,z,t) = \sum_{j=0}^{\infty}A_j(x,y,z) \frac{e^{i\omega(W(x,y,z)-t)}}{\omega^j} \end{equation} where the eikonal function $W(x,y,z)=\textit{const}$ defines a surface of constant phase. These solutions of the ray equation are called rays or ray solutions. Suppose that for all $t$ we are given all integrals $\int_{\gamma} f(l) dl = C_{\gamma}$ where $\gamma$ are broken rays in $\Omega(t)$ such that for each point $P \in \partial{\Omega_1(t)}$ there is at least one broken ray $\gamma$ reflecting at P. A broken ray is a ray reflecting at the obstacle and starting and ending at the observation boundary $\partial{\Omega_0}$. Let $f(x,y,z)=\frac{1}{c(x,y,z)} > 0$ in $\Omega$. Then the constants $C_{\gamma}$ correspond to signal travel times in a medium with speed of sound $c(x,y,z)$. The shape and trajectory reconstruction problem is to find $\partial{\Omega_1(t)}$ given the sets $C_{\gamma}(t)$ where $\gamma \in \Omega(t)$. Consider $\partial{\Omega_0}$ as the observation boundary and signal transmitters and receivers with known locations along the boundary. Transmitters send ray signals with known initial incident and azimuth angles at known transmission times. Receivers receive reflected ray signals and record the time when the signal was received. 
The combined data leads to a set of data points $$B_k=(x_l, y_l, z_l, x_r, y_r, z_r, \phi_k, \theta_k, t_k, \xi_k)$$ where $\phi_k$ and $\theta_k$ are the initial incident and azimuth angles of the ray from the transmitter, $x_l$, $y_l$, $z_l$ are the coordinates of the transmitter endpoint of the ray, $x_r$, $y_r$, $z_r$ are the coordinates of the receiver endpoint of the ray, $t_k$ is the time of flight of the signal, and $\xi_k$ is the frequency of the signal. We present algorithms for the shape and trajectory reconstruction problem for finite travel times and variable speed of sound derived from the equations for the Shooting Method for two-point seismic ray tracing\cite{JG}. \begin{align}\label{shooting_method_equations} \frac{dx}{dt} = c(x,y,z) \sin{\phi}\cos{\theta}\\ \frac{dy}{dt} = c(x,y,z) \sin{\phi}\sin{\theta}\\ \frac{dz}{dt} = c(x,y,z) \cos{\phi}\\ \frac{d\phi}{dt} = -\cos{\phi}( \frac{\partial{c}}{\partial{x}}\cos{\theta} + \frac{\partial{c}}{\partial{y}}\sin{\theta} ) + \frac{\partial{c}}{\partial{z}}\sin{\phi}\\ \frac{d\theta}{dt} = \frac{1}{\sin{\phi}}( \frac{\partial{c}}{\partial{x}}\sin{\theta} - \frac{\partial{c}}{\partial{y}}\cos{\theta} ) \end{align} This system of equations derived from the eikonal equation has wide applications in seismology and is used in algorithms for seismic ray tracing \cite{SK}. The system of equations \ref{shooting_method_equations} is for a Cartesian coordinate system where $\vec{R}(t)=(x(t), y(t), z(t))$ is the ray position vector, $\phi(t)$ is the incident angle of the ray direction vector with the z axis and $\theta(t)$ is the azimuth angle that the projection of the ray direction vector makes with the positive x axis. We consider that the speed of sound $c(x,y,z)$ is known inside $\Omega$.
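As an illustration, the system \ref{shooting_method_equations} can be integrated numerically with a classical fourth-order Runge-Kutta scheme. The sketch below is our own minimal Python rendering, not the authors' implementation; all function and variable names are ours.

```python
import math

def trace_ray(c, grad_c, x, y, z, phi, theta, t_total, n_steps):
    """Integrate the kinematic ray-tracing system with a classical RK4 scheme.

    c(x, y, z) is the speed of sound; grad_c returns its gradient
    (dc/dx, dc/dy, dc/dz); the state is (x, y, z, phi, theta).
    """
    h = t_total / n_steps
    state = [x, y, z, phi, theta]

    def rhs(s):
        x, y, z, phi, theta = s
        cv = c(x, y, z)
        cx, cy, cz = grad_c(x, y, z)
        return [cv * math.sin(phi) * math.cos(theta),
                cv * math.sin(phi) * math.sin(theta),
                cv * math.cos(phi),
                -math.cos(phi) * (cx * math.cos(theta) + cy * math.sin(theta))
                + cz * math.sin(phi),
                (cx * math.sin(theta) - cy * math.cos(theta)) / math.sin(phi)]

    for _ in range(n_steps):
        k1 = rhs(state)
        k2 = rhs([v + 0.5 * h * k for v, k in zip(state, k1)])
        k3 = rhs([v + 0.5 * h * k for v, k in zip(state, k2)])
        k4 = rhs([v + h * k for v, k in zip(state, k3)])
        state = [v + h / 6.0 * (a + 2 * b + 2 * g + d)
                 for v, a, b, g, d in zip(state, k1, k2, k3, k4)]
    return state
```

For $c(x,y,z)=x+y+1$ and a ray launched from the origin with $\phi=\pi/2$, $\theta=\pi/4$, the angles stay constant, the ray stays on the line $x=y$, and after time $t$ it reaches $x=y=(e^{\sqrt{2}t}-1)/2$, in agreement with the closed-form check in the numerical example of the next section.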
We will reconstruct the position of the reflection point $P$ given the positions of a transmitter L and a receiver S, the incident and azimuth angles of the transmitted ray with respect to a Cartesian coordinate system centered at the transmitter, and the time of flight of the signal. Since we know the initial position $(x(0),y(0),z(0))$, the initial angles, and the initial speed of wave propagation for the transmitted signal, we know the velocity of the ray at $t=0$. With these initial conditions, we can numerically trace the signal path from the transmitter L to the reflection point P for a given travel time $\tau_k$. Let the travel time from the transmitter L to the reflection point P be $\tau_k$ and the total travel time from L to the receiver S through P be $t_k$, where $t_k>\tau_k$. By symmetry, we can imagine that the signal received at S is instead transmitted from S and arrives at the reflection point P in time $t=t_k-\tau_k$, because its travel time along the reversed path is $t_k-\tau_k$. The ray path from S to P is again described by the above system of equations for seismic ray tracing; however, we do not know the initial angles for the signal from the receiver S. In order to reconstruct the intersection point $P_k$ of the rays starting from the transmitter and the receiver for a given data point $$B_k=(x_l, y_l, z_l, x_r, y_r, z_r, \phi_k, \theta_k, t_k, \xi_k)$$ containing the measured values for the signal, we step through a discrete set of values for $\tau_k$. Tracing a ray path from L, at each time step the new point on the path is a candidate reflection point P; at this stage, this is basic initial-value ray tracing. We then apply the two-point seismic ray tracing shooting method to check whether candidate point P can be reached from receiver S within the remainder of our time budget. If it can, we have found the reflection point P for this data point.
Otherwise, we continue with the next time step of the initial value ray tracing and check the next candidate point P. We repeat this until we find a reflection point P or exhaust our time budget $t_k$. For example, let $$\tau_{ks}=\frac{t_k n_s}{N_r}$$ where $N_r$ is an integer that specifies the time-step resolution and $n_s$ is an integer that specifies the number of the time step and such that $0 \leq n_s < N_r$. Each time $\tau_{ks}$ corresponds to a unique point $P_{ks}$ on the ray path starting from the transmitter L such that $P_{ks}$ can be reached from $L$ in time $\tau_{ks}$. Next, for each time step corresponding to time $\tau_{ks}$ to reach candidate $P_{ks}$ from L, we step through the range of initial angles for the signal starting from the receiver S and check whether the curve from S will intersect $P_{ks}$ in time $t_k-\tau_{ks}$. Alternatively, we can step through a range of reflection angles at the point $P_{ks}$ that is found for the given $\tau_{ks}$ and check whether the reflected signal will intersect S in time $t_k-\tau_{ks}$. \section{Reconstruction Algorithms} The input to the following algorithm is the speed of sound $c(x)$ for the domain $\Omega$ and a set of data points or ray coordinates corresponding to broken rays. The output is a unique set of points in $\mathbb{R}^3$. The points from the output are the reflection points reconstructed from the input data. 
\begin{algorithmic} \label{reconstruction_algorithm} \REQUIRE Set of broken ray data points $B_k=(x_l, y_l, z_l, x_r, y_r, z_r, \phi_k, \theta_k, t_k, \xi_k)$ \REQUIRE Speed of sound $c(x)$ for domain $\Omega$ \COMMENT{Algorithm for Shape and Trajectory Reconstruction of Moving Obstacles} \COMMENT{Estimated time complexity is $O(T^2A)$ where T is the number of discretization points for the time of flight, and A is the number of discretization points for the angle space} \FORALL{data points $B_k$} \STATE{$h_k=\frac{t_k}{N_r}$} \STATE{$L=(X_0,Y_0,Z_0)=(x_l,y_l,z_l)$} set this initial position to be the position of the transmitter \STATE{$S=(aX_0,aY_0,aZ_0)=(x_r,y_r,z_r)$} set this initial position to be the position of the receiver \STATE{$\Phi_0=\phi_k$} \STATE{$\Theta_0=\theta_k$} \STATE{$T_0=0$} \STATE{$aT_0=0$} \FOR{$s = 0 \to N_r-1$} \STATE \COMMENT{Compute the next point on the ray from the transmitter by a fourth order Runge-Kutta step and the ray tracing system \ref{shooting_method_equations}} \STATE{$X_{s+1} = RK4_X(h_k,T_s, X_s,Y_s,Z_s,\Phi_s,\Theta_s$)} \STATE{$Y_{s+1} = RK4_Y(h_k,T_s, X_s,Y_s,Z_s,\Phi_s,\Theta_s$)} \STATE{$Z_{s+1} = RK4_Z(h_k,T_s, X_s,Y_s,Z_s,\Phi_s,\Theta_s$)} \STATE{$\Phi_{s+1} = RK4_{\Phi}(h_k,T_s, X_s,Y_s,Z_s,\Phi_s,\Theta_s$)} \STATE{$\Theta_{s+1} = RK4_{\Theta}(h_k,T_s, X_s,Y_s,Z_s,\Phi_s,\Theta_s$)} \STATE{$T_{s+1}=T_s + h_k$} \STATE $P_{s+1}=(X_{s+1},Y_{s+1}, Z_{s+1})$ point on the solution of the ray tracing equations with initial values for the transmitter that is at time $T_{s+1}$ away from the transmitter L \IF{!($P_{s+1} \in \Omega_0$)} \STATE There must be a measurement error.
Continue with next data point $B_k$ \ENDIF \FORALL{initial angles $a\Phi_0, a\Theta_0$ in discretized angle space of the receiver} \FOR{$p = 0 \to N_r-1$} \STATE \COMMENT{Compute the next point on the ray from the receiver by fourth order Runge-Kutta step and the ray tracing system \ref{shooting_method_equations}} \STATE{$aX_{p+1} = RK4_X(h_k,aT_p,aX_p,aY_p,aZ_p,a\Phi_p,a\Theta_p$)} \STATE{$aY_{p+1} = RK4_Y(h_k,aT_p, aX_p,aY_p,aZ_p,a\Phi_p,a\Theta_p$)} \STATE{$aZ_{p+1} = RK4_Z(h_k,aT_p, aX_p,aY_p,aZ_p,a\Phi_p,a\Theta_p$)} \STATE{$a\Phi_{p+1} = RK4_{\Phi}(h_k, aT_p, aX_p,aY_p,aZ_p,a\Phi_p,a\Theta_p$)} \STATE{$a\Theta_{p+1} = RK4_{\Theta}(h_k, aT_p, aX_p,aY_p,aZ_p,a\Phi_p,a\Theta_p$)} \STATE{$aT_{p+1}=aT_p + h_k$} \STATE $P_{\alpha_{p+1}}=(aX_{p+1},aY_{p+1},aZ_{p+1})$ point on solution of ray tracing equations with initial angles $a\Phi_0$ and $a\Theta_0$ and initial position S, that is time $aT_{p+1}$ away from S \IF{!($P_{\alpha_{p+1}} \in \Omega_0$)} \STATE Exit this for loop and continue with next pair of initial angles $a\Phi_0, a\Theta_0$ from outer for loop \ENDIF \IF{$distance(P_{s+1},P_{\alpha_{p+1}})<\epsilon_1$ and $|T_{s+1}+aT_{p+1}-t_k|<\epsilon_2$} \STATE $P_k=P_{s+1}$ \COMMENT{Solution for current data point $B_k$ found. Continue with next data point $B_{k+1}$} \ENDIF \IF{$T_{s+1}+aT_{p+1}>t_k+\epsilon_2$} \STATE \COMMENT{We are over the travel time budget $t_k$. Continue looking for a solution with the next set of initial angles $a\Phi_0, a\Theta_0$.} \ENDIF \ENDFOR \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} RK4 stands for a fourth order Runge-Kutta method although other time-dependent numerical methods can be used as well. The above algorithm leads to a class of new algorithms when data structures are used to store the rays for faster processing. 
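To make the control flow concrete, here is a compact serial Python sketch of the same search (our own simplified rendering, not the paper's multithreaded Java program: it uses Euler steps, drops the unused frequency $\xi_k$ from the data point, and the tolerance names $\epsilon_1$, $\epsilon_2$ are ours):

```python
import math

def step(state, h, c, grad_c):
    """One Euler step of the ray-tracing system (the paper's own Java
    implementation also uses Euler integration instead of RK4)."""
    x, y, z, phi, theta = state
    cv = c(x, y, z)
    cx, cy, cz = grad_c(x, y, z)
    sp, cp = math.sin(phi), math.cos(phi)
    st, ct = math.sin(theta), math.cos(theta)
    return (x + h * cv * sp * ct,
            y + h * cv * sp * st,
            z + h * cv * cp,
            phi + h * (-cp * (cx * ct + cy * st) + cz * sp),
            theta + h * (cx * st - cy * ct) / sp)

def reconstruct(B, c, grad_c, angles, N_r=20, eps1=1e-6, eps2=1e-6):
    """Return the reflection point for one data point B, or None.
    B = (x_l, y_l, z_l, x_r, y_r, z_r, phi_k, theta_k, t_k); the signal
    frequency xi_k is omitted since the search does not use it."""
    xl, yl, zl, xr, yr, zr, phi_k, theta_k, t_k = B
    h = t_k / N_r
    fwd = (xl, yl, zl, phi_k, theta_k)
    for s in range(N_r):                      # candidate points on the transmitter ray
        fwd = step(fwd, h, c, grad_c)
        T = (s + 1) * h
        for aphi, atheta in angles:           # shooting method from the receiver
            rev = (xr, yr, zr, aphi, atheta)
            for p in range(N_r):
                rev = step(rev, h, c, grad_c)
                aT = (p + 1) * h
                if math.dist(fwd[:3], rev[:3]) < eps1 and abs(T + aT - t_k) < eps2:
                    return fwd[:3]            # both rays meet within the time budget
                if T + aT > t_k + eps2:
                    break                     # over budget: try the next angle pair
    return None
```

With a constant speed of sound $c=1$, a transmitter and receiver at the origin, and a reflector one unit away along $\theta=\pi/4$ in the $xy$-plane, the sketch recovers the reflection point $(\sqrt2/2,\sqrt2/2,0)$ for a total time budget $t_k=2$.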
For example, when the rays from the receiver in its set of directions are stored and looked up in a data structure such as an array, the performance of the algorithm can be improved to $O(TA)$, where T is the number of discretization points for the time of flight and A is the number of discretization points for the angle space. These algorithms are well suited to parallelization, and a parallelized implementation is key for efficient real-time processing. For example, the computation for tracing the rays from the receiver can be parallelized across multiple threads. When the above algorithm is run on a set of points $\{B_k\}$ from one sampling time interval $T_k$, the algorithm reconstructs the shape of the obstacle during this sampling interval. In order to reconstruct the trajectory of the obstacle, the algorithm is run on the data points for each of the sampling intervals. The method can achieve high resolution because it can process a very large number of distinct points on the obstacle's boundary. In contrast to tomography, where the focus of the reconstruction method is to recover the velocity structure of the domain, the shape and trajectory reconstruction procedure directly finds the shape and trajectory of the obstacle. \subsection{Numerical Example} We have developed a multithreaded Java program that implements the parallelized algorithm from \ref{reconstruction_algorithm} using Euler's numerical integration method instead of RK4 and reconstructs the coordinates of reflection points from input data $\{B_k\}$ and a speed of sound function $c(x)$. Consider a circular reflecting obstacle in the plane xy moving in the first quadrant away from the origin along the line x=y in a medium with variable speed of sound $c(x,y)=x+y+1$. We place a transmitter and a receiver at the origin and set $x_l = y_l = z_l = x_r = y_r = z_r = 0$. In this case, the domain $\Omega_0$ is a circle of sufficiently large radius that contains the origin.
Table \ref{Reconstruction_of_a_point_moving_on_the_line_x_y} shows the computed trajectory of a reflection point on the obstacle along the ray path from the origin along the line $x=y$, $y>0$, corresponding to data for different travel times. \begin{table}[h] \begin{center} \begin{tabular}{|cccccc|} \hline $\phi$ & $\theta$ & T & xp & yp & zp \\ \hline 1.57 & 0.79 & 2 & 1.55 & 1.55 & 0.00 \\ 1.57 & 0.79 & 4 & 7.89 & 7.89 & 0.00 \\ 1.57 & 0.79 & 8 & 138.15 & 138.15 & 0.00 \\ \hline \end{tabular} \end{center} \caption{Reconstruction of a point moving with speed $c(x,y)=x+y+1$ on the line $x=y$ for a fixed signal frequency $\xi$. The initial transmission angles are $\phi=\frac{\pi}{2}$ and $\theta=\frac{\pi}{4}$.\label{Reconstruction_of_a_point_moving_on_the_line_x_y}} \end{table} We check the accuracy of the computation in the above table by the Java program as follows. The time for the ray to reach the obstacle can be computed by the formula $$ t= \int_{0}^{X} \frac{ds}{c(s)} = \sqrt{2}\int_{0}^{X} \frac{dx}{x+y+1} = \sqrt{2}\int_{0}^{X} \frac{dx}{2x+1}$$ Therefore, \begin{equation}\label{numerical_example_formula} X=Y=\frac{e^{\sqrt{2}t}-1}{2} \end{equation} By symmetry, for this particular example, this time $t$ is half of the total travel time $T$. Then for a travel time $T=2$, or $t=1$, we compute $X=Y=1.55$. This result matches the corresponding values of xp and yp in Table \ref{Reconstruction_of_a_point_moving_on_the_line_x_y} obtained by numerical integration. In the current implementation, the reconstructed results are more accurate for shorter travel times, as can be seen by comparing the result from the table with the result of formula \ref{numerical_example_formula} for $T=8$, but this accuracy can be improved by using numerical integration methods with a smaller error.
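The closed-form check can be reproduced with a few lines of Python (our own verification, independent of the paper's Java implementation): the one-way travel time $t=\sqrt2\int_0^X dx/(2x+1)$ is evaluated with a midpoint rule and compared against formula \ref{numerical_example_formula}.

```python
import math

def travel_time(X, n=200_000):
    """One-way travel time t = sqrt(2) * integral_0^X dx/(2x+1) along the
    diagonal ray x = y, evaluated with a midpoint rule."""
    h = X / n
    return math.sqrt(2.0) * h * sum(1.0 / (2.0 * (i + 0.5) * h + 1.0) for i in range(n))

# closed form X = (exp(sqrt(2) t) - 1)/2 for the one-way time t = T/2 = 1
X = (math.exp(math.sqrt(2.0)) - 1.0) / 2.0
```

Here `travel_time(X)` returns $1.0$ up to discretization error, and $X\approx1.556$, consistent with the entry $1.55$ for $T=2$ in Table~\ref{Reconstruction_of_a_point_moving_on_the_line_x_y}.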
\subsection{Communication path from receiver to transmitter} The reconstruction method \ref{reconstruction_algorithm} above finds the reflection point P for the signal from transmitter L to receiver S by finding angles of transmission $\phi$ and $\theta$ for the receiver at which a symmetric signal can be sent from S to L over the reverse path of the signal detected at S. Therefore, the receiver S can communicate with L by sending a response signal back to L along the angles of transmission $\phi$ and $\theta$. L will receive this response if neither the positions of L and S nor the speed of sound in the region containing the communication paths have changed significantly. In order to create a more robust connection, S can send to L several parallel rays that are very close to each other. S and L can send and receive acknowledgments and create or use an existing networking protocol stack. This approach integrates easily with current network protocols and architectures. S and L recompute the reflection point and transmission angles with the receipt of each message. When S and L receive control messages from each other with high frequency, they recompute a current set of transmission angles for routing back messages. The frequency of these control messages depends on the velocity of S, L and the obstacle, and, in the case of time-varying speed of sound, on the time derivative of the speed of sound, i.e., on how quickly the speed of sound changes in time. \section{Network Architecture for High-Performance Communications} Shape and trajectory tracking of moving obstacles that reflect ray signals sent from moving Internet hosts enables efficient one-hop routing through reflection to the nearest Internet router or peer. A router in this architecture is a transmitter and receiver of ray signals. \subsection{Coordinate system} The architecture is based on a known Cartesian global coordinate system.
The physical position, for example, the high precision GPS coordinates of the transmitter or receiver correspond to global coordinate system coordinates $x_l, y_l, z_l$ or $x_r, y_r, z_r$ from the reconstruction algorithms \ref{reconstruction_algorithm}. \subsection{Addressing} We define a new physical layer networking protocol called IHOP. The goal of this protocol is to route messages through the environment in at most one hop. The IHOP address of a host is always relative to the position of another host, a transmitter, and consists of a pair of incident and azimuth angles $\phi$ and $\theta$ for reaching the receiver by sending a ray signal from transmitter to receiver. The IHOP protocol is defined as addressing an IHOP host from an IHOP transmitter by the host's IHOP address or by sending messages from the transmitter to the host along the direction determined by the IHOP angles. \subsection{Optimal communication path} A transmitter communicates with a router via the optimal path, the path that takes the least time, for the given speed of sound and reflection point. This network architecture is illustrated in Fig. \ref{MobileInternetPhoneArchitecture}. The combination of the methods for obstacle tracking from this work with the numerical methods for tomography in the presence of obstacles from \cite{L}, leads to a new method for networking with reflected rays based on tracking the location of reflectors and finding the speed of sound in the environment. Through reflection, messages can be communicated around obstacles blocking the line of sight as well as around areas with a high error rate. The reflectors for the shape and trajectory tracking method do not have to be installed in order to enable a network and can be obstacles such as buildings or cars. The two initial angles provide a way of addressing the obstacle and the receiver i.e. a way of calling the obstacle and the receiver by dialing two angles of transmission. 
\section{Advantages} \subsection{Improved Security} The communication path between transmitter and receiver is less prone to eavesdropping because communication is via a narrow ray. \subsection{Increased Network Bandwidth} The network architecture described in the previous section enables high bandwidth networking because a receiver can receive information over several rays arriving from different directions. Each ray is an independent communications channel and delivers information in parallel with the other rays detected by the receiver. \subsection{Increased Reliability, Survivability and Performance} In order to increase reliability, the transmitter can send a message to the receiver along several different rays to several different reflection points. The receiver could in this case receive the same message from several different rays or directions. This provides a natural mechanism for error correction and leads to high-performance networking because information from messages that are lost or delayed can be delivered quickly from a different direction. The fastest message is processed first, and this policy improves end-to-end performance. \subsection{Reduced delay and better support for high-performance networking} The communication path is computed to match the speed of sound in the environment, and therefore there will be fewer environment-induced errors. For example, the effects of temperature or humidity differences or other weather conditions are included in the computation of the communication path via the function for the speed of sound in the environment. The reduced error rate will lead to better quality of service. The architecture is particularly useful for streaming real-time content to home networks or peer-to-peer networks. \subsection{Increased Ease of Use} The new network architecture makes it easier to connect home network devices via reflected rays compared to current methods for wireless networking because there is no need for additional routers.
\section{Conclusion} Ray signals may reach a host by an unbroken ray or by a broken ray when it reflects from an obstacle. For improved bandwidth and reliability, we can reach the host over different rays. The IHOP address of the host specifies a ray. \section*{Acknowledgment} The authors would like to thank Professor Gregory Eskin for his continuous guidance.
\section{Introduction} Spin-orbit interaction (SOI) is a promising tool for manipulating spin degrees of freedom via electric field; because of that, it plays an important role in various novel microelectronic devices.\cite{spintronics} In 2D semiconductor microstructures, Rashba\cite{RashbaSOI} and Dresselhaus\cite{Dresselhaus} types of SOI are the most important ones. In phase-coherent diffusive systems, their dominant effect on the transport properties is the so-called anti-localization\cite{Zumbuhl,Skvortsov} -- isotropic SOI-induced correction to the conductivity tensor, leading to the sign change of the phase-coherent correction to the conductivity. Rashba and Dresselhaus SOI terms equally and independently contribute to the weak anti-localization correction. The anisotropic contribution to the conductivity tensor is a more subtle effect (arising in the next order in the weak-disorder expansion). In an infinite-size 2D conductor it comes from the interference between Rashba and Dresselhaus SOI.\cite{anisotrCond} A pure Dresselhaus SOI affects 2D electron systems in the same way as pure Rashba SOI with the same amplitude. However, it is known that in a system with mixed (Rashba-Dresselhaus) type of SOI their action is not independent. This becomes most pronounced in the special case when the amplitude of Rashba SOI is equal to the Dresselhaus one.\cite{schliemann03:146801,anisotrCond} In order to highlight the interference between the Rashba and Dresselhaus SOI, it is convenient to consider effects, which arise entirely due to this interference. 
An example is the anisotropic contribution to the conductivity of a diffusive (unconfined) 2D electron gas, which is zero in the case when only one type of SOI is present in the system.\cite{anisotrCond} Both the (isotropic) antilocalization correction and the SOI-induced anisotropic correction are phase-coherent effects; hence it is not surprising that \emph{in a fully phase coherent system} both of them depend singularly on the SOI amplitudes. In the limit of small SOI the weak localization correction diverges both in 2D and quasi-1D cases, while the behavior of the anisotropic correction depends on the system's geometry: in an infinite 2D-system with time-reversal symmetry it remains finite even for infinitesimal SOI, while in a quasi-1D case it diverges as the SOI vanishes. Moreover, while in an infinite 2D disordered slab, two different types of SOI (Rashba and Dresselhaus) are required in order to make the anisotropic component of the conductivity tensor non-zero\cite{anisotrCond}, in a quasi-1D geometry (see Sec.~\ref{sec:quasi1D}) the SOI-induced anisotropy of the conductivity tensor arises also for the case when the energy spectrum is isotropic (i.e., when only one type of SOI is present), despite the fact that all dimensions of a quasi-1D sample are much larger than the mean-free path $l$ of an electron. Thus, in the phase-coherent regime the macroscopic shape anisotropy of the sample results in the (microscopic\cite{note:noDisp}) anisotropy of the conductivity tensor. The SOI is still required, but the energy spectrum does not need to be anisotropic. According to the theorem by Vollhardt and Wölfle (proved in~\cite{woelfe} for the spinless case), diffuson propagator poles do not contribute to the conductivity if the system is invariant under time reversal. We extend the validity of this theorem to the spinful case in Appendix~\ref{app:VW}.
From this theorem one may expect the appearance of uncompensated diffuson divergences in systems with broken time reversal symmetry, which would then result in enhancement of the SOI-correction to the conductivity tensor. We indeed observe such an enhancement in the example of a ring pierced by a magnetic flux, where the time-reversal symmetry is broken due to presence of the vector potential (see Sec.~\ref{sec:brokenTR}). We perform our calculations using the disorder averaging diagrammatic techniques.\cite{AGD,montambauxBook,Bergmann,DHIKSZ} In problems with spin, a summation over spin indices produces huge expressions which cannot be handled manually anymore in a reasonable time. We have overcome this problem by developing a symbolic-calculation program \cite{theProgram} that (i) generates diagrams having the requested number of loops, (ii) calculates the Hikami-boxes, and finally (iii) performs the integration over the cooperon and diffuson momenta. The first two stages of the program are universal, i.e., can be readily used for other calculations in a diagrammatic approach. The program significantly facilitates the usage of the diagrammatics, especially in the spinful case. The first part of the paper is not specific to the problem of anisotropic conductivity. In Sec.~\ref{sec:Ham} we define the model which we use in our calculation, then we introduce disorder-averaged Green functions (see Sec.~\ref{sec:GF}) and derive the Kubo-Greenwood formula in the Keldysh technique (see Sec.~\ref{sec:Kubo}). Then we derive the loop expansion for the diagrammatic technique in Sec.~\ref{sec:le} and derive expressions for diffusons (cooperons) in Sec.~\ref{sec:CD}. The second part of the paper starts with Sec.~\ref{sec:ZLA} where we derive the incoherent SOI-correction to the conductivity tensor. We then proceed with the contribution from the weak-localization diagrams (which remains isotropic at zero frequency) in Sec.~\ref{sec:WL}. 
The results for the anisotropic transport in 2D and quasi-1D geometries \emph{in the presence of the time-reversal invariance} are described in Sec.~\ref{sec:twoLoop} for the case of zero frequency $\omega=0$ and in Sec.~\ref{sec:ff} for $\omega\ne0$. Finally, in Sec.~\ref{sec:brokenTR} we give an example of how the effect of SOI-induced anisotropy could be enhanced by time-reversal symmetry breaking terms. For convenience we summarize some often used notations in Tab.~\ref{tab:notations} at the end of the paper. \begin{table}[b] \centering \begin{tabular}{|c|c|}\hline Symbol & Its definition \tabularnewline\hline\hline $d$& spatial dimension \tabularnewline\hline $\sqrt i$&$(1+i)/\sqrt2$ \tabularnewline\hline $C$ and $D$ & see~\eqref{obVyDC} and~\eqref{defXabDC} \tabularnewline\hline $G_R$ and $G_A$ & see~\eqref{grSOI}; $G_A=G_R^\dag$ \tabularnewline\hline $l$ &\begin{minipage}{.7\columnwidth} mean free path of an electron between subsequent elastic scattering off impurities\end{minipage}\tabularnewline\hline $L_\phi$ &\begin{minipage}{.7\columnwidth} electron (orbital) dephasing length due to inelastic scattering\end{minipage}\tabularnewline\hline $\tau_\phi^{-1}=v_\mathrm F/L_\phi$ &\begin{minipage}{.7\columnwidth} dephasing rate due to inelastic scattering\end{minipage}\tabularnewline\hline $\mu$ &(temperature-dependent) chemical potential\tabularnewline\hline $p_F=\sqrt{2m\mu}$ &\begin{minipage}{.7\columnwidth} Fermi momentum\end{minipage}\tabularnewline\hline $v_\mathrm F=p_F/m$ &\begin{minipage}{.7\columnwidth} Fermi velocity\end{minipage}\tabularnewline\hline $\nu$, DoS&density of states\tabularnewline\hline $\Re$ and $\Im$& real and imaginary part\tabularnewline\hline $\sigma_{\mathrm{is}}$ and $\sigma_{\mathrm{an}}$ & see~\eqref{defSan} and~\eqref{defSanInv}\tabularnewline\hline $S$ & see~\eqref{pervoryadDSi} \tabularnewline\hline $\sigma_0$
& $2\times2$ unity matrix \tabularnewline\hline $\sigma_1$, $\sigma_2$, and $\sigma_3$ & Pauli matrices \tabularnewline\hline $\xi_p$& see~\eqref{agf}\tabularnewline\hline $p_F$, $x_a$, and $x_b$ & see~\eqref{dRDa} \tabularnewline\hline $x$ and $\delta$ & see~\eqref{dRDaAlt} \tabularnewline\hline CD&cooperon or diffuson\tabularnewline\hline CS&coordinate system\tabularnewline\hline DM&density matrix\tabularnewline\hline GF&Green function\tabularnewline\hline GFB&Green functions box\tabularnewline\hline HB&Hikami box\tabularnewline\hline lhs&left hand side\tabularnewline\hline rhs&right hand side\tabularnewline\hline SOI&spin-orbit interaction\tabularnewline\hline VW&Vollhardt-Wölfle\tabularnewline\hline WL&weak localization\tabularnewline\hline ZLA&zero-loop approximation\tabularnewline\hline \end{tabular} \caption{Some often used notations and abbreviations.\label{tab:notations}} \end{table} \section{The Hamiltonian\label{sec:Ham}} Rashba and Dresselhaus spin-orbit interaction terms modify the Hamiltonian as follows: \begin{equation}\label{hamrash} {\hat H}'=\frac{{\hat p}^2}{2m}+{\hat V}_s'+U'(\vec r),\quad V_s'=aV_R'+bV_D', \end{equation} where $\hat{\vec p}$ denotes momentum operator, $a$ and $b$ are Rashba and Dresselhaus amplitudes, and $U'(\vec r)$ is the disorder potential created by impurities or defects randomly placed in the sample. We assume that $U'(\vec r)$ is uncorrelated: \begin{equation}\label{iznBes} \overline{U'(\vec r)U'(\vec r')}=\frac{\hbar^2}{2\pi\nu\tau}\delta(\vec r-\vec r'), \end{equation} where $\tau$ is the mean time between collisions of an electron off impurities, $\nu$ is the density of states (DoS) at the Fermi level, and the over-bar indicates average over the different disorder configurations. 
The Rashba SOI is invariant under arbitrary rotation in the $(x,y)$-plane: \begin{equation}\begin{split} V_R'=\hat{\vec z}\cdot\left[\boldsymbol\sigma\times\vec p\right]\equiv(\hat{\vec z},\boldsymbol\sigma,\vec p)=\sigma_1{\hat p}_y-\sigma_2{\hat p}_x\\ =(\hat{\vec z},R^z_\phi\boldsymbol\sigma,R^z_\phi\vec p)=\begin{pmatrix}0&p_y+ip_x\cr p_y-ip_x&0\end{pmatrix},\quad\forall\phi, \end{split}\end{equation} where $\boldsymbol\sigma=(\sigma_1,\sigma_2,\sigma_3)$ is composed of Pauli matrices, and $R^z_\phi$ denotes $3\times3$ matrix describing rotation by an angle $\phi$ around the $z$-axis. The Dresselhaus SOI term can be written as \begin{equation}\begin{split} V_D'=(\hat{\vec z},C\boldsymbol\sigma,\vec p)=\sigma_1{\hat p}_x-\sigma_2{\hat p}_y,\\ C=R^z_{-\pi/2}R^y_\pi=\begin{pmatrix}0 & -1 & 0 \cr -1 & 0 & 0 \cr 0 & 0 & -1\end{pmatrix}. \end{split}\end{equation} In the coordinate system (CS), rotated by an arbitrary angle $\phi$ around the $z$-axis with respect to the initial CS, the SOI part of the Hamiltonian is transformed into \begin{equation}\label{COB} V_s(\vec p,\boldsymbol\sigma)=V_s'(R^z_\phi\vec p,R^z_\phi\boldsymbol\sigma)= (\hat{\vec z},a\boldsymbol\sigma+bR^z_{-\phi}CR^z_\phi\boldsymbol\sigma,\vec p). 
\end{equation} In the case $\phi=\pi/4$, $V_D$ takes a form similar to the Rashba SOI\cite{note:altPauliMatr}, so that the SOI term can be written as \begin{equation}\begin{split} R^z_{-\pi/4}CR^z_{\pi/4}&=\begin{pmatrix}-1&0&0\cr0&1&0\cr0&0&-1\end{pmatrix},\\ \label{neuSOI} V_s\equiv V_s(\vec p,\boldsymbol\sigma)&=(a-b)\sigma_1\hat p_y-(a+b)\sigma_2\hat p_x={\hat s}{\hat\Delta}/2, \end{split}\end{equation} where the SOI-induced spectrum-splitting ${\hat\Delta}$ and the helicity (spirality\cite{Edelstein:90}) operators are defined as \begin{equation} \begin{split} \label{spectrumSplitting} {\hat\Delta}=&2\sqrt{{\hat p}_x^2(a+b)^2+{\hat p}_y^2(a-b)^2},\\ {\hat s}=&2\left[(a-b){\hat p}_y\sigma_1-(a+b){\hat p}_x\sigma_2\right]{\hat\Delta}^{-1},\quad {\hat s}^2=\mathds1. \end{split} \end{equation} The original disorder-free Hamiltonian in the rotated CS may be written in the form ${\hat p}^2/(2m)+{\hat s}{\hat\Delta}/2$; it possesses the following eigensystem: \begin{eqnarray} |\vec p,s\rangle&=&\begin{pmatrix}\frac{is\Delta_{\vec p}/2}{(a+b)p_x+i(a-b)p_y}\\1\end{pmatrix}|\vec p\rangle,\quad s=\pm1,\\ E_{\vec p,s}&=&\langle\vec p,s|{\hat H}_0|\vec p,s\rangle=\frac{p^2}{2m}+\frac{s\Delta_{\vec p}}2,\\ \label{spektr} \Delta_{\vec p}&=&\langle\vec p,s|{\hat\Delta}|\vec p,s\rangle=2\sqrt{p_x^2(a+b)^2+p_y^2(a-b)^2}. \end{eqnarray} We see that the simultaneous presence of Rashba and Dresselhaus SOI leads to an anisotropy of the energy spectrum.\cite{JC:2006,anisotrCond} On the other hand,~\eqref{spektr} is symmetric with respect to the exchange $a\leftrightarrow b$; consequently, the same is true for the SOI-induced corrections to the conductivity tensor.
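The rotation identity in~\eqref{neuSOI} and the spectrum~\eqref{spektr} can be checked symbolically. The following sketch (our own verification, using the sympy Python library as a tooling assumption, with explicit Pauli matrices) confirms that $R^z_{-\pi/4}CR^z_{\pi/4}=\mathrm{diag}(-1,1,-1)$ and that $V_s^2=(\Delta_{\vec p}/2)^2\sigma_0$, which immediately gives the two eigenvalues $\pm\Delta_{\vec p}/2$ of the SOI term:

```python
import sympy as sp

def Rz(t):
    """Rotation by angle t around the z-axis (counterclockwise convention)."""
    return sp.Matrix([[sp.cos(t), -sp.sin(t), 0],
                      [sp.sin(t),  sp.cos(t), 0],
                      [0,          0,         1]])

C = sp.Matrix([[0, -1, 0], [-1, 0, 0], [0, 0, -1]])

# R^z_{-pi/4} C R^z_{pi/4} = diag(-1, 1, -1)
M = (Rz(-sp.pi / 4) * C * Rz(sp.pi / 4)).applyfunc(sp.simplify)
assert M == sp.diag(-1, 1, -1)

# V_s^2 = (Delta_p/2)^2 sigma_0, so the SOI term has eigenvalues +/- Delta_p/2
a, b, px, py = sp.symbols('a b p_x p_y', real=True)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
Vs = (a - b) * py * s1 - (a + b) * px * s2
Delta_half_sq = (a + b)**2 * px**2 + (a - b)**2 * py**2
assert (Vs * Vs - Delta_half_sq * sp.eye(2)).expand() == sp.zeros(2, 2)
```

Since $V_s$ is traceless and squares to $(\Delta_{\vec p}/2)^2\sigma_0$, its eigenvalues are $\pm\Delta_{\vec p}/2$, reproducing $E_{\vec p,s}=p^2/2m+s\Delta_{\vec p}/2$.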
We note that helicity is invariant with respect to the time reversal: \begin{equation} \sigma_2s_{-\vec p}^T\sigma_2=s_{\vec p}\equiv\langle \vec p|\hat s|\vec p\rangle.\label{invSpi} \end{equation} In the rest of the paper we perform calculations in the coordinate system rotated by $\pi/4$ in the $xy$ plane, where the (unperturbed) Hamiltonian is given by \begin{equation}\label{RDrotH} {\hat H}=\frac{{\hat p}^2}{2m}+\frac{{\hat s}{\hat\Delta}}2+U(\vec r), \end{equation} where $U(\vec r)=U'(R^z_{\pi/4}\vec r)$ is the disorder potential in the rotated coordinate system. Our assumption~\eqref{iznBes} that $U'(\vec r)$ is a $\delta$-correlated random disorder potential is inherited by the disorder potential in the rotated coordinate system: \begin{equation} \overline{U(\vec r)U(\vec r')}=\frac{\hbar^2}{2\pi\nu\tau}\delta(\vec r-\vec r').\label{fiMod} \end{equation} The Hamiltonian \eqref{RDrotH} defines the velocity operator $\hat{\vec v}$ together with the ``fictitious'' vector potential $\vec{\tilde A}$: \begin{equation} \label{hamlo}\begin{split} \hat{\vec v}=&\frac i\hbar[\hat H,\vec r]=\frac{\hat{\vec p}}m -\frac e{mc}\vec{\tilde A},\\ \vec{\tilde A}=&\frac{mc}e\left[(a+b)\sigma_2,(b-a)\sigma_1,0\right], \end{split}\end{equation} so that \begin{equation} \frac{{\hat p}^2}{2m}+\frac{{\hat s}{\hat\Delta}}2=\frac{m{\hat v}^2}2-m(a^2+b^2).\label{eHtS} \end{equation} The strength of the SOI can be characterized by dimensionless Rashba and Dresselhaus amplitudes introduced as follows: \begin{equation}\label{dRDa} x_a=2p_F a\tau/\hbar,\quad x_b=2p_F b\tau/\hbar,\quad p_F=\sqrt{2m\mu}, \end{equation} where $\mu$ is the (temperature-dependent) chemical potential.
Alternatively, the SOI can be characterized by another set of two dimensionless parameters: \begin{equation} x=\sqrt{x_a^2+x_b^2},\quad\delta=\frac{2ab}{a^2+b^2},\quad -1\le\delta\le1, \label{dRDaAlt} \end{equation} which characterize the ``total'' SOI amplitude and the anisotropy of the energy spectrum \eqref{spektr}, respectively. The choice between the two parameter sets~\eqref{dRDa} and~\eqref{dRDaAlt} becomes important when we have to expand expressions in Taylor series. The spectrum splitting $\Delta_{\vec p}$ defined in~\eqref{spektr} illustrates the advantage of the choice~\eqref{dRDaAlt}: while the Taylor expansion of $\Delta_{\vec p}$ in powers of $(x,\delta)$ is uniform, its expansion in the parameters~\eqref{dRDa} is non-uniform: the result depends on whether one expands first in $x_a$ and then in $x_b$, or vice versa. \section{Averaged Green function in the self-consistent Born approximation\label{sec:GF}} In our calculations, we use the electron-gas model of the Fermi liquid\cite{Abrikosov}. In the absence of SOI and applied electric field, the disorder-averaged Green functions are obtained from the self-consistent Born approximation~\cite{AGD}: \begin{equation}\label{agf} g_{\mathrm{r/a}}^E(\vec p)=\left[E-\xi_{\vec p}\pm\frac i{2\tau}\right]^{-1},\quad\hbar\xi_{\vec p}=\frac{p^2}{2m}-\mu, \end{equation} where $\tau$ has a weak ($\sim E\hbar/\mu$) dependence on the frequency $E$.
The presence of the SOI changes the expression for the Green function (GF) from~\eqref{agf} into \begin{equation}\label{grSOI}\begin{split} G^E_{\mathrm R}(\vec p)=&g_r^E(\vec p)\sum_{n\ge0}\left[V_s(\vec p)g_r^E(\vec p)/\hbar\right]^n\\ =&\left\{\sigma_0\left[g_r^E(\vec p)\right]^{-1}-\frac{s_{\vec p}\Delta_{\vec p}}{2\hbar}\right\}^{-1}\\ =&\frac12\left[\left(\sigma_0+s_{\vec p}\right)g_r^{E-}(\vec p)+\left(\sigma_0-s_{\vec p}\right)g_r^{E+}(\vec p)\right], \end{split}\end{equation} where $\sigma_0$ is the $2\times2$ unity matrix, and \begin{equation} g_r^{E\pm}(\vec p)=\left\{\left[g_r^E(\vec p)\right]^{-1}\pm\frac{{\Delta_{\vec p}}}{2\hbar}\right\}^{-1}=\left[g_a^{E\pm}(\vec p)\right]^*.\label{grapm} \end{equation} From \eqref{grSOI} we see that the averaged GF for the Hamiltonian \eqref{RDrotH} can be obtained from the GF of the disorder-free Hamiltonian by substituting the infinitesimal ``epsilon'' in the denominator with $(2\tau)^{-1}$. Thus, for energies close to $\mu$, the averaged GFs differ strongly from those of the disorder-free system: near the Fermi level the GFs are strongly modified by the disorder. In fact, Eq.~\eqref{agf} is the result of the summation of an infinite perturbation series. We calculate the universal contribution~\eqref{KuboFormula} to the conductivity tensor (see Sec.~\ref{sec:universal} below); the corresponding momentum integrals converge in the vicinity of the Fermi level, so that the momentum arguments of all Green functions are close to $p_F$. The assumption $p\approx p_F$ simplifies the SOI-term $V_s$ in the GF-expression~\eqref{grSOI}.
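The last line of \eqref{grSOI} is the projector decomposition of the matrix GF onto the two helicity bands; since it relies only on ${\hat s}^2=\mathds1$, it is easy to verify numerically (the values of $[g_r^E]^{-1}$ and $\Delta_{\vec p}$ below are arbitrary test numbers):

```python
import numpy as np

sig1 = np.array([[0, 1], [1, 0]], dtype=complex)
sig2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# a helicity-like matrix with s @ s = 1 (any unit combination of sigma_1,2)
s = 0.6*sig1 + 0.8*sig2
assert np.allclose(s @ s, I2)

g_inv = 0.3 - 0.5j   # test value for [g_r^E]^{-1}
Delta = 0.7          # test value for Delta_p / hbar

G = np.linalg.inv(g_inv*I2 - 0.5*Delta*s)            # second line of (grSOI)
g_minus = 1/(g_inv - 0.5*Delta)                      # g_r^{E-}
g_plus = 1/(g_inv + 0.5*Delta)                       # g_r^{E+}
G_proj = 0.5*((I2 + s)*g_minus + (I2 - s)*g_plus)    # third line of (grSOI)
assert np.allclose(G, G_proj)
```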
In the zeroth order (in powers of $\hbar\xi_{\vec p}/\mu\ll1$) \begin{equation}\label{genCubic} V_s(\vec p)\approx p_F\left[(a-b)\sigma_1\sin\phi-(a+b)\sigma_2\cos\phi\right], \end{equation} where $\phi$ is the angular coordinate of $\vec p$. This approximation is sufficient for the calculation of the weak localization and two-loop corrections in Secs.~\ref{sec:WL} and~\ref{sec:twoLoop}. However, in the calculation of the zero-loop contribution (see Sec.~\ref{sec:ZLA}), higher accuracy is required: \begin{equation}\begin{split} V_s(\vec p)\approx&p_F\left(1+\frac{\hbar\xi_{\vec p}}{2\mu}\right)\times\\ \times&\left[(a-b)\sigma_1\sin\phi-(a+b)\sigma_2\cos\phi\right]. \end{split}\end{equation} \section{(Non)universal contributions to the conductivity\label{sec:universal}} We calculate the universal (i.e., independent of the details of the energy spectrum far from the Fermi level) corrections to the conductivity tensor. The latter quantity is derived in linear response to the applied electric field (see Sec.~\ref{sec:Kubo}). In the diagrammatic approach, the SOI-induced correction to the conductivity can be graphically represented as a sum of diagrams. The contribution of an individual diagram is initially expressed as an integral over both frequency and momentum of a combination of Green functions and the distribution function. We call such an integral \emph{universal} if its leading contribution comes from the part of the integration space where all momentum and frequency arguments of the GFs in the integrand are close to the Fermi level. In momentum space this means $|p-p_F|\lesssim\hbar/l$; in frequency space, $\hbar|E|\lesssim T$. ($p_F$ is the Fermi momentum, and $T$ is the temperature in equilibrium or the effective temperature in a non-equilibrium case.)
Then the integration in momentum space can be performed assuming a constant averaged DoS $\nu$ and approximating $\int\ud^2p/(2\pi\hbar)^2\approx\nu\int_{-\infty}^\infty\ud\xi$, where in 2D $\nu=m/(2\pi\hbar)$. According to Fermi-liquid theory, only electrons with energies near the Fermi level behave like a free electron gas, so that the effect of the interaction between electrons can be disregarded. Thus only the universal corrections are expected to give reasonable physical results. The non-universal contributions (i) cannot be reliably calculated and (ii) cannot cancel universal contributions. In the diagrammatics, arbitrary universal contributions to Hikami boxes can be calculated. Unfortunately, this is not true for non-universal corrections: some of them can be considered within the diagrammatics\cite{note:NUC}, while others are too complicated to be calculated. The impossibility of taking into account all non-universal contributions may leave an impression of imperfection of the diagrammatic technique. However, one should note that the situation is no better in the non-linear $\sigma$-model,\cite{Kamenev2} where all the approximations we use in the diagrammatics are required as well. \section{Non-equilibrium Kubo formula in Keldysh technique\label{sec:Kubo}} The (mean) current density in a system characterized by a one-particle density matrix (DM) $\hat\rho$ is given by \begin{equation}\label{tokFQ} \vec j(t)=\mathop{\mathrm{Tr}}\left[{\hat\rho}(t){\hat{\vec j}}\right],\quad\hat{\vec j}=e\hat{\vec v}, \end{equation} where $\hat{\vec j}$ and $\hat{\vec v}$ denote the current and velocity operators.
We proceed with calculations in the momentum representation: \begin{equation}\label{tokPKop}\begin{split} \vec j(t)=\int\frac{\ud^2p}{(2\pi\hbar)^2}\int\frac{\ud^2p'}{(2\pi\hbar)^2}\mathop{\mathrm{Tr}}_{\mathrm{spin}}\left[\langle\vec p|\hat\rho(t)|\vec p'\rangle \langle\vec p'|\hat{\vec j}|\vec p\rangle\right],\\ \langle\vec p'|\hat{\vec j}|\vec p\rangle=\delta(\vec p-\vec p')e\frac{\partial H(\vec p,\vec r)}{\partial\vec p},\quad \hat H={\hat H}_0+\delta\hat V, \end{split}\end{equation} where ${\hat H}_0$ is the unperturbed Hamiltonian, and the perturbation term $\delta\hat V$ describes the applied electric field, see~\eqref{perturbation} below. It is convenient to express the DM in terms of GFs (see \S2.1 of Ref.~\onlinecite{Kadanoff}): \begin{eqnarray} \langle\lambda|\hat\rho(t)|\lambda'\rangle=\langle{\hat\psi}^\dag(\lambda;t)\hat\psi(\lambda';t)\rangle =-i\lim_{t'\to t}\langle\lambda'|{\hat G}^<(t',t)|\lambda\rangle=\nonumber\\ =-\frac i2\lim_{t'\to t}\langle\lambda'|\left[{\hat G}_K-\left({\hat G}_R-{\hat G}_A\right)\right](t',t)|\lambda\rangle= -\int_{-\infty}^\infty\frac{\ud\omega}{2\pi}e^{-i\omega t}\nonumber\\ \times\frac i2\int_{-\infty}^\infty\frac{\ud E}{2\pi}\langle\lambda'|\left[{\hat G}_K-\left({\hat G}_R-{\hat G}_A\right)\right](E,E-\omega)|\lambda\rangle,\quad \label{perGF}\end{eqnarray} where $\lambda\equiv(\vec p,s)$. We assume that the unperturbed DM~${\hat\rho}^{(0)}$ is stationary (though not necessarily in equilibrium) and is characterized by the energy distribution function $f_E$. Thus, the zero-order DM is time-independent, and the zero-order GFs are homogeneous in time. The perturbation [see~\eqref{perturbation} below] affects both $\hat\rho(t)$ and $\hat{\vec j}$ in~\eqref{tokFQ}.
We call the correction to $\hat{\vec j}$ the ``diamagnetic part of the current operator'' ${\hat{\vec j}}_D$; the unperturbed part of~\eqref{tokPKop} we call the ``normal part of the current operator'' ${\hat{\vec j}}_N$: \begin{equation}\label{jND}\begin{split} \langle\vec p'|\hat{\vec j}_D|\vec p\rangle=&\delta(\vec p-\vec p')e\frac{\partial\delta V}{\partial\vec p},\\ \langle\vec p'|{\hat{\vec j}}_N|\vec p\rangle=&\delta(\vec p-\vec p')e\frac{\partial H_0}{\partial\vec p}. \end{split}\end{equation} When the system is perturbed by an external electric field $\vec E=-\frac1c\frac{\partial\vec A}{\partial t}$, the perturbation operator is given by \begin{equation}\label{perturbation}\begin{split} \langle&\vec p'|\delta\hat V|\vec p\rangle =\delta(\vec p-\vec p')\frac1{2m} \left\{\left[\vec p-\frac ec\left(\vec A+\vec{\tilde A}\,\right)\right]^2\right.\\ -&\left.\left[\vec p-\frac ec\vec{\tilde A}\right]^2\right\}\approx -\frac e{mc}\vec A\left[\vec p-\frac ec\vec{\tilde A}\right]\delta(\vec p-\vec p')\\ =&-\frac1c\vec A\langle\vec p'|{\hat{\vec j}}_N|\vec p\rangle,\quad \langle\vec p'|{\hat{\vec j}}_D|\vec p\rangle=-\frac{e^2}{mc}\delta(\vec p-\vec p')\vec A. \end{split}\end{equation} Let us denote the Keldysh-contour time-ordered GF by $\hat G$.
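The expansion in \eqref{perturbation} drops only the $O(A^2)$ term; with the matrix-valued $\vec{\tilde A}$ from \eqref{hamlo}, this bookkeeping can be verified symbolically (a sketch with arbitrary symbols, not part of the derivation):

```python
import sympy as sp

px, py, a, b, m, e, c, Ax, Ay = sp.symbols('px py a b m e c Ax Ay', real=True)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
I2 = sp.eye(2)

# fictitious vector potential, Eq. (hamlo)
Atx = m*c/e*(a + b)*s2
Aty = m*c/e*(b - a)*s1

Xx = px*I2 - e/c*Atx     # x component of p - (e/c)A~
Xy = py*I2 - e/c*Aty     # y component of p - (e/c)A~

# exact difference of the two squares in Eq. (perturbation)
full = ((Xx - e/c*Ax*I2)**2 + (Xy - e/c*Ay*I2)**2 - Xx**2 - Xy**2)/(2*m)
linear = -e/(m*c)*(Ax*Xx + Ay*Xy)                 # the term kept in (perturbation)
quadratic = e**2*(Ax**2 + Ay**2)/(2*m*c**2)*I2    # the dropped O(A^2) term
assert (full - linear - quadratic).expand() == sp.zeros(2, 2)
```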
The applied electric field affects $\hat G$; the first-order correction is given by \begin{equation}\delta^{(1)}\hat G(E,E-\omega)={\hat G}^E\left[-\frac e{c\hbar}\hat{\vec v}\vec A_\omega\right]{\hat G}^{E-\omega}.\label{ktoGF}\end{equation} Expressing~\eqref{ktoGF} in the usual $2\times2$-matrix form\cite{Rammer}, we get the perturbative expressions for the GFs in~\eqref{perGF}: \begin{equation}\begin{split} \delta^{(1)}&({\hat G}_R-{\hat G}_A)(E,E-\omega)\\ &=-\frac{eA_\omega^\beta}{c\hbar}\left[{\hat G}^E_R{\hat v}_\beta{\hat G}^{E-\omega}_R-{\hat G}^E_A{\hat v}_\beta{\hat G}^{E-\omega}_A\right], \end{split}\end{equation} and \begin{equation}\begin{split}\hspace{-1ex} \delta^{(1)}&{\hat G}_K(E,E-\omega)=-\sum_{\beta=1}^2\frac{eA_\omega^\beta}{c\hbar}\left[\left(h_E-h_{E-\omega}\right){\hat G}^E_R{\hat v}_\beta{\hat G}^{E-\omega}_A\right.\\ +&\left.h_{E-\omega}{\hat G}^E_R{\hat v}_\beta{\hat G}^{E-\omega}_R-h_E{\hat G}^E_A{\hat v}_\beta{\hat G}^{E-\omega}_A\right],\quad h_E=1-2f_E. \end{split}\end{equation} In equilibrium \begin{equation} h_E=\begin{cases} 0,&\ E\hbar<-\mu\\ \tanh\frac{E\hbar}{2T},&\ E\hbar\ge-\mu\end{cases}, \end{equation} where $T$ is the temperature (measured in energy units).
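In equilibrium the relation $h_E=1-2f_E$ with the Fermi distribution indeed reduces to the $\tanh$ form above; a one-line numerical check (the value of $T$ is arbitrary):

```python
import numpy as np

T = 0.3                      # temperature in energy units (test value)
E = np.linspace(-5, 5, 101)  # energies E*hbar, measured from mu
f = 1/(np.exp(E/T) + 1)      # equilibrium Fermi distribution f_E
h = 1 - 2*f
assert np.allclose(h, np.tanh(E/(2*T)))
```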
We split the DM into ``normal'' and ``diamagnetic'' parts [like we did with the current operator in~\eqref{jND}]: \begin{equation}\label{rhoND}\begin{split} \langle\lambda|&\delta\hat\rho_N(\omega)|\lambda'\rangle=\frac i2\sum_{\beta=1}^2\frac{eA_\omega^\beta}{c\hbar}\times\\ \times&\int_{-\infty}^\infty\frac{\ud E}{2\pi} \langle\lambda'|\left[\left(h_E-h_{E-\omega}\right){\hat G}^E_R{\hat v}_\beta{\hat G}^{E-\omega}_A\right]|\lambda\rangle, \end{split}\end{equation} \begin{equation}\label{rhoDiam}\begin{split} \langle\lambda|&\delta\hat\rho_D(\omega)|\lambda'\rangle= \\ &=\frac i2\sum_{\beta=1}^2\frac{eA_\omega^\beta}{c\hbar}\int_{-\infty}^\infty\frac{\ud E}{2\pi} \langle\lambda'|\left[(h_{E-\omega}-1){\hat G}^E_R{\hat v}_\beta{\hat G}^{E-\omega}_R\right.\\ &-\left.(h_E-1){\hat G}^E_A{\hat v}_\beta{\hat G}^{E-\omega}_A\right]|\lambda\rangle. \end{split}\end{equation} Then we rewrite~\eqref{tokFQ} in the frequency space, substituting $\hat\rho(\omega)={\hat\rho}^{(0)}+\delta\hat\rho_N(\omega)+\delta\hat\rho_D(\omega)$: \begin{equation}\label{SpND}\begin{split} \vec j(\omega)=\mathop{\mathrm{Tr}}\left[\left( {\hat\rho}^{(0)}+\delta{\hat\rho}_N(\omega)+\delta{\hat\rho}_D(\omega)\right) \left({\hat{\vec j}}_N+{\hat{\vec j}}_D\right)\right]\\ \approx \mathop{\mathrm{Tr}}\left[{\hat\rho}^{(0)}{\hat{\vec j}}_N+{\hat\rho}^{(0)}{\hat{\vec j}}_D+ \delta{\hat\rho}_N(\omega){\hat{\vec j}}_N+\delta{\hat\rho}_N(\omega){\hat{\vec j}}_D\right], \end{split}\end{equation} where we neglected terms non-linear in the perturbation. Using Eqs.~\eqref{perGF}, \eqref{jND}, and~\eqref{rhoND} we can calculate~\eqref{SpND} in the momentum representation.
From \begin{equation}\label{CancTrick}\begin{split}\hspace{-2ex} \mathop{\mathrm{Tr}}\left[v_i{\hat G}^E_R v_j{\hat G}^E_R\right]=\frac i\hbar&\mathop{\mathrm{Tr}}\left\{\left(r_i[{\hat G}^E_R]^{-1}-[{\hat G}^E_R]^{-1}r_i\right){\hat G}^E_R v_j{\hat G}^E_R\right\} \\ =\frac i\hbar\mathop{\mathrm{Tr}}([r_i,v_j]{\hat G}^E_R)&=\frac i\hbar\mathop{\mathrm{Tr}}\left(\left[r_i,\frac{{\hat p}_j}m\right]{\hat G}^E_R\right)=-\delta_{ij}\mathop{\mathrm{Tr}}{\hat G}^E_R \end{split}\end{equation} we conclude that \begin{equation} \text{at }\omega=0\quad \mathop{\mathrm{Tr}}\left[\delta{\hat\rho}_D\hat{\vec j}_N+{\hat\rho}^{(0)}{\hat{\vec j}}_D\right]=0.\label{diacan} \end{equation} Both the equilibrium\cite{AmbegaokarEckern} and non-equilibrium\cite{PRLoctMMII} contributions to the persistent current in a mesoscopic ring are given by~$\mathop{\mathrm{Tr}}\left[{\hat\rho}^{(0)}{\hat{\vec j}}_N\right]$, while~$\mathop{\mathrm{Tr}}\left[\delta{\hat\rho}_N(\omega){\hat{\vec j}}_N\right]$ is the linear response to the applied electric field. In the rest of this section we assume that $\vec A$ is directed along the $\beta$-axis, and we measure the charge current in the $\alpha$-direction.
Then (for an arbitrary energy distribution~$f_E$) \begin{equation}\label{KuboFormula} \begin{split} \sigma_{\alpha\beta}(\omega)&=\sigma^{\mathrm N}_{\alpha\beta}(\omega)+\sigma^{\mathrm D}_{\alpha\beta}(\omega),\quad \sigma^{\mathrm N}_{\alpha\beta}(\omega)\gg\sigma^{\mathrm D}_{\alpha\beta}(\omega),\\ \hspace{-1ex} \sigma^{\mathrm N}_{\alpha\beta}(\omega)=&\frac c{i\omega A_\omega}\mathop{\mathrm{Tr}}\left[\delta\hat\rho_N(\omega){\hat j}^{\,\alpha}_N\right]= \frac{e^2}h\mathop{\mathrm{Tr}}\left[{\hat v}_\alpha{\hat G}_{\mathrm R}^E{\hat v}_\beta{\hat G}_{\mathrm A}^{E-\omega}\right], \end{split}\end{equation} where we assumed that the (momentum) trace is $E$-independent. (This assumption is valid for all universal quantities except for the Drude conductivity; see Appendix~\ref{app:leadingCond}.) The diamagnetic correction to the conductivity is given by \begin{equation} \label{correctionsToKubo} \begin{split} \sigma^{\mathrm D}_{\alpha\beta}(\omega)=\frac c{i\omega A_\omega}\mathop{\mathrm{Tr}}\left[\delta\hat\rho_D(\omega){\hat j}^{\,\alpha}_N-\delta\hat\rho_D(0){\hat j}^{\,\alpha}_N\right]\\ =\frac{e^2}h\int_{-\infty}^\infty\frac{\ud E}\omega f_E\left\{ \mathop{\mathrm{Tr}}\left[{\hat v}_\alpha{\hat G}_{\mathrm A}^E{\hat v}_\beta\left({\hat G}_{\mathrm A}^{E-\omega}-{\hat G}_{\mathrm A}^E\right)\right] \right.\\\left.- \mathop{\mathrm{Tr}}\left[{\hat v}_\alpha\left({\hat G}_{\mathrm R}^{E+\omega}-{\hat G}_{\mathrm R}^E\right){\hat v}_\beta{\hat G}_{\mathrm R}^E\right]\right\}.
\end{split}\end{equation} In the frequency integral $\int_{-\infty}^\infty\ud E/(2\pi)$ in~\eqref{correctionsToKubo}, large negative frequencies $E\sim-\mu/\hbar$ give an important contribution to the result, so that the $E$-dependence of $\tau$ in~\eqref{agf} cannot be neglected; this complicates the calculation of~\eqref{correctionsToKubo}. The SOI-dependent part of~\eqref{correctionsToKubo} is non-universal (the momenta of the GFs are not confined to the vicinity of $p_F$). Since~\eqref{correctionsToKubo} does not contain products of different (retarded and advanced) GFs, it cannot contain diffusons or cooperons; hence it is incoherent and cannot produce corrections to the conductivity tensor of the same order in the SOI as our results below [see Eqs.~\eqref{leadingAnis}, \eqref{ani1D}, and~\eqref{piri}]. In what follows, we study the universal contribution~\eqref{KuboFormula} and do not calculate~\eqref{correctionsToKubo}. An attempt to use the Kubo formula~\eqref{KuboFormula} for calculating the leading (Drude) contribution to the conductivity leads to divergences. In fact, \eqref{KuboFormula} is valid only for calculating \emph{corrections} (due to $\omega\ne0$, SOI, interactions, etc.) to the main (Drude) conductivity value. See Appendix~\ref{app:leadingCond} for the details of calculating the Drude conductivity. We derived the universal contribution to the conductivity tensor~\eqref{KuboFormula} for the general case of a \emph{non-equilibrium stationary} distribution function. The result~\eqref{KuboFormula} is the same as the one derived for the equilibrium case~\cite{JRammerQTT}. Thus, we see that \emph{corrections} to the conductivity are independent of the distribution function~$f_E$. Note that this is not true for the leading (Drude) conductivity~\eqref{inhCond}, which does depend on~$f_E$. In what follows, we always perform calculations in the rotated coordinate system, where the spin-orbit part of the Hamiltonian is given by~\eqref{neuSOI}.
In this coordinate system, the conductivity tensor is diagonal in all considered geometries; its anisotropic part is proportional to $\sigma_3$. [See the discussion after~\eqref{Aexpr}.] We denote the isotropic and anisotropic parts of the conductivity tensor $\sigma$ by the symbols $\sigma_{\mathrm{is}}$ and $\sigma_{\mathrm{an}}$: \begin{equation}\label{defSan} \sigma=\sigma_{\mathrm{is}}\sigma_0+\sigma_{\mathrm{an}}\sigma_3. \end{equation} In an arbitrary coordinate system, the (an)isotropic properties of a 2D symmetric tensor can be characterized by two non-negative scalars $\sigma_{\mathrm{is}}$ and $|\sigma_{\mathrm{an}}|$, the isotropic and anisotropic amplitudes, defined by \begin{equation} \label{defSanInv} \begin{split} \sigma_{\mathrm{is}}=&\frac12\mathop{\mathrm{Tr}}\sigma,\quad |\sigma_{\mathrm{an}}|=\sqrt{\mathop{\mathrm{Tr}}\left[\left(\sigma-\sigma_0\sigma_{\mathrm{is}}\right)^2/2\right]} \\ =&\sqrt{\left\{\mathop{\mathrm{Tr}}\left[\sigma\frac{\sigma_1}2\right]\right\}^2+\left\{\mathop{\mathrm{Tr}}\left[\sigma\frac{\sigma_3}2\right]\right\}^2}. \end{split}\end{equation} It is easy to check that both $\sigma_{\mathrm{is}}$ and $|\sigma_{\mathrm{an}}|$ are independent of the choice of the coordinate system. Finally, we give explicit expressions for the anisotropic part~$\sigma_{\mathrm{an}}$ of the conductivity in the original coordinate system and in the system rotated by $\pi/4$: \begin{equation} \label{explAN} \begin{split} \text{in the original CS }\sigma_{\mathrm{an}}&=\sigma_{xy},\text{ and}\\ \text{in the rotated CS }\sigma_{\mathrm{an}}&=\frac{\sigma_{xx}-\sigma_{yy}}2.\\ \end{split}\end{equation} \section{The loop expansion\label{sec:le}} It is convenient to represent the different contributions to the averaged conductivity in graphical form as diagrams.
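The claimed rotation invariance of the amplitudes in \eqref{defSanInv} is easy to confirm numerically (a sketch with a random symmetric tensor and an arbitrary rotation angle):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=(2, 2))
t = (t + t.T)/2                       # a symmetric 2D conductivity tensor

def iso_aniso(sig):
    """Isotropic and anisotropic amplitudes, first line of Eq. (defSanInv)."""
    iso = np.trace(sig)/2
    dev = sig - iso*np.eye(2)
    return iso, np.sqrt(np.trace(dev @ dev)/2)

theta = 0.77                          # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t_rot = R @ t @ R.T

assert np.allclose(iso_aniso(t), iso_aniso(t_rot))

# the second line of (defSanInv), written via Pauli matrices, agrees
p1 = np.array([[0, 1], [1, 0]])
p3 = np.array([[1, 0], [0, -1]])
alt = np.hypot(np.trace(t @ p1)/2, np.trace(t @ p3)/2)
assert np.isclose(iso_aniso(t)[1], alt)
```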
The simplest (bubble) diagram [see Fig.~\ref{fZLA:a}]\cite{note:Drude} is produced from the Kubo formula~\eqref{KuboFormula} by substituting ${\hat G}_{\mathrm R}$ and ${\hat G}_{\mathrm A}$ with the averaged GFs $G_R$ [given by~\eqref{grSOI}] and $G_A=\left[G_R\right]^\dag$. The bubble has neither diffuson nor cooperon lines. One can proceed by connecting the retarded GF $G_R$ of the bubble with the advanced one $G_A$ by a cooperon or diffuson ladder in the two possible ways. Doing so, we obtain the diagrams depicted in Figs.~\ref{fCondDiags} and~\ref{fZLA:b}, containing one cooperon and one diffuson. Adding more and more diffuson or cooperon lines in all possible ways, one obtains an infinite number of diagrams. In this section we describe how the most important diagrams can be selected out of this infinity for further calculation. \subsection{Two ways of drawing diagrams\label{sec:twoWays}} \newcommand{\evA}{-e\vec v\vec A}\newcommand{\jcl}{\hat{\vec j}} \begin{figure} \subfigure[]{\label{fCondDiags:a}\begin{minipage}{.4\columnwidth}\resizebox{\textwidth}{!}{\input{cond1.ins.tex}}\end{minipage}}\hspace{.1\columnwidth} \subfigure[]{\label{fCondDiags:b}\begin{minipage}{.4\columnwidth}\resizebox{\textwidth}{!}{\input{cond1a.ins.tex}}\end{minipage}}\hspace{.1\columnwidth} \mycaption{Two representations of the weak localization diagram, cf. Fig.~4.8 from Ref.~\onlinecite{montambauxBook}.
We call Fig.~\ref{fCondDiags:a} the ``ladder representation'' and Fig.~\ref{fCondDiags:b} the ``coordinate representation''.\label{fCondDiags}} \end{figure} In Fig.~\ref{fCondDiags}, the same (weak localization) diagram is drawn in two equivalent representations: on the lhs, the cooperon is drawn in the ``ladder'' form [see the lhs of Fig.~\ref{fCoopDiff:b}], while on the rhs the ``coordinate'' form [a wavy line with two ends, see the rhs of Fig.~\ref{fCoopDiff:b}] is used. The weak localization diagram is usually drawn in the lhs (ladder) form (or as the topologically equivalent ``bubble with maximally anti-crossing disorder-averaging lines'', cf. Fig.~4.8 from Ref.~\onlinecite{montambauxBook}). Below we use the rhs (coordinate) form [its advantages are discussed in Sec.~\ref{sec:defCD} below]; Fig.~\ref{fCoopDiff} gives a recipe for transforming a diagram from one form into the other and back. A diagram in the coordinate representation consists of Green function boxes (bubbles, triangles, squares, pentagons, etc.) connected by wavy lines (cooperons and/or diffusons). A vertex of a Green function box (GFB) may be occupied by (i) an observable operator, (ii) an external field operator, or (iii) the end of a cooperon or diffuson line. \subsection{Loops formed by cooperons and diffusons\label{sec:loops}} There are two important momentum scales in the disorder-averaging technique: (i) the characteristic absolute value of the momentum argument of the averaged GFs, $p\sim p_F$, and (ii) $\hbar/l\ll p_F$. ($l$ is the mean free path of an electron between two subsequent elastic scatterings off impurities.)
Momentum integrals of products of GFs of the form \begin{equation} \int\frac{\ud^2p}{(2\pi\hbar)^2}\prod_{i=1}^rG_R\left(\vec p-{\vec q}_i\right)\prod_{j=1}^aG_A\left(\vec p-{\vec q}_j\right)\end{equation} usually converge within the interval $p_F-\hbar/l\lesssim p\lesssim p_F+\hbar/l$; hence $\hbar/l$ characterizes the deviation of the momentum argument of an averaged GF~\eqref{agf} or~\eqref{grSOI} from $p_F$. The assumption that the ``large'' momentum $p_F$ is much larger than the ``small'' momentum $\hbar/l$ is crucial for the disorder-averaging technique, since $(p_F l/\hbar)^{-1}\ll1$ is its main expansion parameter (see Sec.~\ref{sec:compareTwo} below). The mean free path $l$ is also the scale on which the averaged GFs \eqref{agf} and \eqref{grSOI} decay, e.g., in 3D $G_{\mathrm{R/A}}(\vec r-\vec r')\propto\exp[-|\vec r-\vec r'|/l]$. We can interpret this by saying that the length of a Green function line in our diagrams is $l$. Within the disorder-averaging technique we cannot observe effects on scales shorter than $l$; i.e., saying that the length of a GF-line is $l$ is almost the same as saying that this length is zero. Thus we can consider a GF-line not as a line, but as a point. Now if we draw a diagram in its ``coordinate representation'' (see Sec.~\ref{sec:twoWays}) and squeeze all Green function lines into points, the result will contain only cooperon or diffuson (CD) lines forming a certain number of loops. For example, the bubble in Fig.~\ref{fZLA:a} has no loops (since it has no CD lines which could form a loop); the weak localization (WL) diagram in Fig.~\ref{fCondDiags:b} has one loop, and all diagrams in Figs.~\ref{fdivtoPor} and~\ref{fOtherSL} have two loops.
\begin{figure} \subfigure[]{\label{fZLA:a} \begin{minipage}{.4\columnwidth}\resizebox{\textwidth}{!}{\input{drude.ins.tex}}\end{minipage}}\qquad \subfigure[]{\label{fZLA:b} \begin{minipage}{.4\columnwidth}\resizebox{\textwidth}{!}{\input{vertexRenorm.ins.tex}}\end{minipage}}\hspace{.07\columnwidth} \mycaption{Zero-loop diagrams: (a) the Drude bubble and (b) the vertex renormalization.\label{fZLA}} \end{figure} The number of CD-loops is equal to the number of independent ``small'' momentum variables (which are $\sim\hbar/l$), or, in other words, to the number of integrals over the CD-momentum variables. \subsection{Comparing two arbitrary diagrams\label{sec:compareTwo}} Let us estimate two arbitrary diagrams for the same physical quantity. An estimate for a GFB is $\propto\nu\tau^{h-1}$, where $h$ is the number of Green function lines composing the GFB. Every CD line carries a prefactor $(4\pi\nu\tau)^{-1}$ [see Sec.~\ref{sec:defCD} below]. We estimate the DoS as $\nu\sim m/(2\pi\hbar\lambdaslash^{d-2})$, where $\lambdaslash\sim\hbar/p_F$, and $d$ is the spatial dimension ($d=2$ or $d=3$). Let us denote by $L_{1,2}$, $H_{1,2}$, and $C_{1,2}$ the corresponding numbers of loops, GFBs, and CD-lines in the two diagrams under consideration; the quantities $h_{1j}$ denote the number of GF lines in the $j$th GFB of the first diagram, and $h_{2n}$ do the same for the second diagram. The calculation of diagrams is often much simpler in the diffusion approximation, i.e., under the assumption that $q^*l\ll\hbar$, where $q^*$ is the characteristic momentum of a CD line (i.e., a ``small'' momentum variable). [The validity of the diffusion approximation in our calculation arises from the assumption that $q^*l\sim x\hbar\ll\hbar$.] Sometimes a GFB gains an additional smallness of the order of $q^*l/\hbar\ll1$. One has to calculate a GFB in order to reveal how many of these ``extra'' factors of $q^*l/\hbar$ it has; this is not uniquely determined by the number of loops.
In the following estimates we assume $q^*l\sim\hbar$ in order not to mix up expansions in two different small parameters: $q^*l/\hbar\ll1$ and~$(p_F l/\hbar)^{-1}\ll1$. Then the ratio of two arbitrary diagrams is estimated as \begin{equation} \hspace{-3ex}\frac{\mathrm{1st\ diagram}}{\mathrm{2nd\ diagram}}\sim\frac {\int\prod_{i=1}^{L_1}\frac{\ud^dk_i}{(2\pi\hbar)^d}\left[\prod_{j=1}^{H_1}2\pi\nu\tau^{h_{1j}-1}\right]\left[\frac1{2\pi\nu\tau}\right]^{C_1}} {\int\prod_{l=1}^{L_2}\frac{\ud^dq_l}{(2\pi\hbar)^d}\left[\prod_{n=1}^{H_2}2\pi\nu\tau^{h_{2n}-1}\right]\left[\frac1{2\pi\nu\tau}\right]^{C_2}}. \end{equation} We use the fact that $L_i=C_i-H_i+1$ for $i=1,2$, and $\sum_{j=1}^{H_1}h_{1j}-\sum_{n=1}^{H_2}h_{2n}=2(C_1-C_2)$, so that \begin{equation}\begin{split}\label{boPa} &\frac{\mathrm{1st\ diagram}}{\mathrm{2nd\ diagram}}\sim \left[\frac1{(2\pi)^d}\left(\frac\hbar{p_F l}\right)^{d-1}\right]^{L_1-L_2},\quad d>1. \end{split}\end{equation} As we discussed above, apart from $(p_F l/\hbar)^{-1}\ll1$, there is an additional small expansion parameter $q^*l/\hbar\ll1$, so the total number of expansion parameters is two. (Later, in Sec.~\ref{sec:ff}, the number of small parameters becomes three.) The loop expansion predicts how large (or small) an arbitrary diagram is \emph{only} in powers of $(p_F l/\hbar)^{-1}$. To conclude, the statement that ``every loop brings a smallness $(p_F l/\hbar)^{-1}$'' is known in mesoscopics; for the diagrams produced by the non-linear $\sigma$-model\cite{KamenevAndreevNLSM} it is explained in Sec.~III.3.c of Ref.~\onlinecite{BelitzKirkpatrick}. However, we are not aware of earlier articles where this statement is justified for diagrams within the usual disorder-averaging diagrammatic technique; this was the reason for including Sec.~\ref{sec:le} in this text.
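A back-of-the-envelope illustration of \eqref{boPa} (a sketch; the value $p_Fl/\hbar=100$ is an arbitrary ``good metal'' number, not taken from the text):

```python
import numpy as np

def loop_factor(d, pFl):
    """Relative smallness brought by one extra CD-loop, Eq. (boPa):
    (2*pi)^(-d) * (hbar/(p_F l))^(d-1)."""
    return (2*np.pi)**(-d) * pFl**(1 - d)

# In 2D with p_F l/hbar = 100, each extra loop suppresses a diagram
# by roughly 2.5e-4; in 3D the suppression is even stronger.
f2 = loop_factor(2, 100.0)
assert 2e-4 < f2 < 3e-4
assert loop_factor(3, 100.0) < f2
```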
\section{Zero-frequency cooperon and diffuson in the presence of SOI\label{sec:CD}} Both the cooperon and the diffuson\cite{montambauxBook} can be graphically represented as a sum of ``ladders''; each ``ladder'' is given by a retarded Green function $G_R$ (bold line in our drawings) connected to an advanced one $G_A$ (bold line) by some number of disorder-averaging (dashed) lines. The elementary building block of every such ladder is made of one dashed line connecting $G_R$ with $G_A$. Below we discuss a convenient way of rearranging the spin indices in every building block of the ladder. \subsection{Separating spin indices} One can write the expression for two GF-lines connected by a disorder-averaging line in different ways; we will write it either as \begin{widetext} \begin{equation}\label{eblo} \begin{split} \insfigh{.16\columnwidth}{6.5ex}{di1}=\frac1{2\pi\nu\tau} \left[G_R(\vec p_1)G_R(\vec p_2)\right]_{\alpha\beta} \left[G_A(\vec p_1')G_A(\vec p_2')\right]_{\beta'\alpha'}\\ = \frac1{4\pi\nu\tau}\sum_{l=0}^3 \left[G_R(\vec p_1)\sigma_lG_A(\vec p_1')\right]_{\alpha\alpha'} \left[G_A(\vec p_2')\sigma_lG_R(\vec p_2)\right]_{\beta'\beta}, \end{split} \end{equation} or as \begin{equation}\label{ebloC} \begin{split} \insfigh{.16\columnwidth}{6.5ex}{co1}=\frac1{2\pi\nu\tau} \left[G_R(\vec p_1)G_R(\vec p_2)\right]_{\alpha\beta} \left[G_A(\vec p_1')G_A(\vec p_2')\right]_{\alpha'\beta'}\\ = \frac1{4\pi\nu\tau}\sum_{l=0}^3 \left[G_R(\vec p_1){\bar\sigma}_l^\dag G_{\mathrm A}^T(\vec
p_1')\right]_{\alpha\alpha'} \left[G_{\mathrm A}^T(\vec p_2'){\bar\sigma}_lG_R(\vec p_2)\right]_{\beta'\beta}, \end{split} \end{equation} where the following identity is used: \begin{equation}\label{borinoTozhdestvo} 2\delta_{s_1s_2}\delta_{s_3s_4}=\sum_{\alpha=0}^3\sigma^{s_1s_3}_\alpha{\sigma}^{s_4s_2}_\alpha= \sum_{\alpha=0}^3{\bar\sigma}^{s_1s_3}_\alpha\left({\bar\sigma}_\alpha^\dagger\right)^{s_4s_2},\quad {\bar\sigma}_\alpha\equiv\sigma_2\sigma_\alpha. \end{equation} Note that, in contrast to the first lines of \eqref{eblo} and~\eqref{ebloC}, the square brackets in their second lines contain spin and momentum variables belonging only to the pair of GFs on one side of the diagram (to the left or to the right of the central disorder-averaging line). This ``separation of spin indices'' effectively makes the lhs of the diagrams in~\eqref{eblo} and~\eqref{ebloC} independent of their rhs. Every diagram with cooperons or diffusons contains an infinite number of the elementary blocks \eqref{eblo} and/or~\eqref{ebloC}. Below we will always separate the spin variables in them according to~\eqref{eblo} or~\eqref{ebloC}.
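The spin-separation identity \eqref{borinoTozhdestvo} holds component by component for both the $\sigma_\alpha$ and the ${\bar\sigma}_\alpha=\sigma_2\sigma_\alpha$ forms; a direct numerical verification over all $2^4$ index combinations:

```python
import numpy as np

sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
bar = [sig[2] @ s for s in sig]       # bar sigma_alpha = sigma_2 sigma_alpha

d = np.eye(2)
for i1 in range(2):
    for i2 in range(2):
        for i3 in range(2):
            for i4 in range(2):
                lhs = 2*d[i1, i2]*d[i3, i4]
                plain = sum(s[i1, i3]*s[i4, i2] for s in sig)
                barred = sum(bs[i1, i3]*bs.conj().T[i4, i2] for bs in bar)
                assert np.isclose(lhs, plain)
                assert np.isclose(lhs, barred)
```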
\subsection{Defining cooperon and diffuson\label{sec:defCD}} Now let us transform \emph{every} elementary building block in the diffuson series according to~\eqref{eblo}: \begin{equation}\label{ids} \begin{split} \insfigh{.16\columnwidth}{6.5ex}{di1}\ +\ \insfigh{.16\columnwidth}{6.5ex}{di2}\ +\ \insfigh{.18\columnwidth}{6.5ex}{di3}\ +\ \ldots\\ =\sum_{l,l'=0}^3D^{ll'}_{\vec p_1-\vec p_1'} \left[G_R(\vec p_1)\sigma_l\GA(\vec p_1')\right]_{\alpha\alpha'}\left[\GA(\vec p_2')\sigma_{l'}G_R(\vec p_2)\right]_{\beta'\beta}, \end{split} \end{equation} where $\vec p_1-\vec p_1'=\vec p_2-\vec p_2'$ and \begin{equation}\label{defXabDC} D^{\alpha\beta}_{\vec q}=\frac1{4\pi\nu\tau}\left[\sum_{n\ge0}X_D^n\right]_{\alpha\beta} =\frac1{4\pi\nu\tau}\left[\mathds1-X_D(\vec q\,)\right]^{-1}_{\alpha\beta},\quad X_D^{\alpha\beta}(\vec q)=\frac1{4\pi\nu\tau}\int\frac{\ud^2p}{(2\pi\hbar)^2}\mathop{\mathrm{Tr}}_{\mathrm{spin}}[\sigma_\alpha G^E_R(\vec p)\sigma_\beta\GAEw(\vec p-\vec q)], \end{equation} where $\mathop{\mathrm{Tr}}_{\mathrm{spin}}$ stands for the trace over spin indices only. Thus \eqref{eblo} helped us to convert the diffuson series into a geometric series, which we could sum.
Analogously, we use~\eqref{ebloC} to transform the cooperon\cite{note:coopDef} series: \begin{equation}\begin{split}\label{ics} \insfigh{.16\columnwidth}{6.5ex}{co1}\ +\ \insfigh{.16\columnwidth}{6.5ex}{co2}\ +\ \insfigh{.18\columnwidth}{6.5ex}{co3}\ +\ \ldots\\ =\sum_{l,l'=0}^3C^{ll'}_{\vec p_1+\vec p_2} \left[G_R(\vec p_1)\sigma_l\GA(\vec p_1')\right]_{\alpha\alpha'}\left[\GA(\vec p_2')\sigma_{l'}G_R(\vec p_2)\right]_{\beta'\beta}, \end{split}\end{equation} where $\vec p_1+\vec p_1'=\vec p_2+\vec p_2'$ and \begin{equation}\label{obVyDC} C^{\alpha\beta}_{\vec q}=\frac1{4\pi\nu\tau}\left[\sum_{n\ge0}X_C^n\right]_{\alpha\beta}=\frac1{4\pi\nu\tau}\left[\mathds1-X_C(\vec q\,)\right]^{-1}_{\alpha\beta},\quad X_C^{\alpha\beta}(\vec q)=\frac1{4\pi\nu\tau}\int\frac{\ud^2p}{(2\pi\hbar)^2} \mathop{\mathrm{Tr}}_{\mathrm{spin}}\left\{{\bar\sigma}_\alpha G^E_R(\vec p){\bar\sigma}_\beta^\dag[\GAEw(\vec q-\vec p)]^T\right\}. \end{equation} \end{widetext} From~\eqref{invSpi} it follows that \emph{in a system with time reversal symmetry} \begin{equation} \sigma_2G_{\mathrm R/A}^T(-\vec p)\sigma_2=\GRA(\vec p),\label{timeRevarsalGF} \end{equation} so that \begin{equation} X_C^{\alpha\beta}(\vec q)=X_D^{\alpha\beta}(\vec q) \mathrm{\ and\ } C^{\alpha\beta}_{\vec q}=D^{\alpha\beta}_{\vec q}.\label{CeqD} \end{equation} The series~\eqref{ids} [or~\eqref{ics}] depends on four momentum and four spin variables; without the four external GF lines, only $D^{ll'}_{\vec p_1-\vec p_2}$ [or $C^{ll'}_{\vec p_1+\vec p_2}$] is left, which is a function of two momentum and two spin variables. We call the quantities $D^{\alpha\beta}_{\vec q}$ and $C^{\alpha\beta}_{\vec q}$ the diffuson and the cooperon, respectively. Diagrammatically, $D^{\alpha\beta}_{\vec q}$ can be drawn in two ways.
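The relation \eqref{timeRevarsalGF} relies only on $\sigma_2H^T(-\vec p)\sigma_2=H(\vec p)$ holding for a time-reversal-symmetric Hamiltonian. As a hedged illustration (the sign conventions below are assumptions, since the Hamiltonian \eqref{hamrash} is defined elsewhere in the paper), one can check it numerically for a Rashba-plus-Dresselhaus model:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])  # sigma_2

def H(px, py, alpha=0.3, beta=0.1, m=1.0):
    """Illustrative Rashba (alpha) + Dresselhaus (beta) Hamiltonian."""
    return ((px**2 + py**2) / (2*m)) * np.eye(2) \
        + alpha*(py*sx - px*sy) + beta*(px*sx - py*sy)

def GR(E, px, py, eta=1e-2):
    """Retarded Green function (E + i*eta - H(p))^(-1)."""
    return np.linalg.inv((E + 1j*eta)*np.eye(2) - H(px, py))

E, px, py = 0.7, 0.4, -0.2
# sigma_2 G_R^T(-p) sigma_2 = G_R(p)
assert np.allclose(sy @ GR(E, -px, -py).T @ sy, GR(E, px, py))
print("time-reversal identity verified")
```

The same check holds for the advanced function with $\eta\to-\eta$.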
The lhs-diagram in Fig.~\ref{fCoopDiff:a} is closer to the series in~\eqref{ids}, while the rhs-diagram in Fig.~\ref{fCoopDiff:a} stresses the fact that the diffuson without the four external GF lines has only two ends. \begin{figure} \subfigure[]{\label{fCoopDiff:a}% \begin{minipage}{.45\columnwidth}\resizebox{\textwidth}{!}{\input{diffuson.ins.tex}}\end{minipage}}\quad \subfigure[]{\label{fCoopDiff:b}% \begin{minipage}{.45\columnwidth}\resizebox{\textwidth}{!}{\input{cooperon.ins.tex}}\end{minipage}}\hspace{.07\columnwidth}% \mycaption{Diagrams for (a) diffuson and (b) cooperon in two representations. On the lhs, the diffuson (cooperon) is drawn as a ``ladder''; on the rhs it is drawn as a wavy line. (See also Sec.~\ref{sec:twoWays} and Fig.~\ref{fCondDiags}.)\label{fCoopDiff}} \end{figure} The rhs-diagram in Fig.~\ref{fCoopDiff:a} better reflects the spatial structure of the diffuson in coordinate space. A diffuson at frequency $\omega$ decays in coordinate space on the scale $\min(|L_\omega|,L_s)\gg l$: \begin{equation}\label{Lw}\begin{split} L_\omega=l/\sqrt{2i\omega\tau},\quad\sqrt i\equiv(1+i)/\sqrt2,\quad L_s=l/\sqrt{2x},\\ \omega\tau\ll\hbar,\quad x\ll1\Longrightarrow\min(|L_\omega|,L_s)\gg l. \end{split}\end{equation} From Eq.~\eqref{fiMod} one can see that the distance (in coordinate space) between points 1 and 2 in Fig.~\ref{fCoopDiff} is, within our accuracy, infinitesimal, and the same is true for points 3 and 4. This fact is graphically reflected by merging points 1 and 2 (as well as points 3 and 4) in the rhs-diagram of Fig.~\ref{fCoopDiff:a}, so that 1=2 and 3=4. A similar reasoning is valid for the cooperon as well, see Fig.~\ref{fCoopDiff:b}.
\subsection{Explicit 2D-expressions for $q=0$\label{sec:zeroQ}} The diffuson at zero momentum, $q=0$, can be calculated without assuming that the SOI amplitudes $x_{a,b}$ are small. Using the fact that \begin{equation*} G_R^T(\vec p)+G_R^T(-\vec p)=G_R(\vec p)+G_R(-\vec p), \end{equation*} one obtains a sum rule: \begin{equation}\label{Xsr} X_D^{22}=X_D^{00}-X_D^{11}+X_D^{33}.\end{equation} For $q=0$, $X_D$ is a diagonal $4\times4$ matrix with elements \begin{equation}\begin{split}\label{toVyddpqrn} X_D^{00}=1,\ X_D^{11}=\frac{1+K}{1+(x_a+x_b)^2+K}\ ,\ X_D^{33}=\frac1K\ ,\\ K=\sqrt{[1+(x_a+x_b)^2][1+(x_a-x_b)^2]}. \end{split}\end{equation} The components of the diffuson at zero momentum are given by the diagonal matrix \begin{equation}\label{exactDiffusonForZeroQ}\begin{split} 4\pi\nu\tau D_0&=\frac{2m\tau}\hbar D_0\\ =\mathrm{diag}&\left(\frac{L_\phi^2}{l^2},1+\frac{1+K}{(x_a+x_b)^2},1+\frac{1+K}{(x_a-x_b)^2},\frac K{K-1}\right), \end{split}\end{equation} where the electron (orbital) dephasing length $L_\phi$ (due to inelastic scattering) cuts off the divergence of the first element; this element, $L_\phi^2/l^2$, does not contribute to physical quantities when the Vollhardt-Wölfle theorem holds (see Appendix~\ref{app:VW}).
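One can check numerically that the diagonal elements in \eqref{exactDiffusonForZeroQ} indeed equal $[1-X_D^{ll}]^{-1}$, with $X_D^{ll}$ from \eqref{toVyddpqrn} and the sum rule \eqref{Xsr}. A small sketch with arbitrary test values of $x_a$, $x_b$:

```python
import math

def check(xa, xb, tol=1e-10):
    K = math.sqrt((1 + (xa + xb)**2) * (1 + (xa - xb)**2))
    X11 = (1 + K) / (1 + (xa + xb)**2 + K)
    X33 = 1 / K
    X22 = 1 - X11 + X33              # sum rule (Xsr) with X^{00} = 1
    # diagonal elements of 4*pi*nu*tau*D_0 quoted in the text
    D11 = 1 + (1 + K) / (xa + xb)**2
    D22 = 1 + (1 + K) / (xa - xb)**2
    D33 = K / (K - 1)
    assert abs(1/(1 - X11) - D11) < tol
    assert abs(1/(1 - X22) - D22) < tol
    assert abs(1/(1 - X33) - D33) < tol

for xa, xb in [(0.3, 0.7), (1.5, 0.2), (0.05, 2.0)]:
    check(xa, xb)
print("zero-momentum diffuson elements agree with 1/(1 - X_D)")
```

Note that the element $D^{22}_0$ diverges when $x_a=x_b$, so the test values are chosen with $x_a\ne x_b$.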
The sum of the two diagrams in Fig.~\ref{fZLA} is equal to the diagram in Fig.~\ref{fZLA:a} with one velocity vertex renormalized: \begin{equation}\label{renVelGraph} \begin{minipage}{3ex}\includegraphics[height=6ex]{rv1}\end{minipage}= \begin{minipage}{3ex}\includegraphics[height=6ex]{rv2}\end{minipage}+ \begin{minipage}{14.2ex}\includegraphics[height=6ex]{rv3}\end{minipage}, \end{equation} which corresponds to the expression \begin{equation}\label{renVel} {\tilde v}_\alpha= v_\alpha+\sum_{\gamma=1}^3\sigma_\gamma D_0^{\gamma\gamma}\mathop{\mathrm{Tr}}\left[\sigma_\gamma G_R(\vec p)v_\alpha\GA(\vec p)\right]=\frac{p_\alpha}m, \end{equation} where $D_0^{\gamma\gamma}$ are the components of the zero-momentum diffuson given by~(\ref{exactDiffusonForZeroQ},\ref{toVyddpqrn}). Since a vertex renormalization of this type occurs in every diagram for the conductivity, we take it into account everywhere by substituting the velocity operator $v_\alpha$ with its renormalized value ${\tilde v}_\alpha=p_\alpha/m$; the only exception is the zero-loop contribution (calculated in Sec.~\ref{sec:ZLA}), which is represented by the diagram in Fig.~\ref{fZLA:a} with \emph{only one} velocity vertex renormalized. \subsection{Explicit 2D-expressions for $q\ne0$\label{sec:dbnez}} Because of the SOI, most components of the diffuson do not have a pole at zero momentum and frequency even if dephasing effects are neglected; the diffuson acquires a non-zero ``mass'', see \eqref{exactDiffusonForZeroQ}. In the case of pure Rashba or pure Dresselhaus SOI, this ``mass'' is quadratic in the SOI amplitude. This remains true when one SOI amplitude (Rashba or Dresselhaus) is much smaller than the other, so that the SOI-induced anisotropy $\delta$ of the energy spectrum \eqref{dRDaAlt} is a small parameter.
Consequently, the integrals over the diffuson momenta $\vec k$ and $\vec q$ converge on the scale $q\lesssim x\hbar/l$, and it is convenient to introduce the dimensionless variables \begin{equation} \vec K\equiv l\vec k/x\hbar,\quad\vec Q\equiv l\vec q/x\hbar,\label{defBP} \end{equation} so that $K\lesssim1$ and $Q\lesssim1$. Like in Sec.~\ref{sec:zeroQ}, the calculation shows that only the $(1,1)$-minor (i.e., the $3\times3$ matrix block) of the $4\times4$ diffuson matrix is affected by the SOI. In other words, the upper line and the left column of the diffuson matrix are independent of the SOI: \begin{equation} \begin{split}D^{00}_{\vec q}=\frac\hbar{2m\tau}\frac1{l^2q^2/2\hbar^2-i\omega\tau},\\ D_{\vec q}^{\alpha0}=D_{\vec q}^{0\alpha}=0,\quad\alpha=1\ldots3.\end{split} \end{equation} The element $D_{\vec q}^{00}$ gives no contribution to the conductivity in systems with time-reversal symmetry (see Appendix~\ref{app:VW}); it becomes important when this symmetry is broken, see Sec.~\ref{sec:brokenTR}. In the rest of this Section we reduce the $4\times4$ matrices $X_D$ and $D_{\vec q}$ to the corresponding $(1,1)$-minors. To simplify the calculation, we further assume that $x\ll1$ (diffusion approximation). Then the diffuson is obtained using \eqref{defXabDC} with $X_D$ given by \begin{equation}X_D\approx\mathds1-x^2\left[Y^{(0)}_{\vec Q}-\delta Y^{(1,0)}_{\vec Q}\right],\end{equation} where $\delta$ is defined in~\eqref{dRDaAlt}, and \begin{eqnarray}\label{XdRD} Y^{(0)}_{\vec Q}&=&\frac{Q^2}2\mathds1+\frac12\begin{pmatrix} 1 &0 &-2iQ_x\\ 0 &1 &-2iQ_y\\ 2iQ_x &2iQ_y & 2 \end{pmatrix},\\ Y^{(1,0)}_{\vec Q}&=&\frac12\begin{pmatrix} -1 & 0 & iQ_x\\ 0 & 1 &-iQ_y\\ -iQ_x & iQ_y & 0 \end{pmatrix}. \end{eqnarray} Note that the above expression for $X_D$ is Hermitian and obeys the sum rule \eqref{Xsr}.
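Both claims in the last sentence can be verified symbolically. In the sketch below the $(0,0)$ element, which is not part of the displayed minor, is taken as $X_D^{00}=1-x^2Q^2/2$ [our reading of the diffusive pole of $D^{00}_{\vec q}$ at $\omega=0$ in the dimensionless variables \eqref{defBP}; this is an assumption, not a formula quoted from the text]:

```python
import sympy as sp

Qx, Qy, x, d = sp.symbols('Q_x Q_y x delta', real=True)
Q2 = Qx**2 + Qy**2

Y0 = Q2/2*sp.eye(3) + sp.Rational(1, 2)*sp.Matrix([
    [1, 0, -2*sp.I*Qx], [0, 1, -2*sp.I*Qy], [2*sp.I*Qx, 2*sp.I*Qy, 2]])
Y10 = sp.Rational(1, 2)*sp.Matrix([
    [-1, 0, sp.I*Qx], [0, 1, -sp.I*Qy], [-sp.I*Qx, sp.I*Qy, 0]])

XD = sp.eye(3) - x**2*(Y0 - d*Y10)   # the (1,1)-minor of X_D
X00 = 1 - x**2*Q2/2                  # assumed (0,0) element at omega = 0

# Hermiticity of the minor
assert sp.simplify(XD - XD.H) == sp.zeros(3)
# sum rule (Xsr): X^{22} = X^{00} - X^{11} + X^{33}
# (matrix indices 0,1,2 of the minor correspond to spin indices 1,2,3)
assert sp.simplify(XD[1, 1] - (X00 - XD[0, 0] + XD[2, 2])) == 0
```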
The resulting diffuson matrix $D_{\vec Q}$ has a denominator $\propto(\det Y^{(0)}_{\vec Q})^n$, where $n>0$ is an integer, and \begin{equation}\label{detYZ}\begin{split} 8\det Y^{(0)}_{\vec Q}&=2+Q^2+Q^6\\ =&(Q^2+1)\left(Q^2-\frac{1-i\sqrt7}2\right)\left(Q^2-\frac{1+i\sqrt7}2\right). \end{split}\end{equation} The expression \eqref{detYZ} is independent of the direction of $\vec Q$, so the same is true for the denominators of all diffuson components. Consequently, an arbitrary diagram containing a diffuson line with a non-zero momentum (e.g., the rhs of Fig.~\ref{fdivtoPor}) has denominators [consisting of powers of $\det Y^{(0)}_{\vec K}$, $\det Y^{(0)}_{\vec K+\vec Q}$, and $\det Y^{(0)}_{\vec Q}$] that are invariant with respect to the following three transformations: (i) $(K_x\to -K_x,Q_x\to -Q_x)$, (ii) $(K_y\to -K_y,Q_y\to -Q_y)$, and (iii) $(K_x\leftrightarrow K_y,Q_x\leftrightarrow Q_y)$. [Note that the original Hamiltonian~\eqref{hamrash}, \eqref{RDrotH} does not possess any of these symmetries.] These symmetries are used in the program\cite{theProgram} for reducing the size of the integrands. Finally, we note that the easiest way to obtain the results of this section is to use the computer program\cite{theProgram}. \section{The zero-loop contribution\label{sec:ZLA}} In the zero-loop approximation (ZLA), the calculation can be performed without assuming that the SOI amplitudes are small, $x_{a,b}\ll1$ (i.e., without assuming the validity of the diffusion approximation). Only the two diagrams with zero loops (see Fig.~\ref{fZLA}) contribute to the ZLA.
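The determinant \eqref{detYZ} and its factorization over $Q^2$ can be reproduced symbolically; a short sympy sketch:

```python
import sympy as sp

Qx, Qy = sp.symbols('Q_x Q_y', real=True)
Q2 = Qx**2 + Qy**2

# Y^(0) from (XdRD)
Y0 = Q2/2*sp.eye(3) + sp.Rational(1, 2)*sp.Matrix([
    [1, 0, -2*sp.I*Qx],
    [0, 1, -2*sp.I*Qy],
    [2*sp.I*Qx, 2*sp.I*Qy, 2]])

# 8 det Y^(0) = 2 + Q^2 + Q^6, a function of Q^2 only
assert sp.expand(8*Y0.det() - (2 + Q2 + Q2**3)) == 0

# the non-trivial factor Q^4 - Q^2 + 2 has roots Q^2 = (1 +- i sqrt(7))/2
q2 = sp.symbols('q2')
roots = sp.solve(sp.Eq(q2**2 - q2 + 2, 0), q2)
assert set(roots) == {(1 - sp.I*sp.sqrt(7))/2, (1 + sp.I*sp.sqrt(7))/2}
```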
As we discussed in Sec.~\ref{sec:zeroQ}, their sum is equal to the diagram in Fig.~\ref{fZLA:a} with one velocity vertex substituted by its renormalized value~\eqref{renVel}: \begin{equation}\label{obVydpsoi}\begin{split} \sigma^{(0)}_{\alpha\beta}&-\delta_{\alpha\beta}\sigma_D=\frac{e^2}h\mathop{\mathrm{Tr}}\left[{\hat v}_\alpha{\hat G}_R\frac{{\hat p}_\beta}m\hGA-\sigma_0\frac{p_\alpha p_\beta}{m^2}{\hat g}_r^E\hgae\right]\\ =\frac{e^2}h&\int\frac{\ud^2p}{(2\pi\hbar)^2}\frac{p_\alpha p_\beta}{m^2}\left[g_r^{E-}\ga^{E-}+g_r^{E+}\ga^{E+}-2{\hat g}_r^E\hgae\right]\\ &-\frac{e^2}{2h}\mathop{\mathrm{Tr}}\left[\left(\frac e{mc}{\tilde A}_\alpha\right)\frac{{\hat p}_\beta}m\hat s(g_r^{E-}\ga^{E-}-g_r^{E+}\ga^{E+})\right], \end{split}\end{equation} where $g_{r/a}^{E\pm}$ are defined in \eqref{grapm}, and we subtracted the SOI-independent Drude conductivity $\sigma_D$ \eqref{inhCond}.
The second line of~\eqref{obVydpsoi} gives \begin{equation}\label{spPrToIm}\begin{split} \int\frac{\ud^2p}{(2\pi\hbar)^2}\frac{p_\alpha p_\beta}{m^2}\left(g_r^{E-}\ga^{E-}+g_r^{E+}\ga^{E+}-2g_r^E\ga^E\right)\\ =\frac\hbar{2\mu\tau}\left[(x_a^2+x_b^2)\sigma_0+x_ax_b\sigma_3\right], \end{split}\end{equation} where we used \eqref{genCubic}. The rest of \eqref{obVydpsoi} equals \begin{align} \frac12&\mathop{\mathrm{Tr}}\left[\left(-\frac e{mc}{\tilde A}_\alpha\right)\frac{{\hat p}_\beta}m\hat s(g_r^{E-}\ga^{E-}-g_r^{E+}\ga^{E+})\right]\nonumber\\ =&\frac12\sum_{i=1}^2\left(\frac e{mc}\right)^2\mathop{\mathrm{Tr}}\left[{\tilde A}_\alpha {\tilde A}_i \frac{2p_\beta p_i}{m\Delta_{\vec p}}\cdot(g_r^{E-}\ga^{E-}-g_r^{E+}\ga^{E+})\right]\nonumber\\ &=\frac\hbar{2\mu\tau}\left[-\frac{x_a^2+x_b^2}2\sigma_0-x_ax_b\sigma_3\right]. \label{okoRdvena} \end{align} We see that the anisotropic terms in \eqref{spPrToIm} and \eqref{okoRdvena} cancel each other, so that the charge conductivity \eqref{obVydpsoi} is proportional to the unit tensor\cite{anisotrCond}.
Thus, within the ZLA~$\sigma_{\mathrm{an}}=0$ and \begin{equation}\label{ZLAres} \sigma^{(0)}_{\mathrm{is}}-\sigma_D=\frac{e^2}h\frac{x_a^2+x_b^2}{4\mu\tau/\hbar}+\frac{e^2}h\times O[(\mu\tau/\hbar)^{-3}], \end{equation} which is confirmed by the computer algebra calculation\cite{theProgram} for the limiting case \mbox{$2x_ax_b\ll x_a^2+x_b^2\ll1$.} The absence of CD-loops in the ZLA-diagrams in Fig.~\ref{fZLA} means that the ZLA-contribution neglects interference between electrons. Thus, the result~\eqref{ZLAres} is valid also for a phase-incoherent system (e.g., at high temperatures). The ZLA is local -- that is, independent of the macroscopic (on scales $\gg l$) geometrical details of the sample, being the same in the 2D and quasi-1D cases. Since the ZLA-diagrams contain no crossings of disorder-averaging lines, their contribution coincides with the result of the Boltzmann equation approach, see the discussion in~\S9.6 of Ref.~\onlinecite{LevitovShytov}. Note that \emph{at finite frequency} $\omega$ there are non-zero anisotropic corrections to the conductivity tensor. We do not present them in the main text; see Ref.~\onlinecite{theProgram} for details. According to the loop expansion (see Sec.~\ref{sec:le}), diagrams having one (weak localization) and two loops may produce contributions to the conductivity of the same order as, or even larger than,~\eqref{ZLAres}. We calculate the contributions coming from these diagrams in the following sections.
\section{The weak localization contribution\label{sec:WL}} The weak localization contribution is provided by the diagram in Fig.~\ref{fCondDiags:b} with the renormalized (dashed) Hikami box \begin{equation}\label{wlde} \sigma^{(1)}= \begin{minipage}{8ex}\includegraphics[height=8ex]{wlSum}\end{minipage}\equiv \begin{minipage}{8ex}\includegraphics[height=8ex]{wl2}\end{minipage}+ \begin{minipage}{8ex}\includegraphics[height=8ex]{wl0}\end{minipage}+ \begin{minipage}{8ex}\includegraphics[height=8ex]{wl1}\end{minipage}, \end{equation} where both vertices are renormalized according to~\eqref{renVelGraph} and~\eqref{renVel}. The contribution of the diagrams in~\eqref{wlde} can be written in the form \begin{equation} \begin{split} \sigma^{(1)}_{\alpha\beta}=&\int\frac{\ud^2k}{(2\pi\hbar)^2}\sum_{i,j=0}^3\mathbb H_{ij}^{\alpha\beta}(\vec k)D^{ij}_{\vec k},\\ \mathbb H_{ij}^{\alpha\beta}&=\mathbb A_{ij}^{\alpha\beta}+\mathbb B_{ij}^{\alpha\beta}+\mathbb C_{ij}^{\alpha\beta}, \end{split}\label{wldeExpr} \end{equation} where we used~\eqref{CeqD} and \begin{widetext} \begin{equation}\label{wlhb1} \begin{split}\text{\tt2-1.max}\to\qquad \mathbb A_{ij}^{\alpha\beta}(\vec k)= \frac\hbar{2m\tau}\sum_{l=0}^3 \int\frac{\ud^2p_1}{(2\pi\hbar)^2}\mathop{\mathrm{Tr}}_{\mathrm{spin}} \left[\sigma_lG_R^T(-\vec p_1)\left(-\frac{p_{1\alpha}}m\right)G_A^T(-\vec p_1){\tilde\sigma}_jG_R(\vec p+\vec k)\right]\\\times \int\frac{\ud^2p_2}{(2\pi\hbar)^2}\mathop{\mathrm{Tr}}_{\mathrm{spin}} \left[\sigma_lG_R(\vec k-\vec p_2)\frac{k_{2\beta}-p_{2\beta}}mG_A(\vec k-\vec p_2){\tilde\sigma}_i^\dag G_R^T(\vec p_2)\right], \end{split}\end{equation} \begin{equation}\label{wlhb2}\text{\tt2-2.max}\to\qquad \mathbb B_{ij}^{\alpha\beta}(\vec k)=\int\frac{\ud^2p}{(2\pi\hbar)^2}\mathop{\mathrm{Tr}}_{\mathrm{spin}} \left[\frac{p_\alpha}mG_R(\vec p){\tilde\sigma}_i^\dag G_A^T(\vec k-\vec p)\frac{k_\beta-p_\beta}mG_R^T(\vec k-\vec p){\tilde\sigma}_j^T\GA(\vec p)\right], \end{equation} \begin{equation}\label{wlhb3}\begin{split} \text{\tt2-3.max}\to\qquad \mathbb C_{ij}^{\alpha\beta}(\vec k)= \frac\hbar{2m\tau}\sum_{l=0}^3\int\frac{\ud^2p_1}{(2\pi\hbar)^2}\mathop{\mathrm{Tr}}_{\mathrm{spin}} \left[\sigma_l^TG_A^T(\vec k-\vec p_1)\frac{k_\beta-p_{1\beta}}mG_R^T(\vec k-\vec p_1){\tilde\sigma}_j^T\GA(\vec p)\right]\\\times \int\frac{\ud^2p_2}{(2\pi\hbar)^2}\mathop{\mathrm{Tr}}_{\mathrm{spin}} \left[\sigma_l^TG_A(-\vec p_2)\left(-\frac{p_{2\alpha}}m\right)G_R(-\vec p_2){\tilde\sigma}_i^\dag G_A^T(\vec p+\vec k)\right]. \end{split}\end{equation} The expressions~\eqref{wlhb1}, \eqref{wlhb2}, and \eqref{wlhb3} are generated by our program\cite{theProgram} and taken from the (automatically created) files {\tt2-1.max}, {\tt2-2.max}, and {\tt2-3.max}. \end{widetext} In the absence of orbital dephasing and at zero frequency, the isotropic part of \eqref{wlde} diverges, reproducing the well-known result for the weak antilocalization correction\cite{HikLarkin80,Edelstein:95a,Skvortsov}. The anisotropic part of \eqref{wlde}, \eqref{wldeExpr} converges.
Its leading (in the SOI) contribution is given by \begin{equation}\label{anisWL} \sigma^{(1)}_{\mathrm{an}}=2x_ax_bS_{20}^0\frac{e^2}h+\frac{e^2}h\times O[(\mu\tau/\hbar)^{-2}], \end{equation} where we assumed that $2x_ax_b\ll x_a^2+x_b^2\ll1$, and \begin{equation}\label{Aexpr} S_{20}^0=\frac{(131\pi+262\,\mathrm{arcctg}\sqrt7)/\sqrt7-88-7\log2}{224\pi}\approx0.14. \end{equation} These results are obtained in Ref.~\onlinecite{theProgram}, where the expression for the renormalized Hikami box (together with other details of the calculation) can be found. The results~\eqref{ZLAres} and \eqref{anisWL} manifest the general rule \eqref{defSan}: the disorder-averaged \emph{conductivity tensor is diagonal} in the considered (rotated by $\pi/4$) basis, where the SOI is given by~\eqref{neuSOI}. To demonstrate this rule, consider an arbitrary diagram produced by averaging the Kubo formula~\eqref{KuboFormula} for the off-diagonal conductivity element~$\sigma_{xy}$. Let us change the sign of $p_x$ and of~$\sigma_2$ everywhere in the expression for the diagram. The identity~\eqref{borinoTozhdestvo} remains valid if the sign of any Pauli matrix is changed, so the expressions for the diffusons and cooperons do not change, and neither do the Hikami boxes, except for the Hikami box with the vertex $v_x$, which changes sign. Thus the total expression changes its sign; on the other hand, since our transformation is only a change of variables (over which the $\mathop{\mathrm{Tr}}$ is taken), the expression must remain invariant.
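For the reader's convenience, the numerical value quoted in \eqref{Aexpr} is easy to reproduce [here $\mathrm{arcctg}\,y=\arctan(1/y)$ for $y>0$]:

```python
import math

def arcctg(y):
    """Inverse cotangent for y > 0."""
    return math.atan(1.0 / y)

S_20_0 = ((131*math.pi + 262*arcctg(math.sqrt(7))) / math.sqrt(7)
          - 88 - 7*math.log(2)) / (224*math.pi)
print(round(S_20_0, 4))  # -> 0.1399, i.e. approximately 0.14
```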
So we conclude that \emph{every} diagram for~$\sigma_{xy}$ is zero, and the disorder-averaged conductivity tensor is diagonal. \section{Two-loop contribution\label{sec:twoLoop}} \subsection{Expansion in SOI} In the remainder of the paper we assume that the SOI parameters~\eqref{dRDaAlt} are small, $x\ll1$, $|\delta|\ll1$ (we also assumed this in Sec.~\ref{sec:WL}), and expand the contributions to the conductivity in powers of $x$ and $\delta$. According to Ref.~\onlinecite{anisotrCond}, in an infinite 2DEG, \begin{equation} \begin{split} \sigma_{yy}(-a,b)=&\sigma_{xx}(a,b)=\sigma_{xx}(-a,-b)\\ \text{or}\quad& \sigma_{yy}(x,-\delta)=\sigma_{xx}(x,\delta). \end{split} \label{deltaSigmaxxyy} \end{equation} Then the anisotropic part of the conductivity tensor is given by the following expansion: \begin{equation}\label{pervoryadDSi} \sigma_{\mathrm{an}}=\frac{e^2}h\sum_{m,n,r\ge0}S_{mn}^r\frac{x^m\delta^{2n+1}}{(p_F l/\hbar)^r}. \end{equation} Physical quantities should depend only on \emph{even} powers of the SOI amplitudes; thus, we expect that $S_{mn}^r=0$ for every \emph{odd} $m$ and arbitrary $n,r$; our calculations confirm this statement for several values of $n$ and $r$. The two-loop diagrams can only affect terms with $r\ge1$ in~\eqref{pervoryadDSi}; the calculation shows that $S_{00}^1\ne0$. Below we calculate $\sigma_{\mathrm{an}}$ (i) in the 2D case (an infinite film -- see Sec.~\ref{sec:2D}), (ii) in the quasi-1D case (an infinite wire -- see Sec.~\ref{sec:quasi1D}), and (iii) for a quasi-1D ring pierced by a magnetic flux (see Sec.~\ref{sec:brokenTR}). From Eqs.~\eqref{deltaSigmaxxyy}, \eqref{pervoryadDSi}, and~\eqref{explAN} we conclude that the anisotropic contribution $\sigma_{\mathrm{an}}$ to the conductivity tensor [see the definition in~\eqref{defSanInv}] can be extracted from the odd-in-$\delta$ part of $\sigma_{xx}$: \begin{equation} \sigma_{\mathrm{an}}(x,\delta)=\frac{\sigma_{xx}(x,\delta)-\sigma_{xx}(x,-\delta)}2.
\end{equation} \subsection{The 2D case\label{sec:2D}} In total, there are nine two-loop diagrams: three are shown in Fig.~\ref{fdivtoPor}, and the other six in Fig.~\ref{fOtherSL}. The calculation in Ref.~\onlinecite{theProgram} shows that the leading contribution $S_{00}^1$ is produced by the three diagrams in Fig.~\ref{fdivtoPor}. \begin{figure} \subfigure[]{\label{fdivtoPor:a}\begin{minipage}{.1\textwidth}\resizebox{\textwidth}{!}{\rotatebox{90}{\input{cc_SOI-b.ins.tex}}}\end{minipage}}\quad \subfigure[]{\label{fdivtoPor:b}\begin{minipage}{.1\textwidth}\resizebox{\textwidth}{!}{\rotatebox{90}{\input{cc_SOI-c.ins.tex}}}\end{minipage}}\quad \subfigure[]{\label{fdivtoPor:c}\begin{minipage}{.1\textwidth}\resizebox{\textwidth}{!}{\rotatebox{90}{\input{cc_SOI-d.ins.tex}}}\end{minipage}} \mycaption{Three relevant two-loop diagrams, which contribute to $S_{00}^1$. See~Ref.~\onlinecite{anisotrCond} for more details. Each diagram contains small-momentum singularities, which mutually cancel in accordance with the theorem from Appendix~\ref{app:VW}. \label{fdivtoPor}}\end{figure} \begin{figure} \includegraphics[width=\columnwidth]{6} \mycaption{Six irrelevant two-loop diagrams, which do not contribute to $S_{00}^1$.\label{fOtherSL}}\end{figure} Each relevant diagram consists of two Hikami boxes (dashed) and three diffuson and/or cooperon lines. For Ref.~\onlinecite{anisotrCond}, we calculated the Hikami boxes manually (that is, we programmed the expressions for them \emph{ourselves} and then evaluated them by computer). In order to facilitate the calculation, we made a change of variables that allowed us to express the sum of the three diagrams in Fig.~\ref{fdivtoPor} as the diagram in Fig.~\ref{fdivtoPor:a} with the renormalized upper Hikami box (HB).
We do not use this trick in this paper because (i) it works only for systems with time-reversal invariance and (ii) such tricks became unnecessary after we modified the program\cite{theProgram}, which now generates programs for calculating the Hikami boxes of arbitrary diagrams. \begin{figure} \centering \includegraphics[width=\columnwidth]{nine} \caption{The sum of these nine diagrams with non-dashed Hikami boxes is equal to the diagram in Fig.~\ref{fdivtoPor:c}.} \label{fNine} \end{figure} Let us now describe how the Hikami boxes (HBs) for the diagrams in Fig.~\ref{fdivtoPor} are calculated. Every diagram in Fig.~\ref{fdivtoPor} contains two dashed HBs, and every dashed HB is given by the sum of three non-dashed HBs, as in Eq.~\eqref{wlde}. Thus, every diagram in Fig.~\ref{fdivtoPor} is a sum of \emph{nine} diagrams with non-dashed HBs. For example, the diagram in Fig.~\ref{fdivtoPor:c} can be expanded into the sum of nine diagrams in Fig.~\ref{fNine}. Let us take the diagram in Fig.~\ref{fNine}(d) as an example. It is equal to \begin{equation}\label{ndeq} \begin{split} \begin{minipage}{.8\columnwidth}\resizebox{\textwidth}{!}{\input{nd.ins.tex}}\end{minipage}\qquad\\ \hspace{-3ex}=\int\frac{\ud^2k_1}{(2\pi\hbar)^2}\int\frac{\ud^2k_2}{(2\pi\hbar)^2} \mathbb H^{s_1s_2s_3}_{f_1f_2f_3}(\vec k_1,\vec k_2)D^{s_1f_1}_{{\vec k}_1}D^{s_2f_2}_{\vec k_2}D^{s_3f_3}_{\vec k_2-\vec k_1}\\ =\int\frac{\ud^2k}{(2\pi\hbar)^2} \int\frac{\ud^2q}{(2\pi\hbar)^2} \mathbb H^{s_1s_2s_3}_{f_1f_2f_3}(\vec k+\vec q,\vec k)D^{s_1f_1}_{\vec k+\vec q}D^{s_2f_2}_{\vec q}D^{s_3f_3}_{-\vec k}, \end{split} \end{equation} where we used~\eqref{CeqD}.
The quantity $\mathbb H^{s_1s_2s_3}_{f_1f_2f_3}$ is the product of two HBs; it is calculated in the (automatically generated) file\cite{theProgram} {\tt batch.me/5-1.max}: \begin{equation} \label{5-1} \begin{split} \mathbb H^{s_1s_2s_3}_{f_1f_2f_3}(\vec k_1,\vec k_2)&=\sum_{i,j=0}^3\frac{\delta_{i,j}}{2m\tau} \mathbb A_{is_3f_2}^{\vec k_1\vec k_2}\mathbb B_{js_1}^{\vec k_1\vec k_2}\mathbb C_{f_3f_1s_2}^{\vec k_1\vec k_2},\\ \mathbb A_{is_3f_2}^{\vec k_1\vec k_2}=\mathop{\mathrm{Tr}}_{\vec p}\mathop{\mathrm{Tr}}_{\mathrm{spin}}&\left[\sigma_iG_A^T(-\vec p)\sigma_{s_3}^T\right.\\ &\left. \times G_R^T(-\vec p+\vec k_2-\vec k_1){\bar\sigma}_{f_2}^T\GA(\vec p+\vec k_1)\right],\\ \mathbb B_{js_1}^{\vec k_1\vec k_2}=\mathop{\mathrm{Tr}}_{\vec p}\mathop{\mathrm{Tr}}_{\mathrm{spin}}&\left[\sigma_j\GA(-\vec p)\left(-\frac{p_x}m\right)\right.\\ &\left. \times G_R(-\vec p){\bar\sigma}_{s_1}^\dag G_A^T(\vec p+\vec k_1)\right],\\ \mathbb C_{f_3f_1s_2}^{\vec k_1\vec k_2}=\mathop{\mathrm{Tr}}_{\vec p}\mathop{\mathrm{Tr}}_{\mathrm{spin}}&\left[\frac{p_x}mG_R^T(\vec p)\sigma_{f_3}^T G_A^T(\vec p-\vec k_2+\vec k_1)\right.\\ &\left. \times{\bar\sigma}_{f_1}G_R(-\vec p+\vec k_2){\bar\sigma}_{s_2}^\dag G_A^T(\vec p)\right]. \end{split} \end{equation} These expressions could be simplified, but we wrote them on purpose in the same form as they were generated in the file {\tt batch.me/5-1.max}, in order to make the comparison easier for the interested reader.
The sum of the three diagrams with dashed HBs in Fig.~\ref{fdivtoPor} is equal to the sum of 27 diagrams with non-dashed HBs. These HBs are calculated in the 27 automatically generated files {\tt batch.me/5-1.max}\ldots {\tt batch.me/5-27.max}. After the HB-calculation, the integration over the diffuson momenta [$\vec k$ and $\vec q$ in~(\ref{ndeq})] is performed. While the calculation of the Hikami boxes is done fully automatically, the integration over the diffuson/cooperon momenta must be programmed manually, differently for different problems. The technical details are discussed in Appendix~\ref{app:AnalNum}. The resulting anisotropic contribution to the conductivity is given by~\eqref{pervoryadDSi} with $r=1$ and $m=n=0$: \begin{equation} \sigma^{(2)}_{\mathrm{an}}=S^1_{00}\delta\frac{e^2}{2\pi}\frac1{p_F l},\label{sigmaXX} \end{equation} where $\delta$ is defined in~\eqref{dRDaAlt} and the coefficient~$S^1_{00}$ is given~by \begin{equation} \mathrm{in\ 2D}\quad S^1_{00}=-5.6\times10^{-3}.\label{finRe} \end{equation} The expression~\eqref{sigmaXX} has a non-obvious property connected with the symmetry of the energy spectrum~\eqref{spektr} with respect to the substitution $a\leftrightarrow b$: the two limiting cases $x_a\ll x_b$ and $x_a\gg x_b$ are described by the same expression~\eqref{sigmaXX}. Using Eqs.~\eqref{dRDaAlt} and~\eqref{deltaSigmaxxyy}, we rewrite~\eqref{sigmaXX} in the form \begin{equation} \sigma^{(2)}_{\mathrm{an}}=S^1_{00}\frac{2x_ax_b}{x_a^2+x_b^2}\frac{e^2}{2\pi}\frac1{p_F l},\quad 2x_ax_b\ll x_a^2+x_b^2\ll1.\label{leadingAnis} \end{equation} The contribution~\eqref{leadingAnis} is non-analytic at small $x$: an infinitesimal SOI coupling results in a finite correction $\sigma^{(2)}_{\mathrm{an}}$ to the conductivity tensor.
We note that a similar non-analyticity occurs also in the weak-antilocalization problem: if one neglects dephasing (assuming an infinite dephasing length $L_\phi$), an infinitesimal SOI reverses the sign of the weak-localization correction, switching it to the antilocalization regime. Similarly to the weak-antilocalization problem, the non-analyticity in~\eqref{finRe} can be smeared by introducing a finite dephasing rate $\tau_\phi^{-1}>0$. A convenient way to do this is to consider the response to a finite-frequency electric field; see Sec.~\ref{sec:ff}. \subsection{The quasi-1D case\label{sec:quasi1D}} Consider a long wire whose transverse size $L_\perp$ is much larger than the mean scattering length $l$, but with the corresponding Thouless energy $E_{c\perp}=\hbar lv_{\mathrm F}/2L_\perp^2$ being large compared with the SOI-induced spectrum splitting~\eqref{spectrumSplitting}: \begin{equation}\label{quasi1D} E_{c\perp}\tau/\hbar\gg{\tilde x}^2\equiv\max\left(x^2,l^2/L_\phi^2\right). \end{equation} When~\eqref{quasi1D} holds, the Hikami boxes remain two-dimensional, while the diffusons and cooperons become one-dimensional (cf.~\S7.4.1 of Ref.~\onlinecite{montambauxBook}). In other words, in~\eqref{5-1} still $\mathop{\mathrm{Tr}}_{\vec p}\equiv\int\ud^2p/(2\pi\hbar)^2$, but in~\eqref{ndeq} the momentum integration becomes one-dimensional, \begin{equation} \label{intQuasi1D} \int\frac{\ud^2k}{(2\pi\hbar)^2}\int\frac{\ud^2q}{(2\pi\hbar)^2}\to\frac1{L_\perp^2}\int_{-\infty}^\infty\frac{\ud k_x}{(2\pi\hbar)^2}\int_{-\infty}^\infty\frac{\ud q_x}{(2\pi\hbar)^2}, \end{equation} and one should set $k_y=q_y=0$ in the integrand. From~\eqref{intQuasi1D} one can estimate that the quasi-1D SOI-induced contribution to the conductivity is $l^2/(\tilde xL_\perp)^2\gg1$ times larger than the 2D one.
A quasi-1D sample is macroscopically anisotropic (i.e., does not possess rotational symmetry), but this anisotropy becomes relevant only on scales much larger than~$l$. Thus Eq.~\eqref{deltaSigmaxxyy}, as well as the claim in Ref.~\onlinecite{anisotrCond} that ``the conductivity tensor is isotropic when $\delta=0$'', is valid in the ZLA and for the WL-diagram (since the distance between all vertices in the corresponding diagrams is of the order of $l$), but may not be valid for the contribution of the two-loop diagrams in Fig.~\ref{fdivtoPor}. In fact, we obtain the following anisotropic contribution to the conductivity tensor in the quasi-1D case (in the rotated CS): \begin{equation}\label{ani1D} \sigma^{(2)}=\frac{e^2}h\frac\hbar{p_F l}\frac{l^2}{{\tilde x}^2L_\perp^2} \left[ \begin{pmatrix} -0.39 & 0 \\ 0 & 6.7 \end{pmatrix} + \delta\begin{pmatrix} -852 & 0 \\ 0 & 13 \end{pmatrix}\right], \end{equation} which leads to \begin{equation}\label{qor} \sigma^{(2)}_{\mathrm{an}}=\frac{e^2}h\frac\hbar{p_F l}\frac{l^2}{{\tilde x}^2L_\perp^2}\left(3.5+433\delta\right). \end{equation} Thus, in the quasi-1D case the singularity in the SOI correction to the conductivity tensor is more pronounced: (i) it occurs even when the (averaged) energy spectrum is isotropic and (ii) it diverges for vanishing SOI when orbital dephasing effects are neglected. \section{The finite frequency case\label{sec:ff}} In this Section we will see that the anisotropic part of the conductivity tensor becomes an analytic function of the SOI amplitudes $(x_a,x_b)$ at finite frequency $\omega\ne0$; we assume that the frequency is large compared to the strength of the SOI, so that $|\omega\tau|\gg x^2$. Here we have to expand corrections to the conductivity tensor not only in the SOI, but in powers of $|\omega\tau|$ as well.
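As a quick arithmetic cross-check of the quasi-1D result above: assuming (as the quoted numbers suggest) that the anisotropic part in~\eqref{qor} is half the difference of the two diagonal entries of the matrices in~\eqref{ani1D}, the coefficients $3.5$ and $433$ follow directly. A sketch of that check:

```python
# Consistency check between Eq. (ani1D) and Eq. (qor): assuming the
# anisotropic part is (sigma_yy - sigma_xx)/2 -- our reading of the
# quoted numbers, not a definition stated here -- the coefficients
# 3.5 and 433 follow from the two diagonal matrices.
m0 = ((-0.39, 0.0), (0.0, 6.7))     # SOI-independent matrix in (ani1D)
m1 = ((-852.0, 0.0), (0.0, 13.0))   # matrix multiplying delta

def anisotropic_part(m):
    return (m[1][1] - m[0][0]) / 2.0

assert abs(anisotropic_part(m0) - 3.545) < 1e-12   # quoted as 3.5
assert abs(anisotropic_part(m1) - 432.5) < 1e-12   # quoted as 433
```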
Similarly to the zero-frequency case we choose three expansion parameters, $x_c$, $x_1$, and $W$, defined in the following way: \begin{equation} \label{xcxeW} \begin{split} x_c=&\sqrt{-2i\omega\tau},\quad x_1=\sqrt{x^2+x_c^2}\ll1,\\ W&=2x_cx/x_1^2,\quad |x_c|,|x_1|,|W|\ll1. \end{split}\end{equation} Differently from the zero-frequency case (see Sec.~\ref{sec:CD}), the effect of the SOI on the diffuson can be calculated perturbatively in case when $|\omega\tau|\gg x^2$. The leading contribution is equal to the diffuson in the absence of SOI: \begin{equation}\label{dipkc} D^{\alpha\beta}_{\vec q}=\frac\hbar{m\tau}\frac1{l^2q^2/\hbar^2-2i\omega\tau}\delta_{\alpha\beta},\quad\alpha,\beta=0\ldots3. \end{equation} The derivation of SOI-corrections to~\eqref{dipkc} is straightforward, but lengthy, so we do not present it in the text of the article; see the program\cite{theProgram} for more details. Comparing~\eqref{dipkc} with expressions for the diffuson at $\omega=0$ (see Secs.~\ref{sec:zeroQ} and~\ref{sec:dbnez}) we see that the diffusons for $|\omega\tau|\ll x^2$ and $|\omega\tau|\gg x^2$ are very different. Consequently, while at $\omega=0$ the vertex renormalization cancels the anomalous part of the velocity operator, this is no longer the case for $\omega\ne0$ and at large frequencies the effect of the vertex renormalization is negligible. Despite that, the calculation of the finite frequency case is similar to the one for $\omega=0$; differences come from the fact that now we have three expansion parameters~\eqref{xcxeW} instead of two~\eqref{dRDaAlt} for $\omega=0$. We obtain\cite{theProgram} \begin{equation}\label{hochFR} \begin{split} \sigma^{(2)}_{\mathrm{an}}=-2\cdot0.25\cdot\frac{-2i\omega\tau\cdot2x_ax_b}{(x_a^2+x_b^2-2i\omega\tau)^2}\frac{e^2}{2\pi}\frac1{p_F l},\\ 2x_ax_b\ll x_a^2+x_b^2\ll\omega\tau\ll1. 
\end{split}\end{equation} This finite-frequency result, first obtained in Ref.~\onlinecite{anisotrCond}, can be interpreted in terms of dephasing; substituting $-i\omega\tau\to\tau/\tau_\phi$, we obtain \begin{equation}\label{anisRT} \sigma^{(2)}_{\mathrm{an}}=\left\{\aligned 5.6\times10^{-3}\cdot\frac{\tau_--\tau_+}{\tau_-+\tau_+}\frac{e^2}{2\pi}\frac1{\mu\tau},\quad\tau_\pm\ll\tau_\phi,\\ 0.13\cdot\left(\frac{\tau_\phi}{\tau_+}-\frac{\tau_\phi}{\tau_-}\right)\frac{e^2}{2\pi}\frac1{\mu\tau},\quad\tau_\phi\ll\tau_\pm, \endaligned \right. \end{equation} where the Dyakonov-Perel' relaxation times are defined as\cite{spinRelax} $2\tau/\tau_\pm=(x_a\mp x_b)^2$. \section{The quasi-1D ring pierced by magnetic flux\label{sec:brokenTR}} The simplest way of breaking the time-reversal invariance of the Hamiltonian is to consider a constant vector potential~$\vec A$. It arises in a quasi-one-dimensional ring pierced by a magnetic flux. (The magnetic flux can be described by subtracting $eA/c$ from the momentum arguments of GFs.) In a ring geometry, SOI of the type~\eqref{neuSOI} cannot be provided by the usual Rashba and Dresselhaus mechanisms. However, such SOI is not forbidden and can thus occur for other reasons, as, e.g., in InAs nanowires\cite{Samuelson}. Let us assume that $\vec A$ is directed along the ring's circumference so that $A\equiv A_{\parallel}\equiv A_x$. The presence of a non-zero vector potential breaks the identities~\eqref{timeRevarsalGF} and~\eqref{CeqD} together with the generalized Vollhardt-Wölfle theorem from Appendix~\ref{app:VW}, thus making the anisotropy effect more pronounced because of the small-momentum diffuson singularities, which now remain uncompensated. Eq.~\eqref{CeqD} now changes into \begin{equation} C^{\alpha\beta}_{\vec q}=D^{\alpha\beta}_{\vec q-2e\vec A/c}.\label{CeqDmod} \end{equation} The calculation of the HBs is the same as in Sec.~\ref{sec:2D} and Sec.~\ref{sec:quasi1D}.
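Before turning to the flux-dependent calculation, note a consistency check on the first line of Eq.~\eqref{anisRT}: with the Dyakonov-Perel' times $2\tau/\tau_\pm=(x_a\mp x_b)^2$, the ratio $(\tau_--\tau_+)/(\tau_-+\tau_+)$ reduces identically to $-2x_ax_b/(x_a^2+x_b^2)$, i.e., the dephasing form carries the same dependence on $(x_a,x_b)$ as~\eqref{leadingAnis}, with the coefficient $5.6\times10^{-3}$ matching $|S^1_{00}|$ of~\eqref{finRe}. A sketch of this check (ours, not an output of the program):

```python
# With the Dyakonov-Perel' times 2*tau/tau_pm = (x_a -+ x_b)^2, the
# combination (tau_- - tau_+)/(tau_- + tau_+) in Eq. (anisRT) equals
# -2*x_a*x_b/(x_a^2 + x_b^2), i.e. the same (x_a, x_b)-dependence as
# the zero-frequency form (leadingAnis).
def tau_ratio(x_a, x_b, tau=1.0):
    tau_plus = 2.0 * tau / (x_a - x_b) ** 2
    tau_minus = 2.0 * tau / (x_a + x_b) ** 2
    return (tau_minus - tau_plus) / (tau_minus + tau_plus)

for x_a, x_b in [(0.3, 0.1), (0.2, 0.05), (0.45, 0.4)]:
    lhs = tau_ratio(x_a, x_b)
    rhs = -2.0 * x_a * x_b / (x_a**2 + x_b**2)
    assert abs(lhs - rhs) < 1e-12
```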
Like in Sec.~\ref{sec:quasi1D}, we have to perform the summation over two diffuson variables, see~\eqref{intQuasi1D}. For the diagrams which \emph{contain no cooperons}, this summation is performed in the same way as for the infinite quasi-1D wire, see Eq.~\eqref{intQuasi1D}. The diagrams with cooperons become different: in every such diagram, $eA_\parallel/c$ is always subtracted from one (out of two) cooperon momentum. The summation rule then differs from~\eqref{intQuasi1D}: \begin{equation} \int\frac{\ud^2k}{(2\pi\hbar)^2}\int\frac{\ud^2q}{(2\pi\hbar)^2}\to\frac1{L_\perp^2}\int_{-\infty}^\infty\frac{\ud k^\parallel}{(2\pi\hbar)^2} \frac1{2\pi L_{\parallel}}\sum_{q^\parallel_n=\frac{2\pi\hbar n}{L_{\parallel}}} \end{equation} with $L_{\parallel}$ being the circumference of the ring, and $L_\perp$ its cross section. The summation is performed over all integer $n$ and cannot be approximated by an integration; one can use the Poisson summation formula instead: \begin{equation}\label{calcCoop}\begin{split} \frac1{L_\parallel}&\sum_{n\in{\mathbb Z}}f\left(q^\parallel_n-\frac{2e}cA\right)=\sum_{n\in\mathbb Z}\exp\left[2\pi in\frac\Phi{\Phi_0}\frac e{|e|}\right]C_n,\\ C_n&=\int_{-\infty}^\infty\frac{\ud q^\parallel}{2\pi\hbar}e^{iq^\parallel nL_{\parallel}/\hbar}f\left(q^\parallel\right),\ \Phi=\frac{AL_{\parallel}}c,\ \Phi_0=\frac h{2|e|}, \end{split} \end{equation} where the leading ($A$-dependent) contribution comes from the terms with $n=\pm1$; we will neglect the contribution of terms with $|n|>1$.
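The Poisson-resummation step~\eqref{calcCoop} can be verified numerically for a hypothetical Lorentzian propagator $f(q)=1/(q^2+m^2)$, whose Fourier coefficients are known in closed form, $C_n=e^{-m|n|L_\parallel}/(2m)$. In the sketch below we set $\hbar=1$ and write $A$ for the full combination $2eA/c$, so only the bare summation identity is being tested:

```python
import math

# Numerical check of the Poisson resummation in Eq. (calcCoop) for a
# hypothetical Lorentzian f(q) = 1/(q^2 + m^2), hbar = 1, with A here
# standing for the combination 2eA/c.  Closed form: C_n = e^{-m|n|L}/(2m).
def lattice_sum(L, m, A, nmax=20000):
    # left-hand side: (1/L) * sum_n f(2*pi*n/L - A)
    s = sum(1.0 / ((2 * math.pi * n / L - A) ** 2 + m * m)
            for n in range(-nmax, nmax + 1))
    return s / L

def poisson_sum(L, m, A, nmax=200):
    # right-hand side: sum_n exp(i*n*L*A) * C_n, real since f is even
    s = 1.0 / (2 * m)
    for n in range(1, nmax + 1):
        s += 2 * math.cos(n * L * A) * math.exp(-m * n * L) / (2 * m)
    return s

L, m, A = 3.0, 0.7, 0.4
assert abs(lattice_sum(L, m, A) - poisson_sum(L, m, A)) < 1e-4
```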
Keeping only the divergent terms (i.e., terms containing massless cooperon/diffuson matrix elements), and assuming that $xL_\phi\gg l$, we obtain\cite{theProgram} the flux-dependent correction to the conductivity tensor: \begin{equation}\label{piri} \sigma^{(2)}_{\mathrm{an}}=\frac{e^2}h\left[\cos\left(2\pi\frac\Phi{\Phi_0}\right)-1\right]\frac{lL_\phi}{\tilde xL_\perp^2} \frac\hbar{p_F l}(\Sigma_0+\Sigma_1\delta), \end{equation} with the coefficients \begin{widetext} \begin{equation}\label{ringAmps} \begin{split} \Sigma_i=a_ie^{-{\tilde L}_\parallel}+\exp\left[-\frac{{\tilde L}_\parallel}2\sqrt{2\sqrt2-1}\right] \left\{b_i\cos\left[\frac{{\tilde L}_\parallel}2\sqrt{2\sqrt2+1}\right]+c_i\sin\left[\frac{{\tilde L}_\parallel}2\sqrt{2\sqrt2+1}\right]\right\},\\ a_0= -\frac{2{{\tilde L}_\parallel}^2+274{{\tilde L}_\parallel}-219}{128},\quad a_1=-2{{\tilde L}_\parallel}^2-{{\tilde L}_\parallel}-1,\\ b_0=-\frac{7124\sqrt7{{\tilde L}_\parallel}+4513\sqrt{2\sqrt2-1}\sqrt7-2965 \sqrt{2\sqrt2+1}}{1792\sqrt7}\approx-4.0{\tilde L}_\parallel-2.2,\quad b_1=-1.9\cdot10^{-4}{{\tilde L}_\parallel}^3-2.3{{\tilde L}_\parallel}^2-6.7{{\tilde L}_\parallel}+4.2,\\ c_0=\frac{28{{\tilde L}_\parallel}-4513\sqrt{2\sqrt2+1}\sqrt7-2965\sqrt{2 \sqrt2-1}}{1792\sqrt7}\approx6\cdot10^{-3}{\tilde L}_\parallel+4,\quad c_1=-8.6\cdot10^{-5}{{\tilde L}_\parallel}^3+4.5{{\tilde L}_\parallel}^2-0.46{{\tilde L}_\parallel} -0.87, \end{split} \end{equation} \end{widetext} where ${\tilde L}_\parallel=xL_\parallel/l$. As in Sec.~\ref{sec:ff}, we assumed that the divergence at small SOI ($x\to0$) is regularized by the orbital dephasing. In Eqs.~\eqref{ringAmps} we wrote numerical values of $b_1$ and $c_1$ instead of their analytic expressions in order to save space; the analytic expressions can be found in Ref.~\onlinecite{theProgram}.
The result~\eqref{piri} has the same order of magnitude in the loop expansion [in powers of~$(p_F l/\hbar)^{-1}$] as (i) the infinite-plane result, Eqs.~\eqref{sigmaXX} and~\eqref{finRe}, and (ii) the quasi-1D result~\eqref{qor}. However, of the three considered geometries it is the most sensitive to orbital dephasing. In fact, in the coherent limit $L_\phi\to\infty$ it diverges as $\propto L_\phi$ for finite SOI amplitude $x$, and as $\propto L_\phi^2$ in the limit $x\ll l/L_\phi\to0$. \section{Conclusions} We presented a symbolic program\cite{theProgram} for generating, sorting, and calculating diagrams in the disorder-averaging diagrammatic technique. This program strongly facilitates analytical calculations, allowing one to calculate subtle effects of the spin-orbit interaction that were virtually inaccessible before due to the large number of integrals to be calculated. The possibility to automate the calculation improves the usefulness of the diagrammatic approach, especially in comparison to the non-linear $\sigma$-model\cite{Kamenev2}, as a tool for studying disordered systems. Using this program, we studied anisotropic corrections to the conductivity tensor due to the spin-orbit interaction (SOI). The arising anisotropy is a phase-coherence effect; therefore it strongly depends on the geometry of the sample. In a quasi-1D wire the anisotropic correction is larger than in an infinite 2D plane. Moreover, while in the 2D case the effect arises due to the anisotropy of the energy spectrum induced by the interference between the Rashba and Dresselhaus types of SOI, in the quasi-1D case the conductivity is anisotropic even in the presence of only one type of SOI (Rashba or Dresselhaus), that is, when the energy spectrum is isotropic. The (microscopic) anisotropy of the conductivity tensor then arises due to the macroscopic (shape) anisotropy of the sample (on scales much larger than the mean free path~$l$).
We also studied the case when the time-reversal symmetry of the system is broken (a ring pierced by a magnetic flux); there the anisotropy of the conductivity tensor becomes more sensitive to orbital dephasing effects due to the uncompensated small-momentum divergences in the integration over the diffuson momentum. In all the considered geometries, the effect is non-analytic in the amplitude of the spin-orbit interaction if orbital dephasing effects are not taken into account. Once the dephasing effects are included, the conductivity becomes an analytic function of the spin-orbit amplitudes. We are grateful to M.~Duckheim and D.~Maslov for helpful discussions. We acknowledge financial support from the Swiss NSF and the NCCR Nanoscience. \section{Universal and non-universal contributions\label{sec:uniIneuni}} In the diagrammatic technique, one often has to take integrals of products of GFs of the type \begin{equation}\label{algProGF} \int\frac{\ud^2p}{(2\pi\hbar)^2}\prod_{i=1}^nG_R(\vec p-{\vec q}_i)\prod_{j=1}^mG_A(\vec p-{\vec q}_j). \end{equation} When all momentum arguments in~\eqref{algProGF} vanish, ${\vec q}_i={\vec q}_j=0$ for all $i,j$, the integral is taken using the change of variable $\int\ud^2p/(2\pi\hbar)^2\to\nu\int_{-E_F}^{\infty}\ud\xi$, where the density of states $\nu=m/(2\pi\hbar)$ in 2D. For such integrals we will say that ``the integral $J=\int_{-E_F}^{\infty}f(\xi)\ud\xi$ converges on the scale $\delta\xi$'' if $J_{\delta\xi}\equiv\int_{-\delta\xi}^{\delta\xi}f(\xi)\ud\xi$ has the same order of magnitude (in $p_F l\gg\hbar$) as $J$. An integral $\int\ud\xi$ of a product of GFs is of the order of $\tau^{n+m-1}$ if it converges on the scale $\delta\xi\sim\hbar/\tau$: \begin{equation}\label{epGF} \int\ud\xi\,G_R^n(\xi)\,G_A^m(\xi)\sim\tau^{n+m-1}.
\end{equation} The simplest example of~\eqref{epGF} is a ``bubble'' $\nu\int\ud\xi\,G_R(\xi)G_A(\xi)=2\pi\nu\tau$. It is \emph{universal}, because only values of $\xi=\epsilon_{\vec p}-E_F$ from a close vicinity of zero (i.e., $\epsilon_{\vec p}\in[E_F-\hbar/\tau,E_F+\hbar/\tau]$) contribute to the result. Only in this close vicinity of the Fermi level is Landau Fermi-liquid theory valid. Other integrals, e.g., $\int\ud\xi\,G_R^2(\xi)$, we call \emph{non-universal}. Using~\eqref{agf}, one concludes that \begin{equation}\label{estNU} \int\frac{\ud^2p}{(2\pi\hbar)^2}G_R^2(\vec p)\sim\frac\hbar{E_F\tau}\int\frac{\ud^2p}{(2\pi\hbar)^2}G_R(\vec p)G_A(\vec p). \end{equation} In conclusion, the universal corrections are produced by $\int\ud\xi\,G_R^nG_A^m$; all other contributions must be neglected by postulating $\int\ud^2p/(2\pi\hbar)^2\equiv\nu\int_{-\infty}^\infty\ud\xi$. The above reasoning is valid also when the momentum arguments in~\eqref{algProGF} are close to each other (the so-called diffusion approximation): \begin{equation} \forall i\quad q_il\ll\hbar. \end{equation} We call the momentum variables ${\vec q}_i$ ``small'' if $q_il\ll\hbar$. Then $q_i\ll p\sim p_F$; we call $\vec p$ the ``large'' momentum variable.
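The distinction between universal and non-universal integrals can be illustrated numerically with the model Green functions $G_{R/A}(\xi)=(\xi\pm i/2\tau)^{-1}$ (an assumed form for this demo only, with $\hbar=1$): the bubble gives $2\pi\tau$ and converges on the scale $1/\tau$, while $\int G_R^2$ only picks up the lower cutoff $-E_F$ and is suppressed as in~\eqref{estNU}. A sketch:

```python
import numpy as np

# Universal vs. non-universal integrals for the model GFs
# G_{R/A}(xi) = 1/(xi +- i/(2*tau)), hbar = 1 (an assumed model form).
def integrate(y, x):                       # simple trapezoidal rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

tau, E_F = 1.0, 200.0
xi = np.linspace(-E_F, 20 * E_F, 2_000_001)
g_r = 1.0 / (xi + 0.5j / tau)
g_a = 1.0 / (xi - 0.5j / tau)

bubble = integrate((g_r * g_a).real, xi)   # universal: ~ 2*pi*tau
non_universal = integrate(g_r**2, xi)      # picks up only the -E_F cutoff

assert abs(bubble - 2 * np.pi * tau) < 0.05
# suppressed by ~ 1/(E_F*tau) relative to the bubble, cf. Eq. (estNU):
assert abs(non_universal) < 10 / (E_F * tau)
```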
\section{Introduction} The classification of properties into safety and liveness properties is pivotal for reactive systems verification. As Lamport introduced in 1977~\cite{Lamport1977PCM} and detailed later in~\cite{Alford1985DSM}, safety properties assert that something ``bad'' never happens, while liveness properties require that something ``good'' will happen eventually. The precise formulation of safety and liveness properties as well as their characteristics have been subject to extensive investigations. Alpern and Schneider~\cite{alpern1987recognizing} provided a topological characterisation in which safety properties are closed sets, while liveness properties correspond to dense sets. This naturally gives rise to a decomposition---every property can be represented as a conjunction of a safety and liveness property. It was shown that this characterisation can also be obtained using Boolean~\cite{Gumm1993GAC} and standard set theory~\cite{Rem1990personal}. Sistla~\cite{Sistla1985CSL} studied the problem from a different perspective and provided syntactic characterisations of safety and liveness properties in LTL. The above linear-time approaches are surveyed in~\cite{Survey1994}. In the case of possible system failures, safety properties sometimes turn into liveness properties~\cite{DBLP:conf/concur/Charron-BostTB00}. The algebraic framework of Gumm~\cite{Gumm1993GAC} has been further generalised by Manolios and Trefler to characterise safety and liveness properties both in the linear-time setting~\cite{Manolios2003LCS} as well as in the branching-time setting~\cite{Manolios2001SLB}. Earlier work by Bouajjani \emph{et al.}~\cite{DBLP:conf/icalp/BouajjaniFGRS91} characterises regular safety properties by tree automata and formulas of a branching time logic. Alternatives to the safety-liveness taxonomy have been given in~\cite{DBLP:conf/sigsoft/NaumovichC00}. 
The taxonomy of properties is not just of theoretical interest, but plays an important role in verification. Safety and liveness properties require different proof methods~\cite{Owicki1982PLP}. Whereas global invariants suffice for safety properties, liveness is typically proven using proof lattices or well-founded induction and ranking functions. Model checking of safety properties is usually easier than checking liveness properties~\cite{Kupferman2001MCS}. Fairness assumptions are often imposed to exclude some unrealistic executions~\cite{Francez1986FAI}. As fairness constraints only affect infinite computations, they can be ignored in the verification of safety properties, typically simplifying the verification process. Abstraction techniques are mostly based on simulation pre-order relations that preserve safety, but not liveness properties. Compositional techniques have been tailored to safety properties~\cite{DBLP:journals/tosem/CheungK99}. This paper focuses on a formal characterisation of safety and liveness properties in the \emph{probabilistic} setting. For the verification of linear-time properties, one typically resorts to using LTL or $\omega$-automata. In the branching-time setting, mostly variants of CTL such as PCTL~\cite{Hansson94alogic} are exploited. This is the setting that we consider. PCTL is one of the most popular logics in the field of probabilistic model checking. Providing a precise characterisation of safety and liveness properties for probabilistic models is highly relevant. It is useful for identifying the appropriate analysis algorithm and provides mathematical insight. In addition, many techniques rely on this taxonomy. Let us give a few examples. Assume-guarantee frameworks~\cite{DBLP:conf/tacas/KwiatkowskaNPQ10,KomuravelliPC12} and abstraction techniques~\cite{DBLP:conf/cav/HermannsWZ08,DBLP:journals/jlp/KatoenKLW12} aim at safety properties.
Recent verification techniques based on monitoring~\cite{Sistla2011MSDS} indicate that arbitrary high levels of accuracy can only be achieved for safety properties. Similar arguments force statistical model checking~\cite{DBLP:journals/iandc/YounesS06} to be limited to safety properties. Optimal synthesis for safety properties in probabilistic games can also be done more efficiently than for liveness properties~\cite{DBLP:conf/cav/ChatterjeeHJS10}. Despite the importance of distinguishing safety and liveness properties in probabilistic systems, this subject has (to the best of our knowledge) not been systematically studied. The lack of such a framework has led to different notions of safety and liveness properties~\cite{Baier2005CBS,Chadha2010CAF}. We will show that a systematic treatment leads to new insights and indicates some deficiencies of existing logical fragments for safety and liveness properties. Inspired by~\cite{Manolios2001SLB}, we consider properties as sets of probabilistic trees and provide a decomposition result stating that every property can be represented by a conjunction of a safety and liveness property. Moreover, all properties of the classification in the traditional setting, such as closure of property classes under Boolean operators, are shown to carry over to probabilistic systems. We study the relationship of safety and liveness properties to finite and infinite counterexamples~\cite{Han2009CGP}, and compare our taxonomy with the classification in~\cite{Manolios2001SLB} for qualitative properties. A major contribution is the identification of logical fragments of PCTL to characterise safety and liveness. It is shown that fragments in the literature~\cite{Baier2005CBS} can be extended (for safety), or are inconsistent with our definitions (for liveness). In addition, we consider absolute liveness and strong safety as originated by Sistla~\cite{Sistla1994SLF} for the linear-time setting. 
Phrased intuitively, strong safety properties are closed under stuttering and are insensitive to the deletion of states, while once an absolutely live property holds, it is guaranteed to hold throughout the entire past. We obtain a sound and complete characterisation of strong safety and---in contrast to \cite{Sistla1994SLF}---of absolute liveness. In addition, we show that every absolutely live formula is equivalent to positive reachability. This result could be employed to simplify a formula prior to verification, in the same way as \cite{EtessamiH00} simplifies LTL formulas by rewriting them in case they are stable (the complement of absolutely live) or absolutely live. Summarising, the main contributions of this paper are: \begin{itemize} \item A formal characterisation for safety and liveness properties yielding a decomposition theorem, i.e., every property can be represented as a conjunction of a safety and liveness property. \item The relation of the characterisation to counterexamples. \item A linear-time algorithm to decompose a flat, i.e., unnested PCTL formula into a conjunction of safety and liveness properties. \item A PCTL fragment that is a sound and complete characterisation of safety properties. (Here, completeness means that every safety property expressible in PCTL can be expressed in the logical fragment.) The same applies to absolute liveness and strong safety properties. \item A PCTL fragment that is a sound characterisation of liveness properties, and a fragment that is complete. We discuss the difficulty of obtaining a single sound and complete syntactic characterisation by relating it to the PCTL decidability problem. \item The relation of the property characterisation to simulation pre-orders~\cite{JonssonL91}. \end{itemize} \paragraph{Organisation of the paper} Section~\ref{sec:pre} provides some preliminary definitions. Section~\ref{sec:safety and liveness} presents the characterisation of safety and liveness properties.
We relate our characterisation to counterexamples and to qualitative properties in Sections~\ref{sec:counterexample} and~\ref{sec:qualitative properties}, respectively. Safety PCTL is considered in Section~\ref{sec:safety pctl}, while liveness PCTL is discussed in Section~\ref{sec:liveness pctl}. We show in Section~\ref{sec:simulation} that the new notions of safety and liveness properties can also characterise strong simulation. Section~\ref{sec:strong and absolute} gives the full characterisation for strong safety and absolute liveness PCTL. Section~\ref{sec:conclusion} concludes the paper. All proofs are included in the appendix. \section{Preliminaries}\label{sec:pre} For a countable set $S$, let $\mathcal{P}(S)$ denote its powerset. A distribution is a function $\mu:S\to [0,1]$ satisfying $\sum_{s\in S} \mu(s)= 1$. Let $\mathit{Dist}(S)$ denote the set of distributions over $S$. We shall use $s, r, t, \ldots$ and $\mu, \nu, \ldots$ to range over $S$ and $\mathit{Dist}(S)$, respectively. The support of $\mu$ is defined by $\mathit{supp}(\mu) =\{s\in S \mid \mu(s)>0\}$. Let $S^*$ and $S^{\omega}$ denote the set of finite sequences and infinite sequences, respectively, over the set $S$. The set of all (finite and infinite) sequences over $S$ is given by $S^\infty = S^* \cup S^{\omega}$. Let $\len{\pi}$ denote the length of $\pi \in S^\infty$ with $\len{\pi} = \infty$ if $\pi \in S^{\omega}$. For $i\in\mathbb{N}$, let $\pi[i]$ denote the $(i{+}1)$-th element of $\pi$ provided $i <\len{\pi}$, and $\lastState{\pi} \, = \pi[\len{\pi}{-}1]$ denote the last element of $\pi$ provided $\pi\in S^*$. A sequence $\pi_1$ is a prefix of $\pi_2$, denoted $\pi_1 \preceq \pi_2$, if $\len{\pi_1} \leq \len{\pi_2}$ and $\pi_1[i] = \pi_2[i]$ for each $0 \leq i< \len{\pi_1}$. Sequence $\pi_1$ is a proper prefix of $\pi_2$, denoted $\pi_1 \prec \pi_2$, if $\pi_1 \preceq \pi_2$ and $\pi_1 \neq \pi_2$.
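The prefix relations just defined can be transcribed directly; a minimal sketch, with finite sequences modelled as Python tuples:

```python
# Direct transcription of the prefix relations on sequences
# (finite sequences only, modelled as tuples).
def is_prefix(p1, p2):
    return len(p1) <= len(p2) and all(p1[i] == p2[i] for i in range(len(p1)))

def is_proper_prefix(p1, p2):
    # for finite sequences, pi_1 != pi_2 under <= reduces to a length check
    return is_prefix(p1, p2) and len(p1) < len(p2)

assert is_prefix((0, 1), (0, 1, 2))
assert is_proper_prefix((0, 1), (0, 1, 2))
assert not is_prefix((1, 0), (0, 1, 2))
assert is_prefix((0, 1), (0, 1)) and not is_proper_prefix((0, 1), (0, 1))
```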
The concatenation of $\pi_1$ and $\pi_2$, denoted $\pi_1 \cdot \pi_2$, is the sequence obtained by appending $\pi_2$ to the end of $\pi_1$, provided $\pi_1$ is finite. The set $\Pi \subseteq S^\infty$ is \emph{prefix-closed} iff for all $\pi_1\in \Pi$ and $\pi_2 \in S^*$, $\pi_2 \preceq \pi_1$ implies $\pi_2 \in \Pi$. \subsection{Discrete-Time Markov Chains}\label{sec:dtmc} This paper focuses on discrete-time Markov chains (MCs). Although we consider state-labelled models, all results can be transferred to action-labelled models in a straightforward way. \begin{definition}[Markov chain] \label{def:dtmc} A \emph{Markov chain} (\emph{MC}) is a tuple $\text{\sf D} = (\mathcal{S}, \mathit{AP}, \rightarrow, L, s_0)$, where $\mathcal{S}$ is a countable set of states, $\mathit{AP}$ is a finite non-empty set of atomic propositions, $\rightarrow:\mathcal{S}\mapsto\mathit{Dist}(\mathcal{S})$ is a transition function, $L:\mathcal{S}\mapsto\mathcal{P}(\mathit{AP})$ is a labelling function, and $s_0 \in \mathcal{S}$ is the initial state. 
\end{definition} \begin{figure}[!t] \centering \scalebox{0.8}{ \begin{tikzpicture}[->,>=stealth,auto,node distance=3.5cm,semithick,scale=1,every node/.style={scale=1}] \tikzstyle{state}=[minimum size=20pt,circle,draw,thick] \tikzstyle{stateNframe}=[] every label/.style=draw \tikzstyle{blackdot}=[circle,fill=black, minimum size=6pt,inner sep=0pt] \node[state,label={[label distance=0pt]90:{$a$}}](s0){$s_0$}; \node[state,label={[label distance=0pt]90:{$a$}}](s1)[right of=s0,yshift=1cm]{$s_1$}; \node[state,label={[label distance=0pt]90:{$c$}}](s2)[right of=s0,yshift=-1cm]{$s_2$}; \node[state,label={[label distance=0pt]90:{$a$}}](t0)[right of=s0, xshift=2cm]{$t_0$}; \node[state,label={[label distance=0pt]90:{$b$}}](t1)[right of=t0,yshift=1cm]{$t_1$}; \node[state,label={[label distance=0pt]90:{$c$}}](t2)[right of=t0,yshift=-1cm]{$t_2$}; \node[stateNframe](a)[below of=s2,yshift=2.5cm, xshift=-1.5cm]{(a)}; \node[stateNframe](b)[below of=t2,yshift=2.5cm, xshift=-1.5cm]{(b)}; \path (s0) edge node[left,yshift=5pt] {0.5} (s1) edge node[left,yshift=-5pt] {0.5} (s2) (s1) edge[loop right] node {1} (s1) (s2) edge[loop right] node {1} (s2) (t0) edge node[left,yshift=5pt] {0.4} (t1) edge node[left,yshift=-5pt] {0.4} (t2) edge[loop below] node {0.2} (t0) (t1) edge[loop right] node {1} (t1) (t2) edge[loop right] node {1} (t2); \end{tikzpicture} } \caption{\label{fig:dtmc} Examples of MCs} \end{figure} Fig.~\ref{fig:dtmc} presents two sample MCs where circles denote states, symbols inside the states and attached to the states denote the name and label of a state respectively. A path $\pi\in\mathcal{S}^\infty$ through MC $\text{\sf D}$ is a (finite or infinite) sequence of states. The cylinder set $C_\pi$ of $\pi\in\mathcal{S}^*$ is defined as: $C_\pi = \{\pi' \in \mathcal{S}^{\omega} \mid \pi \prec \pi' \}$. The $\sigma$-algebra $\mathcal{F}$ of $\text{\sf D}$ is the smallest $\sigma$-algebra containing all cylinder sets $C_\pi$. 
By standard probability theory, there exists a unique probability measure $\Pr$ on $\mathcal{F}$ such that: $\Pr(C_{\pi})=1$ if $\pi=s_0$, and $\Pr(C_{\pi}) = \prod_{0 \leq i <n} \mu_i(s_{i+1})$ if $\pi=s_0\ldots s_n$ with $n>0$, where $s_i \rightarrow \mu_i$ for $0 \le i <n$. Otherwise $\Pr(C_{\pi})=0$. \subsection{Probabilistic CTL} \label{sec:pctl} Probabilistic CTL (PCTL for short~\cite{Hansson94alogic}) is a branching-time logic for specifying properties of probabilistic systems. Its syntax is defined by the grammar: \begin{align*} \Phi & ::= \ a \mid \Phi_1\wedge\Phi_2\mid\neg \Phi\mid\P{\bowtie q}{\varphi}\\ \varphi & ::=\ \text{\sf X} \Phi\mid\Phi_1 \text{\sf U} \Phi_2 \mid \Phi_1\text{\sf W} \Phi_2 \end{align*} where $a\in\mathit{AP}$, $\bowtie\ \in\{<,>,\leq,\geq\}$ is a binary comparison operator on the reals, and $q \in [0,1]$. Let $\top= a\lor\neg a$ denote true and $\bot=\neg\top$ denote false. As usual, $\Diamond\Phi = \top\text{\sf U}\Phi$ and $\Box\Phi=\Phi\text{\sf W}\bot$. We will refer to $\Phi$ and $\varphi$ as state and path formulas, respectively. The satisfaction relation $s \models\Phi$ for state $s$ and state formula $\Phi$ is defined in the standard manner for the Boolean connectives. For the probabilistic operator, it is defined by: $ s \models\P{\bowtie q}{\varphi} \text{ iff } \Pr \{\pi\in\mathcal{S}^{\omega}(s)\mid\pi\models\varphi\} \bowtie q, $ where $\mathcal{S}^{\omega}(s)$ denotes the set of infinite paths starting from $s$. For MC $\text{\sf D}$, we write $\text{\sf D}\models\Phi$ iff its initial state satisfies $\Phi$, i.e., $s_0\models\Phi$.
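The probability measure and the semantics of the probabilistic operator can be made concrete on the right-hand MC of Fig.~\ref{fig:dtmc}. In the sketch below, the encoding of the MC as Python dictionaries is ours, and the fixed-point iteration for until-probabilities is the textbook algorithm rather than a construction of this paper:

```python
# The MC of Fig. (b): t0 --0.2--> t0, t0 --0.4--> t1, t0 --0.4--> t2,
# with t1, t2 absorbing; labels L(t0)={a}, L(t1)={b}, L(t2)={c}.
P = {"t0": {"t0": 0.2, "t1": 0.4, "t2": 0.4},
     "t1": {"t1": 1.0}, "t2": {"t2": 1.0}}
label = {"t0": {"a"}, "t1": {"b"}, "t2": {"c"}}

def cylinder_prob(path, init="t0"):
    # Pr(C_pi): product of transition probabilities along the finite path
    if not path or path[0] != init:
        return 0.0
    p = 1.0
    for s, t in zip(path, path[1:]):
        p *= P[s].get(t, 0.0)
    return p

def until_prob(a1, a2, iters=300):
    # Pr{pi | pi |= a1 U a2} per state, by fixed-point iteration
    # (a1, a2 restricted to single atomic propositions here)
    x = {s: 0.0 for s in P}
    for _ in range(iters):
        x = {s: 1.0 if a2 in label[s] else
             (sum(mu * x[t] for t, mu in P[s].items())
              if a1 in label[s] else 0.0)
             for s in P}
    return x

assert abs(cylinder_prob(("t0", "t0", "t1")) - 0.08) < 1e-12   # 0.2 * 0.4
# from t0: x = 0.4 + 0.2*x, hence x = 0.5, so t0 |= P_{>= 0.5}(a U b)
assert abs(until_prob("a", "b")["t0"] - 0.5) < 1e-9
```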
The satisfaction relation for $\pi \in \mathcal{S}^\omega$ and path formula $\varphi$ is defined by: \begin{align*} &\pi\models\text{\sf X}\Phi &&\text{iff } \pi[1]\models\Phi\\ &\pi\models\Phi_1\text{\sf U}\Phi_2 && \text{iff } \exists j\geq 0.\pi[j]\models\Phi_2\land\forall 0\leq k<j.\pi[k]\models\Phi_1\\ &\pi\models\Phi_1\text{\sf W}\Phi_2 && \text{iff }\pi\models\Phi_1\text{\sf U}\Phi_2\lor\forall i\ge 0.\pi[i]\models\Phi_1. \end{align*} The until $\text{\sf U}$ and weak until $\text{\sf W}$ modalities are dual: \begin{align*} \P{\ge q}{\Phi_1\text{\sf U}\Phi_2} &\equiv \P{\le 1-q}{(\Phi_1\land\neg\Phi_2)\text{\sf W}(\neg\Phi_1\land\neg\Phi_2)},\\ \P{\ge q}{\Phi_1\text{\sf W}\Phi_2} &\equiv \P{\le 1-q}{(\Phi_1\land\neg\Phi_2)\text{\sf U}(\neg\Phi_1\land\neg\Phi_2)}. \end{align*} These duality laws follow directly from the known equivalence $\neg (\Phi_1\text{\sf U}\Phi_2) \equiv (\Phi_1\wedge\neg\Phi_2)\text{\sf W}(\neg\Phi_1\wedge\neg\Phi_2)$ in the usual setting. Every PCTL formula can be transformed into an equivalent PCTL formula in \emph{positive normal form}. A formula is in positive normal form, if negation only occurs adjacent to atomic propositions. In the sequel, we assume PCTL formulas to be in positive normal form. \section{Safety and Liveness Properties} \label{sec:safety and liveness} \subsection{Probabilistic Trees} This section introduces the concept of probabilistic trees together with prefix and suffix relations over them. These notions are inspired by~\cite{Manolios2001SLB}. Let $A,B,\ldots$ range over $\mathcal{P}(\mathit{AP})$, where $\{a\}$ is abbreviated by $a$. Let $\epsilon$ be the empty sequence. 
\begin{definition}[Probabilistic tree]\label{def:pt} A \emph{probabilistic tree} (PT) is a tuple $T=\atree$ where $\epsilon\not\inW$, and \begin{itemize} \item $(W \cup \{\epsilon\}) \subseteq \mathbb{N}^*$ is an unlabelled tree, i.e., prefix-closed, \item $L: W\mapsto\mathcal{P}(\mathit{AP})$ is a node labelling function, \item $\mathit{P}:W\mapsto\mathit{Dist}(W)$ is an edge labelling function, which is a partial function satisfying $\mathit{P}(\pi)(\pi')>0$ iff $\pi'=\pi\cdot n\inW$ for some $n\in\mathbb{N}$. \end{itemize} \end{definition} The node $\pi$ with $|\pi|=1$ is referred to as the \emph{root}, while all nodes $\pi$ such that $P(\pi)$ is undefined are referred to as the \emph{leaves}. To simplify the technical presentation, $\epsilon$ is excluded from the tree. This will become clear after introducing the PT semantics for MCs. PT $T=\atree$ is \textit{total} iff for each $\pi_1\inW$ there exists $\pi_2\inW$ such that $\pi_1 \prec\pi_2$, otherwise it is \textit{non-total}. $T$ is \textit{finite-depth} if there exists $n\in\mathbb{N}$ such that $\len{\pi}\le n$ for each $\pi\inW$. Let $\mathbb{T}^\omega$ and $\mathbb{T}^*$ denote the sets of all total PTs and finite-depth PTs respectively, and $\mathbb{T}^\infty=\mathbb{T}^*\cup\mathbb{T}^\omega$. If no confusion arises, we often write a PT as a subset of $((0,1]\times\mathcal{P}(AP))^*$, i.e., as a set of sequences of its edge labelling and node labelling functions. \begin{example}[Probabilistic trees] Fig.~\ref{fig:pt} depicts the finite-depth PT $T=\atree$. Circles represent nodes and contain the node label and the order of the node respectively. $$W=\{0,00,01,02,000,001,002,011,022\}$$ and functions $L$ and $\mathit{P}$ are defined in the obvious way, e.g., $L(00)=a$ and $\mathit{P}(00,001)=0.4$. 
PT $T$ can also be written as: \begin{align*} &\{ (1,a) ,(1,a)(0.2,a),(1,a)(0.4,b),(1,a)(0.4,c),\\ &\phantom{\{ } (1,a)(0.2,a)(0.2,a),(1,a)(0.2,a)(0.4,b),\\ &\phantom{\{ } (1,a)(0.2,a)(0.4,c),(1,a)(0.4,b)(1,b),\\ &\phantom{\{ } (1,a)(0.4,c)(1,c)\}. \end{align*} \end{example} \begin{figure}[tbh] \centering \scalebox{0.8}{ \begin{tikzpicture}[->,>=stealth,auto,node distance=2cm,semithick,scale=1,every node/.style={scale=1}] \tikzstyle{state}=[minimum size=20pt,circle,draw,thick] \tikzstyle{stateNframe}=[] every label/.style=draw \tikzstyle{blackdot}=[circle,fill=black, minimum size=6pt,inner sep=0pt] \node[state](b1){$a,0$}; \node[state](b2)[right of=b1]{$b,1$}; \node[state](b3)[right of=b2]{$c,2$}; \node[state](b4)[right of=b3]{$b,1$}; \node[state](b5)[right of=b4]{$c,2$}; \node[state](m1)[above of=b2]{$a,0$}; \node[state](m2)[above of=b4]{$b,1$}; \node[state](m3)[above of=b5]{$c,2$}; \node[state](t1)[above of=m2]{$a,0$}; \path (m1) edge node[left] {0.2} (b1) edge node {0.4} (b2) edge node {0.4} (b3) (m2) edge node {1} (b4) (m3) edge node {1} (b5) (t1) edge node[left,yshift=5pt] {0.2} (m1) edge node[left] {0.4} (m2) edge node {0.4} (m3); \end{tikzpicture} } \caption{A sample probabilistic tree}\label{fig:pt} \end{figure} We now define when a PT is a prefix of another PT. \begin{definition}[Prefix]\label{def:prefix} Let $T_i=\atree[i]$ for $i{=}1,2$ with $T_1\in\mathbb{T}^*$ and $T_2\in\mathbb{T}^\infty$. $T_1$ is a \emph{prefix} of $T_2$, denoted $T_1\prefixTree T_2$, iff $$ W_1 \subseteq W_2 \mbox{ and } L_2 \upharpoonright W_1 = L_1 \mbox{ and } P_2 \upharpoonright (W_1 \times W_1) = P_1, $$ where $\upharpoonright$ denotes restriction. Let $\mathit{Pre}_{\mathit{fin}}(T) = \{T_1 \in \mathbb{T}^* \mid T_1 \preceq T\}$ denote the set of all prefixes of $T \in \mathbb{T}^\infty$. \end{definition} Conversely, we define a suffix relation between PTs: \begin{definition}[Suffix]\label{def:suffix} Let $T_i=\atree[i]$ with $T_i \in\mathbb{T}^\infty$, $i=1,2$.
$T_2$ is a \emph{suffix} of $T_1$ iff there exists $\pi_1\in W_1$ such that \begin{itemize} \item $\{\pi_1\cdot\pi_2\mid\pi_2\in W_2\}\subseteq W_1$; \item $L_2(\pi_2)=L_1(\pi_1{\cdot}\pi_2)$ for each $\pi_2\in W_2$; \item $\mathit{P}_2(\pi_2,\pi'_2)=\mathit{P}_1(\pi_1{\cdot}\pi_2,\pi_1{\cdot}\pi'_2)$ for any $\pi_2,\pi'_2\in W_2$. \end{itemize} \end{definition} Intuitively, a suffix $T_2$ of $T_1$ can be seen as a PT obtained after executing $T_1$ along some sequence $\pi_1\in W_1$. \subsection{A PT semantics for MCs} There is a close relation between PTs and MCs, as the execution of every MC is in fact a PT. Without loss of generality, we assume there exists a total order on the state space $\mathcal{S}$ of an MC, e.g., $\mathcal{S} = \mathbb{N}$. \begin{definition}[Unfolding of an MC]\label{def:unfolding} The \emph{unfolding} of the MC $\text{\sf D}=(\mathcal{S},\mathit{AP},\rightarrow,L,s_0)$ is the PT $T(\text{\sf D})=\atree[\text{\sf D}]$ with: \begin{itemize} \item $W_\text{\sf D}$ is the least set satisfying: i) $s_0\in W_\text{\sf D}$; ii) $\pi\in W_\text{\sf D}$ implies $\pi \cdot t \in W_\text{\sf D}$ for any $t\in \mathit{supp}(\mu)$, where $\lastState{\pi} \, \rightarrow \mu$; \item $L_\text{\sf D}(\pi)=L(\lastState{\pi})$ for each $\pi\in W_\text{\sf D}$; \item $\mathit{P}_\text{\sf D}(\pi,\pi')=\mu(\lastState{\pi'})$ where $\lastState{\pi} \, \rightarrow\mu$. \end{itemize} \end{definition} Note that the initial state $s_0$ is the root of the tree $T(\text{\sf D})$. \begin{example}[Prefix, suffix and unfolding] Let $T_2$ be the PT depicted in Fig.~\ref{fig:pt} and $T_1$ be the PT written as $\{(1,a),(1,a)(0.2,a),(1,a)(0.4,b),(1,a)(0.4,c)\}.$ It follows that $T_1$ is a prefix of $T_2$. Actually, $T_1$ is a fragment of $T_2$. PT $T_1$ can be seen as a partial execution of MC $\text{\sf D}$ in Fig.~\ref{fig:dtmc}(b) up to two steps, while $T_2$ is a partial execution of $\text{\sf D}$ up to three steps.
By taking the limit over the number of steps to infinity, one obtains the total PT $T(\text{\sf D})$. Note that $T_1$ and $T_2$ are both prefixes of $T(\text{\sf D})$. Let $T_3=\{(1,b),(1,b)(1,b),(1,b)(1,b)(1,b),\ldots\}$ be a total PT. By Def.~\ref{def:suffix}, $T_3$ is a suffix of $T(\text{\sf D})$. It represents the PT that results after jumping to $t_1$ in $\text{\sf D}$. \end{example} Def.~\ref{def:unfolding} suggests representing properties on MCs as sets of probabilistic trees. \begin{definition}[Property] A \emph{property} $P\subseteq\mathbb{T}^\omega$ is a set of total PTs. Property $P$ (over $\mathit{AP}$) is satisfied by an MC $\text{\sf D}$ (over $\mathit{AP}$), denoted $\text{\sf D}\models P$, iff $T(\text{\sf D})\in P$. \end{definition} The complement of $P$, denoted $\overline{P}$, equals $\mathbb{T}^\omega \setminus P$. In the sequel, let $P_\Phi = \{T(\text{\sf D})\mid\text{\sf D}\models\Phi\}$ denote the property corresponding to the PCTL-formula $\Phi$. By a slight abuse of notation, we abbreviate $P_\Phi$ by $\Phi$ when it causes no confusion. \subsection{Safety and Liveness} \label{sec:safetyandliveness} Along the lines of Alpern and Schneider~\cite{alpern1987recognizing}, let us define safety and liveness properties. \begin{definition}[Safety]\label{def:safety} $P\subseteq\mathbb{T}^\omega$ is a \emph{safety property} iff for all $T\in\mathbb{T}^\omega$: $T\in P\text{ iff } \forall T_1\in\mathit{Pre}_{\mathit{fin}}(T). \, (\exists T_2\in P. \, T_1\prefixTree T_2).$ \end{definition} Thus, a safety property $P$ only consists of trees $T$ for which any finite-depth prefix of $T$ can be extended to a PT in $P$. Colloquially stated, if $T \not\in P$, there is a finite-depth prefix of $T$ in which a ``bad thing'' has happened within finite depth and is irremediable. \begin{definition}[Liveness]\label{def:liveness} $P\subseteq\mathbb{T}^\omega$ is a \emph{liveness property} iff: $\forall T_1\in\mathbb{T}^*. \, \exists T_2\in P.
\, T_1\prefixTree T_2.$ \end{definition} Intuitively, a property $P$ is live iff for any finite-depth PT, it is possible to extend it such that the resulting PT satisfies $P$. Colloquially stated, it is always possible to make ``good things'' happen eventually. As in the classical setting, it holds that $\emptyset$ is a safety property, while $\mathbb{T}^\omega$ is the only property which is both safe and live. \begin{example}[Classification of sample PCTL formulas]\label{ex:classification}\ \begin{itemize} \item $\Phi=\P{\leq 0.5}{a\text{\sf U} b}$ is a safety property.\\ This can be seen as follows. First, note that $T\in\Phi$ and $T_1\in\mathit{Pre}_{\mathit{fin}}(T)$ implies the existence of $T_1\prefixTree T_2:=T$ with $T_2\in\Phi$. The other direction goes by contraposition. Assume $T\not\in \Phi$, but for all $T_1\in\mathit{Pre}_{\mathit{fin}}(T)$, there exists $T_2\in\Phi$ such that $T_1\prefixTree T_2$ (assumption *). If $T\not\in\Phi$, i.e., $T\in\P{>0.5}{a\text{\sf U} b}$, there must exist $T_1\in\mathit{Pre}_{\mathit{fin}}(T)$ in which the probability of reaching a $b$-state via $a$-states exceeds $0.5$. Therefore, $T_1\not\prefixTree T_2$ for any $T_2\in\Phi$. This contradicts assumption (*). \item $\Phi=\P{\geq 0.5}{a\text{\sf U} b}$ is neither safe nor live.\\ Let $\text{\sf D}$ be the MC depicted in Fig.~\ref{fig:dtmc}(a). Every finite-depth PT $T_1$ with $T_1\prefixTree T(\text{\sf D})$ can easily be extended to $T_2$ such that $T_2\in\Phi$ and $T_1\prefixTree T_2$. But obviously $T(\text{\sf D})\not\in\Phi$. Therefore $\Phi$ is not a safety property. To show that $\Phi$ is not a liveness property, let $T_1=\{(1,a),(1,a)(p,a),(1,a)(1-p,c)\}$ with $p<0.5$. For any possible extension of $T_1$, the probability of satisfying $a\text{\sf U} b$ is at most $p<0.5$. Therefore $\Phi$ is not live.
\item $\Phi=\P{\geq 0.5}{\Diamond b}$ and $\Phi=\P{>0.5}{\Diamond b}$ are liveness properties.\\ For every finite-depth PT $T_1$, there exists $T_2\in\Phi$ such that $T_1\prefixTree T_2$ (obtained by extending $T_1$ with $b$-states). \item $\Phi=\P{< 0.5}{a\text{\sf U} b}$ is neither safe nor live.\\ Consider the MC $\text{\sf D}$ in Fig.~\ref{fig:dtmc}(b). Since the probability of reaching the $b$-state $t_1$ is 0.5, $T(\text{\sf D})\not\in\Phi$. The probability of reaching $t_1$ in finitely many steps is however strictly less than 0.5. Thus, for any $T_1\in\mathit{Pre}_{\mathit{fin}}(T(\text{\sf D}))$, there exists $T_2\in\Phi$ with $T_1\prefixTree T_2$. Therefore $\Phi$ is not a safety property. Moreover, PTs like $T_1=\{(1,b)\}$, whose root satisfies $a\text{\sf U} b$ with probability 1 under any extension, show that $\Phi$ is not a liveness property either. Remark that $\P{\le 0.5}{a\text{\sf U} b}$ is a safety property, whereas $\P{<0.5}{a\text{\sf U} b}$ is neither safe nor live. This can be seen as follows. Intuitively, $T\not\models\P{\le 0.5}{a\text{\sf U} b}$ iff $T\models\P{>0.5}{a\text{\sf U} b}$, i.e., the probability of paths in $T$ satisfying $a \text{\sf U} b$ exceeds 0.5. For this, there must exist a set of \emph{finite} paths in $T$ satisfying $a\text{\sf U} b$ whose probability mass exceeds 0.5. However, this does not hold for $\P{<0.5}{a \text{\sf U} b}$, as $T\not\models\P{< 0.5}{a\text{\sf U} b}$ iff $T\models\P{\ge 0.5}{a\text{\sf U} b}$. There exist PTs (like the unfolding of the MC in Fig.~\ref{fig:dtmc}(b)) that satisfy $\P{\ge 0.5}{a\text{\sf U} b}$, but the probability mass of their \emph{finite} paths satisfying $a\text{\sf U} b$ never exceeds 0.5. \item $\Phi=\P{> 0.4}{a\text{\sf U} b}$ is neither safe nor live.\\ Consider the MC $\text{\sf D}$ in Fig.~\ref{fig:dtmc}(a). Clearly, $\text{\sf D} \not\models \Phi$, as the probability of reaching a $b$-state is 0. But any finite-depth prefix of $T(\text{\sf D})$ can be extended to a PT in $\Phi$. Thus, $\Phi$ is not a safety property.
Moreover, for finite-depth PTs like $T_1=\{(1,c)\}$, there exists no $T_2\in\Phi$ such that $T_1\prefixTree T_2$. Therefore $\Phi$ is not a liveness property. \end{itemize} \end{example} \subsection{Characterisations of Safety and Liveness} As a next step, we aim to give alternative characterisations of safety and liveness properties using topological closures~\cite{Manolios2003LCS}. \begin{definition}[Topological closure]\label{def:topological closure} Let $X$ be a set. The function $\mathit{tco}:\mathcal{P}(X)\mapsto\mathcal{P}(X)$ is a \emph{topological closure} operator on $X$ iff for any $C,D\subseteq X$ it holds: \begin{enumerate} \item $\mathit{tco}(\emptyset)=\emptyset$; \item $C\subseteq\mathit{tco}(C)$; \item $\mathit{tco}(C)=\mathit{tco}(\mathit{tco}(C))$; \item $\mathit{tco}(C\cup D)=\mathit{tco}(C)\cup\mathit{tco}(D)$. \end{enumerate} \end{definition} The following lemma shows two important properties of topological closure operators, where $\comp{C} = X\setminus C$ denotes the complement of $C$ w.r.t.\, $X$. \begin{lemma}[\cite{Manolios2003LCS}]\label{lem:topological closure} For a topological closure operator $\mathit{tco}$ on $X$ and $C\subseteq X$ we have: \begin{itemize} \item $\mathit{tco}(C\cup\comp{\mathit{tco}(C)})=X$; \item $\mathit{tco}(C)\cap(C\cup\comp{\mathit{tco}(C)})=C$. \end{itemize} \end{lemma} A closure function maps sets of total trees onto sets of total trees. It is particularly useful when applied to properties. \begin{definition}[Property closure]\label{def:closure linear} Let $\mathit{cls}: \mathcal{P}(\mathbb{T}^\omega)\rightarrow\mathcal{P}(\mathbb{T}^\omega)$. The \emph{closure} of property $P \subseteq \mathbb{T}^\omega$ is defined by: $$ \mathit{cls}(P) = \{T\in\mathbb{T}^\omega\mid\forall T_1\in\mathit{Pre}_{\mathit{fin}}(T).(\exists T_2\in P. \, T_1\prefixTree T_2)\}. $$ \end{definition} Intuitively speaking, $\mathit{cls}(P)$ is the set of probabilistic trees for which all prefixes have an extension in $P$.
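The finite-depth prefixes quantified over by $\mathit{cls}$ can be computed directly in the sequence representation of PTs introduced above. The following Python sketch is our own illustration (the function names and the dictionary encoding of MCs are not part of the formal development): it unfolds an MC up to a bounded number of steps in the style of Def.~\ref{def:unfolding} and checks the prefix relation of Def.~\ref{def:prefix}, using the MC of Fig.~\ref{fig:dtmc}(b) with the transition probabilities of the running example.

```python
def unfold(trans, labels, s0, steps):
    """Unfold an MC into the finite-depth PT of all executions of <= `steps` steps.
    trans : state -> list of (probability, successor) pairs
    labels: state -> atomic proposition of that state"""
    tree, frontier = set(), [(((1.0, labels[s0]),), s0)]  # the root carries probability 1
    for _ in range(steps + 1):
        nxt = []
        for seq, s in frontier:
            tree.add(seq)
            nxt += [(seq + ((p, labels[t]),), t) for p, t in trans[s]]
        frontier = nxt
    return tree

def is_prefix(t1, t2):
    """Prefix relation of Def. (Prefix): in the sequence encoding, T1 is a
    prefix of T2 iff T1 is a prefix-closed subset of T2, since each sequence
    already carries the edge and node labelling."""
    prefix_closed = all(seq[:-1] in t1 for seq in t1 if len(seq) > 1)
    return prefix_closed and t1 <= t2

# MC D of Fig. (b), reconstructed from the running example: s0 loops with 0.2
# and moves to the absorbing b-state t1 and c-state t2 with 0.4 each.
trans  = {'s0': [(0.2, 's0'), (0.4, 't1'), (0.4, 't2')],
          't1': [(1.0, 't1')], 't2': [(1.0, 't2')]}
labels = {'s0': 'a', 't1': 'b', 't2': 'c'}

T1 = unfold(trans, labels, 's0', 1)   # the two-step prefix T_1 of the example
T2 = unfold(trans, labels, 's0', 2)   # the PT of Fig. 1 (9 nodes)
assert is_prefix(T1, T2) and not is_prefix(T2, T1)
```

Iterating `unfold` with growing `steps` enumerates exactly the prefixes in $\mathit{Pre}_{\mathit{fin}}(T(\text{\sf D}))$ that arise as full unfoldings; general members of $\mathit{Pre}_{\mathit{fin}}$ may in addition cut off individual branches at different depths.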
Consider the topological space $(\mathbb{T}^\omega, \mathcal{P}(\mathbb{T}^\omega))$. It follows: \begin{lemma}\label{lem:cls topological closure} The function $\mathit{cls}$ is a topological closure operator on $(\mathbb{T}^\omega, \mathcal{P}(\mathbb{T}^\omega))$. \end{lemma} The following theorem provides a topological characterisation of safety and liveness for probabilistic systems, which can be seen as a conservative extension of the results in~\cite{Manolios2003LCS}. \begin{theorem}\label{thm:safety and liveness characterisation} \ \begin{enumerate} \item $P$ is a safety property iff $P=\mathit{cls}(P)$. \item $P$ is a liveness property iff $\mathit{cls}(P)=\mathbb{T}^\omega$. \end{enumerate} \end{theorem} Theorem~\ref{thm:safety and liveness characterisation} asserts that a property is safe iff its closure coincides with itself. A property $P$ is live iff the closure of $P$ equals $\mathbb{T}^\omega$, i.e., the set of all total PTs. \begin{remark}\label{rm:decomposition} From these results, it follows that $P\cup\comp{\mathit{cls}(P)}$ is a liveness property for any $P$. Using Lemma~\ref{lem:topological closure}, we have $\mathit{cls}(P\cup\comp{\mathit{cls}(P)}) = \mathit{cls}(P)\cup\mathit{cls}(\comp{\mathit{cls}(P)})\supseteq\mathit{cls}(P)\cup\comp{\mathit{cls}(P)}=\mathbb{T}^\omega$. Therefore $\mathit{cls}(P\cup\comp{\mathit{cls}(P)})=\mathbb{T}^\omega$. By Theorem~\ref{thm:safety and liveness characterisation}, it follows that $P\cup\comp{\mathit{cls}(P)}$ is a liveness property. \end{remark} Theorem~\ref{thm:safety and liveness characterisation} and Remark~\ref{rm:decomposition} provide the basis for a decomposition result stating that every property can be represented as an intersection of a safety and liveness property. \begin{proposition}[Decomposition proposition]\label{prop:decompose} For any property $P\subseteq\mathbb{T}^\omega$, $P=\mathit{cls}(P)\cap(P\cup\comp{\mathit{cls}(P)})$.
\end{proposition} We can thus decompose any property $P$ into the intersection of the properties $\mathit{cls}(P)$ and $P\cup\comp{\mathit{cls}(P)}$, where $\mathit{cls}(P)$ is a safety property by Theorem~\ref{thm:safety and liveness characterisation}, and $P\cup\comp{\mathit{cls}(P)}$ is a liveness property by Remark~\ref{rm:decomposition}. Finally, we study whether safety and liveness properties are closed under conjunction and disjunction. \begin{lemma}\label{lem:disjunction and conjunction} Given two properties $P_1$ and $P_2$: \begin{enumerate} \item Safety properties are closed under $\cap$ and $\cup$; \item If $P_1$ and $P_2$ are live with $P_1\cap P_2\neq\emptyset$, so is $P_1\cap P_2$; \item If at least one of $P_1$ and $P_2$ is live, so is $P_1\cup P_2$. \end{enumerate} \end{lemma} Lemma~\ref{lem:disjunction and conjunction} provides a means to prove safety and liveness properties in a compositional way. For instance, to prove that $P_1\cap P_2$ is safe, we can separately check whether $P_1$ and $P_2$ are safe. If both $P_1$ and $P_2$ are safe, then so is $P_1\cap P_2$. \subsection{Safety and liveness versus counterexamples} \label{sec:counterexample} We conclude this section by providing a relationship between safety and liveness properties and counterexamples. A property $P$ only has finite counterexamples iff for any MC $\text{\sf D}$ with $\text{\sf D}\not\models P$, there exists $T_1\in\mathit{Pre}_{\mathit{fin}}(T(\text{\sf D}))$ with $T_1\not\prefixTree T_2$ for any $T_2\in P$. Conversely, a property $P$ has no finite counterexamples iff for any MC $\text{\sf D}$ such that $\text{\sf D}\not\models P$, for each $T_1\in\mathit{Pre}_{\mathit{fin}}(T(\text{\sf D}))$ there exists $T_2\in P$ such that $T_1\prefixTree T_2$, i.e., no finite-depth prefix is able to violate the property. \begin{theorem}\label{thm:counterexample safety and liveness} \ \begin{enumerate} \item $P$ is safe iff it only has finite counterexamples.
\item $P$ is live iff it has no finite counterexamples. \end{enumerate} \end{theorem} Recall that $\Phi=\P{\leq 0.5}{a\text{\sf U} b}$ is a safety property. As shown in~\cite{Han2009CGP}, for any MC $\text{\sf D}$ with $\text{\sf D}\not\models\Phi$, there exists a (finite) set of finite paths of $\text{\sf D}$ whose probability mass exceeds 0.5. This indicates that $\Phi$ only has finite counterexamples. \section{Qualitative Properties} \label{sec:qualitative properties} \begin{table} \centering \caption{Property classification of qualitative PCTL}\label{tab:qualitative} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Qualitative PCTL} & \multirow{2}{*}{Equivalence} & \multicolumn{3}{|c|}{CTL}\\ \cline{1-2}\cline{4-6}formula & here & & formula & \cite{Manolios2001SLB} & \cite{alpern1987recognizing} \\ \hline \hline $\P{=1}{\Diamond a}$ & L & $\not\equiv$ & $\forall\Diamond a$ & UL & L\\[1ex] $\P{>0}{\Diamond a}$ & L & $\equiv$ & $\exists\Diamond a$ & EL & \\[1ex] $\P{>0}{a\text{\sf U} b}$ & X & $\equiv$ & $\exists(a\text{\sf U} b)$ & X & X\\[1ex] $\P{=1}{\Box a}$ & S & $\equiv$ & $\forall\Box a$ & US & S\\[1ex] $\P{>0}{\Box a}$ & X & $\not\equiv$ & $\exists\Box a$ & ES & S\\[1ex] \hline \end{tabular} \end{table} The qualitative fragment of PCTL only contains formulas with probability bounds $\geq 1$ (or $=1$) and $>0$. Although CTL and qualitative PCTL have incomparable expressive power~\cite{Baier2008PMC}, they have a large fragment in common. (For finite MCs, qualitative PCTL coincides with CTL under strong fairness assumptions.) This provides a basis for comparing the property classification defined above to the existing classification for branching-time properties~\cite{Manolios2001SLB}. A qualitative PCTL-formula $\Phi$ is equivalent to a CTL-formula $\Psi$ whenever $\text{\sf D} \models \Phi$ iff $\text{\sf D} \models \Psi$, where the latter is interpreted over the underlying digraph of MC $\text{\sf D}$.
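As a small illustration of the qualitative reading, $\text{\sf D}\models\P{>0}{a\text{\sf U} b}$ holds iff the underlying digraph of $\text{\sf D}$ contains a path to a $b$-state passing only through $a$-states, so it can be decided by plain graph search. The Python sketch below uses our own dictionary encoding of the underlying digraph; the example MC follows Fig.~\ref{fig:dtmc}(b) with the probabilities of the running example, which are irrelevant for the qualitative check and therefore dropped.

```python
def exists_until(digraph, labels, s0, a, b):
    """Check the CTL-formula  exists(a U b)  on the underlying digraph,
    which coincides with D |= P_{>0}[a U b] on the MC itself."""
    stack, seen = [s0], set()
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if b in labels[s]:
            return True          # a b-state reached via a-states only
        if a in labels[s]:
            stack += digraph[s]  # explore successors only from a-states
    return False

# Underlying digraph of the MC of Fig. (b): s0 -> {s0, t1, t2}; t1, t2 absorbing.
digraph = {'s0': ['s0', 't1', 't2'], 't1': ['t1'], 't2': ['t2']}
labels  = {'s0': {'a'}, 't1': {'b'}, 't2': {'c'}}
assert exists_until(digraph, labels, 's0', 'a', 'b')
```

By contrast, $\P{=1}{\Diamond a}$ admits no such purely graph-based check, in line with the non-equivalence $\P{=1}{\Diamond a}\not\equiv\forall\Diamond a$ recorded in Table~\ref{tab:qualitative}.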
\begin{example}[Classifying qualitative PCTL versus CTL/LTL] \ \begin{itemize} \item $\P{=1}{\Diamond a}$ and $\forall\Diamond a$. Although $\P{=1}{\Diamond a} \not\equiv \forall\Diamond a$, both formulas are liveness properties. Recall that $\P{=1}{\Diamond a} \equiv\P{\ge 1}{\top\text{\sf U} a}$, which is a liveness property (see Example~\ref{ex:classification}). \item $\P{>0}{\Diamond a}$ and $\exists\Diamond a$. As $\P{>0}{\Diamond a}\equiv\P{>0}{\top\text{\sf U} a}$, it follows from Example~\ref{ex:classification} that $\P{>0}{\Diamond a}$ is a liveness property. According to~\cite{Manolios2001SLB}, the \emph{CTL}-formula $\exists\Diamond a$ is an existential liveness property. Note that $\forall\Diamond a$ and $\exists\Diamond a$ coincide in the linear-time setting of~\cite{alpern1987recognizing}. \item $\P{>0}{a\text{\sf U} b}$ and $\exists(a\text{\sf U} b)$. Note that $\P{>0}{a\text{\sf U} b}\equiv\exists(a\text{\sf U} b)$. In fact, their classifications also coincide: the \emph{PCTL}-formula $\P{>0}{a\text{\sf U} b}$ is neither safe nor live (see Example~\ref{ex:classification}), whereas the CTL-formula $\exists(a\text{\sf U} b)$ is also neither safe nor live~\cite{Manolios2001SLB}. Similarly, in the linear-time setting, $a \text{\sf U} b$ is neither safe nor live~\cite{alpern1987recognizing}. \item $\P{=1}{\Box a}$ and $\forall\Box a$. In this case, $\P{=1}{\Box a}\equiv\forall\Box a$ (see~\cite{Baier2008PMC}). Since $\P{=1}{\Box a} \equiv \P{\leq 0}{a \text{\sf U}\neg a}$, it follows from Example~\ref{ex:classification} that $\P{=1}{\Box a}$ is safe. This coincides with the characterisation of $\forall\Box a$ in~\cite{alpern1987recognizing}. \item $\P{>0}{\Box a}$ and $\exists\Box a$. As shown in~\cite{Baier2008PMC}, $\P{>0}{\Box a}\not\equiv\exists\Box a$. This non-equivalence is also reflected in the property characterisation. Since $\P{>0}{\Box a}\equiv \P{<1}{a\text{\sf U}\neg a}$, it is neither safe nor live (see Example~\ref{ex:classification}).
In contrast, $\exists\Box a$ is classified as a safety property in~\cite{alpern1987recognizing} and as an existential safety property in~\cite{Manolios2001SLB}. \end{itemize} \end{example} Table~\ref{tab:qualitative} summarises the classification, where L, S, and X denote liveness, safety, and other properties respectively, while the prefixes E and U denote \textit{existential} and \textit{universal} respectively. The second column indicates our characterisation, while the 5th and 6th columns present the characterisations of~\cite{Manolios2001SLB} and~\cite{alpern1987recognizing} respectively. Bear in mind that~\cite{alpern1987recognizing} considers linear-time properties. In conclusion, our characterisation for qualitative PCTL coincides with that of~\cite{alpern1987recognizing} and \cite{Manolios2001SLB} with the exception of $\P{>0}{\Box a}$. \cite{Manolios2001SLB} considers the branching-time setting and treats two types of safety properties: universal safety (such as $\forall\Box a$) and existential safety (e.g., $\exists\Box a$). The same applies to liveness properties. Accordingly, \cite{Manolios2001SLB} considers two closure operators: one using finite-depth prefixes (as in Def.~\ref{def:closure linear}) and one taking non-total prefixes into account. The former is used for universal safety and liveness properties, the latter for existential safety and liveness. This explains the mismatches in Table~\ref{tab:qualitative}. We remark that our characterisation of qualitative properties would coincide with~\cite{Manolios2001SLB} when using a variant of $\mathit{cls}$ that considers non-total prefixes. \section{Safety PCTL}\label{sec:safety pctl} In this section, we provide syntactic characterisations of safety properties in PCTL. For flat PCTL, in which nesting is prohibited, we present an algorithm to decompose a flat PCTL-formula into a conjunction of a safe and a live formula.
Then we provide a sound and complete characterisation for full PCTL. In both settings, formulas with strict probability bounds are excluded. \subsection{Flat PCTL}\label{sec:flat pctl} Here we focus on a flat fragment of PCTL, denoted $\text{PCTL}_{\mathit{flat}}$, whose syntax is given by the following grammar: $$ \Phi ::= \P{\bowtie q}{\Phi^a_1\text{\sf U}\Phi^a_2} \mid \P{\bowtie q}{\Phi^a_1\text{\sf W}\Phi^a_2} \mid \P{\bowtie q}{\text{\sf X}\Phi^a} \mid\Phi_1 \wedge \Phi_2 \mid \Phi_1 \vee \Phi_2 $$ with $\bowtie \, \in \{\le,\ge\}$, where formulas of the form $\Phi^a ::= a \mid \neg\Phi^a \mid \Phi^a_1\wedge\Phi^a_2$ are referred to as \emph{literal formulas}. The fragment $\text{PCTL}_{\mathit{flat}}$ excludes nested probabilistic operators as well as strict probability bounds. Note that by applying the distributivity laws of disjunction and conjunction, every formula $\Phi$ in $\text{PCTL}_{\mathit{flat}}$ can be transformed into an equivalent formula in which all conjunctions are at the outermost level, except for those between literal formulas $\Phi^a$. Therefore we assume all $\text{PCTL}_{\mathit{flat}}$-formulas to be of this form. We provide an algorithm that decomposes a $\text{PCTL}_{\mathit{flat}}$-formula into a conjunction of two PCTL-formulas, one of which is a safety property, while the other is a liveness property.
$\text{PCTL}_{\mathit{flat}}$ is closed under taking the closure: \begin{lemma}\label{lem:closure PCTL} The closure of a $\text{PCTL}_{\mathit{flat}}$-formula is given by: $$ \begin{array}{rcl} \mathit{cls}(\Phi^a) & = & \Phi^a \\ \mathit{cls}(\P{\bowtie q}{\text{\sf X}\Phi^a}) & = & {\P{\bowtie q}{\text{\sf X}\Phi^a}} \mbox{ for } \bowtie \, \in \{ \leq, \geq \} \\ \mathit{cls}(\P{\le q}{\Phi^a_1\text{\sf U}\Phi^a_2}) & = & {\P{\le q}{\Phi^a_1\text{\sf U}\Phi^a_2}} \\ \mathit{cls}(\P{\ge q}{\Phi^a_1\text{\sf U}\Phi^a_2}) & = & {\P{\geq q}{\Phi^a_1\text{\sf W}\Phi^a_2}} \\ \mathit{cls}(\P{\ge q}{\Phi^a_1\text{\sf W}\Phi^a_2}) & = & {\P{\ge q}{\Phi^a_1\text{\sf W}\Phi^a_2}} \\ \mathit{cls}(\P{\le q}{\Phi^a_1\text{\sf W}\Phi^a_2}) & = & {\P{\le q}{\Phi^a_1\text{\sf U}\Phi^a_2}} \\ \mathit{cls}({\Phi_1\lor\Phi_2}) & = & \mathit{cls}({\Phi_1})\lor\mathit{cls}({\Phi_2}). \end{array} $$ \end{lemma} By Lemma~\ref{lem:closure PCTL}, the size of $\mathit{cls}(\Phi)$ is linear in the size of $\Phi$ for any $\text{PCTL}_{\mathit{flat}}$-formula $\Phi$. In Lemma~\ref{lem:closure PCTL}, we do not define the closure formula for conjunctions, as in general it does not hold that $\mathit{cls}(\Phi_1\land\Phi_2)=\mathit{cls}(\Phi_1)\land\mathit{cls}(\Phi_2)$: \begin{example}[Closure of conjunctions] Let $\Phi=\Phi_1\land\Phi_2$ where $\Phi_1=\P{\ge 1}{a\text{\sf U} b}$ and $\Phi_2=\P{\ge 1}{(a\land\neg b)\text{\sf U}(\neg a\land\neg b)}$. It follows that $\Phi\equiv\bot$. We show that $\mathit{cls}(\Phi)\neq\mathit{cls}(\Phi_1)\land\mathit{cls}(\Phi_2)=\P{\ge 1}{a\text{\sf W} b}\land\P{\ge 1}{(a\land\neg b)\text{\sf W}(\neg a\land\neg b)}$. Since a PT that almost surely stays in $(a\land\neg b)$-states forever is in $\mathit{cls}(\Phi_1)\land\mathit{cls}(\Phi_2)$, we have $\mathit{cls}(\Phi_1)\land\mathit{cls}(\Phi_2)\not\equiv\bot$. However, $\mathit{cls}(\Phi)\equiv\bot$ because $\Phi\equiv\bot$. \end{example} Algorithm~\ref{alg:safety and liveness} describes the decomposition procedure.
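For illustration, Lemma~\ref{lem:closure PCTL} and the core of Algorithm~\ref{alg:safety and liveness} can be sketched as a purely syntactic rewrite on formula syntax trees. The tuple encoding below is our own and deliberately minimal: \texttt{('U', '>=', q, f1, f2)} stands for $\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}$, literal formulas are opaque \texttt{('lit', ...)} nodes, and negation in the liveness part is kept symbolic, mirroring the fact that $\Phi^l$ may leave $\text{PCTL}_{\mathit{flat}}$.

```python
def cls(phi):
    """Closure of a conjunction-free flat-PCTL formula (Lemma, closure PCTL):
    U with a lower bound weakens to W; W with an upper bound strengthens to U."""
    tag = phi[0]
    if tag in ('lit', 'X'):
        return phi                               # literals and X-formulas are their own closure
    if tag == 'or':
        return ('or', cls(phi[1]), cls(phi[2]))  # closure distributes over disjunction
    mod, bound, q, f1, f2 = phi                  # ('U'|'W', '<='|'>=', q, f1, f2)
    if (mod, bound) == ('U', '>='):
        return ('W', '>=', q, f1, f2)
    if (mod, bound) == ('W', '<='):
        return ('U', '<=', q, f1, f2)
    return phi                                   # ('U','<=') and ('W','>=') are already closed

def decompose(conjuncts):
    """Lines 2-4 of Algorithm 1: from conjuncts Phi_1,...,Phi_n build the safety
    part AND_i cls(Phi_i) and the liveness part AND_i (Phi_i or not cls(Phi_i))."""
    safe = [cls(c) for c in conjuncts]
    live = [('or', c, ('not', s)) for c, s in zip(conjuncts, safe)]
    return ('and', *safe), ('and', *live)

phi = ('U', '>=', 0.5, ('lit', 'a'), ('lit', 'b'))       # P_{>=0.5}[a U b]
assert cls(phi) == ('W', '>=', 0.5, ('lit', 'a'), ('lit', 'b'))
```

Line~\ref{line:distributivity} of the algorithm (the CNF-style transformation) is omitted here: `decompose` expects its input already as a list of conjunction-free conjuncts.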
It is worth mentioning that given $\Phi\in\text{PCTL}_{\mathit{flat}}$, Algorithm~\ref{alg:safety and liveness} returns a pair of formulas $(\Phi^s,\Phi^l)$ such that $\Phi\equiv\Phi^s\land\Phi^l$, where $\Phi^s\in\text{PCTL}_{\mathit{flat}}$, but $\Phi^l$ is not necessarily in $\text{PCTL}_{\mathit{flat}}$. \begin{algorithm}[!t] \caption{$\text{PCTL}_{\mathit{flat}}$ decomposition}\label{alg:safety and liveness} \begin{algorithmic}[1] \REQUIRE A $\text{PCTL}_{\mathit{flat}}$-formula $\Phi$. \ENSURE~~\\ $(\Phi^s,\Phi^l)$ such that $\Phi^s\land\Phi^l\equiv\Phi$ where $\Phi^s$ is a safety property and $\Phi^l$ is a liveness property. \\[1ex] \STATE Transform $\Phi$ into an equivalent formula such that $\Phi\equiv\Phi_1\land\Phi_2\land\ldots\land\Phi_n$ where $\Phi_i$ ($1\le i\le n$) contains no conjunction operators except between literal formulas;\label{line:distributivity} \STATE Let $\Phi^s_i = \mathit{cls}(\Phi_i)$ for each $1\le i\le n$ (see Lemma~\ref{lem:closure PCTL});\label{line:safety} \STATE Let $\Phi^l_i = \Phi_i\lor\neg\Phi^s_i$ for each $1\le i\le n$;\label{line:liveness} \STATE Return ($\bigwedge_{1\le i\le n}\Phi^s_i, \bigwedge_{1\le i\le n}\Phi^l_i$).\label{line:conjunction} \end{algorithmic} \end{algorithm} \begin{theorem}\label{thm:correctness of algorithm} Algorithm~\ref{alg:safety and liveness} is correct. \end{theorem} Line~\ref{line:distributivity} of Algorithm~\ref{alg:safety and liveness} may cause an exponential blow-up, as it transforms $\Phi$ into an equivalent formula in conjunctive normal form. It follows that Algorithm~\ref{alg:safety and liveness} has an exponential worst-case time complexity. The reason for not considering formulas with strict bounds can be seen in the following example: \begin{example}[Strict bounds]\label{ex:non-strict} Let $\Phi=\P{>0.5}{a\text{\sf U} b}$. We show that $\mathit{cls}(\Phi)$ cannot be represented in \emph{PCTL}. Let $\text{\sf D}_1$ be the MC in Fig.~\ref{fig:dtmc}(b).
Every finite-depth prefix $T_1$ of $T(\text{\sf D}_1)$ can easily be extended to a PT $T_2\in\Phi$ such that $T_1 \prefixTree T_2$. From Def.~\ref{def:closure linear} it follows that $T(\text{\sf D}_1)\in\mathit{cls}(\Phi)$. Now consider MC $\text{\sf D}_2$ in Fig.~\ref{fig:dtmc}(a) where we label state $s_1$ with $b$ (rather than $c$). Then $T(\text{\sf D}_2)\not\in\mathit{cls}(\Phi)$. For instance, the finite-depth prefix $\{(1,a),(1,a)(0.5,b),(1,a)(0.5,c)\}$ of $T(\text{\sf D}_2)$ cannot be extended to a PT in $\Phi$, as the probability of reaching $b$-states via only $a$-states is at most $0.5$. Applying \cite[Th.\ 50]{Baier2005CBS}, no {PCTL} $\text{\sf X}$-free formula can distinguish $\text{\sf D}_1$ and $\text{\sf D}_2$, as they are \emph{weakly bisimilar} (which is easy to verify). The above arguments indicate that all PTs in which $\neg(a\lor b)$-states are reached with probability $\ge$ 0.5 in finitely many steps are not in $\mathit{cls}(\Phi)$, while PTs where $\neg(a\lor b)$-states can only be reached with probability $\ge$ 0.5 in infinitely many steps are in $\mathit{cls}(\Phi)$. However, in order to characterise PTs where $\neg(a\lor b)$-states can only be reached with probability $\ge$ 0.5 in infinitely many steps, we need an infinitary conjunction of $\text{\sf X}$ operators. This is not possible in {PCTL}. Thus, $\mathit{cls}(\Phi)$ cannot be represented in {PCTL}. \end{example} \subsection{Safety PCTL with Nesting} \label{sec:embedded PCTL} In this section we aim to give a sound and complete characterisation of safety properties in PCTL. That is to say, we define a fragment of PCTL that, in contrast to $\text{PCTL}_{\mathit{flat}}$, allows nesting of probability operators, such that each formula in this fragment is a safety property. We also show the converse, namely that every safety property expressible in PCTL can be expressed as a formula in this fragment.
For the same reasons as explained in Example~\ref{ex:non-strict}, strict probability bounds are excluded. The logical fragment is defined as follows. \begin{definition}[Safety PCTL]\label{def:safty pctl} Let $\mathcal{F}=\text{PCTL}_{\mathit{safe}}$ denote the \emph{safe fragment} of \emph{PCTL}, defined as the smallest set satisfying: \begin{enumerate} \item $\Phi^a\in\mathcal{F}$; \item If $\Phi\in\mathcal{F}$, then $\P{\ge q}{\text{\sf X}\Phi}\in\mathcal{F}$; \item If $\Phi_1,\Phi_2\in\mathcal{F}$, then $\Phi_1\land\Phi_2,\Phi_1\lor\Phi_2,\P{\ge q}{\Phi_1\text{\sf W}\Phi_2}\in\mathcal{F}$; \item If $\neg\Phi_1,\neg\Phi_2\in\mathcal{F}$, then $\P{\le q}{\Phi_1\text{\sf U}\Phi_2}\in\mathcal{F}$. \end{enumerate} \end{definition} The next result asserts that all formulas in $\text{PCTL}_{\mathit{safe}}$ are indeed safety properties according to Def.~\ref{def:safety}. \begin{theorem}\label{thm:safety pctl} Every $\text{PCTL}_{\mathit{safe}}$-formula is a safety property. \end{theorem} The following theorem asserts (in some sense) the converse of Theorem~\ref{thm:safety pctl}, i.e., all safety properties in PCTL can be represented by an equivalent formula in $\text{PCTL}_{\mathit{safe}}$. \begin{theorem}\label{thm:safety pctl complete} For every safety property $\Phi$ expressible in \emph{PCTL} (no strict bounds), there exists $\Phi'\in\text{PCTL}_{\mathit{safe}}$ with $\Phi\equiv\Phi'$. \end{theorem} Note that for any $\Phi\in\text{PCTL}_{\mathit{flat}}$, $\mathit{cls}(\Phi)\in\text{PCTL}_{\mathit{flat}}\cap\text{PCTL}_{\mathit{safe}}$. Thus, Algorithm~\ref{alg:safety and liveness} decomposes a $\text{PCTL}_{\mathit{flat}}$-formula $\Phi$ into a conjunction of a safety and a liveness property such that the safety property is expressed in $\text{PCTL}_{\mathit{flat}}\cap\text{PCTL}_{\mathit{safe}}$. \section{Liveness PCTL}\label{sec:liveness pctl} In this section we investigate expressing liveness properties in PCTL.
We start by providing a sound characterisation of liveness properties, that is to say, a logical fragment containing only liveness properties. Subsequently, we show that a slight superset of this fragment yields a complete characterisation of liveness properties expressible in PCTL. We then discuss the reasons why, in contrast to safety properties, a sound and complete syntactic characterisation of PCTL-expressible liveness properties is difficult to achieve. Let us first define the logical fragment $\text{PCTL}_{\mathit{live}}^{<}$. \begin{definition}[Liveness PCTL]\label{def:liveness pctl} Let $\mathcal{F}=\text{PCTL}_{\mathit{live}}^{<}$ denote the \emph{live fragment} of \emph{PCTL}, defined as the smallest set satisfying: \begin{enumerate} \item $\top\in\mathcal{F}$ and $\bot\not\in\mathcal{F}$; \item $\P{\ge q}{\Diamond\Phi^a}\in\mathcal{F}$; \item If $\Phi_1,\Phi_2\in\mathcal{F}$, then $\Phi_1\land\Phi_2\in\mathcal{F}$; \item If $\Phi_1\in\mathcal{F}$ or $\Phi_2\in\mathcal{F}$, then $\Phi_1\lor\Phi_2,\P{\ge q}{\Phi_1\text{\sf W}\Phi_2}\in\mathcal{F}$; \item If $\Phi\in\mathcal{F}$, then $\P{\ge q}{\text{\sf X}\Phi}\in\mathcal{F}$; \item If $\Phi_2\in\mathcal{F}$, then $\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}\in\mathcal{F}$ for any $\Phi_1$.\label{item:liveness U} \end{enumerate} \end{definition} It follows that $\text{PCTL}_{\mathit{live}}^{<}$-formulas are liveness properties. \begin{theorem}\label{thm:liveness pctl sound} Every $\text{PCTL}_{\mathit{live}}^{<}$-formula is a liveness property. \end{theorem} However, the converse direction does not hold, i.e., it is not the case that every liveness property expressible in PCTL can be expressed in $\text{PCTL}_{\mathit{live}}^{<}$. This is exemplified below. \begin{example}[A liveness property not in $\text{PCTL}_{\mathit{live}}^{<}$]\label{ex:liveness not complete} Let $\Phi=\P{\ge 1}{\P{\ge 1}{\Diamond a}\text{\sf U} b}$.
First, observe that $\Phi\not\in\text{PCTL}_{\mathit{live}}^{<}$, since $b\not\in\text{PCTL}_{\mathit{live}}^{<}$ according to Def.~\ref{def:liveness pctl}. On the other hand, it follows that $\Phi$ is a liveness property. This can be seen as follows. Let $T_1\in\mathbb{T}^*$ be an arbitrary finite-depth PT. By Def.~\ref{def:safety}, it suffices to show that $T_1\prefixTree T_2$ for some $T_2\in\Phi$. Such $T_2$ can be constructed by extending all leaves in $T_1$ with a transition to $(a\land b)$-states with probability 1. This yields $T_2\in\Phi$. Therefore such $T_2\in\Phi$ with $T_1\prefixTree T_2$ always exists, and $\Phi$ is a liveness property. \end{example} Example~\ref{ex:liveness not complete} shows that $\text{PCTL}_{\mathit{live}}^{<}$ is not complete, i.e., it does not contain all liveness properties expressible in PCTL. The problem is caused by clause~\ref{item:liveness U}) in Def.~\ref{def:liveness pctl}, where we require that $\Phi_2 \in \text{PCTL}_{\mathit{live}}^{<}$, in order for $\P{\ge q}{\Phi_1\text{\sf U}\Phi_2} \in \text{PCTL}_{\mathit{live}}^{<}$. As shown in Example~\ref{ex:liveness not complete}, this requirement is too strict, since it excludes liveness properties like $\P{\ge 1}{\P{\ge 1}{\Diamond a}\text{\sf U} b}$. Let us now slightly relax the definition of $\text{PCTL}_{\mathit{live}}^{<}$ by replacing clause~\ref{item:liveness U}) in Def.~\ref{def:liveness pctl} by: \begin{equation}\label{eq:liveness U} \begin{aligned} &\text{If }\Phi_1\in\mathcal{F}\text{ or }\Phi_2\in\mathcal{F}\text{, then }\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}\in\mathcal{F}. \end{aligned} \end{equation} The resulting logical fragment is referred to as $\text{PCTL}_{\mathit{live}}^{>}$. This fragment contains all liveness properties expressible in PCTL. \begin{theorem}\label{thm:liveness pctl complete} For any liveness property $\Phi$ expressible in \emph{PCTL}, there exists $\Phi'\in\text{PCTL}_{\mathit{live}}^{>}$ with $\Phi\equiv\Phi'$. 
\end{theorem} $\text{PCTL}_{\mathit{live}}^{>}$ is a superset of $\text{PCTL}_{\mathit{live}}^{<}$ and contains all liveness PCTL properties. Unfortunately, it also contains some properties which are not live, i.e., it is not sound. \iffalse \begin{example}[A sample non-live property in $\text{PCTL}_{\mathit{live}}^{>}$]\label{ex:liveness not sound} According to the definition of $\text{PCTL}_{\mathit{live}}^{>}$, $\Phi=\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}$ is in $\text{PCTL}_{\mathit{live}}^{>}$ if $\Phi_1\in\text{PCTL}_{\mathit{live}}^{>}$, regardless of whether $\Phi_2\in\text{PCTL}_{\mathit{live}}^{>}$ or not. Let $\Phi_2\equiv\bot$, then $\Phi\equiv\bot$ for any $\Phi_1$, provided $q>0$, thus $\Phi$ is not a liveness property. \end{example} \fi In the example below we show that formulas like $\Phi=\P{\ge 0.5}{\Phi_1\text{\sf U}\Phi_2}$ cannot be classified easily when $\Phi_1$ is a liveness property while $\Phi_2$ is not (a live formula with a similar schema is given in Example~\ref{ex:liveness not complete}). \begin{example}[Liveness is hard to capture syntactically]\label{ex:liveness not sound involved} Let $\Phi=\P{\ge 0.5}{\Phi_1\text{\sf U}\Phi_2}$ with $\Phi_1=\P{\ge 1}{\Diamond a}\land\P{\ge 1}{\Diamond(\neg a\land\neg b)}$ and $\Phi_2=\P{\ge 1}{\Box(\neg a\land b)}$. Intuitively, $\Phi_1$ requires that $a$-states and $(\neg a\land \neg b)$-states are each eventually reached almost surely, while $\Phi_2$ requires almost surely staying in $(\neg a\land b)$-states. By Def.~\ref{def:liveness pctl}, $\Phi_1\in\text{PCTL}_{\mathit{live}}^{<}$, which implies $\Phi_1\in\text{PCTL}_{\mathit{live}}^{>}$ and $\Phi\in\text{PCTL}_{\mathit{live}}^{>}$. $\Phi$ is however not a liveness property. We show this by arguing that $T_1=\{(1,a)\}$ is not a prefix of any PT in $\Phi$. Let $T_1\prefixTree T_2$. As $T_2\not\in\Phi_2$, $T_1$ needs to be extended so as to yield a PT in $\Phi_1$, in order to fulfil $\Phi$. 
Since $\Phi_1\land\Phi_2\equiv\bot$ and $a\land(\neg a\land\neg b)\equiv\bot$, for any $T\in\Phi_1$, it follows that $T\not\in\Phi_2$ and $T\not\in\P{>0}{\text{\sf X}\Phi_2}$. Hence $\Phi_1$ implies $\neg\Phi$, and $\Phi$ is not live. Actually, $\Phi\equiv\Phi_2$, since it is not possible to reach $\Phi_2$-states via only $\Phi_1$-states. In order for a PT to satisfy $\Phi$, it must satisfy $\Phi_2$ initially. Such a $\Phi$ can thus be simplified to an equivalent property not in $\text{PCTL}_{\mathit{live}}^{>}$. \end{example} In conclusion, formulas like $\Phi=\P{\ge 0.5}{\Phi_1\text{\sf U}\Phi_2}$ are live, provided $\Phi_2$ is live too. The difficulty arises when $\Phi_2$ is not live but $\Phi_1$ is, since Examples~\ref{ex:liveness not complete} and \ref{ex:liveness not sound involved} indicate that the liveness of $\Phi_1$ does not necessarily imply the liveness of $\Phi$. Whereas safe PCTL formulas can be defined inductively over the structure of the formula, this is not possible for live PCTL. For instance, formulas like $\P{\ge 0.5}{\Phi_1\text{\sf U}\Phi_2}$ cannot be categorised as being live (or not) based on the sub-formulas. It is worth mentioning that membership in $\text{PCTL}_{\mathit{safe}}$ can be determined syntactically, while this holds neither for $\text{PCTL}_{\mathit{live}}^{<}$ nor for $\text{PCTL}_{\mathit{live}}^{>}$. First of all, we require that $\Phi\not\equiv\bot$ for each $\Phi\in\text{PCTL}_{\mathit{live}}^{<}$ and $\Phi\in\text{PCTL}_{\mathit{live}}^{>}$. The checking of $\Phi\not\equiv\bot$ relies on PCTL satisfiability checking, i.e., $\Phi\not\equiv\bot$ if and only if there exists $T\in\mathbb{T}^\omega$ such that $T\in\Phi$ ($\Phi$ is satisfiable). PCTL satisfiability has received scant attention, and only partial solutions are known: \cite{Brazdil2008SPP} considers satisfiability checking for qualitative PCTL, while~\cite{BertrandFS12} presents an algorithm for bounded satisfiability checking of bounded PCTL. 
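For comparison, the syntactic membership test for $\text{PCTL}_{\mathit{safe}}$ mentioned above amounts to a simple recursion over the formula structure. The following is a minimal sketch (our own encoding, not tied to any implementation: formulas are nested tuples, and the clause for $\P{\le q}{\Phi_1\text{\sf U}\Phi_2}$ is handled only for propositional operands, where negation is unproblematic; for general operands one would first push the negation inward):

```python
def is_propositional(phi):
    """Phi^a: formulas built from atoms and Boolean connectives only."""
    op = phi[0]
    if op == 'atom':
        return True
    if op == 'not':
        return is_propositional(phi[1])
    if op in ('and', 'or'):
        return is_propositional(phi[1]) and is_propositional(phi[2])
    return False

def is_safe(phi):
    """Syntactic membership test for the safe fragment of PCTL."""
    if is_propositional(phi):                    # clause 1
        return True
    op = phi[0]
    if op in ('and', 'or'):                      # clause 3 (Boolean part)
        return is_safe(phi[1]) and is_safe(phi[2])
    if op == 'P>=':
        _, _q, body = phi
        if body[0] == 'X':                       # clause 2
            return is_safe(body[1])
        if body[0] == 'W':                       # clause 3 (weak until)
            return is_safe(body[1]) and is_safe(body[2])
    if op == 'P<=':
        _, _q, body = phi
        if body[0] == 'U':                       # clause 4 (negated operands)
            return is_safe(('not', body[1])) and is_safe(('not', body[2]))
    return False
```

For instance, $\P{\le 0.1}{a\,\text{\sf U}\,b}$ is accepted via the last clause, whereas $\P{\ge 0.5}{a\,\text{\sf U}\,b}$ is rejected, as expected.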
To the best of our knowledge, no algorithm for full PCTL satisfiability checking exists. Secondly, as indicated in Example~\ref{ex:liveness not sound involved}, formulas of the form $\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}$ cannot be easily classified syntactically. In order for $\text{PCTL}_{\mathit{live}}^{>}$ to solely contain liveness properties, the condition in Eq.~(\ref{eq:liveness U}) should be changed to: $\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}\in\mathcal{F}$ iff \begin{enumerate} \item either $\Phi_2\in\mathcal{F}$, \item or $\Phi_1\in\mathcal{F}$ and $\Phi_1\land\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}\not\equiv\bot$. \end{enumerate} The first clause subsumes $\text{PCTL}_{\mathit{live}}^{<}$, while the second clause requires that, in case only $\Phi_1$ is in $\text{PCTL}_{\mathit{live}}^{>}$, $\Phi_1\land\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}$ must be satisfiable, namely, it must be possible to extend a PT satisfying $\Phi_1$ such that it satisfies $\P{\ge q}{\Phi_1\text{\sf U}\Phi_2}$. It is not surprising to encounter such difficulties when characterising PCTL liveness. Even in the non-probabilistic setting, the characterisation of liveness LTL relies on LTL satisfiability checking, and it is (to our knowledge) still an open problem to provide a sound and complete characterisation for liveness in LTL~\cite{Sistla1994SLF} and CTL. \begin{remark} In contrast to Section~\ref{sec:embedded PCTL}, where safety properties are restricted to non-strict bounds, both $\text{PCTL}_{\mathit{live}}^{<}$ and $\text{PCTL}_{\mathit{live}}^{>}$ can be extended to strict bounds while preserving all theorems of this section. \end{remark} \iffalse \begin{example}\label{ex:liveness complete} In conclusion, in case $\Phi_2$ is not a liveness property, properties like $\P_{\unrhd q}(\Phi_1\text{\sf U}\Phi_2)$ cannot be classified syntactically, as the classification depends on the semantics. 
\end{example} \fi \section{Characterisation of Simulation Pre-order}\label{sec:simulation} Simulation is an important pre-order relation for comparing the behaviour of MCs~\cite{JonssonL91}. Roughly speaking, an MC $\text{\sf D}$ simulates $\text{\sf D}'$ whenever it can mimic all transitions of $\text{\sf D}'$ with at least the same probability. A logical characterisation of (weak and strong) simulation pre-order relations on MCs has been given in~\cite{Baier2005CBS}. Baier \emph{et al.}~\cite{Baier2005CBS} use the following safety and liveness fragments of PCTL. The safety fragment is given by: \begin{equation}\label{eq:safety} \Phi ::= a \mid \neg a \mid \Phi_1\land\Phi_2 \mid \Phi_1\lor\Phi_2 \mid \P{\ge p}{\text{\sf X}\Phi} \mid \P{\ge q}{\Phi_1\text{\sf W}\Phi_2}, \end{equation} while the liveness fragment is defined by: \begin{equation}\label{eq:liveness} \Phi ::= a \mid \neg a \mid \Phi_1\land\Phi_2 \mid \Phi_1\lor\Phi_2 \mid \P{\ge p}{\text{\sf X}\Phi} \mid \P{\ge q}{\Phi_1\text{\sf U}\Phi_2}. \end{equation} Observe that $\text{PCTL}_{\mathit{safe}}$ subsumes the safety PCTL defined in Eq.~(\ref{eq:safety}). In addition, formulas of the form $\P{\le q}{\Phi_1\text{\sf U}\Phi_2}$ belong to $\text{PCTL}_{\mathit{safe}}$, provided $\neg\Phi_1$ and $\neg\Phi_2$ are safety properties. The main difference between \cite{Baier2005CBS} and our characterisation concerns liveness properties. The liveness fragment in Eq.~(\ref{eq:liveness}) is incomparable with both $\text{PCTL}_{\mathit{live}}^{<}$ and $\text{PCTL}_{\mathit{live}}^{>}$. For instance, formulas like $\P{\ge q}{a\text{\sf U} b}$ are live according to Eq.~(\ref{eq:liveness}), but are neither safe nor live according to our characterisation. Now we investigate whether the logical fragment $\text{PCTL}_{\mathit{safe}}$ characterises strong simulation, and similarly for the two liveness fragments defined before. 
The notion of strong simulation between probabilistic models relies on the concept of \emph{weight function}~\cite{Jones1989PPE,JonssonL91}: \begin{definition}[Weight function]\label{def:weight function} Let $\mathcal{S}$ be a set and $R\subseteq \mathcal{S}\times \mathcal{S}$. A \emph{weight function} for distributions $\mu_1$ and $\mu_2$ with respect to $R$ is a function $\Delta:\mathcal{S}\times \mathcal{S}\rightarrow[0,1]$ satisfying: \begin{itemize} \item $\Delta(s_1,s_2)>0$ implies $s_1~R~s_2$, \item $\mu_1(s_1)=\sum_{s_2\in\mathcal{S}}\Delta(s_1,s_2)$ for any $s_1\in \mathcal{S}$, \item $\mu_2(s_2)=\sum_{s_1\in\mathcal{S}}\Delta(s_1,s_2)$ for any $s_2\in \mathcal{S}$. \end{itemize} We write $\mu_1~\weightFunc~\mu_2$ if there exists a weight function $\Delta$ for $\mu_1$ and $\mu_2$ with respect to $R$. \end{definition} Strong simulation for MCs is now defined as follows. \begin{definition}[Strong simulation]\label{def:simulation} Let $\text{\sf D}=(\mathcal{S}, \mathit{AP}, \rightarrow, L, s_0)$ be an MC. $R \subseteq \mathcal{S} \times \mathcal{S}$ is a \emph{strong simulation} iff $s_1~R~s_2$ implies $L(s_1)=L(s_2)$ and $\mu_1~\weightFunc~\mu_2$, where $s_i\rightarrow\mu_i$ with $i\in\{1,2\}$. We write $s_1~\precsim~s_2$ iff there exists a strong simulation $R$ such that $s_1~R~s_2$. \end{definition} In order to give a logical characterisation of $\precsim$ using $\text{PCTL}_{\mathit{safe}}$, we define a pre-order relation on states induced by $\text{PCTL}_{\mathit{safe}}$. Let $s_1~\precsim_{\mathit{safe}}~s_2$ iff $s_2\models\Phi$ implies $s_1\models\Phi$ for every $\Phi\in\text{PCTL}_{\mathit{safe}}$. Similarly, $s_1~\precsim_{\mathit{live}}^i~s_2$ iff $s_1\models\Phi$ implies $s_2\models\Phi$ for any $\Phi\in\text{PCTL}_{\mathit{live}}^i$ with $i\in\{1,2\}$, where $\text{PCTL}_{\mathit{live}}^1$ and $\text{PCTL}_{\mathit{live}}^2$ stand for $\text{PCTL}_{\mathit{live}}^{<}$ and $\text{PCTL}_{\mathit{live}}^{>}$, respectively. 
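Deciding whether $\mu_1~\weightFunc~\mu_2$ holds for finite-support distributions reduces to a maximum-flow problem: a weight function exists iff the full probability mass of $\mu_1$ can be routed to $\mu_2$ through the pairs in $R$. The sketch below illustrates this well-known reduction (our own code, plain Ford-Fulkerson with exact rational arithmetic; it assumes both distributions carry the same total mass):

```python
from fractions import Fraction
from collections import deque

def weight_function_exists(mu1, mu2, R):
    """Decide whether a weight function for mu1, mu2 w.r.t. R exists,
    via a max-flow reduction: source -> support(mu1) -> support(mu2)
    -> sink, with middle edges only for pairs related by R."""
    src, snk = ('src',), ('snk',)
    cap = {}

    def add_edge(u, v, c):
        cap[(u, v)] = cap.get((u, v), Fraction(0)) + c
        cap.setdefault((v, u), Fraction(0))      # residual (reverse) edge

    for s, p in mu1.items():
        add_edge(src, ('l', s), Fraction(p))
    for t, p in mu2.items():
        add_edge(('r', t), snk, Fraction(p))
    for s, t in R:
        if s in mu1 and t in mu2:
            add_edge(('l', s), ('r', t), Fraction(1))   # 1 >= any mass

    total = sum(Fraction(p) for p in mu1.values())
    flow = Fraction(0)
    while True:
        parent = {src: None}
        queue = deque([src])
        while queue and snk not in parent:       # BFS for an augmenting path
            u = queue.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if snk not in parent:
            break
        path, v = [], snk
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] for e in path)
        for a, b in path:                        # augment along the path
            cap[(a, b)] -= bottleneck
            cap[(b, a)] += bottleneck
        flow += bottleneck
    return flow == total
```

The edge capacities encode the three conditions of Def.~\ref{def:weight function}: mass can only flow along $R$-related pairs, and a saturating flow matches both marginals exactly.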
The following theorem shows that both $\precsim_{\mathit{safe}}$ and $\precsim^2_{\mathit{live}}$ can be used to characterise strong simulation as in~\cite{Baier2005CBS}, while $\precsim^1_{\mathit{live}}$ is strictly coarser than $\precsim$. \begin{theorem}\label{thm:simulation logical} $\precsim~=~\precsim_{\mathit{safe}}~=~\precsim^2_{\mathit{live}}~\subsetneq~\precsim^1_{\mathit{live}}$. \end{theorem} The proof of $\precsim^2_{\mathit{live}}~\subseteq~\precsim$ relies on liveness properties expressible in PCTL. Consequently, $\precsim~=~\precsim_{\mathit{live}}$, where $\precsim_{\mathit{live}}$ is the pre-order induced by $\text{PCTL}_{\mathit{live}}$, i.e., the set of all liveness properties expressible in PCTL. \section{Strong Safety and Absolute Liveness}\label{sec:strong and absolute} In this section, we characterise strong safety and absolute liveness properties, as originally introduced for LTL in~\cite{Sistla1985CSL}. In the original setting, a strong safety property $P$ is a safety property that is closed under stuttering and is insensitive to the deletion of states, i.e., deleting an arbitrary number of states from a sequence in $P$ yields a sequence in $P$. (A similar notion also appeared in~\cite{DBLP:journals/ipl/AlpernDS86}.) We lift this notion to probabilistic trees and provide a sound and complete characterisation of strong safety (expressible in PCTL). In contrast, an absolute liveness property is a liveness property that is insensitive to adding prefixes. We provide a sound and complete characterisation of absolute liveness properties, and show that each such property is in fact an almost sure reachability formula. 
\subsection{Strong Safety Properties} \begin{definition}[Stuttering]\label{def:stuttering} PT $T_1=\atree[1]$ is a \emph{stuttering} of PT $T_2=\atree[2]$ iff for some $\pi_1$ with $\lastState{\pi_1} \, =n$: $$ W_1\setminus W_2=\{\pi_1{\cdot}n{\cdot}\pi_2\mid\pi_1{\cdot}\pi_2\in W_2\}\text{, and} $$ \begin{itemize} \item for any $\pi\in W_1$, $$ L_1(\pi) = \left\{ \begin{array}{ll} L_2(\pi) & \mbox{if } \pi \in W_2 \\ L_2(\pi_1) & \mbox{if } \pi=\pi_1{\cdot}n \\ L_2(\pi_1{\cdot}\pi_2) & \mbox{if } \pi=\pi_1{\cdot}n{\cdot}\pi_2 \\ \end{array} \right. $$ \item for any $\pi,\pi'\in W_1$, $\mathit{P}_1(\pi)(\pi')$ equals $$ \left\{ \begin{array}{ll} \mathit{P}_2(\pi)(\pi') & \mbox{if } \pi,\pi' \in W_2 \\ 1 & \mbox{if } \pi=\pi_1, \pi'=\pi_1{\cdot}n \\ \mathit{P}_2(\pi_1{\cdot}\pi_2)(\pi_1{\cdot}\pi'_2) & \mbox{if } \pi=\pi_1{\cdot}n{\cdot}\pi_2, \pi'=\pi_1{\cdot}n{\cdot}\pi'_2. \end{array} \right. $$ \end{itemize} \end{definition} Phrased in words, $T_1$ is the same as $T_2$ except that the last node $n$ of $\pi_1$ is repeated (stuttered) with probability one on all paths in $W_1$ with prefix $\pi_1$. Conversely, we can also delete nodes from a PT: \begin{definition}[Shrinking]\label{def:shrinking} Let $T_1,T_2\in\mathbb{T}^\omega$. PT $T_1=\atree[1]$ is a \emph{shrinking} of $T_2=\atree[2]$ iff there exists $\pi_1{\cdot} n\in W_2$ with $\pi_1\neq\epsilon$ such that $$ W_1\setminus W_2=\{\pi_1{\cdot}\pi_2\mid\pi_1{\cdot}n {\cdot}\pi_2\in W_2\}\text{, and} $$ \begin{itemize} \item for any $\pi\in W_1$, $$ L_1(\pi) = \left\{ \begin{array}{ll} L_2(\pi) & \mbox{if } \pi\in W_2 \\ L_2(\pi_1{\cdot}n{\cdot}\pi_2) & \mbox{if } \pi=\pi_1{\cdot}\pi_2. \end{array} \right. $$ \item for any $\pi,\pi'\in W_1$, $\mathit{P}_1(\pi)(\pi')$ equals $$ \!\!\!\!\!\!\!\!\!\!\!\! 
\left\{ \begin{array}{ll} \mathit{P}_2(\pi)(\pi') & \mbox{if } \pi,\pi' \in W_2 \\ \mathit{P}_2(\pi)(\pi_1{\cdot}n){\times}\mathit{P}_2(\pi_1{\cdot}n)(\pi_1{\cdot}n{\cdot}\pi'_2) & \mbox{if } \pi=\pi_1, \pi'=\pi_1{\cdot}\pi'_2 \\ \mathit{P}_2(\pi_1{\cdot}n{\cdot}\pi_2)(\pi_1{\cdot}n{\cdot}\pi'_2) & \mbox{if } \pi=\pi_1{\cdot}\pi_2 \mbox{ and} \\ & \phantom{\mbox{ifif}} \pi'=\pi_1{\cdot}\pi'_2. \end{array} \right. $$ \end{itemize} \end{definition} Note that deletion of the initial node is prohibited, as $\pi_1\neq\epsilon$. \begin{example}[Shrinking and stuttering]\label{ex:stuttering and shrinking} Let $T_1$, $T_2$, and $T_3$ be the PTs depicted in Fig.~\ref{fig:stuttering and shrinking}, where symbols inside circles denote node labels. $T_2$ is a stuttering PT of $T_1$, as in $T_2$ the $c$-node is stuttered with probability one. On the other hand, $T_3$ is obtained by deleting the $b$-state from $T_1$, such that the probabilities from the $a$-state to the $d$-state and the $e$-state equal $0.5{\times}0.4 = 0.2$ and $0.5{\times}0.6=0.3$, respectively. Thus, $T_3$ is a shrinking PT of $T_1$. 
\end{example} \begin{figure}[t] \centering \scalebox{1}{ \begin{tikzpicture}[->,>=stealth,auto,node distance=2cm,semithick,scale=0.8,every node/.style={scale=0.8}] \tikzstyle{state}=[minimum size=20pt,circle,draw] \tikzstyle{stateNframe}=[] every label/.style=draw \node[stateNframe](d11) {$\vdots$}; \node[state](d1)[above of=d11] {$d$}; \node[state](e1)[right of=d1] {$e$}; \node[stateNframe](d12)[below of=e1] {$\vdots$}; \node[state](b1)[above of=d1,xshift=1cm] {$b$}; \node[state](a1)[above of=b1,xshift=1cm] {$a$}; \node[state](c1)[right of=b1] {$c$}; \node[stateNframe](d13)[below of=c1] {$\vdots$}; \node[stateNframe](d21)[right of=d13,yshift=-2cm,xshift=-1cm] {$\vdots$}; \node[state](d2)[above of=d21] {$d$}; \node[state](e2)[right of=d2] {$e$}; \node[stateNframe](d22)[below of=e2] {$\vdots$}; \node[state](b2)[above of=d2,xshift=1cm] {$b$}; \node[state](a2)[above of=b2,xshift=1cm] {$a$}; \node[state](c2)[right of=b2] {$c$}; \node[state](c21)[below of=c2] {$c$}; \node[stateNframe](d23)[below of=c21] {$\vdots$}; \node[stateNframe](d31)[right of=c21,xshift=-1cm] {$\vdots$}; \node[stateNframe](d32)[right of=d31,xshift=-1cm] {$\vdots$}; \node[stateNframe](d33)[right of=d32,xshift=-1cm] {$\vdots$}; \node[state](d3)[above of=d31] {$d$}; \node[state](e3)[above of=d32] {$e$}; \node[state](c3)[above of=d33] {$c$}; \node[state](a3)[above of=e3] {$a$}; \node[stateNframe](n1)[below of=d12,yshift=1cm] {PT $T_1$}; \node[stateNframe](n2)[below of=d22,yshift=1cm] {PT $T_2$}; \node[stateNframe](n3)[below of=d32,yshift=-1cm] {PT $T_3$}; \path (a1) edge node [left] {0.5} (b1) edge node [right]{0.5} (c1) (b1) edge node [left] {0.4} (d1) edge node [right]{0.6} (e1) (c1) edge node {1} (d13) (d1) edge node [left] {1} (d11) (e1) edge node [right]{1} (d12) (a2) edge node [left] {0.5} (b2) edge node [right]{0.5} (c2) (b2) edge node [left] {0.4} (d2) edge node [right]{0.6} (e2) (c2) edge node {1} (c21) (c21) edge node {1} (d23) (d2) edge node [left] {1} (d21) (e2) edge node [right]{1} (d22) 
(a3) edge node [left] {0.2} (d3) edge node [xshift=-0.1cm,yshift=-0.2cm] {0.3} (e3) edge node [right]{0.5} (c3) (d3) edge node {1} (d31) (e3) edge node {1} (d32) (c3) edge node {1} (d33); \end{tikzpicture} } \caption{Illustrating stuttering and shrinking of PTs}\label{fig:stuttering and shrinking} \end{figure} Now we are ready to define \emph{strong safety} properties in the probabilistic setting: \begin{definition}[Strong safety]\label{def:strong safety} A safety property $P$ is a \emph{strong safety} property whenever \begin{enumerate} \item $P$ is closed under stuttering, i.e., $T\in P$ implies $T'\in P$, for every stuttering PT $T'$ of $T$, and \item $P$ is closed under shrinking, i.e., $T\in P$ implies $T'\in P$, for every shrinking PT $T'$ of $T$. \end{enumerate} \end{definition} Observe that there exist non-safety properties that are closed under stuttering and shrinking. For instance, $\P{\ge 0.5}{\top\text{\sf U}\P{\ge 1}{\Box a}}$ is not a safety property, but is closed under stuttering and shrinking. In~\cite{Sistla1994SLF}, it was shown that an LTL formula is a strong safety property iff it can be represented by an LTL formula in positive normal form using only $\Box$ operators. We extend this result to the probabilistic setting: strong safety properties syntactically cover more PCTL-formulas than those only containing $\Box$ operators. 
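The probability bookkeeping of shrinking — in Example~\ref{ex:stuttering and shrinking}, the spliced edges of $T_3$ carry $0.5{\times}0.4=0.2$ and $0.5{\times}0.6=0.3$ — can be reproduced by a small sketch on finite trees. The encoding is our own: paths are tuples of child indices, and spliced children receive fresh sibling indices, a bookkeeping detail that Def.~\ref{def:shrinking} handles by renaming paths directly.

```python
def shrink(tree, victim):
    """Delete the non-root node `victim` from a finite probability tree
    and reconnect its children to victim's parent, multiplying the
    incoming and outgoing edge probabilities (cf. the shrinking
    definition). `tree` maps a path (tuple of child indices) to a pair
    (label, prob), where prob is the probability from the parent."""
    parent = victim[:-1]
    p_in = tree[victim][1]                   # probability parent -> victim
    # fresh child indices under `parent`, so spliced nodes cannot clash
    # with existing siblings of `victim`
    used = [p[-1] for p in tree if p and p[:-1] == parent and p != victim]
    fresh = max(used, default=-1) + 1
    # keep every node outside the deleted subtree unchanged
    out = {p: v for p, v in tree.items() if p[:len(victim)] != victim}
    for path, (label, prob) in sorted(tree.items()):
        if len(path) == len(victim) + 1 and path[:len(victim)] == victim:
            head = parent + (fresh,)         # new position of this child
            fresh += 1
            out[head] = (label, p_in * prob)  # combined edge probability
            for sub, v in tree.items():       # carry its subtree unchanged
                if len(sub) > len(path) and sub[:len(path)] == path:
                    out[head + sub[len(path):]] = v
    return out
```

Applying `shrink` to an encoding of $T_1$ with the $b$-node as victim yields a tree with the combined probabilities $0.2$ and $0.3$ of $T_3$.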
\begin{definition}[Strong safety PCTL]\label{def:strong safety pctl} Let $\mathcal{F}=\text{PCTL}_{\mathit{ssafe}}$ denote the \emph{strong safety fragment} of $\text{PCTL}_{\mathit{safe}}$ such that: \begin{enumerate} \item $\Phi^a\in\mathcal{F}$; \item If $\Phi_1,\Phi_2\in\mathcal{F}$, then $\Phi_1\land\Phi_2$ and $\Phi_1\lor\Phi_2$ are in $\mathcal{F}$; \item\label{item:ss U} If $\Phi_1\in\mathcal{F}$ and $\Phi_2\in\mathcal{F}^\Box$, then $\P{\ge q}{\Phi_1\text{\sf W}\Phi_2}\in\mathcal{F}$; \end{enumerate} where $\mathcal{F}^\Box$ is defined as follows: \begin{enumerate} \item If $\Phi_1,\Phi_2\in\mathcal{F}^\Box$, then $\Phi_1\land\Phi_2$ and $\Phi_1\lor\Phi_2$ are in $\mathcal{F}^\Box$; \item If $\Phi\in\mathcal{F}$, then $\P{\ge 1}{\Box\Phi}\in\mathcal{F}^\Box$. \end{enumerate} \end{definition} Note that by clause~\ref{item:ss U}), $\P{\ge q}{\Box\Phi}$ is a formula in $\text{PCTL}_{\mathit{ssafe}}$, provided $\Phi\in\text{PCTL}_{\mathit{ssafe}}$. This follows from the fact that $\P{\ge q}{\Box\Phi}\equiv\P{\ge q}{\Phi\text{\sf W}\bot}\equiv\P{\ge q}{\Phi\text{\sf W}\P{\ge 1}{\Box\bot}}$, and $\P{\ge 1}{\Box\bot}\in\mathcal{F}^\Box$. The following result shows that $\text{PCTL}_{\mathit{ssafe}}$ is sound and complete, i.e., all formulas in $\text{PCTL}_{\mathit{ssafe}}$ are strong safety properties and every strong safety property expressible in PCTL is expressible in $\text{PCTL}_{\mathit{ssafe}}$. \begin{theorem}\label{thm:strong sound and complete} Every $\text{PCTL}_{\mathit{ssafe}}$-formula is a strong safety property and for any strong safety property $\Phi$ expressible in \emph{PCTL}, there exists $\Phi'\in\text{PCTL}_{\mathit{ssafe}}$ with $\Phi\equiv\Phi'$. \end{theorem} The question whether all formulas in $\text{PCTL}_{\mathit{ssafe}}$ can be represented by an equivalent formula in positive normal form using only $\Box$-modalities is left for future work. 
\subsection{Absolute Liveness Properties} Now we introduce the concepts of \emph{stable} properties and \emph{absolute liveness} properties. Intuitively, a property $P$ is stable if for any $T\in P$, all suffixes of $T$ are also in $P$. This corresponds to the intuition that once $P$ is satisfied, it will never be violated in the future. \begin{definition}[Stable property]\label{def:stable} $P$ is a \emph{stable property} iff $T\in P$ implies $T'\in P$, for every suffix $T'$ of $T$. \end{definition} A property $P$ is an absolute liveness property if for any $T\in P$, all PTs which have $T$ as a suffix are also in $P$. Colloquially stated, once $P$ is satisfied at some point, $P$ was satisfied throughout the entire past. \begin{definition}[Absolute liveness]\label{def:absolute liveness} $P$ is an \emph{absolute liveness} property iff $P\neq\emptyset$ and $T'\in P$ implies $T\in P$, for every suffix $T'$ of $T$. \end{definition} Rather than requiring every absolute liveness property to be a liveness property by definition, this follows implicitly: \begin{lemma}\label{lem:absolute is liveness} Every absolute liveness property is live. \end{lemma} For transition systems, there is a close relationship between stable and absolute liveness properties~\cite{Sistla1994SLF}. A similar result is obtained in the probabilistic setting: \begin{lemma}\label{lem:stable and absolute} For any $P\neq\mathbb{T}^\omega$, $P$ is a stable property iff $\comp{P}$ is an absolute liveness property. 
\end{lemma} \begin{definition}[Absolute liveness PCTL]\label{def:absolute liveness pctl} Let $\mathcal{F}=\text{PCTL}_{\mathit{alive}}$ denote the \emph{absolute liveness fragment} of \emph{PCTL} such that: \begin{enumerate} \item $\top\in\mathcal{F}$ and $\bot\not\in\mathcal{F}$; \item If $\Phi_1,\Phi_2\in\mathcal{F}$, then $\Phi_1\land\Phi_2$, $\Phi_1\lor\Phi_2$, $\P{>0}{\Phi_1\text{\sf W}\Phi_2}\in\mathcal{F}$; \item If $\Phi_2\in\mathcal{F}$, then $\P{> 0}{\text{\sf X}\Phi_2},\P{>0}{\Phi_1\text{\sf U}\Phi_2}\in\mathcal{F}$; \item\label{item:al} If $\Phi_1\in\mathcal{F}$ with $\neg\Phi_1\land\Phi_2\equiv\bot$, then $\P{> 0}{\Phi_1\text{\sf U}\Phi_2},\P{>0}{\Phi_1\text{\sf W}\Phi_2}\in\mathcal{F}$. \end{enumerate} \end{definition} By definition, $\text{PCTL}_{\mathit{alive}}$ only contains qualitative properties with bound $>0$. By clause~\ref{item:al}), $\P{>0}{\Diamond\Phi}$ is an absolute liveness formula for any $\Phi\not\equiv\bot$, while $\P{>0}{\Box\Phi}$ is an absolute liveness formula provided $\Phi$ is so too. Note that $\text{PCTL}_{\mathit{alive}}$ is a proper subset of $\text{PCTL}_{\mathit{live}}^{>}$ but not of $\text{PCTL}_{\mathit{live}}^{<}$; e.g., the formula $\Phi=\P{>0}{\Phi_1\text{\sf U}\Phi_2}$ with $\Phi_1=\P{>0}{\Diamond b}$ and $\Phi_2=\P{\ge 0.5}{a\text{\sf U} b}$ is in $\text{PCTL}_{\mathit{alive}}$ because $\Phi_1\in\text{PCTL}_{\mathit{alive}}$ and $\neg\Phi_1\land\Phi_2\equiv\bot$. However, $\Phi\not\in\text{PCTL}_{\mathit{live}}^{<}$, since $\Phi_2\not\in\text{PCTL}_{\mathit{live}}^{<}$. \begin{theorem}\label{thm:absolute sound and complete} Every formula in $\text{PCTL}_{\mathit{alive}}$ is an absolute liveness property, and for every absolute liveness property $\Phi$ expressible in \emph{PCTL}, there exists $\Phi'\in\text{PCTL}_{\mathit{alive}}$ with $\Phi\equiv\Phi'$. 
\end{theorem} Inspired by~\cite{Sistla1994SLF}, we provide an alternative characterisation of absolute liveness properties. \begin{theorem}\label{thm:absolute characterisation} A \emph{PCTL}-formula $\Phi$ is an absolute liveness property iff $\Phi\not\equiv\bot$ and $\Phi\equiv\P{>0}{\Diamond\Phi}$. \end{theorem} \section{Conclusions} \label{sec:conclusion} This paper presented a characterisation of safety and liveness properties for fully probabilistic systems. It was shown that most facts from the traditional linear-time~\cite{alpern1987recognizing} and branching-time~\cite{Manolios2003LCS} settings are preserved. In particular, every property is equivalent to the conjunction of a safety and a liveness property. Various sound PCTL-fragments have been identified for safety, absolute liveness, strong safety, and liveness properties. Except for liveness properties, these logical characterisations are all complete. Fig.~\ref{fig:summary} summarises the PCTL-fragments and their relationships, where $L_1\rightarrow L_2$ denotes that $L_2$ is a sub-logic of $L_1$.\footnote{ Here, it is assumed that $\text{PCTL}_{\mathit{live}}^{<}$ and $\text{PCTL}_{\mathit{live}}^{>}$ also support strict bounds. 
} \begin{figure}[h] \centering \scalebox{1}{ \begin{tikzpicture}[->,>=stealth,auto,node distance=1.8cm,semithick] \tikzstyle{state}=[] \tikzstyle{stateNframe}=[] every label/.style=draw \node[state](ss){$\text{PCTL}_{\mathit{ssafe}}$}; \node[state](safe)[above of=ss]{$\text{PCTL}_{\mathit{safe}}$}; \node[state](flat)[right of=safe]{$\text{PCTL}_{\mathit{flat}}$}; \node[state](live2)[right of=flat]{$\text{PCTL}_{\mathit{live}}^{>}$}; \node[state](pctl)[above of=flat]{PCTL}; \node[state](live1)[below left of=live2,yshift=-0.5cm]{$\text{PCTL}_{\mathit{live}}^{<}$}; \node[state](alive)[below right of=live2,yshift=-0.5cm]{$\text{PCTL}_{\mathit{alive}}$}; \path (pctl) edge node {} (safe) edge node {} (flat) edge node {} (live2) (safe) edge node {} (ss) (live2) edge node {} (live1) edge node {} (alive); \end{tikzpicture} } \caption{Overview of relationships between PCTL fragments}\label{fig:summary} \end{figure} There are several directions for future work such as extending the characterisation to Markov decision processes, considering fairness~\cite{Volzer2005DF}, finite executions~\cite{Maier04}, and more expressive logics such as the probabilistic $\mu$-calculus~\cite{DBLP:journals/corr/abs-1211-1511}.
\section{Introduction} The beam-plasma interaction is one of the most interesting paradigms of plasma physics \citep{ZCrmp}, not only for its direct implementation in laboratory physics, \emph{e.g., } for plasma accelerators \citep{Li14,ES96,KJ86}, but also because it is isomorphic to the bump-on-tail problem \citep{BS11,shalaby17,pommo17}. In fact, the relaxation of supra-thermal particle beams provides a paradigm for the quasi-linear theory of weak plasma turbulence \citep{Vedenov,Pines}, with applications ranging from fusion plasmas to astrophysics and cosmic geophysics. In particular, the non-linear interaction between resonant particles and electrostatic waves (and the corresponding hole and clump mechanism) has been invoked to analyze the generation of whistler-mode chorus and electromagnetic ion cyclotron emissions in space plasmas (see \emph{e.g., } \citet{tzc17} and \citet{tobita18} and references therein). Moreover, the phenomenology of particle trapping in the Langmuir potential well has been proposed as a reduced model for the behavior of fast ions interacting with Alfv\'en waves in fusion devices \citep{BB90a,BB90b,BB90c,BS11}. The main features of the beam-plasma system (BPS) are the dispersion relation (characterizing the linear phase of instability) and the non-linear evolution of the mode, \emph{e.g., } the saturation of fluctuating fields and particle trapping \citep{OM68,OWM71,MK78,TMM94,BB95a}. The BPS can be cast as an $N$-body scheme for both the thermal distribution of plasma electrons (assuming a neutralizing ion background) and the supra-thermal particles constituting the beam itself \citep{EEbook,ee18}. However, in \citet{OWM71}, the problem was successfully reduced assuming that the thermal plasma could be described as a linear dielectric medium, in which the beam electrons interact with a single Langmuir mode. 
This approach has been generalized to the case of a warm electron beam (in which the initial velocity dispersion is significant) in \citet{L72}, and the implications of this paradigm for the transport features (convection and diffusion) of beam electrons have been analyzed in \citet{BB95b,VK12,ncentropy}. Recent applications of this theoretical framework to interpret fast-ion transport in Tokamaks can be found in \citet{nceps16,nceps19}. The basic features of the interaction of a tenuous beam with a cold plasma have been extensively discussed. In this paper, we review and refine by means of numerical simulations some standard findings, adding original details regarding the phase-space trapping dynamics related to the non-linear spread of particles in overlapping and non-overlapping regimes. We start by discussing the relevant scalings with respect to the system drive, in the case of a single mode (see the pioneering works of \citet{ucla_fried,OWM71,L72,BB90a,wu94}). The analysis is developed for a realistic initial profile of the beam distribution function. We outline the quadratic behavior of the saturated mode amplitude with respect to the linear growth rate, and the linear scaling of both the trapping frequency and the estimate for the non-linear velocity spread of the beam particles. We also measure the behavior of the forming clump in the phase space (due to the saturated mode evolution), in order to characterize the resonance width. This analysis is based on the clustering of particle trajectories in the trapping region of the potential well. We show that the mean size of this region linearly scales with the drive of the system. We then address the behavior of a three-resonance scenario. Instead of discussing the onset of the overlap as a function of the drive (see \citet{L72,BB94a}), we analyze the features of the non-linear velocity spread as responsible for the mode overlap. 
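These scalings are consistent with the standard single-mode estimates, sketched here in our own notation (not a derivation from the simulations; $\alpha$ denotes an order-unity constant fixed empirically):

```latex
% Standard single-mode estimates (sketch): omega_b is the bounce
% (trapping) frequency, E the Langmuir field amplitude, k the
% wavenumber, gamma_L the linear growth rate, alpha an order-unity
% constant.
\begin{equation}
  \omega_b=\sqrt{\frac{ekE}{m}}\;,\qquad
  \omega_b\big|_{\mathrm{sat}}\simeq\alpha\,\gamma_L
  \;\Rightarrow\;
  E_{\mathrm{sat}}\propto\gamma_L^{2}\;,\qquad
  \Delta v_{\mathrm{NL}}\sim\frac{2\,\omega_b}{k}\propto\gamma_L\;.
\end{equation}
```

Here the first relation defines the bounce frequency of electrons trapped in the Langmuir potential well; the saturation rule $\omega_b\simeq\alpha\gamma_L$ then yields both the quadratic amplitude scaling and the linear scaling of the non-linear velocity spread.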
We thus fix the density parameter of the beam in order to obtain the minimal condition for the overlapping regime at saturation. We show that the measured clump size alone is not predictive in defining the overlap: an additional factor of about 1.3 is required. When the distribution function is considered, this redefined non-linear velocity spread corresponds well to the effective distortion of the profile, and it differs from the clump size, which seems to determine only the ``plateau'' region. This reflects the fact that particles not trapped by the wave (as discussed, for example, in \citet{EEbook}) are also relevant in the ``active'' overlap of different non-linear fluctuations, since the power transfer also involves those particles simultaneously feeling two (or multiple) electric fields. Through this mechanism, the resonant wave-particle power exchange can be enhanced. With our analysis, we characterize the non-linear size of the resonance in a fully self-consistent scheme, where the distribution function is dynamically coupled to the spectral evolution. Such self-consistency allows one to take into account the backreaction induced by the distribution function deformation on the fluctuation spectrum, which is a crucial point in identifying the real resonance region involved in the non-linear dynamics. In this respect, this analysis enters the longstanding debate about the transition to stochasticity of adjacent resonances associated with the well-known Chirikov overlap criterion \citep{Ch60,Ch79} and its subsequent refinements and extensions \citep{Gr68,JL72,ED81,BB95b,LL10}. Indeed, having a quantitative characterization of the degree of overlap in the presence of multiple resonances is crucial in determining different transport scenarios of the BPS (in particular, discriminating between pure diffusive and convective transport in the velocity space). This tool plays an important role when the BPS is implemented as a reduced scenario. 
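The overlap bookkeeping described above can be condensed into a Chirikov-style parameter. The helper below is our own illustrative sketch (the numbers in the comment are illustrative placeholders, not simulation results):

```python
def overlap_parameters(v_res, half_widths, c=1.0):
    """Chirikov-style overlap parameters for adjacent resonances.

    v_res: sorted phase velocities of the resonances;
    half_widths: non-linear half-widths (e.g. the measured clump size,
    optionally rescaled by a factor c, such as the ~1.3 discussed in
    the text). Adjacent resonances i and i+1 are predicted to overlap
    when s_i = c * (w_i + w_{i+1}) / (v_{i+1} - v_i) >= 1.
    """
    return [c * (half_widths[i] + half_widths[i + 1])
            / (v_res[i + 1] - v_res[i])
            for i in range(len(v_res) - 1)]

# Illustrative numbers: three resonances separated by 0.1 (in beam
# velocity units) with clump half-widths 0.04 fail the bare criterion
# (s = 0.8 < 1) but satisfy it with the rescaling factor c = 1.3.
```

This makes explicit why a bare clump-size criterion and a rescaled one can disagree on whether adjacent resonances overlap.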
A detailed description of the phase-space dynamics is also provided by means of the Lagrangian Coherent Structure (LCS) technique (see \citet{Haller_2015} for a complete review). This approach is particularly suited as a post-processing technique for describing particle motion in the phase space in the presence of a time-dependent scalar potential, and it has already been applied to the BPS in \citet{CFMZJPP}. The value of calculating LCSs in this work stems from the fact that they generalize dynamical structures observed in autonomous and periodic systems, \emph{e.g., } invariant manifolds, to temporally aperiodic flows. We will consider only hyperbolic LCSs, which organize the flow by attracting or repelling phase-space volume elements over a finite time span and, for the sake of simplicity, we will refer to these specific lines in the phase space as LCS. As already shown in \citet{CFMZJPP}, by means of this technique we can describe the shape of the clumps enclosing trapped particles and thus characterize resonance overlap. Consequently, the phase space at each instant can be partitioned into different subdomains with small or negligible exchange of particles between them. In the case of a single resonance, the LCS plots can be compared with the size of the region in the velocity space involved in the particle power exchange as discussed above. We finally discuss the relevant feature of the BPS by which overlapping resonances yield enhanced fluctuation levels at saturation, with respect to the isolated resonance case. This behavior is a consequence of an efficient transfer of phase-space energy to the Langmuir modes \citep{BB95a}. It takes place only in the presence of adjacent resonances that are weakly overlapped, and it is a relevant physical process also for fast-ion transport induced by Alfv\'en modes in Tokamaks (see, for instance, the recent analysis in \citet{spb16}).
With the help of a toy model, we also show that the mechanism responsible for this phenomenon relies on the enhancement of the available free energy in overlapping regimes, as outlined in \citet{BB95a}. The paper is structured as follows. In Sec.\ref{bps}, the adopted BPS equations are described and the basic simulation parameters are given. The particle trapping mechanism is discussed and the description of the non-linear velocity spread is introduced, also in terms of the clump size. Trapping frequency, mode saturation level and non-linear velocity spread are studied by means of numerical simulations as a function of the linear drive. In Sec.\ref{overlap}, the issue of resonance overlap is addressed, and the effective non-linear velocity spread is defined using a scale factor applied to the clump width. In Sec.\ref{ftle}, the LCS technique is applied to describe phase-space dynamics during the overlap. In Sec.\ref{modes}, the mode saturation level is analyzed. Concluding remarks are given in Sec.\ref{conclusion}. \section{Hamiltonian description of the beam-plasma interaction}\label{bps} The BPS describes the non-linear dynamics of a fast electron beam interacting with a background plasma in a 1-dimensional approximation. The dynamics is regulated by the Vlasov-Poisson equation, when collisions can be neglected (\emph{i.e., } the collision frequency is much smaller than the plasma frequency). Such a system describes the coupled dynamics of the electric field and the electron distribution function, and its analytical treatment is limited by the non-linear character of the resulting dynamics. For a description in terms of graph theory, see \citet{AK66}. However, when addressing the numerical study of the Vlasov-Poisson system, a significant simplification is offered by the Hamiltonian approach \citep{OWM71,EEbook}.
In this representation, the electron distribution function is reduced to a discrete set of beams (a discretization which is, at the numerical level, mandatory) and then the self-consistent evolution of Newton's second law for the electrons and of the Fourier-transformed Poisson equation for the electric field (generated by the charge displacement) is naturally addressed in terms of dimensionless universal quantities. From a numerical point of view, this approach is less demanding with respect to the direct integration of the Vlasov equation. As a result, a large number of resonances can be easily described, up to dealing with the so-called quasi-linear limit \citep{BB95b,VK12}. The conceptual framework that justifies the implementation of a Hamiltonian approach is the notion of ``quasi-stationary states'' \citep{BB95a,EEbook,CFGGMP14}, \emph{i.e., } transient states of the discrete system, in which the dynamics spends a time proportional to the number of considered particles (in practice, charged macroparticles). For a diverging number of particles, \emph{i.e., } in the continuum limit, such states approach the equilibrium configuration of the Vlasov-Poisson system. In this work, following \citet{OM68} and \citet{OWM71} (see also \citet{CFMZJPP,ncentropy} for other reference details), the plasma is addressed as a cold linear dielectric medium supporting longitudinal electrostatic Langmuir waves, and its density $n_p$ is assumed much greater than the beam density $n_B$. In this respect, we define $\eta\equiv n_B/n_p$ as one of the fundamental parameters of the model. The Langmuir wave is described in terms of the corresponding modes, whose frequencies $\omega$ are very close to the plasma frequency $\omega_p$: the dielectric function (for a fixed mode) $\epsilon=1-\omega_p^2/\omega^2$ is, thus, nearly vanishing. This allows one to write an evolutive equation for the modes corresponding to the Poisson equation.
The simple force equation governs, instead, the particle dynamics. A single mode is set to be resonantly excited by the beam, considering wave number $k=\omega/v_{r}$, where $v_{r}$ is a fixed initial resonant velocity of the beam particles. In this scheme, the cold background plasma is considered as a periodic slab of length $L$, and particle positions are labeled by $x_i$ ($i=1,\,...,\,N$, where $N$ indicates the total particle number). The Langmuir wave scalar potential $\varphi(x,t)$ is addressed by means of the Fourier components $\varphi_k(t)$ and we use the following dimensionless quantities: $\bar{x}_i=x_i(2\pi/L)$, $\tau=t\omega_p$, $u_i=\bar{x}_i'=v_i(2\pi/L)/\omega_p$, $\ell=k(2\pi/L)^{-1}$, $\phi_\ell=(2\pi/L)^2 e\varphi_\ell/m\omega_p^2$, $\bar{\phi}_\ell=\phi_\ell e^{-i\tau}$. The derivative with respect to $\tau$ is indicated with the prime, while barred frequencies (and the related growth rates) are normalized as $\bar{\omega}=\omega/\omega_p$ ($\bar{\gamma}=\gamma/\omega_p$). The governing equations of the BPS read \begin{align}\label{mainsys1} \begin{split} &\bar{x}_i'=u_i \;,\\ & u_i'=\sum_{\ell}\big(i\,\ell\;\bar{\phi}_\ell\;e^{i\ell\bar{x}_{i}}+c.c.\big)\;,\\ &\bar{\phi}_\ell'=-i\bar{\phi}_\ell+\frac{i\eta}{2\ell^2 N}\sum_{i=1}^{N} e^{-i\ell\bar{x}_{i}}\;, \end{split} \end{align} while the resonance condition is $\ell=\bar{\omega}/u_{r}$. As a reference case for the analysis of this work, we consider an initial warm beam distribution function with positive slope in the velocity space: \begin{align}\label{erffb} F_0(u)=0.5\;\textrm{Erfc}[a-b\,u]\;, \end{align} with the beam distributed from $u_{min}=0.001$ to $u_{max}=0.002$ (with $a\simeq6.8$ and $b\simeq4537$). In the non-linear simulations, we initialize $N=10^{6}$ particles and implement a Runge-Kutta (fourth-order) algorithm to solve \erefs{mainsys1}.
Actually, the initialization in the velocity space is formal and particles are free to spread in this coordinate, while we consider uniform initial conditions for the particle positions between $0$ and $2\pi$ and we apply periodic boundary conditions in this range for $\bar{x}_i$. Specifically, we discretize the beam profile in 500 delta-like beams, each of them having a single fixed velocity. Each beam is also initialized with randomly generated positions and with a number of particles provided by \eref{erffb} (normalized to $N$). This procedure provides the initial $2\times N$ array for $(\bar{x}_i,u_i)$, which is evolved in time self-consistently with the modes. The particles are thus considered as points in the phase space and no $(\bar{x},u)$-meshes are required. The modes are instead initialized at $\mathcal{O}(10^{-14})$ (this guarantees the initial linear regime). For the considered time scales, both the total energy and momentum (for the explicit expressions, see \citet{CFMZJPP}) are conserved with relative fluctuations of about $1.4\times10^{-5}$. We conclude by writing down the linear dispersion relation for electric field perturbations. Using dimensionless variables, it reads \citep{OM68,LP81} \begin{align}\label{disrel} 2(\bar{\omega}_0+i\bar{\gamma}_L-1)-\frac{\eta}{\ell M} \int_{-\infty}^{+\infty}\!\!\!\!\!\!\!du\frac{\partial_u F_0(u)}{u\ell-\bar{\omega}_0-i\bar{\gamma}_L}=0\;, \end{align} where $M=\int F_0(u)du$ and we have explicitly written $\bar{\omega}=\bar{\omega}_0+i\bar{\gamma}_L$. Here, $\bar{\omega}_0$ includes a small real frequency shift with respect to $\omega_p$ and $\bar{\gamma}_L$ indicates the normalized linear growth rate \citep{nceps18}. This expression describes the inverse Landau damping mechanism, and defines the linear instability condition of a single mode through the resonance pole $u=\bar{\omega}_0/\ell$.
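The dispersion relation \eref{disrel} can be solved numerically for the complex frequency, e.g., by a Newton iteration in the complex plane (an illustrative sketch; the velocity grid, the initial guess and all names are assumptions, not the method actually used for the runs):

```python
import numpy as np
from math import erfc

# Beam profile parameters of the Erfc distribution and a grid covering the beam
a, b = 6.8, 4537.0
u = np.linspace(0.0005, 0.0025, 20001)
F0 = np.array([0.5 * erfc(a - b * ui) for ui in u])
dF0 = (b / np.sqrt(np.pi)) * np.exp(-(a - b * u) ** 2)   # analytic dF0/du
M = np.sum(0.5 * (F0[1:] + F0[:-1]) * np.diff(u))        # trapezoidal normalization

def dispersion(z, ell, eta):
    """Complex residual of the dispersion relation at z = omega_0 + i*gamma_L."""
    integrand = dF0 / (u * ell - z)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
    return 2.0 * (z - 1.0) - eta / (ell * M) * integral

def solve_dispersion(ell, eta, z0=1.0 + 0.005j, itmax=200):
    """Newton iteration (the residual is holomorphic away from the real axis)."""
    z = z0
    for _ in range(itmax):
        f = dispersion(z, ell, eta)
        if abs(f) < 1e-13:
            break
        df = (dispersion(z + 1e-8, ell, eta) - f) / 1e-8   # numerical derivative
        z = z - f / df
    return z
```

For a mode resonant near the inflection point of $F_0$ ($\ell\simeq667$), the iteration converges to a root with a small positive imaginary part, i.e., a linearly unstable mode.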
\subsection{Non-linear velocity spread} Let us now review some basic non-linear features of the BPS in the single-mode assumption, by means of numerical simulations of \erefs{mainsys1}. The dynamics of one isolated unstable mode consists of an initial exponential growth (characterized by $\bar{\gamma}_L$) followed by non-linear saturation, where particles are trapped and begin to slosh back and forth in the potential well of the wave. This makes the mode intensity oscillate and generates rotating clumps in the phase space. In \citet{BB95a} (and references therein), the existence of two distinct saturation regimes in the presence of sources and sinks was also verified. They correspond, on the one hand, to the previously discussed steady-state saturation and, on the other hand, to the so-called pulsating scenario for a large mode damping rate (not addressed in the present paper). A quadratic relation exists \citep{OWM71,L72,ucla_fried} between the saturation level of the considered linearly unstable mode (dubbed $|\bar{\phi}|^{S}$) and the linear growth rate, \emph{i.e., } $|\bar{\phi}|^{S}=\alpha\bar{\gamma}_L^{2}$ (with $\alpha=const.$). This relation holds only if the non-linear dynamics is not sensitive to the morphology of the distribution function \citep{ZCrmp,nceps16}, and all the studies reported in the present work satisfy this condition. Assuming a single-mode scheme, the approximation of the post-saturation dynamics by an instantaneous harmonic oscillator allows one to identify the so-called trapping (bounce) frequency $\omega_B$ as \begin{align}\label{ombeq} \bar{\omega}_B=\sqrt{2\ell^{2}|\bar{\phi}|^{S}}= \sqrt{2\alpha}\;\ell\,\bar{\gamma}_L\;. \end{align} Meanwhile, from energy conservation at saturation, one can estimate the non-linear velocity spread of resonant particles, \emph{i.e., } particles having velocity $u_r=1/\ell$ (here and in the following, we approximate $\bar{\omega}_0=1$).
This quantity is clearly related to the (half) rotating clump width mentioned above and it is derived from the relation $m(\Delta\tilde{v}_{NL})^{2}/2=e|\varphi(x,t)|^{S}$. Using dimensionless variables, this definition of the non-linear velocity spread can be cast as \begin{align}\label{estnlvs} \Delta\tilde{u}_{NL}/u_r=2\ell\sqrt{|\bar{\phi}|^{S}}= \sqrt{2}\,\bar{\omega}_B=2\ell\sqrt{\alpha}\;\bar{\gamma}_L\;. \end{align} This estimate does not account for effects of non-resonant particles. Thus, in order to get a satisfactory characterization of the non-linear dynamics, we introduce the clump width $\Delta{u}^{c}_{NL}$ defined by measuring the maximum instantaneous velocity of particles initialized at $\tau=0$ with $u<u_r$ and, similarly, the minimum velocity of particles initialized with $u>u_r$. This corresponds to monitoring the instantaneous spread of particles above and below resonance. Such a measure is performed during the temporal evolution of the system and $\Delta{u}^{c}_{NL}$ is taken as the value at saturation time $\tau_S$. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig1a} \includegraphics[width=0.48\textwidth]{fig1b} \caption{(Color online) Case $u_r\simeq0.0015$: crosses represent simulation results while solid lines are linear fits. \emphpanel{Left-hand panel}: Plot of the clump width $\Delta{u}^{c}_{NL}$ as a function of $\tau$. Different colors represent the 10 equispaced values of $\eta\in[0.00015\,\textrm{(blue)},\,0.0025\,\textrm{(red)}]$. \emphpanel{Right-hand panel:} Dependence of $\Delta{u}^{c}_{NL}$ as a function of $\bar{\gamma}_L$. \label{fig_poug}} \end{figure} Let us discuss the behavior of the quantities introduced above as a function of the linear growth rate.
The following analysis refines well-known results reported in the literature, mainly derived for linear shapes of the initial distribution profile \citep{ucla_fried,OWM71,L72,wu95,wu94}, highlighting the crucial role of wave-particle trapping for non-linear mode saturation and resonance broadening. We study 5 distinct cases having different resonant velocities (namely $u_r\simeq0.0013,\,0.0014,\,0.0015,\,0.0016,\,0.0017$). For each case, 10 simulations with different $\eta$ values are studied (equispacing $\eta$ from $0.00015$ to $0.0025$), providing distinct drives $\bar{\gamma}_L$ by means of \eref{disrel}, and then linear fits are implemented to estimate the scalings. We obtain the following behaviors: \begin{align} |\bar{\phi}|^{S}&=(1.27\pm0.38)\times10^{-5}\;\bar{\gamma}_L^{2}\;,\\ \bar{\omega}_B&=(3.31\pm0.07)\;\bar{\gamma}_L\;,\\ \Delta\tilde{u}_{NL}/u_r&=(4.72\pm0.1)\;\bar{\gamma}_L\;,\\ \Delta u^{c}_{NL}/u_r&=(6.64\pm0.15)\;\bar{\gamma}_L\;. \end{align} Regarding the measured clump width, for the sake of completeness, we show the details of the case at the inflection point of $F_0$ ($u_r\simeq0.0015$). In the \emphpanel{left-hand panel} of \figref{fig_poug}, we plot the clump width $\Delta{u}^{c}_{NL}$ versus time and for the different values of $\eta$ (different colors in the figure): as expected, the smaller the value of $\eta$, the smaller $\Delta{u}^{c}_{NL}$ is. In fact, as the value of $\eta$ is lowered, the instability drive becomes correspondingly weaker, and, in turn, the electric field amplitude at saturation and the clump width are decreased. We observe that, for a fixed $\eta$ value, the clump width increases with time during the linear instability growth phase, until the saturation level is reached. In the \emphpanel{right-hand panel} of \figref{fig_poug}, we instead illustrate the behavior of $\Delta{u}^{c}_{NL}$ as a function of the linear drive, outlining the linear scaling.
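One plausible numerical reading of the clump-width diagnostic defined above is the following (an illustrative Python sketch; the array conventions are assumptions):

```python
import numpy as np

def clump_width(u_now, u_init, u_r):
    """Clump width: track the maximum instantaneous velocity reached by
    particles initialized below the resonance and the minimum velocity
    reached by those initialized above it. The returned value is negative
    before mixing sets in and grows as trapping develops."""
    below = u_init < u_r            # particles starting under the resonance
    upper = np.max(u_now[below])    # how far sub-resonant particles have climbed
    lower = np.min(u_now[~below])   # how far super-resonant particles have fallen
    return upper - lower
```

Evaluating this quantity at each time step and retaining its value at $\tau_S$ reproduces the measure plotted in the left-hand panel of \figref{fig_poug}.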
We emphasize that repeating the analysis above with different slopes of the Erfc distribution, \emph{i.e., } varying, at a given $u_r$ value, the slope of the distribution function $\partial_u f_B(u)$ (responsible for changes of $\bar{\gamma}_L$), yields quantitatively comparable behaviors. This suggests a universal character of the scalings. \vspace{5mm} \section{Resonance overlap at saturation}\label{overlap} In this Section, we discuss the problem of resonance overlap. This issue is often addressed by characterizing the onset of the overlap in terms of the drive (\emph{e.g., } \citet{L72,BB94a}). This means that a threshold is determined in the linear growth rate in order to discriminate between overlapping and isolated regimes. Such a threshold can be estimated directly by enhancing the instability drive, or by characterizing the phase velocity distance between neighboring modes. In the following, we discuss the parallel issue of a proper characterization of the non-linear velocity spread as responsible for the mode overlap, also in view of the analysis of the phase-space dynamics of the next Section. In this sense, we set up an overlapping system at the threshold drive and we analyze the predictive power of the non-linear velocity spread estimates discussed above. We consider a system in which $3$ distinct modes are excited at different resonant velocities ($u_{r1}\simeq0.0013$, $u_{r2}\simeq0.0015$ and $u_{r3}\simeq0.0017$). We consider that resonance overlap occurs when the phase-space regions associated with different resonances mix, due to the non-linear velocity spread. In the considered case, the onset of the overlap regime at saturation time emerges for $\eta\geqslant0.00055$, as clearly represented by the mode evolution depicted in the \emphpanel{left-hand panel} of \figref{fig_over}.
The system is evolved self-consistently for the $3$ modes and it is compared with the single-mode simulation results of each resonance (gray lines). As can be seen from the plot, two resonances start to interact near the corresponding single-mode saturation time. In particular, by increasing the value of $\eta$, the resonances become increasingly overlapped and the interaction time becomes smaller. On the contrary, reducing the drive results in a progressive separation of the resonances, which eventually behave as isolated. For small $\eta$, however, some residual non-linear interplay is found, although the overlap starts much later than the single-mode saturation time. Let us now depict the resonance position and the corresponding $\Delta u^{c}_{NL}$ (\emphpanel{right-hand panel} of \figref{fig_over}, dashed lines) for the threshold value $\eta=0.00055$. In this case, the non-linear trapping regions actually appear non-overlapped, suggesting that fluctuations should evolve as a superposition of non-interacting modes. This evidence has the physical implication that particles not trapped by the wave \citep{EEbook} are also relevant in the ``active'' overlap of different non-linear fluctuations. This phenomenon is related to the analysis of the transition to stochasticity of adjacent resonances. There is a vast literature on this specific topic and the corresponding well-known Chirikov overlap criterion \citep{Ch60,Ch79} (for details, see also \citet{ED81,BB95a,LL10}). It is worth remarking that in the present work, the fluctuation spectrum is not imposed as an external field, as is usual \citep{EEbook,LL10}. It is instead determined self-consistently, including the coupled non-linear evolution of the beam particles, as in the already mentioned works related to the threshold drive for the onset of overlapping.
According to the literature, \emph{e.g., } \citet{LL10}, the quantity $\Delta u^{c}_{NL}$ must be enlarged by means of a scale factor to obtain the observed overlap of the resonance width. In particular, taking into account \figref{fig_over} (right-hand panel), we have to multiply the clump widths evaluated using $\Delta u^{c}_{NL}/u_r=6.64\;\bar{\gamma}_L$ (dashed lines) by a factor $>1$ in order to obtain a non-zero intersection of the new re-sized non-linear spread regions (solid lines). In our case, this is represented by the yellow/blue line intersection around $u=0.00162$. We can thus recognize an effective non-linear velocity spread $\Delta u_{NL}$, able to properly describe the observed resonance regime, as \begin{align}\label{new_du} \Delta u_{NL}=\beta\Delta u^{c}_{NL}\;,\qquad\beta\simeq1.28\;, \end{align} represented in the figure with solid lines. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig2a} \includegraphics[width=0.46\textwidth]{fig2b} \caption{(Color online) \emphpanel{Left-hand panel}: Plot of the mode evolution in a $3$ resonance model (colored lines) and the respective single-mode simulations (gray lines), for $\eta=0.00055$. The color scheme is: $u_{r1}\simeq0.0013$ (green), $u_{r2}\simeq0.0015$ (yellow), $u_{r3}\simeq0.0017$ (blue). \emphpanel{Right-hand panel}: Three resonance system. The black line represents $F_0$ of \eref{erffb}. Circles denote $u_{r1}$ (green), $u_{r2}$ (yellow) and $u_{r3}$ (blue). We indicate also $u_r\pm\Delta u^{c}_{NL}$ (dashed lines) and $u_r\pm\Delta u_{NL}$ (solid lines) with corresponding colors. \label{fig_over}} \end{figure} It is worth observing that, for the obtained $\beta$ value, the resonance $u_{r1}$ (represented in green) turns out to be initially isolated even using the redefined non-linear width.
Nonetheless, in this scenario, the non-linear interplay of the overlapping resonances ($u_{r2}$ and $u_{r3}$) affects the dynamics by broadening the corresponding region of non-linear velocity spread (as discussed in \citet{BB94a,BB94b,BB95a}). This can be recognized in the evolution of the distribution function $f_B(u,\tau)$ plotted in \figref{fig_nlnlnl_}. The initial merging at saturation time of the latter two resonances (yellow and blue) is evident, while for later times the morphology of the distribution profile starts to affect the resonance region of the first (green) resonance. In this way, the $u_{r1}$ overlap sets in at this slightly later time (cf. also the \emphpanel{left-hand panel} of \figref{fig_over}), after the other fluctuations have been non-linearly amplified. This eventually allows a synergistic interaction also with the initially isolated resonance. We also mention that, setting up a system in which only the first two resonances $u_{r1}$ (green) and $u_{r2}$ (yellow) are considered, it can be shown (for the sake of brevity, we do not show the corresponding plots) that they properly behave as two independent modes, as predicted by the quantity $\Delta u_{NL}$ in \eref{new_du}. A careful analysis of the BPS is developed in \citet{BB95a}, where different regimes of the dynamics are considered and numerical simulations are shown to be in agreement with the theoretical models \citep{BB92PRL,BB93,BB94a,BB94b}, \emph{e.g., } the pulsating regime. The present re-analysis aims to emphasize the role played by intrinsic non-linear effects, due to the self-consistent evolution of the system. Actually, the clump size is typically fixed via analytical estimates, which break down the self-consistency by assuming a frozen field. In \citet{finelli19}, it has been clearly shown how retaining this self-consistency is of crucial importance to ensure predictivity.
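The overlap criterion discussed above can be condensed into a short numerical check (a sketch assuming the measured scalings of the previous Section; names and the interval-intersection formulation are illustrative):

```python
def resonance_overlap(u_r1, gamma1, u_r2, gamma2, beta=1.28, c=6.64):
    """Check whether two resonances overlap at saturation, combining the
    measured clump-width scaling Delta u^c_NL / u_r = c * gamma_L with the
    empirical rescaling Delta u_NL = beta * Delta u^c_NL introduced above.
    Overlap occurs when the intervals u_r +/- Delta u_NL intersect."""
    du1 = beta * c * gamma1 * u_r1   # effective half-width of resonance 1
    du2 = beta * c * gamma2 * u_r2   # effective half-width of resonance 2
    return abs(u_r2 - u_r1) < du1 + du2
```

For drives such that the bare clump widths leave a gap between adjacent resonances, the rescaling by $\beta$ can close that gap, reproducing the qualitative behavior of the threshold case discussed in the text.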
Here, we also argue that the overlapping process of two or more resonances can strictly depend on small deformations of the distribution function in those regions of the velocity space outside the plateau. Such apparently marginal deformations contribute to the real size of the resonant region and play a crucial role when the BPS is implemented into the dynamics of fast ions in the Alfv\'en spectrum of a fusion device \citep{BB90a,BB90b,BB90c}. In view of the study of the next Section, we conclude by analyzing the morphology of the beam distribution function in the presence of only the single resonance $u_{r2}$ (the saturation time is $\tau_S=720$). The results are summarized in \figref{fig_nlnlnl}. The role of un-trapped particles is clear in this scheme: the re-sized non-linear velocity spread $\Delta u_{NL}$ well defines the global distortion of $f_B(\tau_S)$ with respect to the initial profile $F_0$, including un-trapped but nearly resonant particles. These are represented by the plateau edges, which are relevant in the active overlap. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig3a} \includegraphics[width=0.48\textwidth]{fig3b} \caption{(Color online) Beam distribution function at saturation $f_B(u,\tau_S=720)$ (left-hand panel) and at a later instant $f_B(u,\tau=1500)$ (right-hand panel) for $\eta=0.00055$ in the case of 3 resonances. Color scheme and other notations as in \figref{fig_over} (right-hand panel). The red line represents $F_0$.\label{fig_nlnlnl_}} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig4} \caption{(Color online) Snapshot of the beam distribution function at saturation $f_B(u,\tau_S=720)$ for $\eta=0.00055$ in the presence of one single resonance $u_{r2}$, represented by the circle (the red line represents $F_0$).
We also indicate $u_r\pm\Delta u^{c}_{NL}$ (dashed lines) and $u_r\pm\Delta u_{NL}$ (solid lines).\label{fig_nlnlnl}} \end{figure} The present study exemplifies the difficulty of obtaining quantitative predictions for the degree of overlap from the linear parameters (the system drive) alone. The physical impact of this fact lies in the possibility of discriminating or excluding specific transport regimes directly from the linear set-up, in order to apply (or not) reduced schemes for the dynamics. The focus on the saturation time is due to the necessity of applying the standard estimate for the particle trapping dynamics. Moreover, this choice is also natural because at the saturation time the field amplitude approaches a constant value and we can compare this picture with the case of an assigned external field. Only slightly different results could be obtained by setting later times and/or extending the time scale for the mixing of the phase space to occur. We conclude this Section by stressing how this analysis of the beam-plasma instability refines, in a fully self-consistent fashion (also using more realistic profiles for the particle distribution), the already existing studies on the transition to stochasticity carried out within dynamical system theory. In particular, using the scaling addressed in the previous Section, we get \begin{align} \Delta u_{NL}/u_r\simeq8.5\;\bar{\gamma}_L\;. \end{align} This expression provides a prescription for the resonance width directly from linear information and it is able to properly describe the resonance overlap regimes, improving the estimates of the trapping region related to the mode saturation amplitude. \section{Lagrangian Coherent Structure analysis}\label{ftle} Let us now describe the phase-space dynamics produced by resonance overlap using the LCS technique, in comparison to the distribution profile evolution of \figref{fig_nlnlnl_} and \figref{fig_nlnlnl}.
LCSs are a generalization of dynamical structures observed in autonomous and periodic systems, \emph{e.g., } invariant manifolds, to temporally aperiodic flows and allow one to identify phase-space regions distinguished by qualitatively different behavior of particle motion (see \citet{Haller_2015} or \citet{MPJPP,CFMZJPP,Di_Giannatale_2018, Di_Giannatale_2018b,Pegoraro_2019} for applications to plasma physics). Following the work of \citet{Haller_2015}, we define LCSs as the most repulsive or attractive material lines (1-dimensional ensembles of material points, \emph{i.e., } points advected by the dynamics) with respect to the nearby ones and, therefore, they are associated with peaked profiles of the Finite Time Lyapunov Exponent (FTLE) fields. In order to calculate the FTLE fields, we trace several test particle trajectories under the action of the time-dependent scalar potential generated from an $N$-body simulation. Test particles are initialized to sample the whole phase space of interest, at a fixed time \(\tau\), in two phase-space grids having an infinitesimal displacement in the velocity direction. In other words, a test particle located in \((\bar{x},u)\) has a neighbor initialized in \((\bar{x},u+\delta_\tau)\). Evolving such a system with assigned time-dependent potentials, at a time \(\tau+\Delta \tau\) these two test particles will be at a distance \(\delta_{\Delta \tau}\) in the phase space and the FTLE value \(\sigma\) in the point \((\bar{x},u)\) can be evaluated using the following expression: \begin{equation}\label{eqftle} \sigma(\bar{x},u,\tau,\Delta \tau)= \ln\,(\delta_{\Delta \tau}/\delta_\tau)/\Delta \tau\;. \end{equation} When considering a positive time span \(\Delta \tau>0\), the curves where the FTLE field is peaked define repulsive transport barriers, while when setting \(\Delta \tau<0\) they represent attractive ones.
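The FTLE computation of \eref{eqftle} can be sketched as follows (illustrative Python; the `advect` integrator is an assumed user-supplied routine driven by the sampled potentials, and we adopt the common convention of dividing by $|\Delta\tau|$ so that backward integration also yields positive exponents):

```python
import numpy as np

def ftle_field(advect, x_grid, u_grid, tau, dtau, delta=1e-9):
    """FTLE field: sigma = ln(delta_dtau / delta_tau) / |dtau|.

    `advect(x, u, tau, dtau)` must return the phase-space positions at
    tau + dtau of test particles moving in the assigned time-dependent
    potentials (e.g., the same integrator used for the N-body run)."""
    X, U = np.meshgrid(x_grid, u_grid, indexing="ij")
    # two grids, infinitesimally displaced in the velocity direction
    x1, u1 = advect(X.ravel(), U.ravel(), tau, dtau)
    x2, u2 = advect(X.ravel(), U.ravel() + delta, tau, dtau)
    dist = np.hypot(x2 - x1, u2 - u1)      # final separation delta_dtau
    return (np.log(dist / delta) / abs(dtau)).reshape(X.shape)
```

Repulsive (attractive) barriers are then extracted as the ridges of the field computed with $\Delta\tau>0$ ($\Delta\tau<0$).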
The LCS can be visualized by plotting the maximum values of \(\sigma(\bar{x},u,\tau,\Delta \tau)\) as extracted from a contour plot in the phase space. In the analysis of this Section, we have set two grids of \(200\times200\) test particles, thus obtaining \(4\times10^{4}\) values of the FTLE for each phase-space snapshot. We apply the methodology introduced in \citet{CFMZJPP} and we choose a relatively small value for $\Delta \tau$, which highlights finite-time transport barriers instead of the system asymptotic properties such as invariant manifolds. The scalar electrostatic potentials have been sampled from the complete $N$-body simulations. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig5a} \includegraphics[width=0.48\textwidth]{fig5b} \caption{(Color online) Attractive and repulsive LCSs (in blue and red, respectively) in the presence of one resonant mode ($u_r\simeq0.0015$), marked by a gray line, calculated by means of the FTLE field with $\Delta \tau = 20$ at $\tau = 700$ (\emphpanel{left-hand panel}) and $\tau=800$ (\emphpanel{right-hand panel}). Dashed lines are placed at $u_{r} \pm \Delta u^{c}_{NL}$ while solid ones at $u_{r} \pm \Delta u_{NL}$ (cf. \figref{fig_nlnlnl}). Passive tracers are depicted with different colors to show phase-space dynamics. \label{fig_singlemodeFTLE}} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig6a} \includegraphics[width=0.48\textwidth]{fig6b}\\ \includegraphics[width=0.48\textwidth]{fig6c} \includegraphics[width=0.48\textwidth]{fig6d} \caption{(Color online) Contour plots of the FTLE value obtained either with a forward or with a backward time integration ($\Delta \tau = \pm 20$) at $\tau = 700$ \emphpanel{(top left-hand panel)}, $1100$ \emphpanel{(top right-hand)}, $1500$ \emphpanel{(bottom left-hand)}, $2100$ \emphpanel{(bottom right-hand)}. Regions with high FTLE are represented in yellow (color scheme from blue to yellow). Passive tracers are marked in black.
\label{fig_multimodeFTLE}} \end{figure} We first characterize the phase-space dynamics in the presence of one mode with resonance velocity \(u_r=0.0015\). In \figref{fig_singlemodeFTLE}, we show attractive and repulsive transport barriers (in blue and red, respectively) associated with a time span of $\Delta \tau = 20$ together with a collection of tracers at two different snapshots, \emph{i.e., } $\tau = 700$ and $800$, to underline the saturation dynamics (cf. \figref{fig_nlnlnl}). Yellow lines are placed at $u_{r} \pm \Delta u^{c}_{NL}$ and $u_{r} \pm \Delta u_{NL}$, as indicated in the figure caption. As expected, the phase space is divided into different macro-regions with negligible exchange of particles between them, representing the inside and the outside, respectively, of a clump which moves coherently, \emph{i.e., } with only minor deformation of its structure. This dynamics is confirmed by the evolution of the tracers and can be attributed to the relatively low-amplitude oscillations of the electric field around its saturation value. In particular, as shown in \citet{CFMZJPP}, greater amplitude oscillations of the scalar potential can de-trap a fraction of the particles originally inside the clump and vice versa. The value of $\Delta u^{c}_{NL}$ is consistent with the maximum velocity width of the region enclosed by the LCS. Tracers outside $u_{r} \pm \Delta u_{NL}$ have been colored in order to highlight their dynamics. It can be argued that these particles exchange, on average, less power than those in the inner region, which move towards the X-point, consistently with the resonance width introduced above. The same technique can be applied to describe the resonance overlap process introduced in Sec. \ref{overlap}. In \figref{fig_multimodeFTLE} (cf.
the evolution of the distribution function in \figref{fig_nlnlnl_}) we show, for each point of the phase space, the largest FTLE value obtained by either a forward or a backward time integration with \(\Delta \tau = \pm 20\). We plot, for the sake of clarity, attractive and repulsive LCSs with the same color, since this difference is not relevant for the present analysis. For $\tau=700$, three closed domains are depicted, while trajectories characterized by \(u \lesssim 0.0013\) and \(u \gtrsim 0.0017\) are only slightly deformed. The three resonances do not behave independently, thus generating overlap at saturation time, as described in the previous Section, and the LCS technique allows one to illuminate this dynamics at each time step. In particular, in the first panel of \figref{fig_multimodeFTLE}, even if the clumps look undistorted and the scalar potentials have approximately single-mode simulation values, the FTLE profile shows peaked structures in the region between adjacent resonances, \emph{i.e., } \(u_{r} \sim 0.0015\) and \(u_{r} \sim 0.0017\), thus suggesting transport processes in the phase space (for a detailed description of this process, see \citet{CFMZJPP}). The overlap is more evident in the next two snapshots ($\tau=1100$ and $1500$), where this region is filled by a convoluted tangle of repulsive and attractive structures. In the last snapshot ($\tau=2100$), the overlap is complete and a single structure is formed, consistently with the formation of a plateau in the particle distribution function. \section{Mode amplitude at saturation}\label{modes} The case analyzed in the previous Sections corresponds to an intermediate situation between the limiting cases of isolated and strongly overlapping resonances. The \emphpanel{left-hand panel} of \figref{fig_over} shows that the mode saturation levels are larger in the multi-mode simulations than in the single-mode runs.
This feature of the BPS has been discussed in \citet{BB95a} (see also references therein), where sources and sinks are included in the model. Due to the enhancement of the distribution function plateau in the overlapping regime, the amount of free energy available to the interaction process becomes larger, and this causes the observed enhancement of the mode saturation level. In the following, we numerically characterize this enhanced saturation as a function of the phase-velocity separation of neighboring modes, highlighting the feature mentioned above in the collisionless case. In line with the conclusion drawn in \citet{BB95a}, we provide a visual interpretation of this process in terms of the areas associated with the distribution function distortions, \emph{i.e., } the particle number. In particular, we initialize the beam particles using a distribution function with a positive constant gradient and $\eta=0.002$, \emph{i.e., } we consider a linear initial profile in velocity space. We run 7 simulations with two modes, fixing one of the resonances (namely at $u_{r1}=0.00135$) and sweeping the other one (dubbed $u_{r2}$) in order to span, with constant velocity increment, the resonance separations $\Delta u_{SEP}\equiv u_{r2}-u_{r1}$ from $5\times10^{-6}$ to $2\times10^{-3}$. For each case, we compare numerical results with the evolutions obtained for isolated resonances. We recall that, in the presence of multiple modes and resonances, the total momentum and energy are conserved. In particular, the total conserved momentum can be written as \citep{OWM71,CFMZJPP} \begin{align}\label{momcons} \mathcal{K}_P=\sum_{\ell}\frac{|\bar{\phi}_{\ell}|^{2}}{\ell}+\frac{2}{\eta N}\sum_i u_i\;.
\end{align} \begin{figure} \centering \includegraphics[width=0.34\textwidth]{fig7a} \includegraphics[width=0.32\textwidth]{fig7b} \includegraphics[width=0.32\textwidth]{fig7c} \caption{(Color online) Left-hand panel: Saturation level $\sum_\ell|\bar{\phi}_{\ell}^S|^{2}/\ell$ as a function of $\Delta u_{SEP}=u_{r2}-u_{r1}$ for self-consistent simulations of two resonances (blue) compared with the saturation level obtained by artificially superposing the evolution of the individual isolated resonances (yellow). The threshold for the onset of the enhanced saturation level is the intersection point (dashed gray line) $\Delta u^*_{SEP}\simeq 1.75\times10^{-5}$. Center and right-hand panels: Time evolution of the modes for the multi-mode (colored lines) and single-mode (grey lines) systems, for a case below threshold, \emph{i.e., } $\Delta u_{SEP}\simeq 5.5\times10^{-6}$ \emphpanel{(center panel)}, and above threshold, \emph{i.e., } $\Delta u_{SEP}\simeq 2\times10^{-4}$ \emphpanel{(right-hand panel)}. \label{fig_satu}} \end{figure} Thus, for each case, we can measure the saturation level as $\sum_\ell|\bar{\phi}_{\ell}^S|^{2}/\ell$. In \figref{fig_satu} \emphpanel{(left-hand panel)}, we plot an interpolation of this quantity as a function of $\Delta u_{SEP}$ for the self-consistent simulations of two modes (blue line), compared with the saturation levels obtained by artificially superposing the evolution of isolated resonances (yellow line). Note that for vanishing resonance separation, as expected, the self-consistent saturation level is half of the value obtained by the artificial addition of single isolated modes. In fact, for coalescing resonances, the modes become different realizations of the same fluctuating field and their saturation amplitude is reduced by a factor of two.
It is immediately seen that a threshold value ($\Delta u^*_{SEP}\simeq 1.75\times10^{-5}$) emerges, below which the saturation level for the self-consistent evolution is lower than that obtained by artificial superposition of single isolated modes: this corresponds to the regime of strongly overlapped resonances. The opposite situation occurs above $\Delta u^*_{SEP}$ (clearly provided that $\Delta u_{SEP}\lesssim 2\Delta u^f_{NL}$, otherwise the resonances are not overlapped and the modes evolve as single isolated fluctuations). In order to better illustrate this behavior, in \figref{fig_satu} we also plot the single- and multi-mode evolutions for a case below \emphpanel{(center panel)} and above \emphpanel{(right-hand panel)} threshold, respectively. \subsection{Interpretative model} When the multi-mode saturation level is larger than for single isolated modes, it is evident that a more efficient process can tap energy from the particle phase space. Following the line of \citet{BB95a}, we discuss a toy model to qualitatively describe this effect and the threshold condition introduced above. For the two resonances, located at $u_1$ and $u_2$ respectively, we model the non-linearly distorted distribution function at saturation by a flattening over a certain region; that is, as horizontal segments centered at the resonance position and extending over twice the non-linear velocity spread, denoted as $\Delta u_1$ and $\Delta u_2$ for the two considered resonances. As described in the previous Sections, resonance overlap occurs when the flattening regions intersect. We then describe the overlapping resonances as a single one having a new resonance velocity and non-linear velocity spread defined, respectively, by \begin{align}\label{hfdjksiulh} u_r=u_1+\Delta u_{SEP}/2\;,\\ \Delta u_r=\rho((u_2+\Delta u_2)-(u_1-\Delta u_1))/2\;,\label{kjshhh} \end{align} where $\Delta u_{SEP}=u_2-u_1$ and $\rho=\mathcal{O}(1)$ is a control parameter for the model.
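As a minimal illustration, the merged-resonance construction of \erefs{hfdjksiulh} and \reff{kjshhh} can be sketched in a few lines of Python; the numerical values below are hypothetical inputs for a symmetric example, not taken from the simulations:

```python
def merged_resonance(u1, u2, du1, du2, rho=1.0):
    """Merged resonance velocity u_r and non-linear spread Delta u_r
    for two overlapping resonances at u1 < u2 with spreads du1, du2."""
    du_sep = u2 - u1                                  # resonance separation
    u_r = u1 + du_sep / 2.0                           # midpoint of the two resonances
    du_r = rho * ((u2 + du2) - (u1 - du1)) / 2.0      # merged half-width
    return u_r, du_r

# Hypothetical symmetric example: two resonances with equal spreads
u_r, du_r = merged_resonance(0.00135, 0.00155, 1e-4, 1e-4)
```

For $\rho=1$ and equal spreads, the merged spread reduces to half the total extent of the two flattening regions, as in the geometric construction above.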
This scheme is illustrated, for $\rho=1$, in \figref{fig_satumo} \emphpanel{(left-hand panel)}, where we used realistic quantities (estimated from $\Delta u_{NL}$) for comparison with the simulation results described above. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig8a} \includegraphics[width=0.42\textwidth]{fig8b} \caption{(Color online) Left-hand panel: Representation of the distribution function at saturation in the case of single isolated modes (grey dashed lines) and of one overlapped resonance (blue solid line) modeled using \erefs{hfdjksiulh} and \reff{kjshhh}. Red dots correspond, from left to right, to $u_1$, $u_r$ and $u_2$, respectively. The corresponding velocity spreads are indicated in the plot. Right-hand panel: Behavior of $\tilde{A}$ (yellow) and $A$ for $\rho=1$ (blue) in terms of $\Delta u_{SEP}$. The dashed gray line indicates the intersection point. \label{fig_satumo}} \end{figure} The non-linear mode saturation corresponds to particle transfer from large to small velocities, as indicated by the momentum conservation law \eref{momcons}. Thus, the relevant quantities to be analyzed are the areas of the colored triangles in the figure. These correspond to the amount of available free energy, in the interpretation of \citet{BB95a}. When the two modes evolve as single isolated resonances, the saturation level scales as \begin{align} \tilde{A}= \tan\alpha((\Delta u_1)^2+(\Delta u_2)^2)/2\;, \end{align} \emph{i.e., } as the sum of the areas built around the two resonances. Here $\tan\alpha$ is the slope of the initial linear distribution. Meanwhile, the area of the triangle corresponding to the new merged resonance can be evaluated by means of \erefs{hfdjksiulh} and \reff{kjshhh} as \begin{align} A=\rho^2 \tan\alpha(\Delta u_{SEP}+\Delta u_1+\Delta u_2)^2/8\;.
\end{align} Neglecting the velocity dependence of $\Delta u_2$ (as $u_2$ is swept to change $\Delta u_{SEP}$), \figref{fig_satumo} \emphpanel{(right-hand panel)} shows $\tilde{A}$ and $A$ (for $\rho=1$) as functions of the resonance separation. For $\Delta u_{SEP} > \Delta u_1 + \Delta u_2$ there is no resonance overlap and the system response is consistent with two isolated resonances. For decreasing $\Delta u_{SEP}$ below the onset of the overlap (\emph{i.e., } the system depicted in \figref{fig_satumo}, \emphpanel{left-hand panel}), the saturation level for the merged resonance is larger than for the single isolated modes, \emph{i.e., } $A>\tilde{A}$, with $A=2\tilde{A}$ at the onset condition, suggesting a sudden transition in the non-linear dynamics due to the synergistic behavior of the interacting modes. In particular, it is readily verified that $A>\tilde{A}$ as long as a lower critical threshold is exceeded (as in the simulation results), namely \begin{align} \Delta u_{SEP}^*=-\Delta u_1-\Delta u_2+2\sqrt{((\Delta u_1)^2+(\Delta u_2)^2)/\rho^2}\;. \end{align} Thus, we can identify a range of enhanced or synergistic interaction of overlapping resonances \begin{align} \Delta u^*_{SEP}<\Delta u_{SEP}<\Delta u_1 + \Delta u_2\;. \end{align} Meanwhile, for $\Delta u_{SEP}<\Delta u^*_{SEP}$, the two resonances are strongly overlapping and $A<\tilde{A}$, with $A=\tilde{A}/2$ in the limit $\Delta u_{SEP}=0$. This is consistent with our numerical simulations. The discrepancy between the intersection point ($\simeq 1.7\times10^{-4}$) of \figref{fig_satumo} \emphpanel{(right-hand panel)} and the value found in the numerical simulations of \figref{fig_satu} \emphpanel{(left-hand panel)} is due to intrinsic details of the non-linear evolution and to the simplifications contained in the model; it can be accounted for by properly setting the model control parameter $\rho$.
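These limiting relations are easy to verify numerically. The following sketch (with hypothetical symmetric spreads $\Delta u_1=\Delta u_2$, $\rho=1$, and $\tan\alpha$ set to unity, since it cancels in the ratios) checks that $A=\tilde{A}$ at the lower threshold, $A=2\tilde{A}$ at the overlap onset, and $A=\tilde{A}/2$ for coalescing resonances:

```python
import math

def A_tilde(du1, du2, tan_a=1.0):
    # Sum of the free-energy areas around the two isolated resonances
    return tan_a * (du1**2 + du2**2) / 2.0

def A(du_sep, du1, du2, rho=1.0, tan_a=1.0):
    # Free-energy area of the merged-resonance triangle
    return rho**2 * tan_a * (du_sep + du1 + du2)**2 / 8.0

def du_sep_star(du1, du2, rho=1.0):
    # Lower critical separation for enhanced saturation
    return -du1 - du2 + 2.0 * math.sqrt((du1**2 + du2**2) / rho**2)

du = 1e-4  # hypothetical symmetric spread
# A equals A_tilde at the lower threshold du_sep_star,
# 2*A_tilde at the overlap onset (du_sep = du1 + du2),
# and A_tilde/2 for coalescing resonances (du_sep = 0).
```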
Indeed, a proper match of the model theoretical threshold with the numerical simulations can be obtained using $\rho=1.41$. As a result, the present model shows how the mode saturation level can be described via the analysis of wave-particle power transfer in phase space. The model can also be made quantitative (and, thus, predictive) by a proper choice of the control parameter $\rho$. \section{Concluding remarks}\label{conclusion} The BPS and its paradigmatic implications in plasma and fusion physics have been widely discussed in the literature. The present analysis has focused on refining some subtle questions that are mandatory for practical applications, in order to make quantitative estimates of the non-linear wave-particle interaction. A study of the non-linear velocity spread allowed us to select a proper region of velocity space characterizing the resonance width. Analyzing the simulations of isolated resonances up to the limiting distance of their overlap, we showed that the ``tails'' around the plateau also play an important dynamical role. It is worth noting that such regions do not contain trapped particles. Nonetheless, their existence must be carefully taken into account when assessing the separation of two resonant modes. In fact, using the plateau region alone, which mainly coincides with the clump size in phase space, would lead to underestimating the transport of particles between two adjacent resonances, owing to the important role played by un-trapped particles. The shape of the clumps and the phase-space dynamics have been described using the LCS technique, which clarifies how particles outside the effective resonance width exchange limited power with the mode spectrum. Finally, we studied the enhanced saturation of interacting resonant modes with respect to the level of isolated resonances, and quantitatively described the merging/overlap of two adjacent resonances.
A critical distance exists in velocity space, above which the two resonances are isolated and below which they are too overlapped to be really distinct. In either case, the two fluctuations behave as individual modes and the wave-particle power exchange is limited. Only when the two resonances are adjacent, and the power transfer from particles to modes is maximized by an enlarged velocity spread (essentially the sum of the original ones), can we predict the enhanced saturation observed, for instance, in \citet{spb16} and \citet{vnf18}. The analysis of this paper offers a summary of the relevant features of the bump-on-tail paradigm (to which the BPS is isomorphic), in view of its paradigmatic character in describing many important physical problems. Quantitative prediction of resonance overlap and of enhanced saturation conditions and levels was the original motivation of this work, which has led us to conclude that the system evolution is strongly influenced by the global features of the distribution function. \vspace{1cm} {\small We gratefully thank Fulvio Zonca for having inspired this work and for his valuable suggestions. NC would like to thank Philip Lauber and Thomas Hayward-Schneider for fruitful discussions and advice. This work has been carried out within the framework of the EUROfusion Consortium [Enabling Research Projects: NAT (AWP17-ENR-MFE-MPG-01), MET (AWP19-ENR-01-ENEA-05)] and has received funding from the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.}
\section{Introduction} \label{sec:intro} The popularity of social media (e.g., Twitter, Weibo) over the last two decades has led to increasing demand for learning the content of social media, which may benefit many real-world applications, such as sentiment analysis~\cite{wang2015unsupervised}, information retrieval~\cite{li2017image2song}, and recommendation systems~\cite{wu2017mobile}. However, the task is non-trivial and challenging, because social media content is commonly multi-modal and involves several types of data, including text, audio, and image (an example is illustrated in Fig.~\ref{fig:example}). Each individual modality encodes specific and complementary knowledge, which can be explored to facilitate understanding of the entire meaning of the content. Therefore, gaining comprehensive knowledge from social media requires deliberate exploration of single-modal information and joint learning of the intrinsic correlations among the various modalities.\par \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{example.png} \caption{An example of multi-modal social media content, organized by a textual passage, a clip of the song ``Muhammad Ali'', and a picture of Muhammad Ali's famous victory.} \label{fig:example} \end{figure} Previous works on learning social media content have focused mainly on a single modality, e.g., textual information~\cite{wu2017mobile} and image information~\cite{garimella2016social}. To the best of our knowledge, no studies have focused on purely acoustic information analysis for social media. These previous works cope with data from the early stage of social media, when users were limited to posting single-modal content.
In contrast, today's social media content usually includes textual, acoustic, and visual ingredients simultaneously, and requires multi-modal learning strategies to integrate rich information from all modalities.\par Many researchers have studied bi-modal approaches to address the issue of incomplete information exploration. These approaches can be classified into two kinds. The first is bi-modal retrieval, which utilizes one modality to find similar content in the other modality, such as finding the texts most relevant to a given image~\cite{li2017image2song}. This kind of approach is similar to machine translation tasks to some extent: the input consists of fine-grained features of one modality, and the objective is to minimize the distance between the output vectors and the fine-grained features of the target modality. The second is bi-modal unification~\cite{park2016image}, which seeks to integrate the individual information of the two modalities. This kind of approach can be divided into two stages: (a) feature extraction, in which the two modalities are encoded into their corresponding vectors or matrices via embedding approaches; and (b) bi-modal feature aggregation, in which the extracted features are mapped into a shared latent space through multilayer perceptrons, or directly into the target spaces of tasks through classifiers~\cite{chen2017visual}. \par Learning representations of multi-modal data, which contains features of more than two modalities, is a more challenging task. The main reason is that the fine-grained features of the modalities can be difficult to obtain because of the lack of annotated labels for every modality. Previous studies lack appropriate strategies to aggregate various kinds of information effectively. It is easy to imagine that bi-modal aggregation models might be extended to multi-modal learning with linear operations, e.g., the concatenation operation.
However, this technique may generate an inappropriate integrated representation of the multiple features and has not been explored further in previous work. \par We address the abovementioned issues by introducing effective strategies to represent the individual features of the three modalities. A general unified framework is proposed to integrate the multiple modalities seamlessly. Specifically, we begin by encoding the single-modal parts into dense vectors. For textual content, we design an attention-based network~(i.e., attBiGRU) to incorporate various textual information. For acoustic content, we introduce an effective DCRNN approach to embed temporal audio clips locally and globally. For visual content, we fine-tune a state-of-the-art general framework, DenseNet~\cite{huang2017densely}. We propose a novel fusion framework, which involves the cross-modal fusion and the attentive pooling strategies, to aggregate the various modal information. Extensive experimental evaluations conducted on real-world datasets demonstrate that our proposed model outperforms state-of-the-art approaches by a large margin. The contributions of this paper are presented as follows: \begin{itemize} \item We introduce effective strategies to extract representative features of the three modalities, including the following. (1) For textual information, we design an attention-based network, named attBiGRU, to integrate various textual parts. The independent textual parts are considered as two separate roles, namely, the protagonist and the supporting players. The supporting players are encoded into an attention weight vector on the protagonist. (2) For acoustic information, we propose the DCRNN strategy based on densely connected convolutional and recurrent networks. The densely connected convolutional networks are used to learn the acoustic features both locally and globally. The recurrent networks are designed to learn the temporal information.
(3) For visual information, we introduce state-of-the-art strategies from~\cite{simonyan2014very,he2016deep,huang2017densely}. \item We propose a general multi-modal feature aggregation framework, JTAV, to learn information jointly. The framework considers the textual, acoustic, and visual contents to generate a unified representation of social media content with the help of the optimized feature fusion network, CMF-AP, which can learn inner and outer cross-modal information simultaneously. \item We conduct comprehensive experiments on two kinds of tasks, sentiment analysis and information retrieval, to evaluate the effectiveness of the proposed JTAV framework. The experiments demonstrate that JTAV outperforms state-of-the-art approaches remarkably, that is, about 2\% higher than the state-of-the-art approaches on all metrics in the sentiment analysis experiment and more than ten times better than the baseline on the main metrics in the music information retrieval experiment. \end{itemize} \section{Related Work} Over the past decades, social media learning has gained considerable attention in the research literature. Social media content is utilized as raw material for recommendation~\cite{wang2013social}, information retrieval~\cite{agichtein2008finding}, and so forth.\par The dominance of single-modal content in social media in the early stages, however, has caused most previous works to position themselves in single-modal exploration, e.g., natural language processing and image processing. \cite{han2013lexical} focused on the short text messages in social media, and normalized lexical variants to their canonical forms. In~\cite{hutto2014vader}, combined lexical features of social media text were proposed for expressing and emphasizing sentiment intensity.
An example of image processing is~\cite{garimella2016social}, in which social media images, particularly geo-tagged images, were studied to predict county-level health statistics.\par In recent years, users have come to prefer presenting more attractive ingredients, such as image and audio, in addition to natural words. Therefore, the literature on social media analysis has been shifting its focus from single-modal to multi-modal learning. For example, ~\cite{wang2015unsupervised} assumed that visual and textual content have a shared sentiment label space, so that the optimization problem can be converted into a minimization between the derivatives of both modalities. In~\cite{li2017image2song}, the images and lyrics are mapped into a shared tag space. The lyrics with the smallest mean squared error with respect to a given image emerge as the matching object. \cite{chen2017visual} presented a new end-to-end framework for visual and textual sentiment analysis; they begin with the co-appearing image and text pairs, which provide semantic information for each other, and use the concatenation of the vectors generated from the original image and text in the shared sentiment space. However, the drawback of these works is that they utilize only partial information.\par An intuitive idea for gaining a comprehensive learning of the entire meaning of social media is to integrate more modalities effectively. The reason is that each individual modality encodes specific knowledge and is complementary, an aspect that can be explored to facilitate understanding of the entire meaning of the content. However, this task is extremely challenging because we need to explore single-modal information deliberately and jointly learn the intrinsic correlation among the various modalities.
This work is the first study that focuses on the seamless integration of the three modalities~(i.e., text, audio, and image) into a general framework to jointly learn social media content.\par \section{Model Description} The architecture of our proposed framework is illustrated in Fig.~\ref{fig:model}. It has two main modules: feature extraction and feature aggregation. Let $\bm t$, $\bm a$, and $\bm v$ denote the textual, acoustic, and visual features, respectively, which are encoded by the feature extraction module. Let $\bm u$ denote the target unified representation of the multi-modal content. The feature aggregation module is designed to generate $\bm u$ from $\bm t$, $\bm a$, and $\bm v$ effectively and comprehensively. The example in Fig.~\ref{fig:example} is utilized to ground our discussion and to illustrate the intuitive idea of this work. \begin{figure}[t] \centering \includegraphics[width=0.66\textwidth]{model.png} \caption{The architecture of the proposed JTAV framework} \label{fig:model} \end{figure} \subsection{Feature Extraction} Fine-grained features, which directly determine the quality of the input to the aggregation process, are the foundation of multi-modal representation learning. The proposed strategies for generating appropriate features for text, audio, and image are presented in the following subsections. \par \subsubsection{Text modeling} \label{sec:text} \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{text.png} \caption{The attBiGRU strategy for modeling textual content} \label{fig:text} \end{figure} The purpose of text modeling is to encode the raw textual parts into a latent representation $\bm t$.
We design a bidirectional gated recurrent neural network (BiGRU) based attentive network~(i.e., attBiGRU) to extract sufficient textual features carrying long-range dependencies, as illustrated in Fig.~\ref{fig:text}.\par Generally, social media content contains more than one textual part, such as the lyrics and reviews of a song. It is reasonable to assume that a protagonist and supporting players exist among these textual parts. Intuitively, and for the sake of simplicity, we regard the part with the maximum number of words as the protagonist and the other parts as the supporting players. Let $\bm {T_{large}}$ denote the protagonist and $\bm {T_{small}}$ denote the supporting roles. Inspired by~\cite{li2017image2song}, which reduces the gap between image and text, we use a pre-trained word embedding model, e.g., FastText~\cite{bojanowski2016enriching}, to encode $\bm {T_{small}}$ into a word matrix~(i.e., $\bm{\mathcal{T}_{small}}$). $\bm{\mathcal{T}_{small}}$ is computed as $\bm{w_{small}} W_e$, where $\bm{w_{small}}$ denotes the words in $\bm {T_{small}}$ and $W_e$ is a pre-trained embedding matrix. $\bm {t_{att}}$ is an attention vector generated from $\bm{\mathcal{T}_{small}}$ with an average pooling layer, and acts on every word vector of $\bm{\mathcal{T}_{large}}$ through the element-wise product. We thus obtain an attended word matrix, denoted as $\bm{\mathcal{T}}$. \par The BiGRU encoder is utilized to encode every word vector in the sequence into a latent representation with the aid of the other textual parts. Subsequently, we feed the attended word matrix $\bm {\mathcal{T}}$ into a BiGRU network. The output of the BiGRU encoder is a $2M\times N$ matrix, where $M$ is the number of hidden units of a layer, and $N$ is the number of words in $\bm{\mathcal{T}_{large}}$. We denote the output matrix as $\bm {\mathcal{H}}$.\par The attention-with-context part is a simplified version of~\cite{yang2016hierarchical} without the sentence attention part.
It is introduced to balance the importance among all words and is defined as follows: \begin{center} \begin{equation} \begin{split} \label{equ:text} \bm{\hat{h}_i}&= \tanh(\bm{W_u}\bm{h_i}+\bm{b_u}),\\ \bm{\alpha_i}&= \frac{\exp\left( \bm{\hat{h}_i^{\top} \hat{h}_{c}}\right)}{\sum_{j=0}^{N-1}{\exp\left( \bm{\hat{h}_j^{\top} \hat{h}_{c}}\right)}},\\ \bm{t}&= \sum_{j=0}^{N-1}{\bm{\alpha_j h_j}}, \end{split} \end{equation} \end{center} where $\bm {h_i}(0\le i \le N - 1)$ is the $i^{th}$ column of $\bm{\mathcal{H}}$, $\bm {\hat{h}_i}$ is a latent vector computed from $\bm {h_i}$ using the tanh function, $\bm {\alpha_i}$ is the attention weight scalar computed from $\bm {\hat{h}_i}$ and a randomly initialized context vector $\bm{\hat{h}_{c}}$, and $\bm {W_u}$ and $\bm {b_u}$ are parameters learned during training. The final textual representation $\bm t$ is the sum of $\bm {h_j}(0\le j \le N - 1)$ weighted by $\bm {\alpha_j}(0\le j \le N - 1)$.\par \subsubsection{Audio modeling} \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{audio.png} \caption{The DCRNN strategy for modeling acoustic content} \label{fig:audio} \end{figure} In this paper, we focus on polyphonic audio, such as music clips. To capture useful acoustic features, we design a densely connected convolutional recurrent neural network called DCRNN, which takes raw audio as input and generates a distributed representation $\bm a$ that effectively represents the acoustic information. DCRNN consists of five phases, as illustrated in Fig.~\ref{fig:audio}.\par The first phase, named 2D transformation, transforms raw audio into two-dimensional representations. Following the majority of deep learning approaches in music information retrieval~\cite{choi2017tutorial}, we use two-dimensional spectrograms instead of the discrete audio signals. We choose two kinds of spectrograms~(i.e., mel-spectrogram~\cite{boashash1996time} and constant-Q transform~\cite{brown1992efficient}).
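As a concrete illustration of this 2D-transformation phase, a minimal numpy-only mel-spectrogram can be sketched as follows. This is a simplified stand-in for the \textit{librosa} routines used in our experiments; the frame length, hop size, and number of mel bands are hypothetical choices:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(y, sr=22050, n_fft=1024, hop=512, n_mels=40):
    # Frame the signal and compute the windowed power spectrum of each frame
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop: i * hop + n_fft] for i in range(n_frames)])
    window = np.hanning(n_fft)
    spec = np.abs(np.fft.rfft(frames * window, axis=1)) ** 2  # (frames, n_fft//2+1)

    # Triangular mel filterbank with centers equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fbank[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - c, 1)
    return spec @ fbank.T  # (frames, n_mels) mel power spectrogram

# Hypothetical usage on a one-second 440 Hz tone
sr = 22050
t = np.arange(sr) / sr
mels = mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr=sr)
```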
A mel-spectrogram~(MelS) is a visual representation of the spectrum of frequencies of audio and is optimized for human auditory perception. A constant-Q transform~(CQT) is computed with a logarithmic scale of the central frequencies. The CQT is well suited to the frequency distribution of music pitch. MelS and CQT have been used frequently in various acoustic tasks~\cite{zhu2017fusing,aren2017towards}, and are utilized as coarse-grained features in the current paper. We use $\bm{\mathcal{A}_0}$ to indicate the 2D spectrograms.\par The second phase is a convolutional neural network (CNN) based block~(called the CNN block). $\bm{\mathcal{A}_0}$ is used as the input of the block. According to~\cite{choi2017transfer}, early layers of a CNN have the ability to capture local information such as pitch and harmony, whereas deeper layers can capture global information such as melody. We introduce the densely connected CNN, which has been proven powerful at learning both local and global features of images~\cite{huang2017densely}. The CNN block is composed of several densely connected ``CNN-BN-MP'' sub-blocks, where ``BN'' represents batch normalization layers, and ``MP'' indicates max pooling layers. Suppose there are $N$ densely connected ``CNN-BN-MP'' sub-blocks in the CNN block; each can be defined as:\par \begin{center} \begin{equation} \begin{split} \bm{\mathcal{A}_h^1}&=\text{CNN-BN-MP}(\bm{\mathcal{A}_{T-1}}),\\ \bm{\mathcal{A}_h^2}&=\text{CNN-BN-MP}(\bm{\mathcal{A}_h^1}),\\ \bm{\mathcal{A}_h^3}&=\text{CNN-BN-MP}(\bm{\mathcal{A}_h^2}),\\ \bm{\mathcal{A}_{T}}&=\text{CNN-BN-MP}(\bm{\mathcal{A}_h^1}\oplus \bm{\mathcal{A}_h^3}), \end{split} \end{equation} \end{center} where $1 \le T \le N$, $\bm{\mathcal{A}_{T-1}}$ represents the output of the $(T-1)^{th}$ densely connected ``CNN-BN-MP'' sub-block, $\bm{\mathcal{A}_h^*}$ indicates the latent variables learned by the CNN block, $\oplus$ denotes the concatenation operation, and $\bm{\mathcal{A}_{T}}$ is the output of the $T^{th}$ sub-block.
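The dense connectivity of the equations above (in particular the skip concatenation $\bm{\mathcal{A}_h^1}\oplus \bm{\mathcal{A}_h^3}$) can be sketched structurally in a few lines. Here a $2\times2$ average pooling is a hypothetical stand-in for a full ``CNN-BN-MP'' sub-block, so only the wiring, not the learned filters, is illustrated; since each stand-in halves the spatial size, $\bm{\mathcal{A}_h^1}$ is pooled twice more before concatenation so that both operands share the same spatial shape:

```python
import numpy as np

def cnn_bn_mp(x):
    """Stand-in for one 'CNN-BN-MP' sub-block: 2x2 average pooling over
    the (height, width) axes of a (height, width, channels) array."""
    h, w, c = x.shape
    return x[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def dense_sub_block(a_prev):
    # One densely connected sub-block, following the four equations above
    a1 = cnn_bn_mp(a_prev)
    a2 = cnn_bn_mp(a1)
    a3 = cnn_bn_mp(a2)
    # Skip connection: concatenate a1 and a3 along the channel axis
    # (a1 is downsampled twice more so the spatial shapes match)
    a1_down = cnn_bn_mp(cnn_bn_mp(a1))
    return cnn_bn_mp(np.concatenate([a1_down, a3], axis=-1))

a0 = np.random.rand(64, 64, 1)   # hypothetical spectrogram patch
a_out = dense_sub_block(a0)
```

Note that the concatenation doubles the channel count, which is the mechanism by which earlier local features are propagated to deeper layers.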
\par The permute block is adopted from the technique introduced in~\cite{panwar2017deep}. The output of the CNN block is permuted to time-major form, and fed into the following recurrent neural network.\par The recurrent neural network~(RNN) block is built on bidirectional RNNs. The stacked RNN layers are used to learn long- and short-term temporal context information. We take the concatenation of the last hidden state (in vector format) in the forward direction and the first hidden state in the backward direction as the output of this phase.\par Several fully connected dense layers constitute the fully connected dense~(i.e., FCD) block. We use the output of the last dense layer as the acoustic representation $\bm{a}$. Note that all the parameters presented in this section, e.g., the number of layers, are optimally tuned and will be clarified in the experiment section.\par \subsubsection{Image modeling} Extracting rewarding features from images is a well-explored problem; hence, following the tradition of other image-related tasks~\cite{li2017image2song,chen2017visual}, we use models pre-trained on external visual tasks. \par VGG~\cite{simonyan2014very} is a deep convolutional network with nineteen weight layers, ResNet~\cite{he2016deep} is a residual learning framework with 101 weight layers, and DenseNet is a deep densely connected convolutional network with 121 weight layers. All these models are pre-trained on the ImageNet dataset~\footnote{\url{http://www.image-net.org}.}, and are fine-tuned on the experimental datasets by modifying the final densely connected layers. The output of the last dense layer is used as the visual representation $\bm v$. \subsection{Feature Aggregation} In this section, we propose a feature aggregation network, which consists of the cross-modal fusion~(CMF) and the attentive pooling~(AP) strategies, to generate a unified representation of the multi-modal information, as shown in the bottom part of Fig.~\ref{fig:model}.
We call this feature aggregation approach CMF-AP.\par The first phase is cross-modal fusion. We introduce the concepts of inner and outer information. The inner information refers to the information remaining within a single modality, such as the visual information of an image or the acoustic information of a song clip. The outer information refers to the mutual information across modalities, \emph{i.e., } the relevant information shared between two modalities. In addition to the inner information generated by the feature extraction module, we compute the matrix (outer) product ($\times$) pairwise on all modal vectors to calculate the cross-modal aware matrices. The cross-modal aware matrices contain both the outer mutual information across modalities and the inner information. \par However, handling the cross-modal aware matrices directly is not feasible, and thus we reconstruct $\bm t$, $\bm a$, and $\bm v$ from the cross-modal aware matrices. The attentive pooling strategy is adapted from Eq.~\ref{equ:text} to conduct row pooling and column pooling.\par After the above procedure, two reconstructed $\bm t$, two reconstructed $\bm a$, and two reconstructed $\bm v$ are obtained. For instance, from the text-image matrix, we can obtain a text-enhanced visual vector and an image-enhanced textual vector. The reconstructed vectors carry both inner and outer information. Densely connected layers are deployed for dimensionality reduction. The final unified representation $\bm u$ is the concatenation of the reconstructed vectors. \par The feature aggregation module is formulated as follows: \begin{center} \begin{equation} \begin{split} \begin{gathered} \label{equ:cmf} {\bm {C_{mn}}}=\bm m \times \bm n^\top ;\\ \begin{aligned} &\left\{ \begin{aligned} & {\bm {C_m}}=\tanh(\bm {W_m}\bm {C_{mn}}+\bm {b_m}),\\ & {\bm {\alpha_m}}= \mathrm{softmax}(\bm {C_{m}^\top}\bm {u_m}), \\ & \bm{\hat{m}}=\sigma(\bm {\hat{W}_m}\bm {C_{mn}}\bm {\alpha_m}+\bm {\hat{b}_m});\\ \end{aligned} \right.
\\ &\left\{ \begin{aligned} & \bm {C_n}=tanh(\bm{W_n}\bm{C_{mn}^\top}+\bm{b_n}),\\ & {\bm {\alpha_n}}=softmax(\bm{C_{n}^\top}\bm{u_n}),\\ & \bm{\hat{n}}=\sigma(\bm{\hat{W}_n}\bm{C_{mn}^\top}\bm {\alpha_n}+\bm{\hat{b}_n});\\ \end{aligned} \right. \end{aligned} \\ \bm u_{mn}=\bm {\hat m}\oplus \bm{\hat{n}}; \end{gathered} \end{split} \end{equation} \end{center} where $\bm m, \bm n \in \{\bm t, \bm a, \bm v\}$ and $\bm m \ne \bm n$; $\sigma$ represents an activation function; $\bm {\hat m}$ and $\bm {\hat n}$ refer to the reconstructed vectors of $\bm m$ and $\bm n$, respectively; and the remaining variables stand for latent parameters learned during training. The final unified representation $\bm u$ is defined as $\bm u=\bm{u_{tv}}\oplus\bm{u_{ta}}\oplus\bm{u_{av}}$. \subsection{Training and Optimization} The Adam algorithm~\cite{kingmaadam} is used as the optimizer to minimize the binary cross-entropy function. The binary cross-entropy loss is defined as follows: \begin{equation} \mathcal{L}=-\frac{1}{N}\sum_{n=1}^{N}[{y_n\log\hat{y}_n}+(1-y_n)\log(1-\hat{y}_n)], \end{equation} where $N$ denotes the number of target values, $y_n$ denotes the $n^{th}$~$(1\le n \le N)$ true value, and $\hat{y}_n$ denotes the corresponding predicted value. For the information retrieval task, given a query, we randomly sample a fake candidate from the candidate corpus along with the correct candidate for effective training. \begin{comment} {\color{blue}Training stops when model observes that the loss keeps growing for 3 epochs or the iteration meets the max epoch. The batch-size is set as 64 empirically.} \end{comment} \section{Experimental Evaluation} Evaluation is conducted on two kinds of tasks, sentiment analysis and music information retrieval, as presented in Sections~\ref{sec:SA} and~\ref{sec:IR}, respectively\footnote{The source code and more details on the experimental settings are available at \url{http://github.com/mengshor/JTAV}}. 
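Before turning to the experiments, one branch of the CMF-AP computation in Eq.~\ref{equ:cmf} can be illustrated with a small, self-contained sketch. All dimensions below are hypothetical, and $\tanh$ stands in for the unspecified activation $\sigma$; this is an illustration of the formulas, not the released implementation.

```python
import math

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def cmf_ap_branch(m, n, W_m, b_m, u_m, W_hat, b_hat):
    """One branch of CMF-AP: build the cross-modal aware matrix
    C_mn = m n^T, attend over its columns, and return the
    reconstructed vector m_hat (tanh stands in for sigma)."""
    C_mn = [[mi * nj for nj in n] for mi in m]          # (d_m, d_n)
    C_m = [[math.tanh(x + b) for x in row]
           for row, b in zip(matmul(W_m, C_mn), b_m)]   # (k, d_n)
    alpha = softmax(matvec(list(zip(*C_m)), u_m))       # (d_n,) weights
    pooled = matvec(C_mn, alpha)                        # C_mn @ alpha_m
    return [math.tanh(x + b) for x, b in zip(matvec(W_hat, pooled), b_hat)]
```

The $\bm{\hat n}$ branch is obtained analogously from $\bm{C_{mn}^\top}$, and $\bm{u_{mn}}$ is the concatenation of the two reconstructed vectors.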
\subsection{Sentiment Analysis}\label{sec:SA} {\textbf {Task Description}}. Sentiment analysis is a task that deals with the computational detection and extraction of opinions, beliefs, and emotions in given content~\cite{paltoglou2017sensing}. In this paper, we analyze posts from the social media platform ShutterSong. Given a post, which includes visual (image), textual (song lyrics and user-defined caption), and acoustic (song clip) parts, the goal is to assign a ``positive'' or ``negative'' label to the multi-modal content. We use ``1'' to indicate the positive label and ``0'' to indicate the negative label. The sentiment analysis task can thus be regarded as a binary classification task.\par \noindent {\textbf {Dataset and Parameter Settings}}. We derive a sub-dataset from the ShutterSong dataset\footnote{\url{https://drive.google.com/file/d/0B2N8XiDRrEgISXFJSXBEMWpUMDA/view}.} to conduct the sentiment analysis task. The user-defined mood tags are used as labels, and 3260 items have available moods. After removing duplications, we obtain 272 mood tags. We manually divide these mood tags into ``positive'' and ``negative'' according to their meanings. We obtain 2297 positive and 963 negative samples, each including a user-posted image, a song clip, and the corresponding lyrics; part of the samples also have available captions. We call this dataset MoodShutter and separate it into train (80\%), validation (10\%), and test (10\%) sets.\par The song lyrics are truncated at 100 words, and captions are cleaned into five words according to a caption corpus. We use a 300-dimensional embedding matrix, pre-trained on the English Wikipedia dataset\footnote{\url{https://en.wikipedia.org/wiki/Wikipedia:Database_download#English-language_Wikipedia}.} through FastText, as ${W_e}$. The number of hidden units of the BiGRU encoder is set to 50. Raw song clips are transformed into spectrograms with the \textit{librosa} tool~\cite{mcfee2017librosa}. 
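As a quick sanity check on the MoodShutter statistics quoted above (a few lines of arithmetic, not taken from the release code), the split sizes and class balance work out as follows:

```python
# Sanity checks on the MoodShutter statistics quoted above.
positive, negative = 2297, 963
total = positive + negative
assert total == 3260                # items with available mood tags

# the 80/10/10 train/validation/test split
train, val, test = int(0.8 * total), int(0.1 * total), int(0.1 * total)
print(train, val, test)             # 2608 326 326

# the class ratio is roughly 7:3
print(round(positive / total, 2))   # 0.7
```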
We cut all the audio files into 10-second segments. The audio sample rate is set to 22050~Hz, the hop length to 1024, and the number of frequency bins to 96. The parameter $N$ of DCRNN is set as $N=2$. The images are resized to $224\times 224\times 3$. \par \noindent {\textbf {Baseline approaches}}. We compare our JTAV framework with: \begin{itemize} \item doc2vec~\cite{le2014distributed}, a paragraph embedding approach; \item lyricsHAN~\cite{alexandros2017lyrics}, a hierarchical attention network which encodes songs into vectors; \item BiGRU, a bidirectional GRU network which utilizes the song lyrics as input; \item attBiGRU, our novel text modeling approach which incorporates the available captions as a supplementary input on top of BiGRU; \item MFCC~\cite{mcfee2017librosa}, a type of cepstral representation of the audio clip generated by \textit{librosa}; \item convnet~\cite{choi2017transfer}, an acoustic feature learning method based on transfer learning; \item DCRNN-MelS, our novel audio modeling approach which takes MelSs as input; \item DCRNN-CQT, our novel audio modeling approach which takes CQTs as input; \item F-VGG~\cite{simonyan2014very}, a VGG network pre-trained on the ImageNet dataset and fine-tuned on the MoodShutter dataset; \item F-ResNet~\cite{he2016deep}, a ResNet pre-trained on the ImageNet dataset and fine-tuned on the MoodShutter dataset; \item F-DenseNet~\cite{huang2017densely}, a DenseNet pre-trained on the ImageNet dataset and fine-tuned on the MoodShutter dataset; and \item early fusion, a widely used method~\cite{chen2017visual,ramas2017multi} which combines the features learned by our methods before the classification or regression task. 
\end{itemize} \begin{table}[t] \centering \caption{Results of the sentiment analysis task} \scalebox{0.8}{ \begin{tabular}{lllccc} \toprule \multicolumn{1}{l}{modal} & \multicolumn{1}{l}{materials} & approaches & AUC score & F1 score & precision score\\ \midrule \multicolumn{1}{l}{\multirow{5}{*}{text}} & \multicolumn{1}{l}{\multirow{4}{*}{lyrics}} & doc2vec~\cite{le2014distributed} & {0.513} & {0.545} & {0.593 } \\ \cmidrule{3-6} & & lyricsHAN~\cite{alexandros2017lyrics} &{0.518} &{0.575} & {0.596} \\ \cmidrule{3-6} & & BiGRU & 0.572 & 0.602 & 0.640 \\ \cmidrule{2-6} & \multicolumn{1}{l}{{lyrics+caption}} & {attBiGRU} & {0.581} & {0.652} & {0.650 } \\ \midrule \multicolumn{1}{l}{\multirow{5}{*}{audio}} & \multicolumn{1}{l}{\multirow{5}{*}{\mbox{song clip}}} & MFCC~\cite{mcfee2017librosa} & {0.505} & {0.549} & {0.586} \\ \cmidrule{3-6} & & convnet~\cite{choi2017transfer} & {0.518} &{0.614} & {0.621 } \\ \cmidrule{3-6} & & DCRNN-MelS& {0.536} & {0.635} & {0.634} \\ \cmidrule{3-6} & & DCRNN-CQT & {0.559} & {0.651} & {0.643} \\ \midrule \multicolumn{1}{l}{\multirow{4}{*}{image}} & \multicolumn{1}{l}{\multirow{4}{*}{image}} & F-VGG~\cite{simonyan2014very} & {0.546} & {0.594} & {0.619 } \\ \cmidrule{3-6} & & F-ResNet~\cite{he2016deep} & {0.578} &{0.618} &{0.645 } \\ \cmidrule{3-6} & & F-DenseNet~\cite{huang2017densely} & {0.588} & {0.627} & {0.653 } \\ \midrule \multicolumn{1}{l}{\multirow{2}{*}{text+audio}} & \multicolumn{1}{l}{\multirow{2}{5em}{lyrics+caption+ song~clip}} & early fusion & \multirow{1}[2]{*}{0.586} & \multirow{1}[2]{*}{0.660} & \multirow{1}[2]{*}{0.656} \\ \cmidrule{3-6} & & CMF-AP & 0.597 & 0.673 & 0.668 \\ \midrule \multicolumn{1}{l}{\multirow{2}{*}{text+image}} & \multicolumn{1}{l}{\multirow{2}{5em}{lyrics+caption+ image}} & early fusion & \multirow{1}[2]{*}{0.593} & \multirow{1}[2]{*}{0.663} & \multirow{1}[2]{*}{0.661 } \\ \cmidrule{3-6} & & CMF-AP & 0.611 & 0.688& 0.683 \\ \midrule \multicolumn{1}{l}{\multirow{2}{*}{audio+image}} & 
\multicolumn{1}{l}{\multirow{2}{*}{image+\mbox{song clip}}} & early fusion & \multirow{1}[2]{*}{0.589} & \multirow{1}[2]{*}{{0.624}} & \multirow{1}[2]{*}{{0.653}} \\ \cmidrule{3-6} & & CMF-AP & 0.603 & {0.654} & {0.665}\\ \midrule \multicolumn{1}{l}{\multirow{2}{*}{text+audio+image}} & \multicolumn{1}{l}{\multirow{2}{6em}{lyrics+caption+ image+\mbox{song clip}}} & early fusion & \multirow{1}[2]{*}{ 0.602 } & \multirow{1}[2]{*}{ 0.671 } & \multirow{1}[2]{*}{ 0.669 } \\ \cmidrule{3-6} & & CMF-AP & \textbf{ 0.623 }& \textbf{0.691} & \textbf{0.688} \\ \midrule \multicolumn{6}{l}{{JTAV=attBiGRU+(F-DenseNet)+(DCRNN-CQT)+(CMF-AP)}} \\ \bottomrule \end{tabular}% } \label{tab:results}% \end{table}% \noindent {\textbf {Evaluation Metrics}}. The positive and negative samples are unbalanced, with a ratio of about $7:3$. The weighted average area under the curve~(AUC) score, F1 score, and precision score are therefore chosen as evaluation metrics\footnote{We utilize the implementation in sklearn, \url{http://scikit-learn.org/}.}.\par \noindent {\textbf {Results and Discussion}}. Table~\ref{tab:results} presents the results of JTAV and the baseline approaches. For fair comparison, all feature vectors are obtained through optimized training; the classic logistic regression approach is utilized as the classifier; and the reported results are the average values of 10 runs. The values in boldface represent the best results among all the approaches. The proposed JTAV performed the best on all metrics, with a 0.623 AUC score, a 0.691 F1 score, and a 68.8\% precision score. \par \noindent \textit{Observations on single modality}. For textual content, attBiGRU obtained the best results, outperforming doc2vec by about 7 percentage points in AUC score, 11 percentage points in F1 score, and 6 percentage points in precision score. These results demonstrate that our BiGRU-based approaches extract more powerful features than doc2vec and lyricsHAN. 
Taking the complete textual content into consideration, the attBiGRU approach generates even better results. For acoustic content, DCRNN outperforms MFCC and convnet because it learns acoustic information both locally and globally while also retaining long-term temporal context dependency. We also observe that CQT works better than MelS. DenseNet is the best choice for encoding images into vectors as compared with VGG and ResNet. \par \noindent \textit{Observations on multiple modalities}. As more modalities are included, more information is available, and better results are obtained. For example, the performance of combining textual and acoustic information is much better than that of using text or audio alone. Our CMF-AP approach works consistently better than the baseline approach, because it not only extracts the inner features within each single modality but also the outer information across modalities. Moreover, the best results are obtained when we utilize all modalities and all available materials~(i.e., JTAV). The AUC score is 0.623, an improvement of 2.1 percentage points over the early fusion approach. The F1 score is 0.691, an improvement of 2 percentage points over the best baseline approach. The precision score is 0.688, about 1.9 percentage points above the best baseline approach. These margins over the best baseline approach may not seem large. However, the early fusion approach already utilizes the effective features generated by our proposed components~(i.e., attBiGRU, DCRNN-CQT, and F-DenseNet). We believe that, compared with the original version of the early fusion method, which only uses previously published features, the margin would be larger. Owing to space limitations, the results of the early fusion approach on those features are omitted from Table~\ref{tab:results}. 
\subsection{Music Information Retrieval}\label{sec:IR} We conduct an extended experiment, named image2song~\cite{li2017image2song}, to verify the general effectiveness of JTAV. Image2song is a music information retrieval task that, given an image query, aims to find the most relevant song.\par We perform the image2song task on two benchmark datasets, Shuttersong† and Shuttersong§~\cite{li2017image2song}. Both datasets include 620 unique songs with 3100 images; that is, each song is related to five images. Part of the images are attached with user-defined captions, and each song has an acoustic song clip and corresponding song lyrics. The difference lies in the partition strategy for the train and test sets. In Shuttersong†, 100 songs and their related images are selected randomly for testing, and the rest are used for training. In Shuttersong§, the train and test sets share the whole song set, and one of the five images of each song is chosen randomly for testing. The preprocessing settings of images, captions, song lyrics, and song clips are the same as those of the sentiment analysis task.\par We utilize rank-based evaluation metrics to compare the results. For fair comparison, we adopt the evaluation metrics in~\cite{li2017image2song}. Med r represents the median rank of the ground-truth song, and lower values indicate better performance. Recall@k~(R@k for short) is the percentage of ground-truth songs retrieved in the top-k ranked items, and higher values indicate better performance. 
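The two rank-based metrics just described can be sketched in a few lines (the exact evaluation code of~\cite{li2017image2song} may differ in details such as tie handling):

```python
def med_r(ranks):
    """Median rank of the ground-truth items (lower is better)."""
    s = sorted(ranks)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def recall_at_k(ranks, k):
    """Fraction of queries whose ground-truth item appears in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

# hypothetical 1-based ranks of the ground-truth song for five queries
ranks = [1, 3, 2, 8, 40]
assert med_r(ranks) == 3
assert recall_at_k(ranks, 5) == 0.6
```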
In Shuttersong†, $k=\{1,5,10\}$ and $1\le$ Med r $\le100$; whereas in Shuttersong§, $k=\{10,50,100\}$ and $1\le$ Med r $\le620$.\par \begin{figure}[t] \centering \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[width=7.6cm]{dataset_1.png} \caption{Results of image2song on Shuttersong†} \label{fig:dataset_1} \end{minipage} \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[width=7.6cm]{dataset_2.png} \caption{Results of image2song on Shuttersong§} \label{fig:dataset_2} \end{minipage} \end{figure} We compare the proposed JTAV with the state-of-the-art approach~\cite{li2017image2song}. For a more comprehensive comparison, we design several baseline approaches based on our proposed strategies: I+L uses images as queries and song lyrics as candidates with a suite of BiGRU, F-DenseNet, and CMF-AP; I+C+L uses images and available captions as queries and song lyrics as candidates with a suite of attBiGRU, F-DenseNet, and CMF-AP; I+S uses images as queries and song clips as candidates with a suite of DCRNN-CQT, F-DenseNet, and CMF-AP; and I+C+S uses images and available captions as queries and song clips as candidates with a suite of DCRNN-CQT, F-DenseNet, and CMF-AP. In JTAV, the images and available captions form the queries, and the song clips and lyrics form the candidates.\par Fig.~\ref{fig:dataset_1} and Fig.~\ref{fig:dataset_2} display the results on Shuttersong† and Shuttersong§, respectively. Both show that JTAV is the most powerful among all tested approaches. JTAV turns the image2song task from finding the most similar item into finding the most relevant item, which is closer to the real situation in cross-modal retrieval tasks. Notably, JTAV soundly outperforms the state-of-the-art approach. In Shuttersong†, the Med r is no more than 5, and the R@1 score is more than 95\%, which is more than 90\% better than that obtained in~\cite{li2017image2song}. 
In Shuttersong§, the Med r is no more than 20, which is less than a tenth of that obtained in~\cite{li2017image2song}; the R@10 score is more than 83\%, and the R@100 score is more than 97\%. \section{Conclusion} In this paper, we aim to address the issue of learning social media content from the multi-modal view. We have designed effective approaches~(i.e., attBiGRU for textual content, F-DenseNet for visual content, and DCRNN for acoustic content) to extract fine-grained features. We have introduced CMF-AP to generate cross-modal aware matrices and reconstructed modal vectors, which are used to produce a unified representation of the multi-modal content. The proposed framework has been intensively evaluated, and the experimental results have demonstrated the general validity of JTAV in comparison with the state-of-the-art approaches. \begin{comment} 
\end{comment} \section*{Acknowledgements} This work was supported in part by the National Natural Science Foundation of China under Grant Nos. U1636116, 11431006, and 61772288, the Research Fund for International Young Scientists under Grant Nos. 61650110510 and 61750110530, and the Ministry of Education Humanities and Social Science project under Grant No. 16YJC790123.
\newcommand{\sect}[1]{\section{#1} \setcounter{equation}{0}} \newcommand{\Aslash}{A \! \! \! /} \newcommand{\Dslash}{D \! \! \! \! /} \newcommand{\kslash}{k \! \! \! /} \newcommand{\pslash}{p \! \! \! /} \newcommand{\xslash}{x \! \! \! /} \newcommand{\partialslash}{\partial \! \! \! /} \newcommand{\half}{\mbox{\small{$\frac{1}{2}$}}} \newcommand{\Nc}{N_{\!c}} \newcommand{\Nf}{N_{\!f}} \newcommand{\MSbar}{\overline{\mbox{MS}}} \newcommand{\MOMbar}{\overline{\mbox{MOM}}} \begin{document} \title{Three loop anomalous dimensions of higher moments of the non-singlet twist-$2$ Wilson and transversity operators in the $\MSbar$ and RI${}^\prime$ schemes} \author{J.A. Gracey, \\ Theoretical Physics Division, \\ Department of Mathematical Sciences, \\ University of Liverpool, \\ P.O. Box 147, \\ Liverpool, \\ L69 3BX, \\ United Kingdom.} \date{} \maketitle \vspace{5cm} \noindent {\bf Abstract.} We compute the anomalous dimension of the third and fourth moments of the flavour non-singlet twist-$2$ Wilson and transversity operators at three loops in both the $\MSbar$ and RI$^\prime$ schemes. To assist with the extraction of estimates of matrix elements computed using lattice regularization, the finite parts of the Green's function where the operator is inserted in a quark $2$-point function are also provided at three loops in both schemes. \vspace{-17cm} \hspace{13.5cm} {\bf LTH 718} \newpage \sect{Introduction.} In calculations relating to deep inelastic scattering the operator product expansion plays an important role in allowing one to evaluate the underlying current correlators. In essence there are two parts to the formalism. The first is the basis of gauge invariant operators into which the current correlators are decomposed and the other is the process dependent Wilson coefficients. For high energy experiments the dominant set of operators are those of leading twist, where twist is defined as the difference between dimension and spin. 
The Wilson coefficients are determined from the specific process of interest and are computed in perturbation theory. To understand the physics at various energies one requires the solution of the underlying renormalization group equation at some particular loop approximation, and necessary for this are the anomalous dimensions of the operators of the basis. With the increased precision now demanded by experiments, the goal in recent years has been to ascertain the anomalous dimensions at three loops as an analytic function of the moment, $n$, of the operators. This has been achieved by the magnificent computation of \cite{1,2,3,4} for the twist-$2$ flavour non-singlet and singlet Wilson operators in the $\MSbar$ scheme, as well as the Wilson coefficients to the same order, which extended the lower loop results of \cite{5,6,7}. Hence the {\em full} three loop renormalization group evolution has been determined. However, one feature which cannot be deduced from perturbative techniques is the underlying matrix element, which is non-perturbative in nature and has to be determined by use of the lattice, where the matrix elements have been computed for various low moments. See, for example, the ongoing work of the QCDSF collaboration, \cite{8,9,10,11,12,13}, and others, \cite{14,15,16,17}. However, in making measurements of matrix elements, which ultimately will provide accurate predictions for experiments, there are several issues. First, to reduce computation time and consequently cost, the matrix elements are determined on the lattice in a renormalization scheme geared to this problem, known as the regularization invariant (RI) scheme, or its variation, the modified regularization invariant (RI$^\prime$) scheme, \cite{18}. Both are mass dependent renormalization schemes. However, when the results are extracted in such a scheme they need to be converted to the reference $\MSbar$ scheme, which is a mass independent scheme. 
An early example of the application of this approach is given in \cite{19,20}. Second, to ensure that the results are credible when evolved to large energy, they must match the perturbative result for the matrix element in the {\em same} renormalization scheme. There are several ways of achieving this. One is to use a non-perturbative approach such as the Schr\"{o}dinger functional method, \cite{21}. The other is to compute the matrix element to as many orders as possible in conventional perturbation theory and then match the lattice results to the explicit perturbative results in the same renormalization scheme. Previously that has been the approach of Chetyrkin and R\'{e}tey, \cite{20}, and \cite{22,23}. Specifically, various quark currents have been considered, including the tensor current as well as the second moment of the flavour non-singlet Wilson and transversity operators. The latter was originally introduced in \cite{24,25,26} and relates to the probability of finding a quark polarized parallel to a transversely polarized nucleon versus that of finding it polarized antiparallel. The results of \cite{20,22,23} have been important in the matrix element lattice computations of \cite{11,12,13,14,15,16,17}. Necessary for the three loop perturbative calculations of the Green's functions with the operator inserted has been the full three loop renormalization of QCD in the RI$^\prime$ scheme in an arbitrary linear covariant gauge, \cite{20,22}. Given the advances in computing technology in recent years, it is now feasible to measure higher moments of the underlying matrix elements on the lattice, since there has been a significant improvement in numerically isolating a clear signal. 
Therefore, to assist the full matching procedure in the ultraviolet region to produce {\em accurate} estimates, it is necessary to extend the approach of \cite{22,23} to higher moments of these two classes of operators. This is the purpose of this article, where we will consider the third and fourth moments of the flavour non-singlet Wilson and transversity operators in an arbitrary linear covariant gauge. There are two parts to this venture. The first is the determination of the anomalous dimensions of the operators at three loops in $\MSbar$ and RI$^\prime$, whilst the second is to produce, to the same loop order, the value of the Green's function involving the operator itself inserted in a quark $2$-point function in both schemes. It is worth noting that for the lattice only the Landau gauge results are relevant, since that is the gauge in which lattice measurements are made. The full arbitrary gauge calculation, being more complete, is nevertheless important for internal checks on the loop computations. Whilst the even moments are appropriate for deep inelastic scattering experiments, the odd moments are accessible on the lattice and can serve as a testing ground for technical issues in higher moment lattice matching. Whilst the Wilson operator anomalous dimensions are known already at three loops in $\MSbar$, \cite{1,2}, we note that what is required are the values of the specific Green's function with the operator inserted, which have not been determined previously. In also considering the transversity operator, we will deduce {\em new} anomalous dimensions at three loops in both $\MSbar$ and RI$^\prime$, beyond the earlier two loop calculations of \cite{27,28,29,30,31}. Moreover, since we will be using symbolic manipulation and computer algebra tools, and given that the Wilson and transversity operators are formally similar, the actual computations are efficiently performed through the same computer programmes. 
The paper is organised as follows. In section $2$ we review the basics of the RI$^\prime$ scheme and introduce the properties of the operators we will consider. The full three loop anomalous dimensions for these operators are given in both schemes in section $3$, whilst the same information for the underlying Green's functions is provided in section $4$, including the restriction to the colour group $SU(3)$ and the Landau gauge. Section $5$ records the explicit functions which convert all the three loop anomalous dimensions between the $\MSbar$ and RI$^\prime$ schemes, with concluding remarks given in section $6$. Several appendices summarize the projection of the Green's function onto the basis of independent Lorentz tensors sharing the same symmetry properties as the corresponding operator, as well as the construction of the full operator with these symmetries. \sect{RI$^\prime$ scheme.} In this section we discuss the definition of the RI$^\prime$ scheme and the properties of the operators we are interested in. First, we recall that the standard renormalization scheme is the MS scheme, \cite{32}, where the poles with respect to the regulator are absorbed into the renormalization constants. Its widely used extension, $\MSbar$, which is also a mass independent scheme, additionally absorbs the finite part $\ln ( 4\pi e^{-\gamma} )$, where $\gamma$ is the Euler-Mascheroni constant, \cite{33}. By contrast the regularization invariant schemes, \cite{18}, are mass dependent schemes in which, from the point of view of the Lagrangian, the quark $2$-point function is not only rendered finite but also set to unity through the RI definition \begin{equation} \left. \lim_{\epsilon \rightarrow 0} \left[ \frac{1}{4d} \mbox{tr} \left( Z^{\mbox{\footnotesize{RI}}}_\psi \gamma^\mu \frac{\partial ~}{\partial p^\mu} \Sigma_\psi(p) \right) \right] \right|_{p^2 \, = \, \mu^2} ~=~ 1 ~. 
\label{ridef} \end{equation} where $\Sigma_\psi(p)$ is the quark $2$-point function with external momentum $p$, $Z^{\mbox{\footnotesize{${\cal R}$}}}_\psi$ is the quark wave function renormalization constant in the ${\cal R}$ renormalization scheme and $\mu$ is the scale introduced to ensure that the coupling constant remains dimensionless in $d$ dimensions when using dimensional regularization with $d$~$=$~$4$~$-$~$2\epsilon$. However, since taking a derivative is (financially) costly on the lattice, in practice one takes a variation of definition (\ref{ridef}) which does not involve differentiation, \cite{18}, to define the RI$^\prime$ scheme through \begin{equation} \left. \lim_{\epsilon \rightarrow 0} \left[ Z^{\mbox{\footnotesize{RI$^\prime$}}}_\psi \Sigma_\psi(p) \right] \right|_{p^2 \, = \, \mu^2} ~=~ \pslash ~. \label{ripdef} \end{equation} Although this is primarily the key to defining the scheme on the lattice as well as in the continuum, the full three loop renormalization of QCD has been performed in an arbitrary linear covariant gauge and colour group in \cite{22}. Additionally, part of the four loop renormalization has been performed for $SU(\Nc)$ in \cite{20}. In \cite{22} the other field $2$-point functions were defined in an analogous way to (\ref{ripdef}). By contrast, the $3$-point functions were renormalized according to the usual $\MSbar$ condition, so that those Green's functions were not constrained to be unity. 
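At tree level, where $\Sigma_\psi(p)$~$=$~$\pslash$, the normalization in (\ref{ridef}) can be checked directly, using $\gamma^\mu \gamma_\mu$~$=$~$d \, I$ and $\mbox{tr} \, I$~$=$~$4$:

```latex
\frac{1}{4d} \, \mbox{tr} \left( \gamma^\mu \frac{\partial ~}{\partial p^\mu}
\, \pslash \right) ~=~ \frac{1}{4d} \, \mbox{tr} \left( \gamma^\mu \gamma_\mu
\right) ~=~ \frac{4d}{4d} ~=~ 1 ~,
```

so that $Z^{\mbox{\footnotesize{RI}}}_\psi$~$=$~$1$ at leading order, as the definition requires.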
Consequently, the relationships between the variables in both schemes were established to three loops and are, specifically, \cite{22}, \begin{equation} a_{\mbox{\footnotesize{RI$^\prime$}}} ~=~ a_{\mbox{\footnotesize{$\MSbar$}}} ~+~ O \left( a_{\mbox{\footnotesize{$\MSbar$}}}^5 \right) \label{cccon} \end{equation} where $a$~$=$~$g^2/(16\pi^2)$ in terms of the coupling constant $g$ in the definition of the covariant derivative $D_\mu$, and for the arbitrary linear covariant gauge parameter $\alpha$, \begin{eqnarray} \alpha_{\mbox{\footnotesize{RI$^\prime$}}} &=& \left[ 1 + \left( \left( - 9 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 - 18 \alpha_{\mbox{\footnotesize{$\MSbar$}}} - 97 \right) C_A + 80 T_F \Nf \right) \frac{a_{\mbox{\footnotesize{$\MSbar$}}}}{36} \right. \nonumber \\ && \left. ~+~ \left( \left( 18 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^4 - 18 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^3 + 190 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 - 576 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 463 \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 864 \zeta(3) - 7143 \right) C_A^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ \left( -~ 320 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 - 320 \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 2304 \zeta(3) + 4248 \right) C_A T_F \Nf \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ \frac{}{} \left( -~ 4608 \zeta(3) + 5280 \right) C_F T_F \Nf \right) \frac{a^2_{\mbox{\footnotesize{$\MSbar$}}}}{288} \right. \nonumber \\ && \left. ~+~ \left( \left( ~-~ 486 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^6 + 1944 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^5 + 4212 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^4 - 5670 \zeta(5) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^4 - 18792 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^4 \right. \right. \right. \nonumber \\ && \left. \left. \left. 
~~~~~~~~+~ 48276 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^3 - 6480 \zeta(5) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^3 - 75951 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^3 - 52164 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~+~ 2916 \zeta(4) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 + 124740 \zeta(5) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 + 92505 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 - 1303668 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}} \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~+~ 11664 \zeta(4) \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 447120 \zeta(5) \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 354807 \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 2007504 \zeta(3) \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~+~ 8748 \zeta(4) + 1138050 \zeta(5) - 10221367 \right) C_A^3 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ \left( 12960 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^4 - 8640 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^3 - 129600 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 - 147288 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~+~ 698112 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}} - 312336 \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 1505088 \zeta(3) - 279936 \zeta(4) \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~-~ 1658880 \zeta(5) + 9236488 \right) C_A^2 T_F \Nf \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ \left( 248832 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 - 285120 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 + 248832 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}} - 285120 \alpha_{\mbox{\footnotesize{$\MSbar$}}} \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~-~ 5156352 \zeta(3) + 373248 \zeta(4) - 2488320 \zeta(5) + 9293664 \right) C_A C_F T_F \Nf \right. \right. 
\nonumber \\ && \left. \left. ~~~~~~~+~ \left( ~-~ 38400 \alpha_{\mbox{\footnotesize{$\MSbar$}}}^2 - 221184 \zeta(3) \alpha_{\mbox{\footnotesize{$\MSbar$}}} + 55296 \alpha_{\mbox{\footnotesize{$\MSbar$}}} \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~~~\,-~ 884736 \zeta(3) - 1343872 \right) C_A T_F^2 \Nf^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ \left( ~-~ 3068928 \zeta(3) + 4976640 \zeta(5) - 988416 \right) C_F^2 T_F \Nf \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ \left( 2101248 \zeta(3) - 2842368 \right) C_F T_F^2 \Nf^2 \right) \frac{a^3_{\mbox{\footnotesize{$\MSbar$}}}}{31104} \right] \alpha_{\mbox{\footnotesize{$\MSbar$}}} ~+~ O \left( a^4_{\mbox{\footnotesize{$\MSbar$}}} \right) \label{alpcon} \end{eqnarray} where $T_F$, $C_F$ and $C_A$ are the usual group theory factors defined by \begin{equation} \mbox{Tr} \left( T^a T^b \right) ~=~ T_F \delta^{ab} ~~,~~ T^a T^a ~=~ C_F I ~~,~~ f^{acd} f^{bcd} ~=~ C_A \delta^{ab} \end{equation} for colour group generators $T^a$, $\zeta(n)$ is the Riemann zeta function and the scheme of the variable is indicated as a subscript. Clearly only in the Landau gauge, where $\alpha$~$=$~$0$, are the variables equivalent. Although ultimately we are interested in the Landau gauge for the lattice matching, we have chosen to compute in an arbitrary linear covariant gauge since the extra $\alpha$ dependence, evident in expressions such as (\ref{alpcon}), will provide a central checking tool in the loop computations. For instance, in a mass independent renormalization scheme the anomalous dimension of a gauge invariant operator is independent of the gauge parameter. This is not the case, however, for mass dependent schemes such as RI$^\prime$, as will be apparent in our explicit results.
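As a concrete illustration of these definitions (our own numerical sketch, not part of the computation reported here), the factors can be verified for the fundamental representation of $SU(2)$, where $T^a = \sigma^a/2$ and $f^{abc} = \epsilon^{abc}$, giving $T_F = 1/2$, $C_F = 3/4$ and $C_A = 2$, consistent with the general $SU(N)$ values $T_F = 1/2$, $C_F = (N^2-1)/(2N)$ and $C_A = N$:

```python
# Numerical check of Tr(T^a T^b) = T_F delta^{ab}, T^a T^a = C_F I and
# f^{acd} f^{bcd} = C_A delta^{ab} for SU(2) with T^a = sigma^a / 2.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

# T_F from Tr(T^a T^b) = T_F delta^{ab}
TF = np.trace(T[0] @ T[0]).real
assert abs(np.trace(T[0] @ T[1])) < 1e-12   # off-diagonal traces vanish

# C_F from the quadratic Casimir T^a T^a = C_F * I
casimir = sum(t @ t for t in T)
CF = casimir[0, 0].real
assert np.allclose(casimir, CF * np.eye(2))

# C_A from f^{acd} f^{bcd} = C_A delta^{ab}; for SU(2), f^{abc} = eps^{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0
adjoint = np.einsum('acd,bcd->ab', eps, eps)
CA = adjoint[0, 0]
assert np.allclose(adjoint, CA * np.eye(3))

print(TF, CF, CA)  # 0.5 0.75 2.0
```

For $SU(3)$, used later for the lattice comparison, the same definitions give $T_F = 1/2$, $C_F = 4/3$ and $C_A = 3$.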
More specifically, the non-singlet operators we will focus on are \begin{eqnarray} {\cal O}_W^{\nu_1\ldots\nu_n} &=& {\cal S} \bar{\psi} \gamma^{\nu_1} D^{\nu_2} \ldots D^{\nu_n} \psi \nonumber \\ {\cal O}_T^{\mu\nu_1\ldots\nu_n} &=& {\cal S} \bar{\psi} \sigma^{\mu\nu_1} D^{\nu_2} \ldots D^{\nu_n} \psi \end{eqnarray} with $n$~$=$~$3$ and $4$ where $\sigma^{\mu\nu}$~$=$~$\half [ \gamma^\mu, \gamma^\nu ]$ and is antisymmetric in its Lorentz indices. The symbol ${\cal S}$ in both definitions denotes the symmetrization of the Lorentz indices $\{\nu_1,\ldots,\nu_n\}$ and the selection of the traceless part according to slightly different criteria in the two cases. For the Wilson operators this is \begin{equation} \eta_{\nu_i\nu_j}{\cal O}_W^{\nu_1\ldots\nu_i\ldots\nu_j\ldots\nu_n} ~=~ 0 ~. \end{equation} Whilst for the transversity operators, \cite{29}, \begin{eqnarray} \eta_{\mu\nu_i}{\cal O}_T^{\mu\nu_1\ldots\nu_i\ldots\nu_n} &=& 0 ~~~~ (i ~\geq~ 2) \nonumber \\ \eta_{\nu_i\nu_j}{\cal O}_T^{\mu\nu_1\ldots\nu_i\ldots\nu_j\ldots\nu_n} &=& 0 ~. \end{eqnarray} Therefore, the transversity operator formally has one fewer tracelessness condition than the Wilson operators. For the renormalization of an operator in a quark $2$-point function, which will be either $\langle \psi(-p) {\cal O}_W^{\mu_1\ldots\mu_n}(0) \bar{\psi}(p) \rangle$ or $\langle \psi(-p) {\cal O}_T^{\mu\nu_1\ldots\nu_n}(0) \bar{\psi}(p) \rangle$, the RI$^\prime$ scheme definition is similar to (\ref{ripdef}), \cite{18,19,20,22,23}. However, as the operators we will consider will carry Lorentz indices, this $2$-point function will decompose into several invariant amplitudes which may or may not have a tree $(T)$ or Born term. For those amplitudes which have a tree term, the RI$^\prime$ scheme definition is, \cite{23}, \begin{equation} \left.
\lim_{\epsilon \, \rightarrow \, 0} \left[ Z^{\mbox{\footnotesize{RI$^\prime$}}}_\psi Z^{\mbox{\footnotesize{RI$^\prime$}}}_{{\cal O}} \Sigma^{(T)}_{{\cal O}}(p) \right] \right|_{p^2 \, = \, \mu^2} ~=~ {\cal T} \label{riopdef} \end{equation} where $\Sigma^{(T)}_{{\cal O}}(p)$ is the tree part of $\langle \psi(-p) {\cal O}(0) \bar{\psi}(p) \rangle$ and ${\cal T}$ is the value of the tree term amplitude, which need not be unity. In other words, there is no $a$ dependence in ${\cal T}$. The explicit details of how we construct the Green's functions in terms of a basis of Lorentz tensors satisfying the same symmetry properties as the original operator are given in appendix A. This summarizes the procedure we will use to extract the renormalization constants of the operators, which will then be encoded in the associated anomalous dimensions through the respective definitions \begin{equation} \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{{\cal O}} (a) ~=~ -~ \beta(a) \frac{\partial \ln Z^{\mbox{\footnotesize{$\MSbar$}}}_{{\cal O}}} {\partial a} ~-~ \alpha \gamma^{\mbox{\footnotesize{$\MSbar$}}}_\alpha(a) \frac{\partial \ln Z^{\mbox{\footnotesize{$\MSbar$}}}_{{\cal O}}} {\partial \alpha} \end{equation} and \begin{equation} \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{{\cal O}} (a) ~=~ -~ \beta(a) \frac{\partial \ln Z^{\mbox{\footnotesize{RI$^\prime$}}}_{{\cal O}}} {\partial a} ~-~ \alpha \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_\alpha(a) \frac{\partial \ln Z^{\mbox{\footnotesize{RI$^\prime$}}}_{{\cal O}}} {\partial \alpha} \end{equation} where $\gamma^{\mbox{\footnotesize{RI$^\prime$}}}_\alpha(a)$ is given in \cite{22}. \sect{Anomalous dimensions.} Having described in detail the method of renormalizing in the RI$^\prime$ scheme, we now record the explicit three loop results for the anomalous dimensions.
In constructing our results we made extensive use of the {\sc Mincer} package, \cite{34,35}, written in the symbolic manipulation language {\sc Form}, \cite{36}. The {\sc Mincer} algorithm, \cite{34}, determines the divergent and finite parts of massless $2$-point functions using dimensional regularization in $d$-dimensions. Therefore, it is ideal for the current problem since the Green's functions we are interested in are massless quark $2$-point functions with the appropriate operator inserted at zero momentum. This is the momentum configuration which is measured on the lattice. Moreover, since we are concerned with operators which are symmetrized in their Lorentz indices and satisfy various tracelessness conditions in addition to being flavour non-singlet operators, there is no possibility of mixing into other operators. This is an important observation since ordinarily nullifying the momentum flow through the operator could lead to the inability to correctly determine the projection into the full basis of operators. (For a clear exposition on the deeper perils of operator mixing see, for example, \cite{37}.) The fact that each of the operators is multiplicatively renormalizable avoids this potential technicality. For our three loop computation we generated the Feynman diagrams with the {\sc Qgraf} package, \cite{38}. Specifically, there are $3$ one loop, $37$ two loop and $684$ three loop diagrams to be calculated. For the operators with no three gluon and two quark leg operator insertions, however, the latter total is reduced by $14$. Finally, the electronic {\sc Qgraf} output is converted into {\sc Form} input notation and the {\sc Form} version of the {\sc Mincer} algorithm, \cite{35}, is applied to the $724$ Feynman diagrams. The actual Feynman rules for each operator were generated automatically in {\sc Form}. First we constructed the object with the same symmetry and tracelessness conditions as the operators we are interested in.
The explicit details for each operator are given in appendix B. Then we applied an algorithm which systematically substitutes for the covariant derivatives and functionally differentiates with respect to the various fields present to produce electronic forms of the $2$, $3$, $4$ and $5$-point operator vertex insertions. Now we provide our results in $\MSbar$. For completeness we give those for the two Wilson operators and note that we found exact agreement with the results first deduced in \cite{1,5,6,39,40}. These are \begin{eqnarray} \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(a) &=& \frac{25}{6} C_F a ~+~ C_F \left[ \frac{535}{27} C_A - \frac{2035}{432} C_F - \frac{415}{54} T_F \Nf \right] a^2 \nonumber \\ && +~ C_F \left[ \left( \frac{55}{3} \zeta(3) + \frac{889433}{7776} \right) C_A^2 - \left( 55 \zeta(3) + \frac{311213}{15552} \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~~~~~~~-~ \left( \frac{200}{3} \zeta(3) + \frac{62249}{1944} \right) C_A T_F \Nf + \left( \frac{110}{3} \zeta(3) - \frac{244505}{15552} \right) C_F^2 \right. \nonumber \\ && \left. ~~~~~~~~~~~+~ \left( \frac{200}{3} \zeta(3) - \frac{203627}{3888} \right) C_F T_F \Nf - \frac{2569}{486} T_F^2 \Nf^2 \right] a^3 \nonumber \\ && +~ O(a^4) \end{eqnarray} and \begin{eqnarray} \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(a) &=& \frac{157}{30} C_F a ~+~ \left[ 1292560 C_A - 287303 C_F - 530840 T_F \Nf \right] \frac{C_F a^2}{54000} \nonumber \\ && +~ \left[ \left( 932472000 \zeta(3) + 6803318650 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~~-~ \left( 2797416000 \zeta(3) + 1335140785 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~-~ \left( 4069440000 \zeta(3) + 1760516200 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 1864944000 \zeta(3) - 714245693 \right) C_F^2 \right. \nonumber \\ && \left. ~~~~~+~ \left( 4069440000 \zeta(3) - 3304751260 \right) C_F T_F \Nf \right. 
\nonumber \\ && \left. ~~~~~-~ 307421600 T_F^2 \Nf^2 \right] \frac{C_F a^3}{48600000} ~+~ O(a^4) \end{eqnarray} where we note that throughout the article when the operator appears explicitly as a subscript on an object, it is regarded as a label and the free indices do not endow the object with tensor properties. Likewise when we indicate the renormalization scheme on a quantity which is evaluated in perturbation theory, that means that the variables in which it is expressed, such as $a$ and $\alpha$, are regarded as the variables in the {\em same} scheme. The relationship between the variables in either scheme is given in (\ref{cccon}) and (\ref{alpcon}). For the two transversity operators the $\MSbar$ anomalous dimensions have not been given previously and we find that \begin{eqnarray} \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(a) &=& \frac{13}{3} C_F a ~+~ \left[ 1195 C_A - 311 C_F - 452 T_F \Nf \right] \frac{C_F a^2}{54} \nonumber \\ && +~ \left[ \left( 10368 \zeta(3) + 126557 \right) C_A^2 - \left( 31104 \zeta(3) + 30197 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~-~ \left( 67392 \zeta(3) + 38900 \right) C_A T_F \Nf + \left( 67392 \zeta(3) - 50552 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 20736 \zeta(3) - 17434 \right) C_F^2 - 4816 T_F^2 \Nf^2 \right] \frac{C_F a^3}{972} ~+~ O(a^4) \label{tra3ms} \end{eqnarray} and \begin{eqnarray} \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(a) &=& \frac{16}{3} C_F a ~+~ \left[ 1357 C_A - 296 C_F - 554 T_F \Nf \right] \frac{C_F a^2}{54} \nonumber \\ && +~ \left[ \left( 272160 \zeta(3) + 2893009 \right) C_A^2 ~-~ \left( 816480 \zeta(3) + 662155 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~-~ \left( 1658880 \zeta(3) + 798892 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 544320 \zeta(3) - 235100 \right) C_F^2 \right. \nonumber \\ && \left. 
~~~~~+~ \left( 1658880 \zeta(3) - 1328860 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~~-~ 117776 T_F^2 \Nf^2 \right] \frac{C_F a^3}{19440} ~+~ O(a^4) ~. \label{tra4ms} \end{eqnarray} There are several checks on these two expressions. First, as we have computed them in an arbitrary covariant gauge their final form must be independent of the gauge parameter in a mass independent renormalization scheme, which is apparent in (\ref{tra3ms}) and (\ref{tra4ms}), \cite{32,41}. Second, part of each of the three loop terms has in fact already been determined by the large $\Nf$ expansion in \cite{23}. There the leading order critical exponent corresponding to the anomalous dimension evaluated at the non-trivial $d$-dimensional fixed point of the QCD $\beta$-function was determined in $d$-dimensions using a method that was originally developed to study the perturbative structure of scalar field theories, \cite{42,43}. This critical exponent, \cite{23}, encodes all orders information on the corresponding renormalization group function at $O(1/\Nf)$. Therefore, if we formally write the leading $O(1/\Nf)$ part of the arbitrary $n$ transversity operator anomalous dimension as, \cite{23}, \begin{equation} \gamma^{(n) \, \mbox{\footnotesize{$\MSbar$}}}_{\bar{\psi} \sigma^{\mu\nu_1} D^{\nu_2} \ldots D^{\nu_n} \psi}(a) ~=~ C_F \left[ b_1 a ~+~ \left( b_{21} T_F \Nf + b_{20} \right) a^2 ~+~ \sum_{r=3}^\infty \sum_{j=0}^{r-1} b_{rj} T_F^j \Nf^j a^r \right] \end{equation} then the leading order coefficient of the $\Nf$ polynomial at three loops is given by \begin{equation} b_{32} ~=~ \frac{4}{27} \left[ 48 S_3(n) - 80 S_2(n) - 16 S_1(n) + \frac{3( 17 n^2 + 17 n - 8 )}{n(n+1)} \right] \end{equation} where $S_l(n)$~$=$~$\sum_{i=1}^n 1/i^l$. Evaluating this for $n$~$=$~$3$ and $n$~$=$~$4$ reproduces the corresponding coefficients in (\ref{tra3ms}) and (\ref{tra4ms}) respectively. 
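This evaluation is easy to reproduce with exact rational arithmetic. The following sketch (our own check, not part of the paper's {\sc Form} computation) evaluates $b_{32}$ from the harmonic sums and compares it with the $T_F^2 \Nf^2$ coefficients of the $a^3$ terms read off from (\ref{tra3ms}) and (\ref{tra4ms}):

```python
# Check of the large-N_f coefficient
#   b_32 = (4/27) [ 48 S_3(n) - 80 S_2(n) - 16 S_1(n)
#                   + 3 (17 n^2 + 17 n - 8) / (n (n + 1)) ]
# against the T_F^2 Nf^2 parts of the three loop transversity results.
from fractions import Fraction

def S(l, n):
    """Harmonic sum S_l(n) = sum_{i=1}^{n} 1/i^l."""
    return sum(Fraction(1, i**l) for i in range(1, n + 1))

def b32(n):
    rational = Fraction(3 * (17 * n * n + 17 * n - 8), n * (n + 1))
    return Fraction(4, 27) * (48 * S(3, n) - 80 * S(2, n)
                              - 16 * S(1, n) + rational)

# n = 3: the a^3 T_F^2 Nf^2 term of (tra3ms) is -4816 C_F / 972
assert b32(3) == Fraction(-4816, 972)
# n = 4: the a^3 T_F^2 Nf^2 term of (tra4ms) is -117776 C_F / 19440
assert b32(4) == Fraction(-117776, 19440)
print(b32(3), b32(4))  # -1204/243 -7361/1215
```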
As a final check, we note that we have used the method of \cite{44} to perform the automatic renormalization of Green's functions at high loop order. This entails computing the Green's functions as a function of the bare coupling constant and bare gauge parameter. The counterterms are then introduced at the end of the computation by rescaling by the known renormalization constants. Therefore, the remaining divergence is absorbed by the renormalization constant associated with the Green's function. Moreover, given the way it has been deduced, the non-simple poles in $\epsilon$ are constrained to satisfy a specific form depending on the lower order simple poles due to the underlying renormalization group equation. This is a non-trivial checking criterion, especially in the presence of parameters such as the gauge parameter and group Casimirs, and it is reassuring to record that all the renormalization constants determined for the above anomalous dimensions precisely satisfied this criterion. Implicit in this final check is the fact that the already known two loop anomalous dimensions of \cite{27,28,29,30,31} are correctly reproduced when the $n$-dependent results are evaluated for $n$~$=$~$3$ and $n$~$=$~$4$. All these checks therefore give us confidence not only that all our expressions are correct but also, for example, that the original Feynman rules were correctly generated. Having established the $\MSbar$ anomalous dimensions, it is then relatively straightforward to deduce the anomalous dimensions in the RI$^\prime$ scheme. This is achieved by replacing the renormalization constants which scale the bare internal parameters, and that of the overall quark wave function, by those of the RI$^\prime$ scheme and then imposing the RI$^\prime$ scheme definition for the operator renormalization, (\ref{riopdef}).
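The constraint on the non-simple poles can be illustrated in a toy setting. The following sketch (our own illustration with schematic conventions, $\beta(a) = - \epsilon a - b_0 a^2$ in $d = 4 - 2\epsilon$ and a gauge independent $Z$; it is not the paper's actual setup) verifies that finiteness of the anomalous dimension fixes the two loop double pole in terms of the one loop simple pole:

```python
# Toy two loop example: with Z = 1 + z11*a/eps + (z22/eps^2 + z21/eps)*a^2
# and beta(a) = -eps*a - b0*a^2, finiteness of
#   gamma(a) = -beta(a) * d ln(Z) / da
# forces the double pole to be z22 = z11*(z11 - b0)/2, leaving the finite
# two loop anomalous dimension z11*a + 2*z21*a^2.
import sympy as sp

a, eps, z11, z21, b0 = sp.symbols('a epsilon z11 z21 b0')
z22 = z11 * (z11 - b0) / 2                  # RG-constrained double pole
Z = 1 + z11 * a / eps + (z22 / eps**2 + z21 / eps) * a**2
beta = -eps * a - b0 * a**2
gamma = -beta * sp.diff(sp.log(Z), a)

# Expand to O(a^2): all 1/eps poles cancel and the finite part remains
series = sp.series(gamma, a, 0, 3).removeO()
residue = sp.simplify(series - z11 * a - 2 * z21 * a**2)
print(residue)  # 0
```

Had $z_{22}$ been left free, the $a^2$ coefficient would have retained a simple pole $(2 z_{22} - z_{11}^2 + b_0 z_{11})/\epsilon$, which is the kind of consistency condition checked for every renormalization constant above.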
As a check on the resulting renormalization constants, the non-simple poles in $\epsilon$ also have to satisfy various constraints emanating from the renormalization group equation, similar to those of the $\MSbar$ scheme. We note, for completeness, that these are fulfilled. Therefore, we record the corresponding three loop RI$^\prime$ anomalous dimensions are, in four dimensions, \begin{eqnarray} \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(a) &=& \frac{25}{6} C_F a ~+~ \left[ \left( 324 \alpha^2 + 972 \alpha + 17976 \right) C_A - 2035 C_F - 6744 T_F \Nf \right] \frac{C_F a^2}{432} \nonumber \\ && +~ \left[ \left( 29160 \alpha^4 + 260820 \alpha^3 - 69984 \zeta(3) \alpha^2 + 1257768 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~-~ 723168 \zeta(3) \alpha + 4103676 \alpha - 6443712 \zeta(3) + 50460154 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~~+~ \left( 3240 \alpha^3 - 91260 \alpha^2 - 1043460 \alpha - 171072 \zeta(3) - 8028146 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~-~ \left( 259200 \alpha^2 - 186624 \zeta(3) \alpha + 1401408 \alpha + 2322432 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~+~ 35016976 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 269280 \alpha + 3691008 \zeta(3) - 3568016 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 2851200 \zeta(3) - 1222525 \right) C_F^2 + 5492800 T_F^2 \Nf^2 \right] \frac{C_F a^3}{77760} \nonumber \\ && +~ O(a^4) \end{eqnarray} and \begin{eqnarray} \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(a) &=& \frac{13}{3} C_F a ~+~ \left[ \left( 99 \alpha^2 + 297 \alpha + 4788 \right) C_A - 622 C_F - 1776 T_F \Nf \right] \frac{C_F a^2}{108} \nonumber \\ && +~ \left[ \left( 8910 \alpha^4 + 80055 \alpha^3 - 23328 \zeta(3) \alpha^2 + 387846 \alpha^2 \right. \right. \nonumber \\ && \left. \left. 
~~~~~~-~ 241056 \zeta(3) \alpha + 1279197 \alpha - 1902528 \zeta(3) + 13940156 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~~+~ \left( 6210 \alpha^3 + 3420 \alpha^2 - 157170 \alpha + 746496 \zeta(3) - 2737412 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~-~ \left( 79200 \alpha^2 - 62208 \zeta(3) \alpha + 434736 \alpha + 580608 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~+~ 9592064 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 40560 \alpha + 850176 \zeta(3) - 706112 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 414720 \zeta(3) - 348680 \right) C_F^2 + 1491200 T_F^2 \Nf^2 \right] \frac{C_F a^3}{19440} \nonumber \\ && +~ O(a^4) \end{eqnarray} for $n$~$=$~$3$. Further, for the $n$~$=$~$4$ operators we have \begin{eqnarray} \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(a) &=& \frac{157}{30} C_F a ~+~ \left[ \left( 63000 \alpha^2 + 189000 \alpha + 2939040 \right) C_A - 287303 C_F \right. \nonumber \\ && \left. ~~~~~~~~~~~~~~~~~~~-~ 1129560 T_F \Nf \right] \frac{C_F a^2}{54000} \nonumber \\ && +~ \left[ \left( 28350000 \alpha^4 + 264937500 \alpha^3 - 87480000 \zeta(3) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~+~ 1290594375 \alpha^2 - 903960000 \zeta(3) \alpha + 4337679375 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~-~ 4769928000 \zeta(3) + 44592646550 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~~+~ \left( 27675000 \alpha^3 + 58466250 \alpha^2 - 253773750 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~+~ 624024000 \zeta(3) - 8248198970 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~-~ \left( 252000000 \alpha^2 - 233280000 \zeta(3) \alpha + 1452285000 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~+~ 1995840000 \zeta(3) + 31068538400 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 65490000 \alpha + 2825280000 \zeta(3) - 2407455920 \right) C_F T_F \Nf \right. 
\nonumber \\ && \left. ~~~~~+~ \left( 1864944000 \zeta(3) - 714245693 \right) C_F^2 \right. \nonumber \\ && \left. ~~~~~+~ 4979028800 T_F^2 \Nf^2 \right] \frac{C_F a^3}{48600000} ~+~ O(a^4) \end{eqnarray} and \begin{eqnarray} \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(a) &=& \frac{16}{3} C_F a ~+~ \left[ \left( 225 \alpha^2 + 675 \alpha + 11874 \right) C_A - 1184 C_F - 4560 T_F \Nf \right] \frac{C_F a^2}{216} \nonumber \\ && +~ \left[ \left( 16200 \alpha^4 + 150768 \alpha^3 - 46656 \zeta(3) \alpha^2 + 731421 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~-~ 482112 \zeta(3) \alpha + 2435409 \alpha - 3214080 \zeta(3) + 27763364 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~~+~ \left( 8964 \alpha^3 - 8244 \alpha^2 - 363072 \alpha + 518400 \zeta(3) - 5050688 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~~-~ \left( 144000 \alpha^2 - 124416 \zeta(3) \alpha + 818712 \alpha + 1327104 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~+~ 19484432 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 93696 \alpha + 1990656 \zeta(3) - 1687424 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~~+~ \left( 870912 \zeta(3) - 376160 \right) C_F^2 + 3138560 T_F^2 \Nf^2 \right] \frac{C_F a^3}{31104} \nonumber \\ && +~ O(a^4) \end{eqnarray} in four dimensions. Clearly they all satisfy the trivial check that the one loop term is scheme independent. Though since the RI$^\prime$ scheme is a mass dependent one, the anomalous dimensions will not necessarily be independent of the gauge parameter as is clearly the case above. Although we have performed the computation in an arbitrary gauge and colour group, for practical purposes it is useful to specify the results for $SU(3)$. Therefore, the $\MSbar$ transversity anomalous dimensions are \begin{eqnarray} \left. 
\gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(a) \right|^{SU(3)} &=& \frac{52}{9} a ~-~ 2\left[ 678 \Nf - 9511 \right] \frac{a^2}{243} \nonumber \\ && -~ \left[ 10836 \Nf^2 + 505440 \zeta(3) \Nf + 828462 \Nf \right. \nonumber \\ && \left. ~~~~~~-~ 51840 \zeta(3) - 8885081 \right] \frac{a^3}{6561} ~+~ O(a^4) \end{eqnarray} and \begin{eqnarray} \left. \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(a) \right|^{SU(3)} &=& \frac{64}{9} a ~-~ 2\left[ 831 \Nf - 11029 \right] \frac{a^2}{243} \nonumber \\ && -~ \left[ 264996 \Nf^2 + 12441600 \zeta(3) \Nf + 18758202 \Nf \right. \nonumber \\ && \left. ~~~~~~-~ 1360800 \zeta(3) - 206734549 \right] \frac{a^3}{131220} ~+~ O(a^4) \end{eqnarray} where $T_F$~$=$~$1/2$, $C_F$~$=$~$4/3$ and $C_A$~$=$~$3$ for $SU(3)$. In addition for the RI$^\prime$ scheme we record each of the anomalous dimensions in the Landau gauge since that is the gauge primarily used in matching to lattice results. We have \begin{eqnarray} \left. \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(a) \right|_{\alpha = 0}^{SU(3)} &=& \frac{50}{9} a ~-~ \left[ 2529 \Nf - 38411 \right] \frac{a^2}{243} \nonumber \\ && +~ \left[ 6179400 \Nf^2 - 4603392 \zeta(3) \Nf - 247068636 \Nf \right. \nonumber \\ && \left. ~~~~~-~ 241240032 \zeta(3) + 1889349409 \right] \frac{a^3}{262440} ~+~ O(a^4) \end{eqnarray} and \begin{eqnarray} \left. \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(a) \right|_{\alpha = 0}^{SU(3)} &=& \frac{314}{45} a ~-~ \left[ 423585 \Nf - 6325537 \right] \frac{a^2}{30375} \nonumber \\ && +~ \left[ 5601407400 \Nf^2 - 4996080000 \zeta(3) \Nf - 216935001960 \Nf \right. \nonumber \\ && \left. ~~~~~-~ 167030100000 \zeta(3) + 1651820638271 \right] \frac{a^3}{164025000} \nonumber \\ && +~ O(a^4) \end{eqnarray} for the Wilson operators. Whilst \begin{eqnarray} \left. 
\gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(a) \right|_{\alpha = 0}^{SU(3)} &=& \frac{52}{9} a ~-~ 4\left[ 666 \Nf - 10151 \right] \frac{a^2}{243} \nonumber \\ && +~ \left[ 838800 \Nf^2 - 684288 \zeta(3) \Nf - 33432384 \Nf \right. \nonumber \\ && \left. ~~~~~-~ 30148848 \zeta(3) + 256256731 \right] \frac{a^3}{32805} ~+~ O(a^4) \end{eqnarray} and \begin{eqnarray} \left. \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(a) \right|_{\alpha = 0}^{SU(3)} &=& \frac{64}{9} a ~-~ 5 \left[ 684 \Nf - 10213 \right] \frac{a^2}{243} \nonumber \\ && +~ \left[ 1765440 \Nf^2 - 1492992 \zeta(3) \Nf - 68291094 \Nf \right. \nonumber \\ && \left. ~~~~~-~ 56935872 \zeta(3) + 515247289 \right] \frac{a^3}{52488} ~+~ O(a^4) \end{eqnarray} for the transversity case. \sect{Finite parts.} In this section we record the three loop $\MSbar$ and RI$^\prime$ expressions for the amplitudes of the various Green's functions we computed to obtain the previous anomalous dimensions. These are essential for lattice matching computations which therefore necessitates their tedious presentation. The specific definitions of the quantities $\Sigma^{(i)}_{{\cal O}}(p)$ are, as noted before, given in appendix A. It is worth pointing out that not all the amplitudes have an $a$ independent leading term. First, for the Wilson operator with $n$~$=$~$3$, we have \begin{eqnarray} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& \left( \frac{1}{3} + \frac{2}{3} \alpha \right) C_F a \nonumber \\ && +~ \left[ \left( \frac{367}{30} - \frac{6}{5} \zeta(3) \alpha + \frac{361}{90} \alpha + \frac{7}{9} \alpha^2 - \frac{6}{5} \zeta(3) \right) C_F C_A \right. \nonumber \\ && \left. 
~~~~~ +~ \left( -~ \frac{1087}{120} - \frac{37}{18} \alpha + \frac{1}{9} \alpha^2 + \frac{24}{5} \zeta(3) \right) C_F^2 - \frac{25}{9} \Nf T_F C_F \right] a^2 \nonumber \\ && + \left[ \left( -~ \frac{36151}{243} + \frac{112}{15} \zeta(3) \alpha - \frac{14671}{810} \alpha + \frac{154}{9} \zeta(3) \right) \Nf T_F C_F C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{504013}{6480} + \frac{22735}{1944} \alpha - \frac{224}{5} \zeta(3) \right) \Nf T_F C_F^2 + \frac{4210}{243} \Nf^2 T_F^2 C_F \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{6480923}{19440} - \frac{3727}{120} \zeta(3) \alpha + 2 \zeta(5) \alpha + \frac{759413}{12960} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{47}{15} \zeta(3) \alpha^2 + \frac{1}{6} \zeta(5) \alpha^2 + \frac{49223}{4320} \alpha^2 + \frac{401}{216} \alpha^3 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{2563}{24} \zeta(3) - \frac{67}{2} \zeta(5) \right) C_F C_A^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{2811619}{6480} + \frac{25}{3} \zeta(3) \alpha + 8 \zeta(5) \alpha - \frac{51839}{1215} \alpha - \zeta(3) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{613}{162} \alpha^2 - \frac{1}{6} \alpha^3 + \frac{979}{15} \zeta(3) + \frac{784}{3} \zeta(5) \right) C_F^2 C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{28855943}{155520} - 4 \zeta(3) \alpha + \frac{218971}{15552} \alpha + \frac{539}{162} \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{11}{54} \alpha^3 + \frac{860}{9} \zeta(3) - 272 \zeta(5) \right) C_F^3 \right] a^3 ~+~ O(a^4) \end{eqnarray} and \begin{eqnarray} \left. 
\Sigma^{(2) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& -~ \frac{1}{3} ~+~ \left( \frac{107}{54} + \frac{1}{6} \alpha \right) C_F a \nonumber \\ && +~ \left[ \left( \frac{86597}{4860} + \frac{2}{5} \zeta(3) \alpha + \frac{167}{360} \alpha + \frac{13}{72} \alpha^2 - \frac{18}{5} \zeta(3) \right) C_F C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{1471891}{155520} - \frac{401}{216} \alpha + \frac{5}{36} \alpha^2 + \frac{12}{5} \zeta(3) \right) C_F^2 \right. \nonumber \\ && \left. ~~~~~ -~ \frac{32363}{3888} \Nf T_F C_F \right] a^2 \nonumber \\ && +~ \left[ \left( -~ \frac{30365437}{209952} + \frac{68}{45} \zeta(3) \alpha - \frac{1474}{405} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~ -~ \frac{3577}{243} \zeta(3) - \frac{100}{9} \zeta(4) \right) \Nf T_F C_F C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{1019471}{2099520} - \frac{100}{27} \zeta(3) \alpha + \frac{26413}{2592} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~ +~ \frac{4166}{135} \zeta(3) + \frac{100}{9} \zeta(4) \right) \Nf T_F C_F^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{1227463}{52488} + \frac{400}{243} \zeta(3) \right) \Nf^2 T_F^2 C_F \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{208545851}{1049760} - \frac{3721}{720} \zeta(3) \alpha - \frac{1}{8} \zeta(4) \alpha + \frac{1}{6} \zeta(5) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{390323}{25920} \alpha - \frac{23}{720} \zeta(3) \alpha^2 - \frac{1}{16} \zeta(4) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{17}{36} \zeta(5) \alpha^2 + \frac{30983}{8640} \alpha^2 + \frac{1}{9} \zeta(3) \alpha^3 + \frac{475}{864} \alpha^3 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{31163}{972} \zeta(3) + \frac{647}{144} \zeta(4) + \frac{13}{4} \zeta(5) \right) C_F C_A^2 \right. \nonumber \\ && \left. 
~~~~~ +~ \left( -~ \frac{781692217}{8398080} + \frac{697}{54} \zeta(3) \alpha - \frac{8}{3} \zeta(5) \alpha - \frac{57691}{1620} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{7}{18} \zeta(3) \alpha^2 - \frac{7895}{2592} \alpha^2 - \frac{1}{3} \zeta(3) \alpha^3 + \frac{1}{24} \alpha^3 - \frac{743}{540} \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{67}{6} \zeta(4) - \frac{4}{9} \zeta(5) \right) C_F^2 C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{1161367}{43740} - \frac{733}{54} \zeta(3) \alpha + \frac{458737}{20736} \alpha - \frac{25}{9} \zeta(3) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{617}{324} \alpha^2 + \frac{2}{9} \zeta(3) \alpha^3 - \frac{31}{216} \alpha^3 + \frac{9203}{972} \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{55}{9} \zeta(4) + 24 \zeta(5) \right) C_F^3 \right] a^3 ~+~ O(a^4) ~. \end{eqnarray} For the $n$~$=$~$4$ case we have \begin{eqnarray} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& -~ 1 + \left( \frac{1871}{225} + \frac{4}{3} \alpha \right) C_F a \nonumber \\ && +~ \left[ \left( \frac{26869109}{324000} - \frac{3}{5} \zeta(3) \alpha + \frac{10313}{1440} \alpha + \frac{245}{144} \alpha^2 - 13 \zeta(3) \right) C_F C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{345682991}{6480000} - \frac{43553}{3600} \alpha + \frac{13}{72} \alpha^2 + \frac{48}{5} \zeta(3) \right) C_F^2 \right. \nonumber \\ && \left. ~~~~~ -~ \frac{6041063}{162000} \Nf T_F C_F \right] a^2 \nonumber \\ && +~ \left[ \left( -~ \frac{32796795659}{43740000} + \frac{46}{3} \zeta(3) \alpha - \frac{48917}{1296} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~ -~ \frac{139334}{2025} \zeta(3) - \frac{628}{15} \zeta(4) \right) \Nf T_F C_F C_A \right. \nonumber \\ && \left. 
~~~~~ +~ \left( \frac{63233459093}{437400000} - \frac{628}{45} \zeta(3) \alpha + \frac{2010352}{30375} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ +~ \frac{77018}{675} \zeta(3) + \frac{628}{15} \zeta(4) \right) \Nf T_F C_F^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{1335574847}{10935000} + \frac{2512}{405} \zeta(3) \right) \Nf^2 T_F^2 C_F \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{1578785326351}{1399680000} - \frac{43861}{720} \zeta(3) \alpha - \frac{3}{8} \zeta(4) \alpha + \frac{5}{2} \zeta(5) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ +~ \frac{56000717}{414720} \alpha - \frac{1753}{360} \zeta(3) \alpha^2 - \frac{3}{16} \zeta(4) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{4}{3} \zeta(5) \alpha^2 + \frac{986237}{34560} \alpha^2 + \frac{1}{3} \zeta(3) \alpha^3 + \frac{7859}{1728} \alpha^3 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{2350679}{10125} \zeta(3) + \frac{16687}{1200} \zeta(4) + \frac{91}{3} \zeta(5) \right) C_F C_A^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{1552257600373}{1749600000} + \frac{135041}{2250} \zeta(3) \alpha + 4 \zeta(5) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{181148459}{777600} \alpha + \frac{32}{15} \zeta(3) \alpha^2 - \frac{1336787}{51840} \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \zeta(3) \alpha^3 - \frac{205}{192} \alpha^3 + \frac{395693}{2700} \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{1739}{50} \zeta(4) + \frac{116}{3} \zeta(5) \right) C_F^2 C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{203923883969}{2916000000} - \frac{29329}{450} \zeta(3) \alpha + \frac{685201111}{4860000} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{157}{15} \zeta(3) \alpha^2 + \frac{911003}{64800} \alpha^2 + \frac{2}{3} \zeta(3) \alpha^3 - \frac{565}{864} \alpha^3 \right. \right. 
\nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{910609}{40500} \zeta(3) + \frac{1439}{75} \zeta(4) + 96 \zeta(5) \right) C_F^3 \right] a^3 \nonumber \\ && +~ O(a^4) \end{eqnarray} and \begin{eqnarray} \left. \Sigma^{(2) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& \left( \frac{3}{160} + \frac{1}{32} \alpha \right) C_F a \nonumber \\ && +~ \left[ \left( \frac{70559}{115200} - \frac{3}{40} \zeta(3) \alpha + \frac{4991}{23040} \alpha + \frac{31}{768} \alpha^2 - \frac{1}{40} \zeta(3) \right) C_F C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{40907}{96000} - \frac{7453}{57600} \alpha - \frac{1}{384} \alpha^2 + \frac{1}{5} \zeta(3) \right) C_F^2 \right. \nonumber \\ && \left. ~~~~~ -~ \frac{119}{800} \Nf T_F C_F \right] a^2 \nonumber \\ && +~ \left[ \left( -~ \frac{6448567}{864000} + \frac{53}{120} \zeta(3) \alpha - \frac{104179}{103680} \alpha + \frac{53}{216} \zeta(3) \right) \Nf T_F C_F C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{28776619}{8640000} + \frac{4054403}{5184000} \alpha - \frac{77}{54} \zeta(3) \right) \Nf T_F C_F^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{818118521}{55296000} - \frac{4003}{2160} \zeta(3) \alpha + \frac{5}{48} \zeta(5) \alpha + \frac{22103567}{6635520} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{6641}{34560} \zeta(3) \alpha^2 + \frac{1}{96} \zeta(5) \alpha^2 + \frac{69635}{110592} \alpha^2 + \frac{913}{9216} \alpha^3 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{198233}{28800} \zeta(3) + \frac{263}{96} \zeta(5) \right) C_F C_A^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{318424073}{25920000} + \frac{7129}{12000} \zeta(3) \alpha + \frac{1}{2} \zeta(5) \alpha - \frac{20896549}{6912000} \alpha \right. \right. \nonumber \\ && \left. \left. 
~~~~~~~~~~~~ -~ \frac{1}{80} \zeta(3) \alpha^2 - \frac{502243}{1382400} \alpha^2 - \frac{29}{1024} \alpha^3 + \frac{1396697}{108000} \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ 8 \zeta(5) \right) C_F^2 C_A + \frac{51959}{54000} \Nf^2 T_F^2 C_F \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{37693459}{345600000} - \frac{13}{60} \zeta(3) \alpha + \frac{239928131}{207360000} \alpha + \frac{27433}{115200} \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~~ -~ \frac{41}{4608} \alpha^3 - \frac{272419}{27000} \zeta(3) + \frac{38}{3} \zeta(5) \right) C_F^3 \right] a^3 \nonumber \\ && +~ O(a^4) ~. \end{eqnarray} Turning to the case of the transversity operator, for $n$~$=$~$3$ we have \begin{eqnarray} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& \frac{1}{18} \left[ 1 ~+~ \left[ \left( -~ \frac{109}{18} - \frac{5}{6} \alpha \right) C_F \right] a \right. \nonumber \\ && \left. +~ \left[ \left( -~ \frac{48941}{810} - \frac{3}{5} \zeta(3) \alpha - \frac{1223}{360} \alpha - \frac{67}{72} \alpha^2 + \frac{59}{5} \zeta(3) \right) C_F C_A \right. \right. \nonumber \\ && \left. \left. ~~~~~ +\, \left( \frac{26467}{810} + \frac{119}{18} \alpha - \frac{17}{36} \alpha^2 - \frac{48}{5} \zeta(3) \! \right) C_F^2 + \frac{2197}{81} \Nf T_F C_F \right] a^2 \right. \nonumber \\ && \left. +~ \left[ \left( \frac{2254181}{4374} - \frac{124}{15} \zeta(3) \alpha + \frac{32359}{1620} \alpha \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~ +~ \frac{3404}{81} \zeta(3) + \frac{104}{3} \zeta(4) \right) \Nf T_F C_F C_A \right. \right. \nonumber \\ && \left. \left. ~~~~~ +~ \left( -~ \frac{699751}{21870} + \frac{104}{9} \zeta(3) \alpha - \frac{18349}{486} \alpha \right. \right. \right. \nonumber \\ && \left. \left. \left. 
~~~~~~~~~~~~ -~ \frac{11176}{135} \zeta(3) - \frac{104}{3} \zeta(4) \right) \Nf T_F C_F^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~ +~ \left( -~ \frac{177970}{2187} - \frac{416}{81} \zeta(3) \right) \Nf^2 T_F^2 C_F \right. \right. \nonumber \\ && \left. \left. ~~~~~ +~ \left( -~ \frac{260680459}{349920} + \frac{5537}{180} \zeta(3) \alpha + \frac{3}{8} \zeta(4) \alpha - \frac{3}{2} \zeta(5) \alpha \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~ -~ \frac{95953}{1296} \alpha + \frac{133}{80} \zeta(3) \alpha^2 + \frac{3}{16} \zeta(4) \alpha^2 + \frac{4}{3} \zeta(5) \alpha^2 \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~ -~ \frac{35543}{2160} \alpha^2 - \frac{1}{3} \zeta(3) \alpha^3 - \frac{2227}{864} \alpha^3 \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~ +~ \frac{194707}{1296} \zeta(3) - \frac{463}{48} \zeta(4) - \frac{139}{6} \zeta(5) \right) C_F C_A^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~ +~ \left( \frac{77900401}{174960} - \frac{3983}{90} \zeta(3) \alpha + 4 \zeta(5) \alpha + \frac{635981}{4860} \alpha \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~ -~ \frac{5}{6} \zeta(3) \alpha^2 + \frac{14591}{1296} \alpha^2 + \zeta(3) \alpha^3 - \frac{1}{24} \alpha^3 - \frac{9163}{135} \zeta(3) \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~ +~ 22 \zeta(4) - 20 \zeta(5) \right) C_F^2 C_A \right. \right. \nonumber \\ && \left. \left. ~~~~~ +~ \left( \frac{265849}{7290} + \frac{410}{9} \zeta(3) \alpha - \frac{74771}{972} \alpha + \frac{26}{3} \zeta(3) \alpha^2 \right. \right. \right. \nonumber \\ && \left. \left. \left. ~~~~~~~~~~~~ -~ \frac{5063}{648} \alpha^2 - \frac{2}{3} \zeta(3) \alpha^3 + \frac{115}{216} \alpha^3 - \frac{15518}{405} \zeta(3) \right. \right. \right. \nonumber \\ && \left. \left. \left. 
~~~~~~~~~~~~ -~ \frac{32}{3} \zeta(4) - 32 \zeta(5) \right) C_F^3 \right] a^3 \right] ~+~ O(a^4) \end{eqnarray} with further calculation giving \begin{eqnarray} \left. \Sigma^{(2) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& \frac{1}{2} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2} ~+~ O(a^4) \nonumber \\ \left. \Sigma^{(3) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& -~ \frac{3}{2} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2} ~+~ O(a^4) ~. \end{eqnarray} For the fourth moment of the transversity operator we have \begin{eqnarray} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& -~ \frac{1}{32} ~+~ \left( \frac{293}{1152} + \frac{13}{384} \alpha \right) C_F a \nonumber \\ && +~ \left[ \left( \frac{2037217}{829440} + \frac{3127}{18432} \alpha + \frac{397}{9216} \alpha^2 - \frac{13}{32} \zeta(3) \right) C_F C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{8099}{5184} - \frac{1595}{4608} \alpha + \frac{29}{4608} \alpha^2 + \frac{1}{4} \zeta(3) \right) C_F^2 \right. \nonumber \\ && \left. ~~~~~ -~ \frac{118621}{103680} \Nf T_F C_F \right] a^2 \nonumber \\ && +~ \left[ \left( -~ \frac{2440299949}{111974400} + \frac{59}{160} \zeta(3) \alpha - \frac{384991}{414720} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~ -~ \frac{14681}{6480} \zeta(3) - \frac{4}{3} \zeta(4) \right) \Nf T_F C_F C_A \right. \nonumber \\ && \left. 
~~~~~ +~ \left( \frac{42119449}{11197440} - \frac{4}{9} \zeta(3) \alpha + \frac{2369273}{1244160} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{4291}{1080} \zeta(3) + \frac{4}{3} \zeta(4) \right) \Nf T_F C_F^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{25433893}{6998400} + \frac{16}{81} \zeta(3) \right) \Nf^2 T_F^2 C_F \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{112663077941}{3583180800} - \frac{99239}{69120} \zeta(3) \alpha - \frac{3}{256} \zeta(4) \alpha + \frac{5}{96} \zeta(5) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{89697707}{26542080} \alpha - \frac{2879}{27648} \zeta(3) \alpha^2 - \frac{3}{512} \zeta(4) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{17}{384} \zeta(5) \alpha^2 + \frac{541433}{737280} \alpha^2 + \frac{1}{96} \zeta(3) \alpha^3 + \frac{12979}{110592} \alpha^3 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{503123}{82944} \zeta(3) + \frac{181}{512} \zeta(4) + \frac{463}{384} \zeta(5) \right) C_F C_A^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{2225988241}{89579520} + \frac{10093}{5760} \zeta(3) \alpha - \frac{16376411}{2488320} \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{7}{96} \zeta(3) \alpha^2 - \frac{477515}{663552} \alpha^2 - \frac{1}{32} \zeta(3) \alpha^3 - \frac{323}{12288} \alpha^3 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{1219}{360} \zeta(3) - \frac{27}{32} \zeta(4) + \frac{3}{4} \zeta(5) \right) C_F^2 C_A \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{7904501}{3732480} - \frac{295}{144} \zeta(3) \alpha + \frac{1045465}{248832} \alpha - \frac{1}{3} \zeta(3) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{64399}{165888} \alpha^2 + \frac{1}{48} \zeta(3) \alpha^3 - \frac{1007}{55296} \alpha^3 + \frac{13739}{12960} \zeta(3) \right. \right. \nonumber \\ && \left. \left. 
~~~~~~~~~~~ +~ \frac{7}{16} \zeta(4) + \frac{5}{6} \zeta(5) \right) C_F^3 \right] a^3 ~+~ O(a^4) ~. \end{eqnarray} The remaining amplitudes are related to the first through \begin{eqnarray} \left. \Sigma^{(2) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& -~ \frac{4}{5} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(p) \right|_{p^2 \, = \, \mu^2} ~+~ O(a^4) \nonumber \\ \left. \Sigma^{(3) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(p) \right|_{p^2 \, = \, \mu^2} &=& \frac{1}{5} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(p) \right|_{p^2 \, = \, \mu^2} ~+~ O(a^4) \end{eqnarray} Finally, for practical purposes we provide the general results for the specific case of the Landau gauge when the colour group is $SU(3)$. Therefore, we have \begin{eqnarray} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& \frac{4}{9} a ~+~ \left[ -~ \frac{50}{27} \Nf + \frac{4432}{135} + \frac{56}{15} \zeta(3) \right] a^2 \nonumber \\ && +~ \left[ \frac{4210}{729} \Nf^2 + \left( -~ \frac{1665047}{7290} - \frac{28}{5} \zeta(3) \right) \Nf \right. \nonumber \\ && \left. ~~~~~ +~ \frac{279011797}{131220} - \frac{1717789}{2430} \zeta(3) + \frac{9370}{27} \zeta(5) \right] a^3 \nonumber \\ && +~ O(a^4) \end{eqnarray} and \begin{eqnarray} \left. 
\Sigma^{(2) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& -~ \frac{1}{3} ~+~ \frac{214}{81} a ~+~ \left[ -~ \frac{32363}{5832} \Nf + \frac{4763093}{87480} - \frac{152}{15} \zeta(3) \right] a^2 \nonumber \\ && +~ \left[ \left( \frac{1227463}{157464} + \frac{400}{729} \zeta(3) \right) \Nf^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{1364405723}{4723920} - \frac{814}{405} \zeta(3) - \frac{1000}{81} \zeta(4) \right) \Nf \right. \nonumber \\ && \left. ~~~~~ +~ \left( \frac{8619089351}{4723920} - \frac{12125507}{32805} \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \frac{8599}{972} \zeta(4) + \frac{2525}{27} \zeta(5) \right) \right] a^3 ~+~ O(a^4) ~. \end{eqnarray} For the RI$^\prime$ scheme we note that \begin{eqnarray} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{RI$^\prime$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& \frac{4}{9} a ~+~ \left[ -~ \frac{50}{27} \Nf + \frac{44168}{1215} + \frac{56}{15} \zeta(3) \right] a^2 \nonumber \\ && +~ \left[ \frac{4210}{729} \Nf^2 + \left( -~ \frac{2738978}{10935} - \frac{28}{5} \zeta(3) \right) \Nf \right. \nonumber \\ && \left. ~~~~~ +~ \frac{326345791}{131220} - \frac{1678717}{2430} \zeta(3) + \frac{9370}{27} \zeta(5) \right] a^3 ~+~ O(a^4) \nonumber \\ \left. \Sigma^{(2) ~ {\mbox{\footnotesize{RI$^\prime$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& -~ \frac{1}{3} ~+~ O(a^4) ~. \end{eqnarray} Further, for the next Wilson operator \begin{eqnarray} \left. 
\Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& -~ 1 ~+~ \frac{7484}{675} a ~+~ \left[ -~ \frac{6041063}{243000} \Nf + \frac{431713457}{1822500} - \frac{524}{15} \zeta(3) \right] a^2 \nonumber \\ && +~ \left[ \left( \frac{1335574847}{32805000} + \frac{2512}{1215} \zeta(3) \right) \Nf^2 \right. \nonumber \\ && \left. ~~~~~ +~ \left( -~ \frac{1349388886469}{984150000} - \frac{43972}{1215} \zeta(3) - \frac{1256}{27} \zeta(4) \right) \Nf \right. \nonumber \\ && \left. ~~~~~ +~ \frac{706189399771421}{78732000000} - \frac{112503104}{54675} \zeta(3) \right. \nonumber \\ && \left. ~~~~~ +~ \frac{43507}{1620} \zeta(4) + \frac{7180}{9} \zeta(5) \right] a^3 ~+~ O(a^4) \end{eqnarray} and \begin{eqnarray} \left. \Sigma^{(2) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& \frac{1}{40} a ~+~ \left[ -~ \frac{119}{1200} \Nf + \frac{731129}{432000} + \frac{23}{90} \zeta(3) \right] a^2 \nonumber \\ && +~ \left[ \frac{51959}{162000} \Nf^2 + \left( -~ \frac{232632277}{19440000} - \frac{755}{972} \zeta(3) \right) \Nf \right. \nonumber \\ && \left. ~~~~~ +~ \frac{1047728166241}{9331200000} - \frac{109467991}{2916000} \zeta(3) + \frac{13111}{648} \zeta(5) \right] a^3 \nonumber \\ && +~ O(a^4) ~. \end{eqnarray} As $\left. \Sigma^{(1) ~ {\mbox{\footnotesize{RI$^\prime$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0}$~$=$~$(-1)$ by construction, we note \begin{eqnarray} \left. 
\Sigma^{(2) ~ {\mbox{\footnotesize{RI$^\prime$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& \frac{1}{40} a ~+~ \left[ -~ \frac{119}{1200} \Nf + \frac{850873}{432000} + \frac{23}{90} \zeta(3) \right] a^2 \nonumber \\ && +~ \left[ \frac{51959}{162000} \Nf^2 + \left( -~ \frac{266088707}{19440000} - \frac{755}{972} \zeta(3) \right) \Nf \right. \nonumber \\ && \left. ~~~~~ +~ \frac{435587120587}{3110400000} - \frac{20750459}{583200} \zeta(3) + \frac{13111}{648} \zeta(5) \right] a^3 \nonumber \\ && +~ O(a^4) ~. \end{eqnarray} For the transversity cases, when $n$~$=$~$3$ we have \begin{eqnarray} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& \frac{1}{18} \left[ 1 ~-~ \frac{218}{27} a ~+~ \left[ \frac{4394}{243} \Nf - \frac{669202}{3645} + \frac{452}{15} \zeta(3) \right] a^2 \right. \nonumber \\ && \left. ~~~~~+~ \left[ \left( -~ \frac{177970}{6561} - \frac{416}{243} \zeta(3) \right) \Nf^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ +~ \left( \frac{98639141}{98415} + \frac{12712}{1215} \zeta(3) + \frac{1040}{27} \zeta(4) \right) \Nf \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{1020141085}{157464} + \frac{59050063}{43740} \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~~ -~ \frac{7679}{324} \zeta(4) - \frac{12434}{27} \zeta(5) \right] a^3 \right] ~+~ O(a^4) \end{eqnarray} and \begin{equation} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{RI$^\prime$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} ~=~ \frac{1}{18} ~+~ O(a^4) ~. \end{equation} Finally, for the transversity moment $n$~$=$~$4$ we have \begin{eqnarray} \left. 
\Sigma^{(1) ~ {\mbox{\footnotesize{$\MSbar$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} &=& -~ \frac{1}{32} ~+~ \frac{293}{864} a ~+~ \left[ -~ \frac{118621}{155520} \Nf + \frac{13151593}{1866240} - \frac{85}{72} \zeta(3) \right] a^2 \nonumber \\ && +~ \left[ \left( \frac{25433893}{20995200} + \frac{16}{243} \zeta(3) \right) \Nf^2 \right. \nonumber \\ && \left. ~~~~~+~ \left( -~ \frac{20277921581}{503884800} - \frac{1943}{1944} \zeta(3) - \frac{40}{27} \zeta(4) \right) \Nf \right. \nonumber \\ && \left. ~~~~~+~ \frac{2013899793847}{8062156800} - \frac{146176079}{2799360} \zeta(3) \right. \nonumber \\ && \left. ~~~~~+~ \frac{2693}{3456} \zeta(4) + \frac{52991}{2592} \zeta(5) \right] a^3 ~+~ O(a^4) \end{eqnarray} with clearly \begin{equation} \left. \Sigma^{(1) ~ {\mbox{\footnotesize{RI$^\prime$}}} ~ \mbox{\footnotesize{finite}}}_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(p) \right|_{p^2 \, = \, \mu^2}^{SU(3), \alpha=0} ~=~ -~ \frac{1}{32} ~+~ O(a^4) ~. \end{equation} \sect{Conversion functions.} An additional check on our computations is provided by the conversion functions for each of the operators we have considered. These functions allow one to convert the anomalous dimension of the operator in one renormalization scheme to that in another scheme and are defined by the ratio of the renormalization constants in both schemes \begin{equation} C_{\cal O}(a,\alpha) ~=~ \frac{Z^{\mbox{\footnotesize{RI$^\prime$}}}_{\cal O}} {Z^{\mbox{\footnotesize{$\MSbar$}}}_{\cal O}} ~. 
\end{equation} Then, \cite{37}, \begin{eqnarray} \gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\cal O} \left(a_{\mbox{\footnotesize{RI$^\prime$}}}\right) &=& \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\cal O} \left(a_{\mbox{\footnotesize{$\MSbar$}}}\right) ~-~ \beta\left(a_{\mbox{\footnotesize{$\MSbar$}}}\right) \frac{\partial ~}{\partial a_{\mbox{\footnotesize{$\MSbar$}}}} \ln C_{\cal O} \left(a_{\mbox{\footnotesize{$\MSbar$}}}, \alpha_{\mbox{\footnotesize{$\MSbar$}}}\right) \nonumber \\ && -~ \alpha_{\mbox{\footnotesize{$\MSbar$}}} \gamma^{\mbox{\footnotesize{$\MSbar$}}}_\alpha \left(a_{\mbox{\footnotesize{$\MSbar$}}}\right) \frac{\partial ~}{\partial \alpha_{\mbox{\footnotesize{$\MSbar$}}}} \ln C_{\cal O} \left(a_{\mbox{\footnotesize{$\MSbar$}}}, \alpha_{\mbox{\footnotesize{$\MSbar$}}}\right) \end{eqnarray} where one needs to express the $\MSbar$ variables in terms of the RI$^\prime$ scheme using (\ref{cccon}) and (\ref{alpcon}) in order to compare with the anomalous dimensions from the explicit computation. We record that the conversion functions for the various operators we are interested in here are \begin{eqnarray} C_{\bar{\psi} \gamma^\mu D^\nu D^\sigma \psi}(a,\alpha) &=& 1 ~+~ \left( 27 \alpha + 107 \right) \frac{C_F a}{18} \nonumber \\ && +~ \left[ \left( 86400 \alpha^2 - 93312 \zeta(3) \alpha + 409104 \alpha - 715392 \zeta(3) + 3302464 \right) C_A \right. \nonumber \\ && \left. ~~~~+~ \left( 60480 \alpha^2 + 327600 \alpha + 373248 \zeta(3) + 327549 \right) C_F \right. \nonumber \\ && \left. ~~~~-~ 1475960 T_F \Nf \right] \frac{C_F a^2}{51840} \nonumber \\ && +~ \left[ \left( 11032200 \alpha^3 - 13915152 \zeta(3) \alpha^2 - 466560 \zeta(5) \alpha^2 + 64538856 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~-~ 141379344 \zeta(3) \alpha + 8398080 \zeta(5) \alpha + 319887792 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~-~ 635381280 \zeta(3) + 25660800 \zeta(4) + 142767360 \zeta(5) \right. \right. \nonumber \\ && \left. \left. 
~~~~~~~+~ 2356357048 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~+~ \left( 4607280 \alpha^3 + 5785344 \zeta(3) \alpha^2 + 32256792 \alpha^2 - 13841280 \zeta(3) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 33592320 \zeta(5) \alpha + 180233856 \alpha - 297743040 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 76982400 \zeta(4) - 59719680 \zeta(5) + 1051633031 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~+~ \left( 35085312 \zeta(3) \alpha - 97555104 \alpha - 75098880 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 93312000 \zeta(4) - 1625432200 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 2177280 \alpha^3 - 23328000 \zeta(3) \alpha^2 + 27799200 \alpha^2 - 73685376 \zeta(3) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 90339057 \alpha + 319139136 \zeta(3) + 51321600 \zeta(4) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 201553920 \zeta(5) - 607345686 \right) C_F^2 \right. \nonumber \\ && \left. ~~~~-~ \left( 31104000 \zeta(3) \alpha + 63327960 \alpha - 303948288 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 93312000 \zeta(4) + 922104436 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 13824000 \zeta(3) + 250653280 \right) T_F^2 \Nf^2 \right] \frac{C_F a^3}{2799360} ~+~ O(a^4) \end{eqnarray} \begin{eqnarray} C_{\bar{\psi} \gamma^\mu D^\nu D^\sigma D^\rho \psi}(a,\alpha) &=& 1 ~+~ \left( 525 \alpha + 1871 \right) \frac{C_F a}{225} \nonumber \\ && +~ \left[ \left( 18315000 \alpha^2 - 23328000 \zeta(3) \alpha + 88528500 \alpha - 103680000 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~+~ 603802180 \right) C_A \right. \nonumber \\ && \left. ~~~~+~ \left( 21330000 \alpha^2 + 119182200 \alpha + 62208000 \zeta(3) + 98349057 \right) C_F \right. \nonumber \\ && \left. 
~~~~-~ 264322520 T_F \Nf \right] \frac{C_F a^2}{6480000} \nonumber \\ && +~ \left[ \left( 239334750000 \alpha^3 - 340977600000 \zeta(3) \alpha^2 - 2916000000 \zeta(5) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~+~ 1428857212500 \alpha^2 - 3356364600000 \zeta(3) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~+~ 174960000000 \zeta(5) \alpha + 7142849746875 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~-~ 12700608624000 \zeta(3) + 335689920000 \zeta(4) \right. \right. \nonumber \\ && \left. \left. ~~~~~~+~ 2504844000000 \zeta(5) + 48069511158775 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~+~ \left( 229047750000 \alpha^3 - 142300800000 \zeta(3) \alpha^2 + 1689792435000 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 1524733632000 \zeta(3) \alpha + 839808000000 \zeta(5) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 9165889557000 \alpha - 1770530400000 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 1007069760000 \zeta(4) + 653184000000 \zeta(5) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 18744964493980 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~+~ \left( 816480000000 \zeta(3) \alpha - 2158137000000 \alpha - 1801163520000 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 1464998400000 \zeta(4) - 31372620527200 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 145435500000 \alpha^3 - 366249600000 \zeta(3) \alpha^2 + 1372611420000 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 1048904640000 \zeta(3) \alpha + 3148063990200 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 4800009888000 \zeta(3) + 671379840000 \zeta(4) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 3359232000000 \zeta(5) - 8872064364708 \right) C_F^2 \right. \nonumber \\ && \left. ~~~~-~ \left( 488332800000 \zeta(3) \alpha + 2684380392000 \alpha - 4552485120000 \zeta(3) \right. \right. 
\nonumber \\ && \left. \left. ~~~~~~~~~~-~ 1464998400000 \zeta(4) + 18121905428720 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 217036800000 \zeta(3) + 4952079510400 \right) T_F^2 \Nf^2 \right] \frac{C_F a^3}{34992000000} \nonumber \\ && ~+~ O(a^4) \end{eqnarray} \begin{eqnarray} C_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho \psi}(a,\alpha) &=& 1 ~+~ \left( 33 \alpha + 109 \right) \frac{C_F a}{18} \nonumber \\ && +~ \left[ \left( 6600 \alpha^2 - 7776 \zeta(3) \alpha + 32067 \alpha - 47952 \zeta(3) + 228974 \right) C_A \right. \nonumber \\ && \left. ~~~~+~ \left( 6480 \alpha^2 + 30900 \alpha + 31104 \zeta(3) + 10917 \right) C_F \right. \nonumber \\ && \left. ~~~~-~ 99220 T_F \Nf \right] \frac{C_F a^2}{3240} \nonumber \\ && +~ \left[ \left( 3407670 \alpha^3 - 4575204 \zeta(3) \alpha^2 - 58320 \zeta(5) \alpha^2 + 20121777 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~-~ 46022256 \zeta(3) \alpha + 2799360 \zeta(5) \alpha + 100170405 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~-~ 196675020 \zeta(3) + 3732480 \zeta(4) + 45081360 \zeta(5) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ 693358478 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~+~ \left( 2334420 \alpha^3 - 46656 \zeta(3) \alpha^2 + 15956352 \alpha^2 - 12324960 \zeta(3) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 11197440 \zeta(5) \alpha + 86296752 \alpha - 34434720 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 11197440 \zeta(4) + 214883180 \right) C_A C_F \right. \nonumber \\ && \left. ~~~~+~ \left( 11384064 \zeta(3) \alpha - 30726648 \alpha - 17280000 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 24261120 \zeta(4) - 463372640 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 1399680 \alpha^3 - 6065280 \zeta(3) \alpha^2 + 13024800 \alpha^2 - 13965696 \zeta(3) \alpha \right. \right. \nonumber \\ && \left. \left. 
~~~~~~~~~~+~ 26888652 \alpha + 108183168 \zeta(3) + 7464960 \zeta(4) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 22394880 \zeta(5) - 153974772 \right) C_F^2 \right. \nonumber \\ && \left. ~~~~-~ \left( 8087040 \zeta(3) \alpha + 27287280 \alpha - 69133824 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 24261120 \zeta(4) + 231549328 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 3594240 \zeta(3) + 70515200 \right) T_F^2 \Nf^2 \right] \frac{C_F a^3}{699840} ~+~ O(a^4) \end{eqnarray} and \begin{eqnarray} C_{\bar{\psi} \sigma^{\mu\nu} D^\sigma D^\rho D^\lambda \psi}(a,\alpha) &=& 1 ~+~ \left( 75 \alpha + 293 \right) \frac{C_F a}{36} \nonumber \\ && +~ \left[ \left( 64890 \alpha^2 - 77760 \zeta(3) \alpha + 309195 \alpha - 414720 \zeta(3) + 2302897 \right) C_A \right. \nonumber \\ && \left. ~~~~+~ \left( 63720 \alpha^2 + 380940 \alpha + 207360 \zeta(3) + 404940 \right) C_F \right. \nonumber \\ && \left. ~~~~-~ 1039688 T_F \Nf \right] \frac{C_F a^2}{25920} \nonumber \\ && +~ \left[ \left( 677127600 \alpha^3 - 918993600 \zeta(3) \alpha^2 - 18662400 \zeta(5) \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ 4008299580 \alpha^2 - 9063653760 \zeta(3) \alpha + 466560000 \zeta(5) \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ 19846116045 \alpha - 36380232000 \zeta(3) + 783820800 \zeta(4) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~+~ 8939289600 \zeta(5) + 140182687541 \right) C_A^2 \right. \nonumber \\ && \left. ~~~~+~ \left( 517071600 \alpha^3 - 102643200 \zeta(3) \alpha^2 + 3840647400 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 3332482560 \zeta(3) \alpha + 2239488000 \zeta(5) \alpha + 21797212320 \alpha \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 9369146880 \zeta(3) - 2351462400 \zeta(4) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 447897600 \zeta(5) + 58907275400 \right) C_A C_F \right. \nonumber \\ && \left. 
~~~~+~ \left( 2217093120 \zeta(3) \alpha - 6005931840 \alpha - 6177116160 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 4777574400 \zeta(4) - 94522187168 \right) C_A T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 279936000 \alpha^3 - 1194393600 \zeta(3) \alpha^2 + 3013848000 \alpha^2 \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 4503859200 \zeta(3) \alpha + 8684647200 \alpha + 18380113920 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~+~ 1567641600 \zeta(4) + 2985984000 \zeta(5) - 24416900640 \right) C_F^2 \right. \nonumber \\ && \left. ~~~~-~ \left( 1592524800 \zeta(3) \alpha + 6750907200 \alpha - 16028098560 \zeta(3) \right. \right. \nonumber \\ && \left. \left. ~~~~~~~~~~-~ 4777574400 \zeta(4) + 57917250880 \right) C_F T_F \Nf \right. \nonumber \\ && \left. ~~~~+~ \left( 707788800 \zeta(3) + 15192521216 \right) T_F^2 \Nf^2 \right] \frac{C_F a^3}{111974400} \nonumber \\ && +~ O(a^4) \end{eqnarray} where the coupling constant and gauge parameter are in the $\MSbar$ scheme. We note that the same RI$^\prime$ anomalous dimensions are determined as previously. \sect{Discussion.} We have provided the finite parts of various Green's functions required for the renormalization of the $n$~$=$~$3$ and $4$ moments of the non-singlet twist-$2$ Wilson and transversity operators at three loops in both the $\MSbar$ and RI$^\prime$ schemes. Since these are available at several loop orders, the hope is that they will be central to the extraction of accurate values for the matrix elements which will be measured on the lattice. From another point of view the new $\MSbar$ anomalous dimensions which are now available for the moments up to and including $4$ for the transversity operator will provide a useful check on the full $n$-dependent three loop transversity anomalous dimensions when they are eventually determined. 
The impressive symbolic manipulation machinery which produced the arbitrary $n$ anomalous dimensions for the twist-$2$ flavour non-singlet and singlet Wilson operators, \cite{1,2,3,4}, could in principle be applied to the transversity case. In the interim, however, one could follow a direction similar to the earlier approach of \cite{39,40}, where the anomalous dimensions of the Wilson operators were determined to moment $n$~$=$~$10$, and later to higher moments, $n$~$\leq$~$16$ (except $n$~$=$~$14$), \cite{45,46}. These explicit moments were then used to construct solid approximations to the full anomalous dimensions, which were shown to be credible over a substantial range of the $x$ variable in, for example, \cite{47}. Given the advance in computer capabilities since \cite{39,40}, a fixed moment computation of the higher moments of the transversity anomalous dimensions would seem to us certainly viable. Moreover, such a computation would not be constrained to an arbitrary covariant gauge, and working in the Feynman gauge would therefore reduce the computer time required. \vspace{1cm} \noindent {\bf Acknowledgements.} The author thanks Dr R. Horsley, Dr P.E.L. Rakow and Dr C. McNeile for valuable discussions.
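As an illustrative aside, the scheme-conversion relation of section~7, $\gamma^{\mbox{\footnotesize{RI$^\prime$}}}_{\cal O} = \gamma^{\mbox{\footnotesize{$\MSbar$}}}_{\cal O} - \beta \, \partial_a \ln C_{\cal O} - \alpha \gamma_\alpha \, \partial_\alpha \ln C_{\cal O}$, can be checked order by order with a short symbolic computation. The sketch below uses generic placeholder coefficients (\texttt{b0}, \texttt{ga0}, \texttt{g0}, \texttt{g1}, \texttt{c1}), not the operator-specific values quoted above; it verifies only the generic structure of the relation at two loops.

```python
# Sketch: propagate a one-loop conversion function through
#   gamma^{RI'}(a) = gamma^{MSbar}(a) - beta(a) d/da ln C(a,alpha)
#                    - alpha gamma_alpha(a) d/dalpha ln C(a,alpha)
# truncated at O(a^2).  All coefficients are generic placeholders.
import sympy as sp

a, alpha = sp.symbols('a alpha')
b0, ga0, g0, g1 = sp.symbols('b0 ga0 g0 g1')
c1 = sp.Function('c1')(alpha)        # generic one-loop conversion coefficient

beta = -b0*a**2                      # beta function to leading order
gamma_alpha = ga0*a                  # gauge-parameter anomalous dimension
gamma_MS = g0*a + g1*a**2            # MSbar operator anomalous dimension
C = 1 + c1*a                         # conversion function to one loop

lnC = sp.log(C)
gamma_RI = (gamma_MS
            - beta*sp.diff(lnC, a)
            - alpha*gamma_alpha*sp.diff(lnC, alpha))

# Expand in a: the one-loop terms of the two schemes coincide, and
# scheme dependence first enters at two loops.
gamma_RI_series = sp.expand(sp.series(gamma_RI, a, 0, 3).removeO())
print(sp.collect(gamma_RI_series, a))
```

The expansion gives $\gamma^{\mbox{\footnotesize{RI$^\prime$}}} = g_0 a + (g_1 + b_0 c_1 - \alpha\, \gamma_{\alpha,0}\, \partial_\alpha c_1)\, a^2 + O(a^3)$, exhibiting the expected one-loop scheme independence.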
\section{Introduction} The No-Cloning theorem \cite{Dieks,WZ} is a basic limitative result for quantum mechanics, with particular significance for quantum information. It says that there is no unitary operation which makes perfect copies of an unknown (pure) quantum state. A stronger form of this result is the No-Broadcasting theorem \cite{Broadcast}, which applies to mixed states. There is also a No-Deleting theorem \cite{Pati}. Recently, the author and Bob Coecke have introduced a categorical formulation of Quantum Mechanics \cite{AC2,AC3,AC4}, as a basis for a more structural, high-level approach to quantum information and computation. This has been elaborated by ourselves, our colleagues, and other workers in the field \cite{Abr0,Abr1,Abr2,AD,CPav,CD,Selinger,Vicary}, and has been shown to yield an effective and illuminating treatment of a wide range of topics in quantum information. Diagrammatic calculi for tensor categories \cite{JoyalStreet,Turaev}, suitably extended to incorporate the various additional structures which have been used to reflect fundamental features of quantum mechanics, play an important r\^ole, both as an intuitive and vivid visual presentation of the formalism, and as an effective calculational device. It is clear that such a novel reformulation of the mathematical formalism of quantum mechanics, a subject more or less set in stone since Von Neumann's classic treatise \cite{vN}, has the potential to yield new insights into the foundations of quantum mechanics. In the present paper, we shall use it to open up a novel perspective on No-Cloning. What we shall find, quite unexpectedly, is a link to some fundamental issues in logic, computation, and the foundations of mathematics. 
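For reference, the linearity argument underlying the No-Cloning theorem can be summarized in a few lines; this is the standard Hilbert-space derivation, not specific to the categorical treatment developed below.

```latex
% Suppose a single unitary U copied every pure state:
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle .
% Comparing inner products for two candidate states |\psi\rangle, |\varphi\rangle:
\begin{align*}
\langle \psi | \varphi \rangle
  &= \big( \langle \psi | \otimes \langle 0 | \big) \, U^{\dagger} U \,
     \big( | \varphi \rangle \otimes | 0 \rangle \big) \\
  &= \big( \langle \psi | \otimes \langle \psi | \big)
     \big( | \varphi \rangle \otimes | \varphi \rangle \big)
   \;=\; \langle \psi | \varphi \rangle^{2} .
\end{align*}
```

Hence $\langle \psi | \varphi \rangle \in \{0,1\}$: only families of pairwise orthogonal states can be copied by a single unitary, so no unitary operation clones an unknown state.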
A striking feature of our results is that they are visibly in the same genre as a well-known result by Joyal in categorical logic \cite{LS} showing that a `Boolean cartesian closed category' trivializes, which provides a major road-block to the computational interpretation of classical logic. In fact, they strengthen Joyal's result, insofar as the assumption of a full categorical product (diagonals \emph{and} projections) in the presence of a classical duality is weakened. This shows a heretofore unsuspected connection between limitative results in proof theory and No-Go theorems in quantum mechanics. The further contents of the paper are as follows: \begin{itemize} \item In the next section, we shall briefly review the three-way link between logic, computation and categories, and recall Joyal's lemma. \item In section~3, we shall review the categorical approach to quantum mechanics. \item Our main results are in section~4, where we prove our limitative result, which shows the incompatibility of structural features corresponding to quantum entanglement (essentially, the existence of Bell states enabling teleportation) with the existence of a `natural' (in the categorical sense, corresponding essentially to \emph{basis-independent}) copying operation. This result is mathematically robust, since it is proved in a very general context, and has a topological content which is clearly revealed by a diagrammatic proof. At the same time it is delicately poised, since \emph{non-natural}, basis-dependent copying operations do in fact play a key r\^ole in the categorical formulation of quantum notions of measurement. We discuss this context, and the conceptual reading of the results. \item We conclude with some discussion of extensions of the results, further directions, and open problems. 
\end{itemize} \section{Categories, Logic and Computational Content: Joyal's Lemma} Categorical logic \cite{LS} and the Curry-Howard correspondence in Proof Theory \cite{CH} give us a beautiful three-way correspondence: \begin{diagram} \mbox{Logic} & & \rLRTo & & \mbox{Computation} \\ & \luLRTo & & \ruLRTo & \\ & & \mbox{Categories} & & \end{diagram} More particularly, we have as a paradigmatic example: \begin{diagram} \mbox{Intuitionistic Logic} & & \rLRTo & & \mbox{$\lambda$-calculus} \\ & \luLRTo & & \ruLRTo & \\ & & \mbox{Cartesian Closed Categories} & & \end{diagram} Here we are focussing on the fragment of intuitionistic logic containing conjunction and implication, and the simply-typed $\lambda$-calculus with product types. We shall assume familiarity with basic notions of category theory \cite{Mac,LS2}. Recall that a cartesian closed category is a category with a terminal object, binary products and exponentials. The basic cartesian closed adjunction is \[ \mathcal{C}(A \times B, C) \cong \mathcal{C}(A, B \Rightarrow C) \, . 
\] More explicitly, a category $\mathcal{C}$ with finite products \emph{has exponentials} if for all objects $A$ and $B$ of $\mathcal{C}$ there is a couniversal arrow from $- \times A$ to $B$, \textit{i.e.} an object $A \Rightarrow B$ of $\mathcal{C}$ and a morphism \[ \App[A,B] : (A \Rightarrow B) \times A \longrightarrow B \] with the couniversal property: for every $g : C \times A \longrightarrow B$, there is a unique morphism $\Lambda (g) : C \longrightarrow A \Rightarrow B$ such that \[ \begin{diagram} A \Rightarrow B \\ \uDashto^{\Lambda (g)} \\ C \\ \end{diagram} \qquad \qquad \begin{diagram} (A \Rightarrow B) \times A & \rTo^{\App[A,B]} & B \\ \uDashto^{\Lambda (g) \times \id{A}} & \ruTo_g & \\ C \times A & & \\ \end{diagram} \] \noindent The correspondence between the intuitionistic logic of conjunction and implication and cartesian closed categories is summarized in the following table: \begin{center} \renewcommand{\arraystretch}{0.5}\fbox{$\begin{array}{c||@{\;\;}c@{\;\;}|@{\;\;}c@{\;\;}} &&\\ \textbf{Axiom} & \infer[\mathsf{Id}]{\Gamma,A\vdash A}{} & \infer{\pi_{2}:\Gamma\times A\longrightarrow A}{} \\&&\\\hline&&\\ \textbf{Conjunction} & \infer[{\wedge}\mathsf{I}]{\Gamma\vdash A\wedge B}{\Gamma\vdash A\qquad\Gamma\vdash B} & \infer{\langle f,g\rangle:\Gamma\longrightarrow A \times B}{f : \Gamma \longrightarrow A \qquad g : \Gamma \longrightarrow B} \\&&\\ & \infer[{\wedge}\mathsf{E}_1]{\Gamma \vdash A}{\Gamma \vdash A \wedge B} & \infer{\pi_{1} \circ f : \Gamma \longrightarrow A}{f : \Gamma \longrightarrow A \times B} \\&&\\ & \infer[{\wedge}\mathsf{E}_2]{\Gamma \vdash B}{\Gamma \vdash A \wedge B} & \infer{\pi_{2}\circ f : \Gamma \longrightarrow B}{f : \Gamma \longrightarrow A \times B} \\&&\\\hline&&\\ \textbf{Implication} & \infer[{\mathnormal\supset}\mathsf{I}]{\Gamma\vdash A\supset B}{\Gamma,A\vdash B} & \infer{\Lambda(f):\Gamma\longrightarrow(A\Rightarrow B)}{f:\Gamma \times A\longrightarrow B} \\&&\\ &
\infer[{\mathnormal\supset}\mathsf{E}]{\Gamma\vdash B}{\Gamma\vdash A\supset B \quad\Gamma\vdash A} & \infer{\App[A,B]\circ\langle f,g\rangle:\Gamma\longrightarrow B}{f : \Gamma \longrightarrow (A \Rightarrow B) \quad g : \Gamma \rightarrow A}\\&&\\ \end{array}$} \end{center} \subsection{Joyal's Lemma} \label{JLsec} It is a very natural idea to seek to extend the correspondence shown above to the case of \emph{classical logic}. Joyal's lemma shows that there is a fundamental impediment to doing so.\footnote{It is customary to refer to this result as Joyal's lemma, although, apparently, he never published it. The usual reference is \cite{LS}, who attribute the result to Joyal, but follow the proof given by Freyd \cite{Freyd}. Our statement and proof are somewhat different to those in \cite{LS}.} The natural extension of the notion of cartesian closed category, which corresponds to the \emph{intuitionistic logic} of conjunction and implication, to the classical case is to introduce a suitable notion of classical negation. We recall that it is customary in intuitionistic logic to \emph{define} the negation by \[ \neg A := A \supset \bot \] where $\bot$ is the \emph{falsum}. The characteristic property of the falsum is that it implies every proposition. In categorical terms, this translates into the notion of an initial object. Note that for any fixed object $B$ in a cartesian closed category, there is a well-defined contravariant functor \[ \mathcal{C} \longrightarrow \op{\mathcal{C}} :: A \mapsto (A \Rightarrow B) \, . \] This will always satisfy the properties corresponding to negation in minimal logic, and if $B = \bot$ is the initial object in $\mathcal{C}$, then it will satisfy the laws of intuitionistic negation. In particular, there is a canonical arrow \[ A \longrightarrow (A \Rightarrow \bot) \Rightarrow \bot \] which is just the curried form of the evaluation morphism. This corresponds to the valid intuitionistic principle $A \supset \neg \neg A$.
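Under the Curry-Howard correspondence, this canonical arrow is the continuation-passing transform familiar from functional programming: a value of type $A$ is sent to the function which applies any given continuation to it. A small illustrative sketch in Python, with an arbitrary answer type standing in for $\bot$:

```python
# The canonical arrow A --> (A => R) => R, i.e. the curried evaluation
# map, is the continuation-passing transform: it witnesses A > ~~A,
# with an arbitrary answer type R standing in for the falsum.

def double_neg_intro(a):
    """Send a value to the function applying any continuation to it."""
    return lambda k: k(a)

lifted = double_neg_intro(42)
assert lifted(str) == "42"            # continuation str : int -> str
assert lifted(lambda x: x + 1) == 43  # another continuation
```

No uniform inverse $((A \Rightarrow R) \Rightarrow R) \to A$ can be written in this style; demanding one naturally is precisely the classical duality which Joyal's lemma shows to be degenerate in the cartesian closed setting.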
What else is needed in order to obtain classical logic? As is well known, the missing principle is that of \emph{proof by contradiction}: the converse implication $\neg \neg A \supset A$. This leads us to the following notion. A \emph{dualizing object} $\bot$ in a closed category is one for which the canonical arrow \[ A \longrightarrow (A \Rightarrow \bot) \Rightarrow \bot \] is an isomorphism for all $A$. We can now state Joyal's lemma: \begin{proposition}[Joyal's Lemma] Any cartesian closed category with a dualizing object is a preorder (hence \emph{trivial} as a semantics for proofs or computational processes). \end{proposition} \begin{proof} Note firstly that, if $\bot$ is dualizing, the induced negation functor $\mathcal{C} \longrightarrow \op{\CC}$ is a \emph{contravariant equivalence} $\mathcal{C} \simeq \op{\CC}$. Since $(\top \Rightarrow A) \cong A$ where $\top$ is the terminal object, it follows that $\bot$ is the dual of $\top$, and hence initial. So it suffices to prove Joyal's lemma under the assumption that the dualizing object is initial. We assume that $\bot$ is a dualizing initial object in a cartesian closed category $\mathcal{C}$. We write $\iota_{C} : \bot \rightarrow C$ for the unique arrow given by initiality. Note that $\mathcal{C}(A \times \bot, A \times \bot) \cong \mathcal{C}(\bot, A \Rightarrow (A \times \bot))$, which is a singleton by initiality. It follows that $\iota_{A \times \bot} \circ \pi_{2} = \id{A \times \bot}$, while $\pi_{2} \circ \iota_{A \times \bot} = \id{\bot}$ by initiality. Hence $A \times \bot \cong \bot$.\footnote{A slicker proof simply notes that $A \times (-)$ is a left adjoint by cartesian closure, and hence preserves all colimits, in particular initial objects.} Now \begin{equation} \label{JLeq} \mathcal{C}(A, B) \cong \mathcal{C}(B \Rightarrow \bot, A \Rightarrow \bot) \cong \mathcal{C}((B \Rightarrow \bot) \times A, \bot) . 
\end{equation} Given any $h, k : C \longrightarrow \bot$, note that \[ h = \pi_{1} \circ \langle h, k \rangle , \qquad k = \pi_{2} \circ \langle h, k \rangle . \] But $\bot \times \bot \cong \bot$, hence by initiality $\pi_{1} = \pi_{2}$, and so $h = k$, which by (\ref{JLeq}) implies that $f = g$ for $f, g : A \longrightarrow B$. \end{proof} \subsection{Linearity and Classicality} However, we know from Linear Logic that there is no impediment to having a closed structure with a dualizing object, \emph{provided} we weaken our assumption on the underlying context-building structure, from \emph{cartesian} $\times$ to \emph{monoidal} $\otimes$. Then we get a wealth of examples of \emph{$*$-autonomous categories} \cite{Barr}, which stand to Multiplicative Linear Logic as cartesian closed categories do to Intuitionistic Logic \cite{Seely}. Joyal's lemma can thus be stated in the following equivalent form. \begin{proposition} A $*$-autonomous category in which the monoidal structure is cartesian is a preorder. \end{proposition} Essentially, a cartesian structure is a monoidal structure plus natural diagonals, and with the tensor unit a terminal object, \textit{i.e.}\ \emph{plus cloning and deleting}! \section{Categorical Quantum Mechanics} In this section, we shall provide a brief review of the structures used in categorical quantum mechanics, their graphical representation, and how these structures are used in formalizing some key features of quantum mechanics. Further details can be found elsewhere \cite{AC4,Abr1,Selinger}. 
\subsection{Symmetric Monoidal Categories} We recall that a \emph{monoidal category} is a structure $(\mathcal{C} , \otimes , I, a, l, r)$ where: \begin{itemize} \item $\mathcal{C}$ is a category, \item $\otimes : \mathcal{C} \times \mathcal{C} \rightarrow\mathcal{C}$ is a functor (\emph{tensor}), \item $I$ is a distinguished object of $\mathcal{C}$ (\emph{unit}), \item $a$, $l$, $r$ are natural isomorphisms (\emph{structural isos}) with components: \[ a_{A, B, C} : A \otimes (B \otimes C) \cong (A \otimes B) \otimes C \] \[ l_A : I \otimes A \cong A \qquad \quad r_A : A \otimes I \cong A \] \end{itemize} such that certain diagrams commute, which ensure \emph{coherence} \cite{Mac}, described by the slogan: \begin{center} \fbox{All diagrams only involving $a$, $l$ and $r$ must commute.} \end{center} Examples: \begin{itemize} \item Both products and coproducts give rise to monoidal structures\,---\,which are the common denominator between them. (But in addition, products have \emph{diagonals} and \emph{projections}, and coproducts have \emph{codiagonals} and \emph{injections}.) \item $(\mathbb{N}, {\leqslant} , + , 0)$ is a monoidal category. \item $\textbf{Rel}$, the category of sets and relations, with cartesian product (which is \emph{not} the categorical product). \item $\textbf{Vect}_k$ with the standard tensor product. \end{itemize} Let us examine the example of $\textbf{Rel}$ in some detail. We take $\otimes$ to be the cartesian product, which is defined on relations $R:X\rightarrow X'$ and $S:Y\rightarrow Y'$ as follows. \[ \forall(x,y)\in X\times Y,(x',y')\in X'\times Y'.\; (x,y)R\otimes S(x',y')\iff xRx'\land ySy'\,. \] It is not difficult to show that this is indeed a functor. Note that, in the case that $R,S$ are \emph{functions}, $R\otimes S$ is the same as $R\times S$ in \textbf{Set}. Moreover, we take each $a_{A,B,C}$ to be the associativity function for products (in \textbf{Set}), which is an iso in $\textbf{Set}$ and hence also in \textbf{Rel}. 
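In concrete terms, modelling a relation as a Python set of pairs, the tensor just defined, and its functoriality, can be sketched as follows (the helper names are ours, not part of any library):

```python
# A relation X -> Y is modelled as a set of pairs; "tensor" is the
# cartesian-product monoidal structure on Rel described in the text.

def compose(S, R):
    """Relational composition S o R."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def tensor(R, S):
    """(x, y)  R (x) S  (x', y')   iff   x R x'  and  y S y'."""
    return {((x, y), (x2, y2)) for (x, x2) in R for (y, y2) in S}

# Spot check of functoriality:
# (R2 o R1) (x) (S2 o S1) == (R2 (x) S2) o (R1 (x) S1).
R1, R2 = {(1, 'a'), (2, 'b')}, {('a', 'A')}
S1, S2 = {(True, 0)}, {(0, 'z')}
assert tensor(compose(R2, R1), compose(S2, S1)) == \
       compose(tensor(R2, S2), tensor(R1, S1))
```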
Finally, we take $I$ to be the one-element set, and $l_A,r_A$ to be the projection functions: their relational converses are their inverses in \textbf{Rel}. The monoidal coherence diagrams commute simply because they commute in $\textbf{Set}$. \paragraph{Tensors and products} As mentioned earlier, products are tensors with extra structure: natural diagonals and projections, corresponding to cloning and deleting operations. This fact is expressed more precisely as follows. \begin{proposition} Let $\mathcal{C}$ be a monoidal category $(\mathcal{C},\otimes,I,a,l,r)$. The tensor $\otimes$ induces a product structure iff there exist natural diagonals and projections, i.e.~natural transformations \[ \Delta_A:A\longrightarrow A\otimes A\,,\qquad p_{A,B}:A\otimes B\longrightarrow A\,,\qquad q_{A,B}:A\otimes B\longrightarrow B\,, \] such that the following diagrams commute. \[ \begin{diagram} & & A & & \\ & \ldTo^{\id{A} } & \dTo_{\Delta_{A}} & \rdTo^{\id{A}} & \\ A & \lTo_{p_{A, A}} & A \otimes A & \rTo_{q_{A, A}} & A \end{diagram} \qquad \begin{diagram}[4em] A \otimes B & \rTo^{\Delta_{A \otimes B}} & (A \otimes B) \otimes (A \otimes B) \\ & \rdTo_{\id{A \otimes B}} & \dTo_{p_{A, B} \otimes q_{A, B}} \\ & & A \otimes B \end{diagram} \] \end{proposition} \paragraph{Symmetry} A \emph{symmetric monoidal category} is a monoidal category $(\mathcal{C},\otimes,I,a,l,r)$ with an additional natural isomorphism (\emph{symmetry}), \[ \sigma_{A, B} : A \otimes B \cong B \otimes A \] such that $\sigma_{B,A}=\sigma_{A,B}^{-1}$, and some additional coherence diagrams commute. \subsection{Scalars} Let $(\mathcal{C} , \otimes , I, a, l, r)$ be a monoidal category. We define a \emph{scalar} in $\mathcal{C}$ to be a morphism $s : I \rightarrow I$, \textit{i.e.}\ an endomorphism of the tensor unit.
\begin{example} In $\mathbf{FdVec}_{\mathbb{K}}$, linear maps $\mathbb{K} \to \mathbb{K}$ are uniquely determined by the image of $1$, and hence correspond biuniquely to elements of $\mathbb{K}\,$; composition corresponds to multiplication of scalars. In $\mathbf{Rel}$\index{category of relations}, there are just two scalars, corresponding to the Boolean values $0$, $1$. \end{example} \noindent The (multiplicative) monoid of scalars is then just the endomorphism monoid $\mathcal{C} (I , I )$. The first key point is the elementary but beautiful observation by Kelly and Laplaza \cite{KL} that this monoid is always commutative. \begin{lemma} \label{scprop} $\mathcal{C} (I , I )$ is a commutative monoid. \end{lemma} \begin{proof} \[ \begin{diagram} I & \rTo^{r_{I}^{-1}} & I \otimes I & \rEq & I \otimes I & \rTo^{l_{I}} & I \\ \uTo^{s} & & \uTo^{s \otimes 1} & & \dTo_{1 \otimes t} & & \dTo_{t} \\ I & \rTo^{r_{I}^{-1}} & I \otimes I & \rTo^{s \otimes t} & I \otimes I & \rTo^{l_{I}} & I \\ \dTo^{t} & & \dTo^{1 \otimes t} & & \uTo_{s \otimes 1} & & \uTo_{s} \\ I & \rTo_{l_{I}^{-1}} & I \otimes I & \rEq & I \otimes I & \rTo_{r_{I}} & I \\ \end{diagram} \] using the coherence equation $l_{I} = r_{I}$. \end{proof} The second point is that a good notion of \emph{scalar multiplication} exists at this level of generality. That is, each scalar $s : I \rightarrow I$ induces a natural transformation \[ \begin{diagram} s_A : A & \rTo^{\simeq} & I \otimes A & \rTo^{s \otimes 1_A} & I \otimes A & \rTo^{\simeq} & A\,. \end{diagram} \] with the naturality square \[ \begin{diagram} A & \rTo^{s_A} & A \\ \dTo^{f} & & \dTo_{f} \\ B & \rTo_{s_B} & B \\ \end{diagram} \] We write $s \bullet f$ for $f \circ s_A=s_B\circ f$.
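Continuing the $\mathbf{Rel}$ example, the two scalars compose by conjunction, and the induced scalar multiplication either preserves or annihilates a relation. A sketch, with our own encoding of the two scalars as Booleans:

```python
# In Rel the tensor unit I is a one-point set {*}; a scalar I -> I is
# either the empty relation (here False) or the identity (here True),
# and scalar composition is conjunction -- visibly commutative.

def scalar_mult(s, R):
    """The action s . R of a scalar on a relation: keep R or annihilate it."""
    return set(R) if s else set()

R = {(1, 'a'), (2, 'b')}
# The action law  s . (t . R) == (s o t) . R,  with composition as "and":
for s in (True, False):
    for t in (True, False):
        assert scalar_mult(s, scalar_mult(t, R)) == scalar_mult(s and t, R)
```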
Note that \[ \begin{array}{lcl} 1 \bullet f & = & f \label{sdotident} \\ s \bullet (t \bullet f) & = & (s \circ t) \bullet f \label{sdotact}\\ (s \bullet g)\circ(t \bullet f) & = & (s\circ t)\bullet(g\circ f) \label{sdotcomp}\\ (s \bullet f) \otimes (t \bullet g) & = & (s \circ t) \bullet (f \otimes g) \label{sdotten} \end{array} \] which exactly generalizes the multiplicative part of the usual properties of scalar multiplication. Thus scalars act globally on the whole category. \subsection{Compact Closed Categories} A category {\bf C} is \em $*$-autonomous \em \cite{Barr} if it is symmetric monoidal, and comes equipped with a full and faithful functor \[ (\ )^*:{\bf C}^{op}\to{\bf C} \] such that a bijection \[ {\bf C}(A\otimes B,C^*)\simeq {\bf C}(A,(B\otimes C)^*) \] exists which is natural in all variables. Hence a $*$-autonomous category is closed, with \[ {A\multimap B:=(A\otimes B^*)^*}\,. \] These $*$-autonomous categories provide a categorical semantics for the multiplicative fragment of linear logic \cite{Seely}. A \em compact closed category \em \cite{KL} is a $*$-autonomous category with a self-dual tensor\index{compact closure}, i.e.~with natural isomorphisms \[ u_{A,B}:(A\otimes B)^*\simeq A^*\otimes B^* \qquad u_{I} : I^* \simeq I\,. \] It follows that \[ A\multimap B\simeq A^*\otimes B\,. \] \noindent A very different definition arises when one considers a symmetric monoidal category as a one-object bicategory. In this context, compact closure simply means that every object $A$, qua 1-cell of the bicategory, has a specified adjoint \cite{KL}. 
\begin{definition}[Kelly-Laplaza]\label{def:compclos}\em A \em compact closed category \em is a symmetric monoidal category in which to each object $A$ a \em dual object \em $A^*$, a \em unit \em \[ \eta_A:{\rm I}\to A^*\otimes A \] and a \em counit \em \[ \epsilon_A:A\otimes A^*\to {\rm I} \] are assigned, in such a way that the diagram \begin{diagram} A&\rTo^{r^{-1}_A}&A\otimes{\rm I}&\rTo^{1_A\otimes\eta_A}&A\otimes(A^*\otimes A)\\ \dTo^{1_A}&&&&\dTo_{a_{A,A^*\!\!,A}}\\ A&\lTo_{l_A}&{\rm I}\otimes A&\lTo_{\epsilon_A\otimes 1_A}&(A\otimes A^*)\otimes A \end{diagram} and the dual one for $A^*$ both commute. \end{definition} \paragraph{Examples}The symmetric monoidal categories $({\bf Rel},\times)$ of sets, relations and cartesian product\index{category of relations} and $({\bf FdVec}_\mathbb{K},\otimes)$ of finite-dimensional vector spaces over a field $\mathbb{K}$, linear maps and tensor product are both compact closed. In $({\bf Rel},\times)$, we simply set $X^{*} = X$. Taking a one-point set $\{ * \}$ as the unit for $\times$, and writing $R^{\cup}$ for the converse of a relation $R$: \[ \eta_X=\epsilon_X^{\cup}=\{(*,(x,x))\mid x\in X\}\,. \] For $({\bf FdVec}_\mathbb{K},\otimes)$, we take $V^{*}$ to be the dual space of linear functionals on $V$. The unit and counit in $({\bf FdVec}_\mathbb{K},\otimes)$ are \[ \eta_V:\mathbb{K}\to V^*\otimes V::1\mapsto\sum_{i=1}^{i=n}\bar{e}_i\otimes e_i \qquad {\rm and} \qquad \epsilon_V:V\otimes V^*\to\mathbb{K}::e_i\otimes\bar{e}_j\mapsto \bar{e}_{j}( e_{i}) \] where $n$ is the dimension of $V$, $\{e_i\}_{i=1}^{i=n}$ is a basis of $V$ and $\bar{e}_i$ is the linear functional in $V^*$ determined by $\bar{e}_{j}( e_{i}) = \delta_{ij}$. \begin{definition}\label{def:name}\em The \em name \em $\uu f\urcorner$ and the \em coname \em $\llcorner f\lrcorner$ of a morphism $f:A\to B$ in a compact closed category are \begin{diagram} A^*\!\!\otimes\! A&\rTo^{1_{A^*}\!\!\otimes\! f}&A^*\!\otimes\! 
B&&&&&{\rm I}\\ \uTo^{\eta_A}&\ruTo_{\uu f\urcorner}&&&&&\ruTo^{\llcorner f\lrcorner}&\uTo_{\epsilon_B} \\ {\rm I}&&&&&A\!\otimes\! B^*&\rTo_{f\!\otimes\! 1_{B^*}}&B\!\otimes\! B^*&& \end{diagram} \end{definition} For $R\in{\bf Rel}(X,Y)$ we have \[ \uu R\urcorner=\{(*,(x,y))\mid xRy,x\in X, y\in Y\}\quad{\rm and}\quad \llcorner R\lrcorner=\{((x, y),*)\mid xRy,x\in X, y\in Y\} \] \noindent and for $f\in {\bf FdVec}_\mathbb{K}(V,W)$ with $(m_{ij})$ the matrix of $f$ in bases $\{e_i^V\}_{i=1}^{i=n}$ and $\{e_j^W\}_{j=1}^{j=m}$ of $V$ and $W$ respectively \[ \uu f\urcorner:\mathbb{K}\to V^*\otimes W::1\mapsto\!\!\sum_{i,j=1}^{\!i,j=n,m\!}\!\!m_{ij}\cdot \bar{e}_i^V\otimes e_j^W \] and \[ \llcorner f\lrcorner:V\otimes W^*\to\mathbb{K}::e_i^V\otimes\bar{e}_j^W\mapsto m_{ij}. \] \noindent Given $f:A\to B$ in any compact closed category ${\bf C}$ we can define $f^*:B^*\to A^*$ as \begin{diagram} B^*&\rTo^{l_{B^*}^{-1}}&{\rm I}\otimes B^*&\rTo^{\eta_A\otimes 1_{B^*}}&A^*\otimes A\otimes B^*\\ \dTo^{f^*}&&&&\dTo_{1_{A^*}\!\otimes f\otimes 1_{B^*}}\\ A^*&\lTo_{r_{A^*}}&A^*\otimes {\rm I}&\lTo_{1_{A^*}\otimes \epsilon_B}&A^*\otimes B\otimes B^* \end{diagram} This operation $(\ )^*$ is functorial and makes Definition \ref{def:compclos} coincide with the one given at the beginning of this section. It then follows by \[ {\bf C}(A\otimes B^*,{\rm I}) \cong {\bf C}(A,B) \cong {\bf C}(I,A^*\otimes B) \] that every morphism of type $I\!\to\! A^*\!\otimes B$ is the name of some morphism of type ${A\to B}$ and every morphism of type ${A\otimes B^*\!\to{\rm I}}$ is the coname of some morphism of type ${A\to B}$. In the case of the unit and the counit we have \[ \eta_A={\uu 1_A\urcorner}\quad\quad{\rm and} \quad\quad\epsilon_A={\llcorner 1_A\lrcorner}\,. \] For $R\in{\bf Rel}(X,Y)$ the dual is the converse, $R^*=R^{\cup}\in{\bf Rel}(Y,X)$, and for $f\in{\bf FdVec}_\mathbb{K}(V,W)$, the dual is \[ f^*:W^*\to V^*::\,\phi \mapsto\,\phi\circ f\,. 
\] \subsection{Dagger Compact Categories} In order to fully capture the salient structure of $\mathbf{FdHilb}$, the category of finite-dimensional complex Hilbert spaces and linear maps, an important refinement of compact categories, to dagger- (or strongly-) compact categories, was introduced in \cite{AC2,AC3}. We shall not make any significant use of this refined definition in this paper, since our results hold at the more general level of compact categories.\footnote{We shall often use the abbreviated form ``compact categories'' instead of ``compact closed categories''.} Nevertheless, we give the definition since we shall refer to this notion later. We shall adopt the most concise and elegant axiomatization of strongly or dagger compact closed categories, which takes the adjoint as primitive, following \cite{AC3}. It is convenient to build the definition up in several stages, as in \cite{Selinger}. \begin{definition} A \emph{dagger category} is a category $\mathcal{C}$ equipped with an identity-on-objects, contravariant, strictly involutive functor $f\mapsto f^\dagger$: \[ 1^{\dagger} = 1, \qquad (g \circ f)^{\dagger} = f^{\dagger} \circ g^{\dagger}, \qquad f^{\dagger\dagger} = f \, . \] We define an arrow $f : A \rightarrow B$ in a dagger category to be \emph{unitary} if it is an isomorphism such that $f^{-1} = f^{\dagger}$. An endomorphism $f : A \rightarrow A$ is \emph{self-adjoint} if $f = f^{\dagger}$. \end{definition} \begin{definition} A \emph{dagger symmetric monoidal category} $(\mathcal{C} , \otimes , I, a, l, r, \sigma, {\dagger} )$ combines dagger and symmetric monoidal structure, with the requirement that the natural isomorphisms $a$, $l$, $r$, $\sigma$ are componentwise unitary, and moreover that $\dagger$ is a strict monoidal functor: \[ (f \otimes g)^{\dagger} = f^{\dagger} \otimes g^{\dagger} \, . \] \end{definition} Finally we come to the main definition. 
\begin{definition} A \emph{dagger compact category} is a dagger symmetric monoidal category which is compact closed, and such that the following diagram commutes: \[ \begin{diagram} I & \rTo^{\eta_{A}} & A^{*} \otimes A \\ & \rdTo_{\epsilon_{A}^{\dagger}} & \dTo_{\sigma_{A^{*}, A}} \\ & & A \otimes A^{*} \end{diagram} \] \end{definition} \noindent This implies that the counit is \emph{definable} from the unit and the adjoint: \[ \epsilon_{A} = \eta_{A}^{\dagger} \circ \sigma_{A, A^{*}} \] and similarly the unit can be defined from the counit and the adjoint. Furthermore, it is in fact possible to replace the two commuting diagrams required in the definition of compact closure by one. We refer to \cite{AC3} for the details. \subsection{Trace} An essential mathematical instrument in quantum mechanics is the \emph{trace} of a linear map. In quantum information, extensive use is made of the more general notion of \emph{partial trace}, which is used to trace out a subsystem of a compound system. A general categorical axiomatization of the notion of partial trace has been given by Joyal, Street and Verity \cite{JSV}. A trace in a symmetric monoidal category $\mathcal{C}$ is a family of functions \[ \Tr_{A,B}^U : \mathcal{C} (A \otimes U, B \otimes U) \longrightarrow \mathcal{C} (A, B) \] for objects $A$, $B$, $U$ of $\mathcal{C}$, satisfying a number of axioms, for which we refer to \cite{JSV}. This specializes to yield the total trace for endomorphisms by taking $A = B = I$. In this case, $\Tr(f) = \Tr_{I,I} ^{U}(f): I \rightarrow I$ is a scalar. Expected properties such as the invariance of the trace under cyclic permutations \[ \Tr(g \circ f) = \Tr(f \circ g) \] follow from the general axioms. Any compact closed category carries a canonical (in fact, a unique) trace. For an endomorphism $f : A \rightarrow A$, the total trace is defined by \[ \Tr(f) = \epsilon_{A} \circ (f \otimes 1_{A^{*}}) \circ \sigma_{A^{*},A} \circ \eta_{A} \, . 
\] This definition gives rise to the standard notion of trace in $\mathbf{FdHilb}$. \subsection{Graphical Representation} Complex algebraic expressions for morphisms in symmetric monoidal categories can rapidly become hard to read. Graphical representations exploit two-dimensionality, with the vertical dimension corresponding to composition and the horizontal to the monoidal tensor, and provide more intuitive presentations of morphisms. We depict objects by wires, morphisms by boxes with input and output wires, composition by connecting outputs to inputs, and the monoidal tensor by locating boxes side-by-side. \begin{center} \centering{\epsfig{figure=EPSRC1.eps,width=280pt}} \end{center} \noindent Algebraically, these correspond to: \[ 1_A:A\to A, \quad f:A\to B, \quad g\circ f, \quad 1_A\otimes 1_B, \quad f\otimes 1_C, \quad f\otimes g, \quad (f\otimes g)\circ h \] respectively. (The convention in these diagrams is that the `upward' vertical direction represents progress of time.) \paragraph{Kets, Bras and Scalars:} A special role is played by boxes with either no input or no output, \textit{i.e.}\ arrows of the form $I \longrightarrow A$ or $A \longrightarrow I$ respectively, where $I$ is the unit of the tensor. In the setting of $\mathbf{FdHilb}$ and Quantum Mechanics, they correspond to \emph{states} and \emph{costates} respectively (cf.~Dirac's kets and bras \cite{Dirac}), which we depict by triangles. \emph{Scalars} then arise naturally by composing these elements (cf.~inner-product or Dirac's bra-ket): \begin{center} \centering{\epsfig{figure=Prop2bis.eps,width=200pt}} \end{center} Formally, scalars are arrows of the form $I \longrightarrow I$. In the physical context, they provide numbers (``probability amplitudes'' etc.). For example, in $\mathbf{FdHilb}$, the tensor unit is $\mathbb{C}$, the complex numbers, and a linear map $s : \mathbb{C} \longrightarrow \mathbb{C}$ is determined by a single number, $s(1)$. 
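Concretely in $\mathbf{FdHilb}$, composing a costate (bra) after a state (ket) yields such a scalar: the inner product as a $1 \times 1$ matrix. A minimal numpy sketch, with arbitrarily chosen state vectors:

```python
import numpy as np

# A ket is a map C -> H (column vector), a bra a map H -> C (row vector);
# composing bra after ket gives an endomorphism of the tensor unit C,
# i.e. a scalar -- here the inner product <phi|psi>.
psi = np.array([[1], [1j]]) / np.sqrt(2)   # a state (ket)
phi = np.array([[1], [0]])                 # another state
bra_phi = phi.conj().T                     # its costate (bra)

s = bra_phi @ psi                          # a 1x1 matrix: a scalar I -> I
assert s.shape == (1, 1)
assert np.isclose(s[0, 0], 1 / np.sqrt(2))
```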
In $\textbf{Rel}$, the scalars are the boolean semiring $\{ 0, 1 \}$. This graphical notation can be seen as a substantial two-dimensional generalization of \emph{Dirac notation} \cite{Dirac}: \[ \langle \phi \mid \qquad \qquad \mid \psi \rangle \qquad \qquad \langle \phi \mid \psi \rangle \] Note how the geometry of the plane absorbs functoriality and naturality conditions, e.g.: \begin{center} \input{funcnat.tex} \end{center} \vspace{-.2in} \[ \;\; (f \otimes 1) \circ (1 \otimes g) \qquad = \qquad f \otimes g \qquad = \qquad (1 \otimes g) \circ (f \otimes 1) \] \paragraph{Cups and Caps} We introduce a special diagrammatic notation for the unit and counit. \vspace{-.3in} \begin{center} \input{trcupcap.tex} \end{center} \vspace{-.4in} \[ \epsilon_{A} : A \otimes A^{*} \longrightarrow I \qquad \qquad \qquad \qquad \eta_{A} : I \longrightarrow A^{*} \otimes A . \] The lines indicate the \emph{information flow} accomplished by these operations. \paragraph{Compact Closure} The basic algebraic laws for units and counits become diagrammatically evident in terms of the information-flow lines: \begin{center} \input{compclosp.tex} \end{center} \vspace{-.2in} \[ \qquad (\epsilon_A \otimes 1_A ) \circ (1_A \otimes \eta_A ) = 1_A \quad \qquad \qquad (1_{A^*} \otimes \epsilon_A ) \circ (\eta_A \otimes 1_{A^*} ) = 1_{A^*} \] \paragraph{Names and Conames in the Graphical Calculus} The units and counits are powerful; they allow us to define a \emph{closed structure} on the category. 
In particular, we can form the \emph{name} $\uu f \urcorner$ of any arrow $f : A \rightarrow B$, as a special case of $\lambda$-abstraction, and dually the \emph{coname} $\llcorner f \lrcorner$: \begin{center} \input{namep.tex} \end{center} \vspace{-.1in}\vsn \[ \llcorner f \lrcorner : A \otimes B^* \rightarrow I \quad \quad \quad \quad \quad \quad \;\; \uu f \urcorner : I \rightarrow A^* \otimes B \] This is the general form of Map-State duality: \[ \mathcal{C} (A \otimes B^\ast , I ) \cong \mathcal{C} (A, B) \cong \mathcal{C} (I , A^\ast \otimes B). \] \subsection{Formalizing Quantum Information Flow} In this section, we give a brief glimpse of categorical quantum mechanics. While not needed for the results to follow, it provides the motivating context for them. For further details, see e.g. \cite{AC4}. \subsubsection{Quantum Entanglement} We consider for illustration two standard examples of two-qubit entangled states, the Bell state: \begin{center} \psset{unit=1in,cornersize=absolute}% \begin{pspicture}(0,-0.1)(2.4,0.1) \psset{linewidth=2pt}% \pscircle[fillstyle=solid,fillcolor=black](0.05,0){0.1} \psline(0.1,0)(2,0) \uput{0.5ex}[u](1,0.05){\mbox{{$|00\rangle + |11\rangle$}}} \pscircle[fillstyle=solid,fillcolor=black](2,0){0.1} \end{pspicture}% \end{center} \vspace{.2in} \noindent and the EPR state: \begin{center} \psset{unit=1in,cornersize=absolute}% \begin{pspicture}(0,-0.1)(2.4,0.1) \psset{linewidth=2pt}% \pscircle[fillstyle=solid,fillcolor=black](0.05,0){0.1} \psline(0.1,0)(2,0) \uput{0.5ex}[u](1,0.05){\mbox{{$|01\rangle + |10\rangle$}}} \pscircle[fillstyle=solid,fillcolor=black](2,0){0.1} \end{pspicture}% \end{center} \vspace{.1in} In quantum mechanics, compound systems are represented by the \emph{tensor product} of Hilbert spaces: $\mathcal{H}_1\otimes \mathcal{H}_2$. 
A typical element of the tensor product has the form: \[\sum_{i} \lambda_i \cdot \phi_i \otimes \psi_i \] where $\phi_{i}$, $\psi_{i}$ range over basis vectors, and the coefficients $\lambda_{i}$ are complex numbers. \emph{Superposition} encodes \emph{correlation}: in the Bell state, the off-diagonal elements have zero coefficients. This gives rise to Einstein's ``spooky action at a distance''. Even if the particles are spatially separated, measuring one has an effect on the state of the other. In the Bell state, for example, when we measure one of the two qubits we may get either 0 or 1, but once this result has been obtained, it is certain that the result of measuring the other qubit will be the same. This leads to Bell's famous theorem \cite{Bell}: QM is \emph{essentially non-local}, in the sense that the correlations it predicts exceed those of any ``local realistic theory''. \paragraph{From `paradox' to `feature': Teleportation} \begin{center} \input{telep.tex} \end{center} \[ \qquad \quad \mbox{Alice} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{Bob} \] \vspace{.1in} \noindent In the teleportation protocol \cite{BBC}, Alice sends an unknown qubit $\phi$ to Bob, using a shared Bell pair as a ``quantum channel''. By performing a measurement in the Bell basis on $\phi$ and her half of the entangled pair, a collapse is induced on Bob's qubit. Once the result $x$ of Alice's measurement is transmitted by classical communication to Bob (there are four possible measurement outcomes, hence this requires two classical bits), Bob can perform a corresponding unitary correction $U_{x}$ on his qubit, after which it will be in the state $\phi$. \subsubsection{Categorical Quantum Mechanics and Diagrammatics} We now outline the categorical approach to quantum mechanics developed in \cite{AC2,AC3}. 
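The information flow underlying the protocol can be spot-checked numerically: projecting the first two qubits of $\phi \otimes (|00\rangle + |11\rangle)/\sqrt{2}$ onto the Bell state leaves the third qubit in the state $\phi$, up to the scalar $1/2$ absorbed into normalization. A numpy sketch of the measurement branch that needs no unitary correction:

```python
import numpy as np

phi = np.array([0.6, 0.8])                  # unknown input qubit (normalized)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

# Joint state of (input qubit, shared Bell pair): qubits 1, 2, 3.
state = np.kron(phi, bell)

# Alice's measurement branch <Bell| on qubits 1, 2 (identity on qubit 3).
branch = np.kron(bell.reshape(1, 4), np.eye(2))  # shape (2, 8)
out = branch @ state

# Bob now holds the input state, up to the scalar 1/2.
assert np.allclose(out, phi / 2)
```

The other three branches work the same way, with the Bell costate replaced by a costate labelled (under map-state duality) by a Pauli unitary, whose inverse is then Bob's correction.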
The \emph{same} graphical calculus and underlying algebraic structure which we have seen in the previous section has been applied to quantum information and computation, yielding an incisive analysis of \emph{quantum information flow}, and powerful and illuminating methods for reasoning about quantum informatic processes and protocols \cite{AC2}. \paragraph{Bell States and Costates:} The cups and caps we have already seen in the guise of deficit and cancellation operations now take on the r\^ole of \emph{Bell states and costates} (or preparation and test of Bell states), the fundamental building blocks of quantum entanglement. (Mathematically, they arise as the transpose and co-transpose of the identity, which exist in any finite-dimensional Hilbert space by ``map-state duality''). \begin{center} \centering{\epsfig{figure=Proc6.eps,width=200pt}} \end{center} \noindent The formation of \emph{names} and \emph{conames} of arrows (\textit{i.e.}\ map-state and map-costate duality) is conveniently depicted thus: \begin{center} \centering{\fbox{\epsfig{figure=Vax5a.eps,width=285pt}\hfill{\bf (2)}}} \end{center} \noindent The key lemma in exposing the quantum information flow in (bipartite) entangled quantum systems can be formulated diagrammatically as follows: \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Vax5b.eps,width=300pt}} \end{minipage} Note in particular the interesting phenomenon of ``apparent reversal of the causal order''. While on the left, physically, we first prepare the state labeled $g$ and then apply the costate labeled $f$, the global effect is {\em as if} we applied $f$ itself first, and only then $g$. \paragraph{Derivation of quantum teleportation.} This is the most basic application of compositionality in action.
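Before the diagrammatic derivation, the protocol can be replayed in bare matrix form. The following Python/\texttt{numpy} sketch (our own illustration; the choice of Bell basis $\{(U_i \otimes 1)|\Phi^{+}\rangle\}$ with $U_i \in \{1, X, Z, XZ\}$ is one standard convention) confirms that each measurement branch is labelled by a unitary whose inverse restores the input qubit:

```python
import numpy as np

# Numerical replay of teleportation (illustrative sketch).  Qubit 1 carries
# the unknown state phi; qubits 2 and 3 share the Bell state |00> + |11|.
# Alice projects qubits 1,2 onto a Bell-basis element (U (x) 1)|Phi+>; one
# computes that Bob's residual qubit is then U^dagger(phi)/2, so applying
# the correction U restores phi on every branch.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
branches = [I2, X, Z, X @ Z]                 # unitaries labelling the branches

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # shared channel |Phi+>
phi = np.array([0.6, 0.8])                   # the unknown input qubit

state = np.kron(phi, bell)                   # joint state of qubits 1, 2, 3

for U in branches:
    u = np.kron(U, I2) @ bell                # Bell-basis element (U (x) 1)|Phi+>
    # project Alice's two qubits onto <u|; Bob keeps an (unnormalised) qubit
    bob = np.einsum('ab,abc->c',
                    np.conj(u.reshape(2, 2)), state.reshape(2, 2, 2))
    corrected = U @ bob                      # unitary correction on Bob's side
    corrected = corrected / np.linalg.norm(corrected)
    assert np.allclose(corrected, phi)       # phi is recovered on every branch
```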
We can read off the basic quantum mechanical potential for teleportation immediately from the geometry of Bell states and costates: \bigskip\noindent \begin{minipage}[b]{1\linewidth} \centering{$\!\!$\epsfig{figure=Proc10.eps,width=300pt}} \end{minipage} \noindent The Bell state forming the shared channel between Alice and Bob appears as the downwards triangle in the diagram; the Bell costate forming one of the possible measurement branches is the upwards triangle. The information flow of the input qubit from Alice to Bob is then immediately evident from the diagrammatics. This is not quite the whole story, because of the non-deterministic nature of measurements. But in fact, allowing for this shows the underlying \emph{design principle} for the teleportation protocol. Namely, we find a measurement basis such that each possible branch $i$ through the measurement is labelled, under map-state duality, with a unitary map $f_{i}$. The corresponding correction is then just the inverse map $f_{i}^{-1}$. Using our lemma, the full description of teleportation becomes: \begin{center} \centering{\epsfig{figure=Proc12.eps,width=220pt}} \end{center} \section{No-Cloning} Note that the proof of Joyal's lemma given in Section~\ref{JLsec} makes full use of both diagonals and projections, \textit{i.e.}\ of both cloning and deleting. Our aim is to examine cloning and deleting as separate principles, and to see how far each in isolation is compatible with the strong form of duality which, as we have seen, plays a basic structural r\^ole in the categorical axiomatization of quantum mechanics, and applies very directly to the analysis of entanglement. \subsection{Axiomatizing Cloning} Our first task is to axiomatize cloning as a \emph{uniform operation} in the setting of a symmetric monoidal category. As a preliminary, we recall the notions of monoidal functor and monoidal natural transformation. Let $\mathcal{C}$ and $\mathcal{D}$ be monoidal categories.
A (strong) monoidal functor $(F, e, m) : \mathcal{C} \longrightarrow \mathcal{D}$ comprises: \begin{itemize} \item A functor $F : \mathcal{C} \longrightarrow \mathcal{D}$ \item An isomorphism $e : I \cong FI$ \item A natural isomorphism $m_{A, B} : FA \otimes FB \longrightarrow F(A \otimes B)$ \end{itemize} subject to various coherence conditions. Let $(F, e, m), (G, e', m') : \mathcal{C} \longrightarrow \mathcal{D}$ be monoidal functors. A monoidal natural transformation between them is a natural transformation $t : F \stackrel{.}{\lrarr} G$ such that \[ \begin{diagram} I & \rTo^{e} & FI \\ & \rdTo_{e'} & \dTo_{t_{I}} \\ & & GI \end{diagram} \qquad \qquad \begin{diagram} FA \otimes FB & \rTo^{m_{A, B}} & F(A \otimes B) \\ \dTo^{t_{A} \otimes t_{B}} & & \dTo_{t_{A \otimes B}} \\ GA \otimes GB & \rTo_{m'_{A, B}} & G(A \otimes B) \end{diagram} \] \noindent We say that a monoidal category \emph{has uniform cloning} if it has a diagonal, \textit{i.e.}\ a monoidal natural transformation \[ \Delta_{A} : A \longrightarrow A \otimes A \] which is moreover \emph{coassociative and cocommutative}: \[ \begin{diagram} A & \rTo^{\Delta} & A \otimes A & \rTo^{1 \otimes \Delta} & A \otimes (A \otimes A) \\ \deq & & & & \dTo_{a_{A,A,A}} \\ A & \rTo_{\Delta} & A \otimes A & \rTo_{\Delta \otimes 1} & (A \otimes A) \otimes A \end{diagram} \qquad \qquad \begin{diagram} A & \rTo^{\Delta} & A \otimes A \\ & \rdTo_{\Delta} & \dTo_{\sigma_{A,A}} \\ & & A \otimes A \end{diagram} \] Note that in the case when the monoidal structure is induced by a product, the standard diagonal \[ \Delta_{A} : A \rTo^{\ang{1_{A}, 1_{A}}} A \times A \] automatically satisfies all these properties. To simplify the presentation, we shall henceforth make the assumption that the monoidal categories we consider are \emph{strictly associative}. This is a standard manoeuvre, and by the coherence theorem for monoidal categories \cite{Mac} is harmless.
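For the cartesian case just mentioned, the uniform-cloning equations are immediate to verify concretely. A small Python sketch (our own illustration, taking $\mathbf{Set}$ with cartesian product as the monoidal category):

```python
# The standard diagonal x |-> (x, x) of a cartesian product: naturality,
# cocommutativity and coassociativity hold automatically, as claimed.

def delta(x):
    return (x, x)

def f(x):                       # an arbitrary morphism used to test naturality
    return 3 * x + 1

for x in range(5):
    # naturality: (f x f) o delta  ==  delta o f
    assert tuple(f(y) for y in delta(x)) == delta(f(x))
    # cocommutativity: the symmetry swaps the two copies invisibly
    a, b = delta(x)
    assert (b, a) == delta(x)
    # coassociativity, read on flattened triples
    assert (x,) + delta(x) == delta(x) + (x,) == (x, x, x)
```

The point of the No-Go results below is that this behaviour cannot be transported to a compact closed setting without collapse.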
Note that the functor $A \mapsto A \otimes A$ which is the codomain of the diagonal has as its monoidal structure maps \[ \begin{diagram} m_{A,B} = A \otimes B \otimes A \otimes B & \rTo^{1 \otimes \sigma \otimes 1} & A \otimes A \otimes B \otimes B, & \quad & e = I & \rTo^{l_{I}^{-1}} & I \otimes I \, . \end{diagram} \] Of course the identity functor, which is the domain of the diagonal, has identity morphisms as its structure maps. \subsection{Compact categories with cloning (almost) collapse} \begin{theorem} \label{thethe} Let $\mathcal{C}$ be a compact category with cloning. Then every endomorphism is a scalar multiple of the identity. More precisely, for $f : A \rightarrow A$, $f = \Tr(f) \bullet \id{A}$. This means that for every object $A$ of $\mathcal{C}$, $\mathcal{C}(A, A)$ is a retract of $\mathcal{C}(I, I)$: \[ \alpha : \mathcal{C}(A, A) \lhd \mathcal{C}(I, I) : \beta, \qquad \alpha(f) = \Tr(f), \quad \beta(s) = s \bullet \id{A} \, . \] \end{theorem} In a category enriched over vector spaces, this means that each endomorphism algebra is one-dimensional. In the cartesian case, there is a unique scalar, and we recover the reflexive part of the posetal collapse of Joyal's lemma. But in general, the collapse given by our result is of a different nature to that of Joyal's lemma, as we shall see later. Note that our collapse result only refers to endomorphisms. In the dagger-compact case, every morphism $f : A \rightarrow B$ has an associated endomorphism \[ \sigma \circ (f \otimes f^{\dagger}) : A \otimes B \rightarrow A \otimes B \, . \] Moreover the passage to this associated endomorphism can be seen as a kind of ``projective quotient'' of the original category \cite{deLL}. Thus in this case, the collapse given by our theorem can be read as saying that the projective quotient of the category is trivial. \subsection{Proving the Cloning Collapse Theorem} We shall make some use of the graphical calculus in our proofs. 
We shall use slightly different conventions from those adopted in the previous section: \begin{itemize} \item Firstly, the diagrams to follow are to be read downwards rather than upwards. \item Secondly, we shall depict the units and counits of a compact category simply as ``cups'' and ``caps'', without any enclosing triangles. \end{itemize} To illustrate these points, the units and counits will be depicted thus: \vspace{.1in} \begin{center} \input{pictures/cupcap.tex} \end{center} \noindent while the identities for the units and counits in compact categories will appear thus: \begin{center} \input{pictures/cupcapeqn.tex} \end{center} The small nodes appearing in these diagrams indicate how the figures are built by composition from basic figures such as cups, caps and identities. \paragraph{First step} We shall begin by showing that \begin{center} ``parallel caps = nested caps'' \end{center} \noindent Diagrammatically: \begin{center} \input{pictures/parnest.tex} \end{center} \noindent This amounts to a ``confusion of entanglements''. In fact, we shall find it more convenient to prove this result in the following form: \[ \eta_{A} \otimes \eta_{A} = (3\ 2\ 1\ 4) \circ (\eta_{A} \otimes \eta_{A}) \] Here $(3\ 2\ 1\ 4)$ is the permutation acting on the tensor product of four factors which is built from the symmetry isomorphisms in the obvious fashion. Diagrammatically: \begin{center} \input{pictures/pareqn1.tex} \end{center} \begin{lemma} \label{sidemlemm} We have $\Delta_{I} = l_{I}^{-1} : I \rightarrow I \otimes I$. \end{lemma} \begin{proof} This is an immediate application of the monoidality of $\Delta$, together with $e = l_{I}^{-1}$ for the codomain functor. \end{proof} \begin{lemma} \label{pstep1} Let $u : I \rightarrow A \otimes B$ be a morphism in a symmetric monoidal category with cloning. Then \[ u \otimes u = (3\ 2\ 1\ 4) \circ (u \otimes u) \, . \] \end{lemma} \begin{proof} Consider the following diagram. 
\[ \begin{diagram} I & \rTo^{\Delta_{I}} & I \otimes I \\ \dTo^{u} & & \dTo_{u \otimes u} \\ A \otimes B & \rTo^{\Delta_{A \otimes B}} & A \otimes B \otimes A \otimes B \\ \dTo^{\Delta_{A} \otimes \Delta_{B}} & \rdTo_{\Delta_{A} \otimes \Delta_{B}} & \uTo_{1 \otimes \sigma \otimes 1} \\ A \otimes A \otimes B \otimes B & \rTo_{\sigma \otimes 1} & A \otimes A \otimes B \otimes B \end{diagram} \] The upper square commutes by naturality of $\Delta$. The upper triangle of the lower square commutes by monoidality of $\Delta$. The lower triangle commutes by cocommutativity of $\Delta$ in the first component, and then tensoring with the second component and using the bifunctoriality of the tensor. Let $f = (u \otimes u) \circ \Delta_{I}$, and $g = (\Delta_{A} \otimes \Delta_{B}) \circ u$. Then by the above diagram \[ f = (1 \otimes \sigma \otimes 1) \circ (\sigma \otimes 1) \circ g \, . \] A simple computation with permutations shows that \[ (1 \otimes \sigma \otimes 1) \circ (\sigma \otimes 1) = (1\ 3\ 2\ 4) \circ (2\ 1\ 3\ 4) = (3\ 2\ 1\ 4) \circ (1 \otimes \sigma \otimes 1) \, . \] Appealing to the above diagram again, $f = (1 \otimes \sigma \otimes 1) \circ g$. Hence \[ \begin{array}{rcl} f & = & (1 \otimes \sigma \otimes 1) \circ (\sigma \otimes 1) \circ g \\ & = & (3\ 2\ 1\ 4) \circ (1 \otimes \sigma \otimes 1) \circ g \\ & = & (3\ 2\ 1\ 4) \circ f \, . \end{array} \] Applying the previous lemma: \[ u \otimes u = f \circ l_{I} = (3\ 2\ 1\ 4) \circ f \circ l_{I} = (3\ 2\ 1\ 4) \circ (u \otimes u) \, . \] Diagrammatically, this can be presented as follows: \begin{center} \input{pictures/parderiv.tex} \end{center} and hence \begin{center} \input{pictures/pareqn1.tex} \end{center} \end{proof} \noindent Note that this lemma is proved in generality, for any morphism $u$ of the required shape. However, we shall, as expected, apply it by taking $u = \eta_{A}$. It will be convenient to give the remainder of the proof in diagrammatic form. 
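The permutation identity used in this computation is elementary and can be checked mechanically. A small Python sketch (our own illustration; a permutation is written as the tuple of input positions feeding each output slot):

```python
# Verification of the permutation identity used above:
#   (1 (x) sigma (x) 1) o (sigma (x) 1)  =  (3 2 1 4) o (1 (x) sigma (x) 1)
# acting on four tensor factors, modelled as position tuples.

def act(p, t):
    """Apply permutation p: output slot i receives input slot p[i]."""
    return tuple(t[i] for i in p)

sigma_12 = (1, 0, 2, 3)    # sigma (x) 1 : swap the first two factors
sigma_23 = (0, 2, 1, 3)    # 1 (x) sigma (x) 1 : swap the middle two factors
p_3214   = (2, 1, 0, 3)    # the permutation (3 2 1 4)

t = ('a', 'b', 'c', 'd')
lhs = act(sigma_23, act(sigma_12, t))   # apply sigma (x) 1 first
rhs = act(p_3214, act(sigma_23, t))     # apply 1 (x) sigma (x) 1 first

assert lhs == rhs == ('b', 'c', 'a', 'd')
```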
\paragraph{Second step} We use the first step to show that \begin{center} \fbox{\textbf{the twist map $=$ the identity}} \end{center} in a compact category with cloning, by putting parallel and serial caps in a common context and simplifying using the triangular identities. The context is: \begin{center} \input{pictures/context.tex} \end{center} \noindent We get: \vspace{.5in} \begin{center} \input{pictures/tweq.tex} \end{center} and: \vspace{.1in} \begin{center} \input{pictures/ideq.tex} \end{center} \noindent We used the original picture of nested caps for clarity. If we use the picture directly corresponding to the statement of lemma~\ref{pstep1}, we obtain the same result: \vspace{.1in} \begin{center} \input{pictures/tweq2.tex} \end{center} The important point is that the left input is connected to the right output, and the right input to the left output. \paragraph{Third step} Finally, we use the trace to show that any endomorphism $f : A \longrightarrow A$ is \emph{a scalar multiple of the identity}: \[ f = s \bullet 1_{A} \] for $s = \Tr(f)$. \begin{center} \input{pictures/endoeq1.tex} \end{center} \noindent This completes the proof of the Cloning Collapse Theorem~\ref{thethe}. \subsection{Examples} We note another consequence of cloning. \begin{proposition} In a monoidal category with cloning, the multiplication of scalars is idempotent. \end{proposition} \begin{proof} This follows immediately from naturality \[ \begin{diagram} I & \rTo^{\Delta_{I}} & I \otimes I \\ \dTo^{s} & & \dTo_{s \otimes s} \\ I & \rTo_{\Delta_{I}} & I \otimes I \end{diagram} \] together with lemma~\ref{sidemlemm}. \end{proof} Thus the scalars form a commutative, idempotent monoid, \textit{i.e.}\ a semilattice. Given any semilattice $S$, we regard it qua monoid as a one-object category, say with object $\bullet$. We can define a trivial strict monoidal structure on this category, with \[ \bullet \otimes \bullet = \bullet = I \, . \] Bifunctoriality follows from commutativity. 
A natural diagonal is also given trivially by the identity element (which is the top element of the induced partial order, if we view the semilattice operation as meet). Units and counits are also given trivially by the identity. Note that the scalars in this category are of course just the elements of $S$. Thus any semilattice yields an example of a (trivial) compact category with cloning. Note the contrast with Joyal's lemma. While every boolean algebra is of course a semilattice, it forms a degenerate cartesian closed category as a \emph{poset}, with many objects but at most one morphism in each homset. The degenerate categories we are considering are categories qua \emph{monoids}, with arbitrarily large hom-sets, but only one object. Posets and monoids are opposite extremal examples of categories, which appear as contrasting degenerate examples allowed by these no-go results. Note that our result as it stands is not directly comparable with Joyal's, since our hypotheses are weaker insofar as we only assume a monoidal diagonal rather than full cartesian structure, but stronger insofar as we assume compact closure. A boolean algebra which is compact closed qua category is necessarily the trivial, one-element poset, since meets and joins --- and in particular the top and bottom of the lattice --- are identified. \subsection{Discussion} The Cloning Collapse theorem can be read as a No-Go theorem. It says that it is not possible to combine basic structural features of quantum entanglement with a uniform cloning operation without collapsing to degeneracy. It should be understood that the key point here is the \emph{uniformity} of the cloning operation, which is formalized as the \emph{monoidal naturality} of the diagonal. 
A suitable intuition is to think of this as corresponding to \emph{basis-independence}.\footnote{In fact, the original example which led Eilenberg and Mac Lane to define naturality was the naturality of the isomorphism from a finite-dimensional vector space to its second dual, as compared with the non-natural isomorphism to the first dual.} The distinction is between an operation that exists in a representation-independent form, for logical reasons, as compared to operations which do intrinsically depend on specific representations. In fact, it turns out that much significant quantum structure can be captured in our categorical setting by \emph{non-uniform} copying operations \cite{CPav}. Given a choice of basis for a finite-dimensional Hilbert space $\mathcal{H}$, one can define a diagonal \[ \ket{i} \mapsto \ket{ii} \, . \] This is coassociative and cocommutative, and extends to a comonoid structure. Applying the dagger yields a commutative monoid structure, and the two structures interact by the Frobenius law. It can be shown that such ``dagger Frobenius structures'' on finite-dimensional Hilbert spaces correspond exactly to bases. Since bases correspond to ``choice of measurement context'', these structures can be used to formalize quantum measurements, and quantum protocols involving such measurements \cite{CPav}. It is of the essence of quantum mechanics that \emph{many} such structures can coexist on the same system, leading to the idea of \emph{incompatible measurements}. This too has been axiomatized in the categorical setting, enabling the effective description of many central features of quantum computation \cite{CD}. Thus the No-Go result is delicately poised on the issue of naturality. It seems possible that a rather sharp delineation between quantum and classical, and more generally a classification of the space of possible theories incorporating various features, may be achieved by further development of these ideas.
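The contrast between uniform and basis-dependent copying can be seen concretely. In the following Python/\texttt{numpy} sketch (our own illustration), the copying map $\ket{i} \mapsto \ket{ii}$ defined from the computational basis of $\mathbb{C}^2$ commutes with the basis permutation $X$ but not with the Hadamard unitary, so it fails to be a natural transformation:

```python
import numpy as np

# The basis-dependent diagonal |0> -> |00>, |1> -> |11> on C^2, as a 4x2
# matrix.  Naturality would require  Delta . U == (U (x) U) . Delta  for
# every map U; the Hadamard unitary H violates this (no-cloning of
# superpositions), while the basis permutation X satisfies it.

Delta = np.zeros((4, 2))
Delta[0, 0] = Delta[3, 1] = 1.0           # |0> -> |00>,  |1> -> |11>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

# the naturality square fails for H ...
assert not np.allclose(Delta @ H, np.kron(H, H) @ Delta)

# ... but commutes for the basis-preserving map X
assert np.allclose(Delta @ X, np.kron(X, X) @ Delta)
```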
\section{No-Deleting} The issue of No-deleting is much simpler from the current perspective. A uniform deleting operation is a monoidal natural transformation $d_{A} : A \rightarrow I$. Note that the domain of this transformation is the identity functor, while the codomain is the constant functor valued at $I$. The following result was originally observed by Bob Coecke in the dagger compact case: \begin{proposition} If a compact category has uniform deleting, then it is a preorder. \end{proposition} \begin{proof} Given $f : A \longrightarrow B$, consider the naturality square \[ \begin{diagram} A \otimes B^{*} & \rTo^{d_{A \otimes B^{*}}} & I \\ \dTo^{\coname{f}} & & \deq \\ I & \rTo_{d_{I}} & I \end{diagram} \] By monoidal naturality, $d_{I} = 1_{I}$. So for all $f, g : A \longrightarrow B$: \[ \coname{f} = d_{A \otimes B^{*}} = \coname{g} \] and hence $f = g$. \end{proof} \section{Further Directions} We conclude by discussing some further developments and possible extensions of these ideas. \begin{itemize} \item In a forthcoming joint paper with Bob Coecke, the results are extended to cover No-Broadcasting by lifting the Cloning Collapse theorem to the CPM category \cite{Selinger}, which provides a categorical treatment of \emph{mixed states}. \item The proof of the Cloning Collapse theorem makes essential use of compactness. \\ \textbf{Open Question} Are there non-trivial examples of \emph{$*$-autonomous categories with uniform cloning operations}? \\ One can also consider various possible sharpenings of our results, by weakening the hypotheses, e.g. on monoidality of the diagonal, or by strengthening the conclusions, to a more definitive form of collapse. \item Finally, the r\^ole of scalars in these results hints at the relevance of projectivity ideas \cite{deLL}, which should be developed further in the abstract setting. 
\end{itemize} \noindent Altogether, these results, while preliminary, suggest that the categorical axiomatization of quantum mechanics in \cite{AC2,AC3,AC4} does indeed open up a novel and fruitful perspective on No-Go Theorems and other foundational results. Moreover, these foundational topics in physics can usefully be informed by results and concepts stemming from categorical logic and theoretical computer science. \bibliographystyle{alpha}
\section{On the right of the critical strip} Consider the $\zeta(s)$ function of Riemann, represented by a Dirichlet series, an Euler product, or via the Dirichlet alternating zeta integral function $\eta(s)$ as~: \begin{align} \zeta(s) &= \suny \, \fr 1/{n^s}, &(\Re s >1), \\ \label{Eulerproduct} &= \prod_{p \text{ prime}} \, \fr 1/{1-p^{-s}}, &(\Re s >1), \\ \label{zeta:eta} &= \fr 1/{1-2^{1-s}}\;\eta(s), &(\Re s >0, s \ne 1+\fr {2k\pi i}/{\log 2}, k \text{ integer}), \\ \nonumber &= \fr {\eta'(s)}/{\log 2}, &(s = 1+\fr {2k\pi i}/{\log 2}, k \text{ nonzero integer}), \\ \nonumber \eta(s) &= \suny \, \fr (-1)^{n+1}/{n^s}, &(\Re s >0). \end{align} In his second proof of the functional equation of the zeta function discovered by Euler \cite{Euler:1749}, Riemann \cite{Riemann:1859} has shown that $\xi(s):=\pi^{-\tfr s/2}\Gamma(1+\tfr s/2)(s-1)\zeta(s)$ is an integral function left unchanged by the mapping $s \mapsto 1-s$. The non trivial zeros of $\zeta(s)$ are thus located symmetrically with respect to the line $\Re s=\tfr 1/2$, inside the critical strip $0<\Re s<1$ because the Euler product (\ref{Eulerproduct}) is not zero for $\Re s>1$, nor is the limit for $\Re s=1$ of an expression derived from it. Riemann has stated that all these non trivial zeros are very likely located on the critical line $\Re s=\tfr 1/2$ itself. In order to prove this ``Riemann Hypothesis", it is sufficient to show that if $\eta(s)$ had one zero in the right hand side $\tfr 1/2 <\Re s<1$ of the critical strip, then $\zeta(2s)$ would also vanish while $2s$ is outside the critical strip, contradicting (\ref{Eulerproduct}). This is Mr Lee's claim and the main goal of \cite{Lee:2013,Lee:2014}~: \begin{equation} \label{Leeclaim} \fr 1/2<\Re s <1 \;\&\; \suny \, \fr (-1)^{n+1}/{n^s}=0 \;\implies\; \Re(2s)>1 \;\&\; \suny \fr 1/{n^{2s}} = 0.
\end{equation} \section{Arithmetic functions} The following two arithmetic functions are defined on the natural numbers in \cite{Lee:2013}~: first, with the convention that $\Omega(1)=0$, \cite[definition 3.1]{Lee:2013} \begin{equation} \label{defomeg} \Omega(m) := \sum_{k=0}^K r_k \quad\text{where, for distinct primes } p_k,\, m = \prod_{k=0}^K p_k^{r_k}, \end{equation} and secondly \cite[definition 3.3]{Lee:2013} \begin{equation} \label{defbeta} \beta(n) := \sumn (-1)^{\Omega(m)}(-1)^{l + 1}, \qquad(l=\fr n/m). \end{equation} According to \cite[theorem 3.7]{Lee:2013}~: \begin{equation} \label{thmbeta} \beta(n)= \begin{cases} 1 & \text{if $n$ is the square of a natural number},\\ -2 & \text{if $n$ is twice the square of a natural number},\\ 0 & \text{otherwise}. \end{cases} \end{equation} \section{A new expression for the Riemann zeta function} If $\Re s>\tfr 1/2$, then $\Re (2s)>1$ and the Dirichlet series representing $\zeta(2s)$ converges absolutely (and also uniformly on compact sets). We can certainly write, replacing the usual exponent $1-2s$ for $\eta(2s)$ in (\ref{zeta:eta}) by the exponent $1-s$ as in \cite{Lee:2013}~: \begin{align*} (1-2^{1-s}) & \suny \fr 1/{n^{2s}} \,=\, \suny \, \left( \, \fr 1/{(n^2)^s} \,-\, \fr 2/{(2n^2)^s} \, \right), \\ &= \left( \fr 1/{1^s} - \fr 2/{2^s} \right) + \left( \fr 1/{4^s} - \fr 2/{8^s} \right) + \left( \fr 1/{9^s} - \fr 2/{18^s} \right) + \left( \fr 1/{16^s} - \fr 2/{32^s} \right) + \cdots. \\ \noalign{\text{After rearranging terms by absolute convergence and using (\ref{thmbeta}), this becomes }} &= \fr 1/{1^s} - \fr 2/{2^s} + \fr 1/{4^s} - \fr 2/{8^s} + \fr 1/{9^s} + \fr 1/{16^s} - \fr 2/{18^s} + \fr 1/{25^s} - \fr 2/{32^s} + \fr 1/{36^s} + \cdots, \\ &= \suny \, \fr {\beta(n)}/{n^s}, \end{align*} and finally, inserting the definition (\ref{defbeta}) of $\beta(n)$, \cite[eq. 
3.4, 3.5]{Lee:2013} \begin{equation} \label{lhs} (1-2^{1-s}) \zeta(2s) = \suny \, \sumn \fr { (-1)^{\Omega(m)} (-1)^{l+1} }/{m^s l^s}, \qquad(l=\fr n/m, \Re s >\fr 1/2). \end{equation} \bigskip For a different proof of the last equation, consider the classic Liouville arithmetic function $\lambda(n)=(-1)^{\Omega(n)}$ \cite[p. 184]{Sierpinski:1988} whose generating function is \cite[p. 618]{Landau:1909} \begin{equation} \label{lambda} \suny \, \fr {\lambda(n)}/{n^s} = \fr {\zeta(2s)}/{\zeta(s)} \qquad(s>1). \end{equation} Landau uses Euler's product for $\zeta(s)$ with $s>1$ in his proof and notes that the exact domain of convergence of the $\lambda$-series is one of the most important unsolved problems of number theory. A Dirichlet product \cite[p. 670]{Landau:1909} then gives the right-hand side of (\ref{lhs}) and the uniqueness of the coefficients of a Dirichlet series yields immediately the expression (\ref{thmbeta}) for the arithmetic function $\beta(n)$~: \begin{align*} (1-2^{1-s})\zeta(2s)&=\eta(s)\fr {\zeta(2s)}/{\zeta(s)} = \suny\,\fr {(-1)^{n+1}}/{n^s}\,\suny\,\fr {\lambda(n)}/{n^s} \\ &=\suny \, \sumn \fr { \lambda(m) (-1)^{1+\tfr n/m}}/{n^s} \\ &= \suny \, \fr {\beta(n)}/{n^s}, \qquad(s>1). \end{align*} Since both $\zeta(2s)$ and the $\beta$-series (a sum over $n^2$) are analytic for $\Re s >\fr 1/2$, the previous equality can be extended from $s>1$ to $\Re s>\fr 1/2$, yielding a complete proof of the equation (\ref{lhs}). \section{Series of zero series} If $s$ is a zero of $\eta(s)$ such that $\Re s>0$, then for any natural number $m$ \cite[p. 10]{Lee:2013} $$ \fr {\lambda(m)}/{m^s}\,\suly\,\fr {(-1)^{l+1}}/{l^s} = \fr {\lambda(m)}/{m^s}\,(0)=0, $$ which obviously yields \cite[p. 11]{Lee:2013} \begin{equation} \label{rhs} \sumy\, \suly\,\fr {\lambda(m)(-1)^{l+1}}/{m^s l^s} = 0, \qquad (\Re s > 0, \eta(s)=0). \end{equation} \section{Comparison of double series} In order to prove (\ref{Leeclaim}), Mr. 
Lee states in \cite[theorem 3.10]{Lee:2013} that the double sums in (\ref{lhs}) and (\ref{rhs}) are equal because the product sets of integers $(n,l)$ in (\ref{lhs}) and $(m,l)$ in (\ref{rhs}) are identical. In fact, what needs to be justified is that the order of summation in the double series can be changed~: \begin{align} \label{Snm} (1-2^{1-s}) \zeta(2s) = & \suny \, \sumn \fr { \lambda(m)(-1)^{l+1}}/{m^s l^s}, \qquad(l=\fr n/m) \\ \label{Smn} =& \sumy\, \sum_{n=m, m\mid n}^\infty \fr {\lambda(m)(-1)^{l+1}}/{m^s l^s}, \qquad(l=\fr n/m) \\ \nonumber =& \sumy\, \suly \fr {\lambda(m)(-1)^{l+1}}/{m^s l^s} \,=\, 0, \quad (\fr 1/2<\Re s <1, \eta(s)=0). \end{align} Some justification is needed here for the inversion between (\ref{Snm}) and (\ref{Smn}) since according to Riemann's rearrangement theorem for single series \cite[p. 318]{Knopp:1954}, \begin{quote} ``If a series converges, but not absolutely, its sum can be made to have any arbitrary value by a suitable derangement of the series; it can also be made divergent or oscillatory." \cite[p. 74]{Bromwich:1955} \end{quote} Although the original and the reordered series have exactly the same terms, the second can diverge or converge conditionally to a different sum. A double series can be summed by columns (\ref{lhs},\ref{Snm}), by rows (\ref{rhs},\ref{Smn}), or by expanding rectangles in the sense of Pringsheim \cite[\S 2.5]{WhittakerWatson:1927}. For positive terms, the three sums are equal if any one of them converges. \begin{quote} ``When the terms of the double series are positive, its convergence implies the convergence of all the rows and columns, and its sum is equal to the sum of the two repeated series." \cite[p. 84]{Bromwich:1955} ``The terms being always positive, if either repeated series is convergent, so is the other and also the double series; and the three sums are the same." \cite[p. 84]{Bromwich:1955} \end{quote} These properties can be extended easily to absolutely convergent double series. 
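In the absolutely convergent regime the question does not arise: at a real point such as $s=2$, every mode of summation agrees. The following Python sketch (our own numerical illustration) checks the closed form (\ref{thmbeta}) against the definitions (\ref{defomeg})--(\ref{defbeta}), and then the identity (\ref{lhs}) at $s=2$, using the classical values $\zeta(2)=\pi^2/6$, $\zeta(4)=\pi^4/90$ and $\eta(2)=\pi^2/12$:

```python
import math

def Omega(m):
    """Number of prime factors of m counted with multiplicity (Omega(1) = 0)."""
    count, p = 0, 2
    while p * p <= m:
        while m % p == 0:
            m //= p
            count += 1
        p += 1
    return count + (1 if m > 1 else 0)

def beta(n):
    """Divisor sum defining beta(n), with l = n/m."""
    return sum((-1) ** Omega(m) * (-1) ** (n // m + 1)
               for m in range(1, n + 1) if n % m == 0)

# closed form: 1 on squares, -2 on twice-squares, 0 otherwise
for n in range(1, 301):
    if math.isqrt(n) ** 2 == n:
        assert beta(n) == 1
    elif n % 2 == 0 and math.isqrt(n // 2) ** 2 == n // 2:
        assert beta(n) == -2
    else:
        assert beta(n) == 0

# the identity (1 - 2^{1-s}) zeta(2s) = sum_n beta(n)/n^s at s = 2
s = 2
zeta2, zeta4, eta2 = math.pi ** 2 / 6, math.pi ** 4 / 90, math.pi ** 2 / 12
lhs = (1 - 2.0 ** (1 - s)) * zeta4
rhs = sum(beta(n) / n ** s for n in range(1, 2001))
assert abs(lhs - rhs) < 1e-5
# the Dirichlet-product route gives the same value: eta(s) zeta(2s) / zeta(s)
assert abs(lhs - eta2 * zeta4 / zeta2) < 1e-12
```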
\begin{quote} ``It is clear that an array whose elements are indiscriminately positive and negative, if it converges absolutely, may be treated as if it were a convergent array of positive terms." \cite[p. 356]{Goursat:1904} \end{quote} But the situation for general terms is not so simple. A striking example from Ces\`aro makes this clear \cite[p. 89]{Bromwich:1955}~: if $a_{m,n}=(-1)^{n+1}b_n(1-b_n)^{m-1}$ where $b_n=\tfr 1/{2^{\lfloor{n/2}\rfloor+1}}$, then the sum of row $m$ is $\tfr 1/{2^m}$, converging absolutely, and so the sum by rows is 1. But the sum of column $n$ is $(-1)^{n+1}$, so the sum by columns is oscillating. In a similar example due to Arndt \cite[p. 356]{Goursat:1904}, the sum by columns is $\tfr 1/2$ while the sum by rows is $-\tfr 1/2$. Thus \begin{quote} ``the sum of a non-absolutely convergent double series may have different values according to the mode of summation" \cite[p. 89]{Bromwich:1955}. \end{quote} \begin{quote} ``A double series should not be used in computations unless it is absolutely convergent." \cite[p. 357]{Goursat:1904} \end{quote} \bigskip However, Pringsheim has proven \cite[p. 28]{WhittakerWatson:1927} the following result~: \begin{quote} ``If the rows and columns converge, and if the double series is convergent, then the repeated sums are equal" \cite[p. 81]{Bromwich:1955}. \end{quote} For the double series in (\ref{Smn}), column $n$ has a finite number of terms whose sum is $\tfr \beta(n)/n^s$, while the sum of each row $m$ converges to $0$ if $\eta(s)=0$. But it is not proven in \cite{Lee:2013} that the sums by expanding rectangles converge, and Pringsheim's theorem cannot be used to show that the sum of the double series in (\ref{Snm}) is zero.
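Ces\`aro's example is easy to replay numerically. In the following Python sketch (our own illustration; rows are indexed by $m$ and columns by $n$), the row sums are $2^{-m}$, so summing them gives $1$, while the column sums are $(-1)^{n+1}$, whose partial sums oscillate and never settle:

```python
import itertools

# Cesaro's array: a_{m,n} = (-1)^{n+1} b_n (1-b_n)^{m-1},
# with b_n = 1/2^(floor(n/2)+1); rows indexed by m, columns by n.
def a(m, n):
    b = 1.0 / 2 ** (n // 2 + 1)
    return (-1) ** (n + 1) * b * (1 - b) ** (m - 1)

# each row sums to 2^-m (alternating pairs cancel), so the row sums add to 1
for m in range(1, 6):
    assert abs(sum(a(m, n) for n in range(1, 2002)) - 0.5 ** m) < 1e-9
assert abs(sum(0.5 ** m for m in range(1, 60)) - 1) < 1e-12

# each column sums to (-1)^{n+1}: a geometric series in m
for n in range(1, 8):
    assert abs(sum(a(m, n) for m in range(1, 20000)) - (-1) ** (n + 1)) < 1e-6

# so the partial sums of the column sums oscillate between 1 and 0
partials = list(itertools.accumulate((-1) ** (n + 1) for n in range(1, 9)))
assert partials == [1, 0, 1, 0, 1, 0, 1, 0]
```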
\section{Refutation of the first version of Mr Lee's proof} The simple proof of the ``Riemann Hypothesis" proposed in \cite{Lee:2013}, although interesting and original, is clearly incomplete~: a crucial theorem presents conditionally convergent infinite series as sums over sets, without specifying the order of summation, and without providing any justification for disregarding this order. \section{Uniform convergence and double sequences} \subsection{} After being made aware of this gap in his proof, the author of \cite{Lee:2012} and \cite{Lee:2013} suspended \cite{Lee:2012} and proposed in \cite{Lee:2013a} a justification based on the Moore theorem for the inversion of two limits, one of which is uniform \cite[p. 28]{DundordSchwartz:1957}. The second version of Mr Lee's proof was made public one year later in \cite{Lee:2014}, and relies on two theorems (2.13, 2.15) from a very readable elementary report about double sequences made available over the Internet in 2005 by Dr Eissa Habil \cite{Habil:2005}. 
\subsection{} Below, the expression ``exist-U" means that the convergence is uniform with respect to the free variable in $\mathbb{N}$, and we will use the following abbreviations~: $$ \begin{array}{l|ll} \text{Symbol} & \text{Long notation} & \text{Description} \\ \hline \\ f(\infty,n) & \lim_{m\to\infty} f(m,n) & \text{first partial limit}, n\in\mathbb{N} \\ f(m,\infty) & \lim_{n\to\infty} f(m,n) & \text{second partial limit}, m\in\mathbb{N} \\ f(\infty,\infty) & \lim_{m,n\to\infty} f(m,n) & \text{limit of double sequence} \\ f(\infty_1,\infty_2) & \lim_{m\to\infty}\lim_{n\to\infty} f(m,n) & \text{first iterated limit} \\ f(\infty_2,\infty_1) & \lim_{n\to\infty}\lim_{m\to\infty} f(m,n) & \text{second iterated limit} \end{array} $$ \subsection{} Following \cite[Definition 2.1]{Habil:2005}, a double sequence $f(m,n)$ of complex numbers converges to zero if and only if $$ (\forall \epsilon>0) (\exists N \in \mathbb{N}) \text{ such that } m>N \And n>N \Rightarrow |f(m,n)|<\epsilon. $$ \subsection{}\label{thm215} Two conditions sufficient for such convergence to zero are specified in \cite[Theorem 2.15]{Habil:2005}~: $$ f(m,\infty) \text{ exists-U } \And f(\infty_1,\infty_2)=0 \implies f(\infty,\infty)=0. $$ Proof: In the inequality $$ \left|f(m,n)\right| \le \left|f(m,n) - f(m,\infty)\right| + \left|f(m,\infty)\right|, $$ the last term is small if $m$ is large by hypothesis, while the middle difference is small if $n$ is large, independently of the previously chosen large $m$ by uniformity. \subsection{}\label{thm211c} Conversely \cite[Theorem 2.11, corrected]{Habil:2005}~: $$ f(m,\infty) \text{ exists-U } \And f(\infty,\infty)=0 \implies f(\infty_1,\infty_2)=0. 
$$ Proof: In the inequality \begin{equation} \label{ineqfmn} \left|f(m,\infty)\right| \le \left|f(m,\infty) - f(m,n)\right| + \left|f(m,n)\right|, \end{equation} the last term is small if $m$ and $n$ are large by hypothesis, while the middle difference is small if $n$ is large, independently of the previously chosen large $m$ by uniformity. \subsection{}\label{thm213c} As a corollary \cite[Theorem 2.13, corrected]{Habil:2005}~: $$ f(\infty_1,\infty_2)=0 \And f(m,\infty) \text{ exists-U } \And f(\infty,n) \text{ exists-U } \implies f(\infty_2,\infty_1)=0. $$ Moore's theorem \cite[p. 28]{DundordSchwartz:1957} is stronger than this, requiring only {\em one} uniform limit~: $$ f(m,\infty) \text{ exists } \And f(\infty,n) \text{ exists-U } \implies f(\infty,\infty)=f(\infty_1,\infty_2) = f(\infty_2,\infty_1). $$ \subsection{} Unfortunately, the important uniformity condition has been omitted in \cite[Theorems 2.11, 2.12, 2.13]{Habil:2005}, for example in theorem (2.11)~: $$ f(m,\infty) \text{ \em exists } \And f(\infty,\infty)=0 \implies f(\infty_1,\infty_2)=0. $$ The proposed proof relies on the inequality (\ref{ineqfmn}) and on the three bounds: \\ 1. ``Given $\epsilon>0$, there exists $N_1$ such that $|f(m,n)|<\tfr\epsilon/2$ if $m,n>N_1$"; \\ 2. ``There exists $N_2$ such that $|f(m,\infty)-f(m,n)|<\tfr\epsilon/2$ if $n>N_2$"; \\ 3. ``Choose $n>\max(N_1,N_2)$. Then $\forall m>N_1, |f(m,\infty)| \le \epsilon$". \\ In general, the integer $N_2$ could however depend on $m$ in the second bound, so that it could be impossible to choose $n>N_2, \forall m>N_1$ for the third bound! Fortunately, the hypothesis can be modified easily by replacing ``{\em exists}" with ``{\em exists uniformly}" in order to make the proof correct, as was done in \S\ref{thm211c}. This stronger condition already appears in the statement of \cite[Theorem 2.15]{Habil:2005} in \S\ref{thm215}. 
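The need for some supplementary condition when exchanging limits is well illustrated by a classical example (our own addition, not taken from \cite{Habil:2005} or \cite{Lee:2014}): for $f(m,n)=m/(m+n)$, both iterated limits exist but differ, so no theorem can permit the exchange unconditionally. A Python sketch~:

```python
# For f(m,n) = m/(m+n) the two iterated limits exist but disagree:
#   lim_m lim_n f(m,n) = lim_m 0 = 0   (first iterated limit)
#   lim_n lim_m f(m,n) = lim_n 1 = 1   (second iterated limit)
# and the double limit does not exist, e.g. f(m,m) = 1/2 along the diagonal.

def f(m, n):
    return m / (m + n)

BIG = 10 ** 9
# inner limit in n is 0 for each fixed m, so the first iterated limit is 0
assert all(f(m, BIG) < 1e-6 for m in range(1, 100))
# inner limit in m is 1 for each fixed n, so the second iterated limit is 1
assert all(f(BIG, n) > 1 - 1e-6 for n in range(1, 100))
# along the diagonal the value stays at 1/2
assert f(BIG, BIG) == 0.5
```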
\section{Uniform convergence and double series} \subsection{} The convergence to zero of a double series of complex numbers $a_{m,n}$ is defined as the convergence to zero of the double sequence of partial sums \cite[Definition 7.1]{Habil:2005} $$ S(M,N)=\sum_{m=1}^M\sum_{n=1}^N a_{m,n}, \qquad(M,N\in\mathbb{N}). $$ \subsection{} Following \cite[Definition 3.1]{Lee:2014}, let $$ a_{m,n} = \begin{cases} \dfrac{\lambda(m)\,(-1)^{1+n/m}}{n^s}, &\text{if }m\text{ divides } n \\ 0, &\text{otherwise} \end{cases} $$ where $\tfrac{1}{2}<\Re s<1$ and $\eta(s)=0$, if possible. The iterated series in (\ref{Snm}) then corresponds to $S(\infty_2,\infty_1)$, and the iterated series in (\ref{Smn}) to $S(\infty_1,\infty_2)$. \subsection{} Both partial limits certainly exist in this case, from (\ref{lhs}) and (\ref{rhs})~: $$ S(\infty,N)=\sum_{n=1}^N \frac{\beta(n)}{n^s}, \ \forall N\in\mathbb{N};\quad S(M,\infty)=\sum_{m=1}^M \frac{\lambda(m)}{m^s}\,\eta(s)=0, \ \forall M\in\mathbb{N}. $$ In \cite[Lemma 3.5]{Lee:2014}, only the differences $S(\infty,n)-S(\infty,n-1)=\frac{\beta(n)}{n^s}$ and $S(m,\infty)-S(m-1,\infty)=0$ are verified, but this is equivalent by telescoping finite sums. Obviously, $S(\infty_1,\infty_2)=\lim_{M\to\infty}S(M,\infty)=0$ also. \subsection{} From the stronger version of corollary \ref{thm213c} derived from Moore's theorem, $$ S(\infty_1,\infty_2)=0 \And S(M,\infty) \text{ exists } \And S(\infty,N) \text{ exists-U } \implies S(\infty_2,\infty_1)=0. $$ In order to prove that $(1-2^{1-s})\zeta(2s)=S(\infty_2,\infty_1)=0$, we thus only need the uniformity with respect to $N\in\mathbb{N}$ in the partial limit $$ S(\infty,N) = \lim_{M\to\infty} \sum_{m=1}^M \sum_{n=m,\,m\mid n}^N \frac{(-1)^{\Omega(m)}(-1)^{1+n/m}}{n^s}. $$ Equivalently \cite[p. 123]{Bromwich:1955}, we must show that $$ (\forall \epsilon>0) (\exists M \in \mathbb{N}) \text{ such that } \left|\sum_{m=M}^{M+p} \sum_{n=m}^N a_{m,n}\right|<\epsilon, \qquad(\forall N\in\mathbb{N}, \forall p\in\mathbb{N}). $$ \subsection{} Unfortunately, in \cite[Lemma 3.5]{Lee:2014}, we only find the verification of a different result, ``$\sum_{n=1}^{\infty} a_{m,n}$ converges to zero uniformly in $m$", which is equivalent to $$ (\forall \epsilon>0) (\exists N \in \mathbb{N}) \text{ such that } \left|\sum_{n=N}^{N+p} a_{m,n}\right|<\epsilon, \qquad(\forall m\in\mathbb{N}, \forall p\in\mathbb{N}). $$ \section{Refutation of the second version of Mr Lee's proof} The simple proof of the ``Riemann Hypothesis" proposed in \cite{Lee:2014} is incomplete~: it refers to an unproven, probably misstated, theorem from another report that has not been published. Even if this crucial theorem were accepted as is, or more appropriately restated to make its current proof correct, only two of the three needed conditions are verified in \cite{Lee:2014}. The gap found in the first version of the proof \cite{Lee:2013} has not been filled in the second version \cite{Lee:2014}. \bibliographystyle{plain}
\section{Introduction} Periocular NIR images have mainly been used to recognize subjects in controlled environments and for soft-biometrics applications such as gender classification \cite{Periocular_tapia}. Improvements in, and cost reductions of, iris capture devices will enable broader applications and may bring newer challenges. One unique challenge is identifying whether a subject is under the effects of alcohol, drugs or sleep deprivation. This area is known as Fitness for Duty (FFD) \cite{FFD,murphy} and aims to determine whether a person is physically and psychologically able to perform their task \cite{murphy}. An unfit subject is one under the influence of alcohol, drugs or sleepiness. Working under such conditions can degrade workers' performance and increase the risk of accidents. The impact of certain drugs (including alcohol) on the oculomotor system has been extensively studied in the literature \cite{BrownAdamsHaegerstrom-PortnoyEtAl1977, AroraVatsaSinghEtAl2012, Tomeo-ReyesRossChandran2016, M.C.RowbothamJones1987, M.C.RowbothamBenowitz.1984}. Alcohol, alone or mixed with cocaine or marijuana, and sleep disruption are psychoactive agents that change the brain's functions. Such changes may manifest in perception, memory, decision-making, attention, motor activity, and many other brain-related processes. Alcohol consumption can lead to increased sleepiness and reduced alertness, even after the alcohol is no longer detectable in blood, and alcohol intoxication significantly impairs physical performance and cognitive functions. Numerous studies in the literature have examined the impact of these agents on basic skills \cite{GennaroFerraraUrbaniEtAl2000}. According to the World Health Organisation, there are 3.3 million deaths worldwide every year due to the harmful use of alcohol, representing 5.9\% of all deaths per year \footnote{\url{http://www.who.int/mediacentre/factsheets/fs349/en/}}. 
Duty of Care legislation exists in some countries, such as Australia, the USA, and Chile, that places responsibility on both employers and employees for maintaining a safe workplace. The employer's burden is to provide and maintain a safe and healthy workplace. The employee's responsibility is to undertake lifestyle management that ensures fitness for duty. As such, detecting people under the influence of alcohol is critical to preventing injuries and accidents. Therefore, it is essential to have systems that can robustly and efficiently detect the influence of alcohol in workers to ensure ``Fitness for Duty". Such observation and analysis systems are broadly needed, with high impact, in several industries such as mining, health, logistics (drivers), insurance, and others \cite{miner}. Several commercial products are currently available to measure FFD. For instance: The Optalert\footnote{\url{https://www.optalert.com/}} is an infrared reflectance (IR) oculography device based on the principle that when an individual is tired, the central nervous system inhibits the muscle groups controlling eye and eyelid movements. The Sobereye\footnote{\url{https://www.sober-eye.com/}} is a portable device used to predict impairment caused by substance abuse or fatigue. It uses a smartphone attached to an opaque enclosure that fits over a user's eyes to measure the Pupillary Light Reflex (PLR). The PMI-FIT 2000\footnote{\url{https://www.pmifit.com/}} uses eye-tracking and pupillometry to identify impaired physiological states due to fatigue and other factors, such as alcohol or drug use. Some critical issues have been reported regarding the type of device response and the possibility of impersonation. Most of these products are portable devices (wearables) or rely on personal identification numbers; impersonation is easy because they lack a biometric identification stage. 
They are mainly designed to detect risky events related to tiredness and its correlation with alcohol, drugs and other factors. This work focuses on detecting the presence of alcohol proactively using iris information. It is essential to highlight that our proposal does not perform a traditional analysis based on the measurement of alcohol in blood using an alcohol test or other devices. This research aims to determine the effect of external agents such as alcohol on the CNS and how this effect manifests as behavioural changes in pupil and iris diameters and movements. This paper proposes a Fusion Capsule Network-based algorithm to automatically detect alcohol consumption from data extracted from periocular NIR images. The main contributions of this work are the following: \begin{enumerate} \item A novel dataset of 3,000 NIR periocular images from 30 volunteers was captured, which contains sessions with and without the influence of alcohol. This dataset will be available to other researchers upon request \footnote{\url{https://github.com/jedota/Iris_alcohol_classification}}. \item A novel Capsule Network-based algorithm that detects the influence of alcohol in periocular NIR images is proposed. The algorithm is called the Fusion Capsule Network (F-CapsNet). It includes a convolutional block that extracts features from each data class separately and then fuses them into a two-layer capsule network. The resulting algorithm improves the performance of the traditional Capsule Network while using fewer parameters. \item Additional classifiers such as SVM, Small-VGG, and feature extraction using a pre-trained model (embedding) were also implemented and tested for comparison as baselines. \end{enumerate} The remainder of this paper is structured as follows: related work is reviewed in Section \ref{SOA}. The Fusion Capsule Network algorithm and the captured database are described in Section \ref{FCN}. 
Experiments and conclusions are reported in Sections \ref{Experiments} and \ref{Conclusions}, respectively. \vspace{-0.2cm} \section{Related Work} \label{SOA} This section gives a brief review of the influence of alcohol on iris changes and its effects on ``Fitness for Duty". Alcohol (ethyl alcohol or ethanol) is mainly used for recreational purposes and is the second most widely used drug after caffeine. It is essential to point out that FFD does not correspond to the traditional alcohol test. Here, we are detecting the effect of alcohol consumption on the CNS and the alertness of the worker, and not the Blood Alcohol Concentration (BAC). (See Figure \ref{diagrama}). Alcohol consumption directly affects the pupil, causing changes such as dilation. These changes can be detected using state-of-the-art computer vision techniques, including Deep Learning. \begin{figure}[] \begin{centering} \includegraphics[scale=0.35]{images/changes.png} \par\end{centering} \caption{\label{diagrama} Block diagram of the influence of alcohol on the Central Nervous System.} \end{figure} Alcohol is a depressant of the Central Nervous System (CNS). It diminishes environmental awareness, response to sensory stimulation, cognitive functioning, spontaneity, and physical activity. In high doses, alcohol can produce increasing drowsiness, lethargy, amnesia, anti-epileptic effects, hypnosis, and anaesthesia. It is, therefore, not surprising that most countries restrict people from driving and operating dangerous equipment under the influence of alcohol, i.e., when blood alcohol concentration exceeds a certain level (for example, 0.05\% in Australia and 0.08\% in the United Kingdom \footnote{\url{http://www.who.int/gho/alcohol/en/}}). 
Alcohol abuse directly affects workplaces with increased worker absenteeism, reduced productivity, increased tardiness, frequent stoppages, lower quality work, an increased number of accidents causing injury, and equipment damage \cite{wickwire2017shift, Frone, DorrianandSkinner, leo_causa}. Navarro et al. \cite{Navarro7877181} developed a system that captures the driver's iris image to detect whether the person is drunk. That work comprises a hardware and software system implementing an algorithm based on Gabor filters. The system consists of a Charge-Coupled Device (CCD) camera and an analog-to-digital converter linked to a program that simulates the captured image. If the software detects that the driver is under the influence of alcohol, the system provides a signal that interacts with the car ignition. Monteiro et al. \cite{MonteiroPinheiro2015} proposed a non-invasive and simple test to detect the use of alcohol through pupillary reflex analysis. Results show detection accuracy close to 85\% using pattern recognition algorithms, demonstrating the efficacy of the test method. The main limitation of that work is the active participation required of the volunteers; each volunteer had to stay in a dark testing room for approximately 5 minutes to adapt the pupil dilation/constriction to the darkness. Arora et al. \cite{AroraVatsaSinghEtAl2012} presented a preliminary study on the impact of alcohol on an iris recognition system. The experiments were performed on the `Iris Under Alcohol Influence' database. Results show that when comparing pre- and post-alcohol consumption images, the overlap between mated and non-mated comparison score distributions increases by approximately 20\%. These results were obtained using a relatively small database (220 pre-alcohol and 220 post-alcohol images obtained from 55 subjects). 
The subjects consumed about 200 ml of alcohol (with a 42\% concentration level) in approximately 15 minutes, and the images were captured 15--20 minutes after alcohol consumption. That work suggests that about one in five subjects under the influence of alcohol may fail identification by iris recognition systems. Bernstein et al. \cite{BernsteinMendezSunEtAl2017} used spectrogram images of size $224\times224$ from audio waveforms to identify the presence of alcohol with Convolutional Neural Networks (CNN) and wearable sensors. They used 80 training spectrograms (40 positive, 40 negative) and 20 test spectrograms (10 positive, 10 negative), obtaining promising results for non-audio waveforms. Koukiou et al. \cite{Koukiou,facialTermal} proposed the use of thermal images to identify individuals under the influence of alcohol. They have shown that changes in the eye temperature distribution of intoxicated individuals can be detected using thermal imagery \cite{eyetermal}. \section{Proposed Method} \label{FCN} This work proposes a modified Capsule Network architecture \cite{sabour} called the Fusion Capsule Network (F-CapsNet). It uses periocular NIR images to detect alcohol consumption. The architecture details are described in Section \ref{fcapnet}. As an additional contribution, a novel periocular NIR database of volunteers with and without the influence of alcohol was captured. The images were analysed in order to show the influence of alcohol on the deformation of the pupil (Section \ref{databasedetails}). \subsection{Fusion Capsule Network (F-CapsNet)} \label{fcapnet} Deep Learning techniques based on Convolutional Neural Networks have been demonstrated to be a valid alternative to handcrafted methods for feature selection. CNN algorithms can learn specific features automatically. However, they usually require large volumes of data to train the network. Furthermore, the quantity of data required increases as the architecture goes deeper. 
In order to overcome the limitations of CNNs when working with reduced volumes of data, and their difficulties handling changes in rotation or translation of the input data, a Capsule Neural Network is proposed. Unlike the traditional implementation, the proposed Fusion Capsule Network algorithm includes a pre-step in which feature extraction is performed separately by a pre-trained network for each class (alcohol and no-alcohol images). A convolutional network block comprises the convolutional layers and one fusion layer. The extracted features for each category are then concatenated and used as input to the capsule network. The capsule block includes two layers with $N$ capsules. A scheme of the architecture used in the training process is shown in Figure \ref{archcap}. At inference time, only one periocular image (potentially under the influence of alcohol), containing both eyes, is sent to the system. This architecture helps to reduce the number of network parameters, making the algorithm suitable for implementation on mobile devices. In order to train and test the algorithm, a novel database was captured, which is described as follows. \begin{figure}[] \begin{centering} \includegraphics[scale=0.3]{images/capnet1.png} \caption{ \label{archcap} Representation of the proposed F-CapsNet training architecture for alcohol classification. The convolutional layers extract information separately for alcohol and no-alcohol images and concatenate it before it enters the regular capsule network. At inference time, only one periocular image (potentially under the influence of alcohol) is sent to the system.} \par\end{centering} \end{figure} \subsection{IAL-I Database} \label{databasedetails} To the best of our knowledge, there is no publicly available database of NIR periocular images captured from volunteers under the influence of alcohol. A novel database (IAL-I) of 30 volunteer subjects, 24 male and six female, in the 25--50 age group, was captured in this work. 
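Two of the F-CapsNet ingredients described above, the capsule nonlinearity and the per-class feature fusion, can be sketched as follows. This is a minimal illustration of ours, not the authors' implementation; it uses the ``squash" function of Sabour et al. and a simple concatenation standing in for the fusion layer.

```python
import math

# Sketch (ours): capsule "squash" nonlinearity and per-class feature fusion.

def squash(v, eps=1e-9):
    """Scale capsule vector v: direction is kept, norm is mapped into [0, 1)."""
    norm_sq = sum(x * x for x in v)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * x for x in v]

def fuse(features_alcohol, features_no_alcohol):
    """Concatenate features extracted separately for each class."""
    return features_alcohol + features_no_alcohol

v = squash([3.0, 4.0])   # input norm 5 -> output norm 25/26, same direction
print(math.sqrt(sum(x * x for x in v)))
print(len(fuse([0.1] * 4, [0.2] * 4)))  # fused feature length 8
```

The squash step is what lets a capsule's vector length be read as a probability, since long vectors saturate just below 1.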
The capture protocol and image analysis are presented below. A health committee evaluated the capture process before it started. As requested by the health committee, all volunteers signed a consent form before image capture. This database will be available on request. \subsubsection{Data Capture} The capture process is cooperative and delivers periocular images containing both eyes. On average, 20 frames are captured per subject. This capture process takes five seconds. It is essential to highlight that subjects under the influence of alcohol show involuntary movement along the X-axis because they cannot hold their position. This movement adds blur to the captured images. The authors reported the problems associated with eye detection and segmentation under the effects of alcohol in a separate publication \cite{tapia2021semantic}. TD-100 and Iritech NIR capture devices were used under a controlled environment. Marks on the floor at 30 cm and 50 cm distance from the camera were used to facilitate the data capture process. Each subject was requested to step on the first mark (50 cm from the camera) and to look at the NIR sensor. An image of both irises was captured. A second image was taken with the subject placed at the second mark (30 cm from the camera). This process was repeated five times for each volunteer. On the fifth time, a sequence of 20 consecutive frames of both eyes was captured to record changes in the pupil due to the light used by the NIR sensor. This image sequence allows velocity changes of the iris across frames to be measured and, therefore, helps estimate the influence of alcohol on the volunteers. Alcohol directly affects the velocity of iris adjustment to direct light. The data capture process with both devices was organized into five sessions, according to our protocol: \begin{itemize} \item Session 0: Images were captured when volunteers were not under the influence of alcohol. 
\item Session 1: Images were captured 15 minutes after the volunteers consumed 200 ml of alcohol with a concentration level of 42\%. \item Session 2: Images were captured 30 minutes after alcohol consumption. \item Session 3: Images were captured 45 minutes after alcohol consumption. \item Session 4: Images were captured 60 minutes after alcohol consumption. \end{itemize} The room temperature and lighting (200 lux) were kept constant during the capture process. A total of ten images per eye were captured for each volunteer per session. The total number of images in each session is shown in Table \ref{database}. \begin{table}[H] \centering \scriptsize \caption{\label{database} Database description.} \begin{tabular}{|c|c|c|c|l|c|} \hline \textbf{Session} & \textbf{Condition} & \textbf{Time (min)} & \textbf{Alcohol} & \multicolumn{1}{c|}{\textbf{Images}} & \textbf{Total} \\ \hline S0 & Pre-alcohol & 0 & 0 & 20 (L-R) & 600 \\ \hline S1 & Post-alcohol & 15 & 200 ml & 20 (L-R) & 600 \\ \hline S2 & Post-alcohol & 30 & 0 & 20 (L-R) & 600 \\ \hline S3 & Post-alcohol & 45 & 0 & 20 (L-R) & 600 \\ \hline S4 & Post-alcohol & 60 & 0 & 20 (L-R) & 600 \\ \hline \end{tabular} \end{table} A total of 600 images of volunteers not under the influence of alcohol were captured, and 2,400 images were taken after each volunteer had ingested 200 ml of alcohol (images taken at 15-minute intervals after consumption). The database was divided into 70\% for training and 30\% for testing. The partition is subject-disjoint. Example images are shown in Figure \ref{fig:figceleb}. 
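The subject-disjoint 70/30 partition mentioned above can be sketched as follows. This is illustrative code of ours, not the authors' partitioning script; the subject count and images-per-subject follow the IAL-I description (30 volunteers, 3,000 images).

```python
import random

# Sketch (ours) of a subject-disjoint split: every image of a given
# volunteer lands entirely in train or entirely in test.

def subject_disjoint_split(image_subjects, train_frac=0.7, seed=0):
    """image_subjects: list with one subject id per image; returns index lists."""
    subjects = sorted(set(image_subjects))
    rng = random.Random(seed)
    rng.shuffle(subjects)
    cut = int(round(train_frac * len(subjects)))
    train_ids = set(subjects[:cut])
    train = [i for i, s in enumerate(image_subjects) if s in train_ids]
    test = [i for i, s in enumerate(image_subjects) if s not in train_ids]
    return train, test

# 30 subjects x 100 images each, as in IAL-I (3,000 images in total)
labels = [s for s in range(30) for _ in range(100)]
train, test = subject_disjoint_split(labels)
print(len(train), len(test))   # 2100 900, i.e. 21 vs 9 subjects
```

Splitting by subject rather than by image is what prevents the classifier from scoring well simply by memorizing individual eyes.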
\begin{figure}[h] \begin{centering} \begin{tabular}{cc} \includegraphics [scale=0.15]{images/E_1_0_0_R_M_N_N_1977_0_2017.png} \includegraphics [scale=0.15]{images/E_1_0_0_L_M_N_N_1977_0_2017.png}\tabularnewline \includegraphics [scale=0.15]{images/E_1_0_2_R_M_N_N_1976_0_2017.png} \includegraphics [scale=0.15]{images/E_1_0_2_L_M_N_N_1976_0_2017.png}\tabularnewline \end{tabular} \par\end{centering} \caption{\label{fig:figceleb} Top: Example of eyes in the control condition. Bottom: Eyes under the effect of alcohol 30 minutes after consumption. The images belong to the same subject.} \end{figure} \subsubsection{Image analysis} According to the literature \cite{AroraVatsaSinghEtAl2012}, the influence of alcohol affects the size of the pupil and iris. It involves, in particular, the relation between both measures: the iris and pupil radii. This relation is called $p$; it takes normalized values between 0.0 and 1.0 according to the following expression: \begin{equation} p(A)=\frac{Pr}{Ir} \end{equation} where $A$ is the image and $p(A)$ represents the ratio between $Pr$ (pupil radius) and $Ir$ (iris radius). This measure can be used to compare the standard deviation between the ratio with alcohol, $p(A)$, and the ratio without alcohol, $p(NA)$. A value $p(A)>X$ is considered dilation, and $p(A)<Y$ represents contraction, where $X$ is the average ratio of volunteers under control conditions (without any alcohol consumption). A change can be observed in the data captured 30 minutes after the subjects consumed alcohol. However, the pupil size ratio did not change drastically in the other sessions. The pupil ratio alone cannot be considered an indicator of the presence of alcohol. In fact, the distributions overlap. It is impossible to easily separate both distributions using only a threshold or a traditional machine learning algorithm. 
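The ratio and the dilation/contraction rule above can be sketched as follows. The helper names, the radii, the baseline $X$, and the tolerance margin are all made-up values for illustration, not from the paper.

```python
# Sketch (ours) of the pupil-to-iris ratio p and the dilation/contraction rule.

def pupil_iris_ratio(pupil_radius, iris_radius):
    """p in (0, 1): pupil radius as a fraction of the iris radius."""
    return pupil_radius / iris_radius

def classify_change(p, baseline_mean, margin=0.05):
    """Compare p against the no-alcohol baseline ratio X with a tolerance."""
    if p > baseline_mean + margin:
        return "dilation"
    if p < baseline_mean - margin:
        return "contraction"
    return "no change"

X = 0.45  # hypothetical average ratio under control conditions
print(classify_change(pupil_iris_ratio(30.0, 55.0), X))  # ~0.545 -> dilation
print(classify_change(pupil_iris_ratio(20.0, 55.0), X))  # ~0.364 -> contraction
```

As the text notes, the overlap between the with- and without-alcohol distributions means such a threshold rule alone cannot separate the classes reliably.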
Figure \ref{fig:fig_barras} shows the histograms of $p(A)$ computed for images captured in the different sessions. The pupil and iris radii were computed using the semantic segmentation method proposed in \cite{tapia2021semantic} to measure the $p$ ratio automatically. \begin{figure*}[] \centering \begin{tabular}{c} \includegraphics [scale=0.42]{images/Fig4.png} \end{tabular} \caption{\label{fig:fig_barras} Pupil-to-iris ratio histograms for each capture session (0, 15, 30, 45, and 60 minutes) shown separately (blue). Yellow represents minute 0.} \end{figure*} \section{Experiments and Results} \label{Experiments} The captured database (IAL-I) was used to train and test all the experiments. It was divided into two classes: alcohol and no-alcohol. All eye images are cropped as shown in Figure \ref{fig:figceleb}. Each periocular NIR image (left and right) has a resolution of $120\times160$ pixels. For the SVM and Small-VGG network, each image was converted into a vector of size $1 \times 19{,}200$ and stacked as a row of a matrix $M$; each column of $M$ represents one feature. Also, the pre-trained ArcFace \cite{deng2018arcface} network was used to extract embedded features (512-D) as input to the capsule network of F-CapsNet. Experiments with and without data augmentation were performed. This technique helps to increase the number of images available for training, yielding a model with better generalization properties. An image generator based on the imgaug algorithm \cite{imgaug} was used to create samples from the left and the right eyes, preserving the train, test, and validation partitions. When using data augmentation, the dataset was increased from 3,000 images to 12,000 images for each eye. The following geometric transformations were used: a rotation range of 10 degrees, width and height shift ranges of 0.2, and a zoom range of 15\%. 
All changes were made using a `nearest' fill mode, meaning that newly created pixels are filled with the value of the nearest existing pixel. The mirroring operation was not applied because it would effectively transform a left eye into a right eye, considering the pixel positions. Four sets of experiments are reported in Section \ref{Experiments}. The first is a baseline using periocular images as input to an SVM and a Small-VGG. The second uses features extracted from the embedding vector of a pre-trained ArcFace model; this embedding vector was used as input to an SVM. The third uses embedding vectors from ResNet50/101 \cite{resnet} and MobileNetV2 \cite{mobilenetv2}. The fourth uses periocular images as input to a traditional capsule network, while the proposed Fusion Capsule Network (F-CapsNet) uses features extracted from ArcFace as input to the capsule layers. \subsection{Image classification using SVM and CNN} In order to compare the performance of the proposed Fusion Capsule Network algorithm, two additional classifiers (SVM and CNN) were implemented and applied to the same database. \textbf{SVM Classifier:} An SVM classifier with a Gaussian kernel was trained using the LIBSVM implementation \cite{Chang:2011}. A 5-fold cross-validation was used on 60\% of the original data to select the best parameters. The selected model was trained on the full 60\% training data. Finally, the model evaluation was performed on the test data (40\% of the dataset). The best result reached only 63.55\% $\pm$ 0.9 accuracy in classifying alcohol and no-alcohol influence from NIR periocular images. \vspace{+0.3cm} \textbf{Small VGG - CNN Classifier:} A Small-VGG network with three convolutional blocks and two fully connected layers with a small number of neurons was used. The choice of a smaller network design was motivated by the desire to reduce the risk of overfitting. The network processed only one channel (a grayscale image of size $120\times160$). 
Convolutional layers and max-pooling are at the heart of the Small-VGG model. The three convolutional blocks are defined with 128, 256 and 512 filters, respectively. Several experiments on the fusion of left and right periocular images were performed according to the parameters reported in Table \ref{leftEye}. The classification accuracy of the algorithms is also reported in the table. The best result obtained was 73.41\%. \vspace{-0.3cm} \begin{table}[H] \centering \scriptsize \caption{Periocular results on Small-VGG. Drop-out $=0.5$. Acc. represents accuracy; BS represents batch size.} \label{leftEye} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Epoch} & \textbf{BS} & \textbf{C1} & \textbf{C2} & \textbf{C3} & \textbf{DENSE} & \textbf{Acc. (\%)} \\ \hline 100 & 16 & 32 & 32 & 32 & 7200 & \textbf{73.41} \\ \hline 100 & 16 & 16 & 16 & 32 & 7200 & 70.07 \\ \hline 100 & 16 & 8 & 8 & 16 & 7200 & 59.02 \\ \hline 100 & 16 & 4 & 4 & 8 & 7200 & 65.90 \\ \hline 100 & 16 & 32 & 16 & 16 & 7200 & 68.30 \\ \hline \end{tabular} \end{table} \subsection{Embedding} In the previous sections, images were used as direct input to the Small-VGG and SVM classifiers. However, the state of the art shows that embeddings can also be used to train a classifier. An embedding is a vector of real numbers containing the information extracted by a pre-trained network. According to our review, there are no pre-trained networks available for images under the influence of alcohol. Therefore, the ArcFace recognition model was used to extract the features. This feature vector has 512 elements. Table \ref{tab:emb} shows the classification results using embedding information extracted from ArcFace. The best result was 73.0\%, obtained when the embedding was extracted from convolutional layer 45. This result is very similar to Small-VGG. Although the embedding may help to extract information, such information is not enough to distinguish pre-alcohol and post-alcohol images. 
Fine-tuning was also explored, without success and with lower generalization power. \vspace{-0.3cm} \begin{table}[H] \centering \caption{Classification results from embedding features extracted from ArcFace.} \label{tab:emb} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Classifier} & \textbf{Layers} & \textbf{TP(\%)} & \textbf{TN(\%)} & \textbf{FP(\%)} & \textbf{FN(\%)}& \textbf{Acc.(\%)} \\ \hline SVM & 45 & 93.0 & 54.8 & 45.2 & 7.0 & \textbf{73.0} \\ \hline SVM & 50 & 86.2 & 48.7 & 51.3 & 13.8 & 67.0 \\ \hline SVM & 54 & 91.6 & 49.8 & 50.2 & 8.4 & 70.0 \\ \hline \end{tabular} \end{table} \subsection{Image classification using Pre-trained Networks} Several state-of-the-art models have been proposed using face images, and deep face recognition networks have shown that they can work very robustly, even on challenging data. Therefore, we also propose to employ deep face representations extracted by such deep face recognition systems for alcohol classification \cite{deepfake}. As mentioned before, it would be possible to apply transfer learning and re-train a pre-trained deep face recognition network to detect alcohol images. However, the high complexity of the model, represented by the large number of weights in the neural network, requires a large amount of training data. Even if only the lower layers are re-trained, the limited number of training images (and the much lower number of subjects) in our database can easily result in over-fitting to the characteristics of the training set. Three pre-trained networks were used: MobileNetV2 \cite{mobilenetv2} and ResNet50/101 \cite{resnet}. All models were pre-trained on the ImageNet database~\cite{imagenet} and were used to extract features at the lowest layer. The eye images were resized to $224\times224\times3$. The output sizes before the flatten layer are 2,048 for ResNet and 49 for MobileNetV2, respectively. Table \ref{tab:deep} shows that ResNet50 obtained the best results with 73.8\%. 
This result is very similar to the ArcFace results in Table \ref{tab:emb}. All the results are presented in Table \ref{tab:deep}. \begin{table}[H] \centering \caption{Classification results from three pre-trained networks.} \label{tab:deep} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Classifier} & \textbf{TP~(\%)} & \textbf{TN~(\%)} & \textbf{FP~(\%)} & \textbf{FN~(\%)}& \textbf{Acc.(\%)} \\ \hline MobileNetV2 & 91.0 & 60.0 & 47.0 & 10.0 & 72.6 \\ \hline ResNet50 & 88.0 & 62.0 & 45.0 & 8.0 & 73.8 \\ \hline ResNet101 & 86.0 & 58.0 & 46.0 & 9.0 & 72.3 \\ \hline \end{tabular} \end{table} \subsection{Image classification using Capsule Networks} In this section, the traditional capsule network and the proposed F-CapsNet algorithm are tested and compared. \newline \newline \textbf{Traditional Capsule Network algorithm:} The algorithm proposed by \cite{sabour} was implemented for this experiment. Two layers with several numbers of capsules were tested (8, 16, 32, and 64). Experiments with and without data augmentation were also performed. A grid search was used to look for the best parameters for capsules and routing (8, 16, 32, and 64 capsules and 3, 4, and 5 routing iterations). The output has the original image size $120 \times 160$ with sigmoid activation. The reconstruction error was optimized to increase model accuracy and avoid over-fitting to the training data. Thus, the reconstruction error weight was set to 0.0005. A grid search was performed involving six options for the reconstruction error (from 0 to 0.00005, dividing by 10 in each step), selecting from 64, 128, or 256 feature maps as the output of each convolution, and picking among five options for the kernel size of the convolutional layers (3, 5, 7, 9 and 11). Learning rate values were explored from $0.1$ down to $10^{-5}$. The best value was $10^{-5}$. 
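The hyper-parameter grid search described above can be sketched as follows. The `score` function is a stand-in we invent for ``train a CapsNet with this configuration and return its validation accuracy"; the grids match the ranges stated in the text.

```python
import itertools

# Sketch (ours) of the grid search over capsules, routing iterations and
# learning rates described in the text.

capsules = [8, 16, 32, 64]
routing_iters = [3, 4, 5]
learning_rates = [0.1, 0.01, 0.001, 0.0001, 0.00001]

def score(n_caps, n_routing, lr):
    # Mock objective standing in for validation accuracy: prefers lr = 1e-5,
    # fewer routing iterations, and more capsules.
    return -abs(lr - 1e-5) - 0.001 * n_routing + 0.0001 * n_caps

best = max(itertools.product(capsules, routing_iters, learning_rates),
           key=lambda cfg: score(*cfg))
print(best)   # best (mock) configuration: capsules, routing, learning rate
```

In the real search each configuration costs a full training run, so the grid is kept coarse and the reconstruction-error and kernel-size grids are searched in the same way.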
\textbf{Proposed Fusion Capsule Network algorithm (F-CapsNet):} The proposed architecture (as shown in Figure \ref{archcap}) uses a pre-trained convolutional block to extract the features separately from each class (alcohol and no-alcohol). The outputs of the convolutional layers with the best features for alcohol and no-alcohol images are merged before going through the capsule layers. This process allows the number of parameters of the model to be reduced (see Table~\ref{tab:res_cap}). F-CapsNet uses a decoder (at the end of the last capsule) to regularise and reconstruct the original iris images. The first layer with 8 capsules has $9{,}600$ vectors ($80\times60\times2$), ReLU activation, $256$ filters, a kernel size of $3 \times 3$, and a stride of $1$. The second layer with 8 capsules has 1,024 neurons, ReLU activation, 512 filters, a stride of $2$, and a kernel size of $3 \times 3$. Four different options were considered for the number of primary capsules and their dimensions (8, 16, 32, and 64), according to the number of feature maps. The number of routing iterations was tuned by considering $5$ different options (a sequence from $1$ to $5$ with a step of $1$). The best results obtained are reported in Table \ref{tab:res_cap}. The proposed model Caps8-Fusion reaches a high accuracy ($92.35\%$) when detecting the influence of alcohol in periocular NIR images. The resulting model contains fewer parameters than the traditional capsule network implementation (a reduction of approximately $50\%$ in the number of parameters). Table~\ref{tab:res_cap} shows the results obtained for each experiment. The first column indicates the name (CapsX) of the model, where $X$ represents the number of capsules used in each model. The expression DA in the model's name indicates the use of data augmentation. Caps8-Fusion reaches the best accuracy with a low number of parameters ($9$M) in comparison with the traditional CapsNet ($33$M). The best results are highlighted in bold. 
\begin{table}[] \centering \scriptsize \caption{Best results of the CapsNet implementations. DA represents Data Augmentation; \# represents a number.} \label{tab:res_cap} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Model} & \textbf{\begin{tabular}[c]{@{}c@{}}\#\\ Caps\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Accuracy\\ (\%)\end{tabular}} & \textbf{Parameters} & \textbf{\begin{tabular}[c]{@{}c@{}}Specificity\\ (\%)-TNR\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Sensitivity\\ (\%)-TPR\end{tabular}} \\ \hline Caps8 & 8 & 90.08 & 18,066,220 & 87.06 & 93.10 \\ \hline Caps16 & 16 & 90.15 & 20,249,800 & 88.21 & 92.09 \\ \hline Caps32 & 32 & 90.26 & 24,615,744 & 89.15 & 91.37 \\ \hline Caps64 & 64 & 91.35 & 33,348,416 & 89.95 & 92.76 \\ \hline \begin{tabular}[c]{@{}c@{}}Caps8\\ DA\end{tabular} & 8 & 91.15 & 18,066,220 & 90.05 & 93.16 \\ \hline \begin{tabular}[c]{@{}c@{}}Caps16\\ DA\end{tabular} & 16 & 90.26 & 20,249,800 & 90.15 & 93.05 \\ \hline \begin{tabular}[c]{@{}c@{}}Caps32\\ DA\end{tabular} & 32 & 88.15 & 24,615,744 & 87.25 & 89.05 \\ \hline \begin{tabular}[c]{@{}c@{}}Caps64\\ DA\end{tabular} & 64 & 86.25 & 33,348,416 & 86.35 & 86.15 \\ \hline \textbf{\begin{tabular}[c]{@{}c@{}}Caps8\\ Fusion\end{tabular}} & \textbf{8} & \textbf{92.35} & \textbf{9,060,220} & \textbf{91.91} & \textbf{92.79} \\ \hline \begin{tabular}[c]{@{}c@{}}Caps8-DA\\ Fusion\end{tabular} & 8 & 92.26 & 9,060,220 & 91.33 & 93.19 \\ \hline \end{tabular} \end{table} Table \ref{tab:res_cap} also reports the True Positive Rate (TPR) and the True Negative Rate (TNR), which are standard metrics for binary classification. In this paper, the TNR refers to the accuracy of the negative class (No-Alcohol), and the TPR to the accuracy of the positive class (Alcohol). Predictions for the alcohol class appear more reliable, since the TPR values are consistently higher than the corresponding TNR values.
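The TPR, TNR, and accuracy values used throughout follow the standard confusion-matrix definitions; a minimal sketch, where the example figures come from the MobileNetV2 row of Table \ref{tab:deep} and are treated as raw counts purely for illustration:

```python
def binary_metrics(tp, tn, fp, fn):
    """Return (TPR, TNR, accuracy) from confusion-matrix counts.

    TPR (sensitivity) is the accuracy on the positive class (Alcohol);
    TNR (specificity) is the accuracy on the negative class (No-Alcohol).
    """
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return tpr, tnr, acc

# MobileNetV2 row of Table tab:deep, read as counts for illustration.
tpr, tnr, acc = binary_metrics(tp=91, tn=60, fp=47, fn=10)
print(round(acc * 100, 1))  # 72.6, matching the reported accuracy
```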
Figure \ref{fig:figgradcam} shows four heatmaps of the most relevant areas that the CNN and F-CapsNet algorithms use for classification. The Grad-CAM++ visualisation algorithm \cite{gradcam} was used. Red and blue colors represent higher and lower model responses, respectively. Areas with lower responses are less relevant to the classification. The higher activation area in images taken from subjects under the influence of alcohol is located mainly in the iris. Conversely, the activation area for images from volunteers without alcohol consumption is located in the periocular region around the iris. People under the influence of alcohol tend to close their eyes, so changes appear in the periocular skin areas. The relevant areas are drawn on an average image and show the average region highlighted across the images captured from 15 to 60 minutes. The area over the iris captures the changes of the pupil and texture at different dilation states for the same person. This prediction is possible because of the multiple iris images available for each subject. \begin{figure}[] \begin{centering} \begin{tabular}{cc} (a)\includegraphics [scale=0.37]{images/alcohol_block5_conv2.png} (b)\includegraphics [scale=0.37]{images/alcohol_block5_conv3.png}\tabularnewline (c)\includegraphics [scale=0.37]{images/alcohol_block5_conv2_a.png} (d)\includegraphics [scale=0.37]{images/alcohol_block5_conv3_a.png}\tabularnewline \end{tabular} \par\end{centering} \caption{\label{fig:figgradcam} Figures (a) and (b) show examples of eyes under control conditions. Below: (c) and (d) eyes under the influence of alcohol. Figures (a) and (c) are CNN outputs, and (b) and (d) are F-CapsNet outputs.} \end{figure} \section{Discussion and Conclusions} \label{Conclusions} As a relevant conclusion, we can point out that it is possible to detect alcohol consumption using NIR iris images.
This work also shows that the pupil ratio alone, measured on a single image, cannot be considered a sufficient indicator of the presence of alcohol. The distributions overlap, as shown in Figure \ref{fig:fig_barras}. Thus, it is not possible to separate the two distributions using only a threshold or a traditional machine learning algorithm. Detecting the presence of alcohol correctly is feasible because we capture the pupil and iris texture variation across 60 minutes. SVM and CNN classifiers have been shown to obtain low results when attempting to classify alcohol consumption from NIR periocular images (63.5$\%$ and 73.4$\%$ accuracy, respectively). In the CNN algorithm, there is a loss of information in the pooling layers, which introduce a positional invariance of the components that the network uses to carry out the classification and prevent it from being sensitive to changes in rotation or translation within the image. A Fused Capsule Network algorithm (F-CapsNet) was therefore proposed. This algorithm achieved a classification rate of 92.3$\%$. The properties of each object in the image are expressed as a vector that is mapped and routed to the final capsule layer. The model includes a CNN block that extracts features separately from each class and fuses them to feed the capsule layers. Only half of the parameters (9M) of the standard CapsNet implementation (Caps8, $18M$) are needed by F-CapsNet, making it suitable for implementation on mobile devices (see Table \ref{tab:res_cap}). This work is part of ongoing research that aims to develop a robust algorithm to estimate 'Fitness for Duty' from periocular NIR images, proactively detecting volunteers under the influence of alcohol, drugs, and fatigue.
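The claim that a single threshold cannot separate the overlapping pupil-ratio distributions can be illustrated with a toy sketch; the two Gaussians below stand in for the two classes, and their means and spread are illustrative assumptions, not values measured in this work:

```python
import random

random.seed(0)

# Synthetic stand-ins for the two overlapping pupil-ratio distributions
# (means and spread are illustrative assumptions, not measured values).
no_alcohol = [random.gauss(0.45, 0.08) for _ in range(400)]
alcohol = [random.gauss(0.55, 0.08) for _ in range(400)]

def best_threshold_accuracy(neg, pos):
    """Score every sample value as a candidate threshold (x >= t -> positive)."""
    best = 0.0
    for t in neg + pos:
        correct = sum(x < t for x in neg) + sum(x >= t for x in pos)
        best = max(best, correct / (len(neg) + len(pos)))
    return best

acc = best_threshold_accuracy(no_alcohol, alcohol)
print(f"best single-threshold accuracy: {acc:.3f}")  # far from perfect separation
```

Even the best possible threshold misclassifies a sizeable fraction of both classes, which is why the temporal variation across the capture sequence is needed.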
\section{ACKNOWLEDGMENTS} This research work has been partially funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE and Universidad de Chile-DIMEC. {\small \bibliographystyle{ieeetr}
\section{\@startsection {section}{1}{\z@}% {-3.5ex \@plus -1ex \@minus -.2ex}% {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bfseries}} \renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\it}} \renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bf}} \numberwithin{equation}{section} \def{\it i.e.}{{\it i.e.}} \def{\it e.g.}{{\it e.g.}} \def\revise#1 {\raisebox{-0em}{\rule{3pt}{1em}}% \marginpar{\raisebox{.5em}{\vrule width3pt\ \vrule width0pt height 0pt depth0.5em \hbox to 0cm{\hspace{0cm}{% \parbox[t]{4em}{\raggedright\footnotesize{#1}}}\hss}}}} \newcommand\fnxt[1] {\raisebox{.12em}{\rule{.35em}{.35em}}\mbox{\hspace{0.6em}}#1} \newcommand\nxt[1] {\\\fnxt#1} \def{\cal A} {{\cal A}} \def{\mathfrak A} {{\mathfrak A}} \def{\underline \calA} {{\underline {\mathfrak A}}} \def{\cal B} {{\cal B}} \def{\cal C} {{\cal C}} \def{\cal D} {{\cal D}} \def{\cal E} {{\cal E}} \def{\cal F} {{\cal F}} \def{\cal G} {{\cal G}} \def{\mathfrak G} {{\mathfrak G}} \def{\cal H} {{\cal H}} \def{\cal I} {{\cal I}} \def{\cal J} {{\cal J}} \def{\cal K} {{\cal K}} \def{\cal L} {{\cal L}} \def{\cal M} {{\cal M}} \def{\cal N} {{\cal N}} \def{\cal O} {{\cal O}} \def{\cal P} {{\cal P}} \def{\cal Q} {{\cal Q}} \def{\cal R} {{\cal R}} \def{\cal S} {{\cal S}} \def{\cal T} {{\cal T}} \def{\cal U} {{\cal U}} \def{\cal V} {{\cal V}} \def{\cal W} {{\cal W}} \def{\mathbb C} {{\mathbb C}} \def{\mathbb N} {{\mathbb N}} \def{\mathbb P} {{\mathbb P}} \def{\mathbb Q} {{\mathbb Q}} \def{\mathbb R} {{\mathbb R}} \def{\mathbb Z} {{\mathbb Z}} \def\partial {\partial} \def\bar\partial {\bar\partial} \def\cale {{\rm e}} \def{\rm i} {{\rm i}} \def{\circ} {{\circ}} \def\mathop{\rm Tr} 
{\mathop{\rm Tr}} \def{\rm Re\hskip0.1em} {{\rm Re\hskip0.1em}} \def{\rm Im\hskip0.1em} {{\rm Im\hskip0.1em}} \def{\it id} {{\it id}} \def\de#1#2{{\rm d}^{#1}\!#2\,} \def\De#1{{{\cal D}}#1\,} \def{\frac12}{{\frac12}} \newcommand\topa[2]{\genfrac{}{}{0pt}{2}{\scriptstyle #1}{\scriptstyle #2}} \def\undertilde#1{{\vphantom#1\smash{\underset{\widetilde{\hphantom{\displaystyle#1}}}{#1}}}} \def\mathop{{\prod}'}{\mathop{{\prod}'}} \def\gsq#1#2{% {\scriptstyle #1}\square\limits_{\scriptstyle #2}{\,}} \def\sqr#1#2{{\vcenter{\vbox{\hrule height.#2pt \hbox{\vrule width.#2pt height#1pt \kern#1pt \vrule width.#2pt}\hrule height.#2pt}}}} \def\square{% \mathop{\mathchoice{\sqr{12}{15}}{\sqr{9}{12}}{\sqr{6.3}{9}}{\sqr{4.5}{9}}}} \newcommand{\fft}[2]{{\frac{#1}{#2}}} \newcommand{\ft}[2]{{\textstyle{\frac{#1}{#2}}}} \def\mathop{\mathchoice{\sqr{8}{32}}{\sqr{8}{32}{\mathop{\mathchoice{\sqr{8}{32}}{\sqr{8}{32}} {\sqr{6.3}{9}}{\sqr{4.5}{9}}}} \newcommand{\mathfrak{w}}{\mathfrak{w}} \DeclareMathOperator*{\Img}{\mathfrak{Im}} \DeclareMathOperator*{\Nd}{\bullet} \DeclareMathOperator*{\Max}{\mathfrak{max}} \def\alpha{\alpha} \def\beta{\beta} \def\omega{\omega} \def.325{\rho} \def\delta{\delta} \def\epsilon{\epsilon} \def\chi{\chi} \def\gamma{\gamma} \def\cale{{\cal E}} \def\hat{x}{\hat{x}} \def\hat{\rho}{\hat{\rho}} \def\hat{\chi}{\hat{\chi}} \def\hat{h}{\hat{h}} \def\phi{\phi} \def\psi{\psi} \def\hat{h}{\hat{h}} \def\nabla_\mu{\nabla_\mu} \def\nabla_\nu{\nabla_\nu} \def\Gamma{\Gamma} \def{\rm arctanh}{{\rm arctanh}} \catcode`\@=12 \begin{document} \title{{\bf Cosmological Polytopes and the} \\ \vspace{.2cm} {\bf Wavefunction of the Universe for Light States}} \pubnum{% arXiv:xxxx.xxxxx} \date{September 2019} \author{ \scshape Paolo Benincasa${}^{\dagger}$ \\[0.4cm] \ttfamily ${}^{\dagger}$ Niels Bohr International Academy and Discovery Center, \\ \ttfamily University of Copenhagen, The Niels Bohr Institute,\\ \ttfamily Blegdamsvej 17, DK-2100, Copenhagen, Denmark\\ \small \ttfamily [email
protected] \\[0.2cm] } \Abstract{We extend the investigation of the structure of the late-time wavefunction of the universe to a class of toy models of scalars with time-dependent masses and polynomial couplings, which contains general massive scalars in FRW cosmologies. We associate a universal integrand to each Feynman diagram contributing to the wavefunction of the universe. For certain (light) masses, such an integrand satisfies recursion relations involving certain differential operators, connecting states with different masses and having, as a seed, the massless scalar (which describes a conformally coupled scalar in FRW cosmologies as a special case). We show that it is a {\it degenerate limit} of the canonical form of a generalisation of the cosmological polytopes describing the subclass of these models with massless scalars. Intriguingly, the flat-space scattering amplitude appears as a higher codimension face of this generalisation of the cosmological polytope: this is the reflection of the fact that it is contained in the leading term in the Laurent expansion as the total energy is taken to zero, with the codimension of the face providing the order of the total energy pole. The same connection between the other faces and the Laurent expansion coefficients holds for the other singularities of the wavefunction of the universe, all of them connectable to flat-space processes. This new construction makes manifest the origin of the multiple poles in the universal integrand of the wavefunction, which is exactly obtained in a degenerate limit, where some of the singularities of the canonical form of the polytope collapse onto each other. 
Finally, we consider the mass as a perturbative coupling as well, showing that the contribution to the wavefunction coming from graphs with mass two-point couplings can be identified with a degenerate limit of the canonical form of the cosmological polytope, if the perturbative expansion is done around the massless (conformally coupled) state; or as a double degenerate limit of the canonical form of the extension of the cosmological polytopes introduced in the present paper, if the perturbative expansion is done around minimally coupled states.} \makepapertitle \body \versionWF, CP, ms -- draft \tableofcontents \section{Introduction}\label{sec:Intro} Physics at accessible high energies is extremely constrained by the unitarity of time evolution as well as by Lorentz invariance and the locality of the interactions: these basic principles fix all the possible three-particle couplings \cite{Benincasa:2007xk, McGady:2013sga, Arkani-Hamed:2017jhn}, and they imply Yang's theorem \cite{Arkani-Hamed:2017jhn}, the consistency of the interactions among massless particles with spin less than or equal to two \cite{Benincasa:2007xk, Benincasa:2011pg, McGady:2013sga, Arkani-Hamed:2017jhn}, the impossibility of having interactions involving a finite number of massless particles with spin higher than 2 \cite{Weinberg:1964ew, Benincasa:2007xk, Benincasa:2011pg, McGady:2013sga}, as well as charge conservation for interactions mediated by massless spin-$1$ particles, the equivalence principle \cite{Weinberg:1964ew} and the uniqueness of the graviton \cite{Benincasa:2007xk}.
The imprint of locality and unitarity in the relevant quantum mechanical observables, {\it i.e.} the scattering amplitudes, is given by their analytic structure, with at most poles and branch cuts: locality fixes the location of such singularities at those points of kinematic space where the square of the sum of two or more momenta vanishes, while unitarity is reflected in the fact that, when such singularities are approached, the scattering amplitudes factorise into lower-point ones. However, Lorentz invariance is broken at cosmological scales, and the phase of accelerated expansion the universe is undergoing \cite{Riess:1998cb, Perlmutter:1998np} makes it impossible, even in principle, to have such a well-defined quantum mechanical observable. Nevertheless, for cosmologies in which the universe opens up to become infinitely large and flat at sufficiently late times -- which indeed is not ours, due precisely to the current accelerated expansion --, it is possible to define spatial correlation functions or, equivalently, the wavefunction of the universe, whose squared modulus provides the probability distribution through which the spatial correlations can be computed. They are static quantities which depend only on data living at the future spatial boundary of the universe. With both Lorentz invariance and unitarity now being approximate concepts (the former is broken, while the latter is {\it hidden} because the time evolution has been integrated out), the features listed above are not bound to hold. And, indeed, important differences appear: {\it e.g.} in cosmological settings cluster decomposition no longer holds globally, but only in each branch in which the wavefunction of the universe separates via a branched diffusion process as the universe expands \cite{Starobinsky:1982ee}.
The lack of global cluster decomposition manifests itself even in the structure of the two-point function for massless scalars, which grows logarithmically at large distances, as well as in the ultrametric structure of the wavefunction of the universe \cite{Anninos:2011kh}. All these features are tied to the tree-like structure of the cosmological bulk \cite{Winitzki:2001np, Harlow:2011az}. Thus, it is fair and necessary to ask whether there exists a cosmological counterpart of the list of constraints which hold for flat-space scattering, and what the fundamental principles behind it are. Said differently, we need to understand what invariant properties the wavefunction of the universe ought to satisfy in order to come from a consistent causal evolution in cosmological space-times. Despite the existence of a number of consistency conditions for inflationary correlation functions \cite{Maldacena:2002vr, Seery:2008ax, Leblond:2010yq, Creminelli:2011rh, Creminelli:2012ed, Senatore:2012wy, Assassi:2012zq, Goldberger:2013rsa, Hinterbichler:2013dpa, Pimentel:2013gza, Creminelli:2013mca, Bordin:2016ruc, Bordin:2017ozj, Finelli:2017fml, Pajer:2019jhb}, no general rules are known for cosmological observables, and very little is known about the structure of the wavefunction of the universe \cite{Anninos:2014lwa, Konstantinidis:2016nio}. In order to address this class of questions, we need to collect more theoretical data: this is an important zeroth-order step for gaining a deeper understanding of the general analytic structure of cosmological observables and of how physics is encoded into it. One distinctive feature that we have already learnt for observables with the Bunch-Davies condition in the infinite past is that the lack of time translation invariance shows up as a dependence on the sum of the energies, {\it i.e.} the lengths of the momenta, of all the states in the correlation function/wavefunction of the universe.
\begin{wrapfigure}{l}{4.5cm} \begin{tikzpicture}[node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}] % \begin{scope}[parallelogram/.style={trapezium, draw, fill=gray, opacity=.3, minimum width=4cm, trapezium left angle=45, trapezium right angle=135}] \def1{1} \def3{3} \def.325{.35} \def.55{.15} \coordinate (Ct) at (1,3); \coordinate (TC) at ($(Ct)+(0,-2)$); \coordinate (S) at ($(Ct)+(0,-3.5)$); \coordinate (A1) at ($(TC)+(-1,0)$); \coordinate (A2) at ($(TC)+(-.5,-.5)$); \coordinate (A3) at ($(TC)+(1,0)$); \coordinate (A4) at ($(TC)+(.5,.5)$); \pgfmathsetmacro\Axi{.325*cos(180)} \pgfmathsetmacro\Ayi{.325*sin(180)} \coordinate (As) at ($(S)+(\Axi,\Ayi)$); \pgfmathsetmacro\Bxi{.325*cos(0)} \pgfmathsetmacro\Byi{.325*sin(0)} \coordinate (Bs) at ($(S)+(\Bxi,\Byi)$); \pgfmathsetmacro\Cxi{.325*cos(60)} \pgfmathsetmacro\Cyi{.325*sin(60)} \coordinate (Cs) at ($(S)+(\Cxi,\Cyi)$); \pgfmathsetmacro\Dxi{.55*cos(120)} \pgfmathsetmacro\Dyi{.55*sin(120)} \coordinate (Ds) at ($(S)+(\Dxi,\Dyi)$); \coordinate (BR) at ($(Ct)+({3/2},-5)$); \coordinate (TR) at ($(Ct)+({3/2},-2)$); \coordinate (tb) at ($(BR)+(0.125,0)$); \coordinate (tt) at ($(TR)+(0.125,0)$); \node [shade, shading=ball,circle,ball color=green!70!black,minimum size=.75cm] (Ampl) at (S) {}; \draw[-, directed, black, thick] (A1) -- (A2); \draw[-, directed, black, thick] (A2) -- (A3); \draw[-, directed, black, thick] (A3) -- (A4); \draw[-, directed, black, thick] (A4) -- (A1); \draw[-, red, thick] (As) edge [bend left] (A1); \draw[-, red, thick] (Ds) edge [bend left=20] (A2); \draw[-, red, thick] (Bs) edge [bend right] (A3); \draw[-, red, thick] (Cs) edge [bend right=20] (A4); \draw [->, thick] (tb) -- (tt); \coordinate [label=right:{\tiny $\eta$}] (t) at ($(tb)!0.5!(tt)$); \node[parallelogram, trapezium stretches=false,minimum height=1cm] at (TC) {}; \coordinate (LT) at ($(Ct)+(0,-6.5)$); \node [shade, shading=ball,circle,ball color=red!90!black,opacity=.25, minimum size=1cm] (Ampl2) at (S) {}; 
\coordinate (tc) at ($(S)-(.75,0)$); \coordinate (tc1) at ($(tc)+(0,.5)$); \coordinate (tc2) at ($(tc)-(0,.5)$); \draw[->] (tc1) -- (tc2); \node[scale=.75] (tcl) at ($(tc)-(.125,0)$) {$\bar{\eta}$}; \end{scope} % \end{tikzpicture} \end{wrapfigure} Outside of the physical sheet, they develop a singularity in such a sum, which can be reached upon analytic continuation. At this point in energy space, the process conserves energy and, thus, is time-translation invariant as well as Lorentz invariant, and it reduces to the high energy limit of the flat-space scattering amplitudes -- a fact which can be understood by realising that the point in energy space $\sum_{j=1}^n E_j\,\longrightarrow\,0$ dominates as the interactions are taken at early times, with the late-time boundary effectively becoming infinitely far away and disappearing, thereby restoring the conditions which characterise a scattering process in flat space \cite{Maldacena:2011nz, Arkani-Hamed:2015bza}. It is quite remarkable how the wavefunction of the universe and the spatial correlation functions in a static Bunch-Davies vacuum encode the flat-space scattering amplitudes. This relation between cosmological and flat-space observables has deep implications for the analytic structure of the former, which are not yet fully understood: there should be an imprint of all the theorems and properties holding for the flat-space S-matrix in the wavefunction of the universe. For example, it has to factorise on a codimension-two surface of the energy space, reflecting the factorisation properties in flat space.
\noindent \begin{wrapfigure}{l}{4.5cm} \begin{tikzpicture}[node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}] % \begin{scope}[parallelogram/.style={trapezium, draw, fill=gray, opacity=.3, minimum width=4cm, trapezium left angle=45, trapezium right angle=135}] \def1{1} \def3{3} \def.325{.15} \def.55{.15} \coordinate (Ct) at (1,3); \coordinate (TC) at ($(Ct)+(0,-2)$); \coordinate (S) at ($(Ct)+(0,-3.5)$); \coordinate (Sl) at ($(S)-(.75,0)$); \coordinate (Sr) at ($(S)+(.75,0)$); \coordinate (A1) at ($(TC)+(-1,0)$); \coordinate (A2) at ($(TC)+(-.5,-.5)$); \coordinate (A3) at ($(TC)+(1,0)$); \coordinate (A4) at ($(TC)+(.5,.5)$); \pgfmathsetmacro\Axi{.325*cos(180)} \pgfmathsetmacro\Ayi{.325*sin(180)} \coordinate (As) at ($(Sl)+(\Axi,\Ayi)$); \pgfmathsetmacro\Bxi{.325*cos(0)} \pgfmathsetmacro\Byi{.325*sin(0)} \coordinate (Bs) at ($(Sr)+(\Bxi,\Byi)$); \pgfmathsetmacro\Cxi{.325*cos(60)} \pgfmathsetmacro\Cyi{.325*sin(60)} \coordinate (Cs) at ($(Sr)+(\Cxi,\Cyi)$); \pgfmathsetmacro\Dxi{.55*cos(120)} \pgfmathsetmacro\Dyi{.55*sin(120)} \coordinate (Ds) at ($(Sl)+(\Dxi,\Dyi)$); \coordinate (Es) at ($(Sl)+(\Bxi,\Byi)$); \coordinate (Fs) at ($(Sr)+(\Axi,\Ayi)$); \coordinate (BR) at ($(Ct)+({3/2},-5)$); \coordinate (TR) at ($(Ct)+({3/2},-2)$); \coordinate (tb) at ($(BR)+(0.125,0)$); \coordinate (tt) at ($(TR)+(0.125,0)$); \node [shade, shading=ball,circle,ball color=blue,minimum size=.2cm] (AmplL) at (Sl) {}; \node [shade, shading=ball,circle,ball color=blue,minimum size=.2cm] (AmplR) at (Sr) {}; \draw[-, directed, black, thick] (A1) -- (A2); \draw[-, directed, black, thick] (A2) -- (A3); \draw[-, directed, black, thick] (A3) -- (A4); \draw[-, directed, black, thick] (A4) -- (A1); \draw[-, black, thick] (A1) -- (A3); \draw[-, red, thick] (As) edge [bend left] (A1); \draw[-, red, thick] (Ds) edge [bend left=20] (A2); \draw[-, red, thick] (Bs) edge [bend right] (A3); \draw[-, red, thick] (Cs) edge [bend right=20] (A4); \draw[-, red, thick] (Es) -- (Fs);
\draw [->, thick] (tb) -- (tt); \coordinate [label=right:{\tiny $\eta$}] (t) at ($(tb)!0.5!(tt)$); \node[parallelogram, trapezium stretches=false,minimum height=1cm] at (TC) {}; \coordinate (LT) at ($(Ct)+(0,-6.5)$); % \node [shade, shading=ball,circle,ball color=red!90!black,opacity=.25, minimum size=.5cm] (AmplL2) at (Sl) {}; \coordinate (tc) at ($(Sl)-(.5,0)$); \coordinate (tc1) at ($(tc)+(0,.5)$); \coordinate (tc2) at ($(tc)-(0,.5)$); \draw[->] (tc1) -- (tc2); \node[scale=.75] (tcl) at ($(tc)-(.1,0)$) {$\eta_L$}; % \node [shade, shading=ball,circle,ball color=red!90!black,opacity=.25, minimum size=.5cm] (AmplR2) at (Sr) {}; \coordinate (tc) at ($(Sr)+(.5,0)$); \coordinate (tc1) at ($(tc)+(0,.5)$); \coordinate (tc2) at ($(tc)-(0,.5)$); \draw[->] (tc1) -- (tc2); \node[scale=.75] (tcl) at ($(tc)+(.1,0)$) {$\eta_R$}; \end{scope} % \end{tikzpicture} \end{wrapfigure} This fact has been used, together with conformal symmetry and the requirement that Bunch-Davies observables should not have singularities in the physical sheet, to compute the four-point correlation functions with external conformally-coupled or massless scalars and internal massive states in de Sitter space-time, and the inflationary three-point functions which can be obtained from the former by evaluating one of the external states on the time-dependent background \cite{Arkani-Hamed:2018kmz}. Even more surprisingly, for a large class of toy models described by a massless scalar state in flat-space with time-dependent polynomial interactions, which, upon a specific choice for the time-dependence of the couplings, contains the conformally-coupled scalars with polynomial interactions in FRW cosmologies \cite{Arkani-Hamed:2017fdk}, it is possible to reconstruct the Bunch-Davies perturbative wavefunction at all orders in perturbation theory from the knowledge of the flat-space scattering amplitudes and the requirement of the absence of unphysical singularities \cite{Benincasa:2018ssx}.
Although this latter result cannot be completely general, it may hold for a larger class of toy models, and it suggests that flat-space physics constrains the wavefunction of the universe more than one would have expected. In the case of \cite{Benincasa:2018ssx}, this is reflected in the fact that the coefficients of all the singularities can be interpreted in terms of flat-space processes or, in any case, expressed in terms of them. These features are made manifest in the formulation of the wavefunction in terms of {\it cosmological polytopes} introduced in \cite{Arkani-Hamed:2017fdk}. They are combinatorial-geometrical objects with their own first-principle definition, characterised by a differential form, called the {\it canonical form}\footnote{For an extensive study of positive geometries and canonical forms, see \cite{Arkani-Hamed:2017tmz}.}, whose coefficient has all the properties that we ascribe to the wavefunction of the universe. In particular, their boundaries are lower-dimensional polytopes which encode the residues of the wavefunction poles, with the hyperplanes identifying them being related to the poles themselves. Thus, there is a codimension-one boundary, named the {\it scattering facet}, which is related to the total energy pole and encodes the relevant flat-space scattering amplitude. Amazingly, the vertex structure of such a facet makes the cutting rules manifest, allowing for a novel combinatorial-geometrical proof of them, while its dual makes Lorentz invariance manifest \cite{Arkani-Hamed:2018ahb}.
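To make the relation between the total energy pole and the flat-space amplitude concrete, it is worth recalling the simplest example from \cite{Arkani-Hamed:2017fdk}: for the two-site chain with external energies $x_1$, $x_2$ and internal energy $y$ (with the couplings set to one), the wavefunction and the residue of its total energy pole read

```latex
\psi(x_1,x_2,y)\:=\:\frac{x_1+x_2+2y}{(x_1+x_2)(x_1+y)(x_2+y)}\,,
\hspace{1.5cm}
\underset{E_T\,=\,x_1+x_2\,\rightarrow\,0}{\mathrm{Res}}\,\psi\:=\:\frac{2y}{y^2-x_1^2}\,,
```

so that, up to the normalisation $2y$, the residue at $E_T\,\longrightarrow\,0$ reproduces the flat-space exchange amplitude, in line with the scattering facet just described.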
The works \cite{Arkani-Hamed:2018kmz} and \cite{Arkani-Hamed:2017fdk, Arkani-Hamed:2018ahb, Benincasa:2018ssx} provide two different but complementary approaches for understanding the general rules behind cosmological processes, both of which take the perspective of not considering time evolution explicitly: in the former, the correlation functions are determined from symmetries and the knowledge of their singularities, in a very S-matrix-like fashion; the latter instead considers a totally new mathematical formulation, from whose first principles the rules we are looking for should emerge\footnote{This approach also takes a lesson from the most recent developments in the context of flat-space scattering amplitudes, which have a combinatorial characterisation in a number of cases, see \cite{ArkaniHamed:2012nw, Benincasa:2016awv} and \cite{Arkani-Hamed:2013jha, Arkani-Hamed:2017mur, Frost:2018djd, Salvatori:2018aha, Banerjee:2018tun, Raman:2019utu}.}. However, in both approaches we still need more theoretical data in order to grasp the fundamental properties ruling the cosmological observables. In this paper, we will extend the exploration of the detailed structure of the perturbative wavefunction of the universe developed in \cite{Arkani-Hamed:2017fdk, Arkani-Hamed:2018ahb, Benincasa:2018ssx}, focusing on a class of toy models of scalars with time-dependent masses and time-dependent polynomial couplings, which contains massive scalars with polynomial interactions in FRW cosmologies, upon a specific choice of the time-dependence of the mass and couplings.
In Section \ref{sec:Rev}, after having introduced the model and discussed its generalities, we restrict to a subclass for which the time-dependent mass is inversely proportional to the (conformal) time, and we define a set of differential operators mapping free flat-space massless\footnote{The model is formulated as a scalar in flat-space, with the cosmology encoded into the time-dependence of the mass and of the coupling. Thus, when we refer to the states, we will always use the flat-space wording, unless otherwise specified.} scalars ({\it i.e.} conformally coupled scalars in FRW cosmologies) to states with generic masses. This allows us to focus on the wavefunction of the universe with external massless scalars and to prove a novel set of recursion relations which relate wavefunctions whose internal states have different masses and which involve certain differential operators. In Section \ref{sec:EWG}, we exploit these recursion relations for a class of values of the masses for which the iterated recursion relations have the structure of a differential operator acting on the wavefunction with all the internal states being massless ({\it i.e.} conformally coupled). In these cases the mass on a given edge $e$ of the graph representing a certain contribution to the wavefunction can be labelled by an integer $l_e$, and the wavefunction is represented by edge-weighted graphs with the integer $l_e$ being the weight of the edge $e$. While the combinatorial rules proven in \cite{Arkani-Hamed:2017fdk} for computing the seed of our new recursion relations, together with the differential operators, allow us to compute the contribution to the wavefunction from a given graph, we also provide a combinatorial rule to predict the order of the poles in the wavefunction. Section \ref{sec:Pert} is devoted to generalising the discussion of the previous section; in this case we treat the mass perturbatively.
In Section \ref{sec:CPl1} we discuss a generalisation of the cosmological polytopes, whose canonical form encodes the wavefunction of the universe for $l\,=\,1$, which contains the minimally coupled scalars in FRW cosmologies. We discuss in detail its face structure. The wavefunction of the universe turns out to be a degenerate limit of the canonical form of these polytopes, and the flat-space amplitude is returned by a higher codimension face. For these wavefunctions, the flat-space amplitudes are given by the coefficient of the leading term in their Laurent expansion as the total energy goes to zero. This is beautifully reflected in the polytope picture by the fact that the scattering face now has higher codimension, with the codimension giving the degree of the pole. We conclude the section by commenting on the polytope description of the perturbative mass expansion, whose contributions can be obtained as a degenerate limit of a certain subclass of the standard cosmological polytopes. The degenerate limit allows us to obtain a rational function with multiple poles from the canonical forms of the polytopes, which are characterised by having logarithmic singularities only. This is a phenomenon very similar to what happens in the case of the halohedron \cite{Salvatori:2018aha}. Finally, Section \ref{sec:Concl} contains our conclusions and outlook. \section{The wavefunction of the universe for massive scalars}\label{sec:Rev} We consider a class of toy models of a scalar $\phi$ in a $(d+1)$-dimensional flat space-time with a time-dependent mass $\mu(\eta)$ as well as time-dependent polynomial couplings: \begin{equation}\eqlabel{eq:Sact} S\:=\:\int d^dx\,\int_{-\infty}^{0}d\eta\, \left\{ \frac{1}{2}\left(\partial\phi\right)^2-\frac{1}{2}\mu^2(\eta)\phi^2-\sum_{k\ge3}\lambda_{k}(\eta)\phi^{k} \right\}.
\end{equation} Such a model describes a massive scalar $\phi$ in FRW cosmologies with polynomial self-interactions, for the following choices for the mass $\mu(\eta)$ and the couplings $\lambda_{k}(\eta)$: \begin{equation}\eqlabel{eq:ml} \mu^2(\eta)\:=\:m^2a^2(\eta)+2d\left(\xi-\frac{d-1}{4d}\right)\left[\partial_{\eta}\left(\frac{\dot{a}}{a}\right)+\frac{d-1}{2}\left(\frac{\dot{a}}{a}\right)^2\right],\qquad \lambda_{k}(\eta)\:=\:\lambda_{k}\left[a(\eta)\right]^{2+\frac{(d-1)(2-k)}{2}} \end{equation} where $\dot{\phantom{a}}$ indicates the derivative with respect to the conformal time $\eta$, $a(\eta)$ is the time-dependent warp factor for FRW cosmologies: \begin{equation}\eqlabel{eq:FRW} ds^2\:=\:a^2(\eta)\left[-d\eta^2+dx_idx^i\right], \end{equation} and $\xi$ is a parameter such that for $\xi\,=\,(d-1)/4d$ and $m\,=\,0$ the model reduces to the one of a conformally coupled scalar with (non-conformal) polynomial interactions\footnote{There is also another special case for which the model reduces to a massless scalar with time-dependent interactions, but without requiring that the parameter $\xi$ has the conformal value. Setting $m\,=\,0$, it corresponds to a specific choice of cosmology, such that: $$ \partial_{\eta}\left(\frac{\dot{a}}{a}\right)+\frac{d-1}{2}\left(\frac{\dot{a}}{a}\right)^2\,=\,0. $$ For $d\,=\,1$, the warp factor is an exponential $a(\eta)\,=\,a_0\,e^{A_0\eta}$, which vanishes in the far past if $A_0\,\in\,\mathbb{R}_{+}$. For $d\,>\,1$, the solution blows up as $\eta\,\longrightarrow\,-\infty$.
Finally notice that, in a cosmology $a(\eta)\,=\,a_0e^{A_0\eta}$ ($A_0\,\in\,\mathbb{R}_+$) and with $d\,>\,1$ (and still $m\,=\,0$), the model is described by a scalar with a constant mass in a flat space-time -- while for $m\,\neq\,0$, the time-dependent mass of the flat-space scalar increases as the universe expands.}, which has been discussed in \cite{Arkani-Hamed:2017fdk, Arkani-Hamed:2018ahb,Benincasa:2018ssx}; for $\xi\,=\,0$ the scalar becomes minimally coupled. The mode functions are determined by the following differential equation \begin{equation}\eqlabel{eq:mfeq} \ddot{\phi}_{\circ}(\eta)+\left(E^2+\mu^2(\eta)\right)\phi_{\circ}(\eta)\:=\:0,\hspace{1.5cm} E\:\equiv\:|\overrightarrow{p}|, \end{equation} with the condition that it vanishes in the far past, as $\eta\,\longrightarrow\,-\infty$. A solution of this equation is not known for an arbitrary time-dependent mass $\mu(\eta)$, but it can be studied treating the mass perturbatively. For the time being, let us focus on the specific choice $\mu^2(\eta)\,=\,\mu_{\alpha}^2\eta^{-2}$, which corresponds to cosmologies with $a(\eta)\,=\,(-\eta)^{-\alpha}$ ($\alpha\,\in\,\mathbb{R}$), where $\mu^2_{\alpha}\,\equiv\,2d\alpha(\xi-(d-1)/4d)(1+(d-1)\alpha/2)$ for $m\,=\,0$, and which also includes the case $m\,\neq\,0$ for $\alpha\,=\,1$ ({\it i.e.} in de Sitter), with $\mu^2_1\,=\,m^2+d(\xi-(d-1)/4d)(1+d)$. In this case, the solution of the mode equation \eqref{eq:mfeq} ensuring the correct oscillating behaviour in the far past $\phi_{\circ}(\eta)\:\overset{\eta\,\longrightarrow\,-\infty}{\sim}\:e^{iE\eta}$ is given in terms of Hankel functions of the second kind \begin{equation}\eqlabel{eq:mm} \phi_{\circ}^{\mbox{\tiny $(\nu)$}}\:=\:\sqrt{-E\eta}H_{\nu}^{\mbox{\tiny $(2)$}}(-E\eta)\:\overset{\eta\,\longrightarrow\,-\infty}{\sim}\:e^{iE\eta},\hspace{1.5cm}\nu\,\equiv\,\sqrt{\frac{1}{4}-\mu_{\alpha}^2}.
\end{equation} Notice from \eqref{eq:mm} that $\nu$ can be either real or purely imaginary depending on whether $\mu^2_{\alpha}$ is respectively smaller or greater than $1/4$. More explicitly, the order parameter $\nu$ reads \begin{equation}\eqlabel{eq:cups} \begin{array}{ccl} \mbox{Conformal coupling } {\displaystyle\xi\,=\,\frac{d-1}{4d}}\phantom{\ldots} & \mbox{Minimal coupling } {\displaystyle\xi\,=\,0}\phantom{\ldots} & {} \\ {} & & {} \\ {\displaystyle\nu\,=\,\sqrt{\frac{1}{4}-m^2}} & {\displaystyle\nu\,=\,\sqrt{\frac{d^2}{4}-m^2}} & (\alpha\,=\,1,\;m) \\ {} & & {} \\ {\displaystyle\nu\,=\,\frac{1}{2}} & {\displaystyle\nu\,=\,\frac{1}{2}+\frac{d-1}{2}\alpha} & (\alpha,\;m\,=\,0) \\ \end{array} \end{equation} and it can be either imaginary ($\nu\,=\,i\zeta$, $\zeta\,\in\,\mathbb{R}$) or real for $\alpha\,=\,1$ -- they are respectively the principal and complementary series in de Sitter --, while it is only real for generic $\alpha$. In the latter case, $\nu\,\in\,\mathbb{Z}_{\frac{1}{2}}$ if $(d-1)\alpha\,=\,2l$ ($l\,\in\,\mathbb{Z}$). Furthermore, as for the case of a generic function $\mu(\eta)$, the mode equation \eqref{eq:mfeq} for generic $\alpha$ and $m$ cannot be solved exactly, but it can be treated by considering $m$ perturbatively, as will be analysed in Section \ref{sec:Pert}.
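As a quick consistency check, one can verify numerically that the mode function \eqref{eq:mm} indeed solves \eqref{eq:mfeq} with $\mu^2(\eta)\,=\,\mu_{\alpha}^2\eta^{-2}$ and $\mu_{\alpha}^2\,=\,1/4-\nu^2$, and symbolically that the minimal-coupling entry of \eqref{eq:cups} follows from $\nu\,=\,\sqrt{1/4-\mu_{\alpha}^2}$. A sketch in Python (mpmath and sympy; the numerical values are arbitrary):

```python
import mpmath as mp
import sympy as sp

# Numerical check: phi(eta) = sqrt(-E*eta) * H^(2)_nu(-E*eta), cf. (eq:mm),
# solves  phi'' + (E^2 + mu_alpha^2/eta^2) phi = 0  with  mu_alpha^2 = 1/4 - nu^2.
mp.mp.dps = 30
nu, E, eta0 = mp.mpf('0.3'), mp.mpf('1.2'), mp.mpf('-2.5')
mu2 = mp.mpf('0.25') - nu**2

phi = lambda eta: mp.sqrt(-E*eta) * mp.hankel2(nu, -E*eta)
residual = mp.diff(phi, eta0, 2) + (E**2 + mu2/eta0**2) * phi(eta0)
assert abs(residual) < mp.mpf('1e-12')

# Symbolic check of the minimal-coupling entry of (eq:cups):
# for xi = 0 and m = 0,  sqrt(1/4 - mu_alpha^2) = 1/2 + (d-1)*alpha/2.
d, alpha = sp.symbols('d alpha', positive=True)
mu2_alpha = 2*d*alpha*(0 - (d - 1)/(4*d))*(1 + (d - 1)*alpha/2)
nu_sq = sp.Rational(1, 4) - mu2_alpha
assert sp.simplify(nu_sq - (sp.Rational(1, 2) + (d - 1)*alpha/2)**2) == 0
```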
\begin{figure}[t] \centering \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, scale=1.5, transform shape] \begin{scope} \def1.75{1.75} \def3{3} \def.325{.6} \pgfmathsetmacro\Axi{1.75+.325*cos(135)} \pgfmathsetmacro\Ayi{3+.325*sin(135)} \pgfmathsetmacro\Axf{\Axi+cos(135)} \pgfmathsetmacro\Ayf{\Ayi+sin(135)} \coordinate (pAi) at (\Axi,\Ayi); \coordinate (pAf) at (\Axf,\Ayf); \pgfmathsetmacro\Bxi{1.75+.325*cos(45)} \pgfmathsetmacro\Byi{3+.325*sin(45)} \pgfmathsetmacro\Bxf{\Bxi+cos(45)} \pgfmathsetmacro\Byf{\Byi+sin(45)} \coordinate (pBi) at (\Bxi,\Byi); \coordinate (pBf) at (\Bxf,\Byf); \pgfmathsetmacro\Cxi{1.75+.325*cos(-45)} \pgfmathsetmacro\Cyi{3+.325*sin(-45)} \pgfmathsetmacro\Cxf{\Cxi+cos(-45)} \pgfmathsetmacro\Cyf{\Cyi+sin(-45)} \coordinate (pCi) at (\Cxi,\Cyi); \coordinate (pCf) at (\Cxf,\Cyf); \pgfmathsetmacro\Dxi{1.75+.325*cos(-135)} \pgfmathsetmacro\Dyi{3+.325*sin(-135)} \pgfmathsetmacro\Dxf{\Dxi+cos(-135)} \pgfmathsetmacro\Dyf{\Dyi+sin(-135)} \coordinate (pDi) at (\Dxi,\Dyi); \coordinate (pDf) at (\Dxf,\Dyf); \pgfmathsetmacro\Exi{1.75+.325*cos(90)} \pgfmathsetmacro\Eyi{3+.325*sin(90)} \pgfmathsetmacro\Exf{\Exi+cos(90)} \pgfmathsetmacro\Eyf{\Eyi+sin(90)} \coordinate (pEi) at (\Exi,\Eyi); \coordinate (pEf) at (\Exf,\Eyf); \coordinate (ti) at ($(pDf)-(.25,0)$); \coordinate (tf) at ($(pAf)-(.25,0)$); \draw[->] (ti) -- (tf); \coordinate[label=left:{\tiny $\displaystyle\eta$}] (t) at ($(ti)!0.5!(tf)$); \coordinate (t0) at ($(pBf)+(.1,0)$); \coordinate (tinf) at ($(pCf)+(.1,0)$); \node[scale=.5, right=.0125 of t0] (t0l) {\tiny $\displaystyle\eta\,=\,0$}; \node[scale=.5, right=.0125 of tinf] (tinfl) {\tiny $\displaystyle\eta\,=\,-\infty$}; \draw[-] ($(pAf)-(.1,0)$) -- (t0); \draw[-] ($(pDf)-(.1,0)$) -- ($(pCf)+(.1,0)$); \coordinate (d2) at ($(pAf)!0.25!(pBf)$); \coordinate (d3) at ($(pAf)!0.5!(pBf)$); \coordinate (d4) at ($(pAf)!0.75!(pBf)$); \node[above=.01cm of pAf, scale=.625] (d1l) 
{$\displaystyle\overrightarrow{p}_1$}; \node[above=.01cm of d2, scale=.625] (d2l) {$\displaystyle\overrightarrow{p}_2$}; \node[above=.01cm of d3, scale=.625] (d3l) {$\displaystyle\overrightarrow{p}_3$}; \node[above=.01cm of d4, scale=.625] (d4l) {$\displaystyle\overrightarrow{p}_4$}; \node[above=.01cm of pBf, scale=.625] (d5l) {$\displaystyle\overrightarrow{p}_5$}; \def.55{.55} \pgfmathsetmacro\sax{1.75+.55*cos(180)} \pgfmathsetmacro\say{3+.55*sin(180)} \coordinate[label=below:{\scalebox{0.5}{$x_1$}}] (s1) at (\sax,\say); \pgfmathsetmacro\sbx{1.75+.55*cos(135)} \pgfmathsetmacro\sby{3+.55*sin(135)} \coordinate (s2) at (\sbx,\sby); \pgfmathsetmacro\scx{1.75+.55*cos(90)} \pgfmathsetmacro\scy{3+.55*sin(90)} \coordinate (s3) at (\scx,\scy); \pgfmathsetmacro\sdx{1.75+.55*cos(45)} \pgfmathsetmacro\sdy{3+.55*sin(45)} \coordinate (s4) at (\sdx,\sdy); \pgfmathsetmacro\sex{1.75+.55*cos(0)} \pgfmathsetmacro\sey{3+.55*sin(0)} \coordinate[label=below:{\scalebox{0.5}{$x_3$}}] (s5) at (\sex,\sey); \coordinate[label=below:{\scalebox{0.5}{$x_2$}}] (sc) at (1.75,3); \draw (s1) edge [bend left] (pAf); \draw (s1) edge [bend left] (d2); \draw (s3) -- (d3); \draw (s5) edge [bend right] (d4); \draw (s5) edge [bend right] (pBf); \draw [fill] (s1) circle (1pt); \draw [fill] (sc) circle (1pt); \draw (s3) -- (sc); \draw [fill] (s5) circle (1pt); \draw[-,thick] (s1) -- (sc) -- (s5); \end{scope} % \begin{scope}[shift={(4.5,0)}, transform shape] \def1.75{1.75} \def3{3} \def.325{.6} \pgfmathsetmacro\Axi{1.75+.325*cos(135)} \pgfmathsetmacro\Ayi{3+.325*sin(135)} \pgfmathsetmacro\Axf{\Axi+cos(135)} \pgfmathsetmacro\Ayf{\Ayi+sin(135)} \coordinate (pAi) at (\Axi,\Ayi); \coordinate (pAf) at (\Axf,\Ayf); \pgfmathsetmacro\Bxi{1.75+.325*cos(45)} \pgfmathsetmacro\Byi{3+.325*sin(45)} \pgfmathsetmacro\Bxf{\Bxi+cos(45)} \pgfmathsetmacro\Byf{\Byi+sin(45)} \coordinate (pBi) at (\Bxi,\Byi); \coordinate (pBf) at (\Bxf,\Byf); \pgfmathsetmacro\Cxi{1.75+.325*cos(-45)} \pgfmathsetmacro\Cyi{3+.325*sin(-45)} 
\pgfmathsetmacro\Cxf{\Cxi+cos(-45)} \pgfmathsetmacro\Cyf{\Cyi+sin(-45)} \coordinate (pCi) at (\Cxi,\Cyi); \coordinate (pCf) at (\Cxf,\Cyf); \pgfmathsetmacro\Dxi{1.75+.325*cos(-135)} \pgfmathsetmacro\Dyi{3+.325*sin(-135)} \pgfmathsetmacro\Dxf{\Dxi+cos(-135)} \pgfmathsetmacro\Dyf{\Dyi+sin(-135)} \coordinate (pDi) at (\Dxi,\Dyi); \coordinate (pDf) at (\Dxf,\Dyf); \pgfmathsetmacro\Exi{1.75+.325*cos(90)} \pgfmathsetmacro\Eyi{3+.325*sin(90)} \pgfmathsetmacro\Exf{\Exi+cos(90)} \pgfmathsetmacro\Eyf{\Eyi+sin(90)} \coordinate (pEi) at (\Exi,\Eyi); \coordinate (pEf) at (\Exf,\Eyf); \coordinate (ti) at ($(pDf)-(.25,0)$); \coordinate (tf) at ($(pAf)-(.25,0)$); \def.55{.55} \pgfmathsetmacro\sax{1.75+.55*cos(180)} \pgfmathsetmacro\say{3+.55*sin(180)} \coordinate[label=below:{\scalebox{0.5}{$x_1$}}] (s1) at (\sax,\say); \pgfmathsetmacro\sbx{1.75+.55*cos(135)} \pgfmathsetmacro\sby{3+.55*sin(135)} \coordinate (s2) at (\sbx,\sby); \pgfmathsetmacro\scx{1.75+.55*cos(90)} \pgfmathsetmacro\scy{3+.55*sin(90)} \coordinate (s3) at (\scx,\scy); \pgfmathsetmacro\sdx{1.75+.55*cos(45)} \pgfmathsetmacro\sdy{3+.55*sin(45)} \coordinate (s4) at (\sdx,\sdy); \pgfmathsetmacro\sex{1.75+.55*cos(0)} \pgfmathsetmacro\sey{3+.55*sin(0)} \coordinate[label=below:{\scalebox{0.5}{$x_3$}}] (s5) at (\sex,\sey); \coordinate[label=below:{\scalebox{0.5}{$x_2$}}] (sc) at (1.75,3); \draw [fill] (s1) circle (1pt); \draw [fill] (sc) circle (1pt); \draw [fill] (s5) circle (1pt); \draw[-,thick] (s1) -- (sc) -- (s5); \end{scope} % \end{tikzpicture} \caption{Example of a Feynman graph contribution to the wavefunction of the universe (left) and its associated reduced graph (right), which is obtained from the former by suppressing the external lines.} \label{Fig:G} \end{figure} As usual, the perturbative wavefunction can be computed via Feynman graphs whose vertices are associated to the time-dependent couplings $\lambda_{k}(\eta)$, the external edges to the bulk-to-boundary propagators, which are given by the solution of 
\eqref{eq:mfeq} satisfying the Bunch-Davies boundary condition, and the internal edges are associated to a bulk-to-bulk propagator, which consists of three terms: two encode the time-ordered Feynman propagators, while the third is fixed by the condition that the fluctuations vanish at the boundary: \begin{equation}\eqlabel{eq:WF} \tilde{\psi}_{\mathcal{G}}\:=\:\int_{-\infty}^{0}\prod_{v\in\mathcal{V}}\left[d\eta_{v}\,V_v\phi_{\circ}^{\mbox{\tiny $(v)$}}\right]\prod_{e\in\mathcal{E}}G_e\left(\eta_{v_e},\,\eta_{v'_e}\right), \end{equation} where $\mathcal{V}$ and $\mathcal{E}$ are the sets of, respectively, the vertices and the internal edges of the graph $\mathcal{G}$, $V_v\:\equiv\:i\lambda_{k}(\eta_v)$ is the interaction associated to the vertex $v$, $\displaystyle \phi_{\circ}^{\mbox{\tiny $(v)$}}\,\equiv\,\prod_{j\in v}\phi_{\circ}(-E_j\eta_v)$ is the product of the free solutions associated to the external states, and $G_e$ is the propagator, which is given by \begin{equation}\eqlabel{eq:ge} \begin{split} & G_e(\eta_{v_e},\eta_{v'_e})\:=\:\frac{1}{2y_e} \left[ \bar{\phi}_{\circ}(-y_e\eta_{v_e})\phi_{\circ}(-y_e\eta_{v'_e})\vartheta(\eta_{v_e}-\eta_{v'_e})+\phi_{\circ}(-y_e\eta_{v_e})\bar{\phi}_{\circ}(-y_e\eta_{v'_e})\vartheta(\eta_{v'_e}-\eta_{v_e})-\right.\\ & \left.\hspace{3cm} -\frac{\bar{\phi}_{\circ}(0)}{\phi_{\circ}(0)}\phi_{\circ}(-y_e\eta_{v_e})\phi_{\circ}(-y_e\eta_{v'_e}) \right], \end{split} \end{equation} $E_j$ and $y_e$ being the modulus of the momentum of an external state $j$ and of the momentum running along the edge $e$, respectively, while $\bar{\phi}_{\circ}$ identifies the complex conjugate of the mode function $\phi_{\circ}$.
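The role of the third term in \eqref{eq:ge} is precisely to enforce the vanishing of the fluctuations at the boundary $\eta\,=\,0$. For $\nu\,=\,1/2$, where the mode functions reduce to plane waves, this can be verified symbolically in one line (a minimal sympy sketch, in a unit normalisation of the modes):

```python
import sympy as sp

eta1, eta2, y = sp.symbols('eta1 eta2 y', real=True)

# For nu = 1/2 the mode functions reduce to plane waves: phi(-y*eta) -> exp(i*y*eta)
phi  = lambda e: sp.exp(sp.I*y*e)
phib = lambda e: sp.exp(-sp.I*y*e)   # complex-conjugate mode function

# Branch eta1 < eta2 < 0 of (eq:ge): theta(eta2 - eta1) = 1, theta(eta1 - eta2) = 0,
# and the last term carries the coefficient -phib(0)/phi(0).
G_branch = (phi(eta1)*phib(eta2)
            - (phib(0)/phi(0))*phi(eta1)*phi(eta2)) / (2*y)

# boundary condition: the propagator vanishes as eta2 -> 0
assert sp.simplify(G_branch.subs(eta2, 0)) == 0
```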
Finally, considering the time-dependent coupling constants $\lambda_k(\eta)$ in Fourier space \begin{equation}\eqlabel{eq:Fcc} \lambda_k(\eta)\:=\:\int_{-\infty}^{+\infty}d\varepsilon\,e^{i\varepsilon\eta}\tilde{\lambda}_k(\varepsilon), \end{equation} the perturbative wavefunction can be written as \begin{equation}\eqlabel{eq:PWF2a} \tilde{\psi}_{\mathcal{G}}\:=\: \int_{-\infty}^{+\infty}\prod_{v\in\mathcal{V}} \left[d\varepsilon_v\,\tilde{\lambda}_k(\varepsilon_v)\right]\psi_{\mathcal{G}}(\left\{\varepsilon_v\right\}), \end{equation} where \begin{equation}\eqlabel{eq:PWF2b} \psi_{\mathcal{G}}(\left\{\varepsilon_v\right\})\:\equiv\: \int_{-\infty}^{0}\prod_{v\in\mathcal{V}}\left[d\eta_v e^{i\varepsilon_v\eta_v}\phi_{\circ}^{\mbox{\tiny $(v)$}}(\eta_v)\right] \prod_{e\in\mathcal{E}}G_e\left(\eta_{v_e},\eta_{v'_e}\right). \end{equation} Our analysis will mainly focus on the structure of \eqref{eq:PWF2b}, leaving the integrations \eqref{eq:PWF2a} as the very last step. From the perspective of the integrand $\psi_{\mathcal{G}}$, the time-dependence of the coupling constants is reflected in the presence of an additional massless external state at each vertex. For cosmologies $a(\eta)\,\propto\,(-\eta)^{-\alpha}$ ($\alpha\,\in\,\mathbb{R}_+$), where $\lambda_k(\eta)$ is given by \eqref{eq:ml}, the Fourier coupling constant $\tilde{\lambda}_k(\varepsilon)$ is supported on the Heaviside step function $\vartheta(\varepsilon)$ and, consequently, takes values just on the positive energy axis. More precisely: \begin{equation}\eqlabel{eq:Fcc2} \lambda_k(\eta)\:=\:\lambda_k\int_{-\infty}^{+\infty}d\varepsilon\,e^{i\varepsilon\eta}\varepsilon^{\gamma_k-1}\vartheta(\varepsilon)\:\equiv\: \lambda_k\int_{0}^{+\infty}d\varepsilon\,e^{i\varepsilon\eta}\varepsilon^{\gamma_k-1} \end{equation} as long as $\gamma_k\,\equiv\,\alpha\left[2+(d-1)(2-k)/2\right]\,>\,0$.
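For $\eta\,<\,0$ the $\varepsilon$-integral in \eqref{eq:Fcc2} is oscillatory and only conditionally convergent; damping it by $e^{-\delta\varepsilon}$ turns it into a standard Laplace transform, $\int_0^{+\infty}d\varepsilon\,e^{(i\eta-\delta)\varepsilon}\varepsilon^{\gamma_k-1}\,=\,\Gamma(\gamma_k)(\delta-i\eta)^{-\gamma_k}$, which for $\delta\,\longrightarrow\,0$ reproduces $\lambda_k(\eta)\,\propto\,(-\eta)^{-\gamma_k}$ up to a constant phase. A numerical mpmath sketch of the regulated identity (the values are arbitrary):

```python
import mpmath as mp

mp.mp.dps = 20
gamma_k = mp.mpf('0.7')    # any gamma_k > 0
eta     = mp.mpf('-1.5')   # conformal time, eta < 0
delta   = mp.mpf('0.5')    # damping regulator, eventually sent to zero

# regulated Fourier/Laplace representation of lambda_k(eta):
num = mp.quad(lambda eps: mp.exp((1j*eta - delta)*eps) * eps**(gamma_k - 1),
              [0, mp.inf])
ref = mp.gamma(gamma_k) * (delta - 1j*eta)**(-gamma_k)
assert abs(num - ref) < mp.mpf('1e-10')
```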
As we will discuss in more detail later, for $\gamma_k\,<\,0$, $\tilde{\psi}_{\mathcal{G}}$ can be obtained by acting with a derivative operator on $\psi_{\mathcal{G}}$. As a final remark, in the rest of the paper, unless stated explicitly, we will focus on cosmologies $a(\eta)\,\propto\,(-\eta)^{-\alpha}$, for which the (squared) time-dependent mass is $\mu^2(\eta)\,=\,\mu_{\alpha}^2\eta^{-2}$ and the mode functions are given in terms of Hankel functions, as discussed earlier. \subsection{Boundary representations for the wavefunction of the universe}\label{subsec:BRca} In order to get more insights into the structure of the wavefunction of the universe, the zeroth-order step is to look for new ways of computing it. For cosmologies $a(\eta)\,\propto\,(-\eta)^{-\alpha}$, the data characterising the wavefunction of the universe are the moduli $E_j$ of the spatial momenta, the angles among the spatial momenta themselves, which can be parametrised via the moduli $y_J$ of sums of momenta ($y_J\,\equiv\,|\sum_{j\in J}\overrightarrow{p}_j|$, $J$ being a subset of the external momenta), as well as the masses of the states involved, which can be encoded into the parameter $\nu_j$ defined as in \eqref{eq:cups}. So, given a graph $\mathcal{G}$, the related wavefunction will be denoted as $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_j\};\{\nu_e\})$}}(\{E_j\},\{y_e\})$, with the energies $E_j$ associated to the external states and the $y_e$ to the edges, while $\nu_j$ and $\nu_e$ respectively encode the masses of the external state $j$ and of the internal state on the edge $e$ of $\mathcal{G}$.
In this section we first discuss how contributions $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_j\}; \{\nu_e\})$}}$ to the wavefunction of the universe related to a graph $\mathcal{G}$ with generic external scalars can be obtained by acting with certain operators on the contribution to the wavefunction $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{1/2\}; \{\nu_e\})$}}\,\equiv\,\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}$ from the very same graph $\mathcal{G}$ but with external states with order $\nu\,=\,1/2$, corresponding to the case $\mu(\eta)\,=\,0$, whose mode functions reduce exactly to exponentials. Then we will show how the wavefunction satisfies a new set of recursion relations. As a final comment, we will consider either $\nu\,\in\,\mathbb{R}_+$ or $\nu\,=\,i\zeta$ with $\zeta\,\in\,\mathbb{R}_+$, given that the bulk-to-bulk propagator $G^{\mbox{\tiny $(\nu)$}}$ is invariant under the sign flip of $\nu$: $G^{\mbox{\tiny $(-\nu)$}}\,=\,G^{\mbox{\tiny $(\nu)$}}$. \subsubsection{General external scalars from $\nu\,=\,1/2$} In this paper, we will focus on wavefunctions whose external states are given by just $\nu\,=\,1/2$, the reason being that {\it any state with a generic mass can be obtained by applying a suitable differential operator on the $\nu\,=\,1/2$ ones} -- an example of this fact was illustrated in \cite{Arkani-Hamed:2015bza, Arkani-Hamed:2018kmz}, where weight-shifting operators were defined for mapping the wavefunction of the universe with external conformally coupled scalars in de Sitter space to a wavefunction with {\it all} external massless states. Here we will define operators which, acting on the wavefunction, change $\nu\,=\,1/2$ into an arbitrary $\nu$, {\it i.e.} an arbitrary mass, for a single state.
The first direct observation is that any mode function $\phi_{\circ}^{\mbox{\tiny $(\nu)$}}$ can be conveniently written as an operator $\hat{\mathcal{O}}_{\nu}(E)$ in energy space acting on $\phi_{\circ}^{\mbox{\tiny $(1/2)$}}\,\equiv\,e^{iE\eta}$. Concretely \begin{equation}\eqlabel{eq:ExSop} \phi_{\circ}^{\mbox{\tiny $(\nu)$}}\:=\:\frac{1}{(-E\eta)^{\nu-\frac{1}{2}}}\hat{\mathcal{O}}_{\nu}(E)\,e^{iE\eta}, \end{equation} where \begin{equation}\eqlabel{eq:ExSop2} \hat{\mathcal{O}}_{\nu}(E)\:\equiv\:\frac{E^{2\nu}}{\Gamma\left(\frac{1}{2}+\nu\right)\Gamma\left(\frac{1}{2}-\nu\right)}\int_0^{+\infty}dt\,t^{\nu-\frac{1}{2}}\int_0^{+\infty}ds\,s^{-\nu-\frac{1}{2}} e^{-\left(Et+is\frac{\partial}{\partial E}\right)}. \end{equation} There is a class of values of $\nu$, and consequently of the masses, for which the expression \eqref{eq:ExSop2} simplifies. For $\nu\,=\,l+\frac{1}{2}$ ($l\,\in\,\mathbb{Z}_+$), we have \begin{equation}\eqlabel{eq:Opl12} \hat{\mathcal{O}}_{l}(E)\:=\:\prod_{r=1}^{l}\left(E\frac{\partial}{\partial E}-(2r-1)\right). \end{equation} With the relation \eqref{eq:ExSop} among a generic mode function $\phi_{\circ}^{\mbox{\tiny $(\nu)$}}$ and $\phi_{\circ}^{\mbox{\tiny $(1/2)$}}\,\equiv\,e^{iE\eta}$ at hand, we can also deduce how the wavefunction $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_j\}; \{\nu_e\})$}}$ can be obtained from $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}$.
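A numerical spot-check of \eqref{eq:ExSop} together with \eqref{eq:Opl12} for the simplest non-trivial case $l\,=\,1$: here $(E\partial_E-1)e^{iE\eta}\,=\,(iE\eta-1)e^{iE\eta}$ has been evaluated by hand, and the two sides agree up to an overall $E$- and $\eta$-independent normalisation, here $-i\sqrt{2/\pi}$ from the Hankel-function conventions (an mpmath sketch with arbitrary kinematics):

```python
import mpmath as mp

mp.mp.dps = 25
E, eta = mp.mpf('1.7'), mp.mpf('-0.9')
x = -E*eta

# nu = 3/2, i.e. l = 1: the Hankel mode function of (eq:mm)...
lhs = mp.sqrt(x) * mp.hankel2(mp.mpf('1.5'), x)

# ...versus (-E*eta)^{-1} (E d/dE - 1) e^{i*E*eta}, written in closed form
rhs = (1j*E*eta - 1) * mp.exp(1j*E*eta) / x

# agreement up to the constant -i*sqrt(2/pi), independent of E and eta
assert abs(lhs - (-1j)*mp.sqrt(2/mp.pi)*rhs) < mp.mpf('1e-18')
```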
Let us begin with considering a general graph $\mathcal{G}$ with $n_v$ vertices and $n_e$ edges, and a rescaled wavefunction \begin{equation}\eqlabel{eq:WFres} \tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_j\};\{\nu_e\})$}}\:\longrightarrow\: \prod_{j=1}^nE^{\frac{1}{2}-\nu_j}_j\prod_{e\in\mathcal{E}}y_e^{2\left(\frac{1}{2}-\nu_e\right)}\tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_j\};\{\nu_e\})$}} \end{equation} whose explicit expression as a time integral is given by \begin{equation}\eqlabel{eq:tWF} \tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_j\};\{\nu_e\})$}}\:=\: \prod_{j=1}^{n}\hat{\mathcal{O}}_{\nu_j}(E_j) \int_{-\infty}^{0}\prod_{v\in\mathcal{V}}\left[d\eta_v\,\frac{i\lambda_k(\eta_v)}{(-\eta_v)^{\nu_v-\frac{\rho_v}{2}}}\,e^{iX_v\eta_v}\right]\prod_{e\in\mathcal{E}}G_e\left(y_e;\,\eta_{v_e},\eta_{v'_e}\right) \end{equation} where $X_v\,\equiv\,\sum_{j\in v}E_j$, the propagators $G_e$ have been rescaled by $[(-y_e\eta_{v_e})(-y_e\eta_{v'_e})]^{\frac{1}{2}-\nu_e}$, $\rho_v$ is the total number of states (both internal and external) attached to the vertex $v$, and $\nu_v\,\equiv\,\sum_{j\in v}\nu_j + \sum_{e\in\mathcal{E}_v}\nu_e$. For each vertex $v$, the coupling constant and the other factors of $\eta_v$ can be considered all together in Fourier space: \begin{equation}\eqlabel{eq:FCCt} \frac{i\lambda_k(\eta_v)}{\left(-\eta_v\right)^{\nu_v-\frac{\rho_v}{2}}}\:=\: i^{\beta_{k,\nu}}\left(i\lambda_k\right)\int_{0}^{\infty}d\varepsilon_v\,e^{i\varepsilon_v\eta_v}\varepsilon_v^{\beta_{k,\nu}-1}, \end{equation} for $\mbox{Re}\left\{\beta_{k,\nu}\right\}\,>\,0$, where $\beta_{k,\nu}\,\equiv\,\gamma_k+\nu_v-\frac{\rho_v}{2}$. The power $\beta_{k,\nu}$ contains both the information about the cosmology and the type of interactions, through the parameters $\alpha$ and $k$ in $\gamma_k$, and about the internal and external states at each vertex, via $\nu_v$.
It is possible to keep the two sets of information separated, by writing two different Fourier representations for $\mbox{Re}\{\gamma_k\}\,>\,0$ and $\mbox{Re}\{\nu_v-\rho_v/2\}\,>\,0$ separately -- then \eqref{eq:FCCt} results from the convolution theorem. If instead either $\mbox{Re}\{\gamma_k\}\,<\,0$ or $\mbox{Re}\{\nu_v-\rho_v/2\}\,<\,0$ (or both), the related Fourier integral is replaced by a derivative operator. Thus, the wavefunction of the universe related to a generic graph $\mathcal{G}$ can be written as \begin{equation}\eqlabel{eq:tWF2} \tilde{\psi}_{\mathcal{G}}\:=\: \prod_{j=1}^{n}\hat{\mathcal{O}}_{\nu_j}(E_j) \int_{0}^{+\infty}\prod_{v\in\mathcal{V}}\left[d\varepsilon_v\,\varepsilon_v^{\beta_{k,\nu}-1}\right] \underbrace{\int_{-\infty}^{0}\prod_{v\in\mathcal{V}}\left[d\eta_v\,i\,e^{i(X_v+\varepsilon_v)\eta_v}\right]\prod_{e\in\mathcal{E}}G_e\left(\eta_{v_e},\eta_{v'_e}\right)}_{\text{$\displaystyle \psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}$}} \end{equation} {\it i.e.} the (rescaled) wavefunction $\tilde{\psi}_{\mathcal{G}}$ with arbitrary external states as well as its integrand $\psi_{\mathcal{G}}$ can be obtained by acting with the operators $\hat{\mathcal{O}}$ on the (rescaled) wavefunction $\tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}$ with just external conformally coupled scalars and on its integrand $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}$, respectively. Notice that the formula \eqref{eq:tWF2} is valid as long as $\mbox{Re}\left\{\beta_{k,\nu}\right\}\,>\,0$. For $\mbox{Re}\left\{\beta_{k,\nu}\right\}\,\le\,0$, the integration over $\varepsilon_v$ is replaced by a differential operator of order $\beta\,\in\,\mathbb{Z}_+$, with $\beta$ finally analytically continued to $-\beta_{k,\nu}$.
Hence, we can write in full generality: \begin{equation}\eqlabel{eq:tWF3} \psi'_{\mathcal{G}}\:=\:\left[\prod_{j=1}^{n}\hat{\mathcal{O}}_{\nu_j}(E_j)\right]\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}, \qquad \tilde{\psi}'_{\mathcal{G}}\:=\:\left[\prod_{j=1}^{n}\hat{\mathcal{O}}_{\nu_j}(E_j)\right]\tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}, \end{equation} with $\tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}$ given by \begin{equation}\eqlabel{eq:tWF3b} \tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}\:=\:\hat{\mathcal{W}}(x,X)\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}, \qquad \hat{\mathcal{W}}\:=\: \left\{ \begin{array}{l} {\displaystyle \prod_{v\in\mathcal{V}}\int_{X_v}^{+\infty}dx_v\,(x_v-X_v)^{\beta_{k,v}-1}},\quad\mbox{for } \mbox{Re}\left\{\beta_{k,\nu}\right\}\,>\,0,\\ \phantom{\ldots}\\ {\displaystyle \left.\prod_{v\in\mathcal{V}}\left(i\frac{\partial}{\partial X_v}\right)^{\beta}\right|_{\beta\longrightarrow -\beta_{k,\nu}}},\hspace{1.25cm} \mbox{for } \mbox{Re}\left\{\beta_{k,\nu}\right\}\,\le\,0, \end{array} \right. \end{equation} Notice that in \eqref{eq:tWF3} the integrated (rescaled) wavefunction $\tilde{\psi}'_{\mathcal{G}}$ with arbitrary scalars is obtained from the integrated (rescaled) wavefunction $\tilde{\psi}_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}$ with external $\nu\,=\,1/2$ states only. It is also possible to obtain it by applying the operator $\hat{\mathcal{W}}$, now written as an integral over $\varepsilon_v$, to $\psi'_{\mathcal{G}}$.
Thus, for contact interactions, the integrand $\psi'_{\mathcal{G}}$ and the integral $\tilde{\psi}'_{\mathcal{G}}$ acquire, respectively, the following forms: \begin{equation}\label{eq:CI1} \psi'_{\mathcal{G}}\:=\:\left[\prod_{j=1}^{n}\hat{\mathcal{O}}_{\nu_j}(E_j)\right]\frac{1}{X}, \qquad \tilde{\psi}'_{\mathcal{G}}\:=\:\left[\prod_{j=1}^{n}\hat{\mathcal{O}}_{\nu_j}(E_j)\right]\hat{\mathcal{W}}\frac{1}{x}, \end{equation} where $X$ is the sum of all the energies, and the product in the operator $\hat{\mathcal{W}}$ reduces to a single term. For $k\,=\,3$ and $d\,=\,5$, as well as $k\,=\,4$ and $d\,=\,3$, $\gamma_k\,=\,0$ and the operator $\hat{\mathcal{W}}$ has an integral or derivative form depending only on whether $\mbox{Re}\{\nu_v\}-n/2$ is positive or negative. In particular, if all the external states have $\nu_j\,=\,1/2$, then such an operator is just the identity. \subsubsection{Recursive relations}\label{subsubsec:RR} Let us now focus on the integrand $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\frac{1}{2}\};\{\nu_e\})$}}$ with external $\nu\,=\,1/2$ states only. For brevity, and because it will not give rise to any confusion, we will drop the $\{1/2\}$ in the upper index: \begin{equation}\eqlabel{eq:WFint} \psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}\:=\:\left(i\lambda_k\right)^{n_v}\int_{-\infty}^0\prod_{v\in\mathcal{V}}\left[d\eta_{v}\,e^{ix_v\eta_v}\right]\prod_{e\in\mathcal{E}}G^{\mbox{\tiny $(\nu_e)$}}\left(y_e;\,\eta_{v_e},\eta_{v'_e}\right), \end{equation} where, as in \eqref{eq:tWF}, the propagators have been rescaled by $[(-y_e\eta_{v_e})(-y_e\eta_{v'_e})]^{\frac{1}{2}-\nu_e}$.
From this integral representation, we can consider the integral $\mathcal{I}$ obtained from \eqref{eq:WFint} by inserting the time-translation operator $\Delta$ in such a way that it acts on the full integrand of \eqref{eq:WFint}: \begin{equation}\eqlabel{eq:rrWF} \mathcal{I}\:\equiv\:\left(i\lambda_k\right)^{n_v}\int_{-\infty}^0\prod_{v\in\mathcal{V}}d\eta_{v}\,\Delta\left[\prod_{v\in\mathcal{V}}e^{ix_v\eta_v}\prod_{e\in\mathcal{E}}G^{\mbox{\tiny $(\nu_e)$}}\left(y_e;\,\eta_{v_e},\eta_{v'_e}\right)\right],\qquad \Delta\:\equiv\:-i\sum_{v\in\mathcal{V}}\partial_{\eta_v}. \end{equation} It is straightforward to notice that this integral vanishes: because of $\Delta$, the integrand of $\mathcal{I}$ is a sum of total derivatives, whose contribution from $-\infty$ vanishes because of the positive-frequency external states, while the one from the boundary vanishes because the propagator does. Thus, letting the total time-translation operator act first on the external states and then on the propagators, $\mathcal{I}$ can also be written as \begin{equation}\eqlabel{eq:rrWF2} 0\:=\:\mathcal{I}\:=\: \left(\sum_{v\in\mathcal{V}} x_v\right)\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}\:+\: \left(i\lambda_k\right)^{n_v}\sum_{e\in\mathcal{E}}\int_{-\infty}^0\prod_{v\in\mathcal{V}}\left[d\eta_{v}\,e^{ix_v\eta_v}\right]\Delta G_e^{\mbox{\tiny $(\nu_e)$}}\prod_{\bar{e}\in\mathcal{E}\setminus\left\{e\right\}}G_{\bar{e}}^{\mbox{\tiny $(\nu_{\bar{e}})$}}, \end{equation} where the notation has been shortened by writing $G_{e}^{\mbox{\tiny $(\nu_e)$}}\,\equiv\,G_e^{\mbox{\tiny $(\nu_{e})$}}\left(y_e;\,\eta_{v_e},\eta_{v'_e}\right)$.
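For the flat-space seed the mechanism is transparent: $\Delta$ annihilates the time-ordered exponentials, and only the boundary-subtraction term of the propagator contributes to $\Delta G_e$. A minimal symbolic sketch (sympy, $\nu_e\,=\,1/2$, on the branch $\eta_1\,<\,\eta_2$, in a unit normalisation):

```python
import sympy as sp

eta1, eta2, y = sp.symbols('eta1 eta2 y', real=True)
I = sp.I

# total time-translation operator
Delta = lambda F: -I*(sp.diff(F, eta1) + sp.diff(F, eta2))

# flat-space (nu_e = 1/2) propagator on the branch eta1 < eta2
G = (sp.exp(I*y*(eta1 - eta2)) - sp.exp(I*y*(eta1 + eta2))) / (2*y)

# the time-ordered piece is translation invariant...
assert sp.simplify(Delta(sp.exp(I*y*(eta1 - eta2)))) == 0
# ...so only the boundary-subtraction term contributes to Delta G
assert sp.simplify(Delta(G) + sp.exp(I*y*(eta1 + eta2))) == 0
```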
Interestingly, the total time translation operator maps the propagator $G_e^{\mbox{\tiny $(\nu_e)$}}$ of a state $\nu_e$ into a propagator of a state $\nu_e-1$: \begin{equation}\eqlabel{eq:rrWF3} \Delta G_{e}^{\mbox{\tiny $(\nu_e)$}}\:=\:-i\left(\eta_{v_e}+\eta_{v'_e}\right)2(\nu_e-1)y_e^2 G_{e}^{\mbox{\tiny $(\nu_e-1)$}} + y_e^2\eta_{v_e}\eta_{v'_e}\Delta G_{e}^{\mbox{\tiny $(\nu_e-1)$}}. \end{equation} Substituting \eqref{eq:rrWF3} into \eqref{eq:rrWF2}, the first term in \eqref{eq:rrWF3} gives rise to a first-derivative operator, dependent on the energies $x_{v_e}$ and $x_{v'_e}$ of the endpoints of the edge $e$, acting on the wavefunction $\psi_{\mathcal{G}}^{\mbox{\tiny $(\nu_e-1,\{\nu_{\bar{e}}\})$}}$ of a graph with the very same topology as $\mathcal{G}$ but with the edge $e$ now related to a state with order $\nu_e-1$. The second term of \eqref{eq:rrWF3} instead gives rise to a double-derivative operator, dependent again on the energies $x_{v_e}$ and $x_{v'_e}$ of the endpoints of the edge $e$, which now acts on the product of the total energy and $\psi_{\mathcal{G}}^{\mbox{\tiny $(\nu_e-1,\{\nu_{\bar{e}}\})$}}$.
Hence, the wavefunction with arbitrary internal states satisfies the following recursion relation \begin{equation}\eqlabel{eq:rrWFfin1} \left(\sum_{v\in\mathcal{V}} x_v\right)\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}\:=\:\sum_{e\in\mathcal{E}} \left[ \left(\frac{\partial}{\partial x_{v_e}}+\frac{\partial}{\partial x_{v'_e}}\right)2\left(\nu_e-1\right)- \frac{\partial^2}{\partial x_{v_e} \partial x_{v'_e}}\left(\sum_{v\in\mathcal{V}} x_v\right) \right] \psi_{\mathcal{G}}^{\mbox{\tiny $(\nu_e-1,\{\nu_{\bar{e}}\})$}} \end{equation} which can be schematically visualised as \begin{equation}\eqlabel{eq:rrWfin1pic} \begin{tikzpicture}[node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] \begin{scope}[shift={(2.125,0)}, scale={.9}, transform shape] \begin{scope}[shift={(-1,0)}, transform shape] \fill[shade,thick] (0,0) circle (.8); \node[text width=.18cm,color=black] at (0,0) (Fn) {$\displaystyle\psi_{\mbox{\tiny $\mathcal{G}$}}$}; \node[below=.5cm of Fn] (lelhs) {$\displaystyle\left\{\nu_e\right\}$}; \node[text width=.18cm,color=black] at (-2.8,-0.1) (sum) {$\displaystyle\left(\sum_{v\in\mathcal{V}}x_v\right)$}; \node[ball,text width=.18cm,fill,color=black,label=left:$\mbox{\tiny $x_1$}$, scale=.75] at (-.8,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black,label=left:$\mbox{\tiny $x_2$}$, scale=.75] at ({.8*cos(150)},{.8*sin(150)}) (x2) {}; \node[ball,text width=.18cm,fill,color=black,label={right:$\mbox{\tiny $x_{i-1}$}$}, scale=.75] at ({.8*cos(30)},{.8*sin(30)}) (xi1) {}; \node[ball,text width=.18cm,fill,color=black,label={right:$\mbox{\tiny $x_i$}$}, scale=.75] at ({.8},{0}) (xi) {}; \node[ball,text width=.18cm,fill,color=black,label={right:$\mbox{\tiny $x_{i+1}$}$}, scale=.75] at ({.75*cos(-30)},{.75*sin(-30)}) (xi1b) {}; \node[ball,text width=.18cm,fill,color=black,label=left:$\mbox{\tiny $x_n$}$, scale=.75] at ({.8*cos(150)},{-.8*sin(150)}) (xn) {}; 
\end{scope} % \node[color=black, scale=1.125] at (2.25,0) (eq) {$\displaystyle\quad=\:\sum_{e\in\mathcal{E}}\hat{\mathcal{O}}_{e} \hspace{-.125cm} \left[ \begin{array}{l} \phantom{ldots}\\ \phantom{ldots}\\ \phantom{ldots} \end{array} \right.$}; \fill[shade,thick] (4,0) circle (.8); \node[text width=.18cm,color=black] at (4,0) (Fl) {$\displaystyle\psi_{\mathcal{L}}$}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at (3.2,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(150)+4},{.8*sin(150)}) (x2) {}; \node[ball,text width=.18cm,fill,color=red,label={[label distance=.05mm]30:$\hspace{-.1cm}\mbox{\tiny $x_{v_e}+y_e$}$}] at ({.8+4},{0}) (xv) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(150)+4},{-.8*sin(150)}) (xn) {}; \fill[shade,thick] (8,0) circle (.8); \node[text width=.18cm,color=black] at (8,0) (Fr) {$\displaystyle\psi_{\mathcal{R}}$}; \node[ball,text width=.18cm,fill,color=red,label={[left=1cm]-30:$\hspace{-.1cm}\mbox{\tiny $x_{v'_e}+y_e$}$}] at (7.2,0) (xv2) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(30)+8},{.8*sin(30)}) (x2) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8+8},{0}) (xt) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(-30)+8},{.8*sin(-30)}) (xn) {}; \draw[-,ultra thick,color=red] (xv) -- (xv2); \coordinate (lre) at ($(xv)!0.5!(xv2)$); \node[below=.15cm of lre, scale=.75] {$\displaystyle\left\{\nu_e-1\right\}$}; \node[color=black] at (9.25,-0.1) (eq2) {$\displaystyle\;+$}; \end{scope} % \begin{scope}[shift={(8,0)}, scale={.9}, transform shape] \fill[shade,thick] (4,0) circle (.8); \node[ball,text width=.18cm,fill,color=red,label={[left=1.2cm]-30:$\mbox{\tiny $x_{v_e}+y_e$}$}] at ({.7*cos(-120)+4},{.7*sin(-120)}) (xv1) {}; \node[ball,text width=.18cm,fill,color=red,label={[right=-.2cm]-30:$\mbox{\tiny $x_{v'_e}+y_e$}$}] at ({.7*cos(-60)+4},{.7*sin(-60)}) (xv2) {}; \draw[-,ultra thick,color=red] 
(xv1.south) edge[bend right=100] (xv2.south); \coordinate (lre) at ($(xv1.south)!0.5!(xv2.south)$); \node[below=.15cm of lre, scale=.75] {$\displaystyle\left\{\nu_e-1\right\}$}; \node[scale=1.125] at ($(4,0)+(.5,0)$) {$\displaystyle \left. \begin{array}{l} \phantom{ldots}\\ \phantom{ldots}\\ \phantom{ldots} \end{array} \right]$}; \end{scope} \end{tikzpicture} \end{equation} with the operator $\hat{\mathcal{O}}_{e}$ being the differential operator appearing inside the square brackets of \eqref{eq:rrWFfin1}. Such a recursion relation relates the contribution to the wavefunction from a given graph $\mathcal{G}$ with internal states with order $\left\{\nu_e\right\}$ to the contribution to the wavefunction from the very same graph $\mathcal{G}$ but now with the order of the internal edges shifted by $-1$ one edge at a time. Notice that, diagrammatically, a weight $\nu_e$ can be associated to the edge $e$ of $\mathcal{G}$, and thus the operator $\hat{\mathcal{O}}_{e}$ raises the weight of the edge $e$ by one. A straightforward manipulation of the recursion relation leads to the following expression of the wavefunction $\psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}$ \begin{equation}\eqlabel{eq:rrWFfin2} \psi_{\mathcal{G}}^{\mbox{\tiny $(\{\nu_e\})$}}\:=\:\sum_{e\in\mathcal{E}}\hat{\mathcal{O}}'_{e}\psi_{\mathcal{G}}^{\mbox{\tiny $(\nu_e-1,\{\nu_{\bar{e}}\})$}}, \qquad \hat{\mathcal{O}}'_{e}\:\equiv\: \frac{2\left(\nu_e-\frac{3}{2}\right)}{\displaystyle\sum_{v\in\mathcal{V}} x_v}\left(\frac{\partial}{\partial x_{v_e}}+\frac{\partial}{\partial x_{v'_e}}\right)- \frac{\partial^2}{\partial x_{v_e} \partial x_{v'_e}}. \end{equation} Interestingly, for $\nu_e\,=\,3/2$ -- which corresponds to massless states for cosmologies with $\alpha\,=\,2/(d-1)$, including de Sitter in four dimensions, as well as to dS space-time ($\alpha\,=\,1$) with squared mass $m^2\,=\,(d^2-9)/4$ --, the operator $\hat{\mathcal{O}}'_{e}$ reduces to just the second derivative term. A comment is now in order.
Despite revealing an unsuspected connection among wavefunctions with different internal states, recursion relations gain power if they have an endpoint -- a seed on which the attention can be focused -- or if, in any case, the basic terms of the recursion are known or computable. The order $\nu$ of a state can be either real or purely imaginary. If $\nu$ is not only real but also an integer or half-integer, the seed of the recursion relation can be taken to be $\nu\,=\,0$ or $\nu\,=\,1/2$, respectively. The latter corresponds to the massless scalar in flat space with time-dependent interactions, containing the conformally coupled scalar in FRW cosmologies. For de Sitter in four dimensions, the unitary representations having $\nu\,\in\,\mathbb{R}_+$\footnote{Recall that because of the invariance of the propagator $G$ under the sign flip of the order $\nu$, we focused on $\nu\,\in\,\mathbb{R}_+$. Notice, however, that the rescaling of the propagator by $\left[(-y\eta_{v_e})(-y\eta_{v'_e})\right]^{1/2-\nu_e}$ breaks this symmetry. Luckily, the above formulas remain valid also for $\nu\,\in\,\mathbb{R}_{-}$ -- up to the sign flip of $\nu$ -- if we simultaneously think of the propagator as rescaled by $\left[(-y\eta_{v_e})(-y\eta_{v'_e})\right]^{1/2+\nu_e}$, {\it i.e.} if we flip the sign of $\nu$ in the operator $\hat{\mathcal{W}}$, which maps the {\it integrand} into the actual wavefunction.} are such that $\nu\,\in\,[0,\,3/2]$ and, consequently, the only states with $\nu\,\in\,\mathbb{Z}_+\,\mbox{ or }\,\mathbb{Z}_{\mbox{\tiny $+\frac{1}{2}$}}$ are given by $\nu\,=\,0,\,\frac{1}{2},\,1,\,\frac{3}{2}$.
Notice further that which representations are unitary and which are not, and consequently the related values of $\nu$, change with $d$ and $\alpha$, but the recursion relation is valid {\it irrespective} of this: this information is instead encoded into the operator $\hat{\mathcal{W}}$ mapping the {\it integrand} that the recursion relation is computing into the integrated wavefunction. For arbitrary $\nu$, irrespective of whether it is real or purely imaginary, the recursion relations discussed do not have an endpoint. Further, for $\nu$ purely imaginary, the operator $\hat{\mathcal{O}}'_{e}$ would take the state out of the Hilbert space. In this case, one can take a perturbative approach, by considering the mass as a perturbative two-point coupling around the point for which $\nu\,=\,0$. Importantly, we can even think of using this perturbative approach by considering the full $\mu(\eta)$ as a perturbative coupling, {\it i.e.} introducing two-point corrections to the conformally coupled case: this would allow us to make no choice of the cosmology {\it at all}, while for the time being we have been restricting ourselves to a given class of $a(\eta)$'s. We postpone this discussion to future work, while in the rest of this paper we will focus on light states, with $\nu\,\in\,\mathbb{R}_+$. \section{Edge-weighted graphs and the wavefunctions of the universe}\label{sec:EWG} Let us restrict ourselves to the specific case $\nu_e\,=\,l_e+1/2$ ($l_e\,\in\,\mathbb{Z}_+$). Considering for all the $\nu_e$'s the value $\nu_e\,=\,1/2$ as the seed, we can iterate \eqref{eq:rrWF3} to obtain \begin{equation}\eqlabel{eq:rrWF4} \Delta G_{e}^{\mbox{\tiny $(l_e)$}}\:=\:-i\left(\eta_{v_e}+\eta_{v'_e}\right)y_e^2\sum_{r_e\,=\,0}^{l_e-1}2\left(l_e-r_e-\frac{1}{2}\right)\left(y_e^2\eta_{v_e}\eta_{v'_e}\right)^{r_e} G_{e}^{\mbox{\tiny $(l_e-r_e-1)$}} + \left(y_e^2\eta_{v_e}\eta_{v'_e}\right)^{l_e}\Delta G_{e}^{\mbox{\tiny $(0)$}}. \end{equation} Some comments are now in order.
First, in the last term of \eqref{eq:rrWF4}, the total time translation operator acts on the propagator of a conformally coupled state: the time-ordered terms get annihilated and, consequently, the non-time-ordered part of $G_e$ contributes as $\Delta G_e\,=\,-2\,e^{iy_e(\eta_{v_e}+\eta_{v'_e})}$. Secondly, when $\Delta G_{e}^{\mbox{\tiny $(l_e)$}}$ is inserted in \eqref{eq:rrWF2}, the factors of $\eta$ can be replaced by derivatives acting on the external energies $x_{v_e}$ and $x_{v'_e}$, obtaining \begin{equation}\eqlabel{eq:rrWFfin} \begin{split} \left(\sum_{v\in\mathcal{V}} x_v\right)\psi_{\mathcal{G}}^{\mbox{\tiny $(\{l_e\})$}}\:&=\: \sum_{e\in\mathcal{E}}\left(\frac{\partial}{\partial x_{v_e}}+\frac{\partial}{\partial x_{v'_e}}\right)\sum_{r_e=0}^{l_e-1}2\left(l_e-r_e-\frac{1}{2}\right)\left(-\frac{\partial^2}{\partial x_{v_e}\partial x_{v'_e}}\right)^{r_e}\psi_{\mathcal{G}}^{\mbox{\tiny $(\{l_{\bar{e}}\}; l_e-r_e-1)$}}\:+\\ &+\:\sum_{e\in\mathcal{E}}\left(-\frac{\partial^2}{\partial x_{v_e}\partial x_{v'_e}}\right)^{l_e}\psi_{\mathcal{G}_{\mbox{\tiny L}}}^{\mbox{\tiny $(\{l_{\bar{e}}\})$}}\times \psi_{\mathcal{G}_{\mbox{\tiny R}}}^{\mbox{\tiny $(\{l_{\bar{e}}\})$}}, \end{split} \end{equation} which can be schematically represented as \begin{equation}\eqlabel{eq:rrWFpic} \begin{tikzpicture}[node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] \begin{scope}[shift={(2.125,0)}, scale={.9}, transform shape] \begin{scope}[shift={(-1,0)}, transform shape] \fill[shade,thick] (0,0) circle (.8); \node[text width=.18cm,color=black] at (0,0) (Fn) {$\displaystyle\psi_{\mbox{\tiny $\mathcal{G}$}}$}; \node[below=.5cm of Fn] (lelhs) {$\displaystyle\left\{l_e\right\}$}; \node[text width=.18cm,color=black] at (-2.8,-0.1) (sum) {$\displaystyle\left(\sum_{v\in\mathcal{V}}x_v\right)$}; \node[ball,text width=.18cm,fill,color=black,label=left:$\mbox{\tiny $x_1$}$, scale=.75] at
(-.8,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black,label=left:$\mbox{\tiny $x_2$}$, scale=.75] at ({.8*cos(150)},{.8*sin(150)}) (x2) {}; \node[ball,text width=.18cm,fill,color=black,label={right:$\mbox{\tiny $x_{i-1}$}$}, scale=.75] at ({.8*cos(30)},{.8*sin(30)}) (xi1) {}; \node[ball,text width=.18cm,fill,color=black,label={right:$\mbox{\tiny $x_i$}$}, scale=.75] at ({.8},{0}) (xi) {}; \node[ball,text width=.18cm,fill,color=black,label={right:$\mbox{\tiny $x_{i+1}$}$}, scale=.75] at ({.75*cos(-30)},{.75*sin(-30)}) (xi1b) {}; \node[ball,text width=.18cm,fill,color=black,label=left:$\mbox{\tiny $x_n$}$, scale=.75] at ({.8*cos(150)},{-.8*sin(150)}) (xn) {}; \end{scope} % \node[color=black, scale=1.125] at (2.25,0) (eq) {$\displaystyle\quad=\:\sum_{e\in\mathcal{E}}\sum_{r_e=0}^{l_e-1}\hat{\mathcal{O}}_{r_e}^{\mbox{\tiny $(1)$}} \hspace{-.125cm} \left[ \begin{array}{l} \phantom{ldots}\\ \phantom{ldots}\\ \phantom{ldots} \end{array} \right.$}; \fill[shade,thick] (4,0) circle (.8); \node[text width=.18cm,color=black] at (4,0) (Fl) {$\displaystyle\psi_{\mathcal{L}}$}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at (3.2,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(150)+4},{.8*sin(150)}) (x2) {}; \node[ball,text width=.18cm,fill,color=red,label={[label distance=.05mm]30:$\hspace{-.1cm}\mbox{\tiny $x_{v_e}+y_e$}$}] at ({.8+4},{0}) (xv) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(150)+4},{-.8*sin(150)}) (xn) {}; \fill[shade,thick] (8,0) circle (.8); \node[text width=.18cm,color=black] at (8,0) (Fr) {$\displaystyle\psi_{\mathcal{R}}$}; \node[ball,text width=.18cm,fill,color=red,label={[left=1cm]-30:$\hspace{-.1cm}\mbox{\tiny $x_{v'_e}+y_e$}$}] at (7.2,0) (xv2) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(30)+8},{.8*sin(30)}) (x2) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8+8},{0}) (xt) {}; \node[ball,text width=.18cm,fill,color=black,
scale=.75] at ({.8*cos(-30)+8},{.8*sin(-30)}) (xn) {}; \draw[-,ultra thick,color=red] (xv) -- (xv2); \coordinate (lre) at ($(xv)!0.5!(xv2)$); \node[below=.15cm of lre, scale=.75] {$\displaystyle\left\{l_e-r_e-1\right\}$}; \node[color=black] at (9.25,-0.1) (eq2) {$\displaystyle\;+$}; \end{scope} % \begin{scope}[shift={(8,0)}, scale={.9}, transform shape] \fill[shade,thick] (4,0) circle (.8); \node[ball,text width=.18cm,fill,color=red,label={[left=1.2cm]-30:$\mbox{\tiny $x_{v_e}+y_e$}$}] at ({.7*cos(-120)+4},{.7*sin(-120)}) (xv1) {}; \node[ball,text width=.18cm,fill,color=red,label={[right=-.2cm]-30:$\mbox{\tiny $x_{v'_e}+y_e$}$}] at ({.7*cos(-60)+4},{.7*sin(-60)}) (xv2) {}; \draw[-,ultra thick,color=red] (xv1.south) edge[bend right=100] (xv2.south); \coordinate (lre) at ($(xv1.south)!0.5!(xv2.south)$); \node[below=.15cm of lre, scale=.75] {$\displaystyle\left\{l_e-r_e-1\right\}$}; \node[scale=1.125] at ($(4,0)+(.5,0)$) {$\displaystyle \left. \begin{array}{l} \phantom{ldots}\\ \phantom{ldots}\\ \phantom{ldots} \end{array} \right]+$}; \end{scope} % \begin{scope}[shift={(2,-2)}, scale={.9}, transform shape] \node[color=black, scale=1.125] at (2.25,0) (eq) {$\displaystyle\quad+\:\sum_{e\in\mathcal{E}}\hat{\mathcal{O}}_{l_e}^{\mbox{\tiny $(2)$}} \hspace{-.125cm} \left[ \begin{array}{l} \phantom{ldots}\\ \phantom{ldots}\\ \phantom{ldots} \end{array} \right.$}; \fill[shade,thick] (4,0) circle (.8); \node[text width=.18cm,color=black] at (4,0) (Fl) {$\displaystyle\psi_{\mathcal{L}}$}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at (3.2,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(150)+4},{.8*sin(150)}) (x2) {}; \node[ball,text width=.18cm,fill,color=red,label={[label distance=.05mm]30:$\hspace{-.1cm}\mbox{\tiny $x_{v_e}+y_e$}$}] at ({.8+4},{0}) (xv) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(150)+4},{-.8*sin(150)}) (xn) {}; \fill[shade,thick] (8,0) circle (.8); \node[text width=.18cm,color=black] 
at (8,0) (Fr) {$\displaystyle\psi_{\mathcal{R}}$}; \node[ball,text width=.18cm,fill,color=red,label={[left=1cm]-30:$\hspace{-.1cm}\mbox{\tiny $x_{v'_e}+y_e$}$}] at (7.2,0) (xv2) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(30)+8},{.8*sin(30)}) (x2) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8+8},{0}) (xt) {}; \node[ball,text width=.18cm,fill,color=black, scale=.75] at ({.8*cos(-30)+8},{.8*sin(-30)}) (xn) {}; \draw[-,dashed,ultra thick,color=red] (xv) -- (xv2); \coordinate (lre) at ($(xv)!0.5!(xv2)$); \node[color=black] at (9.25,-0.1) (eq2) {$\displaystyle\;+$}; \end{scope} % \begin{scope}[shift={(8,-2)}, scale={.9}, transform shape] \fill[shade,thick] (4,0) circle (.8); \node[ball,text width=.18cm,fill,color=red,label={[left=1.2cm]-30:$\mbox{\tiny $x_{v_e}+y_e$}$}] at ({.7*cos(-120)+4},{.7*sin(-120)}) (xv1) {}; \node[ball,text width=.18cm,fill,color=red,label={[right=-.2cm]-30:$\mbox{\tiny $x_{v'_e}+y_e$}$}] at ({.7*cos(-60)+4},{.7*sin(-60)}) (xv2) {}; \draw[-,dashed,ultra thick,color=red] (xv1.south) edge[bend right=100] (xv2.south); \coordinate (lre) at ($(xv1.south)!0.5!(xv2.south)$); \node[scale=1.125] at ($(4,0)+(.5,0)$) {$\displaystyle \left. \begin{array}{l} \phantom{ldots}\\ \phantom{ldots}\\ \phantom{ldots} \end{array} \right]$}; \end{scope} \end{tikzpicture} \end{equation} where the operators $\hat{\mathcal{O}}_{r_e}^{\mbox{\tiny $(1)$}}$ and $\hat{\mathcal{O}}_{l_e}^{\mbox{\tiny $(2)$}}$ are defined as \begin{equation}\eqlabel{eq:rrOps} \hat{\mathcal{O}}_{r_e}^{\mbox{\tiny $(1)$}}\,=\,2\left(l_e-r_e-\frac{1}{2}\right)\left(\frac{\partial}{\partial x_{v_e}}+\frac{\partial}{\partial x_{v'_e}}\right) \left(-\frac{\partial^2}{\partial x_{v_e}\partial x_{v'_e}}\right)^{r_e}, \qquad \hat{\mathcal{O}}_{l_e}^{\mbox{\tiny $(2)$}}\,=\,\left(-\frac{\partial^2}{\partial x_{v_e}\partial x_{v'_e}}\right)^{l_e}. 
\end{equation} The $\{l_e\}$ on the left-hand-side of \eqref{eq:rrWFpic} indicates that each edge $e$ has a weight $l_e\,\in\,\mathbb{Z}$, identifying the state that propagates on that edge; the solid red lines in the first line on the right-hand-side indicate that the corresponding edge $e$ has a lower weight $l_e-r_e-1$, while the dashed red lines in the second line indicate that the corresponding edge has been erased. Hence, the recursion relation in \eqref{eq:rrWFpic} -- and its functional expression in \eqref{eq:rrWFfin} -- states that the wavefunction $\psi_{\mbox{\tiny $\mathcal{G}$}}$, related to a graph $\mathcal{G}$ and having internal states labelled by the integers $l_e$ associated to the edges $e$ of $\mathcal{G}$, can be expressed in terms of wavefunctions related to the very same graph $\mathcal{G}$ but with lower weights $l_e-r_e-1$, as well as lower-point and lower-order wavefunctions. For the concrete case of $l_e\,=\,1$ for any edge $e$ -- which corresponds to the case of all internal massless states in cosmologies with $\alpha\,=\,2/(d-1)$ as well as to states with squared mass $m^2\,=\,(d^2-9)/4$ in dS${}_{d+1}$ --, the order-raising operator $\hat{\mathcal{O}}'_{e}$ reduces just to the second derivative term in \eqref{eq:rrWFfin2} and the wavefunction can be expressed as \begin{equation}\eqlabel{eq:WFnu1} \psi_{\mathcal{G}}^{\mbox{\tiny $(\{1\})$}}\:=\:n_e!\prod_{e\in\mathcal{E}}\left(-\frac{\partial^2}{\partial x_{v_e}\partial x_{v'_e}}\right)\psi_{\mathcal{G}}^{\mbox{\tiny $(\{0\})$}}, \end{equation} and, consequently, the tree-level two-site graphs with weights $0$ and $1$ are related to each other via a two-dimensional wave equation with sources \begin{equation}\eqlabel{eq:WF1we} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-3,-.125);
\coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$1$} (v2); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \node[right=.125cm of v2] (eq) {$\displaystyle=\,-\frac{\partial^2}{\partial x_1\partial x_2}$}; \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1b) at ($(eq)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2b) at ($(v1b)+(1cm,0)$); \draw[-,thick] (v1b) -- node[above, scale=.75] {$0$} (v2b); \draw[fill,black] (v1b) circle (2pt); \draw[fill,black] (v2b) circle (2pt); \end{tikzpicture} \end{equation} \\ There is further information that the recursion relations make manifest and which can be read off by just looking at the edge-weighted graphs: the order of the poles. Given a graph $\mathcal{G}$, while the locations $\mathfrak{p}(x,y)$ of the poles are associated to the subgraphs $\mathfrak{g}$ of $\mathcal{G}$ -- each being the point where the sum of the energies external to $\mathfrak{g}$ vanishes -- their order $\mathfrak{o}(l)$ is fixed in terms of the weights $l_e$ \begin{equation}\eqlabel{eq:OrdPol} \mathfrak{p}(x,y)\:\equiv\:\sum_{v\in\mathfrak{g}}x_v+\sum_{e\in\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny ext}}}y_e \hspace{1.5cm} \mathfrak{o}(l)\:\equiv\:\sum_{e\in\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny int}}}2l_e+\sum_{e\in\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny ext}}}l_e+1 \end{equation} where $\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny int}}$ and $\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny ext}}$ are the sets of edges which are respectively internal and external to $\mathfrak{g}$. \subsection{Some examples}\label{subsec:ExEW} It is useful to provide some explicit examples of this new class of recursion relations. In the next two subsections we will discuss the two simplest ones in some detail: the two- and three-site line graphs.
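The weight-raising relation \eqref{eq:WF1we} and the pole-order formula \eqref{eq:OrdPol} lend themselves to a mechanical check. The following sketch (ours, in Python) applies $-\partial^2/\partial x_1\partial x_2$ exactly, representing rational functions by the exponents of the three linear forms $T\,=\,x_1+x_2$, $L\,=\,x_1+y$, $R\,=\,x_2+y$; the seed normalisation $\psi^{\mbox{\tiny $(0)$}}\,=\,(TLR)^{-1}$ is an assumption of the sketch:

```python
# Exact check (our sketch) of the two-site relation
# psi^(1) = -d^2 psi^(0) / (dx1 dx2), with the seed ASSUMED to be
# psi^(0) = 1/(T L R), where T = x1 + x2, L = x1 + y, R = x2 + y.
# A sum of terms coef * T^-a * L^-b * R^-c is stored as {(a, b, c): coef}.

def dx1(f):
    # d/dx1 acts on T (dT/dx1 = 1) and L (dL/dx1 = 1)
    out = {}
    for (a, b, c), coef in f.items():
        for key, fac in (((a + 1, b, c), -a), ((a, b + 1, c), -b)):
            out[key] = out.get(key, 0) + fac * coef
    return {k: v for k, v in out.items() if v}

def dx2(f):
    # d/dx2 acts on T and R
    out = {}
    for (a, b, c), coef in f.items():
        for key, fac in (((a + 1, b, c), -a), ((a, b, c + 1), -c)):
            out[key] = out.get(key, 0) + fac * coef
    return {k: v for k, v in out.items() if v}

psi0 = {(1, 1, 1): 1}                              # weight-0 seed (assumption)
psi1 = {k: -v for k, v in dx1(dx2(psi0)).items()}  # weight-1 wavefunction

# Maximal exponents of T, L, R give the pole orders, to be compared with
# o = sum_int 2 l_e + sum_ext l_e + 1 evaluated at l = 1.
orders = tuple(max(k[i] for k in psi1) for i in range(3))
```

The resulting pole orders are $(3,2,2)$, in agreement with $2l+1$ and $l+1$ for $l\,=\,1$.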
\subsubsection{Two-site line graph}\label{subsec:ExEw2sT} The simplest example is given by the two-site line graph. In this case we can directly treat the case of a generic edge-weight $l$: \begin{equation}\eqlabel{eq:WFl2pl} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-4,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$l$} (v2); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \node[right=.125cm of v2] (eq) {$\displaystyle=\,\left[\frac{2(l-1)}{x_1+x_2}\left(\frac{\partial}{\partial x_1}+\frac{\partial}{\partial x_2}\right)-\frac{\partial^2}{\partial x_1\partial x_2}\right]$}; \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1b) at ($(eq)+(3.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2b) at ($(v1b)+(1cm,0)$); \draw[-,thick] (v1b) -- node[above, scale=.75] {$l-1$} (v2b); \draw[fill,black] (v1b) circle (2pt); \draw[fill,black] (v2b) circle (2pt); \end{tikzpicture} \end{equation} \\ As usual, the singularities are located where the sums of the energies external to the subgraphs vanish.
However, they are no longer simple poles; rather, they are higher-order poles, with the order given by \eqref{eq:OrdPol}: \begin{equation}\eqlabel{eq:Sings} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] % \begin{scope} \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-5,0); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(2,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$l$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y$} (v2); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \coordinate (ce) at ($(v1)!0.5!(v2)$); \draw[-,color=red!50!black] (ce) ellipse (1.25cm and .25cm); \node[below=.5cm of ce, scale=.875] (pol) {$\displaystyle x_1+x_2$}; \node[below=.125cm of pol, scale=.875] (ord) {$\displaystyle\mathfrak{o}\,=\,2l+1$}; \end{scope} % \begin{scope}[shift={(4,0)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-5,0); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(2,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$l$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y$} (v2); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \coordinate (ce) at ($(v1)!0.5!(v2)$); \draw[-,color=red!50!black] (v1) circle (4pt); \node[below=.5cm of ce, scale=.875] (pol) {$\displaystyle x_1+y$}; \node[below=.125cm of pol, scale=.875] (ord) {$\displaystyle\mathfrak{o}\,=\,l+1$}; \end{scope} % \begin{scope}[shift={(8,0)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-5,0); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(2,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$l$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y$} (v2); \draw[fill,black] (v1) circle (2pt);
\draw[fill,black] (v2) circle (2pt); \coordinate (ce) at ($(v1)!0.5!(v2)$); \draw[-,color=red!50!black] (v2) circle (4pt); \node[below=.5cm of ce, scale=.875] (pol) {$\displaystyle y+x_2$}; \node[below=.125cm of pol, scale=.875] (ord) {$\displaystyle\mathfrak{o}\,=\,l+1$}; \end{scope} % \end{tikzpicture} \end{equation} \\ \noindent In order to compute the wavefunction of the universe, we can iterate the recursion relation \eqref{eq:WFl2pl} until we reach the seed $l\,=\,0$ of the recursion, which is fixed by simple combinatorial rules. However, we can also make the following observation: from the order of the operator in \eqref{eq:WFl2pl}, it is straightforward to see that the contribution to the wavefunction with an internal $l$ state is a rational function of overall degree $\delta_{\psi}\,=\,-(2l+3)$. We can thus write the function associated to the two-site line graph as \begin{equation}\eqlabel{eq:WFl2sl} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] % \begin{scope} \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-5,0); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(2,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$l$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y$} (v2); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \node[right=.125cm of v2] (eq) {$\displaystyle=\,\sum_{r_1=0}^{l}\sum_{r_2=0}^{l}\frac{a_{r_1 r_2}^{\mbox{\tiny $(l)$}}}{(x_1+x_2)^{2l+1-r_1-r_2}(x_1+y)^{r_1+1}(y+x_2)^{r_2+1}}$}; \end{scope} % \end{tikzpicture} \end{equation} \\ The differential recursion relation \eqref{eq:WFl2pl} then translates into an algebraic one for the coefficients $a_{r_1 r_2}$: \begin{equation}\eqlabel{eq:WFarr} \begin{split} &a_{r_1 r_2}^{\mbox{\tiny $(l)$}}\:=\:-(2l-1-r_1-r_2)[2(3l-2)-r_1-r_2]a_{r_1 r_2}^{\mbox{\tiny $(l-1)$}}-r_1[2(2l-1)-r_1-r_2]a_{r_1-1,r_2}^{\mbox{\tiny $(l-1)$}}-\\ &\phantom{a_{r_1 r_2}^{\mbox{\tiny $(l)$}}\:=\:}-r_2[2(2l-1)-r_1-r_2]a_{r_1,r_2-1}^{\mbox{\tiny $(l-1)$}}-r_1r_2a_{r_1-1,r_2-1}^{\mbox{\tiny $(l-1)$}}, \end{split} \end{equation} with $a_{r_1 r_2}^{\mbox{\tiny $(l)$}}\,=\,0$ for $r_j\,>\,l$ or $r_j\,<\,0$ ($j\,=\,1,\,2$). Notice that the expression \eqref{eq:WFl2sl} resembles a Laurent expansion of the wavefunction in $x_1+x_2$, making some of the physical content manifest: the coefficient of the term with the highest-order pole in $x_1+x_2$ is related to the flat-space scattering amplitude and, on the sheet $x_1+x_2\,=\,0$, becomes proportional to it, with proportionality coefficient given by $a_{00}^{\mbox{\tiny $(l)$}}$. Such a coefficient, together with $a_{0l}^{\mbox{\tiny $(l)$}}$, $a_{l0}^{\mbox{\tiny $(l)$}}$ and $a_{ll}^{\mbox{\tiny $(l)$}}$, can be written in closed form {\resizebox{\textwidth}{!}{ $\displaystyle a_{00}^{\mbox{\tiny $(l)$}}\:=\:\prod_{k=0}^{l-1}(-2)(2l-2k-1)(3l-3k-2),\quad a_{0l}^{\mbox{\tiny $(l)$}}\:=\:\prod_{k=0}^{l-1}\left[-(l-k)(3(l-k)-2)\right]\:=\:a_{l0}^{\mbox{\tiny $(l)$}},\quad a_{ll}^{\mbox{\tiny $(l)$}}\:=\:\prod_{k=0}^{l-1}[-(l-k)^2]. $ }} Actually, any of the other terms can also be related to (derivatives of) the high-energy limit of the flat-space amplitude.
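The algebraic recursion \eqref{eq:WFarr} is straightforward to implement; the sketch below (ours, in Python, seeded with $a_{00}^{\mbox{\tiny $(0)$}}\,=\,1$, a normalisation assumption) reproduces the closed forms for $a_{00}^{\mbox{\tiny $(l)$}}$ and $a_{0l}^{\mbox{\tiny $(l)$}}$:

```python
# Sketch (ours) of the algebraic recursion for the coefficients a^(l)_{r1 r2}
# of the two-site edge-weighted graph, seeded by a^(0)_{00} = 1 (the
# normalisation of the weight-0 seed is an assumption of this sketch).

def next_level(a_prev, l):
    """Build a^(l) from a^(l-1); absent entries of a^(l-1) vanish."""
    def prev(r1, r2):
        return a_prev.get((r1, r2), 0)
    out = {}
    for r1 in range(l + 1):
        for r2 in range(l + 1):
            out[(r1, r2)] = (
                -(2*l - 1 - r1 - r2) * (2*(3*l - 2) - r1 - r2) * prev(r1, r2)
                - r1 * (2*(2*l - 1) - r1 - r2) * prev(r1 - 1, r2)
                - r2 * (2*(2*l - 1) - r1 - r2) * prev(r1, r2 - 1)
                - r1 * r2 * prev(r1 - 1, r2 - 1)
            )
    return out

def coefficients(l):
    a = {(0, 0): 1}            # seed at l = 0
    for step in range(1, l + 1):
        a = next_level(a, step)
    return a

# closed forms quoted in the text
def a00(l):
    p = 1
    for k in range(l):
        p *= -2 * (2*l - 2*k - 1) * (3*l - 3*k - 2)
    return p

def a0l(l):
    p = 1
    for k in range(l):
        p *= -(l - k) * (3*(l - k) - 2)
    return p
```

For instance, at $l\,=\,1$ the recursion returns $a_{00}\,=\,-2$ and $a_{01}\,=\,a_{10}\,=\,a_{11}\,=\,-1$, and the symmetry $a_{r_1 r_2}^{\mbox{\tiny $(l)$}}\,=\,a_{r_2 r_1}^{\mbox{\tiny $(l)$}}$, inherited from the exchange $x_1\,\leftrightarrow\,x_2$, is manifest at every level.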
This can be conveniently seen by introducing the variables $x_T\,\equiv\,x_1+x_2$, $x_L\,\equiv\,x_1+y$, $x_R\,\equiv\,y+x_2$ \begin{equation}\eqlabel{eq:WFl2sl2} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] % \begin{scope} \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6,0); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(2,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$l$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y$} (v2); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \node[right=.125cm of v2] (eq) {$\displaystyle=\,\sum_{r_1=0}^{l}\sum_{r_2=0}^{l}\frac{a_{r_1 r_2}^{\mbox{\tiny $(l)$}}}{r_1!r_2!} \frac{1}{x_T^{2l+1-r_1-r_2}}\left(-\frac{\partial}{\partial x_L}\right)^{r_1}\psi_1(x_L)\left(-\frac{\partial}{\partial x_R}\right)^{r_2}\psi_1(x_R)$}; \end{scope} % \end{tikzpicture} \end{equation} \\ where $\psi_1(x)\,=\,x^{-1}$ is nothing but the one-site wavefunction of the universe, {\it i.e.} the wavefunction for a contact interaction. As $x_T\,\longrightarrow\,0$, the coefficient $\psi_{T}^{\mbox{\tiny $(2l+1-u)$}}$ of the pole of order $2l+1-u$ (with $u\,\equiv\,r_1+r_2$ fixed, $u\,\in\,[0,2l]$) can be written as \begin{equation}\eqlabel{eq:WFl2sl3} \psi_{T}^{\mbox{\tiny $(2l+1-u)$}}\:=\:\sum_{\substack{r_1,\,r_2\,\geq\,0\\ r_1+r_2\,=\,u}}\frac{a_{r_1 r_2}^{\mbox{\tiny $(l)$}}}{r_1!r_2!}\left(-\frac{\partial}{\partial x_L}\right)^{r_1}\left(-\frac{\partial}{\partial x_R}\right)^{r_2}\mathcal{A}_2 \end{equation} $\mathcal{A}_2$ being the (high energy limit of the) scattering amplitude.
As $x_L\,\longrightarrow\,0$ we can also write all the coefficients in the Laurent expansion around such a point in terms of lower-point wavefunctions and, equivalently, in terms of the flat-space scattering amplitude: \begin{equation}\eqlabel{eq:WFl2sl4} \begin{split} \psi_L^{\mbox{\tiny $(r_1)$}}\:&=\:(-1)^{2l-r_1}\sum_{r_2=0}^{l} a_{r_1 r_2}^{\mbox{\tiny $(l)$}} \left(\frac{\partial}{\partial x_{-}}\right)^{r_1} \left(\frac{\partial}{\partial x_{+}}\right)^{r_2} \frac{1}{2y}\left[\psi_1(x_-)-\psi_1(x_+)\right]\:\equiv\\ &\equiv\:(-1)^{2l-r_1+1}\sum_{r_2=0}^{l} a_{r_1 r_2}^{\mbox{\tiny $(l)$}} \left(\frac{\partial}{\partial x_{-}}\right)^{r_1} \left(\frac{\partial}{\partial x_{+}}\right)^{r_2} \mathcal{A}_2 \end{split} \end{equation} where $x_{\mp}\,\equiv\,x_2\mp y$ (with $x_-\,\equiv\,\left.x_T\right|_{x_L=0}$). A similar formula can be obtained for the coefficients of the Laurent expansion as $x_R\,\longrightarrow\,0$. These formulas make manifest that the main physical information encoded is the (high-energy limit of the) flat-space scattering amplitude. In a sense, with the recursion relation \eqref{eq:WFl2pl} at hand (and, more generally, the recursion relations \eqref{eq:rrWFfin2} and \eqref{eq:rrWFpic} for higher point processes), this is not a big surprise: it relates the graph with edge-weight $l$ to the one with edge-weight $0$, which was already proven to be reconstructible from the knowledge of the flat-space scattering amplitude and the requirement of the Bunch-Davies condition (which translates into requiring the final answer to be a function of sums of energies only) \cite{Benincasa:2018ssx}. \subsubsection{Three-site line graph}\label{subsec:ExEw3sT} Let us now consider the next-to-simplest case of the three-site line graph with edge-weights $l_{12}\,=\,0$ and $l_{23}\,=\,1$\footnote{The indices $ij$ in $l_{ij}$ indicate the labels of the sites connected by the edge to which the weight is associated.}.
Then, the recursion relation allows us to write such a graph as a differential operator acting on the same graph but with all the edge-weights equal to zero: \vspace{-.125cm} \begin{equation}\eqlabel{eq:WF3sl} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-4,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.5cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.5cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \node[right=.125cm of v3] (eq) {$\displaystyle=\,-\frac{\partial^2}{\partial x_2\partial x_3}$}; \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1b) at ($(eq)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2b) at ($(v1b)+(1.5cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3b) at ($(v2b)+(1.5cm,0)$); \draw[-,thick] (v1b) -- node[above, scale=.75] {$0$} (v2b); \draw[-,thick] (v2b) -- node[above, scale=.75] {$0$} (v3b); \draw[fill,black] (v1b) circle (2pt); \draw[fill,black] (v2b) circle (2pt); \draw[fill,black] (v3b) circle (2pt); \end{tikzpicture} \end{equation} \\ \noindent Indeed we know how to compute the $0$-edge-weight graph on the right-hand-side thanks to the combinatorial rules provided in \cite{Arkani-Hamed:2017fdk}, so we can just use them and apply the differential operator in \eqref{eq:WF3sl} -- which would be the way to go if we were merely interested in the final answer.
However, it is instructive to read off some of its features, such as the order of the poles $\displaystyle\mathfrak{o}\:\equiv\:\sum_{e\in\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny int}}}2l_e+\sum_{e\in\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny ext}}}l_e+1\mbox{ :}$ \begin{equation*} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] % \begin{scope} \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \draw[thick, color=red!50!black] (v2) ellipse (1.5cm and .25cm); \node[below=.5cm of v2, scale=.825] (xT) {$\displaystyle x_1+x_2+x_3,\quad \mathfrak{o}\,=\,3$}; \end{scope} % \begin{scope}[shift={(5,0)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \coordinate (ce) at ($(v1)!0.5!(v2)$); \draw[thick, 
color=red!50!black] (ce) ellipse (.75cm and .25cm); \node[below=.5cm of v2, scale=.825] (xT) {$\displaystyle x_1+x_2+y_{23},\quad \mathfrak{o}\,=\,2$}; \end{scope} % \begin{scope}[shift={(10,0)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \coordinate (ce) at ($(v2)!0.5!(v3)$); \draw[thick, color=red!50!black] (ce) ellipse (.75cm and .25cm); \node[below=.5cm of v2, scale=.825] (xT) {$\displaystyle y_{12}+x_2+x_3,\quad \mathfrak{o}\,=\,3$}; \end{scope} % \begin{scope}[shift={(0,-1.75)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \draw[thick, color=red!50!black] (v1) circle (4pt); \node[below=.5cm of v2, scale=.825] (xT) {$\displaystyle x_1+y_{12},\quad \mathfrak{o}\,=\,1$}; \end{scope} % \begin{scope}[shift={(5,-1.75)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at
(-6,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \draw[thick, color=red!50!black] (v2) circle (4pt); \node[below=.5cm of v2, scale=.825] (xT) {$\displaystyle y_{12}+x_2+y_{23},\quad \mathfrak{o}\,=\,2$}; \end{scope} % \begin{scope}[shift={(10,-1.75)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \draw[thick, color=red!50!black] (v3) circle (4pt); \node[below=.5cm of v2, scale=.825] (xT) {$\displaystyle y_{23}+x_3,\quad \mathfrak{o}\,=\,2$}; \end{scope} % \end{tikzpicture} \end{equation*} \\ \vspace{1.5cm} \noindent The recursion relation can also determine the coefficients of the Laurent expansion of our edge-weighted graph as any of its singularities is approached, in terms of the residues of the wavefunction represented by the zero-edge-weighted graph.
For example, as the total energy pole location is approached: \begin{equation*} \begin{tikzpicture}[overlay, node distance=2cm, cross/.style={cross out, draw, inner sep=0pt, outer sep=0pt}, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}] % \begin{scope} \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6.5,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \draw[thick, color=red!50!black] (v2) ellipse (1.5cm and .25cm); \node[right=.25cm of v3] (extT) {$\displaystyle\sim\frac{-2\mathcal{A}_3}{(x_1+x_2+x_3)^3}+\frac{1}{(x_1+x_2+x_3)^2}\left(\frac{\partial}{\partial y_{12}}+\frac{\partial}{\partial x_2}+\frac{\partial}{\partial y_{23}}\right)\mathcal{A}_3+\ldots$}; \end{scope} % \begin{scope}[shift={(0,-1.75)}, transform shape] \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (v1) at (-6.5,-.25); \coordinate [label=below:{\footnotesize $\displaystyle x_2$}] (v2) at ($(v1)+(1.25cm,0)$); \coordinate [label=below:{\footnotesize $\displaystyle x_3$}] (v3) at ($(v2)+(1.25cm,0)$); \draw[-,thick] (v1) -- node[above, scale=.75] {$0$} (v2); \draw[-,thick] (v1) -- node[below, scale=.75] {$y_{12}$} (v2); \draw[-,thick] (v2) -- node[above, scale=.75] {$1$} (v3); \draw[-,thick] (v2) -- node[below, scale=.75] {$y_{23}$} (v3); \draw[fill,black] (v1) circle (2pt); \draw[fill,black] (v2) circle (2pt); \draw[fill,black] (v3) circle (2pt); \coordinate (ce) at ($(v1)!0.5!(v2)$); \draw[thick, color=red!50!black] (ce) ellipse (.75cm and .25cm); 
\node[right=.25cm of v3] (extL1) {$\displaystyle\sim\frac{1}{(x_1+x_2+y_{23})^2}\frac{\partial}{\partial x_3}\left[\mathcal{A}_2\times\frac{1}{2y_{23}}\left[\psi_1(x_3-y_{23})-\psi_1(x_3+y_{23})\right]\right]+\ldots$}; \node[below=.25cm of extL1] (extL2) {$\displaystyle\hspace{-4.75cm}\equiv\frac{1}{(x_1+x_2+y_{23})^2}\frac{\partial}{\partial x_3}\left[-\mathcal{A}_2\times\mathcal{A}_2\right]+\ldots$}; \end{scope} % \end{tikzpicture} \end{equation*} \\ \vspace{2.5cm} \noindent Notice that the operator in the last line acts just on the right factor, so that the coefficient of $(x_1+x_2+y_{23})^{-2}$ really factorises as $\mathcal{A}_2\times\partial_{x_3}\mathcal{A}_2$. \section{Perturbative mass}\label{sec:Pert} The discussion so far has been restricted to a subclass of models identified by $\mu(\eta)^2\,=\,\mu_{\alpha}^2\eta^{-2}$ with $\mu_{\alpha}^2\,=\,-l(l+1)$ ($l\,\in\,\mathbb{Z}$), which includes states with $m\,=\,0$ for cosmologies $a(\eta)\,\propto\,(-\eta)^{\alpha}$ for certain values of $\alpha\,=\,\alpha(l,d)$, as well as states with $m\,=\,m(l,d)$ (which can be non-zero) in $dS_{d+1}$. In these cases the recursion relations \eqref{eq:rrWfin1pic} and \eqref{eq:rrWFpic} have a natural seed, given by the massless scalar with time-dependent polynomial interactions in flat space -- which contains the conformally coupled scalar in FRW cosmologies. In this section, we will extend such an analysis by considering the time-dependent mass perturbatively. This can be done in two ways: it is possible to consider $\mu^2(\eta)\,=\,\lambda_{2}(\eta)$, or $\mu^2(\eta)\,=\,\lambda_2(\eta)+\mu_{\alpha}^2\eta^{-2}$, with $\lambda_{2}$ being the dimensionless small expansion parameter.
While in the first case the free states are massless particles in flat space-time and the analysis applies to arbitrary $a(\eta)$, the second choice holds for cosmologies $a(\eta)\,=\,(-\eta)^{-\alpha}$, with $\lambda_2(\eta)\,\equiv\,m^2(-\eta)^{-2\alpha}$ and $\mu_{\alpha}^2\,=\,-l(l+1)$ for $\alpha\,=\,2l/(d-1)$: in this latter class of cases, the free states are labelled by $\nu\,=\,l+1/2$. Thus, performing a perturbative analysis in these two cases is equivalent to doing perturbation theory around two different free propagations, which, for the above choices of cosmologies and parameters, correspond to the scalar being conformally or minimally coupled, respectively. However, as we saw in the previous sections, the integrand defined by considering all the couplings in Fourier space satisfies recursion relations which relate the $l\,\neq\,0$ states to the $l\,=\,0$ one, {\it i.e.} to the massless free propagation. Hence, we will begin by analysing the case of a massless particle in flat space with time-dependent couplings, including a two-point coupling $\lambda_2(\eta)$, which will also be treated in Fourier space \begin{equation}\eqlabel{eq:2ptcFS} \lambda_2(\eta)\:=\:\int_{-\infty}^{+\infty}d\omega\,e^{i\omega\eta}\tilde{\lambda}_2(\omega).
\end{equation} Thus, a generic contribution to the perturbative wavefunction of the universe can be represented via Feynman diagrams which now allow for two-point vertices, and its general form can be written as \begin{equation}\eqlabel{eq:PWF2ma} \tilde{\psi}_{\mathcal{G}_{\circ}}\:=\:\int_{-\infty}^{+\infty}\prod_{v\in\mathcal{V}}\left[dx_{v}\tilde{\lambda}_{k_v}(x_v-X_v)\right] \int_{-\infty}^{+\infty}\prod_{w\in\mathcal{V}_2}\left[d\omega_w\tilde{\lambda}_2(\omega_w)\right]\psi_{\mathcal{G}_{\circ}}(\{\omega_w;\,x_v,\,y_e\}) \end{equation} where \begin{equation}\eqlabel{eq:PWF2mb} \psi_{\mathcal{G}_{\circ}}(\{\omega_w,\varepsilon_v\})\:=\:\int_{-\infty}^0\prod_{v\in\mathcal{V}}\left[d\eta_v\,e^{ix_v\eta_v}\right] \int_{-\infty}^0\prod_{w\in\mathcal{V}_2}\left[d\eta_w\,e^{i\omega_w\eta_w}\right] \prod_{e\in\mathcal{E}}G_e(y_e;\,\eta_{v_e},\eta'_{v_e}), \end{equation} $\mathcal{V}_2$ being the set of two-point vertices. Notice that all the two-point sites in formula \eqref{eq:PWF2mb} connect two edges of the graph $\mathcal{G}_{\circ}$: this means that such a formula considers just mass corrections to the internal propagators and not to the external states. Indeed, nothing forbids considering also (or only) mass corrections to the external states: the expression for the wavefunction integrand \eqref{eq:PWF2mb} is structurally the same, with the extra exponential terms having argument $\hat{x}_w\,\equiv\,E_w+\omega_w$, $E_w$ being the energy of the state which receives the mass correction. Further, the arguments of the relevant two-point couplings in \eqref{eq:PWF2ma} get shifted to $\hat{x}_w-E_w$, with the integration now over $\hat{x}_w$. We will comment on this case separately, while, for the time being, we will focus on \eqref{eq:PWF2mb}.
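As a sanity check of the time-integral representation \eqref{eq:PWF2mb}, its simplest instance (the two-site graph without two-point insertions) can be evaluated numerically. The sketch below is our own illustration, not part of the text: it assumes a Wick-rotated bulk-to-bulk propagator with unit normalisation, and compares the double time integral against the rational function $1/((x_1+x_2)(x_1+y)(x_2+y))$ that the graph-based rules assign to the two-site graph.

```python
import math

def G(y, t1, t2):
    # Wick-rotated bulk-to-bulk propagator (unit normalisation assumed here)
    return (math.exp(-y * abs(t1 - t2)) - math.exp(-y * (t1 + t2))) / (2 * y)

def psi2_integral(x1, x2, y, n=400):
    # psi_2 = \int_0^oo dt1 dt2 e^{-x1 t1 - x2 t2} G(y; t1, t2),
    # mapped to the unit square via t = u/(1-u) and evaluated with a
    # 2D composite Simpson rule (n must be even).
    def f(u1, u2):
        if u1 >= 1.0 or u2 >= 1.0:
            return 0.0  # exponential decay beats the Jacobian growth
        t1, t2 = u1 / (1 - u1), u2 / (1 - u2)
        jac = 1.0 / ((1 - u1) ** 2 * (1 - u2) ** 2)
        return math.exp(-x1 * t1 - x2 * t2) * G(y, t1, t2) * jac
    def w(i):  # Simpson weights 1, 4, 2, ..., 4, 1
        return 1 if i in (0, n) else (4 if i % 2 else 2)
    s = sum(w(i) * w(j) * f(i / n, j / n)
            for i in range(n + 1) for j in range(n + 1))
    return s / (9.0 * n * n)

x1, x2, y = 2.0, 3.0, 1.0
exact = 1.0 / ((x1 + x2) * (x1 + y) * (x2 + y))   # = 1/60
print(psi2_integral(x1, x2, y), exact)
```

The two numbers agree to the accuracy of the quadrature, which supports reading \eqref{eq:PWF2mb} as the usual nested time integrals with one exponential per site.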
Graphically, the two-point couplings can be identified by a white site with valence $2$ \begin{figure}[H] \centering \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] % \begin{scope}[shift={(-3,0)}, transform shape] \coordinate (w1) at (0,0); \coordinate (x2) at ($(w1)+(1.5,0)$); \draw[-,thick] (w1) -- (x2); \draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \end{scope} % \begin{scope} \coordinate (x1) at (0,0); \coordinate (x2) at ($(x1)+(1.5,0)$); \draw[-,thick] (x1) -- (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \end{scope} % \begin{scope}[shift={(3,0)}, transform shape] \coordinate (x1) at (0,0); \coordinate (w1) at ($(x1)+(1,0)$); \coordinate (x2) at ($(x1)+(2,0)$); \draw[-,thick] (x1) -- (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \end{scope} % \begin{scope}[shift={(6.5,0)}, transform shape] \coordinate (x1) at (0,0); \coordinate (w1) at ($(x1)+(1,0)$); \coordinate (w2) at ($(w1)+(1,0)$); \coordinate (x2) at ($(x1)+(3,0)$); \draw[-,thick] (x1) -- (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill=white] (w2) circle (2pt); \draw[fill,black] (x2) circle (2pt); \end{scope} % \end{tikzpicture} \end{figure} \noindent with the first graph representing a mass correction to an external state, while the third and the fourth represent mass corrections to the two-site graph (appearing as the second graph). Equivalently, the graphs above can be thought of as subgraphs, representing mass corrections to some internal (or external, in the case of the first graph) state in a more complicated graph.
Importantly, two edges connected via a white site have the same $y$ associated to them because of spatial momentum conservation. This introduces a novel feature in the functional form of the integrand: it is bound to develop higher-order poles, which become explicit in some of its residues. Notice that the formula \eqref{eq:PWF2mb} for the wavefunction integrand has the very same structure as the one without the two-point couplings: this means that it satisfies the very same recursion relation proven in \cite{Arkani-Hamed:2017fdk} and, consequently, the combinatorial rules on the graphs implementing it hold with no modification: one iteratively splits the graph into connected subgraphs, associating to each of them the sum of the energies external to it, and sums over all the possibilities in which such a decomposition can be done. Writing some examples explicitly: \begin{equation}\eqlabel{eq:2sgei} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] % \begin{scope} \coordinate[label=below:{\footnotesize $\tilde{x}_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(x1)+(1.5,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $y$} (x2); \draw[fill=white] (x1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[right=.5cm of x2] (eq) {$\displaystyle=\:\frac{1}{(\tilde{x}_1+x_2)(\tilde{x}_1+y)(y+x_2)}$}; \end{scope} % \begin{scope}[shift={(-4,-1.5)}, transform shape] \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega$}] (w1) at ($(x1)+(1,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(x1)+(2,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $y$} (w1); \draw[-,thick] (w1) -- node[above] {\footnotesize $y$} (x2); \draw[fill,black] (x1) circle (2pt);
\draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[right=.5cm of x2] (eq) {$\displaystyle=\:\frac{x_1+2y+2\omega+x_2}{(x_1+\omega+x_2)(x_1+y)(\omega+2y)(y+x_2)(x_1+\omega+y)(y+\omega+x_2)}$}; \end{scope} % \end{tikzpicture} \end{equation} From the second line in \eqref{eq:2sgei} it is easy to check that it develops a double pole: taking a residue iteratively in the three variables associated to the sites, {\it i.e.} $\{x_1,\,x_2,\,\omega\}$, one gets as a result $1/(2y)^2$. This can actually be understood in full generality. Let us consider a generic graph $\mathcal{G}$ with black sites only. The iterated residue of the associated meromorphic function on the $x_i$'s is given by \cite{Arkani-Hamed:2017fdk}: \begin{equation}\eqlabel{eq:ItRes} \mbox{Res}\left\{\psi_{\mathcal{G}},\{z_i(x)=0\}\right\}\,=\,\prod_{e\in\mathcal{E}}\frac{1}{2y_e}. \end{equation} If we now map the graph $\mathcal{G}$ into a graph $\mathcal{G}_{\circ}$ by substituting some of the black sites which connect exactly two edges with white ones, then the meromorphic function associated to $\mathcal{G}_{\circ}$ is the very same one, but with all the $y_e$'s related to edges connected to each other via the white sites now being equal. Therefore, taking the iterated residues with respect to the variables associated to the sites, one obtains \eqref{eq:ItRes} but with a number of the $y$'s collapsing onto each other, generating multiple poles. Looking at the example in the second line of \eqref{eq:2sgei}, the graph has just two edges connected via a white site, which allows us to predict that it has to develop a double pole $1/(2y)^2$. As a final comment, the wavefunction with a massive internal state can be thought of as a series in the white-site insertions on a given edge.
For a two-site graph, it is given by \begin{equation}\eqlabel{eq:WFser} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega_1$}] (w1) at ($(x1)+(1,0)$); \coordinate (t1) at ($(w1)+(.25,0)$); \coordinate (t2) at ($(t1)+(1,0)$); \coordinate[label=below:{\footnotesize $\omega_a$}] (w2) at ($(t2)+(.25,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(w2)+(1,0)$); \draw[-,thick] (x1) -- (w1) -- (t1); \draw[-,dotted] (t1) -- (t2); \draw[-,thick] (t2) -- (w2) -- (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill=white] (w2) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[left=.5cm of x1] (eq) {$\displaystyle\psi_2^{\mbox{\tiny $(\lambda_2)$}}\:=\:\sum_{a=0}^{+\infty}\int_{-\infty}^{+\infty}\prod_{r=1}^{a}\left[d\omega_r\tilde{\lambda}_2(\omega_r)\right]$}; \end{tikzpicture} \end{equation} where, for cosmologies $a(\eta)\,=\,(-\eta)^{-\alpha}$, the $\omega$-dependent coupling is $\tilde{\lambda}_2(\omega_r)\,=\,\lambda_2\,i^{2\alpha}\omega_r^{2\alpha-1}\vartheta(\omega_r)$. In other words, it is given by re-summing all the line graphs, integrated over the variables $\omega_r$ ($r\,=\,1,\ldots,a$) attached to their internal (white) sites. The study of a possible closed form for the re-summed two-site graph \eqref{eq:WFser} is postponed to future work\footnote{Although it might look like the standard textbook resummation of the self-energy corrections to a two-point function, the structure we would need to re-sum does not have a geometric-series structure.}.
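The double-pole structure discussed around \eqref{eq:2sgei} can also be verified directly. The sketch below, our own numerical device rather than anything in the text, extracts the iterated residue of the second expression in \eqref{eq:2sgei} with exact rational arithmetic: each simple-pole residue is approximated as $\epsilon\, f(\mathrm{pole}+\epsilon)$, with hierarchically separated offsets so that successive extractions do not contaminate each other, and the result reproduces $1/(2y)^2$.

```python
from fractions import Fraction as Q

def psi3(x1, w, x2, y):
    # second expression in eq. (2sgei): three-site chain with a white
    # (two-point) site of energy w in the middle, equal edge weights y
    num = x1 + 2 * y + 2 * w + x2
    den = ((x1 + w + x2) * (x1 + y) * (w + 2 * y) * (y + x2)
           * (x1 + w + y) * (y + w + x2))
    return num / den

# hierarchically separated offsets: each later extraction sits much
# farther from the poles produced by the earlier ones
e1, e2, e3 = Q(1, 10**30), Q(1, 10**20), Q(1, 10**10)

def res1(w, x2, y):   # residue in x1 at the total-energy pole x1 + w + x2 = 0
    return e1 * psi3(-(w + x2) + e1, w, x2, y)

def res2(w, y):       # then in x2 at y + x2 = 0
    return e2 * res1(w, -y + e2, y)

def res3(y):          # then in w at w = 0
    return e3 * res2(e3, y)

y = Q(5)
val, target = res3(y), Q(1, (2 * y) ** 2)   # target = 1/(2y)^2 = 1/100
print(float(val), float(target))
```

The exact arithmetic makes the only error the $O(\epsilon)$ one of the extraction itself, so the agreement with $1/(2y)^2$ is to many digits.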
However, for the time being, there is a comment which can be made: at least for cosmologies with $\alpha\,\in\,\mathbb{Z}_{\frac{1}{2}+}$, the structure of the integrand immediately implies that the integration over the $\omega_r$ returns polylogarithms with rational coefficients. Let us write explicitly the simplest example in $dS$ ({\it i.e.} $\alpha\,=\,1$): \begin{equation}\eqlabel{eq:c1int} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] % \begin{scope} \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega$}] (w1) at ($(x1)+(1,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(x1)+(2,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $y$} (w1); \draw[-,thick] (w1) -- node[above] {\footnotesize $y$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[left=.125cm of x1, scale=.9] (int) {$\displaystyle\int_{0}^{+\infty}\hspace{-.325cm}d\omega\,\omega$}; \node[right=.125cm of x2, scale=.9] (eq) {$\displaystyle=\:\frac{(x_1+x_2)\log{(x_1+x_2)}+2y\log{(2y)}-(x_1+y)\log{(x_1+y)}-(y+x_2)\log{(y+x_2)}}{(y^2-x_1^2)(y^2-x_2^2)}$}; \end{scope} \end{tikzpicture} \end{equation} Notice that the integrated expression above seems to have poles in $y-x_i$, which are not really expected for the Bunch-Davies wavefunction. In fact, if we compute the residues of such poles, they are indeed zero! Let us close this section by commenting on the perturbative treatment with $\nu\,=\,l+1/2$ ($l\,\in\,\mathbb{Z}_+$) states as free states. As for the case just discussed, the structure \eqref{eq:PWF2mb} of the wavefunction integrand stays unchanged, with the propagators now being those of the $l$-states.
This means that the recursion relations \eqref{eq:rrWfin1pic} and \eqref{eq:rrWFpic} hold. Hence, in this case we have edge-weighted white/black-site graphs, which, because of the recursion relation proved in this paper, can be rewritten as a differential operator acting on the related $l\,=\,0$ edge-weighted ones ({\it i.e.} the ones discussed above) upon iteration: \begin{equation}\eqlabel{eq:bwrr} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] % \begin{scope} \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega_1$}] (w1) at ($(x1)+(1,0)$); \coordinate (t1) at ($(w1)+(.25,0)$); \coordinate (t2) at ($(t1)+(1,0)$); \coordinate[label=below:{\footnotesize $\omega_a$}] (w2) at ($(t2)+(.25,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(w2)+(1,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $l_{11}$} (w1) -- (t1); \draw[-,dotted] (t1) -- node[above] {\footnotesize $l_e$} (t2); \draw[-,thick] (t2) -- (w2) -- node[above] {\footnotesize $l_{a2}$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill=white] (w2) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[right=.125cm of x2, scale=.9] (eq) {$\displaystyle=\:\sum_{e\in\mathcal{E}} \left[\frac{2(l_e-1)}{\displaystyle\sum_{v\in\mathcal{V}}x_v}\left(\frac{\partial}{\partial x_{v_e}}+\frac{\partial}{\partial x_{v'_e}}\right)-\frac{\partial^2}{\partial x_{v_e}\partial x_{v'_e}}\right]$}; \end{scope} % \begin{scope}[shift={(10.75,0)}, transform shape] \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega_1$}] (w1) at ($(x1)+(1,0)$); \coordinate (t1) at ($(w1)+(.25,0)$); \coordinate (t2) at ($(t1)+(1,0)$); 
\coordinate[label=below:{\footnotesize $\omega_a$}] (w2) at ($(t2)+(.25,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(w2)+(1,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $l_{11}$} (w1) -- (t1); \draw[-,dotted] (t1) -- node[above] {\footnotesize $l_e-1$} (t2); \draw[-,thick] (t2) -- (w2) -- node[above] {\footnotesize $l_{a2}$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill=white] (w2) circle (2pt); \draw[fill,black] (x2) circle (2pt); \end{scope} \end{tikzpicture} \end{equation} where $x_v\,\equiv\,\omega_v$ for the internal sites. For example, the $l\,=\,1$ edge-weighted graph with one white site can be written as: \begin{equation}\eqlabel{eq:bwrrl1} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] % \begin{scope} \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega$}] (w1) at ($(x1)+(1,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(x1)+(2,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $1$} (w1); \draw[-,thick] (w1) -- node[above] {\footnotesize $1$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[right=.125cm of x2] (eq1) {$\displaystyle =$}; \end{scope} % \begin{scope}[shift={(6.25,0)}, transform shape] \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega$}] (w1) at ($(x1)+(1,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(x1)+(2,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $0$} (w1); \draw[-,thick] (w1) -- node[above] {\footnotesize $0$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) 
circle (2pt); \node[left=.125cm of x1, scale=.9] (eq1) {$\displaystyle\left(-\frac{\partial^2}{\partial x_1\partial\omega}\right)\left(-\frac{\partial^2}{\partial\omega\partial x_2}\right)$}; \end{scope} % \end{tikzpicture} \end{equation} which in the $dS$ case integrates to \begin{equation}\eqlabel{eq:c1l1int} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] % \begin{scope} \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega$}] (w1) at ($(x1)+(1,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(x1)+(2,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $1$} (w1); \draw[-,thick] (w1) -- node[above] {\footnotesize $1$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[left=.125cm of x1, scale=.9] (int) {$\displaystyle\int_{0}^{+\infty}\hspace{-.325cm}d\omega\,\omega$}; \node[right=.125cm of x2, scale=.9] (eq) {$\displaystyle=\:$}; \end{scope} % \begin{scope}[shift={(4,0)}, transform shape] \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega$}] (w1) at ($(x1)+(1,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(x1)+(2,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $0$} (w1); \draw[-,thick] (w1) -- node[above] {\footnotesize $0$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[left=.125cm of x1, scale=.9] (op) {$\displaystyle\frac{\partial^2}{\partial x_1\partial x_2}$}; \node[right=.1cm of x2] (val) {$\displaystyle\Bigg|_{\omega\,=\,0}\:=\:\frac{4\left[(x_1+x_2)^3+x_1y x_2\right]}{2y(x_1+x_2)^3(x_1+y)^3(y+x_2)^3}.$}; \end{scope} % \end{tikzpicture}
\end{equation} Summarising, the combinatorial structures encountered earlier extend to the case of a perturbative mass, both around the massless flat-space scalars (which contain the conformally coupled one) and around the scalars with time-dependent mass $\mu^2(\eta)\,=\,-l(l+1)\eta^{-2}$ (which contain the minimally coupled scalars). It would be astonishing if the peculiar structure of this perturbative expansion allowed us to re-sum it. As already mentioned, we leave this exploration for future work. \section{Cosmological polytopes and edge-weighted graphs}\label{sec:CPl1} Graphs with $l\,=\,0$ return the wavefunction of the universe for massless scalars, which is naturally associated with the so-called {\it cosmological polytopes}. In this section we show that this is also true for the case $l\,=\,1$, {\it i.e.} for internal massless scalars in cosmologies $a(\eta)\,\propto\,(-\eta)^{-\alpha}$. \subsection{Cosmological polytopes and the wavefunction of the universe: a concise review}\label{subsec:CPrev} Given the space of $n_e$ triangles $\left\{\Delta_i\right\}$ identified via their midpoints $(\mathbf{x}_i,\,\mathbf{y}_i,\,\mathbf{x}'_i)$, cosmological polytopes are defined as the convex hulls of the $3\,n_e$ vertices of such triangles, intersected in the midpoints $(\mathbf{x}_i,\,\mathbf{x}'_i)$ of at most two out of their three sides (see Figure \ref{fig:CP}).
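In coordinates, the midpoint data $(\mathbf{x}_i,\,\mathbf{y}_i,\,\mathbf{x}'_i)$ determine each triangle completely: every vertex is the sum of the two midpoints of its adjacent sides minus the midpoint of the opposite side. A minimal sketch of this reconstruction (function names are ours, for illustration only):

```python
def comb(a, b, c):
    # componentwise a + b - c
    return tuple(ai + bi - ci for ai, bi, ci in zip(a, b, c))

def vertices_from_midpoints(x, y, xp):
    # vertices opposite to the sides with midpoints y, xp and x respectively
    return comb(x, xp, y), comb(x, y, xp), comb(xp, y, x)

def midpoint(a, b):
    return tuple((ai + bi) // 2 for ai, bi in zip(a, b))

# take the basis vectors of R^3 as the midpoints (x_i, y_i, x'_i)
x, y, xp = (1, 0, 0), (0, 1, 0), (0, 0, 1)
A, B, C = vertices_from_midpoints(x, y, xp)
# the side midpoints of the reconstructed triangle reproduce the input data
assert midpoint(A, B) == x and midpoint(A, C) == xp and midpoint(B, C) == y
print(A, B, C)  # (1, -1, 1) (1, 1, -1) (-1, 1, 1)
```

These three combinations of midpoints are precisely the $3\,n_e$ points whose convex hull, after the midpoint identifications, defines the cosmological polytope.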
\begin{figure}[h] \centering \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[shift={(0,2.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_i$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_i$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_i$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[shift={(2.5,2.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_j$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_j$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_j$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[scale={.4}, shift={(-7,2)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) 
at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at (1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[interrupt,thick,color=red] (B1) -- (C1); \draw[-,very thick,color=blue] (A1) -- (B1); \draw[-,very thick,color=blue] (A2) -- (B2); \draw[-,very thick,color=blue] (A1) -- (C1); \draw[-, dashed, very thick, color=red] (A2) -- (C2); \draw[-, dashed, thick, color=blue] (B2) -- (C2); \coordinate[label=below:{\Large ${\bf x'}_i$}] (x2) at ($(A1)!0.5!(B1)$); \draw[fill,color=blue] (x2) circle (2.5pt); \coordinate[label=left:{\Large ${\bf x}_i$}] (x1) at ($(C1)!0.5!(A1)$); \draw[fill,color=blue] (x1) circle (2.5pt); \coordinate[label=right:{\Large ${\bf x}_j$}] (x3) at ($(B2)!0.5!(C2)$); \draw[fill,color=blue] (x3) circle (2.5pt); \node[right=2.25cm of x2] (cht) {$\displaystyle\xrightarrow{\substack{\mbox{convex} \\ \mbox{hull}}}$}; \end{scope} % \begin{scope}[scale={.4}, shift={(-.5,2)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at (1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \draw[-,dashed,fill=blue!30, opacity=.7] (A1) -- (B2) -- (C1) -- cycle; \draw[-,thick,fill=blue!20, opacity=.7] (A1) -- (A2) -- (C1) -- cycle; \draw[-,thick,fill=blue!20, opacity=.7] (B1) -- (B2) -- (C1) -- cycle; \draw[-,thick,fill=blue!35, opacity=.7] (A2) -- (B1) -- (C1) -- cycle; \draw[-,dashed,fill=red!30, opacity=.3] (A1) -- (B2) -- 
(C2) -- cycle; \draw[-,dashed, thick, fill=red!50, opacity=.5] (B2) -- (B1) -- (C2) -- cycle; \draw[-,dashed,fill=red!40, opacity=.3] (A1) -- (A2) -- (C2) -- cycle; \draw[-,dashed, thick, fill=red!45, opacity=.5] (A2) -- (B1) -- (C2) -- cycle; \end{scope} \begin{scope}[scale={.5}, shift={(4.5,.75)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (c1b) at (0.75,0,-.75*\factor); \coordinate (b1b) at (-.75,0,-.75*\factor); \coordinate (a2b) at (0.75,-.65,.75*\factor); \coordinate (c2b) at (1.5,-3,-1.5*\factor); \coordinate (b2b) at (-1.5,-3,-1.5*\factor); \coordinate (a1b) at (1.5,-3.75,1.5*\factor); \coordinate (Int1) at (intersection of b2b--c2b and b1b--a1b); \coordinate (Int2) at (intersection of b2b--c2b and c1b--a1b); \coordinate (Int3) at (intersection of b2b--a2b and b1b--a1b); \coordinate (Int4) at (intersection of a2b--c2b and c1b--a1b); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white] at (Int1) {}; \node[rectangle, color=white, fill=white] at (Int2) {}; }}} ] \node at (c1b) (c1c) {}; \node at (b1b) (b1c) {}; \node at (a2b) (a2c) {}; \node at (c2b) (c2c) {}; \node at (b2b) (b2c) {}; \node at (a1b) (a1c) {}; \draw[interrupt,thick,color=red] (b2b) -- (c2b); \draw[-,very thick,color=red] (b1b) -- (c1b); \draw[-,very thick,color=blue] (b1b) -- (a1b); \draw[-,very thick,color=blue] (a1b) -- (c1b); \draw[-,very thick,color=blue] (b2b) -- (a2b); \draw[-,very thick,color=blue] (a2b) -- (c2b); \node[ball,text width=.15cm,fill,color=blue, above=-.06cm of Int3, label=left:{\large ${\bf x}_i$}] (Inta) {}; \node[ball,text width=.15cm,fill,color=blue, above=-.06cm of Int4, label=right:{\large ${\bf x'}_i$}] (Intb) {}; \node[right=.875cm of Intb, scale=.75] (chl) {$\displaystyle\xrightarrow{\substack{\mbox{convex} \\ \mbox{hull}}}$}; \end{scope} % \begin{scope}[scale={.5}, shift={(9,.75)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (c1b) at 
(0.75,0,-.75*\factor); \coordinate (b1b) at (-.75,0,-.75*\factor); \coordinate (a2b) at (0.75,-.65,.75*\factor); \coordinate (c2b) at (1.5,-3,-1.5*\factor); \coordinate (b2b) at (-1.5,-3,-1.5*\factor); \coordinate (a1b) at (1.5,-3.75,1.5*\factor); \draw[-,dashed,fill=green!50,opacity=.6] (c1b) -- (b1b) -- (b2b) -- (c2b) -- cycle; \draw[draw=none,fill=red!60, opacity=.45] (c2b) -- (b2b) -- (a1b) -- cycle; \draw[-,fill=blue!,opacity=.3] (c1b) -- (b1b) -- (a2b) -- cycle; \draw[-,fill=green!50,opacity=.4] (b1b) -- (a2b) -- (a1b) -- (b2b) -- cycle; \draw[-,fill=green!45!black,opacity=.2] (c1b) -- (a2b) -- (a1b) -- (c2b) -- cycle; \end{scope} % \end{tikzpicture} \caption{Cosmological polytopes constructed from the space of $n_e\,=\,2$ triangles. The (red) blue sides of the triangles (first line in the picture) are (non)-intersectable. Out of these two triangles, there are two ways of constructing more complicated objects: they can be intersected on the midpoint of one of the two (blue) intersectable sides ($\mathbf{x}'_i\,=\,\mathbf{x}'_j$), or in both ($\mathbf{x}_i\,=\,\mathbf{x}_j$; $\mathbf{x}'_i\,=\,\mathbf{x}'_j$), generating the polytopes on the bottom left and on the bottom right, respectively.} \label{fig:CP} \end{figure} \begin{wrapfigure}{l}{4.5cm} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.125}] % \begin{scope}[scale={2.5}] \draw[axis] (-0.6,0) -- (+0.6,0) node(xline)[right, scale=.75]{space}; \draw[axis] (0,-0.6) -- (0,+0.6) node(yline)[above, scale=.75]{time}; \fill[red] (0,0) circle (.65pt); \draw[-, thick, color=red] (-0.45,+0.45) -- (+0.45,-0.45); \draw[-, thick, color=red] (-0.45,-0.45) -- (+0.45,+0.45); \node[draw, ultra thick, align=center, color=blue, fill=white, scale=.75] at (0,+0.3) {FUTURE}; \node[draw, ultra thick, align=center,
color=blue, fill=white, scale=.75] at (0,-0.3) {PAST}; \node[draw, ultra thick, align=center, color=red, fill=white, scale=.75] at (-0.3,0) {SPACE -}; \node[draw, ultra thick, align=center, color=red, fill=white, scale=.75] at (+0.3,0) {LIKE}; \end{scope} % \end{tikzpicture} \end{wrapfigure} Interestingly, this construction has the space-time causal structure imprinted: the two intersectable edges of a triangle correspond to the two space-time regions with a definite causal relation ({\it past} and {\it future}), while the non-intersectable one represents the region with no causal relation ({\it space-like}). Or, turning the tables around, the causal structure of space-time provides a rationale for having triangles as fundamental objects, as well as for the prescription of considering the class of polytopes generated by intersecting at most two out of the three sides of the triangles. While the $n_e$ non-intersected triangles live in $\mathbb{P}^{3n_e-1}$, a cosmological polytope generated by intersecting them by imposing $r$ constraints lives in $\mathbb{P}^{3n_e-r-1}$.
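The dimension count can be made concrete. Assuming, as in the counting just stated, that each midpoint identification imposes one constraint, the two intersected polytopes of Figure \ref{fig:CP} (built from $n_e\,=\,2$ triangles) live in $\mathbb{P}^4$ and $\mathbb{P}^3$. A trivial sketch of the arithmetic:

```python
# n_e free triangles span 3*n_e midpoint directions, i.e. P^(3*n_e - 1);
# each of the r identifications removes one, leaving P^(3*n_e - r - 1).
def ambient_proj_dim(n_edges, r):
    return 3 * n_edges - r - 1

# the two intersected polytopes of Figure fig:CP (n_e = 2):
print(ambient_proj_dim(2, 1))  # one shared midpoint  -> 4, i.e. P^4
print(ambient_proj_dim(2, 2))  # two shared midpoints -> 3, i.e. P^3
```

The $\mathbb{P}^4$ case matches the five independent variables $(x_1,x_2,x_3,y_{12},y_{23})$ of the three-site chain, and the $\mathbb{P}^3$ case the four variables of the two-site graph with two edges.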
\begin{wrapfigure}{l}{5cm} \begin{tikzpicture}[shift={(1,0)}, line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, scale={.4}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate (m2) at ($(A)!0.5!(C)$); \draw[-, thick, color=blue] (B) -- (A); \draw[-, thick, color=blue] (A) -- (C); \draw[-, thick, color=red] (B) -- (C); \node[right=.75cm of m2.east] (lra) {$\longleftrightarrow$}; \node[ball,text width=.18cm,fill,color=blue,right=.75cm of lra.east] (x1) {}; \node[ball,text width=.18cm,fill,color=blue,right=1cm of x1.east] (x2) {}; \draw[-,thick,color=red] (x1.east) -- (x2.west); \end{tikzpicture} \end{wrapfigure} There is a $1-1$ correspondence between cosmological polytopes and the $l\,=\,0$ graphs: any triangle $\triangle_i$, characterised via its midpoints $(\mathbf{x}_i,\,\mathbf{y}_i,\,\mathbf{x}'_i)$, is associated to a two-site graph, with each site corresponding to one of the two intersectable sides of $\triangle_i$, and its only edge to the non-intersectable one.
Thus, a cosmological polytope $\mathcal{P}_{\mathcal{G}}$ generated by intersecting $n_e$ triangles, is associated to a graph $\mathcal{G}$ with $n_e$ edges constructed from a collection of two-site graphs by identifying some of their vertices: \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[shift={(5,2)}, transform shape] \coordinate[label=below:{\tiny $x_i$}] (v1) at (0,0); \coordinate[label=below:{\tiny $x'_i$}] (v2) at ($(v1)+(1,0)$); \coordinate[label=below:{\tiny $x_j$}] (v3) at ($(v2)+(1,0)$); \coordinate[label=below:{\tiny $x'_j$}] (v4) at ($(v3)+(1,0)$); \coordinate[label=above:{\tiny $y_i$}] (yi) at ($(v1)!0.5!(v2)$); \coordinate[label=above:{\tiny $y_j$}] (yj) at ($(v3)!0.5!(v4)$); \draw[thick, color=red] (v1) -- (v2); \draw[thick, color=red] (v3) -- (v4); \draw[fill=blue, color=blue] (v1) circle (2pt); \draw[fill=blue, color=blue] (v2) circle (2pt); \draw[fill=blue, color=blue] (v3) circle (2pt); \draw[fill=blue, color=blue] (v4) circle (2pt); \coordinate (t1) at ($(v2)!0.5!(v3)$); \coordinate[label=below:{\tiny $x'_i$}] (s2) at ($(t1)+(0,-1.5)$); \coordinate[label=below:{\tiny $x_i$}] (s1) at ($(s2)-(1,0)$); \coordinate[label=below:{\tiny $x_j$}] (s3) at ($(s2)+(1,0)$); \coordinate[label=above:{\tiny $y_i$}] (yyi) at ($(s1)!0.5!(s2)$); \coordinate[label=above:{\tiny $y_j$}] (yyj) at ($(s2)!0.5!(s3)$); \draw[thick, color=red] (s1) -- (s2) -- (s3); \draw[fill=blue, color=blue] (s1) circle (2pt); \draw[fill=blue, color=blue] (s2) circle (2pt); \draw[fill=blue, color=blue] (s3) circle (2pt); \coordinate[label=left:{\tiny $x_i$}] (n1) at ($(s1)!0.5!(s2)+(0,-1.5)$); \coordinate[label=right:{\tiny $x'_j$}] (n2) at ($(s2)!0.5!(s3)+(0,-1.5)$); \coordinate (nc) at ($(n1)!0.5!(n2)$); 
\coordinate[label=above:{\tiny $y_i$}] (yyyi) at ($(nc)+(0,.5cm)$); \draw[thick, color=red] (nc) circle (.5cm); \draw[fill=blue, color=blue] (n1) circle (2pt); \draw[fill=blue, color=blue] (n2) circle (2pt); \end{scope} % \begin{scope}[shift={(0,2.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_i$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_i$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_i$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[shift={(2.5,2.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_j$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_j$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_j$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[scale={.5}, shift={(0,2)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at 
(1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[interrupt,thick,color=red] (B1) -- (C1); \draw[-,very thick,color=blue] (A1) -- (B1); \draw[-,very thick,color=blue] (A2) -- (B2); \draw[-,very thick,color=blue] (A1) -- (C1); \draw[-, dashed, very thick, color=red] (A2) -- (C2); \draw[-, dashed, thick, color=blue] (B2) -- (C2); \coordinate[label=below:{\Large ${\bf x'}_i$}] (x2) at ($(A1)!0.5!(B1)$); \draw[fill,color=blue] (x2) circle (2.5pt); \coordinate[label=left:{\Large ${\bf x}_i$}] (x1) at ($(C1)!0.5!(A1)$); \draw[fill,color=blue] (x1) circle (2.5pt); \coordinate[label=right:{\Large ${\bf x}_j$}] (x3) at ($(B2)!0.5!(C2)$); \draw[fill,color=blue] (x3) circle (2.5pt); \end{scope} % \begin{scope}[scale={.6}, shift={(3.75,.75)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (c1b) at (0.75,0,-.75*\factor); \coordinate (b1b) at (-.75,0,-.75*\factor); \coordinate (a2b) at (0.75,-.65,.75*\factor); \coordinate (c2b) at (1.5,-3,-1.5*\factor); \coordinate (b2b) at (-1.5,-3,-1.5*\factor); \coordinate (a1b) at (1.5,-3.75,1.5*\factor); \coordinate (Int1) at (intersection of b2b--c2b and b1b--a1b); \coordinate (Int2) at (intersection of b2b--c2b and c1b--a1b); \coordinate (Int3) at (intersection of b2b--a2b and b1b--a1b); \coordinate (Int4) at (intersection of a2b--c2b and c1b--a1b); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white] at (Int1) {}; \node[rectangle, color=white, fill=white] at (Int2) {}; }}} ] \node at (c1b) (c1c) {}; \node at (b1b) (b1c) {}; \node at (a2b) 
(a2c) {}; \node at (c2b) (c2c) {}; \node at (b2b) (b2c) {}; \node at (a1b) (a1c) {}; \draw[interrupt,thick,color=red] (b2b) -- (c2b); \draw[-,very thick,color=red] (b1b) -- (c1b); \draw[-,very thick,color=blue] (b1b) -- (a1b); \draw[-,very thick,color=blue] (a1b) -- (c1b); \draw[-,very thick,color=blue] (b2b) -- (a2b); \draw[-,very thick,color=blue] (a2b) -- (c2b); \node[ball,text width=.15cm,fill,color=blue, above=-.06cm of Int3, label=left:{\large ${\bf x}_i$}] (Inta) {}; \node[ball,text width=.15cm,fill,color=blue, above=-.06cm of Int4, label=right:{\large ${\bf x'}_i$}] (Intb) {}; \end{scope} \end{tikzpicture} \end{equation*} Each vertex and each edge are then labelled by $x_v$ and $y_e$ respectively, which are related to the midpoints $\mathbf{x}_i$ and $\mathbf{y}_i$. These labels can be identified with the energies of the site $v$ and of the edge $e$. Vice versa, starting with a graph $\mathcal{G}$, it is possible to define the space of the energies associated to its sites and edges, $\mathcal{Y}\,=\,(x's,y's)\,\in\,\mathbb{P}^{n_v+n_e-1}$, $n_v$ and $n_e$ being the number of vertices and edges of $\mathcal{G}$, with a basis formed by the vectors $\mathbf{x}_v\,\equiv\,x_v\mathbf{X}_v$ and $\mathbf{y}_e\,\equiv\,y_e\mathbf{Y}_e$ associated to vertices and edges. Each edge of $\mathcal{G}$ is then associated with the set of vertices $\{\mathbf{x}_i-\mathbf{y}_i+\mathbf{x}'_i,\, \mathbf{x}_i+\mathbf{y}_i-\mathbf{x}'_i,\, -\mathbf{x}_i+\mathbf{y}_i+\mathbf{x}'_i\}$. The cosmological polytope $\mathcal{P}_{\mathcal{G}}$ associated to the graph $\mathcal{G}$ is thus the convex hull defined by these vertices.
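As a concrete illustration of this dictionary (a worked example spelled out here for convenience), consider the simplest graph: two sites $v_1$ and $v_2$, with energies $x_1$ and $x_2$, joined by a single edge with energy $y$. Then $n_v\,=\,2$, $n_e\,=\,1$, and the associated cosmological polytope is the triangle
\begin{equation*}
\mathcal{P}_{\mathcal{G}}\:=\:\mbox{conv}\left\{\mathbf{x}_1-\mathbf{y}+\mathbf{x}_2,\;\mathbf{x}_1+\mathbf{y}-\mathbf{x}_2,\;-\mathbf{x}_1+\mathbf{y}+\mathbf{x}_2\right\}\:\subset\:\mathbb{P}^{2},
\end{equation*}
whose vertices, in the basis $(\mathbf{x}_1,\,\mathbf{x}_2,\,\mathbf{y})$, read $(1,1,-1)$, $(1,-1,1)$ and $(-1,1,1)$.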
Finally, any cosmological polytope $\mathcal{P}_{\mathcal{G}}\,\in\,\mathbb{P}^{n_v+n_e-1}$ has an associated canonical differential top form $\omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})$, $\mathcal{Y}$ being a generic point of $\mathcal{P}_{\mathcal{G}}$, which is uniquely fixed by the requirement that it has logarithmic singularities on (and only on) all the faces of $\mathcal{P}_{\mathcal{G}}$. Defining the coefficient $\Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})$ of the canonical form $\omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})$ by stripping off the universal top-form measure $\langle\mathcal{Y},d^N\mathcal{Y}\rangle$ ($N\,\equiv\,n_v+n_e$), one obtains (the integrand of) the wavefunction of the universe $\Psi_{\mathcal{G}}(x,y)$ associated to the graph $\mathcal{G}$ \cite{Arkani-Hamed:2017fdk} \begin{equation}\eqlabel{eq:CFWF} \omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})\:\equiv\:\Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})\langle\mathcal{Y},d^N\mathcal{Y}\rangle\:=\:\frac{\Psi_{\mathcal{G}}(x_v,y_e)}{\mbox{Vol}\{GL(1)\}}\prod_{\substack{v\in\mathcal{V} \\ e\in\mathcal{E}}}dx_vdy_e. \end{equation} When any singularity is reached, the canonical form $\omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})$ reduces to a lower-dimensional one, which characterises the lower-dimensional polytope $\mathcal{P}_{\mathfrak{g}}$ corresponding to the face of $\mathcal{P}_{\mathcal{G}}$ identified by a subgraph $\mathfrak{g}$ of $\mathcal{G}$.
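This statement can be checked explicitly in the simplest case, the two-site chain, whose cosmological polytope is the triangle in $\mathbb{P}^2$ with vertices $\mathbf{V}_1\,=\,\mathbf{x}_1-\mathbf{y}+\mathbf{x}_2$, $\mathbf{V}_2\,=\,\mathbf{x}_1+\mathbf{y}-\mathbf{x}_2$ and $\mathbf{V}_3\,=\,-\mathbf{x}_1+\mathbf{y}+\mathbf{x}_2$. Its canonical form coefficient is that of a simplex, $\Omega\,\propto\,\langle\mathbf{V}_1\mathbf{V}_2\mathbf{V}_3\rangle^2/(\langle\mathcal{Y}\mathbf{V}_1\mathbf{V}_2\rangle\langle\mathcal{Y}\mathbf{V}_2\mathbf{V}_3\rangle\langle\mathcal{Y}\mathbf{V}_3\mathbf{V}_1\rangle)$, and a direct evaluation of the brackets at $\mathcal{Y}\,=\,(x_1,\,x_2,\,y)$ gives, up to an overall normalisation,
\begin{equation*}
\Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})\:\propto\:\frac{1}{(x_1+x_2)(x_1+y)(x_2+y)},
\end{equation*}
which is indeed the flat-space wavefunction for a single exchange: the total-energy singularity $x_1+x_2$ and the two partial-energy singularities $x_1+y$ and $x_2+y$ are its only (logarithmic) poles.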
The definition of the cosmological polytopes as the convex hull of the vertices of $n_e$ intersecting triangles allows a direct combinatorial characterisation of their faces as well: given a certain cosmological polytope $\mathcal{P}_{\mathcal{G}}$, any of its facets is defined as the collection $\mathcal{V}_{\mathcal{F}}$ of vertices $\mathbf{V}_a^I$ ($a\,=\,1,\ldots\,3n_e$) of $\mathcal{P}_{\mathcal{G}}$ such that $\mathcal{W}_I\mathbf{V}_a^I\,=\,0$, $\mathcal{W}_I\,\equiv\,\tilde{x}_v\tilde{\mathbf{X}}_{vI}+\tilde{y}_e\mathbf{\tilde{Y}}_{eI}$ being a hyperplane in $\mathbb{P}^{n_v+n_e-1}$ with $\mathbf{\tilde{X}}_v\cdot\mathbf{x}_{v'}\,=\,\delta_{vv'}$, $\mathbf{\tilde{Y}}_e\cdot\mathbf{y}_{e'}\,=\,\delta_{ee'}$ and $\mathbf{\tilde{X}}_v\cdot\mathbf{y}_{e'}\,=\,0$, compatibly with the constraints on the midpoints of the sides of the generating triangles -- those vertices of $\mathcal{P}_{\mathcal{G}}$ which are not on the facet identified by a certain hyperplane $\mathcal{W}_I$ satisfy instead the strict inequality $\mathcal{W}\cdot\mathbf{V}_a\,>\,0$. Each of these hyperplanes is in a $1-1$ correspondence with a subgraph $\mathfrak{g}$ of the graph $\mathcal{G}$ associated to $\mathcal{P}_{\mathcal{G}}$, and it is given by $\mathcal{W}\,=\,\sum_{v\in\mathfrak{g}}\tilde{x}_v\mathbf{\tilde{X}}_v+\sum_{e\in\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny ext}}}\tilde{y}_e\mathbf{\tilde{Y}}_e$, with $\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny ext}}$ being the set of edges external to $\mathfrak{g}$.
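As a quick consistency check of this characterisation (an illustration added here, again on the two-site chain, whose polytope is the triangle with vertices $\mathbf{x}_1-\mathbf{y}+\mathbf{x}_2$, $\mathbf{x}_1+\mathbf{y}-\mathbf{x}_2$ and $-\mathbf{x}_1+\mathbf{y}+\mathbf{x}_2$), take $\mathfrak{g}\,=\,\mathcal{G}$ with $\tilde{x}_1\,=\,\tilde{x}_2\,=\,1$, so that $\mathcal{W}\,=\,\mathbf{\tilde{X}}_1+\mathbf{\tilde{X}}_2$. Acting on the three vertices,
\begin{equation*}
\mathcal{W}\cdot(\mathbf{x}_1+\mathbf{y}-\mathbf{x}_2)\:=\:0,\qquad \mathcal{W}\cdot(-\mathbf{x}_1+\mathbf{y}+\mathbf{x}_2)\:=\:0,\qquad \mathcal{W}\cdot(\mathbf{x}_1-\mathbf{y}+\mathbf{x}_2)\:=\:2\:>\:0,
\end{equation*}
so this facet, which lies on the hyperplane $x_1+x_2\,=\,0$, contains exactly two of the three vertices, the excluded one being $\mathbf{x}_1-\mathbf{y}+\mathbf{x}_2$.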
It is possible to graphically keep track of the vertices belonging to a certain hyperplane $\mathcal{W}$ by introducing a marking on the associated graph $\mathcal{G}$ indicating the vertices which do not belong to $\mathcal{W}$: \begin{equation*} \begin{tikzpicture}[ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, scale={1.125}, transform shape] \begin{scope} \node[ball,text width=.18cm,fill,color=black,label=below:{\footnotesize $v\phantom{'}$}] at (0,0) (v1) {}; \node[ball,text width=.18cm,fill,color=black,label=below:{\footnotesize $v'$},right=1.5cm of v1.east] (v2) {}; \draw[-,thick,color=black] (v1.east) edge node [text width=.18cm,below=.1cm,midway] {\footnotesize $e$} (v2.west); \node[very thick, cross=4pt, rotate=0, color=blue, right=.7cm of v1.east]{}; \coordinate (x) at ($(v1)!0.5!(v2)$); \node[right=1.5cm of v2, scale=.9] (lb1) {$\mathcal{W}\cdot({\bf x}_v+{\bf x}_{v'}-{\bf y}_e)>\,0$}; \end{scope} % \begin{scope}[shift={(0,-1)}] \node[ball,text width=.18cm,fill,color=black,label=below:{\footnotesize $v\phantom{'}$}] at (0,0) (v1) {}; \node[ball,text width=.18cm,fill,color=black,label=below:{\footnotesize $v'$},right=1.5cm of v1.east] (v2) {}; \draw[-,thick,color=black] (v1.east) edge node [text width=.18cm,below=.1cm,midway] {\footnotesize $e$} (v2.west); \node[very thick, cross=4pt, rotate=0, color=blue, left=.1cm of v2.west]{}; \coordinate (x) at ($(v1)!0.5!(v2)$); \node[right=1.5cm of v2, scale=.9] (lb1) {$\mathcal{W}\cdot({\bf x}_{v'}+{\bf y}_e-{\bf x}_v)>\,0$}; \end{scope} % \begin{scope}[shift={(0,-2)}] \node[ball,text width=.18cm,fill,color=black,label=below:{\footnotesize $v\phantom{'}$}] at (0,0) (v1) {}; \node[ball,text width=.18cm,fill,color=black,label=below:{\footnotesize $v'$},right=1.5cm of v1.east] (v2) {}; \draw[-,thick,color=black] (v1.east) edge node [text width=.18cm,below=.1cm,midway] {\footnotesize $e$} (v2.west);
\node[very thick, cross=4pt, rotate=0, color=blue, right=.1cm of v1.east]{}; \coordinate (x) at ($(v1)!0.5!(v2)$); \node[right=1.5cm of v2, scale=.9] (lb1) {$\mathcal{W}\cdot({\bf x}_v+{\bf y}_e-{\bf x}_{v'})>\,0$}; \end{scope} \end{tikzpicture} \end{equation*} If $\mathcal{G}\,=\,\mathfrak{g}$, then all the edges are marked in their middle and the hyperplane identifying this facet is given by $\mathcal{W}\,=\,\sum_{v}\tilde{x}_v{\bf \tilde{X}}_v$, {\it i.e.} it is the scattering facet and it is identified by the collection of $2n_e$ vertices $\{\mathbf{x}_v+\mathbf{y}_e-\mathbf{x'}_v,\,-\mathbf{x}_v+\mathbf{y}_e+\mathbf{x'}_v\}$. For a more general facet, the graph gets marked in the middle of the edges of $\mathcal{G}$ which are internal to $\mathfrak{g}$, and at the extreme closest to $\mathfrak{g}$ for the edges of $\mathcal{G}$ which are external to $\mathfrak{g}$: \begin{equation*} \begin{tikzpicture}[ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, scale=1, transform shape] \begin{scope} \node[ball,text width=.18cm,fill,color=black,label=above:{$x_1$}] at (0,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black,right=1.2cm of x1.east, label=above:{$x_2$}] (x2) {}; \node[ball,text width=.18cm,fill,color=black,right=1.2cm of x2.east, label=above:{$x_3$}] (x3) {}; \node[ball,text width=.18cm,fill,color=black, label=left:{$x_4$}] at (-1,.8) (x4) {}; \node[ball,text width=.18cm,fill,color=black, label=right:{$x_5$}] at (-1,-.8) (x5) {}; \node[ball,text width=.18cm,fill,color=black, label=left:{$x_6$}] at (-1.7,-2) (x6) {}; \node[ball,text width=.18cm,fill,color=black, label=right:{$x_7$}] at (-.3,-2) (x7) {}; \node[above=.35cm of x5.north] (ref2) {}; \coordinate (Int2) at (intersection of x5--x1 and ref2--x2); \coordinate (t1) at (x3.east); \coordinate (t2) at (x4.west); \coordinate (t3) at (x1.south west); \coordinate (t4) at (x2.south);
\draw[-,thick,color=black] (x1) -- (x2) -- (x3); \draw[-,thick,color=black] (x1) -- (x4); \draw[-,thick,color=black] (x5) -- (x1); \draw[-,thick,color=black] (x5) -- (x7); \draw[-,thick,color=black] (x5) -- (x6); \draw[red!50!black, thick] plot [smooth cycle] coordinates {(3,-.1) (1.2,1) (-1.2,.9) (t3) (1.5,-.5)}; \node[color=red!50!black,right=.3cm of x3.east] {\large $\mathfrak{g}$}; \coordinate (m1) at ($(x1)!0.5!(x4)$); \coordinate (m2) at ($(x1)!0.5!(x2)$); \coordinate (m3) at ($(x2)!0.5!(x3)$); \coordinate (e1) at ($(x1)!0.25!(x5)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (m1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (m2) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (m3) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (e1) {}; \end{scope} % \begin{scope}[shift={(5,-1.75)}, scale={1.5}, transform shape] \coordinate[label=below:{\tiny $x_1$}] (v1) at (0,0); \coordinate[label=above:{\tiny $x_2$}] (v2) at ($(v1)+(0,1.25)$); \coordinate[label=above:{\tiny $x_3$}] (v3) at ($(v2)+(1,0)$); \coordinate[label=above:{\tiny $x_4$}] (v4) at ($(v3)+(1,0)$); \coordinate[label=right:{\tiny $x_5$}] (v5) at ($(v4)-(0,.625)$); \coordinate[label=below:{\tiny $x_6$}] (v6) at ($(v5)-(0,.625)$); \coordinate[label=below:{\tiny $x_7$}] (v7) at ($(v6)-(1,0)$); \draw[thick] (v1) -- (v2) -- (v3) -- (v4) -- (v5) -- (v6) -- (v7) -- cycle; \draw[thick] (v3) -- (v7); \draw[fill=black] (v1) circle (2pt); \draw[fill=black] (v2) circle (2pt); \draw[fill=black] (v3) circle (2pt); \draw[fill=black] (v4) circle (2pt); \draw[fill=black] (v5) circle (2pt); \draw[fill=black] (v6) circle (2pt); \draw[fill=black] (v7) circle (2pt); \coordinate (v12) at ($(v1)!0.5!(v2)$); \coordinate (v23) at ($(v2)!0.5!(v3)$); \coordinate (v34) at ($(v3)!0.5!(v4)$); \coordinate (v45) at ($(v4)!0.5!(v5)$); \coordinate (v56) at ($(v5)!0.5!(v6)$); \coordinate (v67) at ($(v6)!0.5!(v7)$); \coordinate (v71) at ($(v7)!0.5!(v1)$); \coordinate (v37) at ($(v3)!0.5!(v7)$); 
\node[very thick, cross=4pt, rotate=0, color=blue, scale=.625] at (v34) {}; \node[very thick, cross=4pt, rotate=0, color=blue, scale=.625] at (v45) {}; \node[very thick, cross=4pt, rotate=0, color=blue, scale=.625, left=.15cm of v3] (v3l) {}; \node[very thick, cross=4pt, rotate=0, color=blue, scale=.625, below=.15cm of v3] (v3b) {}; \node[very thick, cross=4pt, rotate=0, color=blue, scale=.625, below=.1cm of v5] (v5b){}; \coordinate (a) at ($(v3l)!0.5!(v3)$); \coordinate (b) at ($(v3)+(0,.125)$); \coordinate (c) at ($(v34)+(0,.175)$); \coordinate (d) at ($(v4)+(0,.125)$); \coordinate (e) at ($(v4)+(.125,0)$); \coordinate (f) at ($(v45)+(.175,0)$); \coordinate (g) at ($(v5)+(.125,0)$); \coordinate (h) at ($(v5b)!0.5!(v5)$); \coordinate (i) at ($(v5)-(.125,0)$); \coordinate (j) at ($(v45)-(.175,0)$); \coordinate (k) at ($(v34)-(0,.175)$); \coordinate (l) at ($(v3)-(0,.125)$); \draw [thick, red!50!black] plot [smooth cycle] coordinates {(a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l)}; \node[below=.05cm of k, color=red!50!black] {\footnotesize $\displaystyle\mathfrak{g}$}; \end{scope} \end{tikzpicture} \end{equation*} and the facets are identified by the collection of: the vertices $\{\mathbf{x}_v+\mathbf{y}_e-\mathbf{x'}_v,\,-\mathbf{x}_v+\mathbf{y}_e+\mathbf{x'}_v\}$ for each edge $e$ marked in the middle, the vertices $\{\mathbf{x}_v+\mathbf{y}_e-\mathbf{x'}_v,\,\mathbf{x}_v-\mathbf{y}_e+\mathbf{x'}_v\}$ for each edge marked close to the vertex $\mathbf{x'}_v$, and all three vertices for those edges which are unmarked. \subsection{Cosmological polytopes and $l\,=\,1$ states}\label{subsec:CPl1} As recalled above, a cosmological polytope -- but in truth any positive geometry -- is uniquely characterised by its canonical form, with the property of having logarithmic singularities on (and only on) its boundary and with the residue of any of its singularities being a lower-dimensional canonical form.
The recursion relation for the edge-weighted graphs crucially involves differential operators. This immediately implies that for an edge-weighted graph $\mathcal{G}_{\{l_e\}}$ with any $l_e\,\neq\,0$, the related wavefunction of the universe is characterised by poles of order higher than one. This fact seems to exclude the possibility of describing any wavefunction of the universe associated to edge-weighted graphs with non-zero edge weights via a positive geometry. In this section we instead want to argue that it is indeed possible to extract it from combinatorial and geometrical objects in the same class as the cosmological polytopes. \subsubsection{Cosmological polytopes, derivatives and high order poles}\label{subsubsec:CPdhp} Let us begin by considering the space of two triangles with vertices $\{\mathbf{x}_i-\mathbf{y}_i+\mathbf{x}'_i,\,\mathbf{x}_i+\mathbf{y}_i-\mathbf{x}'_i,\,-\mathbf{x}_i+\mathbf{y}_i+\mathbf{x}'_i\}$ ($i\,=\,1,\,2$). The general prescription for generating more complex objects is to intersect them on the midpoints of at most two of their sides, which we previously referred to as {\it intersectable sides}. However, this prescription can be extended with the inclusion of the vertices of the triangles, {\it i.e.} we can allow the triangles to intersect in the midpoints of their intersectable edges as well as in their vertices. An idea along these lines was discussed in \cite{Benincasa:2018ssx}, where the triangles were allowed to intersect also on the vertex shared by their intersectable sides -- which for this reason was dubbed the {\it intersectable vertex}.
In our present context, we allow the two triangles to intersect in one of their midpoints and in the vertices opposite to them, {\it e.g.} imposing the constraints $\mathbf{x}'_1\,=\,\mathbf{x}'_2$ and $\mathbf{x}_1+\mathbf{y}_1-\mathbf{x}'_1\,=\,\mathbf{x}_2+\mathbf{y}_2-\mathbf{x}'_2$: \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_1$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_1$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[shift={(0,-1.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_2$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_2$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); 
\draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[scale={.5}, shift={(7,.25)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at (1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[interrupt,very thick,color=blue] (A1) -- (B1); \draw[interrupt,very thick,color=blue] (A2) -- (B2); \draw[-,very thick,color=red] (B1) -- (C1); \draw[-,very thick,color=blue] (A1) -- (C1); \draw[-, very thick, color=red] (A2) -- (C1); \draw[-, very thick, color=blue] (B2) -- (C1); \coordinate[label=below:{\Large ${\bf x'}$}] (x2) at ($(A1)!0.5!(B1)$); \draw[fill,color=blue] (x2) circle (2.5pt); \coordinate[label=left:{\Large ${\bf x}_1$}] (x1) at ($(C1)!0.5!(A1)$); \draw[fill,color=blue] (x1) circle (2.5pt); \coordinate[label=right:{\Large ${\bf x}_2$}] (x3) at ($(B2)!0.5!(C1)$); \draw[fill,color=blue] (x3) circle (2.5pt); \end{scope} % \begin{scope}[scale={.5}, shift={(13,.25)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at (1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, 
decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[interrupt,color=blue] (A1) -- (B1); \draw[interrupt,color=blue] (A2) -- (B2); \draw[-,color=red] (B1) -- (C1); \draw[-,color=blue] (A1) -- (C1); \draw[-, color=red] (A2) -- (C1); \draw[-, color=blue] (B2) -- (C1); \draw[draw=none,fill=green!80,opacity=.3] (A2) -- (B1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=blue!60, opacity=.45] (C1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=red!60, opacity=.7] (C1) -- (A2) -- (B1) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (A1) -- (A2) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (B2) -- (B1) -- cycle; \coordinate[label=left:{\Large ${\bf x}_1$}] (x1) at ($(C1)!0.5!(A1)$); \draw[fill,color=blue] (x1) circle (2.5pt); \coordinate[label=right:{\Large ${\bf x}_2$}] (x3) at ($(B2)!0.5!(C1)$); \draw[fill,color=blue] (x3) circle (2.5pt); \end{scope} % \end{tikzpicture} \end{equation*} The polytope generated in this way is a square pyramid in $\mathbb{P}^3$ with vertices \begin{equation}\eqlabel{eq:VerPir} \{\mathbf{x}_1-\mathbf{y}_1+\mathbf{x}',\quad \mathbf{x}_1+\mathbf{y}_1-\mathbf{x}',\quad -\mathbf{x}_1+\mathbf{y}_1+\mathbf{x}',\quad 2\mathbf{x}'-\mathbf{h}_2,\quad \mathbf{h}_2\}, \end{equation} where $\mathbf{h}_2\,\equiv\,\mathbf{x}_2-\mathbf{y}_2+\mathbf{x}'\,=\,-2\mathbf{y}_2+\mathbf{x}_1+\mathbf{y}_1+\mathbf{x}'$, together with $\mathbf{x}_1$, $\mathbf{y}_1$ and $\mathbf{x}'$, parametrises the (ungauged) degrees of freedom. One comment is now in order. 
This construction is equivalent to just intersecting a triangle $\{\mathbf{x}_1-\mathbf{y}_1+\mathbf{x}',\, \mathbf{x}_1+\mathbf{y}_1-\mathbf{x}',\, -\mathbf{x}_1+\mathbf{y}_1+\mathbf{x}'\}$ with a segment $\{2\mathbf{x}'_2-\mathbf{h}_2,\quad \mathbf{h}_2\}$, the latter of which can be thought of as a projection of a triangle $\{\mathbf{x}_2-\mathbf{y}_2+\mathbf{x}'_2,\, \mathbf{x}_2+\mathbf{y}_2-\mathbf{x}'_2,\, -\mathbf{x}_2+\mathbf{y}_2+\mathbf{x}'_2\}$ through the cone with origin $\mathbf{O}\,\equiv\,\mathbf{x}_2-\mathbf{x}'_2$: \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_1$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_1$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[shift={(0,-1.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=below:{\footnotesize $\displaystyle 2{\bf x'}_2-{\bf h}_2$}] (m1) at (B); \coordinate [label=below:{\footnotesize $\displaystyle \;{\bf h}_2$}] (m2) at (C); \coordinate [label=above:{\footnotesize $\displaystyle {\bf x'}_2$}] (m3)
at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (C); \draw[color=red, fill=red] (C) circle (2pt); \draw[color=blue, fill=blue] (B) circle (2pt); \end{scope} % \begin{scope}[scale={.5}, shift={(7,.25)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at (1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[interrupt,very thick,color=blue] (A1) -- (B1); \draw[interrupt,very thick,color=blue] (A2) -- (B2); \draw[-,very thick,color=red] (B1) -- (C1); \draw[-,very thick,color=blue] (A1) -- (C1); \coordinate[label=below:{\Large ${\bf x'}$}] (x2) at ($(A1)!0.5!(B1)$); \draw[fill,color=blue] (x2) circle (2.5pt); \coordinate[label=left:{\Large ${\bf x}_1$}] (x1) at ($(C1)!0.5!(A1)$); \draw[fill,color=blue] (x1) circle (2.5pt); \draw[fill,color=red] (A2) circle (2.5pt); \draw[fill,color=blue] (B2) circle (2.5pt); \coordinate [label=above:{$\displaystyle 2{\bf x'}_2-{\bf h}_2$}] (B2b) at (B2); \coordinate [label=below:{$\displaystyle \;{\bf h}_2$}] (A2b) at (A2); \end{scope} % \begin{scope}[scale={.5}, shift={(13,.25)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at (1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); 
\coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[interrupt,color=blue] (A1) -- (B1); \draw[interrupt,color=blue] (A2) -- (B2); \draw[-,color=red] (B1) -- (C1); \draw[-,color=blue] (A1) -- (C1); \draw[draw=none,fill=green!80,opacity=.3] (A2) -- (B1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=blue!60, opacity=.45] (C1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=red!60, opacity=.7] (C1) -- (A2) -- (B1) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (A1) -- (A2) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (B2) -- (B1) -- cycle; \coordinate[label=left:{\Large ${\bf x}_1$}] (x1) at ($(C1)!0.5!(A1)$); \draw[fill,color=blue] (x1) circle (2.5pt); \coordinate [label=right:{$\displaystyle 2{\bf x'}_2-{\bf h}_2$}] (B2b) at (B2); \coordinate [label=below:{$\displaystyle \;{\bf h}_2$}] (A2b) at (A2); \end{scope} % \end{tikzpicture} \end{equation*} This point of view makes the connection to a graph straightforward: while, as reviewed in Section \ref{subsec:CPrev}, a triangle is in a $1-1$ correspondence with a two-site line graph, the segment is in a $1-1$ correspondence with a one-site one-loop graph -- obtaining the segment as a projection of a triangle through a cone with origin $\mathbf{O}\,\equiv\,\mathbf{x}_2-\mathbf{x}'_2$ corresponds, from the graph point of view, to taking a two-site line graph and merging its two sites (this is nothing but the procedure to obtain an $(L+1)$-loop wavefunction from an $L$-loop one defined in \cite{Benincasa:2018ssx}).
Hence, the square pyramid with vertices $\{\mathbf{x}_1-\mathbf{y}_1+\mathbf{x}',\, \mathbf{x}_1+\mathbf{y}_1-\mathbf{x}',\, -\mathbf{x}_1+\mathbf{y}_1+\mathbf{x}',\, 2\mathbf{x}'-\mathbf{h}_2,\, \mathbf{h}_2\}$ obtained by intersecting a triangle and a segment as just described is in a $1-1$ correspondence with the graph obtained by merging a two-site line graph and a one-site one-loop graph: \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle {\bf x}_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle \;{\bf x'}_1$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle {\bf y}_1$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[color=red,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (A); \draw[-, very thick, color=blue] (A) -- (C); \draw[-, very thick, color=red] (B) -- (C); \end{scope} % \begin{scope}[shift={(0,-1.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=below:{\footnotesize $\displaystyle 2{\bf x'}_2-{\bf h}_2$}] (m1) at (B); \coordinate [label=below:{\footnotesize $\displaystyle \;{\bf h}_2$}] (m2) at (C); \coordinate [label=above:{\footnotesize $\displaystyle {\bf x'}_2$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=red] (m3) circle
(2pt); \draw[-, very thick, color=blue] (B) -- (C); \draw[color=red, fill=red] (C) circle (2pt); \draw[color=blue, fill=blue] (B) circle (2pt); \end{scope} % \begin{scope}[shift={(3,.25)}, scale={.75}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=below:{\footnotesize $\displaystyle x'_1$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_1$}] (m3) at ($(m1)!0.5!(m2)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \end{scope} % \begin{scope}[shift={(3,-1.5)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate (m3) at ($(B)!0.5!(C)$); \coordinate [label=left:{\footnotesize $\displaystyle x'_2$}] (x) at ($(m3)+(-1.25cm,0)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (h) at ($(m3)+(1.25cm,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[very thick, color=red] (m3) circle (1.25cm); \draw[fill,color=blue] (x) circle (3pt); \end{scope} % \begin{scope}[scale={.5}, shift={(13,1.5)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate (B2) at (1.5,-3,-1.5*\factor); \coordinate (A1) at (-1.5,-3,-1.5*\factor); \coordinate (B1) at (1.5,-3.75,1.5*\factor); \coordinate (A2) at (-1.5,-3.75,1.5*\factor); \coordinate (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] 
\draw[interrupt,very thick,color=blue] (A1) -- (B1); \draw[interrupt,very thick,color=blue] (A2) -- (B2); \draw[-,very thick,color=red] (B1) -- (C1); \draw[-,very thick,color=blue] (A1) -- (C1); \coordinate[label=below:{\Large ${\bf x'}$}] (x2) at ($(A1)!0.5!(B1)$); \draw[fill,color=blue] (x2) circle (2.5pt); \coordinate[label=left:{\Large ${\bf x}_1$}] (x1) at ($(C1)!0.5!(A1)$); \draw[fill,color=blue] (x1) circle (2.5pt); \draw[fill,color=red] (A2) circle (2.5pt); \draw[fill,color=blue] (B2) circle (2.5pt); \coordinate [label=above:{$\displaystyle 2{\bf x'}_2-{\bf h}_2$}] (B2b) at (B2); \coordinate [label=below:{$\displaystyle \;{\bf h}_2$}] (A2b) at (A2); \end{scope} % \begin{scope}[shift={(6,.-1.75)}, scale={.75}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=below:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x'$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_1$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(2,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (1cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \end{scope} \end{tikzpicture} \end{equation*} The canonical form of the square pyramid returns the wavefunction of the universe for the two-site tadpole-like graph above. 
If $\{\mathbf{V}_a^I\}$ ($a\,=\,1,\,\ldots,5$, with the labels identifying the vertices in the order they appear in \eqref{eq:VerPir}) is the set of the five vertices of the square pyramid, then the coefficient of its canonical form can be easily computed via a contour integral \cite{Arkani-Hamed:2017tmz, Arkani-Hamed:2017fdk} \begin{equation}\eqlabel{eq:CCFpyd} \begin{split} \Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})\:&=\:\frac{1}{(2\pi i)3!}\int_{\mathbb{R}^4}\prod_{j=1}^5\frac{dc_j}{c_j-i\varepsilon_j}\,\delta^{\mbox{\tiny $(4)$}}\left(\mathcal{Y}-\sum_{j=1}^5 c_j\mathbf{V}^{\mbox{\tiny $(j)$}}\right)\:=\\ &=\:\frac{x_1+y_1+2x_2+2h_2}{(x_1+x_2+2h_2)(x_1+x_2)(x_1+y_1)(y_1+x_2+2h_2)(y_1+x_2)}. \end{split} \end{equation} From this expression it is immediate to see that it reduces to (minus) the first derivative with respect to $x_2$ of the canonical form of the triangle, {\it i.e.} of the wavefunction of the universe related to a(n $l=0$) two-site graph! Explicitly \begin{equation}\eqlabel{eq:CCFh20} \lim_{h_2\longrightarrow 0} \Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})\:=\:\frac{x_1+y_1+2x_2}{(x_1+x_2)^2(x_1+y_1)(y_1+x_2)^2}\:\equiv\:-\frac{\partial}{\partial x_2}\frac{1}{(x_1+x_2)(x_1+y_1)(y_1+x_2)}.
\end{equation} This can be even more straightforwardly seen by considering one of the two triangulations of the square pyramid, specifically $\{1234\}+\{1235\}$ \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.5}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate[label=right:{$\mathbf{4}$}] (B2) at (1.5,-3,-1.5*\factor); \coordinate[label=left:{$\mathbf{1}$}] (A1) at (-1.5,-3,-1.5*\factor); \coordinate[label=right:{$\mathbf{3}$}] (B1) at (1.5,-3.75,1.5*\factor); \coordinate[label=left:{$\mathbf{5}$}] (A2) at (-1.5,-3.75,1.5*\factor); \coordinate[label=above:{$\mathbf{2}$}] (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[draw=none,fill=green!80,opacity=.3] (A2) -- (B1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=blue!60, opacity=.45] (C1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=red!60, opacity=.7] (C1) -- (A2) -- (B1) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (A1) -- (A2) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (B2) -- (B1) -- cycle; \node[right=1.75cm of B2, scale=1.5] (eq) {$\displaystyle =$}; \end{scope} % \begin{scope}[scale={.5}, shift={(7,0)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate[label=right:{$\mathbf{4}$}] (B2) at (1.5,-3,-1.5*\factor); \coordinate[label=left:{$\mathbf{1}$}] (A1) at (-1.5,-3,-1.5*\factor); \coordinate[label=right:{$\mathbf{3}$}] (B1) at (1.5,-3.75,1.5*\factor); 
\coordinate[label=left:{$\mathbf{5}$}] (A2) at (-1.5,-3.75,1.5*\factor); \coordinate[label=above:{$\mathbf{2}$}] (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[draw=none,fill=green!80,opacity=.6] (A1) -- (B1) -- (C1) -- cycle; \draw[draw=none,fill=green!80,opacity=.3] (A1) -- (B1) -- (A2) -- cycle; \draw[draw=none,fill=blue!60, opacity=.15] (C1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=red!60, opacity=.7] (C1) -- (A2) -- (B1) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (A1) -- (A2) -- cycle; \draw[draw=none,fill=blue!70, opacity=.15] (C1) -- (B2) -- (B1) -- cycle; \node[right=1.75cm of B2, scale=1.5] (pl) {$\displaystyle +$}; \end{scope} % \begin{scope}[scale={.5}, shift={(14,0)}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate[label=right:{$\mathbf{4}$}] (B2) at (1.5,-3,-1.5*\factor); \coordinate[label=left:{$\mathbf{1}$}] (A1) at (-1.5,-3,-1.5*\factor); \coordinate[label=right:{$\mathbf{3}$}] (B1) at (1.5,-3.75,1.5*\factor); \coordinate[label=left:{$\mathbf{5}$}] (A2) at (-1.5,-3.75,1.5*\factor); \coordinate[label=above:{$\mathbf{2}$}] (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[draw=none,fill=green!80,opacity=.1] (A2) -- (A1) -- (B1) -- cycle; \draw[draw=none,fill=green!80,opacity=.4] (B2) -- (A1) -- (B1) -- cycle; \draw[draw=none,fill=blue!60, opacity=.5] (C1) -- (B2) -- (A1) -- cycle; 
\draw[draw=none,fill=green!80,opacity=.7] (C1) -- (A1) -- (B1) -- cycle; \draw[draw=none,fill=red!60, opacity=.3] (C1) -- (A2) -- (B1) -- cycle; \draw[draw=none,fill=blue!70, opacity=.2] (C1) -- (A1) -- (A2) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (B2) -- (B1) -- cycle; \end{scope} % \end{tikzpicture} \end{equation*} \begin{equation}\eqlabel{eq:CCfpydT} \begin{split} \Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}})\:&=\:\frac{\langle1234\rangle^3}{\langle\mathcal{Y}123\rangle\langle\mathcal{Y}234\rangle\langle\mathcal{Y}341\rangle\langle\mathcal{Y}412\rangle}+ \frac{\langle3215\rangle^3}{\langle\mathcal{Y}321\rangle\langle\mathcal{Y}215\rangle\langle\mathcal{Y}153\rangle\langle\mathcal{Y}532\rangle}\:=\\ &=\:\frac{1}{2h_2(x_1+x_2)(x_1+y_1)(y_1+x_2)}-\frac{1}{2h_2(x_1+x_2+2h_2)(x_1+y_1)(y_1+x_2+2h_2)}\:=\\ &=\:-\frac{\Omega_{\mathcal{T}}(x_1,y_1,x_2+2h_2)-\Omega_{\mathcal{T}}(x_1,y_1,x_2)}{2h_2} \end{split} \end{equation} where the last line just serves to make explicit how, in the limit $h_2\,\longrightarrow\,0$, this triangulation of the square-pyramid matches the textbook definition of the derivative of the canonical form coefficient $\Omega_{\mathcal{T}}$ of the triangle $\mathcal{T}$. 
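The equalities in \eqref{eq:CCfpydT} and the limit \eqref{eq:CCFh20} are identities between rational functions, so they can be verified symbolically. The following is a minimal verification sketch (an aside, not part of the derivation) in Python with sympy; the helper \texttt{Omega\_T} and all variable names are ours, chosen to mirror $x_1$, $y_1$, $x_2$, $h_2$:

```python
# Symbolic check (sympy) that the {1234}+{1235} triangulation of the square
# pyramid equals minus the Newton difference quotient of the triangle's
# canonical form coefficient, and that h2 -> 0 gives minus its x2-derivative.
import sympy as sp

x1, y1, x2, h2 = sp.symbols('x1 y1 x2 h2', positive=True)

def Omega_T(a, b, c):
    """Canonical form coefficient of the triangle, i.e. the l=0 two-site
    wavefunction 1/((x1+x2)(x1+y1)(y1+x2)) with (a, b, c) = (x1, y1, x2)."""
    return 1/((a + c)*(a + b)*(b + c))

# The two pieces of the triangulation {1234} + {1235}.
piece_1234 = 1/(2*h2*(x1 + x2)*(x1 + y1)*(y1 + x2))
piece_1235 = -1/(2*h2*(x1 + x2 + 2*h2)*(x1 + y1)*(y1 + x2 + 2*h2))
Omega_pyr = piece_1234 + piece_1235

# ... equal minus the difference quotient of Omega_T in x2 with step 2*h2 ...
diff_quot = -(Omega_T(x1, y1, x2 + 2*h2) - Omega_T(x1, y1, x2))/(2*h2)
assert sp.simplify(Omega_pyr - diff_quot) == 0

# ... and in the h2 -> 0 limit reproduce minus the x2-derivative: after
# sp.cancel the overall factor of h2 drops out, so the limit is a plain subs.
limit_val = sp.cancel(Omega_pyr).subs(h2, 0)
assert sp.simplify(limit_val + sp.diff(Omega_T(x1, y1, x2), x2)) == 0
```

The same limit can equivalently be taken with \texttt{sp.limit(Omega\_pyr, h2, 0)}; \texttt{sp.cancel} is used only to make the cancellation of the spurious $h_2$ pole explicit.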
Thus, one can start from a set of $n_e$ triangles and $n_h$ segments, and intersect them in their midpoints -- or, equivalently, one can consider a set of $n_e+n_h$ triangles and intersect $n_e$ of them on the midpoints of at most two of their intersectable edges, and each of the other $n_h$ ones on one of its midpoints {\it and} the vertex opposite to it\footnote{The difference between the two ways of looking at this procedure boils down simply to a different choice of basis in $\mathbb{R}^{3n_e+n_h}$: when we intersect triangles and segments in their midpoints the basis is chosen once and for all by the choice of the parametrisation of the vertices, which for the segments is conveniently chosen by thinking of them as a projection of a triangle through a certain cone; when we instead consider the intersection of just triangles in their intersectable midpoints or in one of the midpoints and the vertex opposite to it, the basis depends on the way the constraints $\{{\bf x'}_i\,=\,{\bf x'}_j,\: {\bf x}_i+{\bf y}_i-{\bf x'}_i\,=\,{\bf x}_j+{\bf y}_j-{\bf x'}_j\}$ are parametrised -- the choice in \eqref{eq:VerPir}, which matches the triangle-segment construction, is $\mathbf{h}_j\,=\,{\bf x}_j-{\bf y}_j+{\bf x'}_j$, but one could analogously take $\mathbf{h}_j\,=\,{\bf x}_j-{\bf y}_j$, corresponding just to a shift $h_j\,\longrightarrow\,h_j+x'_j$. This change of basis is indeed reflected in the way that the graph related to the polytope is labelled: with the choice just mentioned, the site with a tadpole is labelled by $x_j-h_j$ (rather than by $x_j$).}.
This procedure beautifully provides a combinatorial, geometric and graph theoretical understanding of derivative operators: allowing the triangles to intersect in one of their midpoints and in the vertices opposite to them -- or, equivalently, introducing a segment as a second building block and allowing it to intersect with a triangle in their midpoints -- generates a polytope whose canonical form coefficient is nothing but Newton's difference quotient, which, from a graph theoretical point of view, corresponds to gluing a one-loop one-site graph to the original graph on one of its vertices. Then, taking the limit $h_j\,\longrightarrow\,0$ for all $j\,=\,1,\ldots,\,n_h$, the canonical form constructed in this way reduces -- up to a sign $(-1)^{n_h}$ -- to the action of an $n_h$-th order derivative operator on the canonical form of the cosmological polytope constructed out of the $n_e$ triangles in the usual way. \subsubsection{The wavefunction of the universe for $l=1$ states} Let us now consider the space of $n_e$ triangles with vertices $\{\mathbf{x}_i-\mathbf{y}_i+\mathbf{x'}_i,\,\mathbf{x}_i+\mathbf{y}_i-\mathbf{x'}_i,\,-\mathbf{x}_i+\mathbf{y}_i+\mathbf{x'}_i\}$ ($i\,=\,1,\ldots,n_e$), and of $2n_e$ segments with vertices $\{2\mathbf{\tilde{x}}_j-\mathbf{h}_j,\,\mathbf{h}_j\}$ ($j\,=\,1,\,\ldots,2n_e$). From it, we can define the space of $n_e$ polytopes $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$ ($i\,=\,1,\ldots,n_e$) by intersecting each of the triangles with two segments, one for each midpoint of its intersectable edges.
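As an illustrative aside (ours, not part of the construction), the statement that each glued tadpole acts as a difference quotient, with the $h_j\,\longrightarrow\,0$ limit producing derivatives up to $(-1)^{n_h}$, can be checked symbolically in the simplest composite case: a two-site graph with one tadpole on each site. The helper names below are ours, and the minus sign inside \texttt{tadpole} encodes our convention for the sign each tadpole contributes:

```python
# Sketch check (sympy): composing two tadpole difference quotients on the
# triangle's canonical form and sending both h's to zero yields the mixed
# derivative d^2/(dx1 dx2), the sign (-1)^{n_h} being +1 for n_h = 2.
import sympy as sp

x1, y1, x2, h1, h2 = sp.symbols('x1 y1 x2 h1 h2', positive=True)

# l = 0 two-site wavefunction (canonical form coefficient of the triangle).
Omega_T = 1/((x1 + x2)*(x1 + y1)*(y1 + x2))

def tadpole(expr, v, h):
    """Newton difference quotient with step 2h, including the minus sign
    that a glued one-loop one-site graph contributes (our convention)."""
    return -(expr.subs(v, v + 2*h) - expr)/(2*h)

# One tadpole per site of the two-site graph.
Omega_tt = tadpole(tadpole(Omega_T, x2, h2), x1, h1)

# h1, h2 -> 0: the two minus signs compound to (-1)^2 = +1, leaving the
# mixed second derivative; cancel() removes the spurious h1, h2 poles.
limit_val = sp.cancel(Omega_tt).subs({h1: 0, h2: 0})
assert sp.simplify(limit_val - sp.diff(Omega_T, x1, x2)) == 0
```

The same pattern iterates: gluing $n_h$ tadpoles and nesting $n_h$ such difference quotients reproduces, in the limit, an $n_h$-th order derivative operator up to the sign $(-1)^{n_h}$.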
Consequently, from the discussion in the previous section, a polytope $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$ is identified as the convex hull of the following vertices \begin{equation}\eqlabel{eq:PB2} \left\{ \mathbf{x}_i-\mathbf{y}_i+\mathbf{x'}_i,\quad\mathbf{x}_i+\mathbf{y}_i-\mathbf{x'}_i,\quad-\mathbf{x}_i+\mathbf{y}_i+\mathbf{x'}_i,\quad2\mathbf{x}_i-\mathbf{h}_i,\quad\mathbf{h}_i,\quad2\mathbf{x'}_i-\mathbf{h'}_i,\quad\mathbf{h'}_i \right\}. \end{equation} One feature of $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$ is that the pair of vertices of each intersectable side of the triangle lies in the same plane as the vertices of one segment, identifying a square face with midpoint $\mathbf{x}_i$ ($\mathbf{x'}_i$) -- this is just a consequence of the constraints defining each $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$. Thus, given a set of $n_e$ $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$'s, we can generate more complicated polytopes $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ by intersecting them in the midpoints of their square facets. The resulting polytope $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ lives in $\mathbb{P}^{5n_e-r-1}$, $r$ being the number of constraints on the midpoints. For example, given two polytopes $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$ ($i\,=\,1,2$) identified by the vertices \eqref{eq:PB2}, we can glue them at one of these midpoints via the constraint $\mathbf{x'}_1\,=\,\mathbf{x'}_2\,(\equiv\,\mathbf{x'})$. The polytope $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ lives in $\mathbb{P}^8$ and is then the convex hull of the vertices \begin{equation*} \begin{split} &\left\{ \mathbf{x}_1-\mathbf{y}_1+\mathbf{x'},\quad\mathbf{x}_1+\mathbf{y}_1-\mathbf{x'},\quad-\mathbf{x}_1+\mathbf{y}_1+\mathbf{x'},\quad2\mathbf{x}_1-\mathbf{h}_1,\quad\mathbf{h}_1,\quad2\mathbf{x'}-\mathbf{h'}_1,\quad\mathbf{h'}_1, \right.\\ &\left.
\hspace{.125cm}\mathbf{x}_2-\mathbf{y}_2+\mathbf{x'},\quad\mathbf{x}_2+\mathbf{y}_2-\mathbf{x'},\quad-\mathbf{x}_2+\mathbf{y}_2+\mathbf{x'},\quad2\mathbf{x}_2-\mathbf{h}_2,\quad\mathbf{h}_2,\quad2\mathbf{x'}-\mathbf{h'}_2,\quad\mathbf{h'}_2 \right\}. \end{split} \end{equation*} Importantly, any collection of such intersecting $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$'s has a graph associated to it. Recall that any $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$ is in turn defined as the intersection of one triangle and two segments. Recall further that the triangle can be represented by a two-site line graph, while the segment is represented by a one-site one-loop graph. In both cases, the sites of the graphs represent the sides of the triangle/segments on which they can be intersected. Hence, being the intersection of one triangle and two segments, the polytope $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$ is represented by the graph $\mathfrak{t}$ obtained by gluing two one-loop one-site graphs to a two-site line graph, one at each site of the latter \begin{equation} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] \begin{scope}[scale={.75}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x'$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_1$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={
node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \node[left=2cm of m1] (eq) {$\displaystyle\xleftrightarrow{\hspace*{2cm}}$}; \node[left=2cm of eq] (Pt) {$\displaystyle\mathcal{P}_{\mathfrak{t}}$}; \end{scope} \end{tikzpicture} \end{equation} Thus, a collection of $\mathcal{P}_{\mathfrak{t}}^{\mbox{\tiny $(i)$}}$'s is represented by a collection of graphs $\mathfrak{t}_i$'s depicted above, and a polytope $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ is then represented by the intersection of the $\mathfrak{t}_i$'s in their sites. For example\footnote{It is worth stressing that the disposition of the one-loop one-site subgraphs (nested, internal, external, etc.) is meaningless: they are drawn as they are for pictorial convenience.}: \begin{equation*} \begin{tikzpicture}[ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, scale={.9}, transform shape] \coordinate (x1) at (-.5,0) {}; \coordinate (x2) at (-1.75,-1.15) {}; \coordinate (x3) at (-1.75,-2.225) {}; \coordinate (x4) at (.5,-.25) {}; \coordinate (x5) at (1,-1.75) {}; \coordinate (x6) at (-1.25,-3.375) {}; \coordinate (x7) at (.75,-2.625) {}; \coordinate (x8) at (0,-3.375) {}; \draw[-,thick,color=red] (x1) -- (x2); \draw[-,thick,color=red] (x4) -- (x5); \draw[-,thick,color=red] (x3) -- (x6); \draw[-,thick,color=red] (x7) -- (x8); \coordinate (c12r) at ($(x1)!-0.125!(x2)$); \draw[thick, color=red] (c12r) circle (.2125cm); \coordinate (c12l) at ($(x2)!-0.125!(x1)$); \draw[thick, color=red] (c12l) circle (.2125cm); \coordinate (c45r) at ($(x4)!-0.125!(x5)$); \draw[thick, color=red] (c45r) circle (.2125cm); \coordinate (c45l) at ($(x5)!-0.125!(x4)$); \draw[thick, color=red] (c45l) circle (.2125cm); \coordinate (c36r) at ($(x3)!-0.125!(x6)$);
\draw[thick, color=red] (c36r) circle (.175cm); \coordinate (c36l) at ($(x6)!-0.125!(x3)$); \draw[thick, color=red] (c36l) circle (.175cm); \coordinate (c78r) at ($(x7)!-0.15!(x8)$); \draw[thick, color=red] (c78r) circle (.175cm); \coordinate (c78l) at ($(x8)!-0.15!(x7)$); \draw[thick, color=red] (c78l) circle (.175cm); \draw[fill, color=blue] (x1) circle (2pt); \draw[fill, color=blue] (x2) circle (2pt); \draw[fill, color=blue] (x3) circle (2pt); \draw[fill, color=blue] (x4) circle (2pt); \draw[fill, color=blue] (x5) circle (2pt); \draw[fill, color=blue] (x6) circle (2pt); \draw[fill, color=blue] (x7) circle (2pt); \draw[fill, color=blue] (x8) circle (2pt); \node[right=1cm of x5.east] (arr) {$\displaystyle \xrightarrow{\hspace{1cm}}$}; \coordinate (xa) at ($(x1.east)+(5cm, 0)$); \coordinate (xb) at ($(xa.east)+(1cm,0)$); \coordinate (xc) at ($(xb.east)+(1cm,0)$) ; \coordinate (xd) at ($(xc.east)+(1cm,0)$); \coordinate (xe) at ($(xd.east)+(1cm,0)$); \draw[-,thick,color=red] (xa.east) -- (xb.west); \draw[-,thick,color=red] (xb.east) -- (xc.west); \draw[-,thick,color=red] (xc.east) -- (xd.west); \draw[-,thick,color=red] (xd.east) -- (xe.west); \coordinate (cal) at ($(xa)!-0.15!(xb)$); \draw[thick, color=red] (cal) circle (.175cm); \coordinate (cbu) at ($(xb)+(0,.175cm)$); \draw[thick, color=red] (cbu) circle (.175cm); \coordinate (cbd) at ($(xb)-(0,.175cm)$); \draw[thick, color=red] (cbd) circle (.175cm); \coordinate (ccu) at ($(xc)+(0,.175cm)$); \draw[thick, color=red] (ccu) circle (.175cm); \coordinate (ccd) at ($(xc)-(0,.175cm)$); \draw[thick, color=red] (ccd) circle (.175cm); \coordinate (cdu) at ($(xd)+(0,.175cm)$); \draw[thick, color=red] (cdu) circle (.175cm); \coordinate (cdd) at ($(xd)-(0,.175cm)$); \draw[thick, color=red] (cdd) circle (.175cm); \coordinate (cer) at ($(xe)!-0.15!(xd)$); \draw[thick, color=red] (cer) circle (.175cm); \draw[fill, color=blue] (xa) circle (2pt); \draw[fill, color=blue] (xb) circle (2pt); \draw[fill, color=blue] (xc) circle (2pt); 
\draw[fill, color=blue] (xd) circle (2pt); \draw[fill, color=blue] (xe) circle (2pt); \coordinate (xg) at ($(xc)-(0,1.75cm)$); \coordinate (xf) at ($(xg)-(1.5cm,0)$); \coordinate (xh) at ($(xg)+(1.5cm,0)$); \draw[thick, color=red] ($(xf)!0.5!(xg)$) circle (.75); \draw[thick, color=red] ($(xg)!0.5!(xh)$) circle (.75); \coordinate (xfl) at ($(xf.west)+(-.175cm,0)$); \draw[thick, color=red] (xfl) circle (.175cm); \coordinate (xfr) at ($(xf.east)+(.175cm,0)$); \draw[thick, color=red] (xfr) circle (.175cm); \coordinate (xgla) at ($(xg.west)+(-.175cm,0)$); \draw[thick, color=red] (xgla) circle (.175cm); \coordinate (xglb) at ($(xg.west)+(-.3cm,0)$); \draw[thick, color=red] (xglb) circle (.3cm); \coordinate (xgra) at ($(xg.east)+(.175cm,0)$); \draw[thick, color=red] (xgra) circle (.175cm); \coordinate (xgrb) at ($(xg.east)+(.3cm,0)$); \draw[thick, color=red] (xgrb) circle (.3cm); \coordinate (xhl) at ($(xh.west)+(-.175cm,0)$); \draw[thick, color=red] (xhl) circle (.175cm); \coordinate (xhr) at ($(xh.east)+(.175cm,0)$); \draw[thick, color=red] (xhr) circle (.175cm); \draw[fill,blue] (xf) circle (2pt); \draw[fill,blue] (xg) circle (2pt); \draw[fill,blue] (xh) circle (2pt); \coordinate (xi) at ($(xb)-(0,3.3cm)$); \coordinate (xj) at ($(xf)-(0,2.4cm)$); \coordinate (xk) at ($(xg)-(0,2.2cm)$); \coordinate (xl) at ($(xk.east)+(1.58cm,0)$); \draw[-,thick,color=red] (xi) -- (xj) -- (xk) -- (xi); \draw[-,thick,color=red] (xk) -- (xl); \coordinate (ciu) at ($(xi)+(0,+.175cm)$); \draw[thick,color=red] (ciu) circle (.175cm); \coordinate (cid) at ($(xi)+(0,.3cm)$); \draw[thick,color=red] (cid) circle (.3cm); \coordinate (cju) at ($(xj)+(-.175cm,0)$); \draw[thick,color=red] (cju) circle (.175cm); \coordinate (cjd) at ($(xj)+(-.3cm,0)$); \draw[thick,color=red] (cjd) circle (.3cm); \coordinate (cka) at ($(xk)+(.045cm,+.169cm)$); \draw[thick,color=red] (cka) circle (.175cm); \coordinate (ckb) at ($(xk)+(.077cm,.29cm)$); \draw[thick,color=red] (ckb) circle (.3cm); \coordinate (ckd) at 
($(xk)+(0,-.175cm)$); \draw[thick,color=red] (ckd) circle (.175cm); \coordinate (cl) at ($(xl)+(.175cm,0)$); \draw[thick,color=red] (cl) circle (.175cm); \draw[fill,blue] (xi) circle (2pt); \draw[fill,blue] (xj) circle (2pt); \draw[fill,blue] (xk) circle (2pt); \draw[fill,blue] (xl) circle (2pt); \end{tikzpicture} \end{equation*} As shown in the previous subsection, the presence of one-loop one-site subgraphs in the graph $\mathcal{G}_{\mathfrak{t}}$ implies that the associated canonical form is nothing but a Newton's difference quotient of the canonical form of the related graph $\mathcal{G}$ {\it without} one-loop one-site subgraphs with respect to the variables associated to the sites of $\mathcal{G}_{\mathfrak{t}}$ where the one-loop one-site subgraphs are glued. Thus, given a graph $\mathcal{G}_{\mathfrak{t}}$, there is a polytope $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ living in\footnote{Here we write the dimension of the projective space where the polytope lives in terms of the data of the graph $\{n_v,\,n_e,\,n_h\}$, which are respectively the numbers of vertices, edges and tadpoles, as well as using the relation among these parameters, {\it i.e.} $n_v\,=\,n_e+1-L$ and $n_h\,=\,2n_e$, $L$ being the number of internal loops.} $\mathbb{P}^{n_v+n_e+n_h-1}\,\equiv\,\mathbb{P}^{4n_e-L}$ associated to it, whose canonical form coefficient $\Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}})$ is the Newton's difference quotient of the wavefunction of the universe for $l\,=\,0$ with respect to a subset of variables which can be equivalently identified with {\it midpoints} of special hyperplanes in $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ and with the sites of $\mathcal{G}_{\mathfrak{t}}$ onto which the one-loop one-site subgraphs are glued.
In the limit $h_j\,\longrightarrow\,0$ ($\forall\:j\,=\,1,\ldots,2n_e$), the canonical form for $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ returns the wavefunction of the universe for $l\,=\,1$ states, reproducing \eqref{eq:WFnu1} \begin{equation}\eqlabel{eq:CFWFl1} \lim_{\{h_j\}\longrightarrow 0}\Omega(\mathcal{Y};\,\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}})\:=\:(-1)^{n_e}\Psi_{\mathcal{G}_{\mathfrak{t}}}(x_v,y_e). \end{equation} \subsection{Facets of the polytopes $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$}\label{subsec:FacP} The polytopes $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ are nothing but a specific sub-class of the standard cosmological polytopes. Consequently, the structure of their faces can still be analysed via the marking described in Section \ref{subsec:CPrev}. There is just one subtlety which is encoded in the presence of one-loop one-site subgraphs in the associated graph $\mathcal{G}_{\mathfrak{t}}$. In order to discuss it, we can just consider a one-loop one-site graph, {\it i.e.} the segment polytope \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=below:{\footnotesize $\displaystyle 2{\bf x}-{\bf h}$}] (m1) at (B); \coordinate [label=below:{\footnotesize $\displaystyle \;{\bf h}$}] (m2) at (C); \coordinate [label=above:{\footnotesize $\displaystyle {\bf x}$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[color=blue,fill=red] (m3) circle (2pt); \draw[-, very thick, color=blue] (B) -- (C); \draw[color=red, fill=red] (C) circle (2pt); \draw[color=blue, fill=blue] (B) circle (2pt); \end{scope} %
\begin{scope}[shift={(4,0)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate (m3) at ($(B)!0.5!(C)$); \coordinate [label=left:{\footnotesize $\displaystyle x$}] (x) at ($(m3)+(-1.25cm,0)$); \coordinate [label=right:{\footnotesize $\displaystyle h$}] (h) at ($(m3)+(1.25cm,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[very thick, color=red] (m3) circle (1.25cm); \draw[fill,color=blue] (x) circle (3pt); \end{scope} % \end{tikzpicture} \end{equation*} It has two boundaries, identified by the two vertices $\{\mathbf{V}_1,\,\mathbf{V}_2\}\,=\,\{2\mathbf{x}-\mathbf{h},\,\mathbf{h}\}$, which belong to the straight lines $x+2h\,=\,0$ and $x\,=\,0$ respectively. Now, the associated graph can be thought of as being obtained from a two-site line graph by merging its two sites into one. Such a constraint implies that the two vertices which are kept track of via a marking on the two extremes of the edge of the two-site line graph are made coincident. However, for consistency with the previous notation we will keep marking both.
Therefore, the two facets of the segment polytope can be indicated as \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=above:{\footnotesize $\displaystyle {\bf h}$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[fill,red] (m3) circle (2pt); \end{scope} % \begin{scope}[shift={(2.5,0)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate (m3) at ($(B)!0.5!(C)$); \coordinate [label=left:{\footnotesize $\displaystyle x$}] (x) at ($(m3)+(-1.25cm,0)$); \coordinate [label=right:{\footnotesize $\displaystyle h$}] (h) at ($(m3)+(1.25cm,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[very thick, color=red] (m3) circle (1.25cm); \draw[fill,color=blue] (x) circle (3pt); \draw[thick,color=red!50!black] (m3) circle (1.75cm); \coordinate [label=right:{\footnotesize $\displaystyle\mathfrak{g}_{\mathfrak{t}}$}] (gt) at ($(m3)+(1.75cm,0)$); \node[very thick, cross=4pt, rotate=0, color=blue, scale=.625] (X2) at (h) {}; \coordinate [label={\small $\displaystyle\mathcal{W}_I\mathbf{V}_2^I\,\equiv\,x\,=\,0$}] (hyp1) at ($(m3)-(0,2.75cm)$); \end{scope} % \begin{scope}[shift={(6.5,0)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=above:{\footnotesize $\displaystyle 2{\bf x} - {\bf h}$}] (m3) at ($(B)!0.5!(C)$); \tikzset{point/.style={insert path={
node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[fill,blue] (m3) circle (2pt); \end{scope} % \begin{scope}[shift={(9,0)}, scale={.5}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate (m3) at ($(B)!0.5!(C)$); \coordinate [label=left:{\footnotesize $\displaystyle x$}] (x) at ($(m3)+(-1.25cm,0)$); \coordinate [label=right:{\footnotesize $\displaystyle h$}] (h) at ($(m3)+(1.25cm,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[very thick, color=red] (m3) circle (1.25cm); \draw[fill,color=blue] (x) circle (3pt); \draw[thick,color=red!50!black] (x) circle (.25cm); \coordinate [label=left:{\footnotesize $\displaystyle\mathfrak{g}_{\mathfrak{t}}$}] (gt) at ($(x)+(1cm,0)$); \coordinate (va) at ($(x)+(.02,.375)$); \coordinate (vb) at ($(x)+(.02,-.375)$); \node[very thick, cross=4pt, rotate=0, color=blue, scale=.625] (X1a) at (va) {}; \node[very thick, cross=4pt, rotate=0, color=blue, scale=.625] (X1b) at (vb) {}; \coordinate [label={\small $\displaystyle\mathcal{W}_I\mathbf{V}_1^I\,\equiv\,x+2h\,=\,0$}] (hyp1) at ($(m3)-(0,2.75cm)$); \end{scope} % \end{tikzpicture} \end{equation*} where the two markings close to the only site indicate the very same vertex $\mathbf{h}$. So, given a generic graph $\mathcal{G}_{\mathfrak{t}}$, the vertices on the facets of $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ can be kept track of following the very same rule as for the standard cosmological polytopes, keeping in mind that the markings associated to the two ends of the edge of the one-loop one-site subgraphs identify the very same vertex. With this in mind we can analyse the facets of $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$. A first observation is that all the facets corresponding to subgraphs containing the lowest codimension graph without tadpoles live in $\mathbb{P}^{4n_e-L-1}$ and have $4n_e$ vertices.
\begin{wrapfigure}{l}{4.5cm} \begin{tikzpicture}[ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, scale=.8, transform shape] \begin{scope} \node[ball,text width=.18cm,fill,color=black] at (0,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black,right=1.2cm of x1.east] (x2) {}; \node[ball,text width=.18cm,fill,color=black,right=1.2cm of x2.east] (x3) {}; \node[ball,text width=.18cm,fill,color=black] at (-1,.8) (x4) {}; \node[ball,text width=.18cm,fill,color=black] at (-1,-.8) (x5) {}; \node[ball,text width=.18cm,fill,color=black] at (-1.7,-2) (x6) {}; \node[ball,text width=.18cm,fill,color=black] at (-.3,-2) (x7) {}; \node[above=.35cm of x5.north] (ref2) {}; \coordinate (Int2) at (intersection of x5--x1 and ref2--x2); \def.325{.225} \pgfmathsetmacro\x{.325*cos{60}}; \pgfmathsetmacro\y{.325*sin{60}}; \coordinate (c1u) at ($(x1)+(\x,\y)$); \coordinate (c1l) at ($(x1)!-0.175!(x2)$); \coordinate (c1r) at ($(x1)+(\x,-\y)$); \coordinate (c2u) at ($(x2)+(0,.2cm)$); \coordinate (c2d) at ($(x2)-(0,.2cm)$); \coordinate (c3) at ($(x3)!-0.15!(x2)$); \coordinate (c4) at ($(x4)!-0.15!(x1)$); \coordinate (c5u) at ($(x5)+(-\x,\y)$); \coordinate (c5r) at ($(x5)+(.3cm, 0)$); \coordinate (c5d) at ($(x5)+(0,-.3cm)$); \coordinate (c6) at ($(x6)!-0.15!(x5)$); \coordinate (c7) at ($(x7)!-0.15!(x5)$); \coordinate (c1xu) at ($(c1u)+(\x,\y)$); \coordinate (c1xl) at ($(x1)!-0.35!(x2)$); \coordinate (c1xr) at ($(x1)+(\x,-\y)+(\x,-\y)$); \coordinate (c2xu) at ($(x2)+(0,.4cm)$); \coordinate (c2xd) at ($(x2)-(0,.4cm)$); \coordinate (c3x) at ($(x3)+(.4cm,0)$); \coordinate (c4x) at ($(x4)!-.3!(x1)$); \coordinate (c5xu) at ($(c5u)+(-\x,\y)$); \coordinate (c5xr) at ($(c5r)+(.25cm,0)$); \coordinate (c5xd) at ($(c5d)+(0,-.25cm)$); \coordinate (c6x) at ($(x6)!-0.3!(x5)$); \coordinate (c7x) at ($(x7)!-0.3!(x5)$); \draw[-,thick,color=black] (x1) -- (x2) -- (x3); 
\draw[-,thick,color=black] (x1) -- (x4); \draw[-,thick,color=black] (x5) -- (x1); \draw[-,thick,color=black] (x5) -- (x7); \draw[-,thick,color=black] (x5) -- (x6); \draw[thick] (c1u) circle (.2cm); \draw[thick] (c1l) ellipse (.25cm and .15cm); \draw[thick] (c1r) circle (.2cm); \draw[thick] (c2u) circle (.2cm); \draw[thick] (c2d) circle (.2cm); \draw[thick] (c3) circle (.2cm); \draw[thick] (c4) circle (.2cm); \draw[thick] (c5u) circle (.2cm); \draw[thick] (c5r) ellipse (.25cm and .15cm); \draw[thick] (c5d) ellipse (.15cm and .25cm); \draw[thick] (c6) circle (.2cm); \draw[thick] (c7) circle (.2cm); \def.113{.113} \pgfmathsetmacro\xx{.113*cos{60}}; \pgfmathsetmacro\yy{.113*sin{60}}; \coordinate (a1u) at ($(c1xu)+(\xx,\yy)$); \coordinate (a1l) at ($(x1)!-0.7!(x2)$); \coordinate (a1r) at ($(c1xr)+(\xx,-\yy)$); \coordinate (a2u) at ($(c2xu)+(0, .25cm)$); \coordinate (a2d) at ($(c2xd)-(0, .25cm)$); \coordinate (a3u) at ($(x3)+(0, .5cm)$); \coordinate (a3r) at ($(x3)!-0.5!(x2)$); \coordinate (a3d) at ($(x3)-(0,.5cm)$); \coordinate (a4ur) at ($(c4x)+(\x,\y)$); \coordinate (a4ul) at ($(x4)!-.6!(x1)$); \coordinate (a4dl) at ($(c4x)-(\x,\y)$); \coordinate (a5u) at ($(c5xu)+(-\x,\y)$); \coordinate (a5r) at ($(c5xr)+(.1cm,0)$); \coordinate (a6ul) at ($(c6x)+(-.25,0)$); \coordinate (a6d) at ($(x6)!-.45!(x5)$); \coordinate (a6r) at ($(c6x)+(.25,0)$); \coordinate (a67) at ($(x6)!0.5!(x7)$); \coordinate (a7l) at ($(c7x)-(.25,0)$); \coordinate (a7d) at ($(x7)!-0.45!(x5)$); \coordinate (a7r) at ($(c7x)+(.25,0)$); \draw[red!50!black, thick] plot [smooth cycle] coordinates {(a3r) (a3u) (a2u) (a1u) (a4ur) (a4ul) (a4dl) (a1l) (a5u) (a6ul) (a6d) (a6r) (a67) (a7l) (a7d) (a7r) (a5r) (a1r) (a2d) (a3d)}; \node[color=red!50!black,] at ($(x5)+(3,0)$) {\large $\mathfrak{g}_{\mathfrak{t}}\,=\,\mathcal{G}_{\mathfrak{t}}$}; \coordinate (m1) at ($(x1)!0.5!(x4)$); \coordinate (m2) at ($(x1)!0.5!(x2)$); \coordinate (m3) at ($(x2)!0.5!(x3)$); \coordinate (m4) at ($(x1)!0.5!(x5)$); \coordinate (m5) at 
($(x5)!0.5!(x6)$); \coordinate (m6) at ($(x5)!0.5!(x7)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (m1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (m2) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (m3) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (m4) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (m5) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (m6) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c1xu) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c1xl) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c1xr) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c2xu) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c2xd) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c3x) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c4x) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c5xu) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c5xr) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c5xd) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c6x) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (c7x) {}; \end{scope} % \end{tikzpicture} \end{wrapfigure} This counting is easily done. The total number of vertices of a polytope $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ is $3n_e+2n_h$, {\it i.e.} three for each straight edge of the associated graph $\mathcal{G}_{\mathfrak{t}}$ and two for each tadpole; furthermore by construction $n_h\,=\,2n_e$ and, therefore, the total number of vertices of $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ is $7n_e$. When we consider any subgraph which contains all the straight edges, the related facet will have two vertices for each straight edge and one for each tadpole, so that the total number of vertices in it is $4n_e$. Thus, such a facet is a polytope which lives in $\mathbb{P}^{4n_e-L-1}$ and has $4n_e$ vertices. 
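The counting just performed can be condensed into a few lines of code. This is only a restatement of the rules above (three vertices per straight edge and two per tadpole for the full polytope, two per straight edge and one per tadpole on the facet, with $n_h=2n_e$ by construction); the function names are illustrative:

```python
# Vertex counting for the polytope P_{G_t} associated to a graph with
# n_e straight edges and n_h = 2*n_e tadpoles.
def total_vertices(n_e):
    n_h = 2 * n_e                # two tadpoles per straight edge, by construction
    return 3 * n_e + 2 * n_h     # three per straight edge, two per tadpole

def facet_vertices(n_e):
    n_h = 2 * n_e
    return 2 * n_e + 1 * n_h     # two per straight edge, one per tadpole

for n_e in (1, 2, 3):
    assert total_vertices(n_e) == 7 * n_e
    assert facet_vertices(n_e) == 4 * n_e
```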
Consequently, for graphs with no internal loops ($L\,=\,0$), such codimension-$1$ facets are simplices -- an example is given in the picture above. A second important observation is that the scattering amplitude is encoded in higher codimension faces. Concretely, the {\it scattering face} of a polytope $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ is the face of codimension $n_h+1\,\equiv\,2n_e+1$ identified on the associated graph $\mathcal{G}_{\mathfrak{t}}$ by the subgraph $\mathfrak{g}_{\mathfrak{t}}\,=\,\mathcal{G}_{\mathfrak{t}}$ and {\it all} the $2n_e$ subgraphs which exclude one tadpole at a time. Interestingly, there is an isomorphic face which is identified by the subgraph which includes {\it none} of the tadpoles and, again, {\it all} the $2n_e$ subgraphs which exclude one tadpole at a time. We will see explicit examples of them in the next subsection. What is worth emphasising now is that the {\it scattering face} we have just discussed has the very same structure as the {\it scattering facet} which arises for the standard cosmological polytopes, as it is identified by the set of vertices $\{\mathbf{x}_i+\mathbf{y}_i-\mathbf{x'}_i,\,-\mathbf{x}_i+\mathbf{y}_i+\mathbf{x'}_i\}$ ($i\,=\,1,\ldots,n_e$) associated to the sides of each edge which connects two different sites of $\mathcal{G}_{\mathfrak{t}}$ \begin{equation*} \begin{tikzpicture}[ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, scale=.8, transform shape] \begin{scope} \node[ball,text width=.18cm,fill,color=black] at (0,0) (x1) {}; \node[ball,text width=.18cm,fill,color=black,right=1.2cm of x1.east] (x2) {}; \node[ball,text width=.18cm,fill,color=black,right=1.2cm of x2.east] (x3) {}; \node[ball,text width=.18cm,fill,color=black] at (-1,.8) (x4) {}; \node[ball,text width=.18cm,fill,color=black] at (-1,-.8) (x5) {}; \node[ball,text width=.18cm,fill,color=black] at (-1.7,-2) (x6) {}; 
\node[ball,text width=.18cm,fill,color=black] at (-.3,-2) (x7) {}; \node[above=.35cm of x5.north] (ref2) {}; \coordinate (Int2) at (intersection of x5--x1 and ref2--x2); \def.325{.225} \pgfmathsetmacro\x{.325*cos{60}}; \pgfmathsetmacro\y{.325*sin{60}}; \coordinate (c1u) at ($(x1)+(\x,\y)$); \coordinate (c1l) at ($(x1)!-0.175!(x2)$); \coordinate (c1r) at ($(x1)+(\x,-\y)$); \coordinate (c2u) at ($(x2)+(0,.2cm)$); \coordinate (c2d) at ($(x2)-(0,.2cm)$); \coordinate (c3) at ($(x3)!-0.15!(x2)$); \coordinate (c4) at ($(x4)!-0.15!(x1)$); \coordinate (c5u) at ($(x5)+(-\x,\y)$); \coordinate (c5r) at ($(x5)+(.3cm, 0)$); \coordinate (c5d) at ($(x5)+(0,-.3cm)$); \coordinate (c6) at ($(x6)!-0.15!(x5)$); \coordinate (c7) at ($(x7)!-0.15!(x5)$); \coordinate (c1xu) at ($(c1u)+(\x,\y)$); \coordinate (c1xl) at ($(x1)!-0.35!(x2)$); \coordinate (c1xr) at ($(x1)+(\x,-\y)+(\x,-\y)$); \coordinate (c2xu) at ($(x2)+(0,.4cm)$); \coordinate (c2xd) at ($(x2)-(0,.4cm)$); \coordinate (c3x) at ($(x3)+(.4cm,0)$); \coordinate (c4x) at ($(x4)!-.3!(x1)$); \coordinate (c5xu) at ($(c5u)+(-\x,\y)$); \coordinate (c5xr) at ($(c5r)+(.25cm,0)$); \coordinate (c5xd) at ($(c5d)+(0,-.25cm)$); \coordinate (c6x) at ($(x6)!-0.3!(x5)$); \coordinate (c7x) at ($(x7)!-0.3!(x5)$); \draw[-,thick,color=black] (x1) -- (x2) -- (x3); \draw[-,thick,color=black] (x1) -- (x4); \draw[-,thick,color=black] (x5) -- (x1); \draw[-,thick,color=black] (x5) -- (x7); \draw[-,thick,color=black] (x5) -- (x6); \draw[thick] (c1u) circle (.2cm); \draw[thick] (c1l) ellipse (.25cm and .15cm); \draw[thick] (c1r) circle (.2cm); \draw[thick] (c2u) circle (.2cm); \draw[thick] (c2d) circle (.2cm); \draw[thick] (c3) circle (.2cm); \draw[thick] (c4) circle (.2cm); \draw[thick] (c5u) circle (.2cm); \draw[thick] (c5r) ellipse (.25cm and .15cm); \draw[thick] (c5d) ellipse (.15cm and .25cm); \draw[thick] (c6) circle (.2cm); \draw[thick] (c7) circle (.2cm); \coordinate (c01) at ($(x1)!0.175!(x4)$); \coordinate (c02) at ($(x1)!0.825!(x4)$); \coordinate 
(c03) at ($(x1)!0.175!(x2)$); \coordinate (c04) at ($(x1)!0.825!(x2)$); \coordinate (c05) at ($(x2)!0.175!(x3)$); \coordinate (c06) at ($(x2)!0.825!(x3)$); \coordinate (c07) at ($(x1)!0.175!(x5)$); \coordinate (c08) at ($(x1)!0.825!(x5)$); \coordinate (c09) at ($(x5)!0.175!(x6)$); \coordinate (c10) at ($(x5)!0.825!(x6)$); \coordinate (c11) at ($(x5)!0.175!(x7)$); \coordinate (c12) at ($(x5)!0.825!(x7)$); \draw[very thick, color=blue] (c01) circle (3pt); \draw[very thick, color=blue] (c02) circle (3pt); \draw[very thick, color=blue] (c03) circle (3pt); \draw[very thick, color=blue] (c04) circle (3pt); \draw[very thick, color=blue] (c05) circle (3pt); \draw[very thick, color=blue] (c06) circle (3pt); \draw[very thick, color=blue] (c07) circle (3pt); \draw[very thick, color=blue] (c08) circle (3pt); \draw[very thick, color=blue] (c09) circle (3pt); \draw[very thick, color=blue] (c10) circle (3pt); \draw[very thick, color=blue] (c11) circle (3pt); \draw[very thick, color=blue] (c12) circle (3pt); \end{scope} % \end{tikzpicture} \end{equation*} -- the open circles indicate the vertices of $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ {\it belonging} to the face. Notice that this face is identified by the intersection of the hyperplane $\mathcal{W}^{\mbox{\tiny (T)}}\,\equiv\,\sum_{v\in\mathcal{V}}\mathbf{x}_v$ with all the hyperplanes of the form $\mathcal{W}^{\mbox{\tiny ($\mathfrak{h}$)}}\equiv\,\sum_{v\in\mathcal{V}}\mathbf{x}_v+\sum_{e\in\mathcal{E}_{\mathfrak{h}}}\mathbf{h}_{e}$, where $\mathcal{E}_{\mathfrak{h}}$ is one of the subsets of $\mathcal{E}$ containing all the tadpole edges but one. The appearance of the scattering face as a codimension-$(2n_e+1)$ face is the beautiful avatar of the fact that the high energy limit of the scattering amplitude is encoded in the leading coefficient of the Laurent expansion of the wavefunction in the neighbourhood of the total energy pole! 
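For the two-site example analysed in the next subsection ($n_e=1$), the degeneration of these hyperplanes onto the total-energy plane can be checked symbolically. The sketch below uses the hyperplane labels appearing in the figures of the next subsection; the helper `total_energy_pole_order` is an illustrative restatement of the counting formula, not notation from the text:

```python
import sympy as sp

x1, x2, h1, h2 = sp.symbols('x1 x2 h1 h2')

# The 2*n_e + 1 = 3 hyperplanes cutting out the scattering face for n_e = 1:
# the total-energy plane and the two planes dropping one tadpole at a time.
planes = [x1 + x2, x1 + x2 + 2 * h1, x1 + x2 + 2 * h2]

# In the limit h_i -> 0 they all collapse onto the total-energy plane,
# so their product develops a pole of order 2*n_e + 1 = 3 there.
product = planes[0] * planes[1] * planes[2]
assert product.subs({h1: 0, h2: 0}) == (x1 + x2) ** 3

def total_energy_pole_order(edge_weights):
    # order = 1 + sum over internal edges of 2*l_e
    return 1 + sum(2 * l for l in edge_weights)

assert total_energy_pole_order([1]) == 3   # n_e = 1 edge of weight l_e = 1
```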
In fact, in the limit $h_i\,\longrightarrow\,0$ ($\forall\,i\,=\,1,\ldots,2n_e$), the poles in the coefficient of the canonical form identified by these hyperplanes collapse to form a pole of order $2n_e+1$. This is indeed the same order we would read off from the related edge-weighted graph. According to \eqref{eq:OrdPol}, the order of the total energy pole is $1+\sum_{e\in\mathcal{E}_{\mathfrak{g}}^{\mbox{\tiny int}}}2l_e$, given that the total energy pole corresponds to the subgraph $\mathfrak{g}\,=\,\mathcal{G}$, which has no external edges. Given that in our case all the edges have weight $l_e\,=\,1$, the sum in the counting formula returns twice the number $n_e$ of edges of the graph. Consequently, the order of the pole is $2n_e+1$, in agreement with the polytope analysis. \subsection{An illustrative example}\label{subsec:ExCF} For the sake of clarity, let us illustrate the construction just described with an example. The simplest case is given by the polytope $\mathcal{P}_{\mathcal{G}_{\mathfrak{t}}}$ itself, which lives in $\mathbb{P}^4$ and is identified by the vertices \eqref{eq:PB2}, which we write here again for convenience: \begin{equation*} \left\{ \mathbf{x}_1-\mathbf{y}_{12}+\mathbf{x}_2,\quad\mathbf{x}_1+\mathbf{y}_{12}-\mathbf{x}_2,\quad-\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\quad2\mathbf{x}_1-\mathbf{h}_1,\quad\mathbf{h}_1,\quad2\mathbf{x}_2-\mathbf{h}_2,\quad\mathbf{h}_2 \right\}. 
\end{equation*} It is in $1-1$ correspondence with the graph $\mathfrak{t}$: \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] \begin{scope}[scale={.75}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \end{scope} \end{tikzpicture} \end{equation*} Let us analyse in detail its facet structure. As mentioned in the previous section, some of the facets are simplices: they are identified by any subgraph containing the lowest codimension subgraph without tadpoles. In this case, the lowest codimension subgraph without tadpoles is the two-site line graph. 
\begin{wrapfigure}{l}{4cm} \centering \begin{tikzpicture}[ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, transform shape] \begin{scope}[scale={1}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \coordinate (cA) at ($(m1)!0.5!(m5)$); \coordinate (cB) at ($(m2)!0.5!(m4)$); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \coordinate (c12) at ($(m1)!0.5!(m2)$); \coordinate (cL) at ($(m1)-(.5,0)$); \coordinate (cR) at ($(m2)+(.5,0)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (c12) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR) {}; \node[below=.5cm of c12] (W1) {$\displaystyle x_1+x_2\,=\,0$}; \def.325{.325} \pgfmathsetmacro\bcx{.325*cos{45}}; \pgfmathsetmacro\bcy{.325*sin{45}}; \pgfmathsetmacro1.75{.325*cos{30}}; \pgfmathsetmacro3{.325*sin{30}}; \coordinate (a) at ($(cA)+(-.325,0)$); \coordinate (ab) at ($(cA)+(-\bcx,\bcy)$); \coordinate (b) at ($(cA)+(0,+.325)$); \coordinate (bc) at ($(cA)+(\bcx,\bcy)$); \coordinate (c) at ($(cA)+(1.75,3)$); \coordinate (d) at ($(cB)+(-1.75,3)$); \coordinate 
(de) at ($(cB)+(-\bcx,\bcy)$); \coordinate (e) at ($(cB)+(0,.325)$); \coordinate (ef) at ($(cB)+(\bcx,\bcy)$); \coordinate (f) at ($(cB)+(.325,0)$); \coordinate (fg) at ($(cB)+(\bcx,-\bcy)$); \coordinate (g) at ($(cB)+(0,-.325)$); \coordinate (gh) at ($(cB)+(-\bcx,-\bcy)$); \coordinate (h) at ($(cB)+(-1.75,-3)$); \coordinate (i) at ($(cA)+(1.75,-3)$); \coordinate (ij) at ($(cA)+(\bcx,-\bcy)$); \coordinate (j) at ($(cA)+(0,-.325)$); \coordinate (ja) at ($(cA)+(-\bcx,-\bcy)$); \draw[thick, red!50!black] plot [smooth cycle] coordinates {(a) (ab) (b) (bc) (c) (d) (de) (e) (ef) (f) (fg) (g) (gh) (h) (i) (ij) (j) (ja)}; \node[above=.05cm of b, color=red!50!black] {\footnotesize $\displaystyle\mathfrak{g}_{\mathfrak{t}}^{\mbox{\tiny $(1)$}}\,=\,\mathcal{G}_{\mathfrak{t}}$}; \end{scope} % \end{tikzpicture} \end{wrapfigure} There are four such facets. The first one is identified by the graph itself and corresponds to the hyperplane $\mathcal{W}\,=\,\mathbf{x}_1+\mathbf{x}_2$. The four vertices on such a facet are \begin{equation}\eqlabel{eq:fver1} \left\{ \mathbf{x}_1+\mathbf{y}_{12}-\mathbf{x}_2,\; -\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\;\mathbf{h}_1,\;\mathbf{h}_2 \right\} \end{equation} which indeed span a tetrahedron in $\mathbb{P}^3$, with canonical form coefficient \begin{equation}\eqlabel{eq:CFr1} \Omega^{\mbox{\tiny $(1)$}}\:=\:\frac{1}{(2h_1)(2h_2)(y_{12}^2-x_1^2)}. 
\end{equation} The other three facets of this type are identified by the hyperplanes related to the subgraphs which exclude either of the tadpoles as well as both, while containing the two-site line subgraph: \begin{equation*} \begin{tikzpicture}[ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, transform shape] % \begin{scope} \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \coordinate (cA) at ($(m1)!0.5!(m5)$); \coordinate (cB) at ($(m2)!0.5!(m4)$); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \coordinate (c12) at ($(m1)!0.5!(m2)$); \coordinate (cL) at ($(m1)-(.5,0)$); \pgfmathsetmacro\ax{.25*cos{135}}; \pgfmathsetmacro\ay{.25*sin{135}}; \coordinate (cR1) at ($(cB)+(\ax,\ay)$); \coordinate (cR2) at ($(cB)+(\ax,-\ay)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (c12) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR2) {}; \node[below=.5cm of c12] (W1) {$\displaystyle x_1+x_2+2h_2\,=\,0$}; \def.325{.325} 
\pgfmathsetmacro\bcx{.325*cos{45}}; \pgfmathsetmacro\bcy{.325*sin{45}}; \pgfmathsetmacro1.75{.325*cos{30}}; \pgfmathsetmacro3{.325*sin{30}}; \coordinate (a) at ($(cA)+(-.325,0)$); \coordinate (ab) at ($(cA)+(-\bcx,\bcy)$); \coordinate (b) at ($(cA)+(0,+.325)$); \coordinate (bc) at ($(cA)+(\bcx,\bcy)$); \coordinate (c) at ($(cA)+(1.75,3)$); \coordinate (d) at ($(cB)+(-1.75,3)$); \coordinate (h) at ($(cB)+(-1.75,-3)$); \coordinate (i) at ($(cA)+(1.75,-3)$); \coordinate (ij) at ($(cA)+(\bcx,-\bcy)$); \coordinate (j) at ($(cA)+(0,-.325)$); \coordinate (ja) at ($(cA)+(-\bcx,-\bcy)$); \draw[thick, red!50!black] plot [smooth cycle] coordinates {(a) (ab) (b) (bc) (c) (d) (h) (i) (ij) (j) (ja)}; \node[above=.05cm of b, color=red!50!black] {\footnotesize $\displaystyle\mathfrak{g}_{\mathfrak{t}}^{\mbox{\tiny $(2)$}}$}; \end{scope} % \begin{scope}[shift={(5,0)}, transform shape] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \coordinate (cA) at ($(m1)!0.5!(m5)$); \coordinate (cB) at ($(m2)!0.5!(m4)$); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \coordinate (c12) at ($(m1)!0.5!(m2)$); \coordinate (cR) at ($(m2)+(.5,0)$); \pgfmathsetmacro\ax{.25*cos{45}}; \pgfmathsetmacro\ay{.25*sin{45}}; \coordinate (cL1) 
at ($(cA)+(\ax,\ay)$); \coordinate (cL2) at ($(cA)+(\ax,-\ay)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (c12) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL2) {}; \node[below=.5cm of c12] (W1) {$\displaystyle x_1+x_2+2h_1\,=\,0$}; \def.325{.325} \pgfmathsetmacro\bcx{.325*cos{45}}; \pgfmathsetmacro\bcy{.325*sin{45}}; \pgfmathsetmacro1.75{.325*cos{30}}; \pgfmathsetmacro3{.325*sin{30}}; \coordinate (b) at ($(cA)+(0,+.325)$); \coordinate (c) at ($(cA)+(1.75,3)$); \coordinate (d) at ($(cB)+(-1.75,3)$); \coordinate (de) at ($(cB)+(-\bcx,\bcy)$); \coordinate (e) at ($(cB)+(0,.325)$); \coordinate (ef) at ($(cB)+(\bcx,\bcy)$); \coordinate (f) at ($(cB)+(.325,0)$); \coordinate (fg) at ($(cB)+(\bcx,-\bcy)$); \coordinate (g) at ($(cB)+(0,-.325)$); \coordinate (gh) at ($(cB)+(-\bcx,-\bcy)$); \coordinate (h) at ($(cB)+(-1.75,-3)$); \coordinate (i) at ($(cA)+(1.75,-3)$); \draw[thick, red!50!black] plot [smooth cycle] coordinates {(c) (d) (de) (e) (ef) (f) (fg) (g) (gh) (h) (i)}; \node[above=.05cm of b, color=red!50!black] {\footnotesize $\displaystyle\mathfrak{g}_{\mathfrak{t}}^{\mbox{\tiny $(3)$}}$}; \end{scope} % \begin{scope}[shift={(10,0)}, transform shape] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \coordinate (cA) at 
($(m1)!0.5!(m5)$); \coordinate (cB) at ($(m2)!0.5!(m4)$); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \coordinate (c12) at ($(m1)!0.5!(m2)$); \coordinate (cR) at ($(m2)+(.5,0)$); \pgfmathsetmacro\ax{.25*cos{45}}; \pgfmathsetmacro\ay{.25*sin{45}}; \coordinate (cL1) at ($(cA)+(\ax,\ay)$); \coordinate (cL2) at ($(cA)+(\ax,-\ay)$); \coordinate (cR1) at ($(cB)+(-\ax,\ay)$); \coordinate (cR2) at ($(cB)+(-\ax,-\ay)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (c12) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL2) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR2) {}; \node[below=.5cm of c12] (W1) {$\displaystyle x_1+x_2+2h_1+2h_2\,=\,0$}; \def.325{.325} \pgfmathsetmacro\bcx{.325*cos{45}}; \pgfmathsetmacro\bcy{.325*sin{45}}; \pgfmathsetmacro1.75{.325*cos{30}}; \pgfmathsetmacro3{.325*sin{30}}; \coordinate (b) at ($(cA)+(0,+.325)$); \coordinate (c) at ($(cA)+(1.75,3)$); \coordinate (d) at ($(cB)+(-1.75,3)$); \coordinate (h) at ($(cB)+(-1.75,-3)$); \coordinate (i) at ($(cA)+(1.75,-3)$); \draw[thick, red!50!black] plot [smooth cycle] coordinates {(c) (d) (h) (i)}; \node[above=.05cm of b, color=red!50!black] {\footnotesize $\displaystyle\mathfrak{g}_{\mathfrak{t}}^{\mbox{\tiny $(4)$}}$}; \end{scope} % \end{tikzpicture} \end{equation*} whose respective vertices and canonical form coefficients are given by \begin{equation}\eqlabel{eq:fver2} \begin{split} & \{\mathbf{x}_1+\mathbf{y}_{12}-\mathbf{x}_2,\; -\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\;\mathbf{h}_1,\;2\mathbf{x}_2-\mathbf{h}_2\}, \hspace{1.625cm} \Omega^{\mbox{\tiny $(2)$}}\:=\:-\frac{1}{(2h_1)(2h_2)(y_{12}^2-x_1^2)}\\ & \{\mathbf{x}_1+\mathbf{y}_{12}-\mathbf{x}_2,\; 
-\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\;2\mathbf{x}_1-\mathbf{h}_1,\;\mathbf{h}_2\}, \hspace{1.625cm} \Omega^{\mbox{\tiny $(3)$}}\:=\:-\frac{1}{(2h_1)(2h_2)(y_{12}^2-x_2^2)}\\ & \{\mathbf{x}_1+\mathbf{y}_{12}-\mathbf{x}_2,\; -\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\;2\mathbf{x}_1-\mathbf{h}_1,\;2\mathbf{x}_2-\mathbf{h}_2\}, \qquad \Omega^{\mbox{\tiny $(4)$}}\:=\:\frac{1}{(2h_1)(2h_2)(y_{12}^2-(x_1+2h_1)^2)} \end{split} \end{equation} These facets share a codimension-$2$ face (codimension $3$ with respect to the original polytope), which is identified by the intersection of the hyperplanes associated to the lowest codimension subgraphs of $\mathcal{G}_{\mathfrak{t}}$ excluding one and only one tadpole at a time \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] \begin{scope}[scale={.75}] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \coordinate (cL) at ($(m1)!.125!(m2)$); \coordinate (cR) at 
($(m1)!.875!(m2)$); \draw[very thick, color=blue] (cL) circle (3pt); \draw[very thick, color=blue] (cR) circle (3pt); \end{scope} \end{tikzpicture} \end{equation*} where, as in the previous section, the open circles indicate the two vertices {\it belonging} to the face. This is equivalent to taking the residues at $h_1\,=\,0$ and $h_2\,=\,0$ of each of the four canonical form coefficients $ \Omega^{\mbox{\tiny $(j)$}}$ ($j\,=\,1,\ldots,4$), returning the Lorentz-invariant flat-space scattering amplitude. In this concrete example we are just seeing what we discussed in full generality at the end of the previous section: the scattering face is a higher codimension face of the polytope, and its codimension corresponds to the order of the pole when the limits $h_j\,\longrightarrow\,0$ ($j\,=\,1,\,2$) are taken. The polytope in question has four more facets: they are identified by any other subgraph which does not contain the two-site line graph, {\it i.e.}, in our concrete example, they can include one tadpole and one site at a time, or just one site: \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, cross/.style={cross out, draw, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.75}, transform shape] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize 
$\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[thick, color=red!50!black] ($(m1)!0.5!(m5)$) circle (.325cm); \coordinate (cLL) at ($(m1)-(.5,0)$); \coordinate (cL) at ($(m1)!.125!(m2)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (cLL) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL) {}; \coordinate (c12) at ($(m1)!0.5!(m2)$); \node[below=.5cm of c12] (W1) {$\displaystyle x_1+y_{12}\,=\,0$}; \node[right=2cm of m2] (v1) {$\displaystyle\{\mathbf{x}_1-\mathbf{y}_{12}+\mathbf{x}_2,\;-\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\;\mathbf{h}_1,\;2\mathbf{x}_2-\mathbf{h}_2,\;\mathbf{h}_2\}$}; \end{scope} % \begin{scope}[scale={.75}, shift={(0,-2)}, transform shape] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[thick, color=red!50!black] (m1) circle (4pt); \coordinate (cA) at 
($(m1)!0.5!(m5)$); \coordinate (cB) at ($(m2)!0.5!(m4)$); \coordinate (cL) at ($(m1)!.125!(m2)$); \pgfmathsetmacro\ax{.25*cos{45}}; \pgfmathsetmacro\ay{.25*sin{45}}; \coordinate (cL1) at ($(cA)+(\ax,\ay)$); \coordinate (cL2) at ($(cA)+(\ax,-\ay)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (cL1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL2) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cL) {}; \coordinate (c12) at ($(m1)!0.5!(m2)$); \node[below=.5cm of c12] (W1) {$\displaystyle x_1+y_{12}+2h_1\,=\,0$}; \node[right=2cm of m2] (v1) {$\displaystyle\{\mathbf{x}_1-\mathbf{y}_{12}+\mathbf{x}_2,\;-\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\;2\mathbf{x}_1-\mathbf{h}_1,\;2\mathbf{x}_2-\mathbf{h}_2,\;\mathbf{h}_2\}$}; \end{scope} % \begin{scope}[scale={.75}, shift={(0,-4)}, transform shape] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[thick, color=red!50!black] ($(m2)!0.5!(m4)$) circle (.325cm); \coordinate (cRR) at ($(m2)+(.5,0)$); \coordinate (cR) at ($(m1)!.875!(m2)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (cRR) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR) {}; \coordinate 
(c12) at ($(m1)!0.5!(m2)$); \node[below=.5cm of c12] (W1) {$\displaystyle y_{12}+x_2\,=\,0$}; \node[right=2cm of m2] (v1) {$\displaystyle\{\mathbf{x}_1-\mathbf{y}_{12}+\mathbf{x}_2,\;\mathbf{x}_1+\mathbf{y}_{12}-\mathbf{x}_2,\;2\mathbf{x}_1-\mathbf{h}_1,\;\mathbf{h}_1,\;\mathbf{h}_2\}$}; \end{scope} % \begin{scope}[scale={.75}, shift={(0,-6)}, transform shape] \coordinate (A) at (0,0); \coordinate (B) at (-1.75,-2.25); \coordinate (C) at (+1.75,-2.25); \coordinate [label=left:{\footnotesize $\displaystyle x_1$}] (m1) at ($(A)!0.5!(B)$); \coordinate [label=right:{\footnotesize $\displaystyle x_2$}] (m2) at ($(A)!0.5!(C)$); \coordinate [label=below:{\footnotesize $\displaystyle y_{12}$}] (m3) at ($(m1)!0.5!(m2)$); \coordinate [label=right:{\footnotesize $\displaystyle h_2$}] (m4) at ($(m2)+(.5,0)$); \coordinate [label=left:{\footnotesize $\displaystyle h_1$}] (m5) at ($(m1)-(.5,0)$); \tikzset{point/.style={insert path={ node[scale=2.5*sqrt(\pgflinewidth)]{.} }}} \draw[-, very thick, color=red] (m1) -- (m2); \draw[very thick, color=red] ($(m2)!0.5!(m4)$) circle (.25cm); \draw[very thick, color=red] ($(m1)!0.5!(m5)$) circle (.25cm); \draw[color=blue,fill=blue] (m1) circle (2pt); \draw[color=blue,fill=blue] (m2) circle (2pt); \draw[thick, color=red!50!black] (m2) circle (4pt); \coordinate (cA) at ($(m1)!0.5!(m5)$); \coordinate (cB) at ($(m2)!0.5!(m4)$); \coordinate (cR) at ($(m1)!.875!(m2)$); \pgfmathsetmacro\ax{.25*cos{45}}; \pgfmathsetmacro\ay{.25*sin{45}}; \coordinate (cR1) at ($(cB)+(-\ax,\ay)$); \coordinate (cR2) at ($(cB)+(-\ax,-\ay)$); \node[very thick, cross=4pt, rotate=0, color=blue] at (cR1) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR2) {}; \node[very thick, cross=4pt, rotate=0, color=blue] at (cR) {}; \coordinate (c12) at ($(m1)!0.5!(m2)$); \node[below=.5cm of c12] (W1) {$\displaystyle x_1+y_{12}+2h_1\,=\,0$}; \node[right=2cm of m2] (v1) 
{$\displaystyle\{\mathbf{x}_1-\mathbf{y}_{12}+\mathbf{x}_2,\;-\mathbf{x}_1+\mathbf{y}_{12}+\mathbf{x}_2,\;2\mathbf{x}_1-\mathbf{h}_1,\;\mathbf{h}_1\;2\mathbf{x}_2-\mathbf{h}_2\}$}; \end{scope} % \end{tikzpicture} \end{equation*} These facets are just square pyramids, whose square face has $\mathbf{x}_i$ related to the site outside of the subgraph as a midpoint \begin{equation*} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}, scale={1.25}] % \begin{scope}[scale={.5}, transform shape] \pgfmathsetmacro{\factor}{1/sqrt(2)}; \coordinate[label=right:{$\mathbf{4}$}] (B2) at (1.5,-3,-1.5*\factor); \coordinate[label=left:{$\mathbf{1}$}] (A1) at (-1.5,-3,-1.5*\factor); \coordinate[label=right:{$\mathbf{3}$}] (B1) at (1.5,-3.75,1.5*\factor); \coordinate[label=left:{$\mathbf{5}$}] (A2) at (-1.5,-3.75,1.5*\factor); \coordinate[label=above:{$\mathbf{2}$}] (C1) at (0.75,-.65,.75*\factor); \coordinate (C2) at (0.4,-6.05,.75*\factor); \coordinate (Int) at (intersection of A2--B2 and B1--C1); \coordinate (Int2) at (intersection of A1--B1 and A2--B2); \tikzstyle{interrupt}=[ postaction={ decorate, decoration={markings, mark= at position 0.5 with { \node[rectangle, color=white, fill=white, below=-.1 of Int] {}; }}} ] \draw[draw=none,fill=green!80,opacity=.3] (A2) -- (B1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=blue!60, opacity=.45] (C1) -- (B2) -- (A1) -- cycle; \draw[draw=none,fill=blue!80, opacity=.7] (C1) -- (A2) -- (B1) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (A1) -- (A2) -- cycle; \draw[draw=none,fill=blue!70, opacity=.5] (C1) -- (B2) -- (B1) -- cycle; \node[right=1.75cm of B2, scale=1.5] (eq) {$\displaystyle\Omega^{\mbox{\tiny $(a)$}}\:=\:(-1)^{\sigma_i}\frac{2(x_j+h_j)}{2h_i(y_{12}^2-x_j^2)(y_{12}^2-(x_j+2h_j)^2)}$}; \end{scope} % 
\end{tikzpicture} \end{equation*} which is identified by the hyperplane corresponding to the equation $x_i+y_{12}+2\sigma_i h_i\,=\,0$, with $\sigma_i\,=\,0,1$ and $i\,=\,1,\,2$. If we now go to the facet of this square pyramid identified by $h_i\,=\,0$, {\it i.e.} its square base, which is a codimension-$2$ face of the original polytope, the related canonical form is such that, in the degenerate limit $h_j\,\longrightarrow\,0$, it reduces to the derivative of the scattering amplitude with respect to $x_j$: \begin{equation}\eqlabel{eq:deglimCF} \Omega\:=\:(-1)^{\sigma_i}\frac{2(x_j+h_j)}{(y_{12}^2-x_j^2)(y_{12}^2-(x_j+2h_j)^2)}\:\xrightarrow{h_j\longrightarrow0}\:(-1)^{\sigma_i}\frac{2x_j}{(y_{12}^2-x_j^2)^2} \:\equiv\:(-1)^{\sigma_i}\frac{\partial}{\partial x_j}\frac{1}{y_{12}^2-x_j^2} \end{equation} Hence, the degenerate limit of the canonical form of the codimension-two face of the original polytope $\mathcal{P}_{\mathfrak{t}}$ is exactly the coefficient of the expected double pole in $x_i+y_{12}$. Thus, with this simple example we have illustrated the beautiful fact, proved in general in the previous section, that the coefficients of the highest-order poles of the wavefunction of the universe emerge as the degenerate limit ({\it i.e.} $h_i\,\longrightarrow\,0$ for all $i$'s) of the canonical forms of higher-codimension faces, which are easily identified via the associated graph. Beautifully, the codimension corresponds to the order of the pole in the wavefunction. \subsection{Cosmological polytopes and perturbative mass}\label{subsec:CPpert} Let us now discuss the combinatorics of the contributions to the wavefunction involving two-point couplings, which were discussed in Section \ref{sec:Pert}.
One of the key features of the graphs with two-point vertices is the fact that the same $y$ is associated to the two edges joined by such a vertex, which, as already shown, implies the presence of a high-order pole: taking residues of the wavefunction associated to the graph with respect to the variables associated to its sites, if $n_e$ edges are connected via $n_e-1$ white sites, one gets an $n_e$-order pole in $2y_e$, {\it i.e.} $\prod_{e\in\bar{\mathcal{E}}}(2y_e)^{-n_e}$ -- where $\bar{\mathcal{E}}$ is the subset of edges which differ from each other in the associated $y_e$. Consequently, it is in principle not possible to associate a canonical positive geometry to this type of graph, given that the canonical forms associated to them are characterised by having simple poles only. However, the discussion on the halohedron and the one-loop bi-adjoint scalar \cite{Salvatori:2018aha}, as well as the discussion in the previous section, taught us that functions with high-order poles can be thought of as degenerate limits of some canonical form.
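This mechanism can be checked symbolically on the canonical form of \eqref{eq:deglimCF}: it is exactly Newton's difference quotient of the flat-space factor $1/(y_{12}^2-x_j^2)$, with simple poles only, and its degenerate limit develops the double pole. A verification sketch (using Python's sympy; $x$, $y$, $h$ stand for $x_j$, $y_{12}$, $h_j$, and the overall sign $(-1)^{\sigma_i}$ is dropped):

```python
import sympy as sp

x, y, h = sp.symbols('x y h')

# Flat-space two-site factor and the canonical form of eq. (deglimCF),
# with the overall sign (-1)^sigma_i dropped
f = 1 / (y**2 - x**2)
Omega = 2*(x + h) / ((y**2 - x**2) * (y**2 - (x + 2*h)**2))

# Omega is exactly Newton's difference quotient of f with step 2h:
# it has simple poles only, at x and at x + 2h
diff_quotient = (f.subs(x, x + 2*h) - f) / (2*h)
assert sp.simplify(Omega - diff_quotient) == 0

# In the degenerate limit h -> 0 the two simple poles collapse into a
# double pole, i.e. the derivative of f
assert sp.simplify(sp.limit(Omega, h, 0) - sp.diff(f, x)) == 0
```

The check makes manifest that a function with only simple poles can degenerate into one with a higher-order pole, which is the statement used in the text.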
In the case of graphs with black and white vertices introduced in Section \ref{sec:Pert}, it is straightforward to identify the positive geometry which we should take the degenerate limit of: given that any graph with black and white sites satisfies the same combinatorial rules as the standard reduced graphs and can be thought of as a limit of them\footnote{Although in \eqref{eq:WFdeg} we draw a line graph, such a relation holds generally: it is enough to substitute the two black nodes with arbitrarily complicated graphs.} \begin{equation}\eqlabel{eq:WFdeg} \begin{tikzpicture}[line join = round, line cap = round, ball/.style = {circle, draw, align=center, anchor=north, inner sep=0}, axis/.style={very thick, ->, >=stealth'}, pile/.style={thick, ->, >=stealth', shorten <=2pt, shorten>=2pt}, every node/.style={color=black}] % \begin{scope} \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega_1$}] (w1) at ($(x1)+(1,0)$); \coordinate (t1) at ($(w1)+(.25,0)$); \coordinate (t2) at ($(t1)+(1,0)$); \coordinate[label=below:{\footnotesize $\omega_a$}] (w2) at ($(t2)+(.25,0)$); \coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(w2)+(1,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $y$} (w1) -- (t1); \draw[-,dotted] (t1) -- (t2); \draw[-,thick] (t2) -- (w2) -- node[above] {\footnotesize $y$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=white] (w1) circle (2pt); \draw[fill=white] (w2) circle (2pt); \draw[fill,black] (x2) circle (2pt); \node[right=.25cm of x2] (eq) {$\displaystyle=\:\int\prod_{e\in\mathcal{E}}dy_e\delta(y_e-y)$}; \end{scope} % \begin{scope}[shift={(7.5,0)}, transform shape] \coordinate[label=below:{\footnotesize $x_1$}] (x1) at (0,0); \coordinate[label=below:{\footnotesize $\omega_1$}] (w1) at ($(x1)+(1,0)$); \coordinate (t1) at ($(w1)+(.25,0)$); \coordinate (t2) at ($(t1)+(1,0)$); \coordinate[label=below:{\footnotesize $\omega_a$}] (w2) at ($(t2)+(.25,0)$);
\coordinate[label=below:{\footnotesize $x_2$}] (x2) at ($(w2)+(1,0)$); \draw[-,thick] (x1) -- node[above] {\footnotesize $y_{e_{11}}$} (w1) -- (t1); \draw[-,dotted] (t1) -- (t2); \draw[-,thick] (t2) -- (w2) -- node[above] {\footnotesize $y_{e_{a2}}$} (x2); \draw[fill,black] (x1) circle (2pt); \draw[fill=black] (w1) circle (2pt); \draw[fill=black] (w2) circle (2pt); \draw[fill,black] (x2) circle (2pt); \end{scope} \end{tikzpicture} \end{equation} Hence, the graphs with black and white sites are related to a degenerate limit of the canonical forms of the standard cosmological polytopes. As a final comment, we can also consider mass insertions on the edge-weighted graphs, as already discussed in Section \ref{sec:Pert}. Then, the mass correction corresponding to a graph with $n_m$ mass insertions is obtained as a double degenerate limit of the polytopes introduced in this paper: one class of limits, $h_j\,\longrightarrow\,0$, which makes poles collapse into higher-order ones, and the other, $y_e\,\longrightarrow y$ for each edge $e$ attached to a two-point vertex. \section{Conclusion}\label{sec:Concl} In the last two years we have started to scratch the surface of what the general features of cosmological observables, equivalently the wavefunction of the universe and the spatial correlators, may be, and how fundamental physics is encoded into them. This is in contrast to the status of physics at sufficiently high energies, where the requirements of unitarity, locality and Lorentz invariance are extremely constraining, determining which interactions are allowed and fixing the basic structure of scattering processes. It would be ideal to reach a similar understanding in cosmology, but we are still pretty far from achieving it. This happens for a good reason: those principles which are basic in flat space become approximate, and thus it is no longer clear which are the fundamental rules governing the physics at cosmological scales.
There are two complementary approaches -- see \cite{Arkani-Hamed:2018kmz} and \cite{Arkani-Hamed:2017fdk, Arkani-Hamed:2018ahb, Benincasa:2018ssx} -- which have recently been undertaken to make progress in this direction, both of which are inspired by the most recent developments in the context of scattering amplitudes. The present paper is a generalisation of the second one, and has started the detailed analysis of the wavefunction of the universe for more general scalars, which are described by a scalar in flat space with a time-dependent mass as well as time-dependent couplings. Treating the time-dependent mass in its Fourier space, the wavefunction integrands defined in this way satisfy novel recursion relations, connecting states with different masses and involving certain differential operators. For certain masses, these recursion relations have the flat-space massless case -- {\it i.e.} the conformally coupled scalar in cosmology -- as a seed, and thus the full structure of the wavefunction with these internal states is just inherited from the seed via these differential operators. This means that the very same combinatorial rules holding for the conformally coupled scalar can be carried over to this case via differential operators. This has an additional implication. From \cite{Benincasa:2018ssx} we learnt that the residues of all the poles of (the integrand of) the wavefunction of the universe for a massless scalar with time-dependent coupling constants can be interpreted as, or expressed in terms of, scattering processes. Consequently, via the differential operators in the recursion relations for these more general states, all the coefficients in a Laurent expansion around any of the singular points of the wavefunction integrand can also be expressed in terms of scattering amplitudes.
Indeed, what had been observed so far was that the leading coefficient of the Laurent expansion around the total energy pole is (proportional to) the flat-space scattering amplitude. For more generic, but still light, states we can perform a perturbative treatment of the mass corrections: we obtain a diagrammatics which is a straightforward generalisation of the one we have been using for the conformally-coupled case, and which is related to it via a limit that imposes energy conservation between the two edges joined by a mass insertion. Now, if we were interested in computing the wavefunction with a given internal massive state, treating the mass perturbatively one realises that there is a very specific class of graphs which contributes: {\it e.g.} given a two-site graph, the perturbative mass corrections to it are all line graphs with internal two-point vertices, representing the mass insertions. While it is indeed not trivial to re-sum them because of the time dependence of the mass (one does not end up with a simple geometric series), the fact that a very specific class of graphs is involved leaves some hope for the possibility of a resummation. We have not addressed this issue explicitly, leaving it for future work. One of the aims of the approach of \cite{Arkani-Hamed:2017fdk, Arkani-Hamed:2018ahb, Benincasa:2018ssx} is to find an underlying first-principle mathematical structure from which the wavefunction of the universe arises. The cosmological polytopes, which encode the wavefunction of the universe for the conformally coupled scalar in FRW cosmologies, are characterised -- as is any other positive geometry -- by a canonical differential form having logarithmic singularities only on the boundaries of the polytope. Its coefficient returns the wavefunction of the universe. Now, when we treat massive states, it is no longer true that even the wavefunction integrand has simple poles ({\it i.e.} logarithmic singularities) only.
This would in principle suggest that the wavefunctions for these states should not be describable in terms of positive geometries, at least according to our current understanding. In this paper, we show that this is not the case for some specific values of the mass: starting from the very same building blocks as the cosmological polytope, {\it i.e.} the space of triangles which can be intersected in the midpoints of two of their three sides to form an actual cosmological polytope, we can define a generalisation of this construction by requiring the triangles to be intersected in one of the intersectable midpoints and in {\it the vertex opposite to it}. Equivalently, we can enlarge the set of building blocks by considering both triangles and segments, and intersecting them in their midpoints (keeping the distinction between intersectable and non-intersectable sides for the triangles). Taking this last point of view, the result of such a prescription is a polytope whose canonical form $\Omega$ is nothing but Newton's difference quotient of the canonical form $\hat{\Omega}$ that one would obtain by intersecting the very same number of triangles. Thus, a degenerate limit of $\Omega$ returns the derivative of $\hat{\Omega}$ with respect to the energies associated to the midpoints where the triangles have been intersected with the segments. All the polytopes constructed by considering two segments for each triangle, one intersected on each intersectable side, are characterised by a canonical form that, in the degenerate limit, returns the wavefunction integrand for the $l\,=\,1$ states. These polytopes are still in a $1-1$ correspondence with graphs, with the triangles still associated to two-site graphs and the segments to tadpoles ({\it i.e.} one-site one-loop graphs). The faces of these polytopes beautifully encode the information of the wavefunction, {\it before} the degenerate limit.
In particular, the scattering amplitude emerges from a higher-codimension face: this is just the statement that the scattering amplitude is associated with the leading term in the Laurent expansion around the total energy pole. Beautifully, the codimension of this face {\it is} the order of such a pole. This is more generally true for other faces. One of the important points of \cite{Benincasa:2018ssx} was that the tree wavefunction can be reconstructed from the flat-space scattering amplitude by requiring the absence of certain unphysical singularities, while the loop wavefunctions can be obtained via a particular projection. This is also true in the case discussed in the present paper, and it is manifest in the polytope/graph picture. We can consider a cosmological polytope constructed in the standard way, to which a tree graph is associated. We can then project it through cones with origin in ${\bf x}_i-{\bf x'}_i$, where ${\bf x}_i$ and ${\bf x'}_i$ are associated with the sites of the $i$-th outermost two-site subgraph: such a projection produces a polytope whose associated graph has an external tadpole for each $i$. Thus, the polytopes introduced in this paper can be reconstructed from the knowledge of the flat-space scattering amplitudes via the class of projections just discussed. As mentioned at the beginning of this section, we have been just scratching the surface, and a large number of open questions remain. Let us mention two of them, which are most immediately relevant to the discussion presented in this paper. Our analysis holds for light states only, {\it i.e.} in dS${}_{d+1}$ for masses $m\,\in\,[0,\,d/2]$ (the complementary series in $d\,=\,3$), and in cosmologies $a(\eta)\,=\,\eta^{-\alpha}$ for $m\,=\,0$. So, together with investigating the possible resummation of the graphs contributing when the mass is treated perturbatively, it would be interesting to explore the structure for heavier states.
Notice that the recursion relation we proved is valid in general. However, for heavier states it does not have a clear seed and, importantly, it seems to introduce states which are outside the Hilbert space. Secondly, we are now in a position to treat states with non-zero spin. The natural first candidate is the wavefunction for spin-$1$ states. However, what we have been accustomed to doing so far is performing a graph-by-graph analysis. While this has been illuminating in the case of scalars, when we move to spin-$1$ states considering the sum of graphs becomes compulsory, because we need a gauge-invariant observable, while a single graph is always gauge-dependent. This goes in parallel with the issue of finding a picture for the sum of graphs even in the scalar case: we expect such a picture to exist because we already know the relevant underlying combinatorial structures for scattering amplitudes of scalar interactions \cite{Arkani-Hamed:2017mur, Frost:2018djd, Salvatori:2018aha, Banerjee:2018tun, Raman:2019utu}, which are all included in the cosmological polytope description. \section*{Acknowledgements} It is a pleasure to thank Humberto Gomez, Enrico Pajer and Cristian Vergu for insightful discussions. I am especially indebted to Enrico Pajer for comments on the paper. I would also like to thank the developers of SageMath \cite{sagemath}, Maxima \cite{maxima} and TikZ \cite{tantau:2013a}. I am supported in part by a grant from the Villum Fonden, an ERC-StG grant (N. 757978) and the Danish National Research Foundation (DNRF91).
\section{\label{Introduction}Introduction} Over the past few decades, thermoelectric effects have emerged as powerful tools to study the electronic properties of materials. Being intimately connected to the electrical conductivity $\sigma$ and thermal conductivity $\kappa$, the thermoelectric conductivity $\alpha$ provides different, but complementary, information. In the case of metals and semiconductors, the diagonal component $\alpha_{xx}$ of the thermoelectric tensor $\tilde{\alpha}$ is the dominant thermoelectric response \cite{behnia2015fundamentals,Behnia2016}. The off-diagonal elements, which typically appear only in the presence of a magnetic field, are usually small due to generic symmetries of the Fermi surface. In the case of superconductors, the dominant response is the off-diagonal component $\alpha_{xy}$, and the diagonal response is vanishingly small. This is because in superconductors the source of the thermoelectric effects is primarily superconducting fluctuations, rather than quasiparticles as in the case of metals. Since superconducting fluctuations are carriers of entropy, they travel down the temperature gradient, generating a transverse phase-slip voltage in the process. Since the signal arises purely from superconducting fluctuations, measurement of $\alpha _{xy}$ becomes particularly interesting in the field of 2D disordered superconductors. The physical property that is accessible in a thermoelectric measurement is the Nernst effect, i.e. the transverse voltage per unit temperature gradient in the presence of a perpendicular magnetic field: $ N = \frac{\partial V _y}{\partial \nabla T} \big| _{\nabla T || x ; B || z}$, and the Nernst coefficient $\nu=\frac{\partial N}{\partial B}\big| _{B \rightarrow 0}$. In the case of superconductors, which are characterized by an effective particle-hole symmetry, the Nernst coefficient can be expressed as the product of the resistivity and the transverse Peltier coefficient: $\nu=\rho_{xx} \cdot \alpha_{xy}$.
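In practice these quantities are extracted from directly measured voltages, temperatures and geometry. A minimal sketch of how the definitions above combine (all numerical values are hypothetical, chosen only for illustration):

```python
import math

# Hypothetical raw measurements, for illustration of the definitions only
V_y = 5e-9      # transverse (Nernst) voltage [V]
w = 1e-3        # separation of the transverse voltage contacts [m]
dT = 20e-3      # temperature difference along the sample [K]
L = 4e-3        # separation of the two thermometers [m]
B = 0.5         # perpendicular magnetic field [T]
rho_xx = 2e-6   # longitudinal resistivity [Ohm m]

grad_T = dT / L            # temperature gradient [K/m]
N = (V_y / w) / grad_T     # Nernst signal N = E_y / |grad T|  [V/K]
nu = N / B                 # Nernst coefficient in the linear (B -> 0) regime
alpha_xy = nu / rho_xx     # transverse Peltier coefficient, via nu = rho_xx * alpha_xy

assert math.isclose(N, 1e-6)   # N = 1 uV/K for these numbers
```

Note the ordering of operations: the Nernst signal is formed from fields and gradients (not bare voltages), so the contact separation and thermometer spacing enter explicitly.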
During the past few years the Nernst effect has been shown to be a very effective tool to study vortex motion in the fluctuation regime both above and below $T_C$ \cite{Wang2002,Wang2006,Pourret2006,Pourret2007,Spathis2008,Roy2018}. In 2D superconductors Nernst effect measurements become especially appealing due to several exotic vortex phases which are expected in these thin films. These systems often undergo a direct superconductor to insulator transition as a function of thickness or disorder. Experiments have revealed signs of Cooper pairing and a finite energy gap in the insulating phase. Such a phase has been dubbed a "Bosonic insulator". An example is a thin film of amorphous indium oxide a-InO${_x}$, in which evidence for vortex motion and a finite energy gap has been detected in the insulator \cite{Poran2011,Kopnov2012,Sacepe2011,Sherman2012,Sherman2013}. Indeed, significant Nernst coefficients have been measured in both the superconducting and the insulating phases \cite{Roy2018}. Experiments on other systems \cite{Aubin2006,Marrache-Kikuchi2008,Tsen2016,Kapitulnik2019} and theories \cite{Das1999,Dalidovich2001} have also raised the possibility of an intermediate anomalous "Boson metal" phase between the insulator and the superconductor. Such a phase, which contradicts the accepted notion that a 2D metallic state cannot exist, is under heavy deliberation nowadays \cite{Tamir_2019}, and Nernst measurements may assist in the elucidation of this issue. In yet other systems, like MoGe, an exotic hexatic vortex fluid is encountered on the approach to the zero-resistance state in the presence of a magnetic field \cite{Roy2019}. By studying the field dependence of the Nernst signal in systems like these, the characteristic length scales associated with superconductivity can be extracted, giving information about the underlying physical processes at both the microscopic and mesoscopic levels.
In high-T$_c$ cuprates, the Nernst effect has been used extensively in attempts to understand the nature of the pseudogap state \cite{Wang2002,Wang2006}. The nature of the pseudogap state of high-T$_c$ cuprates like YBCO is hotly debated even to this day, with suggested explanations diverging between two key ideas: a fluctuation-dominated pseudogap phase with preformed Cooper pairs, and a competing-order hypothesis that describes the pseudogap state as a competing ground state with a magnetic order. The Nernst effect, originating purely from superconducting fluctuations, has played a key role as a complement to electrical conductivity measurements, and has helped to refine the phase diagrams of many such systems \cite{Wang2002,Ri1994,Tafti2014}. In semimetals like bismuth and graphite, the Nernst effect is of electronic origin, and shows Landau quantization in the form of an oscillatory response to a magnetic field, along the lines of de Haas-van Alphen and Shubnikov-de Haas oscillations \cite{Behnia2007,Mangez1976,STEELE1955,Zhu2010,Zhu2011}. Based on these observations, the Nernst effect has emerged as a new tool to study the Fermi surface properties of 2D materials like graphite \cite{Checkelsky2009}. The anomalous Nernst effect, like the anomalous Hall effect, may arise even at zero magnetic field in materials such as Weyl semimetals, which have non-zero Berry curvature in reciprocal space \cite{Xiao2006,Watzman2018,Sakai2018,Guin2019,PhysRevLett.118.136601}. A special form of the anomalous Nernst effect is observed in materials with strong spin-orbit interaction. Termed the spin Nernst effect \cite{Meyer2017,Sheng2017}, it is an accumulation of electronic spins, instead of charge, in a direction transverse to the heat flow, caused by the differential scattering of up-spin and down-spin electrons by the heavy element atoms. The Nernst effect thus clearly has a rich phenomenology and is an important tool to study the physics of several classes of materials.
It should be noted that the Nernst response $N$ typically varies over many orders of magnitude with temperature, but seldom exceeds a few $\mu$V/K. Information is often contained in regimes where the signal is only a few nV, which makes precise measurement of utmost importance. Two classes of Nernst effect measurement are encountered in the literature, depending on whether lockin techniques are used for the voltage measurement. DC measurements, involving a constant heating current and constant temperature gradient, are relatively simple to interpret, but come with the additional complexity of compensating for stray thermoelectric voltages, as well as extreme sensitivity to environmental and instrumental noise. Noise is kept within acceptable levels by careful shielding and filtering, and requires the use of special low-noise electronics. Stray thermoelectric voltages originating in the measurement leads are minimized by avoiding junctions altogether in the measurement path. Using this method, a signal detection threshold of 5 nV \cite{Wang2006} or even 1 nV \cite{Pourret2006} has been achieved. Following the original suggestion by Corbino \cite{Corbino1910} of using higher harmonics of the alternating current in an incandescent lamp to study properties of the filament, the use of higher harmonics like $2\omega$ or $3\omega$ has now emerged as a powerful technique to study material properties that can be modulated through temperature oscillations. Some of the most sensitive measurements are those involving specific heat ($2\omega$ or $3\omega$) \cite{Lu2001,Poran2014}, thermal conductivity ($3\omega$) \cite{Lu2001,Sikora2012,Mishra2015} and thermoelectric effects ($2\omega$) \cite{Kettler1986,Oussena1992,Choi2001}. AC measurements using lockin techniques are somewhat less demanding with regard to noise shielding and special low-noise amplifiers, but their interpretation is complex and usually requires elaborate calibration procedures.
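The reason the thermoelectric signal appears at the second harmonic follows directly from Joule heating: a heating current at frequency $\omega$ dissipates power with a DC part and a part oscillating at $2\omega$, so the temperature oscillation, and the thermoelectric voltage tracking it, appears at $2\omega$. A symbolic sketch of this elementary step (not part of the original setup):

```python
import sympy as sp

t = sp.symbols('t', real=True)
I0, R, w = sp.symbols('I_0 R omega', positive=True)

# Heating current at frequency omega
I = I0 * sp.sin(w * t)

# Joule power: a DC offset plus a component at 2*omega, so the induced
# temperature (and thermoelectric voltage) oscillates at the second harmonic
P = I**2 * R
P_expected = (I0**2 * R / 2) * (1 - sp.cos(2 * w * t))
assert sp.simplify(P - P_expected) == 0
```

The $2\omega$ amplitude of the power, $I_0^2 R/2$, is what sets the size of the temperature oscillation detected by the lockin.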
In the context of the Nernst effect, the use of low-frequency alternating currents for heating, with detection of the $2\omega$ signal by lockin techniques, has resulted in a signal detection threshold of 0.5 nV \cite{Kettler1986}. Thus, several factors need to be considered to make accurate measurements of the Nernst effect: (i) the sample and measurement leads should be shielded from ambient electromagnetic radiation; (ii) thermoelectric voltages at junctions of measurement wires must be carefully compensated; (iii) a large enough temperature gradient should be generated to produce a measurable signal; (iv) the temperatures and temperature gradients must be accurately determined. This is achieved by making heat transfer between the substrate and the heater/sample/thermometers efficient. In the present article we describe a method suitable for the measurement of magneto-thermoelectric effects of thin film samples at low temperatures, in the range $\sim$ 0.3K to 10K. The novelty of our technique rests in the fact that AC techniques have not been used so far in the low-temperature regime, where the time scales for heat transfer increase dramatically due to dropping thermal conductivities. Because slow heat transfer rates are a central problem in the use of AC techniques at low temperature, we use lithographically fabricated on-board thermometers instead of commercial calibrated temperature sensors. This has enabled us to use a relatively high heating frequency (2 Hz) without generating significant thermal lag, which has helped significantly in achieving a remarkable signal detection threshold of $\sim$1 nV, despite a DC noise level as high as 30 nV in the same setup. \section{\label{Experiment}Experiment} \subsection{Samples} \begin{figure}[h] \vspace{0cm} \centering \includegraphics[width=0.5\textwidth]{chip.pdf} \caption{(color online) (a) Optical image of the chip with 4 devices: 2 thermometers, 1 heater and 1 sample.
(b) and (d) Magnified images of the two thermometers, 30 nm thick insulating indium oxide with leads for four-terminal resistance measurement. (c) Magnified image of the sample position, with leads for measurement of the Seebeck ($S_{xx}+$,$S_{xx}-$) and Nernst ($S_{xy}+$,$S_{xy}-$) signals. The Seebeck and Nernst leads double as Hall and resistance leads respectively when current is passed between $I+$ and $I-$. (e) Chip carrier on cold finger with the Nernst chip.} \label{chip} \end{figure} The thermoelectric setup, comprising a heater, two thermometers and the sample, is fabricated by optical lithography on a chip of MEMpax\texttrademark\ borosilicate glass of dimensions 1cm$\times$1cm$\times$0.3mm (Fig. \ref{chip}). A gold meander serves as a heater. It is fabricated out of 30 nm thick Au with a 4 nm underlayer of Cr, and is designed to have a resistance of $200 \Omega$ at room temperature. The two thermometers are fabricated from e-beam evaporated indium oxide, with the growth parameters tuned to achieve moderately insulating behaviour ($R _{\square} \sim 10-200k \Omega $ at low temperature). The sample is grown by standard thin film deposition techniques such as thermal/e-beam evaporation, pulsed laser ablation, etc. Layered 2D materials can also be used as samples, using the van der Waals transfer method with suitable modifications to the electrical leads. The choice of substrate is dictated by the requirement of a very low thermal conductivity, which enables the setting up of a large enough temperature gradient without the application of excessive heating power. Glass has a thermal conductivity in the range of $0.01\ W m^{-1} K^{-1}$ at 1K, among the lowest of all materials. Glass is also a natural choice when the films to be grown are of amorphous nature. The glass substrate is suspended from the cold finger in the manner shown in Fig.\ref{chip}e, thermal contact being provided on the substrate edge far from the heater.
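The role of the low-$\kappa$ substrate can be seen from a rough one-dimensional steady-state conduction estimate; the numbers below are representative of the geometry described above (the heater power is a hypothetical value, not the calibrated data of the next section):

```python
# Rough 1D steady-state conduction estimate: P = kappa * A * dT / L.
# All values are representative assumptions, not calibrated measurements.
kappa = 0.01          # glass thermal conductivity near 1 K [W/(m K)]
A = 10e-3 * 0.3e-3    # substrate cross-section: 1 cm wide x 0.3 mm thick [m^2]
L = 4e-3              # distance over which the gradient is set up [m]
P = 1e-6              # heater power [W] (hypothetical)

dT = P * L / (kappa * A)   # temperature difference across L [K]
print(f"dT ~ {dT * 1e3:.0f} mK")   # ~100 mK scale for microwatt heating
```

The estimate shows that on a glass substrate, microwatt-level heating already sets up a temperature difference of order 0.1 K, which is comfortably resolvable by the on-board thermometers.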
\begin{figure}[ht] \vspace{1cm} \centering \includegraphics[width=0.45\textwidth]{dT_resubmit_new.pdf} \caption{(color online) Temperature difference between the two thermometers as a function of heater current (DC): comparison between experimental points (symbols, each color indicating a different sample) and finite element modelling (blue line). Inset (a): isothermal contours for a 1 mA heater current obtained by finite element modelling. Inset (b): simulated temperature profile in a direction parallel to the heat current. The straight line indicates the region of the linear approximation used for the experiment.} \label{dT} \end{figure} \subsection{Temperature measurements} In the absence of standard calibration curves, the thermometers are calibrated through a measurement of R-T curves against calibrated sensors in the cryostat prior to the measurement. Both AC and DC measurement techniques are employed. For DC measurements, two pairs of Yokogawa 7651 current sources and Keithley 2000 multimeters are used. The current is kept in the range of 10--100 nA. For AC measurements, two SR830 lock-in amplifiers are used, the current being 10 nA. Temperature gradients are then measured as a function of the applied heater current, which forms the second part of the calibration in the case of DC measurements: $\Delta T _{DC} = T ^{1} _{DC} - T ^{2} _{DC}$, $\nabla T _{DC} = \Delta T _{DC}/ (X_2 -X_1)$. In Fig.~\ref{dT}, the temperature difference between the two thermometers is plotted as a function of the current through the heater for different, identically prepared samples, along with a simulated curve generated from a finite element simulation. For the simulation, the COMSOL finite element package was used \cite{Note2}, with tetrahedral elements having a maximum mesh size of 240 $\mu$m. Adiabatic boundary conditions were chosen for the free surfaces of the sample and isothermal conditions for the interface with the cold finger.
Low-temperature thermal conductivity and specific heat were modelled using values obtained from \cite{R.B.Stephens1973,Note1}. Calculated isotherms indicated no transverse temperature gradients at the sample position (Fig.~\ref{dT}a) and very small non-linearity in the temperature gradient between the two thermometers at X = 2.5 mm and X = 6.5 mm (Fig.~\ref{dT}b). As in any experiment, the measurement of temperatures and temperature gradients is subject to two types of errors, random and systematic. Random errors in the form of measurement noise can be estimated from the sensitivity $\lvert\frac{T}{R} \frac{dR}{dT}\rvert$ of the sensor at various temperatures (Fig.~\ref{sensitivity}). Our resistance measurement noise $\frac{dR}{R}\vert _{noise}$, being a constant $5 \times 10^{-3}$ between 0.5 K and 10 K, enabled us to estimate a temperature readout error of $\sim$2.5 mK at 0.3 K and $\sim$25 mK at 3 K. By adopting the AC technique for the measurement of thermometer resistances, the noise level $\frac{dR}{R}\vert _{noise}$ can be reduced to $3 \times 10^{-4}$, improving the resolution by an order of magnitude to 0.2 mK at 0.3 K. However, its relevance to temperature measurement accuracy is unclear owing to a lack of detailed knowledge of the reproducibility of the thermometer characteristics. It may be noted here that commercial Cernox\texttrademark{} sensors are limited to a temperature accuracy of 3--6 mK in this temperature range, irrespective of the quality of measurement \cite{Note1}. We are, however, aware of a temperature difference measurement accuracy of 0.1 mK \cite{Pourret2006} and a temperature measurement accuracy of $\sim 5\ \mu$K \cite{Ptak2005} in this temperature range. Systematic errors are mainly due to the use of the linear approximation described above. Such errors were estimated from the finite element simulation. At a DC heating current of 1 mA, the following numbers were obtained: $T _{DC}(\mathrm{exact}) = 0.756$ K; $T _{DC}(\mathrm{lin.\,approx.}) = 0.726$ K; $\nabla T _{DC}(\mathrm{exact}) = 0.0726$ K/mm; $\nabla T _{DC}(\mathrm{lin.\,approx.}) = 0.0739$ K/mm, indicating that for practical purposes the linear approximation is adequate, since the systematic error in the estimate of $\nabla T _{DC}$ is less than $2 \%$. On the other hand, measurement noise adds another $\sim 2 \%$ uncertainty to the temperature gradient, making the total uncertainty $\sim 4 \%$. \begin{figure}[ht] \vspace{1cm} \centering \includegraphics[width=0.45\textwidth]{sensitivity_new_new.pdf} \caption{(color online) Sensitivity $ \frac{T}{R} \frac{dR}{dT}$ of a typical indium oxide thermometer (black line) compared with two common types of thermometers used in this temperature range: Cernox\texttrademark{} and ruthenium oxide. Inset: R vs T curve of the same thermometer. Red bars indicate the range of sensitivities of the Cernox sensor CX1030. Blue bars indicate the range of sensitivities covered by 3 types of RuO\textsubscript{2} sensors: RX-102A, RX-103A and RX-202A. Data reproduced from Temperature Measurement and Control Catalog, Lake Shore Cryotronics, Inc. (2016).} \label{sensitivity} \end{figure} \subsection{DC measurement} To minimize unwanted thermoelectric voltages at metal junctions, two pairs of manganin wires are run directly from the measuring instrument to the chip carrier. The thermoelectric signals are measured via Keithley 182 or Keithley 2182 nanovoltmeters using a relaxation+acquisition protocol. The magnetic field is applied perpendicular to the sample plane with superconducting magnets in a \textsuperscript{3}He cryostat. Since an ordinary Nernst effect is antisymmetric with respect to the applied magnetic field, any residual background voltage is removed by numerically anti-symmetrizing the raw data. Figs.~\ref{DCAC}a and \ref{DCAC}b show typical Nernst responses of an a-InO$_x$ 2D disordered superconductor as a function of temperature and magnetic field.
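The numerical anti-symmetrization step can be sketched as follows (a minimal illustration with synthetic data, not the actual analysis code; a field sweep symmetric about zero is assumed):

```python
import numpy as np

def antisymmetrize(B, V):
    """Return the field-antisymmetric part V_a(B) = [V(B) - V(-B)] / 2,
    which isolates the Nernst contribution and discards any
    field-symmetric background voltage."""
    B = np.asarray(B, dtype=float)
    V = np.asarray(V, dtype=float)
    # interpolate V at -B so the sweep need not contain exact mirror points
    return (V - np.interp(-B, B, V)) / 2.0

# synthetic sweep: odd (Nernst-like) term plus an even background
B = np.linspace(-1.0, 1.0, 201)      # field, arbitrary units
V = 5.0 * B + 2.0 * B**2 + 7.0      # raw transverse voltage, nV
V_a = antisymmetrize(B, V)          # recovers the odd 5.0 * B part
```

The even background ($2B^2 + 7$ here) cancels exactly, mimicking the removal of residual thermoelectric offsets from the measured transverse voltage.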
In this case the Nernst signal is generated by Gaussian fluctuations of the superconducting order parameter close to a disorder-tuned superconductor--insulator quantum phase transition. The asymmetric peaked structure is typical of superconductors in this regime \cite{behnia2015fundamentals}. The rms noise level in these DC measurements is in the range of 10 nV. We note that this noise level is higher than the instrumental noise floor. Our samples are highly disordered superconductors on the verge of being insulators, which possess intrinsic sample noise due to quantum superconducting phase fluctuations, two-level systems, etc. \begin{figure*} \vspace{1cm} \centering \includegraphics[width=0.95\textwidth]{ac_dc_new.pdf} \caption{(color online) DC and AC Nernst measurements of a 30 nm thick weakly disordered superconducting a-InO$_x$ film (a-c) and a 25 nm thick superconducting amorphous MoGe film (d-f). (a) Large scale DC measurements of an a-InO$_x$ film. (b) A DC Nernst response near $T_c$ taken from (a). (c) AC Nernst response at higher temperatures. (d) Large scale DC measurements of an amorphous MoGe film. (e) Zoom into the low field regime of (d). (f) AC Nernst measurement of the low field regime.} \label{DCAC} \end{figure*} Figure \ref{DCAC}d shows the DC measurement of the Nernst effect in a 25 nm thick MoGe film. Here the signal is generated by the motion of superconducting vortices after the vortex lattice undergoes a melting transition due to the application of magnetic field \cite{Roy2019}. A zoom into the low field regime (Fig.~\ref{DCAC}e) indicates that a finite Nernst effect may be present. However, it is overshadowed by the measurement noise. Hence in this regime the DC measurement technique, which may be adequate for most purposes, is not sufficient and a more sensitive AC technique is required.
\begin{figure} \vspace{1cm} \centering \includegraphics[width=0.45\textwidth]{phase+f.pdf} \caption{(color online) (a) Contours of constant phase difference between the two thermometers as a function of heater frequency and cryostat temperature. \textquoteleft$\times$\textquoteright\ marks the parameter pair where (c) was obtained. (b) AC temperature amplitudes of the two thermometers, showing that the temperature difference decreases with increasing heating frequency. Heating current: 5 mA. (c) Simultaneous measurement of heater current, power and temperature from the two thermometers. The plot is obtained from a measurement of the demodulated outputs of two lock-in amplifiers measuring the thermometer resistances (operating at a much higher modulating frequency), along with the heater driving voltage, in an oscilloscope. Though the phase difference between the thermometers is small, a large phase difference is seen to exist between the calculated heater power and the two temperatures.} \label{phase} \end{figure} \subsection{AC measurement} This technique involves driving the heater with a sinusoidal current. An AC heating current at frequency $\omega$ generates a heating power which has two components: a constant background and an oscillating component at frequency $2\omega$. As a result, the temperature gradients can be divided into two components. The measurement involves detection of the transverse voltage at a frequency of $2\omega$ by lock-in techniques \cite{Kettler1986,Oussena1992,Choi2001}. Calibration of the set-up involves the additional step of measuring the AC component of the temperature gradient $\nabla T _{AC}$. This is done by passing a constant (DC) bias current through the thermometers and recording the voltage amplitude at $2\omega$ with lock-in amplifiers. The lock-in method eliminates higher harmonics ($> 2^{nd}$) of the heater current, which arise due to the strong temperature dependences of the thermal conductivity and specific heat of the substrate.
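The two heating-power components follow from elementary trigonometry. Writing the heater resistance as $R_h$ (notation introduced here for illustration) and the drive as $I(t) = I_0 \sin \omega t$, the dissipated power is
\begin{equation*}
P(t) = I_0^2 R_h \sin^2 (\omega t) = \frac{I_0^2 R_h}{2} \left[ 1 - \cos (2 \omega t) \right],
\end{equation*}
i.e., a constant background $I_0^2 R_h / 2$ plus an oscillation at $2\omega$, which is why the thermoelectric signal is detected at the second harmonic of the drive.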
When the temperatures of the two thermometers are in phase, $\nabla T _{AC}$ can be simply estimated from the difference in the temperature amplitudes at the two thermometers: $\Delta T _{AC} = T ^{1} _{AC} - T ^{2} _{AC}$, $\nabla T _{AC} = \Delta T _{AC}/ (X_2 -X_1)$. However, factors like the thermal conductivity and specific heat of the substrate generally cause the phase difference to be non-zero even at low frequencies. So an additional goal of the calibration is to find an optimal frequency that is high enough for a lock-in measurement to be practical, and at the same time low enough for the phase difference between the thermometers to be small. $\omega$ also needs to be low enough for the amplitudes $T ^{1} _{AC}$ and $ T ^{2} _{AC}$ to be large. This has to be found experimentally, because even the simplest analytical treatment of such a situation gives rise to complicated expressions for the gradient and phase, whose applicability to real-world systems is unclear \cite{Sullivan1968}. The result of one such optimization is shown in Fig.~\ref{phase}a. At very low frequencies like 0.5 Hz the phase difference is small, and it increases monotonically with increasing frequency. It is also found to increase with increasing temperature, which probably reflects the different temperature dependences of the two factors, specific heat and thermal conductivity, which determine the relaxation time $\tau = C _p /K$ \cite{Sullivan1968}. We choose 1 Hz for our measurements, which provides the right compromise between ease of measurement and a small phase difference. In Fig.~\ref{phase}c, the instantaneous temperatures are plotted together with the heater current and power at 1 Hz driving frequency and a setpoint of 4 K. Interestingly, even though the phase difference between the two thermometers is minimal, a substantial frequency-dependent phase shift exists between the heater power and the two thermometers.
This is another effect of the finite thermal relaxation time of the substrate, and causes the Nernst signal to appear almost entirely in the Y-channel of the lock-in amplifier for these settings. Measurement of the Nernst signal is carried out using an EG\&G 7265 lock-in amplifier and SR552 and SR560 preamplifiers. The SR552, with an input impedance of $100\ \mathrm{k\Omega}$ and an input noise level of $\sim 4\ \mathrm{nV}/\sqrt{\mathrm{Hz}}$ at 2 Hz, is used as the first stage and is connected directly to the sample leads. Its output is fed to the SR560, where it is sent through a built-in band-pass filter with a passband of 0.3 Hz to 10 Hz and amplified by a factor of about 1000 before being sampled by the lock-in amplifier. The lock-in operation is carried out with a time constant of 10 s, giving it a bandwidth of $\sim$0.02 Hz. Due to the phase shift between the heating power and the temperature gradient mentioned above, both the X and Y components are recorded. After a measurement, which involves a set of magnetic field levels at constant temperature and heating current amplitude, the phase of the signal is adjusted to make one of the components as close to zero as possible. The resultant noise level after antisymmetrization is in the range of 1 nV. Fig.~\ref{DCAC}f shows the results of the AC Nernst measurement for the low field regime of the MoGe sample. A clear, reproducible signal is visible in a region that was completely overshadowed by noise in the DC measurement. With minimal signal processing, a noise level in the range of 300 pV was obtained, a vast improvement over the $>$30 nV noise level in the DC measurement for this sample (Fig.~\ref{DCAC}e) or the value of 10 nV for an InO$_x$ sample (Fig.~\ref{DCAC}a). This is to be compared with the best DC (1 nV) and AC (500 pV) measurements of the Nernst effect found in the literature \cite{Pourret2006,Kettler1986}.
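The quoted detection bandwidth follows from the lock-in time constant. Assuming a simple single-pole output filter (an idealization of the actual filter),
\begin{equation*}
f_c = \frac{1}{2 \pi \tau} = \frac{1}{2 \pi \times 10\ \mathrm{s}} \approx 0.016\ \mathrm{Hz},
\end{equation*}
consistent with the $\sim$0.02 Hz figure quoted above; steeper filter slopes change this only by factors of order unity.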
\subsection{Artefacts} One drawback of a one-heater-two-thermometer thermoelectric setup is that it is difficult to control the temperature and the temperature gradient independently. For DC measurements, the temperature of the sample is obtained by interpolation between the two thermometers. The same is applicable to an AC measurement when the phase difference between the thermometers is low enough. But there is no easy way of obtaining $T_{DC}$ and $\nabla T _{AC}$ when higher frequencies are used, apart from direct measurement of the demodulated signal with an oscilloscope as shown in Fig.~\ref{phase}c. This can make measurements slow at frequencies above 2 Hz, involving regression methods to estimate the phase difference at $2\omega$. This condition is generally avoided. \begin{figure} \vspace{1cm} \centering \includegraphics[width=0.45\textwidth]{artefact1.pdf} \caption{(color online) Corrupted Nernst signal as a function of magnetic field due to the artefacts mentioned in the text. The sample is MoGe of thickness 25 nm. The heater current is 5 mA at a cryostat temperature of 4 K.} \label{artefact} \end{figure} A second artefact arises when a current of large amplitude ($\gtrsim$3 mA) is used for driving the heater. While this creates a larger $\nabla T _{AC}$, which improves the S/N ratio, a contribution from $T _{AC}$ starts to influence the observed readings. Put simply, the AC voltage measured in a lock-in measurement is composed of two parts: $\frac {dV}{dt}(T,\nabla T) = \frac{\partial V}{\partial \nabla T} \big| _T \frac{d\nabla T}{dt} + \frac{\partial V}{\partial T} \big| _{\nabla T} \frac{dT}{dt} $. The first part constitutes the Nernst response of the sample, while the second is an artefact with no physical significance. It is small under normal circumstances, but when measuring the Nernst response of a superconductor, the second term can become very important close to the sample's $T_c$, where the resistance of the sample changes rapidly with temperature.
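The phase behaviour of this artefact can be sketched by idealizing the local temperature as a single harmonic, $T(x,t) = T_{DC}(x) + T_{AC}(x) \sin \left[ 2\omega t + \phi(x) \right]$ (higher harmonics neglected). Its gradient is then
\begin{equation*}
\nabla T = \nabla T_{DC} + \nabla T_{AC}\, \sin (2\omega t + \phi) + T_{AC}\, \nabla \phi\, \cos (2\omega t + \phi),
\end{equation*}
so the contribution proportional to $T_{AC} \nabla \phi$ is in quadrature with the genuine $\nabla T_{AC}$ oscillation.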
There is no easy way of separating these two components, and at high temperatures and magnetic fields the spurious voltage can dominate the measured transverse voltage. This is illustrated in Fig.~\ref{artefact} for an AC heater current of 5 mA at a setpoint of 4 K, the sample being superconducting MoGe with a $T_c$ of 7 K. It can be shown with a simple mathematical treatment that this artefact appears as a voltage orthogonal in phase to the Nernst signal, and is proportional to $\nabla \phi \times T _{AC}$, the product of the phase gradient and the temperature oscillation amplitude. Thus the phase gradient, though small, can cause significant interference with the Nernst measurement at large heating currents, which result in a large $T _{AC}$. \section{Conclusions} We have developed a thermoelectric measurement setup suitable for thin films at low temperatures. With the heater and thermometers fabricated on-chip, the setup minimizes thermal lags, thus enabling accurate measurements of temperatures and reliable AC measurements. We have developed comprehensive protocols for measurements in DC and AC modes. In AC mode with severely restricted bandwidth, a noise level of 0.3--1 nV is obtainable, much smaller than the noise level of the DC method applied to the same sample. An array of technical considerations and artefacts limits the useful regime to 0.5 Hz to 2 Hz for glass substrates below 10 K. Though with our present measurement apparatus it is possible to achieve 1 nV resolution regularly, there are several sources of noise pickup which become evident in DC measurements. In the absence of intrinsic sample noise, it is possible to improve the signal quality further by using special low-noise electronics. The lack of reproducibility of the temperature gradient between different samples (Fig.~\ref{dT}) is at least partly due to errors in thermometry attributable to the high intrinsic noise and low reproducibility of the InO$_x$ thermometers.
This can be improved by using RuO$_2$ or NbN as the thermometric material. The one-heater-two-thermometer setup is particularly suited for the measurement of 2D superconducting thin films, for which the Nernst effect is an important tool to study quantum fluctuations. The technique can be extended to study the Nernst effect in van der Waals stacks and (by changing to a crystalline substrate like STO) in crystalline materials like cuprate thin films and iron chalcogenides. The low noise level makes it feasible to employ noise-measurement techniques to study fluctuations in thermoelectric effects, which have been predicted to be a useful tool in several branches of condensed matter physics, including the search for Majorana states in condensed matter systems \cite{Smirnov2018,Smirnov2019}, the study of vortex phase transitions in superconductors \cite{Chung2012} and the study of spin-Seebeck effects \cite{Matsuo2018}. \vspace{2cm} \begin{acknowledgments} The authors would like to thank Pratap Raychaudhuri, Kamran Behnia and Olivier Bourgeois for valuable discussions. This research was supported by the Israel Science Foundation (Grant No. 783/17) and NSF-BSF (Grant No. 2017677). \end{acknowledgments} \vspace{2cm} The data presented in this manuscript are available from the corresponding author upon reasonable request.
\section{Introduction} \label{sec:introduction} \subsection{Model description} \label{sec:model-description} In this article, we consider a quasi-stationary fluid-structure interaction problem for plaque growth, which describes the formation of plaque through the reaction-diffusion and transport of different cell species in human blood and vessels. The problem is set up in a smooth domain $ \Omega^t \subset \mathbb{R}^3 $, with three disjoint parts $ \Omega^t = \Omega_{f}^{t} \cup \Omega_{s}^{t} \cup \Gamma^{t} $, where $ \Gamma^t = \partial \Omega_{f}^{t} $, $ \Bar{\Omega_{f}^{t}} \subset \Omega^t $ and $ \Omega_{f}^{t} $, $ \Omega_{s}^{t} $ denote the domains of the fluid and the solid, respectively. $ \Gamma_s^t = \partial \Omega^t $ stands for the outer boundary of $ \Omega^t $, which is also a free boundary. Given $ T > 0 $, let \begin{alignat*}{4} Q^T & := \bigcup_{t \in (0,T)} \Omega^t \times \{ t \}, & \quad Q_{f/s}^T & := \bigcup_{t \in (0,T)} \Omega_{f/s}^t \times \{ t \}, \\ S^T & := \bigcup_{t \in (0,T)} \Gamma^t \times \{ t \}, & \quad S_s^T & := \bigcup_{t \in (0,T)} \Gamma_s^t \times \{ t \}. \end{alignat*} As in \cite{AL2021a,AL2021b,Yang2016}, the fluid is described by the incompressible Navier--Stokes equations \begin{alignat}{3} \rho_f (\partial_t + \mathbf{v}_f \cdot \nabla) \mathbf{v}_f & = \Div \mathbb{T}_f, && \quad \text{in } Q_f^T, \label{Eqs:v_f-Eulerian}\\ \Div \mathbf{v}_f & = 0, && \quad \text{in } Q_f^T, \label{Eqs:fluid-mass-Eulerian} \end{alignat} where $ \mathbb{T}_f := - \pi_f \mathbb{I} + \nu_f (\nabla \mathbf{v}_f + \nabla^\top \mathbf{v}_f) $ denotes the Cauchy stress tensor. $ \bv_{f} : \mathbb{R}^3 \times \mathbb{R}_+ \rightarrow \mathbb{R}^3 $, $ \pi_f : \mathbb{R}^3 \times \mathbb{R}_+ \rightarrow \mathbb{R} $ are the unknown velocity and pressure of the fluid. $ \rho_f > 0 $ stands for the fluid density and $ \nu_f $ represents the viscosity of the fluid.
Compared to the problems in \cite{AL2021a,AL2021b,Yang2016}, where an evolutionary neo-Hookean material was employed, the vessel in this manuscript is assumed to be quasi-stationary, since it moves far more slowly than the blood from a macroscopic point of view. Thus, we model the blood vessel by the equilibrium of a nonlinear elastic equation. To describe the elasticity conveniently, Lagrangian coordinates are commonly used, see e.g. \cite{Goriely2017:growth,Gurtin2010}. Thus, we set the reference configuration to be the initial domain, defined by $ \Omega := \rvm{\Omega^t}_{t = 0} $, as well as $ \Omega_{f} = \Omega_f^0 $, $ \Omega_{s} = \Omega_s^0 $ and $ \Gamma = \Gamma^0 $; then $ \Omega = \Omega_{f} \cup \Omega_{s} \cup \Gamma $. Let $ \mathbf{X} $ be the spatial variable in the reference configuration. Now we introduce the Lagrangian flow map \begin{equation*} \varphi: \Omega \times (0,T) \rightarrow Q^T, \end{equation*} with \begin{equation} \label{Eqs:Lagrangian flow map} \mathbf{x}(\mathbf{X}, t) = \varphi(\mathbf{X},t) = \mathbf{X} + \mathbf{u}(\mathbf{x}(\mathbf{X}, t), t) \end{equation} for all $ \mathbf{X} \in \Omega $ and $ \mathbf{x}(\mathbf{X}, 0) = \mathbf{X} $, where \begin{equation*} \mathbf{u}(\mathbf{x}(\mathbf{X}, t), t) = \left\{ \begin{aligned} & \bu_{f}(\mathbf{x}(\mathbf{X}, t),t) = \int_{0}^t \bv_{f}(\mathbf{x}(\mathbf{X}, t), \tau) \d \tau, && \text{if } \mathbf{X} \in \Omega_{f}, \\ & \bu_{s}(\mathbf{x}(\mathbf{X}, t),t), && \text{if } \mathbf{X} \in \Omega_{s}, \end{aligned} \right. \end{equation*} denotes the displacement of the fluid or the solid. In the sequel, unless otherwise stated, quantities with a hat indicate those in the Lagrangian reference configuration, e.g., $ \hat{\mathbf{u}}(\mathbf{X}, t) = \mathbf{u}(\mathbf{x}(\mathbf{X}, t), t) $, while operators with a hat act on quantities in Lagrangian coordinates.
Then the tensor field \begin{equation} \label{Eqs:deformation gradient} \mathbf{F}(\mathbf{x}(\mathbf{X},t),t) = \hat{\bF}(\mathbf{X},t) := \frac{\partial}{\partial \mathbf{X}} \varphi(\mathbf{X},t) = \hat{\nabla} \varphi(\mathbf{X},t) = \mathbb{I} + \hat{\nabla} \hat{\bu}(\mathbf{X}, t), \ \forall\, \mathbf{X} \in \Omega, \end{equation} with $ \hF_{f}(\mathbf{X}, 0) = \mathbb{I} $ and $ \hF_{s}(\mathbf{X}, 0) = \mathbb{I} + \hat{\nabla} \hu_{s}^0 $, is referred to as the deformation gradient, and $ J = \hat{J} := \det(\hat{\bF}) $ denotes its determinant. For the blood vessel, since growth is taken into account, we impose the so-called \textit{multiplicative decomposition} of the solid deformation gradient $ \hF_{s} $ as \begin{equation*} \hF_{s} = \hF_{s,e}\hF_{s,g} \end{equation*} with $ \hJ_s = \hJ_{s,e} \hJ_{s,g} $, where $ \hF_{s,e} $ is the pure elastic deformation tensor and $ \hF_{s,g} $ denotes the growth tensor, which will be specified later. For more details about the decomposition, see e.g. \cite{Goriely2017:growth,JC2012,RHM1994,Yang2016}. Inspired by Goriely \cite[Chapter 11--13]{Goriely2017:growth} and Jones--Chapman \cite[Section 3.2]{JC2012}, a general incompressible hyperelastic material is considered for the solid, \begin{equation} \label{Eqs:elastic-Eulerian} - \Div \mathbb{T}_s = 0, \quad \text{in } Q_s^T, \end{equation} where $ \mathbb{T}_s := - \pi_s \mathbb{I} + J_{s,e}^{-1} DW(\bF_{s,e}) \tran{\bF_{s,e}} $ stands for the Cauchy stress tensor. $ \pi_s : \mathbb{R}^3 \times \mathbb{R}_+ \rightarrow \mathbb{R} $ is the unknown pressure of the solid. The scalar function $ W : \mathbb{R}^{3 \times 3} \rightarrow \mathbb{R}_+ $ is called the strain energy density function (also known as the stored energy density), which requires some general assumptions for the sake of analysis.
\begin{assumption} \label{assumption:W-energy-density} $ $ \par \begin{enumerate}[label=\textbf{(H\arabic*)}] \item \label{assumptions:frame-difference} $ W $ is frame-indifferent, i.e., $ W(\mathbf{R} \mathbf{F}) = W(\mathbf{F}) $, for all $ \mathbf{R} \in SO(3) $ and $ \mathbf{F} \in \mathbb{R}^{3 \times 3} $, where $ SO(3) := \{\mathbf{A} \in \mathbb{R}^{3 \times 3}: \tran{\mathbf{A}} \mathbf{A} = \mathbb{I}, \det \mathbf{A} = 1\} $ is the set of all proper orthogonal tensors. \item \label{assumptions:smoothness} $ W \in C^4(\mathbb{R}^{3 \times 3}; \mathbb{R}) $. \item \label{assumptions:DW(I)} $ DW(\mathbb{I}) = 0 $, $ W(\mathbf{R}) = 0 $ for all $ \mathbf{R} \in SO(3) $. \item \label{assumptions:coercive} There exists a constant $ C_0 > 0 $, such that $ W(\mathbf{F}) \geq C_0 \mathrm{dist}^2(\mathbf{F}, SO(3)) $. \end{enumerate} Here, $ DW(\mathbf{F}) := \frac{\partial W}{\partial F_{ij}} \mathbf{e}_i \otimes \mathbf{e}_j $ for all $ \mathbf{F} \in \mathbb{R}^{3 \times 3} $. $ \mathrm{dist}(\mathbf{F}, SO(3)) := \min_{\mathbf{Q} \in SO(3)} \abs{\mathbf{F} - \mathbf{Q}} $. \end{assumption} \begin{remark} \label{remark:epllipticity} In fact, Assumption \ref{assumptions:coercive} implies that \begin{equation*} D^2 W(\mathbb{I})\mathbf{F} : \mathbf{F} \geq C_1 \abs{\mathrm{sym} \mathbf{F}}^2, \end{equation*} for some constant $ C_1 > 0 $, where $ \mathrm{sym} \mathbf{F}:= \frac{1}{2}(\mathbf{F} + \tran{\mathbf{F}}) $. Then for any $ \mathbf{a}, \mathbf{b} \in \mathbb{R}^3 $, one can derive the so-called \textit{Legendre-Hadamard} condition \begin{equation*} D^2 W(\mathbb{I})(\mathbf{a} \otimes \mathbf{b}) : (\mathbf{a} \otimes \mathbf{b}) \geq \frac{C_1}{2} \abs{\mathbf{a}}^2 \abs{\mathbf{b}}^2, \end{equation*} by the Taylor expansion and the polar decomposition, which ensures that the operator $ - \Div D^2 W(\mathbb{I}) \nabla $ is strongly normally elliptic, see e.g. \cite[Page 271]{PS2016}.
\end{remark} Besides \eqref{Eqs:elastic-Eulerian}, the balance of mass, together with the growth, should hold as well, namely, \begin{equation} \label{Eqs:solid-mass-Eulerian} \left(\partial_t + \bv_{s} \cdot \nabla \right) \rho_s + \rho_s \Div \bv_{s} = f_s^g, \quad \text{in } Q_s^T, \end{equation} where $ \rho_s > 0 $ is the solid density and $ \bv_{s} := \partial_t \bu_{s} $ denotes the velocity of the solid. The function $ f_s^g := \gamma f_s^r $ with $ \gamma > 0 $ on the right-hand side of \eqref{Eqs:solid-mass-Eulerian} stands for the growth rate, which comes from the cell reactions assigned later. There are two typical modeling assumptions, depending on whether the density or the volume remains unchanged during growth. Generally, \textit{constant-density} growth is assumed for incompressible tissue, see e.g. \cite{JC2012,RHM1994}; in this paper we adopt this kind of growth, as in \cite{AL2021a,AL2021b,Yang2016}. Then by \cite[(13.10)]{Goriely2017:growth}, one obtains \begin{equation} \label{Eqs:grwoth-before} \hr_s \mathrm{tr}(\inv{\hF_{s,g}} \partial_t \hF_{s,g}) = f_s^g, \quad \text{in } \Omega_{s} \times (0,T). \end{equation} As in \cite{AL2021a,AL2021b,Yang2016}, the plaque grows when macrophages, which originate from monocytes in the blood flow, accumulate in the vessel wall and turn into foam cells. Thus, denoting by $ c_f $, $ c_s $, $ c_s^* $ the concentrations of the monocytes, the macrophages and the foam cells respectively, we introduce \begin{alignat}{3} \partial_t c_f + \Div \left( c_f \bv_{f} \right) - D_f \Delta c_f & = 0, && \quad \text{in } Q_f^T, \label{Eqs:c_f-Euler}\\ \partial_t c_s + \Div \left( c_s \bv_{s} \right) - D_s \Delta c_s & = - f_s^r, && \quad \text{in } Q_s^T, \label{Eqs:c_s-Euler}\\ \partial_t c_s^* + \Div \left( c_s^* \bv_{s} \right) & = f_s^r, && \quad \text{in } Q_s^T, \label{Eqs:c_ss-Euler} \end{alignat} where $ D_{f/s} > 0 $ are the diffusion coefficients in the blood and the vessel, respectively.
$ f_s^r := \beta c_s $ with $ \beta > 0 $ stands for the reaction function, modeling the rate of transformation of macrophages into foam cells. Noticing that the foam cells are supposed to move only with the motion of the vessel, one arrives at \eqref{Eqs:c_ss-Euler} above. To close the system, one still needs to impose suitable boundary and initial conditions. For this purpose, we follow a setting similar to that in the authors' previous work \cite{AL2021a}. On the free interface $ \Gamma^{t} $, one has the continuity of the velocity and of the normal stress tensor for the fluid-structure part, while for the cells, the concentration flux is continuous and the jump of the concentration is determined by the permeability of and the flux across the vessel wall. \begin{alignat}{3} \label{Eqs:vjump-Eulerian} \jump{\mathbf{v}} = 0, \quad \jump{\mathbb{T}} \bn_{\Gt} & = 0, && \quad \text{on } S^T, \\ \label{Eqs:cjump-Eulerian} \jump{D \nabla c} \cdot \bn_{\Gt} = 0, \quad \zeta \jump{c} - D_s \nabla c_s \cdot \bn_{\Gt} & = 0, && \quad \text{on } S^T, \end{alignat} where $ \bn_{\Gt} $ represents the outer unit normal vector on $ \Gamma^{t} $ pointing from $ \Omega_{f}^{t} $ to $ \Omega_{s}^{t} $. For a quantity $ f $, $ \jump{f} $ denotes the jump across $ \Gamma^{t} $ between $ \Omega_{f}^{t} $ and $ \Omega_{s}^{t} $, namely, \begin{equation*} \jump{f}(\mathbf{x}) := \lim_{\theta \rightarrow 0} {f(\mathbf{x} + \theta \bn_{\Gt}(\mathbf{x})) - f(\mathbf{x} - \theta \bn_{\Gt}(\mathbf{x}))}, \quad \forall\, \mathbf{x} \in \Gamma^{t}. \end{equation*} $ \zeta $ denotes the permeability of the sharp interface $ \Gamma^{t} $ with respect to the monocytes, which in principle depends on the hemodynamic stress $ \mathbb{T}_f \bn_{\Gt} $. It is assumed to be constant for simplicity. Moreover, $ \Gamma_s^t $ is assumed to be a free boundary as well, due to physical compatibility, see e.g. \cite[Remark 1.3]{AL2021a}.
Then we give the boundary conditions on $ \Gamma_s^t $, \begin{alignat}{3} \label{Eqs:vboundary-Eulerian} \mathbb{T}_s \bn_{\Gst} & = 0, && \quad \text{on } S_s^T, \\ \label{Eqs:cboundary-Eulerian} D_s \nabla c_s \cdot \bn_{\Gst} & = 0, && \quad \text{on } S_s^T. \end{alignat} Finally, the initial values are prescribed as \begin{alignat}{3} \label{Eqs:ofinitial-Eulerian} \rv{\bv_{f}}_{t = 0} = \bv_{f}^0, \quad \rv{c_f}_{t = 0} & = c_f^0, && \quad \text{in } \Omega_{f}, \\ \label{Eqs:osinitial-Eulerian} \rv{\bu_{s}}_{t = 0} = \bu_{s}^0, \quad \rv{c_s}_{t = 0} = c_s^0, \quad \rv{c_s^*}_{t = 0} = c_*^0, \quad {\hF_{s,g}}\vert_{t = 0} & = g^0\mathbb{I}, && \quad \text{in } \Omega_{s}. \end{alignat} \subsection{Previous work} Our main goal is to investigate the short-time existence of strong solutions to \eqref{Eqs:v_f-Eulerian}--\eqref{Eqs:osinitial-Eulerian}. Before discussing the technical details, let us recall some literature related to our work. In the case of 3d-3d fluid-structure interaction problems with a free interface, strong solution results can be traced back to Coutand--Shkoller \cite{CS2005}, who addressed the interaction problem between the incompressible Navier--Stokes equation and a linear Kirchhoff elastic material. They extended these results to the fluid-elastodynamics case in \cite{CS2006}, where they regularized the hyperbolic elastic equation by a particular parabolic artificial viscosity and then obtained the existence of strong solutions by delicate a priori estimates. Thereafter, systems consisting of an incompressible Navier--Stokes equation and (damped) wave/Lam\'{e} equations were further investigated in e.g. \cite{IKLT2012,IKLT2017,KT2012,RV2014}.
Besides the models above, we refer to \cite{BG2010} for a compressible barotropic fluid coupled with elastic bodies and to \cite{SWY2021} for a magnetohydrodynamics (MHD)-structure interaction system, where the fluid is described by the incompressible viscous non-resistive MHD equations and the structure is modeled by the wave equation of a superconductor material. In the context of 3d-2d/2d-1d models, various kinds of models and results have been established during the last twenty years. A widely studied case is that of fluid-beam/plate systems, where the beam/plate equations are endowed with different mechanical mechanisms (rigidity, stretching, friction, rotation, etc.); readers are referred to e.g. \cite{CDEG2005,Grandmont2008,MC2013,TW2020} for weak solution results and to e.g. \cite{Beirao da Veiga2004,DS2020,GHL2019,Lequeurre2013,MT2021NARWA,Mitra2020} for strong solutions, respectively. Besides, fluid-shell interaction problems have been studied as well, in e.g. \cite{BS2018,LR2013} for weak solutions and in e.g. \cite{CCS2007,CS2010,MRR2020} for strong solutions. It is worth mentioning that in the recent works \cite{DS2020,MT2021NARWA}, a maximal regularity framework, which requires lower initial regularity and fewer compatibility conditions compared to the energy method, was employed. Recently, inspired by \cite{Yang2016}, the authors considered the local well-posedness of a fluid-structure interaction problem for plaque growth in a smooth domain \cite{AL2021a} and in a cylindrical domain \cite{AL2021b}, respectively. By the maximal regularity theory, one obtains the well-posedness of the linearized systems and then derives the existence of a unique strong solution to the nonlinear system via a fixed-point argument.
However, concerning the origin of the model, which describes blood flow and blood vessels, it is reasonable to suppose that the vessels move far more slowly than the blood, namely, the time scales of the fluid and solid motions are well separated and the kinetic energy of the vessel is neglected. Thus, in this paper, we take an incompressible quasi-stationary hyperelastic equation into account, i.e., for every time $ t $ the elastic stresses are in equilibrium. To the best of our knowledge, this is the first result concerning strong solutions to a quasi-stationary fluid-structure interaction problem with growth. \subsection{Technical discussions} Under the above setting, the fluid-structure part is of parabolic-elliptic type, while the cell part is similar to the ones in \cite{AL2021a,AL2021b}. To solve the nonlinear problem, our basic strategy is still a fixed-point argument in the framework of maximal regularity theory, while more issues come up when we consider the linearized systems. More precisely, the linearization of the fluid-structure part is hard to solve directly by the maximal regularity theory due to the parabolic-elliptic type coupling, which results in unmatched regularity between $ \hv_{f} $ and $ \hu_{s} $ on the sharp interface $ \Gamma $. To overcome this problem, we decouple the system into a nonstationary Stokes equation with respect to the fluid velocity $ \hv_{f} $ and a quasi-stationary Stokes-type equation with regard to the solid displacement $ \hu_{s} $. Note that one key point is to separate the kinematic and dynamic conditions on the interface correctly. Specifically, we impose a Neumann boundary condition for $ \hv_{f} $ and a Dirichlet boundary condition for $ \hu_{s} $, see Section \ref{sec:analysis-linear} below.
Otherwise, if one imposed $ \rvm{\hv_{f}}_{\Gamma} = \hv_{s} $, one would face the problem that there is no regularity information about the solid velocity $ \hv_{s} = \ptial{t}\hu_{s} $, since the solid equation is quasi-stationary without any damping. Another issue is the choice of function spaces for the elastic equation, i.e., how to assemble suitable function spaces for the data in the linearized solid equation so that the regularity of $ \hat{\nabla} \hu_{s} $ matches that of $ \hat{\nabla} \hv_{f} $ on the interface in the nonlinear system. Our choice is \begin{equation*} \mathbf{f} \in \H{1/2}(0,T; W_{q, \Gamma}^{-1}(\Omega_{s})^3) \cap \Lq{q}(0,T; \Lq{q}(\Omega_{s})^3). \end{equation*} This space is motivated by the observation that the nonstationary Stokes equation \eqref{Eqs:linear-nonsta} is uniquely solvable provided the Neumann boundary data satisfies \begin{equation*} \mathbf{h} \in \W{1/2 - 1/2q}(0,T; \Lq{q}(\Gamma)^3) \cap \Lq{q}(0,T; \W{1 - 1/q}(\Gamma)^3), \end{equation*} which requires that \begin{equation*} D^2 W(\mathbb{I}) \hat{\nabla} \hu_{s} \in \W{1/2 - 1/2q}(0,T; \Lq{q}(\Gamma)^3) \cap \Lq{q}(0,T; \W{1 - 1/q}(\Gamma)^3) \end{equation*} as well. Here $ H^s_q $ and $ W^s_q $ are the Bessel potential space and the Sobolev--Slobodeckij space, respectively, which will be defined in Section \ref{sec:function-spaces}. In fact, the anisotropic Bessel potential space assigned here is sharp in view of the anisotropic trace operator (see Lemma \ref{lemma:trace-time-regularity} below), so it is natural to equip $ \mathbf{f} $ with the regularity above. Because of this sharp regularity setting, one cannot obtain the Lipschitz estimates of the nonlinear terms by choosing a small time alone, which leads to an additional smallness assumption \eqref{Eqs:us0-smallness} on the initial solid displacement and pressure. A detailed discussion can be found in Remark \ref{remark:smallness} and Proposition \ref{prop:Lipschitzestimate} later.
To solve the quasi-stationary (linearized) elastic equation, we treat it as a Stokes-type problem with respect to the displacement $ \hu_{s} $ and the pressure $ \hpi_s $, due to the incompressibility. However, since we assign the specific regularity space above and the Stokes operator is not a standard one (namely, $ \Div(D^2 W(\mathbb{I}) \nabla \cdot ) $), one needs to consider a generalized stationary Stokes equation with $ \mathbf{f} $ in $ \Lq{q} $ and $ W_{q, \Gamma_1}^{-1} $ respectively, for which the maximal regularity of analytic $ C_0 $-semigroups is applied, as well as a complex interpolation method with a \textit{very weak solution} in $ \Lq{q} $ of a mixed-boundary Stokes-type equation, which can be solved by a duality argument. For the cell part, compared to \cite{AL2021a}, there is the problem of positivity of the concentrations. The idea in \cite{AL2021a} is to apply the maximum principle to the original equation and deduce a contradiction with the help of Hopf's Lemma. However, due to the lack of regularity of $ \bv_{s} = \partial_t \bu_{s} $, one cannot expect it to be H\"older continuous in space-time, or even continuous. To deal with this difficulty, we make use of the idea of mollification, i.e., approximating $ \bv_{s} $ by sufficiently smooth functions $ \bv_{s}^\epsilon $ such that $ \int_0^t \bv_{s}^\epsilon \,\d \tau \rightarrow \bu_{s} $ in a certain space. Then, arguing by a procedure similar to that in \cite{AL2021a}, we obtain an approximate nonnegative solution $ c^\epsilon $. Finally, one can show that it converges to a nonnegative function $ c $, which satisfies exactly the original equations for the cell concentrations. \subsection{Structure of the paper} The paper is organized as follows. In Section \ref{sec:preliminaries} we briefly introduce some notations and function spaces together with some corresponding properties.
Moreover, a reformulation of the system is carried out in Section \ref{sec:reformulation}, after which we state the main result for the reformulated system. Section \ref{sec:analysis-linear} is devoted to three linearized systems, treated in Sections \ref{sec:nonsta-Stokes}, \ref{sec:sta-Stokes-mixed} and \ref{sec:heat-neumann} respectively. The main results of this section are the $ L^q $-solvability for these linear problems, for which a careful analysis is carried out. In Section \ref{sec:nonlinear-wpd}, we first introduce some preliminary lemmas in Section \ref{sec:useful-lemmas}, which will be frequently used in proving the Lipschitz estimates in Section \ref{sec:lipschitz-estimates}. Then, by the Banach fixed-point theorem, we derive the short-time existence of strong solutions to the nonlinear system in Section \ref{sec:nonlinear-proof}. Moreover, the cell concentrations are shown to be nonnegative, provided that the initial concentration is nonnegative. In addition, we establish the solvability of a Stokes-type resolvent problem with mixed boundary conditions in Appendix \ref{sec:sta-Stokes-lower-regularity}. \section{Preliminaries} \label{sec:preliminaries} \subsection{Notations} \label{sec:notations} Let us introduce some notations. For a vector field $ \mathbf{u} $ with values in $ \mathbb{R}^d $, $ d \in \mathbb{N}_+ $, we define the gradient as $ (\nabla \mathbf{u})_{ij} = \ptial{i} \mathbf{u}_j $. For a matrix field $ \mathbf{A} = (A_{ij})_{i,j=1}^d \in \mathbb{R}^{d \times d} $, we introduce the divergence of a matrix row by row as $ (\Div \mathbf{A})_{i} = \sum_{j = 1}^{d} \ptial{j} A_{ij} $. Given matrices $ \mathbf{A} = (A_{ij})_{i,j=1}^d, \mathbf{B} = (B_{ij})_{i,j=1}^d \in \mathbb{R}^{d \times d} $, the matrix product is given by $ (\mathbf{A}\mathbf{B})_{ik} = \sum_{j = 1}^{d} A_{ij}B_{jk} $.
Then we denote by $ \mathbf{A} : \mathbf{B} := \mathrm{tr}(\tran{\mathbf{B}}\mathbf{A}) = \sum_{j,k = 1}^{d}B_{jk} A_{jk} $ the Frobenius product and $ \abs{\mathbf{A}} := \sqrt{\mathbf{A} : \mathbf{A}} $ the induced modulus. For a differentiable mapping $ \mathbf{A} : I \subset \mathbb{R} \rightarrow \mathbb{R}^{d \times d} $ with invertible values, we have \begin{equation} \label{Eqs:dt:detA-invA} \frac{\d}{\d t} \det \mathbf{A} = \mathrm{tr} \Big(\inv{\mathbf{A}} \frac{\d}{\d t}\mathbf{A}\Big) \det \mathbf{A}, \quad\frac{\d}{\d t} \inv{\mathbf{A}} = - \inv{\mathbf{A}} \Big(\frac{\d}{\d t} \mathbf{A}\Big) \inv{\mathbf{A}}, \text{ for } t \in I, \end{equation} see e.g. \cite{Goriely2017:growth,Gurtin2010}. Moreover, we define $ \mathbb{R}_{\mathrm{sym}+}^{d \times d} $ as the space of all positive-definite symmetric matrices and \begin{equation*} SO(d) := \{\mathbf{A} \in \mathbb{R}^{d \times d}: \tran{\mathbf{A}} \mathbf{A} = \mathbb{I}, \det \mathbf{A} = 1\} \end{equation*} as the set of all proper orthogonal tensors. Generally, $ B_X(x,r) $ denotes the open ball with radius $ r > 0 $ around $ x $ in a metric space $ X $. For normed spaces $ X, Y $ over $ \mathbb{K} = \mathbb{R} $ or $ \mathbb{C} $, the set of bounded, linear operators $ T : X \rightarrow Y $ is denoted by $ \mathcal{L}(X,Y) $ and in particular, $ \mathcal{L}(X) = \mathcal{L}(X,X) $. As usual, the letter $ C $ in the present paper denotes a generic positive constant which may change its value from line to line, even in the same line, unless stated otherwise.
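For the reader's convenience, we note that the second identity in \eqref{Eqs:dt:detA-invA} follows directly from differentiating the relation $ \mathbf{A} \inv{\mathbf{A}} = \mathbb{I} $ by the product rule, \begin{equation*} 0 = \frac{\d}{\d t} \big( \mathbf{A} \inv{\mathbf{A}} \big) = \Big( \frac{\d}{\d t} \mathbf{A} \Big) \inv{\mathbf{A}} + \mathbf{A} \frac{\d}{\d t} \inv{\mathbf{A}}, \end{equation*} and solving for $ \frac{\d}{\d t} \inv{\mathbf{A}} $, while the first identity is the classical Jacobi formula for the derivative of the determinant.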
\subsection{Function spaces and properties} \label{sec:function-spaces} For an open set $ M \subset \mathbb{R}^d $ with Lipschitz boundary $ \partial M $, $ d \in \mathbb{N}_+ $, we recall the standard Lebesgue space $ \Lq{q}(M) $ and Sobolev space $ \W{m}(M) $, $ m \in \mathbb{N} $, $ 1 \leq q \leq \infty $, as well as \begin{gather*} W_{q, 0}^{m}(M) = \overline{C_0^\infty(M)}^{W_q^{m}(M)}, \quad W_q^{-m}(M) = \left[W_{q', 0}^{m}(M)\right]', \end{gather*} where $ q' $ is the conjugate exponent of $ q $ satisfying $ 1/q + 1/q' = 1 $. Furthermore, we set the Sobolev spaces associated with $ \Gamma \subset \partial M $ as \begin{equation*} W_{q, \Gamma}^m(M) = \left\{ \psi \in \W{m}(M): \psi|_{\Gamma} = 0 \right\}, \quad W_{q, \Gamma}^{-m}(M) := [ W_{q', \Gamma}^m(M) ]'. \end{equation*} The vector-valued variants are denoted by $ \Lq{q}(M; X) $ and $ \W{m}(M; X) $, where $ X $ is a Banach space. In particular, $ \Lq{q}(M) = \W{0}(M) $, $ \Lq{q}(M; X) = \W{0}(M; X) $. Moreover, for $ 1 < q < \infty $ and $ 1 \leq p \leq \infty $ the standard Besov space $ \Bqp{s} $ and Bessel potential space $ \H{s} $ coincide with the real and complex interpolation of Sobolev spaces respectively (see e.g. Lunardi \cite{Lunardi2018}, Runst--Sickel \cite{RS1996}, Triebel \cite{Triebel1978}) \begin{equation*} \Bqp{s}(M) = \left( \W{k}(M), \W{m}(M) \right)_{\theta ,p}, \quad \H{s}(M) = \left[ \W{k}(M), \W{m}(M) \right]_{\theta}, \end{equation*} where $ s = (1 - \theta)k + \theta m $, $ \theta \in (0,1) $, $ k, m \in \mathbb{N} $, $ k < m $. In particular, $ \W{l} = \H{l} $ for $ l \in \mathbb{N} $ and setting $ p = q $ above yields the Sobolev--Slobodeckij space \begin{equation*} \W{s}(M) = \Bq{s}(M) = \left( \W{k}(M), \W{m}(M) \right)_{\theta ,q}, \text{ if } s \notin \mathbb{N}.
\end{equation*} Then we define the seminorm of $ \W{s}(M) $ as \begin{equation*} \seminorm{f}_{\W{s}(M)} = \left( \int_{M \times M} \frac{\abs{f(x) - f(y)}^q}{\abs{x - y}^{d + sq}} \,\d x \d y\right)^{\frac{1}{q}}, \end{equation*} for $ f \in \W{s}(M) $ with $ 0 < s < 1 $, $ 1 < q < \infty $. For an interval $ I \subset \mathbb{R} $, $ 0 < s < 1 $, $ 1 < q < \infty $, we recall the Banach space-valued spaces $ \K{s}(I; X) $, $ K \in \{W,H\} $ with norm \begin{equation*} \norm{f}_{\K{s}(I; X)} = \left( \norm{f}_{\Lq{q}(I; X)}^q + \seminorm{f}_{\K{s}(I; X)}^q \right)^{\frac{1}{q}}. \end{equation*} For convenience, with $ T > 0 $ we define the corresponding space with vanishing initial trace at $ t = 0 $ as \begin{equation*} \KO{s}(0,T; X) = \left\{f \in \K{s}(0,T; X) : \rv{f}_{t = 0} = 0 \right\} \text{ for } s > \frac{1}{q}. \end{equation*} Now we recall several embedding results and properties for Banach space-valued spaces that will be frequently used later. \begin{lemma} \label{lemma:timeembedding} Suppose $ 0 < r < s \leq 1 $ and $ 1 \leq q < \infty $, let $ X $ be a Banach space and let $ I = (0,T) \subset \mathbb{R} $ be a bounded interval with $ 0 < T < \infty $. Then $ \K{s}(I; X) \hookrightarrow \W{r}(I; X) $ and for some $ \delta > 0 $ \begin{equation*} \seminorm{f}_{\W{r}(I; X)} \leq \abs{I}^\delta \seminorm{f}_{\K{s}(I; X)} \text{ for all } f \in \K{s}(I; X). \end{equation*} In particular, we have \begin{equation*} \W{1}(I; X) \hookrightarrow \W{\theta}(I; X) \ \text{for all } 0 < \theta < 1 \end{equation*} and \begin{equation*} \seminorm{f}_{\W{\theta}(I; X)} \leq T^{1 - \theta} \norm{\partial_t f}_{\Lq{q}(I; X)} \text{ for all } f \in \W{1}(I; X). \end{equation*} \end{lemma} \begin{proof} The case $ K = W $ was shown in Simon \cite[Corollary 17]{Simon1990}.
Then taking $ t $ satisfying $ r < t < s $, one has \begin{equation*} \seminorm{f}_{\W{r}(I; X)} \leq \abs{I}^{t - r} \seminorm{f}_{\W{t}(I; X)} \leq C \abs{I}^{t - r} \seminorm{f}_{\K{s}(I; X)}, \end{equation*} where the last inequality holds by virtue of $ \K{s}(I; X) \hookrightarrow \W{t}(I; X) $ for $ s > t > 0 $. The second assertion can be easily derived by means of the observation \begin{equation*} f(t) - f(t - h) = h \int_0^1 \partial_t f(t + (\tau - 1)h) \d \tau \end{equation*} and the definition of the Sobolev--Slobodeckij space, so we omit it here. \end{proof} \begin{lemma} \label{lemma:timeembedding-continuous} Let $ 0 < s < 1 $ and $ 1 < q < \infty $ satisfy $ sq > 1 $, let $ X $ be a Banach space and let $ I = (0,T) \subset \mathbb{R} $ be a bounded interval with $ 0 < T < \infty $. Then \begin{equation*} \K{s}(I; X) \hookrightarrow C(\bar{I}; X). \end{equation*} Moreover, for some $ \delta > 0 $ and all $ f \in \KO{s}(I; X) $, \begin{equation*} \norm{f}_{C(\bar{I}; X)} \leq C T^{\delta} \norm{f}_{\KO{s}(I; X)}, \end{equation*} where $ C $ is independent of $ I $. \end{lemma} \begin{proof} By Meyries--Schnaubelt \cite[Proposition 2.10]{MS2012} with $ \mu = 2 $ there, one has the first assertion and for $ K = W $, $ 1/q< r < s $, \begin{equation*} \norm{f}_{C(\bar{I}; X)} \leq C \norm{f}_{\WO{r}(I; X)}, \end{equation*} where $ C $ is independent of $ I $. Then it follows from Lemma \ref{lemma:timeembedding} that \begin{equation*} \norm{f}_{C(\bar{I}; X)} \leq C \norm{f}_{\WO{r}(I; X)} \leq C \big( \norm{f}_{\Lq{q}(I; X)} + \seminorm{f}_{\WO{r}(I; X)} \big) \leq C \abs{I}^\delta \seminorm{f}_{\K{s}(I; X)}, \end{equation*} for some $ \delta > 0 $. \end{proof} Adapting from Meyries--Schnaubelt \cite[Proposition 3.2]{MS2012} and Pr\"uss--Simonett \cite[Section 4.5.5]{PS2016}, we use the following time-space embedding lemma.
\begin{lemma} \label{lemma:time-space embedding} Let $ 1 < q < \infty $, $ 0 < \alpha, s < 2 $ and $ 0 < r < s $. Then we have the embedding \begin{equation*} \H{s}(0,T; \Lq{q}) \cap \Lq{q}(0,T; \K{\alpha}) \hookrightarrow \H{r}(0,T; \K{\alpha(1 - \frac{r}{s})}). \end{equation*} In particular, \begin{gather*} \H{\frac{1}{2}}(0,T; \Lq{q}) \cap \Lq{q}(0,T; \W{1}) \hookrightarrow \H{\frac{1}{4}}(0,T; \H{\frac{1}{2}}), \\ \W{1}(0,T; \Lq{q}) \cap \Lq{q}(0,T; \W{2}) \hookrightarrow \H{\frac{1}{2}}(0,T; \W{1}). \end{gather*} All these assertions remain true if one replaces $ W $- and $ H $- spaces by $ {_0W} $- and $ {_0H} $- spaces respectively, and the embedding constants in this case do not depend on $ T > 0 $. \end{lemma} Now we include results on multiplication and composition. \begin{lemma}[Multiplication] \label{lemma:multiplication} Let $ \Omega \subset \mathbb{R}^d $, $ d \in \mathbb{N}_+ $, be a bounded Lipschitz domain. For $ f,g \in \K{s}(\Omega) $ and $ sq > d $ with $ s > 0 $, $ 1 < q < \infty $, we have \begin{equation*} \norm{fg}_{\K{s}(\Omega)} \leq M_q \norm{f}_{\K{s}(\Omega)} \norm{g}_{\K{s}(\Omega)}, \end{equation*} where $ M_q $ is a constant depending on $ q $. \end{lemma} \begin{proof} See \cite[Theorem 4.6.1/1 (5)]{RS1996} for the case $ K = H $ with $ q = q_1 = q_2 = 2 $ therein, \cite[Theorem 4.6.1/2 (18)]{RS1996} for the case $ K = W $ with $ p = q = q_1 = q_2 $ therein. \end{proof} \begin{lemma}[Composition of Bessel potential/Slobodeckij functions] \label{lemma:composition-Slobodeckij} Let $ \Omega \subset \mathbb{R}^d $, $ d \in \mathbb{N}_+ $, be a bounded domain with boundary of $ C^1 $ class. Let $ N \in \mathbb{N}_+ $, $ 0 < s < 1 $ and $ 1 \leq p < \infty $ with $ s > d/p $.
Then for all $ f \in C^1 (\mathbb{R}^N) $ and every $ R > 0 $ there exists a constant $ C > 0 $ depending on $ R $ such that for all $ \mathbf{u} \in K_p^s(\Omega)^N $ with $ \norm{\mathbf{u}}_{K_p^s(\Omega)^N} \leq R $, it holds that $ f(\mathbf{u}) \in K_p^s(\Omega) $ and $ \norm{f(\mathbf{u})}_{K_p^s(\Omega)} \leq C(R) $. Moreover, if $ f \in C^2(\mathbb{R}^N) $, then for all $ R > 0 $ there exists a constant $ L > 0 $ depending on $ R $ such that \begin{equation*} \norm{f(\mathbf{u}) - f(\mathbf{v})}_{K_p^s(\Omega)} \leq L(R) \norm{\mathbf{u} - \mathbf{v}}_{K_p^s(\Omega)^N} \end{equation*} for all $ \mathbf{u},\mathbf{v} \in K_p^s(\Omega)^N $ with $ \norm{\mathbf{u}}_{K_p^s(\Omega)^N}, \norm{\mathbf{v}}_{K_p^s(\Omega)^N} \leq R $. \end{lemma} \begin{proof} The first part follows from Runst--Sickel \cite[Theorem 5.5.1/1]{RS1996}. We note that in \cite{RS1996}, the function spaces act on the full space $ \mathbb{R}^d $. Here we just need to employ suitable extensions for $ \Omega $ so that we can recover the case of the full space. For the second part, let $ \mathbf{u},\mathbf{v} \in K_p^s(\Omega)^N $ be two arbitrary functions with $ \norm{\mathbf{u}}_{K_p^s(\Omega)^N}, \norm{\mathbf{v}}_{K_p^s(\Omega)^N} \leq R $. By a simple calculation, one obtains \begin{equation} \label{Eqs:composition} (f(\mathbf{u}) - f(\mathbf{v}))(x) = \int_{0}^{1} Df(t\mathbf{u} + (1 - t)\mathbf{v})(x) \,\d t \cdot (\mathbf{u} - \mathbf{v})(x), \end{equation} where $ Df(\mathbf{u}) := (\partial_{u_j}f(\mathbf{u}))_{j = 1,2,\dots,N} $. Now, letting $ g(\mathbf{u},\mathbf{v})(x) := \int_{0}^{1} Df(t\mathbf{u} + (1 - t)\mathbf{v})(x) \,\d t $, we have $ g \in C^1(\mathbb{R}^N \times \mathbb{R}^N) $ since $ f \in C^2(\mathbb{R}^N) $.
Then the first part implies that \begin{equation*} \norm{g(\mathbf{u},\mathbf{v})}_{K_p^s(\Omega)} \leq C(R), \end{equation*} which completes the proof with \eqref{Eqs:composition} and the multiplication property Lemma \ref{lemma:multiplication} with $ s > d/p $. \end{proof} \begin{remark} \label{remark:composition-Sobolev} We comment that for the case $ s = 1 $, the lemma above holds true as well due to \cite{RS1996}. \end{remark} In the following, an anisotropic trace lemma is introduced for a fractional power space. \begin{lemma}[Anisotropic trace on the boundary] \label{lemma:trace-time-regularity} Let $ 1 < q < \infty $ and $ \Omega \subset \mathbb{R}^d $, $ d \in \mathbb{N}_+ $, be a bounded domain with $ \Gamma := \partial \Omega $ of class $ C^1 $, $ T > 0 $, and \begin{equation*} X_T := \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)) \cap \Lq{q}(0,T; \W{1}(\Omega)). \end{equation*} Then there is a trace operator \begin{equation*} \gamma: X_T \rightarrow X_{\gamma,T} := \W{\frac{1}{2} - \frac{1}{2q}}(0,T;\Lq{q}(\Gamma)) \cap \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\Gamma)), \end{equation*} such that $ \gamma f = \rvm{f}_{\Gamma} $ for $ f \in X_T \cap C([0,T] \times \Bar{\Omega}) $ and \begin{equation*} \norm{\gamma f}_{X_{\gamma,T}} \leq C \norm{f}_{X_T}, \end{equation*} where $ C > 0 $ is independent of $ T $. Moreover, it is surjective and has a continuous right-inverse. \end{lemma} \begin{proof} By means of a coordinate transformation and a partition of unity of $ \Omega $, one can easily reduce it to the case of a half-space $ \mathbb{R}^{d - 1} \times \mathbb{R}_+ $. Then thanks to \cite[Theorem 4.5]{MS2012} with $ s = 1/2 $, $ m = 1 $, $ \mu = 1 $ (see also \cite[Proposition 6.2.4]{PS2016} with $ m = 1 $, $ \mu = 1 $ and $ p = q $ there), one completes the proof.
\end{proof} \section{Reformulation and main result} \subsection{System in Lagrangian coordinates} \label{sec:reformulation} In this section, we transform \eqref{Eqs:v_f-Eulerian}--\eqref{Eqs:osinitial-Eulerian} from the deformed domain $ \Omega^t $ to the reference domain $ \Omega $, whose definitions are given in Section \ref{sec:model-description}. Let $ \phi / \hat{\phi} $ be any scalar function in $ \Omega^t / \Omega $ and $ \mathbf{w} / \hat{\mathbf{w}} $ be any vector-valued function in $ \Omega^t / \Omega $. Then one can easily derive the relations between derivatives in different configurations as \begin{gather} \label{ptu} \partial_t \hat{\mathbf{w}}(\mathbf{X},t) = \left( \partial_t + \mathbf{v}(\mathbf{x},t) \cdot \nabla \right) \mathbf{w}(\mathbf{x},t), \\ \label{grad} \nabla \phi = \inv{\hat{\bF}} \hat{\nabla} \hat{\phi}, \quad \nabla \mathbf{w} = \inv{\hat{\bF}} \hat{\nabla} \hat{\mathbf{w}}, \\ \label{div} \Div \mathbf{w} = \mathrm{tr} ( \nabla \mathbf{w} ) = \mathrm{tr} ( \inv{\hat{\bF}} \hat{\nabla} \hat{\mathbf{w}} ) = \invtr{\hat{\bF}} : \hat{\nabla} \hat{\mathbf{w}}, \end{gather} where $ \hat{\bF} $ is defined as in \eqref{Eqs:deformation gradient}. Before the reformulation, we assume an \textit{isotropic growth}, which is the simplest nontrivial form of the growth tensor. It is taken as a multiple of the identity, namely, \begin{equation*} \hF_{s,g} = \hat{g} \mathbb{I}, \quad \text{in } \Omega_{s}, \end{equation*} where $ \hat{g} = \hat{g}(\mathbf{X},t) $ is the metric of growth, a scalar function depending on the concentration of macrophages. Note that there are other possibilities for growth, see e.g. \cite{Goriely2017:growth,JC2012}; the isotropic one is taken for simplicity of the analysis.
Then we have $ \hJ_{s,g} = \hat{g}^3 $, indicating the isotropic change of a volume element, and \eqref{Eqs:grwoth-before} becomes \begin{equation} \label{Eqs:grwoth-after} \partial_t \hat{g} = \frac{f_s^g}{3 \hr_s} \hat{g}, \quad \text{in } \Omega_{s} \times (0,T), \end{equation} with $ \hat{g}(\mathbf{X}, 0) = \hat{g}^0 $. Since it follows from \eqref{Eqs:dt:detA-invA} that \begin{equation*} \partial_t \hat{J} = \mathrm{tr} \left( \inv{\hat{\bF}} \partial_t \hat{\bF} \right) \hat{J} = \mathrm{tr} \left( \inv{\hat{\bF}} \hat{\nabla} \hat{\bv} \right) \hat{J} = (\Div \mathbf{v}) \hat{J}, \end{equation*} we have, by the incompressibility of the fluid $ \Div \bv_{f} = 0 $, \begin{equation*} \hJ_f = \rv{\hJ_f}_{t = 0} = \det \mathbb{I} = 1, \quad \text{in } \Omega_{f}. \end{equation*} By the decomposition of $ \hF_{s} $ and the incompressibility of the solid, we know that $ \hJ_{s,e} = 1 $ and \begin{equation*} \hJ_s = \hJ_{s,g} = \hat{g}^3, \quad \text{in } \Omega_{s}. \end{equation*} Similarly to \cite{AL2021a}, the reformulated system now reads as: \begin{subequations} \label{Eqs:fullsystem-Lagrangian} \begin{alignat}{3} \label{Eqs:fluid-Lagrangian} \hr_f \partial_t \hv_{f} - \hdiv \mathbb{P}_f = 0, \quad \invtr{\hF_{f}} : \hat{\nabla} \hv_{f} & = 0 && \quad \text{in } \Omega_{f} \times (0, T), \\ \label{Eqs:cf-Lagrangian} \partial_t \hc_f - \hD_f \hdiv \big( \inv{\hF_{f}} \invtr{\hF_{f}} \hat{\nabla} \hc_f \big) & = 0 && \quad \text{in } \Omega_{f} \times (0, T), \\ \label{Eqs:solid-Lagrangian} - \hdiv \mathbb{P}_s = 0, \quad \invtr{\hF_{s}} : \hat{\nabla} \hu_{s} - \int_0^t \frac{\gamma \beta}{\hr_s} \hc_s \,\d \tau & = 0 && \quad \text{in } \Omega_{s} \times (0, T), \\ \label{Eqs:cs-Lagrangian} \partial_t \hc_s - \hD_s \inv{\hJ_s} \hdiv \big( \hJ_s \inv{\hF_{s}} \invtr{\hF_{s}} \hat{\nabla} \hc_s \big) + \beta \hc_s \big( 1 + \frac{\gamma}{\hr_s} \hc_s \big) & = 0 && \quad \text{in } \Omega_{s} \times (0, T), \\ \label{Eqs:css-Lagrangian} \partial_t \hcs^* - \beta \hc_s + \frac{\gamma \beta}{\hr_s} \hc_s \hcs^* = 0, \quad
\partial_t \hat{g} - \frac{\gamma \beta}{3 \hr_s} \hc_s \hat{g} & = 0 && \quad \text{in } \Omega_{s} \times (0, T), \\ \label{Eqs:vjump-Lagrangian} \jump{\hat{\bv}} = 0, \quad \jump{\mathbb{P}} \hn_{\Gamma} & = 0 && \quad \text{on } \Gamma \times (0, T), \\ \label{Eqs:cjump-Lagrangian} \jump{\hat{D} \inv{\hat{\bF}} \invtr{\hat{\bF}} \hat{\nabla} \hat{c}} \hn_{\Gamma} = 0, \ \zeta \jump{\hat{c}} - \hD_s \inv{\hF_{s}} \invtr{\hF_{s}} \hat{\nabla} \hc_s \cdot \hn_{\Gamma} & = 0 && \quad \text{on } \Gamma \times (0, T), \\ \label{Eqs:boundary-Lagrangian} \mathbb{P}_s \hn_{\Gs} = 0, \ \hD_s \inv{\hF_{s}} \invtr{\hF_{s}} \hat{\nabla} \hc_s \cdot \hn_{\Gs} & = 0 && \quad \text{on } \Gamma_s \times (0, T), \\ \label{Eqs:ofinitial-Lagrangian} \rv{\hv_{f}}_{t = 0} = \hv^0_f, \quad \rv{\hc_f}_{t = 0} & = \hc^0_f && \quad \text{in } \Omega_{f}, \\ \label{Eqs:osinitial-Lagrangian} \rv{\hu_{s}}_{t = 0} = \hu_{s}^0, \quad \rv{\hc_s}_{t = 0} = \hc^0_s, \quad \rv{\hcs^*}_{t = 0} = \hat{c}_*^0, \quad \rv{\hat{g}}_{t = 0} & = \hat{g}^0 && \quad \text{in } \Omega_{s}, \end{alignat} \end{subequations} where $ \mathbb{P}_i := \hat{J}_i \hat{\mathbb{T}}_i \invtr{\hat{\bF}}_i $, $ i \in \{f,s\} $, denotes the first Piola--Kirchhoff stress tensor associated with the Cauchy stress tensor $ \mathbb{T}_i $ defined in Section \ref{sec:model-description}. \subsection{Compatibility condition and well-posedness} Before stating our main theorem, one still needs to impose suitable function spaces and compatibility conditions. Following the general setting of maximal regularity, e.g. \cite{AL2021a,AL2021b,PS2016}, where the basic space is $ \Lq{q}(\Omega) $, we assume that \begin{gather*} \hv^0_f \in \Bq{2 - \frac{2}{q}}(\Omega_{f})^3 =: \Dq^1, \quad \hc^0 \in \Bq{2 - \frac{2}{q}}(\Omega \backslash \Gamma) =: \Dq^2, \quad \hat{c}_*^0, \hat{g}^0 \in \W{1}(\Omega_{s}), \end{gather*} and $ \cD_q := \Dq^1 \times \Dq^2 $. 
Moreover, the solution space is defined by $ Y_T := \prod_{j = 1}^{7} Y_T^j $, where \begin{gather*} Y_T^1 := \W{1}(0,T; \Lq{q}(\Omega_{f})^3) \cap \Lq{q}(0,T; \W{2}(\Omega_{f})^3), \\ Y_T^2 := \H{\frac{1}{2}}(0,T; \W{1}(\Omega_{s})^3) \cap \Lq{q}(0,T; \W{2}(\Omega_{s})^3), \\ Y_T^3 := \left\{ \begin{aligned} & \pi \in \Lq{q}(0,T; \W{1}(\Omega_{f})): \\ & \qquad \qquad \rv{\pi}_{\Gamma} \in \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\Gamma)) \cap \Lq{q}(0,T; \W{1 -\frac{1}{q}}(\Gamma)) \end{aligned} \right\}, \\ Y_T^4 := \Lq{q}(0,T; \W{1}(\Omega_{s})) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})), \\ Y_T^5 := \W{1}(0,T; \Lq{q}(\Omega)) \cap \Lq{q}(0,T; \W{2}(\Omega \backslash \Gamma)), \\ Y_T^6 := \W{1}(0,T; \W{1}(\Omega_{s})), \quad Y_T^7 := \W{1}(0,T; \W{1}(\Omega_{s})). \end{gather*} Analogous to \cite{AL2021a}, the compatibility conditions for $ \hv_{f}^0 $ and $ \hat{c}^0 $ read as \begin{equation} \label{Eqs:compatibility} \begin{gathered} \Div \hv^0_f = 0, \quad \rv{\hv^0_f}_\Gamma = 0, \\ \rv{\big( \zeta \jump{\hc^0} - \hD_s \hat{\nabla} \hc^0_s \cdot \hn_{\Gamma} \big)}_\Gamma = 0, \quad \rv{\jump{\hat{D} \hat{\nabla} \hc^0} \cdot \hn_{\Gamma}}_\Gamma = 0, \quad \rv{\hD_s \hat{\nabla} \hc^0_s \cdot \hn_{\Gs}}_{\Gamma_s} = 0. \end{gathered} \end{equation} Generally speaking, one does not need to assign any initial pressure for the Stokes equation. However, in this manuscript the coupling on the interface does lead to a condition on the initial fluid pressure since the solid equation is quasi-stationary and holds at $ t = 0 $.
More specifically, we assume that there exist $ \hpi_f^0 \in \W{1 - 3/q}(\Gamma) $ and $ (\hu_{s}^0, \hpi_s^0) \in \W{2 - 2/q}(\Omega_{s})^3 \times \W{1 - 2/q}(\Omega_{s}) $ satisfying \begin{equation} \label{Eqs:us0-smallness} \norm{\hat{\nabla} \hu_{s}^0}_{\W{1 - \frac{2}{q}}(\Omega_{s})} + \norm{\hpi_s^0}_{\W{1 - \frac{2}{q}}(\Omega_{s})} \leq \kappa, \end{equation} for sufficiently small $ \kappa > 0 $, such that \begin{equation} \label{Eqs:us0-equation} \begin{alignedat}{3} - \hdiv (DW(\mathbb{I} + \hat{\nabla} \hu_{s}^0)) + \hat{\nabla} \hpi_s^0 & = 0, && \quad \text{in } \Omega_{s}, \\ \hdiv \hu_{s}^0 & = 0, && \quad \text{in } \Omega_{s}, \\ \big(- \hpi_s^0 \mathbb{I} + DW(\mathbb{I} + \hat{\nabla} \hu_{s}^0)\big) \hat{\bn}_{\Gamma} & = \big(- \hpi_f^0 \mathbb{I} + \nu_f (\hat{\nabla} \hv^0_f + \hat{\nabla}^\top \hv^0_f)\big) \hat{\bn}_{\Gamma}, && \quad \text{on } \Gamma, \\ \big(- \hpi_s^0 \mathbb{I} + DW(\mathbb{I} + \hat{\nabla} \hu_{s}^0)\big) \hat{\bn}_{\Gamma_s} & = 0, && \quad \text{on } \Gamma_s. \end{alignedat} \end{equation} \begin{remark} Here, the regularity for $ \hpi_f^0 $ on the interface $ \Gamma $ stems from the matched regularity of $ \hat{\nabla} \hv^0_f $, $ \hpi_s^0 $ and $ DW(\mathbb{I} + \hat{\nabla} \hu_{s}^0) $ on $ \Gamma $. Moreover, it coincides with the regularity of $ \hpi_f $ by the trace method of interpolation (see e.g. \cite[Example 3.4.9]{PS2016}), i.e., \begin{equation*} \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\Gamma)) \cap \Lq{q}(0,T; \W{1 -\frac{1}{q}}(\Gamma)) \hookrightarrow C([0,T]; \W{1 - \frac{3}{q}}(\Gamma)). \end{equation*} \end{remark} \begin{remark} \label{remark:smallness} In this paper, we need the smallness assumption on the initial displacement to guarantee the estimates with respect to the deformation gradient, e.g. \eqref{Eqs:Fs}, which is a key element in deriving the final contraction property of the corresponding operator.
This is because we consider the general case of $ \rvm{\hu_{s}}_{t = 0} $ and linearize the elastic equation around the identity $ \mathbb{I} $, not around the initial deformation gradient $ \mathbb{I} + \hat{\nabla} \hu_{s}^0 $. Specifically, one cannot control $ (\hF_{s} - \mathbb{I}) $ by a small constant merely by choosing a short time. In particular, for the case $ \hu_{s}^0 = 0 $, one knows $ \rvm{\hF_{s}}_{t = 0} = \mathbb{I} $ and hence the later estimates are uniform with respect to the time $ T > 0 $. Moreover, smallness of the initial pressure is also needed, due to the sharp regularity of the pressure, see e.g. \eqref{Eqs:pressureEstimate}. \end{remark} \begin{theorem} \label{theorem: main} Let $ 5 < q < \infty $ and let $ \kappa > 0 $ be a sufficiently small constant. Let $ \Omega \subset \mathbb{R}^3 $ be the domain defined above, with $ \Gamma $, $ \Gamma_s $ hypersurfaces of class $ C^3 $. Assume that $ (\hv^0_f, \hc^0) \in \cD_q $ satisfies the compatibility condition \eqref{Eqs:compatibility}, that $ \hpi_f^0 \in \W{1 - 3/q}(\Gamma) $, $ \hat{c}_*^0, \hat{g}^0 \in \W{1}(\Omega_{s}) $, and that $ (\hu_{s}^0, \hpi_s^0) \in \W{2 - 2/q}(\Omega_{s})^3 \times \W{1 - 2/q}(\Omega_{s}) $ fulfills \eqref{Eqs:us0-smallness} and \eqref{Eqs:us0-equation}. Then there is a positive $ T_0 = T_0(\hv^0_f, \hc^0, \hat{c}_*^0, \hat{g}^0, \kappa) < \infty $ such that for $ 0 < T < T_0 $, the problem \eqref{Eqs:fullsystem-Lagrangian} admits a unique solution $ (\hv_{f}, \hu_{s}, \hpi_f, \hpi_s, \hat{c}, \hcs^*, \hat{g}) \in Y_T $. Moreover, $ \hat{c}, \hcs^*, \hat{g} \geq 0 $ if $ \hc^0, \hat{c}_*^0, \hat{g}^0 \geq 0 $. \end{theorem} Motivated by \cite{AL2021a,PS2016}, we prove Theorem \ref{theorem: main} via the Banach fixed-point theorem. To be more precise, we first linearize \eqref{Eqs:fullsystem-Lagrangian}, show the well-posedness of the linear system, estimate the nonlinear terms in suitable function spaces for small time and then construct a contraction mapping.
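Schematically, denoting by $ L $ the linear operator on the left-hand side of the linearized system and by $ N(\bar{z}) $ the collection of the nonlinear right-hand sides, we study the mapping \begin{equation*} \Phi : \bar{z} \mapsto z, \quad \text{where } z \text{ solves } Lz = N(\bar{z}) \text{ with the given initial data}, \end{equation*} on a suitable closed ball of $ Y_T $, and verify that \begin{equation*} \norm{\Phi(\bar{z}_1) - \Phi(\bar{z}_2)}_{Y_T} \leq C(T, \kappa) \norm{\bar{z}_1 - \bar{z}_2}_{Y_T} \quad \text{with } C(T, \kappa) < 1 \end{equation*} for $ T $ and $ \kappa $ sufficiently small, so that the Banach fixed-point theorem yields the unique solvability.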
\begin{remark} \label{remark:higherdimension} In fact, Theorem \ref{theorem: main} still holds true in the more general case of dimension $ n \geq 2 $, as long as the restriction on $ q $ is adapted to $ n $. This is also an advantage of making use of the maximal regularity theory. \end{remark} \subsection{Linearization} \label{sec:linearization} Now, following the linearization procedure in \cite{AL2021a}, we first linearize \eqref{Eqs:fullsystem-Lagrangian}, collect all the lower-order terms on the right-hand side and then arrive at the equivalent system: \begin{subequations} \label{Eqs:fullsystem-Lagrangian-linear} \begin{alignat}{3} \label{Eqs:fluid-Lagrangian-linear} \hr_f \partial_t \hv_{f} - \hdiv \mathbf{S}(\hv_{f}, \hpi_f) & = \mathbf{K}_f && \quad \text{in } \Omega_{f} \times (0, T), \\ \label{Eqs:fluidmass-Lagrangian-linear} \hdiv \hv_{f} & = G_f && \quad \text{in } \Omega_{f} \times (0, T), \\ \label{Eqs:fluidgamma-Lagrangian-linear} \mathbf{S}(\hv_{f}, \hpi_f) \hn_{\Gamma} - (D^2 W(\mathbb{I}) \hat{\nabla} \hu_{s} - \hpi_s \mathbb{I}) \hn_{\Gamma} & = \mathbf{H}_f^1 && \quad \text{on } \Gamma \times (0, T), \\ \label{Eqs:solid-Lagrangian-linear} - \hdiv (D^2 W(\mathbb{I}) \hat{\nabla} \hu_{s}) + \hat{\nabla} \hpi_s & = \mathbf{K}_s && \quad \text{in } \Omega_{s} \times (0, T), \\ \label{Eqs:solidmass-Lagrangian-linear} \hdiv \hu_{s} - \int_0^t \frac{\gamma \beta}{\hr_s} \hc_s \,\d \tau & = G_s && \quad \text{in } \Omega_{s} \times (0, T), \\ \label{Eqs:solidgamma-Lagrangian-linear} \hu_{s} & = \mathbf{H}_s^1 && \quad \text{on } \Gamma \times (0, T), \\ \label{Eqs:solidgs-Lagrangian-linear} (D^2 W(\mathbb{I}) \hat{\nabla} \hu_{s} - \hpi_s \mathbb{I}) \hn_{\Gs} & = \mathbf{H}^2 && \quad \text{on } \Gamma_s \times (0, T), \\ \label{Eqs:cf-Lagrangian-linear} \partial_t \hc_f - \hD_f \widehat{\Delta} \hc_f & = F_f^1 && \quad \text{in } \Omega_{f} \times (0, T), \\ \label{Eqs:cfgamma-Lagrangian-linear} \hD_f \hat{\nabla} \hc_f \cdot \hn_{\Gamma} & = F_f^2
&& \quad \text{on } \Gamma \times (0,T), \\ \label{Eqs:cs-Lagrangian-linear} \partial_t \hc_s - \hD_s \widehat{\Delta} \hc_s & = F_s^1 && \quad \text{in } \Omega_{s} \times (0, T), \\ \label{Eqs:csgamma-Lagrangian-linear} \hD_s \hat{\nabla} \hc_s \cdot \hn_{\Gamma} & = F_s^2 && \quad \text{on } \Gamma \times (0, T), \\ \label{Eqs:csgs-Lagrangian-linear} \hD_s \hat{\nabla} \hc_s \cdot \hn_{\Gs} & = F^3 && \quad \text{on } \Gamma_s \times (0, T), \\ \label{Eqs:css-Lagrangian-linear} \partial_t \hcs^* + \beta(\frac{\gamma \hat{c}_*^0}{\hr_s} - 1) \hc_s = F^4, \quad \partial_t \hat{g} - \frac{\gamma \beta \hat{g}^0}{3 \hr_s} \hc_s & = F^5 && \quad \text{in } \Omega_{s} \times (0, T), \\ \label{Eqs:ofinitial-Lagrangian-linear} \rv{\hv_{f}}_{t = 0} = \hv^0_f, \quad \rv{\hc_f}_{t = 0} & = \hc^0_f && \quad \text{in } \Omega_{f}, \\ \label{Eqs:osinitial-Lagrangian-linear} \rv{\hu_{s}}_{t = 0} = \hu_{s}^0, \quad \rv{\hc_s}_{t = 0} = \hc^0_s, \quad \rv{\hcs^*}_{t = 0} = \hat{c}_*^0, \quad \rv{\hat{g}}_{t = 0} & = \hat{g}^0 && \quad \text{in } \Omega_{s}, \end{alignat} \end{subequations} where $ \mathbf{S}(\hv_{f}, \hpi_f) := - \hpi_f \mathbb{I} + \nu_f (\hat{\nabla} \hv_{f} + \tran{\hat{\nabla}} \hv_{f}) $ and \begin{align*} & \mathbf{K}_f = \hdiv \tilde{\bK}_f, \quad \mathbf{K}_s = \hdiv \tilde{\bK}_s, \\ & G_f = - \left( \invtr{\hFf} - \bbi \right) : \hat{\nabla} \hv_{f}, \quad G_s = - ( \invtr{\hFs} - \bbi ) : \hat{\nabla} \hu_{s}, \\ & \mathbf{H}_f^1 = -\tilde{\bK}_f \hn_{\Gamma} + \tk_s \hn_{\Gamma}, \quad \mathbf{H}_s^1 = \int_0^t \hv_{f}(\mathbf{X}, \tau) \d \tau, \quad \mathbf{H}^2 = - \tk_s \hn_{\Gs}, \\ & F_f^1 = \hdiv \tF_f, \quad F_s^1 = \hdiv \tF_s - \beta \hc_s \left( 1 + \frac{\gamma}{\hr_s} \hc_s \right) - \frac{3 \hat{\nabla} \hat{g}}{\hat{g}} \cdot \left( \hD_s \inv{\hF_{s}} \invtr{\hF_{s}} \hat{\nabla} \hc_s \right), \\ & F_f^2 = \hD_s \hat{\nabla} \hc_s \cdot \hn_{\Gamma} - \jump{\tilde{F}} \cdot \hn_{\Gamma}, \quad F_s^2 = \zeta \jump{\hat{c}} - \tF_s \cdot \hn_{\Gamma},
\\ & F^3 = -\tF_s \cdot \hn_{\Gs}, \quad F^4 = - \frac{\gamma \beta}{\hr_s} \hc_s (\hcs^* - \hat{c}_*^0), \quad F^5 = - \frac{\gamma \beta}{3 \hr_s} \hc_s \left( \hat{g} - \hat{g}^0 \right), \end{align*} with \begin{align*} & \tk_f = - \hpi_f ( \inv{\hFf} - \bbi ) + \nu_f \big( \inv{\hF_{f}} \hat{\nabla} \hv_{f} + \tran{\hat{\nabla}} \hv_{f} \invtr{\hF_{f}} \big) ( \invtr{\hFf} - \bbi ) \\ & \qquad \quad + \nu_f \big( (\inv{\hFf} - \bbi) \hat{\nabla} \hv_{f} + \tran{\hat{\nabla}} \hv_{f} (\invtr{\hFf} - \bbi) \big), \\ & \tk_s = - \hat{g}^3 \hpi_s ( \inv{\hFs} - \bbi ) - (\hat{g}^3 - (\hat{g}^0)^3) \hpi_s \mathbb{I} - ((\hat{g}^0)^3 - 1) \hpi_s \mathbb{I} \\ & \qquad \quad + DW(\hF_{s}) \big((\hat{g}^0)^2 - \hat{g}^2\big) + DW(\hF_{s}) \big(1 - (\hat{g}^0)^2\big) + \hat{g}^2\big(DW(\hF_{s}) - DW(\hF_{s} / \hat{g})\big) \\ & \qquad \quad + \int_{0}^1 D^3 W\big((1-s)\mathbb{I} + s\hF_{s}\big)(1-s) \d s (\hF_{s} - \mathbb{I}) (\hF_{s} - \mathbb{I}), \\ & \tilde{F} = \hat{D} \big( \inv{\hat{\bF}} \invtr{\hat{\bF}} - \mathbb{I} \big) \hat{\nabla} \hat{c}. \end{align*} \begin{remark}[Discussions on the linearization]\ \label{remark:linearization-elastic} \begin{enumerate} \item The linearization can be derived as follows. Let $ h(s) := DW((1-s)\mathbb{I} + s\mathbf{F}) $. Then $ h(0) = DW(\mathbb{I}) $ and $ h(1) = DW(\mathbf{F}) $. Since \begin{equation*} h(1) = h(0) + h'(0) + \int_0^1 h''(s)(1-s) \,\d s, \end{equation*} it follows from \ref{assumptions:DW(I)} that \begin{equation*} DW(\mathbf{F}) = D^2 W(\mathbb{I}) (\mathbf{F} - \mathbb{I}) + \mathbf{R}(\mathbf{F}), \end{equation*} where \begin{equation*} \mathbf{R}(\mathbf{F}) := \int_0^1 D^3 W((1-s)\mathbb{I} + s\mathbf{F})(1-s) \,\d s (\mathbf{F} - \mathbb{I}) (\mathbf{F} - \mathbb{I}). \end{equation*} \item The linearization is similar to the one in \cite{AL2021a}, but with several modifications, one of which is deduced above.
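As a purely illustrative example for the expansion in the previous item (this toy density is our own and is not frame-indifferent), consider $ W(\mathbf{F}) = \frac{\mu_s}{2} \abs{\mathbf{F} - \mathbb{I}}^2 $ with a constant $ \mu_s > 0 $. Then $ DW(\mathbf{F}) = \mu_s (\mathbf{F} - \mathbb{I}) $, so $ DW(\mathbb{I}) = 0 $ and $ D^3 W \equiv 0 $, and the expansion is exact:
\begin{equation*}
DW(\mathbf{F}) = D^2 W(\mathbb{I}) (\mathbf{F} - \mathbb{I}) = \mu_s (\mathbf{F} - \mathbb{I}), \qquad \mathbf{R}(\mathbf{F}) = 0.
\end{equation*}
For a genuinely nonlinear $ W $, the remainder $ \mathbf{R}(\mathbf{F}) $ is quadratic in $ \mathbf{F} - \mathbb{I} $, which reflects its smallness for deformation gradients near the identity.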
It is possible to have other kinds of linearizations, but we remark that in the present paper the divergence structure of $ \hdiv \tilde{\bK}_s $ plays an essential role when we prove the linear theory and estimate it in a particular function space; see Corollary \ref{coro:f=divF} and Proposition \ref{prop:Lipschitzestimate} later. Moreover, for the solid mass balance equation \eqref{Eqs:solid-mass-Eulerian} (equivalently $ \Div \bv_{s} = {f_s^g}/{\rho_s} $ since the density is constant), we integrate it over $ (0,t) $ as in \eqref{Eqs:solidmass-Lagrangian-linear} to keep the Stokes-type structure for the elastic equation with respect to the displacement $ \hu_{s} $. \item Noticing that the continuity conditions \eqref{Eqs:vjump-Lagrangian} on the interface are formally separated into \eqref{Eqs:fluidgamma-Lagrangian-linear} and \eqref{Eqs:solidgamma-Lagrangian-linear} after the linearization, we remark here that this is for the sake of analysis, due to the mismatch of the regularity on $ \Gamma $. For instance, if one replaces \eqref{Eqs:fluidgamma-Lagrangian-linear} with the boundary condition $ \hv_{f} = \partial_t \hu_{s} $, one has no chance to solve the fluid part, since we have no first-order temporal derivative information for the solid displacement $ \hu_{s} $. \end{enumerate} \end{remark} \begin{remark} \label{remark:G-form} Analogously to \cite[Remark 2.3]{AL2021a}, $ G $ also possesses the form \begin{equation} \label{Eqs:G-form} \begin{aligned} G_f & = - \hdiv \big( ( \inv{\hFf} - \bbi ) \hv_{f} \big), \quad G_s = - \hdiv \big( ( \inv{\hFs} - \bbi ) \hu_{s} \big) + \hu_{s} \cdot \hdiv \invtr{\hF_{s}}, \end{aligned} \end{equation} with the help of the Piola identity $ \hdiv(\hat{J} \invtr{\hat{\bF}}) = 0 $. \end{remark} \section{Analysis of the linear systems} \label{sec:analysis-linear} In this section, we are devoted to solving the linear systems associated with \eqref{Eqs:fullsystem-Lagrangian-linear}.
\subsection{Nonstationary Stokes equation} \label{sec:nonsta-Stokes} Let $ \Omega $ be a bounded domain with a boundary $ \partial \Omega $ of class $ C^{3-} $, $ T > 0 $. We consider the nonstationary Stokes equation \begin{equation} \label{Eqs:linear-nonsta} \begin{alignedat}{3} \rho \partial_t \mathbf{u} - \Div S_{\mu}(\mathbf{u}, \pi) & = \mathbf{f}, && \quad \text{in } \Omega \times (0, T), \\ \Div \mathbf{u} & = g, && \quad \text{in } \Omega \times (0, T), \\ S_{\mu}(\mathbf{u}, \pi) \mathbf{n} & = \mathbf{h}, && \quad \text{on } \partial \Omega \times (0, T), \\ \rv{\mathbf{u}}_{t = 0} & = \mathbf{u}_0, && \quad \text{in } \Omega, \end{alignedat} \end{equation} where $ S_{\mu}(\mathbf{u}, \pi) = - \pi \mathbb{I} + \mu (\nabla \mathbf{u} + \tran{\nabla} \mathbf{u}) $, $ \rho, \mu > 0 $ are the constant density and viscosity, and $ \mathbf{n} $ denotes the unit outer normal vector on $ \partial \Omega $. Then we have the following solvability and regularity result, which can be adapted directly from e.g. Abels \cite[Theorem 1.1]{Abels2010}, Bothe--Pr\"uss \cite[Theorem 4.1]{BP2007}, Pr\"uss--Simonett \cite[Theorem 7.3.1]{PS2016} by the argument of Abels--Liu \cite[Proposition A.1]{AL2021a}. \begin{theorem} \label{thm:linear-Stoke-nonstationary} Let $ 3 < q < \infty $, $ T_0 > 0 $. Suppose that the initial data is $ \mathbf{u}_0 \in \W{2 - 2/q}(\Omega)^3 $ satisfying the compatibility conditions \begin{gather*} \Div \mathbf{u}_0 = \rv{g}_{t = 0}, \quad \rv{\mathcal{P}_{\mathbf{n}}(\mu (\nabla \mathbf{u}_0 + \tran{\nabla} \mathbf{u}_0) \mathbf{n})}_{\partial \Omega} = \rv{\mathcal{P}_{\mathbf{n}} \mathbf{h}}_{t = 0}, \end{gather*} where $ \mathcal{P}_{\mathbf{n}} := \mathbb{I} - \mathbf{n} \otimes \mathbf{n} $ denotes the tangential projection onto $ \partial \Omega $.
For given data $ (\mathbf{f}, g, \mathbf{h}) $ with \begin{gather*} \mathbf{f} \in \mathbb{F}_{\mathbf{f}}(T) := \Lq{q}(0,T; \Lq{q}(\Omega)^3), \\ g \in \mathbb{F}_g(T) := \Lq{q}(0,T; \W{1}(\Omega)) \cap\W{1}(0,T; \W{- 1}(\Omega)), \\ \mathbf{h} \in \mathbb{F}_\mathbf{h}(T) := \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\partial \Omega)^3) \cap \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\partial \Omega)^3), \end{gather*} \eqref{Eqs:linear-nonsta} admits a unique solution $ (\mathbf{u}, \pi) \in \mathbb{E}(T) := \mathbb{E}_\mathbf{u}(T) \times \mathbb{E}_\pi(T) $ where \begin{gather*} \mathbb{E}_\mathbf{u}(T) := \Lq{q}(0,T; \W{2}(\Omega)^3) \cap \W{1}(0,T; \Lq{q}(\Omega)^3), \\ \mathbb{E}_\pi(T) := \left\{ \pi \in \Lq{q}(0,T; \W{1}(\Omega)): \rv{\pi}_{\partial \Omega} \in \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\partial \Omega)) \cap \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\partial \Omega)) \right\}. \end{gather*} Moreover, there is a constant $ C > 0 $ independent of $ \mathbf{f}, g, \mathbf{h}, \mathbf{u}_0 $ and $ T $, such that for $ 0 < T \leq T_0 $ \begin{equation*} \norm{(\mathbf{u}, \pi)}_{\mathbb{E}(T)} \leq C \left( \norm{\mathbf{f}}_{\mathbb{F}_{\mathbf{f}}(T)} + \norm{g}_{\mathbb{F}_g(T)} + \norm{\mathbf{h}}_{\mathbb{F}_\mathbf{h}(T)} + \norm{\mathbf{u}_0}_{\W{2 - 2/q}(\Omega)} \right). \end{equation*} \end{theorem} \begin{remark} In our case, there will be a term of the form $ (D^2 W(\mathbb{I}) \nabla \mathbf{v} - p \mathbb{I}) \mathbf{n} $ in the third equation of \eqref{Eqs:linear-nonsta} with certain regularity. This is not a problem: given $ (\mathbf{v},p) $ such that $ (D^2 W(\mathbb{I}) \nabla \mathbf{v} - p \mathbb{I}) \mathbf{n} $ is endowed with the same regularity as $ \mathbf{h} $, one can solve the equation with $ \mathbf{h} = (D^2 W(\mathbb{I}) \nabla \mathbf{v} - p \mathbb{I}) \mathbf{n} $ and $ (\mathbf{f}, g, \mathbf{u}_0) = 0 $ by Theorem \ref{thm:linear-Stoke-nonstationary} and add this solution to the one above to recover the general case.
\end{remark} \subsection{Quasi-stationary Stokes equation with mixed boundary conditions} \label{sec:sta-Stokes-mixed} Let $ \Omega $ be a bounded domain with a boundary $ \partial \Omega $ of class $ C^{3-} $, $ \partial \Omega = \Gamma_1 \cup \Gamma_2 $ consisting of two closed, disjoint, nonempty components. Consider the generalized stationary Stokes-type equation \begin{equation} \label{Eqs:linear-sta} \begin{alignedat}{3} - \Div (D^2 W(\mathbb{I}) \nabla \mathbf{u}) + \nabla \pi & = \mathbf{f}, && \quad \text{in } \Omega, \\ \Div \mathbf{u} & = g, && \quad \text{in } \Omega, \\ \mathbf{u} & = \mathbf{h}^1, && \quad \text{on } \Gamma_1, \\ (D^2 W(\mathbb{I}) \nabla \mathbf{u} - \pi\mathbb{I}) \mathbf{n} & = \mathbf{h}^2, && \quad \text{on } \Gamma_2, \end{alignedat} \end{equation} where $ \mathbf{n} $ denotes the unit outer normal vector on $ \partial \Omega $. $ W : \mathbb{R}^{3 \times 3} \rightarrow \mathbb{R}_+ $ is the elastic energy density such that \ref{assumption:W-energy-density} holds. Before going to the quasi-stationary case, we first investigate the weak and strong solutions in the $ L^q $-class for the stationary problem \eqref{Eqs:linear-sta}. \begin{theorem} \label{thm:linear-sta} Let $ 1 < q < \infty $ and $ s \in \{0, -1\} $. Given $ \mathbf{f} \in W_{q, \Gamma_1}^{s}(\Omega)^3 $, $ g \in \W{1 + s}(\Omega) $, $ \mathbf{h}^1 \in \W{2 + s - 1/q}(\Gamma_1)^3 $ and $ \mathbf{h}^2 \in \W{1 + s - 1/q}(\Gamma_2)^3 $. Then problem \eqref{Eqs:linear-sta} admits a unique solution $ (\mathbf{u}, \pi) \in \W{2 + s}(\Omega)^3 \times \W{1 + s}(\Omega) $. Moreover, there is a constant $ C > 0 $ such that \begin{equation*} \norm{\mathbf{u}}_{\W{2 + s}(\Omega)^3} + \norm{\pi}_{\W{1 + s}(\Omega)} \leq C \Big( \norm{g}_{\W{1 + s}(\Omega)} + \norm{\mathbf{h}^1}_{\W{2 + s - \frac{1}{q}}(\Gamma_1)^3} + \NORM{\mathcal{F}}_{s} \Big).
\end{equation*} Here $ \NORM{\mathcal{F}}_{s} := \norm{\mathbf{f}}_{\Lq{q}(\Omega)^3} + \norm{\mathbf{h}^2}_{\W{1 - \frac{1}{q}}(\Gamma_2)^3} $ if $ s = 0 $ and when $ s = -1 $, \begin{equation*} \NORM{\mathcal{F}}_{s} := \sup_{\norm{\mathbf{w}}_{W_{q',\Gamma_1}^{1}(\Omega)^3} = 1} \Big( \inner{\mathbf{f}}{\mathbf{w}}_{W_{q,\Gamma_1}^{-1}(\Omega)^3 \times W_{q',\Gamma_1}^{1}(\Omega)^3} + \inner{\mathbf{h}^2}{\rv{\mathbf{w}}_{\Gamma_2}}_{\W{- \frac{1}{q}}(\Gamma_2)^3 \times W_{q'}^{1 - \frac{1}{q'}}(\Gamma_2)^3} \Big). \end{equation*} \end{theorem} \begin{proof} First let $ s = 0 $; we reduce the system \eqref{Eqs:linear-sta} to the case $ (g, \mathbf{h}^1, \mathbf{h}^2) = 0 $. To this end, take a cutoff function $ \psi \in C_0^\infty((0,T)) $ such that \begin{equation*} \int_{T/4}^{3T/4} \psi(t) \,\d t = 1. \end{equation*} Then \begin{gather*} \psi(t) g \in \Lq{p}(0,T; \W{1}(\Omega)) \cap W_p^{1}(0,T; W_{q, \Gamma_2}^{-1}(\Omega)), \\ \psi(t) \mathbf{h}^j \in \Lq{p}(0,T; \W{3 - j - \frac{1}{q}}(\Gamma_j)^3) \cap W_p^{\frac{1}{j} - \frac{1}{2q}}(0,T; \Lq{q}(\Gamma_j)^3), \ j = 1,2.
\end{gather*} In view of Remark \ref{remark:epllipticity} and the maximal $ L^q $-regularity result for the generalized Stokes problems (e.g., \cite[Theorem 4.1]{BP2007}, Pr\"uss--Simonett \cite[Theorem 7.3.1]{PS2016}), we solve the system \begin{equation*} \begin{alignedat}{3} \partial_t \mathbf{u} - \Div (D^2 W(\mathbb{I}) \nabla \mathbf{u}) + \nabla \pi & = 0, && \quad \text{in } \Omega \times (0,T), \\ \Div \mathbf{u} & = \psi(t)g, && \quad \text{in } \Omega \times (0,T), \\ \mathbf{u} & = \psi(t)\mathbf{h}^1, && \quad \text{on } \Gamma_1 \times (0,T), \\ (D^2 W(\mathbb{I}) \nabla \mathbf{u} - \pi\mathbb{I}) \mathbf{n} & = \psi(t)\mathbf{h}^2, && \quad \text{on } \Gamma_2 \times (0,T), \\ \rv{\mathbf{u}}_{t = 0} & = 0, && \quad \text{in } \Omega, \end{alignedat} \end{equation*} with $ 3 < p < \infty $, $ 1 < q < \infty $ to get a solution pair $ (\tilde{\mathbf{u}}, \tilde{\pi}) $ fulfilling \begin{equation*} \tilde{\mathbf{u}} \in W_p^{1}(0,T; \Lq{q}(\Omega)^3) \cap \Lq{p}(0,T; \W{2}(\Omega)^3), \quad \tilde{\pi} \in \Lq{p}(0,T; \W{1}(\Omega)). \end{equation*} Then one infers \begin{equation*} (\bar{\mathbf{u}}, \bar{\pi}) := \int_{T/4}^{3T/4} (\tilde{\mathbf{u}}, \tilde{\pi})(t) \,\d t \in \W{2}(\Omega)^3 \times \W{1}(\Omega), \end{equation*} and \begin{equation*} \Div \bar{\mathbf{u}} = g, \quad \text{in } \Omega, \quad \rv{\bar{\mathbf{u}}}_{\Gamma_1} = \mathbf{h}^1, \quad \text{on } \Gamma_1, \quad \rv{(D^2 W(\mathbb{I}) \nabla \bar{\mathbf{u}} - \bar{\pi}\mathbb{I}) \mathbf{n}}_{\Gamma_2} = \mathbf{h}^2, \quad \text{on } \Gamma_2. \end{equation*} Subtracting $ (\bar{\mathbf{u}}, \bar{\pi}) $ from the solution to \eqref{Eqs:linear-sta}, we are in a position to solve \eqref{Eqs:linear-sta} with $ (g, \mathbf{h}^1, \mathbf{h}^2) = 0 $, for which we refer to Theorem \ref{thm:sta-stokes-lambda} with $ \lambda = 0 $. Note that the case $ \lambda = 0 $ is applicable due to Remark \ref{remark:lambda=0}.
Now we consider $ s = -1 $, namely the weak solution. In this case we only reduce $ (g, \mathbf{h}^1) $ to zero, since the Neumann boundary trace needs to make sense on $ \Gamma_2 $ correctly. Consider the Stokes equation with Dirichlet boundary conditions \begin{equation*} \begin{alignedat}{3} - \Delta \mathbf{u} + \nabla \pi & = 0, && \quad \text{in } \Omega, \\ \Div \mathbf{u} & = g, && \quad \text{in } \Omega, \\ \mathbf{u} & = \mathbf{h}^1, && \quad \text{on } \Gamma_1, \\ \mathbf{u} & = \mathbf{c}, && \quad \text{on } \Gamma_2, \end{alignedat} \end{equation*} where $ \mathbf{c} $ is a constant vector chosen such that the compatibility condition \begin{equation*} \int_{\Omega} g \,\d x = \int_{\Gamma_1} \mathbf{h}^1 \cdot \mathbf{n} \,\d \mathcal{H}^2 + \int_{\Gamma_2} \mathbf{c} \cdot \mathbf{n} \,\d \mathcal{H}^2 \end{equation*} holds; here $ \mathcal{H}^d $ with $ d \in \mathbb{N}_+ $ denotes the $ d $-dimensional Hausdorff measure. It follows from the weak solution theory for the stationary Stokes equation, see e.g. Galdi--Simader--Sohr \cite[Section 5, (5.12)]{GSS2005} in Sobolev spaces, Schumacher \cite[Theorem 4.3]{Schumacher2009} in weighted Bessel potential spaces, that one obtains a unique solution denoted by $ (\bar{\mathbf{u}}, \bar{\pi}) $ such that \begin{equation*} (\bar{\mathbf{u}}, \bar{\pi}) \in \W{1}(\Omega)^3 \times \Lq{q}(\Omega), \end{equation*} and \begin{equation*} \Div \bar{\mathbf{u}} = g, \quad \text{in } \Omega, \quad \rv{\bar{\mathbf{u}}}_{\Gamma_1} = \mathbf{h}^1, \quad \text{on } \Gamma_1. \end{equation*} Then one can subtract $ (\bar{\mathbf{u}}, \bar{\pi}) $ from the solution of \eqref{Eqs:linear-sta} and solve \eqref{Eqs:linear-sta} with reduced data $ (g, \mathbf{h}^1) = 0 $ and modified $ (\mathbf{f}, \mathbf{h}^2) $ (not to be relabeled). The idea of the proof is to introduce a $ \Lq{q} $-class of \textit{very weak solution} (see e.g.
\cite{GSS2005,Schumacher2009}), so that one can derive a solution with certain regularity in $ \W{1}(\Omega) $ by complex interpolation, see e.g. Schumacher \cite{Schumacher2009} for the stationary Stokes equation in fractional Bessel potential spaces. Define the solenoidal space \begin{equation*} \Lqs{q}(\Omega) := \left\{\mathbf{u} \in \Lq{q}(\Omega)^3: \Div \mathbf{u} = 0, \rv{\mathbf{n} \cdot \mathbf{u}}_{\Gamma_1} = 0\right\}. \end{equation*} For $ 1 < q, q' < \infty $ satisfying $ 1/q + 1/q' = 1 $, we define a generalized Stokes-type operator with respect to \eqref{Eqs:linear-sta} as \begin{equation*} \mathcal{A}_q(\mathbf{u}) := \mathbb{P}_q \big( - \Div (D^2 W(\mathbb{I}) \nabla \mathbf{u}) \big) \text{ for all } \mathbf{u} \in \mathcal{D}(\mathcal{A}_q), \end{equation*} with \begin{equation*} \mathcal{D}(\mathcal{A}_q) = \left\{ \mathbf{u} \in \W{2}(\Omega)^3 \cap \Lqs{q}(\Omega): \rv{\mathbf{u}}_{\Gamma_1} = 0, \ \rv{\mathcal{P}_{\mathbf{n}}((D^2 W(\mathbb{I}) \nabla \mathbf{u}) \mathbf{n})}_{\Gamma_2} = 0 \right\}, \end{equation*} where $ \mathbb{P}_q $ denotes the \textit{Helmholtz--Weyl projection} onto $ \Lqs{q}(\Omega) $, see e.g. \cite[Appendix A]{Abels2010} for the existence of the projection with mixed boundary conditions. $ \mathcal{P}_{\mathbf{n}} := \mathbb{I} - \mathbf{n} \otimes \mathbf{n} $ is the tangential projection onto $ \partial \Omega $. By the result of $ s = 0 $ we see that \begin{equation*} \mathcal{A}_{q} : \mathcal{D}(\mathcal{A}_{q}) \rightarrow \Lqs{q}(\Omega) \end{equation*} is well-defined and bijective. Then one knows that its dual operator \begin{equation*} \mathcal{A}_{q'}^* : \Lqs{q'}(\Omega)' \rightarrow \mathcal{D}(\mathcal{A}_{q'})' \end{equation*} is bijective as well, which gives rise to the very weak solution. 
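To fix the terminology, we spell out the notion used here (our formulation, adapted from the cited works to the present operator): given a functional $ \mathcal{F} \in \mathcal{D}(\mathcal{A}_{q'})' $, an element $ \mathbf{u} \in \Lqs{q}(\Omega) $ is called a very weak solution if
\begin{equation*}
\inner{\mathbf{u}}{\mathcal{A}_{q'} \mathbf{w}}_{\Lqs{q}(\Omega) \times \Lqs{q'}(\Omega)} = \inner{\mathcal{F}}{\mathbf{w}}_{\mathcal{D}(\mathcal{A}_{q'})' \times \mathcal{D}(\mathcal{A}_{q'})} \quad \text{for all } \mathbf{w} \in \mathcal{D}(\mathcal{A}_{q'}),
\end{equation*}
so that the bijectivity of $ \mathcal{A}_{q'}^* $ is precisely the unique very weak solvability for every such $ \mathcal{F} $.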
Note that $ \mathcal{A}_{q} $ and $ \mathcal{A}_{q'}^* $ are consistent, namely, \begin{equation*} \begin{aligned} & \inner{\mathcal{A}_{q'}^* \mathbf{u}}{\mathbf{w}}_{\mathcal{D}(\mathcal{A}_{q'})' \times \mathcal{D}(\mathcal{A}_{q'})} = \inner{\mathbf{u}}{\mathcal{A}_{q'} \mathbf{w}}_{\Lqs{q}(\Omega) \times \Lqs{q'}(\Omega)} \\ & \qquad = \int_\Omega \nabla \mathbf{u} : D^2 W(\mathbb{I}) \nabla \mathbf{w} \,\d x = \int_\Omega D^2 W(\mathbb{I}) \nabla \mathbf{u} : \nabla \mathbf{w} \,\d x = \inner{\mathcal{A}_{q} \mathbf{u}}{\mathbf{w}}_{\Lqs{q}(\Omega) \times \Lqs{q'}(\Omega)}, \end{aligned} \end{equation*} for $ \mathbf{u} \in \mathcal{D}(\mathcal{A}_{q}) \subseteq \Lqs{q}(\Omega) $, $ \mathbf{w} \in \mathcal{D}(\mathcal{A}_{q'}) \subseteq \Lqs{q'}(\Omega) $, where $ (D^2 W(\mathbb{I}))_{ij}^{kl} = (D^2 W(\mathbb{I}))_{kl}^{ij} $, $ i,j,k,l = 1,2,3 $. Then by the complex interpolation of operators, e.g. \cite[Theorem 2.6]{Schumacher2009}, we record that \begin{equation*} \mathcal{A}_q : \big(\Lqs{q}(\Omega), \mathcal{D}(\mathcal{A}_{q})\big)_{\left[\frac{1}{2}\right]} \rightarrow \big(\Lqs{q}(\Omega), \mathcal{D}(\mathcal{A}_{q'})'\big)_{\left[\frac{1}{2}\right]} \end{equation*} is bijective. Since $ \mathcal{A}_q $ admits a bounded $ \mathcal{H}^\infty $-calculus and has bounded imaginary powers, see e.g. \cite[Theorem 1.1]{Pruess2019}, complex interpolation methods can be used to describe domains of fractional power operators. By virtue of \cite[Theorem 1.1]{Pruess2019} and \cite[Theorem 2.6]{Schumacher2009}, one obtains \begin{gather*} \big(\Lqs{q}(\Omega), \mathcal{D}(\mathcal{A}_{q})\big)_{[\frac{1}{2}]} = \mathcal{D}(\mathcal{A}_{q}^{1/2}) = W_{\sigma,\Gamma_1}^{1,q}(\Omega), \\ \big(\Lqs{q}(\Omega), \mathcal{D}(\mathcal{A}_{q'})'\big)_{\left[\frac{1}{2}\right]} = (\Lqs{q}(\Omega)', \mathcal{D}(\mathcal{A}_{q'}))_{\left[\frac{1}{2}\right]}' = \mathcal{D}(\mathcal{A}_{q'}^{1/2})' = W_{\sigma,\Gamma_1}^{-1,q}(\Omega). 
\end{gather*} Consequently, \begin{equation*} \mathcal{A}_q : W_{\sigma,\Gamma_1}^{1,q}(\Omega) \rightarrow W_{\sigma,\Gamma_1}^{-1,q}(\Omega) \end{equation*} is bijective, which implies that there exists a unique solution $ \mathbf{u} \in W_{\sigma,\Gamma_1}^{1,q}(\Omega) $ such that \begin{equation*} \inner{\mathcal{A}_{q} \mathbf{u}}{\mathbf{w}} = \inner{\mathcal{F}}{\mathbf{w}} \text{ for all } \mathbf{w} \in W_{\sigma,\Gamma_1}^{1,q'}(\Omega), \end{equation*} with $ \mathcal{F} \in W_{\sigma,\Gamma_1}^{-1,q}(\Omega) $ defined by \begin{equation*} \inner{\mathcal{F}}{\mathbf{w}} := \inner{\mathbf{f}}{\mathbf{w}}_{W_{q,\Gamma_1}^{-1}(\Omega) \times W_{q',\Gamma_1}^{1}(\Omega)} + \inner{\mathbf{h}^2}{\rv{\mathbf{w}}_{\Gamma_2}}_{\W{- \frac{1}{q}}(\Gamma_2) \times W_{q'}^{1 - \frac{1}{q'}}(\Gamma_2)}, \end{equation*} for all $ \mathbf{w} \in W_{\sigma,\Gamma_1}^{1,q'}(\Omega) $. Moreover, by means of the open mapping theorem, one immediately deduces the estimate \begin{equation*} \norm{\mathbf{u}}_{W_{\sigma, \Gamma_1}^{1,q}(\Omega)^3} \leq C \NORM{\mathcal{F}}_{-1}, \end{equation*} in which \begin{equation*} \NORM{\mathcal{F}}_{-1} := \sup_{\norm{\mathbf{w}}_{W_{q',\Gamma_1}^{1}(\Omega)^3} = 1} \Big( \inner{\mathbf{f}}{\mathbf{w}}_{W_{q,\Gamma_1}^{-1}(\Omega)^3 \times W_{q',\Gamma_1}^{1}(\Omega)^3} + \inner{\mathbf{h}^2}{\rv{\mathbf{w}}_{\Gamma_2}}_{\W{- \frac{1}{q}}(\Gamma_2)^3 \times W_{q'}^{1 - \frac{1}{q'}}(\Gamma_2)^3} \Big).
\end{equation*} Up to now, one still needs to recover the pressure in the very weak sense, i.e., solving \begin{equation} \label{Eqs:Laplace-W-1} \int_{\Omega} \pi \Delta \varphi \,\d x = \inner{F}{\varphi}, \quad \forall\, \varphi \in \mathcal{D}(\Delta_{q',DN}), \end{equation} where \begin{gather*} \inner{F}{\varphi} := - \inner{\mathbf{f}}{\nabla \varphi} + \int_{\Omega} D^2 W(\mathbb{I}) \nabla \mathbf{u} : \nabla^2 \varphi + \inner{\mathbf{h}^2 \cdot \mathbf{n}}{\rv{\ptial{\mathbf{n}} \varphi}_{\Gamma_2}}, \\ \mathcal{D}(\Delta_{q',DN}) := \left\{ \psi \in W_{q'}^2(\Omega) : \rv{\ptial{\mathbf{n}} \psi}_{\Gamma_1} = 0, \ \rv{\psi}_{\Gamma_2} = 0 \right\}. \end{gather*} Since $ \mathbf{f} \in W_{q, \Gamma_1}^{-1}(\Omega)^3 $, $ \mathbf{u} \in W_{q,\Gamma_1}^{1}(\Omega)^3 $ and $ \mathbf{h}^2 \in \W{- 1/q}(\Gamma_2)^3 $, it is easy to verify that the functional $ F $ defined above belongs to $ \mathcal{D}(\Delta_{q',DN})' $. For every $ u \in \Lq{q'}(\Omega) $, it follows from \cite[Corollary 7.4.5]{PS2016} that there exists a unique solution $ \varphi(u) \in \mathcal{D}(\Delta_{q',DN}) $ satisfying $ \Delta \varphi = u $. Now we define $ \pi \in \Lq{q}(\Omega) $ by duality as a linear functional on $ \Lq{q'}(\Omega) $ acting for every $ u $ as \begin{equation} \label{Eqs:dualfunctional} \inner{\pi}{u} = \inner{F}{\varphi}. \end{equation} Indeed $ \pi $ is the very weak solution we are looking for, since for all $ \varphi \in \mathcal{D}(\Delta_{q',DN}) $, we have \begin{equation*} \inner{\pi}{\Delta \varphi} = \inner{\pi}{u} = \inner{F}{\varphi}. \end{equation*} The uniqueness can be shown by letting $ F = 0 $ in \eqref{Eqs:dualfunctional}, so that for all $ u \in \Lq{q'}(\Omega) $, $ \inner{\pi}{u} = 0 $, which implies $ \pi = 0 $ a.e. in $ \Omega $. Then we have the estimate \begin{equation*} \norm{\pi}_{\Lq{q}(\Omega)} \leq C \norm{F}_{\mathcal{D}(\Delta_{q',DN})'} \leq C \Big( \norm{\mathbf{u}}_{\W{1}(\Omega)^3} + \NORM{\mathcal{F}}_{-1} \Big).
\end{equation*} This completes the proof. \end{proof} In Theorem \ref{thm:linear-sta} we considered general data. In applications, however, the right-hand side terms sometimes have a special structure, which helps to derive the estimate in a concise form. \begin{corollary} \label{coro:f=divF} In the case of $ s = -1 $, if there is an $ \mathbf{F} \in \Lq{q}(\Omega)^{3 \times 3} $ such that \begin{equation*} \mathbf{f} = \Div \mathbf{F}, \text{ in } \Omega, \quad \mathbf{h}^2 = - \mathbf{F} \mathbf{n}, \text{ on } \Gamma_2 \end{equation*} holds in the sense of distributions, i.e., \begin{equation} \label{Eqs:f=divF} \inner{\mathbf{f}}{\mathbf{w}}_{W_{q,\Gamma_1}^{-1}(\Omega)^3 \times W_{q',\Gamma_1}^{1}(\Omega)^3} + \inner{\mathbf{h}^2}{\rv{\mathbf{w}}_{\Gamma_2}}_{\W{- \frac{1}{q}}(\Gamma_2)^3 \times W_{q'}^{1 - \frac{1}{q'}}(\Gamma_2)^3} = \inner{\mathbf{F}}{\nabla \mathbf{w}}, \end{equation} for all $ \mathbf{w} \in W_{\sigma,\Gamma_1}^{1,q'}(\Omega) $, then the solution $ (\mathbf{u}, \pi) $ in Theorem \ref{thm:linear-sta} satisfies \begin{equation*} \norm{\mathbf{u}}_{\W{1}(\Omega)^3} + \norm{\pi}_{\Lq{q}(\Omega)} \leq C \Big( \norm{g}_{\Lq{q}(\Omega)} + \norm{\mathbf{h}^1}_{\W{1 - \frac{1}{q}}(\Gamma_1)^3} + \norm{\mathbf{F}}_{\Lq{q}(\Omega)^{3 \times 3}} \Big). \end{equation*} \end{corollary} \begin{proof} On account of \eqref{Eqs:f=divF} and the definition of $ \NORM{\mathcal{F}}_{-1} $ above with respect to $ (\mathbf{f}, \mathbf{h}^2) $, one has \begin{equation*} \NORM{\mathcal{F}}_{-1} \leq \norm{\mathbf{F}}_{\Lq{q}(\Omega)^{3 \times 3}}, \end{equation*} which gives the desired estimate. \end{proof} \begin{remark} In fact, Theorem \ref{thm:linear-sta} can be generalized to $ s \in (-2,0) $ by employing complex interpolation.
Namely, since $ \mathcal{A}_q $ admits bounded imaginary powers, we obtain the domains of all fractional powers by complex interpolation \begin{equation*} \mathcal{D}(\mathcal{A}_{q}^{\theta}) = \big(\Lqs{q}(\Omega), \mathcal{D}(\mathcal{A}_{q})\big)_{[\theta]}, \quad 0 < \theta < 1. \end{equation*} More details can be found in e.g. \cite{Abels2010,Pruess2019,Schumacher2009}. \end{remark} Combining Theorem \ref{thm:linear-sta} and Corollary \ref{coro:f=divF} with the temporal regularities, one arrives at the following theorem. \begin{theorem} \label{thm:linear-Stoke-stationary} Let $ 1 < q < \infty $ and $ T_0 > 0 $. Given $ (\mathbf{f}, g, \mathbf{h}^1, \mathbf{h}^2) $ such that \begin{gather*} \mathbf{f} \in \mathbb{F}_{\mathbf{f}}(T) := \Lq{q}(0,T; \Lq{q}(\Omega)^3) \cap \H{\frac{1}{2}}(0,T; W_{q, \Gamma_1}^{- 1}(\Omega)^3), \\ g \in \mathbb{F}_g(T) := \Lq{q}(0,T; \W{1}(\Omega)) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)), \\ \mathbf{h}^1 \in \mathbb{F}_{\mathbf{h}^1}(T) := \Lq{q}(0,T; \W{2 - \frac{1}{q}}(\Gamma_1)^3) \cap \H{\frac{1}{2}}(0,T; \W{1 - \frac{1}{q}}(\Gamma_1)^3), \\ \mathbf{h}^2 \in \mathbb{F}_{\mathbf{h}^2}(T) := \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\Gamma_2)^3) \cap \H{\frac{1}{2}}(0,T; \W{- \frac{1}{q}}(\Gamma_2)^3). \end{gather*} Then \eqref{Eqs:linear-sta} admits a unique solution $ (\mathbf{u}, \pi) \in \mathbb{E}(T) := \mathbb{E}_\mathbf{u}(T) \times \mathbb{E}_\pi(T) $ where \begin{gather*} \mathbb{E}_\mathbf{u}(T) := \Lq{q}(0,T; \W{2}(\Omega)^3) \cap \H{\frac{1}{2}}(0,T; \W{1}(\Omega)^3), \\ \mathbb{E}_\pi(T) := \Lq{q}(0,T; \W{1}(\Omega)) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)). \end{gather*} Assume, in addition, that there is an $ \mathbf{F} \in \Lq{q}(0,T; \W{1}(\Omega)^{3 \times 3}) \cap \H{1/2}(0,T; \Lq{q}(\Omega)^{3 \times 3}) $ such that \begin{equation*} \mathbf{f} = \Div \mathbf{F}, \text{ in } \Omega, \quad \mathbf{h}^2 = - \mathbf{F} \mathbf{n}, \text{ on } \Gamma_2 \end{equation*} holds in the sense of distributions.
Then there is a constant $ C > 0 $ independent of $ \mathbf{f}, g, \mathbf{h}^1, \mathbf{h}^2, T $, such that for $ 0 < T < \infty $ \begin{equation*} \norm{(\mathbf{u}, \pi)}_{\mathbb{E}(T)} \leq C \Big( \norm{\mathbf{F}}_{\Lq{q}(0,T; \W{1}(\Omega)^{3 \times 3}) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)^{3 \times 3})} + \norm{g}_{\mathbb{F}_g(T)} + \norm{\mathbf{h}^1}_{\mathbb{F}_{\mathbf{h}^1}(T)} \Big). \end{equation*} \end{theorem} Now given $ \gamma > 0 $ and $ c \in \Lq{q}(0,T; \W{2}(\Omega)) \cap \W{1}(0,T; \Lq{q}(\Omega)) $, one has the solvability of the system \begin{equation} \label{Eqs:linear-sta-c} \begin{alignedat}{3} - \Div (D^2 W(\mathbb{I}) \nabla \mathbf{u}) + \nabla \pi & = \mathbf{f}, && \quad \text{in } \Omega, \\ \Div \mathbf{u} - \gamma \int_0^t c \,\d \tau & = g, && \quad \text{in } \Omega, \\ \mathbf{u} & = \mathbf{h}^1, && \quad \text{on } \Gamma_1, \\ (D^2 W(\mathbb{I}) \nabla \mathbf{u} - \pi\mathbb{I}) \mathbf{n} & = \mathbf{h}^2, && \quad \text{on } \Gamma_2. \end{alignedat} \end{equation} \begin{corollary} Let $ \gamma > 0 $. Given $ c \in \Lq{q}(0,T; \W{2}(\Omega)) \cap \W{1}(0,T; \Lq{q}(\Omega)) $. Then under the assumptions of Theorem \ref{thm:linear-Stoke-stationary}, there is a unique solution $ (\mathbf{u}, \pi) $ of \eqref{Eqs:linear-sta-c} satisfying \begin{gather*} (\mathbf{u}, \pi) \in \mathbb{E}(T). \end{gather*} \end{corollary} \begin{proof} Similar to \cite[Corollary 4.1]{AL2021b}, the only point we need to check is $ \gamma \int_0^t c \d \tau \in \mathbb{F}_g(T) $, which is not hard to verify thanks to the regularity of $ c $. Then solving \eqref{Eqs:linear-sta} with $ (\mathbf{f}, \mathbf{h}^1, \mathbf{h}^2) = 0 $ and $ g $ substituted by $ \gamma \int_0^t c \d \tau \in \mathbb{F}_g(T) $, and adding the resulting solution to that of \eqref{Eqs:linear-sta}, one completes the proof.
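For the reader's convenience, the omitted verification can be sketched as follows (the claim about the embedding constant is our own remark): since $ \partial_t \int_0^t c \,\d \tau = c \in \Lq{q}(0,T; \W{2}(\Omega)) $, one has
\begin{equation*}
\gamma \int_0^t c \,\d \tau \in \W{1}(0,T; \W{1}(\Omega)) \hookrightarrow \Lq{q}(0,T; \W{1}(\Omega)) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)) = \mathbb{F}_g(T),
\end{equation*}
where the embedding $ \W{1}(0,T; \Lq{q}(\Omega)) \hookrightarrow \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)) $ holds with a constant uniform in $ T $, because the primitive vanishes at $ t = 0 $.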
\end{proof} \begin{remark} In view of Theorem \ref{thm:linear-Stoke-stationary} and Lemma \ref{lemma:trace-time-regularity}, we know that \begin{equation*} \rv{\big(D^2 W(\mathbb{I}) \nabla \mathbf{u} - \pi \mathbb{I}\big) \mathbf{n}}_{\Gamma_1} \in \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\Gamma_1)^3) \cap \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\Gamma_1)^3) \end{equation*} makes sense, which contributes to the nonlinear estimate for the fluid part. \end{remark} \subsection{Heat equations with Neumann boundary condition} \label{sec:heat-neumann} Let $ T > 0 $, $ \Omega \subset \mathbb{R}^3 $ be a bounded domain with $ \partial \Omega $ of class $ C^{3-} $. $ \nu $ denotes the unit outer normal vector on $ \partial \Omega $. Consider the problem \begin{equation}\label{Eqs:linear-heat} \begin{alignedat}{3} \partial_t u - D \Delta u & = f, && \quad \text{in } \Omega \times (0, T), \\ D \nabla u \cdot \nu & = g, && \quad \text{on } \partial \Omega \times (0, T), \\ \rv{u}_{t = 0} & = u_0, && \quad \text{in } \Omega, \end{alignedat} \end{equation} where $ D > 0 $ is a constant. $ u : \Bar{\Omega} \times (0,T) \rightarrow \mathbb{R} $ stands for the unknown of the system, for example the temperature, the concentration, etc. By e.g. \cite[Proposition A.2]{AL2021a}, the existence and uniqueness result reads as follows. \begin{theorem} \label{parabolic: theorem} Let $ 3 < q < \infty $ and $ T_0 > 0 $. Assume that $ u_0 \in \W{2 - 2/q}(\Omega) $ is such that the compatibility condition $ \rvm{D \nabla u_0 \cdot \nu}_{\partial \Omega} = \rvm{g}_{t = 0} $ holds. Given known functions $ (f, g) $ with regularity \begin{gather*} f \in \mathbb{F}_{f}(T) := \Lq{q}(0,T; \Lq{q}(\Omega)), \\ g \in \mathbb{F}_g(T) := \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\partial \Omega)) \cap \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\partial \Omega)).
\end{gather*} Then the parabolic equation \eqref{Eqs:linear-heat} admits a unique strong solution $ u \in \mathbb{E}(T) $ where \begin{equation*} \mathbb{E}(T) := \Lq{q}(0,T; \W{2}(\Omega)) \cap \W{1}(0,T; \Lq{q}(\Omega)). \end{equation*} Moreover, there is a constant $ C > 0 $ independent of $ f, g, u_0 $ and $ T $, such that for $ 0 < T < T_0 $ \begin{equation*} \norm{u}_{\mathbb{E}(T)} \leq C \left( \norm{f}_{\mathbb{F}_{f}(T)} + \norm{g}_{\mathbb{F}_g(T)} + \norm{u_0}_{\W{2 - 2/q}(\Omega)} \right). \end{equation*} \end{theorem} \subsection{Ordinary differential equations for foam cells and growth} Let $ \Omega $ be the domain defined in Section \ref{sec:heat-neumann}. Given a function $ f \in \mathbb{F}(T) := \Lq{q}(0,T; \W{1}(\Omega)) $, a constant $ \gamma > 0 $ and a function $ u_0 \in \W{1}(\Omega) $, $ u_0 \geq 0 $. Then by ordinary differential equation theory, \begin{equation} \label{Eqs:growth-linear} \begin{aligned} \partial_t u - \gamma w = f, \quad & \text{in}\ \Omega \times (0, T), \\ \rv{u}_{t = 0} = u_0, \quad & \text{in}\ \Omega, \end{aligned} \end{equation} admits a unique solution \begin{equation*} u \in \mathbb{E}(T) := \W{1}(0,T; \W{1}(\Omega)), \end{equation*} provided $ w \in \Lq{q}(0,T; \W{2}(\Omega)) \cap \W{1}(0,T; \Lq{q}(\Omega)) $. Moreover, for every $ T_0 > 0 $, there exists a constant $ C > 0 $ independent of $ f, w, u_0 $ and $ T $, such that for $ 0 < T < T_0 $ \begin{equation*} \norm{u}_{\mathbb{E}(T)} \leq C \left(\norm{f}_{\mathbb{F}(T)} + \norm{w}_{\Lq{q}(0,T; \W{2}(\Omega)) \cap \W{1}(0,T; \Lq{q}(\Omega))} + \norm{u_0}_{\W{1}(\Omega)}\right). \end{equation*} \section{Nonlinear well-posedness} \label{sec:nonlinear-wpd} We denote by $ \delta $ a universal positive function \begin{equation} \label{Eqs:deltaT} \delta : \mathbb{R}_+ \rightarrow \mathbb{R}_+ \text{ such that } \delta(t) \rightarrow 0_+, \text{ as } t \rightarrow 0_+.
\end{equation} The most common example in the present paper is $ \delta(t) = t^\theta $ for different $ \theta > 0 $ from line to line. \subsection{Auxiliary lemmas} \label{sec:useful-lemmas} In this section, we collect some lemmas which we shall use later on. \begin{lemma} \label{lemma:Bessel-multiplication} Let $ f,g \in \H{\frac{1}{2}}(\mathbb{R}; \Lq{q}(\Omega)) \cap \Lq{\infty}(\mathbb{R}; \Lq{\infty}(\Omega)) \cap W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega)) $ with $ q > 1 $ and $ 1/4 < \alpha < 1/2 $. Then $ fg \in \H{\frac{1}{2}}(\mathbb{R}; \Lq{q}(\Omega)) $ and \begin{align*} \norm{fg}_{\H{\frac{1}{2}}(\mathbb{R}; \Lq{q}(\Omega))} & \leq C\Big( \norm{f}_{\H{\frac{1}{2}}(\mathbb{R}; \Lq{q}(\Omega))} \norm{g}_{\Lq{\infty}(\mathbb{R}; \Lq{\infty}(\Omega))} \\ & \quad + \norm{g}_{\H{\frac{1}{2}}(\mathbb{R}; \Lq{q}(\Omega))} \norm{f}_{\Lq{\infty}(\mathbb{R}; \Lq{\infty}(\Omega))} + \norm{f}_{W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega))} \norm{g}_{W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega))} \Big). \end{align*} Moreover, if additionally $ \rvm{f}_{t = 0} = \rvm{g}_{t = 0} = 0 $, the assertion remains true with $ \mathbb{R} $ replaced by an interval $ (0,T) $, and the constant $ C > 0 $ in the estimate is then independent of $ T > 0 $. \end{lemma} \begin{proof} First let us recall the equivalent definition of the Bessel potential space norm \begin{equation*} \normm{f}_{\H{s}(\mathbb{R}; \Lq{q}(\Omega))} = \normm{f}_{\Lq{q}(\mathbb{R}; \Lq{q}(\Omega))} + \normm{(- \Delta)^{\frac{s}{2}}f}_{\Lq{q}(\mathbb{R}; \Lq{q}(\Omega))}, \end{equation*} where the fractional Laplace operator (in the time variable) is represented by the singular integral \begin{equation*} (- \Delta)^{\frac{s}{2}} f(t) = C_s \lim_{\epsilon \rightarrow 0} \int_{\abs{h} \geq \epsilon} \frac{\Delta_h f(t)}{\abs{h}^{1 + s}} \d h, \end{equation*} with $ C_s > 0 $ a constant depending on $ s $, $ 0 < s < 2 $, and $ \Delta_h f(t) := f(t) - f(t - h) $, see e.g. \cite[Chapter V, Section 6.10]{Stein1970}.
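For later use, let us record the elementary product rule for the difference operator $ \Delta_h $, which follows directly from the definition $ \Delta_h f(t) = f(t) - f(t - h) $ by expanding the right-hand side:
\begin{equation*}
\Delta_h (fg)(t) = g(t) \Delta_h f(t) + f(t) \Delta_h g(t) - \big(\Delta_h f(t)\big) \big(\Delta_h g(t)\big),
\end{equation*}
since all mixed terms cancel except $ f(t)g(t) - f(t-h)g(t-h) $. This identity is the starting point of the estimate below.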
For $ f,g \in H^{1/2}_q(\mathbb{R}; L^q(\Omega)) \cap L^{\infty}(\mathbb{R}; L^{\infty}(\Omega)) \cap W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega)) $ with $ q > 1 $ and $ 1/4 < \alpha < 1/2 $, the integrand has an algebraic decay rate greater than one with respect to $ \abs{h} $, which means that the singular integral converges absolutely and one can omit the ``$ \lim $'' in the following. Then we have \begin{align*} & \norm{(- \Delta)^{\frac{1}{4}}(fg)(t)}_{L^q(\Omega)} = C \bigg\|\int_{\mathbb{R}} \frac{\Delta_h (fg)(t)}{\abs{h}^{1 + \frac{1}{2}}} \,\d h \bigg\|_{L^q(\Omega)} \\ & = C \bigg\| g(t) \int_{\mathbb{R}} \frac{\Delta_h f(t)}{\abs{h}^{1 + \frac{1}{2}}} \,\d h + \int_{\mathbb{R}} \frac{f(t) \Delta_h g(t)}{\abs{h}^{1 + \frac{1}{2}}} \,\d h - \int_{\mathbb{R}} \frac{\big(\Delta_h f(t)\big) \big(\Delta_h g(t)\big)}{\abs{h}^{1 + \frac{1}{2}}} \,\d h \bigg\|_{L^q(\Omega)} \\ & \leq C \underbrace{\left( \norm{g(t)(- \Delta)^{\frac{1}{4}}f(t)}_{L^q(\Omega)} + \norm{f(t)(- \Delta)^{\frac{1}{4}}g(t)}_{L^q(\Omega)} \right)}_{\leq \norm{g(t)}_{L^{\infty}(\Omega)} \normm{(- \Delta)^{\frac{1}{4}}f(t)}_{L^q(\Omega)} + \norm{f(t)}_{L^{\infty}(\Omega)} \normm{(- \Delta)^{\frac{1}{4}}g(t)}_{L^q(\Omega)}} \\ & \qquad \qquad + C \underbrace{\bigg\| \int_{\mathbb{R}} \frac{\big(\Delta_h f(t)\big) \big(\Delta_h g(t)\big)}{\abs{h}^{1 + \frac{1}{2}}} \,\d h \bigg\|_{L^q(\Omega)}}_{=:I(t)}.
\end{align*} Dividing the region $ \mathbb{R} $ into a neighborhood of the origin and its complement, we have \begin{align*} I(t) & \leq \int_{\abs{h} \leq 1} \norm{\big(\Delta_h f(t)\big) \big(\Delta_h g(t)\big)}_{L^q(\Omega)} \frac{1}{\abs{h}^{1 + \frac{1}{2}}} \,\d h \\ & \qquad + \int_{\abs{h} > 1} \norm{\big(\Delta_hf(t)\big) \big(\Delta_h g(t)\big)}_{L^q(\Omega)} \frac{1}{\abs{h}^{1 + \frac{1}{2}}} \,\d h =: I_1(t) + I_2(t), \end{align*} where \begin{align*} I_1(t) & \leq C \int_{\abs{h} \leq 1} \norm{\Delta_h f(t)}_{L^{2q}(\Omega)}\norm{\Delta_h g(t)}_{L^{2q}(\Omega)} \frac{1}{\abs{h}^{1 + \frac{1}{2}}} \,\d h \\ & = C \int_{\abs{h} \leq 1} \norm{\Delta_h f(t)}_{L^{2q}(\Omega)} \norm{\Delta_h g(t)}_{L^{2q}(\Omega)} \left(\frac{1}{\abs{h}^{1 + \frac{q}{2} + \varepsilon \frac{q}{q'}}}\right)^{\frac{1}{q}} \left(\frac{1}{\abs{h}^{1 - \varepsilon}}\right)^{\frac{1}{q'}} \,\d h \\ & \leq C \left(\int_{\abs{h} \leq 1} \norm{\Delta_h f(t)}_{L^{2q}(\Omega)}^q \norm{\Delta_h g(t)}_{L^{2q}(\Omega)}^q \frac{\d h}{\abs{h}^{1 + \frac{q}{2} + \varepsilon \frac{q}{q'}}}\right)^{\frac{1}{q}} \underbrace{\left(\int_{\abs{h} \leq 1} \abs{h}^{-1 + \varepsilon} \,\d h \right)^{\frac{1}{q'}}}_{\leq C_\varepsilon}, \end{align*} for every $ \varepsilon > 0 $ and $ 1/q + 1/q' = 1 $. 
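The splitting of the singular weight in the estimate of $ I_1(t) $ is pure arithmetic: the exponents recombine as
\begin{equation*}
\frac{1}{q} \Big( 1 + \frac{q}{2} + \varepsilon \frac{q}{q'} \Big) + \frac{1}{q'} \big( 1 - \varepsilon \big) = \frac{1}{q} + \frac{1}{2} + \frac{\varepsilon}{q'} + \frac{1}{q'} - \frac{\varepsilon}{q'} = 1 + \frac{1}{2},
\end{equation*}
so that the two factors indeed reproduce the original weight $ \abs{h}^{-(1 + \frac{1}{2})} $, at the cost of the $ \varepsilon $-dependent constant $ C_\varepsilon $.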
By H\"older's inequality, \begin{align*} \int_{\mathbb{R}} \abs{I_1(t)}^q \,\d t & \leq C_\varepsilon \int_{\mathbb{R}} \int_{\abs{h} \leq 1} \norm{\Delta_h f(t)}_{L^{2q}(\Omega)}^q \norm{\Delta_h g(t)}_{L^{2q}(\Omega)}^q \frac{\d h \d t}{\abs{h}^{1 + \frac{q}{2} + \varepsilon \frac{q}{q'}}} \\ & \leq C_\varepsilon \left( \int_{\mathbb{R}} \int_{\abs{h} \leq 1} \frac{\norm{\Delta_h f(t)}_{L^{2q}(\Omega)}^{2q}}{\abs{h}^{1 + \frac{q}{2} + \varepsilon \frac{q}{q'}}} \,\d h \d t \right)^\frac{1}{2} \left( \int_{\mathbb{R}} \int_{\abs{h} \leq 1} \frac{\norm{\Delta_h g(t)}_{L^{2q}(\Omega)}^{2q}}{\abs{h}^{1 + \frac{q}{2} + \varepsilon \frac{q}{q'}}} \,\d h \d t\right)^\frac{1}{2} \\ & \leq C_\varepsilon \norm{f}_{W^{\frac{1}{4} + \frac{\varepsilon}{2 q'}}_{2q}(\mathbb{R}; L^{2q}(\Omega))}^q \norm{g}_{W^{\frac{1}{4} + \frac{\varepsilon}{2 q'}}_{2q}(\mathbb{R}; L^{2q}(\Omega))}^q. \end{align*} Moreover, \begin{align*} \int_{\mathbb{R}} \abs{I_2(t)}^q \,\d t & \leq C \int_{\mathbb{R}} \norm{f(t)}_{L^{2q}(\Omega)}^q \norm{g(t)}_{L^{2q}(\Omega)}^q \left(\int_{\abs{h} > 1} \frac{1}{\abs{h}^{1 + \frac{1}{2}}} \,\d h\right)^q \,\d t \\ & \leq C \int_{\mathbb{R}} \norm{f(t)}_{L^{2q}(\Omega)}^q \norm{g(t)}_{L^{2q}(\Omega)}^q \,\d t \leq C \norm{f}_{L^{2q}(\mathbb{R}; L^{2q}(\Omega))}^q \norm{g}_{L^{2q}(\mathbb{R}; L^{2q}(\Omega))}^q.
\end{align*} Combining the estimate \begin{equation*} \norm{fg}_{\Lq{q}(\mathbb{R}; \Lq{q}(\Omega))} \leq \norm{f}_{\Lq{q}(\mathbb{R}; \Lq{q}(\Omega))} \norm{g}_{\Lq{\infty}(\mathbb{R}; \Lq{\infty}(\Omega))} \leq C \norm{f}_{\H{\frac{1}{2}}(\mathbb{R}; \Lq{q}(\Omega))} \norm{g}_{\Lq{\infty}(\mathbb{R}; \Lq{\infty}(\Omega))}, \end{equation*} we conclude that for $ 1/4 < \alpha < 1/2 $, \begin{align*} \norm{fg}_{H^{1/2}_q(\mathbb{R}; L^q(\Omega))} & \leq C\Big( \norm{f}_{H^{1/2}_q(\mathbb{R}; L^q(\Omega))} \norm{g}_{L^{\infty}(\mathbb{R}; L^{\infty}(\Omega))} \\ & \quad + \norm{g}_{H^{1/2}_q(\mathbb{R}; L^q(\Omega))} \norm{f}_{L^{\infty}(\mathbb{R}; L^{\infty}(\Omega))} + \norm{f}_{W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega))} \norm{g}_{W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega))} \Big). \end{align*} Now denoting by $ \mathcal{E} $ the extension operator for Bessel potential spaces with vanishing initial value from \cite[Lemma 2.5]{MS2012}, one arrives at \begin{align*} & \norm{fg}_{\HO{\frac{1}{2}}(0,T; L^q(\Omega))} = \norm{\mathcal{E}(f)\mathcal{E}(g)}_{\HO{\frac{1}{2}}(0,T; \Lq{q}(\Omega))} \leq C \norm{\mathcal{E}(f)\mathcal{E}(g)}_{\HO{\frac{1}{2}}(\mathbb{R}; \Lq{q}(\Omega))} \\ & \leq C \Big( \norm{\mathcal{E}(f)}_{\H{\frac{1}{2}}(\mathbb{R}; L^q(\Omega))} \norm{\mathcal{E}(g)}_{L^{\infty}(\mathbb{R}; L^{\infty}(\Omega))} + \norm{\mathcal{E}(g)}_{\H{\frac{1}{2}}(\mathbb{R}; L^q(\Omega))} \norm{\mathcal{E}(f)}_{L^{\infty}(\mathbb{R}; L^{\infty}(\Omega))} \\ & \quad \qquad + \norm{\mathcal{E}(f)}_{W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega))} \norm{\mathcal{E}(g)}_{W^{\alpha}_{2q}(\mathbb{R}; L^{2q}(\Omega))} \Big) \\ & \leq C\Big( \norm{f}_{\HO{\frac{1}{2}}(0,T; L^q(\Omega))} \norm{g}_{L^{\infty}(0,T; L^{\infty}(\Omega))} \\ & \quad \qquad + \norm{g}_{\HO{\frac{1}{2}}(0,T; L^q(\Omega))} \norm{f}_{L^{\infty}(0,T; L^{\infty}(\Omega))} + \norm{f}_{W^{\alpha}_{2q}(0,T; L^{2q}(\Omega))} \norm{g}_{W^{\alpha}_{2q}(0,T; L^{2q}(\Omega))} \Big), \end{align*} where the constant $
C > 0 $ is independent of $ T $. \end{proof} \begin{lemma} \label{lemma:fF} Let $ \Omega \subset \mathbb{R}^3 $ be a bounded domain with $ C^1 $ boundary, $ R > 0 $, $ T > 0 $ and $ 5 < q < \infty $. Given \begin{gather*} f \in X := \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)) \cap \Lq{q}(0,T; \W{1}(\Omega)) \end{gather*} with $ \norm{f}_{X} \leq R $. Then $ f \in \Lq{\infty}(0,T; \Lq{\infty}(\Omega)) $ and there exists some $ 1/4 < s < 1/2 - 5/4q $ such that $ f \in W^{s}_{2q}(0,T; L^{2q}(\Omega)) $. In addition, \begin{align} \label{Eqs:f-Linfty} & \norm{f}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega))} \leq C \delta(T), \\ \label{Eqs:f-W-alpha} & \norm{f}_{W^{s}_{2q}(0,T; \Lq{2q}(\Omega))} \leq C, \end{align} provided $ \rvm{f}_{t = 0} = 0 $, where $ C > 0 $ depends on $ R $. Moreover, for $ f^1, f^2, g \in X $ with $ \rvm{(f^1 - f^2)}_{t = 0} = 0 $, $ \rvm{g}_{t = 0} = 0 $ and $ \norm{(f^i, g)}_{X \times X} \leq R $, $ i \in \{1,2\} $, \begin{align} \label{Eqs:f-Linfty-difference} \norm{f^1 - f^2}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega))} \leq C \delta(T) \norm{f^1 - f^2}_X, \\ \label{Eqs:f2bF-W12Lq-difference} \norm{(f^1 - f^2) g}_X \leq C \delta(T) \norm{f^1 - f^2}_X, \end{align} where $ C > 0 $ depends on $ R $. \end{lemma} \begin{proof} For $ f \in X $, by Lemma \ref{lemma:timeembedding-continuous} and \ref{lemma:time-space embedding}, we have \begin{equation*} \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)) \cap \Lq{q}(0,T; \W{1}(\Omega)) \hookrightarrow \H{\frac{1}{5}}(0,T; \H{\frac{3}{5}}(\Omega)) \hookrightarrow C([0,T]; C(\Bar{\Omega})), \end{equation*} for $ q > 5 $, which means $ f \in \Lq{\infty}(0,T; \Lq{\infty}(\Omega)) $. If $ \rvm{f}_{t = 0} = 0 $, the first embedding constant above is uniform with regard to $ T $ and it follows from Lemma \ref{lemma:timeembedding-continuous} that \eqref{Eqs:f-Linfty} holds true, as well as \eqref{Eqs:f-Linfty-difference}. 
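The exponents in the first embedding above can be verified as follows (a sketch, based on the mixed-derivative type embedding behind Lemma \ref{lemma:time-space embedding}): interpolating between the two spaces with parameter $ \theta \in [0,1] $ gives
\begin{equation*}
\H{\frac{1}{2}}(0,T; \Lq{q}(\Omega)) \cap \Lq{q}(0,T; \W{1}(\Omega)) \hookrightarrow \H{\frac{1 - \theta}{2}}(0,T; \H{\theta}(\Omega)),
\end{equation*}
and the choice $ \theta = \frac{3}{5} $ yields $ \H{\frac{1}{5}}(0,T; \H{\frac{3}{5}}(\Omega)) $. The latter space embeds into $ C([0,T]; C(\Bar{\Omega})) $ by the Sobolev embedding, since $ \frac{1}{5} - \frac{1}{q} > 0 $ in time and $ \frac{3}{5} - \frac{3}{q} > 0 $ in space hold precisely when $ q > 5 $.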
By means of the time-space embedding Lemma \ref{lemma:time-space embedding} again, one has \begin{equation*} \HO{\frac{1}{2}}(0,T; L^q(\Omega)) \cap L^q(0,T; W^1_q(\Omega)) \hookrightarrow \HO{r}(0,T; W^{1 - 2r}_q(\Omega)), \end{equation*} for $ 1/q < r < 1/2 $, where the embedding constant does not depend on $ T > 0 $. By virtue of the embeddings $ W^{s}_q(\Omega) \hookrightarrow L^{2q}(\Omega) $ for $ s - 3/q \geq - 3/2q $ and $ H^s_q(0,T; X) \hookrightarrow W^{s - 1/2q}_{2q}(0,T; X) $, one infers the conditions \begin{equation*} 1 - 2r - \frac{3}{q} \geq - \frac{3}{2q}, \quad r - \frac{1}{2q} > \frac{1}{4}. \end{equation*} Combining the inequalities above yields that $ r $ must satisfy \begin{equation*} \frac{1}{4} + \frac{1}{2q} < r \leq \frac{1}{2} - \frac{3}{4q}. \end{equation*} Since $ q > 5 $, it is easy to verify that $ \frac{1}{2} - \frac{3}{4q} > \frac{1}{4} + \frac{1}{2q} $, which means that such $ r $ does exist and $ f \in W^{r - 1/2q}_{2q}(0,T; L^{2q}(\Omega)) $. In addition, by means of Lemma \ref{lemma:Bessel-multiplication}, one obtains for all $ 1/4 < \alpha < 1/2 $, \begin{align*} & \norm{(f^1 - f^2) g}_{\H{\frac{1}{2}}(0,T; \Lq{q}(\Omega))} \\ & \quad \leq C \big(\underbrace{\norm{f^1 - f^2}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega))} \norm{g}_X + \norm{f^1 - f^2}_{X} \norm{g}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega))}}_{\leq C(R) \delta(T) \norm{f^1 - f^2}_X \text{ thanks to \eqref{Eqs:f-Linfty} and \eqref{Eqs:f-Linfty-difference}}} \\ & \qquad + \norm{f^1 - f^2}_{W^{\alpha}_{2q}(0,T; L^{2q}(\Omega))} \norm{g}_{W^{\alpha}_{2q}(0,T; L^{2q}(\Omega))} \big). \end{align*} Now choosing $ \alpha $ such that $ 1/4 < \alpha < s < 1/2 - 5/4q $, where $ s $ is given in \eqref{Eqs:f-W-alpha}, one deduces that \begin{equation*} \norm{(f^1 - f^2) g}_{\H{\frac{1}{2}}(0,T; \Lq{q}(\Omega))} \leq C T^{s - \alpha} \norm{f^1 - f^2}_X.
\end{equation*} Moreover, we have \begin{align*} & \norm{(f^1 - f^2) g}_{\Lq{q}(0,T; \W{1}(\Omega))} \\ & \quad \leq \norm{(f^1 - f^2) g}_{\Lq{q}(0,T; \Lq{q}(\Omega))} + \norm{\nabla (f^1 - f^2) g}_{\Lq{q}(0,T; \Lq{q}(\Omega))} + \norm{(f^1 - f^2) \nabla g}_{\Lq{q}(0,T; \Lq{q}(\Omega))} \\ & \quad \leq \norm{f^1 - f^2}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega))} \norm{g}_X + \norm{f^1 - f^2}_{\Lq{q}(0,T; \W{1}(\Omega))} \norm{g}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega))} \\ & \qquad + \norm{f^1 - f^2}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega))} \norm{g}_{\Lq{q}(0,T; \W{1}(\Omega))} \leq C(R) \delta(T) \norm{f^1 - f^2}_X, \end{align*} which proves \eqref{Eqs:f2bF-W12Lq-difference}. \end{proof} \begin{remark} If one replaces the condition $ \rvm{f}_{t = 0} = 0 $ above by $ \normm{f}_{X} \leq \kappa $ for $ \kappa > 0 $, the estimates above remain valid with $ \delta(T) $ replaced by $ \delta(T) + \kappa $, by the same argument as in \eqref{Eqs:Fs-I} below. \end{remark} \begin{lemma} \label{lemma:bF} Let $ q > 5 $ and let $ \hat{\bF} $ be the deformation gradient defined by \eqref{Eqs:deformation gradient} with respect to $ \hv_{f} \in Y_T^1 $ in $ \Omega_{f} $ and $ \hu_{s} \in Y_T^2 $ in $ \Omega_{s} $, respectively. Assume that $ \hu_{s}^0 := \rvm{\hu_{s}}_{t = 0} \in \W{2 - 2/q}(\Omega_{s})^3 $ and $ \normm{\hat{\nabla} \hu_{s}^0}_{\W{1 - 2/q}(\Omega_{s})^3} \leq \kappa $ with $ \kappa > 0 $ small enough. Then for every $ R > 0 $, there are a constant $ C = C(R) > 0 $ and a finite time $ 0 < T_R < 1 $ depending on $ R $ such that for all $ 0 < T < T_R $ and $ \norm{(\hv_{f}, \hu_{s})}_{Y_T^1 \times Y_T^2} \leq R $, $ \inv{\hat{\bF}} $ exists a.e.
with regularities \begin{equation*} \inv{\hF_{f}} \in \W{1}(0,T; \W{1}(\Omega_{f})^{3 \times 3}), \quad \inv{\hF_{s}} \in \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})^{3 \times 3}) \cap \Lq{q}(0,T; \W{1}(\Omega_{s})^{3 \times 3}), \end{equation*} and satisfies \begin{align} \label{Eqs:Ff} & \norm{\inv{\hF_{f}}}_{\Lq{\infty}(0,T; \W{1}(\Omega_{f})^{3 \times 3})} \leq C, \quad \norm{\inv{\hF_{f}} - \mathbb{I}}_{\Lq{\infty}(0,T; \W{1}(\Omega_{f})^{3 \times 3})} \leq C \delta(T), \\ \label{Eqs:Fs} & \norm{\inv{\hF_{s}}}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega_{s})^{3 \times 3})} \leq C, \quad \norm{\inv{\hF_{s}} - \mathbb{I}}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega_{s})^{3 \times 3})} \leq C (\delta(T) + \kappa), \\ \label{Eqs:Ffidentity} & \seminorm{\inv{\hFf} - \bbi}_{\WO{r}(0,T; \W{1}(\Omega_{f})^{3 \times 3})} \leq C \delta(T), \quad 0 < r < 1. \end{align} Moreover, for $ \hat{\bw}_f \in Y_T^1 $ and $ \hat{\bw}_s \in Y_T^2 $ with $ \norm{(\hat{\bw}_f,\hat{\bw}_s)}_{Y_T^1 \times Y_T^2} \leq R $ and $ \rvm{(\hat{\bw}_f, \hat{\bw}_s)}_{t = 0} = \rvm{(\hv_{f}, \hu_{s})}_{t = 0} $, we have \begin{align} \label{Eqs:Ffdifference} & \seminorm{\inv{\hF_{f}}(\hat{\nabla}\hv_{f}) - \inv{\hF_{f}}(\hat{\nabla}\hat{\bw}_f)}_{\WO{r}(0,T; \W{1}(\Omega_{f})^{3 \times 3})} \leq C \delta(T) \norm{\hv_{f} - \hat{\bw}_f}_{Y_T^1}, \ 0 < r < 1,\\ \label{Eqs:Fsdifference-Linfinity} & \norm{\inv{\hF_{s}}(\hat{\nabla}\hu_{s}) - \inv{\hF_{s}}(\hat{\nabla}\hat{\bw}_s)}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega_{s})^{3 \times 3})} \leq C (\delta(T) + \kappa) \norm{\hu_{s} - \hat{\bw}_s}_{Y_T^2}. \end{align} \end{lemma} \begin{proof} The proof of this lemma is similar to \cite[Lemma 4.1]{AL2021a}. However, for the solid part the regularity is slightly lower due to the quasi-stationary elastic equation.
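Before going into the details, let us sketch the Neumann series mechanism used for the solid part: once the smallness $ \norm{\hF_{s} - \mathbb{I}}_{\Lq{\infty}(\Omega_{s})^{3 \times 3}} \leq \frac{1}{2} $ is available (see \eqref{Eqs:Fs-I} below), one has, pointwise in time,
\begin{equation*}
\inv{\hF_{s}} = \sum_{k = 0}^{\infty} (\mathbb{I} - \hF_{s})^{k}, \qquad \norm{\inv{\hF_{s}}}_{\Lq{\infty}(\Omega_{s})^{3 \times 3}} \leq C \sum_{k = 0}^{\infty} 2^{-k} = 2C,
\end{equation*}
and similarly $ \inv{\hF_{s}} - \mathbb{I} = \sum_{k \geq 1} (\mathbb{I} - \hF_{s})^{k} $ is controlled linearly by $ \norm{\hF_{s} - \mathbb{I}}_{\Lq{\infty}(\Omega_{s})^{3 \times 3}} $, which is the mechanism behind \eqref{Eqs:Fs}.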
For $ \hF_{f} $, one can refer to \cite[Lemma 4.1]{AL2021a} with Lemmas \ref{lemma:timeembedding-continuous}, \ref{lemma:multiplication} and \ref{lemma:composition-Slobodeckij}, while \eqref{Eqs:Ffidentity} and \eqref{Eqs:Ffdifference} follow from Lemma \ref{lemma:timeembedding}. For $ \hF_{s} $, it follows from the definition of $ Y_T^2 $ that \begin{align*} \hF_{s} - \mathbb{I} = \hat{\nabla} \hu_{s} & \in \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})^{3 \times 3}) \cap \Lq{q}(0,T; \W{1}(\Omega_{s})^{3 \times 3}). \end{align*} Since $ \hat{\nabla} \hu_{s}^0 \in \W{1 - 2/q}(\Omega_{s})^3 \hookrightarrow \Lq{\infty}(\Omega_{s})^3 $ for $ q > 5 $ and $ \normm{\hat{\nabla} \hu_{s}^0}_{\W{1 - 2/q}(\Omega_{s})^3} \leq \kappa $, one obtains from Lemma \ref{lemma:fF} that \begin{equation} \label{Eqs:Fs-I} \begin{aligned} \sup_{0 \leq t \leq T} & \norm{\hF_{s} - \mathbb{I}}_{\Lq{\infty}(\Omega_{s})^{3 \times 3}} \\ & \leq \sup_{0 \leq t \leq T} \norm{\hF_{s} - (\mathbb{I} + \hat{\nabla} \hu_{s}^0)}_{\Lq{\infty}(\Omega_{s})^{3 \times 3}} + \norm{\hat{\nabla} \hu_{s}^0}_{\Lq{\infty}(\Omega_{s})^3} \leq C (\delta(T) + \kappa) \leq \frac{1}{2}, \end{aligned} \end{equation} by taking $ T_R, \kappa > 0 $ small enough such that $ \delta(T_R) + \kappa \leq 1/(2C) $. Then, by the Neumann series, $ \inv{\hF_{s}} $ does exist. Since $ \inv{\hF_{s}}(\hat{\nabla} \hu_{s}) = \inv{(\mathbb{I} + \hat{\nabla} \hu_{s})} $ is of class $ C^{\infty}(\mathbb{R}^{3 \times 3} \backslash \{- \mathbb{I}\})^{3 \times 3} $ with respect to $ \hat{\nabla} \hu_{s} $, it follows from Lemma \ref{lemma:composition-Slobodeckij} and $ q > 5 $ that \begin{equation} \label{Eqs:Fs-inv} \inv{(\mathbb{I} + \hat{\nabla} \hu_{s}^0)} \in \W{1 - \frac{2}{q}}(\Omega_{s})^{3 \times 3}, \quad \inv{\hF_{s}} \in \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})^{3 \times 3}) \cap \Lq{q}(0,T; \W{1}(\Omega_{s})^{3 \times 3}).
\end{equation} In addition, we have \begin{gather*} \inv{(\mathbb{I} + \hat{\nabla} \hu_{s}^0)} - \mathbb{I} = \int_0^1 \frac{\d}{\d \tau} \inv{(\mathbb{I} + \tau \hat{\nabla} \hu_{s}^0)} \,\d \tau = \int_0^1 - \inv{(\mathbb{I} + \tau \hat{\nabla} \hu_{s}^0)} \hat{\nabla} \hu_{s}^0 \inv{(\mathbb{I} + \tau \hat{\nabla} \hu_{s}^0)} \,\d \tau, \end{gather*} then \begin{equation*} \norm{\inv{(\mathbb{I} + \hat{\nabla} \hu_{s}^0)} - \mathbb{I}}_{\W{1 - \frac{2}{q}}(\Omega_{s})^{3 \times 3}} \leq C \kappa, \end{equation*} provided $ \kappa > 0 $ sufficiently small, where $ C > 0 $ is finite. Consequently, similarly to \eqref{Eqs:Fs-I}, one can derive that \begin{equation*} \sup_{0 \leq t \leq T} \norm{\inv{\hF_{s}} - \mathbb{I}}_{\Lq{\infty}(\Omega_{s})^{3 \times 3}} \leq C (\delta(T) + \kappa), \end{equation*} which proves \eqref{Eqs:Fs} and \eqref{Eqs:Fsdifference-Linfinity}. \end{proof} \begin{lemma} \label{lemma:DW} Let $ \Omega \subset \mathbb{R}^3 $ be a bounded domain with $ C^1 $ boundary, $ 0 < s \leq 1 $ and $ 1 < q < \infty $ with $ sq > 3 $. Let $ W : \mathbb{R}^{3 \times 3} \rightarrow \mathbb{R}_+ $ satisfy Assumption \ref{assumptions:smoothness}. Then for $ \mathbf{F} \in \K{s}(\Omega)^{3 \times 3} $, $ K \in \{W, H\} $, with $ \norm{\mathbf{F}}_{\K{s}(\Omega)^{3 \times 3}} \leq R $, there is a positive constant $ C $ depending on $ R $ such that \begin{equation*} \norm{D^k W(\mathbf{F})}_{\K{s}(\Omega)^{3^{2k}}} \leq C, \quad k \in \{0, 1, 2, 3\}. \end{equation*} Moreover, for $ \mathbf{F}^1, \mathbf{F}^2 \in \K{s}(\Omega)^{3 \times 3} $ with $ \norm{\mathbf{F}^1}_{\K{s}(\Omega)^{3 \times 3}}, \norm{\mathbf{F}^2}_{\K{s}(\Omega)^{3 \times 3}} \leq R $, we have \begin{equation*} \norm{D^k W(\mathbf{F}^1) - D^k W(\mathbf{F}^2)}_{\K{s}(\Omega)^{3^{2k}}} \leq C \norm{\mathbf{F}^1 - \mathbf{F}^2}_{\K{s}(\Omega)^{3 \times 3}}, \quad k \in \{1, 2, 3\}. 
\end{equation*} \end{lemma} \begin{proof} One can prove this directly by Lemma \ref{lemma:composition-Slobodeckij} for $ 0 < s < 1 $, while the case $ s = 1 $ follows from Remark \ref{remark:composition-Sobolev}. \end{proof} \begin{lemma} \label{lemma:gpositive} Let $ T > 0 $, $ R > 0 $ and $ q \in (3, \infty) $. Let $ \Omega_{s} $ be the domain defined in Section \ref{sec:model-description}. Given $ \hat{g} \in \W{1}(0,T; \W{1}(\Omega_{s})) $ with $ \norm{\hat{g}}_{\W{1}(0,T; \W{1}(\Omega_{s}))} \leq R $ and $ \rvm{\hat{g}(\mathbf{X}, t)}_{t = 0} = \hat{g}^0 \geq 1 $, there exists a time $ T_R > 0 $ such that for $ T \in (0, T_R) $, one has \begin{equation*} \hat{g}(\mathbf{X}, t) \geq \frac{1}{2}, \ \forall \, \mathbf{X} \in \Omega_{s}, t \in [0,T]. \end{equation*} \end{lemma} \begin{proof} By the fundamental theorem of calculus, \begin{equation*} \hat{g}(\mathbf{X}, t) = \hat{g}^0(\mathbf{X}) + \int_0^t \partial_t \hat{g}(\mathbf{X}, \tau) \d \tau, \ \forall \, \mathbf{X} \in \Omega_{s}, t \in [0,T]. \end{equation*} Then \begin{equation*} \norm{\hat{g}(t) - \hat{g}^0}_{\Lq{\infty}(\Omega_{s})} \leq C \norm{\int_0^t \partial_t \hat{g}(\cdot, \tau) \d \tau}_{\W{1}(\Omega_{s})} \leq C T^{1 - \frac{1}{q}} R \leq \frac{1}{2}, \end{equation*} where we choose $ T_R > 0 $ sufficiently small such that $ T_R^{1 - \frac{1}{q}} \leq \frac{1}{2CR} $. Hence $ \hat{g}(\mathbf{X}, t) \geq \hat{g}^0(\mathbf{X}) - \frac{1}{2} \geq \frac{1}{2} $, for all $ \mathbf{X} \in \Omega_{s}, t \in [0,T] $. \end{proof} \subsection{Lipschitz estimates} \label{sec:lipschitz-estimates} Now we are in a position to derive the Lipschitz estimates of the nonlinear lower-order terms in \eqref{Eqs:fullsystem-Lagrangian-linear}.
To this end, let us first define the function spaces for the nonlinear terms $ Z_T := \prod_{j = 1}^{12} Z_T^j $, where \begin{gather*} Z_T^1 := \Lq{q}(0,T; \Lq{q}(\Omega_{f})^3), \\ Z_T^2 := \Lq{q}(0,T; \Lq{q}(\Omega_{s})^3) \cap \H{\frac{1}{2}}(0,T; W_{q,\Gamma}^{-1}(\Omega_{s})^3), \\ Z_T^3 := \left\{ g \in \Lq{q}(0,T; \W{1}(\Omega_{f})) \cap\W{1}(0,T; \W{- 1}(\Omega_{f})): \mathrm{tr}_\Gamma(g) \in Z_T^5 \right\}, \\ Z_T^4 := \Lq{q}(0,T; \W{1}(\Omega_{s})) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})), \\ Z_T^5 := \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\Gamma)^3) \cap \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\Gamma)^3), \\ Z_T^6 := \Lq{q}(0,T; \W{2 - \frac{1}{q}}(\Gamma)^3) \cap \H{\frac{1}{2}}(0,T; \W{1 - \frac{1}{q}}(\Gamma)^3), \\ Z_T^7 := \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\Gamma_s)^3) \cap \H{\frac{1}{2}}(0,T; \W{- \frac{1}{q}}(\Gamma_s)^3), \\ Z_T^8 := \Lq{q}(0,T; \Lq{q}(\Omega \backslash \Gamma)), \\ Z_T^9 := \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\Gamma)) \cap \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\Gamma)),\\ Z_T^{10} := \Lq{q}(0,T; \W{1 - \frac{1}{q}}(\Gamma_s)) \cap \W{\frac{1}{2} - \frac{1}{2q}}(0,T; \Lq{q}(\Gamma_s)), \\ Z_T^{11} := \Lq{q}(0,T; \W{1}(\Omega_{s})), \quad Z_T^{12} := \Lq{q}(0,T; \W{1}(\Omega_{s})), \end{gather*} Let \begin{equation} \label{Eqs:bw} \mathbf{w} := (\hv_{f}, \hu_{s}, \hpi_f, \hpi_s, \hat{c}, \hcs^*, \hat{g}) \end{equation} be in the space $ Y_T $ given in Section \ref{sec:reformulation} and define the associated initial data as \begin{equation} \label{Eqs:bw0} \mathbf{w}_0 := \rv{(\hv_{f}, \hu_{s}, \hpi_s, \hat{c}, \hcs^*, \hat{g})}_{t = 0} = (\hv^0_f, \hu_{s}^0, \hpi_s^0, \hc^0, \hat{c}_*^0, \hat{g}^0). \end{equation} Then we have the following Lipschitz estimates for the lower-order terms defined in \eqref{Eqs:fullsystem-Lagrangian-linear}. \begin{proposition} \label{prop:Lipschitzestimate} Let $ q > 5 $ and $ R > 0 $. 
There exist constants $ C, \kappa > 0 $ and a finite time $ T_R > 0 $, all depending on $ R $, such that for $ 0 < T < T_R $ and $ \normm{\hat{\nabla} \hu_{s}^0}_{\W{1 - 2/q}(\Omega_{s})^3} + \normm{\hpi_s^0}_{\W{1 - 2/q}(\Omega_{s})} \leq \kappa $, \begin{gather*} \norm{\mathbf{K}_f(\mathbf{w}^1) - \mathbf{K}_f(\mathbf{w}^2)}_{Z_T^1} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}, \\ \norm{(\mathbf{K}_s, \mathbf{H}^2)(\mathbf{w}^1) - (\mathbf{K}_s, \mathbf{H}^2)(\mathbf{w}^2)}_{Z_T^2 \times Z_T^7} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}, \\ \norm{G_i(\mathbf{w}^1) - G_i(\mathbf{w}^2)}_{Z_T^3 \times Z_T^4} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}, \ i \in \{f,s\}, \\ \norm{\mathbf{H}_i^1(\mathbf{w}^1) - \mathbf{H}_i^1(\mathbf{w}^2)}_{Z_T^5 \times Z_T^6} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}, \ i \in \{f,s\}, \\ \norm{F^j(\mathbf{w}^1) - F^j(\mathbf{w}^2)}_{Z_T^{7 + j}} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}, \ j \in \{1,2,3,4,5\}, \end{gather*} for all $ \norm{\mathbf{w}^1}_{Y_T}, \norm{\mathbf{w}^2}_{Y_T} \leq R $ with $ \mathbf{w}_0^1 = \mathbf{w}_0^2 $. \end{proposition} \begin{proof} For the estimates related to the fluid (with a subscript $ f $) except $ \mathbf{H}_f^1 $ and $ F^j $, $ j = 1,...,5 $, we refer to \cite[Proposition 4.2]{AL2021a}, with the help of Lemmas \ref{lemma:fF} and \ref{lemma:bF}. \textbf{Estimate of $ \mathbf{K}_s, \mathbf{H}^2 $}. Thanks to the divergence form arising from the linearization, i.e., $ \mathbf{K}_s = \Div \tk_s $ and $ \mathbf{H}^2 = - \tk_s \hat{\bn}_{\Gamma_s} $, one can estimate $ \mathbf{K}_s $ and $ \mathbf{H}^2 $ together, with a cancellation of boundary data as in Corollary \ref{coro:f=divF}.
Namely, \begin{align*} & \norm{(\mathbf{K}_s, \mathbf{H}^2)(\mathbf{w}^1) - (\mathbf{K}_s, \mathbf{H}^2)(\mathbf{w}^2)}_{Z_T^2 \times Z_T^7} \\ & \qquad \qquad \qquad \leq \norm{\tk_s(\mathbf{w}^1) - \tk_s(\mathbf{w}^2)}_{\Lq{q}(0,T; \W{1}(\Omega_{s})^{3 \times 3}) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})^{3 \times 3})}. \end{align*} Combining the definition of $ \tk_s $ in Section \ref{sec:linearization}, Lemmas \ref{lemma:fF}--\ref{lemma:gpositive} and Assumption \ref{assumption:W-energy-density}, one obtains for $ q > 5 $, \begin{equation*} \norm{(\mathbf{K}_s, \mathbf{H}^2)(\mathbf{w}^1) - (\mathbf{K}_s, \mathbf{H}^2)(\mathbf{w}^2)}_{Z_T^2 \times Z_T^7} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}. \end{equation*} In this estimate, the smallness of the initial solid pressure is employed for the term $ ((\hat{g}^0)^3 - 1) \hpi_s $ in $ \tk_s $, i.e., \begin{equation} \label{Eqs:pressureEstimate} \norm{\hpi_s}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega_{s}))} \leq \norm{\hpi_s - \hpi_s^0}_{\Lq{\infty}(0,T; \Lq{\infty}(\Omega_{s}))} + \norm{\hpi_s^0}_{\Lq{\infty}(\Omega_{s})} \leq C(\delta(T) + \kappa). \end{equation} \textbf{Estimate of $ G_s $}. By means of Lemmas \ref{lemma:fF} and \ref{lemma:bF}, the estimate in $ \Lq{q}(0,T; \W{1}(\Omega_{s})) $ is clear. By the definition of $ G_s $, \begin{align*} G_s(\mathbf{w}^1) - G_s(\mathbf{w}^2) = (\inv{\hF_{s}}(\mathbf{w}^1) - \mathbb{I}) : (\hat{\nabla}\hu_{s}^1 - \hat{\nabla}\hu_{s}^2) +\big(\inv{\hF_{s}}(\mathbf{w}^1) - \inv{\hF_{s}}(\mathbf{w}^2)\big) : \hat{\nabla} \hu_{s}^2. \end{align*} Then with Lemmas \ref{lemma:fF}, \ref{lemma:bF} and the regularity \eqref{Eqs:Fs-inv}, we have for $ q > 5 $ that \begin{equation*} \norm{G_s(\mathbf{w}^1) - G_s(\mathbf{w}^2)}_{\H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s}))} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}. \end{equation*} \textbf{Estimate of $ \mathbf{H}_{f/s}^1 $}.
With $ \hv_{f} \in Y_T^1 $, one knows \begin{equation*} \int_0^t \hv_{f}(\mathbf{X}, \tau) \d \tau \in \W{1}(0,T; \W{2}(\Omega_{f})^3) \cap \W{2}(0,T; \Lq{q}(\Omega_{f})^3) \hookrightarrow \H{\frac{3}{2}}(0,T; \W{1}(\Omega_{f})^3). \end{equation*} It follows from the trace theorem $ \mathrm{tr}_{\Gamma}: \W{k}(\Omega_{f}) \rightarrow \W{k - \frac{1}{q}}(\Gamma) $ that \begin{equation*} \rv{\int_0^t \hv_{f}(\mathbf{X}, \tau) \d \tau}_{\Gamma} \in \H{\frac{3}{2}}(0,T; \W{1 - \frac{1}{q}}(\Gamma)^3) \cap \W{1}(0,T; \W{2 - \frac{1}{q}}(\Gamma)^3), \end{equation*} whose Lipschitz estimate in $ Z_T^6 $ can be controlled by $ \delta(T) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T} $ thanks to Lemmas \ref{lemma:timeembedding} and \ref{lemma:trace-time-regularity}, namely, \begin{equation*} \norm{\mathbf{H}_s^1(\mathbf{w}^1) - \mathbf{H}_s^1(\mathbf{w}^2)}_{Z_T^6} \leq C \delta(T) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}. \end{equation*} Now let us recall $ \mathbf{H}_f^1 = - \tk_f \hn_{\Gamma} + \tk_s \hn_{\Gamma} $. The first part can be addressed in the same way as in \cite[Proposition 4.2]{AL2021a}. By Lemma \ref{lemma:trace-time-regularity} (the anisotropic trace theorem) and the $ C^3 $ regularity of the interface, which ensures that $ \hn_{\Gamma} $ is of class $ C^2 $, the second term can be estimated as \begin{equation*} \norm{\tk_s(\mathbf{w}^1) - \tk_s(\mathbf{w}^2)}_{Z_T^5} \leq C \norm{\tk_s(\mathbf{w}^1) - \tk_s(\mathbf{w}^2)}_{\Lq{q}(0,T; \W{1}(\Omega_{s})^{3 \times 3}) \cap \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})^{3 \times 3})}. \end{equation*} Then, together with Lemmas \ref{lemma:fF}--\ref{lemma:gpositive} and Assumption \ref{assumption:W-energy-density}, one gets \begin{equation*} \norm{\mathbf{H}_f^1(\mathbf{w}^1) - \mathbf{H}_f^1(\mathbf{w}^2)}_{Z_T^5} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}. \end{equation*} \textbf{Estimate of $ F^j $}. We can estimate $ F^1_f $ and $ F^j, j = 4, 5 $ analogously to \cite{AL2021a}.
For the others, since $ Y_T^5 \hookrightarrow \H{1/2}(0,T; \W{1}(\Omega)) $ implies that \begin{equation*} \hat{\nabla} \hc_s \in \H{\frac{1}{2}}(0,T; \Lq{q}(\Omega_{s})^3) \cap \Lq{q}(0,T; \W{1}(\Omega_{s})^3), \end{equation*} one can apply Lemmas \ref{lemma:fF} and \ref{lemma:bF} to derive the corresponding estimates with $ q > 5 $, combining Lemma \ref{lemma:trace-time-regularity}, the regularity of $ \tran{\hF_{s}} $, $ \hcs^* $ and $ \hat{g} $. This completes the proof. \end{proof} \subsection{Nonlinear well-posedness} \label{sec:nonlinear-proof} For $ \mathbf{w} $ as in \eqref{Eqs:bw}, define \begin{equation*} \mathscr{M}(\mathbf{w}) := \left( \mathbf{K}_f, \mathbf{K}_s, G_f, G_s, \mathbf{H}_f^1, \mathbf{H}_s^1, \mathbf{H}^2, F^1, F^2, F^3, F^4, F^5 \right)^\top (\mathbf{w}), \end{equation*} where the elements are given by \eqref{Eqs:fullsystem-Lagrangian-linear}. Then the following proposition holds for $ \mathscr{M} : Y_T \rightarrow Z_T $, where $ Y_T, Z_T $ are given in Sections \ref{sec:reformulation} and \ref{sec:lipschitz-estimates}, respectively. \begin{proposition} \label{prop:nonlinearestimate} Let $ q > 5 $ and $ R > 0 $. Let $ \mathbf{w} \in Y_T $ be the function as in \eqref{Eqs:bw} with associated initial data $ \mathbf{w}_0 $ as in \eqref{Eqs:bw0}. Then there exist constants $ C, \kappa > 0 $ and a finite time $ T_R > 0 $, both depending on $ R $, and a function $ \delta $ as in \eqref{Eqs:deltaT}, such that for $ 0 < T < T_R $, the map $ \mathscr{M} : Y_T \rightarrow Z_T $ is well-defined and bounded, and for $ \norm{\mathbf{w}}_{Y_T} \leq R $ and $ \normm{\hat{\nabla} \hu_{s}^0}_{\W{1 - 2/q}(\Omega_{s})^3} + \normm{\hpi_s^0}_{\W{1 - 2/q}(\Omega_{s})} \leq \kappa $ one has \begin{equation} \label{Eqs:Mw} \norm{\mathscr{M}(\mathbf{w})}_{Z_T} \leq C (\delta(T) + \kappa).
\end{equation} Moreover, for $ \mathbf{w}^1, \mathbf{w}^2 \in Y_T $ with $ \mathbf{w}_0^1 = \mathbf{w}_0^2 $ and $ \norm{\mathbf{w}^1}_{Y_T}, \norm{\mathbf{w}^2}_{Y_T} \leq R $, there exist a constant $ C > 0 $ and a finite time $ T_R > 0 $, both depending on $ R $, and a function $ \delta $ as in \eqref{Eqs:deltaT}, such that for $ 0 < T < T_R $, \begin{equation} \label{Eqs:MLipschitz} \norm{\mathscr{M}(\mathbf{w}^1) - \mathscr{M}(\mathbf{w}^2)}_{Z_T} \leq C (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}. \end{equation} \end{proposition} \begin{proof} The second part follows directly from Proposition \ref{prop:Lipschitzestimate}. Then, by setting $ \mathbf{w}^2 = (0,0,0,0,0,0,1) $ in \eqref{Eqs:MLipschitz}, one derives \eqref{Eqs:Mw} immediately in view of the fact that $ \mathscr{M}(0,0,0,0,0,0,1) = 0 $. \end{proof} Now recalling the definition of the solution and initial data spaces in Section \ref{sec:reformulation}, we rewrite \eqref{Eqs:fullsystem-Lagrangian-linear} in the abstract form \begin{equation} \label{abstract} \mathscr{L} (\mathbf{w}) = \mathscr{N}(\mathbf{w}, \mathbf{w}_0) \quad \textrm{for}\ \mathbf{w} \in Y_T,\ (\hv_{f}^0, \hc^0) \in \cD_q, \end{equation} where $ \mathscr{L}(\mathbf{w}) $ denotes the left-hand side of \eqref{Eqs:fullsystem-Lagrangian-linear} and $ \mathscr{N}(\mathbf{w}, \mathbf{w}_0) $ the right-hand side. It follows from the linear theory in Section \ref{sec:analysis-linear} that $ \mathscr{L} : Y_T \rightarrow Z_T \times \cD_q $ is an isomorphism. \begin{proof}[\bf Proof of Theorem \ref{theorem: main}] For $ (\hv_{f}^0, \hc^0) \in \cD_q $ satisfying the compatibility conditions, we may solve $ \mathscr{L}(\tilde{\mathbf{w}}) = \mathscr{N}(0, \mathbf{w}_0) $ for some $ \tilde{\mathbf{w}} \in Y_T $.
Then one can reduce the system to the case of trivial initial data by subtracting $ \tilde{\mathbf{w}} $, and we may define the constant \begin{equation*} C_{\mathscr{L}} := \sup_{0 \leq T \leq 1} \norm{\mathscr{L}^{-1}}_{\mathcal{L}(Z_T, Y_T)}, \end{equation*} which can be verified to stay bounded as $ T \rightarrow 0 $ by the linear theories in Section \ref{sec:analysis-linear} and the estimate \eqref{Eqs:Mw}, as in \cite{AL2021a}. Choose $ R > 0 $ large enough that $ R \geq 2 C_\mathscr{L} \norm{(\hv^0_f, \hc^0)}_{\cD_q} $. Then \begin{equation} \label{L0} \norm{\mathscr{L}^{-1} \mathscr{N}(0, \mathbf{w}_0)}_{Y_T} \leq C_\mathscr{L} \norm{(\hv^0_f, \hc^0)}_{\cD_q} \leq \frac{R}{2}. \end{equation} For $ \norm{\mathbf{w}^i}_{Y_T} \leq R $, $ i = 1, 2 $, we take $ T_R > 0 $ and $ \kappa > 0 $ small enough such that $ C_\mathscr{L} C(R) (\delta(T_R) + \kappa) \leq 1/2 $, where $ C(R) $ is the constant in \eqref{Eqs:MLipschitz}. Then for $ 0 < T < T_R $, we infer from Proposition \ref{prop:nonlinearestimate} that \begin{equation} \begin{aligned} \label{L12} & \norm{\mathscr{L}^{-1}\mathscr{N}(\mathbf{w}^1, \mathbf{w}_0) - \mathscr{L}^{-1}\mathscr{N}(\mathbf{w}^2, \mathbf{w}_0)}_{Y_T} \\ & \qquad \qquad \qquad \leq C_\mathscr{L} C(R) (\delta(T) + \kappa) \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T} \leq \frac{1}{2} \norm{\mathbf{w}^1 - \mathbf{w}^2}_{Y_T}, \end{aligned} \end{equation} which implies the contraction property. From \eqref{L0} and \eqref{L12}, we have \begin{align*} & \norm{\mathscr{L}^{-1}\mathscr{N}(\mathbf{w}, \mathbf{w}_0)}_{Y_T} \\ & \qquad \qquad \leq \norm{\mathscr{L}^{-1}\mathscr{N}(0, \mathbf{w}_0)}_{Y_T} + \norm{\mathscr{L}^{-1}\mathscr{N}(\mathbf{w}, \mathbf{w}_0) - \mathscr{L}^{-1}\mathscr{N}(0, \mathbf{w}_0)}_{Y_T} \leq R. \end{align*} Define a ball in $ Y_T $ as \begin{equation*} \mathcal{M}_{R,T} := \left\{ \mathbf{w} \in \overline{B_{Y_T}(0,R)}: \mathbf{w}, \mathbf{w}_0 \text{ are as in } \eqref{Eqs:bw} \text{ and } \eqref{Eqs:bw0} \right\}, \end{equation*} which is a closed subset of $ Y_T $.
Hence, $ \mathscr{L}^{-1}\mathscr{N} : \mathcal{M}_{R,T} \rightarrow \mathcal{M}_{R,T} $ is well-defined for all $ 0 < T < T_R $ and a strict contraction. Since $ Y_T $ is a Banach space, the Banach fixed-point theorem implies the existence of a unique fixed point of $ \mathscr{L}^{-1}\mathscr{N} $ in $ \mathcal{M}_{R,T} $, i.e., \eqref{Eqs:fullsystem-Lagrangian-linear} admits a unique strong solution in $ \mathcal{M}_{R,T} $ for small time $ 0 < T < T_R $. The uniqueness in $ Y_T $, $ 0 < T < T_0 $, follows easily by repeating the continuity argument in \cite[Proof of Theorem 2.1]{AL2021a}, so we omit it here. In summary, \eqref{Eqs:fullsystem-Lagrangian-linear} admits a unique solution in $ Y_T $; equivalently, \eqref{Eqs:fullsystem-Lagrangian} admits a unique solution in $ Y_T $. Now we are in a position to prove the positivity of the cell concentrations. Since the regularity of $ \hv_{s} = \ptial{t} \hu_{s} $ is much lower than that in \cite{AL2021a}, we cannot proceed as in \cite{AL2021a} for $ \hat{c} $. To overcome this problem, we take a smooth mollification $ \hv_{s}^\epsilon $ of $ \hv_{s} $ for $ \epsilon > 0 $ such that \begin{equation*} \int_0^t \hv_{s}^\epsilon (\cdot, \tau) \d \tau \rightarrow \hu_{s}, \text{ in } Y_T^2, \text{ as } \epsilon \rightarrow 0. \end{equation*} Consider the problem \begin{alignat*}{3} \partial_t c_f^\epsilon + \Div \left( c_f^\epsilon \bv_{f} \right) - D_f \Delta c_f^\epsilon & = 0, && \quad \text{in } Q_f^T, \\ \partial_t c_s^\epsilon + \Div \left( c_s^\epsilon \bv_{s}^\epsilon \right) - D_s \Delta c_s^\epsilon & = - f_s^r, && \quad \text{in } Q_s^T, \end{alignat*} with boundary and initial values as in Section \ref{sec:model-description}.
Then, by the same argument as in \cite[Proof of Theorem 2.1]{AL2021a}, one obtains \begin{equation*} 0 \leq c^\epsilon(\mathbf{x},t) \in \W{1}(0,T; \Lq{q}(\Omega^t)^3) \cap \Lq{q}(0,T; \W{2}(\Omega^t \backslash \Gamma^t)^3), \end{equation*} which means that there is a subsequence, still denoted by $ c^\epsilon $, and a function $ c $ such that \begin{equation*} c^\epsilon \rightharpoonup c \text{ weakly in } \W{1}(0,T; \Lq{q}(\Omega^t)^3) \cap \Lq{q}(0,T; \W{2}(\Omega^t \backslash \Gamma^t)^3), \end{equation*} and \begin{equation*} c^\epsilon \rightarrow c \text{ in } \mathcal{D}'(Q^T \backslash S^T), \end{equation*} where $ \mathcal{D}'(U) $ denotes the space of distributions on $ U $ and $ Q^T, S^T $ are defined in Section \ref{sec:model-description}. It is standard to verify that $ c $ solves the same equation with $ \bv_{s}^\epsilon $ replaced by $ \bv_{s} $. We only sketch the argument for the term $ \Div(c_s^\epsilon \bv_{s}^\epsilon) $ as an example. \begin{align*} & \int_{0}^T \int_{\Omega_{s}^{t}} c_s^\epsilon \bv_{s}^\epsilon \cdot \nabla \phi \,\d \mathbf{x} \d t - \int_{0}^T \int_{\Omega_{s}^{t}} c_s \bv_{s} \cdot \nabla \phi \,\d \mathbf{x} \d t \\ & \qquad = \int_{0}^T \int_{\Omega_{s}^{t}} (c_s^\epsilon - c_s) \bv_{s}^\epsilon \cdot \nabla \phi \,\d \mathbf{x} \d t + \int_{0}^T \int_{\Omega_{s}^{t}} c_s (\bv_{s}^\epsilon - \bv_{s}) \cdot \nabla \phi \,\d \mathbf{x} \d t \rightarrow 0, \end{align*} as $ \epsilon \rightarrow 0 $, for all $ \phi \in \mathcal{D}(Q^T \backslash S^T) $, because of the regularity and convergence of $ c_s^\epsilon $ and $ \bv_{s}^\epsilon $.
Note that \begin{equation*} 0 \leq \lim_{\epsilon \rightarrow 0} \int_{0}^T \int_{\Omega^t \backslash \Gamma^t} c^\epsilon \phi \,\d \mathbf{x} \d t = \int_{0}^T \int_{\Omega^t \backslash \Gamma^t} c \phi \,\d \mathbf{x} \d t, \end{equation*} for all $ \phi \in \mathcal{D}(Q^T \backslash S^T) $ with $ \phi \geq 0 $, so one concludes that $ c \geq 0 $ a.e. in $ Q^T \backslash S^T $. The positivity of $ \hcs^* $ and $ \hat{g} $ then follows automatically, as shown in \cite{AL2021a}, which completes the proof. \end{proof}
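As an aside, the contraction scheme at the heart of the existence proof can be mimicked numerically. The sketch below iterates a scalar map with Lipschitz constant $1/2$, standing in for $\mathscr{L}^{-1}\mathscr{N}$ restricted to the ball $\mathcal{M}_{R,T}$; the map and all names are purely illustrative, not part of the construction above.

```python
# Banach fixed-point iteration: a strict contraction on a complete space
# has a unique fixed point, reached by iterating the map.
def fixed_point(phi, w0, tol=1e-12, max_iter=200):
    w = w0
    for _ in range(max_iter):
        w_next = phi(w)
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w

# Toy strict contraction with Lipschitz constant 1/2 (stand-in for
# L^{-1} N); its unique fixed point is w* = 2.
w_star = fixed_point(lambda w: 0.5 * w + 1.0, 0.0)
```

Since the Lipschitz constant is $1/2$, the error halves at every step, mirroring the role of the smallness condition $C_\mathscr{L} C(R)(\delta(T_R)+\kappa) \leq 1/2$ in the proof.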
\section{Introduction}\label{Introduction} Ions in a conducting interconnect can drift away from their equilibrium position due to current-induced forces \cite{Landauer1,Friedel,Black}. This fact degrades technological performance via, e.g., heating and electromigration in semiconductor integrated circuits \cite{Blech} and nanowires \cite{nanomigration}. However, as first envisioned by Sorbello [\onlinecite{Sorbello}], current-induced forces can also be turned to one's advantage, with the electrons-to-nuclei energy transfer used to move atoms in orbits (molecular motors) and with prospects of high payoffs for nanotechnology. {The vision of nanoscale devices converting electrical current into mechanical work is attracting growing interest.} After the proposal in Ref.~\cite{Sorbello}, a number of theoretical investigations emerged in steady-state \cite{Todorov2001,Emberly2001,DiVentra2002,Brandbyge2003,Cizek2004,Cornaglia2004,Frederiksen2004,Paulsson2005, Frederiksen2007,Galperin2007,Galperin2008,Hartle2009,Zhang2011} and real-time \cite{Horsfield2004a,Horsfield2004b,Verdozzi2006,Sanchez2006, Todorovic2011,Albrecht2012} transport to understand and possibly manipulate current-induced forces. Their nonconservative character was pointed out in several studies \cite{DiVentra2004,Dundas2009,Todorov2010}. It was further shown that these forces are of two types, i.e., friction-like~\cite{Hussein2010,Lu2011,Lu2012} and Lorentz-like~\cite{Lu2012,Lu2010}. Under general nonequilibrium conditions the friction force can be negative and responsible for van der Pol oscillations of the nuclear coordinates \cite{Bennett2006,Bode2011,Bode2012,Hussein2010}, runaway modes \cite{Verdozzi2006,Lu2010} or heating \cite{Lu2015}. Interestingly enough, electronic correlations in these situations (and thus in concept-protocols of molecular motors) have not been addressed until very recently. A first step was taken by Dou et al.
\cite{Dou2017a}, with a general formulation in terms of {\em $N$-particle} Green's functions, $N$ being the number of electrons in the system (see also \cite{Dou2018,Chen2018} for subsequent discussions). Afterwards, an expression for the friction force was derived via a generalized master equation in the Coulomb blockade regime~\cite{Calvo2017}. A fundamental merit of these two pioneering works is to bring the issue of electronic correlations in molecular motors into the spotlight. However, it is also the case that, at present, a general approach suitable for calculations of nuclear motion in realistic junctions is still lacking. Also, an assessment of the importance of second- and higher-order corrections of the current-induced forces in the nuclear velocities~\cite{Verdozzi2006,Metelmann2011,Nocera2011,Kartsev2014} has not yet been made. Motivated by these considerations, we derive here a formula for the current-induced forces in terms of the {\em one-particle} steady-state nonequilibrium Green's function (ssGF). The main advantage of the ssGF formulation is that electronic correlations can be systematically and self-consistently included through diagrammatic approximations to the many-body self-energy, which is particularly suitable for first-principles approaches. Like previous non-interacting formulations, we account only for the lowest-order correction in the nuclear velocities. The impact of higher-order corrections is assessed through benchmarks against mixed quantum-classical studies based on Ehrenfest dynamics (ED) for the nuclei and either the two-times Kadanoff-Baym equations~\cite{KBE,Keldysh,StefLeeu,BalzBon,Hopjan14,commentKBE,Bostrom2016,Balzer2016} (KBE) or the one-time Generalized Kadanoff-Baym Ansatz~\cite{Lipavsky1986} (GKBA) for the electronic part. We find that the ssGF scheme is quantitatively accurate and numerically highly efficient. The main physical result of our investigations is that electronic correlations hinder the emergence of negative friction.
\section{Nonadiabatic Ehrenfest Dynamics}\label{NED} We consider a metal-device-metal junction and a set of classical nuclear coordinates ${\bf x}=\{x_{1},x_{2},\ldots\}$ coupled to electrons in the device. The junction is exposed to time-dependent gate voltages and biases. For heavy nuclear masses ${\bf M}=\{M_{1},M_{2},\dots\}$ an expansion of the nuclear wave functions around the classical trajectories yields~\cite{Hussein2010,Metelmann2011} ($T$ labels time) \begin{align}\label{lange} M_{\nu} d^{2}x_{\nu}/dT^{2}=-\partial_{x_{\nu}}\mathcal{U}_{\rm cl.}({\bf x}(T))+ F^{\rm el.}_{\nu}[{\bf x},T]-\xi_{\nu} , \end{align} where $\mathcal{U}_{\rm cl.}({\bf x})$ is the classical potential of the nuclei, $F^{\rm el.}_{\nu}[{\bf x},T]$ is the force exerted by the electrons and $\xi_{\nu}$ is a stochastic contribution \cite{Hussein2010,Metelmann2011,Bode2011,Bode2012,Lu2012}. For $\xi_{\nu}=0$ the Langevin-type Eq.~(\ref{lange}) reduces to the ED equation. In the following the stochastic field will be neglected. The most general device Hamiltonian can be written as \begin{align} H_{\rm C}({\bf x},T)=\sum_{ij,\sigma}h^{}_{ij}({\bf x},T)c^{\dagger}_{i\sigma}c^{}_{j\sigma}+H_{\rm int.}, \label{Hcentrale} \end{align} where $c^{\dagger}_{i\sigma}$ creates an electron with spin projection $\sigma$ on the $i$-th localized orbital of the device region. The term $H_{\rm int.}$ is independent of ${\bf x}$ and accounts for electron-electron interactions. The electronic force then reads \begin{eqnarray} \label{hamilt1} F^{\rm el.}_{\nu}[{\bf x}(T),T]&=&- \left.\langle \partial_{x_{\nu}} H_{\rm C}({\bf x},T) \rangle \right|_{{\bf x}={\bf x}(T)} \nonumber \\ &=&-\sum_{ij,\sigma} \rho_{ji}(T) \partial_{x_{\nu}} h_{ij}({\bf x}(T),T). \label{elforce} \end{eqnarray} where $\rho_{ji}(T)$ is the electronic one-particle density matrix. In general, $\rho$ depends on the history of the system, and so does the electronic force via $\rho$, as evident from Eq.~\eqref{elforce}. 
This is generally referred to as ``non-Markovian dynamics''. Below we discuss two ways to perform the time evolution of the electronic density matrix including memory effects, both formulated within the nonequilibrium Green's functions (NEGF) framework. \subsection{Kadanoff-Baym equations}\label{KBE} In the NEGF formalism \cite{BalzBon,StefLeeu,Hopjan14}, the density matrix $\rho_{}$ can be calculated from the equal-time lesser Green's function according to $\rho_{}(T)= -i G^{<}_{}(T,T^{+})$. The double-time lesser Green's function $G^{<}(t,t')$ is obtained from the contour Green's function $G(z,z')$ by setting $z=t$ on the forward branch and $z'=t'$ on the backward branch of the Keldysh contour $\gamma$~\cite{KBE,Keldysh,BalzBon,StefLeeu,Hopjan14,commentKBE}. The contour time evolution is governed by the equation of motion~\cite{StefLeeu,BalzBon,Hopjan14} \begin{align} \label{kbe} &[~{\rm i}\partial_{z}-h_{\rm HF}({\bf x}(z),z)]~G(z,z')= \nonumber \\&~~~~~~=\delta(z,z') \mathbb{1} +\int_{\gamma}({\Sigma}_{\rm corr.}+{\Sigma}_{\rm emb.})(z,\bar{z})G(\bar{z},z')d\bar{z} \end{align} where $h_{\rm HF}=h_{}+{\Sigma}_{\rm HF}$ is the sum of the single-particle Hamiltonian and the Hartree-Fock (HF) self-energy. The self-energy ${\Sigma}_{\rm corr.}$ accounts for electronic correlations beyond Hartree-Fock, whereas ${\Sigma}_{\rm emb.}$ is the standard embedding self-energy. A similar equation holds for $z'$. Choosing $z$ and $z'$ on different branches and breaking the contour integral into real-time integrals one obtains the KBE, see Supplemental Material (SM) \cite{SM}. They are coupled to the nuclear ED through Eq.~(\ref{lange}), resulting in a scheme that in the following we refer to as ED+KBE. In this work we solve the ED+KBE using the Second Born approximation (2BA) for ${\Sigma}_{\rm corr.}$~\cite{conserv}, whose performance has been tested previously (see, e.g., Ref.~\cite{Puig2009,commentKBE}).
The physical picture behind the 2BA is that two electrons, in addition to feeling a mean field generated by all other electrons, can also scatter directly once (see also the SM). \subsection{Generalized Kadanoff-Baym Ansatz}\label{GKBA} The KBE scale as $N_T^3$, $N_T$ being the time grid size \cite{Bonitzpaper}. To reduce memory costs, the time propagation can be directly performed for $\rho$. Formally, the general exact equation for $\rho$ can be derived from the KBE at equal times, i.e. on the time diagonal $t=t'$: \begin{align} \frac{d\rho(t)}{dt}+{\rm i}[h_{\rm HF}({\bf x}(t),t),\rho(t)]=-(I(t)+{\rm H.c.}), \label{rhoeom} \end{align} where the collision integral $I$ involves lesser (denoted by ``$<$'') and greater (denoted by ``$>$'') components of the two-times functions $G$, ${\Sigma}_{\rm corr.}$ and ${\Sigma}_{\rm emb.}$. To close the equation for $\rho$ we make the Generalized Kadanoff-Baym Ansatz~\cite{Lipavsky1986} \begin{eqnarray} &G^{<}(t,t')=-G^{R}(t,t')\rho(t')+\rho(t)G^{A}(t,t'), \end{eqnarray} where a specification for $G^{R/A}$ is needed which, in this paper, is made in terms of the so-called static-correlation approximation~\cite{Latini2014} (see also SM for details). When combining the GKBA with the ED (henceforth referred to as ED+GKBA), we use the 2BA for ${\Sigma}_{\rm corr.}$, consistently with the ED+KBE scheme discussed above. For purely electronic dynamics, the two schemes were shown to be in good mutual agreement~\cite{Latini2014}, especially for not too strong interactions. Finally, the one-time ED+GKBA evolution allows for much longer propagations than the two-time ED+KBE scheme. \section{Adiabatic Ehrenfest Dynamics}\label{AED} As discussed above, in general electrons and nuclei obey coupled equations of motion (Eqs.~(\ref{lange}) and (\ref{kbe}), or Eqs.~(\ref{lange}) and (\ref{rhoeom})), and memory effects should be taken into account in the electron dynamics.
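Before turning to the adiabatic limit, the structure of the one-time propagation of $\rho$ can be illustrated with a toy two-level model. In the sketch below the collision integral of Eq.~(\ref{rhoeom}) is replaced by a relaxation-time stand-in, $I(t)=(\gamma/2)(\rho-\rho_{\rm eq})$; this is an ad hoc substitute for the 2BA and embedding self-energies used in this work, and all parameters are illustrative.

```python
import numpy as np

# Toy propagation of d(rho)/dt + i[h_HF, rho] = -(I + H.c.) with the
# relaxation-time stand-in I = (gamma/2)(rho - rho_eq), so that the
# right-hand side becomes -gamma (rho - rho_eq).
def rho_step(rho, h, rho_eq, gamma=0.1, dt=0.005):
    def rhs(r):
        # -i [h, r] - gamma (r - rho_eq): trace-free and hermiticity-preserving
        return -1j * (h @ r - r @ h) - gamma * (r - rho_eq)
    k1 = rhs(rho)                       # classical RK4 step
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h = np.array([[0.0, -1.0], [-1.0, 0.5]])            # toy single-particle Hamiltonian
rho = np.array([[1.0, 0.0], [0.0, 0.0]], complex)   # electron starts on level 1
rho_eq = 0.5 * np.eye(2, dtype=complex)             # target: half filling
for _ in range(2000):
    rho = rho_step(rho, h, rho_eq)
```

The propagation preserves $\mathrm{Tr}\,\rho$ and hermiticity, the two structural properties any approximation to the collision integral should retain.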
In this section we show that, under specific assumptions, a simplification occurs, namely for slow nuclear dynamics the equations can be decoupled and one can propagate only Eq. \eqref{lange}. If the nuclear velocities $\dot{\bf x}$ are small, the electronic force can be expanded up to linear order in $\dot{\bf x}$. Additionally, in the adiabatic limit where memory effects are negligible, the coefficients of the expansion can be determined by the electronic steady state corresponding to the fixed nuclear position $\bf x$ (also known as the Markovian or nonequilibrium Born-Oppenheimer assumption). Under these conditions the electronic force can be divided into two contributions $F^{\rm el.}\approx F^{\rm ss}_{}[{\bf x}] +F^{\rm fric}_{}[{\bf x},{\dot{\bf x}}]$, where the first term is the steady-state force and the second one is the friction$+$Lorentz-like force. These forces, known as current-induced forces, are introduced below in terms of the {\em one-particle} steady-state nonequilibrium Green's function (ssGF). \subsection{Current-induced forces}\label{ssKBE} At the steady state, where ${\bf x}$ is time-independent, one can find the corresponding steady-state Green's functions ${G}^{}_{\rm ss}$ containing information about densities and currents in the system. The Green's functions depend only on the frequency $\omega$ and satisfy the steady-state KBE ({in matrix form and} omitting the parametric dependence on ${\bf x}$): \begin{align}\label{gadi} {G}^{R}_{\rm ss}(\omega)&= \frac{1}{\omega-{h}_{\rm HF}-{\rm \Sigma}^{ R}_{\rm ss}(\omega)} \nonumber \\ {G}^{<}_{\rm ss}(\omega)&= \frac{1}{\omega-{h}_{\rm HF}-{\rm \Sigma}^{ R}_{\rm ss}(\omega)} {\rm {\rm \Sigma}^{<}_{\rm ss}(\omega) \frac{1}{\omega-{h}_{\rm HF}-{\rm \Sigma}^{A}_{\rm ss}(\omega)}}, \end{align} where $\Sigma_{\rm ss}$ is the steady-state value of ${\Sigma}={\Sigma}_{\rm corr.} +{\Sigma}_{\rm emb.}$.
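For a noninteracting device in the wide-band limit these expressions can be evaluated directly, with $\Sigma^{R}_{\rm emb.}=-\tfrac{{\rm i}}{2}(\Gamma_L+\Gamma_R)$ and $\Sigma^{<}_{\rm emb.}={\rm i}(\Gamma_L f_L+\Gamma_R f_R)$. The sketch below does this for a toy two-site device; all parameters are illustrative and $\Sigma_{\rm corr.}$ is dropped ($U=0$).

```python
import numpy as np

# Steady-state Green's functions of a noninteracting two-site device in
# the wide-band limit: Sigma^R = -(i/2)(Gamma_L + Gamma_R) and
# Sigma^< = i (Gamma_L f_L + Gamma_R f_R); Sigma_corr. is dropped.
def gf_lesser(w, h, gam_l, gam_r, f_l, f_r):
    sig_r = -0.5j * (gam_l + gam_r)
    g_r = np.linalg.inv(w * np.eye(len(h)) - h - sig_r)   # retarded GF
    sig_less = 1j * (gam_l * f_l + gam_r * f_r)
    return g_r @ sig_less @ g_r.conj().T                  # G^< = G^R Sigma^< G^A

h = np.array([[0.0, -1.0], [-1.0, 0.0]])   # toy dimer
gam_l = np.diag([0.5, 0.0])                # left lead couples to site 1
gam_r = np.diag([0.0, 0.5])                # right lead couples to site 2

# Sanity check: with f_L = f_R = 1 over the whole frequency window, the
# frequency integral of -i G^< / (2 pi) must give a fully occupied device.
w_grid = np.linspace(-30.0, 30.0, 20001)
dw = w_grid[1] - w_grid[0]
rho = sum(gf_lesser(w, h, gam_l, gam_r, 1.0, 1.0) for w in w_grid) * dw / (2j * np.pi)
```

The recovered occupations approach one per site, which is just the spectral sum rule for $A=G^{R}\Gamma G^{A}$ in the wide-band limit.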
The lesser steady-state Green's function ${G}^{<}_{\rm ss}$ gives direct access to the steady-state force, \begin{equation} F^{\rm ss}_{\nu}[{\bf x}]={+} 2{\rm i}\int \frac{d\omega}{2\pi} {\rm Tr}\left[G^{<}_{\rm ss}[{\bf x}](\omega)\partial_{x_{\nu}}h({\bf x})\right], \label{ssforce} \end{equation} while the friction$+$Lorentz-like force is obtained as \begin{eqnarray} F^{\rm fric}_{\nu}[{\bf x},\dot{\bf x}]&= & {-}\sum_{\mu}\dot{x}_{\mu}\gamma_{\nu\mu} {[{\bf x}]}. \label{fricforce} \end{eqnarray} Here the friction coefficients $\gamma_{\nu\mu}$ depend on the parameter ${\bf x}$ through ${G}^{}_{\rm ss}$. Explicitly, \begin{align}\label{frictioneq} \gamma_{\nu\mu}[{\bf x}]=\int \frac{d\omega}{2\pi}{\rm Tr} \Bigl[ \Bigl(&{\mathcal Q}_\mu({G^{ R}_{\rm ss}},h_{\rm HF}+\Sigma^{R}_{\rm ss,corr.},{G^{\rm <}_{\rm ss}}) \nonumber \\ + &{\mathcal Q}_\mu({G^{\rm <}_{\rm ss}},h_{\rm HF}+\Sigma^{A}_{\rm ss,corr.},{G^{ A}_{\rm ss}}) \nonumber \\ + &{\mathcal Q}_\mu({G^R_{\rm ss}},{\rm \Sigma^<_{\rm ss, corr.}},{G^A_{\rm ss}})\Bigr)(\partial_{x_{\nu}}{h})\Bigr], \end{align} where ${\mathcal Q}_\mu(a,b,c)= [ (\partial_\omega a) (\partial_{x_{\mu}}b) c - a (\partial_{x_{\mu}} b) (\partial_\omega c) ]$. The result in Eq.~\eqref{frictioneq} applies to systems with electron-electron interactions and provides an alternative to the friction formula in terms of {\em $N$-particle} Green's functions \cite{Dou2017a,Proof}. Furthermore, Eq.~\eqref{frictioneq} directly reduces to previously published results in the noninteracting case \cite{Bode2011,Bode2012,Equiv}. More importantly, the advantage of the presented expression for the friction force is that electronic correlations can be systematically and self-consistently included through diagrammatic approximations \cite{Remark}. \subsection{Derivation of current-induced forces from KBE}\label{ssKBEder} The current-induced forces presented above can be derived from the nonadiabatic KBE dynamics in the adiabatic limit.
In the following we briefly discuss the main steps of the derivation (for the full derivation see the SM): {\em i)} We start with the nonadiabatic KBE dynamics where the electronic evolution is characterized by the two-times Green's functions $G(t,t')$ and we move to the Wigner representation $G(t,t') \rightarrow {G}(\omega, T)$ \cite{Wigner}, where $T = \frac{t+t'}{2}$ is the center-of-mass time and $\omega$ is the Fourier conjugate of the relative time $\tau=t-t'$. {\em ii)} Under the assumption that the nuclear velocities are small we can expand $G^{<}$ and $G^{R}$ in powers of the nuclear velocities $\dot{\bf x}$ \cite{Moyal}. To first order one finds $G^{<}(\omega,T)={G}^{<}_{\rm ss}(\omega)+ {\rm i}\sum_{\mu}\dot{x}_{\mu}(T)\Delta_{\mu}(\omega,T)$ where $\Delta_{\mu}$ is a complicated function of $G^{<}$, $G^{R}$ and their derivatives with respect to $\omega$ and $x_{\mu}$. This expansion consistently preserves the general relation $G^{>}-G^{<}=G^{R}-G^{A}$ for any finite bias~\cite{botermans}. {\em iii)} Subsequently, we invoke the assumption of the adiabatic (Markovian) limit. We evaluate $\Delta_{\mu}$ at the steady-state Green's functions, thus obtaining $\Delta_{\mu}(\omega,T)\rightarrow\Delta_{\mu,\rm ss}(\omega)$. Then, we take into account that $ \rho(T)=-{\rm i}\int \frac{d\omega}{2\pi} {G}^{<}(\omega,T)$ in Eq.~(\ref{elforce}). The integral gives access to the steady-state force and the friction$+$Lorentz-like force. As the nonadiabatic dynamics (ED+KBE or ED+GKBA) is the starting point to derive the adiabatic dynamics (ED+ssGF), the former can be used to benchmark the latter in the adiabatic limit. The advantage of the ED+ssGF scheme lies in its computational efficiency. Once the values of the steady-state and friction forces are computed and tabulated (for each $\bf x$), one can evolve the nuclear coordinates {\em for any initial condition} using only Eq. \eqref{lange}.
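A sketch of this last step: once $F^{\rm ss}(x)$ and $\gamma(x)$ are tabulated, the nuclear coordinate is evolved with Eq.~(\ref{lange}) alone. The profiles below are ad hoc toys (harmonic $\mathcal{U}_{\rm tot.}$, piecewise-constant $\gamma$), chosen only to show how a region of negative friction around the potential minimum pumps the motion into a self-sustained oscillation, while an everywhere-positive $\gamma$ damps it:

```python
# Semi-implicit Euler integration of M x'' = -U'(x) - gamma(x) x' with
# tabulated (here: ad hoc analytic) steady-state force and friction.
def trajectory(gamma, x0=0.05, v0=0.0, dt=1e-3, n_steps=200_000, M=1.0):
    x, v = x0, v0
    for _ in range(n_steps):
        a = (-x - gamma(x) * v) / M    # harmonic U_tot(x) = x^2 / 2
        v += a * dt
        x += v * dt
    return x, v

# Friction negative in a window around the minimum: pumped limit cycle.
neg_fric = lambda x: -0.05 if abs(x) < 0.5 else 0.5
x1, v1 = trajectory(neg_fric)

# Everywhere-positive friction: the oscillation dies out.
pos_fric = lambda x: 0.05
x2, v2 = trajectory(pos_fric)
```

After the same integration time the first trajectory retains a finite oscillation energy while the second has essentially come to rest, which is the qualitative dichotomy explored in the model study below.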
\section{Dynamics of Model System}\label{system} We demonstrate the impact of electronic correlations in the model system originally introduced in Ref.~\cite{Hussein2010}, namely a dimer { that can rigidly oscillate with frequency $\Omega$ between two leads, see inset in Fig.~\ref{friction}-a)}. As in Ref.~\cite{Hussein2010}, {we express all energies in units of $\hbar\Omega$, times in units of $1/\Omega$ and distances in units of the characteristic harmonic oscillator length ${l_{0}}=\sqrt{\hbar/(M\Omega)}$.} The dimensionless dimer Hamiltonian reads \begin{eqnarray} H_{\rm C}(x,T)&=&\sum_{\sigma}{J}_{c}(c_{1\sigma}^{\dagger}c_{2\sigma}^{}+{\rm H.c.})+ {v}_{c}(T)\sum_{i\sigma}n_{i\sigma} \nonumber\\ &+&gx\sum_{\sigma}(n_{1\sigma}-n_{2\sigma})+U\sum_{i} n_{i\uparrow}n_{i\downarrow}, \end{eqnarray} where we added a Hubbard-like interaction (last term) to the original model. { The electron-nuclear coupling has strength $g$ and describes a dipole-dipole interaction.} The dimer is {further} connected to a left (L) lead through site 1 and to a right (R) lead through site 2 with hopping amplitude $J_{\rm tun.}$. The L/R lead is a semi-infinite tight-binding chain with nearest neighbor hopping integral $J$ and {time-dependent} onsite energy (bias) $V_{\rm L/R}(T)$.\\ \begin{figure}[t!] \begin{center} \includegraphics[width=8.5cm]{figure1} \caption{{Total potential $\mathcal{U}_{\rm tot.}$ and friction $\gamma$ in 2BA as function of $x$ for different interaction strengths $U$ in (a) equilibrium with $v_{c}=0$ and (b) at finite bias $V_{L}=-V_{R}=5$ with $v_{c}=1$. The system parameters are: $g=1.58$, ${J}_{c}=-3.5$, $J=50$ and ${J}_{\rm tun.}=-8.66$. 
The inset shows a dimer (green circles) coupled to leads, the effective energy of sites 1 and 2 (horizontal tracts) and the charge density (lines over the dimer) for $x=0$ (solid lines) and for $x>0$ (dashed lines).}} \label{friction} \end{center} \vspace{-0.8cm} \end{figure} In Fig.~\ref{friction} we plot the total potential $\mathcal{U}_{\rm tot.}=\mathcal{U}_{\rm cl.}+\mathcal{U}_{\rm ss}$, where $\mathcal{U}^{}_{\rm ss}=-\int^{x}_{-\infty}F^{\rm ss}dx$, and the friction coefficient $\gamma=\gamma_{11}$, both calculated within the 2BA \cite{conserv}. In equilibrium, i.e., for $V_{\rm L/R}=0$, the system is symmetric under the inversion of $x$ and so are potential and friction, {see panel a)}. {For $U=0$ we have a double minimum in $\mathcal{U}_{\rm tot.}$ corresponding to the two degenerate Peierls-distorted ground states. With increasing $U$ the repulsive-energy cost of the charge-unbalanced Peierls states becomes larger than the distortion-energy gain. Consequently, $\mathcal{U}_{\rm tot.}$ develops a single minimum at $x=0$ and the charge-balanced ground state becomes favored.} {Independently of $U$, the friction remains positive, an exact equilibrium property correctly captured by our diagrammatic 2BA.} Turning on a gate voltage $v_{c}=1$ and a bias $V_{\rm L}=-V_{\rm R}=5$, {see panel b),} {electrons start flowing through the dimer}. The noninteracting formulation predicts self-sustained van der Pol oscillations \cite{Bode2011,Bode2012,Kartsev2014} since the minimum in $\mathcal{U}_{\rm tot.}$ occurs for values of $x$ where $\gamma$ is negative. {Thus, the electrical current activates an everlasting sloshing motion of the dimer.} Electron correlations shift the position of the potential minimum away from the region $\gamma<0$, thus hindering the van der Pol oscillations. This effect is further enhanced by the flattening of $\gamma$, which causes a shrinking of the region of negative friction.
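The equilibrium competition between Peierls distortion and on-site repulsion can be checked in a back-of-the-envelope way by exact diagonalization of the isolated two-electron (singlet) dimer, i.e., dropping the leads and the gate; this is only a caricature of the open-system 2BA calculation behind Fig.~\ref{friction}, with the same $g$ and $J_c$:

```python
import numpy as np

# U_tot(x) = x^2/2 + E_0(x, U) for the closed two-electron dimer in the
# singlet basis {|updown on 1>, |updown on 2>, covalent singlet}; the
# leads and the gate of the full model are dropped.
def u_tot(x, U, g=1.58, J=-3.5):
    t = np.sqrt(2.0) * J               # hopping matrix element to the doublons
    h2 = np.array([[2 * g * x + U, 0.0, t],
                   [0.0, -2 * g * x + U, t],
                   [t, t, 0.0]])
    return 0.5 * x**2 + np.linalg.eigvalsh(h2)[0]

xs = np.linspace(-4.0, 4.0, 1601)
x_min_u0 = xs[np.argmin([u_tot(x, U=0.0) for x in xs])]
x_min_u10 = xs[np.argmin([u_tot(x, U=10.0) for x in xs])]
# U = 0: degenerate Peierls minima at finite |x|; large U: single minimum at x = 0
```

Even this closed-dimer toy reproduces the trend of panel a): a double well at $U=0$ that collapses to a single minimum at $x=0$ once $U$ is large.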
We point out that the HF approximation, i.e., $\Sigma_{\rm corr.}=0$, predicts the opposite behavior. To validate the correctness of the 2BA treatment we have evaluated $\gamma$ also within the T-matrix approximation \cite{conserv} (TMA), which accounts for multiple scattering of electrons, and found similar results (see SM).\\ \begin{figure}[t!] \begin{center} \includegraphics[width=8.5cm]{figure2} \caption{{Phase space $(p,x)$ trajectories in ED+GKBA with gate $v_{c}(T)=\theta(T)[1+{\sin}^2(\frac{2\pi}{5} 2gT)]$ and $U=2.5$ in the HF and 2B approximations (time rescaled by $1/(2g)$). The inset (top right) shows the density of the two sites of the dimer.}} \label{nonadiabatic} \end{center} \vspace{-0.8cm} \end{figure} The differences between the HF and 2BA results are illustrated in the time domain in Fig.~\ref{nonadiabatic} using the ED+GKBA approach. We start from an equilibrium situation and then switch on a bias $V_{\rm L}=-V_{\rm R}=5$ and a gate $v_{c}=1$. Then, after time $t=100$, we add a high-frequency time-dependent gate whose only effect is to modulate the nuclear trajectory; ultrafast fields {have only a minor influence in steering molecular motors. Notice that,} although $U/J_{c}\approx 0.7$ (weakly correlated regime), the HF and 2BA trajectories are quantitatively very different.\\ {The effects of Coulomb interactions on the electromechanical energy conversion are investigated in Fig.~\ref{adiabatic}.} In the left panels we consider a steady-state system with a bias at time $t=0$ and then suddenly change the position of the nuclear coordinate to $x=0.3$. No external fields other than the bias are switched on, so all quantities depend on time {only} through $x$. {Simulations are performed with ED+ssGF and ED+GKBA (the maximum propagation time is too long for ED+KBE)}. The nuclear coordinate and site densities show an excellent agreement between the two schemes up to $U=5$.
The real-time simulations confirm the conclusions drawn by inspection of Fig.~\ref{friction}. {van der Pol oscillations are everlasting only for $U=0$ [panel a)]; for $U>0$ the dynamics is damped [panels c)-e)]. We can estimate the size of the effect for a normal mode with period $T=2\pi/\Omega\simeq 10^{1}$~fs (hence $V_{L}-V_{R}= 10\hbar\Omega\simeq 1\div 10$~eV). In this case the average current through the dimer is in the $\mu$A range (which is typical for molecular transport) and the amplitude of the sloshing motion is, from Fig.~\ref{adiabatic}, of the order $l_{0}\simeq 10^{-1}\div 10^{-2}~{\rm \AA}$ (we assumed a dimer of mass $M\sim 25 M_{\rm proton}$ which is appropriate for molecules like, e.g., ethylene). Then for $U=5\hbar\Omega\simeq (0.1\div 1)$~eV the Coulomb-induced damping occurs on the picosecond timescale (see also SM for details).} \begin{figure}[h!] \begin{center} \includegraphics[width=8.6cm]{figure3} \caption{Comparison between ED+GKBA and ED+ssGF for nuclear coordinate and dimer densities in the 2B approximation (time rescaled by $1/(2g)$).} \label{adiabatic} \end{center} \vspace{-0.8cm} \end{figure} In the right panels of Fig.~\ref{adiabatic} we explore the performance of the ED+ssGF scheme in a situation in which the system has not yet attained a steady state. At time $t=0$ we switch on a constant (in time) gate $v_{c}=1$ and bias $V_{\rm L}=-V_{\rm R}=5$ and propagate the system using both ED+KBE and ED+GKBA. After a transient phase [time window $(0,20)$] we continue the ED+KBE propagation using the ssGF scheme with initial condition given by the ED+KBE value of the nuclear coordinate at time $t=20$. The duration of the transient phase was chosen longer than the tunneling time in order to wash out the effects of the sudden switch-on of the external fields.
In the noninteracting case, panel b), the system is in a strongly nonadiabatic regime, and the ssGF densities deviate considerably from the GKBA densities, especially close to the maxima of $|\dot{x}|$. Nevertheless, the ssGF and GKBA nuclear coordinates are almost identical. This is a consequence of the fact that also the density deviations on the two sites are almost identical, and hence the electronic force (which depends on the density difference) is not affected by these deviations. For $U=5$, panel d), the system is in the adiabatic regime after the transient, and we observe a good agreement between the ED+GKBA and the ED+ssGF dynamics. To appreciate the importance of nonadiabatic effects we also plot the result of the pure ED+ssGF dynamics (red line). During the transient ED+ssGF is not expected to work since we are not close to the KBE steady state. Interestingly, however, the impact of the sudden switch-on is strong also at long times; the ED+ssGF nuclear coordinate differs considerably from that of ED+GKBA. Increasing the interaction further, panel f), the ED+GKBA dynamics starts to deviate from the ED+KBE dynamics, with a sizable overestimation of the amplitude of the oscillations. This is again a consequence of the failure of the GKBA for too strong $U$'s. \section{Conclusions}\label{conc} We introduced a theoretical description of molecular motors in molecular junctions, based on a coupled quantum-classical approach, with nuclei treated within the Ehrenfest dynamics (ED), and electrons within the two-times Kadanoff-Baym Equations (KBE) or the one-time Generalized Kadanoff-Baym Ansatz (GKBA). In the adiabatic limit of these descriptions, we used the steady-state nonequilibrium Green's function (ssGF) to derive an expression for the electronic friction coefficient which includes correlation effects due to Coulomb repulsions among the electrons.
The adiabatic assumption allows for integrating out the electronic degrees of freedom, thus providing a description of the nuclear dynamics in terms of forces that can be calculated and stored in advance. We demonstrated that the proposed ED+ssGF approach is accurate and computationally more efficient than ED+KBE and even ED+GKBA. We considered the paradigmatic Hubbard dimer to investigate the role of correlations and performed calculations in the mean-field HF approximation as well as in the correlated 2BA and TMA {to treat the Coulomb interaction}. Numerical evidence indicates that {the HF approximation is} not accurate enough and that correlation effects can dramatically change the physical picture. In fact, in a broad range of model parameters we found that correlations hinder the emergence of regions of negative friction and strongly damp the nuclear motion. Our results also suggest that fast driving fields play a minor role in designing molecular motors. Of course, the investigation of electronic correlations in molecular motors is still in its infancy. The proposed ED+ssGF approach allows for standard diagrammatic approximations and is therefore well suited for first-principles treatments of realistic setups. We envisage its use to gain insight into molecular devices, and hopefully to bring technological applications within closer reach. \begin{acknowledgements} We acknowledge D. Karlsson for discussions and E. Bostr\"om for critically reading the manuscript. E.P. acknowledges funding from the European Union project MaX Materials design at the eXascale H2020-EINFRA-2015-1, Grant Agreement No. 676598 and Nanoscience Foundries and Fine Analysis-Europe H2020-INFRAIA-2014-2015, Grant Agreement No. 654360. G.S. acknowledges funding by MIUR FIRB Grant No. RBFR12SW0J and EC funding through the RISE Co-ExAN (GA644076).
\end{acknowledgements} \bibliographystyle{andp2012} \providecommand{\WileyBibTextsc}{} \let\textsc\WileyBibTextsc \providecommand{\othercit}{} \providecommand{\jr}[1]{#1} \providecommand{\etal}{~et~al.}
\section{Introduction} We consider a generalized regression model \[ Y_i=g(x_i)+\eps_i,\quad i=1,\ldots,n,\] with $(\eps_i)$ i.i.d., $x_1,\ldots,x_n$ in the design space ${\cal X}$ and $g:{\cal X}\to\R$. The problems we have in view are those of robust nonparametric estimation of $g$ in the presence of heavy-tailed noise $(\eps_i)$ and of nonparametric quantile estimation, which is becoming more and more popular in applications. One main application will be robust image denoising. In the spirit of classical M-estimation \cite{Huber} we therefore consider $g(x_i)$ as the location parameter in the observation $Y_i$, that is \begin{equation}\label{eqfdef} g(x_i)=\argmin_{m\in\R}\E[\rho(Y_i-m)] \end{equation} for some convex function $\rho:\R\to\R^+$ with $\rho(0)=0$. We shall assume that $g(x_i)$ is uniquely defined by \eqref{eqfdef}, which is true in all cases of interest. If the $Y_i$ have Lebesgue densities, then often an equivalent description is given by the first order condition $\E[\rho'(\eps_i)]=0$ where $\rho'$ denotes the (weak) derivative. Standard examples are $\rho(x)=x^2/2$ for the classical mean regression model ($\E[\eps_i]=0$), $\rho(x)=|x|$ for the median regression model ($\PP(\eps_i\le 0)=\PP(\eps_i\ge 0)=1/2$) and the intermediate case $\rho(x)=x^2/2$ for $\abs{x}\le k$ and $\rho(x)=k\abs{x}-k^2/2$ for $\abs{x}\ge k$ with some $k>0$ for the Huber estimator ($\E[\min(\max(\eps_i,-k),k)]=0$). The quantile regression model is obtained for $\rho(x)=\abs{x}+(2\alpha-1) x$ ($\PP(\eps_i\le 0)=\alpha$ with quantile $\alpha\in (0,1)$), see e.g. \cit{Koenker}. Since we shall care about robustness, we merely assume a mild moment condition $\eps_i\in L^r$ for some $r\ge 1$ and measure the error in $L^r$-norm. The function $g$ is not supposed to satisfy a global smoothness criterion, but we aim at estimating it locally in each point $x\in{\cal X}$ as efficiently as possible. 
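To fix ideas, the mean, median and Huber choices of $\rho$ above lead to location estimates with very different robustness under gross contamination. A minimal numerical sketch (the sample size, the contamination and the Huber constant $k=1.345$ are illustrative):

```python
import numpy as np

# Location M-estimates for the standard choices of rho: squared loss
# (sample mean), absolute loss (sample median) and Huber loss. The Huber
# estimate solves mean(psi(y - m)) = 0 with psi(u) = clip(u, -k, k),
# found here by a simple fixed-point iteration.
def huber_location(y, k=1.345, n_iter=100):
    m = np.median(y)                   # robust starting value
    for _ in range(n_iter):
        m = m + np.mean(np.clip(y - m, -k, k))
    return m

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 95),
                    np.full(5, 50.0)])  # 5% gross outliers, true location 0

est_mean = np.mean(y)                  # dragged far away by the outliers
est_median = np.median(y)              # stays near the true location
est_huber = huber_location(y)          # likewise robust
```

The mean is pulled by the outliers to a value of order the contamination level times the outlier size, while the median and Huber estimates remain close to zero, which is the robustness property exploited throughout this paper.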
The risk will then depend on local regularity properties, which we do not assume to be known. For spatially inhomogeneous functions, in the presence of jumps or for image denoising pointwise adaptive methods are much more appropriate than global smoothing methods. In classical mean regression local adaptivity can be achieved using wavelet thresholding or kernels with locally varying bandwidths, see \cit{Lepskietal} for a discussion. In this ideal situation a data-driven choice among linear empirical quantities is performed. M-estimators are typically nonlinear and the standard approaches do not necessarily transfer directly. \cit{Brownetal}, for example, use an intermediate data binning and then apply wavelet thresholding to the binned data for median regression. On the other hand, \cit{HallJones}, \cit{Portnoy} and \cit{vandeGeer} consider kernels, smoothing splines and more general $M$-estimation for quantile regression, but they all use global methods for choosing the tuning parameters like cross-validation or penalisation. Here, we develop a generic algorithm to select optimally among local M-estimators. In contrast to classical model selection, we do not only rely on the estimator values themselves to define a data-driven selection criterion. This has significant advantages in the present case of nonlinear base estimators. Subsequently, we assume that the statistician has chosen the suitable definition of $\rho$ for the problem at hand and we use the corresponding sample versions to construct base estimators for the (generalized) regression function $g$. In the spirit of classical nonparametrics, we assume that $g$ is locally almost constant around a given point $x\in{\cal X}$. The statistical challenge is to select adaptively the right neighbourhood $U$ of $x$ where a local $M$-estimator is applied. 
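To make the estimation problem concrete, the sample analogue of \eqref{eqfdef} and the contrast functions listed above can be sketched in a few lines of Python. This is our own illustration, not the paper's implementation; the function name `m_estimate` and the search tolerance are arbitrary choices.

```python
import numpy as np

def m_estimate(y, rho, tol=1e-9):
    """Location M-estimate: argmin over mu of sum_i rho(y_i - mu).
    Ternary search is valid because the objective is convex in mu."""
    y = np.asarray(y, dtype=float)
    lo, hi = y.min(), y.max()
    def obj(mu):
        return rho(y - mu).sum()
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if obj(m1) < obj(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# The contrast functions rho from the text:
rho_mean = lambda x: x**2 / 2.0                    # mean regression
rho_median = lambda x: np.abs(x)                   # median regression
def rho_huber(x, k=1.345):                         # Huber's rho
    return np.where(np.abs(x) <= k, x**2 / 2.0, k * np.abs(x) - k**2 / 2.0)
def rho_quantile(x, alpha=0.25):                   # alpha-quantile contrast
    return np.abs(x) + (2.0 * alpha - 1.0) * x
```

For $\rho(x)=x^2/2$ the routine returns the sample mean, for $\rho(x)=\abs{x}$ the sample median, and the quantile contrast yields the corresponding sample quantile, in line with the first order conditions above.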
Let us write \begin{equation}\label{eqmdef} m(Y_i,\,x_i\in U):=\arginf_{\mu\in\R}\Big\{\sum_{i:x_i\in U}\rho(Y_i-\mu)\Big\} \end{equation} for the location estimator on the set $U\subset{\cal X}$. If the minimizer is not unique, we simply select one of the minimizers (e.g., a version of the sample median for $\abs{U}$ even). Note that an extension to general local polynomial or more general local-likelihood estimation is straightforward, but this is not the focus of the present work. For each point $x$ let a family of nested neighbourhoods $U_0\subset U_1\subset\cdots\subset U_K$ be given and set \[\tilde\theta_k:=m(Y_i,\,x_i\in U_k). \] Then the family $(\tilde\theta_k)_{0\le k\le K}$ forms the class of base estimators and we aim at selecting the best estimator of $\theta:=g(x)$ in this family. \begin{example} \label{ex1} Let the design space be ${\cal X}=[0,1]$ with equidistant design points $x_i=i/n$ and take $\rho(x)=\abs{x}$. Consider the symmetric windows $U_k=[x-h_k,x+h_k]$ generated by some bandwidths $0\le h_0<h_1<\cdots<h_K$. Then $\tilde\theta_k$ is the classical median filter, see e.g. \cit{Truong} or \cit{AriasDonoho}. \end{example} Using Lepski's approach as a starting point, we present our procedure to select optimally among local M-estimators in Section \ref{SecProc}. Via a multiple testing interpretation we argue that our procedure is usually more powerful. Moreover, it is equally simple to analyze and easy to implement. In Sections \ref{SecError} and \ref{SecAsymp} we derive exact and asymptotic error bounds, and the latter give optimal minimax rates for H\"older classes. The simulations in Section \ref{SecSimul} show that our procedure has convincing finite sample properties. Moreover, they confirm that Lepski's classical method applied to local median estimators suffers from oversmoothing because changes in the signal are not detected early enough due to the robustness of the median.
Finally, the procedure has been implemented to denoise dynamical CT image sequences in Section \ref{SecImage}, which is of key interest when assessing tumor therapies. Two more technical proofs are postponed to Section \ref{SecApp}. \section{The procedure}\label{SecProc} \subsection{Main ideas} As a starting point let us consider the standard \cit{Lepski} method for selecting among $(\tilde\theta_k)_{0\le k\le K}$, given the mean regression model with $\E[\eps_i]=0$ and $\E[\eps_i^2]<\infty$. Note that the base estimators are then ordered according to decreasing variances: $\Var(\tilde\theta_k)\le \Var(\tilde\theta_{k-1})$. On the other hand, the bias is usually increasing with increasing neighbourhoods $U_k$. This is not always the case (for example, think of local means for a linear $g$), but true in particular for the worst case bias over smoothness classes like H\"older balls of functions. Lepski's method can be understood as a multiple testing procedure where the hypothesis $H_0(k):\;g|_{U_k}\equiv \theta$ that $g$ is constant on $U_k$ is tested against the alternative of significant deviations. Always assuming that $H_0(0)$ is true, we test sequentially whether $H_0(k+1)$ is acceptable provided that the hypotheses $H_0(\ell)$ have been accepted for all $\ell\le k$. Once the test of an $H_0(k+1)$ is rejected, we select the base estimator $\tilde\theta_k$ corresponding to the last accepted hypothesis. The main point is thus to properly define the single significance tests for $H_0(k+1)$. Lepski's method accepts $H_0(k+1)$ if $\abs{\tilde\theta_{k+1}-\tilde\theta_\ell}\le z_\ell^{(k+1)}$ holds for all $\ell\le k$ with suitable critical values $z_\ell^{(k+1)}>0$. The wide applicability and success of Lepski's method are also due to this very simple and intuitive test statistic. In our nonlinear estimation case it turns out that tests for $H_0(k+1)$ based on the differences of base estimators are often not optimal.
To understand this fact, let us consider a toy model of two neighbourhoods $U_1\subset U_2$ with a piecewise constant median regression function $g$ equal to $\mu_1$ on $U_1$ and to $\mu_2$ on $U_2\setminus U_1$. The procedure therefore reduces to a simple two-sample location test between the observations in $U_1$ and in $U_2\setminus U_1$. We proceed by considering abstractly a two-sample location test where the first sample $Y_1,\ldots,Y_n$ is i.i.d. with density $f_1(x)=\frac1{2\sigma} \exp(-\abs{x-\mu_1}/\sigma)$ and the second independent sample $Y_{n+1},\ldots,Y_{2n}$ is i.i.d. with density $f_2(x)=\frac1{2\sigma} \exp(-\abs{x-\mu_2}/\sigma)$. Our goal is to test $H_0: \mu_1=\mu_2$ for known $\sigma>0$. Given the Laplace distribution, we follow Lepski's idea and put $\tilde m_1=\med(Y_i,\,i=1,\ldots,n)$, the median over the first sample, and $\tilde m_2=\med(Y_i,\,i=1,\ldots,2n)$, the median over both samples. Then the test rejects if $T_L:=2\abs{\tilde m_1-\tilde m_2}>z$ holds for appropriate $z>0$. A more classical approach, though, relies on a likelihood ratio (LR) test or on a Wald-type test using the maximum likelihood estimator for $\mu_1-\mu_2$. Since the LR test is not as simple, we focus on the Wald-test statistic which is given by the difference of the medians over the two samples. Hence, we reject $H_0$ if $T_W:=\abs{\med(Y_i,\,i=1,\ldots,n)-\med(Y_i,\,i=n+1,\ldots,2n)}>z$ holds for appropriate $z>0$. The following asymptotic result for the two test statistics is proved in Section \ref{SecApp1}. \begin{proposition}\label{PropTest} Let $f:\R\to\R^+$ be a symmetric and continuous density with $f(0)>0$ and let $Y_1,\ldots,Y_n\sim f$, $Y_{n+1},\ldots,Y_{2n}\sim f(\cdot-\Delta)$ with $\Delta>0$ be independently distributed. 
Then with $F$ denoting the cumulative distribution function of $f$ we obtain for $n\to\infty$ \begin{align*} &\sqrt{n}\Big(\med(Y_i,\,i=n+1,\ldots,2n)-\med(Y_i,\,i=1,\ldots,n)-\Delta\Big) \Rightarrow N(0,\sigma_W^2)\\ &\quad\text{ with } \sigma_W^2=\frac{1}{2f^2(0)},\\ &\sqrt{n}\Big(2\Big(\med(Y_i,\,i=1,\ldots,2n)-\med(Y_i,\,i=1,\ldots,n)\Big)-\Delta\Big) \Rightarrow N(0,\sigma_L^2)\\ &\quad\text{ with } \sigma_L^2= \frac{2F(\Delta/2)(1-F(\Delta/2))}{f^2(\Delta/2)}+\frac{1}{f^2(0)}-\frac{2(1-F(\Delta/2))}{f(0)f(\Delta/2)}. \end{align*} In particular, for $\Delta=0$ we have $\sigma_L^2=\sigma_W^2$ and for $\Delta\to 0$ we have the order $\sigma_L^2=\sigma_W^2(1+2\Delta f(0)+O(\Delta^2f(0)))$, provided $f$ is Lipschitz continuous at zero. \end{proposition} Putting $\Delta=\abs{\mu_1-\mu_2}$ this result shows that under $H_0$, i.e. $\Delta=0$, the test statistics $T_L$ and $T_W$ are asymptotically identically distributed, whereas under any alternative $\Delta>0$ the statistic $T_L$ has a larger asymptotic variance than $T_W$. In the above Laplace model with densities $f_1,f_2$ this deterioration is only negligible if the signal-to-noise ratio satisfies $\abs{\mu_1-\mu_2}/\sigma\ll 1$. This is exactly what we see in simulations, see e.g. Example 1 in Section \ref{SecSimul} below. Since the Laplace model is Hellinger differentiable, the Wald-type test is (locally) asymptotically efficient for $n\to\infty$ as is the LR test, see e.g. \cit{vanderVaart}. Strictly speaking, when considering local alternatives for fixed $\sigma>0$ and $n\to\infty$, i.e. $\abs{\mu_1-\mu_2}=O(n^{-1/2})$, then the deterioration in using $T_L$ also becomes negligible. From a practical perspective, these local asymptotics are often not adequate, e.g. in image denoising, where we face relatively large signal differences $\Delta$ at borders between objects and do not have a very large number $n$ of observed pixels at our disposal.
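A quick Monte Carlo sketch makes this variance inflation tangible. It is our own illustration (sample size, shift and seed are arbitrary choices): for Laplace noise with $\sigma=1$ we have $f(0)=1/2$, hence $\sigma_W^2=2$, while the formula for $\sigma_L^2$ evaluates to roughly $8.9$ at $\Delta=2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, delta = 500, 1000, 2.0
TW, TL = [], []
for _ in range(reps):
    y1 = rng.laplace(0.0, 1.0, n)      # first sample, location mu_1 = 0
    y2 = rng.laplace(delta, 1.0, n)    # second sample, location mu_2 = delta
    m1 = np.median(y1)
    # Wald-type statistic: difference of the two in-sample medians
    TW.append(np.median(y2) - m1)
    # Lepski-type statistic: 2 * (median of pooled sample - first median)
    TL.append(2.0 * (np.median(np.concatenate([y1, y2])) - m1))
var_W = n * np.var(TW)   # close to sigma_W^2 = 2
var_L = n * np.var(TL)   # markedly larger under the alternative
```

Both statistics estimate $\Delta$ consistently, but the pooled-median statistic pays a clear variance penalty under the alternative, exactly as the proposition predicts.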
More generally, two-sample location tests can naturally be based on the difference of the in-sample location estimators. In consequence, we proceed differently in testing the hypotheses $H_0(k+1)$ of homogeneity: When the hypotheses $H_0(\ell)$ for $\ell\le k$ have been accepted, we ask whether the observations $Y_i$ in the new points $x_i\in U_{k+1}\setminus U_k$ are homogeneous with those in $U_\ell$ for $\ell\le k$. This means that our tests reject if the empirical location in the additional data \[\tilde\theta_{(k+1)\setminus k}:=m(Y_i,\,x_i\in U_{k+1}\setminus U_k)\] satisfies with certain critical values $z_\ell^{(k+1)}>0$: \[ \exists\ell\le k:\:\abs{\tilde\theta_{(k+1)\setminus k}-\tilde\theta_\ell}>z_\ell^{(k+1)}.\] As in Lepski's method, it is necessary to perform the testing for all $\ell\le k$ and not only with $\ell=k$ to prevent the signal from slowly drifting away as the neighbourhoods grow. In most cases, though, $H_0(k+1)$ will be rejected because the new piece $\tilde\theta_{(k+1)\setminus k}$ is not in line with $\tilde\theta_k$: due to the smaller variance of $\tilde\theta_k$ compared to $\tilde\theta_\ell$, $\ell< k$, this last test is the most powerful. It is then interesting to observe that for linear $m$ the test statistic $\tilde\theta_{(k+1)\setminus k}-\tilde\theta_k$ is just a multiple of $\tilde\theta_{k+1}-\tilde\theta_k$. Consequently, for mean regression with linear base estimators our method will not differ much from Lepski's standard method, whereas the general nonlinear M-estimators are treated in a significantly different way; see also the numerical results in Section \ref{SecSimul}. Observe that our approach breaks a ubiquitous paradigm in modern statistics and learning theory (see e.g.
\cit{Massart} for model selection or \cit{Tsybakov} for aggregation): we select the best base learner among $(\tilde\theta_k)$ in a data-driven way not only based on the estimator values themselves, but additionally on the statistics $(\tilde\theta_{(k+1)\setminus k})$. Not only in the abstract modeling above, but also in implementations this idea turns out to be very advantageous for nonlinear estimators. \subsection{The algorithm}\label{SecAlgo} We want to select the best estimator among the family $\{\tilde\theta_k\,|\,k=0,\ldots,K\}$. Considering the law $\PP_0$ generated by the no-bias setting $g\equiv 0$, we introduce the stochastic error levels \begin{equation}\label{EqStochErr} s_j:=\E_0[\abs{\tilde\theta_j}^r]^{1/r},\quad s_{kj}:=\E_0[\abs{\tilde\theta_{(k+1)\setminus k}-\tilde\theta_j}^r]^{1/r}. \end{equation} We apply the following sequential procedure for prescribed critical values $(z_j)_{j=0,\ldots,K-1}$ and set $z_K:=1$: {\tt \begin{itemize} \item initialize $k:=0$; \item repeat\\ \hspace*{3mm} if for all $j=0,\ldots,k$ \[\abs{\tilde\theta_{(k+1)\setminus k}-\tilde\theta_j}\le z_j s_{kj}+ z_{k+1} s_{k+1} \] \hspace*{6mm} then increase $k$\\ \hspace*{6mm} else stop\\ until $k=K$; \item put $\hat k:=k$ and $\hat\theta:=\tilde\theta_{\hat k}$. \end{itemize}} This algorithm to determine $\hat k$ can be cast in one formula: \[ \hat k:=\inf\Big\{k\ge 0\,\Big|\,\exists j\le k:\: \abs{\tilde\theta_{(k+1)\setminus k}-\tilde\theta_j}> z_j s_{kj}+ z_{k+1} s_{k+1}\Big\}\wedge K. \] \section{Error analysis}\label{SecError} \subsection{Propagation and stopping late} We need a very natural property of the $M$-estimator. 
\begin{assumption}\label{AssMean} The location estimator in \eqref{eqmdef} satisfies for any set $S$ and any partition $S=\bigcup_j S_j$ with pairwise disjoint sets $S_j$: \[ \textstyle\min_j m(Y_i,\,x_i\in S_j) \le m(Y_i,\,x_i\in S)\le \max_j m(Y_i,\,x_i\in S_j).\] \end{assumption} \begin{lemma}\label{LemMeanconvex} If the function $\rho$ is strictly convex, then Assumption \ref{AssMean} is satisfied. \end{lemma} \begin{proof} Let us write $m_T$ as short-hand for $m(Y_i,\,x_i\in T)$, $T\subset{\cal X}$. Denoting by $\rho'_+,\rho'_-$ the right- and left-handed derivatives of the convex function $\rho$, the functions $\rho'_+$, $\rho'_-$ are strictly increasing with $\rho'_+(x)< \rho'_-(y)\le \rho'_+(y)$ for all $x<y$ and \[ \sum_{x_i\in T}\rho'_-(Y_i-m_T)\le 0,\quad \sum_{x_i\in T}\rho'_+(Y_i-m_T)\ge 0. \] If $m_S<m_{S_j}$ were true for all $j$, then \[ \sum_{x_i\in S}\rho'_-(Y_i-m_S)> \sum_j\sum_{x_i\in S_j}\rho'_+(Y_i-m_{S_j})\ge 0, \] which contradicts the minimizing property of $m_S$. Hence, $m_S\ge\min_j m_{S_j}$ holds and a symmetric argument shows $m_S\le \max_j m_{S_j}$. \end{proof} \begin{remark} If $\rho$ is not strictly convex, then we usually impose additional conditions to define $m$ uniquely. For any reasonable specific choice Assumption \ref{AssMean} should be satisfied. In particular, this is true for the sample median where for an even number $N$ of data points $Y_i$ we take the mean $(Y_{(N/2)}+Y_{(1+N/2)})/2$ of the two middle order statistics. \end{remark} \begin{proposition}\label{PropLateErr} Grant Assumption \ref{AssMean}. Then we have for any $k=0,\ldots,K-1$ \[ \abs{\hat\theta-\tilde\theta_k}{\bf 1}(\hat k>k)\le \max_{j=k+1,\ldots,K-1} \big(z_k s_{jk}+ z_{j+1} s_{j+1}\big). \] \end{proposition} \begin{remark} This error propagation result is true `$\omega$-wise', that is, it does not depend on the noise realisation. It is built into the construction of the selection procedure. An analogous result holds for Lepski's original procedure \cite{Lepski,Lepskietal}. \end{remark} \begin{proof} From Assumption \ref{AssMean} we infer for $\ell>k$ \[ \abs{\tilde\theta_\ell-\tilde\theta_k}\le \max_{k+1\le j\le \ell}\abs{\tilde\theta_{j\setminus(j-1)}-\tilde\theta_k}. \] We therefore obtain on the event $\{\hat k>k\}$ by construction \begin{align*} \abs{\hat\theta-\tilde\theta_k}&\le \max_{j=k+1,\ldots,\hat k} \abs{\tilde\theta_{j\setminus(j-1)}-\tilde\theta_k}\le \max_{j=k+1,\ldots,K-1} \big( z_k s_{jk}+ z_{j+1} s_{j+1} \big). \end{align*} \end{proof} \begin{example} For geometrically decreasing stochastic error levels $s_k$ in \eqref{EqStochErr}, in particular for the median filter from Example \ref{ex1} with bandwidths $h_k=h_0q^k$, we have $s_{jk}\lesssim s_k$ for $j>k$, where $A\lesssim B$ means $A={\cal O}(B)$ in the ${\cal O}$-notation. The late stopping error is of order $z_k^rs_k^r$, provided the critical values $(z_k)$ are non-increasing. This will imply that the error due to stopping later than some optimal $k^\ast$ is increased by at most the order of $z_{k^\ast}^r$: \[ \E_\theta[\abs{\hat\theta-\theta}^r{\bf 1}(\hat k>k^\ast)] \lesssim \E_\theta[\abs{\tilde\theta_{k^\ast}-\theta}^r]+ z_{k^\ast}^rs_{k^\ast}^r \le (1+z_{k^\ast}^r)\E_\theta[\abs{\tilde\theta_{k^\ast}-\theta}^r].
\] \end{example} \subsection{Critical values and stopping early} As the preceding analysis shows, small critical values $(z_k)$ lead to small errors caused by stopping late. On the other hand, the $(z_k)$ should not be too small in order to control the error of stopping early. To this end, we shall require a condition on the critical values $(z_k)$ in the no-bias situation under $\PP_0$, that is for constant $g\equiv 0$. In fact, we face a multiple testing problem, but with an estimation-type loss function. For some confidence parameter $\alpha>0$ we select $z_k>0$, $k=0,\ldots,K-1$, such that the condition \begin{equation}\label{eqhypzkgen} \sum_{j=0}^{K-1}\E_0\Big[\abs{\tilde\theta_j}^r {\bf 1}\big(\exists \ell\le j:\; \abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_\ell}>z_\ell s_{j\ell}\big)\Big] \le\alpha s_K^r \end{equation} is satisfied. In order to obtain a unique prescription for each $z_k$ that equilibrates the errors for different stopping times of the algorithm, we can select the $(z_k)$ sequentially. We choose $z_0$ such that \[ \sum_{j=0}^{K-1}\E_0\Big[\abs{\tilde\theta_j}^r {\bf 1}\big(\abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_0}>z_0s_{j0}\big)\Big] \le\tfrac{\alpha}{K} s_K^r \] and then each $z_k$ for given $z_0,\ldots,z_{k-1}$ such that \begin{equation}\label{eqhypzk} \sum_{j=k}^{K-1}\E_0\Big[\abs{\tilde\theta_j}^r {\bf 1}\Big(\abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_k}>z_ks_{jk}\text{, }\forall \ell<k:\; \abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_\ell}\le z_\ell s_{j\ell}\Big)\Big] \le\tfrac{\alpha}{K} s_K^r. \end{equation} To determine the $(z_k)$ in practice, we simulate in Monte Carlo iterations the pure noise case $g\equiv0$ and calculate for each $k$ the error when the algorithm stops before the (theoretically optimal) index $K$ due to a rejected test involving $z_k$. The critical values are determined such that this error is a fraction of the oracle estimation error $s_K^r$. 
For this calibration step the original algorithm of Section \ref{SecAlgo} is taken, only modified by using $z_js_{kj}$ instead of $z_js_{kj}+z_{k+1}s_{k+1}$ in the testing parts. The selection rule for the critical values in Lepski's procedure is the focus in the work by \cit{SpokVial}. Their idea is to transfer properties from the no-bias situation to the general nonparametric specification by bounding the likelihood between the two observation models. This approach, the so-called small modeling bias condition, could be applied here as well and will give similar results. On a practical level, the difference is that \cit{SpokVial} enlarge the moment from $r$ to $2r$ in the calibration step, while we add the term $z_{k+1}s_{k+1}$ to the testing values $z_js_{kj}$ from the calibration. In the asymptotic analysis, however, the method by \cit{SpokVial} costs us some power in the logarithmic factor and we would thus not attain optimal rates over H\"older balls, cf. Section \ref{SecAsymp}. Moreover, for robustness reasons, we do not want to require higher moment bounds for the error variables and the likelihood. \begin{definition} Given the regression function $g$, introduce its variation on $U_k$ \[\V_k(g):=\sup_{y_1,y_2\in U_k}\abs{g(y_1)-g(y_2)}\] and consider the oracle-type index \[ k^\ast:=\min \{k=0,\ldots,K-1\,|\, \V_{k+1}(g)> z_{k+1}s_{k+1}\}\wedge K. \] \end{definition} This definition implies that for all $k\le k^\ast$ the maximal bias $V_k(g)$ of $\tilde\theta_k$ is less than its stochastic error level $s_k$ from \eqref{EqStochErr} times the critical value $z_k$. The next result, when specialised to $k=k^\ast$, means intuitively that the error due to stopping before $k^\ast$ can be bounded in terms of the stochastic error of $\tilde\theta_{k^\ast}$, involving the critical value $z_{k^\ast}$ as a factor. 
Let us also mention here that the rationale for the choice $z_K=1$ in the algorithm of Section \ref{SecAlgo} is to equilibrate maximal bias and stochastic error at step $k=K-1$. \begin{proposition}\label{PropEarlyrr} We have for any $k=0,\ldots,k^\ast$ \[ \E\big[\abs{\hat\theta-\tilde\theta_k}^r{\bf 1}(\hat k<k)\big]\le (3^{r-1}\vee 1)(z_k^r+1+\alpha)s_k^r. \] \end{proposition} \begin{proof} We shall write $\hat k(g)$, $\tilde\theta_k(g)$ etc. to indicate that $\hat k$, $\tilde\theta_k$ etc. depend on the underlying regression function $g$. We shall need the inequality \begin{equation}\label{eqEstg0} \abs{\tilde\theta_j(g)-\tilde\theta_k(g)}\le \abs{\tilde\theta_j(0)-\tilde\theta_k(0)}+V_k(g)\text { for } j<k \end{equation} which follows from \begin{align*} \tilde\theta_j(g)-\tilde\theta_k(g) &=m(g(x_i)+\eps_i,\,x_i\in U_j)-m(g(x_i)+\eps_i,\,x_i\in U_k)\\ &\le m(\eps_i,\,x_i\in U_j)+\sup_{x\in U_j}g(x)-m(\eps_i,\,x_i\in U_k)-\inf_{x\in U_k}g(x)\\ &\le \tilde\theta_j(0)-\tilde\theta_k(0)+V_k(g) \end{align*} and by a symmetric argument for $\tilde\theta_k(g)-\tilde\theta_j(g)$. 
By definition of $k^\ast$ and using the condition on the $(z_k)$ as well as \eqref{eqEstg0} for $\tilde\theta_j$ and $\tilde\theta_{(j+1)\setminus j}$, we obtain for all $k\le k^\ast$ \begin{align*} &\E\big[\abs{\hat\theta(g)-\tilde\theta_k(g)}^r{\bf 1}(\hat k(g)<k)\big]\\ &=\sum_{j=0}^{k-1}\E\big[\abs{\tilde\theta_j(g)-\tilde\theta_k(g)}^r{\bf 1}(\hat k(g)=j)\big]\\ &\le \sum_{j=0}^{k-1}\E\big[(\V_k(g) + \abs{\tilde\theta_j(0)}+\abs{\tilde\theta_k(0)})^r{\bf 1}(\hat k(g)=j)\big]\\ &\le (3^{r-1}\vee 1)\Big(\V_k(g)^r+\E[\abs{\tilde\theta_k(0)}^r]+\\ &\quad+ \sum_{j=0}^{k-1}\E\big[\abs{\tilde\theta_j(0)}^r {\bf 1}\big(\exists \ell\le j:\:\abs{\tilde\theta_{(j+1)\setminus j}(g)-\tilde\theta_\ell(g)}> z_\ell s_{j\ell}+ z_{j+1} s_{j+1}\big)\big]\Big)\\ &\le (3^{r-1}\vee 1)\Big(z_k^r s_k^r+s_k^r+\\ &\quad +\sum_{j=0}^{k-1}\E\big[\abs{\tilde\theta_j(0)}^r {\bf 1}\big(\exists \ell\le j:\:\abs{\tilde\theta_{(j+1)\setminus j}(0)-\tilde\theta_\ell(0)}+V_{j+1}(g)> z_\ell s_{j\ell}+ z_{j+1} s_{j+1}\big)\big]\Big)\\ &\le (3^{r-1}\vee 1)\Big(z_k^rs_k^r+s_k^r+\alpha s_K^r\Big). \end{align*} The result follows from the isotonic decay of $(s_k)$. \end{proof} \begin{comment} \begin{proposition}\label{PropEarlyrr} We have for any $k=0,\ldots,k^\ast$ \[ \E\big[\abs{\hat\theta}^r{\bf 1}(\hat k<k)\big]\le (2^{r-1}\vee 1)(z_k^r+\alpha)s_k^r. \] \end{proposition} \begin{proof} We shall write $\hat k(f)$, $\tilde\theta_k(f)$ etc. to indicate that $\hat k$, $\tilde\theta_k$ etc. depend on the underlying regression function $f$. 
By definition of $k^\ast$ and using the condition on the $(z_k)$, we obtain for all $k\le k^\ast$ \begin{align*} &\E\big[\abs{\hat\theta(f)}^r{\bf 1}(\hat k(f)<k)\big]\\ &=\sum_{j=0}^{k-1}\E\big[\abs{\tilde\theta_j(f)}^r{\bf 1}(\hat k(f)=j)\big]\\ &\le \sum_{j=0}^{k-1}\E\big[(\V_k(f) + \abs{\tilde\theta_j(0)})^r{\bf 1}(\hat k(f)=j)\big]\\ &\le (2^{r-1}\vee 1)\Big(\V_k(f)^r\\ &\quad+ \sum_{j=0}^{k-1}\E\big[\abs{\tilde\theta_j(0)}^r {\bf 1}\big(\exists \ell\le j:\:\abs{\tilde\theta_{(j+1)\setminus j}(Y(f))-\tilde\theta_\ell(f)}> z_\ell s_{j\ell}+ z_{j+1} s_{j+1}\big)\big]\Big)\\ &\le (2^{r-1}\vee 1)\Big(z_k^rs_k^r+ \sum_{j=0}^{k-1}\sum_{\ell=0}^j\E\big[\abs{\tilde\theta_j(0)}^r {\bf 1}(\abs{\tilde\theta_{(j+1)\setminus j}(0)-\tilde\theta_\ell(0)}> z_\ell s_{j\ell})\big]\Big)\\ &\le (2^{r-1}\vee 1)\Big(z_k^rs_k^r+\alpha s_K^r\Big). \end{align*} The result follows from the isotonic decay of $(s_k)$. \end{proof} \end{comment} \subsection{Total risk bound} \begin{theorem}\label{thmrisk} Assume that $(z_ks_k)$ is non-increasing in $k$. Then under Assumption \ref{AssMean} the following excess risk estimate holds for all $k\le k^\ast$: \[ \E[\abs{\hat\theta-\tilde\theta_k}^r]\le (3^{r-1}\vee 1)\Big( (2z_k^r+1+\alpha)s_k^r + z_k^r\max_{j=k+1,\ldots,K-1}s_{jk}^r\Big). \] \end{theorem} \begin{proof} For the late-stopping error Proposition \ref{PropLateErr} and the decay of $(z_ks_k)$ give \[\abs{\hat\theta-\tilde\theta_k}^r{\bf 1}(\hat k>k)\le (2^{r-1}\vee 1) \max_{j>k}(z_k^r s_{jk}^r+ z_{j+1}^r s_{j+1}^r)\le (2^{r-1}\vee 1) z_k^r \big(s_k^r+\max_{j>k}s_{jk}^r\big). \] Add the early-stopping error from Proposition \ref{PropEarlyrr}. \end{proof} \begin{comment} \begin{theorem}\label{thmrisk} Assume that $z_ks_k$ is non-increasing in $k$. Then the following risk estimate holds: \begin{align*} \E[\abs{\hat\theta}^r]&\le (2^{r-1}\vee 1)\min_{k\le k^\ast} \Big( (2^r\vee 2)z_k^r+1+\alpha)s_k^r + z_k^r(2^{r-1}\vee 1)\max_{j=k+1,\ldots,K-1}s_{jk}^r\Big). 
\end{align*} \end{theorem} \begin{proof} For the late-stopping error Proposition \ref{PropLateErr} and the assumption give \[\abs{\hat\theta-\tilde\theta_k}^r{\bf 1}(\hat k>k)\le (2^{r-1}\vee 1) \max_{j>k}(z_k^r s_{jk}^r+ z_{j+1}^r s_{j+1}^r)\le (2^{r-1}\vee 1) z_k^r \big(s_k^r+\max_{j>k}s_{jk}^r\big). \] Then use $\abs{\hat\theta}^r\le(2^{r-1}\vee 1)(\abs{\hat\theta-\tilde\theta_k}^r+\abs{\tilde\theta_k}^r)$ to obtain \[ \E\big[\abs{\hat\theta}^r{\bf 1}(\hat k>k)\big]\le (2^{r-1}\vee 1)^2 z_k^r \big(s_k^r+\max_{j>k}s_{jk}^r\big)+(2^{r-1}\vee 1)s_k^r. \] Finally, add the early-stopping error from Proposition \ref{PropEarlyrr}. \end{proof} \end{comment} \begin{example}[continued] For geometrically increasing bandwidths $(h_k)$ we obtain $s_{jk}\lesssim s_k$ for $j>k$ and thus \[\E[\abs{\hat\theta-\tilde\theta_{k^\ast}}^r]\lesssim (\alpha+z_{k^\ast}^r)s_{k^\ast}^r. \] The factor $\alpha+z_{k^\ast}^r$ is the term we pay for adaptation. \end{example} \section{Asymptotic risk}\label{SecAsymp} \subsection{General result} We shall derive convergence rates for $n\to\infty$ of the critical values $(z_k)$. All quantities in the procedure may depend on $n$, but we still write $U_k$, $K$ and $z_k$ instead of $U_k(n)$, $K(n)$, $z_k(n)$. The notation $A\lesssim B$ will always mean $A(n)\le cB(n)$ with some $c>0$ independent of $n$ and $A\thicksim B$ is short for $A\lesssim B$ and $B\lesssim A$. We work under the following assumption whose validity under mild conditions will be derived in the next subsection. \begin{assumption}\label{AssAsymp}\mbox{} \begin{enumerate} \item The cardinalities $N_k$ of the neighbourhoods $U_k$ grow with geometric order: \[ q_1N_k\le N_{k+1}\le q_2N_k\quad\text{ for all $k=0,\ldots K-1$} \] for some fixed $q_2\ge q_1>1$ and with $N_1/\log(N_K)\to\infty$, $N_K\thicksim n$ as $n\to\infty$. 
\item For all sufficiently large $N$ we have \[ \E[\abs{m(\eps_i,\,i=1,\ldots,N)}^{r}]^{1/r}\thicksim \E[\abs{m(\eps_i,\,i=1,\ldots,N)}^{2r}]^{1/2r}\thicksim N^{-1/2}. \] \item For all $\tau_N\to\infty$ with $\tau_N N^{-1/2}\to 0$ a moderate deviations bound applies: there is some $c>0$ such that \[ \limsup_{N\to\infty}e^{c\tau_N^2}\PP\big(N^{1/2}\abs{m(\eps_i,\,i=1,\ldots,N)}>\tau_N\big)<\infty. \] \end{enumerate} \end{assumption} The following asymptotic bounds follow directly from the definitions: \begin{lemma}\label{Lemskj} Assumption \ref{AssAsymp}(b) implies $s_j\thicksim N_j^{-1/2}$ and $N_j^{-1/2}\wedge(N_{k+1}-N_k)^{-1/2}\lesssim s_{kj}\lesssim N_j^{-1/2}\vee(N_{k+1}-N_k)^{-1/2}$. Assumption \ref{AssAsymp}(a) then yields for $k\ge j$ \[ s_j\thicksim s_{kj}\thicksim N_j^{-1/2}. \] \end{lemma} Under Assumption \ref{AssAsymp} critical values of the same order as in the Gaussian case suffice. \begin{proposition}\label{propasympzk} Grant Assumption \ref{AssAsymp} and suppose $\alpha\in (0,1)$. We can choose \[z_k^2=\zeta\big(2r\log(s_k/s_K)+\log(\alpha^{-1})+\log(K)\big),\quad k=0,\ldots,K-1, \] with $\zeta>0$ a sufficiently large constant in order to satisfy Condition \eqref{eqhypzk}. For $K\thicksim \log n$ this yields asymptotically $z_k\thicksim\sqrt{\log n}$. \end{proposition} Note that the chosen critical values $z_k$ are decreasing in $k$, which has the desirable effect that we do not permit stopping at an early stage with the same probability as stopping at higher indices $k$. Moreover, this guarantees that $z_ks_k$ is non-increasing in $k$, the hypothesis in Theorem \ref{thmrisk}. From Theorem \ref{thmrisk} we therefore obtain the following asymptotic risk bound. \begin{corollary}\label{corasymprisk} Grant Assumptions \ref{AssMean} and \ref{AssAsymp} and let $K\thicksim \log n$. 
Choosing the critical values as in Proposition \ref{propasympzk} gives \[ \E[\abs{\hat\theta-\theta}^r]\lesssim (\log n)^{r/2}\E[\abs{\tilde\theta_{k^\ast}-\theta}^r].\] \end{corollary} \begin{example}[continued] Let us specify to $s$-H\"older continuous $g:[0,1]\to\R$, equidistant design and kernel estimators with geometrically increasing bandwidths $h_k=h_0q^k$, $K\thicksim\log(n)$. Then we can choose $z_k\thicksim \sqrt{\log(n)}$ and the index $k^\ast$ satisfies $\V_{k^\ast}(g)^2\thicksim h_{k^\ast}^{2s}\thicksim (nh_{k^\ast})^{-1}\log(n)$, that is $h_{k^\ast}\thicksim (\log(n)/n)^{1/(2s+1)}$ and $z_{k^\ast}s_{k^\ast}\thicksim (\log(n)/n)^{s/(2s+1)}$. This is the classical minimax rate for pointwise adaptive estimation in the one-dimensional $s$-H\"older continuous case, see \cit{Lepskietal} for the Gaussian case. Here, we have derived the same rate for pointwise adaptive $M$-estimation under very weak conditions on the error distribution, compare the discussion on specific models below. Let us also mention that \cit{Truong} obtains the same rate result, but without the logarithmic factor, for the non-adaptive median regression case. \end{example} \begin{proof}[Proof of Proposition \ref{propasympzk}] Let $j\ge k$. For $n$ sufficiently large Assumption \ref{AssAsymp}(c) together with the asymptotics $z_ks_{jk}\lesssim (\log(N_K)N_k^{-1})^{1/2}\to 0$ (using Assumption \ref{AssAsymp}(a,b) and Lemma \ref{Lemskj}) yields \begin{align*} &\PP_0(\abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_k}>z_k s_{jk})\\ &\le \PP_0(\abs{\tilde\theta_{(j+1)\setminus j}}>z_k s_{jk}/2)+\PP_0(\abs{\tilde\theta_k}>z_k s_{jk}/2)\\ &\lesssim \exp(-c z_k^2s_{jk}^2(N_{j+1}-N_j)/4)+\exp(-c z_k^2s_{jk}^2N_k/4). \end{align*} By Lemma \ref{Lemskj} there is another constant $c'>0$ such that for large $z_k$ \[ \PP_0(\abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_k}>z_k s_{jk})\lesssim \exp(-c'z_k^2).
\] Our choice of $z_k$ with $\zeta$ sufficiently large guarantees $\exp(-c'z_k^2/2)=o(\alpha(s_K/s_k)^r K^{-2})$ for large $K$. We therefore more than satisfy \eqref{eqhypzkgen} and the construction in \eqref{eqhypzk} provided $n$ is sufficiently large: \begin{align*} \sum_{j=k}^{K-1} & \E_0\big[\abs{\tilde\theta_j}^r {\bf 1}(\abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_k}>z_ks_{jk})\big]\\ &\le \sum_{j=k}^{K-1}\E_0\big[\abs{\tilde\theta_j}^{2r}]^{1/2} \PP_0(\abs{\tilde\theta_{(j+1)\setminus j}-\tilde\theta_k}>z_ks_{jk})^{1/2}\\ &\lesssim \sum_{j=k}^{K-1}s_j^r\exp(-c'z_k^2/2)\\ &=o\big( (K-k)s_k^r\alpha(s_K/s_k)^{r}K^{-2}\big)\\ &= o\Big(\frac{\alpha s_K^r}{K}\Big). \end{align*} For $K\thicksim \log N$ we obtain $\log(N_K/N_k)\le (K-k)\log q_2\lesssim \log N$ and thus $z_k^2\thicksim\log n$. \end{proof} \subsection{Specific models} The preceding asymptotic analysis was based on Assumption \ref{AssAsymp} where part (a) can be ensured by construction whereas parts (b) and (c) depend on the noise model and the choice of M-estimator. The most severe restriction will usually be the moderate deviation property of Assumption \ref{AssAsymp}(c). In the case where the law of the error variable $\eps_i$ is absolutely continuous, this property holds by Corollary 2.1 in \cit{Arcones} under the following conditions: \begin{enumerate} \item $\E[\rho(\eps_i+h)-\rho(\eps_i)]=Vh^2+o(h^2)$ for some $V>0$ and $\abs{h}\to 0$; \item $\rho$ is Lebesgue-almost everywhere differentiable with derivative $\rho'$; \item there are $\lambda,\delta>0$ such that $\E[\exp(\lambda\abs{\rho'(\eps_i)})]$ and $\E[\exp(\lambda\sup_{\abs{h}\le\delta}\abs{\rho(\eps_i+h)-\rho(\eps_i)-h\rho'(\eps_i)}/h)]$ are finite. \end{enumerate} For mean regression $\rho(x)=x^2$ we have $V=1$ and $\rho'(\eps_i)=2\eps_i$ such that a finite exponential moment for $\eps_i$ is required. 
For median regression the result applies with $V=f_\eps(0)$ and $\rho'(\eps_i)=\sgn(\eps_i)$, and because of $\abs{\abs{\eps_i+h}-\abs{\eps_i}-h\sgn(\eps_i)}\le 2\abs{h}$ no moment bound is required. The same is true for any robust statistic with bounded influence function, in particular for the Huber estimator and general quantile estimators. \cit{Arcones} discusses that an exponential tail estimate for $\rho'(\eps_i)$ is also necessary to obtain a moderate deviation bound, which might be a serious drawback when using Lepski's method with linear non-robust estimators. For the median the requirements are not difficult to verify directly. Assumption \ref{AssAsymp}(b) is for example established by \cit{ChuHot}, who show that for $f_\eps$ continuously differentiable around zero, $f_\eps(0)>0$, $r\in\N$ and $Z\sim N(0,1)$: \[ \lim_{N\to\infty}N^r\E[\med(\eps_1,\ldots,\eps_N)^{2r}]=(2f_\eps(0))^{-2r}\E[Z^{2r}]. \] Using a coupling result, we can establish Assumption \ref{AssAsymp}(b,c) under even more general conditions; see Section \ref{SecApp2} for a proof: \begin{proposition}\label{PropMoments} Assume that the $\eps_i$ have a Lebesgue density $f_\eps$ which is Lipschitz continuous at zero and satisfies $\int_{-\infty}^0 f_\eps(x)\,dx=1/2$, $f_\eps(0)>0$ and $\E[\abs{\eps_i}^r]<\infty$.
Writing $\med(\eps):=\med(\eps_1,\ldots,\eps_N)$ for odd $N$, we have \[ \forall N\ge 5:\;\E[\abs{\med(\eps)}^r]\thicksim N^{-r/2}\text{ and }\E[\abs{\med(\eps)}^{2r}]\thicksim N^{-r} \] as well as for $\tau_N\to\infty$ with $\tau_N=o(N^{1/2})$ \[ \limsup_{N\to\infty}\PP\big(2N^{1/2}f_\eps(0)\abs{\med(\eps)}>\tau_N\big)\exp(\tau_N^2/8)\le 2.\] \end{proposition} \section{Simulation results}\label{SecSimul} \begin{figure}[t] \includegraphics[width=6.5cm]{RobustLepskiSim1aa.png} \includegraphics[width=6.5cm]{RobustLepskiBoxplot1a.png} \caption{Example 1 with Laplace noise: a typical realisation and a box plot of the sample errors in 1000 Monte Carlo runs.} \end{figure} We illustrate our procedure by an implementation for median regression on ${\cal X}=[-1,1]$ and the estimation of the regression function at $x=0$. We simulate $n=200$ equidistant observations $(Y_i)$ with standardized errors $(\eps_i)$ ($\E[\eps_i]=0$, $\Var(\eps_i)=1$) that are (a) Laplace, (b) normal and (c) Student t-distributed with three degrees of freedom. In each case the location is estimated by local sample means as well as by local sample medians. As neighbourhoods we take symmetric intervals $U_k$ around zero containing $\floor{5^k/4^{k-1}}$ data points. This gives $K=17$ different base estimators. The calibration of the procedure is performed for Laplace distributed errors with $r=2$ and $\alpha=1$. The variances $s_j$, $s_{jk}$ of the sample means are calculated exactly and those of the sample medians are approximated by their asymptotic values (which are quite close to Monte Carlo values). The critical values $(z_k)$ are chosen according to the prescription in \eqref{eqhypzkgen}. This is achieved in both cases, mean and median estimators, by using the choice in Proposition \ref{propasympzk} with values $\zeta$ that are calibrated by 10000 Monte Carlo runs for the pure noise situation.
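For concreteness, the nested-window construction and a Lepski-type stopping rule for local medians can be sketched in a few lines of Python. This is only a schematic illustration on a toy change-point signal (the first example considered below), not the calibrated implementation: the constant critical value \texttt{z} and the plug-in formula for the standard deviation \texttt{s} of a local median are stand-ins for the Monte Carlo calibrated $z_k$ and the exact $s_{jk}$, and the simplified test compares each newly added ring of observations only with the current estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# n equidistant design points on [-1, 1]; estimate g at x = 0.
n = 200
x = np.linspace(-1.0, 1.0, n)
g = np.where(np.abs(x) <= 0.2, 0.0, 2.0)             # change-point signal
eps = rng.laplace(scale=1.0 / np.sqrt(2.0), size=n)  # standardized Laplace noise
y = g + eps

# Nested symmetric windows U_0 < U_1 < ... around x = 0 containing
# floor(5^k / 4^(k-1)) points, capped at n, as described in the text.
sizes = []
for k in range(1, 30):
    m = int(5**k / 4 ** (k - 1))
    if m > n:
        break
    sizes.append(m)

order = np.argsort(np.abs(x))            # design points sorted by distance to 0
windows = [order[:m] for m in sizes]

# Base estimators: local medians over the nested windows.
theta = [float(np.median(y[w])) for w in windows]

# Schematic stopping rule: enlarge the window while the median over the
# newly added ring stays close to the current estimate.  z is a placeholder
# critical value (the paper calibrates z_k by Monte Carlo); s approximates
# the std dev of a median of m points by 1 / (2 f(0) sqrt(m)).
f0 = np.sqrt(2.0) / 2.0                  # Laplace(1/sqrt(2)) density at zero
z = 3.0                                  # hypothetical constant critical value
k_hat = len(windows) - 1
for j in range(len(windows) - 1):
    ring = np.setdiff1d(windows[j + 1], windows[j])
    s = 1.0 / (2.0 * f0 * np.sqrt(len(ring)))
    if abs(np.median(y[ring]) - theta[j]) > z * s:
        k_hat = j
        break

theta_hat = theta[k_hat]                 # adaptive local median estimate of g(0)
```

With these window sizes the construction indeed produces $K=17$ base estimators for $n=200$, and the selected window typically stops before running over the jump at $\abs{x}=0.2$.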
It turned out that this gives almost equally sized error contributions for the different values $z_k$, as postulated in \eqref{eqhypzk}. The same calibration principle was applied for the original Lepski procedure with mean and median estimators. \begin{figure}[t] \includegraphics[width=6.5cm]{RobustLepskiBoxplot1b.png} \includegraphics[width=6.5cm]{RobustLepskiBoxplot1c.png} \caption{Box plot of the sample errors in 1000 Monte Carlo runs for Gaussian (left) and Student t(3) noise (right).} \end{figure} As a first example we take a simple change point problem by considering the regression function $g(x)=0$ for $\abs{x}\le 0.2$ and $g(x)=2$ for $\abs{x}>0.2$, which can be considered as a toy model for edge detection in image restoration or for structural breaks in econometrics. In Figure 1 we show a typical data set in the Laplace case (a) together with box plots for the absolute error of the different methods in 1000 Monte Carlo repetitions: local means with Lepski's and with our method, local medians with Lepski's and with our method, and the oracle method, which is just the sample median over $[-0.2,0.2]=\{x:\,g(x)=0\}$. For exactly the same methods, in particular still calibrated to Laplace errors, Figure 2 presents the results for Gaussian and heavy-tailed Student t(3) errors. It is obvious that in all cases Lepski's method applied to sample medians as base estimators works quite badly. This is due to the fact that this method stops far too late: the sample median over the complete intervals $U_k$ does not really `notice' the jump in the data. In fact, in the Laplace simulation study the oracle $k=10$ is selected by this method in less than $1\%$ of the cases, while most often ($65\%$) the selection is $k=12$, which yields the $1.5$ times larger window $U_{12}=[-0.29,0.29]$. The methods using the sample mean estimators perform reasonably well, and both perform very similarly.
Still, they are clearly beaten by our median-based procedure in cases (a) and (c), where the median is the more efficient location estimator. It is remarkable that we nearly achieve the risk of the oracle median estimator. Even in the Gaussian case (b) the linear procedures have only minor advantages. Finally, we notice the robustness property that the calibration with the wrong error distribution in Figure 2 does not seriously affect the results. \begin{figure}[t] \includegraphics[width=6.5cm]{RobustLepskiSim2aa.png} \includegraphics[width=6.5cm]{RobustLepskiBoxplot2a.png} \caption{Example 2 with Laplace noise: a typical realisation and a box plot of the sample errors in 1000 Monte Carlo runs.} \end{figure} In a second example we consider the smooth regression function $g(x)=2x(x+1)$. Because we are estimating locally around $x=0$, this is a caricature of a $C^2$-function with $g'(0)=2$ and $g''(0)=4$. Figure 3 shows again a typical data set and box plots for the different methods in 1000 Monte Carlo runs under Laplace errors. This time the oracle choice is the window $[-0.39,0.39]$. Our median-based procedure outperforms the others, where the advantage over the mean-based approaches is again mainly due to the relative efficiency gain of size $1/\sqrt{2}$ of the base estimators in the Laplace model. This gain, though, is not at all visible when using Lepski's method for selecting among the sample medians. The results for the error distributions (b) and (c) resemble those of the first example; we confine ourselves to summarizing the numerical results for all examples in the following table, each time stating the Monte Carlo median of the absolute error: {\centering \begin{tabular}{|l|r|r|r|r|r|}\hline Ex.
& Mean Lepski & Mean RR & Median Lepski & Median RR & Median Oracle\\\hline\hline 1a & 0.1446 & 0.1450 & 0.2871 & 0.0897 & 0.0763\\\hline 1b & 0.1640 & 0.1630 & 0.2795 & 0.1647 & 0.1325\\\hline 1c & 0.0982 & 0.0978 & 0.3012 & 0.0596 & 0.0560\\\hline 2a & 0.1846 & 0.1924 & 0.3051 & 0.1246 & 0.1005\\\hline 2b & 0.1808 & 0.1886 & 0.3430 & 0.1586 & 0.1241\\\hline 2c & 0.2102 & 0.2126 & 0.2455 & 0.1047 & 0.0822\\\hline \end{tabular}\\ } \bigskip Further simulation experiments confirm this picture. Especially for lower values of the moment $r$ our median-based procedure is very efficient, while sometimes for $r=2$ the mean-based procedures profit from less severe outliers in the Monte Carlo runs. In all these experiments the location is described equally well by mean and median, and we mainly see the efficiency gain of the sample median for non-Gaussian noise. For general quantile regression, however, linear methods do not apply and the standard Lepski procedures based on the nonlinear base estimators will perform badly. Our approach gives significantly better results; the error reductions by a factor of two and more, achieved by the median procedures above, confirm this very clearly.
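The efficiency gain of the sample median under Laplace noise that drives these comparisons follows from the asymptotic variances $1/m$ for the mean versus $1/(4f_\eps(0)^2m)=1/(2m)$ for the median of $m$ standardized Laplace observations, i.e.\ a factor of about $1/2$ in quadratic risk ($1/\sqrt 2$ in root-mean-squared error). A quick Monte Carlo sanity check (sample size, seed and repetition count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Location estimation under standardized Laplace noise (mean 0, variance 1):
# compare the Monte Carlo quadratic risk of the sample mean and the sample
# median over many repetitions.
m, reps = 101, 5000
samples = rng.laplace(scale=1.0 / np.sqrt(2.0), size=(reps, m))

mse_mean = float(np.mean(np.mean(samples, axis=1) ** 2))      # about 1/m
mse_median = float(np.mean(np.median(samples, axis=1) ** 2))  # about 1/(2m)

ratio = mse_median / mse_mean  # close to 0.5
```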
\section{Application}\label{SecImage} \begin{figure}[t] \includegraphics[width=6.5cm]{original-fullsize-C65.png} \includegraphics[width=6.5cm]{denoised-fullsize-C65.png} \caption{CT scan of the upper abdomen, original and result of denoising} \end{figure} The proposed procedure is applied to denoise images used in the surveillance of cancer therapies. In Dynamic Contrast Enhanced Computer Tomography (DCE-CT) a contrast agent is injected in the human body and its diffusion over time is observed, which is specific for different kinds of cell tissues and thus allows the therapy to be monitored. For medical reasons the dose of contrast agent is kept small, which leads to a poor signal-to-noise ratio. An analysis of residuals shows that the observational noise is well modeled by the Laplace distribution. Moreover, sometimes human movements produce significant outliers. Therefore local median estimation is employed. Especially for dynamical image sequences, the denoising is remarkably successful when the same spatial neighbourhoods are used over the whole observation period. This means that at each voxel location $x_i$ a vector-valued intensity function $g:{\cal X}\to\R^K$ is observed under vector-valued noise $\eps_i$. The vector $g(x_i)$ encodes the intensity at time points $(t_1,\dots,t_K)$ recorded at spatial location $x_i$. Our previously developed procedure applies directly to this situation; we just need a testing procedure between vector-valued local M-estimators.
Details of the experimental setup and the estimation procedure are discussed in \cit{RRC} and we merely give a rough description of the setting. A multiresolution test procedure is applied to compare different vector estimates. In a first pre-selection step we disregard, for each voxel $x_i$, those voxels that are significantly different from $x_i$, and then construct circular neighbourhoods around $x_i$ consisting only of non-rejected voxels. This allows geometrically richer neighbourhood structures that in practice adapt well to the underlying image structure. Mathematically, the analysis of the algorithm remains the same when conditioning on the result of this first pre-selection. \begin{figure}[t] \includegraphics[width=6.5cm]{residuals-fullsize-C65.png} \includegraphics[width=6.5cm]{neighborhoods-C65.png} \caption{CT scan of the upper abdomen, residuals and zoom into the denoised image with the neighbourhood construction around one voxel} \end{figure} For the present example we have at our disposal a DCE-CT sequence of $K=53$ recordings of $512\times512$-pixel images of the upper abdomen of a cancer patient. In Figure 4 the original image at time step 23 is depicted together with the result of our denoising procedure. The noise reduction is remarkable, while fine structures like edges are well preserved and not smoothed out. The residuals in Figure 5 (left) show some artefacts due to human body movements and radial CT artefacts, which our procedure removes as well. In Figure 5 (right) a zoom into Figure 4 (right) is shown together with the sequence of neighbourhoods constructed for one voxel inside the cancerous tissue. The effect of the pre-selection step is clearly visible in the geometrically adaptive form of the neighbourhoods. Further results, in particular the denoised dynamics in certain voxels and an application to automatic clustering of cell tissues, are reported in \cit{RRC}.
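A toy version of the vector-valued pre-selection and median step may be sketched as follows; all sizes, the intensity curve and the rejection threshold are invented for illustration and do not reproduce the actual multiresolution test of \cit{RRC}:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each voxel carries a time course of length T.  Voxels whose course differs
# too strongly from the centre voxel are discarded (pre-selection), then the
# componentwise median over the remaining neighbours is taken.
T = 53                                   # number of recordings, as in the data
centre = np.sin(np.linspace(0, 3, T))    # hypothetical true intensity curve
neigh = centre + rng.laplace(scale=0.2, size=(24, T))  # similar tissue voxels
outlier = centre + 5.0                   # voxel from a different tissue type
courses = np.vstack([neigh, outlier])

# Pre-selection: keep neighbours whose mean absolute deviation from the
# (noisy) centre course stays below an illustrative threshold.
centre_obs = centre + rng.laplace(scale=0.2, size=T)
keep = np.mean(np.abs(courses - centre_obs), axis=1) < 1.0

# Componentwise median over the accepted neighbourhood denoises the curve.
denoised = np.median(courses[keep], axis=0)
```

The differing voxel is rejected by the pre-selection, so the componentwise median is computed over voxels of the same tissue type only, which mirrors the geometrically adaptive neighbourhoods described above.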
The generality of our procedure has the potential to provide statistical solutions in many further applications where spatial inhomogeneity and robustness are key issues. \section{Appendix}\label{SecApp} \subsection{Proof of Proposition \ref{PropTest}}\label{SecApp1} The asymptotic normality of the sample median $\sqrt{n}\med(Y_1,\ldots,Y_n)\Rightarrow N(0,1/(4f^2(0)))$ is well known \cite[Corollary 21.5]{vanderVaart} and implies by independence the first asymptotic result. Since the sample medians in the second case are not independent, we consider their joint distribution using empirical processes. Let us write $F_\Delta$ for the cumulative distribution function of $f(\cdot-\Delta)$ and denote by $B^1,B^2$ two independent standard Brownian bridges. Then empirical process theory yields by independence \[\sqrt{n}\Big(\frac1n\sum_{i=1}^n{\bf 1}([Y_i,\infty))-F,\frac1n\sum_{i=n+1}^{2n}{\bf 1}([Y_i,\infty))-F_\Delta\Big) \Rightarrow (B^1\circ F,B^2\circ F_\Delta). \] The joint median $\med(Y_i,\,i=1,\ldots,2n)$ satisfies in terms of the empirical distribution functions $F^n$ and $F^n_\Delta$ of the two samples \[ F^n(\med(Y_i,\,i=1,\ldots,2n))+F_\Delta^n(\med(Y_i,\,i=1,\ldots,2n))=1.\] Hence, it can be expressed as the functional $(F^n+F_\Delta^n)^{-1}(1)$ of $(F^n,F_\Delta^n)$, assuming that the inverse is defined properly (e.g. giving the mean of all admissible values). Combining two-dimensional versions of Theorem 20.8 and Lemma 21.4 of \cit{vanderVaart}, we infer \begin{align*} &\sqrt{n}\Big(\med(Y_i,\,i=1,\ldots,n),\med(Y_i,\,i=1,\ldots,2n)-\Delta/2\Big)\\ &\quad\Rightarrow \Big(-(B^1\circ F/f)\circ F^{-1}(1/2), -(B^1\circ F+B^2\circ F_\Delta)/(f+f(\cdot-\Delta))\circ (F+F_\Delta)^{-1}(1)\Big). \end{align*} By symmetry of $f$ the right-hand side simplifies to \[\Big(-B^1(1/2)/f(0), -\big(B^1(F(\Delta/2))+B^2(F_\Delta(\Delta/2))\big)/(2f(\Delta/2))\Big).
\] Consequently, $\sqrt{n}(2(\med(Y_i,\,i=1,\ldots,2n)-\med(Y_i,\,i=1,\ldots,n))-\Delta)$ is asymptotically normal with mean zero and variance \begin{align*} \sigma_L^2&=4\E\Big[\Big(-(B^1(F(\Delta/2))+B^2(F(-\Delta/2)))/(2f(\Delta/2))+B^1(1/2)/f(0)\Big)^2\Big]\\ &= 4\Big(\frac{F(-\Delta/2)(1-F(-\Delta/2))}{4f^2(\Delta/2)}+\frac{F(\Delta/2)(1-F(\Delta/2))}{4f^2(\Delta/2)} +\frac{1}{4f^2(0)}-\frac{1-F(\Delta/2)}{2f(0)f(\Delta/2)}\Big)\\ &= \frac{2F(\Delta/2)(1-F(\Delta/2))}{f^2(\Delta/2)}+\frac{1}{f^2(0)}-\frac{2(1-F(\Delta/2))}{f(0)f(\Delta/2)}. \end{align*} While $\sigma_L^2=\sigma_W^2$ for $\Delta=0$ is straightforward, we rewrite $\sigma_L^2$ in terms of $R=F/f$ to study the behaviour as $\Delta\to 0$: \[ \sigma_L^2=2R(\Delta/2)R(-\Delta/2)+4R^2(0)-4R(0)R(-\Delta/2).\] Because of $R(\Delta/2)-R(-\Delta/2)=\Delta+O(\Delta^2)$ by the Lipschitz property of $f$ and because of $2R(0)=1/f(0)$, we obtain asymptotically \[ \sigma_L^2=\sigma_W^2+2\Big(\big(R(-\Delta/2)-R(0)\big)^2+ R(-\Delta/2)\big(R(\Delta/2)-R(-\Delta/2)\big)\Big)=\sigma_W^2+\Delta/f(0)+O(\Delta^2). \] This gives $\sigma_L^2=\sigma_W^2(1+2\Delta f(0)+O(\Delta^2f(0)))$.\hfill\qed\\ \subsection{Proof of Proposition \ref{PropMoments}}\label{SecApp2} We shall only consider the case of odd $N=2m+1$. Under the conditions of the proposition, \cit{Brownetal} show the following result. \begin{theorem}\label{ThmCoupling} For all $m\ge 0$ the sample $\eps_1,\ldots,\eps_{2m+1}$ can be realised on the same probability space as a standard normal random variable $Z$ such that $\med(\eps):=\med(\eps_i,\,i=1,\ldots,2m+1)$ satisfies \[ \babs{\med(\eps)-\frac{Z}{\sqrt{4(2m+1)}f_\eps(0)}}\le \frac{C}{2m+1}\Big(1+Z^2\Big) \text{ if } \abs{Z}\le\delta\sqrt{2m+1}, \] where $\delta,C>0$ are constants depending on $f_\eps$, but independent of $m$.
\end{theorem} The construction and the inequality of the theorem yield with some constant $C'>0$ \begin{align*} \E[&\abs{\med(\eps)}^{2r}{\bf 1}(\abs{Z}\le\delta\sqrt{2m+1})]\\ &\le (2^{2r-1}\vee 1)\E\Big[\big(4(2m+1)f_\eps(0)^2\big)^{-r}\abs{Z}^{2r}+ \tfrac{C^{2r}}{(2m+1)^{2r}}\Big(1+Z^2\Big)^{2r}\Big]\\ &\le C'(2m+1)^{-r}. \end{align*} On the other hand, because of $\eps_i\in L^r$ we have for $z\to\infty$ that the cdf satisfies $F_\eps(-z)\lesssim \abs{z}^{-r}$ and $1-F_\eps(z)\lesssim \abs{z}^{-r}$. From the formula for the density of $\med(\eps)$ \[ f_m(z)= \binom{2m+1}{m+1}(m+1)f_\eps(z)F_\eps(z)^{m}(1-F_\eps(z))^{m} \] we therefore infer that $\norm{\med(\eps)}_{L^{3r}}$ for $m\ge 2$ is finite and uniformly bounded. Hence, the H\"older inequality gives \[ \E[\abs{\med(\eps)}^{2r}{\bf 1}(\abs{Z}>\delta\sqrt{2m+1})]\le \E[\abs{\med(\eps)}^{3r}]^{2/3}\PP(\abs{Z}>\delta\sqrt{2m+1})^{1/3}, \] which by Gaussian tail estimates is of order $\exp(-\delta^2(2m+1)/6)$ and thus for $m\to\infty$ asymptotically negligible. This gives the upper moment bound for $\med(\eps)$; the lower bound follows symmetrically. The $r$-th moment is bounded by even simpler arguments. The second assertion follows via quantile coupling from \begin{align*} &\PP\big(\sqrt{4(2m+1)}f_\eps(0)\abs{\med(\eps)}>\tau_m\big)\\ &\le \PP\big(\abs{Z}+\tfrac{C}{\sqrt{2m+1}}(1+Z^2)>\tau_m\big)+\PP\big(\abs{Z}>\delta\sqrt{2m+1}\big)\\ &\le \PP(2\abs{Z}>\tau_m)+\PP(\abs{Z}>\delta\sqrt{2m+1})\\ &\le 2\exp(-\tau_m^2/8). \end{align*} \mbox{}\hfill\text{\qed}\\ \bibliographystyle{dcu} \thebibliography{99} \harvarditem{Arcones}{2002}{Arcones} {\sc Arcones, M. A.} (2002). Moderate deviations for $M$-estimators. {\sl Test} {\bf 11}(2), 465--500. \harvarditem{Arias-Castro and Donoho}{2006}{AriasDonoho} {\sc Arias-Castro, E.} and {\sc D. Donoho} (2006). Does median filtering truly preserve edges better than linear filtering? {\sl Preprint}, arXiv {\tt math/0612422v1}.
\harvarditem{Brown {\it et al.}}{2008}{Brownetal} {\sc Brown, L. D., T. Cai} and {\sc H. Zhou} (2008). Robust nonparametric estimation via wavelet median regression. {\sl Annals of Statistics} {\bf 36}(5), 2055--2084. \harvarditem{Chu and Hotelling}{1955}{ChuHot} {\sc Chu, J. T.} and {\sc H. Hotelling} (1955). The moments of the sample median. {\sl Annals of Math. Statistics} {\bf 26}(4), 593--606. \harvarditem{Hall and Jones}{1990}{HallJones} {\sc Hall, P.} and {\sc M. C. Jones} (1990). Adaptive $M$-estimation in nonparametric regression. {\sl Annals of Statistics} {\bf 18}(4), 1712--1728. \harvarditem{Huber}{1964}{Huber} {\sc Huber, P. J.} (1964). Robust estimation of a location parameter. {\sl Annals of Mathematical Statistics} {\bf 35}(1), 73--101. \harvarditem{Katkovnik and Spokoiny}{2008}{KatSpok} {\sc Katkovnik, V.} and {\sc V. Spokoiny} (2008). Spatially adaptive estimation via fitted local likelihood techniques. {\sl IEEE Transactions on Signal Processing} {\bf 56}(3), 873--886. \harvarditem{Koenker}{2005}{Koenker} {\sc Koenker, R.} (2005). {\sl Quantile Regression}. Econometric Society Monographs 38, Cambridge University Press. \harvarditem{Lepski}{1990}{Lepski} {\sc Lepskii, O. V.} (1990). A problem of adaptive estimation in Gaussian white noise. {\sl Theory Probab. Appl.} {\bf 35}(3), 454--466. Translated from {\sl Teor. Veroyatnost. i Primenen.} {\bf 35}(3) (1990), 459--470. \harvarditem{Lepski {\it et al.}}{1997}{Lepskietal} {\sc Lepski, O., E. Mammen} and {\sc V. Spokoiny} (1997). Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. {\sl Annals of Statistics} {\bf 25}(3), 929--947. \harvarditem{Massart}{2007}{Massart} {\sc Massart, P.} (2007). Concentration inequalities and model selection. Ecole d'Et\'e de Probabilit\'es de Saint-Flour XXXIII -- 2003, {\sl Lecture Notes in Mathematics 1896}, Springer, Berlin.
\harvarditem{Polzehl and Spokoiny}{2003}{PolSpok03} {\sc Polzehl, J.} and {\sc V. Spokoiny} (2003). Image denoising: pointwise adaptive approach. {\sl Annals of Statistics} {\bf 31}, 30--57. \harvarditem{Portnoy}{1997}{Portnoy} {\sc Portnoy, S.} (1997). Local asymptotics for quantile smoothing splines. {\sl Annals of Statistics} {\bf 25}(1), 414--434. \harvarditem{Rozenholc {\it et al.}}{2009}{RRC} {\sc Rozenholc, Y., M. Rei\ss, D. Balvay} and {\sc C.-A. Cuenod} (2009). Growing time-homogeneous neighbourhoods for denoising and clustering dynamic contrast enhanced-CT sequences, {\sl Preprint Universit\'e Paris V}. \harvarditem{Spokoiny and Vial}{2009}{SpokVial} {\sc Spokoiny, V.} and {\sc C. Vial} (2009). Parameter tuning in pointwise adaptation using a propagation approach, {\sl Annals of Statistics}, to appear. \harvarditem{Truong}{1989}{Truong} {\sc Truong, Y. K.} (1989). Asymptotic properties of kernel estimators based on local medians, {\sl Annals of Statistics} {\bf 17}(2), 606--617. \harvarditem{Tsybakov}{2004}{Tsybakov} {\sc Tsybakov, A. B.} (2004). Optimal aggregation of classifiers in statistical learning. {\sl Annals of Statistics} {\bf 32}(1), 135--166. \harvarditem{van de Geer}{2003}{vandeGeer} {\sc van de Geer, S.} (2003). Adaptive quantile regression, in {\sl Recent Trends in Nonparametric Statistics} (Eds. M.G. Akritas and D.N. Politis), Elsevier Science, 235--250. \harvarditem{van der Vaart}{1998}{vanderVaart} {\sc van der Vaart, A.} (1998). {\sl Asymptotic Statistics}, Cambridge University Press. \end{document}
\section{Parametrization of lepton angular distribution} \label{sec:param} A probability density function (pdf) for a single event can be defined through the matrix element as~\cite{Alwall:2010cq} \begin{eqnarray} \rho(\mathbf{p}^{\text{vis}}|\lambda) = \dfrac{1}{\sigma_{\lambda}} \sum_{a,b} \int \d x_1 \d x_2 f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \int \d \Phi \dfrac{\d\hat{\sigma}}{\d \Phi} \prod_{i\in\text{vis}} \delta(\mathbf{p}_i-\mathbf{p}_i^{vis}), \label{eq:pdf} \end{eqnarray} where $\Phi$ represents the Lorentz invariant phase space, in our case, a four body version $\Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}})$ with $l=e,\mu$ and $\chi$ for the DM particle. $f_a(x,\mu_{\mathrm{F}})$ corresponds to the parton distribution function of parton $a$, with an energy fraction of $x$ and a factorization scale $\mu_{\mathrm{F}}$. $\lambda$ stands for a set of parameters of interest. The visible part of the phase space is determined through observables, while the invisible part is integrated over. The general cross section formula is written as: \begin{eqnarray} \sigma &=& \sum_{a,b} \int \d x_1 \d x_2 f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \int \d \Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}}) \dfrac{\d \hat{\sigma}}{\d \Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}})}. \end{eqnarray} For the same process, it follows that the $\rho(\mathbf{p}^{\text{vis}}|\lambda)$ is indeed a probability density function for the visible kinematics: \begin{eqnarray} \left(\prod_{i\in\text{vis}}\int\d^3p_i\right) \rho(\mathbf{p}^{\text{vis}}|\lambda) = 1. 
\end{eqnarray} To calculate the production of a Z boson in association with a DM mediator, we parametrize the four-momenta as follows: \begin{eqnarray} p_1^{\mu} =& x_1 \dfrac{\sqrt{s}}{2} (1,0,0,1)^T &= \dfrac{\sqrt{\hs}}{2} \sqrt{\dfrac{x_1}{x_2}} (1,0,0,1)^T , \\ \nonumber p_2^{\mu} =& x_2 \dfrac{\sqrt{s}}{2} (1,0,0,-1)^T &= \dfrac{\sqrt{\hs}}{2} \sqrt{\dfrac{x_2}{x_1}} (1,0,0,-1)^T , \\ \nonumber \py^{\mu} =& (\py^0, -\qt, 0, \py^3)^T &= \left( \dfrac{\sqrt{s}}{2} \xy \cosh \yy, -\qt, 0, \dfrac{\sqrt{s}}{2} \xy \sinh \yy \right)^T, \\ \nonumber \pz^{\mu} =& (\pz^0, \qt, 0, \pz^3)^T &= \left( \dfrac{\sqrt{s}}{2} \xz \cosh \yz, \qt, 0, \dfrac{\sqrt{s}}{2} \xz \sinh \yz \right)^T, \end{eqnarray} where $\xz=\dfrac{2\sqrt{\sz+\qt^2}}{\sqrt{s}}$ and $\xy=\dfrac{2\sqrt{\sy+\qt^2}}{\sqrt{s}}$. It is common to study the decay lepton angular distribution in the Collins-Soper (CS) frame~\cite{PhysRevD.16.2219}. The Collins-Soper frame, as shown in Fig.~\ref{fig:frames}, is a Z boson rest frame whose z-axis bisects the opening angle $\theta_{ab}$ between the beam momentum direction and the direction opposite to the target momentum. In this frame, the momenta of the two incoming partons become: \begin{eqnarray} \label{eq:kinCS} p^{CS}_1 &=& \dfrac{x_1}{2} \sqrt{\dfrac{s}{\sz}} \ee^{-\yz} ( \sqrt{\sz+\qt^2} , -\qt,0,\sqrt{\sz}), \\ \nonumber p^{CS}_2 &=& \dfrac{x_2}{2} \sqrt{\dfrac{s}{\sz}} \ee^{\yz} ( \sqrt{\sz+\qt^2} , -\qt,0,-\sqrt{\sz}), \end{eqnarray} where the $x_{1,2}$ and $\yz$ dependences have been factorized out. Being determined by these two momenta, the z-axis of this frame treats the two incoming partons on an equal footing, and $\tan\frac{\theta_{ab}}{2}=\frac{|\bf{\qt}|}{\sqrt{\sz}}$ is invariant under longitudinal boosts. This feature makes the frame suitable for studying effects at finite $|\bf{\qt}|$. To avoid possible dilutions from processes with swapped initial states, we perform a rotation of $\pi$ around the x-axis for events with $\yz<0$~\cite{Aad:2016izn,Khachatryan:2015paa}.
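As a quick numerical cross-check of Eq.~(\ref{eq:kinCS}), the short sketch below (Python with NumPy; the kinematic ranges are illustrative and not taken from this paper) verifies that both boosted parton momenta remain massless and that their invariant mass squared reproduces $\hs = x_1 x_2 s$ for random values of $x_{1,2}$, $\sz$, $\qt$ and $\yz$:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def minkowski_dot(p, q):
    return p @ eta @ q

s = 13000.0**2                           # collider energy squared (illustrative)
for _ in range(100):
    x1, x2 = rng.uniform(0.01, 1.0, 2)   # parton momentum fractions
    sz = rng.uniform(80.0, 120.0)**2     # dilepton invariant mass squared
    qt = rng.uniform(0.0, 300.0)         # Z transverse momentum
    yz = rng.uniform(-2.0, 2.0)          # Z rapidity

    # Collins-Soper frame parton momenta as in Eq. (kinCS)
    c1 = 0.5 * x1 * np.sqrt(s / sz) * np.exp(-yz)
    c2 = 0.5 * x2 * np.sqrt(s / sz) * np.exp(+yz)
    p1 = c1 * np.array([np.sqrt(sz + qt**2), -qt, 0.0,  np.sqrt(sz)])
    p2 = c2 * np.array([np.sqrt(sz + qt**2), -qt, 0.0, -np.sqrt(sz)])

    # both partons stay massless ...
    assert abs(minkowski_dot(p1, p1)) < 1e-6 * p1[0]**2
    assert abs(minkowski_dot(p2, p2)) < 1e-6 * p2[0]**2
    # ... and their invariant mass reproduces the partonic s-hat = x1*x2*s
    shat = minkowski_dot(p1 + p2, p1 + p2)
    assert abs(shat - x1 * x2 * s) < 1e-6 * shat
```

This only checks the internal consistency of the boosted momenta with the lab-frame definitions; it does not fix the overall frame orientation.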
This rotation makes all angular coefficients symmetric in $\yz$. \begin{figure}[htbp] \centering \includegraphics[width=9.0cm]{figs/CSframe.ps} \caption{\label{fig:frames} Sketch of the Collins-Soper frame. $\mathbf{p}_1, \mathbf{p}_2$ correspond to the three-momenta of the right- and left-moving protons.} \end{figure} In experiment, only the charged lepton pair from the Z decay is measurable, giving a set of visible variables $\yz,\qt,\sz,\cos\theta_{CS},\phi_{CS}$, where the latter two denote the polar and azimuthal angles of the charged lepton in the CS frame. We parametrize the Lorentz invariant phase space in a way such that the invisible part $\sy,\yy,\cos\theta_{\chi},\phi_{\chi}$ can be integrated over: \begin{eqnarray} \int \d \Phi_4(k_{l},k_{\bar{l}},k_{\chi},k_{\bar{\chi}}) &=& \int \dfrac{\d \sz}{2\pi} \dfrac{\d \sy}{2\pi} \int \d\Phi'_2(\py,\pz) \d\Phi_2(k_{l},k_{\bar{l}}) \d\Phi_2(k_{\chi},k_{\bar{\chi}}), \\ \int \d\Phi'_2(\py,\pz) &=& \int \dfrac{\d^3 \pz}{(2\pi)^3 2 \pz^0} \dfrac{\d^3 \py}{(2\pi)^3 2 \py^0} (2\pi)^4 \delta^4(p_1+p_2-\pz-\py), \\ \nonumber &=& \dfrac{1}{4\pi s} \int \d \yz \d \yy \d \qt \cdot \qt \\ \nonumber && \delta(x_1-\dfrac{\xz}{2}\ee^{\yz}-\dfrac{\xy}{2}\ee^{\yy}) \delta(x_2-\dfrac{\xz}{2}\ee^{-\yz}-\dfrac{\xy}{2}\ee^{-\yy}) \\ \int \d \Phi_2(k_1,k_2) &=& \dfrac{1}{8\pi} \bar{\beta}(\dfrac{\mathrm{m}_1^2}{s_{12}},\dfrac{\mathrm{m}_2^2}{s_{12}}) \dfrac{\d \cos\theta}{2} \dfrac{\d \phi}{2\pi}, \\ \bar{\beta}(a,b) &=& \sqrt{\lambda(1,a,b)} = \sqrt{1+a^2+b^2-2 a-2 b-2 a b}.
\nonumber \end{eqnarray} Then we factorize the decay angular distribution in terms of nine harmonic polynomials and eight angular coefficients $A_i,\,i=0,\ldots,7$~\cite{Aad:2016izn,Khachatryan:2015paa}: \begin{eqnarray} \dfrac{\d\sigma}{\d\qt\d\yz\d\sz\d\cos\theta\d\phi} &&= \left( \int \d\cos\theta \d\phi \dfrac{\d\sigma}{\d\qt\d\yz\d\sz\d\cos\theta\d\phi} \right) \dfrac{3}{16 \pi} \\ \nonumber && \Bigg\{ (1+\cos^2\theta) + \dfrac{1}{2} A_0 (1-3\cos^2\theta) + A_1 \sin2\theta \cos\phi \\ \nonumber && + \dfrac{1}{2} A_2 \sin^2\theta \cos2\phi + A_3 \sin\theta \cos\phi + A_4 \cos\theta \\ \nonumber && + A_5 \sin^2\theta \sin2\phi + A_6 \sin2\theta \sin\phi + A_7 \sin\theta\sin\phi \Bigg\}, \end{eqnarray} where the polar and azimuthal angles $\theta,\phi$ are measured in the CS frame. Coefficients $A_5-A_7$ are parity-odd, do not contribute at tree level, and are found to be very small for Z boson production~\cite{Aad:2016izn,Khachatryan:2015paa}. Therefore, in this analysis we consider only $A_0-A_4$. \section{Cross checks with the {\sc MG5} program} \label{app:valid} To make the {\sc MG5} results comparable, we implemented setups similar to those described in the paper. These include the coupling constants, the choice of PDF set, the renormalization and factorization scales, the Breit-Wigner cutoff, and the BL selections as described in Table~\ref{tab:selections} of the paper. Table~\ref{tab:validation1} compares our results with the {\sc MG5} ones with one on-shell Z boson in the final state. For all the cases, the differences lie within the statistical uncertainties. Table~\ref{tab:validation2} compares our results with the {\sc MG5} ones with the Z boson leptonically decayed. Our program applies all the BL selections with the NWA, while the {\sc MG5} samples replace the NWA with $|\mathrm{m}_{ll}-\mz|<15\times\Gamma_{\mathrm{Z}}$. This replacement leads to slightly smaller {\sc MG5} cross sections compared to ours, but in general, the differences are not large.
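As a simple numerical check of the harmonic decomposition used throughout this paper, note that every term beyond $(1+\cos^2\theta)$ integrates to zero over the full solid angle, so the $\frac{3}{16\pi}$-normalized angular density integrates to one for any coefficient values. A sketch (Python with SciPy; the coefficient values are illustrative, not fit results):

```python
import numpy as np
from scipy import integrate

def angular_density(ct, phi, A):
    """(3/16pi)-normalized lepton angular distribution; A = [A0, ..., A7]."""
    st = np.sqrt(1.0 - ct**2)              # sin(theta), theta in [0, pi]
    s2t = 2.0 * st * ct                    # sin(2*theta)
    return 3.0 / (16.0 * np.pi) * (
        (1.0 + ct**2)
        + 0.5 * A[0] * (1.0 - 3.0 * ct**2)
        + A[1] * s2t * np.cos(phi)
        + 0.5 * A[2] * st**2 * np.cos(2.0 * phi)
        + A[3] * st * np.cos(phi)
        + A[4] * ct
        + A[5] * st**2 * np.sin(2.0 * phi)
        + A[6] * s2t * np.sin(phi)
        + A[7] * st * np.sin(phi)
    )

A = [0.1, 0.02, 0.05, 0.01, 0.08, 0.0, 0.0, 0.0]   # illustrative values
# integrate over cos(theta) in [-1, 1] and phi in [0, 2*pi]
norm, _ = integrate.dblquad(lambda phi, ct: angular_density(ct, phi, A),
                            -1.0, 1.0, lambda ct: 0.0, lambda ct: 2.0 * np.pi)
assert abs(norm - 1.0) < 1e-6
```

The same normalization property underlies the method-of-moments extraction: each $A_i$ can be projected out with the corresponding harmonic.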
Normalizations of the signal and all of the background pdfs are also checked to be consistent with one. \begin{table}[htb] \centering \scalebox{0.8}{ \begin{tabular}{c|c|c|c|c} \hline \hline Process/Benchmark & Cross section & Cross section & Relative & Relative \\ & (fb) & from {\sc MG5} (fb) & Difference (\%) & Statistical uncertainty (\%) \\ \hline S0$_a$ & 0.1535 & 0.1536 & 0.052 & 0.34 \\ S0$_b$ & 0.1452 & 0.1454 & 0.14 & 0.29 \\ S0$_c$ & 4.436$\times 10^{-7}$ & 4.459$\times 10^{-7}$ & 0.52 & 0.14 \\ \hline S1$_a$ & 37.16 & 37.21 & 0.14 & 0.23 \\ S1$_b$ & 7.931 & 7.943 & 0.15 & 0.24 \\ S1$_c$ & 66.94 & 67.01 & 0.11 & 0.25 \\ \hline Z($\to 2\nu$)Z & 3561 & 3564 & 0.081 & 0.16 \\ W($\to e\nu$)Z & 2547 & 2556 & 0.39 & 0.26 \\ Z+jet & 1.189$\times 10^{7}$ & 1.192$\times 10^{7}$ & 0.23 & 0.23 \\ \hline \hline \end{tabular}} \caption{ Comparison of cross sections obtained by our program and the {\sc MG5}, with one on-shell Z boson in the final states. Their differences and the statistical uncertainties taken from the {\sc MG5} are presented relative to the {\sc MG5} ones. 
} \label{tab:validation1} \end{table} \begin{table}[htb] \centering \scalebox{0.8}{ \begin{tabular}{c|c|c|c|c} \hline \hline Process/Benchmark & Cross section & Cross section & Relative & Relative \\ & (fb) & from {\sc MG5} (fb) & Difference (\%) & Statistical uncertainty (\%) \\ \hline S0$_a$ & 4.748$\times 10^{-3}$ & 4.688$\times 10^{-3}$ & 1.3 & 0.31 \\ S0$_b$ & 4.333$\times 10^{-3}$ & 4.382$\times 10^{-3}$ & 1.1 & 0.33 \\ S0$_c$ & 1.667$\times 10^{-8}$ & 1.649$\times 10^{-8}$ & 1.1 & 0.27 \\ \hline S1$_a$ & 1.149 & 1.034 & 11 & 0.23 \\ S1$_b$ & 0.2431 & 0.2186 & 11 & 0.27 \\ S1$_c$ & 2.070 & 1.861 & 11 & 0.23 \\ \hline ZZ$\to 2l 2\nu$ & 27.71 & 26.50 & 4.6 & 0.13 \\ WZ($\to e\nu 2l$) & 17.05 & 18.39 & 7.3 & 0.26 \\ Z($\to l^+l^-$)+jet & 36125 & 34440 & 4.9 & 0.30 \\ \hline \hline \end{tabular}} \caption{ Comparison of cross sections obtained by our program and the {\sc MG5}, with Z boson leptonically decayed. Our program considered all the BL-selections with NWA, while the {\sc MG5} ones replace the NWA with $|\mathrm{m}_{ll}-\mz|<15\times\Gamma_{\mathrm{Z}}$. Hence the {\sc MG5} results are in general slightly smaller than ours. Their differences and the statistical uncertainties taken from the {\sc MG5} are presented relative to the {\sc MG5} ones. 
} \label{tab:validation2} \end{table} \section*{Acknowledgments} This work would not have been possible without what D. Yang learned from Kaoru Hagiwara (KEK) back at PITT PACC (U.S.) and Xinjiang Univ. (China). We have benefited from useful discussions with many people, to name a few, Kaoru Hagiwara, Tao Han, Junmo Chen, Xing Wang (PITT), Kai Ma (SUT) and Yandong Liu, Jing Li (PKU). D. Yang would also like to thank the PITT particle physics group and the Xinjiang Univ. theoretical physics group for their warm hospitality during his stay. We are also grateful to Junichi Kanzaki (KEK) and Yajuan Zheng (NTU) for useful advice on using {\sc BASES}.
This work is supported in part by the National Natural Science Foundation of China, under Grants No. 11475190 and No. 1157500, and by a short-term internship program from the graduate school of Peking University. \section{Numerical results of angular coefficients in the Collins-Soper frame} \label{sec:angC} As we are not directly searching for a resonance, $\sz$ is expected to provide no additional sensitivity, and a narrow width approximation (NWA) is applied for convenience. Apart from that, we have four observables from the Z boson decay: $\yz,\qt,\cos\theta_{CS},\phi_{CS}$. To study the features of these four-dimensional data, we calculate angular coefficients in the $\yz-\qt$ plane for both the major background process, SM $\mathrm{Z Z}\to 2l2\nu$ production, and different dark sector models. The angular coefficients can be extracted using the method of moments~\cite{Beaujean:2015xea}. In experiments, it is more straightforward to extract them from a likelihood fit~\cite{Aad:2016izn,Khachatryan:2015paa}. Applying the NWA for the Z boson, the cross section can be calculated through spin density matrices of the Z boson production ($\rho^{\rm{P}}$) and decay ($\rho^{\rm{D}}$): \begin{eqnarray} \dfrac{\d\sigma}{\d\yz\d\qt\d\sy\d\Phi_2(k_{\chi},k_{\bar{\chi}})\d\cos\theta\d\phi} = \dfrac{\d\sigma_{P}}{\d\yz\d\qt\d\sy\d\Phi_2(k_{\chi},k_{\bar{\chi}})} \cdot \mathrm{Br}(\mathrm{Z}\to l^+ l^-) \cdot 3 \sum_{s,s'} \rho^{\mathrm{P}}_{s s'} \rho^{\mathrm{D}}_{s s'}.
\nonumber \end{eqnarray} The Z boson production density matrix is defined in a specific range (${\cal R}$) of the $\yz-\qt$ plane as follows: \begin{eqnarray} \rm{Tr}\rho^{\rm{P}} &=& \int_{\cal R} \d\Phi'_2(\py,\pz) \d\Phi_2(k_{\chi},k_{\bar{\chi}}) \sum_{a,b} f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \dfrac{1}{2\hs} \overline{\sum_{\text{ext}}} \sum_{s} \left| {\cal M}_s \right|^2, \\ \nonumber \rho^{\rm{P}}_{s s'} &=& \dfrac{1}{\rm{Tr}\rho^{\rm{P}}} \int_{\cal R} \d\Phi'_2(\py,\pz) \d\Phi_2(k_{\chi},k_{\bar{\chi}}) \sum_{a,b} f_a(x_1,\mu_{\mathrm{F}}) f_b(x_2,\mu_{\mathrm{F}}) \dfrac{1}{2\hs} \overline{\sum_{\text{ext}}} {\cal M}_s {\cal M}^*_{s'}, \end{eqnarray} where $\overline{\sum}_{\text{ext}}$ denotes a sum over the spins and colors of all external particles other than the Z boson, averaged over the initial-state ones. The decay density matrix is obtained using the Z boson decay amplitudes and parametrized similarly to Ref.~\cite{Dutta:2008bh}. The production and decay density matrices are both normalized such that the trace is one. To obtain the amplitudes, we start from the {\sc FeynRules} models implemented by the authors of Refs.~\cite{Mattelaer:2015haa,Backovic:2015soa,Neubert:2015fka,Das:2016pbk,Kraml:2017atm} and use {\sc ALOHA} in the {\sc MadGraph} framework to generate {\sc HELAS} subroutines for the helicity amplitudes~\cite{Alloul:2013bka,deAquino:2011ub,Alwall:2014hca,HAGIWARA19861,DREINER20101}. In the CS frame, we choose the z-axis as the spin quantization axis; hence a rotation is necessary to bring the helicity frame results to the CS frame ones. We choose the y-axis to be common for the two frames and find that the opening angle $\omega$ between the two frames can be obtained through \begin{eqnarray} \cos\omega &=& \dfrac{2\sqrt{\tau_{\mathrm{Z}}}\sinh\yz}{\sqrt{\xz^2\cosh^2\yz-4\tau_{\mathrm{Z}}}}, \end{eqnarray} where $\tau_{\mathrm{Z}} \equiv \dfrac{\sz}{s}$ and $\omega \in [0,\pi)$.
The density matrices are then rotated according to Wigner's d-functions: \begin{eqnarray} \rho_{s s'}^{\mathrm{P},HEL} &=& \sum_{\alpha, \beta} d^{J=1}_{\alpha s} (\omega) d^{J=1}_{\beta s'} (\omega) \rho^{\mathrm{P},CS}_{\alpha \beta}, \\ \nonumber \rho_{s s'}^{\mathrm{P},CS} &=& \sum_{\alpha, \beta} d^{J=1}_{\alpha s} (-\omega) d^{J=1}_{\beta s'} (-\omega) \rho^{\mathrm{P},HEL}_{\alpha \beta}, \end{eqnarray} where we have used the following notations: \begin{eqnarray} g_{\alpha\beta} &=& - \sum_{s} \epsilon^*_{\alpha}(p,s) \epsilon_{\beta}(p,s), \\ \nonumber \epsilon^{\mu}(p,s) \epsilon_{\mu}(p,s') &=& - d^{J=1}_{s s'}(\theta_{s,s'}), \\ \nonumber d^{J=1}_{s=+,-,0;s'=+,-,0} (\theta) &=& \left( \begin{array}{ccc} \dfrac{1+\cos\theta}{2} & \dfrac{1-\cos\theta}{2} & -\dfrac{\sin\theta}{\sqrt{2}} \\ \dfrac{1-\cos\theta}{2} & \dfrac{1+\cos\theta}{2} & \dfrac{\sin\theta}{\sqrt{2}} \\ \dfrac{\sin\theta}{\sqrt{2}} & -\dfrac{\sin\theta}{\sqrt{2}} & \cos\theta \\ \end{array} \right). \end{eqnarray} The phase space is prepared analytically, and the integration is performed using {\sc BASES}~\cite{Kawabata:1995th} and the {\sc GNU Scientific Library}. We map the phase space variables to increase the integration efficiency. Specifically, for a massive propagator with mass $\mathrm{m}$ and width $\Gamma$, the invariant mass is generated with \begin{eqnarray} s &=& \mathrm{m}^2 + \mathrm{m} \Gamma \tan( x (y_{max}-y_{min}) + y_{min} ), \text{ where } \\ y_{min/max} &=& \arctan(\dfrac{s_{min/max}-\mathrm{m}^2}{\mathrm{m}\Gamma}), \\ \nonumber \mathrm{Jacobian} &=& \dfrac{y_{max}-y_{min}}{\mathrm{m}\Gamma} \left( (s-\mathrm{m}^2)^2 + (\mathrm{m}\Gamma)^2 \right), \end{eqnarray} and $x$ is a uniformly generated random number. The simulation considers $\sin\theta_W=0.23129$, $\mz=91.1876$~GeV, $\Gamma_{\mathrm{Z}}=2.4952$~GeV and $\alpha(\mz)^{-1}=127.95$~\cite{Patrignani:2016xqp}. The W boson mass is obtained through $\mz \cos\theta_W$, assuming the $\rho$ parameter equals one.
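The tan-mapping above can be sketched numerically as follows (Python with NumPy; the Z mass, width and integration range are used purely as an illustration). The Jacobian exactly cancels the Breit-Wigner peak, so each sampled point carries a constant weight and the Monte Carlo estimate of $\int \d s\, [(s-\mathrm{m}^2)^2+(\mathrm{m}\Gamma)^2]^{-1}$ is reproduced without fluctuations:

```python
import numpy as np

m, gamma = 91.1876, 2.4952               # illustrative: Z mass/width in GeV
s_min, s_max = (m - 10.0) ** 2, (m + 10.0) ** 2

y_min = np.arctan((s_min - m**2) / (m * gamma))
y_max = np.arctan((s_max - m**2) / (m * gamma))

rng = np.random.default_rng(0)
x = rng.uniform(size=100000)             # flat random numbers

# change of variables and its Jacobian ds/dx, as in the text
s = m**2 + m * gamma * np.tan(x * (y_max - y_min) + y_min)
jac = (y_max - y_min) / (m * gamma) * ((s - m**2) ** 2 + (m * gamma) ** 2)

# the Breit-Wigner integrand times the Jacobian is exactly constant ...
bw = 1.0 / ((s - m**2) ** 2 + (m * gamma) ** 2)
assert np.allclose(bw * jac, (y_max - y_min) / (m * gamma))

# ... so the Monte Carlo mean equals the analytic integral
integral = np.mean(bw * jac)
assert np.isclose(integral, (y_max - y_min) / (m * gamma))
```

In the full integrand the cancellation is only approximate, but the mapping still flattens the propagator peak and greatly reduces the variance of the estimate.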
The $\alpha_S$ is chosen to be consistent with the one in the parton distribution functions (PDF). We use the PDF set {\sc NNPDF23}~\cite{Ball:2013hta} with $\alpha_S(\mz)=0.130$ at leading order. The factorization scale is set equal to the Z boson transverse energy $E_{\mathrm{T}}=\sqrt{\qt^2+\sz}$. Cross sections in this section consider the visible Z boson decays to electrons and muons with the NWA and $\mathrm{Br}(\mathrm{Z}\to l^+ l^-)=6.73$\%~\cite{Patrignani:2016xqp}. The advantage of our program is that high statistical accuracy can be achieved through a direct integration. To validate our program, we checked our angular coefficients against toy measurements based on {\sc MadGraph5\_aMC@NLO (MG5)} generated events. \subsection{SM $\mathrm{Z Z}\to 2l2\nu$ background} The SM $\mathrm{Z Z}\to 2l2\nu$ production is the major background of our DM search. It has a final state signature similar to that of the signal process, as depicted in Fig.~\ref{fig:feynzz}. Hence we first examine Fig.~\ref{fig:angzz} for the angular coefficients of this process. In general, the angular coefficient $A_0$ measures the difference between the longitudinal and transverse polarizations, and the Z boson becomes more longitudinal at high $\qt$. The coefficient $A_4$ measures the forward-backward asymmetry; the Z boson looks more left-handed in the forward region. The $A_2$ measures the interference between the transverse amplitudes, and $A_{1,3}$ measure the interference between the transverse and longitudinal amplitudes. \clearpage \begin{figure}[htb] \centering \includegraphics[width=10.0cm]{figs/zz_decayed.ps} \caption{\label{fig:feynzz} Representative Feynman diagrams of the SM $\mathrm{Z Z}\to 2l2\nu$ production.
} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSsm_zz_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSsm_zz_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSsm_zz_yz.eps} \caption{\label{fig:angzz} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the SM $\mathrm{Z Z}\to 2l2\nu$ process. } \end{figure} \subsection{Spin-0 mediator} \begin{figure}[!h] \centering \includegraphics[width=5.0cm]{figs/zy0_decayed.ps} \caption{\label{fig:feyns0} Representative Feynman diagrams of the dark sector with a spin-0 mediator. For the S0$_c$ model, there is no virtual photon propagator. } \end{figure} We consider a simplified model with a scalar s-channel mediator as described in Ref.~\cite{Neubert:2015fka}. The dark sector model is constructed as follows: \begin{eqnarray} {\cal L}_{SM EW}^{Y_0} &=& \dfrac{1}{\Lambda} g^S_{h3} (D^{\mu}\phi)^{\dagger} (D_{\mu}\phi) Y_0 \\ & & + \dfrac{1}{\Lambda} B_{\mu\nu} \left( g^S_{B} B^{\mu\nu} + g^P_B \tilde{B}^{\mu\nu} \right) Y_0 + \dfrac{1}{\Lambda} W_{\mu\nu}^i \left( g^S_W W^{i,\mu\nu} + g^P_W \tilde{W}^{i,\mu\nu} \right) Y_0, \\ {\cal L}_{X}^{Y_0} &=& \mathrm{m}_{\chi_C} g^S_{X_C} \chi^*_C \chi_C Y_0 + \bar{\chi}_D ( g^S_{X_D} + i g^P_{X_D} \gamma_5 ) \chi_{D} Y_0, \end{eqnarray} where $\tilde{V}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}V_{\rho\sigma}$ is the dual field strength tensor of the $V$ field and $\Lambda$ is a high energy scale. As discussed in Ref.~\cite{Neubert:2015fka}, these operators can be induced by fermion loop graphs with the heavy fermion integrated out. The signature of this model is very different from that of the SM $\mathrm{Z Z}\to 2l2\nu$ process: the dark mediator is emitted from the SM gauge bosons, as depicted in Fig.~\ref{fig:feyns0}.
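The dual field strengths appearing in the operators above can be illustrated with a small numerical sketch (Python with NumPy; the conventions assumed here, $\eta=\mathrm{diag}(1,-1,-1,-1)$ and $\epsilon_{0123}=+1$, are ours and not fixed by the text). It builds $\tilde{V}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}V^{\rho\sigma}$ for a random antisymmetric $V$ and checks that the dual is again antisymmetric and that dualizing twice gives $-V$, as expected for two-forms in Lorentzian signature:

```python
import itertools
import numpy as np

# 4D Levi-Civita symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    inversions = sum(perm[i] > perm[j]
                     for i in range(4) for j in range(i + 1, 4))
    eps[perm] = (-1) ** inversions

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+,-,-,-)

def dual(V_lower):
    """V~_{mu nu} = (1/2) eps_{mu nu rho sigma} V^{rho sigma}."""
    V_upper = eta @ V_lower @ eta        # raise both indices
    return 0.5 * np.einsum('mnrs,rs->mn', eps, V_upper)

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
V = A - A.T                              # random antisymmetric V_{mu nu}

Vd = dual(V)
assert np.allclose(Vd, -Vd.T)            # the dual is antisymmetric
assert np.allclose(dual(Vd), -V)         # double dual = -V
```

For the electromagnetic case this is the familiar statement that the dual exchanges electric and magnetic fields, with a sign on the second exchange.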
We consider three benchmark scenarios of the parameters, labeled S0$_{a,b,c}$. As our angular distributions are more sensitive to changes in the couplings, we fix the mass of the dark matter to $\mchi=10$~GeV and the mass of the mediator to $\mys=1000$~GeV. The angular distributions will not change drastically as long as $2\mchi$ is much smaller than $\mys$. The parameter values and inclusive cross sections are listed in Table~\ref{tab:exhi-s0}. Angular coefficients of the benchmark scenarios S0$_{a,b,c}$ are shown in Fig.~\ref{fig:angs0a}, Fig.~\ref{fig:angs0b} and Fig.~\ref{fig:angs0c}, respectively. Compared to the SM $\mathrm{Z Z}\to 2l2\nu$ process, the dark matter signal is produced at much higher $\qt$ and has very different angular coefficient distributions, e.g., more transverse at low $\qt$. The S0$_a$ and S0$_b$ scenarios can be distinguished through $A_0$ and $A_2$, whose $\yz$ dependences are very different. In the case of S0$_c$, $Y_0$ couples to the weak bosons like a Higgs boson and does not perturb the coupling structure of the Z boson production. Consequently, the $A_0$, $A_1$ and $A_3$ in the CS frame are all zero and hence are not shown in the figure. \begin{table}[htb] \centering \begin{tabular}{c|cccccccc} \hline \hline Benchmark & S0$_a$ & S0$_b$ & S0$_c$ \\ \hline $g^S_{X_D}$ & 1 & 0 & 0 \\ $g^P_{X_D}$ & 0 & 1 & 0 \\ $g^S_{X_C}$ & 0 & 0 & 1 \\ \hline $g^S_{W}$ & 0.25 & 0 & 0 \\ $g^P_{W}$ & 0 & 0.25 & 0 \\ $g^S_{h3}$ & 0 & 0 & 1 \\ $\Lambda$ (GeV) & 3000 & 3000 & 3000 \\ \hline Interaction & CP-even & CP-odd & CP-even \\ $\mchi$ (GeV) & 10 & 10 & 10 \\ $\mys$ (GeV) & 1000 & 1000 & 1000 \\ \hline $\Gamma_{Y_0}$ (GeV)& 41.4 & 41.4 & 1.05 \\ Cross section (fb) & 0.0103 & 0.00977 & $2.98\times 10^{-8}$ \\ \hline \hline \end{tabular} \caption{ Benchmark scenarios with a spin-0 mediator.
} \label{tab:exhi-s0} \end{table} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin0_SwXd_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin0_SwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin0_SwXd_yz.eps} \caption{\label{fig:angs0a} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S0$_a$. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin0_PwXd_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin0_PwXd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin0_PwXd_yz.eps} \caption{\label{fig:angs0b} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benckmark scenario S0$_b$. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.5cm]{figs/fig_A2_BSspin0_HXc_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin0_HXc_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin0_HXc_yz.eps} \caption{\label{fig:angs0c} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S0$_c$. Comparing with other figures, we extended the range of the $A_2$ for a better demonstration. 
} \end{figure} \subsection{Spin-1 mediator} We consider the same dark sector with a spin-1 mediator as used by the LHC experiments~\cite{Abercrombie:2015wmb}, with the following interactions of the dark sector: \begin{eqnarray} {\cal L}_{X_D}^{Y_1} &=& \bar{\chi}_D \gamma_{\mu} \left( g^V_{X_D} + g^A_{X_D} \gamma_5 \right) \chi_D Y_1^{\mu}, \\ \nonumber {\cal L}_{SM}^{Y_1} &=& \bar{d}_i \gamma_{\mu} \left( g^V_{d_{ij}} + g^A_{d_{ij}}\gamma_5 \right) d_j Y_1^{\mu} + \bar{u}_i \gamma_{\mu} \left( g^V_{u_{ij}} + g^A_{u_{ij}}\gamma_5 \right) u_j Y_1^{\mu}. \end{eqnarray} \begin{figure}[htb] \centering \includegraphics[width=10.0cm]{figs/zy1_decayed.ps} \caption{\label{fig:feyns1} Representative Feynman diagrams of the dark sector with a spin-1 mediator. } \end{figure} The masses of the dark matter and the mediator are chosen to be the same as in the spin-0 model. A thorough discussion of the impact of the choice of masses is available in Ref.~\cite{Abercrombie:2015wmb}. Since our analysis is more suitable for testing couplings, we consider the benchmark scenarios listed in Table~\ref{tab:exhi-s1}. The signal signature is close to that of the SM $\mathrm{Z Z}\to 2l2\nu$ process, as shown in Fig.~\ref{fig:feyns1}, and we include the SM $\mathrm{Z Z}\to 2l2\nu$ here as a special case with zero coupling for comparison. The S1$_b$ and S1$_c$ scenarios project out the right- and left-handed parts of the Z-q-$\mathrm{\bar{q}}$ couplings. Since the magnitude of the left-handed couplings is larger than that of the right-handed ones, the cross section of the S1$_c$ scenario is found to be much larger than that of the S1$_b$ scenario. Angular coefficients of the benchmark scenarios S1$_{a,b,c}$ are shown in Fig.~\ref{fig:angs1a}, Fig.~\ref{fig:angs1b} and Fig.~\ref{fig:angs1c}, respectively. Compared with the SM $\mathrm{Z Z}\to 2l2\nu$ and the spin-0 dark sector models, the $A_0$ of the spin-1 models is found to be very significant.
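The chiral projections used here follow from the identity $\gamma^{\mu}(g^V+g^A\gamma_5)=g_R\,\gamma^{\mu}P_R+g_L\,\gamma^{\mu}P_L$ with $g_{R/L}=g^V\pm g^A$ and $P_{R/L}=(1\pm\gamma_5)/2$. A trivial numerical sketch (Python; the coupling values are the benchmark inputs of Table~\ref{tab:exhi-s1}) confirms that S1$_b$ and S1$_c$ correspond to purely right- and left-handed quark couplings:

```python
import math

def chiral_couplings(gV, gA):
    """Decompose gamma^mu (gV + gA*gamma5) into right/left-handed parts."""
    return gV + gA, gV - gA              # (gR, gL)

# S1_b: gV = gA = sqrt(2)/8  ->  purely right-handed
gR_b, gL_b = chiral_couplings(math.sqrt(2) / 8,  math.sqrt(2) / 8)
# S1_c: gV = -gA = sqrt(2)/8 ->  purely left-handed
gR_c, gL_c = chiral_couplings(math.sqrt(2) / 8, -math.sqrt(2) / 8)

assert abs(gL_b) < 1e-15 and abs(gR_b - math.sqrt(2) / 4) < 1e-15
assert abs(gR_c) < 1e-15 and abs(gL_c - math.sqrt(2) / 4) < 1e-15
```

The normalization $\sqrt{2}/8$ is chosen such that $\sqrt{(g^V)^2+(g^A)^2}$ matches the S1$_a$ coupling of $0.25$.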
Among the three scenarios, most signatures look similar, but $A_3$ and $A_4$ take different signs between the S1$_b$ and S1$_c$. Hence the $A_3$ and $A_4$ can be used to quantify the parity violation of the dark sector. \begin{table}[htb] \centering \begin{tabular}{c|ccccccc} \hline \hline Benchmark & S1$_a$ & S1$_b$ & S1$_c$ & S1$_0$ \\ & Spin independent & Right handed & Left handed & SM ($\mathrm{Z Z}\to2l2\nu$) \\ \hline $g^V_{X_D}$ & 1 & $1/\sqrt{2}$ & $1/\sqrt{2}$ & - \\ $g^A_{X_D}$ & 0 & $1/\sqrt{2}$ & -$1/\sqrt{2}$ & - \\ $g^V_{X_C}$ & 0 & 0 & 0 & - \\ \hline $g^V_{u}$ & 0.25 & $\sqrt{2}/8$ & $\sqrt{2}/8$ & - \\ $g^A_{u}$ & 0 & $\sqrt{2}/8$ & -$\sqrt{2}/8$ & - \\ $g^V_{d}$ & 0.25 & $\sqrt{2}/8$ & $\sqrt{2}/8$ & - \\ $g^A_{d}$ & 0 & $\sqrt{2}/8$ & -$\sqrt{2}/8$ & - \\ \hline $\mchi$ (GeV) & 10 & 10 & 10 & - \\ $\myv$ (GeV) & 1000 & 1000 & 1000 & - \\ \hline $\Gamma_{Y_1}$ (GeV)& 56.3 & 55.9 & 55.9 & - \\ Cross section (fb) & 2.50 & 0.533 & 4.50 & 239 \\ \hline \hline \end{tabular} \caption{ Benchmark scenarios with a spin-1 mediator. } \label{tab:exhi-s1} \end{table} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin1_VV_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin1_VV_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin1_VV_yz.eps} \caption{\label{fig:angs1a} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S1$_a$. 
} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin1_LL_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin1_LL_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin1_LL_yz.eps} \caption{\label{fig:angs1b} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S1$_b$. } \end{figure} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin1_RR_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin1_RR_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin1_RR_yz.eps} \caption{\label{fig:angs1c} Angular coefficients $A_0-A_4$ and the $\yz-\qt$ differential cross section of the benchmark scenario S1$_c$. } \end{figure} \clearpage \subsection{Spin-2 mediator} The dark sector with a spin-2 mediator is also tested. We consider a model as described in Ref.~\cite{Kraml:2017atm}, with benchmark scenarios listed in Table~\ref{tab:exhi-s2}. The masses of the dark matter and the mediator are also chosen to be the same as in the spin-0 model. Despite an increase of complexity in the computation, we find that the angular coefficients look similar to those of the benchmark scenario S1$_a$. We show only the angular coefficients of the benchmark scenario S2$_a$ in Fig.~\ref{fig:angs2a}. Some visible differences from S1$_a$ can be observed in the $A_0$ and $A_2$ distributions. Since we do not measure the DM directly, the angular coefficients of S2$_{b,c}$ are found to be very close to those of S2$_a$. 
\begin{table}[htb] \centering \begin{tabular}{c|cccccccc} \hline \hline Benchmark & S2$_a$ & S2$_b$ & S2$_c$ \\ \hline $g^T_{X_D}$ & 1 & 0 & 0 \\ $g^T_{X_R}$ & 0 & 1 & 0 \\ $g^T_{X_V}$ & 0 & 0 & 1 \\ $g^T_{SM}$ & 1 & 1 & 1 \\ \hline $\mchi$ (GeV) & 10 & 10 & 10 \\ $\myt$ (GeV) & 1000 & 1000 & 1000 \\ $\Lambda$ (GeV) & 3000 & 3000 & 3000 \\ \hline $\Gamma_{Y_2}$ (GeV)& 95.3 & 93.7 & 97.7 \\ Cross section (fb) & 2.73 & 0.0462 & 0.578 \\ \hline \hline \end{tabular} \caption{ Benchmark scenarios with a spin-2 mediator. Angular coefficients of the three scenarios look almost identical. } \label{tab:exhi-s2} \end{table} \begin{figure}[htp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_BSspin2_Xd_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_BSspin2_Xd_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_BSspin2_Xd_yz.eps} \caption{\label{fig:angs2a} Angular coefficients and the $\yz-\qt$ differential cross section of the benchmark scenario S2$_a$. } \end{figure} \clearpage \section{Introduction} The existence of dark matter (DM) is now well established. Current measurements give a cold DM density of $25.8$\%, far more significant than the $4.84$\% baryon density~\cite{Patrignani:2016xqp,Ade:2015xua}. Despite being an essential constituent of the universe, intrinsic properties of the DM, such as its mass, spin, and nongravitational interactions with the standard model (SM) particles, are still elusive at present. Assuming that DM interacts weakly with the SM particles, the DM annihilation cross section is constrained by the precisely measured relic DM abundance, and a weak-scale DM candidate is usually expected for consistency~\cite{PhysRevLett.39.165}. 
The WIMP DM candidate can be produced at the LHC, and its escape from detection typically leads to large missing transverse energy, resulting in mono-X signatures, where X may denote a jet~\cite{Beltran:2010ww,Aaboud:2017buf,Sirunyan:2017hci}, especially a t-/b-jet~\cite{Aaboud:2017rzf,Sirunyan:2017xgm}, a photon~\cite{Sirunyan:2017ewk}, a Z boson~\cite{Carpenter:2012rg,Sirunyan:2017onm,Aaboud:2017bja}, a W boson~\cite{Bai:2012xg,Aaboud:2017efa} or a Higgs boson~\cite{Aaboud:2017yqz,Sirunyan:2017hnk}. Numerous DM searches have been performed at the LHC, and many results from 13 TeV collisions are now available~\cite{Sirunyan:2017onm,Sirunyan:2016iap,Sirunyan:2017hci,Khachatryan:2016jww,Sirunyan:2017hnk,Sirunyan:2017xgm,Sirunyan:2017ewk,Sirunyan:2017nvi,Aaboud:2017rzf,Aaboud:2017bja,Aaboud:2017buf,Aaboud:2017buh,Aaboud:2017efa,Aaboud:2017yqz}, with strategies and benchmark models described in Ref.~\cite{Abercrombie:2015wmb}. In this analysis, we explore the effectiveness of the Z boson leptonic decay with a mono-Z signature in probing properties of the dark sector. Compared with other search channels, this channel has a relatively low cross section and may not be the most powerful one at the discovery stage. However, precisely measured electrons and muons provide a clean signature and can be used to enhance the signal sensitivity. The phenomenology of this channel has been explored in Refs.~\cite{Neubert:2015fka,Petriello:2008pu,Alves:2015dya,Han:1999ne,Yu:2014ula}, including higher-order QCD predictions, multivariate analyses, searches for extra dimensions, and prospects at electron-positron colliders. LHC measurements are also available, and limits have been set on several dark sector models~\cite{Sirunyan:2017onm,Aaboud:2017bja,Aad:2014wca}. To better exploit the power of the lepton angular distribution, we systematically study the information it carries and how it is affected by the dark sector. 
The dark sector can be modeled in many ways. As there is no strong support for the correctness of a specific model, it is now popular to set limits on parameters of effective or simplified theories~\cite{Goodman:2010ku,Goodman:2010yf,Abercrombie:2015wmb,Cao:2009uw}. Despite their simplicity, these models may not be realistic if they are not applied in a suitable regime. Both oversimplification and overextension of the theory can lead to ineffectual results. For example, going to very high energy can result in the violation of unitarity in effective theories~\cite{Abercrombie:2015wmb,Cotta:2012nj}. On the other hand, some features are general among models and depend less on the variations of model parameters, e.g., the spin and mass of the dark mediator, or the parity or charge conjugation parity (CP) properties of the couplings. If applied carefully, these effective or simplified models can help us better understand the phenomenology of the dark sector. Motivated by this, we look for specific variables that have discrimination power on general features of the dark sector. We consider the associated production of a Z boson and a dark mediator, where the Z boson decays to a pair of electrons or muons and the dark mediator decays to a pair of dark matter particles. As the dark matter escapes undetected, the typical feature of the event is a single leptonically decaying Z boson, with $\pt$ balanced by the missing transverse momentum vector. With precisely measured electron or muon momenta, one can reconstruct the Z boson rest frame and study in detail the information carried by the Z boson spin density matrix. We consider simplified models for spin-0, spin-1, and spin-2 mediators~\cite{Mattelaer:2015haa,Backovic:2015soa,Neubert:2015fka,Das:2016pbk,Kraml:2017atm}. In each case, only a few benchmark scenarios are considered, with representative parameter values. 
For the spin-0 model, we assume the dark mediator can only weakly interact with bosons through a set of dimension-5 operators as described in Ref.~\cite{Neubert:2015fka}. In this case, the mono-Z boson channel is advantageous, as a triple boson coupling is necessary for the production. If couplings to the SM fermions are introduced assuming minimal flavor violation, their effects are suppressed by proportionality to the Yukawa couplings~\cite{Backovic:2015soa,Cheung:2010zf,Lin:2013sca}. The spin-1 mediator model is chosen to be consistent with the one adopted in the LHC experiments~\cite{Abercrombie:2015wmb}. A spin-2 mediator model described in Ref.~\cite{Kraml:2017atm} is also tested. To maximally exploit the statistical power of the data, we present a framework to use the matrix element method (MEM) with a dynamical construction of the event likelihood function and set unbinned limits on parameters of the dark sector~\cite{doi:10.1143/JPSJ.57.4126,doi:10.1143/JPSJ.60.836,Gao:2010qx,Chatrchyan:2012sn,DeRujula:2010ys}. We parametrize the test statistic in a way such that the sensitivity of the MEM can be quantified through a term proportional to the KL-divergence of two probability density functions~\cite{kullback1951}. Limits on the coupling strengths of the dark sector models are set at 95\% confidence level (CL) based on the asymptotic approximation. As the spin-2 scenarios are found to have angular coefficients similar to those of the spin-independent spin-1 scenario, they are not considered in the limit setting. An example application of a matrix-element kinematic discriminator is also demonstrated with simulated events. This paper is organized as follows: Section~\ref{sec:param} introduces the parametrization of the lepton angular distribution. Section~\ref{sec:angC} describes computational details and presents numerical results for the angular coefficients in the Collins-Soper frame. 
Section~\ref{sec:limits} explains the statistical method for setting limits and presents results on the coupling strengths of the dark sector models. Section~\ref{sec:summary} summarizes our major findings and gives an outlook on aspects of the study. \section{Setting limits on the coupling strength parameters of dark sector models} \label{sec:limits} In Section~\ref{sec:angC}, we have shown that angular coefficients of the benchmark dark sector models can have distinct signatures from the SM $\mathrm{Z Z}\to 2l2\nu$ background process in the $\yz-\qt$ plane. In this section, we take advantage of these signatures and set limits on the coupling strength parameter $\lambda$ of each dark sector model, based on the observables $\mathbf{x}=(\yz,\qt,\cos\theta_{CS},\phi_{CS})$. The invisible part $(\yy,\sy,\cos\theta_{\chi},\phi_{\chi})$ is integrated out to construct the pdfs, as described in Section~\ref{sec:param}. \subsection{Statistical method} With the pdfs of the signal and background processes obtained through the MEM, one can construct an unbinned likelihood function over $N$ events in the data sample~\cite{Barlow:1990vc}: \begin{eqnarray} {\cal L}(\text{data}|\lambda,\boldsymbol{\theta}) &=& \text{Poisson}(N|S(\lambda,\boldsymbol{\theta})+B(\boldsymbol{\theta})) \rho(\boldsymbol{\theta}) \prod_i \rho(\mathbf{x}^i|\lambda,\boldsymbol{\theta}), \\ \rho(\mathbf{x}|\lambda,\boldsymbol{\theta}) &=& \dfrac{ S(\lambda,\boldsymbol{\theta}) \rho_s(\mathbf{x},\lambda) + B(\boldsymbol{\theta}) \rho_b(\mathbf{x}) }{ S(\lambda,\boldsymbol{\theta}) + B(\boldsymbol{\theta}) }, \end{eqnarray} where $\rho_s(\mathbf{x},\lambda)$ and $\rho_b(\mathbf{x})$ represent the pdfs of the signal and background, and $S(\lambda,\boldsymbol{\theta})$ and $B(\boldsymbol{\theta})$ correspond to the expected signal and background yields. The $\boldsymbol{\theta}$ represents the full set of nuisance parameters with pdf $\rho(\boldsymbol{\theta})$, which are designed to incorporate systematic uncertainties. 
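The mixture pdf and extended likelihood above can be sketched numerically. The following is a minimal illustration only: the one-dimensional Gaussian signal shape, exponential background shape, and the yields are stand-ins for the MEM-derived pdfs, and nuisance parameters are omitted.

```python
import math

def mixture_pdf(x, lam, s0=5.0, b=100.0):
    """Per-event mixture pdf rho(x|lambda) built from toy 1-D shapes.

    Stand-ins for the MEM pdfs: a Gaussian signal shape rho_s and an
    exponential background shape rho_b, both (approximately) normalized
    on x >= 0.  The signal yield is modeled as s0 * lam**4, since the
    coupling strength enters the amplitude twice.
    """
    rho_s = math.exp(-0.5 * ((x - 3.0) / 0.5) ** 2) / (0.5 * math.sqrt(2.0 * math.pi))
    rho_b = math.exp(-x)
    s = s0 * lam ** 4
    return (s * rho_s + b * rho_b) / (s + b)

def neg2_log_l(data, lam, s0=5.0, b=100.0):
    """-2 ln L for the extended unbinned likelihood: the Poisson
    normalization term (dropping the constant ln N! piece) plus the
    per-event shape terms."""
    mu = s0 * lam ** 4 + b
    val = -2.0 * (len(data) * math.log(mu) - mu)
    val += -2.0 * sum(math.log(mixture_pdf(x, lam, s0, b)) for x in data)
    return val
```

In a real scan, `neg2_log_l` would be minimized over `lam` and profiled over the nuisance parameters.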
To set limits on the parameters $\lambda$, we compare the compatibility of the data with the $\lambda$-fixed and $\lambda$-floated hypotheses and construct a test statistic based on the profile likelihood ratio: \begin{eqnarray} t_{\lambda} = -2\ln\dfrac{ {\cal L}(\text{data}|\lambda,\boldsymbol{\hat{\theta}}_{\lambda}) }{ {\cal L}(\text{data}|\hat{\lambda},\boldsymbol{\hat{\theta}}) }. \end{eqnarray} According to Wilks' theorem, this test statistic follows a $\chi^2$ distribution with the number of degrees of freedom equal to the dimension of $\lambda$ in the large-sample limit~\cite{wilks1938}. One can, therefore, set limits on $\lambda$ through a parameter space scan and a cut on the $-2\ln\Delta{\cal L}$ values. Neglecting the pdf of the nuisance parameters, it follows that \begin{eqnarray} t_{\lambda} = -2\ln\dfrac{\text{Poisson}(N|S(\lambda)+B)}{\text{Poisson}(N|S(\hat{\lambda})+B)} -2 \sum_i \ln \dfrac{\rho(\mathbf{x}^i|\lambda)}{\rho(\mathbf{x}^i|\hat{\lambda})}. \end{eqnarray} For setting limits on $\lambda$, we assume that there is a single dataset in agreement with $\lambda=0$. In the large sample limit, we have: \begin{eqnarray} t_{\lambda} &\xrightarrow{N\to\infty}& -2\ln\dfrac{\text{Poisson}(N|S(\lambda)+B)}{\text{Poisson}(N|B)} +2 N \int \d \mathbf{x}\, \rho(\mathbf{x}|\lambda=0) \ln \dfrac{\rho(\mathbf{x}|\lambda=0)}{\rho(\mathbf{x}|\lambda)} \\ \nonumber &=& -2\ln\dfrac{\text{Poisson}(N|S(\lambda)+B)}{\text{Poisson}(N|B)} + 2 N \cdot D(\rho(\mathbf{x}|\lambda=0) || \rho(\mathbf{x}|\lambda)), \end{eqnarray} where the first term is the test statistic of a simple counting experiment and the second term is proportional to $N$ times the KL-divergence~\cite{kullback1951}. As the KL-divergence measures the difference between the pdfs $\rho(\mathbf{x}|\lambda)$ and $\rho(\mathbf{x}|\lambda=0)$, it quantifies the power of the MEM. For simplicity, we will refer to the first term as the normalization term and the second as the KL-divergence term. 
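The scan-and-cut procedure can be sketched as a bisection on the normalization term alone, for a background-only Asimov dataset, using the 95\% threshold $t_\lambda = 3.84$ of a one-dof $\chi^2$ from Wilks' theorem. The signal-yield model $S(\lambda)=s_0\lambda^4$ with $s_0=1.87$ is an illustrative assumption; the background count matches the summed yields of the background-estimation table ($2028+1370+757+665$).

```python
import math

CHI2_95_1DOF = 3.84  # 95% quantile of the chi-square distribution with 1 dof

def t_norm(lam, s0=1.87, b=4820.0):
    """Normalization (counting) term of t_lambda for a background-only
    Asimov dataset with N = b.  The signal yield s0 * lam**4 is an
    illustrative model (both couplings scale with lambda, entering the
    rate to the 4th power); b is the total expected background."""
    s = s0 * lam ** 4
    return -2.0 * (b * math.log(1.0 + s / b) - s)

def upper_limit(t_of_lam, lo=0.0, hi=10.0, tol=1e-4):
    """Bisect for the smallest lambda with t(lambda) >= CHI2_95_1DOF,
    assuming t is monotonically increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if t_of_lam(mid) >= CHI2_95_1DOF:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Adding the KL-divergence term to `t_norm` before bisecting would tighten the limit, mirroring the combination discussed below.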
In our study, the likelihood function is prepared by {\sc BASES} numerical integration with HELAS subroutines for the helicity amplitudes. The evaluation of the KL-divergence term is performed using the plain integration routine provided by the {\sc GNU Scientific Library}. We validate our program by checking the normalizations of all the constructed pdfs and by comparing the angular coefficients and cross sections of all involved processes with {\sc MG5}. More details are given in Appendix~\ref{app:valid}. \subsection{Background modeling and event selections} To make our limits more realistic, we consider a few selections -- marked as BL selections -- as listed in Table~\ref{tab:selections} to capture the major detector acceptance effects for the processes involved. The values of these selections are set referring to recent 13 TeV LHC measurements~\cite{Sirunyan:2017onm,Aaboud:2017bja}. There are several additional selections considered in experiments to improve the signal sensitivity, e.g., jet counting, $3^{rd}$-lepton veto, top quark veto, and $\Delta\phi_{ll,\ptvecmiss}$, $|\etmiss-\ptll|/\ptll$ for momentum balance~\cite{Sirunyan:2017onm}. These selections reject most of the background from misidentification but lead to different acceptance efficiencies for different processes. Without detector simulation, we determine the event rate according to the CMS results (Table 3 of Ref.~\cite{Sirunyan:2017onm}), with an ancillary $A\cdot \epsilon$ incorporating the additional selections in the experiment and a scale factor normalizing to 150~$\fbinv$ of data. The signal dark matter processes are assumed to have the same ancillary $A\cdot \epsilon$ as the SM ZZ$\to 2l 2\nu$ process. 
\begin{table}[htb] \centering \begin{tabular}{c|c} \hline \hline Variable & Requirements \\ \hline $p^l_{\mathrm{T}}$ & $>20$~GeV \\ \sz & \text{NWA} \\ $E^{\text{miss}}_{\mathrm{T}}$ & $>80$~GeV \\ $|\eta_{l}|$ & $<2.4$ \\ $\Delta R_{ll}$ & $>0.4$ \\ $|\yz|$ & $<2.5$ \\ \hline \hline \end{tabular} \caption{ Selections considered in our computations (BL-selections), where $l=e,\mu$. Additional selection requirements are considered in experiments to improve the signal sensitivity. Their effects are included through an ancillary $A\cdot \epsilon$. } \label{tab:selections} \end{table} Our background pdf is constructed from the components summarized in Table~\ref{tab:bkgevt}. Apart from the non-resonant-$ll$ background, which is constructed using only the phase space, the other components are built using matrix elements. The WZ$\to 3l\nu$ matrix element assumes W$\to e \nu$, where the electron is not identified by the detector. The Z/$\gamma^*\to l^+l^-$ background is estimated with the matrix element of Z$\to l^+l^-$ plus one-jet production; the phase space of this process reduces to three final-state particles. \begin{table}[htb] \centering \begin{tabular}{c|c|c|c} \hline \hline Process & Cross section with BL-selections (fb) & Ancillary $A\cdot \epsilon$ & Events \\ \hline ZZ$\to 2l 2\nu$ & 27.7 & 0.488 & 2028 \\ Non-resonant-$ll$ & 1.57$\times 10^{3}$ & 5.80$\times 10^{-3}$ & 1370 \\ WZ($\to e\nu 2l$) & 17.05 & 0.296 & 757 \\ Z/$\gamma^*\to l^+l^-$ & 3.61$\times 10^{4}$ & 1.23$\times 10^{-4}$ & 665 \\ \hline \hline \end{tabular} \caption{ Background estimation with cross sections calculated in a phase space with BL-selections and an ancillary $A\cdot \epsilon$ to obtain the same event rate as in Table 3 of Ref.~\cite{Sirunyan:2017onm}. The numbers of events have been translated to 150~$\fbinv$ of data. } \label{tab:bkgevt} \end{table} In the presence of selections, angular coefficients can be distorted. 
Fig.~\ref{fig:bkgak} shows the angular coefficients $A_0-A_4$ for the background-only hypothesis. Irregular distributions at the boundaries are mainly caused by the selections on $|\eta_{l}|$ and $\Delta R_{ll}$. With the coupling strength at our expected limit, the presence of signal only slightly perturbs the background-only shapes. \begin{figure}[htbp] \centering \includegraphics[width=4.5cm]{figs/fig_A0_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A1_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A2_bkgak_yz.eps} \\ \includegraphics[width=4.5cm]{figs/fig_A3_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_A4_bkgak_yz.eps} \includegraphics[width=4.5cm]{figs/fig_hrate_bkgak_yz.eps} \caption{\label{fig:bkgak} Angular coefficients $A_0-A_4$ in the CS frame and $\yz-\qt$ differential cross section for the background-only hypothesis. Selections in Table~\ref{tab:selections} have been applied and cause irregular shapes at the kinematic boundaries.} \end{figure} \subsection{Limits on the coupling strength parameters of the dark sector models} In our dark sector models, it is necessary to have two couplings: one for the interaction with the SM particles and one for the coupling to the DM. For conciseness, we assume that both couplings in the benchmark model are scaled by a strength parameter $\lambda$. Under this assumption, the cross sections vary with a power of the couplings twice as large as when limits are set on a single coupling. We compare the upper limits set from the normalization term $-2\ln\text{Poisson}$ and from the KL-divergence term $2N\cdot D(\rho(\mathbf{x}|0)||\rho(\mathbf{x}|\lambda))$ in Fig.~\ref{fig:lims0} for the S0 benchmark scenarios and in Fig.~\ref{fig:lims1} for the S1 benchmark scenarios. The shape information provides significant improvements in all cases. The KL-divergence terms drive the final limits for the S0 benchmark scenarios and are close to the normalization terms for the S1 benchmark scenarios. 
\begin{figure}[htbp] \centering \includegraphics[width=4.5cm]{figs/limitS0a.eps} \includegraphics[width=4.5cm]{figs/limitS0b.eps} \includegraphics[width=4.5cm]{figs/limitS0c.eps} \caption{\label{fig:lims0} Upper limits on the coupling strength parameters of the S0 benchmark scenarios. } \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=4.5cm]{figs/limitS1a.eps} \includegraphics[width=4.5cm]{figs/limitS1b.eps} \includegraphics[width=4.5cm]{figs/limitS1c.eps} \caption{\label{fig:lims1} Upper limits on the coupling strength parameters of the S1 benchmark scenarios. } \end{figure} We provide in Table~\ref{tab:limit} the 95\% CL upper limits on the strength parameters. In our evaluation, the numerical uncertainty of the normalization terms can easily be made negligible. However, the evaluation of the KL-divergence terms can be computationally expensive. It takes roughly $700\times 6$ CPU hours, on cores running at about 2.4 GHz, to obtain 30\%-50\% uncertainties on the KL-divergence terms around the limit values. The signal cross sections at the limit values are also reported. Since a counting experiment calculates limits based only on the signal and background yields, the signal cross sections at the limits from the normalization term are almost identical. The limits from the KL-divergence terms, however, depend on the shape difference between the signal and background. As the KL-divergence is a measure of the shape difference, a lower cross section at the limit implies a larger difference in shape. These quantitative results are in agreement with the qualitative features of the angular coefficients among models provided in Section~\ref{sec:angC}. 
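The KL-divergence term can also be estimated by Monte Carlo sampling from $\rho(\mathbf{x}|\lambda=0)$ and averaging the log-ratio; the paper's evaluation uses deterministic plain integration (GSL), so the sketch below with toy Gaussian pdfs is an illustrative alternative that additionally exposes the statistical uncertainty of the estimate.

```python
import math
import random

def kl_divergence_mc(rho0, rho1, sample0, n=200000, seed=7):
    """Monte Carlo estimate of D(rho0 || rho1) = E_{x~rho0}[ln(rho0/rho1)]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = sample0(rng)
        acc += math.log(rho0(x) / rho1(x))
    return acc / n

def gauss_pdf(mu):
    """Unit-width Gaussian pdf with mean mu (toy stand-in for the MEM pdfs)."""
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# For two unit-width Gaussians with means 0 and 0.5, the analytic
# KL divergence is (0.5**2) / 2 = 0.125.
d_hat = kl_divergence_mc(gauss_pdf(0.0), gauss_pdf(0.5),
                         lambda rng: rng.gauss(0.0, 1.0))
```

The Monte Carlo error here scales as $1/\sqrt{n}$, which is one way to see why pushing the KL-divergence term to percent-level precision is computationally expensive.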
\begin{table}[htb] \centering \scalebox{1.0}{ \begin{tabular}{r|ccc|ccc} \hline \hline Benchmark ~~~~~~~~~~ & S0$_a$ & S0$_b$ & S0$_c$ & S1$_a$ & S1$_b$ & S1$_c$ \\ \hline Limit from the normalization term ($\lambda_1$) & 4.4 & 4.6 & 103 & 1.1 & 1.7 & 0.97 \\ Signal cross section at $\lambda_1$ (fb) & 1.86 & 1.87 & 1.86 & 1.87 & 1.87 & 1.87 \\ \hline Limit from the KL-divergence term ($\lambda_2$) & 3.5 & 3.6 & 81 & 1.1 & 1.7 & 0.99 \\ Signal cross section at $\lambda_2$ (fb) & 0.75 & 0.70 & 0.72 & 1.9 & 2.0 & 2.0 \\ \hline Combined limit ($\lambda_0$) & 3.5 & 3.5 & 79 & 1.0 & 1.5 & 0.89 \\ \hline \hline \end{tabular}} \caption{ Upper limits on the coupling strength parameters of the dark sector models at 95\% CL, with signal cross sections at the limit values. } \label{tab:limit} \end{table} \subsection{Example application of MEKD} Our computation considered only parton-level matrix elements at leading order (LO). We note that there are already efforts to extend the MEM to next-to-leading order (NLO)~\cite{Campbell:2012cz} and to incorporate parton shower effects~\cite{Soper:2011cr}. There is an easier approach to exploit the LO matrix elements, called the matrix element kinematic discriminator (MEKD)~\cite{Avery:2012um,Chatrchyan:2012sn,Chatrchyan:2012jja}. This method constructs a variable, the MEKD, that can be calculated for any event with the required observables. By construction, it utilizes the matrix element and can be used to distinguish the signal from the background. The advantage of this method is that the construction of the likelihood function is decoupled from the detector effects and theoretical uncertainties of the application. Based on the pdfs of the signal and the combined background, defined as in Eq.~\ref{eq:pdf}, we define the MEKD as: \begin{eqnarray} \text{MEKD} = \ln \dfrac{\rho_s(\mathbf{x},\lambda)}{\rho_b(\mathbf{x})}, \end{eqnarray} where $\mathbf{x}=(\yz,\qt,\cos\theta_{CS},\phi_{CS})$ and the invisible part has been integrated out. 
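The MEKD definition above can be sketched in a few lines; the one-dimensional exponential shapes below are toy stand-ins for the MEM-derived $\rho_s$ and $\rho_b$ (the real pdfs integrate the matrix elements over the invisible system).

```python
import math

def mekd(x, rho_s, rho_b):
    """MEKD = ln(rho_s(x) / rho_b(x)): log-ratio of the signal and
    background pdfs evaluated on the observables of one event."""
    return math.log(rho_s(x) / rho_b(x))

# Toy 1-D stand-ins: the signal spectrum is assumed harder than the
# background, so signal-like (large-x) events get large positive MEKD.
rho_s = lambda x: 0.25 * math.exp(-0.25 * x)
rho_b = lambda x: 1.00 * math.exp(-1.00 * x)
```

Cutting on, or fitting, the MEKD distribution then separates signal-like from background-like events without re-deriving the likelihood for each application.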
We then use the {\sc MG5} program to generate events for the applications. For the LO simulations, we consider the same setup as used in our program. For the NLO simulations, we consider NNPDF23\_nlo with default renormalization and factorization scales, defined as half the sum of the transverse masses of all final-state particles and partons. Negatively weighted events in the NLO simulations have been incorporated consistently. Fig.~\ref{fig:mekd} shows stacked MEKD distributions of both signal and backgrounds. In the left plot, all of the processes are generated at LO accuracy (NLO in QCD for Z($\to l^+l^-$)+jet). The signal corresponds to the S0$_a$ benchmark model with $\lambda=3.5$. We multiplied the signal yield by a factor of five for better visibility. The non-resonant-$ll$ contribution is expected to be obtained from a data-driven method in the experiment; we mimic it by using a $\mathrm{t\bar{t}}(\to 2l2\nu2 \mathrm{b-jets})$ sample. The right plot replaces the SM ZZ$\to 2l 2\nu$, WZ($\to e\nu 2l$) and Z($\to l^+l^-$)+jet samples with simulated events at NLO accuracy. In both cases, the MEKD shows very good discrimination power between the signal and background. This makes clear that NLO simulated events are applicable, with a reasonable loss of sensitivity. \begin{figure}[!h] \centering \includegraphics[width=7.5cm]{figs/mekd_LO.eps} \includegraphics[width=7.5cm]{figs/mekd_NLO.eps} \caption{\label{fig:mekd} Example MEKD distributions with {\sc MG5} generated events. The left plot is obtained with simulated events at LO accuracy. The right plot considers events of the Z($\to l^+l^-$)+jet processes at NLO accuracy. The signal corresponds to the S0$_a$ benchmark model with $\lambda=3.5$. We multiplied the signal yield by a factor of five for better visibility. } \end{figure} \section{Summary} \label{sec:summary} In this paper, we have exploited the Z boson leptonic decay information to probe dark sectors with scalar, vector, and tensor mediators. 
We obtained the angular coefficients of the SM $\mathrm{Z Z}\to 2l2\nu$ background and of benchmark scenarios of the dark sector models in the $\yz-\qt$ plane. Our results show that the angular coefficients $A_{0}-A_{4}$ behave very differently between the SM $\mathrm{Z Z}\to 2l2\nu$ process and the dark sector signal processes. The angular coefficients of the spin-0 and spin-1 dark sector models are also found to differ from scenario to scenario. Specifically, the angular coefficients are sensitive to the parity violation of the spin-1 model and the CP violation of the spin-0 model. The angular coefficients in the spin-2 model are found to be similar to those of the spin-independent scenario of the spin-1 model, but still show minor differences. To quantify the shape information that can be used in searches for dark sectors, we consider unbinned fits to the four-dimensional $\yz-\qt-\cos\theta_{CS}-\phi_{CS}$ distributions based on dynamically constructed matrix element likelihood functions and set 95\% CL upper limits on the coupling strength parameters of the spin-0 and spin-1 benchmark scenarios. To be realistic, we emulate the acceptance and efficiency effects referring to the 13 TeV LHC measurements~\cite{Sirunyan:2017onm,Aaboud:2017bja}. To keep our framework concise, we obtained all the results using the asymptotic approximation, without event generation. The evaluated KL-divergence term quantifies the shape effect in each case. The obtained results demonstrate significant improvements in the limits, especially for the S0 benchmark models. For easier use with experimental data, we provide an example application of the MEKD with simulated events. We show that the MEKD constructed with LO matrix elements is applicable to NLO events and preserves good discrimination power between the signal and background. We expect such MEKDs to be useful for exploiting the lepton angular distributions in experimental analyses.
\section{Introduction} The category of Lie algebroids has proved useful to formulate problems in applied mathematics, algebraic topology, and differential geometry. In the context of Mechanics, an ambitious program was proposed in~\cite{Weinstein} in order to develop formulations of the dynamical behavior of Lagrangian and Hamiltonian systems on Lie algebroids and discrete mechanics on Lie groupoids. In the last years, this program has been actively developed by many authors, and as a result, a powerful mathematical structure is emerging. The main feature of the Lie algebroid framework is its inclusive nature. Under the same umbrella, one can consider such disparate situations as systems with symmetry, systems evolving on semidirect products, Lagrangian and Hamiltonian systems on Lie algebras, and field theory equations (see~\cite{CoLeMaMa,LeMaMa} for recent topical reviews illustrating this). The Lie algebroid approach to Mechanics builds on the particular structure of the tangent bundle to develop a geometric treatment of Lagrangian systems parallel to Klein's formalism~\cite{Cr,Klein}. At the same time, the attention devoted to Lie algebroids from a purely geometrical viewpoint has led to a spectacular development of the field, e.g., see~\cite{BoKoSt,CaNuSaI,Mackenzie,Sau} and references therein. The merging of both perspectives has already provided mutual benefit, and will undoubtedly lead to important developments in the future. The other main theme of this paper is nonholonomic Lagrangian systems, i.e., systems subject to constraints involving the velocities. This topic is a classic subject in Mathematics and Mechanics, dating back to the early times of Lagrange; a comprehensive list of classical references can be found in~\cite{NF}. At the beginning of the nineties, the work~\cite{Ko} sparked a renewed interest in the geometric study of nonholonomic mechanical systems, with a special emphasis on symmetry aspects. 
In the last years, several authors have extended the ideas and techniques of the geometrical treatment of unconstrained systems to the study of nonholonomic mechanical systems, see the recent monographs~\cite{Bl,cortes}. These include symplectic~\cite{CaRa,LeMa,LeMa2}, Hamiltonian~\cite{VaMa}, and Lagrangian approaches~\cite{CuKeSnBa,KoMa1}, the study of almost Poisson brackets~\cite{CaLeMa,IbLeMaMa,KoMa2}, and symmetry and reduction of the dynamics~\cite{BaSn,BlKrMaMu,CaCoLeMa,CaLeMaMa,CaLeMaMa2,CoLe,marle}. In this paper we develop a comprehensive treatment of nonholonomic systems on Lie algebroids. This class of systems was introduced in~\cite{CoMa} when studying mechanical control systems (see also~\cite{MeLa} for a recent approach to mechanical systems on Lie algebroids subject to linear constraints). Here, we build on the geometry of Lie algebroids to identify suitable regularity conditions guaranteeing that the nonholonomic system admits a unique solution. We develop projection procedures to obtain the constrained dynamics as a modification of the unconstrained one, and define an almost-Poisson nonholonomic bracket. We show that many of the properties that standard nonholonomic systems enjoy have their counterpart in the proposed setup. As important examples, we highlight that the analysis here provides a natural interpretation for the use of pseudo-coordinate techniques and lends itself to the treatment of constrained systems with symmetry, following the ideas developed in~\cite{CoMa,MaROMP}. We carefully examine the reduction procedure for this class of systems, paying special attention to the evolution of the momentum map. From a methodological point of view, the approach taken in the paper has enormous advantages. This fact must mainly be attributed to the inclusive nature of Lie algebroids. Usually, the results on nonholonomic systems available in the literature are restricted to a particular class of nonholonomic systems, or to a specific context. 
However, as illustrated in Table~\ref{tab:examples}, many different nonholonomic systems fit under the Lie algebroid framework, and this has the important consequence of making the results proved here widely applicable. With the aim of illustrating this breadth, we consider various examples throughout the paper, including the Suslov problem, the Chaplygin sleigh, the Veselova system, Chaplygin Gyro-type systems, the two-wheeled planar mobile robot, and a ball rolling on a rotating table. We envision that future developments within the proposed framework will have a broad impact in nonholonomic mechanics. In the course of the preparation of this manuscript, the recent research efforts~\cite{CaNuSaII,Me} were brought to our attention. These references, similar in spirit to the present work, deal with nonholonomic Lagrangian systems and focus on the reduction of Lie algebroid structures under symmetry. {\small \begin{table}[tbhp] \centering \begin{tabular}{|l|l|l|l|} \hline% \parbox[t]{.25\linewidth}{Nonholonomic\\ Lagrangian system} & Lie algebroid & Dynamics & Example\\ \hline \hline Standard & Tangent bundle & Lagrange-d'Alembert & Rolling disk~\cite{NF}\\ \hline% On a Lie algebra & Lie algebra & Euler-Poincar\'e-Suslov & \parbox[t]{.15\linewidth}{Chaplygin sleigh~\cite{Ch}}\\ \hline% \parbox[t]{.255\linewidth}{ Nonholonomic LR\\ systems} &\parbox[t]{.15\linewidth}{Right action Lie algebroid} & \parbox[t]{.25\linewidth}{Reduced Poincar\'e-Chetaev} & \parbox[t]{.195\linewidth}{Veselova problem \cite{VeVe}}\\ \hline% \parbox[t]{.254\linewidth}{ Nonholonomic systems with semidirect product symmetry} &\parbox[t]{.15\linewidth}{Left action Lie algebroid} & \parbox[t]{.253\linewidth}{Nonholonomic Euler-Poincar\'e with an advected parameter} & \parbox[t]{.195\linewidth}{Chaplygin's gyro \cite{Ma,MT:04}}\\ \hline% Symmetry-invariant & Atiyah algebroid & \parbox[t]{.24\linewidth}{Nonholonomic Lagrange-Poincar\'e} & \parbox[t]{.15\linewidth}{Snakeboard~\cite{BlKrMaMu}}\\ 
\hline \end{tabular} \medskip \caption{The Lie algebroid framework embraces different classes of nonholonomic systems.} \label{tab:examples} \end{table} } The paper is organized as follows. In Section~\ref{preliminaries} we collect some preliminary notions and geometric objects on Lie algebroids, including differential calculus, morphisms and prolongations. We also describe classical Lagrangian systems within the formalism of Lie algebroids. In Section~\ref{linear}, we introduce the class of nonholonomic Lagrangian systems subject to linear constraints, given by a regular Lagrangian $L : E \longrightarrow \mathbb{R}$ on the Lie algebroid $\tau : E \longrightarrow M$ and a constraint subbundle $D$ of $E$. We show that the known results in Mechanics for these systems also hold in the context of Lie algebroids. In particular, drawing analogies with d'Alembert's principle, we derive the Lagrange-d'Alembert equations of motion, prove the conservation of energy and state a version of Noether's theorem. We also derive local expressions for the dynamics of nonholonomic Lagrangian systems, which are further simplified by the choice of a convenient basis of $D$. As an illustration, we consider the class of nonholonomic mechanical systems. For such systems, the Lagrangian $L$ is the quadratic form induced by a bundle metric on $E$ minus a potential function on $M$. In Section~\ref{sec:regularity}, we perform the analysis of the existence and uniqueness of solutions of constrained systems on general Lie algebroids, and extend the results in~\cite{BaSn,CaLeMaMa,CaLeMa,CoLe,LeMa} for constrained systems evolving on tangent bundles. We obtain several characterizations for the regularity of a nonholonomic system, and prove that a nonholonomic system of mechanical type is always regular. The constrained dynamics can be obtained by projecting the unconstrained dynamics in two different ways. 
Under the first projection, we develop a distributional approach analogous to that in~\cite{BaSn}, see also~\cite{MeLa}. Using the second projection, we introduce the nonholonomic bracket. The evolution of any observable can be measured by computing its bracket with the energy of the system. Section~\ref{sec:reduction} is devoted to studying the reduction of the dynamics under symmetry. Our approach follows the ideas developed in~\cite{CeMaRa}, where a minimal subcategory of the category of Lie algebroids that is stable under Lagrangian reduction was defined. We study the behavior of the different geometric objects introduced under morphisms of Lie algebroids, and show that fiberwise surjective morphisms induce consistent reductions of the dynamics. This result covers, but does not reduce to, the usual case of reduction of the dynamics by a symmetry group. In accordance with the philosophy of the paper, we first study the unconstrained dynamics, and later obtain the results for the constrained dynamics using projections. A (Poisson) reduction by stages procedure can also be developed within this formalism. It should be noticed that, in the presence of a Lie group of symmetries $G$, the reduction is performed in two steps: first we reduce by a normal subgroup $N$ of $G$, and then by the residual group. In Section~\ref{momentum-equation}, we prove a general version of the momentum equation introduced in~\cite{BlKrMaMu}. In Section~\ref{examples}, we show some interesting examples and in Section~\ref{nonlinear}, we extend some of the results previously obtained for linear constraints to the case of nonlinear constraints. The paper ends with our conclusions and a description of future research directions. \section{Preliminaries} \label{preliminaries} In this section we recall some well-known facts concerning the geometry of Lie algebroids. 
We refer the reader to~\cite{CaWe,HiMa,Mackenzie} for details about Lie groupoids, Lie algebroids and their role in differential geometry. \subsection{Lie algebroids} Let $M$ be an $n$-dimensional manifold and let $\map{\tau}{E}{M}$ be a vector bundle. A vector bundle map $\map{\rho}{E}{TM}$ over the identity is called an \emph{anchor map}. The vector bundle $E$ together with an anchor map $\rho$ is said to be an \emph{anchored vector bundle} (see~\cite{PoPo}). A structure of \emph{Lie algebroid} on $E$ is given by a Lie algebra structure on the $\cinfty{M}$-module of sections of the bundle, $(\Sec{E},[\cdot\,,\cdot])$, together with an anchor map, satisfying the compatibility condition \[ [\sigma,f\eta] = f[\sigma,\eta] + \bigl( \rho(\sigma)f \bigr) \eta . \] Here $f$ is a smooth function on $M$, $\sigma$ and $\eta$ are sections of $E$, and $\rho(\sigma)$ denotes the vector field on $M$ given by $\rho(\sigma)(m)=\rho(\sigma(m))$. From the compatibility condition and the Jacobi identity, it follows that the map $\sigma\mapsto\rho(\sigma)$ is a Lie algebra homomorphism from the set of sections of $E$, $\Sec{E}$, to the set of vector fields on $M$, $\mathfrak{X}(M)$. As far as Mechanics is concerned, it is convenient to think of a Lie algebroid $\map{\rho}{E}{TM}$, and more generally of an anchored vector bundle, as a substitute for the tangent bundle of $M$. In this way, one regards an element $a$ of $E$ as a generalized velocity, and the actual velocity $v$ is obtained when applying the anchor to $a$, i.e., $v=\rho(a)$. A curve $\map{a}{[t_0,t_1]}{E}$ is said to be \emph{admissible} if $\dot{m}(t)=\rho(a(t))$, where $m(t)=\tau(a(t))$ is the base curve. We will denote by $\Adm{E}$ the space of admissible curves on $E$. Given local coordinates $(x^i)$ in the base manifold $M$ and a local basis $\{e_\alpha\}$ of sections of $E$, we have local coordinates $(x^i,y^\alpha)$ in $E$. 
If $a\in E$ is an element in the fiber over $m\in M$, then we can write $a=y^\alpha e_\alpha(m)$ and thus the coordinates of $a$ are $(m^i,y^\alpha)$, where $m^i$ are the coordinates of the point $m$. The anchor map is locally determined by the local functions $\rho^i_\alpha$ on $M$ defined by $\rho(e_\alpha)=\rho^i_\alpha(\partial/\partial x^i)$. In addition, for a Lie algebroid, the Lie bracket is determined by the functions $C^\gamma_{\alpha\beta}$ defined by $[e_\alpha,e_\beta]=C^\gamma_{\alpha\beta}e_\gamma$. The functions $\rho^i_\alpha$ and $C^\gamma_{\alpha\beta}$ are called \emph{the structure functions} of the Lie algebroid in this coordinate system. They satisfy the following relations \begin{align*} \rho^j_\alpha\pd{\rho^i_\beta}{x^j} - \rho^j_\beta\pd{\rho^i_\alpha}{x^j} = \rho^i_\gamma C^\gamma_{\alpha\beta} \quad\text{and}\quad \sum_{\mathrm{cyclic}(\alpha,\beta,\gamma)} \left[\rho^i_\alpha\pd{ C^\nu_{\beta\gamma}}{x^i} + C^\mu_{\beta\gamma} C^\nu_{\alpha\mu}\right]=0, \end{align*} which are called \emph{the structure equations} of the Lie algebroid. \subsection{Exterior differential} The anchor $\rho$ allows one to define the differential of a function on the base manifold with respect to an element $a\in E$. It is given by \[ df(a)=\rho(a)f. \] It follows that the differential of $f$ at the point $m\in M$ is an element of $E_m^*$. Moreover, a structure of Lie algebroid on $E$ allows one to extend the differential to sections of the bundle $\ext[p]{E}$, which will be called $p$-sections or just $p$-forms. If $\omega\in\Sec{\ext[p]{E}}$, then $d\omega\in\Sec{\ext[p+1]{E}}$ is defined by \begin{align*} d\omega(\sigma_0,\sigma_1,\ldots,\sigma_p) &= \sum_i(-1)^i\rho(\sigma_i)( \omega(\sigma_0,\ldots,\widehat{\sigma_i},\ldots,\sigma_p))\\ &\qquad{}+ \sum_{i<j}(-1)^{i+j} \omega([\sigma_i,\sigma_j],\sigma_0,\ldots, \widehat{\sigma_i},\ldots,\widehat{\sigma_j},\ldots,\sigma_p). \end{align*} It follows that $d$ is a cohomology operator, that is, $d^2=0$. 
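The structure equations lend themselves to direct verification. As an illustrative sketch (ours, not part of the original development): for a Lie algebra regarded as a Lie algebroid over a point, $\rho^i_\alpha=0$, so only the second structure equation survives, and it reduces to the Jacobi identity for the constants $C^\gamma_{\alpha\beta}$ (equivalently, to the property $d^2=0$). A numerical check for $\mathfrak{so}(3)$, whose structure constants are the Levi-Civita symbols:

```python
import itertools
import numpy as np

# Structure constants of so(3): C[gamma, alpha, beta] = epsilon_{alpha beta gamma}.
C = np.zeros((3, 3, 3))
for a, b, g in itertools.permutations(range(3)):
    # sign of the permutation (a, b, g) of (0, 1, 2)
    C[g, a, b] = np.linalg.det(np.eye(3)[[a, b, g]])

# Second structure equation with rho = 0: the sum over cyclic permutations of
# (alpha, beta, gamma) of C^mu_{beta gamma} C^nu_{alpha mu} must vanish for every nu.
def jacobi_residual(C):
    n = C.shape[0]
    res = np.zeros((n, n, n, n))
    for al, be, ga, nu in itertools.product(range(n), repeat=4):
        s = 0.0
        for (a, b, g) in [(al, be, ga), (be, ga, al), (ga, al, be)]:
            s += sum(C[mu, b, g] * C[nu, a, mu] for mu in range(n))
        res[al, be, ga, nu] = s
    return res

print(np.abs(jacobi_residual(C)).max())  # 0.0: the Jacobi identity holds
```

Perturbing any single entry of `C` makes the residual nonzero, mirroring the fact that $d^2=0$ encodes precisely the Jacobi identity of the bracket.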
Locally the exterior differential is determined by \[ dx^i=\rho^i_\alpha e^\alpha \qquad\text{and}\qquad de^\gamma=-\frac{1}{2}C^\gamma_{\alpha\beta}e^\alpha\wedge e^\beta. \] Throughout this paper, the symbol $d$ will refer to the exterior differential on the Lie algebroid $E$ and not to the ordinary exterior differential on a manifold. Of course, if $E=TM$, then both exterior differentials coincide. The usual Cartan calculus extends to the case of Lie algebroids (see~\cite{Mackenzie, Nijenhuis}). For every section $\sigma$ of $E$ we have a derivation $i_\sigma$ (contraction) of degree $-1$ and a derivation $d_\sigma=i_\sigma\circ d+d\circ i_\sigma$ (Lie derivative) of degree $0$. Since $d^2=0$, we have that $d_\sigma\circ d=d\circ d_\sigma$. \subsection{Morphisms} Let $\map{\tau}{E}{M}$ and $\map{\tau'}{E'}{M'}$ be two anchored vector bundles, with anchor maps $\map{\rho}{E}{TM}$ and $\map{\rho'}{E'}{TM'}$. A vector bundle map $\map{\Phi}{E}{E'}$ over a map $\map{\varphi}{M}{M'}$ is said to be \emph{admissible} if it maps admissible curves onto admissible curves, or equivalently $T\varphi\circ\rho = \rho'\circ\Phi$. If $E$ and $E'$ are Lie algebroids, then we say that $\Phi$ is a \emph{morphism} if $\Phi\pb d\theta=d\Phi\pb\theta$ for every $\theta\in\Sec{\ext{E'}}$. It is easy to see that morphisms are admissible maps. In the above expression, the pullback $\Phi\pb\beta$ of a $p$-form $\beta$ is defined by $$ (\Phi\pb\beta)_m(a_1,a_2,\ldots,a_p)= \beta_{\varphi(m)}\bigl(\Phi(a_1),\Phi(a_2),\ldots,\Phi(a_p)\bigr), $$ for every $a_1,\ldots,a_p\in E_m$. For a function $f\in\cinfty{M'}$ (i.e., for $p=0$), we just set $\Phi\pb f=f\circ\varphi$. Let $(x^i)$ and $(x'{}^i)$ be local coordinate systems on $M$ and $M'$, respectively. Let $\{e_\alpha\}$ and $\{e'_\alpha\}$ be local bases of sections of $E$ and $E'$, respectively, and $\{e^\alpha\}$ and $\{e'{}^\alpha\}$ the corresponding dual bases. 
The bundle map $\Phi$ is determined by the relations $\Phi\pb x'{}^i = \phi^i(x)$ and $\Phi\pb e'{}^\alpha = \phi^\alpha_\beta e^\beta$ for certain local functions $\phi^i$ and $\phi^\alpha_\beta$ on $M$. Then, $\Phi$ is admissible if and only if \[ \rho^j_\alpha\pd{\phi^i}{x^j}=\rho'{}^i_\beta\phi^\beta_\alpha. \] The map $\Phi$ is a morphism of Lie algebroids if and only if, in addition to the admissibility condition above, one has \[ \phi^\beta_\gamma C^\gamma_{\alpha\delta} = \left(\rho^i_\alpha\pd{\phi^\beta_\delta}{x^i} - \rho^i_\delta\pd{\phi^\beta_\alpha}{x^i}\right) + C'{}^\beta_{\theta\sigma}\phi^\theta_\alpha\phi^\sigma_\delta. \] In these expressions, $\rho^i_\alpha$, $C^\alpha_{\beta\gamma}$ are the local structure functions on $E$ and $\rho'{}^i_\alpha$, $C'{}^\alpha_{\beta\gamma}$ are the local structure functions on $E'$. \subsection{Prolongation of a fibered manifold with respect to a Lie algebroid} Let $\map{\pi}{P}{M}$ be a fibered manifold with base manifold $M$. Thinking of $E$ as a substitute for the tangent bundle of $M$, the tangent bundle of $P$ is not the appropriate space to describe dynamical systems on $P$. This is clear if we note that the projection to $M$ of a vector tangent to $P$ is a vector tangent to $M$, and what one would like instead is an element of $E$, the `new' tangent bundle of $M$. A space which takes into account this restriction is the \emph{$E$-tangent bundle} of $P$, also called the \emph{prolongation} of $P$ with respect to $E$, which we denote by $\prol[E]{P}$ (see~\cite{LeMaMa,LMLA,MaMeSa,PoPo}). It is defined as the vector bundle $\map{\tau^E_P}{\prol[E]{P}}{P}$ whose fiber at a point $p\in P_m$ is the vector space \[ \prol[E]{P}[p] =\set{(b,v)\in E_m\times T_pP}{\rho(b)=T_p\pi(v)}. \] We will frequently use the redundant notation $(p,b,v)$ to denote the element $(b,v)\in\prol[E]{P}[p]$. In this way, the map $\tau^E_P$ is just the projection onto the first factor. 
The anchor of $\prol[E]{P}$ is the projection onto the third factor, that is, the map $\map{\rho^1}{\prol[E]{P}}{TP}$ given by $\rho^1(p,b,v)=v$. The projection onto the second factor will be denoted by $\map{\prol{\pi}}{\prol[E]{P}}{E}$, and it is a vector bundle map over $\pi$. Explicitly $\prol{\pi}(p,b,v)=b$. An element $z\in\prol[E]{P}$ is said to be vertical if it projects to zero, that is, $\prol{\pi}(z)=0$. Therefore it is of the form $(p,0,v)$, with $v$ a vertical vector tangent to $P$ at $p$. Given local coordinates $(x^i,u^A)$ on $P$ and a local basis $\{e_\alpha\}$ of sections of $E$, we can define a local basis $\{\mathcal{X}_\alpha,\mathcal{V}_A\}$ of sections of $\prol[E]{P}$ by \[ \mathcal{X}_\alpha(p) =\Bigl(p,e_\alpha(\pi(p)),\rho^i_\alpha\pd{}{x^i}\at{p}\Bigr) \qquad\text{and}\qquad \mathcal{V}_A(p) = \Bigl(p,0,\pd{}{u^A}\at{p}\Bigr). \] If $z=(p,b,v)$ is an element of $\prol[E]{P}$, with $b=z^\alpha e_\alpha$, then $v$ is of the form $v=\rho^i_\alpha z^\alpha\pd{}{x^i}+v^A\pd{}{u^A}$, and we can write \[ z=z^\alpha\mathcal{X}_\alpha(p)+v^A\mathcal{V}_A(p). \] Vertical elements are linear combinations of $\{\mathcal{V}_A\}$. The anchor map $\rho^1$ applied to a section $Z$ of $\prol[E]{P}$ with local expression $Z = Z^\alpha\mathcal{X}_\alpha+V^A\mathcal{V}_A$ is the vector field on $P$ whose coordinate expression is \[ \rho^1(Z) = \rho^i_\alpha Z^\alpha \pd{}{x^i} + V^A\pd{}{u^A}. \] If $E$ carries a Lie algebroid structure, then so does $\prol[E]{P}$. The associated Lie bracket can be easily defined in terms of projectable sections, so that $\prol{\pi}$ is a morphism of Lie algebroids. A section $Z$ of $\prol[E]{P}$ is said to be projectable if there exists a section $\sigma$ of $E$ such that $\prol{\pi}\circ Z=\sigma\circ\pi$. Equivalently, a section $Z$ is projectable if and only if it is of the form $Z(p)=(p,\sigma(\pi (p)),X(p))$, for some section $\sigma$ of $E$ and some vector field $X$ on $P$ (which projects to $\rho(\sigma)$). 
The Lie bracket of two projectable sections $Z_1$ and $Z_2$ is then given by \[ [Z_1,Z_2](p)=(p,[\sigma_1,\sigma_2](m),[X_1,X_2](p)), \qquad p \in P,\quad m=\pi(p). \] It is easy to see that $[Z_1,Z_2](p)$ is an element of $\prol[E]{P}[p]$ for every $p\in P$. Since any section of $\prol[E]{P}$ can be locally written as a linear combination of projectable sections, the definition of the Lie bracket for arbitrary sections of $\prol[E]{P}$ follows. The Lie brackets of the elements of the basis are \[ [\mathcal{X}_\alpha,\mathcal{X}_\beta]= C^\gamma_{\alpha\beta}\:\mathcal{X}_\gamma, \qquad [\mathcal{X}_\alpha,\mathcal{V}_B]=0 \qquad\text{and}\qquad [\mathcal{V}_A,\mathcal{V}_B]=0, \] and the exterior differential is determined by \begin{align*} &dx^i=\rho^i_\alpha \mathcal{X}^\alpha, &&du^A=\mathcal{V}^A,\\ &d\mathcal{X}^\gamma=-\frac{1}{2}C^\gamma_{\alpha\beta}\mathcal{X}^\alpha\wedge\mathcal{X}^\beta, &&d\mathcal{V}^A=0, \end{align*} where $\{\mathcal{X}^\alpha,\mathcal{V}^A\}$ is the dual basis corresponding to $\{\mathcal{X}_\alpha,\mathcal{V}_A\}$. \subsection{Prolongation of a map} Let $\map{\Psi}{P}{P'}$ be a fibered map from the fibered manifold $\map{\pi}{P}{M}$ to the fibered manifold $\map{\pi'}{P'}{M'}$ over a map $\map{\varphi}{M}{M'}$. Let $\map{\Phi}{E}{E'}$ be an admissible map from $\map{\tau}{E}{M}$ to $\map{\tau'}{E'}{M'}$ over the same map $\varphi$. The prolongation of $\Phi$ with respect to $\Psi$ is the mapping $\map{\prol[\Phi]{\Psi}}{\prol[E]{P}}{\prol[E']{P'}}$ defined by \[ \prol[\Phi]{\Psi}(p,b,v) =(\Psi(p),\Phi(b),(T_p\Psi)(v)). \] It is clear from the definition that $\prol[\Phi]{\Psi}$ is a vector bundle map from $\map{\tau^E_P}{\prol[E]{P}}{P}$ to $\map{\tau^{E'}_{P'}}{\prol[E']{P'}}{P'}$ over $\Psi$. Moreover, the following result is proved in~\cite{CFTLAMF}. \begin{proposition} The map $\prol[\Phi]{\Psi}$ is an admissible map. 
Moreover, $\prol[\Phi]{\Psi}$ is a morphism of Lie algebroids if and only if $\Phi$ is a morphism of Lie algebroids. \end{proposition} Given local coordinate systems $(x^i)$ on $M$ and $(x'{}^i)$ on $M'$, local adapted coordinates $(x^i,u^A)$ on $P$ and $(x'{}^i,u'{}^A)$ on $P'$ and local bases of sections $\{e_\alpha\}$ of $E$ and $\{e'_\alpha\}$ of $E'$, the maps $\Phi$ and $\Psi$ are determined by $\Phi\pb e'{}^\alpha=\Phi^\alpha_\beta e^\beta$ and $\Psi(x,u)=(\phi^i(x),\psi^A(x,u))$. Then the action of $\prol[\Phi]{\Psi}$ is given by \begin{align*} (\prol[\Phi]{\Psi})\pb\mathcal{X}'{}^\alpha &= \Phi_\beta^\alpha\mathcal{X}^\beta,\\ (\prol[\Phi]{\Psi})\pb\mathcal{V}'{}^A &=\rho^i_\alpha\pd{\psi^A}{x^i}\mathcal{X}^\alpha+\pd{\psi^A}{u^B}\mathcal{V}^B. \end{align*} We finally mention that the composition of prolongation maps is the prolongation of the composition. Indeed, let $\Psi'$ be another bundle map from $\map{\pi'}{P'}{M'}$ to another bundle $\map{\pi''}{P''}{M''}$ and $\Phi'$ be another admissible map from $\map{\tau'}{E'}{M'}$ to $\map{\tau''}{E''}{M''}$, both over the same base map. Since $\Phi$ and $\Phi'$ are admissible maps, so is $\Phi'\circ\Phi$, and thus we can define the prolongation of $\Psi'\circ\Psi$ with respect to $\Phi'\circ\Phi$. We have that $\prol[\Phi'\circ\Phi]{(\Psi'\circ\Psi)} =(\prol[\Phi']{\Psi'})\circ(\prol[\Phi]{\Psi})$. In the particular case when the bundles $P$ and $P'$ are just $P=E$ and $P'=E'$, whenever we have an admissible map $\map{\Phi}{E}{E'}$ we can define the prolongation of $\Phi$ along $\Phi$ itself, by $\prol[\Phi]{\Phi}(a,b,v)=(\Phi(a),\Phi(b),T\Phi(v))$. From the result above, we have that $\prol[\Phi]{\Phi}$ is a Lie algebroid morphism if and only if $\Phi$ is a Lie algebroid morphism. 
In coordinates we obtain \begin{align*} (\prol[\Phi]{\Phi})\pb\mathcal{X}'{}^\alpha &= \Phi_\beta^\alpha\mathcal{X}^\beta,\\ (\prol[\Phi]{\Phi})\pb\mathcal{V}'{}^\alpha &=\rho^i_\beta\pd{\Phi^\alpha_\gamma}{x^i}y^\gamma\mathcal{X}^\beta + \Phi^\alpha_\beta\mathcal{V}^\beta, \end{align*} where $(x^i,y^\gamma)$ are the corresponding fibred coordinates on $E$. From this expression it is clear that $\prol[\Phi]{\Phi}$ is fiberwise surjective if and only if $\Phi$ is fiberwise surjective. \subsection{Lagrangian Mechanics} In~\cite{LMLA} (see also \cite{PoPo}) a geometric formalism for Lagrangian Mechanics on Lie algebroids was defined. Such a formalism is similar to Klein's formalism \cite{Klein} in standard Lagrangian mechanics and it is developed in the prolongation $\TEE$ of a Lie algebroid $E$ over itself. The canonical geometrical structures defined on $\TEE$ are the following: \begin{itemize} \item The \emph{vertical lift} $\map{\xi\sup{V}}{\tau^*E}{\TEE}$ given by $\xi\sup{V}(a,b)=(a,0,b\sup{V}_a)$, where $b\sup{V}_a$ is the vector tangent to the curve $a+tb$ at $t=0$, \item The \emph{vertical endomorphism} $\map{S}{\TEE}{\TEE}$ defined as follows: \[ S(a,b,v)=\xi\sup{V}(a,b)=(a,0,b_a\sup{V}), \] \item The \emph{Liouville section} which is the vertical section corresponding to the Liouville dilation vector field: \[ \Delta(a)=\xi\sup{V}(a,a)=(a,0,a_a\sup{V}). \] \end{itemize} A section $\Gamma$ of $\TEE$ is said to be a \textsc{sode}\ section if $S\Gamma = \Delta$. Given a Lagrangian function $L\in\cinfty{E}$ we define the \emph{Cartan 1-form} $\theta_L$ and the \emph{Cartan 2-form} $\omega_L$ as the forms on $\prol[E]{E}$ given by \begin{equation} \label{Cartan-forms} \theta_L=S^*(dL)\qquad\text{and}\qquad \omega_L=-d\theta_L. \end{equation} The real function $E_{L}$ on $E$ defined by $E_{L} = d_{\Delta}L - L$ is the \emph{energy function} of the Lagrangian system. 
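The energy function can be made concrete with a small numerical sketch (ours, with an assumed diagonal bundle metric and illustrative values). Since $\Delta=y^\alpha\mathcal{V}_\alpha$ and $\rho^1(\mathcal{V}_\alpha)=\partial/\partial y^\alpha$, on functions on $E$ the energy is $E_L=d_\Delta L-L=y^\alpha\,\partial L/\partial y^\alpha-L$; for a Lagrangian of mechanical type this is kinetic plus potential energy:

```python
# Mechanical Lagrangian on a rank-2 bundle with a diagonal metric; all numbers
# below are illustrative choices, not taken from the text.
g = (1.0, 4.0)          # metric coefficients G_{11}, G_{22} at a fixed base point
V = 2.5                 # potential evaluated at that base point

def L(y):
    return 0.5 * (g[0] * y[0]**2 + g[1] * y[1]**2) - V

def dL_dy(y, a, h=1e-6):
    # numerical fiber derivative dL/dy^a (central difference)
    ya, yb = list(y), list(y)
    ya[a] += h
    yb[a] -= h
    return (L(ya) - L(yb)) / (2 * h)

def energy(y):
    # E_L = d_Delta L - L = y^a dL/dy^a - L
    return sum(y[a] * dL_dy(y, a) for a in range(2)) - L(y)

y = (0.7, -1.2)
kinetic_plus_potential = 0.5 * (g[0] * y[0]**2 + g[1] * y[1]**2) + V
print(abs(energy(y) - kinetic_plus_potential) < 1e-8)  # True
```

The Legendre-type transform $y^\alpha\,\partial L/\partial y^\alpha$ doubles the kinetic term and leaves the potential untouched, which is why $E_L$ comes out as kinetic plus potential energy rather than the Lagrangian itself.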
By a solution of the Lagrangian system (a solution of the \emph{Euler-Lagrange equations}) we mean a \textsc{sode}\ section $\Gamma$ of $\TEE$ such that \begin{equation} \label{Euler-Lagrange} i_\Gamma\omega_L-dE_L=0. \end{equation} The local expressions for the vertical endomorphism, the Liouville section, the Cartan $2$-form and the Lagrangian energy are \begin{equation}\label{endverlo} S\mathcal{X}_{\alpha} = \mathcal{V}_{\alpha}, \makebox[.3cm]{} S\mathcal{V}_{\alpha} = 0, \makebox[.3cm]{} \mbox{ for all } \alpha, \end{equation} \begin{equation} \label{Lioulo} \Delta = y^{\alpha}\mathcal{V}_{\alpha}, \end{equation} \begin{equation} \label{omegaL} \omega_L =\pd{^2L}{y^\alpha\partial y^\beta}\mathcal{X}^\alpha\wedge \mathcal{V}^\beta +\frac{1}{2}\left( \pd{^2L}{x^i\partial y^\alpha}\rho^i_\beta-\pd{^2L}{x^i\partial y^\beta}\rho^i_\alpha+\pd{L}{y^\gamma}C^\gamma_{\alpha\beta} \right)\mathcal{X}^\alpha\wedge \mathcal{X}^\beta, \end{equation} \begin{equation} \label{EL} E_L=\pd{L}{y^\alpha}y^\alpha-L. \end{equation} Thus, a \textsc{sode}\ $\Gamma$ is a section of the form \[ \Gamma=y^\alpha\mathcal{X}_\alpha+f^\alpha\mathcal{V}_\alpha. \] The \textsc{sode}\ $\Gamma$ is a solution of the Euler-Lagrange equations if and only if the functions $f^\alpha$ satisfy the linear equations \begin{equation} \label{free-forces} \pd{^2L}{y^\beta\partial y^\alpha}f^\beta+\pd{^2L}{x^i\partial y^\alpha}\rho^i_\beta y^\beta +\pd{L}{y^\gamma}C^\gamma_{\alpha\beta}y^\beta -\rho^i_\alpha\pd{L}{x^i} =0, \mbox{ for all } \alpha. \end{equation} The \emph{Euler-Lagrange differential equations} are the differential equations for the integral curves of the vector field $\rho^1(\Gamma)$, where the section $\Gamma$ is the solution of the Euler-Lagrange equations. 
Thus, these equations may be written as $$\dot{x}^i=\rho_\alpha^iy^\alpha,\;\;\;\; \frac{d}{dt}\left(\frac{\partial L}{\partial y^\alpha}\right)-\rho_\alpha^i\frac{\partial L}{\partial x^i} + \frac{\partial L}{\partial y^\gamma}C_{\alpha\beta}^\gamma y^\beta=0.$$ In other words, if $\delta L: \Adm{E}\to E^*$ is the \emph{Euler-Lagrange operator}, which locally reads \[ \delta L=\left(\frac{d}{dt}\left(\frac{\partial L}{\partial y^\alpha}\right) + C_{\alpha\beta}^\gamma y^\beta \frac{\partial L}{\partial y^\gamma}-\rho_\alpha^i\frac{\partial L}{\partial x^i}\right)e^\alpha, \] where $\{e^\alpha\}$ is the dual basis of $\{e_\alpha\}$, then the Euler-Lagrange differential equations read \[ \delta L=0. \] The function $L$ is said to be a \emph{regular Lagrangian} if $\omega_{L}$ is regular at every point as a bilinear form. In such a case, there exists a unique section $\Gamma_{L}$ of $\TEE$ which satisfies the equation \[ i_{\Gamma_{L}}\omega_{L} - dE_{L} = 0. \] Note that from (\ref{endverlo}), (\ref{Lioulo}), (\ref{omegaL}) and (\ref{EL}), it follows that \begin{equation} \label{2.4'} i_{SX} \omega_{L} = -S^*(i_{X}\omega_{L}), \makebox[.3cm]{} i_{\Delta}\omega_{L} = -S^*(dE_{L}), \end{equation} for $X \in \Sec{{\prol[E]{E}}}$. Thus, using (\ref{2.4'}), we deduce that \[ i_{S\Gamma_{L}}\omega_{L} = i_{\Delta}\omega_{L}, \] which implies that $\Gamma_{L}$ is a \textsc{sode}\ section. Therefore, for a regular Lagrangian function $L$, we will say that the dynamical equations (\ref{Euler-Lagrange}) are just the Euler-Lagrange equations. On the other hand, the vertical distribution is isotropic with respect to $\omega_L$, see~\cite{LeMaMa}. This fact implies that the contraction of $\omega_L$ with a vertical vector is a semibasic form. 
This property allows us to define a symmetric 2-tensor $\GL$ along $\tau$ by \begin{equation} \label{GL} \GL[a](b,c)=\omega_L(\tilde{b},c_a\sup{V}), \end{equation} where $\tilde{b}$ is any element in $\TEE[a]$ which projects to $b$, i.e., $\prol{\tau}(\tilde{b})=b$, and $a \in E$. In coordinates $\GL=W_{\alpha\beta}e^\alpha\otimes e^\beta$, where the matrix $W_{\alpha\beta}$ is given by \begin{equation} \label{HessianL} W_{\alpha\beta}=\pd{^2L}{y^\alpha\partial y^\beta}. \end{equation} It is easy to see that the Lagrangian $L$ is regular if and only if the matrix $W$ is regular at every point, that is, if the tensor $\GL$ is regular at every point. By the kernel of $\GL$ at a point $a$ we mean the vector space \[ \Ker\GL[a]=\set{b\in E_{\tau(a)}}{\GL[a](b,c)=0\text{ for all }c\in E_{\tau(a)}}. \] In the case of a regular Lagrangian, the Cartan 2-section $\omega_L$ is symplectic (non-degenerate and $d$-closed) and the vertical subbundle is Lagrangian. It follows that a 1-form is semi-basic if and only if it is the contraction of $\omega_L$ with a vertical element. Finally, we mention that the \emph{complete lift} $\sigma\sup{C}$ of a section $\sigma\in\Sec{E}$ is the section of $\TEE$ characterized by the two following properties: \begin{enumerate} \item it projects to $\sigma$, i.e., $\prol{\tau}\circ\sigma\sup{C}=\sigma\circ\tau$, \item $d_{\sigma\sup{C}}\hat{\mu}=\widehat{d_\sigma\mu}$, \end{enumerate} where by $\hat{\alpha}\in\cinfty{E}$ we denote the linear function associated to a 1-section $\alpha\in\Sec{E^*}$. Note that \begin{equation}\label{SODEcompl} \Gamma \mbox{ {\sc sode} section, } \sigma \in \Sec{E} \Rightarrow S[\sigma^{c}, \Gamma ] = 0, \end{equation} \vspace{-.5cm} \begin{equation}\label{Vertcompl} S \gamma = 0,\;\;\; \sigma \in \Sec{E} \Rightarrow S[\sigma^{c}, \gamma ] = 0. \end{equation} \section{Linearly constrained Lagrangian systems} \label{linear} Nonholonomic systems on Lie algebroids were introduced in~\cite{CoMa}. 
This class of systems includes, as particular cases, standard nonholonomic systems defined on the tangent bundle of a manifold and systems obtained by the reduction of the action of a symmetry group. The situation is similar to the non-constrained case, where the general equation $\delta L=0$ comprises as particular cases the standard Lagrangian Mechanics, Lagrangian Mechanics with symmetry, Lagrangian systems with holonomic constraints, systems on semi-direct products and systems evolving on Lie algebras, see e.g.,~\cite{LMLA}. We start with a free Lagrangian system on a Lie algebroid $E$. As mentioned above, these two objects, the Lagrangian and the Lie algebroid, can describe a wide class of systems. Now, we add nonholonomic linear constraints described by a subbundle $D$ of $E$ of admissible directions. If we require the solution curves $a(t)$ to stay in $D$, we arrive at the equations $\delta L_{a(t)}=\lambda(t)$ and $a(t)\in D$, where the constraint force $\lambda(t)\in E^*_{\tau(a(t))}$ is to be determined. In the tangent bundle geometry case ($E=TM$), d'Alembert's principle establishes that the mechanical work done by the constraint forces vanishes, which implies that $\lambda$ takes values in the annihilator of the constraint subbundle $D$. Therefore, in the case of a general Lie algebroid, the natural equations one should pose are (see~\cite{CoMa}) \[ \delta L_{a(t)}\in\Do[\tau(a(t))]\qquad\text{and}\qquad a(t)\in D. \] In more explicit terms, we look for curves $a(t)$ on $E$ such that \begin{itemize} \item[--] they are admissible, $\rho(a(t))=\dot{m}(t)$, where $m=\tau\circ a$, \item[--] they stay in $D$, $a(t)\in D_{m(t)}$, \item[--] there exists $\lambda(t)\in \Do[m(t)]$ such that $\delta L_{{a}(t)}=\lambda(t)$. \end{itemize} If $a(t)$ is one of such curves, then $(a(t),\dot{a}(t))$ is a curve in $\TEE$. Moreover, since $a(t)$ is in $D$, we have that $\dot{a}(t)$ is tangent to $D$, that is, $(a(t),\dot{a}(t))\in\TDD$. 
Under some regularity conditions (to be made precise later on), we may assume that the above curves are integral curves of a section $\Gamma$, which as a consequence will be a \textsc{sode}\ section taking values in $\TDD$. Based on these arguments, we may reformulate our problem geometrically as the search for a \textsc{sode}~$\Gamma$ (defined at least on a neighborhood of $D$) satisfying $(i_\Gamma\omega_L-dE_L)_a\in\tDo[\tau(a)]$ and $\Gamma(a)\in {\mathcal T}_{a}^{D}D$, at every point $a\in D$. In the above expression, $\tDo$ is the pullback of $\Do$ to $\TEE$, that is, $\alpha\in\tDo[\tau(a)]$ if and only if there exists $\lambda\in\Do[\tau(a)]$ such that $\alpha=\lambda\circ\prol{\tau}[a]$. \begin{definition} A \emph{nonholonomically constrained Lagrangian system} on a Lie algebroid $E$ is a pair $(L,D)$, where $L$ is a smooth function on $E$, \emph{the Lagrangian}, and $i\colon D\hookrightarrow E$ is a smooth subbundle of $E$, known as the \emph{constraint subbundle}. By a solution of the nonholonomically constrained Lagrangian system $(L,D)$ we mean a \textsc{sode}\ section $\Gamma$ of $\TEE$ which satisfies the \emph{Lagrange-d'Alembert equations} \begin{equation} \label{Lagrange-d'Alembert} \begin{aligned} &(i_\Gamma\omega_L-dE_L)|_D\in \Sec{\tDo},\\ &\Gamma|_D\in \Sec{\TDD}. \end{aligned} \end{equation} \end{definition} With a slight abuse of language, we will interchangeably refer to a solution of the constrained Lagrangian system as a section or the collection of its corresponding integral curves. The restriction of the projection $\map{\tau}{E}{M}$ to $D$ will be denoted by $\pi$, that is, $\map{\pi=\tau|_D}{D}{M}$. \begin{remark}[Domain of definition of solutions of the Lagrange-d'Alembert equations] {\rm We want to stress that a solution of the Lagrange-d'Alembert equations needs to be defined only over $D$, but for practical purposes we consider it extended to $E$ (or just to a neighborhood of $D$ in $E$). 
We will not make any notational distinction between a solution on $D$ and any of its extensions. Solutions which coincide on $D$ will be considered as equal. See~\cite{GrMe,LeMa} for a more in-depth discussion. In accordance with this convention, by a \textsc{sode}\ on $D$ we mean a section of $\TDD$ which is the restriction to $D$ of some \textsc{sode}\ defined in a neighborhood of $D$. Alternatively, a \textsc{sode}\ on $D$ is a section $\Gamma$ of $\TDD$ such that $\prol{\tau}(\Gamma(a))=a$ for every $a\in D$.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \begin{remark}[Holonomic constraints] {\rm A nonholonomically constrained Lagrangian system $(L, D)$ on a Lie algebroid $E$ is said to be \emph{holonomic} if $D$ is a Lie subalgebroid of $E$. This means that $[X, Y] \in \Sec{D}$, for $X, Y \in \Sec{D}$. Thus, the real function $L_{D} = L_{|D}: D \to \mathbb{R}$ defines an unconstrained (free) Lagrangian system on the Lie algebroid $D$. Moreover, it is easy to prove that $\mathcal{I} \circ \Delta_{D} = \Delta \circ i$ and $\mathcal{I} \circ S_{D} = S \circ \mathcal{I}$, where $\mathcal{I} = {\prol[i]{i}}: \TDD \to \TEE$ is the prolongation of the Lie algebroid morphism $i\colon D\hookrightarrow E$ and $\Delta_{D}$ (respectively, $S_{D}$) is the Liouville section (respectively, the vertical endomorphism) of the Lie algebroid $\TDD$. Therefore, since $L \circ i = L_D$, we deduce that \[ \mathcal{I}^*(\theta_{L}) = \theta_{L_{D}}, \makebox[.4cm]{} \mathcal{I}^*(\omega_{L}) = \omega_{L_{D}}, \makebox[.4cm]{} \mathcal{I}^*(dE_{L}) = dE_{L_{D}}. \] Consequently, if $\Gamma$ is a \textsc{sode}\ section of $\TEE$, $a, b \in D$, $(b, X) \in \prol[E]{E}[a]$ and $(b, Y) \in \prol[E]{D}[a]$ then \[ (i_{\Gamma}\omega_{L} - dE_{L})(a)(b, X) = (i_{\Gamma_{|D}} \omega_{L_{D}} - dE_{L_{D}})(a)(b, Y) + (i_{\Gamma}\omega_{L} - dE_{L})(a)(0, Z), \] $(0, Z)$ being a vertical element of $\prol[E]{E}[a]$. 
Now, using (\ref{2.4'}), we have that $(i_{\Gamma}\omega_{L} - dE_{L})(a)(0, Z) = 0$ which implies that \[ (i_{\Gamma}\omega_{L} - dE_{L})(a)(b, X) = (i_{\Gamma_{|D}} \omega_{L_{D}} - dE_{L_{D}})(a)(b, Y). \] The above facts prove that a \textsc{sode}\ section $\Gamma$ of $\TEE$ is a solution of the holonomic Lagrangian system $(L, D)$ on $E$ if and only if $\Gamma_{|D}$ is a solution of the Euler-Lagrange equations for the (unconstrained) Lagrangian function $L_D$ on the Lie algebroid $D$. In other words, the holonomic Lagrangian system $(L, D)$ on $E$ may be considered as an unconstrained (free) Lagrangian system on the Lie algebroid $D$.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} Next, suppose that $(L, D)$ is a nonholonomically constrained Lagrangian system on the Lie algebroid $E$. Then, the different spaces we will consider are shown in the following commutative diagram \[ \xymatrix{ &&TM\ar@{=}[r]&TM&\\ &&D\ar[u]_{\rho_D}\ar[r]^i&E\ar[u]^\rho\\ &TD\ar[ruu]^{T\pi}\ar[rd]_{\tau_D}& \TDD\ar[u]_{\prol{\pi}}\ar[r]^{\mathcal{I}}\ar[d]^{\pi^D_D}\ar[l]_{\rho^1}& \TEE\ar[u]^{\prol{\tau}}\ar[d]_{\tau^E_E}\ar[r]^{\rho^1} & TE\ar[luu]_{T\tau}\ar[dl]^{\tau_E}\\ &&D\ar[r]^i\ar[d]_{\pi}&E\ar[d]^{\tau}&\\ &&M\ar@{=}[r]&M } \] As an intermediate space in our analysis of the regularity of the constrained systems, we will also consider $\TED$, the $E$-tangent to $D$. The main difference between $\TED$ and $\TDD$ is that the former has a natural Lie algebroid structure while the latter does not. The following two results are immediate consequences of the above form of the Lagrange-d'Alembert equations. \begin{theorem}[Conservation of energy] If $(L,D)$ is a constrained Lagrangian system and $\Gamma$ is a solution of the dynamics, then $d_\Gamma E_L=0$ (on $D$). \end{theorem} \begin{proof} Indeed, for every $a\in D$, we have $\Gamma(a)\in\TDD[a]$, so that $\prol{\tau}(\Gamma(a))\in D$. 
Therefore $i_\Gamma\tDo=0$ and, contracting the Lagrange-d'Alembert equations with $\Gamma$, we get $0=i_\Gamma(i_\Gamma\omega_L-dE_L)=-d_\Gamma E_L$ at every point in $D$. \end{proof} \begin{theorem}[Noether's theorem] Let $(L,D)$ be a constrained Lagrangian system which admits a unique \textsc{sode}\ $\Gamma$ solution of the dynamics. If $\sigma$ is a section of $D$ such that there exists a function $f\in\cinfty{M}$ satisfying \[ d_{\sigma\sup{C}}L=\dot{f}, \] then the function $F=\pai{\theta_L}{\sigma\sup{C}}-f$ is a constant of the motion, that is, $d_\Gamma F=0$ (on $D$). \end{theorem} \begin{proof} Using that $\theta_L(\Gamma)=d_\Delta(L)$, we obtain $i_{\sigma\sup{C}}(i_\Gamma\omega_L-dE_L) = i_{\sigma\sup{C}}(-d_\Gamma \theta_L + dL)= d_{\sigma\sup{C}}L-d_\Gamma\pai{\theta_L}{\sigma\sup{C}} + \theta_L([\Gamma,\sigma\sup{C}])$ and, since $[\Gamma,\sigma\sup{C}]$ is vertical, we deduce \[ i_{\sigma\sup{C}}(i_\Gamma \omega_L-dE_L)=d_{\sigma\sup{C}}L-d_\Gamma\pai{\theta_L}{\sigma\sup{C}}.\] Thus, taking into account that $i_{\sigma\sup{C}}\tDo=0$, we get $0=d_\Gamma(\pai{\theta_L}{\sigma\sup{C}}-f)=-d_\Gamma F$. \end{proof} \begin{example}[Mechanical systems with nonholonomic constraints] {\rm Let $\mathcal{G}:E\times_M E\to \mathbb{R}$ be a bundle metric on $E$. The \emph{Levi-Civita connection} $\nabla^\mathcal{G}$ is determined by the formula \begin{align*} 2\mathcal{G}(\nabla^{\mathcal{G}}_\sigma\eta,\zeta) &=\rho(\sigma)(\mathcal{G}(\eta,\zeta)) + \rho(\eta)(\mathcal{G}(\sigma,\zeta))-\rho(\zeta)(\mathcal{G}(\eta,\sigma))\\ & \qquad + \mathcal{G}(\sigma,[\zeta,\eta]) + \mathcal{G}(\eta,[\zeta,\sigma])-\mathcal{G}(\zeta, [\eta,\sigma]) , \end{align*} for $\sigma,\eta,\zeta\in \Sec{E}$.
The coefficients of the connection $\nabla^{\mathcal{G}}$ are given by \[ \Gamma_{\beta\gamma}^\alpha = \frac{1}{2}\mathcal{G}^{\alpha\nu}([\nu,\beta;\gamma]+[\nu,\gamma; \beta] + [\beta,\gamma; \nu]), \] where $\mathcal{G}_{\alpha\nu}$ are the coefficients of the metric $\mathcal{G}$, $(\mathcal{G}^{\alpha\nu})$ is the inverse matrix of $(\mathcal{G}_{\alpha\nu})$ and \[ [\alpha,\beta;\gamma]=\frac{\partial \mathcal{G}_{\alpha\beta}}{\partial x^i}\rho_\gamma^i + C_{\alpha\beta}^\mu\mathcal{G}_{\mu\gamma}. \] Using the covariant derivative induced by $\nabla^{\mathcal{G}}$, one may introduce the notion of a geodesic of $\nabla^{\mathcal{G}}$ as follows. An admissible curve $a:I\to E$ is said to be a \emph{geodesic} if $\nabla_{a(t)}^{\mathcal{G}}a(t)=0$, for all $t\in I$. In local coordinates, the conditions for being a geodesic read \[ \frac{da^\gamma}{dt} + \frac{1}{2}(\Gamma_{\alpha\beta}^\gamma + \Gamma_{\beta\alpha}^\gamma)a^\alpha a^\beta=0,\;\;\; \mbox{for all }\gamma. \] The geodesics are the integral curves of a \textsc{sode}\ section $\Gamma_{\nabla^{\mathcal{G}}}$ of $\prol[E]{E}$, which is locally given by \[ \Gamma_{\nabla^{\mathcal{G}}}=y^\gamma{\mathcal X}_\gamma-\frac{1}{2}(\Gamma_{\alpha\beta}^\gamma + \Gamma_{\beta\alpha}^\gamma)y^\alpha y^\beta\mathcal{V}_\gamma. \] $\Gamma_{\nabla^{\mathcal{G}}}$ is called the \emph{geodesic flow} (for more details, see \cite{CoMa}). The class of systems that were considered in detail in~\cite{CoMa} is that of \emph{mechanical systems with nonholonomic constraints}\footnote{In fact, in~\cite{CoMa}, we considered controlled mechanical systems with nonholonomic constraints, that is, mechanical systems evolving on Lie algebroids and subject to some external control forces.}. The Lagrangian function $L$ is of mechanical type, i.e., it is of the form \[ L(a)=\frac{1}{2} \mathcal{G}(a,a) - V(\tau(a)), \quad a\in E, \] with $V$ a function on $M$. 
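Before introducing constraints, let us record a quick consistency check on the geodesic equations above (ours, not part of the original discussion): on the standard Lie algebroid $E=TM$ with a coordinate frame, the anchor is the identity and all the structure functions $C^\gamma_{\alpha\beta}$ vanish, so the defining formula for $\nabla^{\mathcal{G}}$ becomes the usual Koszul formula. Admissible curves are then tangent lifts, $a^\gamma(t)=\dot{x}^\gamma(t)$, and the geodesic condition reduces to
\[
\ddot{x}^\gamma + \frac{1}{2}\left(\Gamma^\gamma_{\alpha\beta}+\Gamma^\gamma_{\beta\alpha}\right)\dot{x}^\alpha\dot{x}^\beta=0,\;\;\; \mbox{for all }\gamma,
\]
that is, the classical geodesic equation of the Levi-Civita connection of the Riemannian metric $\mathcal{G}$ on $M$.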
The Euler-Lagrange section for the unconstrained system can be written as $$\Gamma_L=\Gamma_{\nabla^\mathcal{G}} - (\grad_\mathcal{G} V) \sup{V}.$$ In this expression, by $\grad_\mathcal{G}{V}$ we mean the section of $E$ such that $\pai{dV(m)}{a}=\mathcal{G}(\grad_\mathcal{G} V(m),a)$, for all $m\in M$ and all $a\in E_m$, and where we recall that $d$ is the differential in the Lie algebroid. The Euler-Lagrange differential equations can be written as \begin{equation}\label{Mechanical:eqs-motion}\begin{array}{ll} \dot{x}^i & =\rho^i_\alpha y^\alpha , \\[5pt] \dot{y}^\alpha &= -\displaystyle\frac{1}{2} \left( \Gamma^\alpha_{\beta \gamma} +\Gamma^\alpha_{\gamma\beta}\right) y^\beta y^\gamma - \mathcal{G}^{\alpha \beta} \rho^i_{\beta} \pd{V}{x^i}. \end{array} \end{equation} Alternatively, one can describe the dynamical behavior of the mechanical system by means of an equation on $E$ via the covariant derivative. An admissible curve $a: t \mapsto a(t)$ with base curve $t\mapsto m(t)$ is a solution of the system~\eqref{Mechanical:eqs-motion} if and only if \begin{align} \label{Mechanical:eqs-motion-connection} \nabla^{\mathcal{G}}_{a(t)}a(t) + \grad_\mathcal{G} V(m(t)) = 0. \end{align} Note that \[ \mathcal{G}(m(t))(\nabla_{a(t)}^\mathcal{G} a(t) + \grad_\mathcal{G} V(m(t)),b)=\delta L(a(t))(b),\;\;\;\; \mbox{ for } b\in E_{m(t)}. \] If this mechanical system is subject to the constraints determined by a subbundle $D$ of $E$, we can do the following. Consider the orthogonal decomposition $E=D\oplus D^\perp$, and the associated orthogonal projectors $\map{P}{E}{D}$, $\map{Q}{E}{D^\perp}$. Using the fact that $\mathcal{G}(P\cdot,\cdot) = \mathcal{G}(\cdot,P\cdot)$, one can write the Lagrange-d'Alembert equations in the form \[ P(\nabla^\mathcal{G}_{a(t)}a(t)) + P(\grad_\mathcal{G} V(m(t))) = 0, \qquad Q(a)=0.
\] An especially well-suited form of these equations makes use of the \emph{constrained connection} $\check{\nabla}$ defined by $\check{\nabla}_\sigma\eta =P(\nabla^\mathcal{G}_\sigma\eta)+\nabla^\mathcal{G}_\sigma(Q\eta)$. In terms of $\check{\nabla}$, we can rewrite this equation as $\check{\nabla}_{a(t)}a(t) + P(\grad_\mathcal{G} V(m(t))) = 0$, $Q(a)=0$, where we have used the fact that the connection $\check{\nabla}$ restricts to the subbundle $D$. Moreover, following the ideas in~\cite{Lewis}, we proved in~\cite{CoMa} that the subbundle $D$ is geodesically invariant for the connection $\check{\nabla}$, that is, any integral curve of the spray $\Gamma_{\check{\nabla}}$ associated with $\check{\nabla}$ starting from a point in $D$ is entirely contained in $D$. Since the terms coming from the potential $V$ also belong to $D$, we have that the constrained equations of motion can be simply stated as \begin{align} \label{eq:eqs-motion-connection-nh} \check{\nabla}_{a(t)}a(t) + P(\grad_\mathcal{G} V(m(t))) = 0, \qquad a(0)\in D. \end{align} Note that one can write the constrained equations of motion as follows: \[ \dot{a}(t)=\rho^1(\Gamma_{\check{\nabla}}(a(t))-{P}(\grad_{\mathcal{G}}V)^v(a(t))) \] and that the restriction to $D$ of the vector field $\rho^1(\Gamma_{\check{\nabla}}-{P}(\grad_{\mathcal{G}}V)^v)$ is tangent to $D$. The coordinate expression of equations (\ref{eq:eqs-motion-connection-nh}) is greatly simplified if we take a basis $\{e_\alpha\}=\{e_a,e_A\}$ of $E$ adapted to the orthogonal decomposition $E=D\oplus D^\perp$, i.e., $D =\operatorname{span}\{e_a\}$, $D^\perp = \operatorname{span}\{e_A\}$. Denoting by $(y^\alpha)=(y^a,y^A)$ the induced coordinates, the constraint equations $Q(a)=0$ just read $y^A=0$.
The differential equations of motion are then \begin{align*} \dot{x}^i & = \rho^i_a y^a , \\ \dot{y}^a & = - \frac{1}{2} \left(\check{\Gamma}^a_{bc}+\check{\Gamma}^a_{cb}\right) y^by^c-\mathcal{G}^{ab}\rho^i_b\pd{V}{x^i}, \\ y^A &= 0, \end{align*} where $\check{\Gamma}^\alpha_{\beta \gamma}$ are the connection coefficients of the constrained connection $\check{\nabla}$. } \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{example} In the above example the dynamics exists and is completely determined whatever the (linear) constraints are. As we will see in Section~\ref{sec:regularity}, this property is lost in the general case. \subsection{Lagrange-d'Alembert equations in local coordinates} Let us analyze the form of the Lagrange-d'Alembert equations in local coordinates. Following the example above, let us choose a special coordinate system adapted to the structure of the problem as follows. We consider local coordinates $(x^i)$ on an open set $\mathcal{U}$ of $M$ and we take a basis $\{e_a\}$ of local sections of $D$ and complete it to a basis $\{e_a,e_A\}$ of local sections of $E$ (both defined on the open set $\mathcal{U}$). In this way, we have coordinates $(x^i,y^a,y^A)$ on $E$. In this set of coordinates, the constraints imposed by the submanifold $D\subset E$ are simply $y^A=0$. If $\{e^a,e^A\}$ is the dual basis of $\{e_a,e_A\}$, then a basis for the annihilator $\Do$ of $D$ is $\{e^A\}$ and a basis for $\tDo$ is $\{\mathcal{X}^A\}$. An element $z$ of $\TED$ is of the form $z= u^\alpha\mathcal{X}_\alpha+z^a\mathcal{V}_a = u^a\mathcal{X}_a+u^A\mathcal{X}_A+z^a\mathcal{V}_a$, that is, the component $\mathcal{V}_A$ vanishes since $\rho^1(z)$ is a vector tangent to the manifold $D$ with equations $y^A=0$. The projection of $z$ to $E$ is $\prol{\tau}(z)=u^a e_a+u^A e_A$, so that the element $z$ is in $\TDD$ if and only if $u^A=0$. In other words, an element in $\TDD$ is of the form $z=u^a\mathcal{X}_a+z^a\mathcal{V}_a$.
Let us find the local expression of the Lagrange-d'Alembert equations in these coordinates. We consider a section $\Gamma$ such that $\Gamma_{|D} \in \Sec{\TDD}$, which is therefore of the form $\Gamma=g^a\mathcal{X}_a+f^a\mathcal{V}_a$. From the local expression~\eqref{omegaL} of the Cartan 2-form and the local expression~\eqref{EL} of the energy function, we get \[ 0=\pai{i_\Gamma\omega_L-dE_L}{\mathcal{V}_\alpha}= -y^B\pd{^2L}{y^\alpha\partial y^B} -(y^b-g^b)\pd{^2L}{y^\alpha\partial y^b}. \] If we assume that the Lagrangian $L$ is regular, when we evaluate at $y^A=0$, we have that $g^a=y^a$ and thus $\Gamma$ is a \textsc{sode}. Moreover, contracting with $\mathcal{X}_a$, after a few calculations we get \[ 0=\pai{i_\Gamma\omega_L-dE_L}{\mathcal{X}_a} =-\left\{ d_\Gamma\left(\pd{L}{y^a}\right) +\pd{L}{y^\gamma}C^\gamma_{a\beta}y^\beta -\rho^i_a\pd{L}{x^i} \right\}, \] so that (again after evaluation at $y^A=0$), the functions $f^a$ are the solutions of the linear equations \begin{equation} \label{LD-explicit} \pd{^2L}{y^b\partial y^a}f^b+\pd{^2L}{x^i\partial y^a}\rho^i_by^b +\pd{L}{y^\gamma}C^\gamma_{ab}y^b -\rho^i_a\pd{L}{x^i} =0, \end{equation} where all the partial derivatives of the Lagrangian are to be evaluated on $y^A=0$. As a consequence, we get that there exists a unique solution of the Lagrange-d'Alembert equations if and only if the matrix \begin{equation} \label{gld-local} \C_{ab}(x^i,y^c)=\pd{^2L}{y^a\partial y^b}(x^i,y^c,0) \end{equation} is regular. Notice that $\C_{ab}$ is a submatrix of $W_{\alpha\beta}$, evaluated at $y^A=0$, and that, as we know, if $L$ is of mechanical type then the Lagrange-d'Alembert equations have a unique solution.
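To make the regularity criterion concrete, here is a check of ours using the mechanical example above: for a Lagrangian of mechanical type one has, in local coordinates, $L=\frac{1}{2}\mathcal{G}_{\alpha\beta}(x)y^\alpha y^\beta-V(x)$, so that
\[
\C_{ab}(x^i,y^c)=\pd{^2L}{y^a\partial y^b}(x^i,y^c,0)=\mathcal{G}_{ab}(x),
\]
a principal submatrix of the positive definite matrix $(\mathcal{G}_{\alpha\beta})$, hence itself positive definite and invertible. This is the local counterpart of the fact that nonholonomic mechanical systems always admit a unique solution of the Lagrange-d'Alembert equations.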
The differential equations for the integral curves of the vector field $\rho^1(\Gamma)$ are the Lagrange-d'Alembert differential equations, which read \begin{equation} \label{LD-edo} \begin{aligned} &\dot{x}^i=\rho^i_ay^a,\\ &\frac{d}{dt}\left(\pd{L}{y^a}\right) + \pd{L}{y^\gamma}C^\gamma_{ab}y^b -\rho^i_a\pd{L}{x^i}=0,\\ &y^A=0. \end{aligned} \end{equation} Finally, notice that the contraction with $\mathcal{X}_A$ just gives the components $\lambda_A=\pai{i_\Gamma\omega_L-dE_L}{\mathcal{X}_A}|_{y^A=0}$ of the constraint forces $\lambda=\lambda_Ae^A$. \begin{remark}[Equations in terms of the constrained Lagrangian] {\rm On some occasions, it is useful to write the equations in the form \begin{equation}\label{LD-edo2} \begin{aligned} &\dot{x}^i=\rho^i_ay^a,\\ &\frac{d}{dt}\left(\pd{L}{y^a}\right) + \pd{L}{y^c}C^c_{ab}y^b -\rho^i_a\pd{L}{x^i}=-\pd{L}{y^A}C^A_{ab}y^b,\\ &y^A=0, \end{aligned} \end{equation} where, on the left-hand side of the second equation, all the derivatives can be calculated from the value of the Lagrangian on the constraint submanifold $D$. In other words, we can replace $L$ with the constrained Lagrangian $L_c$ defined by $L_c(x^i,y^a)=L(x^i,y^a,0)$.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \begin{remark}[Lagrange-d'Alembert equations in quasicoordinates] {\rm A particular case of this construction is given by constrained systems defined in the standard Lie algebroid $\map{\tau_M}{TM}{M}$.
In this case, the equations~\eqref{LD-edo} are the Lagrange-d'Alembert equations written in quasicoordinates, where $C^{\alpha}_{\beta\gamma}$ are the so-called Hamel transpositional symbols, which obviously are nothing but the structure coefficients (in Cartan's sense) of the moving frame $\{e_\alpha\}$, see e.g.,~\cite{EhKoMoRi,Ha}.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \subsection{Solution of Lagrange-d'Alembert equations} \label{sec:regularity} \begin{assumption} In what follows, we will assume that the Lagrangian $L$ is regular at least in a neighborhood of $D$. \end{assumption} Let us now perform a precise global analysis of the existence and uniqueness of the solution of Lagrange-d'Alembert equations. \begin{definition} A constrained Lagrangian system $(L,D)$ is said to be \emph{regular} if the Lagrange-d'Alembert equations have a unique solution. \end{definition} In order to characterize geometrically those nonholonomic systems which are regular, we define the tensor $\GLD$ as the restriction of $\GL$ to $D$, that is, $\GLD[a](b,c)=\GL[a](b,c)$ for every $a\in D$ and every $b,c\in D_{\tau(a)}$. In coordinates adapted to $D$, we have that the local expression of $\GLD$ is $\GLD=\C_{ab}e^a\otimes e^b$, where the matrix $\C_{ab}$ is given by equation~\eqref{gld-local}. A second important geometric object is the subbundle $F\subset\TEE|_D\to D$ whose fiber at the point $a\in D$ is $F_a=\omega_L^{-1}(\tDo[\tau(a)])$. More explicitly, \[ F_a = \set{z\in\TEE[a]}{\text{there exists $\zeta\in\Do[\tau(a)]$ such that $\omega_L(z,u)=\pai{\zeta}{\prol{\tau}(u)}$ for all $u\in\TEE[a]$}}. \] From the definition, it is clear that the rank of $F$ is $\rank {F}=\rank{\Do}=\rank{E}-\rank{D}$. Finally, we also consider the subbundle $(\TDD)^\perp\subset\TEE|_D\to D$, the orthogonal to $\TDD$ with respect to the symplectic form $\omega_L$. The rank of $(\TDD)^\perp$ is $\rank{(\TDD)^\perp}=\rank{\TEE} - \rank{\TDD} = 2(\rank{E} - \rank{D}) = 2 \rank{\Do}$.
The relation among these three objects is described by the following result. \begin{lemma} \label{F-TDD} The following properties are satisfied: \begin{enumerate} \item The elements in $F$ are vertical. An element $\xi\sup{V}(a,b)$ belongs to $F_a$ if and only if $\GL[a](b,c)=0$ for all $c\in D_{\tau(a)}$. \item $(\TDD)^\perp\cap\Ver{\TEE}=F$. \end{enumerate} \end{lemma} \begin{proof} (1) The elements in $F$ are vertical because the elements in $\tDo$ are semi-basic. If $\xi\sup{V}(a,b)\in F_a$ then there exists $\zeta\in \Do[\tau (a)]$ such that $\omega_L(\xi\sup{V}(a,b),u)=\pai{\zeta}{\prol{\tau}(u)}$ for all $u\in\TEE[a]$. In terms of $\GL$ and writing $c=\prol{\tau}(u)$, the above equation reads $-\GL[a](b,c)=\pai{\zeta}{c}$. Taking $u\in\prol{\tau}^{-1}(D)$, we see that $c$ is in $D$ and therefore $\GL[a](b,c)=0$ for all $c\in D_{\tau(a)}$. Conversely, if $\GL[a](b,c)=0$ for all $c\in D_{\tau(a)}$, then the 1-form $\zeta=-\GL[a](b,\ )$ is in $\Do[\tau(a)]$. Therefore $\omega_L(\xi\sup{V}(a,b),u) = - \GL[a](b,\prol{\tau}(u)) = \pai{\zeta}{\prol{\tau}(u)}$, which is the condition for $\xi\sup{V}(a,b)\in F_a$. \noindent (2) The condition for a vertical element $\xi\sup{V}(a,b)$ to be in $(\TDD)^\perp$ is $\omega_L(\xi\sup{V}(a,b),w)=0$ for all $w\in\TDD[a]$, or equivalently, $\GL[a](b,\prol{\tau}(w))=0$. The vector $c=\prol{\tau}(w)$ is an arbitrary element of $D_{\tau(a)}$, so that the above condition reads $\GL[a](b,c)=0$, for all $c\in D_{\tau(a)}$, which is precisely the condition for $\xi\sup{V}(a,b)$ to be in $F_a$. \end{proof} \begin{theorem} \label{regularity} The following properties are equivalent: \begin{enumerate} \item The constrained Lagrangian system $(L,D)$ is regular, \item $\Ker\GLD=\{0\}$, \item $\TED \cap F=\{0\}$, \item $\TDD\cap(\TDD)^\perp=\{0\}$.
\end{enumerate} \end{theorem} \begin{proof}{[(1)$\Leftrightarrow$(2)]} The equivalence between the first two conditions is clear from the local form of the Lagrange-d'Alembert equations~\eqref{LD-explicit}, since the coefficients of the unknowns $f^a$ are precisely the components~\eqref{gld-local} of $\GLD$. [(2)$\Leftrightarrow$(3)] ($\Rightarrow$) Let $a\in D$ and consider an element $z\in \TED[a]\cap F_a$. Since the elements of $F$ are vertical, we have $z=\xi\sup{V}(a,b)$ for some $b\in E_{\tau(a)}$. Moreover, $z\in\TED[a]$ implies that $b$ is an element in $D_{\tau(a)}$. On the other hand, if $z=\xi\sup{V}(a,b)$ is in $F_a$, then Lemma~\ref{F-TDD} implies that $\GL[a](b,c)=0$ for all $c\in D_{\tau(a)}$. Thus $\GLD[a](b,c)=0$ for all $c\in D_{\tau(a)}$, from which $b=0$, and hence $z=0$. ($\Leftarrow$) Conversely, if for some $a\in D$, there exists $b\in\Ker\GLD[a]$ with $b\neq0$ then, using Lemma \ref{F-TDD}, we deduce that $z=\xi\sup{V}(a,b)\in\TED[a]\cap F_a$ and $z\neq0$. [(2)$\Leftrightarrow$(4)] ($\Rightarrow$) Let $a\in D$ and consider an element $v\in \TDD[a]\cap(\TDD[a])^\perp$, that is, $\omega_L(v,w)=0$ for all $w\in\TDD[a]$. If we take $w=\xi\sup{V}(a,b)$ for $b\in D_{\tau(a)}$ arbitrary, then we have $\omega_L(v,\xi\sup{V}(a,b)) = \GLD[a](\prol{\tau}(v),b)=0$ for all $b\in D_{\tau(a)}$, from which it follows that $\prol{\tau}(v)=0$. Thus $v$ is vertical, $v=\xi\sup{V}(a,c)$, for some $c\in D$ and then $\omega_L(\xi\sup{V}(a,c),w) = -\GLD[a](c,\prol{\tau}(w))=0$ for all $w\in\TDD[a]$. Therefore $c=0$ and hence $v=0$. ($\Leftarrow$) Conversely, if for some $a\in D$, there exists $b\in\Ker\GLD[a]$ with $b\neq0$, then $0\neq\xi\sup{V}(a,b) \in \TDD[a] \cap(\TDD[a])^\perp$, because $\omega_L(\xi\sup{V}(a,b),w) = -\GLD[a](b,\prol{\tau}(w))=0$ for all $w\in\TDD[a]$. \end{proof} In the case of a constrained mechanical system, the tensor $\GL$ is given by $\GL[a](b,c) = \mathcal{G}_{\tau(a)}(b,c)$, so that it is positive definite at every point.
Thus the restriction to any subbundle $D$ is also positive definite and hence regular. Therefore, nonholonomic mechanical systems are always regular. \begin{proposition} Conditions (3) and (4) in Theorem~\ref{regularity} are equivalent, respectively, to \begin{itemize} \item[(3')] $\TEE|_D=\TED\oplus F$, \item[(4')] $\TEE|_D=\TDD\oplus(\TDD)^\perp$. \end{itemize} \end{proposition} \begin{proof} The equivalence between (4) and (4') is obvious, since we are assuming that the free Lagrangian is regular, i.e., $\omega_L$ is symplectic. The equivalence of (3) and (3') follows by computing the dimension of the corresponding spaces. The ranks of $\TEE$, $\TED$ and $F$ are \begin{align*} &\rank {\TEE} = 2 \rank{E} , \\ &\rank{\TED} = \rank{E} +\rank{D} , \\ &\rank{F} = \rank{\Do} = \rank{E} - \rank{D}. \end{align*} Thus $\rank{\TEE}=\rank{\TED} + \rank{F}$, and the result follows. \end{proof} \subsection{Projectors}\label{Projectors} We can express the constrained dynamical section in terms of the free dynamical section by projecting onto the appropriate space, either $\TED$ or $\TDD$, according to each of the above decompositions of $\TEE|_D$. Of course, both procedures give the same result. \subsubsection*{Projection to $\TED$} Assuming that the constrained system is regular, we have a direct sum decomposition \[ \TEE[a]=\TED[a]\oplus F_a, \] for every $a\in D$, where we recall that the subbundle $F\subset\TEE|_D$ is defined by $F=\omega_L^{-1}(\tDo)$, or equivalently $\tDo=\omega_L(F)$. Let us denote by $P$ and $Q$ the complementary projectors defined by this decomposition, that is, \[ \map{P_a}{\TEE[a]}{\TED[a]} \quad\text{and}\quad \map{Q_a}{\TEE[a]}{F_a}, \mbox{ for all }a\in D. \] Then we have the following result. \begin{theorem}\label{projection-external} Let $(L,D)$ be a regular constrained Lagrangian system and let $\Gamma_L$ be the solution of the free dynamics, i.e., $i_{\Gamma_L}\omega_L=dE_L$.
Then the solution of the constrained dynamics is the \textsc{sode}\ $\Gamma_{(L, D)}$ obtained by projection $\Gamma_{(L, D)}=P(\Gamma_L|_D)$. \end{theorem} \begin{proof} Indeed, if we write $\Gamma_{(L, D)}(a) = \Gamma_L(a)-Q(\Gamma_L(a))$ for $a\in D$, then we have \[ i_{\Gamma_{(L, D)}(a)}\omega_L-dE_L(a) = i_{\Gamma_L(a)}\omega_L-i_{Q(\Gamma_L(a))}\omega_L-dE_L(a) = -i_{Q(\Gamma_L(a))}\omega_L\in\tDo[\tau(a)] , \] which is an element of $\tDo[\tau(a)]$ because $Q(\Gamma_L(a))$ is in $F_a$. Moreover, since $\Gamma_L$ is a \textsc{sode}\ and $Q(\Gamma_L)$ is vertical (since it is in $F$), we have that $\Gamma_{(L, D)}$ is also a \textsc{sode}. \end{proof} We consider adapted local coordinates $(x^i,y^a,y^A)$ corresponding to the choice of an adapted basis of sections $\{e_a,e_A\}$, where $\{e_a\}$ generate $D$. The annihilator $\Do$ of $D$ is generated by $\{e^A\}$, and thus $\tDo$ is generated by $\{\mathcal{X}^A\}$. A simple calculation shows that a basis $\{Z_A\}$ of local sections of $F$ is given by \begin{equation}\label{zetas} Z_A=\mathcal{V}_A-Q^a_A\mathcal{V}_a, \end{equation} where $Q^a_A = W_{Ab}\C^{ab}$ and $\C^{ab}$ are the components of the inverse of the matrix $\C_{ab}$ given by equation~\eqref{gld-local}. The local expression of the projector onto $F$ is then \[ Q = Z_A\otimes \mathcal{V}^A. \] If the expression of the free dynamical section $\Gamma_L$ in these local coordinates is \[ \Gamma_L=y^\alpha\mathcal{X}_\alpha+f^\alpha\mathcal{V}_\alpha, \] (where $f^\alpha$ are given by equation~\eqref{free-forces}), then the expression of the constrained dynamical section is \[ \Gamma_{(L, D)} =y^a\mathcal{X}_a+(f^a+f^AQ^a_A)\mathcal{V}_a, \] where all the functions $f^\alpha$ are evaluated at $y^A=0$. \subsubsection*{Projection to $\TDD$} We have seen that the regularity condition for the constrained system $(L,D)$ can be equivalently expressed by requiring that the subbundle $\TDD$ is a symplectic subbundle of $(\TEE,\omega_L)$.
It follows that, for every $a\in D$, we have a direct sum decomposition \[ \TEE[a]=\TDD[a]\oplus(\TDD[a])^\perp. \] Let us denote by $\bar{P}$ and $\bar{Q}$ the complementary projectors defined by this decomposition, that is, \[ \map{\bar{P}_a}{\TEE[a]}{\TDD[a]} \qquad\text{and}\qquad \map{\bar{Q}_a}{\TEE[a]}{(\TDD[a])^\perp},\;\;\mbox{ for all } a\in D. \] Then, we have the following result: \begin{theorem}\label{projection-internal} Let $(L,D)$ be a regular constrained Lagrangian system and let $\Gamma_L$ be the solution of the free dynamics, i.e., $i_{\Gamma_L}\omega_L=dE_L$. Then the solution of the constrained dynamics is the \textsc{sode}\ $\Gamma_{(L, D)}$ obtained by projection $\Gamma_{(L, D)}=\bar{P}(\Gamma_L|_D)$. \end{theorem} \begin{proof} From Theorem~\ref{projection-external} we have that the solution $\Gamma_{(L, D)}$ of the constrained dynamics is related to the free dynamics by $\Gamma_L|_D=\Gamma_{(L, D)}+Q(\Gamma_L|_D)$. Let us prove that $Q(\Gamma_L|_D)$ takes values in $(\TDD)^\perp$. Indeed, $Q(\Gamma_L|_D)$ takes values in $F=(\TDD)^\perp\cap\Ver{\TEE}$, so that, in particular, it takes values in $(\TDD)^\perp$. Thus, since $\Gamma_{(L, D)}$ is a section of $\TDD$, it follows that $\Gamma_L|_D=\Gamma_{(L, D)}+Q(\Gamma_L|_D)$ is a decomposition of $\Gamma_L|_D$ according to $\TEE|_D=\TDD\oplus(\TDD)^\perp$, which implies $\Gamma_{(L, D)}=\bar{P}(\Gamma_L|_D)$. \end{proof} In adapted coordinates, a local basis of sections of $(\TDD)^\perp$ is $\{Y_A,Z_A\}$, where the sections $Z_A$ are given by~\eqref{zetas} and the sections $Y_A$ are \[ Y_A = \mathcal{X}_A-Q^a_A\mathcal{X}_a+\C^{bc}(M_{Ab}-M_{ab}Q^a_A)\mathcal{V}_c , \] with $M_{\alpha\beta}=\omega_L(\mathcal{X}_\alpha,\mathcal{X}_\beta)$. Therefore the expression of the projector onto $(\TDD)^\perp$ is \[ \bar{Q}=Z_A\otimes\mathcal{V}^A+Y_A\otimes\mathcal{X}^A. \] Note that $S(Y_A) = Z_A$.
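As a consistency check (ours), one can verify from these local expressions that $\bar{Q}$ is indeed a projector onto $(\TDD)^\perp$. Using the dual basis relations $\pai{\mathcal{X}^A}{Y_B}=\delta^A_B$, $\pai{\mathcal{V}^A}{Y_B}=0$, $\pai{\mathcal{X}^A}{Z_B}=0$ and $\pai{\mathcal{V}^A}{Z_B}=\delta^A_B$, we obtain
\[
\bar{Q}(Y_B)=Y_B,\qquad \bar{Q}(Z_B)=Z_B,\qquad \bar{Q}(u^a\mathcal{X}_a+z^a\mathcal{V}_a)=0,
\]
so that $\bar{Q}^2=\bar{Q}$, the image of $\bar{Q}$ is spanned by $\{Y_A,Z_A\}$, and $\bar{P}=\operatorname{id}-\bar{Q}$ projects onto $\TDD$.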
\subsection{The distributional approach} The equations for the Lagrange-d'Alembert section $\Gamma$ can be entirely written in terms of objects in the manifold $\TDD$ (recall that $\TDD$ is not a Lie algebroid). To do this, define the 2-section $\omega\sup{L,D}$ as the restriction of $\omega_L$ to $\TDD$. If $(L,D)$ is regular, then $\TDD$ is a symplectic subbundle of $(\TEE,\omega_L)$. From this, it follows that $\omega\sup{L,D}$ is a symplectic section on that bundle. We also define $\varepsilon\sup{L,D}$ to be the restriction of $dE_L$ to $\TDD$. Then, taking the restriction of Lagrange-d'Alembert equations to $\TDD$, we get the following equation \begin{equation}\label{LDBS} i_\Gamma\omega\sup{L,D}=\varepsilon\sup{L,D}, \end{equation} which uniquely determines the section $\Gamma$. Indeed, the unique solution $\Gamma$ of the above equation is the solution of the Lagrange-d'Alembert equations: if we denote by $\lambda$ the constraint force, we have for every $u\in\TDD[a]$ that \[ \omega_L(\Gamma(a),u)-\pai{dE_L(a)}{u} = \pai{\lambda(a)}{\prol{\tau}(u)}=0 , \] where we have taken into account that $\prol{\tau}(u)\in D$ and $\lambda(a)\in\Do$. This approach, the so-called \emph{distributional approach}, was initiated by Bo\-cha\-rov and Vinogradov (see \cite{ViKu}) and further developed by \'Sniatycki and coworkers~\cite{BaSn,CuSn,Sn98}. Similar equations, within the framework of Lie algebroids, form the basis of the theory proposed in~\cite{MeLa}. \begin{remark}[Alternative description with $\TED$] {\rm One can also consider the restriction to $\TED$, which is a Lie algebroid, but no further simplification is achieved by this.
If $\bar{\omega}$ is the restriction of $\omega_L$ to $\TED$ and $\bar{\varepsilon}$ is the restriction of $dE_L$ to $\TED$, then the Lagrange-d'Alembert equations can be written in the form $i_\Gamma\bar{\omega}-\bar{\varepsilon}=\bar{\lambda}$, where $\bar{\lambda}$ is the restriction of the constraint force to $\TED$, which, in general, does not vanish. Also notice that the 2-form $\bar{\omega}$ is closed but, in general, degenerate.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \subsection{The nonholonomic bracket}\label{NHBracket} Let $f, g$ be two smooth functions on $D$ and take arbitrary extensions to $E$ denoted by the same letters (if there is no possibility of confusion). Suppose that $X_f$ and $X_g$ are the Hamiltonian sections on $\TEE$ given respectively by \[ i_{X_f} \, \omega_L = df \qquad\text{and}\qquad i_{X_g} \, \omega_L = dg. \] We define the \emph{nonholonomic bracket} of $f$ and $g$ as follows: \begin{equation} \label{nhb} \{f, g\}_{nh} = \omega_L(\bar{P}(X_f), \bar{P}(X_g)). \end{equation} Note that if $f'$ is another extension of $f$, then $(X_f-X_{f'})_{|D}$ is a section of $(\TDD)^{\perp}$ and, thus, we deduce that~\eqref{nhb} does not depend on the chosen extensions. The nonholonomic bracket is an almost-Poisson bracket, i.e., it is skew-symmetric and a derivation in each argument with respect to the usual product of functions, but it does not satisfy the Jacobi identity in general. In addition, one can prove the following formula: \begin{equation} \label{evolution} \dot{f} = \{f, E_L\}_{nh}. \end{equation} Indeed, we have \begin{align*} \dot{f} & = d_{\Gamma_{(L, D)}} f = i_{\Gamma_{(L, D)}} df = i_{\Gamma_{(L, D)}} i_{X_f} \omega_L \\ &= \omega_L(X_f, \Gamma_{(L, D)}) = \omega_L(X_f, \bar{P}(\Gamma_L)) \\ &= \omega_L(\bar{P}(X_f), \bar{P}(\Gamma_L)) = \{f, E_L\}_{nh}. \end{align*} Equation~\eqref{evolution} implies once more the conservation of the energy (by the skew-symmetric character of the nonholonomic bracket).
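Explicitly (a one-line verification of ours), taking $f=E_L$ in~\eqref{evolution} and using the skew-symmetry of the bracket,
\[
\dot{E}_L=\{E_L,E_L\}_{nh}=\omega_L(\bar{P}(X_{E_L}),\bar{P}(X_{E_L}))=0,
\]
which recovers the conservation of energy along the constrained dynamics.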
Alternatively, since $\TDD$ is an anchored vector bundle, one can take the function $f \in \cinfty{D}$ and its differential $\bar{d}f\in\Sec{(\TDD)^*}$. Since $\omega\sup{L,D}$ is regular, we have a unique section $\bar{X}_f \in \Sec{\TDD}$ defined by $i_{\bar{X}_f}\omega\sup{L,D} = \bar{d}f$. Then the nonholonomic bracket of two functions $f$ and $g$ is $\{f,g\}_{nh} = \omega\sup{L,D}(\bar{X}_f,\bar{X}_g)$. Note that if $\tilde{f}\in \cinfty{E}$ (resp., $\tilde{g}\in \cinfty{E}$) is an extension to $E$ of $f$ (resp., $g$), then $\bar{X}_f = \bar{P}(X_{\tilde{f}})_{|D}$ (resp., $\bar{X}_g = \bar{P}(X_{\tilde{g}})_{|D}$). \section{Morphisms and reduction} \label{sec:reduction} One important advantage of dealing with Lagrangian systems evolving on Lie algebroids is that the reduction procedure can be naturally handled by considering morphisms of Lie algebroids, as was already observed by Weinstein~\cite{Weinstein}. We study in this section the transformation laws of the different geometric objects in our theory and we apply these results to the study of the reduction theory. \begin{proposition}\label{transformation-S} Let $\map{\Phi}{E}{E'}$ be a morphism of Lie algebroids, and consider the $\Phi$-tangent prolongation of $\Phi$, i.e., $\map{\prol[\Phi]{\Phi}}{\TEE}{{\mathcal T}^{E'}E'}$. Let $\xi\sup{V}$ and $\xi'{}\sup{V}$, $S$ and $S'$, and $\Delta$ and $\Delta'$, be the vertical liftings, the vertical endomorphisms, and the Liouville sections on $E$ and $E'$, respectively. Then, \begin{enumerate} \item $\prol[\Phi]{\Phi}(\xi\sup{V}(a,b)) = \xi'{}\sup{V}(\Phi(a),\Phi(b))$, for all $(a,b)\in E\times_M E$, \item $\prol[\Phi]{\Phi}\circ\Delta = \Delta'\circ\Phi$, \item $\prol[\Phi]{\Phi}\circ S = S'\circ \prol[\Phi]{\Phi}$. \end{enumerate} \end{proposition} \begin{proof} For the first property, we notice that both terms are vertical, so that we just have to show that their actions on functions coincide.
For every function $f'\in\cinfty{E'}$, we deduce that \[ \begin{aligned} \rho'^{1}(\prol[\Phi]{\Phi}(\xi\sup{V}(a,b)))f' &=T\Phi(\rho^{1}(\xi\sup{V}(a,b)))f' =T\Phi(b\sup{V}_a)f' =b\sup{V}_a(f'\circ\Phi)\\ &=\frac{d}{dt}f'(\Phi(a+tb))\at{t=0} =\frac{d}{dt}f'(\Phi(a)+t\Phi(b))\at{t=0}\\ &=\Phi(b)\sup{V}_{\Phi(a)}(f') =\rho'^{1}(\xi'{}\sup{V}(\Phi(a),\Phi(b)))f'. \end{aligned} \] For the second property, we have $\Delta(a)=\xi\sup{V}(a,a)$ so that applying the first property it follows that \[ \prol[\Phi]{\Phi}(\Delta(a)) =\prol[\Phi]{\Phi}(\xi\sup{V}(a,a)) =\xi'{}\sup{V}(\Phi(a),\Phi(a)) =\Delta'(\Phi(a)). \] Finally, for any $z=(a,b,V)\in\TEE$, we obtain that \[ \begin{aligned} \prol[\Phi]{\Phi}(S(z)) &=\prol[\Phi]{\Phi}(\xi\sup{V}(a,b)) =\xi'{}\sup{V}(\Phi(a),\Phi(b))\\ &=S'(\Phi(a),\Phi(b),T\Phi(V)) =S'(\prol[\Phi]{\Phi}(z)), \end{aligned} \] which concludes the proof. \end{proof} \begin{proposition}\label{transformation-omegaL} Let $L\in\cinfty{E}$ be a Lagrangian function, $\theta_L$ the Cartan form and $\omega_L=-d\theta_L$. Let $\map{\Phi}{E}{E'}$ be a Lie algebroid morphism and suppose that $L=L'\circ \Phi$, with $L'\in \cinfty{E'}$ a Lagrangian function. Then, we have \begin{enumerate} \item $(\prol[\Phi]{\Phi})\pb\theta_{L'}=\theta_L$, \item $(\prol[\Phi]{\Phi})\pb\omega_{L'}=\omega_L$, \item $(\prol[\Phi]{\Phi})\pb E_{L'}=E_L$, \item $G\sup{L'}_{\Phi(a)}(\Phi(b),\Phi(c))=\GL[a](b,c)$, for every $a\in E$ and every $b,c\in E_{\tau(a)}$. \end{enumerate} \end{proposition} \begin{proof} Indeed, for every $Z\in\TEE$ we have \[ \begin{aligned} \pai{(\prol[\Phi]{\Phi})\pb\theta_{L'}}{Z} &=\pai{\theta_{L'}}{\prol[\Phi]{\Phi}(Z)} =\pai{dL'}{S'(\prol[\Phi]{\Phi}(Z))} =\pai{dL'}{\prol[\Phi]{\Phi}(S(Z))}\\ &=\pai{(\prol[\Phi]{\Phi})\pb dL'}{S(Z)} =\pai{d(\prol[\Phi]{\Phi})\pb L'}{S(Z)} =\pai{d(L'\circ\Phi)}{S(Z)}\\ &=\pai{dL}{S(Z)} =\pai{\theta_L}{Z}, \end{aligned} \] where we have used the transformation rule for the vertical endomorphism. 
The second property follows from the fact that $\prol[\Phi]{\Phi}$ is a morphism, so that $(\prol[\Phi]{\Phi})\pb d=d(\prol[\Phi]{\Phi})\pb$. The third one follows similarly and the fourth is a consequence of the second property and the definitions of the tensors $\GL$ and $G\sup{L'}$. \end{proof} Let $\Gamma$ be a \textsc{sode}\ and $L\in\cinfty{E}$ be a Lagrangian. For convenience, we define the 1-form $\delta_\Gamma L\in\Sec{(\TEE)^*}$ by \[ \pai{\delta_\Gamma L}{Z}=\pai{dE_L-i_\Gamma\omega_L}{Z} =\pai{dE_L}{Z}-\omega_L(\Gamma,Z), \] for every section $Z$ of $\TEE$. We notice that $\Gamma$ is the solution of the free dynamics if and only if $\delta_\Gamma L=0$. On the other hand, notice that the 1-form $\delta_\Gamma L$ is semibasic, because $\Gamma$ is a \textsc{sode}. \begin{proposition}\label{transformation-deltaL} Let $\Gamma$ be a \textsc{sode}\ in $E$ and $\Gamma'$ a \textsc{sode}\ in $E'$. Let $L\in\cinfty{E}$ and $L'\in\cinfty{E'}$ be Lagrangian functions defined on $E$ and $E'$, respectively, such that $L=L'\circ\Phi$. Then, \begin{equation} \label{Gamma-relation} \pai{\delta_\Gamma L-(\prol[\Phi]{\Phi})\pb\delta_{\Gamma'} L'}{Z} =\omega_{L'}(\Gamma'\circ\Phi-\prol[\Phi]{\Phi} \circ \Gamma,\prol[\Phi]{\Phi}(Z)), \end{equation} for every section $Z$ of $\TEE$. \end{proposition} \begin{proof} Indeed, from $(\prol[\Phi]{\Phi})\pb dE_{L'}=d E_L$, we have that \[ \begin{aligned} \pai{\delta_\Gamma L-(\prol[\Phi]{\Phi})\pb\delta_{\Gamma'} L'}{Z} &=\pai{(\prol[\Phi]{\Phi})\pb i_{\Gamma'}\omega_{L'}-i_\Gamma\omega_L} {Z}\\ &=\pai{(\prol[\Phi]{\Phi})\pb i_{\Gamma'}\omega_{L'} - i_\Gamma(\prol[\Phi]{\Phi})\pb\omega_{L'}}{Z}\\ & = \omega_{L'}(\Gamma'\circ\Phi - \prol[\Phi]{\Phi}\circ\Gamma,\prol[\Phi]{\Phi}(Z)), \end{aligned} \] which concludes the proof. 
\end{proof} \subsection{Reduction of the free dynamics} Here, we build on Propositions~\ref{transformation-omegaL} and~\ref{transformation-deltaL} to identify conditions under which the dynamics can be reduced under a morphism of Lie algebroids. We first notice that, from Proposition~\ref{transformation-omegaL}, if $\Phi$ is a fiberwise surjective morphism and $L$ is a regular Lagrangian on $E$, then $L'$ is a regular Lagrangian on $E'$ (note that $\prol[\Phi]{\Phi}:{\mathcal T}^EE\to {\mathcal T}^{E'}{E'}$ is a fiberwise surjective morphism). Thus, the dynamics of both systems is uniquely defined. \begin{theorem}[Reduction of the free dynamics]\label{reduction-free-dynamics} Suppose that the Lagrangian functions $L$ and $L'$ are $\Phi$-related, that is, $L = L' \circ \Phi$. If $\Phi$ is a fiberwise surjective morphism and $L$ is a regular Lagrangian, then $L'$ is also a regular Lagrangian. Moreover, if $\Gamma_{L}$ and $\Gamma_{L'}$ are the solutions of the free dynamics defined by $L$ and $L'$ then \[ \prol[\Phi]{\Phi}\circ\Gamma_L=\Gamma_{L'}\circ\Phi. \] Therefore, if $a(t)$ is a solution of the free dynamics defined by~$L$, then $\Phi(a(t))$ is a solution of the free dynamics defined by~$L'$. \end{theorem} \begin{proof} If $\Gamma_L$ and $\Gamma_{L'}$ are the solutions of the dynamics, then $\delta_{\Gamma_L} L=0$ and $\delta_{\Gamma_{L'}}L'=0$ so that the left-hand side in equation~\eqref{Gamma-relation} vanishes. Thus \[ \omega_{L'}(\Gamma_{L'}\circ\Phi-\prol[\Phi]{\Phi} \circ \Gamma_L,\prol[\Phi]{\Phi}(Z)) = 0, \] for every $Z\in\Sec{\TEE}$. Therefore, using that $L'$ is regular and the fact that $\prol[\Phi]{\Phi}$ is a fiberwise surjective morphism, we conclude the result. \end{proof} We will say that the unconstrained dynamics $\Gamma_{L'}$ is the \emph{reduction of the unconstrained dynamics} $\Gamma_L$ by the morphism $\Phi$.
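To make the reduction theorem concrete, here is a minimal numerical sketch (an illustration added here, not part of the original text, with made-up inertia values): for $Q=SO(3)$ and a left-invariant kinetic-energy Lagrangian, reduction by the quotient morphism $TSO(3)\to TSO(3)/SO(3)\cong \mathfrak{so}(3)$ yields the Euler equations $\mathbb{I}\dot{\omega}=(\mathbb{I}\omega)\times\omega$, and solutions upstairs project to solutions of the reduced system. The script integrates the reduced dynamics and checks two quantities that the reduced flow must conserve: the reduced energy $\frac{1}{2}\langle\mathbb{I}\omega,\omega\rangle$ and the norm $|\mathbb{I}\omega|$.

```python
# Numerical sketch: the reduced free dynamics on so(3) are the Euler
# equations I*dw/dt = (I*w) x w.  The inertia values below are made up.
I = (1.0, 2.0, 3.0)  # principal moments of inertia (assumed diagonal)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def euler_rhs(w):
    # dw/dt = I^{-1}((I w) x w)
    Iw = tuple(Ii*wi for Ii, wi in zip(I, w))
    c = cross(Iw, w)
    return tuple(ci/Ii for ci, Ii in zip(c, I))

def rk4_step(f, w, h):
    add = lambda u, v, s: tuple(ui + s*vi for ui, vi in zip(u, v))
    k1 = f(w); k2 = f(add(w, k1, h/2)); k3 = f(add(w, k2, h/2)); k4 = f(add(w, k3, h))
    return tuple(wi + (h/6)*(p + 2*q + 2*r + s)
                 for wi, p, q, r, s in zip(w, k1, k2, k3, k4))

def energy(w):   # reduced Lagrangian l(w) = (1/2) <I w, w>
    return 0.5*sum(Ii*wi*wi for Ii, wi in zip(I, w))

def casimir(w):  # |I w| is conserved along the reduced dynamics
    return sum((Ii*wi)**2 for Ii, wi in zip(I, w))**0.5

w = (0.3, 1.0, -0.2)
e0, c0 = energy(w), casimir(w)
for _ in range(2000):
    w = rk4_step(euler_rhs, w, 1e-3)
energy_drift = abs(energy(w) - e0)
casimir_drift = abs(casimir(w) - c0)
```

Both drifts stay at the level of the integrator's truncation error, consistent with the fact that the reduced dynamics is well defined and inherits the conservation laws of the unreduced system.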
\subsection{Reduction of the constrained dynamics} The above results about reduction of unconstrained Lagrangian systems can be easily generalized to nonholonomic constrained Lagrangian systems whenever the constraints of one system are mapped by the morphism to the constraints of the second system. Let us elaborate on this. Let $(L,D)$ be a constrained Lagrangian system on a Lie algebroid $E$ and let $(L',D')$ be another constrained Lagrangian system on a second Lie algebroid $E'$. Throughout this section, we assume that there is a fiberwise surjective morphism of Lie algebroids $\map{\Phi}{E}{E'}$ such that $L=L'\circ\Phi$ and $\Phi(D)=D'$. The latter condition implies that the base map is also surjective, so that we will assume that $\Phi$ is an epimorphism (i.e., in addition to being fiberwise surjective, the base map $\varphi$ is a submersion). As a first consequence, we have $G\sup{L',D'}_{\Phi(a)} (\Phi(b),\Phi(c)) = \GLD[a](b,c)$, for every $a\in D$ and every $b,c\in D_{\pi(a)}$, and therefore, if $(L,D)$ is regular, then so is $(L',D')$. \begin{lemma}\label{l5.5} With respect to the decompositions $\TEE|_D=\TED\oplus F$ and $\prol[E']{E'}|_{D'}=\prol[E']{D'}\oplus F'$, we have the following properties: \begin{enumerate} \item $\prol[\Phi]{\Phi}(\TED)=\prol[E']{D'}$, \item $\prol[\Phi]{\Phi}(F)=F'$, \item If $P,Q$ and $P',Q'$ are the projectors associated with $(L,D)$ and $(L',D'),$ respectively, then $P'\circ\prol[\Phi]{\Phi}=\prol[\Phi]{\Phi}\circ P$ and $Q'\circ\prol[\Phi]{\Phi}=\prol[\Phi]{\Phi}\circ Q$.
\end{enumerate} With respect to the decompositions $\TEE|_D=\TDD\oplus(\TDD)^\perp$ and $\prol[E']{E'}|_{D'}=\prol[D']{D'}\oplus(\prol[D']{D'})^\perp$ we have the following properties: \begin{enumerate}\setcounter{enumi}{3} \item $\prol[\Phi]{\Phi}(\TDD)=\prol[D']{D'}$, \item $\prol[\Phi]{\Phi}\bigl((\TDD)^\perp\bigr) = (\prol[D']{D'})^\perp$, \item If $\bar{P},\bar{Q}$ and $\bar{P}',\bar{Q}'$ are the projectors associated with $(L,D)$ and $(L',D'),$ respectively, then $\bar{P}'\circ\prol[\Phi]{\Phi}=\prol[\Phi]{\Phi}\circ\bar{P}$ and $\bar{Q}'\circ\prol[\Phi]{\Phi}=\prol[\Phi]{\Phi}\circ\bar{Q}$. \end{enumerate} \end{lemma} \begin{proof} From the definition of $\prol[\Phi]{\Phi}$, it follows that \[ (\prol[\Phi]{\Phi})({\mathcal T}^ED)\subseteq {\mathcal T}^{E'}D',\;\;\;\;(\prol[\Phi]{\Phi})({\mathcal T}^DD)\subseteq {\mathcal T}^{D'}D'. \] Thus, one may consider the vector bundle morphisms \[ \prol[\Phi]{\Phi}:{\mathcal T}^ED\to {\mathcal T}^{E'}D',\;\;\; \prol[\Phi]{\Phi}:{\mathcal T}^DD\to {\mathcal T}^{D'}D'. \] Moreover, using that $\Phi$ is fiberwise surjective and that $\varphi$ is a submersion, we deduce that the rank of the above morphisms is maximum. This proves (1) and (4). The proof of (5) is as follows. For every $a'\in D'$, one can choose $a\in D$ such that $\Phi(a)=a'$, and one can write any element $w'\in\prol[D']{D'}[a']$ as $w'=\prol[\Phi]{\Phi}(w)$ for some $w\in\prol[D]{D}[a]$. Thus, if $z\in(\TDD[a])^\perp$, for every $w'\in\prol[D']{D'}[a']$ we have \[ \omega_{L'}(\prol[\Phi]{\Phi}(z),w') =\omega_{L'}(\prol[\Phi]{\Phi}(z),\prol[\Phi]{\Phi}(w)) =\omega_L(z,w)=0, \] from which it follows that $\prol[\Phi]{\Phi}(z)\in(\prol[D']{D'})^\perp$. In a similar way, using that ${\mathcal T}^\Phi\Phi:(\prol[E]{E})_{|D}\to (\prol[E']{E'})_{|D'}$ is fiberwise surjective, (2) in Proposition~\ref{transformation-omegaL} and (4), we obtain that $(\prol[D']{D'})^\perp\subseteq (\prol[\Phi]{\Phi})((\prol[D]{D})^\perp)$.
For the proof of (2) we have that \[ \prol[\Phi]{\Phi}(F) =\prol[\Phi]{\Phi}((\TDD)^\perp\cap\Ver{\TEE}) \subseteq (\prol[D']{D'})^\perp\cap\Ver{\prol[E']{E'}} =F'. \] Thus, using that $\prol[\Phi]{\Phi}: (\prol[E]{E})_{|D}\to (\prol[E']{E'})_{|D'}$ is fiberwise surjective, the fact that $(\prol[E]{E})_{|D}=\prol[E]{D}\oplus F$ and (1), it follows that \[ (\prol[E']{E'})_{|D'}=\prol[E']{D'} \oplus (\prol[\Phi]{\Phi})(F). \] Therefore, since $(\prol[E']{E'})_{|D'}=\prol[E']{D'}\oplus F'$, we conclude that (2) holds. Finally, (3) is an immediate consequence of (1) and (2), and similarly, (6) is an immediate consequence of (4) and (5). \end{proof} From the properties above, we get the following result. \begin{theorem}[Reduction of the constrained dynamics]\label{t5.6} Let $(L,D)$ be a regular constrained Lagrangian system on a Lie algebroid $E$ and let $(L',D')$ be a constrained Lagrangian system on a second Lie algebroid $E'$. Assume that a fiberwise surjective morphism of Lie algebroids $\map{\Phi}{E}{E'}$ exists such that $L=L'\circ\Phi$ and $\Phi(D)=D'$. If $\Gamma_{(L, D)}$ is the constrained dynamics for $L$ and $\Gamma_{(L', D')}$ is the constrained dynamics for $L'$, then $\prol[\Phi]{\Phi}\circ\Gamma_{(L, D)}=\Gamma_{(L', D')} \circ\Phi$. If $a(t)$ is a solution of Lagrange-d'Alembert differential equations for $L$, then $\Phi(a(t))$ is a solution of Lagrange-d'Alembert differential equations for~$L'$. \end{theorem} \begin{proof} For the free dynamics, we have that $\prol[\Phi]{\Phi}\circ\Gamma_L=\Gamma_{L'}\circ\Phi$. Moreover, from property (3) in Lemma~\ref{l5.5}, for every $a\in D$, we have that \[ \begin{array}{l} \prol[\Phi]{\Phi}(\Gamma_{(L, D)}(a)) =\prol[\Phi]{\Phi}(P(\Gamma_L(a))) =P'(\prol[\Phi]{\Phi}(\Gamma_L(a))) \\[5pt] =P'(\Gamma_{L'}(\Phi(a))) =\Gamma_{(L', D')}(\Phi(a)), \end{array} \] which concludes the proof. 
\end{proof} We will say that the constrained dynamics $\Gamma_{(L', D')}$ is the \emph{reduction of the constrained dynamics} $\Gamma_{(L, D)}$ by the morphism $\Phi$. \begin{theorem}\label{t5.7} Under the same hypotheses as in Theorem \ref{t5.6}, we have that \[ \{f'\circ \Phi,g'\circ \Phi\}_{nh}=\{f',g'\}'_{nh}\circ \Phi, \] for $f',g'\in C^\infty(D'),$ where $\{\cdot,\cdot\}_{nh}$ (respectively, $\{\cdot,\cdot\}_{nh}')$ is the nonholonomic bracket for the constrained system $(L,D)$ (respectively, $(L',D')$). In other words, $\Phi:D\to D'$ is an almost-Poisson morphism. \end{theorem} \begin{proof} Using (2) in Proposition~\ref{transformation-omegaL} and the fact that $\Phi$ is a Lie algebroid morphism, we deduce that \[ (i_{X_{f'\circ \Phi}}(\prol[\Phi]{\Phi})^*\omega_{L'})=i_{X_{f'}}\omega_{L'}\circ \Phi. \] Thus, since $\prol[\Phi]{\Phi}$ is fiberwise surjective, we obtain that \[ \prol^\Phi\Phi\circ X_{f'\circ \Phi}=X_{f'}\circ \Phi. \] Now, from (\ref{nhb}) and Lemma~\ref{l5.5}, we conclude that \[ \{f'\circ \Phi,g'\circ \Phi\}_{nh}=\{f',g'\}_{nh}'\circ \Phi. \] \end{proof} One of the most important cases in the theory of reduction is the case of reduction by a symmetry group. In this respect, we have the following result. \begin{theorem}[\cite{LeMaMa,Mackenzie}] \label{quotient-Lie-algebroid} Let $\map{q^Q_G}{Q}{M}$ be a principal $G$-bundle, let $\map{\tau}{E}{Q}$ be a Lie algebroid, and assume that we have an action of $G$ on $E$ such that the quotient vector bundle $E/G$ is well-defined. If the set $\Sec{E}^G$ of equivariant sections of $E$ is a Lie subalgebra of $\Sec{E}$, then the quotient $E'=E/G$ has a canonical Lie algebroid structure over $M$ such that the canonical projection $\map{q^E_G}{E}{E/G}$, given by $a\mapsto[a]_G$, is a (fiberwise bijective) Lie algebroid morphism over $q^Q_G$. \end{theorem} As a concrete example of application of the above theorem, we have the well-known case of the Atiyah or Gauge algebroid. 
In this case, the Lie algebroid $E$ is the standard Lie algebroid $TQ\to Q$, the action is by tangent maps $gv\equiv T\psi_g(v)$, the reduction is the Atiyah Lie algebroid $TQ/G\to Q/G$ and the quotient map $\map{q^{TQ}_G}{TQ}{TQ/G}$ is a Lie algebroid epimorphism. It follows that if $L$ is a $G$-invariant regular Lagrangian on $TQ$ then the unconstrained dynamics for $L$ projects to the unconstrained dynamics for the reduced Lagrangian $L'$. Moreover, if the constraints $D$ are also $G$-invariant, then the constrained dynamics for $(L,D)$ reduces to the constrained dynamics for $(L',D/G)$. On a final note, we mention that the pullback of the distributional equation $i_{\Gamma'}\omega\sup{L',D'}-\varepsilon\sup{L',D'}=0$ by $\prol[\Phi]{\Phi}$ is precisely $(i_\Gamma\omega\sup{L,D}-\varepsilon\sup{L,D})\circ\prol[\Phi]{\Phi}=0$. \subsection{Reduction by stages} As a direct consequence of the results presented above, one can obtain a theory of reduction by stages. In Poisson geometry, reduction by stages is a straightforward procedure. Given the fact that the Lagrangian counterpart of Poisson reduction is Lagrangian reduction, it is not surprising that reduction by stages in our framework is also straightforward. The Lagrangian theory of reduction by stages is a consequence of the following basic observation: \begin{quote} Let $\map{\Phi_1}{E_0}{E_1}$ and $\map{\Phi_2}{E_1}{E_2}$ be fiberwise surjective morphisms of Lie algebroids and let $\map{\Phi}{E_0}{E_2}$ be the composition $\Phi=\Phi_2\circ\Phi_1$. The reduction of a Lagrangian system in $E_0$ by $\Phi$ can be obtained by first reducing by $\Phi_1$ and then reducing the resulting Lagrangian system by $\Phi_2$. \end{quote} This result follows using that $\prol[\Phi]{\Phi} = \prol[\Phi_2]{\Phi_2} \circ \prol[\Phi_1]{\Phi_1}$. Based on this fact, one can analyze one of the most interesting cases of reduction: the reduction by the action of a symmetry group.
We consider a group $G$ acting on a manifold $Q$ and a closed normal subgroup $N$ of $G$. The process of reduction by stages is illustrated in the following diagram \[ \xymatrix{ \ar@/_{15pt}/[dd]^{\ \cdot/G}_{\text{Total reduction }} &\map{\tau_Q}{E_0=TQ}{M_0=Q}\ar[d]^{\ \cdot/N}_{\text{First reduction }}\\ &\map{\tau_1}{E_1=TQ/N}{M_1=Q/N}\ar[d]^{\ \cdot/(G/N)}_{\text{Second reduction }}\\ &\map{\tau_2}{E_2=(TQ/N)/(G/N)}{M_2=(Q/N)/(G/N)} } \] In order to prove our results about reduction by stages, we have to show that $E_0, E_1$ and $E_2$ are Lie algebroids, that the quotient maps $\map{\Phi_1}{E_0}{E_1}$, $\map{\Phi_2}{E_1}{E_2}$ and $\map{\Phi}{E_0}{E_2}$ are Lie algebroid morphisms and that the composition $\Phi_2\circ\Phi_1$ equals $\Phi$. Our proof is based on the following well-known result (see~\cite{CeMaRa}), which contains most of the ingredients in the theory of reduction by stages. \begin{theorem}[\cite{CeMaRa}]\label{principal-reduction} Let $\map{q^Q_G}{Q}{M}$ be a principal $G$-bundle and $N$ a closed normal subgroup of $G$. Then, \begin{enumerate} \item $\map{q^Q_N}{Q}{Q/N}$ is a principal $N$-bundle, \item $G/N$ acts on $Q/N$ by the rule $[g]_N[q]_N=[gq]_N$, \item $\map{q^{Q/N}_{G/N}}{Q/N}{(Q/N)/(G/N)}$ is a principal $(G/N)$-bundle, \item The map $\map{i}{Q/G}{(Q/N)/(G/N)}$ defined by $[q]_G\mapsto[[q]_N]_{G/N}$ is a diffeomorphism. \end{enumerate} \end{theorem} Building on the previous results, one can deduce the following theorem, which states that the reduction of a Lie algebroid can be done by stages. \begin{theorem} Let $\map{q^Q_G}{Q}{M}$ be a principal $G$-bundle and $N$ be a closed normal subgroup of $G$.
Then, \begin{enumerate} \item $\map{\tau_{TQ/G}}{TQ/G}{Q/G}$ is a Lie algebroid and $\map{q^{TQ}_G}{TQ}{TQ/G}$ is a Lie algebroid epimorphism, \item $\map{\tau_{TQ/N}}{TQ/N}{Q/N}$ is a Lie algebroid and $\map{q^{TQ}_N}{TQ}{TQ/N}$ is a Lie algebroid epimorphism, \item $G/N$ acts on $TQ/N$ by the rule $[g]_N[v]_N=[gv]_N$, \item $\map{\tau_{(TQ/N)/(G/N)}}{(TQ/N)/(G/N)}{(Q/N)/(G/N)}$ is a Lie algebroid and $\map{q^{TQ/N}_{G/N}}{TQ/N}{(TQ/N)/(G/N)}$ is a Lie algebroid epimorphism, \item The map $\map{I}{TQ/G}{(TQ/N)/(G/N)}$ defined by $[v]_G\mapsto[[v]_N]_{G/N}$ is an isomorphism of Lie algebroids over the map $i$. \end{enumerate} \end{theorem} \begin{proof} The vector bundle $\tau_{TQ/G}:TQ/G\to Q/G$ (respectively, $\tau_{TQ/N}:TQ/N\to Q/N)$ is the Atiyah algebroid for the principal $G$-bundle $q_G^Q:Q\to Q/G$ (respectively, $q_N^Q:Q\to Q/N$), so that (1) and (2) are obvious. Condition (3) is just condition (2) of Theorem~\ref{principal-reduction} applied to the principal $N$-bundle $TQ\to TQ/N$. To prove condition (4), we notice that the action of $G/N$ on the Lie algebroid $TQ/N$ is free and satisfies the conditions of Theorem~\ref{quotient-Lie-algebroid}. Finally, the Lie algebroid morphism $\map{j}{TQ}{TQ/N}$ is equivariant with respect to the $G$-action on $TQ$ and the $(G/N)$-action on $TQ/N$. Thus, it induces a morphism of Lie algebroids in the quotient. It is an isomorphism since it is a diffeomorphism by Theorem~\ref{principal-reduction}. \end{proof} The following diagram illustrates the above situation: \[ \xymatrix{% TQ\ar@/^{20pt}/[rr]^{\Phi} \ar[d]_{\tau_Q}\ar[r]_{\kern-10pt\Phi_1} & TQ/N\ar[d]^{\tau_1}\ar[r]_{\kern-20pt \Phi_2} & (TQ/N)/(G/N)\ar[d]^{\tau_2}\\ Q\ar@/_{20pt}/[rr]_{q^Q_G}\ar[r]^{q_N^Q} & Q/N\ar[r]^{\kern-15pt q_{G/N}^{Q/N}}&(Q/N)/(G/N) } \] In particular, for the unconstrained case one has the following result. 
\begin{theorem}[Reduction by stages of the free dynamics] Let $\map{q^Q_G}{Q}{Q/G}$ be a principal $G$-bundle, and $N$ a closed normal subgroup of $G$. Let $L$ be a Lagrangian function on $TQ$ which is $G$-invariant. Then the reduction by the symmetry group $G$ can be performed in two stages: \begin{itemize} \item[1.] reduce by the normal subgroup $N$, \item[2.] reduce the resulting dynamics from 1. by the residual symmetry group $G/N$. \end{itemize} \end{theorem} Since the dynamics of a constrained system is obtained by projection of the free dynamics, we also have the following result. \begin{theorem}[Reduction by stages of the constrained dynamics] Let $\map{q^Q_G}{Q}{Q/G}$ be a principal $G$-bundle and $N$ a closed normal subgroup of $G$. Let $(L,D)$ be a $G$-invariant constrained Lagrangian system. Then the reduction by the symmetry group $G$ can be performed in two stages: \begin{itemize} \item[1.] reduce by the normal subgroup $N$, \item[2.] reduce the resulting dynamics from 1. by the residual symmetry group $G/N$. \end{itemize} \end{theorem} \section{The momentum equation} \label{momentum-equation} In this section, we introduce the momentum map for a constrained system on a Lie algebroid, and examine its evolution along the dynamics. This gives rise to the so-called momentum equation. \subsection{Unconstrained case} Let us start by discussing the unconstrained case. Let $\tau_E:E\to M$ be a Lie algebroid over a manifold $M$ and $L:E\to \mathbb{R}$ be a regular Lagrangian function. Suppose that $\tau_K:K\to M$ is a vector bundle over $M$ and that $\Psi:K\to E$ is a vector bundle morphism (over the identity of $M$) between $K$ and $E$. Then, we can define \emph{the unconstrained momentum map} $J_{(L,\Psi)}:E\to K^*$ \emph{associated with $L$ and $\Psi$} as follows \[ J_{(L,\Psi)}(a)\in K_x^*, \mbox{ for } a\in E_x, \] and \[ (J_{(L,\Psi)}(a))(k)=\frac{d}{dt}_{|t=0}L(a+t\Psi(k))=\Psi(k)_a\sup{V}(L),\mbox{ for }k\in K_x.
\] If $\sigma:M\to K$ is a section of $\tau_K:K\to M$ then, using the momentum map $J_{(L,\Psi)}$, we may introduce the real function $J_{(L,\Psi)}^\sigma:E\to \mathbb{R}$ given by \begin{equation}\label{Momentum1} J_{(L,\Psi)}^\sigma(a) = J_{(L,\Psi)}(a)(\sigma(x))=\Psi(\sigma(x))_a\sup{V}(L),\mbox{ for }a\in E_x. \end{equation} \begin{theorem}[The unconstrained momentum equation]\label{theorem6.1} Let $\Gamma_L$ be the Euler-Lagrange section associated with the regular Lagrangian function $L:E\to \mathbb{R}$. If $\sigma:M\to K$ is a section of $\tau_K:K\to M$ and $(\Psi\circ \sigma)^c\in Sec(\prol[E]{E})$ is the complete lift of $(\Psi\circ \sigma)\in Sec(E)$, we have that \begin{equation}\label{EqMomen1} <d^{\prol[E]{E}}J_{(L,\Psi)}^\sigma,\Gamma_L>=<d^{\prol[E]{E}}L, (\Psi\circ \sigma)^c>, \end{equation} where $d^{{\mathcal T}^EE}$ is the differential of the Lie algebroid ${\prol[E]{E}}\to E.$ In particular, if \linebreak $<d^{\prol[E]{E}}L,(\Psi\circ \sigma)^c>=0$, then the real function $J_{(L,\Psi)}^\sigma$ is a constant of the motion for the Lagrangian dynamics associated with the Lagrangian function $L.$ \end{theorem} \begin{proof} Let $S:{\prol[E]{E}}\to {\prol[E]{E}}$ be the vertical endomorphism. If $(\Psi\circ \sigma)^v\in Sec(\prol[E]{E})$ is the vertical lift of $(\Psi\circ \sigma)\in Sec(E)$ then, using (\ref{Momentum1}) and the fact that $S(\Psi\circ \sigma)^c=(\Psi\circ \sigma)^v,$ it follows that \begin{equation}\label{Momen2} J_{(L,\Psi)}^\sigma=\theta_L((\Psi\circ \sigma)^c), \end{equation} where $\theta_L$ is the Cartan $1$-form associated with $L$. Thus, from (\ref{Momen2}), we deduce that \[ d^{\prol[E]{E}}J_{(L,\Psi)}^\sigma={\mathcal L}_{(\Psi\circ \sigma)^c}^{\prol[E]{E}}\theta_L + i(\Psi\circ \sigma)^c(\omega_L),\] $\omega_L$ being the Cartan $2$-form associated with $L$.
Therefore, if $E_L:E\to \mathbb{R}$ is the Lagrangian energy, we obtain that \begin{equation}\label{Formula} \begin{array}{rcl} <d^{\prol[E]{E}}J^\sigma_{(L,\Psi)},\Gamma_L> & = & <d^{\prol[E]{E}}(\theta_L(\Gamma_L)), (\Psi\circ \sigma)^c>-<d^{\prol[E]{E}}E_L,(\Psi\circ \sigma)^c>\\[8pt] &&-<\theta_L,[(\Psi\circ \sigma)^c,\Gamma_L]>. \end{array} \end{equation} Now, from (\ref{SODEcompl}) and since $\Gamma_L$ is a \textsc{sode}\ section, it follows that \[ \theta_L(\Gamma_L)=<d^{\prol[E]{E}}L,\Delta>,\;\;\;\; <\theta_L, [(\Psi\circ \sigma)^c,\Gamma_L]> =0, \] where $\Delta\in Sec({\prol[E]{E}})$ is the Liouville section. Consequently, using (\ref{Formula}) we deduce that (\ref{EqMomen1}) holds. \end{proof} \begin{remark}[Conservation of momentum on $TM$]\label{remark6.2} {\rm Let $L:TM\to \mathbb{R}$ be a standard regular Lagrangian function on $TM.$ Suppose that $G$ is a Lie group with Lie algebra ${\frak g}$ and that $\psi:G\times M\to M$ is a (left) action of $G$ on $M.$ Then, we may consider the trivial vector bundle over $M$ \[ K=M\times {\frak g}\to M \] and the vector bundle morphism $\Psi:K\to TM$ (over the identity of $M$) defined by \begin{equation}\label{Psi} \Psi(x,\xi)=\xi_M(x), \end{equation} where $\xi_M\in {\frak X}(M)$ is the infinitesimal generator of the action $\psi$ associated with $\xi\in {\frak g}$. A direct computation proves that the (unconstrained) momentum map $J_{(L,\Psi)}:E=TM\to K^*=M\times {\frak g}^*$ associated with $L$ and $\Psi$ is given by $$J_{(L,\Psi)}(v_x)=(x,J(v_x)),\mbox{ for } v_x\in T_xM,$$ where $J:TM\to {\frak g}^*$ is the standard momentum map associated with $L$ and the action $\psi$ defined by $$J(v_x)(\xi)=\frac{d}{dt}_{|t=0}L(v_x+t\xi_M(x)), \mbox{ for } v_x\in T_xM \mbox{ and } \xi\in {\frak g}$$ (see, for instance, \cite{AM}).
Now, each $\xi\in {\frak g}$ defines a (constant) section $\sigma$ of the vector bundle $K=M\times {\frak g}\to M$ and the real function $J_{(L,\Psi)}^\sigma$ is just the momentum $J_\xi:TM\to \mathbb{R}$ in the direction of $\xi$. On the other hand, if $\eta\in {\frak g}$, then the infinitesimal generator $\eta_{TM}$ of the tangent action $T\psi:G\times TM\to TM$ associated with $\eta$ is the (standard) complete lift $\eta_M^c\in {\frak X}(TM)$ of $\eta_M.$ Therefore, using Theorem \ref{theorem6.1}, we deduce a well-known result~\cite{AM}: ``If the Lagrangian function $L:TM\to \mathbb{R}$ is invariant under the tangent action $T\psi$ of $G$ on $TM$ then, for every $\xi\in {\frak g}$, the momentum $J_\xi:TM\to \mathbb{R}$ in the direction of $\xi$ is a constant of the motion of the Lagrangian dynamics.''} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \subsection{Constrained case} Next, let us discuss the constrained case. Suppose that $L:E\to \mathbb{R}$ is a regular Lagrangian function on a Lie algebroid $\tau_E:E\to M$, that $\tau_K:K\to M$ is a vector bundle over $M$ and that $\Psi:K\to E$ is a vector bundle morphism (over the identity of $M$) between $K$ and $E$. In addition, let $\tau_D:D\to M$ be a vector subbundle of $\tau_E:E\to M$ such that the nonholonomic Lagrangian system $(L,D)$ is regular. If $x$ is a point of $M$, we consider the vector subspace $K_x^D$ of $K_x$ given by \[ K_x^D=\{k\in K_x/\Psi(k)\in D_x\}. \] We will denote by $i_x:K_x^D\to K_x$ the canonical inclusion, by $i_x^*:K_x^*\to (K_x^D)^*$ the canonical projection and by $K^D$ and $(K^D)^*$ the sets \[ K^D=\bigcup_{x\in M}K_x^D,\;\;\;\; (K^D)^*=\bigcup_{x\in M}(K_x^D)^*. \] Then, we define the \emph{nonholonomic momentum map $J_{(L,D,\Psi)}:E\to (K^D)^*$ associated with the system $(L,D)$ and the morphism $\Psi$} as follows \[ (J_{(L,D,\Psi)})_{|E_x}=i_x^*\circ (J_{(L,\Psi)})_{|E_x},\;\;\; \mbox{ for } x\in M.
\] Now, if $\sigma:M\to K$ is a section of $\tau_{K}:K\to M$ such that $\sigma(x)\in K_x^D$, for all $x\in M,$ we may introduce the real function $J^\sigma_{(L,D,\Psi)}:E\to \mathbb{R}$ given by \[ J^\sigma_{(L,D,\Psi)}(a)=J_{(L,D,\Psi)}(a)(\sigma(x)),\mbox{ for } a\in E_x, \] that is, $J_{(L,D,\Psi)}^\sigma=J_{(L,\Psi)}^\sigma.$ \begin{theorem}[The nonholonomic momentum equation]\label{theorem6.3} Let $\Gamma_{(L,D)}$ be the solution of the constrained dynamics for the nonholonomic Lagrangian system $(L,D)$. If $\sigma:M\to K$ is a section of $\tau_K:K\to M$ such that $\sigma(x)\in K_x^D$, for all $x\in M$, and $(\Psi\circ \sigma)^c\in Sec({\prol[E]{E}})$ is the complete lift of $(\Psi\circ \sigma)\in Sec(E)$ then we have that \begin{equation}\label{EqMomen2} <d^{\prol[E]{D}}((J^\sigma_{(L,D,\Psi)})_{|D}),\Gamma_{(L,D)}> = <d^{\prol[E]{E}}L,(\Psi\circ \sigma)^c>_{|D}, \end{equation} where $d^{\prol[E]{D}}$ (respectively, $d^{\prol[E]{E}}$) is the differential of the Lie algebroid ${\prol[E]{D}}\to D$ (respectively, ${\prol[E]{E}}\to E$). In particular, if $<d^{\prol[E]{E}}L,(\Psi\circ \sigma)^c>_{|D}=0$, then the real function $J_{(L,D,\Psi)}^\sigma$ is a constant of the motion for the constrained dynamics associated with the nonholonomic Lagrangian system $(L,D)$. \end{theorem} \begin{proof} Denote by $j:D\to E$ and by ${\mathcal J}:{\prol[E]{D}}\to {\prol[E]{E}}$ the canonical inclusions and by $Q:{{\mathcal T}_D^EE}\to F$ the corresponding projector, where $F=\omega_L^{-1}(\widetilde{D}^0)$ (see Section \ref{Projectors}). Then, as we know, \[ \Gamma_{(L,D)}=(\Gamma_L-Q\Gamma_L)_{|D}. \] Moreover, the pair $({\mathcal J},j)$ is a Lie algebroid monomorphism which implies that \[ d^{\prol[E]{D}}((J_{(L,D,\Psi)}^\sigma)_{|D})=({\mathcal J},j)^*(d^{\prol[E]{E}}J^\sigma_{(L,D,\Psi)}).
\] Thus, using that $J_{(L,D,\Psi)}^\sigma=J^\sigma_{(L,\Psi)}$ and proceeding as in the proof of Theorem \ref{theorem6.1}, we deduce that \begin{equation}\label{step1} \begin{array}{rcl} <d^{\prol[E]{D}}((J_{(L,D,\Psi)}^\sigma)_{|D}),\Gamma_{(L,D)}> & = &<d^{\prol[E]{E}}L,(\Psi\circ \sigma)^c>_{|D}\\&&\kern-80pt-\{({\mathcal L}_{(\Psi\circ \sigma)^c}^{\prol[E]{E}}\theta_L)(Q\Gamma_L) + (i(\Psi\circ \sigma)^c(\omega_L))(Q\Gamma_L)\}_{|D}. \end{array} \end{equation} Now, since $S(Q\Gamma_L)=0$, then $S[(\Psi\circ \sigma)^c,Q\Gamma_L]=0$ (see (\ref{Vertcompl})) and it follows that \[ \theta_L(Q\Gamma_L)=0,\;\;\;\; \theta_L[(\Psi\circ \sigma)^c,Q\Gamma_L]=0. \] Therefore, \begin{equation}\label{Lider} ({\mathcal L}_{(\Psi\circ \sigma)^c}^{\prol[E]{E}}\theta_L)(Q\Gamma_L)=0. \end{equation} On the other hand, we have that \[ (i(Q\Gamma_L)\omega_L)_{|D}=S^*(\alpha_{(L,D)}),\mbox{ with } \alpha_{(L,D)}\in Sec(({\prol[E]{D}})^0). \] Consequently, $$\{(i(\Psi\circ \sigma)^c\omega_L)(Q\Gamma_L)\}_{|D}=-\alpha_{(L,D)}((\Psi\circ \sigma)^v_{|D}).$$ But, since $\Psi\circ \sigma$ is a section of $\tau_D:D\to M$, it follows that $(\Psi\circ \sigma)^v_{|D}$ is a section of ${\prol[E]{D}}\to D.$ This implies that \begin{equation}\label{QGamma} \{(i(\Psi\circ \sigma)^c\omega_L)(Q\Gamma_L)\}_{|D}=0. \end{equation} Finally, using (\ref{step1}), (\ref{Lider}) and (\ref{QGamma}), we conclude that (\ref{EqMomen2}) holds. \end{proof} \begin{remark}[Nonholonomic momentum equation on $TM$ and horizontal symmetries]\label{Remark6.4} {\rm Suppose that $L:TM\to \mathbb{R}$ is a standard regular Lagrangian function on $E=TM$ and that $\psi:G\times M\to M$ is a (left) action of a Lie group $G$ on $M$. Then, we consider the trivial vector bundle $\tau_K:K=M\times {\frak g}\to M$ and the vector bundle morphism $\Psi:K\to TM$ (over the identity of $M$) defined by (\ref{Psi}).
Now, let $D$ be a vector subbundle (over $M$) of the vector bundle $\tau_{M}:TM\to M$, that is, $D$ is a distribution on $M$, and assume that the nonholonomic Lagrangian system $(L,D)$ is regular. If $x$ is a point of $M,$ we have that $K_x^D=\{x\}\times {\frak g}^x,$ where ${\frak g}^x$ is the vector subspace of ${\frak g}$ given by \[ {\frak g}^x=\{\xi\in {\frak g}/\xi_M(x)\in D_x\}. \] We also remark that the sets $K^D$ and $(K^D)^*$ may be identified with the sets \[ {\frak g}^D=\bigcup_{x\in M}{\frak g}^x,\;\;\;\; ({\frak g}^D)^*=\bigcup_{x\in M}({\frak g}^x)^*. \] Under this identification, the nonholonomic momentum map $J_{(L,D,\Psi)}:E\to (K^D)^*$ associated with the system $(L,D)$ and the morphism $\Psi$ is just the standard nonholonomic momentum map $J^{nh}:TM\to ({\frak g}^D)^*$ associated with the system $(L,D)$ and the action $\psi$ (see \cite{BlKrMaMu,CaLeMaMa,CaLeMaMa2}). Now, if $\widetilde{\xi}:M\to {\frak g}$ is a smooth map, then $\widetilde{\xi}$ defines, in a natural way, a section $\sigma_{\widetilde{\xi}}:M\to K=M\times {\frak g}$ of the vector bundle $\tau_K:K=M\times {\frak g}\to M.$ We denote by $J^{nh}_{\widetilde{\xi}}:TM\to \mathbb{R}$ the real function $J_{(L,D,\Psi)}^{\sigma_{\widetilde{\xi}}}:E\to \mathbb{R}$ and by $\Xi_{\widetilde{\xi}}$ the vector field $\Psi\circ \sigma_{\widetilde{\xi}}$ on $M.$ Then, using Theorem \ref{theorem6.3}, we deduce a well-known result (see \cite{BlKrMaMu,CaLeMaMa,CaLeMaMa2}): ``If $\Gamma_{(L,D)}$ is the solution of the constrained dynamics for the nonholonomic system $(L,D)$, we have that \[ \Gamma_{(L,D)}((J_{\widetilde{\xi}}^{nh})_{|D})=(\Xi_{\widetilde{\xi}})^c_{|D}(L)." \] The above equality is an intrinsic expression of the \emph{standard nonholonomic momentum equation}.
In addition, using again Theorem \ref{theorem6.3} we also deduce another well-known result (see \cite{BlKrMaMu,CaLeMaMa,CaLeMaMa2}): ``If the Lagrangian function $L:TM\to \mathbb{R}$ is invariant under the tangent action $T\psi$ of $G$ on $TM$ and $\xi\in {\frak g}$ is \emph{a horizontal symmetry} (that is, $\xi\in {\frak g}^x$, for all $x\in M$) then the real function $(J^{nh}_{\widetilde{\xi}})_{|D}$ is a constant of the motion for the constrained Lagrangian dynamics, where $\widetilde{\xi}:M\to {\frak g}$ is the constant map \[ \widetilde{\xi}(x)=\xi, \mbox{ for all } x\in M." \tag*{$\bullet$} \] } \end{remark} \section{Examples}\label{examples} \newcommand{\g}{\mathfrak{g}} \renewcommand{\d}{\mathfrak{d}} \newcommand{\h}{\mathfrak{h}} \newcommand{\pe}[2]{\langle\langle#1,#2\rangle\rangle} As in the unconstrained case, constrained Lagrangian systems on Lie algebroids appear frequently. We show some examples next. \subsection{Nonholonomic Lagrangian systems on Lie algebras} Let ${\frak g}$ be a real Lie algebra of finite dimension. Then, it is clear that ${\frak g}$ is a Lie algebroid over a single point. Now, suppose that $(l,{\frak d})$ is a nonholonomic Lagrangian system on ${\frak g}$, that is, $l:{\frak g}\to \mathbb{R}$ is a Lagrangian function and ${\frak d}$ is a vector subspace of ${\frak g}$. If $\omega:I\to {\frak g}$ is a curve on ${\frak g}$ then \[ dl(\omega(t))\in T_{\omega(t)}^*{\frak g}\cong {\frak g}^*,\;\;\;\; \forall t\in I, \] and thus, the map $dl\circ \omega$ may be considered as a curve on ${\frak g}^*$ \[ dl\circ \omega:I\to {\frak g}^*. \] Therefore, \[ (dl\circ \omega)'(t)\in T_{dl(\omega(t))}{\frak g}^*\cong {\frak g}^*,\;\;\;\forall t\in I.
\] Moreover, from (\ref{LD-edo}), it follows that $\omega$ is a solution of the Lagrange-d'Alembert equations for the system $(l,{\frak d})$ if and only if \begin{equation}\label{EPSeq} (dl\circ \omega)'(t)-ad^*_{\omega(t)}(dl(\omega(t)))\in {\frak d}^\circ,\;\;\; \omega(t)\in {\frak d}, \; \; \; \; \forall t \end{equation} where $ad^*:{\frak g}\times {\frak g}^*\to {\frak g}^*$ is the infinitesimal coadjoint action. The above equations are just the so-called \emph{Euler-Poincar{\'e}-Suslov equations} for the system $(l,{\frak d})$ (see \cite{FeZe}). We remark that in the particular case when the system is unconstrained, that is, ${\frak d}={\frak g}$, then one recovers the \emph{standard Euler-Poincar{\'e} equations} for the Lagrangian function $l:{\frak g}\to \mathbb{R}.$ If $G$ is a Lie group with Lie algebra ${\frak g}$ then nonholonomic Lagrangian systems on ${\frak g}$ may be obtained (by reduction) from nonholonomic LL mechanical systems with configuration space the Lie group $G.$ In fact, let $e$ be the identity element of $G$ and $\mathbb{I}:{\frak g} \to {\frak g}^*$ be a symmetric positive definite inertia operator. Denote by $g_e:{\frak g}\times {\frak g}\to \mathbb{R}$ the corresponding scalar product on ${\frak g}$ given by \[ g_e(\omega,\omega')=<\mathbb{I}(\omega),\omega'>, \mbox{ for }\omega,\omega'\in {\frak g}\cong T_eG. \] $g_e$ induces a left-invariant Riemannian metric $g$ on $G$. Thus, we may consider the Lagrangian function $L:TG\to \mathbb{R}$ defined by \[ L(v_h)=\frac{1}{2}g_h(v_h,v_h),\;\;\; \mbox{ for }v_h\in T_hG. \] In other words, $L$ is the kinetic energy associated with the Riemannian metric $g$. Now, let $D$ be a left-invariant distribution on $G$. Then, since $L$ is a left-invariant function, the pair $(L,D)$ is a standard nonholonomic LL system in the terminology of \cite{FeZe}.
On the other hand, the Lagrangian momentum map $\Phi:TG\to {\frak g}$ given by \[ \Phi(v_h)=(T_h l_{h^{-1}})(v_h),\mbox{ for } v_h\in T_hG \] is a fiberwise bijective morphism of Lie algebroids. Moreover, if $l=L_{|{\frak g}}$ and ${\frak d}=D_e$ then the pair $(l,{\frak d} )$ is a nonholonomic Lagrangian system on ${\frak g}$ and \[ l\circ \Phi=L \mbox{ and } \Phi(D)={\frak d}. \] Thus, the system $(l,{\frak d})$ is regular. In addition, if $v:I\to TG$ is a solution of the Lagrange-d'Alembert equations for the system $(L,D)$ then, using Theorem \ref{t5.6}, we deduce that the curve $\Phi\circ v:I\to {\frak g}$ is a solution of the Lagrange-d'Alembert equations for the system $(l,{\frak d}).$ We remark that \[ l(\omega)=\frac{1}{2}g_e(\omega,\omega) = \frac{1}{2}<\mathbb{I}(\omega),\omega>,\mbox{ for }\omega\in {\frak g}. \] Therefore, if $\omega:I\to {\frak g}$ is a curve on ${\frak g},$ we have that \[ (dl\circ \omega)(t)=\mathbb{I}(\omega(t)),\mbox{ for all } t \] and, using (\ref{EPSeq}), it follows that $\omega$ is a solution of the Lagrange-d'Alembert equations for the system $(l,{\frak d})$ if and only if \[ \dot{\omega}(t)-\mathbb{I}^{-1}(ad_{\omega(t)}^*\mathbb{I}(\omega(t)))\in {\frak d}^{\perp},\;\;\; \omega(t)\in {\frak d}, \mbox{ for all } t, \] where ${\frak d}^\perp$ is the orthogonal complement of the subspace ${\frak d}$, that is, \[ {\frak d}^\perp=\{\omega'\in {\frak g}/<\mathbb{I}(\omega'),\omega>=0,\forall \omega\in {\frak d}\}. \] Two simple examples of the above general situation are the following. \subsection*{The Suslov system} The most natural example of an LL system is the \emph{nonholonomic Suslov problem}, which describes the motion of a rigid body about a fixed point under the action of the following nonholonomic constraint: the body angular velocity vector is orthogonal to some fixed direction in the body frame. The configuration space of the problem is the group $G=SO(3)$.
Thus, in this case, the Lie algebra ${\frak g}$ may be identified with $\mathbb{R}^3$ and, under this identification, the Lie bracket on ${\frak g}$ is just the cross product $\times$ on $\mathbb{R}^3.$ Moreover, if $\mathbb{I}:\mathbb{R}^3\to (\mathbb{R}^3)^*\cong \mathbb{R}^3$ is the inertia tensor of the body then a curve $\omega:I\to \mathbb{R}^3$ on $\mathbb{R}^3$ is a solution of the Euler-Poincar{\'e}-Suslov equations for the system if and only if \begin{equation}\label{Suseq} \dot{\omega}=\mathbb{I}^{-1}((\mathbb{I}\omega) \times \omega) + \lambda \mathbb{I}^{-1}(\Gamma),\;\;\;\; <\omega,\Gamma>=0, \end{equation} where $\lambda$ is the Lagrange multiplier, $\Gamma$ is a fixed unit vector in $\mathbb{R}^3$ and $<\cdot,\cdot>$ is the standard scalar product in $\mathbb{R}^3$. Since the nonholonomic system is regular, the Lagrange multiplier $\lambda$ is uniquely determined. In fact, differentiating the constraint equation $<\omega,\Gamma>=0$ along a solution, we find \[ \lambda=-\frac{<\mathbb{I}\omega\times \omega, \mathbb{I}^{-1}\Gamma>}{<\Gamma, \mathbb{I}^{-1}\Gamma>} \] and, consequently, Eqs. (\ref{Suseq}) are equivalent to \[ \dot{\omega}=\frac{<\mathbb{I}\omega,\Gamma>}{<\Gamma, \mathbb{I}^{-1}\Gamma>}\,\mathbb{I}^{-1}(\mathbb{I}^{-1}\Gamma\times \omega),\;\;\;\; <\omega,\Gamma>=0. \] Multidimensional generalizations of the Suslov problem have been discussed by several authors (see~\cite{FeKo,Jo,ZeBl1}). \subsection*{The Chaplygin sleigh} The Chaplygin sleigh is a rigid body sliding on a horizontal plane. The body is supported at three points, two of which slide freely without friction while the third is a knife edge, a constraint that allows no motion orthogonal to this edge. This mechanical system was introduced and studied in 1911 by Chaplygin \cite{Ch} (see also \cite{NF}). The configuration space of this system is the group $SE(2)$ of Euclidean motions of the two-dimensional plane $\mathbb{R}^2.$ As usual, we may choose local coordinates $(\theta,x,y)$ on $SE(2)$.
$\theta$ and $(x,y)$ are the angular orientation of the blade and position of the contact point of the blade on the plane, respectively. Now, we introduce a coordinate system called the body frame by placing the origin at the contact point and choosing the first coordinate axis in the direction of the knife edge. Denote the angular velocity of the body by $\omega=\dot{\theta}$ and the components of the linear velocity of the contact point relative to the body frame by $v_1,v_2$. The triple $(\omega,v_1,v_2)$ is regarded as an element of the Lie algebra ${\frak{se}}(2)$. Note that \[ v_1=\dot{x}\cos\theta + \dot{y} \sin \theta,\;\;\;\; v_2=\dot{y}\cos\theta - \dot{x}\sin\theta. \] The position of the center of mass is specified by the coordinates $(a,b)$ relative to the body frame. Let $m$ and $J$ denote the mass and moment of inertia of the sleigh relative to the contact point. Then, the corresponding symmetric positive definite inertia operator $\mathbb{I}:{\frak se}(2)\to {\frak se}(2)^*$ and the reduced nonholonomic Lagrangian system $(l,{\frak d})$ on ${\frak se}(2)$ are given by \[ \begin{array}{rcl} \mathbb{I} (\omega,v_1,v_2)&=&\left( \begin{array}{ccc} J + m(a^2+b^2)&-bm&am\\ -bm&m&0\\ am&0&m \end{array} \right)\left(\begin{array}{c}\omega\\v_1\\v_2\end{array}\right), \\[20pt] l(\omega,v_1,v_2)&=&\frac{1}{2}[(J+m(a^2+b^2))\omega^2 + m(v_1^2 + v_2^2)\\[5pt]&&-2mb\omega v_1 + 2am\omega v_2],\\[8pt] {\frak d}&=&\{(\omega, v_1,v_2)\in {\frak se}(2)/v_2=0\},\end{array} \] (see \cite{FeZe}). Thus, the Lagrange-d'Alembert equations for the system $(l,{\frak d})$ are $$\begin{array}{rcl} \dot{\omega}&=&\displaystyle\frac{am\omega}{J+ma^2}(b\omega-v_1),\\[8pt] \dot{v}_1 & = & \displaystyle\frac{a\omega}{J+ma^2}((J+m(a^2+b^2))\omega-mbv_1),\\[8pt] v_2&=&0. \end{array}$$ Multidimensional generalizations of the Chaplygin sleigh were discussed in \cite{FeZe} (see also \cite{NF} and \cite{ZeBl2}).
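These reduced sleigh equations conserve the restricted energy $l(\omega,v_1,0)$, since the reaction force enforcing $v_2=0$ does no work. A quick numerical sketch checks this; the parameter values $m$, $J$, $a$, $b$ and the initial data below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative check that the reduced Chaplygin-sleigh equations conserve
# the restricted energy l(w, v1, 0).  All numbers are assumed values.
m, J, a, b = 1.0, 0.5, 0.3, 0.2
K = J + m * a**2                   # J + m a^2
A = J + m * (a**2 + b**2)          # J + m (a^2 + b^2)

def sleigh_field(y):
    w, v1 = y
    return np.array([a * m * w * (b * w - v1) / K,
                     a * w * (A * w - m * b * v1) / K])

def rk4_step(y, h):
    k1 = sleigh_field(y)
    k2 = sleigh_field(y + 0.5 * h * k1)
    k3 = sleigh_field(y + 0.5 * h * k2)
    k4 = sleigh_field(y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def energy(y):
    # l(w, v1, 0) = (1/2)[(J + m(a^2+b^2)) w^2 + m v1^2 - 2 m b w v1]
    w, v1 = y
    return 0.5 * (A * w**2 + m * v1**2 - 2.0 * m * b * w * v1)

y = np.array([1.0, 0.2])           # initial (omega, v1)
e0 = energy(y)
for _ in range(5000):
    y = rk4_step(y, 1e-3)
energy_drift = abs(energy(y) - e0)
```

Since the quadratic form $l(\omega,v_1,0)$ is positive definite for these parameters, conservation of energy also guarantees that the trajectory stays bounded.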
\subsection{Nonholonomic LR systems and right action Lie algebroids} Here, we show how the reduction of a nonholonomic LR system produces a nonholonomic Lagrangian system on a right action Lie algebroid. Let us start by recalling the definition of a right action Lie algebroid (see \cite{HiMa}). Let $(F, [\cdot, \cdot]_{F}, \rho_{F})$ be a Lie algebroid over a manifold $N$ and $\pi: M \to N$ be a smooth map. \emph{A right action of $F$ on $\pi: M \to N$} is an $\mathbb{R}$-linear map \[ \Psi: Sec (F) \to {\frak X}(M), \; \; \; X\in Sec (F) \to \Psi(X) \in {\frak X}(M) \] such that \[ \begin{array}{l} \Psi (f X) = (f \circ \pi)\Psi (X), \; \; \Psi([X, Y]_{F}) = [\Psi(X), \Psi(Y)], \\[5pt] (T_{m}\pi)(\Psi(X)(m)) = \rho_{F}(X(\pi(m))), \end{array} \] for $f \in C^{\infty}(N)$, $X, Y \in Sec (F)$ and $m \in M$. If $\Psi: Sec (F) \to {\frak X}(M)$ is a right action of $F$ on $\pi: M \to N$ and $\tau_{F}: F \to N$ is the vector bundle projection then the pullback vector bundle of $F$ over $\pi$, \[ E = \pi^{*}F = \{(m, f) \in M \times F / \tau_{F}(f) = \pi (m)\} \] is a Lie algebroid over $M$ with Lie algebroid structure $([\cdot, \cdot]_{E}, \rho_{E})$ which is characterized by \[ [X, Y]_{E} = [X, Y]_{F} \circ \pi, \; \; \; \rho_{E}(X)(m) = \Psi(X)(m), \] for $X, Y \in Sec(E)$ and $m \in M$. The triple $(E, [\cdot, \cdot]_{E}, \rho_{E})$ is called the \emph{right action Lie algebroid of $F$ over $\pi$} and it is denoted by $\pi_{\Psi}F$ (see \cite{HiMa}). Note that if the Lie algebroid $F$ is a real Lie algebra ${\frak g}$ of finite dimension and $\pi: M \to \{\mbox{a point}\}$ is the constant map then a right action of ${\frak g}$ on $\pi$ is just a right infinitesimal action $\Psi: {\frak g} \to {\frak X}(M)$ of ${\frak g}$ on the manifold $M$. In such a case, the corresponding right action Lie algebroid is the trivial vector bundle $pr_{1}: M \times {\frak g} \to M$. Next, we recall the definition of a nonholonomic LR system following~\cite{FeJo,Jo2}.
Let $G$ be a compact connected Lie group with Lie algebra ${\frak g}$ and $<\cdot,\cdot>:{\frak g}\times {\frak g}\to \mathbb{R}$ be an $Ad_G$-invariant scalar product on ${\frak g}$. Now, suppose that ${\mathcal I}:{\frak g} \to {\frak g}$ is an inertia operator which is symmetric and positive definite with respect to the scalar product $<\cdot,\cdot>$. Denote by $g$ the left-invariant Riemannian metric given by \begin{equation}\label{metrica} g_h(v_h,v_h')=<{\mathcal I}((T_hl_{h^{-1}})(v_h)),(T_hl_{h^{-1}})(v_h')> \end{equation} for $h\in G$ and $v_h,v_h'\in T_hG.$ Then, the Lagrangian function $L:TG\to \mathbb{R}$ of the system is \begin{equation}\label{L} L(v_h)=\frac{1}{2}g_h(v_h,v_h)-V(h), \mbox{ for }v_h\in T_hG, \end{equation} $V:G\to \mathbb{R}$ being the potential energy. The constraint distribution $D$ is a right-invariant distribution on $G$. Thus, if $e$ is the identity element of $G$ and ${\frak d} =D_e$, we have that \begin{equation}\label{eq:D} D_h=(T_er_h)({\frak d} )=(T_el_h)(Ad_{h^{-1}}({\frak d})),\mbox{ for }h\in G \end{equation} where $Ad:G\times {\frak g}\to {\frak g}$ is the adjoint action. The nonholonomic Lagrangian system $(L,D)$ on $TG$ is called a \emph{nonholonomic LR system} in the terminology of \cite{FeJo,Jo2}. Note that, since $L$ is a Lagrangian function of mechanical type, the system $(L,D)$ is regular. Now, assume that \[ {\frak s}={\frak d}^{\perp}=\{\omega' \in {\frak g} / < \omega,\omega'>=0,\forall \omega\in {\frak d}\} \] is a Lie subalgebra of ${\frak g}$, that $S$ is a closed Lie subgroup of $G$ with Lie algebra ${\frak s}$ and that the potential energy $V$ is $S$-invariant. Next, let us show that the nonholonomic LR system $(L,D)$ may be reduced to a nonholonomic Lagrangian system on a right action Lie algebroid. In fact, consider the Riemannian homogeneous space $M=S\setminus G$ and the standard transitive right action $\psi$ of $G$ on $M=S\setminus G$.
Denote by $\Psi:{\frak g}\to {\frak X}(S\setminus G)$ the corresponding right infinitesimal action of ${\frak g}$ on $S\setminus G$. Then, $\Psi$ induces a Lie algebroid structure on the trivial vector bundle $pr_1:S\setminus G\times {\frak g}\to S\setminus G$. On the other hand, using that the potential energy $V$ is $S$-invariant, we deduce that $V$ induces a real function $\widetilde{V}:S\setminus G\to \mathbb{R}$ on $S\setminus G$ such that \begin{equation}\label{vtil} \widetilde{V}\circ \pi=V, \end{equation} where $\pi:G\to S\setminus G$ is the canonical projection. Thus, we can introduce the Lagrangian function $\widetilde{L}: S\setminus G\times {\frak g}\to \mathbb{R}$ on the action Lie algebroid $pr_1:S\setminus G\times {\frak g}\to S\setminus G$ defined by \begin{equation}\label{Ltil} \widetilde{L}(\widetilde{h},\omega)=\frac{1}{2}<{\mathcal I}(\omega),\omega>-\widetilde{V}(\widetilde{h}),\;\;\; \mbox{ for }\widetilde{h}\in S\setminus G \mbox{ and }\omega\in {\frak g}. \end{equation} Now, for every $h\in G$, we consider the subspace ${\frak d} (h)$ of ${\frak g}$ given by \begin{equation}\label{Deltah} {\frak d}(h)=Ad_{h^{-1}}({\frak d}). \end{equation} The dimension of ${\frak d}(h)$ is equal to the dimension of ${\frak d}$. Moreover, since $<\cdot,\cdot>$ is $Ad_G$-invariant, it follows that \[ {\frak d}(h)=(Ad_{h^{-1}}({\frak s}))^\perp=\{\omega'\in {\frak g}/<\omega',Ad_{h^{-1}}(\omega)>=0,\;\;\forall \omega\in {\frak s}\}.
\] In particular, we have that \[ {\frak d}(s)={\frak s}^\perp={\frak d}, \;\;\forall s\in S \] which implies that ${\frak d} (sh)={\frak d} (h)$, for all $h\in G.$ Therefore, we can define a vector subbundle $\widetilde{D}$ of the Lie algebroid $pr_1:S\setminus G\times {\frak g}\to S\setminus G$ as follows \begin{equation}\label{Dtil} \widetilde{D}_{\widetilde{h}}=\{\widetilde{h}\}\times {\frak d} (h),\mbox{ for } \widetilde{h}\in S\setminus G \end{equation} with $h\in G$ and $\pi(h)=\widetilde{h}.$ Consequently, the pair $(\widetilde{L},\widetilde{D})$ is a nonholonomic Lagrangian system on the action Lie algebroid $pr_1:S\setminus G\times {\frak g}\to S\setminus G.$ In addition, we may prove the following result. \begin{proposition}\label{propositon7.1} \begin{enumerate} \item If $\widetilde{\Phi}:TG\to S\setminus G\times {\frak g}$ is the map given by \begin{equation}\label{Fitil} \widetilde{\Phi}(v_h)=(\pi(h),(T_hl_{h^{-1}})(v_h)),\mbox{ for all } v_h\in T_hG \end{equation} then $\widetilde{\Phi}$ is a fiberwise bijective Lie algebroid morphism over $\pi$. \item The nonholonomic Lagrangian systems $(L,D)$ and $(\widetilde{L},\widetilde{D})$ on $TG$ and $S\setminus G\times {\frak g}$ are $\widetilde{\Phi}$-related, that is, \[ \widetilde{L} \circ \widetilde{\Phi}=L,\;\;\; \widetilde{\Phi}(D)=\widetilde{D}. \] \item The system $(\widetilde{L},\widetilde{D})$ is regular and if $\gamma:I\to TG$ is a solution of the Lagrange-d'Alembert equations for the system $(L,D)$ then $\widetilde{\Phi}\circ \gamma: I \to S\setminus G\times {\frak g}$ is a solution of the Lagrange-d'Alembert equations for the system $(\widetilde{L},\widetilde{D}).$ \end{enumerate} \end{proposition} \begin{proof} $(1)$ Consider the standard (right) action $r$ of $G$ on itself \[ r:G\times G\to G, \; \; \; (h,h')\in G\times G\to r_{h'}(h)=hh'\in G.
\] As we know, the infinitesimal generator of $r$ associated with an element $\omega$ of ${\frak g}$ is \[ \omega_G=\lvec{\omega}, \] where $\lvec{\omega}$ is the left-invariant vector field on $G$ such that $\lvec{\omega}(e)=\omega$. On the other hand, it is clear that the projection $\pi:G\to S\setminus G$ is equivariant with respect to the actions $r$ and $\psi$. Thus, \[ (T_h\pi)(\lvec{\omega}(h))=\Psi(\omega)(\pi(h)), \mbox{ for } h\in G. \] Therefore, if $\rho:S\setminus G\times {\frak g}\to T(S\setminus G)$ is the anchor map of the Lie algebroid $pr_{1}: S\setminus G\times {\frak g} \to S\setminus G$, it follows that \[ \rho(\widetilde{\Phi}(\lvec{\omega}(h))) = \rho(\pi(h),\omega)=(T_h\pi)(\lvec{\omega}(h)), \; \; \; \mbox{ for } h\in G. \] Furthermore, since \[ [\lvec{\omega},\lvec{\omega}']=\lvec{[\omega,\omega']_{\frak g}}, \; \; \; \; \mbox{ for } \omega,\omega'\in {\frak g}, \] we conclude that $\widetilde{\Phi}$ is a Lie algebroid morphism over $\pi.$ In addition, it is obvious that if $h\in G$ then \[ \widetilde{\Phi}_{|T_hG}:T_hG\to \{\pi(h)\}\times {\frak g} \] is a linear isomorphism. \medskip \noindent $(2)$ From (\ref{metrica}), (\ref{L}), (\ref{vtil}), (\ref{Ltil}) and $(\ref{Fitil})$, we deduce that \[ \widetilde{L}\circ \widetilde{\Phi}=L. \] Moreover, using (\ref{eq:D}), (\ref{Deltah}), (\ref{Dtil}) and (\ref{Fitil}), we obtain that \[ \widetilde{\Phi}(D)=\widetilde{D}. \] $(3)$ It follows from $(1)$ and $(2)$, using the results of Section \ref{sec:reduction} (see Theorem \ref{t5.6}). \end{proof} Next, we obtain the necessary and sufficient conditions for a curve $(\widetilde{h},\omega):I\to S\setminus G\times {\frak g}$ to be a solution of the Lagrange-d'Alembert equations for the system $(\widetilde{L},\widetilde{D})$.
Let $\flat_{<\cdot,\cdot>}:{\frak g}\to {\frak g}^*$ be the linear isomorphism induced by the scalar product $<\cdot,\cdot>:{\frak g}\times {\frak g}\to \mathbb{R}$ and $\mathbb{I}:{\frak g}\to {\frak g}^*$ be the inertia operator given by \begin{equation}\label{Inertia2} \mathbb{I}(\omega_1)(\omega_2)=<{\mathcal I}(\omega_1),\omega_2>, \mbox{ for }\omega_1,\omega_2\in {\frak g}. \end{equation} On the other hand, if $\widetilde{h}'\in S\setminus G$ we will denote by $\Psi_{\widetilde{h}'}:{\frak g}\to T_{\widetilde{h}'}(S\setminus G)$ the linear epimorphism defined by \[ \Psi_{\widetilde{h}'}(\omega')=\Psi({\omega'})(\widetilde{h}'), \mbox{ for } \omega'\in {\frak g}. \] In addition, if $\pi(h')=\widetilde{h}'$, we identify the vector space $\widetilde{D}_{\widetilde{h}'}$ with the vector subspace ${\frak d}(h')$ of ${\frak g}$. Then, using (\ref{LD-edo}), (\ref{Ltil}) and (\ref{Inertia2}), we deduce that the curve $(\widetilde{h},\omega)$ is a solution of the Lagrange-d'Alembert equations for the system $(\widetilde{L},\widetilde{D})$ if and only if \[ \begin{array}{l} \dot{\widetilde{h}}(t)=\Psi_{\widetilde{h}(t)}(\omega(t))\\ \{\dot{\omega}(t)-\mathbb{I}^{-1}(ad_{\omega(t)}^{*}\mathbb{I} (\omega(t)))-\mathbb{I}^{-1}(\Psi^{*}_{\widetilde{h}(t)} (d\widetilde{V}(\widetilde{h}(t))))\}\in \widetilde{D}_{\widetilde{h}(t)}^\perp,\\ \omega(t)\in \widetilde{D}_{\widetilde{h}(t)}, \end{array} \] for all $t$, where $\widetilde{D}_{\widetilde{h}(t)}^\perp$ is the orthogonal complement of the vector subspace $\widetilde{D}_{\widetilde{h}(t)}\subseteq {\frak g}$ with respect to the scalar product $<\cdot,\cdot>$. These equations will be called \emph{the reduced Poincar\'e-Chetaev equations}. We treat next a simple example of the above general situation.
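Before doing so, we note that the reduced Poincar\'e-Chetaev equations are easy to exercise numerically. The sketch below (illustrative inertia values and initial data; zero potential $\widetilde{V}$) takes $G=SO(3)$ with $S\setminus G\cong S^2$, where the equations specialize to $\dot{\gamma}=\gamma\times\omega$ and $\mathbb{I}\dot{\omega}=\mathbb{I}\omega\times\omega+\lambda\gamma$ with $<\omega,\gamma>=0$, i.e.\ the Veselova system treated next; the multiplier is resolved by differentiating the constraint.

```python
import numpy as np

# Sketch of the reduced Poincare-Chetaev equations for G = SO(3), zero
# potential, and an assumed inertia operator (illustrative data only):
#   gamma' = gamma x w,   I w' = I w x w + lam * gamma,   <w, gamma> = 0.
I = np.diag([1.0, 2.0, 3.0])
Iinv = np.linalg.inv(I)

def veselova_field(s):
    gamma, w = s[:3], s[3:]
    f = np.cross(I @ w, w)
    lam = -np.dot(f, Iinv @ gamma) / np.dot(Iinv @ gamma, gamma)
    return np.concatenate([np.cross(gamma, w), Iinv @ (f + lam * gamma)])

def rk4_step(s, h):
    k1 = veselova_field(s)
    k2 = veselova_field(s + 0.5 * h * k1)
    k3 = veselova_field(s + 0.5 * h * k2)
    k4 = veselova_field(s + h * k3)
    return s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

gamma0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
w0 = np.array([1.0, -1.0, 0.0])          # <w0, gamma0> = 0
s = np.concatenate([gamma0, w0])
e0 = 0.5 * np.dot(I @ w0, w0)
for _ in range(3000):
    s = rk4_step(s, 1e-3)

gamma, w = s[:3], s[3:]
constraint = abs(np.dot(w, gamma))             # stays ~0
sphere_drift = abs(np.dot(gamma, gamma) - 1.0)  # gamma stays on S^2
energy_drift = abs(0.5 * np.dot(I @ w, w) - e0)
```

The three monitored quantities (constraint, unit length of $\gamma$, kinetic energy) are all first integrals of the reduced flow, so their numerical drift is a convenient correctness check.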
\subsection*{The Veselova system} The most descriptive illustration of an LR system is the Veselova problem on the motion of a rigid body about a fixed point under the action of the nonholonomic constraint $$<\omega,\gamma>=0.$$ Here, $\omega$ is the vector of the angular velocity in the body frame, $\gamma$ is a unit vector which is fixed in a space frame and $<\cdot,\cdot>$ denotes the standard scalar product in $\mathbb{R}^3$ (see~\cite{VeVe}). The Veselova system is an LR system on the Lie group $G=SO(3)$ which is the configuration space of the rigid body motion. Thus, in this case, the Lie algebra ${\frak g}$ may be identified with $\mathbb{R}^3$ and, under this identification, the Lie bracket $[\cdot,\cdot]_{\frak g}$ is the cross product $\times$ on $\mathbb{R}^3$. Moreover, the adjoint action of $G=SO(3)$ on ${\frak g}\cong \mathbb{R}^3$ is the standard action of $SO(3)$ on $\mathbb{R}^3$. This implies that $<\cdot,\cdot>$ is an $Ad_{SO(3)}$-invariant scalar product on ${\frak g}\cong \mathbb{R}^3.$ The vector subspace ${\frak d}$ of $\mathbb{R}^3$ is just the orthogonal complement (with respect to $<\cdot,\cdot>$) of a vector subspace $<\gamma_0>$ of dimension $1$, with $\gamma_0$ a unit vector in $\mathbb{R}^3$, that is, \[ {\frak d}=\{\omega\in \mathbb{R}^3/<\omega,\gamma_0>=0\}. \] Therefore, \[ {\frak s}={\frak d}^\perp = <\gamma_0> \] is a Lie subalgebra of ${\frak g}\cong \mathbb{R}^3$. Furthermore, the isotropy group $S$ of $\gamma_0$ with respect to the adjoint action of $G=SO(3),$ \[ S=\{s\in SO(3)/s\gamma_0^T=\gamma_0^T\}, \] is a closed Lie subgroup with Lie algebra ${\frak s}$. We remark that $S$ is isomorphic to the circle $S^1.$ Consequently, the corresponding homogeneous space $M=S\setminus SO(3)$ is the orbit of the adjoint action of $SO(3)$ on $\mathbb{R}^3$ through the point $\gamma_0$ and, it is well-known that, such an orbit may be identified with the unit sphere $S^2$.
In fact, the map \[ S\setminus SO(3)\to S^2, \;\;\; [h]\to \gamma_0 h=(h^{-1}\gamma_0^T)^T \] is a diffeomorphism (see, for instance, \cite{MaRa}). Under the above identification the (right) action of $SO(3)$ on $M=S\setminus SO(3)$ is just the standard (right) action of $SO(3)$ on $S^2$. Thus, our action Lie algebroid is the trivial vector bundle $pr_1:S^2\times \mathbb{R}^3\to S^2$ and the Lie algebroid structure on it is induced by the standard infinitesimal (right) action $\Psi:\mathbb{R}^3\to {\frak X}(S^2)$ of the Lie algebra $(\mathbb{R}^3,\times)$ on $S^2$ defined by \[ \Psi(\omega)(\gamma)=\gamma\times \omega,\mbox{ for } \omega\in \mathbb{R}^3 \mbox{ and } \gamma\in S^2. \] In the presence of a potential $\widetilde{V}: \gamma\to \widetilde{V}(\gamma)$ the nonholonomic Lagrangian system $(\widetilde{L},\widetilde{D})$ on the Lie algebroid $pr_1:S^2\times \mathbb{R}^3\to S^2$ is given by \[ \widetilde{L}(\gamma,\omega)=\frac{1}{2} \mathbb{I}(\omega)(\omega)-\widetilde{V}(\gamma), \; \; \; \widetilde{D}(\gamma)=\{\gamma\}\times \{\omega\in\mathbb{R}^3/<\omega,\gamma>=0\}, \] $\mathbb{I}:\mathbb{R}^3\to\mathbb{R}^3$ being the inertia tensor of the rigid body. The Lagrange-d'Alembert equations for $(\widetilde{L},\widetilde{D})$ are \begin{equation}\label{Vese1} \dot{\gamma}=\gamma\times \omega,\;\;\; \dot{\omega}=\mathbb{I}^{-1}\{(\mathbb{I}\omega\times\omega) + \gamma\times \frac{\partial \widetilde{V}}{\partial \gamma} + \lambda\gamma\},\;\;\; <\omega,\gamma>=0 \end{equation} where $\lambda$ is the Lagrange multiplier. Since the system $(\widetilde{L}, \widetilde{D})$ is regular, $\lambda$ is uniquely determined. In fact, \begin{equation}\label{Vese2} \lambda=-\frac{<\mathbb{I}\omega\times \omega + \gamma\times \frac{\partial \widetilde{V}}{\partial \gamma}, \mathbb{I}^{-1}\gamma>}{<\mathbb{I}^{-1}\gamma,\gamma>}.
\end{equation} Eqs. (\ref{Vese1}) and (\ref{Vese2}) are just the classical dynamical equations for the Veselova system (see \cite{VeVe}; see also \cite{FeJo,Jo2}). \subsection{Semidirect product symmetry and left action Lie algebroids} Here, we show how the reduction of some nonholonomic mechanical systems with semidirect product symmetry produces nonholonomic Lagrangian systems on left action Lie algebroids. Let us start by recalling the definition of a left action Lie algebroid (see \cite{HiMa}). Let $(F, [\cdot, \cdot]_{F}, \rho_{F})$ be a Lie algebroid over a manifold $N$ and $\pi: M \to N$ be a smooth map. \emph{A left action of $F$ on $\pi: M \to N$} is an $\mathbb{R}$-linear map \[ \Psi: Sec (F) \to {\frak X}(M), \; \; \; X\in Sec (F) \to \Psi(X) \in {\frak X}(M) \] such that \[ \begin{array}{l} \Psi (f X) = (f \circ \pi)\Psi (X), \; \; \Psi([X, Y]_{F}) = -[\Psi(X), \Psi(Y)], \\[5pt] (T_{m}\pi)(\Psi(X)(m)) = -\rho_{F}(X(\pi(m))), \end{array} \] for $f \in C^{\infty}(N)$, $X, Y \in Sec (F)$ and $m \in M$. If $\Psi: Sec (F) \to {\frak X}(M)$ is a left action of $F$ on $\pi: M \to N$ and $\tau_{F}: F \to N$ is the vector bundle projection then the pullback vector bundle of $F$ over $\pi$, \[ E = F^{*}\pi = \{(f, m) \in F \times M / \tau_{F}(f) = \pi (m)\} \] is a Lie algebroid over $M$ with Lie algebroid structure $([\cdot, \cdot]_{E}, \rho_{E})$ which is characterized by \[ [X, Y]_{E} = [X, Y]_{F} \circ \pi, \; \; \; \rho_{E}(X)(m) = -\Psi(X)(m), \] for $X, Y \in Sec(E)$ and $m \in M$. The triple $(E, [\cdot, \cdot]_{E}, \rho_{E})$ is called \emph{the left action Lie algebroid of $F$ over $\pi$} and it is denoted by $F_{\Psi}\pi$ (see \cite{HiMa}). Next, we consider a particular class of nonholonomic Lagrangian systems on left action Lie algebroids. Let $V$ be a real vector space of finite dimension and $\cdot: G \times V \to V$ be a left representation of a Lie group $G$ on $V$.
We also denote by $\cdot: {\frak g} \times V \to V$ the left infinitesimal representation of the Lie algebra ${\frak g}$ of $G$ on $V$. Then, we can consider the semidirect product Lie group $S= G \circledS V$ with the multiplication \[ (g, v) (g', v') = (gg', v + g \cdot v'). \] The Lie algebra ${\frak s}$ of $S$ is the semidirect product ${\frak s} = {\frak g} \circledS V$ with the Lie bracket $[\cdot, \cdot]_{\frak s}: {\frak s} \times {\frak s} \to {\frak s}$ given by \[ [(\omega, \dot{v}), (\omega', \dot{v}')]_{\frak s} = ([\omega, \omega']_{\frak g}, \omega \cdot \dot{v}' - \omega' \cdot \dot{v}) \] for $\omega, \omega' \in {\frak g}$ and $\dot{v}, \dot{v}' \in V$. Here, $[\cdot, \cdot]_{\frak g}$ is the Lie bracket on ${\frak g}$. Moreover, we use the following notation. If $v \in V$ then $\rho_{v}: {\frak g} \to V$ is the linear map defined by \[ \rho_{v}(\omega) = \omega \cdot v, \; \; \; \mbox{ for } \omega \in {\frak g}, \] and $\rho_{v}^{*}: V^{*} \to {\frak g}^{*}$ is the dual map of $\rho_{v}: {\frak g} \to V$. Now, let $N$ be a smooth manifold. Then, it is clear that the product manifold $F = {\frak s} \times TN$ is the total space of a vector bundle over $N$. Moreover, if $(\omega, \dot{v}) \in {\frak s}$ and $X$ is a vector field on $N$ then the pair $((\omega, \dot{v}), X)$ defines a section of the vector bundle $\tau_{F}: F = {\frak s} \times TN \to N$. In fact, if $\{\omega_{i}\}$ is a basis of ${\frak g}$, $\{\dot{v}_{j}\}$ is a basis of $V$ and $\{X_{k}\}$ is a local basis of ${\frak X}(N)$ then $\{((\omega_{i}, 0), 0), ((0, \dot{v}_{j}), 0), ((0, 0), X_{k})\}$ is a local basis of $Sec (F)$.
The vector bundle $\tau_{F}: F \to N$ admits a Lie algebroid structure $([\cdot, \cdot]_{F}, \rho_{F})$, which is characterized by the following relations \begin{equation}\label{LiealgF} \begin{array}{rcl} [((\omega, \dot{v}), X), ((\omega', \dot{v}'), X')]_{F} & = & ([(\omega, \dot{v}), (\omega', \dot{v}')]_{\frak s}, [X, X']) \\ & = &([\omega, \omega']_{\frak g}, \omega \cdot \dot{v}' - \omega' \cdot \dot{v}, [X, X']), \\[5pt] \rho_{F}((\omega, \dot{v}), X) & = & X, \end{array} \end{equation} for $((\omega, \dot{v}), X), ((\omega', \dot{v}'), X') \in {\frak s} \times {\frak X}(N)$. Next, suppose that $v_{0}$ is a point of $V$ and that ${\mathcal O}_{v_{0}}$ is the orbit of the action of $G$ on $V$ through $v_{0}$, that is, \[ {\mathcal O}_{v_{0}} = \{g \cdot v_{0} \in V / g \in G \}. \] Denote by $\pi: M = N \times {\mathcal O}_{v_{0}} \to N$ the canonical projection on the first factor and by $\Psi: Sec (F) \to {\frak X}(M)$ the left action of $F$ on $\pi$, which is characterized by the following relation \[ \Psi ((\omega, \dot{u}), X)(n, v) = (-X(n), \omega \cdot v) \] for $((\omega, \dot{u}), X) \in {\frak s} \times {\frak X}(N)$ and $(n, v) \in N \times {\mathcal O}_{v_{0}} = M$. Then, we have the corresponding left action Lie algebroid $\tau_{E}: E = ({\frak s} \times TN)_{\Psi} \pi \to M = N \times {\mathcal O}_{v_{0}}$. Note that $E = ({\frak s} \times TN)_{\Psi}\pi = ({\frak s} \times TN) \times {\mathcal O}_{v_{0}}$ and that the anchor map $\rho_{E}: E = ({\frak s} \times TN) \times {\mathcal O}_{v_{0}} \to TM = TN \times T{\mathcal O}_{v_{0}}$ of $\tau_{E}: E \to M$ is given by \begin{equation}\label{AnclaE} \rho_{E}((\omega, \dot{u}), X_{n}, v) = (X_{n}, -\omega \cdot v) \end{equation} for $((\omega, \dot{u}), X_{n}, v) \in {\frak s} \times T_{n}N \times {\mathcal O}_{v_{0}}$.
Now, let $L: ({\frak s} \times TN) \times {\mathcal O}_{v_{0}} \to \mathbb{R}$ be a Lagrangian function and $D$ be the vector subbundle of $\tau_{E}: E \to M$ whose fiber $D_{(n, v)}$ over the point $(n, v) \in N \times {\mathcal O}_{v_{0}} = M$ is defined by \begin{equation}\label{D} \begin{array}{l} D_{(n, v)} = \{(((\omega, \omega \cdot v), X_{n}), v) / \omega \in {\frak g}, X_{n} \in T_{n}N \} \\ [5pt] \; \; \; \; \subseteq E_{(n, v)} = ({\frak s} \times T_{n}N) \times \{v\}. \end{array} \end{equation} Next, we obtain the Lagrange-d'Alembert equations for the system $(L, D)$. For this purpose, we choose a basis $\{\omega_{\alpha}\}$ of ${\frak g}$, a basis $\{u_{A}\}$ of $V$, a system of local fibred coordinates $(x^{i}, \dot{x}^{i})$ on $TN$ and a system of local coordinates $(v^{i})$ on ${\mathcal O}_{v_{0}}$. Denote by $\omega^{\alpha}$ (respectively, $u^{A}$) the global coordinates on ${\frak g}$ (respectively, $V$) induced by the basis $\{\omega_{\alpha}\}$ (respectively, $\{u_{A}\}$). Suppose that \[ [\omega_{\alpha}, \omega_{\beta}]_{\frak g} = c_{\alpha \beta}^{\gamma} \omega_{\gamma}, \; \; \; \omega_{\alpha} \cdot u_{A} = a_{\alpha A}^{B} u_{B}. \] Then, we have that \[ c_{\alpha \beta}^{\gamma} a_{\gamma A}^{B} = a_{\beta A}^{C}a_{\alpha C}^{B} - a_{\alpha A}^{C}a_{\beta C}^{B}. \] Next, we consider the local basis of sections $\{e_{i}, e_{\alpha}, e_{A}\}$ of $E$ given by \[ \begin{array}{rcl} e_{i}(n, v) & = & ((0, 0, \frac{\partial}{\partial x^{i}}_{|n}), v), \; \; \; e_{\alpha}(n, v) \; = \; ((\omega_{\alpha}, \omega_{\alpha} \cdot v, 0_{n}), v) \\ [5pt] e_{A}(n, v) & = & ((0, u_{A}, 0_{n}), v) \end{array} \] for $(n, v) \in N \times {\mathcal O}_{v_{0}} = M$. Note that $\{e_{i}, e_{\alpha}\}$ is a local basis of sections of the constraint subbundle $D$. 
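The compatibility relation $c_{\alpha \beta}^{\gamma} a_{\gamma A}^{B} = a_{\beta A}^{C}a_{\alpha C}^{B} - a_{\alpha A}^{C}a_{\beta C}^{B}$ above simply expresses that $\cdot$ is a representation of ${\frak g}$ on $V$. It can be checked numerically in the illustrative case ${\frak g}={\frak so}(3)\cong (\mathbb{R}^3,\times)$ acting on $V=\mathbb{R}^3$ by the cross product, where both families of structure constants reduce to the Levi-Civita symbol (an assumption made here purely for the check):

```python
import numpy as np

# Verify c^gamma_{alpha beta} a^B_{gamma A}
#      = a^C_{beta A} a^B_{alpha C} - a^C_{alpha A} a^B_{beta C}
# for so(3) ~= (R^3, x) acting on V = R^3 by the cross product, where
# c[alpha, beta, gamma] = a[alpha, A, B] = eps (the Levi-Civita symbol).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

c = eps   # [e_alpha, e_beta] = eps_{alpha beta gamma} e_gamma
a = eps   # e_alpha . u_A = e_alpha x u_A = eps_{alpha A B} u_B

lhs = np.einsum('abg,gAB->abAB', c, a)
rhs = (np.einsum('bAC,aCB->abAB', a, a)
       - np.einsum('aAC,bCB->abAB', a, a))
max_err = np.abs(lhs - rhs).max()
```

The identity holds exactly, index by index, which is the coordinate form of $[\omega_\alpha,\omega_\beta]\cdot u_A=\omega_\alpha\cdot(\omega_\beta\cdot u_A)-\omega_\beta\cdot(\omega_\alpha\cdot u_A)$.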
In addition, if $(x^{i}, v^{j}; y^{i}, y^{\alpha}, y^{A})$ are the local coordinates on $E$ induced by the basis $\{e_{i}, e_{\alpha}, e_{A}\}$, it follows that \begin{equation}\label{Changecoor} y^{i} = \dot{x}^{i}, \; \; \; y^{\alpha} = \omega^{\alpha}, \; \; \; y^{A} = u^{A} - a_{\alpha B}^{A}u^{B}_{0} \omega^{\alpha}, \end{equation} where $u^{B}_{0}$ is the local function on $M = N \times {\mathcal O}_{v_{0}}$ defined by $u^{B}_{0} = (u^{B})_{|{\mathcal O}_{v_{0}}}.$ Moreover, \[ \begin{array}{l} \rho_{E}(e_{i}) = \frac{\partial}{\partial x^{i}}, \; \; \; \rho_{E}(e_{\alpha}) = \rho^{i}_{\alpha}\frac{\partial}{\partial v^{i}}, \; \; \; \rho_{E}(e_{A}) = 0, \\[5pt] [e_{\alpha}, e_{\beta}]_{E} = c_{\alpha \beta}^{\gamma} (e_{\gamma} + a_{\gamma A}^{B}u^{A}_{0} e_{B}), \; \; \; [e_{\alpha}, e_{A}]_{E} = -[e_{A}, e_{\alpha}]_{E} = a_{\alpha A}^{B} e_{B}, \end{array} \] and the rest of the fundamental Lie brackets are zero. Thus, a curve \[ t \to (x^{i}(t), v^{j}(t); y^{i}(t), y^{\alpha}(t), y^{A}(t)) \] is a solution of the Lagrange-d'Alembert equations for the system $(L, D)$ if and only if \[ \begin{array}{l} \dot{x}^{i} = y^{i}, \; \; \; \dot{v}^{j} = \rho_{\alpha}^{j}y^{\alpha}, \mbox{ for all } i \mbox{ and } j, \\[5pt] \displaystyle \frac{d}{dt}(\displaystyle \frac{\partial L}{\partial y^{i}}) - \displaystyle \frac{\partial L}{\partial x^{i}} = 0, \mbox{ for all } i, \\[8pt] \displaystyle \frac{d}{dt}(\displaystyle \frac{\partial L}{\partial y^{\alpha}}) + \displaystyle (\frac{\partial L}{\partial y^{\gamma}} + \displaystyle \frac{\partial L}{\partial y^{B}} a_{\gamma A}^{B}u^{A}_{0})c_{\alpha \beta}^{\gamma}y^{\beta} - \rho_{\alpha}^{i}\displaystyle \frac{\partial L}{\partial v^{i}} = 0, \mbox{ for all } \alpha, \\[8pt] y^{A} = 0, \mbox{ for all } A.
\end{array} \] If we consider the local expression of the curve in the coordinates $(x^{i}, v^{j}; \dot{x}^{i}, \omega^{\alpha}, u^{A})$ then, from (\ref{Changecoor}), we deduce that the above equations are equivalent to \[ \begin{array}{l} \dot{x}^{i} = y^{i}, \; \; \; \dot{v}^{j} = \rho_{\alpha}^{j}\omega^{\alpha}, \mbox{ for all } i \mbox{ and } j, \\[5pt] \displaystyle \frac{d}{dt}(\displaystyle \frac{\partial L}{\partial \dot{x}^{i}}) - \displaystyle \frac{\partial L}{\partial x^{i}} = 0, \mbox{ for all } i, \\[8pt] \displaystyle \frac{d}{dt}(\displaystyle \frac{\partial L}{\partial \omega^{\alpha}}) + \displaystyle \frac{\partial L}{\partial \omega^{\gamma}} c_{\alpha \beta}^{\gamma} \omega^{\beta} + \displaystyle \frac{d}{dt}(a_{\alpha B}^{A}u^{B}_{0} \displaystyle \frac{\partial L}{\partial u^{A}}) \\[8pt] \hspace{1cm} + 2a_{\gamma B}^{A}u^{B}_{0} \displaystyle \frac{\partial L}{\partial u^{A}} c_{\alpha \beta}^{\gamma}\omega^{\beta} - \rho_{\alpha}^{i} \displaystyle \frac{\partial L}{\partial v^{i}} = 0, \mbox{ for all } \alpha, \\[5pt] u^{A} = a_{\alpha B}^{A}u^{B}_{0} \omega^{\alpha}, \mbox{ for all } A, \end{array} \] or, in vector notation, \[ \begin{array}{l} \dot{v} = - \omega \cdot v, \\[5pt] \displaystyle \frac{d}{dt}(\displaystyle \frac{\partial L}{\partial \dot{x}}) - \displaystyle \frac{\partial L}{\partial x} = 0, \\[8pt] \displaystyle \frac{d}{dt}(\displaystyle \frac{\partial L}{\partial \omega}) + (ad^{*}_{\omega} \displaystyle \frac{\partial L}{\partial \omega}) = -\displaystyle \frac{d}{dt} (\rho^{*}_{v} \displaystyle \frac{\partial L}{\partial u}) -2 ad^{*}_{\omega}(\rho^{*}_{v} \displaystyle \frac{\partial L}{\partial u}) - \rho^{*}_{v} \displaystyle \frac{\partial L}{\partial v}, \\[5pt] u = \rho_{v}\omega. 
\end{array} \] Nonholonomic Lagrangian systems, of the above type, on the left action Lie algebroid $\tau_{E}: E = ({\frak s} \times TN) \times {\mathcal O}_{v_{0}} \to M = N \times {\mathcal O}_{v_{0}}$ may be obtained (by reduction) from a standard nonholonomic Lagrangian system with semidirect product symmetry. In fact, let $Q$ be the product manifold $S \times N$ and suppose that we have a Lagrangian function $\tilde{L}: TQ \to \mathbb{R}$ and a distribution $\tilde{D}$ on $Q$ whose characteristic space $\tilde{D}_{((g, v), n)} \subseteq T_{g}G \times T_{v}V \times T_{n}N \simeq T_{g}G \times V \times T_{n}N$ at the point $((g, v), n) \in S \times N$ is \begin{equation}\label{Dtilv0} \tilde{D}_{((g, v), n)} = \{((\dot{g}, \dot{v}), \dot{n}) \in T_{g}G \times V \times T_{n}N / \dot{v} = (T_{g}r_{g^{-1}})(\dot{g}) \cdot v_{0} \}, \end{equation} where $v_{0}$ is a fixed point of $V$. We can consider the natural left action of the Lie group $S$ on $Q$ and, thus, the left action $A$ of the Lie subgroup $H_{v_{0}} = G_{v_{0}} \circledS V$ of $S$ on $Q$, where $G_{v_{0}}$ is the isotropy group of $v_{0}$ with respect to the action of $G$ on $V$. The tangent lift $TA$ of $A$ is given by \begin{equation}\label{TA} TA ((\tilde{g}, \tilde{u}), (v_{g}, (v, \dot{v}), X_{n})) = ((T_{g}l_{\tilde{g}})(v_{g}), (\tilde{u} + \tilde{g} \cdot v, \tilde{g} \cdot \dot{v}), X_{n}) \end{equation} for $(\tilde{g}, \tilde{u}) \in H_{v_{0}}$ and $(v_{g}, (v, \dot{v}), X_{n}) \in T_{((g, v), n)}Q \simeq T_{g}G \times V \times T_{n}N$. Using (\ref{TA}), it follows that the distribution $\tilde{D}$ is invariant under the action $TA$ of $H_{v_{0}}$ on $TQ$. Moreover, we will assume that the Lagrangian function is also $H_{v_{0}}$-invariant. Therefore, we have a nonholonomic Lagrangian system $(\tilde{L}, \tilde{D})$ on the standard Lie algebroid $TQ \to Q$ which is $H_{v_{0}}$-invariant. Systems of this type were considered in \cite{MT:04}.
Since the function $\tilde{L}$ is $H_{v_{0}}$-invariant, we deduce that there exists a real function $L: ({\frak s} \times TN) \times {\mathcal O}_{v_{0}} \to \mathbb{R}$ on the left action Lie algebroid $\tau_{E}: E = ({\frak s} \times TN) \times {\mathcal O}_{v_{0}} \to M = N \times {\mathcal O}_{v_{0}}$ which is defined by \begin{equation}\label{Ele} L(((\omega, \dot{v}), X_{n}), u) = \tilde{L}((T_{e}l_{g})(\omega), (v, g \cdot \dot{v}), X_{n}), \end{equation} for $(((\omega, \dot{v}), X_{n}), u) \in {\frak s} \times T_{n}N \times {\mathcal O}_{v_{0}}$, with $g \in G$, $u = g^{-1} \cdot v_{0}$ and $v \in V$. Moreover, we may prove the following result. \begin{proposition} \begin{enumerate} \item If $\Phi: TQ \simeq TG \times (V \times V) \times TN \to E = ({\frak s} \times TN) \times {\mathcal O}_{v_{0}}$ and $\varphi: G \times V \times N \to N \times {\mathcal O}_{v_{0}}$ are the maps defined by \begin{equation}\label{Defmor} \begin{array}{l} \Phi(u_{g}, (v, \dot{v}), X_{n}) = ((((T_{g}l_{g^{-1}})(u_{g}), g^{-1} \cdot \dot{v}), X_{n}), g^{-1} \cdot v_{0}), \\[5pt] \varphi(g, v, n) = (n, g^{-1} \cdot v_{0}), \end{array} \end{equation} then $\Phi$ is a fiberwise bijective Lie algebroid morphism over $\varphi$. \item The nonholonomic Lagrangian systems $(\tilde{L}, \tilde{D})$ and $(L, D)$ on $TQ$ and $E = ({\frak s} \times TN)\times {\mathcal O}_{v_{0}}$ are $\Phi$-related, that is, \[ L \circ \Phi = \tilde{L}, \; \; \; \Phi(\tilde{D}) = D. \] Here, $D$ is the vector subbundle of the vector bundle $E$ whose fiber at the point $(n, v) \in N \times {\mathcal O}_{v_{0}}$ is given by (\ref{D}). \item If the system $(\tilde{L}, \tilde{D})$ is regular then the system $(L, D)$ is also regular. In addition, if $\gamma: I \to TQ$ is a solution of the Lagrange-d'Alembert equations for $(\tilde{L}, \tilde{D})$ then $\Phi \circ \gamma: I \to ({\frak s} \times TN) \times {\mathcal O}_{v_{0}}$ is a solution of the Lagrange-d'Alembert equations for $(L, D)$.
\end{enumerate} \end{proposition} \begin{proof} $(1)$ Suppose that $\omega_{1}$ and $\omega_{2}$ are elements of ${\frak g}$, that $\dot{v}_{1}$ and $\dot{v}_{2}$ are vectors of $V$ and that $X_{1}$ and $X_{2}$ are vector fields on $N$. Then, we consider the vector fields $Z_{1}$ and $Z_{2}$ on $Q$ defined by \[ \begin{array}{l} Z_{1}(g, v, n) = (\lvec{\omega}_{1}(g), g \cdot \dot{v}_{1}, X_{1}(n)) \in T_{g}G \times V \times T_{n}N, \\[5pt] Z_{2}(g, v, n) = (\lvec{\omega}_{2}(g), g \cdot \dot{v}_{2}, X_{2}(n)) \in T_{g}G \times V \times T_{n}N, \end{array} \] for $(g, v, n) \in G \times V \times N = Q$, where $\lvec{\omega}_{1}$ (respectively, $\lvec{\omega}_{2}$) is the left-invariant vector field on $G$ such that $\lvec{\omega}_{1}(e) = \omega_{1}$ (respectively, $\lvec{\omega}_{2}(e) = \omega_{2}$), $e$ being the identity element of $G$. A direct computation proves that \[ [Z_{1}, Z_{2}] (g, v, n) = (\lvec{[\omega_{1}, \omega_{2}]}_{{\frak g}}(g), g(\omega_{1} \cdot \dot{v}_{2} - \omega_{2} \cdot \dot{v}_{1}), [X_{1}, X_{2}](n)). \] Moreover, if $((\omega_{1}, \dot{v}_{1}), X_{1})$ (respectively, $((\omega_{2}, \dot{v}_{2}), X_{2})$) is the section of the vector bundle $\tau_{E}: E \to M$ induced by $\omega_{1}$, $\dot{v}_{1}$ and $X_{1}$ (respectively, $\omega_{2}$, $\dot{v}_{2}$ and $X_{2}$) then it is clear that \[ \Phi \circ Z_{1} = ((\omega_{1}, \dot{v}_{1}), X_{1}) \circ \varphi, \; \; \; \Phi \circ Z_{2} = ((\omega_{2}, \dot{v}_{2}), X_{2}) \circ \varphi . \] Thus, using (\ref{LiealgF}), it follows that \begin{equation}\label{Bracket} \Phi \circ [Z_{1}, Z_{2}] = [((\omega_{1}, \dot{v}_{1}), X_{1}), ((\omega_{2}, \dot{v}_{2}), X_{2})]_{E} \circ \varphi. 
\end{equation} On the other hand, we have that \[ \begin{array}{l} (T_{(g, v, n)}\varphi)(u_{g}, \dot{v}, X_{n}) = (X_{n}, -(T_{g}l_{g^{-1}})(u_{g}) \cdot (g^{-1} \cdot v_{0})) \\[5pt] \in T_{n}N \times T_{g^{-1} \cdot v_{0}}{\mathcal O}_{v_{0}} \subseteq T_{n}N \times V, \end{array} \] for $(g, v, n) \in Q$ and $(u_{g}, \dot{v}, X_{n}) \in T_{g}G \times V \times T_{n}N \simeq T_{(g, v, n)}Q$. Therefore, from (\ref{AnclaE}) and (\ref{Defmor}), we deduce that \begin{equation}\label{Anchor} T\varphi = \rho_{E} \circ \Phi. \end{equation} Consequently, using (\ref{Bracket}) and (\ref{Anchor}), we conclude that the pair $(\Phi, \varphi)$ is a Lie algebroid morphism. Note that one may choose a local basis $\{Z_{i}\}$ of vector fields on $Q$ such that \[ Z_{i}(g, v, n) = (\lvec{\omega}_{i}(g), g \cdot \dot{v}_{i}, X_{i}(n)), \; \; \mbox{ for } (g, v, n) \in Q \] with $\omega_{i} \in {\frak g}$, $\dot{v}_{i} \in V$ and $X_{i} \in {\frak X}(N)$. Finally, if $(g, v, n) \in Q$, it is clear that \[ \Phi_{|T_{(g,v,n)}Q}: T_{(g,v,n)}Q \simeq T_{g}G \times V \times T_{n}N \to E_{(n, g^{-1} \cdot v_{0})} \simeq {\frak g} \times V \times T_{n}N \] is a linear isomorphism. $(2)$ It follows from (\ref{D}), (\ref{Dtilv0}), (\ref{Ele}) and (\ref{Defmor}). $(3)$ It follows using $(1)$, $(2)$ and the results of Section \ref{sec:reduction} (see Theorem \ref{t5.6}). \end{proof} The above theory may be applied to a particular example of a mechanical system: \emph{the Chaplygin Gyro} (see \cite{Ma,MT:04}). This system consists of a Chaplygin sphere (that is, a ball with nonhomogeneous mass distribution) with a gyro-like mechanism, consisting of a gimbal and a pendulous mass, installed in it. The gimbal is a circle-like structure such that its center coincides with the geometric center of the Chaplygin sphere. It is free to rotate about the axis connecting the north and south poles of the Chaplygin sphere. The pendulous mass can move along the smooth track of the gimbal. 
For this particular example, the vector space $V$ is $\mathbb{R}^{3}$, the Lie group $G$ is $SO(3)$ and the manifold $N$ is $\mathbb{R}^{2}$. The action of $SO(3)$ on $\mathbb{R}^{3}$ is the standard one and $v_{0} = (0, 0, 1)$ is the advected parameter; see~\cite{MT:04} for more details. \subsection{Chaplygin-type systems} A frequent situation is the following. Consider a constrained Lagrangian system $(L,D)$ on a Lie algebroid $\map{\tau}{E}{M}$ such that the restriction of the anchor to the constraint distribution, $\map{\rho|_{D}}{D}{TM}$, is an isomorphism of vector bundles. Let $\map{h}{TM}{D\subset E}$ be the right-inverse of $\rho|_D$, so that $\rho\circ h=\id_{TM}$. It follows that $E$ is a transitive Lie algebroid and $h$ is a splitting of the exact sequence \[ \xymatrix{0\ar[r]&\Ker(\rho)\ar[r]&E\ar[r]^\rho&TM\ar[r]&0\,.} \] Let us define the function $\bar{L}\in\cinfty{TM}$ by $\bar{L}=L\circ h$. The dynamics defined by $L$ does not reduce to the dynamics defined by $\bar{L}$ because, while the map $\Phi=\rho$ is a morphism of Lie algebroids and $\Phi(D)=TM$, we have $\bar{L}\circ\Phi=L\circ h\circ\rho\neq L$. Nevertheless, we can use $h$ to express the dynamics on $TM$, by finding relations between the dynamics defined by $L$ and $\bar{L}$. We need some auxiliary properties of the splitting $h$ and its prolongation. We first notice that $h$ is an admissible map over the identity in $M$, because $\rho_E\circ h=\id_{TM}$ and $T\id_M\circ\rho_{TM}=\id_{TM}$, but in general $h$ is not a morphism. We can define the tensor $K$, a $\Ker(\rho)$-valued differential 2-form on $M$, by means of \[ K(X,Y)=[h\circ X,h\circ Y]-h\circ[X,Y] \] for every $X,Y\in\mathfrak{X}(M)$. It is easy to see that $h$ is a morphism if and only if $K=0$.
In coordinates $(x^i)$ in $M$, $(x^i,v^i)$ in $TM$, and linear coordinates $(x^i,y^i,y^A)$ on $E$ corresponding to a local basis $\{e_i,e_A\}$ of sections of $E$ adapted to the splitting $h$, we have that \[ K=\frac{1}{2}\Omega_{ij}^A\,dx^i\wedge dx^j\otimes e_A, \] where $\Omega_{ij}^A$ are defined by $[e_i,e_j]=\Omega_{ij}^Ae_A$. Since $h$ is admissible, its prolongation $\prol[h]{h}$ is a well-defined map from $T(TM)$ to $\TEE$. Moreover, it is an admissible map, which is a morphism if and only if $h$ is a morphism. As for the energy and the Cartan 1-form, we have that $(\prol[h]{h})\pb E_L=E_{\bar{L}}$ and $(\prol[h]{h})\pb\theta_L=\theta_{\bar{L}}$. Indeed, notice that by definition, $(\prol[h]{h})\pb E_L= E_L\circ h$ and \begin{align*} E_L(h(v)) &=\frac{d}{dt}L(h(v)+th(v))|_{t=0}-L(h(v)) =\frac{d}{dt}L(h(v+tv))|_{t=0}-L(h(v))\\ &=\frac{d}{dt}\bar{L}(v+tv)|_{t=0}-\bar{L}(v) =E_{\bar{L}}(v). \end{align*} On the other hand, for every $V_v\equiv (v,w,V)\in T(TM)\equiv \prol[TM]{(TM)}$ where $w=T\tau_M(V)$, we have \begin{align*} \pai{(\prol[h]{h})\pb\theta_L}{V} &=\pai{\theta_L}{\prol[h]{h}(v,w,V)} =\pai{\theta_L}{(h(v),h(w),Th(V))}\\ &=\frac{d}{dt}L(h(v)+th(w))|_{t=0} =\frac{d}{dt}L(h(v+tw))|_{t=0}\\ &=\frac{d}{dt}\bar{L}(v+tw)|_{t=0} =\pai{\theta_{\bar{L}}}{V}. \end{align*} Nevertheless, since $h$ is not a morphism, and hence $(\prol[h]{h})\pb\circ d\neq d\circ (\prol[h]{h})\pb$, we have that $(\prol[h]{h})\pb\omega_L\neq\omega_{\bar{L}}$. Let $J\!K$ be the 2-form on $TM$ defined by \[ J\!K_v(V,W)=\pai{J_{h(v)}}{K_{h(v)}(T\tau_M(V),T\tau_M(W))} \] where $J$ is the momentum map defined by $L$ and $\Ker{\rho}$, and $V,W\in T_{v}(TM)$. The notation resembles the contraction of the momentum map $J$ with the curvature tensor $K$. Instead of being symplectic, the map $\prol[h]{h}$ satisfies \[ (\prol[h]{h})\pb\omega_L=\omega_{\bar{L}}+J\!K.
\] Indeed, we have that \[ (\prol[h]{h})\pb\omega_L-\omega_{\bar{L}}= [d\circ(\prol[h]{h})\pb-(\prol[h]{h})\pb\circ d]\,\theta_L \] and on a pair of projectable vector fields $U,V$ projecting onto $X,Y$ respectively, one can easily prove that \[ [d\circ(\prol[h]{h})\pb-(\prol[h]{h})\pb\circ d]\,\theta_L(U,V) =\pai{\theta_L}{[\prol[h]{h}(U),\prol[h]{h}(V)]-\prol[h]{h}([U,V])} \] whence the result follows by noticing that $\prol[h]{h}\circ U$ is a projectable section and projects to $h\circ X$, and similarly $\prol[h]{h}\circ V$ projects to $h\circ Y$. Hence $[\prol[h]{h}(U),\prol[h]{h}(V)]-\prol[h]{h}([U,V])$ is projectable and projects to $K(X,Y)$. Now let $\Gamma$ be the solution of the nonholonomic dynamics for $(L, D)$, so that $\Gamma$ satisfies the equation $i_\Gamma\omega_L-dE_L\in\tDo$ and the tangency condition $\Gamma\big|_D\in\TDD$. From this second condition we deduce the existence of a vector field $\bar{\Gamma}\in\mathfrak{X}(TM)$ such that $\prol[h]{h}\circ\bar{\Gamma}=\Gamma\circ h$. Explicitly, the vector field $\bar{\Gamma}$ is defined by $\bar{\Gamma}=\prol[\rho]{\rho}\circ\Gamma\circ h$, whence it immediately follows that $\bar{\Gamma}$ is a \textsc{sode}\ vector field on $M$. Taking the pullback by $\prol[h]{h}$ of the first equation we get $(\prol[h]{h})\pb\bigl( i_\Gamma\omega_L-dE_L\bigr)=0$ since $(\prol[h]{h})\pb\tDo=0$. Therefore \begin{align*} 0 &=(\prol[h]{h})\pb i_\Gamma\omega_L-(\prol[h]{h})\pb dE_L\\ &=i_{\bar{\Gamma}}(\prol[h]{h})\pb\omega_L-d (\prol[h]{h})\pb E_L\\ &=i_{\bar{\Gamma}}\bigl(\omega_{\bar{L}}+J\!K\bigr)-dE_{\bar{L}}\\ &=i_{\bar{\Gamma}}\omega_{\bar{L}}-dE_{\bar{L}}+i_{\bar{\Gamma}}J\!K.
\end{align*} Therefore, the vector field $\bar{\Gamma}$ is determined by the equations \[ i_{\bar{\Gamma}}\omega_{\bar{L}}-dE_{\bar{L}}=-\pai{J}{K(\mathbb{T},\, \cdot\, )}, \] where $\mathbb{T}$ is the identity in $TM$ considered as a vector field along the tangent bundle projection $\tau_M$ (also known as the total time derivative operator). Equivalently, we can write these equations in the form \[ d_{\bar{\Gamma}}\theta_{\bar{L}}-d\bar{L}=\pai{J}{K(\mathbb{T},\, \cdot\, )}. \] Note that if $\bar{a}: I \to TM$ is an integral curve of $\bar{\Gamma}$ then $a = h \circ \bar{a}: I \to D$ is a solution of the constrained dynamics for the nonholonomic Lagrangian system $(L, D)$ on $E$. Conversely, if $a: I \to D$ is a solution of the constrained dynamics then $\rho \circ a: I \to TM$ is an integral curve of the vector field $\bar{\Gamma}$. Finally, we mention that the extension of the above decomposition to non-transitive Lie algebroids is under development. \subsection*{Chaplygin systems and Atiyah algebroids} A particular case of the above theory is that of ordinary Chaplygin systems (see~\cite{BlKrMaMu, CaCoLeMa,cortes,Ko} and references therein). In this case, we have a principal $G$-bundle $\map{\pi}{Q}{M=Q/G}$. Then we may consider the quotient vector bundle $E = TQ/G \to M=Q/G$ and it is well known that the space of sections of this vector bundle may be identified with the set of $G$-invariant vector fields on $Q$. Thus, using that the Lie bracket of two $G$-invariant vector fields is also $G$-invariant and the fact that a $G$-invariant vector field is $\pi$-projectable, we may define a Lie algebroid structure $([\cdot , \cdot], \rho)$ on the vector bundle $E= TQ/G \to M = Q/G$. The resulting Lie algebroid is called the {\bf Atiyah (gauge) algebroid} associated with the principal bundle $\pi: Q \to M = Q/G$ (see \cite{Mackenzie}). Note that the canonical projection $\Phi: TQ \to E= TQ/G$ is a fiberwise bijective Lie algebroid morphism.
Now, suppose that $(L_{Q}, D_{Q})$ is a standard nonholonomic Lagrangian system on $TQ$ such that $L_{Q}$ is $G$-invariant and $D_{Q}$ is the horizontal distribution of a principal connection on $\pi: Q \to M= Q/G$. Then, we have a reduced nonholonomic Lagrangian system $(L, D)$ on $E$. In fact, $L_{Q} = L \circ \Phi$ and $\Phi((D_{Q})_{q}) = D_{\pi(q)}$, for all $q \in Q$. Moreover, $\rho_{|D}: D \to TM = T(Q/G)$ is an isomorphism (over the identity of $M$) between the vector bundles $D \to M$ and $TM \to M$. Therefore, we may apply the above general theory. Next, we describe the nonholonomic Lagrangian system on the Atiyah algebroid associated with a particular example of a Chaplygin system: a two-wheeled planar mobile robot (see \cite{cortes} and the references therein). Consider the motion of a two-wheeled planar mobile robot which is able to move in the direction in which it points and, in addition, can spin about a vertical axis. Let $P$ be the intersection point of the horizontal symmetry axis of the robot and the horizontal line connecting the centers of the two wheels. The position and orientation of the robot are determined, with respect to a fixed Cartesian reference frame, by $(x, y, \theta) \in SE(2)$, where $\theta \in S^1$ is the heading angle, the coordinates $(x, y) \in \mathbb{R}^{2}$ locate the point $P$ and $SE(2)$ is the group of Euclidean motions of the two-dimensional plane $\mathbb{R}^{2}$. Let $\psi_{1}, \psi_{2} \in S^1$ denote the rotation angles of the wheels, which are assumed to be controlled independently and to roll without slipping on the floor. The configuration space of the system is $Q = \mathbb{T}^{2} \times SE(2)$, where $\mathbb{T}^{2}$ is the real torus of dimension $2$.
The Lagrangian function $L_{Q}$ is the kinetic energy corresponding to the metric $g_{Q}$ \[ \begin{array}{rcl} g_{Q} &=& m dx \otimes dx + m dy \otimes dy + m_{0}l \cos \theta (dy \otimes d\theta + d\theta \otimes dy)\\&& - m_{0}l \sin \theta (dx \otimes d\theta + d\theta \otimes dx) + J d\theta \otimes d\theta + J_{2} d\psi_{1} \otimes d\psi_{1} + J_{2} d\psi_{2} \otimes d\psi_{2}, \end{array} \] where $m = m_{0} + 2m_{1}$, $m_{0}$ is the mass of the robot without the wheels, $J$ its moment of inertia about the vertical axis, $m_{1}$ the mass of each wheel, $J_{2}$ the axial moment of inertia of each wheel, and $l$ the distance between the center of mass $C$ of the robot and the point $P$. Thus, \[ \begin{array}{rcl} L_{Q} &=& \displaystyle \frac{1}{2} (m \dot{x}^{2} + m \dot{y}^{2} + 2 m_{0}l \dot{y}\dot{\theta} \cos \theta - 2 m_{0}l \dot{x}\dot{\theta}\sin \theta \\ && + J\dot{\theta}^{2} + J_{2}\dot{\psi}_{1}^{2} + J_{2}\dot{\psi}_{2}^{2}). \end{array} \] The constraints, induced by the conditions that there is no lateral sliding of the robot and that the wheels roll without sliding, are \[ \begin{array}{rcl} \dot{x}\sin \theta - \dot{y} \cos \theta &=& 0,\\ \dot{x}\cos \theta + \dot{y}\sin \theta + c\dot{\theta} + R\dot{\psi}_{1} &= & 0, \\ \dot{x}\cos \theta + \dot{y}\sin \theta - c\dot{\theta} + R\dot{\psi}_{2} &=& 0, \end{array} \] where $R$ is the radius of the wheels and $2c$ the lateral length of the robot.
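As a quick numerical sanity check (a sketch of ours, not part of the original treatment; the parameter and state values below are arbitrary), one can solve the three constraints for $(\dot{x}, \dot{y}, \dot{\theta})$ in terms of the controlled wheel rates $(\dot{\psi}_{1}, \dot{\psi}_{2})$ and verify that all three equations are satisfied:

```python
import math

# Sketch (not from the paper): arbitrary sample values for the
# geometric parameters and the state.
R, c = 0.05, 0.2               # wheel radius, half the lateral length
theta = 0.7                    # heading angle
psi1dot, psi2dot = 1.3, -0.4   # controlled wheel rates

# Solving the three constraints for (xdot, ydot, thetadot) gives:
xdot = -0.5 * R * (psi1dot + psi2dot) * math.cos(theta)
ydot = -0.5 * R * (psi1dot + psi2dot) * math.sin(theta)
thetadot = -0.5 * (R / c) * (psi1dot - psi2dot)

# Residuals of the constraints: no lateral sliding, rolling of each wheel.
c1 = xdot * math.sin(theta) - ydot * math.cos(theta)
c2 = xdot * math.cos(theta) + ydot * math.sin(theta) + c * thetadot + R * psi1dot
c3 = xdot * math.cos(theta) + ydot * math.sin(theta) - c * thetadot + R * psi2dot
assert max(abs(c1), abs(c2), abs(c3)) < 1e-12
```

The solved velocities are precisely the coefficients of the horizontal lift of $(\dot{\psi}_{1}, \dot{\psi}_{2})$, that is, of $\dot{\psi}_{1} H_{1} + \dot{\psi}_{2} H_{2}$.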
The constraint distribution $D_{Q}$ is then spanned by \[ \begin{array}{lr} \{H_{1} = \displaystyle \frac{\partial}{\partial \psi_{1}} - \frac{R}{2}(\cos \theta \frac{\partial}{\partial x} +\sin \theta \frac{\partial}{\partial y} + \frac{1}{c} \frac{\partial}{\partial \theta}),& \\ & \kern-38pt H_{2} = \displaystyle \frac{\partial}{\partial \psi_{2}} - \frac{R}{2}(\cos \theta \frac{\partial}{\partial x} +\sin \theta \frac{\partial}{\partial y} - \frac{1}{c} \frac{\partial}{\partial \theta})\}. \end{array} \] Note that if $\{\xi_{1}, \xi_{2}, \xi_{3}\}$ is the canonical basis of ${\frak se}(2)$, \[ [\xi_{1}, \xi_{2}] = 0, \; \; [\xi_{1}, \xi_{3}] = -\xi_{2}, \; \; [\xi_{2}, \xi_{3}] = \xi_{1}, \] then \[ H_{1} = \displaystyle \frac{\partial}{\partial \psi_{1}} - \frac{R}{2} \lvec{\xi_{1}} - \frac{R}{2c} \lvec{\xi_{3}}, \; \; H_{2} = \displaystyle \frac{\partial}{\partial \psi_{2}} - \frac{R}{2} \lvec{\xi_{1}} + \frac{R}{2c} \lvec{\xi_{3}}, \] where $\lvec{\xi_{i}}$ ($i = 1, 2, 3$) is the left-invariant vector field of $SE(2)$ such that $\lvec{\xi_{i}}(e) = \xi_{i}$, $e$ being the identity element of $SE(2)$. On the other hand, it is clear that $Q = \mathbb{T}^{2} \times SE(2)$ is the total space of a trivial principal $SE(2)$-bundle over $M = \mathbb{T}^{2}$. Moreover, the metric $g_{Q}$ is $SE(2)$-invariant and $D_{Q}$ is the horizontal distribution of a principal connection on $Q = \mathbb{T}^{2} \times SE(2) \to \mathbb{T}^{2}$. Now, we consider the corresponding Atiyah algebroid \[ E = TQ/SE(2) \simeq (T\mathbb{T}^{2} \times TSE(2))/SE(2) \to M = \mathbb{T}^{2}.
\] Using the left-translations on $SE(2)$, we have that the tangent bundle to $SE(2)$ may be identified with the product manifold $SE(2) \times {\frak se}(2)$ and, under this identification, the Atiyah algebroid is isomorphic to the trivial vector bundle \[ \tilde{\tau}_{\mathbb{T}^{2}} = \tau_{\mathbb{T}^{2}} \circ pr_{1}: T\mathbb{T}^{2} \times {\frak se}(2) \to \mathbb{T}^{2}, \] where $\tau_{\mathbb{T}^{2}}: T\mathbb{T}^{2} \to \mathbb{T}^{2}$ is the canonical projection. In addition, if $([ \cdot , \cdot ], \rho)$ is the Lie algebroid structure on $\tilde{\tau}_{\mathbb{T}^{2}}: T\mathbb{T}^{2} \times {\frak se}(2) \to \mathbb{T}^{2}$ and $\{\displaystyle \frac{\partial}{\partial \psi_{1}}, \frac{\partial}{\partial \psi_{2}}, \xi_{1}, \xi_{2}, \xi_{3} \}$ is the canonical basis of sections of $\tilde{\tau}_{\mathbb{T}^{2}}: T\mathbb{T}^{2} \times {\frak se}(2) \to \mathbb{T}^{2}$ then \[ \begin{array}{rclrclrclrcl} \rho(\displaystyle \frac{\partial}{\partial \psi_{1}}) &=& \displaystyle \frac{\partial}{\partial \psi_{1}}, \; \; & \rho(\displaystyle \frac{\partial}{\partial \psi_{2}}) & = & \displaystyle \frac{\partial}{\partial \psi_{2}}, \; \; &\rho(\xi_{i}) & = &0, \; \;& i = 1, 2, 3 \\[8pt] [\xi_{1}, \xi_{3}] & = & -\xi_{2}, \; \; & [\xi_{2}, \xi_{3}] & = & \xi_{1}, && && \end{array} \] and the rest of the fundamental Lie brackets are zero. Denote by $(\psi_{1}, \psi_{2}, \dot{\psi}_{1}, \dot{\psi}_{2}, \omega^{1}, \omega^{2}, \omega^{3})$ the (local) coordinates on $T\mathbb{T}^{2} \times {\frak se}(2)$ induced by the basis $\{\displaystyle \frac{\partial}{\partial \psi_{1}}, \frac{\partial}{\partial \psi_{2}}, \xi_{1}, \xi_{2}, \xi_{3} \}$. 
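The bracket relations for ${\frak se}(2)$ quoted above can be checked directly in a $3 \times 3$ matrix realization (a sketch of ours; the particular matrices chosen for $\xi_{1}$, $\xi_{2}$, $\xi_{3}$ are one standard convention, not fixed in the text):

```python
import numpy as np

# Sketch (ours): realize se(2) by the usual 3x3 matrices, with xi1, xi2
# the infinitesimal translations and xi3 the infinitesimal rotation.
# This particular matrix convention is an assumption, not fixed in the text.
xi1 = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])
xi2 = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
xi3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def bracket(a, b):
    """Matrix commutator [a, b] = a b - b a."""
    return a @ b - b @ a

assert np.allclose(bracket(xi1, xi2), np.zeros((3, 3)))  # [xi1, xi2] = 0
assert np.allclose(bracket(xi1, xi3), -xi2)              # [xi1, xi3] = -xi2
assert np.allclose(bracket(xi2, xi3), xi1)               # [xi2, xi3] = xi1
```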
Then, the reduced Lagrangian $L: T\mathbb{T}^{2} \times {\frak se}(2) \to \mathbb{R}$ is given by \[ L = \displaystyle \frac{1}{2} (m (\omega^{1})^{2} + m (\omega^{2})^{2} + 2 m_{0}l \omega^{2}\omega^{3} + J(\omega^{3})^{2} + J_{2}\dot{\psi}_{1}^{2} + J_{2}\dot{\psi}_{2}^{2}) \] and the constraint vector subbundle $D$ is generated by the sections \[ e_{1} = \displaystyle \frac{\partial}{\partial \psi_{1}} - \frac{R}{2}\xi_{1} - \frac{R}{2c} \xi_{3}, \; \; e_{2} = \displaystyle \frac{\partial}{\partial \psi_{2}} - \frac{R}{2} \xi_{1} + \frac{R}{2c} \xi_{3}. \] Since the system $(L_{Q}, D_{Q})$ is regular on the standard Lie algebroid $\tau_{Q}: TQ \to Q$, we deduce that the nonholonomic Lagrangian system $(L, D)$ on the Atiyah algebroid $\tilde{\tau}_{\mathbb{T}^{2}}: T\mathbb{T}^{2} \times {\frak se}(2) \to \mathbb{T}^{2}$ is also regular. Now, as in Section \ref{linear}, we consider a basis of sections of $\tilde{\tau}_{\mathbb{T}^{2}}: T\mathbb{T}^{2} \times {\frak se}(2) \to \mathbb{T}^{2}$ which is adapted to the constraint subbundle $D$. This basis is \[ \{e_{1}, e_{2}, \xi_{1}, \xi_{2}, \xi_{3}\}. \] The corresponding (local) coordinates on $T\mathbb{T}^{2} \times {\frak se}(2)$ are $(\psi_{1}, \psi_{2}, \dot{\psi}_{1}, \dot{\psi}_{2}, \tilde{\omega}^{1}, \tilde{\omega}^{2}, \tilde{\omega}^{3})$, where \[ \omega^{1} = \tilde{\omega}^{1} - \displaystyle \frac{R}{2} \dot{\psi}_{1} - \frac{R}{2}\dot{\psi}_{2}, \; \; \omega^{2} = \tilde{\omega}^{2}, \; \; \omega^{3} = \tilde{\omega}^{3} - \displaystyle \frac{R}{2c} \dot{\psi}_{1} + \frac{R}{2c}\dot{\psi}_{2}. 
\] Therefore, using (\ref{LD-edo}), we deduce that the Lagrange-d'Alembert equations for the system $(L, D)$ are \[ \begin{array}{rclrcl} \ddot{\psi}_{1} & = & \displaystyle \frac{U(\dot{\psi}_{2} - \dot{\psi}_{1})}{P^{2} - S^{2}} (P\dot{\psi}_{2} + S\dot{\psi}_{1}), \; \; & \ddot{\psi}_{2} & = & \displaystyle - \frac{U(\dot{\psi}_{2} - \dot{\psi}_{1})}{P^{2} - S^{2}} (P\dot{\psi}_{1} + S\dot{\psi}_{2}),\\ [8pt] \tilde{\omega}^{1} &=& \tilde{\omega}^{2} = \tilde{\omega}^{3} = 0,&&& \end{array} \] where $P$, $S$ and $U$ are the real numbers \[ P = \displaystyle \frac{R^{2}}{4} (m + \frac{J}{c^{2}}) + J_{2}, \; \; S = \displaystyle \frac{R^{2}}{4} (m - \frac{J}{c^{2}}), \; \; U = \displaystyle \frac{R^{3}}{4c^{2}}m_{0}l. \] On the other hand, the Lagrangian function $\bar{L}: T\mathbb{T}^{2} \to \mathbb{R}$ on $T\mathbb{T}^{2}$ is given by \[ \bar{L}(\psi_{1}, \psi_{2}, \dot{\psi}_{1}, \dot{\psi}_{2}) = \displaystyle \frac{1}{2} (P \dot{\psi}_{1}^{2} + P \dot{\psi}_{2}^{2} + 2S \dot{\psi}_{1} \dot{\psi}_{2}) \] and the $1$-form $\pai{J}{K(\mathbb{T},\, \cdot\, )}$ on $T\mathbb{T}^{2}$ is \[ \pai{J}{K(\mathbb{T},\, \cdot\, )} = -U (\dot{\psi}_{2} - \dot{\psi}_{1}) (\dot{\psi}_{1} d\psi_{2} - \dot{\psi}_{2} d\psi_{1}). \] \section{Nonlinearly constrained Lagrangian systems} \label{nonlinear} We show in this section how the main results for linearly constrained Lagrangian systems can be extended to the case of Lagrangian systems with nonlinear nonholonomic constraints. This is true under the assumption that a suitable version of the classical Chetaev's principle in nonholonomic mechanics is valid (see e.g.,~\cite{LeMa2} for the study of standard nonholonomic Lagrangian systems subject to nonlinear constraints). Let $\tau: E \to M$ be a Lie algebroid and $\cm$ be a submanifold of $E$ such that $\map{\pi=\tau|_\cm}{\cm}{M}$ is a fibration. $\cm$ is the constraint submanifold. Since $\pi$ is a fibration, the prolongation $\prol[E]{\cm}$ is well-defined. 
We will denote by $r$ the dimension of the fibers of $\map{\pi}{\cm}{M}$, that is, $r=\operatorname{dim}\cm-\operatorname{dim} M$. We define the bundle $\vd\to \cm$ of \emph{virtual displacements} as the subbundle of $\tau^*E$ of rank $r$ whose fiber at a point $a\in\cm$ is \[ \vd[a]=\set{b\in E_{\tau(a)}}{b_a\sup{V}\in T_a\cm}. \] In other words, the elements of $\vd$ are pairs of elements $(a,b)\in E\oplus E$ such that \[ \frac{d}{dt}\phi(a+tb)\at{t=0}=0, \] for every local constraint function $\phi$. We also define the bundle of \emph{constraint forces} $\cf$ by $\cf=S^*((\prol[E]{\cm})^\circ)$, in terms of which we set the Lagrange-d'Alembert equations for a regular Lagrangian function $L \in C^{\infty}(E)$ as follows: \begin{equation}\label{8.1} \begin{array}{ll} &(i_\Gamma\omega_L-dE_L)|_\cm\in\Sec{\cf}, \\[5pt] &\Gamma|_\cm\in\Sec{\prol[E]{\cm}}, \end{array} \end{equation} the unknown being the section $\Gamma$. The above equations reproduce the corresponding ones for standard nonlinear constrained systems. From (\ref{2.4'}) and (\ref{8.1}), it follows that \[ (i_{S\Gamma}\omega_{L} - i_{\Delta}\omega_{L})|_\cm = 0, \] which implies that a solution $\Gamma$ of equations (\ref{8.1}) is a \textsc{sode}\ section along $\cm$, that is, $(S\Gamma - \Delta)|_{\cm} = 0$. Note that the rank of the vector bundle $(\prol[E]{\cm})^\circ \to \cm$ is $s = \rank{E}-r$ and, since $\pi$ is a fibration, the transformation $S^*: (\prol[E]{\cm})^{\circ} \to \Psi $ defines an isomorphism between the vector bundles $(\prol[E]{\cm})^{\circ} \to \cm$ and $\Psi \to \cm$. Therefore, the rank of $\Psi$ is also $s$. Moreover, if $a \in \cm$ we have \begin{equation}\label{psia} \Psi_a=S^*((\prol[E]{\cm}[a])^\circ)=\set{\zeta\circ \prol{\tau}}{\zeta\in {\mathcal V}_a^\circ}. \end{equation} In fact, if $\alpha_a\in (\prol[E]{\cm}[a])^\circ$, we may define $\zeta\in E_{\tau(a)}^*$ by \[ \zeta(b)=\alpha_a(\xi\sup{V}(a,b)),\mbox{ for } b\in E_{\tau(a)}.
\] Then, a direct computation proves that $\zeta\in {\mathcal V}_a^\circ$ and $S^*(\alpha_a)=\zeta\circ \prol{\tau}$. Thus, we obtain \[ \Psi_a\subseteq \set{\zeta\circ \prol{\tau}}{\zeta\in {\mathcal V}_a^\circ} \] and, using that the dimension of both spaces is $s$, we deduce that (\ref{psia}) holds. Note that, in the particular case when the constraints are linear, we have ${\mathcal V}=\tau^*(D)$ and $\Psi=\widetilde{D^\circ}.$ Next, we consider the vector bundles $F$ and $\prol[\vd]{\cm}$ over ${\cm}$ whose fibers at the point $a\in {\cm}$ are \[ F_{a} = \omega_{L}^{-1}(\Psi_{a}), \makebox[.4cm]{} \prol[\vd]{\cm}[a]=\set{(b,v)\in\vd[a]\times T_a\cm}{T\pi(v)=\rho(b)}. \] It follows that \[ F_{a} = \set{ z \in \prol[E]{E}[a]}{\mbox{there exists } \zeta \in {\mathcal V}_{a}^\circ \mbox{ such that } i_{z}\omega_{L}(a) = \zeta \circ \prol{\tau}} \] and \begin{equation}\label{8.1'} \prol[\vd]{\cm}[a]=\set{z\in\prol[E]{\cm}[a]}{\prol{\pi}(z)\in\vd[a]}=\set{z\in \prol[E]{\cm}[a]}{S(z)\in \prol[E]{\cm}[a]}. \end{equation} Note that the dimension of $\prol[\vd]{\cm}[a]$ is $2r$ and, when the constraints are linear, i.e., ${\mathcal M}$ is a vector subbundle $D$ of $E$, we obtain \[ \prol[\vd]{\cm}[a]=\prol[D]{D}[a], \mbox{ for all } a\in \cm = D. \] Moreover, from (\ref{8.1'}), we deduce that the vertical lift of an element of $\vd$ is an element of $\prol[\vd]{\cm}$. Thus, we can define for $b,c\in\vd[a]$ \[ \GLV[a](b,c)=\omega_L(a)(\tilde{b},\xi\sup{V}(a,c)), \] where $\tilde{b} \in \prol[E]{E}[a]$ and $\prol{\tau}(\tilde{b}) = b$. \subsection{Dynamics in local coordinates} Here we analyze the local nature of equations (\ref{8.1}). We consider local coordinates $(x^i)$ on an open subset $U$ of $M$ and take a basis $\{e_{\alpha}\}$ of local sections of $E$. In this way, we have local coordinates $(x^i, y^{\alpha})$ on $E$.
Suppose that the local equations defining $\cm$ as a submanifold of $E$ are \[ \phi^{A}= 0, \makebox[.4cm]{} A = 1, \dots , s , \] where $\phi^{A}$ are independent local constraint functions. Since $\pi: \cm \to M$ is a fibration, it follows that the matrix $\displaystyle (\frac{\partial \phi^{A}}{\partial y^{\alpha}})$ is of rank $s$. Thus, if $d$ is the differential of the Lie algebroid $\prol[E]{E} \to E$, we deduce that $\{d\phi^{A}|_\cm\}_{A=1, \dots , s}$ is a local basis of sections of the vector bundle $(\prol[E]{\cm})^{\circ} \to \cm$. Note that \[ d\phi^{A} = \displaystyle \rho^j_{\alpha}\frac{\partial \phi^{A}}{\partial x^j}\mathcal{X}^{\alpha} + \frac{\partial \phi^{A}}{\partial y^{\alpha}}\mathcal{V}^{\alpha}. \] Moreover, $\{S^*(d\phi^{A})|_\cm = \displaystyle \frac{\partial \phi^{A}}{\partial y^{\alpha}} \mathcal{X}^{\alpha}|_\cm\}_{A=1, \dots , s}$ is a local basis of sections of the vector bundle $\Psi \to \cm$. Next, we introduce the local sections $\{Z_{A}\}_{A=1, \dots , s}$ of $\prol[E]{E} \to E$ defined by \[ i_{Z_{A}} \omega_{L} = S^*(d\phi^{A}) = \displaystyle \frac{\partial \phi^{A}}{\partial y^{\alpha}} \mathcal{X}^{\alpha}. \] A direct computation, using (\ref{omegaL}), proves that \begin{equation}\label{Zeta} Z_{A} = \displaystyle -\frac{\partial \phi^{A}}{\partial y^{\alpha}} W^{\alpha \beta}\mathcal{V}_{\beta}, \makebox[.4cm]{} \mbox{ for all } A, \end{equation} where $(W^{\alpha \beta})$ is the inverse matrix of $(W_{\alpha \beta} = \displaystyle \frac{\partial^{2}L}{\partial y^{\alpha}\partial y^{\beta}})$. Furthermore, it is clear that $\{Z_{A}|_\cm\}$ is a local basis of sections of the vector bundle $F \to \cm$.
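To make these local objects concrete, consider a toy illustration of ours (not an example from the text): on the standard Lie algebroid $E = T\mathbb{R}^{2}$ with the mechanical Lagrangian $L = \frac{1}{2}\bigl((y^{1})^{2} + (y^{2})^{2}\bigr)$ and the single nonlinear constraint $\phi = (y^{1})^{2} + (y^{2})^{2} - 1$, the Hessian $(W_{\alpha\beta})$ is the identity and the matrix $\frac{\partial \phi}{\partial y^{\alpha}} W^{\alpha\beta} \frac{\partial \phi}{\partial y^{\beta}}$ is constant and nonzero along the constraint submanifold:

```python
import numpy as np

# Toy illustration (ours, not an example from the text): E = TR^2 with
# L = (1/2)*((y1)^2 + (y2)^2), so the Hessian (W_{alpha beta}) is the
# identity, and one nonlinear constraint phi = y1^2 + y2^2 - 1.
def phi(y):
    return y[0] ** 2 + y[1] ** 2 - 1.0

def dphi_dy(y):
    return np.array([2.0 * y[0], 2.0 * y[1]])

Winv = np.linalg.inv(np.eye(2))    # inverse Hessian W^{alpha beta}

# At sample points of the constraint submanifold phi = 0, the 1x1
# matrix C = (dphi/dy) W^{-1} (dphi/dy)^T equals 4, hence is invertible.
for t in np.linspace(0.0, 2.0 * np.pi, 50):
    y = np.array([np.cos(t), np.sin(t)])
    assert abs(phi(y)) < 1e-12
    a = dphi_dy(y)
    C = float(a @ Winv @ a)
    assert abs(C - 4.0) < 1e-12
```

Since the matrix is invertible at every point of the constraint submanifold, a unique solution of the Lagrange-d'Alembert equations exists for this toy system.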
On the other hand, if $\Gamma_L$ is the Euler-Lagrange section associated with the regular Lagrangian $L$, then a section $\Gamma$ of $\prol[E]{\cm}\to \cm$ is a solution of equations (\ref{8.1}) if and only if \[ \Gamma=(\Gamma_L + \lambda^A Z_A)|_\cm\ \] with $\lambda^A$ local real functions on $E$ satisfying \[ (\lambda^Ad\phi^B(Z_A) + d\phi^B(\Gamma_L))|_\cm\ =0, \mbox{ for all } B=1,\dots ,s. \] Therefore, using (\ref{Zeta}), we conclude that there exists a unique solution of the Lagrange-d'Alembert equations (\ref{8.1}) if and only if the matrix \begin{align}\label{eq:matrix} \big({\mathcal C}^{AB} = \frac{\partial \phi^A}{\partial y^\alpha}W^{\alpha\beta}\frac{\partial \phi^B}{\partial y^\beta}\big)_{A,B=1,\dots ,s} \end{align} is regular. We are now ready to prove the following result. \begin{theorem}\label{regular-nonl} The following properties are equivalent: \begin{enumerate} \item The constrained Lagrangian system $(L,\cm)$ is regular, that is, there exists a unique solution of the Lagrange-d'Alembert equations, \item $\Ker\GLV =\{0\}$, \item $\prol[E]{\cm} \cap F=\{0\}$, \item $\prol[{\mathcal V}]{\cm} \cap(\prol[{\mathcal V}]{\cm})^\perp=\{0\}$. \end{enumerate} \end{theorem} \begin{proof} It is clear that the matrix $({\mathcal C}^{AB})$ in~\eqref{eq:matrix} is regular if and only if $\prol[E]{\cm}\cap F=\{0\}$. Thus, the properties (1) and (3) are equivalent. Moreover, proceeding as in the proof of Theorem \ref{regularity}, we deduce that the properties (2) and (3) (respectively, (2) and (4)) are also equivalent.
\end{proof} \begin{remark}[Lagrangians of mechanical type]\label{mechty} {\rm If $L$ is a Lagrangian function of mechanical type, then, using Theorem~\ref{regular-nonl}, we deduce (as in the case of linear constraints) that the constrained system $(L, \cm)$ is always regular.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \subsection{Lagrange-d'Alembert solutions and nonholonomic bracket} Assume that the constrained Lagrangian system $(L, \cm)$ is regular. Then (3) in Theorem~\ref{regular-nonl} is equivalent to $(\prol[E]{E})|_\cm\ = \prol[E]{\cm} \oplus F$. Denote by $P$ and $Q$ the complementary projectors defined by this decomposition \[ P_{a}: \prol[E]{E}[a] \to \prol[E]{\cm}[a], \makebox[.3cm]{} Q_{a}: \prol[E]{E}[a] \to F_a, \; \; \mbox{ for all } a \in \cm. \] As in the case of linear constraints, we may prove the following. \begin{theorem}\label{dym-nonl} Let $(L, \cm)$ be a regular constrained Lagrangian system and let $\Gamma_{L}$ be the solution of the free dynamics, i.e., $i_{{\Gamma}_{L}}\omega_{L} = dE_{L}$. Then, the solution of the constrained dynamics is the \textsc{sode}\ $\Gamma_{(L, \cm)}$ obtained as follows \[ \Gamma_{(L, \cm)} = P(\Gamma_{L}|_\cm). \] \end{theorem} On the other hand, $(4)$ in Theorem~\ref{regular-nonl} is equivalent to $(\prol[E]{E})|_\cm\ =\prol[\vd]{\cm}\oplus (\prol[\vd]{\cm})^\perp$ and we will denote by $\bar{P}$ and $\bar{Q}$ the corresponding projectors induced by this decomposition, that is, \[ \bar{P}_a:\prol[E]{E}[a]\to \prol[\vd]{\cm}[a],\;\;\; \bar{Q}_a:\prol[E]{E}[a]\to (\prol[\vd]{\cm}[a])^\perp, \mbox{ for all } a\in \cm. \] \begin{theorem}\label{t8.3} Let $(L,\cm)$ be a regular constrained Lagrangian system, $\Gamma_L$ (respectively, $\Gamma_{(L, \cm)}$) be the solution of the free (respectively, constrained) dynamics and $\Delta$ be the Liouville section of $\prol[E]{E}\to E$. 
Then, $\Gamma_{(L, \cm)}=\bar{P}(\Gamma_L|_{\cm})$ if and only if the restriction to ${\cm}$ of the vector field $\rho^1(\Delta)$ on $E$ is tangent to $\cm$. \end{theorem} \begin{proof} Proceeding as in the proof of Lemma~\ref{F-TDD}, we obtain that \[ (\prol[\vd]{\cm}[a])^\perp \cap \Ver{\prol[E]{E}[a]}=F_a,\mbox{ for all }a\in \cm. \] Thus, it is clear that \[ Q(\Gamma_L(a))\in F_a\subseteq (\prol[\vd]{\cm}[a])^\perp, \mbox{ for all } a\in \cm. \] Moreover, from (\ref{8.1'}) and using the fact that the solution of the constrained dynamics is a \textsc{sode}\ along $\cm$, we deduce \[ \Gamma_{(L, \cm)}(a) = P(\Gamma_{L}(a)) \in \prol[\vd]{\cm}[a], \; \; \mbox{ for all } a \in \cm, \] if and only if the restriction to $\cm$ of the vector field $\rho^1(\Delta)$ on $E$ is tangent to $\cm$. This proves the result. \end{proof} \begin{remark}[Linear constraints] {\rm Note that if $\cm$ is a vector subbundle $D$ of $E$, then the vector field $\rho^1(\Delta)$ is always tangent to $\cm = D$.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} As in the case of linear constraints, one may develop the distributional approach in order to obtain the solution of the constrained dynamics. In fact, if $(L, \cm)$ is regular, then $\prol[\vd]{\cm} \to \cm$ is a symplectic subbundle of $(\prol[E]{E}, \omega_{L})$ and, thus, the restriction $\omega^{L, \cm}$ of $\omega_{L}$ to $\prol[\vd]{\cm}$ is a symplectic section on that bundle. We may also define $\varepsilon^{L, \cm}$ as the restriction of $dE_{L}$ to $\prol[\vd]{\cm}$. Then, taking the restriction of Lagrange-d'Alembert equations to $\prol[\vd]{\cm}$, we get the following equation \begin{equation}\label{disapp} i_{\bar{\Gamma}}\omega^{L, \cm} = \varepsilon ^{L, \cm} , \end{equation} which uniquely determines a section $\bar{\Gamma}$ of $\prol[\vd]{\cm} \to \cm$. It is not difficult to prove that $\bar{\Gamma} = \bar{P}(\Gamma_L|_{\cm})$. 
Thus, the unique solution of equation (\ref{disapp}) is the solution of the constrained dynamics if and only if the vector field $\rho^{1}(\Delta)$ is tangent to $\cm$. Let $(L,\cm)$ be a regular constrained Lagrangian system. Since $S^*: (\prol[E]{\cm})^{\circ} \to \Psi$ is a vector bundle isomorphism, it follows that there exists a unique section $\alpha_{(L, \cm)}$ of $(\prol[E]{\cm})^{\circ} \to \cm$ such that \[ i_{Q(\Gamma_{L}|_{\cm})}\omega_{L} = S^*(\alpha_{(L, \cm)}). \] Moreover, we have the following result. \begin{theorem} \label{conser-ener} If $(L, \cm)$ is a regular constrained Lagrangian system and $\Gamma_{(L, \cm)}$ is the solution of the dynamics, then $d_{\Gamma_{(L, \cm)}}(E_{L}|_{\cm}) = 0$ if and only if $\alpha_{(L, \cm)}(\Delta|_{\cm}) = 0$. In particular, if the vector field $\rho^{1}(\Delta)$ is tangent to $\cm$, then $d_{\Gamma_{(L, \cm)}}(E_{L}|_{\cm}) = 0$. \end{theorem} \begin{proof} From Theorem~\ref{dym-nonl}, we deduce \[ (i_{\Gamma_{(L, \cm)}}\omega_{L} - dE_{L})|_{\cm} = -S^*(\alpha_{(L, \cm)}). \] Therefore, using that $\Gamma_{(L, \cm)}$ is a \textsc{sode}\ along $\cm$, we obtain \[ d_{\Gamma_{(L, \cm)}}(E_{L}|_{\cm}) = \alpha_{(L, \cm)}(\Delta|_{\cm}). \] \end{proof} Now, let $(L, \cm)$ be a regular constrained Lagrangian system. In addition, suppose that $f$ and $g$ are two smooth functions on $\cm$ and take arbitrary extensions to $E$ denoted by the same letters. Then, as in Section~\ref{NHBracket}, we may define \emph{the nonholonomic bracket} of $f$ and $g$ as follows \[ \{f,g\}_{nh}=\omega_L(\bar{P}(X_f),\bar{P}(X_g))|_{\cm}, \] where $X_f$ and $X_g$ are the Hamiltonian sections on $\prol[E]{E}$ associated with $f$ and $g$, respectively.
Moreover, proceeding as in the case of linear constraints, one can prove that \[ \dot{f}=\rho^1(R_L)(f)+\{f,E_L\}_{nh}, \;\;\; f\in C^\infty(\cm), \] where $R_L$ is the section of $\prol[E]{\cm}\to \cm$ defined by $R_L=P(\Gamma_L|_\cm)-\bar{P}(\Gamma_L|_{\cm}).$ Thus, in the particular case when the restriction to $\cm$ of the vector field $\rho^1(\Delta)$ on $E$ is tangent to ${\cm}$, it follows that \[ \dot{f}=\{f,E_L\}_{nh},\;\;\; \mbox{ for } f\in C^\infty(\cm). \] Alternatively, since $\prol[\vd]{\cm}$ is an anchored vector bundle, we may consider the differential $\bar{d}f\in \Sec{(\prol[\vd]{\cm})^*}$ for a function $f\in C^\infty(\cm)$. Thus, since the restriction $\omega^{L,\cm}$ of $\omega_L$ to $\prol[\vd]{\cm}$ is regular, we have a unique section $\bar{X}_f \in \Sec{\prol[\vd]{\cm}}$ given by $i_{\bar{X}_f}\omega^{L,\cm}=\bar{d}f$ and it follows that \[ \{f,g\}_{nh}=\omega^{L,\cm}(\bar{X}_f,\bar{X}_g). \] \subsection{Morphisms and reduction} Let $(L,\cm)$ be a regular constrained Lagrangian system on a Lie algebroid $\tau:E\to M$ and let $(L',\cm')$ be another constrained Lagrangian system on a second Lie algebroid $\tau':E'\to M'$. Suppose also that we have a fiberwise surjective morphism of Lie algebroids $\Phi:E\to E'$ over a surjective submersion $\phi:M\to M'$ such that: \begin{itemize} \item[(i)] $L=L'\circ \Phi$, \item[(ii)] $\Phi|_\cm :\cm\to \cm'$ is a surjective submersion, \item[(iii)] $\Phi(\vd[a])=\vd[\Phi(a)]'$, for all $a\in \cm$. \end{itemize} Note that condition (ii) implies that $\Phi(\vd[a])\subseteq \vd[\Phi(a)]'$, for all $a\in \cm$. Moreover, if $V(\Phi)$ is the vertical bundle of $\Phi$ and \[ V_a(\Phi)\subset T_a\cm, \mbox{ for all } a\in \cm , \] then condition (ii) also implies that $\vd[\Phi(a)]'\subseteq \Phi(\vd[a])$, for all $a\in \cm$. 
On the other hand, using condition (iii) and Proposition~\ref{transformation-omegaL}, it follows that $\ker G^{L',{\vd[]'}}=\{0\}$ and, thus, the constrained Lagrangian system $(L',\cm')$ is regular. Moreover, proceeding as in the proof of Lemma~\ref{l5.5} and Theorem~\ref{t5.6}, we deduce the following results. \begin{lemma}\label{l8.4} With respect to the decompositions \[ (\prol[E]{E})|_\cm=\prol[E]\cm\oplus F \quad \text{and} \quad (\prol[E']E')|_{\cm'}=\prol[E']\cm'\oplus F' \] we have the following properties \begin{enumerate} \item[(1)] $\prol[\Phi]{\Phi}(\prol[E]\cm)=\prol[E']{\cm'},$ \item[(2)] $\prol[\Phi]{\Phi}(F)=F'$, \item[(3)] If $P$, $Q$ and $P',Q'$ are the projectors associated with $(L,\cm)$ and $(L',\cm')$, respectively, then $P'\circ \prol[\Phi]{\Phi}=\prol[\Phi]{\Phi}\circ P$ and $Q'\circ \prol[\Phi]{\Phi}=\prol[\Phi]{\Phi}\circ Q$. \end{enumerate} \noindent With respect to the decompositions \[ (\prol[E]{E})|_{\cm}=\prol[\mathcal V]\cm\oplus (\prol[\mathcal V ]\cm)^\perp \mbox{ and } (\prol[E']{E'})|_{\cm'} = \prol[\mathcal V']{\cm'}\oplus (\prol[\mathcal V']{\cm'})^\perp \] we have the following properties \begin{enumerate} \item[(4)] $(\prol[\Phi]{\Phi})(\prol[\mathcal V]{\cm})=\prol[\mathcal V']{\cm'}$, \item[(5)] $(\prol[\Phi]{\Phi})((\prol[\mathcal V]{\cm})^\perp)=(\prol[\mathcal V']{\cm'})^\perp$, \item[(6)] If $\bar{P},\bar{Q}$ and $\bar{P}'$ and $\bar{Q'}$ are the projectors associated with $(L,{\cm})$ and $(L',{\cm'})$, respectively, then $\bar{P}'\circ \prol[\Phi]{\Phi} = \prol[\Phi]{\Phi}\circ \bar{P}$ and $\bar{Q}'\circ \prol[\Phi]{\Phi} = \prol[\Phi]{\Phi}\circ \bar{Q}$. \end{enumerate} \end{lemma} \begin{theorem}[Reduction of the constrained dynamics]\label{t8.5} Let $(L, \cm)$ be a regular constrained Lagrangian system on a Lie algebroid $E$ and let $(L',\cm')$ be a constrained Lagrangian system on a second Lie algebroid $E'$. 
Assume that we have a fiberwise surjective morphism of Lie algebroids $\Phi:E\to E'$ over $\phi:M\to M'$ such that conditions (i)-(iii) hold. If $\Gamma_{(L, \cm)}$ is the constrained dynamics for $L$ and $\Gamma_{(L', \cm')}$ is the constrained dynamics for $L'$, respectively, then $\prol[\Phi]{\Phi}\circ \Gamma_{(L, \cm)}=\Gamma_{(L', \cm')}\circ \Phi$. If $a(t)$ is a solution of Lagrange-d'Alembert differential equations for $L$, then $\Phi(a(t))$ is a solution of Lagrange-d'Alembert differential equations for $L'$. \end{theorem} We will say that the constrained dynamics $\Gamma_{(L', \cm')}$ is \emph{the reduction of the constrained dynamics} $\Gamma_{(L, \cm)}$ by the morphism $\Phi$. As in the case of linear constraints (see Theorem~\ref{t5.7}), we also may prove the following result \begin{theorem}\label{t8.6'} Under the same hypotheses as in Theorem~\ref{t8.5}, we have that \[ \{f'\circ \Phi,g'\circ \Phi\}_{nh}=\{f',g'\}_{nh}'\circ \Phi , \] for $f',g'\in C^\infty(\cm')$, where $\{\cdot,\cdot\}_{nh}$ (respectively, $\{\cdot,\cdot\}_{nh}'$) is the nonholonomic bracket for the constrained system $(L,\cm)$ (respectively, $(L',\cm')$). In other words, $\Phi:\cm\to \cm'$ is an almost-Poisson morphism. \end{theorem} Now, let $\phi:Q\to M$ be a principal $G$-bundle and $\tau:E\to Q$ be a Lie algebroid over $Q$. In addition, assume that we have an action of $G$ on $E$ such that the quotient vector bundle $E/G$ is defined and the set $\Sec{E}^G$ of equivariant sections of $E$ is a Lie subalgebra of $\Sec{E}$. Then, $E'=E/G$ has a canonical Lie algebroid structure over $M$ such that the canonical projection $\Phi:E\to E'$ is a fiberwise bijective Lie algebroid morphism over $\phi$ (see Theorem~\ref{quotient-Lie-algebroid}). Next, suppose that $(L,\cm)$ is a $G$-invariant regular constrained Lagrangian system, that is, the Lagrangian function $L$ and the constraint submanifold $\cm$ are $G$-invariant. 
Then, one may define a Lagrangian function $L':E'\to \mathbb{R}$ on $E'$ such that \[ L=L'\circ \Phi. \] Moreover, $G$ acts on $\cm$ and if the set of orbits $\cm'=\cm/G$ of this action is a quotient manifold, that is, $\cm'$ is a smooth manifold and the canonical projection $\Phi_{|\cm}:\cm\to \cm'={\cm }/{G}$ is a submersion, then one may consider the constrained Lagrangian system $(L',{\cm}')$ on $E'$. \begin{remark}[Quotient manifold]\label{r8.5'} {\rm If $\cm$ is a closed submanifold of $E$, then, using a well-known result (see~\cite[Theorem~4.1.20]{AM}), it follows that the set of orbits $\cm'=\cm/G$ is a quotient manifold.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} Since the orbits of the action of $G$ on $E$ are the fibers of $\Phi$ and $\cm$ is $G$-invariant, we deduce that \[ V_a(\Phi)\subseteq T_a\cm, \mbox{ for all } a\in \cm, \] which implies that $\Phi_{|\vd[a]}:{\vd[a]}\to \vd[\Phi(a)]'$ is a linear isomorphism, for all $a\in \cm.$ Thus, from Theorem~\ref{t8.5}, we conclude that the constrained Lagrangian system $(L',\cm')$ is regular and that \[ \prol[\Phi]{\Phi}\circ \Gamma_{(L, \cm)}=\Gamma_{(L', \cm')}\circ \Phi, \] where $\Gamma_{(L, \cm)}$ (resp., $\Gamma_{(L', \cm')}$) is the constrained dynamics for $L$ (resp., $L'$). In addition, using Theorem~\ref{t8.6'}, we obtain that $\Phi: \cm \to \cm'$ is an almost-Poisson morphism when on $\cm$ and $\cm'$ we consider the almost-Poisson structures induced by the corresponding nonholonomic brackets. We illustrate the results above in a particular example in the following subsection. \subsection{Example: a ball rolling on a rotating table} The following example is taken from~\cite{BlKrMaMu,CLMM,NF}. A (homogeneous) sphere of radius $r>0$, unit mass $m=1$ and inertia about any axis $k^2,$ rolls without sliding on a horizontal table which rotates with constant angular velocity $\Omega$ about a vertical axis through one of its points. 
Apart from the constant gravitational force, no other external forces are assumed to act on the sphere. Choose a Cartesian reference frame with origin at the center of rotation of the table and $z$-axis along the rotation axis. Let $(x,y)$ denote the position of the point of contact of the sphere with the table. The configuration space for the sphere on the table is $Q=\mathbb{R}^2\times SO(3)$, where $SO(3)$ may be parameterized by the Eulerian angles $\theta,\varphi$ and $\psi$. The kinetic energy of the sphere is then given by \[ T=\frac{1}{2}(\dot{x}^2 + \dot{y}^2 + k^2(\dot\theta^2 + \dot\varphi^2 + \dot\psi^2 + 2 \dot\varphi\dot\psi \cos \theta)) , \] and with the potential energy being constant, we may put $V=0.$ The constraint equations are \begin{align*} \dot{x}-r\dot{\theta}\sin \psi + r \dot\varphi\sin \theta \cos \psi&=-\Omega y,\\ \dot{y} + r\dot\theta \cos\psi + r \dot{\varphi}\sin \theta \sin \psi&=\Omega x. \end{align*} Since the Lagrangian function is of mechanical type, the constrained system is regular. Note that the constraints are affine, and hence not linear, and that the restriction to the constraint submanifold $\cm$ of the Liouville vector field on $TQ$ is not tangent to $\cm$. Indeed, the constraints are linear if and only if $\Omega = 0$. Now, we can proceed from here to construct the equations of motion of the sphere, following the general theory. However, the use of the Eulerian angles as part of the coordinates leads to very complicated expressions. Instead, one may choose to exploit the symmetry of the problem, and one way to do this is by the use of appropriate \emph{quasi-coordinates} (see \cite{NF}).
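The rewriting of the kinetic energy in terms of the body angular velocity (used just below) relies on the identity $\omega_x^2+\omega_y^2+\omega_z^2 = \dot\theta^2+\dot\varphi^2+\dot\psi^2+2\dot\varphi\dot\psi\cos\theta$, which is easy to check symbolically. A sketch with sympy (the variable names are ours, not from the text):

```python
import sympy as sp

# Euler angles and their time derivatives (symbolic placeholders)
theta, psi = sp.symbols('theta psi', real=True)
thd, phd, psd = sp.symbols('thetadot varphidot psidot', real=True)

# body angular velocity components in terms of the Euler angles
wx = thd * sp.cos(psi) + phd * sp.sin(theta) * sp.sin(psi)
wy = thd * sp.sin(psi) - phd * sp.sin(theta) * sp.cos(psi)
wz = phd * sp.cos(theta) + psd

# |omega|^2 should reduce to the Euler-angle form of the kinetic energy
lhs = wx**2 + wy**2 + wz**2
rhs = thd**2 + phd**2 + psd**2 + 2 * phd * psd * sp.cos(theta)
print(sp.simplify(lhs - rhs))  # 0
```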
First of all, observe that the kinetic energy may be expressed as \[ T=\frac{1}{2}(\dot{x}^2+\dot{y}^2 + k^2(\omega_x^2 + \omega_y^2 + \omega_z^2)), \] where \begin{align*} \omega_x&= \dot{\theta}\cos\psi + \dot{\varphi}\sin\theta\sin\psi , \\ \omega_y&= \dot{\theta}\sin\psi - \dot{\varphi}\sin\theta\cos\psi , \\ \omega_z&= \dot{\varphi}\cos\theta + \dot\psi , \end{align*} are the components of the angular velocity of the sphere. The constraint equations expressing the rolling conditions can be rewritten as \begin{align*} \dot{x}-r\omega_y & = -\Omega y,\\ \dot{y} + r\omega_x& = \Omega x. \end{align*} Next, following~\cite{CLMM}, we consider local coordinates $(\bar{x},\bar{y},\bar{\theta}, \bar{\varphi}, \bar{\psi}; \pi_i)_{i=1,\dots ,5}$ on $TQ=T\mathbb{R}^2 \times T(SO(3))$, where \[ \bar{x}=x,\;\;\; \bar{y}=y,\;\;\; \bar{\theta}=\theta,\;\;\; \bar{\varphi}=\varphi,\;\;\; \bar{\psi}=\psi,\] \[ \pi_1=r\dot{x}+k^2\dot{q}_2,\;\;\; \pi_2=r\dot{y}-k^2\dot{q}_1,\;\;\; \pi_3=k^2\dot{q}_3, \] \[ \pi_4=\frac{k^2}{(k^2+ r^2)}(\dot{x}-r\dot{q}_2 + \Omega y),\;\;\; \pi_5=\frac{k^2}{(k^2+ r^2)}(\dot{y}+r\dot{q}_1 - \Omega x), \] and $(\dot{q}_1,\dot{q}_2,\dot{q}_3)$ are the quasi-coordinates defined by \[ \dot{q}_1=\omega_x,\;\;\; \dot{q}_2=\omega_y,\;\;\; \dot{q}_3=\omega_z. \] As is well-known, the coordinates $q_i$ only have a symbolic meaning.
In fact, $ \displaystyle \{\frac{\partial }{\partial q_1}, \frac{\partial }{\partial q_2}, \frac{\partial }{\partial q_3}\}$ is the basis of left-invariant vector fields on $SO(3)$ given by \begin{align*} \frac{\partial }{\partial q_1} &= (\cos\psi) \frac{\partial }{\partial \theta} + \frac{\sin \psi }{\sin \theta}(\frac{\partial }{\partial \varphi}-\cos\theta \frac{\partial }{\partial \psi}),\\ \frac{\partial }{\partial q_2} &= (\sin \psi) \frac{\partial }{\partial \theta}-\frac{\cos\psi }{\sin \theta}(\frac{\partial }{\partial \varphi}-\cos\theta \frac{\partial }{\partial \psi}),\\ \frac{\partial }{\partial q_3} &= \frac{\partial }{\partial \psi} , \end{align*} and we have that \[ [\frac{\partial }{\partial q_2}, \frac{\partial }{\partial q_1}]=\frac{\partial }{\partial q_3} , \quad [\frac{\partial }{\partial q_1}, \frac{\partial }{\partial q_3}]=\frac{\partial }{\partial q_2} , \quad [\frac{\partial }{\partial q_3}, \frac{\partial }{\partial q_2}]=\frac{\partial }{\partial q_1}. \] Note that in the new coordinates the local equations defining the constraint submanifold $\cm$ are $\pi_4=0,$ $\pi_5=0$. On the other hand, if $P:(\prol[TQ]{TQ})|_\cm =T_\cm(TQ)\to \prol[TQ]{\cm}=T\cm$ and $Q:T_\cm(TQ)\to F$ are the projectors associated with the decomposition $T_\cm(TQ)=T\cm\oplus F$, then we have that (see~\cite{CLMM}) \begin{align*} Q&= \frac{\partial }{\partial \pi_4}\otimes d\pi_4 + \frac{\partial }{\partial \pi_5}\otimes d\pi_5,\\ P&= \text{Id} - \frac{\partial }{\partial \pi_4}\otimes d\pi_4-\frac{\partial }{\partial \pi_5}\otimes d\pi_5. 
\end{align*} Moreover, using that the unconstrained dynamics $\Gamma_L$ is given by \begin{align*} \Gamma_L &=\dot{x}\displaystyle\frac{\partial }{\partial \bar{x}} + \dot{y}\displaystyle\frac{\partial }{\partial \bar{y}}+ \dot{\theta}\displaystyle\frac{\partial }{\partial \bar{\theta}} + \dot{\varphi}\displaystyle\frac{\partial }{\partial \bar{\varphi}}+ \dot{\psi}\displaystyle\frac{\partial }{\partial \bar{\psi}} + \displaystyle\frac{k^2\Omega}{(k^2 + r^2)}\dot{y}\displaystyle\frac{\partial }{\partial \pi_4} - \displaystyle\frac{k^2\Omega}{(k^2 + r^2)}\dot{x}\displaystyle\frac{\partial }{\partial \pi_5}\\ &= \dot{x}\displaystyle\frac{\partial }{\partial \bar{x}} + \dot{y}\displaystyle\frac{\partial }{\partial \bar{y}}+ \dot{q_1}\displaystyle\frac{\partial }{\partial q_1} + \dot{q_2}\displaystyle\frac{\partial }{\partial q_2}+ \dot{q_3}\displaystyle\frac{\partial }{\partial q_3} + \displaystyle\frac{k^2\Omega}{(k^2 + r^2)}\dot{y}\displaystyle\frac{\partial }{\partial \pi_4} - \displaystyle\frac{k^2\Omega}{(k^2 + r^2)}\dot{x}\displaystyle\frac{\partial }{\partial \pi_5}, \end{align*} we deduce that the constrained dynamics is the \textsc{sode}\ $\Gamma_{(L, \cm)}$ along $\cm$ defined by \begin{align} \label{Gamma} \Gamma_{(L, \cm)} = P(\Gamma_L|_{\cm}) &=(\dot{x}\displaystyle\frac{\partial }{\partial \bar{x}} + \dot{y}\displaystyle\frac{\partial }{\partial \bar{y}}+ \dot{\theta}\displaystyle\frac{\partial }{\partial \bar{\theta}} + \dot{\varphi}\displaystyle\frac{\partial }{\partial \bar{\varphi}}+ \dot{\psi}\displaystyle\frac{\partial }{\partial \bar{\psi}})|_{\cm} \nonumber \\&=(\dot{x}\displaystyle\frac{\partial }{\partial \bar{x}} + \dot{y}\displaystyle\frac{\partial }{\partial \bar{y}}+ \dot{q_1}\displaystyle\frac{\partial }{\partial q_1} + \dot{q_2}\displaystyle\frac{\partial }{\partial q_2}+ \dot{q_3}\displaystyle\frac{\partial }{\partial q_3})|_{\cm}.
\end{align} This implies that \[ d_{\Gamma_{(L, \cm)}}(E_{L}|_{\cm}) = d_{\Gamma_{(L, \cm)}}(L|_{\cm}) = \displaystyle \frac{\Omega^{2}k^{2}}{(k^{2} + r^{2})}(x \dot{x} + y \dot{y})|_{\cm}. \] Consequently, the Lagrangian energy is a constant of the motion if and only if $\Omega = 0$. When constructing the nonholonomic bracket on $\cm$, we find that the only non-zero fundamental brackets are \begin{equation}\label{Cornonh} \begin{array}{ll} \{x,\pi_1\}_{nh}=r,&\kern-50pt\{y,\pi_2\}_{nh}=r,\\ \{q_1,\pi_2\}_{nh}=-1,&\kern-50pt\{q_2,\pi_1\}_{nh}=1,\;\;\;\;\;\;\;\; \{q_3,\pi_3\}_{nh}=1,\\[5pt] \{\pi_1,\pi_2\}_{nh}=\pi_3,&\kern-50pt\{\pi_2,\pi_3\}_{nh} = \displaystyle\frac{k^2}{(k^2+r^2)}\pi_1 + \displaystyle\frac{rk^2\Omega}{(k^2+ r^2)}y,\\\{\pi_3,\pi_1\}_{nh}=\displaystyle\frac{k^2}{(k^2+r^2)}\pi_2 - \displaystyle\frac{rk^2\Omega}{(k^2+ r^2)}x, \end{array} \end{equation} in which the ``appropriate operational'' meaning has to be attached to the quasi-coordinates $q_i$. As a result, we have \[ \dot{f}=R_L(f) + \{f,L\}_{nh}, \mbox{ for $f\in C^\infty(\cm)$} \] where $R_L$ is the vector field on $\cm$ given by \begin{align*} R_L&=\displaystyle(\frac{k^2\Omega}{(k^2+ r^2)}(x\displaystyle\frac{\partial }{\partial y}-y\displaystyle\frac{\partial }{\partial x}) + \displaystyle\frac{r\Omega}{(k^2+ r^2)}(x\displaystyle\frac{\partial }{\partial q_1} + y \frac{\partial }{\partial q_2} \\[5pt] & + x(\pi_3-k^2\Omega)\displaystyle\frac{\partial }{\partial \pi_1} + y (\pi_3-k^2\Omega)\displaystyle\frac{\partial }{\partial \pi_2}-k^2(\pi_1x+\pi_2y)\displaystyle\frac{\partial }{\partial \pi_3}))|_{\cm}. \end{align*} Note that $R_{L} = 0$ if and only if $\Omega = 0$. Now, it is clear that $Q=\mathbb{R}^2\times SO(3)$ is the total space of a trivial principal $SO(3)$-bundle over $\mathbb{R}^2$ and the bundle projection $\phi:Q\to M=\mathbb{R}^2$ is just the canonical projection on the first factor. 
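The energy balance above can be cross-checked symbolically. Along the constrained dynamics (\ref{Gamma}) the momenta $\pi_1,\pi_2,\pi_3$ are constant, which, combined with the constraints, gives $\ddot{x}=-\omega_c\dot{y}$, $\ddot{y}=\omega_c\dot{x}$ and $\dot\omega_z=0$, with $\omega_c=k^2\Omega/(k^2+r^2)$. A sketch of the check with sympy (variable names are ours, not from the text):

```python
import sympy as sp

x, y, vx, vy, w3, k, r, Om = sp.symbols('x y vx vy w3 k r Omega', real=True)
w_c = Om * k**2 / (k**2 + r**2)

# accelerations of the contact point along the constrained dynamics
ax, ay = -w_c * vy, w_c * vx

# on the constraint submanifold: omega_x = (Om*x - vy)/r, omega_y = (vx + Om*y)/r
w1, w2 = (Om * x - vy) / r, (vx + Om * y) / r

# Lagrangian energy restricted to the constraint submanifold (w3 = omega_z)
E = sp.Rational(1, 2) * (vx**2 + vy**2) + k**2 / 2 * (w1**2 + w2**2 + w3**2)

# time derivative of E by the chain rule (w3 is constant along the motion)
dE = (sp.diff(E, x) * vx + sp.diff(E, y) * vy
      + sp.diff(E, vx) * ax + sp.diff(E, vy) * ay)

formula = Om**2 * k**2 / (k**2 + r**2) * (x * vx + y * vy)
print(sp.simplify(dE - formula))  # 0
```

In particular, $dE/dt$ vanishes identically only when $\Omega=0$, in agreement with the statement above.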
Therefore, we may consider the corresponding Atiyah algebroid $E'=TQ/SO(3)$ over $M=\mathbb{R}^2$. Next, we describe this Lie algebroid. Using the left-translations in $SO(3)$, one may define a diffeomorphism $\lambda$ between the tangent bundle to $SO(3)$ and the product manifold $SO(3)\times \mathbb{R}^3$ (see \cite{AM}). In fact, in terms of the Euler angles, the diffeomorphism $\lambda$ is given by \begin{equation}\label{lambda} \lambda(\theta,\varphi,\psi;\dot\theta,\dot\varphi,\dot\psi) = (\theta,\varphi,\psi; \omega_x,\omega_y,\omega_z). \end{equation} Under this identification between $T(SO(3))$ and $SO(3)\times \mathbb{R}^3$, the tangent action of $SO(3)$ on $T(SO(3))\cong SO(3)\times \mathbb{R}^3$ is the trivial action \begin{equation}\label{Action} SO(3)\times (SO(3)\times \mathbb{R}^3)\to SO(3)\times \mathbb{R}^3,\;\;\; (g,(h,\omega))\mapsto (gh,\omega). \end{equation} Thus, the Atiyah algebroid $TQ/SO(3)$ is isomorphic to the product manifold $T\mathbb{R}^2\times \mathbb{R}^3$, and the vector bundle projection is $\tau_{\mathbb{R}^2}\circ pr_1$, where $pr_1:T\mathbb{R}^2\times \mathbb{R}^3\to T\mathbb{R}^2$ and $\tau_{\mathbb{R}^2}:T\mathbb{R}^2\to \mathbb{R}^2$ are the canonical projections. A section of $E'=TQ/SO(3)\cong T\mathbb{R}^2\times \mathbb{R}^3\to \mathbb{R}^2$ is a pair $(X,u)$, where $X$ is a vector field on $\mathbb{R}^2$ and $u:\mathbb{R}^2\to \mathbb{R}^3$ is a smooth map. Therefore, a global basis of sections of $T\mathbb{R}^2\times \mathbb{R}^3\to \mathbb{R}^2$ is \[ \begin{array}{rrr} e_1'=(\displaystyle\frac{\partial}{\partial x},0),& e_2'=(\displaystyle\frac{\partial}{\partial y},0),&\\[5pt] e_3'=(0,u_1),& e_4'=(0,u_2),& e_5'=(0,u_3), \end{array} \] where $u_1,u_2,u_3:\mathbb{R}^2\to \mathbb{R}^3$ are the constant maps \[ u_1(x,y)=(1,0,0),\;\;\; u_2(x,y)=(0,1,0),\;\;\; u_3(x,y)=(0,0,1). \] In other words, there exists a one-to-one correspondence between the space $\Sec{E'=TQ/SO(3)}$ and the $G$-invariant vector fields on $Q$. 
Under this bijection, the sections $e_1'$ and $e_2'$ correspond to the vector fields $\displaystyle\frac{\partial }{\partial x}$ and $\displaystyle\frac{\partial }{\partial y}$ and the sections $e_3'$, $e_4'$ and $e_5'$ correspond to the vertical vector fields $\displaystyle\frac{\partial }{\partial q_1}$, $\displaystyle\frac{\partial }{\partial q_2}$ and $\displaystyle\frac{\partial }{\partial q_3}$, respectively. The anchor map $\rho':E'=TQ/SO(3)\cong T\mathbb{R}^2\times \mathbb{R}^3\to T\mathbb{R}^2$ is the projection onto the first factor and, if $\lbrack\! \lbrack\cdot, \cdot \rbrack\! \rbrack'$ is the Lie bracket on the space $\Sec{E'=TQ/SO(3)}$, then the only non-zero fundamental Lie brackets are \[ \lbrack\! \lbrack e_4',e_3'\rbrack\! \rbrack'=e_5',\;\;\;\lbrack\! \lbrack e_5',e_4'\rbrack\! \rbrack'=e_3',\;\;\; \lbrack\! \lbrack e_3',e_5'\rbrack\! \rbrack'=e_4'. \] From (\ref{lambda}) and (\ref{Action}), it follows that the Lagrangian function $L=T$ and the constraint submanifold $\cm$ are $SO(3)$-invariant. Consequently, $L$ induces a Lagrangian function $L'$ on $E'=TQ/SO(3)$ and, since $\cm$ is closed in $TQ$, the set of orbits $\cm'=\cm/SO(3)$ is a submanifold of $E'=TQ/SO(3)$ in such a way that the canonical projection $\Phi|_\cm:\cm\to \cm'=\cm/SO(3)$ is a surjective submersion. Under the identification between $E'=TQ/SO(3)$ and $T\mathbb{R}^2\times \mathbb{R}^3$, $L'$ is given by \[ L'(x,y,\dot{x},\dot{y};\omega_1,\omega_2,\omega_3) = \frac{1}{2}(\dot{x}^2 + \dot{y}^2) + \frac{k^2}{2} (\omega_1^2 + \omega_2^2 + \omega_3^2) , \] where $(x,y,\dot{x},\dot{y})$ and $(\omega_1,\omega_2,\omega_3)$ are the standard coordinates on $T\mathbb{R}^2$ and $\mathbb{R}^3$, respectively. Moreover, the equations defining $\cm'$ as a submanifold of $T\mathbb{R}^2\times \mathbb{R}^3$ are \[ \dot{x}-r\omega_2 + \Omega y=0,\;\;\; \dot{y} + r\omega_1-\Omega x=0.
\] So, we have the constrained Lagrangian system $(L',\cm')$ on the Atiyah algebroid $E'=TQ/SO(3)\cong T\mathbb{R}^2\times \mathbb{R}^3$. Note that the constraints are not linear, and that, if $\Delta'$ is the Liouville section of the prolongation $\prol[E']{E'}$, then the restriction to $\cm'$ of the vector field $(\rho')^1(\Delta')$ is not tangent to $\cm'$. Now, it is clear that the tangent bundle $TQ=T\mathbb{R}^2\times T(SO(3))\cong T\mathbb{R}^2\times (SO(3)\times \mathbb{R}^3)$ is the total space of a trivial principal $SO(3)$-bundle over $E'=TQ/SO(3)\cong T\mathbb{R}^2\times \mathbb{R}^3$ and, in addition (see~\cite[Theorem 9.1]{LeMaMa}), the prolongation $\prol[E']{E'}$ is isomorphic to the Atiyah algebroid associated with this principal $SO(3)$-bundle. Therefore, the sections of the prolongation $\prol[E']{E'}\to E'$ may be identified with the $SO(3)$-invariant vector fields on $TQ\cong T\mathbb{R}^2\times (SO(3)\times \mathbb{R}^3)$. Under this identification, the constrained dynamics $\Gamma_{(L', \cm')}$ for the system $(L',\cm')$ is just the $SO(3)$-invariant vector field $\Gamma_{(L, \cm)}=P(\Gamma_L|_{\cm})$. We recall that if $\Phi:TQ\to TQ/SO(3)$ is the canonical projection, then \begin{equation}\label{Reduc} \prol[\Phi]{\Phi}\circ \Gamma_{(L, \cm)}=\Gamma_{(L', \cm')}\circ \Phi. \end{equation} Next, we give a local description of the vector field $(\rho')^1(\Gamma_{(L', \cm')})$ on $E'=TQ/SO(3)\cong T\mathbb{R}^2\times \mathbb{R}^3$ and the nonholonomic bracket $\{\cdot,\cdot\}_{nh}'$ for the constrained system $(L',\cm')$. 
For this purpose, we consider a suitable system of local coordinates on $TQ/SO(3)\cong T\mathbb{R}^2\times \mathbb{R}^3.$ If we set \[\begin{array}{lll} x'=x,&y'=y,&\\ \pi_1'=r\dot{x} + k^2\omega_2,&\pi_2'=r\dot{y}-k^2\omega_1,&\pi_3'=k^2\omega_3,\\ \pi_4'=\frac{k^2}{(k^2 + r^2)}(\dot{x}-r\omega_2 + \Omega y), &\pi_5'=\frac{k^2}{(k^2+r^2)}(\dot{y} + r \omega_1 -\Omega x), \end{array} \] then $(x',y',\pi_1',\pi_2',\pi_3',\pi_4',\pi_5')$ is a system of local coordinates on $TQ/SO(3)\cong T\mathbb{R}^2\times \mathbb{R}^3$. In these coordinates the equations defining the submanifold $\cm'$ are $\pi_4'=0$ and $\pi_5'=0$, and the canonical projection $\Phi:TQ\to TQ/SO(3)$ is given by \begin{equation}\label{Phi} \Phi(\bar{x},\bar{y},\bar{\theta},\bar{\varphi},\bar{\psi}; \pi_1,\pi_2,\pi_3,\pi_4,\pi_5)=(\bar{x},\bar{y};{\pi}_1, {\pi}_2, {\pi}_3, {\pi}_4, {\pi}_5). \end{equation} Thus, from (\ref{Gamma}) and (\ref{Reduc}), it follows that \[ (\rho')^1(\Gamma_{(L', \cm')})=(\dot{x}'\frac{\partial }{\partial x'} + \dot{y}'\frac{\partial }{\partial y'})|_{\cm'} , \] or, in the standard coordinates $(x,y,\dot{x},\dot{y}; \omega_1,\omega_2,\omega_3)$ on $T\mathbb{R}^2\times \mathbb{R}^3,$ \begin{align*} (\rho')^1(\Gamma_{(L', \cm')})&=\{\dot{x}(\displaystyle\frac{\partial }{\partial x} + \displaystyle\frac{\Omega k^2}{(k^2 + r^2)}\displaystyle\frac{\partial }{\partial \dot{y}} + \displaystyle\frac{\Omega r}{(k^2 + r^2)}\displaystyle\frac{\partial }{\partial \omega_1})\\ & \quad + \dot{y}(\displaystyle\frac{\partial }{\partial y} - \displaystyle\frac{\Omega k^2}{(k^2 + r^2)}\displaystyle\frac{\partial }{\partial \dot{x}} + \displaystyle\frac{\Omega r}{(k^2 + r^2)}\displaystyle\frac{\partial }{\partial \omega_2})\}|_{\cm'}.
\end{align*} On the other hand, from (\ref{Cornonh}), (\ref{Phi}) and Theorem~\ref{t8.6'}, we deduce that the only non-zero fundamental nonholonomic brackets for the system $(L',\cm')$ are \[ \begin{array}{lll} \{x',\pi_1'\}'_{nh}=r,& \{y',\pi_2'\}'_{nh}=r,&\\ \{\pi_1',\pi_2'\}_{nh}'=\pi_3',&\{\pi_2',\pi_3'\}_{nh}' = \displaystyle\frac{k^2}{(k^2+ r^2)}\pi_1' + \displaystyle\frac{rk^2\Omega}{(k^2+ r^2)}y',&\\ \{\pi_3',\pi_1'\}_{nh}'=\displaystyle\frac{k^2}{(k^2 + r^2)} \pi_2'- \displaystyle\frac{rk^2\Omega}{(k^2+ r^2)}x'.&& \end{array} \] Therefore, we have that \[ \dot{f}'=(\rho')^1(R_{L'})(f') + \{f',L'\}_{nh}',\mbox{ for } f'\in C^\infty(\cm'), \] where $(\rho')^1(R_{L'})$ is the vector field on $\cm'$ given by \begin{align*} (\rho')^1(R_{L'}) &= \{ \displaystyle\frac{k^2\Omega}{k^2+ r^2}(x'\displaystyle\frac{\partial }{\partial y'}-y'\displaystyle\frac{\partial }{\partial x'}) + \displaystyle\frac{r\Omega}{(k^2 + r^2)}(x'(\pi_3'-k^2\Omega)\displaystyle\frac{\partial }{\partial \pi_1'} \\[5pt] & \quad + y' (\pi_3'-k^2\Omega)\displaystyle\frac{\partial }{\partial \pi_2'}-k^2(\pi_1' x' + \pi_2' y')\displaystyle\frac{\partial }{\partial \pi_3'})\}|_{\cm'}. \end{align*} \section{Conclusions and outlook}\label{conclusions} We have developed a geometrical description of nonholonomic mechanical systems in the context of Lie algebroids. This formalism is the natural extension of the standard treatment on the tangent bundle of the configuration space. The proposed approach also allows us to deal with nonholonomic mechanical systems with symmetry, and to perform the reduction procedure in a unified way.
The main results obtained in the paper are summarized as follows: \begin{itemize} \item we have identified the notion of regularity of a nonholonomic mechanical system with linear constraints on a Lie algebroid, and we have characterized it in geometrical terms; \item we have obtained the constrained dynamics by projecting the unconstrained one using two different decompositions of the prolongation of the Lie algebroid along the constraint subbundle; \item we have developed a reduction procedure by stages and applied it to nonholonomic mechanical systems with symmetry. These results have allowed us to get new insights into the technique of quasi-coordinates; \item we have defined the operation of nonholonomic bracket to measure the evolution of observables along the solutions of the system; \item we have examined the setup of nonlinearly constrained systems; \item we have illustrated the main results of the paper in several examples. \end{itemize} Current and future directions of research include the in-depth study of the reduction procedure following the steps of~\cite{BlKrMaMu,CaLeMaMa} for the standard case; the synthesis of so-called nonholonomic integrators~\cite{cortes,CoSo,LeMaSa} for systems evolving on Lie algebroids, and the development of a comprehensive treatment of classical field theories within the Lie algebroid formalism following the ideas by E. Mart{\'\i}nez~\cite{CFTLAMF}.
\section{Introduction} The mass in the visible Universe appears to be made up exclusively of matter. There is no evidence for stable antimatter up to close to the present Hubble radius \cite{Cohen}. Based on the observed matter energy density (from which the number density $n_B$ of baryons can be determined) and on the measured temperature of the cosmic microwave background (which yields the entropy density $s$), it follows that the baryon to entropy ratio $n_B / s$ is \begin{equation} {{n_B} \over s} \, \sim \, 10^{-10} \, . \label{ratio} \end{equation} A second argument in favor of this value of $n_B / s$ comes from the theory of big bang nucleosynthesis \cite{BBN}. The predicted and observed abundances of the light elements agree precisely if $n_B / s$ is in the range given by (\ref{ratio}). The goal of the theory of baryogenesis is to explain the origin of (\ref{ratio}) starting with symmetric initial conditions at very early times. Sakharov \cite{Sakharov} realized that in order to obtain a model of baryogenesis, three criteria must be satisfied:\hfill\break 1. $n_B$ violating processes must exist;\hfill\break 2. these processes must involve C and CP violation;\hfil\break 3. they must occur out of thermal equilibrium.\hfil\break Another way to state these criteria is that, in addition to the existence of baryon number violating processes, there needs to be a period in the early Universe in which the CPT symmetry is broken. In an expanding Universe, this condition is not hard to achieve since the expansion determines a preferred direction of time. The first theory of baryogenesis (there are several good recent reviews \cite{GUTBG} on this topic) was in the context of Grand Unified Theories, theories in which baryon number violating processes occur at the perturbative level since there are particles (the superheavy Higgs and gauge particles $X$ and $A_{\mu}$) which couple baryons and leptons. 
Baryons are generated at a temperature $T_{out} \sim T_{GUT} \sim 10^{16}{\rm GeV}$ by the out-of-equilibrium decay of the superheavy $X$ and $A_{\mu}$ particles. These particles were in thermal equilibrium for $T \gg T_{GUT}$ but fall out of equilibrium at a temperature $T_{out}$ close to the GUT symmetry breaking scale $T_{GUT}$ as the Universe expands and cools. Obviously, GUT baryogenesis makes use of new physics beyond the standard model. It also requires a new source of CP violation (perturbative CP violation in the standard model sector is too weak to account for the observed value of $n_B / s$), but such new CP violation is rather naturally present in the extended Higgs sector of a GUT model. A potentially fatal problem for GUT baryogenesis was pointed out by Kuzmin, Rubakov and Shaposhnikov \cite{KRS}: there are nonperturbative processes in the standard model which violate baryon number and are unsuppressed for $T \gg T_{EW}$, where $T_{EW}$ is the electroweak symmetry breaking scale, and hence can erase any primordial baryon asymmetry generated at $T \gg T_{EW}$, for example $T = T_{GUT}$. One way to protect GUT baryogenesis from this washout is to generate during the GUT phase transition an asymmetry in a quantum number like $B - L$ (where $B$ and $L$ denote baryon and lepton number, respectively) which is not violated by nonperturbative electroweak processes. The nonperturbative baryon number violating processes in the electroweak theory are related to the nontrivial gauge vacuum structure \cite{JR,tHooft}. The configuration $A_{\mu} = 0$ is not the only vacuum state. There are energetically degenerate states with nontrivial gauge field configurations $A_{\mu} \neq 0$. A gauge-invariant way to distinguish between these states is in terms of a topological invariant, the Chern-Simons number $N_{CS}$. The transitions between configurations of different $N_{CS}$ are called {\it sphaleron} transitions \cite{Manton}. 
They are exponentially suppressed at zero temperature $T = 0$. However, at temperatures $T \gg T_{EW}$, they are unsuppressed. In a theory in which $N_f$ fermion SU(2) doublets couple to the gauge fields, there is a change in baryon number $\Delta N_B$ associated with a sphaleron transition: \begin{equation} \Delta N_B \, = \, N_f \Delta N_{CS} \, . \end{equation} Hence, for $T \gg T_{EW}$, baryon number violating processes are in equilibrium. Note, however, that sphalerons preserve $B - L$. An alternative to trying to protect a primordial matter asymmetry generated at some temperature $T \gg T_{EW}$ from sphaleron washout is to make use of out-of-equilibrium sphaleron processes at $T \ll T_{EW}$ to re-generate a new baryon number below the electroweak phase transition. This is the goal of electroweak baryogenesis. Following early work by Shaposhnikov \cite{Shap} and Arnold and McLerran \cite{AMcL}, concrete models of electroweak baryogenesis were suggested by Turok and Zadrozny \cite{Turok} and by Cohen, Kaplan and Nelson \cite{CKN}. These mechanisms were based on sphaleron processes inside or in the vicinity of bubble walls nucleated at the electroweak phase transition. These mechanisms require the electroweak phase transition to be strongly first order and nucleation-driven. In this case, below the critical temperature $T_{EW}$, bubbles of the low temperature vacuum are nucleated in the surrounding sea of the false (i.e. high temperature) vacuum and then expand until they percolate. Detailed studies (see recent review articles \cite{EWBGrev} for details) indicate that physics beyond the standard model is needed in order to implement the mechanism, specifically in order for the phase transition to be strongly first order and to obtain sufficient CP violation. In this light, defect-mediated electroweak baryogenesis \cite{BD,BDPT} may be a promising alternative, since many theories beyond the standard model predict topological defects. 
In this case, the baryogenesis mechanism involves sphaleron processes inside the topological defects. In the following sections, I will review the defect-mediated electroweak baryogenesis mechanism and discuss how the dynamical breaking of CPT symmetry in defect networks leads to a nonvanishing net baryon number. These sections are based on \cite{BDPT} and \cite{PTDB}, respectively. In Section 4 I will mention recent ideas \cite{BHZ} on QCD-scale ``baryogenesis'', a charge separation mechanism which also makes crucial use of the effective T violation in the defect dynamics in an expanding Universe. \section{Defect-Mediated Electroweak Baryogenesis} Before discussing the role of defects in electroweak baryogenesis, I will review the main points of the ``standard'' or ``first-order'' mechanism \cite{Turok,CKN}. It is based on two key assumptions: \begin{enumerate} \item{} The electroweak phase transition is first order. \item{} The transition is nucleation-driven (rather than fluctuation-driven; see e.g. the article by Goldenfeld \cite{Goldenfeld} for critical comments on transition dynamics from the point of view of condensed matter physics). \end{enumerate} \noindent If these assumptions are satisfied, then the electroweak phase transition proceeds by the nucleation of bubbles of the low temperature vacuum in a surrounding sea of the high temperature, symmetric vacuum. Inside the bubbles, the electroweak symmetry is broken and sphalerons are suppressed; outside the bubbles, the symmetry is restored and the sphaleron rate is unsuppressed. The bubbles are nucleated with microscopic radius and then expand monotonically until they percolate. Let us briefly consider the way in which the Sakharov criteria are satisfied: the standard electroweak theory contains C and CP violating interactions which couple to the fields excited in the bubble walls (2nd criterion), and the bubbles are out-of-equilibrium field configurations (3rd criterion).
Baryogenesis occurs via sphaleron processes near the bubble walls (1st criterion). Note that the bubble dynamics (expansion into the false vacuum) represents the effective dynamical breaking of CPT. The master equation for electroweak baryogenesis is \begin{equation} {{dn_B} \over {dt}} \, = \, - 3 \Gamma \mu \, , \label{master} \end{equation} where \begin{equation} \Gamma \, = \, \kappa (\alpha_w T)^4 \label{rate} \end{equation} is the sphaleron rate in the false vacuum \cite{sphrate} ($\alpha_w$ is the electroweak fine structure constant and $\kappa$ is a constant which must be determined in numerical simulations), and $\mu$ is the chemical potential for baryon number, which is determined by the interplay between the wall dynamics and the CP violating interactions of the bubble wall, a complicated issue which is still not fully understood quantitatively. In qualitative terms, fermions scatter off the wall, generating a nonvanishing lepton number in front of the bubble (let us say at point $x$) which yields $\mu(x) \neq 0$ and biases sphaleron processes in front of the wall, yielding $n_B(x) \neq 0$. This value of $n_B(x)$ is then preserved as the wall passes by and the point $x$ becomes part of the true vacuum domain. The chemical potential $\mu$ is proportional to the constant $\epsilon$ describing the strength of CP violation. In the standard electroweak theory, $\epsilon$ is much too small to account for the observed $n_B / s$. Thus, extra CP violation beyond the standard model is required for successful electroweak baryogenesis. Another reason why physics beyond the standard model is required is that in the context of the basic electroweak model, sphaleron processes are still in equilibrium below $T_{EW}$ if the Higgs mass $m_H$ is larger than $90$~GeV, which experimental bounds now indicate must be the case. In addition, for large $m_H$, the phase transition is no longer strongly first order, eliminating the first order baryogenesis mechanism altogether.
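The scaling behind Eq.~(\ref{rate}) can be made concrete with a minimal numeric sketch; the values of $\kappa$ and the temperatures below are illustrative assumptions, not quantities fixed by the text.

```python
# Minimal sketch of the unsuppressed sphaleron rate of Eq. (rate):
# Gamma = kappa * (alpha_w * T)**4. The values of KAPPA and the
# temperatures used below are illustrative assumptions only.
ALPHA_W = 1.0 / 30.0  # electroweak fine structure constant (approximate)
KAPPA = 1.0           # O(1) constant, to be fixed by numerical simulations

def sphaleron_rate(T, kappa=KAPPA, alpha_w=ALPHA_W):
    """Sphaleron rate per unit volume in the symmetric phase."""
    return kappa * (alpha_w * T) ** 4

# The rate scales as T^4: doubling the temperature raises it 16-fold.
assert abs(sphaleron_rate(200.0) - 16 * sphaleron_rate(100.0)) < 1e-9 * sphaleron_rate(200.0)
```

The steep $T^4$ dependence is why these processes are in equilibrium far above $T_{EW}$ but shut off as the Universe cools.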
Even in the MSSM (the minimal supersymmetric standard model), the window for successful first order electroweak baryogenesis is very small \cite{Cline}. Hence, extensions of the standard model are required in order to realize baryogenesis at the electroweak scale. Many extensions of the standard model, e.g. theories with additional U(1) gauge symmetries which are broken at or above $T_{EW}$, admit topological defects. In this case, there is an alternative way to implement electroweak baryogenesis which does not make use of bubbles created at a first order transition. Topological defects may replace bubble walls as the out-of-equilibrium field configurations needed to satisfy the third Sakharov criterion. To be specific, we make the following assumptions: \begin{enumerate} \item{} New physics at a scale $\eta > T_{EW}$ generates topological defects. \item{} The electroweak symmetry is unbroken inside the defects, and the defects are sufficiently thick that sphalerons fit inside them. \end{enumerate} Given these assumptions, the scenario for baryogenesis is as follows: At the critical temperature $\eta$, a network of defects is produced by the usual Kibble mechanism \cite{Kibble}. The initial separation of the defects is microscopic, and hence a substantial fraction of space lies inside the defects. As the Universe expands, the defect network coarsens. As long as $T > T_{EW}$, all baryon number violating processes are in equilibrium and hence $n_B = 0$. Once $T$ drops below $T_{EW}$ (more precisely, when $T$ falls below the temperature $T_{out}$ at which sphalerons fall out of equilibrium), baryon number generation sets in, triggered by the defects, in a manner analogous to how bubble walls trigger baryogenesis in the first order mechanism described earlier. The mechanism can be described with the help of Figure 1. The defect is moving with velocity $v_D$ through the primordial plasma.
At the leading edge, a baryon excess of negative sign builds up due to CP violating scatterings off the defect wall. Consider now a point $x$ in space which the defect crosses. When $x$ is hit by the leading defect edge, a value $n_B(x) = - \Delta n_B < 0$ is generated. While $x$ is inside the defect core, this baryon asymmetry relaxes (at least partially) by sphaleron processes. When the trailing edge of the defect passes by, an asymmetry of magnitude $\Delta n_B$, equal but opposite in sign to that produced at the leading edge, is generated. Due to the partial washout in the defect core, the net effect is to produce a positive baryon number density. \begin{figure}[htbp] \centering \psfig{file=figure2.ps,width=4in,height=3in,angle=270,clip=} \caption{Diagram of a portion of a defect, in this case a cosmic string, moving to the right through the primordial plasma. The differing decays of reflected particles within and outside the defect lead to the generation of a net baryon asymmetry.} \end{figure} The same master equation (\ref{master}) as for first order electroweak baryogenesis also applies to defect-mediated electroweak baryogenesis. However, the maximal $n_B / s$ which can be generated from defects is suppressed compared to what could be obtained in successful first order electroweak baryogenesis, for several reasons. Most importantly, not all points in space are passed by defects after $T_{EW}$, and hence there is an important geometrical suppression factor $SF$. The value of $SF$ is the fraction of space which will be in a defect core at some time after $t_{EW}$. The value of $SF$ depends sensitively on the type of defect and on the defect formation scale $\eta$.
For non-superconducting cosmic strings \cite{BDPT} \begin{equation} \label{supp} SF \, \sim \, \lambda v_D ({{T_{EW}} \over {\eta}})^{3/2} \, , \end{equation} where $\lambda$ is the coupling constant of the string sector which determines the string width and string separation $\xi(t)$ at the time $t_c$ of string formation: \begin{equation} \xi(t_c) \, \sim \, \lambda^{-1} \eta^{-1} \, . \end{equation} The factor $(T_{EW} / \eta)^{3/2}$ in Equation (\ref{supp}) for the suppression factor comes from the coarsening of the defect network after formation and the resulting growth of $\xi(t)$. Therefore, the fraction of space covered by defects at $T_{EW}$ decreases as the string formation scale $\eta$ increases. For domain walls, there is much less suppression, because of the higher dimensionality of the defects. We find $SF \sim v_D$ \cite{BDPT}. For monopoles, on the other hand, the suppression factor renders defect-mediated baryogenesis completely ineffective. A further suppression factor comes from having only partial relaxation of $n_B$ inside the defects \cite{Espinosa}. A calculation without taking this latter factor into account yields (for non-superconducting cosmic strings) \cite{BDPT} \begin{equation} {{n_B} \over s} \, \sim \, \lambda \kappa \alpha_w^2 g_*^{-1} ({{m_t} \over {T_{EW}}})^2 \epsilon ({{T_{EW}} \over {\eta}})^{3/2} \, , \end{equation} where $g_*$ gives the number of spin degrees of freedom in the radiation bath, $\epsilon$ is the CP violating phase, and $m_t$ is the top quark mass. Efficient defect-mediated electroweak baryogenesis thus requires either cosmic strings with $\eta$ close to $T_{EW}$ (plus other optimistic assumptions about the parameters such as $\epsilon \sim 1$ - although according to Cline et al. \cite{Espinosa} even this may not be sufficient), or domain walls (which in turn must decay at late times in order to avoid the domain wall over-abundance problem \cite{DW}). 
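As a numerical illustration of the suppression factor in Equation~(\ref{supp}), the following sketch compares strings formed near the electroweak scale with GUT-scale strings; the parameter values ($\lambda$, $v_D$, and the scales in GeV) are assumptions chosen only for illustration.

```python
def suppression_factor(lam, v_D, T_EW, eta):
    """Geometrical suppression factor for non-superconducting cosmic
    strings, SF ~ lam * v_D * (T_EW / eta)**(3/2), cf. Eq. (supp)."""
    return lam * v_D * (T_EW / eta) ** 1.5

# Assumed illustrative parameters: lam = 1 and defect velocity v_D = 0.1.
sf_ew = suppression_factor(1.0, 0.1, 100.0, 100.0)     # strings with eta ~ T_EW
sf_gut = suppression_factor(1.0, 0.1, 100.0, 1.0e16)   # GUT-scale strings

# The coarsening factor (T_EW/eta)**(3/2) makes GUT-scale strings
# hopelessly inefficient compared to strings formed near T_EW.
assert sf_gut < 1e-20 * sf_ew
```

This reproduces the point made in the text: efficient defect-mediated electroweak baryogenesis needs strings formed close to $T_{EW}$ (or higher-dimensional defects such as domain walls).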
Defect-mediated electroweak baryogenesis carries the advantage of being independent of the order of the electroweak phase transition and of the Higgs mass. In addition, whereas the efficiency of first-order baryogenesis is exponentially suppressed if $T_{out} < T_{EW}$ (since bubbles are only present at $T_{EW}$), defect-mediated baryogenesis is only suppressed by a power of $T_{out} / T_{EW}$, since defects are present at all times after $T_{EW}$. The power is determined by the coarsening dynamics of the defect network. Note that defect-mediated baryogenesis is not tied to the electroweak scale. Any defects which arise in the early Universe can potentially play a role in baryogenesis, as long as they couple to baryon number violating processes. This applies in particular to defects formed at the GUT scale. GUT defect-mediated baryogenesis \cite{BDH} is a mechanism which competes with the usual GUT baryogenesis channel based on the out-of-equilibrium decay of the superheavy $A_{\mu}$ and $X$ particles. If $T_{out} \ll T_{GUT}$, then defect-mediated GUT baryogenesis is in fact the dominant mechanism. \section{Dynamical Breaking of CPT and Defect-Mediated Baryogenesis} Let us in the following consider explicitly how dynamical CPT violation is crucial for defect-mediated baryogenesis \cite{PTDB}. To be specific, we consider extensions of the standard model with CP violation in an extended Higgs sector in the form of a CP violating phase $\epsilon$ (e.g. the relative phase between the two doublets in the two Higgs doublet model). This phase has the following transformation properties under CP and T: \begin{eqnarray} CP: & \,\,\, & \epsilon(x,t) \, \rightarrow \, - \epsilon(-x,t) \nonumber \\ T: & \,\,\, & \epsilon(x,t) \, \rightarrow \, - \epsilon(x,-t) \\ CPT: & \,\,\, & \epsilon(x,t) \, \rightarrow \, \epsilon(-x,-t) \, .\nonumber \end{eqnarray} Hence, $\partial_{\mu} \epsilon$ is odd under CPT.
How the defect (which we take to be a string) interacts with the plasma can be modelled by a term in the Lagrangian of the form \begin{equation} {\cal L}_{\epsilon} \, \propto \, \partial_{\mu} \epsilon j_5^{\mu} \, , \end{equation} where $j_5^{\mu}$ is the axial current \cite{JPT}. The axial current transforms under CPT as \begin{equation} CPT: \,\,\,\, j_5^{\mu} \, \rightarrow \, -j_5^{\mu} \, , \end{equation} so that the interaction Lagrangian ${\cal L}_{\epsilon}$ is invariant under CPT, as it must be. It follows that a static string is its own antiparticle, and hence cannot generate any net baryon number. For a moving string, an apparent CPT paradox arises: the CPT conjugate of a defect with a value $\epsilon > 0$ inside the core moving with velocity ${\vec v}$ is a defect with the same value of $\epsilon$ in the core moving with the same velocity vector ${\vec v}$. Hence, if CPT were a symmetry, it would not be possible for the string to generate a net baryon number. The resolution of this apparent CPT paradox starts with the observation that the master equation (\ref{master}) for baryogenesis is a dissipative equation which explicitly violates T symmetry and, since it conserves CP, also violates CPT. Like the Ilion field of Cohen and Kaplan \cite{CK}, the defect network evolution drives the system out of thermal equilibrium, acting as an external source of T violation, and the dissipative processes tend to restore the thermal equilibrium. In turn, dissipation leads to a damping of the defect motion. If dissipation were the only force, the defects would come to rest and $n_B$ generation would stop. However, the expansion of the Universe induces a counterforce on the defects which keeps the defect network out of equilibrium and allows $n_B$ generation to continue. The lesson we draw from this study is that the expansion of the Universe is the source of explicit external T violation which keeps the defect network out of equilibrium.
The ordering dynamics of the defect network, fueled by the cosmological expansion, then leads to dynamical CPT violation and to the biasing of baryon number production. \section{Defect-Mediated QCD Scale Baryogenesis} As mentioned above, defect-mediated baryogenesis can be effective not only at the electroweak scale, but at any scale at which defects are produced. Recent work \cite{HZ} has shown that, as a consequence of the nontrivial vacuum structure of low energy QCD, domain walls form at the QCD phase transition. These domain walls separate regions of space in which the effective strong CP parameter $\theta$ has very different values. Hence, the domain walls are automatically associated with maximal CP violation. Recently \cite{BHZ}, a new baryogenesis (more precisely, charge separation) scenario was proposed based on these QCD domain walls. Since this mechanism will be reviewed in a separate conference proceedings article \cite{BHZ2}, I will here only highlight the main points. The starting point of the QCD baryogenesis scenario is a new nonperturbative analysis of the vacuum structure of low energy QCD \cite{HZ}. Considering the vacuum energy $E$ of pure gluodynamics as a function of $\theta$, it was realized \cite{HZ} (see also \cite{Shifman,Witten}) that $E(\theta)$ must have a multi-branch structure \begin{equation} E(\theta) \, = \, {\rm min}_k E_k(\theta) \, = \, {\rm min}_k E_0(\theta + 2 \pi k) \, , \end{equation} and hence must in general have several isolated degenerate minima. When fermionic matter is introduced, at low energies represented by a chiral condensate matrix $U$ which contains the pion and sigma prime fields, the potential energy $W(U, \theta)$ depends only on the combination $\theta - i\, {\rm Tr}\, {\rm ln}\, U$ (by the anomalous Ward identities). Hence, from the multi-branch structure of $E(\theta)$ it immediately follows that, for a fixed value of $\theta$, the potential $V(U) = W(U, \theta)$ has several isolated minima.
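The multi-branch structure of $E(\theta)$ can be illustrated in a few lines of code; the quadratic single-branch energy $E_0$ used here is an assumed toy model, not the actual gluodynamics result, but it exhibits the key feature that the minimum over branches is $2\pi$-periodic even though each individual branch is not.

```python
import math

def E0(theta):
    # Single-branch vacuum energy; the quadratic form is an assumed
    # toy model, not the actual gluodynamics result.
    return theta ** 2

def E(theta, kmax=5):
    # Multi-branch vacuum energy E(theta) = min_k E0(theta + 2*pi*k).
    return min(E0(theta + 2.0 * math.pi * k) for k in range(-kmax, kmax + 1))

# E is 2*pi-periodic even though each branch E0 is not:
assert abs(E(0.3) - E(0.3 + 2.0 * math.pi)) < 1e-10
assert E0(0.3 + 2.0 * math.pi) > E(0.3 + 2.0 * math.pi)
```

The level crossings between branches at $\theta = \pi \ ({\rm mod}\ 2\pi)$ are what make distinct, isolated minima (and hence domain walls) possible.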
These vacua differ in terms of the effective strong CP parameter $\theta$. Since there are several discrete minima of the potential, domain walls separating the different phases exist. In fact, by the Kibble mechanism \cite{Kibble}, during the QCD phase transition at $T = T_{QCD}$, inevitably a network of domain walls will form. The second crucial ingredient of the new scenario \cite{BHZ} is charge separation. In analogy to how in $1+1$ dimensional physics solitons acquire a fractional charge \cite{JR}, in a $3+1$ dimensional theory domain walls will also acquire a fractional baryonic charge. In the chiral limit, the different vacuum states would be energetically degenerate. In the presence of a nonvanishing quark mass $m_q$, the energy of states increases as a function of $|\theta|$. Hence, the different phases of the theory, which immediately after the phase transition are equidistributed, will no longer be so below a temperature $T_d$ at which the energy difference between the minima becomes thermodynamically important. At this time, the domain wall network will break up into a set of {\it B-shells}, domains of states of large $|\theta|$ in a surrounding sea of the phase with the lowest value of $|\theta|$. In the absence of explicit strong CP violation, i.e. when the lowest energy vacuum is $\theta = 0$, then there are an equal number of B-shells with $\theta > 0$ and $\theta < 0$. A B-shell with $\theta > 0$ has \cite{BHZ} negative baryon number. In order to generate a net baryon number in B-shells, (a small amount of) explicit CP violation is required such that the only B-shells which form have the same sign of $\theta$ (which we take to be positive). In this case, the total baryon number trapped in the walls is negative. Since there is no overall baryon number violation in QCD, the compensating positive baryon number must be in the bulk. This is the way in which domain walls in QCD lead to an effective baryogenesis mechanism by means of charge separation. 
Note that, in analogy to electroweak baryogenesis, the explicit T violation due to the expansion of the Universe, which leads to the coarsening and eventual fragmentation of the defect network, is crucial for the mechanism. As our estimates \cite{BHZ} indicate, it appears possible to generate a baryon to entropy ratio comparable to what observations require. \section*{Acknowledgments} I wish to thank Alan Kostelecky for the invitation to speak at this meeting, and for his hospitality. I thank my collaborators Anne-Christine Davis, Igor Halperin, Tomislav Prokopec, Mark Trodden and Ariel Zhitnitsky for enjoyable and stimulating collaborations. This work was supported in part by the US Department of Energy under Contract DE-FG02-91ER40688. \section*{References}
\section{Introduction} Q-learning is one of the most successful classes of reinforcement learning (RL) algorithms, which aims at finding the optimal action-value function or Q-function (and thus the associated optimal policy) via off-policy data samples. The Q-learning algorithm was first proposed by \citet{watkins1992q}, and since then, it has been widely used in various applications including robotics~\citep{tai2016robot}, autonomous driving~\citep{okuyama2018autonomous}, and video games~\citep{mnih2015human}, to name a few. Theoretical performance of Q-learning has also been intensively explored. The asymptotic convergence has been established in \citet{tsitsiklis1994asynchronous,jaakkola1994convergence,borkar2000ode,melo2001convergence,Lee2019Switch}. The non-asymptotic (i.e., finite-time) convergence rate of Q-learning was first obtained in \citet{szepesvari1998asymptotic}, and has been further studied in~\citep{even2003learning,shah2018q,wainwright2019stochastic,beck2012error,Chen2020finiteSample} for synchronous Q-learning and in~\citep{even2003learning,qu2020finite} for asynchronous Q-learning. One major weakness of Q-learning arises in practice due to the large overestimation of the action-value function~\citep{hasselt2010double,van2016deep}. Practical implementation of Q-learning involves using the maximum {\em sampled} Q-function to estimate the maximum \textit{expected} Q-function (where the expectation is taken over the randomness of the reward). Such an estimation often yields a large positive bias error~\citep{hasselt2010double}, and causes Q-learning to perform rather poorly. To address this issue, double Q-learning was proposed in~\citet{hasselt2010double}, which keeps two Q-estimators (i.e., estimators for Q-functions), one for estimating the maximum Q-function value and the other for the update, and continuously changes the roles of the two Q-estimators in a random manner.
It was shown in \citet{hasselt2010double} that such an algorithm effectively overcomes the overestimation issue of vanilla Q-learning. In \citet{van2016deep}, double Q-learning was further demonstrated to substantially improve the performance of Q-learning with deep neural networks (DQNs) for playing Atari 2600 games. It has inspired many variants~\citep{zhang2017weighted,abed2018double}, found numerous applications~\citep{zhang2018double,zhang2018human}, and has become one of the most common techniques for applying Q-learning-type algorithms~\citep{hessel2018rainbow}. Despite its tremendous empirical success and popularity in practice, theoretical understanding of double Q-learning is rather limited. Only the asymptotic convergence was provided in \citet{hasselt2010double,weng2020provably}. There has been no non-asymptotic result on how fast double Q-learning converges. From the technical standpoint, such a finite-time analysis for double Q-learning does not follow readily from existing analyses of vanilla Q-learning, because it involves two randomly updated Q-estimators, and the coupling between these two random paths significantly complicates the analysis. This goes well beyond the existing techniques for analyzing vanilla Q-learning, which handle the random update of a single Q-estimator. Thus, {\em the goal of this paper is to develop new finite-time analysis techniques that handle the two inter-connected random path updates in double Q-learning and provide the convergence rate.} \vspace{-1em} \subsection{Our contributions} The main contribution of this paper lies in providing the first finite-time analysis for double Q-learning with both the synchronous and asynchronous implementations.
\begin{list}{$\bullet$}{\topsep=0.ex \leftmargin=0.3in \rightmargin=0.in \itemsep =0.02in} \item We show that synchronous double Q-learning with a learning rate $\alpha_t = 1/t^\omega$ (where $\omega\in(0,1)$) attains an $\epsilon$-accurate global optimum with at least the probability of $1-\delta$ by taking $\Omega\left( \left( \frac{1}{(1-\gamma)^6\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|}{(1-\gamma)^7\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations, where $\gamma\in(0,1)$ is the discount factor, $|\mathcal S|$ and $|\mathcal A|$ are the sizes of the state space and action space, respectively. \item We further show that under the same accuracy and high probability requirements, asynchronous double Q-learning takes $\Omega\left( \left( \frac{L^4}{(1-\gamma)^6\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4}{(1-\gamma)^7\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{L^2}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations, where $L$ is the covering number specified by the exploration strategy. \end{list} Our results corroborate the design goal of double Q-learning, which opts for better accuracy by making less aggressive progress during the execution in order to avoid overestimation. Specifically, our results imply that in the high accuracy regime, double Q-learning achieves the same convergence rate as vanilla Q-learning in terms of the order-level dependence on $\epsilon$, which further indicates that the high accuracy design of double Q-learning dominates the less aggressive progress in such a regime. In the low-accuracy regime, which is not what double Q-learning is designed for, the cautious progress of double Q-learning yields a slightly weaker convergence rate than Q-learning in terms of the dependence on $1-\gamma$. 
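For readers who want to see how the first (dominant) term of the synchronous bound scales, here is a small helper that drops all absolute constants; treating the expression as an order-of-magnitude scaling, and the particular parameter values chosen below, are assumptions for illustration only.

```python
import math

def sync_iteration_scaling(eps, gamma, S, A, delta, omega=2.0 / 3.0):
    """Leading term of the synchronous double Q-learning bound, with all
    absolute constants dropped:
    ((1/((1-gamma)^6 eps^2)) * ln(S*A / ((1-gamma)^7 eps^2 delta)))**(1/omega)."""
    g = 1.0 - gamma
    inner = (1.0 / (g ** 6 * eps ** 2)) * math.log(S * A / (g ** 7 * eps ** 2 * delta))
    return inner ** (1.0 / omega)

# Tightening the accuracy requirement (smaller eps) or pushing gamma
# toward 1 both increase the iteration requirement:
assert sync_iteration_scaling(0.05, 0.9, 10, 4, 0.05) > sync_iteration_scaling(0.1, 0.9, 10, 4, 0.05)
assert sync_iteration_scaling(0.1, 0.99, 10, 4, 0.05) > sync_iteration_scaling(0.1, 0.9, 10, 4, 0.05)
```

The $1/\epsilon^{2/\omega}$ and $1/(1-\gamma)^{6/\omega}$ dependences visible here are exactly the ones discussed in the comparison with vanilla Q-learning above.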
From the technical standpoint, our proof develops new techniques beyond the existing finite-time analysis of vanilla Q-learning with a single random iteration path. More specifically, we model the double Q-learning algorithm as two alternating stochastic approximation (SA) problems, where one SA captures the error propagation between the two Q-estimators, and the other captures the error dynamics between the Q-estimator and the global optimum. For the first SA, we develop new techniques to provide the finite-time bounds on the two inter-related stochastic iterations of Q-functions. Then we develop new tools to bound the convergence of Bernoulli-controlled stochastic iterations of the second SA conditioned on the first SA. \subsection{Related work} Due to the rapidly growing literature on Q-learning, we review only the theoretical results that are highly relevant to our work. Q-learning was first proposed in \citet{watkins1992q} for finite state-action spaces. Its asymptotic convergence has been established in \citet{tsitsiklis1994asynchronous,jaakkola1994convergence,borkar2000ode,melo2001convergence} through studying various general SA algorithms that include Q-learning as a special case. Along this line, \citet{Lee2019Switch} characterized Q-learning as a switched linear system and applied the results of~\citet{borkar2000ode} to show the asymptotic convergence, which was also extended to other Q-learning variants. Another line of research focuses on the finite-time analysis of Q-learning, which can capture the convergence rate. Such non-asymptotic results were first obtained in \citet{szepesvari1998asymptotic}. A more comprehensive work~\citep{even2003learning} provided finite-time results for both synchronous and asynchronous Q-learning. Both \citet{szepesvari1998asymptotic} and \citet{even2003learning} showed that with linear learning rates, the convergence rate of Q-learning can be exponentially slow as a function of $\frac{1}{1-\gamma}$.
To handle this, the so-called rescaled linear learning rate was introduced to avoid such an exponential dependence in synchronous Q-learning~\citep{wainwright2019stochastic,Chen2020finiteSample} and asynchronous Q-learning~\citep{qu2020finite}. The finite-time convergence of Q-learning was also analyzed with constant step sizes~\citep{beck2012error,Chen2020finiteSample,li2020sample}. Moreover, the polynomial learning rate, which is also the focus of this work, was investigated for both synchronous~\citep{even2003learning,wainwright2019stochastic} and asynchronous Q-learning~\citep{even2003learning}. In addition, it is worth mentioning that \cite{shah2018q} applied the nearest neighbor approach to handle MDPs with an infinite state space. In contrast to the extensive studies of vanilla Q-learning above, theoretical understanding of double Q-learning is limited. The only existing guarantees are the asymptotic convergence results of \citet{hasselt2010double,weng2020provably}, which do not characterize how fast double Q-learning converges. This paper provides the first finite-time analysis for double Q-learning. The vanilla Q-learning algorithm has also been studied for the function approximation case, i.e., where the Q-function is approximated by a class of parameterized functions. In contrast to the tabular case, even with linear function approximation, Q-learning has been shown not to converge in general \citep{baird1995residual}. Strong assumptions are typically imposed to guarantee the convergence of Q-learning with function approximation \citep{bertsekas1996neuro,zou2019finite,Chen2019finiteQ,du2019provably,xu2019deepQ,cai2019neural,weng2020analysis,weng2020momentum}. Regarding double Q-learning, it remains an open problem how to design double Q-learning algorithms under function approximation and under what conditions their convergence is theoretically guaranteed.
\section{Preliminaries on Q-learning and Double Q-learning} In this section, we introduce the Q-learning and the double Q-learning algorithms. \subsection{Q-learning} We consider a $\gamma$-discounted Markov decision process (MDP) with a finite state space $\mathcal S$ and a finite action space $\mathcal A$. The transition probability of the MDP is given by $P:\mathcal S \times \mathcal A \times \mathcal S \rightarrow [0,1]$, that is, $\mathbb{P}(\cdot|s, a)$ denotes the probability distribution of the next state given the current state $s$ and action $a$. We consider a random reward function $R_t$ at time $t$ drawn from a fixed distribution $\phi: \mathcal{S}\times \mathcal{A}\times \mathcal{S} \mapsto \mathbb{R}$, where $\mathbb{E}\{R_t(s,a,s')\}=R_{sa}^{s'}$ and $s'$ denotes the next state starting from $(s,a)$. In addition, we assume $|R_t|\leq R_{\max}$. A policy $\pi:=\pi(\cdot|s)$ characterizes the conditional probability distribution over the action space $\mathcal{A}$ given each state $s\in\mathcal{S}$. The action-value function (i.e., Q-function) $Q^{\pi}\in\mathbb R^{|\mathcal S|\times |\mathcal A|}$ for a given policy $\pi$ is defined as \begin{align}\label{eq:Qfunction} Q^{\pi}(s,a):=&\mathbb E\left[\sum_{t=0}^{\infty}\gamma^t R_t(s,\pi(s),s')\Big|s_0=s,a_0=a \right] \nonumber \\ =& \mathbb E_{\substack{s'\sim P(\cdot|s,a)\\a'\sim\pi(\cdot|s')}} \left[R_{sa}^{s'}+\gamma Q^{\pi}(s',a')\right], \end{align} where $\gamma\in(0,1)$ is the discount factor. Q-learning aims to find the Q-function of an optimal policy $\pi^*$ that maximizes the accumulated reward. The existence of such a $\pi^*$ has been proved in the classical MDP theory~\citep{bertsekas1996neuro}. 
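As a quick sanity check on the definition in Equation~(\ref{eq:Qfunction}), note that on a one-state, one-action MDP with deterministic reward $r$ the recursion collapses to $Q = r + \gamma Q$, i.e. $Q = r/(1-\gamma)$. The sketch below (with assumed toy values) verifies that the truncated discounted sum agrees with this fixed point.

```python
# Toy check of Eq. (eq:Qfunction) on a one-state, one-action MDP with
# deterministic reward r (the values of r, gamma, T are assumptions):
# the recursion Q = r + gamma * Q has fixed point Q = r / (1 - gamma).
r, gamma = 1.0, 0.9

# Truncated discounted return sum_{t < T} gamma**t * r:
T = 10_000
q_sum = sum(gamma ** t * r for t in range(T))

# Fixed point of the recursive form:
q_fix = r / (1.0 - gamma)
assert abs(q_sum - q_fix) < 1e-6
```

The truncation error is bounded by $\gamma^T/(1-\gamma)$, which is negligible here; the same $1/(1-\gamma)$ factor is the source of the bound $R_{\max}/(1-\gamma)$ used later in Lemma 1.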
The corresponding optimal Q-function, denoted as $Q^*$, is known as the unique fixed point of the Bellman operator $\mathcal T$ given by \begin{equation}\label{eq:BellmanOperator} \mathcal T Q(s,a) = \mathbb E_{s'\sim P(\cdot|s,a)} \left[R_{sa}^{s'}+\gamma \underset{a' \in U(s')}{\max}Q(s',a')\right], \end{equation} where $ U(s')\subset\mathcal A$ is the admissible set of actions at state $s'$. It can be shown that the Bellman operator $\mathcal T$ is $\gamma$-contractive in the supremum norm $\norm{Q}:=\max_{s,a}|Q(s,a)|$, i.e., it satisfies \begin{equation} \label{eq:Contraction} \norm{\mathcal T Q - \mathcal T Q'} \leq \gamma\norm{Q - Q'}. \end{equation} The goal of Q-learning is to find $Q^*$, which further yields $\pi^*(s) = \arg\max_{a\in U(s)}Q^*(s,a)$. In practice, however, exact evaluation of the Bellman operator~\eqref{eq:BellmanOperator} is usually infeasible due to the lack of knowledge of the transition kernel of MDP and the randomness of the reward. Instead, Q-learning draws random samples to estimate the Bellman operator and iteratively learns $Q^*$ as \begin{equation}\label{eq:qlearning} Q_{t+1}(s,a) = (1-\alpha_t(s,a)) Q_t(s,a) + \alpha_t(s,a) \left( R_t(s,a,s') + \gamma\underset{a'\in U(s')}{\max}Q_t(s',a') \right), \end{equation} where $R_t$ is the sampled reward, $s'$ is sampled by the transition probability given $(s,a)$, and $\alpha_t(s,a)\in(0,1]$ denotes the learning rate. \subsection{Double Q-learning} Although Q-learning is a commonly used RL algorithm to find the optimal policy, it can suffer from overestimation in practice~\citep{smith2006optimizer}. To overcome this issue, \citet{hasselt2010double} proposed double Q-learning given in Algorithm~\ref{alg:doubleQ}. \begin{algorithm}[H] \caption{Synchronous Double Q-learning~\citep{hasselt2010double}} \label{alg:doubleQ} \begin{algorithmic}[1] \STATE {\bf Input:} Initial $Q^A_1, Q^B_1$. \FOR{$t=1,2,\dots,T$ } \STATE Assign learning rate $\alpha_t$. 
\STATE Randomly choose either UPDATE(A) or UPDATE(B) with probability 0.5, respectively. \FOR{each $(s,a)$} \STATE observe $ s'\sim P(\cdot|s,a)$, and sample $R_t(s,a,s')$. \IF{UPDATE(A)} \STATE Obtain $a^* = \arg\max_{a'}Q^A_t(s',a')$ \STATE $Q^A_{t+1}(s,a) = Q^A_t(s,a) + \alpha_t(s,a) (R_t(s,a,s') + \gamma Q^B_t(s',a^*) - Q^A_t(s,a))$ \ELSIF{UPDATE(B)} \STATE Obtain $b^* = \arg\max_{b'}Q^B_t(s',b')$ \STATE $Q^B_{t+1}(s,a) = Q^B_t(s,a) + \alpha_t(s,a) (R_t(s,a,s') + \gamma Q^A_t(s',b^*) - Q^B_t(s,a))$ \ENDIF \ENDFOR \ENDFOR \STATE {\bf Output:} $Q^A_T$ (or $Q^B_T$). \end{algorithmic} \end{algorithm} Double Q-learning maintains two Q-estimators (i.e., Q-tables): $Q^A$ and $Q^B$. At each iteration of Algorithm \ref{alg:doubleQ}, one Q-table is randomly chosen to be updated. Then this chosen Q-table generates a greedy optimal action, and the other Q-table is used for estimating the corresponding Bellman operator for updating the chosen table. Specifically, if $Q^A$ is chosen to be updated, we use $Q^A$ to obtain the optimal action $a^*$ and then estimate the corresponding Bellman operator using $Q^B$. As shown in ~\citet{hasselt2010double}, $\mathbb{E}[Q^B(s',a^*)]$ is likely smaller than $\max_a \mathbb{E}[Q^A(s',a)]$, where the expectation is taken over the randomness of the reward for the same state-action pair. In this way, such a two-estimator framework of double Q-learning can effectively reduce the overestimation. {\bf Synchronous and asynchronous double Q-learning:} In this paper, we study the finite-time convergence rate of double Q-learning in two different settings: synchronous and asynchronous implementations. For synchronous double Q-learning (as shown in Algorithm \ref{alg:doubleQ}), all the state-action pairs of the chosen Q-estimator are visited simultaneously at each iteration. For the asynchronous case, only one state-action pair is updated in the chosen Q-table. 
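The inner update of Algorithm~\ref{alg:doubleQ} can be sketched in a few lines of Python; the table shapes and the seeded random generator below are assumptions for illustration, and the reward and next state are supplied by the caller rather than sampled from an MDP.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility (assumption)

def double_q_step(QA, QB, s, a, r, s_next, alpha, gamma, rng=rng):
    """One double Q-learning update: a coin flip picks the table to update;
    that table selects the greedy action, while the *other* table supplies
    the value estimate, which mitigates overestimation."""
    if rng.random() < 0.5:                       # UPDATE(A)
        a_star = int(np.argmax(QA[s_next]))
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:                                        # UPDATE(B)
        b_star = int(np.argmax(QB[s_next]))
        QB[s, a] += alpha * (r + gamma * QA[s_next, b_star] - QB[s, a])
    return QA, QB

# With alpha = 1 and zero-initialized tables, a single step writes the
# reward into exactly one of the two tables:
QA, QB = np.zeros((1, 2)), np.zeros((1, 2))
double_q_step(QA, QB, s=0, a=0, r=1.0, s_next=0, alpha=1.0, gamma=0.9)
assert (QA[0, 0] == 1.0) != (QB[0, 0] == 1.0)
```

Note how the selection table and the evaluation table are decoupled: this is the structural difference from the vanilla update in Equation~(\ref{eq:qlearning}).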
Specifically, in the latter case, we sample a trajectory $\{(s_t,a_t,R_t,i_t)\}_{t=0}^\infty$ under a certain exploration strategy, where $i_t\in\{A,B\}$ denotes the index of the chosen Q-table at time $t$. Then the two Q-tables are updated based on the following rule: \begin{multline*} Q^i_{t+1}(s,a) \\ = \left\{\begin{aligned} & Q^i_t(s,a), \qquad (s,a) \neq (s_t,a_t) \text{ or } i\neq i_t;\\ & (1-\alpha_t(s,a)) Q^i_t(s,a) + \alpha_t(s,a) \Big( R_t(s,a,s') + \gamma Q^{i^c}_t\big(s',\underset{a'\in U(s')}{\arg\max}Q^i_t(s',a')\big) \Big), \ \text{otherwise}, \end{aligned} \right. \end{multline*} where $i^c = \{A,B\}\setminus \{i\}$. We next provide the boundedness property of the Q-estimators and the errors in the following lemma, which is typically necessary for the finite-time analysis. \begin{lemma}\label{lem:uniformBound} For either synchronous or asynchronous double Q-learning, let $Q^i_t(s,a)$ be the value of either Q-table corresponding to a state-action pair $(s,a)$ at iteration $t$. Suppose $\norm{Q_1^i}\leq \frac{R_{\max}}{1-\gamma}$. Then we have $\norm{Q^i_t}\leq \frac{R_{\max}}{1-\gamma}$ and $\norm{Q^i_t-Q^*}\leq V_{\max}$ for all $t\geq1$, where $V_{\max} := \frac{2R_{\max}}{1-\gamma}$. \end{lemma} Lemma \ref{lem:uniformBound} can be proved by induction arguments using the triangle inequality and the uniform boundedness of the reward function, as shown in \Cref{sec:proofOfLemma1}. \section{Main results} We present our finite-time analysis for the synchronous and asynchronous double Q-learning in this section, followed by a sketch of the proof for the synchronous case which captures our main techniques. The detailed proofs of all the results are provided in the Supplementary Materials. \subsection{Synchronous double Q-learning} Since the update of the two Q-estimators is symmetric, we can characterize the convergence rate of either Q-estimator, e.g., $Q^A$, to the global optimum $Q^*$.
To this end, we first derive two important properties of double Q-learning that are crucial to our finite-time convergence analysis. The first property captures the stochastic error $\norm{Q^B_t - Q^A_t}$ between the two Q-estimators. Since double Q-learning alternates between updating these two estimators, such an error process must decay to zero in order for double Q-learning to converge. Furthermore, how fast such an error converges determines the overall convergence rate of double Q-learning. The following proposition (which is an informal restatement of~\Cref{lem:Gq} in~\Cref{subsec:PartI}) shows that such an error process can be \textit{block-wisely} bounded by an exponentially decreasing sequence $G_q = (1-\xi)^q V_{\max}$ for $q=0,1,2,\dots,$ and some $\xi\in(0,1)$. Conceptually, as illustrated in \Cref{fig:DkGk}, such an error process is upper-bounded by the blue-colored piece-wise linear curve. \begin{repproposition}{lem:Gq}({\bf\em Informal}) Consider synchronous double Q-learning under a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. We divide the time horizon into blocks $[\hat{\tau}_{q},\hat{\tau}_{q+1})$ for $q\geq0$, where $\hat{\tau}_0=0$ and $\hat{\tau}_{q+1} = \hat{\tau}_q + c_1\hat{\tau}_q^\omega$ with some $c_1>0$. Fix $\hat{\epsilon}>0$. Then for any $n$ such that $G_n \geq \hat{\epsilon}$ and under certain conditions on $\hat{\tau}_1$ (see~\Cref{subsec:PartI}), we have \begin{equation*} \mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\geq 1- c_2 n \exp\left( -\frac{c_3\hat{\tau}_1^{\omega}\hat{\epsilon}^2}{V_{\max}^2} \right), \end{equation*} where the positive constants $c_2$ and $c_3$ are specified in~\Cref{subsec:PartI}.
\end{repproposition} \begin{figure}[h] \centering \includegraphics[width=0.70\textwidth]{doubleQ.png} \caption{Illustration of sequence $\{G_k\}_{k\geq0}$ as a block-wise upper bound on $\norm{Q^B_t-Q^A_t}$, and sequence $\{D_k\}_{k\geq0}$ as a block-wise upper bound on $\norm{Q^A_t-Q^*}$ conditioned on the first upper bound event.} \label{fig:DkGk} \end{figure} \Cref{lem:Gq} implies that the two Q-estimators approach each other asymptotically, but does not necessarily imply that they converge to the optimal action-value function $Q^*$. Then the next proposition (which is an informal restatement of \Cref{lem:conditionalBound} in~\Cref{subsec:PartII}) shows that as long as the high probability event in~\Cref{lem:Gq} holds, the error process $\norm{Q^A_t-Q^*}$ between either Q-estimator (say $Q^A$) and the optimal Q-function can be \textit{block-wisely} bounded by an exponentially decreasing sequence $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ for $k=0,1,2,\dots,$ and $\beta\in(0,1)$. Conceptually, as illustrated in \Cref{fig:DkGk}, such an error process is upper-bounded by the yellow-colored piece-wise linear curve. \begin{repproposition}{lem:conditionalBound} ({\bf\em Informal}) Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. We divide the time horizon into blocks $[\tau_{k},\!\tau_{k+1})$ for $k\geq0$, where ${\tau_{0}}=0$ and ${\tau}_{k+1} = {\tau}_k + c_4{\tau}_k^\omega$ with some $c_4>0$. Fix $\tilde{\epsilon}>0$. 
Then for any $m$ such that $D_m \geq \tilde{\epsilon}$ and under certain conditions on $\tau_1$ (see~\Cref{subsec:PartII}), we have \begin{equation*} \mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^*}\leq D_{k+1} \,\middle|\, E,F \right] \geq 1 - c_5 m \exp\left( -\frac{c_6\tau_1^{\omega}\tilde{\epsilon}^2}{V_{\max}^2} \right), \end{equation*} where $E$ and $F$ denote certain events defined in~\eqref{eq:eventA} and \eqref{eq:eventB} in~\Cref{subsec:PartII}, and the positive constants $c_4,c_5$, and $c_6$ are specified in~\Cref{subsec:PartII}. \end{repproposition} As illustrated in~\Cref{fig:DkGk}, the two block sequences $\{\hat{\tau}_q\}_{q\geq0}$ in~\Cref{lem:Gq} and $\{{\tau_q}\}_{q\geq0}$ in~\Cref{lem:conditionalBound} can be chosen to coincide with each other. Then combining the above two properties followed by further mathematical arguments yields the following main theorem that characterizes the convergence rate of double Q-learning. We will provide a proof sketch for~\Cref{thm:syncDQ} in~\Cref{subsec:proofOutline}, which explains the main steps to obtain the supporting properties of~\Cref{lem:Gq} and~\Cref{lem:conditionalBound} and how they further yield the main theorem. \begin{theorem}\label{thm:syncDQ} Fix $\epsilon>0$ and $\gamma\in(1/3,1)$. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $Q^A_T(s,a)$ be the value of $Q^A$ for a state-action pair $(s,a)$ at time $T$. Then we have $\mathbb P(\norm{Q^A_T - Q^*} \leq \epsilon)\geq 1-\delta$, given that \begin{equation} \label{thm1T} T=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right), \end{equation} where $V_{\max} = \frac{2R_{\max}}{1-\gamma}$.
\end{theorem} \Cref{thm:syncDQ} provides the finite-time convergence guarantee in the high-probability sense for synchronous double Q-learning. Specifically, double Q-learning attains an $\epsilon$-accurate optimal Q-function with high probability by taking $\Omega\left( \left( \frac{1}{(1-\gamma)^6\epsilon^2}\ln \frac{1}{(1-\gamma)^7\epsilon^2} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations. Such a result can be further understood by considering the following two regimes. In the high accuracy regime, in which $\epsilon \ll 1-\gamma$, the dependence on $\epsilon$ dominates, and the time complexity is given by $\Omega\left( \left( \frac{1}{\epsilon^2}\ln \frac{1}{\epsilon^2} \right)^{\frac{1}{\omega}} + \left( \ln\frac{ 1}{\epsilon} \right)^{\frac{1}{1-\omega}} \right)$, which is optimized as $\omega$ approaches 1. In the low accuracy regime, in which $\epsilon \gg 1-\gamma$, the dependence on $\frac{1}{1-\gamma}$ dominates, and the time complexity can be optimized at $\omega=\frac{6}{7}$, which yields $T=\tilde{\Omega} \left( \frac{1}{(1-\gamma)^7\epsilon^{7/3}} + \frac{1}{(1-\gamma)^7} \right)=\tilde{\Omega} \left( \frac{1}{(1-\gamma)^7\epsilon^{7/3}} \right)$. Furthermore, \Cref{thm:syncDQ} corroborates the design effectiveness of double Q-learning, which overcomes the overestimation issue and hence achieves better accuracy by making less aggressive progress in each update. Specifically, comparison of \Cref{thm:syncDQ} with the time complexity bounds of vanilla synchronous Q-learning under a polynomial learning rate in \citet{even2003learning} and \citet{wainwright2019stochastic} indicates that in the high accuracy regime, double Q-learning achieves the same convergence rate as vanilla Q-learning in terms of the order-level dependence on $\epsilon$. Clearly, the design of double Q-learning for high accuracy dominates the performance.
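The choice $\omega=\frac{6}{7}$ in the low-accuracy regime can also be checked numerically. The following sketch is ours and purely illustrative: it keeps only the two polynomial terms of \eqref{thm1T}, dropping all logarithmic factors and absolute constants, and scans $\omega$ over a grid.

```python
import numpy as np

# Illustrative only: the two polynomial terms of the iteration bound,
# with log factors and absolute constants dropped (our simplification).
def iteration_bound(omega, gamma, eps):
    term1 = (1.0 / ((1.0 - gamma) ** 6 * eps ** 2)) ** (1.0 / omega)
    term2 = (1.0 / (1.0 - gamma)) ** (1.0 / (1.0 - omega))
    return term1 + term2

gamma, eps = 0.99, 1.0                      # low-accuracy regime: eps >> 1 - gamma
omegas = np.linspace(0.5, 0.99, 4901)
values = [iteration_bound(w, gamma, eps) for w in omegas]
best_omega = omegas[int(np.argmin(values))]
```

On this grid the minimizer lands near $\omega = 6/7 \approx 0.857$, consistent with balancing the two exponents via $6/\omega = 1/(1-\omega)$.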
In the low-accuracy regime (which is not what double Q-learning is designed for), double Q-learning achieves a slightly weaker convergence rate than vanilla Q-learning in \citet{even2003learning,wainwright2019stochastic} in terms of the dependence on $1-\gamma$, because its nature of less aggressive progress dominates the performance. \subsection{Asynchronous Double Q-learning} In this subsection, we study asynchronous double Q-learning and provide its finite-time convergence result. In contrast to synchronous double Q-learning, in which all state-action pairs are visited for each update of the chosen Q-estimator, asynchronous double Q-learning visits only one state-action pair for each update of the chosen Q-estimator. Therefore, we make the following standard assumption on the exploration strategy~\citep{even2003learning}: \begin{assumption}\label{asp:covering} (Covering number) There exists a covering number $L$, such that in consecutive $L$ updates of either $Q^A$ or $Q^B$ estimator, all the state-action pairs of the chosen Q-estimator are visited at least once. \end{assumption} Such a condition on the exploration is usually necessary for the finite-time analysis of asynchronous Q-learning. The same assumption was also adopted in \citet{even2003learning}. \citet{qu2020finite} proposed a mixing time condition which is in the same spirit. \Cref{asp:covering} essentially requires the sampling strategy to have good visitation coverage over all state-action pairs. Specifically, \Cref{asp:covering} guarantees that consecutive $L$ updates of $Q^A$ visit each state-action pair of $Q^A$ at least once, and the same holds for $Q^B$. Since $2L$ iterations of asynchronous double Q-learning must make at least $L$ updates for either $Q^A$ or $Q^B$, \Cref{asp:covering} further implies that any state-action pair $(s,a)$ must be visited at least once during $2L$ iterations of the algorithm.
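For illustration, the following Python sketch (ours, not the implementation analyzed in the paper) runs asynchronous double Q-learning along a single trajectory under a uniformly random behavior policy, which keeps visiting all state-action pairs in the spirit of Assumption 1. As a further simplification, the learning rate here decays with the global time $t$ rather than with per-pair visit counts.

```python
import numpy as np

def async_double_q(P, R, gamma, T, omega, rng):
    """Asynchronous double Q-learning sketch: one state-action pair is
    updated per iteration along a trajectory. A uniformly random behavior
    policy is used so that all pairs keep being visited (cf. Assumption 1).
    Simplification (ours): alpha decays with global time t, not per-pair
    visit counts."""
    S, A = R.shape
    QA, QB = np.zeros((S, A)), np.zeros((S, A))
    s = 0
    for t in range(1, T + 1):
        alpha = 1.0 / t ** omega                 # polynomial learning rate
        a = int(rng.integers(A))                 # exploratory action
        s_next = int(rng.choice(S, p=P[s, a]))   # observe s' ~ P(.|s,a)
        if rng.random() < 0.5:                   # i_t = A: update Q^A using Q^B
            a_star = int(np.argmax(QA[s_next]))
            QA[s, a] += alpha * (R[s, a] + gamma * QB[s_next, a_star] - QA[s, a])
        else:                                    # i_t = B: update Q^B using Q^A
            b_star = int(np.argmax(QB[s_next]))
            QB[s, a] += alpha * (R[s, a] + gamma * QA[s_next, b_star] - QB[s, a])
        s = s_next
    return QA, QB
```

With a constant reward table the optimal Q-function is constant as well, which makes the convergence of both tables easy to eyeball in a quick experiment.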
In fact, our analysis allows certain relaxation of~\Cref{asp:covering} by only requiring each state-action pair to be visited during an interval with a certain probability. In such a case, we can also derive a finite-time bound by additionally dealing with a conditional probability. Next, we provide the finite-time result for asynchronous double Q-learning in the following theorem. \begin{theorem}\label{thm:asyncDQ} Fix $\epsilon>0$ and $\gamma\in(1/3,1)$. Consider asynchronous double Q-learning under a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose Assumption \ref{asp:covering} holds. Let $Q^A_T(s,a)$ be the value of $Q^A$ for a state-action pair $(s,a)$ at time $T$. Then we have $\mathbb P(\norm{Q^A_T - Q^*} \leq \epsilon)\geq 1-\delta$, given that \begin{equation} \label{thm2T} T=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4V_{\max}^2 }{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{L^2}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right). \end{equation} \end{theorem} Comparison of~\Cref{thm:syncDQ} and~\Cref{thm:asyncDQ} indicates that the finite-time result of asynchronous double Q-learning matches that of synchronous double Q-learning in the order dependence on $\frac{1}{1-\gamma}$ and $\frac{1}{\epsilon}$. The difference lies in the extra dependence on the covering number $L$ in \Cref{thm:asyncDQ}. Since synchronous double Q-learning visits all state-action pairs (i.e., takes $|\mathcal S||\mathcal A|$ sample updates) at each iteration, whereas asynchronous double Q-learning visits only one state-action pair (i.e., takes only one sample update) at each iteration, a more reasonable comparison between the two should be in terms of the overall sample complexity.
In this sense, synchronous and asynchronous double Q-learning algorithms have the sample complexities of $|\mathcal S||\mathcal A| T$ (where $T$ is given in~\eqref{thm1T}) and $T$ (where $T$ is given in~\eqref{thm2T}), respectively. Since in general $L\gg |\mathcal S||\mathcal A|$, synchronous double Q-learning is more efficient than asynchronous double Q-learning in terms of the overall sample complexity. \subsection{Proof Sketch of Theorem \ref{thm:syncDQ}}\label{subsec:proofOutline} In this subsection, we provide an outline of the technical proof of Theorem \ref{thm:syncDQ} and summarize the key ideas behind the proof. The detailed proof can be found in~\Cref{sec:proofThm1}. Our goal is to study the finite-time convergence of the error $\norm{Q^A_t-Q^*}$ between one Q-estimator and the optimal Q-function (this is without loss of generality due to the symmetry of the two estimators). To this end, our proof includes: (a) Part I which analyzes the stochastic error propagation between the two Q-estimators $\norm{Q^B_t - Q^A_t}$; (b) Part II which analyzes the error dynamics between one Q-estimator and the optimum $\norm{Q^A_t-Q^*}$ conditioned on the error event in Part I; and (c) Part III which bounds the unconditional error $\norm{Q^A_t-Q^*}$. We describe each of the three parts in more detail below. \textbf{Part I: Bounding $\norm{Q^B_t - Q^A_t}$ (see \Cref{lem:Gq}).} The main idea is to upper bound $\norm{Q^B_t - Q^A_t}$ by a decreasing sequence $\{G_q\}_{q\geq0}$ block-wisely with high probability, where each block $q$ (with $q\geq0$) is defined by $t\in[\hat{\tau}_q, \hat{\tau}_{q+1})$. The proof consists of the following four steps.
\textit{Step 1 (see \Cref{lem:uBAdyn})}: We characterize the dynamics of $u_t^{BA}(s,a):=Q^B_t(s,a) - Q^A_t(s,a)$ as a stochastic approximation (SA) algorithm as follows: \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a)), \end{equation*} where $h_t$ is a contractive mapping of $u_t^{BA}$, and $z_t$ is a martingale difference sequence. \textit{Step 2 (see \Cref{lem:uBAsanwich})}: We derive lower and upper bounds on $u_t^{BA}$ via two sequences $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ as follows: \begin{equation*} -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) \leq u_t^{BA}(s,a) \leq X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a), \end{equation*} for any $t\geq \hat{\tau}_q$, state-action pair $(s,a)\in\mathcal{S}\times\mathcal{A}$, and $q\geq0$, where $X_{t;\hat{\tau}_q}$ is deterministic and driven by $G_q$, and $Z_{t;\hat{\tau}_q}$ is stochastic and driven by the martingale difference sequence $z_t$. \textit{Step 3 (see \Cref{lem:Xt} and \Cref{lem:ZlDiff})}: We block-wisely bound $u_t^{BA}(s,a)$ using an induction argument. Namely, we prove $\norm{u_t^{BA}}\leq G_q$ for $t\in[\hat{\tau}_q,\hat{\tau}_{q+1})$ holds for all $q\geq0$. We first observe that for $q=0$, $\norm{u_t^{BA}}\leq G_0$ holds. Given any state-action pair $(s,a)$, we assume that $|u_t^{BA}(s,a)|\leq G_q$ holds for $t\in[\hat{\tau}_q, \hat{\tau}_{q+1})$. Then we show $|u_t^{BA}(s,a)|\leq G_{q+1}$ holds for $t\in[\hat{\tau}_{q+1}, \hat{\tau}_{q+2})$, which follows by bounding $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ separately in \Cref{lem:Xt} and \Cref{lem:ZlDiff}, respectively. \textit{Step 4 (see \Cref{subsec:proofProp1})}: We apply the union bound (\Cref{lem:unionBound}) to obtain the block-wise bound for all state-action pairs and all blocks. \textbf{Part II: Conditionally bounding $\norm{Q^A_t - Q^*}$ (see~\Cref{lem:conditionalBound})}.
We upper bound $\norm{Q^A_t - Q^*}$ by a decreasing sequence $\{D_k\}_{k\geq 0}$ block-wisely conditioned on the following two events: \begin{itemize} \item[] Event $E$: $\norm{u^{BA}_t}$ is upper bounded properly (see \eqref{eq:eventA} in~\Cref{subsec:PartII}), and \item[] Event $F$: there are sufficient updates of $Q^A_t$ in each block (see~\eqref{eq:eventB} in~\Cref{subsec:PartII}). \end{itemize} The proof of~\Cref{lem:conditionalBound} consists of the following four steps. \textit{Step 1 (see~\Cref{lem:couple})}: We design a special relationship (illustrated in~\Cref{fig:DkGk}) between the block-wise bounds $\{G_q\}_{q\geq 0}$ and $\{D_k\}_{k\geq 0}$ and their block separations. \textit{Step 2 (see~\Cref{lem:residualDynamics})}: We characterize the dynamics of the iteration residual $r_{t}(s,a):=Q^A_t(s,a) - Q^*(s,a)$ as an SA algorithm as follows: when $Q^A$ is chosen to be updated at iteration $t$, \begin{equation*} r_{t+1}(s,a) \!=\! (1\!-\!\alpha_{t}) r_{t}(s,a) \!+\! \alpha_{t} (\mathcal T Q_{t}^A(s,a)\!-\!Q^*(s,a)) \!+\! \alpha_{t} w_{t}(s,a) \!+\! \alpha_{t}\gamma u_{t}^{BA}(s',a^*), \end{equation*} where $w_{t}(s,a)$ is the error between the Bellman operator and the sample-based empirical estimator, and is thus a martingale difference sequence, and $u_{t}^{BA}$ has been defined in Part I. \textit{Step 3 (see~\Cref{lem:rtSandwich})}: We provide upper and lower bounds on $r_t$ via two sequences $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ as follows: \begin{equation*} -Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a) \leq r_t(s,a) \leq Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a), \end{equation*} for all $t\geq \tau_k$, all state-action pairs $(s,a)\in\mathcal{S}\times\mathcal{A}$, and all $k\geq0$, where $Y_{t;\tau_k}$ is deterministic and driven by $D_k$, and $W_{t;\tau_k}$ is stochastic and driven by the martingale difference sequence $w_t$.
In particular, if $Q^A_t$ is not updated at some iteration, then the sequences $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ retain their values from the previous iteration. \textit{Step 4 (see \Cref{lem:Yt}, \Cref{lem:WlDiff} and \Cref{subsec:proofProp2})}: Similarly to Steps 3 and 4 in Part I, we conditionally bound $\norm{r_t}\leq D_k$ for $t\in [\tau_{k}, \tau_{k+1})$ and $k\geq0$ via bounding $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ and further taking the union bound. \textbf{Part III: Bounding $\norm{Q^A_t - Q^*}$ (see \Cref{subsec:proofThm1}).} We combine the results in the first two parts, and provide a high-probability bound on $\norm{r_t}$ with further probabilistic arguments, which exploit the high probability bounds on $\mathbb P(E)$ in~\Cref{lem:Gq} and $\mathbb P(F)$ in~\Cref{lem:halfQA}. \section{Conclusion} In this paper, we provide the first finite-time results for double Q-learning, which characterize how fast double Q-learning converges under both synchronous and asynchronous implementations. For the synchronous case, we show that it achieves an $\epsilon$-accurate optimal Q-function with probability at least $1-\delta$ by taking $\Omega\left( \left( \frac{1}{(1-\gamma)^6\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|}{(1-\gamma)^7\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations. A similar scaling order in $\frac{1}{1-\gamma}$ and $\frac{1}{\epsilon}$ also applies for asynchronous double Q-learning but with extra dependence on the covering number. We develop new techniques to bound the error between two correlated stochastic processes, which can be of independent interest. \section*{Acknowledgements} The work was supported in part by the U.S. National Science Foundation under the grant CCF-1761506 and the startup fund of the Southern University of Science and Technology (SUSTech), China.
\section*{Broader Impact} Reinforcement learning has achieved great success in areas such as robotics and game playing, and thus has attracted broad interest and many potential real-world applications. Double Q-learning is a commonly used technique in deep reinforcement learning to improve the stability and speed of deep Q-learning implementations. In this paper, we provided a fundamental analysis of the convergence rate of double Q-learning, which theoretically justifies the empirical success of double Q-learning in practice. Such a theory also provides practitioners with a desirable performance guarantee for further developing this technique into various transferable technologies. \section{Illustrative Numerical Example} \section{Proof of Lemma \ref{lem:uniformBound}} \label{sec:proofOfLemma1} We prove \Cref{lem:uniformBound} by induction. First, it is easy to guarantee that the base case is satisfied, i.e., $\norm{Q^A_1}\leq \frac{R_{\max}}{1-\gamma} = \frac{V_{\max}}{2}, \norm{Q^B_1}\leq \frac{V_{\max}}{2}$. (In practice we usually initialize the algorithm as $Q^A_1=Q^B_1=0$). Next, we assume that $\norm{Q^A_t}\leq \frac{V_{\max}}{2}, \norm{Q^B_t}\leq \frac{V_{\max}}{2}$. It remains to show that such conditions still hold for $t+1$. Observe that \begin{align*} \left|Q^A_{t+1}(s,a)\right| &= \left| (1-\alpha_t)Q^A_{t}(s,a) + \alpha_t\left( R_t+\gamma Q^{B}_t\big(s',\underset{a'\in U(s')}{\arg\max}Q^A_t(s',a')\big) \right) \right| \\ &\leq (1-\alpha_t)\norm{Q^A_{t}} + \alpha_t|R_t| + \alpha_t\gamma\norm{Q^{B}_t}\\ &\leq (1-\alpha_t)\frac{R_{\max}}{1-\gamma} + \alpha_t R_{\max} + \frac{\alpha_t\gamma R_{\max}}{1-\gamma}\\ &= \frac{R_{\max}}{1-\gamma} = \frac{V_{\max}}{2}. \end{align*} Since $(s,a)$ is arbitrary, we obtain $\norm{Q^A_{t+1}}\leq \frac{V_{\max}}{2}$. Similarly, we can show $\norm{Q^B_{t+1}}\leq \frac{V_{\max}}{2}$. Thus we complete the proof. \section{Proof of Theorem \ref{thm:syncDQ}} \label{sec:proofThm1} In this appendix, we will provide a detailed proof of~\Cref{thm:syncDQ}.
Our proof includes: (a) Part I which analyzes the stochastic error propagation between the two Q-estimators $\norm{Q^B_t - Q^A_t }$; (b) Part II which analyzes the error dynamics between one Q-estimator and the optimum $\norm{Q^A_t -Q^* }$ conditioned on the error event in Part I; and (c) Part III which bounds the unconditional error $\norm{Q^A_t -Q^* }$. We describe each of the three parts in more details below. \subsection{Part I: Bounding $\norm{Q^B_t - Q^A_t }$} \label{subsec:PartI} The main idea is to upper bound $\norm{Q^B_t - Q^A_t }$ by a decreasing sequence $\{G_q\}_{q\geq0}$ block-wisely with high probability, where each block or epoch $q$ (with $q\geq0$) is defined by $t\in[\hat{\tau}_q, \hat{\tau}_{q+1})$. \begin{proposition} \label{lem:Gq} Fix $\epsilon>0, \kappa\in(0,1), \sigma\in(0,1)$ and $\Delta\in(0, e-2)$. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $G_q = (1-\xi)^q G_0$ with $G_0 = V_{\max}$ and $\xi=\frac{1-\gamma}{4}$. Let $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2c}{\kappa}\hat{\tau}_q^\omega$ for $q \geq 1$ with $c\geq \frac{\ln(2+\Delta)+1/\hat{\tau}_1^\omega}{1-\ln(2+\Delta)-1/\hat{\tau}_1^\omega}$ and $\hat{\tau}_1$ as the finishing time of the first epoch satisfying \begin{equation*} \hat{\tau}_1\geq \max\left\{\left(\frac{1}{1-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{128c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\ln\left(\frac{64c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}. 
\end{equation*} Then for any $n$ such that $G_n\geq\sigma\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right]\\ &\quad\geq 1- \frac{4c(n+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right). \end{align*} \end{proposition} The proof of Proposition \ref{lem:Gq} consists of the following four steps. \subsubsection{Step 1: Characterizing the dynamics of $Q_t^B(s,a) - Q_t^A(s,a)$ } We first characterize the dynamics of $u_t^{BA}(s,a):=Q_t^B(s,a) - Q_t^A(s,a)$ as a stochastic approximation (SA) algorithm in this step. \begin{lemma}\label{lem:uBAdyn} Consider double Q-learning in Algorithm \ref{alg:doubleQ}. Then we have \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a), \end{equation*} where \begin{equation*} F_t(s,a) = \left\{\begin{aligned} &Q^B_t(s,a) - R_t - \gamma Q^B_t(s_{t+1},a^*),\quad\text{w.p. 1/2} \\ &R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a),\quad\text{w.p. 1/2}. \end{aligned} \right. \end{equation*} In addition, $F_t$ satisfies \begin{equation*} \norm{\mathbb E[F_t|\mathcal F_t]} \leq \frac{1+\gamma}{2} \norm{u_t^{BA} }. \end{equation*} \end{lemma} \begin{proof} Algorithm \ref{alg:doubleQ} indicates that at each time, either $Q^A$ or $Q^B$ is updated with equal probability. When updating $Q^A$ at time $t$, for each $(s,a)$ we have \begin{align*} u_{t+1}^{BA}(s,a) &= Q^B_{t+1}(s,a) - Q^A_{t+1}(s,a)\\ &= Q^B_t(s,a) - (Q^A_t(s,a) + \alpha_t (R_t + \gamma Q^B_t(s_{t+1},a^*) - Q^A_t(s,a)))\\ &= (1-\alpha_t) Q^B_t(s,a) - ( (1-\alpha_t)Q^A_t(s,a) + \alpha_t (R_t + \gamma Q^B_t(s_{t+1},a^*) - Q^B_t(s,a)) )\\ &= (1-\alpha_t) u_t^{BA}(s,a) + \alpha_t(Q^B_t(s,a) - R_t - \gamma Q^B_t(s_{t+1},a^*) ). 
\end{align*} Similarly, when updating $Q^B$, we have \begin{align*} u_{t+1}^{BA}(s,a) &= Q^B_{t+1}(s,a) - Q^A_{t+1}(s,a)\\ &= (Q^B_t(s,a) + \alpha_t (R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^B_t(s,a))) - Q^A_t(s,a)\\ &= (1-\alpha_t) Q^B_t(s,a) + ( \alpha_t (R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a)) - (1-\alpha_t)Q^A_t(s,a) )\\ &= (1-\alpha_t) u_t^{BA}(s,a) + \alpha_t(R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a)). \end{align*} Therefore, we can rewrite the dynamics of $u_t^{BA}$ as $u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a)$, where \begin{equation*} F_t(s,a) = \left\{\begin{aligned} &Q^B_t(s,a) - R_t - \gamma Q^B_t(s_{t+1},a^*),\quad\text{w.p. 1/2} \\ &R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a),\quad\text{w.p. 1/2}. \end{aligned} \right. \end{equation*} Thus, we have \begin{align} \mathbb E&[F_t(s,a)|\mathcal F_t] \nonumber\\ &= \frac{1}{2}\left( Q^B_t(s,a)\! - \!\underset{s_{t+1}}{\mathbb E}[R_{sa}^{s'} \!+ \!\gamma Q^B_t(s_{t+1},a^*)] \right)\! +\! \frac{1}{2}\left(\! \underset{s_{t+1}}{\mathbb E}[R_{sa}^{s'}\! +\! \gamma Q^A_t(s_{t+1},b^*)\!]\! -\! Q^A_t(s,a) \right)\nonumber\\ &= \frac{1}{2} (Q^B_t(s,a) - Q^A_t(s,a)) + \frac{\gamma}{2} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\nonumber\\ &= \frac{1}{2} u_t^{BA}(s,a) + \frac{\gamma}{2} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]. \label{eq:pf1LemGq} \end{align} Next, we bound $\underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]$. First, consider the case when $\underset{s_{t+1}}{\mathbb E} Q^A_t(s_{t+1},b^*) \geq \underset{s_{t+1}}{\mathbb E} Q^B_t(s_{t+1},a^*)$.
Then we have \begin{align*} \left| \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right] \right| &= \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\\ &\overset{\text{(i)}}{\leq} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},a^*) - Q^B_t(s_{t+1},a^*) \right]\\ &\leq \norm{u_t^{BA}}, \end{align*} where (i) follows from the definition of $a^*$ in Algorithm \ref{alg:doubleQ}. Similarly, if $\underset{s_{t+1}}{\mathbb E} Q^A_t(s_{t+1},b^*) < \underset{s_{t+1}}{\mathbb E} Q^B_t(s_{t+1},a^*)$, we have \begin{align*} \left| \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right] \right| &= \underset{s_{t+1}}{\mathbb E}\left[ Q^B_t(s_{t+1},a^*) - Q^A_t(s_{t+1},b^*) \right]\\ &\overset{\text{(i)}}{\leq} \underset{s_{t+1}}{\mathbb E}\left[ Q^B_t(s_{t+1},b^*) - Q^A_t(s_{t+1},b^*) \right]\\ &\leq \norm{u_t^{BA}}, \end{align*} where (i) follows from the definition of $b^*$. Thus we can conclude that \begin{equation*} \left| \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right] \right| \leq \norm{u_t^{BA}}. \end{equation*} Then, we continue to bound \eqref{eq:pf1LemGq}, and obtain \begin{align*} \left|\mathbb E[F_t(s,a)|\mathcal F_t] \right| &= \left|\frac{1}{2} u_t^{BA}(s,a) + \frac{\gamma}{2} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\right|\\ &\leq \frac{1}{2}\norm{u_t^{BA}} + \frac{\gamma}{2}\left|\underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\right|\\ &\leq \frac{1+\gamma}{2} \norm{u_t^{BA}}, \end{align*} for all $(s,a)$ pairs. Hence, $\norm{\mathbb E[F_t|\mathcal F_t]}\leq \frac{1+\gamma}{2} \norm{u_t^{BA}}$.
\end{proof} Applying~\Cref{lem:uBAdyn}, we write the dynamics of $u_t^{BA}(s,a)$ in the form of a classical SA algorithm driven by a martingale difference sequence as follows: \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a)), \end{equation*} where $h_t(s,a) = \mathbb E[F_t(s,a)|\mathcal F_t]$ and $z_t(s,a) = F_t(s,a) - \mathbb E[F_t(s,a)|\mathcal F_t]$. Then $\mathbb E [z_t(s,a)|\mathcal F_t] = 0$, and $\norm{h_t} \leq \frac{1+\gamma}{2}\norm{u_t^{BA}}$ follows from Lemma \ref{lem:uBAdyn}. We define $u^*(s,a) = 0$, and treat $h_t$ as an operator over $u^{BA}_t$. Then $h_t$ satisfies the following contraction property: \begin{equation}\label{eq:uContraction} \norm{h_t-u^*} \leq \gamma'\norm{u_t^{BA}-u^*}, \end{equation} where $\gamma'=\frac{1+\gamma}{2}\in(0,1)$. Based on this SA formulation, we bound $u_t^{BA}(s,a)$ block-wisely in the next step. \subsubsection{Step 2: Constructing sandwich bounds on $u_t^{BA}$} We derive lower and upper bounds on $u_t^{BA}$ via two sequences $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ in the following lemma. \begin{lemma}\label{lem:uBAsanwich} Let $\hat{\tau}_q$ be such that $\norm{u_t^{BA}}\leq G_q$ for all $t\geq\hat{\tau}_q$. Define $Z_{t;\hat{\tau}_q}(s,a), X_{t;\hat{\tau}_q}(s,a)$ as \begin{align*} Z_{t+1;\hat{\tau}_q}(s,a) &= (1-\alpha_t)Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a), \quad \text{with } Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = 0;\\ X_{t+1;\hat{\tau}_q}(s,a) &= (1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q, \quad \text{with } X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q,\gamma'=\frac{1+\gamma}{2}. \end{align*} Then for any $t\geq\hat{\tau}_q$ and state-action pair $(s,a)$, we have \begin{equation*} -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) \leq u_t^{BA}(s,a) \leq X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a). \end{equation*} \end{lemma} \begin{proof} We proceed by induction.
For the initial condition $t=\hat{\tau}_q$, $\norm{u_{\hat{\tau}_q}^{BA}}\leq G_q$ implies $-G_q\leq u_{\hat{\tau}_q}^{BA} \leq G_q$, which coincides with the claimed bound since $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q$ and $Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = 0$. We assume the sandwich bound holds for time $t$. It remains to check that the bound also holds for $t+1$. At time $t+1$, we have \begin{align*} u_{t+1}^{BA}(s,a) &= (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\leq (1-\alpha_t)( X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) ) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\overset{\text{(i)}}{\leq} \left[(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'\norm{u_t^{BA}}\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &\leq \left[(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &= X_{t+1;\hat{\tau}_q}(s,a) + Z_{t+1;\hat{\tau}_q}(s,a), \end{align*} where (i) follows from Lemma \ref{lem:uBAdyn}. Similarly, we can bound the other direction as \begin{align*} u_{t+1}^{BA}(s,a) &= (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\geq (1-\alpha_t)( -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) ) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\geq \left[-(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) - \alpha_t \gamma'\norm{u_t^{BA}}\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &\geq \left[-(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) - \alpha_t \gamma'G_q\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &= -X_{t+1;\hat{\tau}_q}(s,a) + Z_{t+1;\hat{\tau}_q}(s,a). \end{align*} \end{proof} \subsubsection{Step 3: Bounding $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ for block $q+1$} We bound $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ in \Cref{lem:Xt} and \Cref{lem:ZlDiff} below, respectively. Before that, we first introduce the following technical lemma which will be useful in the proof of~\Cref{lem:Xt}. \begin{lemma}\label{lem:prodHelp} Fix $\omega\in(0,1)$. Let $0 < t_1 < t_2$.
Then we have \begin{equation*} \prod_{i=t_1}^{t_2} \left( 1-\frac{1}{i^\omega} \right) \leq \exp\left( -\frac{t_2 - t_1}{t_2^\omega} \right). \end{equation*} \end{lemma} \begin{proof} Since $\ln(1-x)\leq -x$ for any $x\in (0,1)$, we have \begin{equation*} \ln\left[ \prod_{i=t_1}^{t_2} \left( 1-\frac{1}{i^\omega} \right) \right] \leq -\sum_{i=t_1}^{t_2}i^{-\omega}\leq -\int_{t_1}^{t_2} t^{-\omega}dt = -\frac{t_2^{1-\omega} - t_1^{1-\omega}}{1-\omega}. \end{equation*} Thus we have \begin{equation*} \prod_{i=t_1}^{t_2} \left( 1-\frac{1}{i^\omega} \right) \leq \exp\left( -\frac{t_2^{1-\omega} - t_1^{1-\omega}}{1-\omega} \right). \end{equation*} Define $f(t) := t^{1-\omega}$. Observe that $f(t)$ is an increasing concave function. Then we have \begin{align*} t_2^{1-\omega} - t_1^{1-\omega} &\geq f'(t_2)(t_2-t_1) = (1-\omega)t_2^{-\omega} (t_2 - t_1), \end{align*} which immediately implies the result. \end{proof} We now derive a bound for $X_{t;\hat{\tau}_q}$. \begin{lemma}\label{lem:Xt} Fix $\kappa\in (0,1)$ and $\Delta\in(0, e-2)$. Let $\{G_q\}$ be defined in~\Cref{lem:Gq}. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose that $X_{t;\hat{\tau}_q}(s,a) \leq G_q$ for any $t \geq \hat{\tau}_q$. Then for any $t\in[\hat{\tau}_{q+1}, \hat{\tau}_{q+2})$, given $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2c}{\kappa}\hat{\tau}_q^\omega$ with $\hat{\tau}_1\geq \left(\frac{1}{1-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}$ and $c\geq \frac{\ln(2+\Delta)+1/\hat{\tau}_1^\omega}{1-\ln(2+\Delta)-1/\hat{\tau}_1^\omega}$, we have \begin{equation*} X_{t;\hat{\tau}_q}(s,a) \leq \left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q. \end{equation*} \end{lemma} \begin{proof} Observe that $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q = \gamma' G_q + (1-\gamma')G_q := \gamma' G_q + \rho_{\hat{\tau}_q} $.
We can rewrite the dynamics of $X_{t;\hat{\tau}_q}(s,a)$ as \begin{equation*} X_{t+1;\hat{\tau}_q}(s,a) = (1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q = \gamma'G_q + (1-\alpha_t)\rho_t, \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$. By the definition of $\rho_t$, we obtain \begin{align*} \rho_t &= (1-\alpha_{t-1})\rho_{t-1} = \dots = (1-\gamma')G_q\prod_{i=\hat{\tau}_q}^{t-1}(1-\alpha_i)\\ &= (1-\gamma')G_q\prod_{i=\hat{\tau}_q}^{t-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(i)}}{\leq} (1-\gamma')G_q\prod_{i=\hat{\tau}_q}^{\hat{\tau}_{q+1}-1}\left(1-\frac{1}{i^\omega}\right)\\ &\overset{\text{(ii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{\hat{\tau}_{q+1}-1-\hat{\tau}_q}{(\hat{\tau}_{q+1}-1)^\omega} \right) \leq (1-\gamma')G_q\exp\left( -\frac{\hat{\tau}_{q+1}-1-\hat{\tau}_q}{\hat{\tau}_{q+1}^\omega} \right)\\ &= (1-\gamma')G_q\exp\left( -\frac{\frac{2c}{\kappa}\hat{\tau}_q^\omega-1 }{\hat{\tau}_{q+1}^\omega} \right) = (1-\gamma')G_q\exp\left( -\frac{2c}{\kappa}\left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega + \frac{1}{\hat{\tau}_{q+1}^\omega} \right)\\ &\overset{\text{(iii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{2c}{\kappa}\frac{1}{1+\frac{2c}{\kappa}} + \frac{1}{\hat{\tau}_{1}^\omega} \right) \overset{\text{(iv)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{c}{1+c} + \frac{1}{\hat{\tau}_{1}^\omega} \right), \end{align*} where (i) follows because every factor $1-\alpha_i$ lies in $[0,1)$ and $t\geq \hat{\tau}_{q+1}$, (ii) follows from Lemma \ref{lem:prodHelp}, (iii) follows because $\hat{\tau}_q\geq\hat{\tau}_1$ and \begin{equation*} \left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega \geq \frac{\hat{\tau}_q}{\hat{\tau}_{q+1}} = \frac{\hat{\tau}_q}{\hat{\tau}_{q} + \frac{2c}{\kappa}\hat{\tau}_q^\omega}\geq \frac{1}{1+\frac{2c}{\kappa}}, \end{equation*} and (iv) follows because $\frac{2c}{\kappa}\geq c$.
Next, observing the conditions that $\hat{\tau}_1^\omega\geq\frac{1}{1-\ln(2+\Delta)}$ and $c\geq \frac{1}{1-\ln(2+\Delta)-1/\hat{\tau}_1^\omega}-1$, we have \begin{equation*} \frac{c}{1+c} - \frac{1}{\hat{\tau}_{1}^\omega} \geq \ln(2+\Delta). \end{equation*} Thus we have $\rho_t\leq \frac{1-\gamma'}{2+\Delta}G_q$. Finally, we complete the proof by further observing that $1-\gamma' = 2\xi$. \end{proof} Since we have bounded $X_{t;\hat{\tau}_q}(s,a)$ by $\left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q$ for all $t\geq\hat{\tau}_{q+1}$, it remains to bound $Z_{t;\hat{\tau}_q}(s,a)$ by $\left(1-\frac{2}{2+\Delta}\right)\xi G_q$ for block $q+1$, which will further yield $\norm{u_t^{BA}}\leq(\gamma'+\xi)G_q = (1-\xi)G_q = G_{q+1}$ for any $t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2})$ as desired. Unlike $X_{t;\hat{\tau}_q}(s,a)$, which is a deterministic monotone sequence, $Z_{t;\hat{\tau}_q}(s,a)$ is stochastic. We need to capture the probability for a bound on $Z_{t;\hat{\tau}_q}(s,a)$ to hold for block $q+1$. To this end, we introduce a different sequence $\{Z_{t;\hat{\tau}_q}^l (s,a)\}$ given by \begin{equation}\label{eq:Zl} Z^l_{t;\hat{\tau}_q}(s,a) = \sum_{i=\hat{\tau}_q}^{\hat{\tau}_q+l}\alpha_i\prod_{j=i+1}^{t-1}(1-\alpha_j) z_i(s,a) := \sum_{i=\hat{\tau}_q}^{\hat{\tau}_q+l} \phi_i^{q,t-1} z_i(s,a), \end{equation} where $\phi_i^{q,t-1} = \alpha_i\prod_{j=i+1}^{t-1}(1-\alpha_j)$. By the definition of $Z_{t;\hat{\tau}_q}(s,a)$, one can check that $Z_{t;\hat{\tau}_q}(s,a) = Z^{t-1-\hat{\tau}_q}_{t;\hat{\tau}_q}(s,a) $. Thus we have \begin{equation}\label{eq:ZZl} Z_{t;\hat{\tau}_q}(s,a) = Z_{t;\hat{\tau}_q}(s,a) - Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = \sum_{l=1}^{t-1-\hat{\tau}_q} (Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a)) + Z^{0}_{t;\hat{\tau}_q}(s,a). \end{equation} In the following lemma, we capture an important property of $Z^{l}_{t;\hat{\tau}_q}(s,a)$ defined in \eqref{eq:Zl}.
\begin{lemma}\label{lem:ZlDiff} For any $t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2})$ and $1\leq l\leq t-1-\hat{\tau}_q$, $Z^{l}_{t;\hat{\tau}_q}(s,a)$ is a martingale sequence and satisfies \begin{equation}\label{eq:ZlvsZ} \lvert Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{2V_{\max}}{\hat{\tau}_q^\omega}. \end{equation} \end{lemma} \begin{proof} To show the martingale property, we observe that \begin{align*} \mathbb E[Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a)|\mathcal F_{\hat{\tau}_q + l -1}] &= \mathbb E[ \phi_{\hat{\tau}_q + l}^{q,t-1} z_{\hat{\tau}_q + l}(s,a)|\mathcal F_{\hat{\tau}_q + l -1} ]\\ &= \phi_{\hat{\tau}_q + l}^{q,t-1} \mathbb E[ z_{\hat{\tau}_q + l}(s,a)|\mathcal F_{\hat{\tau}_q + l -1} ] = 0, \end{align*} where the last equation follows from the definition of $z_t(s,a)$. In addition, based on the definition of $\phi_i^{q,t-1}$ in \eqref{eq:Zl}, which requires $i\geq\hat{\tau}_q$, we have \begin{equation*} \phi_i^{q,t-1}=\alpha_i\prod_{j=i+1}^{t-1}(1-\alpha_j)\leq \alpha_i\leq \frac{1}{\hat{\tau}_q^\omega}. \end{equation*} Further, since $|F_t|\leq\frac{2R_{\max}}{1-\gamma}=V_{\max}$, we obtain $|z_t(s,a)|=|F_t - \mathbb E[F_t|\mathcal F_t]|\leq 2V_{\max}$. Thus \begin{equation*} \lvert Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \rvert = \phi_{\hat{\tau}_q + l}^{q,t-1} |z_{\hat{\tau}_q + l}(s,a)|\leq \frac{2 V_{\max}}{\hat{\tau}_q^\omega}. \end{equation*} \end{proof} Lemma \ref{lem:ZlDiff} guarantees that $Z^{l}_{t;\hat{\tau}_q}(s,a)$ is a martingale sequence, which allows us to apply the following Azuma's inequality. \begin{lemma}\label{lem:azuma} \citep{azuma1967weighted} Let $X_0,X_1,\dots,X_n$ be a martingale sequence such that for each $1\leq k\leq n$, \begin{equation*} |X_k-X_{k-1}| \leq c_k, \end{equation*} where $c_k$ is a constant that may depend on $k$.
Then for all $n\geq 1$ and any $\epsilon>0$, \begin{equation*} \mathbb P[|X_n-X_0|>\epsilon] \leq 2\exp\left( -\frac{\epsilon^2}{2\sum_{k=1}^n c_k^2} \right). \end{equation*} \end{lemma} By Azuma's inequality and the relationship between $Z_{t;\hat{\tau}_q}(s,a)$ and $Z^l_{t;\hat{\tau}_q}(s,a)$ in~\eqref{eq:ZZl}, we obtain \begin{align*} &\mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \hat{\epsilon}| t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\leq 2\exp\left( -\frac{\hat{\epsilon}^2}{2\sum_{l=1}^{t-\hat{\tau}_q-1} \left( Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \right)^2 + 2(Z^{0}_{t;\hat{\tau}_q}(s,a))^2} \right)\\ &\quad\overset{\text{(i)}}{\leq} 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(t-\hat{\tau}_q)V_{\max}^2} \right) \leq 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(\hat{\tau}_{q+2}-\hat{\tau}_q)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\leq} 2\exp\left( -\frac{\kappa^2\hat{\epsilon}^2\hat{\tau}_q^{\omega}}{32c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from Lemma \ref{lem:ZlDiff}, and (ii) follows because \begin{equation*} \hat{\tau}_{q+2}-\hat{\tau}_q = \frac{2c}{\kappa}\hat{\tau}_{q+1}^\omega+\frac{2c}{\kappa}\hat{\tau}_{q}^\omega = \frac{2c}{\kappa}\left(\hat{\tau}_{q}+\frac{2c}{\kappa}\hat{\tau}_{q}^\omega\right)^\omega+\frac{2c}{\kappa}\hat{\tau}_{q}^\omega\leq \frac{2c}{\kappa}\left(2+\frac{2c}{\kappa}\right)\hat{\tau}_{q}^\omega=\frac{4c(c+\kappa)}{\kappa^2}\hat{\tau}_q^\omega. \end{equation*} \subsubsection{Step 4: Unionizing all blocks and state-action pairs} \label{subsec:proofProp1} Now we are ready to prove~\Cref{lem:Gq} by taking a union of probabilities over all blocks and state-action pairs. Before that, we introduce the following two preliminary lemmas, which will be used multiple times in the sequel.
\begin{lemma}\label{lem:unionBound} Let $\{X_i\}_{i\in\mathcal{I}}$ be a set of random variables. Fix $\epsilon>0$. If for any $i\in\mathcal{I}$, we have $\mathbb P(X_i\leq\epsilon) \geq 1-\delta$, then \begin{equation*} \mathbb P(\forall i\in\mathcal{I}, X_i\leq \epsilon) \geq 1- |\mathcal{I}|\delta. \end{equation*} \end{lemma} \begin{proof} By the union bound, we have \begin{align*} \mathbb P(\forall i\in\mathcal{I}, X_i\leq \epsilon) = 1-\mathbb P\left(\bigcup_{i\in\mathcal{I}} X_i>\epsilon \right) \geq 1- \sum_{i\in\mathcal{I}}\mathbb P(X_i > \epsilon) \geq 1-|\mathcal{I}|\delta. \end{align*} \end{proof} \begin{lemma}\label{lem:tauHelp} Fix positive constants $a,b$ satisfying $2ab\ln ab > 1$. If $\tau\geq 2ab\ln ab$, then \begin{equation*} \tau^b\exp\left( -\frac{2\tau}{a} \right) \leq \exp\left( -\frac{\tau}{a} \right). \end{equation*} \end{lemma} \begin{proof} Let $c=ab$. If $\tau\leq c^2$, we have \begin{equation*} c\ln\tau \leq c\ln c^2 = 2c\ln c\leq \tau. \end{equation*} If $\tau\geq c^2$, we have \begin{equation*} c\ln \tau \leq \sqrt{\tau}\ln\tau \leq \sqrt{\tau}\sqrt{\tau} = \tau, \end{equation*} where the last inequality follows from $\ln\tau = 2\ln\sqrt{\tau}\leq \sqrt{\tau}$ (using $2\ln x\leq x$ for all $x>0$). Therefore, we obtain $c\ln\tau = ab\ln\tau \leq \tau$. Thus $\tau^b\leq \exp\left( \frac{\tau}{a} \right)$, which implies the lemma. \end{proof} \textbf{Proof of~\Cref{lem:Gq}}\\ Based on the results obtained above, we are ready to prove~\Cref{lem:Gq}.
Applying Lemma \ref{lem:unionBound}, we have \begin{align*} &\mathbb P\left[ \forall(s,a), \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\xi G_q \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A|(\hat{\tau}_{q+2} - \hat{\tau}_{q+1}) \cdot \mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \frac{\Delta}{2+\Delta}\xi G_q \Big\rvert t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2c}{\kappa}\hat{\tau}_{q+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 G_q^2\hat{\tau}_q^\omega}{32c(c+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 G_q^2\hat{\tau}_q^\omega}{32c(c+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^\omega}{32c(c+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\sum_{q=0}^n |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^\omega}{64c(c+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(iii)}}{\geq} 1- \frac{4c(n+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows because $G_q \geq G_n \geq \sigma\epsilon $, (ii) follows from Lemma \ref{lem:tauHelp} by substituting that 
$a=\frac{64c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }, b=1$ and observing \begin{align*} \hat{\tau}_q^\omega&\geq\hat{\tau}_1^\omega\geq \frac{128c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\ln\left(\frac{64c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\right) = 2ab\ln ab, \end{align*} and (iii) follows because $\hat{\tau}_q \geq \hat{\tau}_1$. Finally, we complete the proof of \Cref{lem:Gq} by observing that $X_{t;\hat{\tau}_q}$ is a deterministic sequence and thus \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\xi G_q \right]. \end{align*} \subsection{Part II: Conditionally bounding $\norm{Q^A_t - Q^*}$} \label{subsec:PartII} In this part, we upper bound $\norm{Q^A_t - Q^*} $ by a decreasing sequence $\{D_k\}_{k\geq 0}$ block by block, conditioned on the following two events: fixing a positive integer $m$, we define \begin{align} E &:= \left\{ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t}\leq \sigma D_{k+1} \right\}, \label{eq:eventA}\\ F &:= \{ \forall k\in [1,m+1], I^A_k\geq c\tau_{k}^\omega \},\label{eq:eventB} \end{align} where $I^A_k$ denotes the number of iterations updating $Q^A$ at epoch $k$, $\tau_{k+1}$ is the starting iteration index of the $(k+1)$th block, and $\omega$ is the decay parameter of the polynomial learning rate. Roughly, Event $E$ requires that the difference between the two Q-estimators is bounded appropriately, and Event $F$ requires that $Q^A$ is sufficiently updated in each block.
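As a numerical illustration of Event $F$ (a sanity-check sketch, not part of the formal proof), the following Python snippet uses assumed sample values $c=2$, $\kappa=0.8$, $\omega=0.8$, and $\tau_1=10^4$: it generates the block boundaries via $\tau_{k+1} = \tau_k + \frac{2c}{\kappa}\tau_k^\omega$, flips a fair coin for every iteration of each block to decide which Q-estimator is updated, and checks that each block contains at least $c\tau_k^\omega$ iterations updating $Q^A$.

```python
import random

random.seed(1)
c, kappa, omega = 2.0, 0.8, 0.8  # assumed sample values, kappa in (ln 2, 1)
tau = 10_000.0                   # tau_1, chosen large so Event F holds w.h.p.
ok = True
for k in range(1, 6):            # blocks k = 1, ..., 5
    n = int((2 * c / kappa) * tau ** omega)  # block length tau_{k+1} - tau_k
    # number of iterations updating Q^A in this block (fair coin per iteration)
    I_A = sum(1 for _ in range(n) if random.random() < 0.5)
    ok = ok and (I_A >= c * tau ** omega)    # Event F holds for block k
    tau += n                                  # advance to tau_{k+1}
print(ok)
```

With these values each block contains several thousand iterations, so the check requires only that roughly $40\%$ of the fair coin flips land on $Q^A$, which fails with vanishingly small probability.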
\begin{proposition}\label{lem:conditionalBound} Fix $\epsilon>0, \kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Consider synchronous double Q-learning under a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $\{G_q\}_{q\geq0}, \{\hat{\tau}_q\}_{q\geq0}$ be defined in~\Cref{lem:Gq}. Define $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Let $\tau_k=\hat{\tau}_k$ for $k\geq0$. Suppose that $c\geq \frac{\kappa(\ln(2+\Delta) + 1/\tau_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\tau_1^\omega)}$ and $\tau_1$ as the finishing time of the first block satisfies \begin{equation*} \tau_1\geq \max\left\{\left(\frac{1}{\kappa-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{32c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln \left(\frac{16c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}. \end{equation*} Then for any $m$ such that $D_m\geq\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t- Q^*}\leq D_{k+1} |E,F \right]\\ &\quad\geq 1 - \frac{4c(m+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right), \end{align*} where the events $E,F$ are defined in \eqref{eq:eventA} and \eqref{eq:eventB}, respectively. \end{proposition} The proof of~\Cref{lem:conditionalBound} consists of the following four steps. \subsubsection{Step 1: Designing $\{D_k\}_{k\geq 0}$} The following lemma establishes the relationship (illustrated in~\Cref{fig:DkGk}) between the block-wise bounds $\{G_q\}_{q\geq 0}$ and $\{D_k\}_{k\geq 0}$ and their block separations, such that Event $E$ occurs with high probability as a result of~\Cref{lem:Gq}. 
\begin{lemma}\label{lem:couple} Let $\{G_q\}$ be defined in~\Cref{lem:Gq}, and let $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Then we have \begin{align*} &\mathbb P\left[\forall q\in [0,m], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\\ &\quad\leq \mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t}\leq \sigma D_{k+1} \right], \end{align*} given that $\tau_k = \hat{\tau}_{k}$. \end{lemma} \begin{proof} Based on our choice of $\sigma$, we have \begin{equation*} \beta = \frac{1-\gamma(1+\sigma)}{2} = \frac{1-\gamma\cdot\frac{1+\gamma}{2\gamma}}{2} = \frac{1-\gamma}{4} = \xi. \end{equation*} Therefore, the decay rate of $D_k$ is the same as that of $G_q$. Further considering $G_0=\sigma D_0$, the sequence $\{\sigma D_k\}$ upper-bounds $\{G_q\}$ at all times as long as we set the same starting point and ending point for each epoch. \end{proof} In Lemma \ref{lem:couple}, we set $G_k = \sigma D_k$ for every block $k$ and $\xi=\beta=\frac{1-\gamma}{4}$ by a careful design of $\sigma$. In fact, one can choose any value of $\sigma\in(0,(1-\gamma)/\gamma)$ and design a corresponding relationship between $\tau_k$ and $\hat{\tau}_{k}$ as long as the sequence $\{\sigma D_k\}$ upper-bounds $\{G_q\}$ at all times. For simplicity of presentation, we keep the design in Lemma \ref{lem:couple}. \subsubsection{Step 2: Characterizing the dynamics of $Q^A_t(s,a) - Q^*(s,a)$ } We characterize the dynamics of the iteration residual $r_{t}(s,a):=Q^A_t(s,a) - Q^*(s,a)$ as an SA algorithm in~\Cref{lem:residualDynamics} below. Since not all iterations contribute to the error propagation due to the random update between the two Q-estimators, we introduce the following notation to label the valid iterations. \begin{definition}\label{def:TA} We define $T^A$ as the collection of iterations updating $Q^A$.
In addition, we denote $T^A(t_1, t_2)$ as the set of iterations updating $Q^A$ between time $t_1$ and $t_2$. That is, \begin{equation*} T^A(t_1, t_2) = \left\{ t: t\in [t_1, t_2] \text{ and } t\in T^A \right\}. \end{equation*} Correspondingly, the number of iterations updating $Q^A$ between time $t_1$ and $t_2$ is the cardinality of $T^A(t_1, t_2)$, denoted by $|T^A(t_1,t_2)|$. \end{definition} \begin{lemma}\label{lem:residualDynamics} Consider double Q-learning in Algorithm \ref{alg:doubleQ}. Then we have \begin{equation*} r_{t+1}(s,a) \!=\! \left\{\begin{aligned} & r_t(s,a), \quad t \notin T^A;\\ & (1\!-\!\alpha_{t}) r_{t}(s,a) \!+\! \alpha_{t} (\mathcal T Q_{t}^A(s,a)\!-\!Q^*(s,a)) \!+\! \alpha_{t} w_{t}(s,a) \!+\! \alpha_{t}\gamma u_{t}^{BA}(s',a^*), t\in T^A, \end{aligned} \right. \end{equation*} where $w_{t}(s,a) = \mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a), u_{t}^{BA}(s,a) = Q_{t}^B(s,a) - Q_{t}^A(s,a)$. \end{lemma} \begin{proof} Following Algorithm \ref{alg:doubleQ}, for $t\in T^A$ we have \begin{align*} &Q_{t+1}^A(s,a)\\ &\quad= Q_{t}^A(s,a) + \alpha_{t}(R_{t} + \gamma Q_{t}^B(s',a^*) - Q^A_{t}(s,a) )\\ &\quad= (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t}\left( R_{t} + \gamma Q_{t}^A(s',a^*) \right) + \alpha_{t}\left(\gamma Q_{t}^B(s',a^*) - \gamma Q_{t}^A(s',a^*) \right)\\ &\quad\overset{\text{(i)}}{=} (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t}\left( \mathcal T_{t} Q_{t}^A(s,a) + \gamma u_{t}^{BA}(s',a^*) \right)\\ &\quad= (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t} \mathcal T Q_{t}^A(s,a) + \alpha_{t} (\mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a))+ \alpha_{t}\gamma u_{t}^{BA}(s',a^*)\\ &\quad= (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t} \mathcal T Q_{t}^A(s,a) + \alpha_{t} w_{t}(s,a) + \alpha_{t}\gamma u_{t}^{BA}(s',a^*), \end{align*} where (i) follows because we denote $\mathcal T_{t} Q_{t}^A(s,a) = R_{t} + \gamma Q_{t}^A(s',a^*)$. By subtracting $Q^*$ from both sides, we complete the proof.
\end{proof} \subsubsection{Step 3: Constructing sandwich bounds on $r_t(s,a)$} We provide upper and lower bounds on $r_t$ by constructing two sequences $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ in the following lemma. \begin{lemma}\label{lem:rtSandwich} Let $\tau_k$ be such that $\norm{r_t}\leq D_k$ for all $t\geq\tau_k$. Suppose that we have $\norm{u_t^{BA}}\leq \sigma D_k$ with $\sigma = \frac{1-\gamma}{2\gamma}$ for all $t\geq\tau_k$. Define $W_{t;\tau_k}(s,a)$ as \begin{equation*} W_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &W_{t;\tau_k}(s,a), \quad t\notin T^A;\\ &(1-\alpha_t)W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a), \quad t\in T^A, \end{aligned}\right. \end{equation*} where $W_{\tau_k;\tau_k}(s,a) = 0$ and define $Y_{t;\tau_k}(s,a)$ as \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &Y_{t;\tau_k}(s,a), \quad t\notin T^A;\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k, \quad t\in T^A, \end{aligned}\right. \end{equation*} where $Y_{\tau_k;\tau_k}(s,a) = D_k$ and $\gamma''=\gamma(1+\sigma)$. Then for any $t\geq\tau_k$ and state-action pair $(s,a)$, we have \begin{equation*} -Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a) \leq r_t(s,a) \leq Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a). \end{equation*} \end{lemma} \begin{proof} We proceed by induction. For the initial condition $t=\tau_k$, we have $\norm{r_{\tau_k}}\leq D_k$, and thus it holds that $-D_k \leq r_{\tau_k}(s,a) \leq D_k$. We assume the sandwich bound holds for time $t\geq\tau_k$. It remains to check whether this bound holds for $t+1$. If $t\notin T^A$, then $r_{t+1}(s,a) = r_t(s,a), W_{t+1;\tau_k}(s,a)=W_{t;\tau_k}(s,a), Y_{t+1;\tau_k}(s,a)=Y_{t;\tau_k}(s,a)$. Thus the sandwich bound still holds.
If $t\in T^A$, we have \begin{align*} r_{t+1}(s,a) &= (1-\alpha_t) r_t(s,a) + \alpha_t( \mathcal T Q_t^A(s,a) - Q^*(s,a) ) + \alpha_t w_t(s,a) + \alpha_t\gamma u_t^{BA}(s',a^*)\\ &\leq (1-\alpha_t) (Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) + \alpha_t\norm{ \mathcal T Q_t^A - Q^*}\\ &\quad + \alpha_t w_t(s,a) + \alpha_t\gamma \norm{u_t^{BA}}\\ &\overset{\text{(i)}}{\leq} (1-\alpha_t) (Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) + \alpha_t \gamma \norm{r_t}\\ &\quad + \alpha_t w_t(s,a) + \alpha_t\gamma \norm{u_t^{BA}}\\ &\overset{\text{(ii)}}{\leq} (1-\alpha_t) Y_{t;\tau_k}(s,a) + \alpha_t \gamma(1+\sigma)D_k + (1-\alpha_t) W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a)\\ &\leq Y_{t+1;\tau_k}(s,a) + W_{t+1;\tau_k}(s,a), \end{align*} where (i) follows from the contraction property of the Bellman operator, and (ii) follows from the condition $\norm{u_t^{BA}}\leq \sigma D_k$. Similarly, we can bound the other direction as \begin{align*} r_{t+1}(s,a) &= (1-\alpha_t) r_t(s,a) + \alpha_t( \mathcal T Q_t^A(s,a) - Q^*(s,a) ) + \alpha_t w_t(s,a) + \alpha_t\gamma u_t^{BA}(s',a^*)\\ &\geq (1-\alpha_t) (-Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) - \alpha_t\norm{ \mathcal T Q_t^A - Q^*}\\ &\quad + \alpha_t w_t(s,a) - \alpha_t\gamma \norm{u_t^{BA}}\\ &\geq (1-\alpha_t) (-Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) - \alpha_t \gamma \norm{r_t}\\ &\quad + \alpha_t w_t(s,a) - \alpha_t\gamma \norm{u_t^{BA}}\\ &\geq -(1-\alpha_t) Y_{t;\tau_k}(s,a) - \alpha_t \gamma(1+\sigma)D_k + (1-\alpha_t) W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a)\\ &\geq -Y_{t+1;\tau_k}(s,a) + W_{t+1;\tau_k}(s,a). \end{align*} \end{proof} \subsubsection{Step 4: Bounding $Y_{t;\tau_k}(s,a)$ and $W_{t;\tau_k}(s,a)$ for epoch $k+1$} \label{subsec:proofProp2} Similarly to Steps 3 and 4 in Part I, we conditionally bound $\norm{r_t}\leq D_k$ for $t\in [\tau_{k}, \tau_{k+1})$ and $k=0,1,2,\dots$ by an induction argument followed by the union bound.
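As a quick sanity check of the sandwich argument in the step above (a simplified scalar simulation, not the algorithm itself), the following Python sketch uses assumed sample values $\gamma''=0.8$, $D_k=1$, bounded noise $|w_t|\leq 0.1$, and a drift term bounded in magnitude by $\gamma'' D_k$; it runs the coupled recursions for $r_t$, $Y_{t;\tau_k}$, and $W_{t;\tau_k}$ with a fair coin deciding whether $t\in T^A$, and verifies the pathwise bounds $-Y_{t;\tau_k}+W_{t;\tau_k}\leq r_t\leq Y_{t;\tau_k}+W_{t;\tau_k}$.

```python
import random

random.seed(0)
gpp = 0.8              # gamma'' = gamma * (1 + sigma), assumed sample value < 1
D = 1.0                # block bound D_k
r, Y, W = 0.5, D, 0.0  # r at tau_k, Y_{tau_k;tau_k} = D_k, W_{tau_k;tau_k} = 0
ok = True
for t in range(2, 2000):
    alpha = 1.0 / t ** 0.8           # polynomial learning rate
    if random.random() < 0.5:        # t not in T^A: nothing is updated
        continue
    w = random.uniform(-0.1, 0.1)    # bounded noise w_t
    h = gpp * D * random.uniform(-1.0, 1.0)  # drift term with |h| <= gamma'' D_k
    r = (1 - alpha) * r + alpha * (h + w)
    Y = (1 - alpha) * Y + alpha * gpp * D
    W = (1 - alpha) * W + alpha * w
    ok = ok and (-Y + W - 1e-12 <= r <= Y + W + 1e-12)
print(ok)
```

The sandwich holds on every sample path, exactly as the induction shows: the check never fails regardless of the random seed, since the drift and noise satisfy the lemma's hypotheses by construction.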
We first bound $Y_{t;\tau_k}(s,a)$ and $W_{t;\tau_k}(s,a)$ in \Cref{lem:Yt} and~\Cref{lem:WlDiff}, respectively. \begin{lemma}\label{lem:Yt} Fix $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Let $\{D_k\}$ be defined in Lemma \ref{lem:couple}. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose that $Y_{t;\tau_k}(s,a) \leq D_k$ for any $t \geq \tau_k$. At block $k$, we assume that there are at least $c\tau_k^\omega$ iterations updating $Q^A$, i.e., $|T^A(\tau_k,\tau_{k+1})|\geq c\tau_k^\omega$. Then for any $t\in[\tau_{k+1},\tau_{k+2})$, we have \begin{equation*} Y_{t;\tau_k}(s,a) \leq \left(\gamma'' + \frac{2}{2+\Delta}\beta\right)D_k. \end{equation*} \end{lemma} \begin{proof} Since we have defined $\tau_k=\hat{\tau}_k$ in Lemma \ref{lem:couple}, we have $\tau_{k+1} = \tau_k + \frac{2c}{\kappa}\tau_k^\omega$. Observe that $Y_{\tau_k;\tau_k}(s,a) = D_k = \gamma'' D_k + (1-\gamma'')D_k := \gamma'' D_k + \rho_{\tau_k} $. We can rewrite the dynamics of $Y_{t;\tau_k}(s,a)$ as \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} & Y_{t;\tau_k}(s,a), \quad t\notin T^A\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k = \gamma''D_k + (1-\alpha_t)\rho_t, \quad t\in T^A \end{aligned}\right. \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$ for $t\in T^A$. 
By the definition of $\rho_t$, we obtain \begin{align} \rho_t &= \rho_{\tau_k}\prod_{i\in T^A(\tau_k, t-1)}(1-\alpha_i) = (1-\gamma'')D_k\prod_{i\in T^A(\tau_k, t-1)}(1-\alpha_i)\nonumber\\ &= (1-\gamma'')D_k\prod_{i\in T^A(\tau_k, t-1)}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(i)}}{\leq} (1-\gamma'')D_k\prod_{i\in T^A(\tau_k, \tau_{k+1}-1)}\left(1-\frac{1}{i^\omega}\right) \label{eq:issue1}\\ &\overset{\text{(ii)}}{\leq} (1-\gamma'')D_k\prod_{i=\tau_{k+1}-c\tau_k^\omega}^{\tau_{k+1}-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(iii)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{(\tau_{k+1}-1)^\omega} \right) \nonumber\\ &\leq (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{\tau_{k+1}^\omega} \right) = (1-\gamma'')D_k\exp\left( -c\left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega + \frac{1}{\tau_{k+1}^\omega} \right) \nonumber\\ &\overset{\text{(iv)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c}{1+\frac{2c}{\kappa}} + \frac{1}{\tau_{1}^\omega} \right), \nonumber \end{align} where (i) follows because $\alpha_i<1$ and $t\geq \tau_{k+1}$, (ii) follows because $|T^A(\tau_{k}, \tau_{k+1}-1)|\geq c\tau_k^\omega$ where $T^A(t_1,t_2)$ and $|T^A(t_1,t_2)|$ are defined in Definition \ref{def:TA}, (iii) follows from Lemma \ref{lem:prodHelp}, and (iv) holds because $\tau_k\geq\tau_1$ and \begin{equation*} \left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega \geq \frac{\tau_k}{\tau_{k+1}} = \frac{\tau_k}{\tau_{k} + \frac{2c}{\kappa}\tau_k^\omega}\geq \frac{1}{1+\frac{2c}{\kappa}}. \end{equation*} Next we check the value of the power $-\frac{c}{1+\frac{2c}{\kappa}} + \frac{1}{\tau_{1}^\omega}$. Since $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$, we have $\ln (2+\Delta)\in (0, \kappa)$. Further, observing $\tau_1^\omega > \frac{1}{\kappa-\ln(2+\Delta)}$, we obtain $\ln(2+\Delta) + \frac{1}{\tau_1^\omega} \in (0,\kappa)$.
Last, since $c\geq\frac{\kappa}{2}\left( \frac{1}{1-\frac{\ln(2+\Delta) + 1/\tau_1^\omega}{\kappa}} - 1 \right)=\frac{\kappa(\ln(2+\Delta) + 1/\tau_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\tau_1^\omega)}$, we have $-\frac{c}{1+\frac{2c}{\kappa}} + \frac{1}{\tau_{1}^\omega}\leq -\ln(2+\Delta)$. Thus, we have $\rho_t\leq \frac{1-\gamma''}{2+\Delta}D_k$. Finally, we complete the proof by further observing that $1-\gamma'' = 2\beta$. \end{proof} It remains to bound $|W_{t;\tau_k}(s,a)|\leq \left(1-\frac{2}{2+\Delta}\right)\beta D_k$ for $t\in [\tau_{k+1},\tau_{k+2})$. Combining the bounds of $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ yields $(\gamma''+\beta)D_k = (1-\beta)D_k=D_{k+1}$. Since $W_{t;\tau_k}$ is stochastic, we need to derive the probability for the bound to hold. To this end, we first rewrite the dynamics of $W_{t;\tau_k}$ defined in Lemma \ref{lem:rtSandwich} as \begin{equation*} W_{t;\tau_k}(s,a) = \sum_{i\in T^A(\tau_k, t-1)} \alpha_i\underset{j\in T^A(i+1, t-1)}{\Pi} (1-\alpha_j)w_i(s,a). \end{equation*} Next, we introduce a new sequence $\{W_{t;\tau_k}^l(s,a)\}$ as \begin{equation*} W^l_{t;\tau_k}(s,a) = \sum_{i\in T^A(\tau_k, \tau_k+l)} \alpha_i\underset{j\in T^A(i+1, t-1)}{\Pi} (1-\alpha_j)w_i(s,a). \end{equation*} Thus we have $W_{t;\tau_k}(s,a) = W^{t-1-\tau_k}_{t;\tau_k}(s,a)$. Then we have the following lemma. \begin{lemma}\label{lem:WlDiff} For any $t\in[\tau_{k+1}, \tau_{k+2}]$ and $1\leq l \leq t-\tau_k-1$, $\{W_{t;\tau_k}^l(s,a)\}$ is a martingale sequence and satisfies \begin{equation*} \lvert W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \rvert \leq \frac{V_{\max}}{\tau_k^\omega}. \end{equation*} \end{lemma} \begin{proof} Observe that \begin{equation*} W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) = \left\{\begin{aligned} &0, \quad \tau_k+l\notin T^A;\\ &\alpha_{\tau_k+l}\underset{j\in T^A(\tau_k+l+1, t-1)}{\Pi} (1-\alpha_j)w_{\tau_k+l}(s,a), \quad \tau_k+l\in T^A. \end{aligned} \right.
\end{equation*} Since $\mathbb E[w_t|\mathcal F_{t-1}]=0$, we have \begin{align*} \mathbb E\left[ W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) | \mathcal F_{\tau_k+l-1} \right]=0. \end{align*} Thus $\{W_{t;\tau_k}^l(s,a)\}$ is a martingale sequence. In addition, since $l\geq 1$ and $\alpha_t\in (0,1)$, we have \begin{equation*} \alpha_{\tau_k+l}\underset{j\in T^A(\tau_k+l+1, t-1)}{\Pi} (1-\alpha_j)\leq \alpha_{\tau_k+l}\leq \alpha_{\tau_k} = \frac{1}{\tau_k^\omega}. \end{equation*} Further, we obtain $|w_t(s,a)| = |\mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a)| \leq\frac{2Q_{\max}}{1-\gamma} = V_{\max}$. Thus \begin{equation*} \lvert W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \rvert \leq \alpha_{\tau_k+l}|w_{\tau_k+l}(s,a)| \leq \frac{V_{\max}}{\tau_k^\omega}. \end{equation*} \end{proof} Next, we bound $W_{t;\tau_k}(s,a)$. Fix $\tilde{\epsilon}>0$. Then for any $t\in[\tau_{k+1},\tau_{k+2})$, we have \begin{align*} &\mathbb P\left[ |W_{t;\tau_k}(s,a)|>\tilde{\epsilon} | t\in[\tau_{k+1},\tau_{k+2}),E,F \right]\\ &\quad\overset{\text{(i)}}{\leq} 2\exp\left( \frac{-\tilde{\epsilon}^2}{2\underset{l:\tau_k+l\in T^A(\tau_k, t-1)}{\sum}\left( W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \right)^2 + 2(W^{\min(T^A(\tau_k, t-1))}_{t;\tau_k}(s,a))^2 } \right)\\ &\quad\overset{\text{(ii)}}{\leq} 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(|T^A(\tau_k,t-1)|+1)V_{\max}^2} \right) \overset{\text{(iii)}}{\leq} 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(t+1-\tau_k)V_{\max}^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(\tau_{k+2}-\tau_k)V_{\max}^2} \right) \overset{\text{(iv)}}{\leq} 2\exp\left( -\frac{\kappa^2\tilde{\epsilon}^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from Lemma \ref{lem:azuma}, (ii) follows from Lemma \ref{lem:WlDiff}, (iii) follows because $|T^A(t_1,t_2)|\leq t_2 - t_1 + 1$ and (iv) holds because \begin{equation*} \tau_{k+2} - \tau_k =
\frac{2c}{\kappa}\tau_{k+1}^\omega + \frac{2c}{\kappa}\tau_k^\omega = \frac{2c}{\kappa}\left( \tau_k + \frac{2c}{\kappa}\tau_k^\omega \right)^\omega + \frac{2c}{\kappa}\tau_k^\omega \leq \frac{4c(c+\kappa)}{\kappa^2}\tau_k^\omega. \end{equation*} \textbf{Proof of~\Cref{lem:conditionalBound}}\\ Now we bound $\norm{r_t}$ by combining the bounds of $Y_{t;\tau_k}$ and $W_{t;\tau_k}$. Applying the union bound in Lemma \ref{lem:unionBound} yields \begin{align} &\mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \lvert W_{t;\tau_k}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\beta D_k|E,F \right] \nonumber\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A|(\tau_{k+2}-\tau_{k+1}) \cdot \mathbb P\left[ \lvert W_{t;\tau_k}(s,a) \rvert > \frac{\Delta}{2+\Delta}\beta D_k \Big\rvert t\in[\tau_{k+1},\tau_{k+2}),E,F \right] \nonumber\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2c}{\kappa}\tau_{k+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right) \nonumber\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\tau_{k}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right) \nonumber\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\tau_{k}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right)\label{eq:issue2}\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\sum_{k=0}^m |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{16c(c+\kappa)V_{\max}^2} \right) \nonumber\\ &\quad\geq 1 - 
\frac{4c(m+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right), \nonumber \end{align} where (i) follows because $D_k\geq D_m\geq \epsilon$, and (ii) follows from Lemma \ref{lem:tauHelp} by substituting $a=\frac{16c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }, b=1$ and observing that \begin{align*} \tau_k^{\omega}&\geq\hat{\tau}_1^{\omega}\geq \frac{32c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln \left(\frac{16c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) = 2ab\ln ab. \end{align*} Note that $Y_{t;\tau_k}(s,a)$ is deterministic. We complete this proof by observing that \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^*}\leq D_{k+1} | E,F\right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \lvert W_{t;\tau_k}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\beta D_k|E,F \right]. \end{align*} \subsection{Part III: Bounding $\norm{Q^A_t - Q^* }$} \label{subsec:proofThm1} We combine the results in the first two parts, and provide a high probability bound on $\norm{r_t}$ with further probabilistic arguments, which exploit the high probability bounds on $\mathbb P(E)$ in~\Cref{lem:Gq} and $\mathbb P(F)$ in the following lemma. \begin{lemma}\label{lem:halfQA} Let the sequence $\tau_k$ be the same as given in Lemma \ref{lem:couple}, i.e. $\tau_{k+1} = \tau_k + \frac{2c}{\kappa}\tau_k^\omega$ for $k\geq 1$. Then we have \begin{equation*} \mathbb P\left[\forall k\in [1,m], I^A_k\geq c\tau_{k}^\omega \right] \geq 1- m \exp\left( -\frac{(1-\kappa)^2c\tau_1^\omega}{\kappa} \right). \end{equation*} where $I^A_k$ denotes the number of iterations updating $Q^A$ at epoch $k$. 
\end{lemma} \begin{proof} Whether $Q^A$ is updated at a given iteration is a Bernoulli random variable, and hence the number of such updates in an epoch is binomial. To be specific, at iteration $t$ we define \begin{equation*} J^A_t = \left\{ \begin{aligned} & 1, \quad\text{updating } Q^A;\\ & 0, \quad\text{updating } Q^B. \end{aligned} \right. \end{equation*} Clearly, the random variables $J^A_t$ are independent across iterations. Therefore, for a given epoch $[\tau_k, \tau_{k+1})$, $I^A_k = \sum_{t=\tau_k}^{\tau_{k+1}-1} J^A_t$ is a binomial random variable with distribution $Binomial(\tau_{k+1}-\tau_k, 0.5)$. In the following, we use the tail bound of a binomial random variable. That is, if a random variable $X\sim Binomial(n,p)$, then by Hoeffding's inequality we have $\mathbb P(X\leq x)\leq \exp\left(-\frac{2(np-x)^2}{n}\right)$ for $x< np$, which implies $\mathbb P(X\leq \kappa np)\leq \exp\left(-2np^2(1-\kappa)^2\right)$ for any fixed $\kappa\in(0,1)$. If $k=0$, $I^A_0\sim Binomial(\tau_1, 0.5)$. Thus the tail bound yields \begin{equation*} \mathbb P \left[I^A_0\leq \frac{\kappa}{2}\cdot\tau_1\right] \leq \exp\left( -\frac{(1-\kappa)^2\tau_1}{2} \right). \end{equation*} If $k\geq1$, since $\tau_{k+1}-\tau_k = \frac{2c}{\kappa}\tau_k^\omega $, we have $I^A_k\sim Binomial\left( \frac{2c}{\kappa}\tau_k^\omega, 0.5 \right)$. Thus the tail bound of a binomial random variable gives \begin{equation*} \mathbb P \left[I^A_k\leq \frac{\kappa}{2}\cdot \frac{2c}{\kappa}\tau_k^\omega\right] \leq \exp\left( -\frac{(1-\kappa)^2c\tau_k^\omega}{\kappa} \right). \end{equation*} Then by the union bound, we have \begin{align*} \mathbb P \left[\forall k\in[1,m], I^A_k\geq c\tau_k^\omega\right] &=\mathbb P \left[\forall k\in[1,m], I^A_k\geq \frac{\kappa}{2}\cdot \frac{2c}{\kappa}\tau_k^\omega\right]\\ &\geq 1 - \sum_{k=1}^m\exp\left( -\frac{(1-\kappa)^2c\tau_k^\omega}{\kappa} \right)\\ &\geq 1- m \exp\left( -\frac{(1-\kappa)^2c\tau_1^\omega}{\kappa} \right).
\end{align*} \end{proof} We further give the following~\Cref{lem:totalIter} and~\Cref{lem:iteration} before proving~\Cref{thm:syncDQ}. \Cref{lem:totalIter} characterizes the number of blocks needed to achieve $\epsilon$-accuracy given $D_k$ defined in Lemma \ref{lem:couple}. \begin{lemma}\label{lem:totalIter} Let $D_{k+1}=(1-\beta)D_k$ with $\beta = \frac{1-\gamma}{4}, D_0 = \frac{2\gamma V_{\max}}{1-\gamma}$. Then for $m\geq\frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}$, we have $D_m \leq \epsilon$. \end{lemma} \begin{proof} By the definition of $D_k$, we have $D_k = \left( 1 - \beta \right)^k D_0$. Then we obtain \begin{equation*} D_k\leq\epsilon \Longleftrightarrow \left( 1 - \beta \right)^k D_0 \leq \epsilon \Longleftrightarrow \frac{1}{(1-\beta)^k} \geq \frac{D_0}{\epsilon} \Longleftrightarrow k \geq \frac{\ln(D_0/\epsilon)}{\ln(1/(1-\beta))}. \end{equation*} Further observe that $\ln \frac{1}{1-x}\geq x$ if $x\in(0,1)$, which implies $\frac{\ln(D_0/\epsilon)}{\ln(1/(1-\beta))}\leq \frac{1}{\beta}\ln \frac{D_0}{\epsilon}$. Thus it suffices to take \begin{equation*} k \geq\frac{1}{\beta}\ln \frac{D_0}{\epsilon} = \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}. \end{equation*} \end{proof} From the above lemma, it suffices to find the starting time of epoch $m^*=\left\lceil \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$. The next lemma is useful for calculating the total number of iterations given the initial epoch length and the number of epochs. \begin{lemma}\label{lem:iteration} \citep[Lemma 32]{even2003learning} Consider a sequence $\{x_k\}$ satisfying \begin{equation*} x_{k+1} = x_k + c x_k^\omega = x_1 + \sum_{i=1}^k c x_i^\omega. \end{equation*} Then for any constant $\omega\in(0,1)$, we have \begin{equation*} x_k = O\left( (x_1^{1-\omega} + c k)^\frac{1}{1-\omega} \right) = O\left( x_1 + (ck)^{\frac{1}{1-\omega}} \right).
\end{equation*} \end{lemma} \textbf{Proof of Theorem \ref{thm:syncDQ}}\\ Now we are ready to prove Theorem \ref{thm:syncDQ} based on the results obtained so far.\\ Let $m^*=\Big\lceil \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\Big\rceil$, then $G_{m^*-1}\geq\sigma\epsilon, D_{m^*-1}\geq \epsilon$. Thus we obtain \begin{align*} &\mathbb P(\norm{Q^A_{\tau_{m^*}}(s,a) - Q^* } \leq \epsilon)\\ & \geq\mathbb P\left[ \forall k\in [0,m^*- 1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} \right]\\ & = \mathbb P\left[ \forall k\in [0,m^*- 1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |E,F \right]\cdot\mathbb P(E\cap F)\\ & \geq \mathbb P\left[ \forall k\in [0,m^*-1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |E,F \right]\\ &\quad \cdot (\mathbb P(E)+\mathbb P(F)-1)\\ &\overset{\text{(i)}}{\geq}\mathbb P\left[ \forall k\in [0,m^*- 1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |E,F \right]\\ &\quad\cdot \left(\mathbb P\left[ \forall q\in [0, m^*- 1], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right] + \mathbb P(F) - 1\right)\\ &\overset{\text{(ii)}}{\geq}\left[ 1 - \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right) \right]\\ &\quad\cdot\!\left[ 1 \!-\! \frac{4cm^*}{\kappa}\left(1\!+\!\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right) \!-\! m^*\! 
\exp\!\left( -\frac{(1\!-\!\kappa)^2c\hat{\tau}_1^{\omega}}{\kappa} \right) \right]\\ &\geq 1- \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right)\\ &\quad - \frac{4cm^*}{\kappa}\left(1\!+\!\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right) - m^* \exp\left( -\frac{(1-\kappa)^2c\hat{\tau}_1^{\omega}}{\kappa} \right)\\ &\overset{\text{(iii)}}{\geq} 1- \frac{12cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from~\Cref{lem:couple}, (ii) follows from~\Cref{lem:Gq} and~\ref{lem:conditionalBound} and (iii) holds due to the fact that \begin{align*} \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \!=\! \max&\left\{ \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A|, m^* \right\},\\ \frac{\kappa^2(1\!-\!\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2}\!\leq\! \min&\left\{ \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\beta^2 \epsilon^2\hat{\tau}_1^{\omega}}{16c(c+\kappa)V_{\max}^2}, \frac{(1\!-\!\kappa)^2\hat{\tau}_1^{\omega}}{\kappa}, \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2}\right\}. 
\end{align*} By setting \begin{equation*} 1- \frac{12cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2} \right) \geq 1-\delta, \end{equation*} we obtain \begin{equation*} \hat{\tau}_1 \geq \left( \frac{64c(c+\kappa)V_{\max}^2}{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2}\ln \frac{12cm^*|\mathcal S||\mathcal A|(2c+\kappa)}{\kappa^2\delta} \right)^{\frac{1}{\omega}}. \end{equation*} Considering the conditions on $\hat{\tau}_1$ in~\Cref{lem:Gq} and~\Cref{lem:conditionalBound}, we choose \begin{equation*} \hat{\tau}_1 = \Theta\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{m^*|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} \right). \end{equation*} Finally, applying the number of epochs $m^*=\left\lceil\frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$ and Lemma \ref{lem:iteration}, we conclude that it suffices to let \begin{align*} T&=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{m^*|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{2c}{\kappa}\frac{1}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|V_{\max}^2\ln(\frac{V_{\max}}{(1-\gamma)\epsilon})}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ V_{\max}}{(1-\gamma)\epsilon}
\right)^{\frac{1}{1-\omega}} \right), \end{align*} to attain an $\epsilon$-accurate Q-estimator. \input{SupplementaryBC} \section{Proof of Theorem \ref{thm:asyncDQ} }\label{app:asyncThm} The main idea of this proof is similar to that of Theorem \ref{thm:syncDQ} with further efforts to characterize the effects of asynchronous sampling. The proof also consists of three parts: (a) Part I which analyzes the stochastic error propagation between the two Q-estimators $\norm{Q^B_t - Q^A_t }$; (b) Part II which analyzes the error dynamics between one Q-estimator and the optimum $\norm{Q^A_t -Q^* }$ conditioned on the error event in Part I; and (c) Part III which bounds the unconditional error $\norm{Q^A_t -Q^*}$. To proceed with the proof, we first introduce the following notion of valid iterations for any fixed state-action pair $(s,a)$. \begin{definition}\label{def:TAsa} We define $T(s,a)$ as the collection of iterations at which a state-action pair $(s,a)$ is used to update the Q-function $Q^A$ or $Q^B$, and $T^A(s, a)$ as the collection of iterations specifically updating $Q^A(s,a)$. In addition, we denote $T(s, a, t_1, t_2)$ and $T^A(s, a, t_1, t_2)$ as the set of iterations updating $(s,a)$ and $Q^A(s,a)$ between time $t_1$ and $t_2$, respectively. That is, \begin{align*} T(s, a, t_1, t_2) &= \left\{ t: t\in [t_1, t_2] \text{ and } t\in T(s,a) \right\},\\ T^A(s, a, t_1, t_2) &= \left\{ t: t\in [t_1, t_2] \text{ and } t\in T^A(s,a) \right\}. \end{align*} Correspondingly, the number of iterations updating $(s,a)$ between time $t_1$ and $t_2$ equals the cardinality of $T(s, a, t_1, t_2)$, which is denoted as $|T(s, a, t_1, t_2)|$. Similarly, the number of iterations updating $Q^A(s,a)$ between time $t_1$ and $t_2$ is denoted as $|T^A(s, a, t_1, t_2)|$. \end{definition} Given Assumption \ref{asp:covering}, we can obtain some properties of the quantities defined above.
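To make Definition \ref{def:TAsa} concrete, the following Python sketch (with hypothetical data structures of our own choosing; nothing here is part of the algorithm itself) builds the visitation sets from a logged trajectory and checks the counting properties used below, namely $|T(s,a,t_1,t_2)|\leq t_2-t_1+1$ and $T^A(s,a)\subseteq T(s,a)$.

```python
import random

def visitation_sets(trace):
    """Build T(s,a) and T^A(s,a) from a trace of (t, s, a, estimator) updates."""
    T, TA = {}, {}
    for t, s, a, est in trace:
        T.setdefault((s, a), []).append(t)
        if est == "A":
            TA.setdefault((s, a), []).append(t)
    return T, TA

def restrict(times, t1, t2):
    """T(s,a,t1,t2): the iterations in [t1, t2] that appear in `times`."""
    return [t for t in times if t1 <= t <= t2]

# Simulate an asynchronous run: one (s,a) visited per iteration,
# estimator chosen by a fair coin (toy dynamics, for illustration only).
random.seed(0)
states, actions = range(3), range(2)
trace = [(t, random.choice(states), random.choice(actions),
          random.choice("AB")) for t in range(1000)]
T, TA = visitation_sets(trace)

# |T(s,a,t1,t2)| <= t2 - t1 + 1, and T^A(s,a) is a subset of T(s,a).
t1, t2 = 100, 299
for sa, times in T.items():
    assert len(restrict(times, t1, t2)) <= t2 - t1 + 1
    assert set(TA.get(sa, [])) <= set(times)
```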
\begin{lemma}\label{prop:sa2L} It always holds that $|T(s,a,t_1,t_2)|\leq t_2-t_1+1$ and $|T^A(s,a,t_1,t_2)|\leq t_2-t_1+1$. In addition, suppose that Assumption \ref{asp:covering} holds. Then we have $|T(s,a,t,t+2kL-1)|\geq k$ for any $t\geq 0$. \end{lemma} \begin{proof} In any $2L$ consecutive iterations of Algorithm \ref{alg:doubleQ}, either $Q^A$ or $Q^B$ is updated at least $L$ times. It then follows from Assumption \ref{asp:covering} that $(s,a)$ is visited at least once in every $2L$ consecutive iterations of Algorithm \ref{alg:doubleQ}, which immediately implies this lemma. \end{proof} We now proceed with the proof in three parts. \subsection{Part I: Bounding $\norm{Q^B_t-Q^A_t}$} \label{subsec:PartIThm2} We upper bound $\norm{Q^B_t - Q^A_t}$ block-wise using a decreasing sequence $\{G_q\}_{q\geq0}$ as defined in~\Cref{lem:GqAsy} below. \begin{proposition}\label{lem:GqAsy} Fix $\epsilon>0, \kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Consider asynchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose that Assumption \ref{asp:covering} holds. Let $G_q = (1-\xi)^q G_0$ with $G_0 = V_{\max}$ and $\xi=\frac{1-\gamma}{4}$. Let $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$ for $q \geq 1$ with $c\geq \frac{L\kappa(\ln(2+\Delta) + 1/\hat{\tau}_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\hat{\tau}_1^\omega)}$ and $\hat{\tau}_1$ as the finishing time of the first block satisfying \begin{equation*} \hat{\tau}_1\geq \max\left\{\left(\frac{1}{\kappa-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{128cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\ln\left(\frac{64cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}.
\end{equation*} Then for any $n$ such that $G_n\geq\sigma\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\\ &\quad\geq 1- \frac{4cL(n+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right). \end{align*} \end{proposition} The proof of~\Cref{lem:GqAsy} consists of the following steps. Since the main idea of the proof is similar to that of~\Cref{lem:Gq}, we focus on pointing out the differences. We continue to use the notation $u^{BA}_t(s,a):=Q^B_t(s,a)-Q^A_t(s,a)$. \textbf{Step 1: Characterizing the dynamics of $u^{BA}_t$} First, we observe that when $(s,a)$ is visited at time $t$, i.e., $t\in T(s,a)$, Lemmas \ref{lem:uBAdyn} and \ref{lem:uBAsanwich} still apply. Otherwise, $u^{BA}$ is not updated. Thus, we have \begin{equation*} u_{t+1}^{BA}(s,a) = \left\{\begin{aligned} &u_t^{BA}(s,a),\quad t\notin T(s,a);\\ &(1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a),\quad t\in T(s,a), \end{aligned}\right. \end{equation*} where $F_t$ satisfies \begin{equation*} \norm{\mathbb E[F_t|\mathcal F_t]} \leq \frac{1+\gamma}{2} \norm{u_t^{BA}}. \end{equation*} For $t\in T(s,a)$, we rewrite the dynamics of $u_t^{BA}(s,a)$ as \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a)), \end{equation*} where $h_t(s,a) = \mathbb E[F_t(s,a)|\mathcal F_t]$ and $z_t(s,a) = F_t(s,a) - \mathbb E[F_t(s,a)|\mathcal F_t]$. In the following steps, we proceed with the proof of~\Cref{lem:GqAsy} by induction. Given $G_q$ defined in~\Cref{lem:GqAsy}, note that $\norm{u_t^{BA} }\leq G_0$ holds for all $t$; in particular, it holds for $t\in [0,\hat{\tau}_1]$. Now suppose $\hat{\tau}_q$ satisfies that $\norm{u_t^{BA} }\leq G_q$ for any $t\geq \hat{\tau}_q$.
Then we will show there exists $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$ such that $\norm{u_t^{BA} }\leq G_{q+1}$ for any $t\geq \hat{\tau}_{q+1}$. \textbf{Step 2: Constructing sandwich bounds} We first observe that the following sandwich bound still holds for all $t\geq \hat{\tau}_q$. \begin{equation*} -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) \leq u_t^{BA}(s,a) \leq X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a), \end{equation*} where $Z_{t;\hat{\tau}_q}(s,a)$ is defined as \begin{equation*} Z_{t+1;\hat{\tau}_q}(s,a) = \left\{\begin{aligned} & Z_{t;\hat{\tau}_q}(s,a), \quad t\notin T(s,a)\\ & (1-\alpha_t)Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a), \quad t\in T(s,a), \end{aligned} \right. \end{equation*} with the initial condition $Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = 0$, and $X_{t;\hat{\tau}_q}(s,a)$ is defined as \begin{equation*} X_{t+1;\hat{\tau}_q}(s,a) = \left\{\begin{aligned} & X_{t;\hat{\tau}_q}(s,a), \quad t\notin T(s,a)\\ & (1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q, \quad t\in T(s,a), \end{aligned} \right. \end{equation*} with $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q,\gamma'=\frac{1+\gamma}{2}$. This claim can be shown by induction. This bound clearly holds for the initial case with $t=\hat{\tau}_q$. Assume that it still holds for iteration $t$. If $t\in T(s,a)$, the proof is the same as that of~\Cref{lem:uBAsanwich}. If $t\notin T(s,a)$, since all three sequences do not change from time $t$ to time $t+1$, the sandwich bound still holds. Thus we conclude this claim. \textbf{Step 3: Bounding $X_{t;\hat{\tau}_q}(s,a)$} Next, we bound the deterministic sequence $X_{t;\hat{\tau}_q}(s,a)$. Observe that $X_{t;\hat{\tau}_q}(s,a)\leq G_q$ for any $t\geq\hat{\tau}_q$. We will next show that $X_{t;\hat{\tau}_q}(s,a) \leq \left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q$ for any $t\in [\hat{\tau}_{q+1},\hat{\tau}_{q+2})$ where $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$. 
Similarly to the proof of~\Cref{lem:Xt}, we still rewrite $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a)$ as $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q = \gamma' G_q + (1-\gamma')G_q := \gamma' G_q + \rho_{\hat{\tau}_q} $. However, in this case the dynamics of $X_{t;\hat{\tau}_q}(s,a)$ are different, and are given by \begin{equation*} X_{t+1;\hat{\tau}_q}(s,a) =\left\{\begin{aligned} & X_{t;\hat{\tau}_q}(s,a), \quad t\notin T(s,a)\\ &(1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q = \gamma'G_q + (1-\alpha_t)\rho_t, \quad t\in T(s,a), \end{aligned} \right. \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$ when $t\in T(s,a)$. By the definition of $\rho_t$, we obtain \begin{align*} \rho_t &= \rho_{\hat{\tau}_q}\underset{i\in T(s, a, \hat{\tau}_q, t-1)}{\Pi}(1-\alpha_i) = (1-\gamma')G_q\underset{i\in T(s, a, \hat{\tau}_q, t-1)}{\Pi}(1-\alpha_i)\\ &\leq (1-\gamma')G_q\underset{i\in T(s, a, \hat{\tau}_q, \hat{\tau}_{q+1}-1)}{\Pi}\left(1-\frac{1}{i^\omega}\right) \leq (1-\gamma')G_q\prod_{i=\hat{\tau}_{q+1}-|T(s, a, \hat{\tau}_q, \hat{\tau}_{q+1}-1)|}^{\hat{\tau}_{q+1}-1}\left(1-\frac{1}{i^\omega}\right)\\ &\overset{\text{(i)}}{\leq} (1-\gamma')G_q\prod_{i=\hat{\tau}_{q+1}-\frac{c}{\kappa}\hat{\tau}_q^\omega}^{\hat{\tau}_{q+1}-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(ii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{\frac{c}{\kappa}\hat{\tau}_q^\omega-1}{(\hat{\tau}_{q+1}-1)^\omega} \right)\\ &\leq (1-\gamma')G_q\exp\left( -\frac{\frac{c}{\kappa}\hat{\tau}_q^\omega-1 }{\hat{\tau}_{q+1}^\omega} \right) = (1-\gamma')G_q\exp\left( -\frac{c}{\kappa}\left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega + \frac{1}{\hat{\tau}_{q+1}^\omega} \right)\\ &\overset{\text{(iii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{c}{\kappa}\frac{1}{1+\frac{2cL}{\kappa}} + \frac{1}{\hat{\tau}_{1}^\omega} \right), \end{align*} where (i) follows from~\Cref{prop:sa2L}, (ii) follows from~\Cref{lem:prodHelp}, and (iii) follows because $\hat{\tau}_q\geq\hat{\tau}_1$ and \begin{equation*}
\left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega \geq \frac{\hat{\tau}_q}{\hat{\tau}_{q+1}} = \frac{\hat{\tau}_q}{\hat{\tau}_{q} + \frac{2cL}{\kappa}\hat{\tau}_q^\omega}\geq \frac{1}{1+\frac{2cL}{\kappa}}. \end{equation*} Since $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$, we have $\ln (2+\Delta)\in (0, \kappa)$. Further, observing $\hat{\tau}_1^\omega > \frac{1}{\kappa-\ln(2+\Delta)}$, we obtain $\ln(2+\Delta) + \frac{1}{\hat{\tau}_1^\omega} \in (0,\kappa)$. Last, since $c\geq\frac{L\kappa(\ln(2+\Delta) + 1/\hat{\tau}_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\hat{\tau}_1^\omega)}$, we have $-\frac{c}{\kappa}\frac{1}{1+\frac{2cL}{\kappa}} + \frac{1}{\hat{\tau}_1^\omega}\leq -\ln(2+\Delta)$. Finally, combining the above observations with the fact $1-\gamma'=2\xi$, we conclude that for any $t\geq \hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$, \begin{equation*} X_{t;\hat{\tau}_q}(s,a) \leq \left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q. \end{equation*} \textbf{Step 4: Bounding $Z_{t;\hat{\tau}_q}(s,a)$} It remains to bound the stochastic sequence $Z_{t;\hat{\tau}_q}(s,a)$ by $\frac{\Delta}{2+\Delta}\xi G_q$ at epoch $q+1$. We define an auxiliary sequence $\{Z_{t;\hat{\tau}_q}^l (s,a)\}$ (which is different from that in \eqref{eq:Zl}) as: \begin{equation*} Z^l_{t;\hat{\tau}_q}(s,a) = \sum_{i\in T(s, a, \hat{\tau}_q, \hat{\tau}_q+l)}\alpha_i\underset{j\in T(s, a, i+1, t-1)}{\Pi}(1-\alpha_j) z_i(s,a). \end{equation*} Following the same arguments as the proof of Lemma \ref{lem:ZlDiff}, we conclude that $\{Z_{t;\hat{\tau}_q}^l (s,a)\}$ is a martingale sequence and satisfies \begin{equation*} \lvert Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \rvert \leq \alpha_{\hat{\tau}_q+l} |z_{\hat{\tau}_q + l}(s,a)|\leq \frac{2V_{\max}}{\hat{\tau}_q^\omega}.
\end{equation*} In addition, note that \begin{align*} Z_{t;\hat{\tau}_q}(s,a) &= Z_{t;\hat{\tau}_q}(s,a) - Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a)\\ &= \sum_{l:\hat{\tau}_q+l-1\in T(s, a, \hat{\tau}_q, t-1)} (Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a)) + Z^{\min(T(s, a, \hat{\tau}_q, t-1))}_{t;\hat{\tau}_q}(s,a). \end{align*} Then we apply Azuma's inequality in Lemma \ref{lem:azuma} and obtain \begin{align*} &\mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \hat{\epsilon}| t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\leq 2\exp\left( \frac{-\hat{\epsilon}^2}{2\underset{l:\hat{\tau}_q+l-1\in T(s, a, \hat{\tau}_q, t-1)}{\sum} (Z^l_{t;\hat{\tau}_q}(s,a) \!-\! Z^{l-1}_{t;\hat{\tau}_q}(s,a))^2 \!+\! 2\left(Z^{\min(T(s, a, \hat{\tau}_q, t-1))}_{t;\hat{\tau}_q}(s,a)\right)^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(|T(s, a, \hat{\tau}_q, t-1)|+1) V_{\max}^2} \right) \overset{\text{(i)}}{\leq} 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(t+1-\hat{\tau}_q) V_{\max}^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(\hat{\tau}_{q+2}-\hat{\tau}_q) V_{\max}^2} \right) = 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8 \left(\frac{2cL}{\kappa}\hat{\tau}_{q+1}^\omega+\frac{2cL}{\kappa}\hat{\tau}_q^\omega\right) V_{\max}^2} \right)\\ &\quad = 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8 \left(\frac{2cL}{\kappa}(\hat{\tau}_{q}+\frac{2cL}{\kappa}\hat{\tau}_q^\omega)^\omega+\frac{2cL}{\kappa}\hat{\tau}_q^\omega\right) V_{\max}^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\kappa^2\hat{\epsilon}^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa) V_{\max}^2} \right), \end{align*} where (i) follows from~\Cref{prop:sa2L}.
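The Azuma–Hoeffding step above can be sanity-checked numerically. The following Python sketch (purely illustrative; the step sizes, the noise bound $|z_i|\leq 1$, and all parameter values are our own toy choices, not the paper's) simulates the recursion $Z\leftarrow(1-\alpha_t)Z+\alpha_t z_t$ with symmetric $\pm1$ noise and verifies that the empirical tail frequency of $|Z|$ stays below the bound $2\exp\bigl(-\epsilon^2/(2\sum_l c_l^2)\bigr)$, where $c_l=\alpha_l$ upper-bounds the martingale differences.

```python
import math, random

random.seed(1)
omega, tau_q, n_steps = 0.8, 50, 100
# Polynomial step sizes alpha_l = 1/(tau_q + l)^omega, as in the proof.
alphas = [1.0 / (tau_q + l) ** omega for l in range(n_steps)]

def simulate_Z():
    """One sample of Z_t = (1-alpha_t) Z_{t-1} + alpha_t z_t with |z_t| <= 1."""
    Z = 0.0
    for a in alphas:
        Z = (1 - a) * Z + a * random.choice([-1.0, 1.0])  # mean-zero noise
    return Z

# Martingale differences are at most alpha_l * prod_{j>l}(1-alpha_j) <= alpha_l,
# so using c_l = alpha_l gives a valid (conservative) Azuma bound.
c_sq = sum(a * a for a in alphas)
eps = 0.5
azuma_bound = 2 * math.exp(-eps ** 2 / (2 * c_sq))

samples = [simulate_Z() for _ in range(20000)]
empirical = sum(abs(Z) > eps for Z in samples) / len(samples)
assert empirical <= azuma_bound  # empirical tail respects the Azuma bound
```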
\textbf{Step 5: Taking union over all blocks} Finally, using the union bound of~\Cref{lem:unionBound} yields \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\xi G_q \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A|(\hat{\tau}_{q+2} - \hat{\tau}_{q+1}) \cdot \mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \frac{\Delta}{2+\Delta}\xi G_q \Big\rvert t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\hat{\tau}_{q+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 G_q^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 G_q^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\sum_{q=0}^n |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(iii)}}{\geq} 1- \frac{4cL(n+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 
\sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from $G_q \geq G_n \geq \sigma\epsilon $, (ii) follows from~\Cref{lem:tauHelp} by substituting $a=\frac{64cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }, b=1$ and observing that \begin{align*} \hat{\tau}_q^{\omega}&\geq\hat{\tau}_1^{\omega}\geq \frac{128cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\ln\left(\frac{64cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\right)=2ab\ln ab, \end{align*} and (iii) follows from $\hat{\tau}_q \geq \hat{\tau}_1$. \subsection{Part II: Conditionally bounding $\norm{Q^A_t-Q^*}$} \label{subsec:PartIIThm2} We upper bound $\norm{Q^A_t-Q^*}$ block-wise by a decreasing sequence $\{D_k\}_{k\geq0}$ conditioned on the following two events: fix a positive integer $m$, \begin{align} G & = \left\{ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t }\leq \sigma D_{k+1} \right\}, \label{eq:eventE} \\ H & = \{ \forall k\in [1,m+1], I^A_k\geq cL\tau_{k}^\omega \}, \label{eq:eventF} \end{align} where $I^A_k$ denotes the number of iterations updating $Q^A$ at epoch $k$, $\tau_k$ is the starting iteration index of the $(k+1)$th block, and $\omega$ is the parameter of the polynomial learning rate. Roughly, Event $G$ requires that the difference between the two Q-function estimators is bounded appropriately, and Event $H$ requires that $Q^A$ is sufficiently updated in each epoch. Again, we will design $\{D_k\}_{k\geq 0}$ such that the occurrence of Event $G$ can be implied from the event that $\norm{u^{BA}_t}$ is bounded by $\{G_q\}_{q\geq 0}$ (see~\Cref{lem:coupleAsy} below). A lower bound of the probability for Event $H$ to hold is characterized in~\Cref{lem:halfQA} in Part III.
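Event $H$ fails only on a binomial lower-tail event, exactly as in Lemma \ref{lem:halfQA}: each iteration updates $Q^A$ with probability $1/2$, so $I^A_k$ is binomial over an epoch of length $\frac{2cL}{\kappa}\tau_k^\omega$. The Python sketch below (toy parameter values $c$, $L$, $\kappa$, $\omega$, $\tau_1$ of our own choosing, not the paper's) simulates this and checks the resulting Hoeffding-type bound $m\exp\bigl(-(1-\kappa)^2 cL\tau_1^\omega/\kappa\bigr)$ empirically.

```python
import math, random

random.seed(2)
c, L, kappa, omega = 1.0, 2, 0.5, 0.8
taus = [100.0]                         # taus[k-1] = tau_k, starting from tau_1
for _ in range(3):                     # tau_{k+1} = tau_k + (2cL/kappa) tau_k^omega
    taus.append(taus[-1] + (2 * c * L / kappa) * taus[-1] ** omega)
m = len(taus) - 1                      # epochs k = 1, ..., m

n_trials, failures = 300, 0
for _ in range(n_trials):
    ok = True
    for k in range(1, m + 1):          # epoch k occupies [tau_k, tau_{k+1})
        n = round(taus[k] - taus[k - 1])
        IA = sum(random.random() < 0.5 for _ in range(n))   # fair coin per step
        if IA < c * L * taus[k - 1] ** omega:               # H fails at epoch k
            ok = False
    failures += not ok

bound = m * math.exp(-(1 - kappa) ** 2 * c * L * taus[0] ** omega / kappa)
assert failures / n_trials <= bound    # empirical failure rate vs. union bound
```

With these values the epochs are long enough that $I^A_k$ concentrates far above the threshold $cL\tau_k^\omega$ (one quarter of the epoch length), so the simulated failure rate is essentially zero, consistent with the exponentially small bound.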
\begin{proposition}\label{lem:conditionalBoundAsy} Fix $\epsilon>0, \kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Consider asynchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $\{G_q\}, \{\hat{\tau}_q\}$ be as defined in~\Cref{lem:GqAsy}. Define $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Let $\tau_k=\hat{\tau}_k$ for $k\geq0$. Suppose that $c\geq \frac{L(\ln(2+\Delta) + 1/\tau_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\tau_1^\omega)}$, and that $\tau_1 = \hat{\tau}_1$, the finishing time of the first epoch, satisfies \begin{equation*} \tau_1\geq \max\left\{\left(\frac{1}{\kappa-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{32cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln \left(\frac{16cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}. \end{equation*} Then for any $m$ such that $D_m\geq\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\\ &\quad\geq 1 - \frac{4cL(m+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( 1-\frac{2}{e} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right). \end{align*} \end{proposition} Recall that in the proof of~\Cref{lem:conditionalBound}, $Q^A$ is not updated at each iteration and thus we introduced notations $T^A$ and $T^A(t_1,t_2)$ in Definition \ref{def:TA} to capture the convergence of the error $\norm{Q^A -Q^* }$. In this proof, the only difference is that when choosing to update $Q^A$, only one $(s,a)$-pair is visited.
Therefore, the proof of~\Cref{lem:conditionalBoundAsy} is similar to that of~\Cref{lem:conditionalBound}, where most of the arguments simply substitute $T^A, T^A(t_1,t_2)$ in the proof of~\Cref{lem:conditionalBound} by $T^A(s,a), T^A(s,a,t_1,t_2)$ in Definition \ref{def:TAsa}, respectively. Certain bounds are affected by such substitutions. In the following, we proceed with the proof of~\Cref{lem:conditionalBoundAsy} in five steps, focusing on the differences from the proof of~\Cref{lem:conditionalBound}; more details can be found in~\Cref{subsec:PartII}. \textbf{Step 1: Coupling $\{D_k\}_{k\geq0}$ and $\{G_q\}_{q\geq0}$} We establish the relationship between $\{D_k\}_{k\geq0}$ and $\{G_q\}_{q\geq0}$ in the same way as Lemma \ref{lem:couple}. For ease of reference, we restate \Cref{lem:couple} in the following. \begin{lemma}\label{lem:coupleAsy} Let $\{G_q\}$ be defined in~\Cref{lem:GqAsy}, and let $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Then we have \begin{align*} &\mathbb P\left[ \forall(s,a), \forall q\in [0,m], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right]\\ &\quad\leq \mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t }\leq \sigma D_{k+1} \right], \end{align*} given that $\tau_k = \hat{\tau}_k$. \end{lemma} \textbf{Step 2: Constructing sandwich bounds} Let $r_t(s,a)=Q^A_t(s,a)-Q^*(s,a)$ and $\tau_k$ be such that $\norm{r_t}\leq D_k$ for all $t\geq\tau_k$.
The requirement of Event $G$ yields \begin{equation*} -Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a) \leq r_t(s,a) \leq Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a), \end{equation*} where $W_{t;\tau_k}(s,a)$ is defined as \begin{equation*} W_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &W_{t;\tau_k}(s,a), \quad t\notin T^A(s,a);\\ &(1-\alpha_t)W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a), \quad t\in T^A(s,a), \end{aligned}\right. \end{equation*} with $w_{t}(s,a) = \mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a)$ and $W_{\tau_k;\tau_k}(s,a) = 0$, and $Y_{t;\tau_k}(s,a)$ is given by \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &Y_{t;\tau_k}(s,a), \quad t\notin T^A(s,a);\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k, \quad t\in T^A(s,a), \end{aligned}\right. \end{equation*} with $Y_{\tau_k;\tau_k}(s,a) = D_k$ and $\gamma''=\gamma(1+\sigma)$. \textbf{Step 3: Bounding $Y_{t;\tau_k}(s,a)$} We first bound $Y_{t;\tau_k}(s,a)$. Observe that $Y_{t;\tau_k}(s,a) \leq D_k$ for any $t \geq \tau_k$. We will bound $Y_{t;\tau_k}(s,a)$ by $\left(\gamma'' + \frac{2}{2+\Delta}\beta\right)D_k$ for block $k+1$. We use a similar representation of $Y_{t;\tau_k}(s,a)$ as in the proof of Lemma \ref{lem:Yt}, which is given by \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} & Y_{t;\tau_k}(s,a), \quad t\notin T^A(s,a)\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k = \gamma''D_k + (1-\alpha_t)\rho_t, \quad t\in T^A(s,a) \end{aligned}\right. \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$ for $t\in T^A(s,a)$.
By the definition of $\rho_t$, we obtain \begin{align*} \rho_t &= \rho_{\tau_k}\prod_{i\in T^A(s, a, \tau_k, t-1)}(1-\alpha_i) = (1-\gamma'')D_k\prod_{i\in T^A(s, a, \tau_k, t-1)}(1-\alpha_i)\\ &= (1-\gamma'')D_k\prod_{i\in T^A(s, a, \tau_k, t-1)}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(i)}}{\leq} (1-\gamma'')D_k\prod_{i\in T^A(s, a, \tau_k, \tau_{k+1}-1)}\left(1-\frac{1}{i^\omega}\right)\\ &\overset{\text{(ii)}}{\leq} (1-\gamma'')D_k\prod_{i=\tau_{k+1}-c\tau_k^\omega}^{\tau_{k+1}-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(iii)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{(\tau_{k+1}-1)^\omega} \right) \nonumber\\ &\leq (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{\tau_{k+1}^\omega} \right) = (1-\gamma'')D_k\exp\left( -c\left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega + \frac{1}{\tau_{k+1}^\omega} \right) \nonumber\\ &\overset{\text{(iv)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c}{1+\frac{2cL}{\kappa}} + \frac{1}{\tau_{1}^\omega} \right), \nonumber \end{align*} where (i) follows because $\alpha_i<1$ and $t\geq \tau_{k+1}$, (ii) follows from Proposition \ref{prop:sa2L} and the requirement of event $H$, (iii) follows from Lemma \ref{lem:tauHelp}, and (iv) holds because $\tau_k\geq\tau_1$ and \begin{equation*} \left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega \geq \frac{\tau_k}{\tau_{k+1}} = \frac{\tau_k}{\tau_{k} + \frac{2cL}{\kappa}\tau_k^\omega}\geq \frac{1}{1+\frac{2cL}{\kappa}}. \end{equation*} Since $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$, we have $\ln (2+\Delta)\in (0, \kappa)$. Further, observing $\hat{\tau}_1^\omega > \frac{1}{\kappa-\ln(2+\Delta)}$, we obtain $\ln(2+\Delta) + \frac{1}{\hat{\tau}_1^\omega} \in (0,\kappa)$. Last, since $c\geq\frac{L(\ln(2+\Delta) + 1/\hat{\tau}_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\hat{\tau}_1^\omega)}$, we have $-\frac{c}{1+\frac{2cL}{\kappa}} + \frac{1}{\hat{\tau}_1^\omega}\leq -\ln(2+\Delta)$. Then, we have $\rho_t\leq \frac{1-\gamma''}{2+\Delta}D_k$.
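To illustrate that these requirements on $(\kappa,\Delta,c,\hat{\tau}_1)$ are easy to meet (the following numbers serve only as an example), take $\kappa=0.9\in(\ln 2,1)$ and $\Delta=0.1\in(0,e^{0.9}-2)$, so that $\ln(2+\Delta)\approx 0.742<\kappa$. The condition on $\hat{\tau}_1$ then reads $\hat{\tau}_1^\omega > \frac{1}{0.9-0.742}\approx 6.33$; choosing, say, $\hat{\tau}_1^\omega = 100$ yields the requirement \begin{equation*} c \geq \frac{L(0.742+0.01)}{2(0.158-0.01)} \approx 2.54L, \end{equation*} and any such $c$ guarantees the bound $\rho_t\leq \frac{1-\gamma''}{2+\Delta}D_k$ above.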
Thus we conclude that for any $t\in [\tau_{k+1}, \tau_{k+2})$, \begin{equation*} Y_{t;\tau_k}(s,a) \leq \left(\gamma'' + \frac{2}{2+\Delta}\beta\right)D_k. \end{equation*} \textbf{Step 4: Bounding $W_{t;\tau_k}(s,a)$} It remains to bound $|W_{t;\tau_k}(s,a)|\leq \frac{\Delta}{2+\Delta}\beta D_k$ for $t\in [\tau_{k+1},\tau_{k+2})$. Similarly to \Cref{subsec:proofProp2}, we define a new sequence $\{W_{t;\tau_k}^l(s,a)\}$ as \begin{equation*} W^l_{t;\tau_k}(s,a) = \sum_{i\in T^A(s, a, \tau_k, \tau_k+l)} \alpha_i\underset{j\in T^A(s, a, i+1, t-1)}{\Pi} (1-\alpha_j)w_i(s,a). \end{equation*} The same arguments as in the proof of Lemma \ref{lem:WlDiff} yield \begin{equation*} \lvert W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \rvert \leq \frac{V_{\max}}{\tau_k^\omega}. \end{equation*} If we fix $\tilde{\epsilon}>0$, then for any $t\in[\tau_{k+1},\tau_{k+2})$ we have \begin{align*} &\mathbb P\left[ |W_{t;\tau_k}(s,a)|>\tilde{\epsilon} | t\in[\tau_{k+1},\tau_{k+2}),G,H \right]\\ &\leq 2\exp\left( \frac{-\tilde{\epsilon}^2}{2\underset{l:\tau_k+l-1\in T^A(s,a,\tau_k, t\!-\!1)}{\sum}\!\left( W^l_{t;\tau_k}(s,a) \!-\! W^{l-1}_{t;\tau_k}(s,a) \right)^2 \!+\!
2(W^{\min(T^A(s,a,\tau_k, t\!-\!1))}_{t;\tau_k}(s,a))^2 } \right)\\ &\leq 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(|T^A(s,a,\tau_k,t-1)|+1)V_{\max}^2} \right) \overset{\text{(i)}}{\leq} 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(t-\tau_k)V_{\max}^2} \right)\\ &\leq 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(\tau_{k+2}-\tau_k)V_{\max}^2} \right) \overset{\text{(ii)}}{\leq} 2\exp\left( -\frac{\kappa^2\tilde{\epsilon}^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from Proposition \ref{prop:sa2L} and (ii) holds because \begin{equation*} \tau_{k+2} - \tau_k = \frac{2cL}{\kappa}\tau_{k+1}^\omega + \frac{2cL}{\kappa}\tau_k^\omega = \frac{2cL}{\kappa}\left( \tau_k + \frac{2cL}{\kappa}\tau_k^\omega \right)^\omega + \frac{2cL}{\kappa}\tau_k^\omega \leq \frac{4cL(cL+\kappa)}{\kappa^2}\tau_k^\omega. \end{equation*} \textbf{Step 5: Taking the union over all blocks} Applying the union bound in Lemma \ref{lem:unionBound}, we obtain \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} | G,H\right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \lvert W_{t;\tau_k}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\beta D_k|G,H \right]\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A|(\tau_{k+2}-\tau_{k+1}) \cdot \mathbb P\left[ \lvert W_{t;\tau_k}(s,a) \rvert > \frac{\Delta}{2+\Delta}\beta D_k \Big\rvert t\in[\tau_{k+1},\tau_{k+2}),G,H \right]\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\tau_{k+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\tau_{k}^\omega \cdot
2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\tau_{k}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\sum_{k=0}^m |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \frac{4cL(m+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows because $D_k\geq D_m\geq \epsilon$, and (ii) follows from Lemma \ref{lem:tauHelp} by substituting $a=\frac{16cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }, b=1$ and observing that \begin{align*} \tau_k^{\omega}&\geq\hat{\tau}_1^{\omega}\geq \frac{32cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln\left(\frac{64cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) = 2ab\ln ab. \end{align*} \subsection{Part III: Bound $\norm{Q^A_t-Q^*}$} In order to obtain the unconditional high-probability bound on $\norm{Q^A_t-Q^*}$, we first characterize a lower bound on the probability of Event $H$. Note that the probability of Event $G$ is lower bounded in~\Cref{lem:GqAsy}. \begin{lemma}\label{lem:halfQAAsy} Let the sequence $\tau_k$ be the same as given in Lemma \ref{lem:coupleAsy}, i.e. $\tau_{k+1} = \tau_k + \frac{2cL}{\kappa}\tau_k^\omega$ for $k\geq 1$. 
Define $I^A_k$ as the number of iterations updating $Q^A$ in block $k$. Then we have \begin{equation*} \mathbb P\left[\forall k\in [1,m], I^A_k\geq cL\tau_{k}^\omega \right] \geq 1- m \exp\left( -\frac{(1-\kappa)^2cL\tau_1^\omega}{\kappa} \right). \end{equation*} \end{lemma} \begin{proof} We use the same idea as the proof of Lemma \ref{lem:halfQA}. Since we only focus on the blocks with $k\geq 1$, we have $I^A_k\sim \mathrm{Binomial}\left( \frac{2cL}{\kappa}\tau_k^\omega, 0.5 \right)$ in such a case. Thus, the tail bound of a binomial random variable gives \begin{equation*} \mathbb P \left[I^A_k\leq \frac{\kappa}{2}\cdot \frac{2cL}{\kappa}\tau_k^\omega\right] \leq \exp\left( -\frac{(1-\kappa)^2 cL\tau_k^\omega}{\kappa} \right). \end{equation*} Then by the union bound, we have \begin{align*} \mathbb P \left[\forall k\in[1,m], I^A_k\geq cL\tau_k^\omega\right] &=\mathbb P \left[\forall k\in[1,m], I^A_k\geq \frac{\kappa}{2}\cdot \frac{2cL}{\kappa}\tau_k^\omega\right]\\ &\geq 1 - \sum_{k=1}^m\exp\left( -\frac{(1-\kappa)^2 cL\tau_k^\omega}{\kappa} \right)\\ &\geq 1- m \exp\left( -\frac{(1-\kappa)^2 cL\tau_1^\omega}{\kappa} \right). \end{align*} \end{proof} Following from Lemma \ref{lem:totalIter}, it suffices to determine the starting time of block $m^*=\left\lceil \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$. This can be done by using Lemma \ref{lem:iteration} once $\hat{\tau}_1$ is determined. Now we are ready to prove the main result,~\Cref{thm:asyncDQ}. By the definition of $m^*$, we know $D_{m^*-1}\geq\epsilon$ and $G_{m^*-1}\geq \sigma\epsilon$.
Then we obtain \begin{align*} &\mathbb P(\norm{Q^A_{\tau_{m^*}} - Q^* } \leq \epsilon)\\ & \geq\mathbb P\left[\forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} \right]\\ & \geq \mathbb P\left[ \forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\cdot\mathbb P(G\cap H)\\ & \geq \mathbb P\left[ \forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\\ &\quad \cdot (\mathbb P(G)+\mathbb P(H)-1)\\ &\overset{\text{(i)}}{\geq}\mathbb P\left[ \forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\\ &\quad\cdot \left(\mathbb P\left[ \forall q\in [0, m^* -1], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right] + \mathbb P(H) - 1\right)\\ &\overset{\text{(ii)}}{\geq}\left[ 1 - \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right) \right]\\ &\quad\cdot\left[ 1- \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right)\right.\\ &\quad\quad \left.- m^*\! \exp\!\left( -\frac{(1 -\kappa)^2 cL\hat{\tau}_1^{\omega}}{\kappa} \right) \right]\\ &\geq 1- \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad - \!\frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right)\! -\!
m^* \exp\left( -\frac{(1\!-\!\kappa)^2 cL\hat{\tau}_1^{\omega}}{\kappa} \right)\\ &\overset{\text{(iii)}}{\geq} 1- \frac{12cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64 cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from~\Cref{lem:coupleAsy}, (ii) follows from \Cref{lem:GqAsy}, \Cref{lem:conditionalBoundAsy}, and \Cref{lem:halfQAAsy}, and (iii) holds due to the fact that \begin{align*} \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| = \max&\left\{ \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A|, m^* \right\},\\ \frac{\kappa^2(1\!-\!\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2}\!\leq\! \min&\left\{ \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\beta^2 \epsilon^2\hat{\tau}_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2}, \frac{(1\!-\!\kappa)^2\hat{\tau}_1^{\omega}}{\kappa}, \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2}\right\}. \end{align*} By setting \begin{equation*} 1- \frac{12cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64 cL(cL+\kappa)V_{\max}^2} \right) \geq 1-\delta, \end{equation*} we obtain \begin{equation*} \hat{\tau}_1 \geq \left( \frac{64 cL(cL+\kappa)V_{\max}^2}{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2}\ln \frac{12m^*|\mathcal S||\mathcal A|cL(2cL+\kappa)}{\kappa^2\delta} \right)^{\frac{1}{\omega}}.
\end{equation*} Combining with the requirement of $\hat{\tau}_1$ in \Cref{lem:GqAsy} and \Cref{lem:conditionalBoundAsy}, we can choose \begin{equation*} \hat{\tau}_1 = \Theta\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{ m^*|\mathcal S||\mathcal A|L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} \right). \end{equation*} Finally, applying $m^*=\left\lceil\frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$ and Lemma \ref{lem:iteration}, we conclude that it suffices to let \begin{align*} T&=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{m^*|\mathcal S||\mathcal A|L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{2cL}{\kappa}\frac{1}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4V_{\max}^2 \ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{2cL}{\kappa}\frac{1}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4V_{\max}^2 }{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{L^2}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right) \end{align*} to attain an $\epsilon$-accurate Q-estimator. \section{Introduction} Q-learning is one of the most successful classes of reinforcement learning (RL) algorithms, which aims to find the optimal action-value function, or Q-function (and thus the associated optimal policy), via off-policy data samples.
The Q-learning algorithm was first proposed by \citet{watkins1992q}, and since then, it has been widely used in various applications including robotics~\citep{tai2016robot}, autonomous driving~\citep{okuyama2018autonomous}, and video games~\citep{mnih2015human}, to name a few. The theoretical performance of Q-learning has also been intensively explored. Its asymptotic convergence has been established in \citet{tsitsiklis1994asynchronous,jaakkola1994convergence,borkar2000ode,melo2001convergence,Lee2019Switch}. The non-asymptotic (i.e., finite-time) convergence rate of Q-learning was first obtained in \citet{szepesvari1998asymptotic}, and has been further studied in~\citep{even2003learning,shah2018q,wainwright2019stochastic,beck2012error,Chen2020finiteSample} for synchronous Q-learning and in~\citep{even2003learning,qu2020finite} for asynchronous Q-learning. One major weakness of Q-learning arises in practice due to the large overestimation of the action-value function~\citep{hasselt2010double,van2016deep}. Practical implementations of Q-learning use the maximum {\em sampled} Q-function to estimate the maximum \textit{expected} Q-function (where the expectation is taken over the randomness of the reward). Such an estimate often carries a large positive bias~\citep{hasselt2010double}, which can cause Q-learning to perform rather poorly. To address this issue, double Q-learning was proposed in~\citet{hasselt2010double}; it keeps two Q-estimators (i.e., estimators for Q-functions), one for estimating the maximum Q-function value and the other for the update, and randomly alternates the roles of the two Q-estimators. It was shown in \citet{hasselt2010double} that such an algorithm effectively overcomes the overestimation issue of vanilla Q-learning. In \citet{van2016deep}, double Q-learning was further demonstrated to substantially improve the performance of Q-learning with deep neural networks (DQNs) for playing Atari 2600 games.
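The source of this overestimation bias can be made precise by a standard convexity argument (stated here informally, for intuition): for a fixed next state $s'$ and unbiased estimates $\hat{Q}(s',a)$ of $Q(s',a)$, applying Jensen's inequality to the convex function $\max$ gives \begin{equation*} \mathbb E\left[\max_{a}\hat{Q}(s',a)\right] \geq \max_{a}\mathbb E\left[\hat{Q}(s',a)\right] = \max_{a} Q(s',a), \end{equation*} and the inequality is typically strict when the estimates are noisy. Double Q-learning sidesteps this one-estimator argument by selecting the maximizing action with one estimator and evaluating it with the other.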
Double Q-learning has inspired many variants~\citep{zhang2017weighted,abed2018double}, found a wide range of applications~\citep{zhang2018double,zhang2018human}, and has become one of the most common techniques for applying Q-learning type algorithms~\citep{hessel2018rainbow}. Despite its tremendous empirical success and popularity in practice, the theoretical understanding of double Q-learning is rather limited. Only the asymptotic convergence has been established, in \citet{hasselt2010double,weng2020provably}. There has been no non-asymptotic result on how fast double Q-learning converges. From the technical standpoint, such a finite-time analysis for double Q-learning does not follow readily from those for vanilla Q-learning, because it involves two randomly updated Q-estimators, and the coupling between these two random paths significantly complicates the analysis. This goes well beyond the existing techniques for analyzing vanilla Q-learning, which handle the random update of only a single Q-estimator. Thus, {\em the goal of this paper is to develop new finite-time analysis techniques that handle the two inter-connected random update paths in double Q-learning and provide the convergence rate.} \vspace{-1em} \subsection{Our contributions} The main contribution of this paper lies in providing the first finite-time analysis for double Q-learning with both synchronous and asynchronous implementations.
\begin{list}{$\bullet$}{\topsep=0.ex \leftmargin=0.3in \rightmargin=0.in \itemsep =0.02in} \item We show that synchronous double Q-learning with a learning rate $\alpha_t = 1/t^\omega$ (where $\omega\in(0,1)$) attains an $\epsilon$-accurate global optimum with probability at least $1-\delta$ by taking $\Omega\left( \left( \frac{1}{(1-\gamma)^6\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|}{(1-\gamma)^7\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations, where $\gamma\in(0,1)$ is the discount factor, and $|\mathcal S|$ and $|\mathcal A|$ are the sizes of the state space and action space, respectively. \item We further show that under the same accuracy and high-probability requirements, asynchronous double Q-learning takes $\Omega\left( \left( \frac{L^4}{(1-\gamma)^6\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4}{(1-\gamma)^7\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{L^2}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations, where $L$ is the covering number specified by the exploration strategy. \end{list} Our results corroborate the design goal of double Q-learning, which opts for better accuracy by making less aggressive progress during execution in order to avoid overestimation. Specifically, our results imply that in the high-accuracy regime, double Q-learning achieves the same convergence rate as vanilla Q-learning in terms of the order-level dependence on $\epsilon$, which further indicates that the high-accuracy design of double Q-learning dominates the less aggressive progress in such a regime. In the low-accuracy regime, which is not what double Q-learning is designed for, the cautious progress of double Q-learning yields a slightly weaker convergence rate than Q-learning in terms of the dependence on $1-\gamma$.
From the technical standpoint, our proof develops new techniques beyond the existing finite-time analyses of vanilla Q-learning, which involve only a single random iteration path. More specifically, we model the double Q-learning algorithm as two alternating stochastic approximation (SA) problems, where one SA captures the error propagation between the two Q-estimators, and the other captures the error dynamics between the Q-estimator and the global optimum. For the first SA, we develop new techniques to provide the finite-time bounds on the two inter-related stochastic iterations of Q-functions. Then we develop new tools to bound the convergence of the Bernoulli-controlled stochastic iterations of the second SA conditioned on the first SA. \subsection{Related work} Due to the rapidly growing literature on Q-learning, we review only the theoretical results that are highly relevant to our work. Q-learning was first proposed in \citet{watkins1992q} for a finite state-action space. Its asymptotic convergence has been established in \citet{tsitsiklis1994asynchronous,jaakkola1994convergence,borkar2000ode,melo2001convergence} through studying various general SA algorithms that include Q-learning as a special case. Along this line, \citet{Lee2019Switch} characterized Q-learning as a switched linear system and applied the results of~\citet{borkar2000ode} to show the asymptotic convergence, which was also extended to other Q-learning variants. Another line of research focuses on the finite-time analysis of Q-learning, which captures the convergence rate. Such non-asymptotic results were first obtained in \citet{szepesvari1998asymptotic}. A more comprehensive work~\citep{even2003learning} provided finite-time results for both synchronous and asynchronous Q-learning. Both \citet{szepesvari1998asymptotic} and \citet{even2003learning} showed that with linear learning rates, the convergence rate of Q-learning can be exponentially slow as a function of $\frac{1}{1-\gamma}$.
To address this, the so-called rescaled linear learning rate was introduced to avoid such an exponential dependence in synchronous Q-learning~\citep{wainwright2019stochastic,Chen2020finiteSample} and asynchronous Q-learning~\citep{qu2020finite}. The finite-time convergence of Q-learning has also been analyzed with constant step sizes~\citep{beck2012error,Chen2020finiteSample,li2020sample}. Moreover, the polynomial learning rate, which is also the focus of this work, was investigated for both synchronous~\citep{even2003learning,wainwright2019stochastic} and asynchronous Q-learning~\citep{even2003learning}. In addition, it is worth mentioning that \citet{shah2018q} applied the nearest-neighbor approach to handle MDPs with an infinite state space. In contrast to the above extensive studies of vanilla Q-learning, the theoretical understanding of double Q-learning is limited. The only theoretical guarantees concern its asymptotic convergence, provided in \citet{hasselt2010double,weng2020provably}; they do not include a non-asymptotic (i.e., finite-time) analysis of how fast double Q-learning converges. This paper provides the first finite-time analysis for double Q-learning. The vanilla Q-learning algorithm has also been studied in the function approximation setting, i.e., where the Q-function is approximated by a class of parameterized functions. In contrast to the tabular case, even with linear function approximation, Q-learning has been shown not to converge in general \citep{baird1995residual}. Strong assumptions are typically imposed to guarantee the convergence of Q-learning with function approximation \citep{bertsekas1996neuro,zou2019finite,Chen2019finiteQ,du2019provably,xu2019deepQ,cai2019neural,weng2020analysis,weng2020momentum}. Regarding double Q-learning, it remains an open question how to design double Q-learning algorithms with function approximation and under what conditions they enjoy theoretically guaranteed convergence.
\section{Preliminaries on Q-learning and Double Q-learning} In this section, we introduce the Q-learning and double Q-learning algorithms. \subsection{Q-learning} We consider a $\gamma$-discounted Markov decision process (MDP) with a finite state space $\mathcal S$ and a finite action space $\mathcal A$. The transition probability of the MDP is given by $P:\mathcal S \times \mathcal A \times \mathcal S \rightarrow [0,1]$; that is, $P(\cdot|s, a)$ denotes the probability distribution of the next state given the current state $s$ and action $a$. We consider a random reward function $R_t$ at time $t$ drawn from a fixed distribution $\phi: \mathcal{S}\times \mathcal{A}\times \mathcal{S} \rightarrow \mathbb{R}$, where $\mathbb{E}\{R_t(s,a,s')\}=R_{sa}^{s'}$ and $s'$ denotes the next state starting from $(s,a)$. In addition, we assume $|R_t|\leq R_{\max}$. A policy $\pi:=\pi(\cdot|s)$ characterizes the conditional probability distribution over the action space $\mathcal{A}$ given each state $s\in\mathcal{S}$. The action-value function (i.e., Q-function) $Q^{\pi}\in\mathbb R^{|\mathcal S|\times |\mathcal A|}$ for a given policy $\pi$ is defined as \begin{align}\label{eq:Qfunction} Q^{\pi}(s,a):=&\mathbb E\left[\sum_{t=0}^{\infty}\gamma^t R_t(s,\pi(s),s')\Big|s_0=s,a_0=a \right] \nonumber \\ =& \mathbb E_{\substack{s'\sim P(\cdot|s,a)\\a'\sim\pi(\cdot|s')}} \left[R_{sa}^{s'}+\gamma Q^{\pi}(s',a')\right], \end{align} where $\gamma\in(0,1)$ is the discount factor. Q-learning aims to find the Q-function of an optimal policy $\pi^*$ that maximizes the accumulated reward. The existence of such a $\pi^*$ has been proved in the classical MDP theory~\citep{bertsekas1996neuro}.
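For completeness, the second equality in \eqref{eq:Qfunction} follows by conditioning on the first transition: by the Markov property and the tower property of conditional expectation, \begin{equation*} Q^{\pi}(s,a) = \mathbb E\left[R_0(s,a,s')\right] + \gamma\,\mathbb E\left[\sum_{t=1}^{\infty}\gamma^{t-1} R_t\,\Big|\,s_0=s,a_0=a \right] = \mathbb E_{\substack{s'\sim P(\cdot|s,a)\\a'\sim\pi(\cdot|s')}} \left[R_{sa}^{s'}+\gamma Q^{\pi}(s',a')\right], \end{equation*} since the tail sum started from $(s_1,a_1)=(s',a')$ has the same distribution as a trajectory initialized at $(s',a')$.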
The corresponding optimal Q-function, denoted as $Q^*$, is known as the unique fixed point of the Bellman operator $\mathcal T$ given by \begin{equation}\label{eq:BellmanOperator} \mathcal T Q(s,a) = \mathbb E_{s'\sim P(\cdot|s,a)} \left[R_{sa}^{s'}+\gamma \underset{a' \in U(s')}{\max}Q(s',a')\right], \end{equation} where $ U(s')\subset\mathcal A$ is the admissible set of actions at state $s'$. It can be shown that the Bellman operator $\mathcal T$ is $\gamma$-contractive in the supremum norm $\norm{Q}:=\max_{s,a}|Q(s,a)|$, i.e., it satisfies \begin{equation} \label{eq:Contraction} \norm{\mathcal T Q - \mathcal T Q'} \leq \gamma\norm{Q - Q'}. \end{equation} The goal of Q-learning is to find $Q^*$, which further yields $\pi^*(s) = \arg\max_{a\in U(s)}Q^*(s,a)$. In practice, however, exact evaluation of the Bellman operator~\eqref{eq:BellmanOperator} is usually infeasible due to the lack of knowledge of the transition kernel of the MDP and the randomness of the reward. Instead, Q-learning draws random samples to estimate the Bellman operator and iteratively learns $Q^*$ as \begin{equation}\label{eq:qlearning} Q_{t+1}(s,a) = (1-\alpha_t(s,a)) Q_t(s,a) + \alpha_t(s,a) \left( R_t(s,a,s') + \gamma\underset{a'\in U(s')}{\max}Q_t(s',a') \right), \end{equation} where $R_t$ is the sampled reward, $s'$ is sampled from the transition probability given $(s,a)$, and $\alpha_t(s,a)\in(0,1]$ denotes the learning rate. \subsection{Double Q-learning} Although Q-learning is a commonly used RL algorithm for finding the optimal policy, it can suffer from overestimation in practice~\citep{smith2006optimizer}. To overcome this issue, \citet{hasselt2010double} proposed double Q-learning, given in Algorithm~\ref{alg:doubleQ}. \begin{algorithm}[H] \caption{Synchronous Double Q-learning~\citep{hasselt2010double}} \label{alg:doubleQ} \begin{algorithmic}[1] \STATE {\bf Input:} Initial $Q^A_1, Q^B_1$. \FOR{$t=1,2,\dots,T$ } \STATE Assign learning rate $\alpha_t$.
\STATE Randomly choose either UPDATE(A) or UPDATE(B), each with probability 0.5. \FOR{each $(s,a)$} \STATE Observe $s'\sim P(\cdot|s,a)$ and sample $R_t(s,a,s')$. \IF{UPDATE(A)} \STATE Obtain $a^* = \arg\max_{a'}Q^A_t(s',a')$ \STATE $Q^A_{t+1}(s,a) = Q^A_t(s,a) + \alpha_t(s,a) (R_t(s,a,s') + \gamma Q^B_t(s',a^*) - Q^A_t(s,a))$ \ELSIF{UPDATE(B)} \STATE Obtain $b^* = \arg\max_{b'}Q^B_t(s',b')$ \STATE $Q^B_{t+1}(s,a) = Q^B_t(s,a) + \alpha_t(s,a) (R_t(s,a,s') + \gamma Q^A_t(s',b^*) - Q^B_t(s,a))$ \ENDIF \ENDFOR \ENDFOR \STATE {\bf Output:} $Q^A_T$ (or $Q^B_T$). \end{algorithmic} \end{algorithm} Double Q-learning maintains two Q-estimators (i.e., Q-tables): $Q^A$ and $Q^B$. At each iteration of Algorithm \ref{alg:doubleQ}, one Q-table is randomly chosen to be updated. The chosen Q-table generates a greedy optimal action, and the other Q-table is used to estimate the corresponding Bellman operator for updating the chosen table. Specifically, if $Q^A$ is chosen to be updated, we use $Q^A$ to obtain the optimal action $a^*$ and then estimate the corresponding Bellman operator using $Q^B$. As shown in~\citet{hasselt2010double}, $\mathbb{E}[Q^B(s',a^*)]$ is likely smaller than $\max_a \mathbb{E}[Q^A(s',a)]$, where the expectation is taken over the randomness of the reward for the same state-action pair. In this way, the two-estimator framework of double Q-learning can effectively reduce overestimation. {\bf Synchronous and asynchronous double Q-learning:} In this paper, we study the finite-time convergence rate of double Q-learning in two different settings: synchronous and asynchronous implementations. For synchronous double Q-learning (as shown in Algorithm \ref{alg:doubleQ}), all the state-action pairs of the chosen Q-estimator are visited simultaneously at each iteration. For the asynchronous case, only one state-action pair is updated in the chosen Q-table.
Specifically, in the latter case, we sample a trajectory $\{(s_t,a_t,R_t,i_t)\}_{t=0}^\infty$ under a certain exploration strategy, where $i_t\in\{A,B\}$ denotes the index of the chosen Q-table at time $t$. Then the two Q-tables are updated based on the following rule: \begin{multline*} Q^i_{t+1}(s,a) \\ = \left\{\begin{aligned} & Q^i_t(s,a), \qquad (s,a) \neq (s_t,a_t) \text{ or } i\neq i_t;\\ & (1-\alpha_t(s,a)) Q^i_t(s,a) + \alpha_t(s,a) \Big( R_t(s,a,s') + \gamma Q^{i^c}_t\big(s',\underset{a'\in U(s')}{\arg\max}Q^i_t(s',a')\big) \Big), \ \text{otherwise}, \end{aligned} \right. \end{multline*} where $i^c = \{A,B\}\setminus i$. We next provide the boundedness property of the Q-estimators and the errors in the following lemma, which is typically necessary for the finite-time analysis. \begin{lemma}\label{lem:uniformBound} For either synchronous or asynchronous double Q-learning, let $Q^i_t(s,a)$ be the value of either Q-table corresponding to a state-action pair $(s,a)$ at iteration $t$. Suppose $\norm{Q_0^i}\leq \frac{R_{\max}}{1-\gamma}$. Then we have $\norm{Q^i_t}\leq \frac{R_{\max}}{1-\gamma}$ and $\norm{Q^i_t-Q^*}\leq V_{\max}$ for all $t\geq0$, where $V_{\max} := \frac{2R_{\max}}{1-\gamma}$. \end{lemma} Lemma \ref{lem:uniformBound} can be proved by an induction argument using the triangle inequality and the uniform boundedness of the reward function, as shown in \Cref{sec:proofOfLemma1}. \section{Main results} We present our finite-time analysis for synchronous and asynchronous double Q-learning in this section, followed by a sketch of the proof for the synchronous case, which captures our main techniques. The detailed proofs of all the results are provided in the Supplementary Materials. \subsection{Synchronous double Q-learning} Since the update of the two Q-estimators is symmetric, we can characterize the convergence rate of either Q-estimator, e.g., $Q^A$, to the global optimum $Q^*$.
To this end, we first derive two important properties of double Q-learning that are crucial to our finite-time convergence analysis. The first property captures the stochastic error $\norm{Q^B_t - Q^A_t}$ between the two Q-estimators. Since double Q-learning alternates its updates between these two estimators, such an error process must decay to zero in order for double Q-learning to converge. Furthermore, how fast such an error converges determines the overall convergence rate of double Q-learning. The following proposition (which is an informal restatement of~\Cref{lem:Gq} in~\Cref{subsec:PartI}) shows that such an error process can be bounded \textit{block-wise} by an exponentially decreasing sequence $G_q = (1-\xi)^q V_{\max}$ for $q=0,1,2,\dots,$ and some $\xi\in(0,1)$. Conceptually, as illustrated in \Cref{fig:DkGk}, such an error process is upper-bounded by the blue-colored piece-wise linear curve. \begin{repproposition}{lem:Gq}({\bf\em Informal}) Consider synchronous double Q-learning under a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. We divide the time horizon into blocks $[\hat{\tau}_{q},\hat{\tau}_{q+1})$ for $q\geq0$, where $\hat{\tau}_0=0$ and $\hat{\tau}_{q+1} = \hat{\tau}_q + c_1\hat{\tau}_q^\omega$ with some $c_1>0$. Fix $\hat{\epsilon}>0$. Then for any $n$ such that $G_n \geq \hat{\epsilon}$ and under certain conditions on $\hat{\tau}_1$ (see~\Cref{subsec:PartI}), we have \begin{equation*} \mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\geq 1- c_2 n \exp\left( -\frac{c_3\hat{\tau}_1^{\omega}\hat{\epsilon}^2}{V_{\max}^2} \right), \end{equation*} where the positive constants $c_2$ and $c_3$ are specified in~\Cref{subsec:PartI}.
\end{repproposition} \begin{figure}[h] \centering \includegraphics[width=0.70\textwidth]{doubleQ.png} \caption{Illustration of sequence $\{G_k\}_{k\geq0}$ as a block-wise upper bound on $\norm{Q^B_t-Q^A_t}$, and sequence $\{D_k\}_{k\geq0}$ as a block-wise upper bound on $\norm{Q^A_t-Q^*}$ conditioned on the first upper bound event.} \label{fig:DkGk} \end{figure} \Cref{lem:Gq} implies that the two Q-estimators approach each other asymptotically, but does not necessarily imply that they converge to the optimal action-value function $Q^*$. Then the next proposition (which is an informal restatement of \Cref{lem:conditionalBound} in~\Cref{subsec:PartII}) shows that as long as the high probability event in~\Cref{lem:Gq} holds, the error process $\norm{Q^A_t-Q^*}$ between either Q-estimator (say $Q^A$) and the optimal Q-function can be \textit{block-wisely} bounded by an exponentially decreasing sequence $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ for $k=0,1,2,\dots,$ and $\beta\in(0,1)$. Conceptually, as illustrated in \Cref{fig:DkGk}, such an error process is upper-bounded by the yellow-colored piece-wise linear curve. \begin{repproposition}{lem:conditionalBound} ({\bf\em Informal}) Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. We divide the time horizon into blocks $[\tau_{k},\!\tau_{k+1})$ for $k\geq0$, where ${\tau_{0}}=0$ and ${\tau}_{k+1} = {\tau}_k + c_4{\tau}_k^\omega$ with some $c_4>0$. Fix $\tilde{\epsilon}>0$. 
Then for any $m$ such that $D_m \geq \tilde{\epsilon}$ and under certain conditions on $\tau_1$ (see~\Cref{subsec:PartII}), we have \begin{equation*} \mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^*}\leq D_{k+1} |E,F \right] \geq 1 - c_5 m \exp\left( -\frac{c_6\tau_1^{\omega}\tilde{\epsilon}^2}{V_{\max}^2} \right), \end{equation*} where $E$ and $F$ denote certain events defined in~\eqref{eq:eventA} and \eqref{eq:eventB} in~\Cref{subsec:PartII}, and the positive constants $c_4,c_5$, and $c_6$ are specified in~\Cref{subsec:PartII}. \end{repproposition} As illustrated in~\Cref{fig:DkGk}, the two block sequences $\{\hat{\tau}_q\}_{q\geq0}$ in~\Cref{lem:Gq} and $\{{\tau_q}\}_{q\geq0}$ in~\Cref{lem:conditionalBound} can be chosen to coincide with each other. Combining the above two properties with further mathematical arguments then yields the following main theorem, which characterizes the convergence rate of double Q-learning. We will provide a proof sketch for~\Cref{thm:syncDQ} in~\Cref{subsec:proofOutline}, which explains the main steps to obtain the supporting properties of~\Cref{lem:Gq} and~\ref{lem:conditionalBound} and how they further yield the main theorem. \begin{theorem}\label{thm:syncDQ} Fix $\epsilon>0$ and $\gamma\in(1/3,1)$. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $Q^A_T(s,a)$ be the value of $Q^A$ for a state-action pair $(s,a)$ at time $T$. Then we have $\mathbb P(\norm{Q^A_T - Q^*} \leq \epsilon)\geq 1-\delta$, given that \begin{equation} \label{thm1T} T=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right), \end{equation} where $V_{\max} = \frac{2R_{\max}}{1-\gamma}$.
\end{theorem} \Cref{thm:syncDQ} provides the finite-time convergence guarantee in the high-probability sense for synchronous double Q-learning. Specifically, double Q-learning attains an $\epsilon$-accurate optimal Q-function with high probability given $\Omega\left( \left( \frac{1}{(1-\gamma)^6\epsilon^2}\ln \frac{1}{(1-\gamma)^7\epsilon^2} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations. Such a result can be further understood by considering the following two regimes. In the high accuracy regime, in which $\epsilon \ll 1-\gamma$, the dependence on $\epsilon$ dominates, and the time complexity is given by $\Omega\left( \left( \frac{1}{\epsilon^2}\ln \frac{1}{\epsilon^2} \right)^{\frac{1}{\omega}} + \left( \ln\frac{ 1}{\epsilon} \right)^{\frac{1}{1-\omega}} \right)$, which is optimized as $\omega$ approaches 1. In the low accuracy regime, in which $\epsilon \gg 1-\gamma$, the dependence on $\frac{1}{1-\gamma}$ dominates, and the time complexity can be optimized at $\omega=\frac{6}{7}$, which yields $T=\tilde{\Omega} \left( \frac{1}{(1-\gamma)^7\epsilon^{7/3}} + \frac{1}{(1-\gamma)^7} \right)=\tilde{\Omega} \left( \frac{1}{(1-\gamma)^7\epsilon^{7/3}} \right)$. Furthermore, \Cref{thm:syncDQ} corroborates the design effectiveness of double Q-learning, which overcomes the overestimation issue and hence achieves better accuracy by making less aggressive progress in each update. Specifically, comparison of \Cref{thm:syncDQ} with the time complexity bounds of vanilla synchronous Q-learning under a polynomial learning rate in \citet{even2003learning} and \citet{wainwright2019stochastic} indicates that in the high accuracy regime, double Q-learning achieves the same convergence rate as vanilla Q-learning in terms of the order-level dependence on $\epsilon$. Clearly, the design of double Q-learning for high accuracy dominates the performance.
In the low-accuracy regime (which is not what double Q-learning is designed for), double Q-learning achieves a slightly weaker convergence rate than vanilla Q-learning in \citet{even2003learning,wainwright2019stochastic} in terms of the dependence on $1-\gamma$, because its nature of less aggressive progress dominates the performance. \subsection{Asynchronous Double Q-learning} In this subsection, we study asynchronous double Q-learning and provide its finite-time convergence result. Unlike synchronous double Q-learning, in which all state-action pairs are visited for each update of the chosen Q-estimator, asynchronous double Q-learning visits only one state-action pair for each update of the chosen Q-estimator. Therefore, we make the following standard assumption on the exploration strategy~\citep{even2003learning}: \begin{assumption}\label{asp:covering} (Covering number) There exists a covering number $L$, such that in any $L$ consecutive updates of either the $Q^A$ or the $Q^B$ estimator, all the state-action pairs of the chosen Q-estimator are visited at least once. \end{assumption} Such a condition on the exploration is usually necessary for the finite-time analysis of asynchronous Q-learning. The same assumption was adopted in \citet{even2003learning}, and \citet{qu2020finite} proposed a mixing time condition in the same spirit. \Cref{asp:covering} essentially requires the sampling strategy to have good visitation coverage over all state-action pairs. Specifically, \Cref{asp:covering} guarantees that any $L$ consecutive updates of $Q^A$ visit each state-action pair of $Q^A$ at least once, and the same holds for $Q^B$. Since $2L$ iterations of asynchronous double Q-learning must include at least $L$ updates of either $Q^A$ or $Q^B$, \Cref{asp:covering} further implies that any state-action pair $(s,a)$ must be visited at least once during any $2L$ iterations of the algorithm.
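As a concrete reading of the covering assumption, the covering number of a logged update sequence can be computed directly. The helper below is an illustrative sketch of our own (the function name and the brute-force window scan are not from the paper): it returns the smallest window length $L$ such that every window of $L$ consecutive updates of the chosen estimator contains every state-action pair.

```python
from itertools import product

def covering_number(updates, states, actions):
    """Smallest L such that every window of L consecutive updates
    contains every state-action pair at least once.

    `updates` is the ordered sequence of (s, a) pairs at which one
    estimator (say Q^A) was updated.  Returns None if even the full
    sequence fails to cover all pairs.
    """
    pairs = set(product(states, actions))
    for L in range(len(pairs), len(updates) + 1):
        if all(pairs <= set(updates[i:i + L])
               for i in range(len(updates) - L + 1)):
            return L
    return None
```

For instance, a round-robin sweep over a $2\times2$ state-action space has covering number $4$, whereas a sequence that never visits some pair has no finite covering number.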
In fact, our analysis allows certain relaxation of~\Cref{asp:covering} by only requiring each state-action pair to be visited during an interval with a certain probability. In such a case, we can also derive a finite-time bound by additionally dealing with a conditional probability. Next, we provide the finite-time result for asynchronous double Q-learning in the following theorem. \begin{theorem}\label{thm:asyncDQ} Fix $\epsilon>0,\gamma\in(1/3,1)$. Consider asynchronous double Q-learning under a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose Assumption \ref{asp:covering} holds. Let $Q^A_T(s,a)$ be the value of $Q^A$ for a state-action pair $(s,a)$ at time $T$. Then we have $\mathbb P(\norm{Q^A_T - Q^*} \leq \epsilon)\geq 1-\delta$, given that \begin{equation} \label{thm2T} T=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4V_{\max}^2 }{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{L^2}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right). \end{equation} \end{theorem} Comparison of~\Cref{thm:syncDQ} and~\ref{thm:asyncDQ} indicates that the finite-time result of asynchronous double Q-learning matches that of synchronous double Q-learning in the order dependence on $\frac{1}{1-\gamma}$ and $\frac{1}{\epsilon}$. The difference lies in the extra dependence on the covering time $L$ in \Cref{thm:asyncDQ}. Since synchronous double Q-learning visits all state-action pairs (i.e., takes $|\mathcal S||\mathcal A|$ sample updates) at each iteration, whereas asynchronous double Q-learning visits only one state-action pair (i.e., takes only one sample update) at each iteration, a more reasonable comparison between the two should be in terms of the overall sample complexity. 
In this sense, synchronous and asynchronous double Q-learning have sample complexities of $|\mathcal S||\mathcal A| T$ (where $T$ is given in~\eqref{thm1T}) and $T$ (where $T$ is given in~\eqref{thm2T}), respectively. Since in general $L\gg |\mathcal S||\mathcal A|$, synchronous double-Q is more efficient than asynchronous double-Q in terms of the overall sample complexity. \subsection{Proof Sketch of Theorem \ref{thm:syncDQ}}\label{subsec:proofOutline} In this subsection, we provide an outline of the technical proof of Theorem \ref{thm:syncDQ} and summarize the key ideas behind the proof. The detailed proof can be found in~\Cref{sec:proofThm1}. Our goal is to study the finite-time convergence of the error $\norm{Q^A_t-Q^*}$ between one Q-estimator and the optimal Q-function (without loss of generality, due to the symmetry of the two estimators). To this end, our proof includes: (a) Part I, which analyzes the stochastic error propagation between the two Q-estimators $\norm{Q^B_t - Q^A_t}$; (b) Part II, which analyzes the error dynamics between one Q-estimator and the optimum $\norm{Q^A_t-Q^*}$ conditioned on the error event in Part I; and (c) Part III, which bounds the unconditional error $\norm{Q^A_t-Q^*}$. We describe each of the three parts in more detail below. \textbf{Part I: Bounding $\norm{Q^B_t - Q^A_t}$ (see \Cref{lem:Gq}).} The main idea is to upper bound $\norm{Q^B_t - Q^A_t}$ by a decreasing sequence $\{G_q\}_{q\geq0}$ block-wisely with high probability, where each block $q$ (with $q\geq0$) is defined by $t\in[\hat{\tau}_q, \hat{\tau}_{q+1})$. The proof consists of the following four steps.
\textit{Step 1 (see \Cref{lem:uBAdyn})}: We characterize the dynamics of $u_t^{BA}(s,a):=Q^B_t(s,a) - Q^A_t(s,a)$ as an SA algorithm as follows: \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a)), \end{equation*} where $h_t$ is a contractive mapping of $u_t^{BA}$, and $z_t$ is a martingale difference sequence. \textit{Step 2 (see \Cref{lem:uBAsanwich})}: We derive lower and upper bounds on $u_t^{BA}$ via two sequences $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ as follows: \begin{equation*} -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) \leq u_t^{BA}(s,a) \leq X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a), \end{equation*} for any $t\geq \hat{\tau}_q$, state-action pair $(s,a)\in\mathcal{S}\times\mathcal{A}$, and $q\geq0$, where $X_{t;\hat{\tau}_q}$ is deterministic and driven by $G_q$, and $Z_{t;\hat{\tau}_q}$ is stochastic and driven by the martingale difference sequence $z_t$. \textit{Step 3 (see \Cref{lem:Xt} and \Cref{lem:ZlDiff})}: We block-wisely bound $u_t^{BA}(s,a)$ using induction arguments. Namely, we prove that $\norm{u_t^{BA}}\leq G_q$ holds for $t\in[\hat{\tau}_q,\hat{\tau}_{q+1})$ and all $q\geq0$. For the base case $q=0$, $\norm{u_t^{BA}}\leq G_0$ clearly holds. For the induction step, given any state-action pair $(s,a)$, we assume that $\lvert u_t^{BA}(s,a)\rvert\leq G_q$ holds for $t\in[\hat{\tau}_q, \hat{\tau}_{q+1})$, and then show that $\lvert u_t^{BA}(s,a)\rvert\leq G_{q+1}$ holds for $t\in[\hat{\tau}_{q+1}, \hat{\tau}_{q+2})$, which follows by bounding $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ separately in \Cref{lem:Xt} and \Cref{lem:ZlDiff}, respectively. \textit{Step 4 (see \Cref{subsec:proofProp1})}: We apply the union bound (\Cref{lem:unionBound}) to obtain the block-wise bound for all state-action pairs and all blocks. \textbf{Part II: Conditionally bounding $\norm{Q^A_t - Q^*}$ (see~\Cref{lem:conditionalBound})}.
We upper bound $\norm{Q^A_t - Q^*} $ by a decreasing sequence $\{D_k\}_{k\geq 0}$ block-wisely conditioned on the following two events: \begin{itemize} \item []Event $E$: $\norm{u^{BA}_t}$ is upper bounded properly (see \eqref{eq:eventA} in~\Cref{subsec:PartII}), and \item []Event $F$: there are sufficient updates of $Q^A_t$ in each block (see~\eqref{eq:eventB} in~\Cref{subsec:PartII}). \end{itemize} The proof of~\Cref{lem:conditionalBound} consists of the following four steps. \textit{Step 1 (see~\Cref{lem:couple})}: We design a special relationship (illustrated in~\Cref{fig:DkGk}) between the block-wise bounds $\{G_q\}_{q\geq 0}$ and $\{D_k\}_{k\geq 0}$ and their block separations. \textit{Step 2 (see~\Cref{lem:residualDynamics})}: We characterize the dynamics of the iteration residual $r_{t}(s,a):=Q^A_t(s,a) - Q^*(s,a)$ as an SA algorithm as follows: when $Q^A$ is chosen to be updated at iteration $t$, \begin{equation*} r_{t+1}(s,a) \!=\! (1\!-\!\alpha_{t}) r_{t}(s,a) \!+\! \alpha_{t} (\mathcal T Q_{t}^A(s,a)\!-\!Q^*(s,a)) \!+\! \alpha_{t} w_{t}(s,a) \!+\! \alpha_{t}\gamma u_{t}^{BA}(s',a^*), \end{equation*} where $w_{t}(s,a)$ is the error between the Bellman operator and the sample-based empirical estimator, and is thus a martingale difference sequence, and $u_{t}^{BA}$ has been defined in Part I. \textit{Step 3 (see~\Cref{lem:rtSandwich})}: We provide upper and lower bounds on $r_t$ via two sequences $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ as follows: \begin{equation*} -Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a) \leq r_t(s,a) \leq Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a), \end{equation*} for all $t\geq \tau_k$, all state-action pairs $(s,a)\in\mathcal{S}\times\mathcal{A}$, and all $k\geq0$, where $Y_{t;\tau_k}$ is deterministic and driven by $D_k$, and $W_{t;\tau_k}$ is stochastic and driven by the martingale difference sequence $w_t$.
In particular, if $Q^A_t$ is not updated at some iteration, then the sequences $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ assume the same values from the previous iteration. \textit{Step 4 (see \Cref{lem:Yt}, \Cref{lem:WlDiff} and \Cref{subsec:proofProp2})}: Similarly to Steps 3 and 4 in Part I, we conditionally bound $\norm{r_t}\leq D_k$ for $t\in [\tau_{k}, \tau_{k+1})$ and $k\geq0$ via bounding $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ and further taking the union bound. \textbf{Part III: Bounding $\norm{Q^A_t - Q^*}$ (see \Cref{subsec:proofThm1}).} We combine the results in the first two parts and provide a high-probability bound on $\norm{r_t}$ via further probabilistic arguments, which exploit the high-probability bounds on $\mathbb P(E)$ in~\Cref{lem:Gq} and $\mathbb P(F)$ in~\Cref{lem:halfQA}. \section{Conclusion} In this paper, we provide the first finite-time results for double Q-learning, which characterize how fast double Q-learning converges under both synchronous and asynchronous implementations. For the synchronous case, we show that it achieves an $\epsilon$-accurate optimal Q-function with probability at least $1-\delta$ by taking $\Omega\left( \left( \frac{1}{(1-\gamma)^6\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|}{(1-\gamma)^7\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ 1}{(1-\gamma)^2\epsilon} \right)^{\frac{1}{1-\omega}} \right)$ iterations. A similar scaling order in $\frac{1}{1-\gamma}$ and $\frac{1}{\epsilon}$ also applies to asynchronous double Q-learning, but with extra dependence on the covering number. We develop new techniques to bound the error between two correlated stochastic processes, which can be of independent interest. \section*{Acknowledgements} The work was supported in part by the U.S. National Science Foundation under the grant CCF-1761506 and the startup fund of the Southern University of Science and Technology (SUSTech), China.
\section*{Broader Impact} Reinforcement learning has achieved great success in areas such as robotics and game playing, and thus has attracted broad interest and many potential real-world applications. Double Q-learning is a commonly used technique in deep reinforcement learning to improve the stability and speed of deep Q-learning. In this paper, we provided a fundamental analysis of the convergence rate of double Q-learning, which theoretically justifies the empirical success of double Q-learning in practice. Such a theory also provides practitioners with a desirable performance guarantee for further developing such a technique into various transferable technologies. \section{Proof of Theorem \ref{thm:asyncDQ} }\label{app:asyncThm} The main idea of this proof is similar to that of Theorem \ref{thm:syncDQ}, with further efforts to characterize the effects of asynchronous sampling. The proof also consists of three parts: (a) Part I, which analyzes the stochastic error propagation between the two Q-estimators $\norm{Q^B_t - Q^A_t }$; (b) Part II, which analyzes the error dynamics between one Q-estimator and the optimum $\norm{Q^A_t -Q^* }$ conditioned on the error event in Part I; and (c) Part III, which bounds the unconditional error $\norm{Q^A_t -Q^*}$. To proceed with the proof, we first introduce the following notion of valid iterations for any fixed state-action pair $(s,a)$. \begin{definition}\label{def:TAsa} We define $T(s,a)$ as the collection of iterations at which the state-action pair $(s,a)$ is used to update the Q-function $Q^A$ or $Q^B$, and $T^A(s, a)$ as the collection of iterations specifically updating $Q^A(s,a)$. In addition, we denote $T(s, a, t_1, t_2)$ and $T^A(s, a, t_1, t_2)$ as the sets of iterations updating $(s,a)$ and $Q^A(s,a)$ between time $t_1$ and $t_2$, respectively.
That is, \begin{align*} T(s, a, t_1, t_2) &= \left\{ t: t\in [t_1, t_2] \text{ and } t\in T(s,a) \right\},\\ T^A(s, a, t_1, t_2) &= \left\{ t: t\in [t_1, t_2] \text{ and } t\in T^A(s,a) \right\}. \end{align*} Correspondingly, the number of iterations updating $(s,a)$ between time $t_1$ and $t_2$ equals the cardinality of $T(s, a, t_1, t_2)$, which is denoted as $|T(s, a, t_1, t_2)|$. Similarly, the number of iterations updating $Q^A(s,a)$ between time $t_1$ and $t_2$ is denoted as $|T^A(s, a, t_1, t_2)|$. \end{definition} Given Assumption \ref{asp:covering}, we can obtain the following properties of the quantities defined above. \begin{lemma}\label{prop:sa2L} It always holds that $|T(s,a,t_1,t_2)|\leq t_2-t_1+1$ and $|T^A(s,a,t_1,t_2)|\leq t_2-t_1+1$. In addition, suppose that Assumption \ref{asp:covering} holds. Then we have $|T(s,a,t,t+2kL-1)|\geq k$ for any $t\geq 0$. \end{lemma} \begin{proof} In any $2L$ consecutive iterations of Algorithm \ref{alg:doubleQ}, either $Q^A$ or $Q^B$ is updated at least $L$ times. Hence, by Assumption \ref{asp:covering}, $(s,a)$ is visited at least once in every $2L$ consecutive iterations of Algorithm \ref{alg:doubleQ}, which immediately implies this lemma. \end{proof} We now proceed with the proof in three parts. \subsection{Part I: Bounding $\norm{Q^B_t-Q^A_t}$} \label{subsec:PartIThm2} We upper bound $\norm{Q^B_t - Q^A_t}$ block-wisely using a decreasing sequence $\{G_q\}_{q\geq0}$ as defined in~\Cref{lem:GqAsy} below. \begin{proposition}\label{lem:GqAsy} Fix $\epsilon>0, \kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Consider asynchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose that Assumption \ref{asp:covering} holds. Let $G_q = (1-\xi)^q G_0$ with $G_0 = V_{\max}$ and $\xi=\frac{1-\gamma}{4}$.
Let $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$ for $q \geq 1$ with $c\geq \frac{L\kappa(\ln(2+\Delta) + 1/\tau_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\tau_1^\omega)}$ and $\hat{\tau}_1$ as the finishing time of the first block satisfying \begin{equation*} \hat{\tau}_1\geq \max\left\{\left(\frac{1}{\kappa-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{128cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\ln\left(\frac{64cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}. \end{equation*} Then for any $n$ such that $G_n\geq\sigma\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\\ &\quad\geq 1- \frac{4cL(n+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right). \end{align*} \end{proposition} The proof of~\Cref{lem:GqAsy} consists of the following steps. Since the main idea of the proofs is similar to that of~\Cref{lem:Gq}, we will focus on pointing out the difference. We continue to use the notation $u^{BA}_t(s,a):=Q^B_t(s,a)-Q^A_t(s,a)$. \textbf{Step 1: Characterizing the dynamics of $u^{BA}_t$} First, we observe that when $(s,a)$ is visited at time $t$, i.e., $t\in T(s,a)$, Lemmas \ref{lem:uBAdyn} and \ref{lem:uBAsanwich} still apply. Otherwise, $u^{BA}$ is not updated. Thus, we have \begin{equation*} u_{t+1}^{BA}(s,a) = \left\{\begin{aligned} &u_t^{BA}(s,a),\quad t\notin T(s,a);\\ &(1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a),\quad t\in T(s,a), \end{aligned}\right. \end{equation*} where $F_t$ satisfies \begin{equation*} \norm{\mathbb E[F_t|\mathcal F_t]} \leq \frac{1+\gamma}{2} \norm{u_t^{BA}}. 
\end{equation*} For $t\in T(s,a)$, we rewrite the dynamics of $u_t^{BA}(s,a)$ as \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a)), \end{equation*} where $h_t(s,a) = \mathbb E[F_t(s,a)|\mathcal F_t]$ and $z_t(s,a) = F_t(s,a) - \mathbb E[F_t(s,a)|\mathcal F_t]$. In the following steps, we use induction to proceed with the proof of~\Cref{lem:GqAsy}. Given $G_q$ defined in~\Cref{lem:GqAsy}, $\norm{u_t^{BA}}\leq G_0$ holds for all $t$, and in particular for $t\in [0,\hat{\tau}_1]$. Now suppose that $\hat{\tau}_q$ satisfies $\norm{u_t^{BA}}\leq G_q$ for any $t\geq \hat{\tau}_q$. We then show that there exists $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$ such that $\norm{u_t^{BA}}\leq G_{q+1}$ for any $t\geq \hat{\tau}_{q+1}$. \textbf{Step 2: Constructing sandwich bounds} We first observe that the following sandwich bound still holds for all $t\geq \hat{\tau}_q$: \begin{equation*} -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) \leq u_t^{BA}(s,a) \leq X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a), \end{equation*} where $Z_{t;\hat{\tau}_q}(s,a)$ is defined as \begin{equation*} Z_{t+1;\hat{\tau}_q}(s,a) = \left\{\begin{aligned} & Z_{t;\hat{\tau}_q}(s,a), \quad t\notin T(s,a);\\ & (1-\alpha_t)Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a), \quad t\in T(s,a), \end{aligned} \right. \end{equation*} with the initial condition $Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = 0$, and $X_{t;\hat{\tau}_q}(s,a)$ is defined as \begin{equation*} X_{t+1;\hat{\tau}_q}(s,a) = \left\{\begin{aligned} & X_{t;\hat{\tau}_q}(s,a), \quad t\notin T(s,a);\\ & (1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q, \quad t\in T(s,a), \end{aligned} \right. \end{equation*} with $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q$ and $\gamma'=\frac{1+\gamma}{2}$. This claim can be shown by induction. The bound clearly holds for the initial case with $t=\hat{\tau}_q$.
Assume that it still holds for iteration $t$. If $t\in T(s,a)$, the proof is the same as that of~\Cref{lem:uBAsanwich}. If $t\notin T(s,a)$, since all three sequences do not change from time $t$ to time $t+1$, the sandwich bound still holds. Thus we conclude this claim. \textbf{Step 3: Bounding $X_{t;\hat{\tau}_q}(s,a)$} Next, we bound the deterministic sequence $X_{t;\hat{\tau}_q}(s,a)$. Observe that $X_{t;\hat{\tau}_q}(s,a)\leq G_q$ for any $t\geq\hat{\tau}_q$. We will next show that $X_{t;\hat{\tau}_q}(s,a) \leq \left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q$ for any $t\in [\hat{\tau}_{q+1},\hat{\tau}_{q+2})$ where $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$. Similarly to the proof of~\Cref{lem:Xt}, we still rewrite $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a)$ as $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q = \gamma' G_q + (1-\gamma')G_q := \gamma' G_q + \rho_{\hat{\tau}_q} $. However, in this case the dynamics of $X_{t;\hat{\tau}_q}(s,a)$ is different, which is represented as \begin{equation*} X_{t+1;\hat{\tau}_q}(s,a) =\left\{\begin{aligned} & X_{t;\hat{\tau}_q}(s,a), \quad t\notin T(s,a)\\ &(1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q = \gamma'G_q + (1-\alpha_t)\rho_t, \quad t\in T(s,a). \end{aligned} \right. \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$ when $t\in T(s,a)$. 
By the definition of $\rho_t$, we obtain \begin{align*} \rho_t &= \rho_{\hat{\tau}_q}\underset{i\in T(s, a, \hat{\tau}_q, t-1)}{\Pi}(1-\alpha_i) = (1-\gamma')G_q\underset{i\in T(s, a, \hat{\tau}_q, t-1)}{\Pi}(1-\alpha_i)\\ &\leq (1-\gamma')G_q\underset{i\in T(s, a, \hat{\tau}_q, \hat{\tau}_{q+1}-1)}{\Pi}\left(1-\frac{1}{i^\omega}\right) \leq (1-\gamma')G_q\prod_{i=\hat{\tau}_{q+1}-|T(s, a, \hat{\tau}_q, \hat{\tau}_{q+1}-1)|}^{\hat{\tau}_{q+1}-1}\left(1-\frac{1}{i^\omega}\right)\\ &\overset{\text{(i)}}{\leq} (1-\gamma')G_q\prod_{i=\hat{\tau}_{q+1}-\frac{c}{\kappa}\hat{\tau}_q^\omega}^{\hat{\tau}_{q+1}-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(ii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{\frac{c}{\kappa}\hat{\tau}_q^\omega-1}{(\hat{\tau}_{q+1}-1)^\omega} \right)\\ &\leq (1-\gamma')G_q\exp\left( -\frac{\frac{c}{\kappa}\hat{\tau}_q^\omega-1 }{\hat{\tau}_{q+1}^\omega} \right) = (1-\gamma')G_q\exp\left( -\frac{c}{\kappa}\left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega + \frac{1}{\hat{\tau}_{q+1}^\omega} \right)\\ &\overset{\text{(iii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{c}{\kappa}\frac{1}{1+\frac{2cL}{\kappa}} + \frac{1}{\hat{\tau}_{1}^\omega} \right), \end{align*} where (i) follows from~\Cref{prop:sa2L}, (ii) follows~\Cref{lem:prodHelp}, and (iii) follows because $\hat{\tau}_q\geq\hat{\tau}_1$ and \begin{equation*} \left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega \geq \frac{\hat{\tau}_q}{\hat{\tau}_{q+1}} = \frac{\hat{\tau}_q}{\hat{\tau}_{q} + \frac{2cL}{\kappa}\hat{\tau}_q^\omega}\geq \frac{1}{1+\frac{2cL}{\kappa}}. \end{equation*} Since $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$, we have $\ln (2+\Delta)\in (0, \kappa)$. Further, observing $\hat{\tau_1}^\omega > \frac{1}{\kappa-\ln(2+\Delta)}$, we obtain $\ln(2+\Delta) + \frac{1}{\hat{\tau_1}^\omega} \in (0,\kappa)$. 
Last, since $c\geq\frac{L\kappa(\ln(2+\Delta) + 1/\hat{\tau}_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\hat{\tau}_1^\omega)}$, we have $-\frac{c}{\kappa}\cdot\frac{1}{1+\frac{2cL}{\kappa}} + \frac{1}{\hat{\tau}_1^\omega}\leq -\ln(2+\Delta)$. Finally, combining the above observations with the fact $1-\gamma'=2\xi$, we conclude that for any $t\geq \hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2cL}{\kappa}\hat{\tau}_q^\omega$, \begin{equation*} X_{t;\hat{\tau}_q}(s,a) \leq \left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q. \end{equation*} \textbf{Step 4: Bounding $Z_{t;\hat{\tau}_q}(s,a)$} It remains to bound the stochastic sequence $Z_{t;\hat{\tau}_q}(s,a)$ by $\frac{\Delta}{2+\Delta}\xi G_q$ at epoch $q+1$. We define an auxiliary sequence $\{Z_{t;\hat{\tau}_q}^l (s,a)\}$ (which is different from that in \eqref{eq:Zl}) as: \begin{equation*} Z^l_{t;\hat{\tau}_q}(s,a) = \sum_{i\in T(s, a, \hat{\tau}_q, t-1)}\alpha_i\underset{j\in T(s, a, i+1, t-1)}{\Pi}(1-\alpha_j) z_i(s,a). \end{equation*} Following the same arguments as the proof of Lemma \ref{lem:ZlDiff}, we conclude that $\{Z_{t;\hat{\tau}_q}^l (s,a)\}$ is a martingale sequence and satisfies \begin{equation*} \lvert Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \rvert = \alpha_{\hat{\tau}_q+l} |z_{\hat{\tau}_q + l}(s,a)|\leq \frac{2V_{\max}}{\hat{\tau}_q^\omega}. \end{equation*} In addition, note that \begin{align*} Z_{t;\hat{\tau}_q}(s,a) &= Z_{t;\hat{\tau}_q}(s,a) - Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a)\\ &= \sum_{l:\hat{\tau}_q+l-1\in T(s, a, \hat{\tau}_q, t-1)} (Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a)) + Z^{\min(T(s, a, \hat{\tau}_q, t-1))}_{t;\hat{\tau}_q}(s,a).
\end{align*} Then we apply Azuma's inequality in Lemma \ref{lem:azuma} and obtain \begin{align*} &\mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \hat{\epsilon}| t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\leq 2\exp\left( \frac{-\hat{\epsilon}^2}{2\underset{l:\hat{\tau}_q+l-1\in T(s, a, \hat{\tau}_q, t-1)}{\sum} (Z^l_{t;\hat{\tau}_q}(s,a) \!-\! Z^{l-1}_{t;\hat{\tau}_q}(s,a))^2 \!+\! 2\left(Z^{\min(T(s, a, \hat{\tau}_q, t-1))}_{t;\hat{\tau}_q}(s,a)\right)^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(|T(s, a, \hat{\tau}_q, t-1)|+1) V_{\max}^2} \right) \overset{\text{(i)}}{\leq} 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(t-\hat{\tau}_q) V_{\max}^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(\hat{\tau}_{q+2}-\hat{\tau}_q) V_{\max}^2} \right) = 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8 \left(\frac{2cL}{\kappa}\hat{\tau}_{q+1}^\omega+\frac{2cL}{\kappa}\hat{\tau}_q^\omega\right) V_{\max}^2} \right)\\ &\quad = 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8 \left(\frac{2cL}{\kappa}(\hat{\tau}_{q}+\frac{2cL}{\kappa}\hat{\tau}_q^\omega)^\omega+\frac{2cL}{\kappa}\hat{\tau}_q^\omega\right) V_{\max}^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\kappa^2\hat{\epsilon}^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa) V_{\max}^2} \right), \end{align*} where (i) follows from~\Cref{prop:sa2L}.
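The application of Azuma's inequality here can be sanity-checked on a synthetic bounded-difference martingale. The snippet below is purely illustrative (uniform increments, the specific constants, and the function name are our own choices, not quantities from the proof): it compares the empirical tail probability with the Azuma bound $2\exp(-\hat{\epsilon}^2/(2nc^2))$ for $n$ steps with increments bounded by $c$.

```python
import math
import random

def azuma_tail(n_steps, c, eps, n_trials, seed=0):
    """Empirical probability that a martingale with increments uniform
    in [-c, c] exceeds eps in absolute value after n_steps, together
    with the Azuma bound 2 * exp(-eps^2 / (2 * n_steps * c^2))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        m = sum(rng.uniform(-c, c) for _ in range(n_steps))
        if abs(m) > eps:
            hits += 1
    empirical = hits / n_trials
    bound = 2 * math.exp(-eps ** 2 / (2 * n_steps * c ** 2))
    return empirical, bound
```

As expected, the empirical tail frequency sits below the Azuma bound, which is loose but dimension-free in the same way it is used in the displayed chain of inequalities.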
\textbf{Step 5: Taking union over all blocks} Finally, using the union bound of~\Cref{lem:unionBound} yields \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\xi G_q \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A|(\hat{\tau}_{q+2} - \hat{\tau}_{q+1}) \cdot \mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \frac{\Delta}{2+\Delta}\xi G_q \Big\rvert t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\hat{\tau}_{q+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 G_q^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 G_q^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^{\omega}}{32cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\sum_{q=0}^n |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(iii)}}{\geq} 1- \frac{4cL(n+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 
\sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from $G_q \geq G_n \geq \sigma\epsilon $, (ii) follows from~\Cref{lem:tauHelp} by substituting $a=\frac{64cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }, b=1$ and observing that \begin{align*} \hat{\tau}_q^{\omega}&\geq\hat{\tau}_1^{\omega}\geq \frac{128cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\ln\left(\frac{64cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\xi^2\sigma^2\epsilon^2 }\right)=2ab\ln ab, \end{align*} and (iii) follows from $\hat{\tau}_q \geq \hat{\tau}_1$. \subsection{Part II: Conditionally bounding $\norm{Q^A_t-Q^*}$} \label{subsec:PartIIThm2} We upper bound $\norm{Q^A_t-Q^*}$ block-wise by a decreasing sequence $\{D_k\}_{k\geq0}$ conditioned on the following two events: fix a positive integer $m$, \begin{align} G & = \left\{ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t }\leq \sigma D_{k+1} \right\}, \label{eq:eventE} \\ H & = \{ \forall k\in [1,m+1], I^A_k\geq cL\tau_{k}^\omega \}, \label{eq:eventF} \end{align} where $I^A_k$ denotes the number of iterations updating $Q^A$ at epoch $k$, $\tau_k$ is the starting iteration index of the $(k+1)$th block, and $\omega$ is the parameter of the polynomial learning rate. Roughly, Event $G$ requires that the difference between the two Q-function estimators is bounded appropriately, and Event $H$ requires that $Q^A$ is sufficiently updated in each epoch. Again, we will design $\{D_k\}_{k\geq 0}$ in a way such that the occurrence of Event $G$ can be implied from the event that $\norm{u^{BA}_t}$ is bounded by $\{G_q\}_{q\geq 0}$ (see~\Cref{lem:coupleAsy} below). A lower bound of the probability for Event $H$ to hold is characterized in~\Cref{lem:halfQAAsy} in Part III.
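To build intuition for the block structure $\tau_{k+1} = \tau_k + \frac{2cL}{\kappa}\tau_k^\omega$ appearing in Events $G$ and $H$, the following Python sketch generates the epoch boundaries for arbitrary illustrative values of $c$, $L$, $\kappa$, $\omega$, and $\tau_1$ (these numbers are not from the analysis) and checks the ratio bound $\tau_k/\tau_{k+1}\geq 1/(1+2cL/\kappa)$ that the later steps rely on.

```python
def epoch_boundaries(tau1, c, L, kappa, omega, m):
    """Generate tau_1, ..., tau_{m+1} via tau_{k+1} = tau_k + (2cL/kappa) * tau_k**omega."""
    taus = [tau1]
    for _ in range(m):
        taus.append(taus[-1] + (2.0 * c * L / kappa) * taus[-1] ** omega)
    return taus

# Illustrative (hypothetical) parameter choices; tau1 >= 1 so that tau_k**omega <= tau_k.
c, L, kappa, omega, tau1 = 1.0, 2.0, 0.9, 0.8, 100.0
taus = epoch_boundaries(tau1, c, L, kappa, omega, m=10)

ratio_floor = 1.0 / (1.0 + 2.0 * c * L / kappa)
for k in range(len(taus) - 1):
    assert taus[k] < taus[k + 1]                # block boundaries strictly increase
    assert taus[k] / taus[k + 1] >= ratio_floor  # ratio bound used in the rho_t estimate
```

Since the increment $\frac{2cL}{\kappa}\tau_k^\omega$ grows sublinearly in $\tau_k$, the blocks lengthen polynomially, which is what allows the per-block concentration terms to shrink as $\exp(-\Theta(\tau_k^\omega))$.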
\begin{proposition}\label{lem:conditionalBoundAsy} Fix $\epsilon>0, \kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Consider asynchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $\{G_q\}, \{\hat{\tau}_q\}$ be as defined in~\Cref{lem:GqAsy}. Define $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Let $\tau_k=\hat{\tau}_k$ for $k\geq0$. Suppose that $c\geq \frac{L(\ln(2+\Delta) + 1/\tau_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\tau_1^\omega)}$ and $\tau_1 = \hat{\tau}_1$ as the finishing time of the first epoch satisfies \begin{equation*} \tau_1\geq \max\left\{\left(\frac{1}{\kappa-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{32cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln \left(\frac{16cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}. \end{equation*} Then for any $m$ such that $D_m\geq\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\\ &\quad\geq 1 - \frac{4cL(m+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( 1-\frac{2}{e} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right). \end{align*} \end{proposition} Recall that in the proof of~\Cref{lem:conditionalBound}, $Q^A$ is not updated at each iteration and thus we introduced notations $T^A$ and $T^A(t_1,t_2)$ in Definition \ref{def:TA} to capture the convergence of the error $\norm{Q^A -Q^* }$. In this proof, the only difference is that when choosing to update $Q^A$, only one $(s,a)$-pair is visited. 
Therefore, the proof of~\Cref{lem:conditionalBoundAsy} is similar to that of~\Cref{lem:conditionalBound}, where most of the arguments simply substitute $T^A, T^A(t_1,t_2)$ in the proof of~\Cref{lem:conditionalBound} by $T^A(s,a), T^A(s,a,t_1,t_2)$ in Definition \ref{def:TAsa}, respectively. Certain bounds are affected by such substitutions. In the following, we proceed with the proof of~\Cref{lem:conditionalBoundAsy} in five steps, focusing on the differences from the proof of~\Cref{lem:conditionalBound}. More details can be found in~\Cref{subsec:PartII}. \textbf{Step 1: Coupling $\{D_k\}_{k\geq0}$ and $\{G_q\}_{q\geq0}$} We establish the relationship between $\{D_k\}_{k\geq0}$ and $\{G_q\}_{q\geq0}$ in the same way as Lemma \ref{lem:couple}. For the convenience of reference, we restate \Cref{lem:couple} in the following. \begin{lemma}\label{lem:coupleAsy} Let $\{G_q\}$ be defined in~\Cref{lem:GqAsy}, and let $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Then we have \begin{align*} &\mathbb P\left[ \forall(s,a), \forall q\in [0,m], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right]\\ &\quad\leq \mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t }\leq \sigma D_{k+1} \right], \end{align*} given that $\tau_k = \hat{\tau}_{k }$. \end{lemma} \textbf{Step 2: Constructing sandwich bounds} Let $r_t(s,a)=Q^A_t(s,a)-Q^*(s,a)$ and $\tau_k$ be such that $\norm{r_t}\leq D_k$ for all $t\geq\tau_k$.
The requirement of Event $G$ yields \begin{equation*} -Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a) \leq r_t(s,a) \leq Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a), \end{equation*} where $W_{t;\tau_k}(s,a)$ is defined as \begin{equation*} W_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &W_{t;\tau_k}(s,a), \quad t\notin T^A(s,a);\\ &(1-\alpha_t)W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a), \quad t\in T^A(s,a), \end{aligned}\right. \end{equation*} with $w_{t}(s,a) = \mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a)$ and $W_{\tau_k;\tau_k}(s,a) = 0$, and $Y_{t;\tau_k}(s,a)$ is given by \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &Y_{t;\tau_k}(s,a), \quad t\notin T^A(s,a);\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k, \quad t\in T^A(s,a), \end{aligned}\right. \end{equation*} with $Y_{\tau_k;\tau_k}(s,a) = D_k$ and $\gamma''=\gamma(1+\sigma)$. \textbf{Step 3: Bounding $Y_{t;\tau_k}(s,a)$} We first bound $Y_{t;\tau_k}(s,a)$. Observe that $Y_{t;\tau_k}(s,a) \leq D_k$ for any $t \geq \tau_k$. We will bound $Y_{t;\tau_k}(s,a)$ by $\left(\gamma'' + \frac{2}{2+\Delta}\beta\right)D_k$ for block $k+1$. We use a similar representation of $Y_{t;\tau_k}(s,a)$ as in the proof of Lemma \ref{lem:Yt}, which is given by \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} & Y_{t;\tau_k}(s,a), \quad t\notin T^A(s,a)\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k = \gamma''D_k + (1-\alpha_t)\rho_t, \quad t\in T^A(s,a) \end{aligned}\right. \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$ for $t\in T^A(s,a)$.
By the definition of $\rho_t$, we obtain \begin{align*} \rho_t &= \rho_{\tau_k}\prod_{i\in T^A(s, a, \tau_k, t-1)}(1-\alpha_i) = (1-\gamma'')D_k\prod_{i\in T^A(s, a, \tau_k, t-1)}(1-\alpha_i)\\ &= (1-\gamma'')D_k\prod_{i\in T^A(s, a, \tau_k, t-1)}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(i)}}{\leq} (1-\gamma'')D_k\prod_{i\in T^A(s, a, \tau_k, \tau_{k+1}-1)}\left(1-\frac{1}{i^\omega}\right)\\ &\overset{\text{(ii)}}{\leq} (1-\gamma'')D_k\prod_{i=\tau_{k+1}-c\tau_k^\omega}^{\tau_{k+1}-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(iii)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{(\tau_{k+1}-1)^\omega} \right) \nonumber\\ &\leq (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{\tau_{k+1}^\omega} \right) = (1-\gamma'')D_k\exp\left( -c\left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega + \frac{1}{\tau_{k+1}^\omega} \right) \nonumber\\ &\overset{\text{(iv)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c}{1+\frac{2cL}{\kappa}} + \frac{1}{\tau_{1}^\omega} \right), \nonumber \end{align*} where (i) follows because $\alpha_i<1$ and $t\geq \tau_{k+1}$, (ii) follows from Proposition \ref{prop:sa2L} and the requirement of event $H$, (iii) follows from Lemma \ref{lem:prodHelp}, and (iv) holds because $\tau_k\geq\tau_1$ and \begin{equation*} \left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega \geq \frac{\tau_k}{\tau_{k+1}} = \frac{\tau_k}{\tau_{k} + \frac{2cL}{\kappa}\tau_k^\omega}\geq \frac{1}{1+\frac{2cL}{\kappa}}. \end{equation*} Since $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$, we have $\ln (2+\Delta)\in (0, \kappa)$. Further, observing $\hat{\tau}_1^\omega > \frac{1}{\kappa-\ln(2+\Delta)}$, we obtain $\ln(2+\Delta) + \frac{1}{\hat{\tau}_1^\omega} \in (0,\kappa)$. Last, since $c\geq\frac{L(\ln(2+\Delta) + 1/\hat{\tau}_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\hat{\tau}_1^\omega)}$, we have $-\frac{c}{1+\frac{2cL}{\kappa}} + \frac{1}{\hat{\tau}_1^\omega}\leq -\ln(2+\Delta)$. Then, we have $\rho_t\leq \frac{1-\gamma''}{2+\Delta}D_k$.
Thus we conclude that for any $t\in [\tau_{k+1}, \tau_{k+2})$, \begin{equation*} Y_{t;\tau_k}(s,a) \leq \left(\gamma'' + \frac{2}{2+\Delta}\beta\right)D_k. \end{equation*} \textbf{Step 4: Bounding $W_{t;\tau_k}(s,a)$} It remains to show that $|W_{t;\tau_k}(s,a)|\leq \frac{\Delta}{2+\Delta}\beta D_k$ for $t\in [\tau_{k+1},\tau_{k+2})$. Similarly to \Cref{subsec:proofProp2}, we define a new sequence $\{W_{t;\tau_k}^l(s,a)\}$ as \begin{equation*} W^l_{t;\tau_k}(s,a) = \sum_{i\in T^A(s, a, \tau_k, \tau_k+l)} \alpha_i\underset{j\in T^A(s, a, i+1, t-1)}{\Pi} (1-\alpha_j)w_i(s,a). \end{equation*} The same arguments as in the proof of Lemma \ref{lem:WlDiff} yield \begin{equation*} \lvert W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \rvert \leq \frac{V_{\max}}{\tau_k^\omega}. \end{equation*} If we fix $\tilde{\epsilon}>0$, then for any $t\in[\tau_{k+1},\tau_{k+2})$ we have \begin{align*} &\mathbb P\left[ |W_{t;\tau_k}(s,a)|>\tilde{\epsilon} | t\in[\tau_{k+1},\tau_{k+2}),G,H \right]\\ &\leq 2\exp\left( \frac{-\tilde{\epsilon}^2}{2\underset{l:\tau_k+l-1\in T^A(s,a,\tau_k, t\!-\!1)}{\sum}\!\left( W^l_{t;\tau_k}(s,a) \!-\! W^{l-1}_{t;\tau_k}(s,a) \right)^2 \!+\!
2(W^{\min(T^A(s,a,\tau_k, t\!-\!1))}_{t;\tau_k}(s,a))^2 } \right)\\ &\leq 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(|T^A(s,a,\tau_k,t-1)|+1)V_{\max}^2} \right) \overset{\text{(i)}}{\leq} 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(t-\tau_k)V_{\max}^2} \right)\\ &\leq 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(\tau_{k+2}-\tau_k)V_{\max}^2} \right) \overset{\text{(ii)}}{\leq} 2\exp\left( -\frac{\kappa^2\tilde{\epsilon}^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from Proposition \ref{prop:sa2L} and (ii) holds because \begin{equation*} \tau_{k+2} - \tau_k = \frac{2cL}{\kappa}\tau_{k+1}^\omega + \frac{2cL}{\kappa}\tau_k^\omega = \frac{2cL}{\kappa}\left( \tau_k + \frac{2cL}{\kappa}\tau_k^\omega \right)^\omega + \frac{2cL}{\kappa}\tau_k^\omega \leq \frac{4cL(cL+\kappa)}{\kappa^2}\tau_k^\omega. \end{equation*} \textbf{Step 5: Taking union over all blocks} Applying the union bound in Lemma \ref{lem:unionBound}, we obtain \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} | G,H\right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \lvert W_{t;\tau_k}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\beta D_k|G,H \right]\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A|(\tau_{k+2}-\tau_{k+1}) \cdot \mathbb P\left[ \lvert W_{t;\tau_k}(s,a) \rvert > \frac{\Delta}{2+\Delta}\beta D_k \Big\rvert t\in[\tau_{k+1},\tau_{k+2}),G,H \right]\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\tau_{k+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\tau_{k}^\omega \cdot
2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\tau_{k}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{8cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4cL}{\kappa}\left(1+\frac{2cL}{\kappa}\right)\sum_{k=0}^m |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \frac{4cL(m+1)}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows because $D_k\geq D_m\geq \epsilon$, and (ii) follows from Lemma \ref{lem:tauHelp} by substituting $a=\frac{16cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }, b=1$ and observing that \begin{align*} \tau_k^{\omega}&\geq\hat{\tau}_1^{\omega}\geq \frac{32cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln\left(\frac{16cL(cL+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) = 2ab\ln ab. \end{align*} \subsection{Part III: Bounding $\norm{Q^A_t-Q^*}$} In order to obtain the unconditional high-probability bound on $\norm{Q^A_t-Q^*}$, we first characterize a lower bound on the probability of Event $H$. Note that the probability of Event $G$ is lower bounded in~\Cref{lem:GqAsy}. \begin{lemma}\label{lem:halfQAAsy} Let the sequence $\tau_k$ be the same as given in Lemma \ref{lem:coupleAsy}, i.e., $\tau_{k+1} = \tau_k + \frac{2cL}{\kappa}\tau_k^\omega$ for $k\geq 1$.
Define $I^A_k$ as the number of iterations updating $Q^A$ at epoch $k$. Then we have \begin{equation*} \mathbb P\left[\forall k\in [1,m], I^A_k\geq cL\tau_{k}^\omega \right] \geq 1- m \exp\left( -\frac{(1-\kappa)^2cL\tau_1^\omega}{\kappa} \right). \end{equation*} \end{lemma} \begin{proof} We use the same idea as the proof of Lemma \ref{lem:halfQA}. Since we only focus on the blocks with $k\geq 1$, we have $I^A_k\sim \text{Binomial}\left( \frac{2cL}{\kappa}\tau_k^\omega, 0.5 \right)$ in this case. Thus the tail bound of a binomial random variable gives \begin{equation*} \mathbb P \left[I^A_k\leq \frac{\kappa}{2}\cdot \frac{2cL}{\kappa}\tau_k^\omega\right] \leq \exp\left( -\frac{(1-\kappa)^2 cL\tau_k^\omega}{\kappa} \right). \end{equation*} Then by the union bound, we have \begin{align*} \mathbb P \left[\forall k\in[1,m], I^A_k\geq cL\tau_k^\omega\right] &=\mathbb P \left[\forall k\in[1,m], I^A_k\geq \frac{\kappa}{2}\cdot \frac{2cL}{\kappa}\tau_k^\omega\right]\\ &\geq 1 - \sum_{k=1}^m\exp\left( -\frac{(1-\kappa)^2 cL\tau_k^\omega}{\kappa} \right)\\ &\geq 1- m \exp\left( -\frac{(1-\kappa)^2 cL\tau_1^\omega}{\kappa} \right). \end{align*} \end{proof} Following from Lemma \ref{lem:totalIter}, it suffices to determine the starting time at epoch $m^*=\left\lceil \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$. This can be done by applying Lemma \ref{lem:iteration} once $\hat{\tau}_1$ is determined. Now we are ready to prove the main result of~\Cref{thm:asyncDQ}. By the definition of $m^*$, we know $D_{m^*-1}\geq\epsilon, G_{m^*-1}\geq \sigma\epsilon$.
Then we obtain \begin{align*} &\mathbb P(\norm{Q^A_{\tau_{m^*}} - Q^* } \leq \epsilon)\\ & \geq\mathbb P\left[\forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} \right]\\ & \geq \mathbb P\left[ \forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\cdot\mathbb P(G\cap H)\\ & \geq \mathbb P\left[ \forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\\ &\quad \cdot (\mathbb P(G)+\mathbb P(H)-1)\\ &\overset{\text{(i)}}{\geq}\mathbb P\left[ \forall k\in [0,m^* -1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |G,H \right]\\ &\quad\cdot \left(\mathbb P\left[ \forall q\in [0, m^* -1], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right] + \mathbb P(H) - 1\right)\\ &\overset{\text{(ii)}}{\geq}\left[ 1 - \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right) \right]\\ &\quad\cdot\left[ 1- \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right)\right.\\ &\quad\quad \left.- m^*\! \exp\!\left( -\frac{(1 -\kappa)^2 cL\hat{\tau}_1^{\omega}}{\kappa} \right) \right]\\ &\geq 1- \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2} \right)\\ &\quad - \!\frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2} \right)\! -\!
m^* \exp\left( -\frac{(1\!-\!\kappa)^2 cL\hat{\tau}_1^{\omega}}{\kappa} \right)\\ &\overset{\text{(iii)}}{\geq} 1- \frac{12cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64 cL(cL+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from~\Cref{lem:coupleAsy}, (ii) follows from Propositions \ref{lem:GqAsy} and \ref{lem:conditionalBoundAsy} and (iii) holds due to the fact that \begin{align*} \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| = \max&\left\{ \frac{4cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A|, m^* \right\},\\ \frac{\kappa^2(1\!-\!\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2}\!\leq\! \min&\left\{ \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\beta^2 \epsilon^2\hat{\tau}_1^{\omega}}{16cL(cL+\kappa)V_{\max}^2}, \frac{(1\!-\!\kappa)^2\hat{\tau}_1^{\omega}}{\kappa}, \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64cL(cL+\kappa)V_{\max}^2}\right\}. \end{align*} By setting \begin{equation*} 1- \frac{12cLm^*}{\kappa}\left(1+\frac{2cL}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64 cL(cL+\kappa)V_{\max}^2} \right) \geq 1-\delta, \end{equation*} we obtain \begin{equation*} \hat{\tau}_1 \geq \left( \frac{64 cL(cL+\kappa)V_{\max}^2}{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2}\ln \frac{12m^*|\mathcal S||\mathcal A|cL(2cL+\kappa)}{\kappa^2\delta} \right)^{\frac{1}{\omega}}. 
\end{equation*} Combining with the requirement of $\hat{\tau}_1$ in Propositions \ref{lem:GqAsy} and \ref{lem:conditionalBoundAsy}, we can choose \begin{equation*} \hat{\tau}_1 = \Theta\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{ m^*|\mathcal S||\mathcal A|L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} \right). \end{equation*} Finally, applying $m^*=\left\lceil\frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$ and Lemma \ref{lem:iteration}, we conclude that it suffices to let \begin{align*} T&=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{m^*|\mathcal S||\mathcal A|L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{2cL}{\kappa}\frac{1}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4V_{\max}^2 \ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{2cL}{\kappa}\frac{1}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left(\frac{ L^4V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|L^4V_{\max}^2 }{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{L^2}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right) \end{align*} to attain an $\epsilon$-accurate Q-estimator. \section{Illustrative Numerical Example} \section{Proof of Lemma \ref{lem:uniformBound}} \label{sec:proofOfLemma1} We prove \Cref{lem:uniformBound} by induction. First, the base case holds, i.e., $\norm{Q^A_1}\leq \frac{R_{\max}}{1-\gamma} = \frac{V_{\max}}{2}, \norm{Q^B_1}\leq \frac{V_{\max}}{2}$. (In practice, the algorithm is usually initialized with $Q^A_1=Q^B_1=0$.)
Next, we assume that $\norm{Q^A_t}\leq \frac{V_{\max}}{2}, \norm{Q^B_t}\leq \frac{V_{\max}}{2}$. It remains to show that such conditions still hold for $t+1$. Observe that \begin{align*} \norm{Q^A_{t+1}(s,a)} &= \norm{ (1-\alpha_t)Q^A_{t}(s,a) + \alpha_t\left( R_t+\gamma Q^{B}_t\big(s',\underset{a'\in U(s')}{\arg\max}Q^A_t(s',a')\big) \right) } \\ &\leq (1-\alpha_t)\norm{Q^A_{t}} + \alpha_t\norm{R_t} + \alpha_t\gamma\norm{Q^{B}_t}\\ &\leq (1-\alpha_t)\frac{R_{\max}}{1-\gamma} + \alpha_t R_{\max} + \frac{\alpha_t\gamma R_{\max}}{1-\gamma}\\ &= \frac{R_{\max}}{1-\gamma} = \frac{V_{\max}}{2}. \end{align*} Similarly, we can show $\norm{Q^B_{t+1}(s,a)}\leq \frac{V_{\max}}{2}$. This completes the proof. \section{Proof of Theorem \ref{thm:syncDQ}} \label{sec:proofThm1} In this appendix, we will provide a detailed proof of~\Cref{thm:syncDQ}. Our proof includes: (a) Part I which analyzes the stochastic error propagation between the two Q-estimators $\norm{Q^B_t - Q^A_t }$; (b) Part II which analyzes the error dynamics between one Q-estimator and the optimum $\norm{Q^A_t -Q^* }$ conditioned on the error event in Part I; and (c) Part III which bounds the unconditional error $\norm{Q^A_t -Q^* }$. We describe each of the three parts in more detail below. \subsection{Part I: Bounding $\norm{Q^B_t - Q^A_t }$} \label{subsec:PartI} The main idea is to upper bound $\norm{Q^B_t - Q^A_t }$ by a decreasing sequence $\{G_q\}_{q\geq0}$ block-wise with high probability, where each block or epoch $q$ (with $q\geq0$) is defined by $t\in[\hat{\tau}_q, \hat{\tau}_{q+1})$. \begin{proposition} \label{lem:Gq} Fix $\epsilon>0, \kappa\in(0,1), \sigma\in(0,1)$ and $\Delta\in(0, e-2)$. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $G_q = (1-\xi)^q G_0$ with $G_0 = V_{\max}$ and $\xi=\frac{1-\gamma}{4}$.
Let $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2c}{\kappa}\hat{\tau}_q^\omega$ for $q \geq 1$ with $c\geq \frac{\ln(2+\Delta)+1/\hat{\tau}_1^\omega}{1-\ln(2+\Delta)-1/\hat{\tau}_1^\omega}$ and $\hat{\tau}_1$ as the finishing time of the first epoch satisfying \begin{equation*} \hat{\tau}_1\geq \max\left\{\left(\frac{1}{1-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{128c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\ln\left(\frac{64c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}. \end{equation*} Then for any $n$ such that $G_n\geq\sigma\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right]\\ &\quad\geq 1- \frac{4c(n+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right). \end{align*} \end{proposition} The proof of Proposition \ref{lem:Gq} consists of the following four steps. \subsubsection{Step 1: Characterizing the dynamics of $Q_t^B(s,a) - Q_t^A(s,a)$ } We first characterize the dynamics of $u_t^{BA}(s,a):=Q_t^B(s,a) - Q_t^A(s,a)$ as a stochastic approximation (SA) algorithm in this step. \begin{lemma}\label{lem:uBAdyn} Consider double Q-learning in Algorithm \ref{alg:doubleQ}. Then we have \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a), \end{equation*} where \begin{equation*} F_t(s,a) = \left\{\begin{aligned} &Q^B_t(s,a) - R_t - \gamma Q^B_t(s_{t+1},a^*),\quad\text{w.p. 1/2} \\ &R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a),\quad\text{w.p. 1/2}. \end{aligned} \right. \end{equation*} In addition, $F_t$ satisfies \begin{equation*} \norm{\mathbb E[F_t|\mathcal F_t]} \leq \frac{1+\gamma}{2} \norm{u_t^{BA} }. 
\end{equation*} \end{lemma} \begin{proof} Algorithm \ref{alg:doubleQ} indicates that at each time, either $Q^A$ or $Q^B$ is updated with equal probability. When updating $Q^A$ at time $t$, for each $(s,a)$ we have \begin{align*} u_{t+1}^{BA}(s,a) &= Q^B_{t+1}(s,a) - Q^A_{t+1}(s,a)\\ &= Q^B_t(s,a) - (Q^A_t(s,a) + \alpha_t (R_t + \gamma Q^B_t(s_{t+1},a^*) - Q^A_t(s,a)))\\ &= (1-\alpha_t) Q^B_t(s,a) - ( (1-\alpha_t)Q^A_t(s,a) + \alpha_t (R_t + \gamma Q^B_t(s_{t+1},a^*) - Q^B_t(s,a)) )\\ &= (1-\alpha_t) u_t^{BA}(s,a) + \alpha_t(Q^B_t(s,a) - R_t - \gamma Q^B_t(s_{t+1},a^*) ). \end{align*} Similarly, when updating $Q^B$, we have \begin{align*} u_{t+1}^{BA}(s,a) &= Q^B_{t+1}(s,a) - Q^A_{t+1}(s,a)\\ &= (Q^B_t(s,a) + \alpha_t (R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^B_t(s,a))) - Q^A_t(s,a)\\ &= (1-\alpha_t) Q^B_t(s,a) + ( \alpha_t (R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a)) - (1-\alpha_t)Q^A_t(s,a) )\\ &= (1-\alpha_t) u_t^{BA}(s,a) + \alpha_t(R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a)). \end{align*} Therefore, we can rewrite the dynamics of $u_t^{BA}$ as $u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a)$, where \begin{equation*} F_t(s,a) = \left\{\begin{aligned} &Q^B_t(s,a) - R_t - \gamma Q^B_t(s_{t+1},a^*),\quad\text{w.p. 1/2} \\ &R_t + \gamma Q^A_t(s_{t+1},b^*) - Q^A_t(s,a),\quad\text{w.p. 1/2}. \end{aligned} \right. \end{equation*} Thus, we have \begin{align} \mathbb E&[F_t(s,a)|\mathcal F_t] \nonumber\\ &= \frac{1}{2}\left( Q^B_t(s,a) - \underset{s_{t+1}}{\mathbb E}\left[R_{s,a}^{s'} + \gamma Q^B_t(s_{t+1},a^*)\right] \right) + \frac{1}{2}\left( \underset{s_{t+1}}{\mathbb E}\left[R_{s,a}^{s'} + \gamma Q^A_t(s_{t+1},b^*)\right] - Q^A_t(s,a) \right)\nonumber\\ &= \frac{1}{2} (Q^B_t(s,a) - Q^A_t(s,a)) + \frac{\gamma}{2} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\nonumber\\ &= \frac{1}{2} u_t^{BA}(s,a) + \frac{\gamma}{2} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right].
\label{eq:pf1LemGq} \end{align} Next, we bound $\underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]$. First, consider the case when $\underset{s_{t+1}}{\mathbb E} Q^A_t(s_{t+1},b^*) \geq \underset{s_{t+1}}{\mathbb E} Q^B_t(s_{t+1},a^*)$. Then we have \begin{align*} \left| \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right] \right| &= \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\\ &\overset{\text{(i)}}{\leq} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},a^*) - Q^B_t(s_{t+1},a^*) \right]\\ &\leq \norm{u_t^{BA}}, \end{align*} where (i) follows from the definition of $a^*$ in Algorithm \ref{alg:doubleQ}. Similarly, if $\underset{s_{t+1}}{\mathbb E} Q^A_t(s_{t+1},b^*) < \underset{s_{t+1}}{\mathbb E} Q^B_t(s_{t+1},a^*)$, we have \begin{align*} \left| \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right] \right| &= \underset{s_{t+1}}{\mathbb E}\left[ Q^B_t(s_{t+1},a^*) - Q^A_t(s_{t+1},b^*) \right]\\ &\overset{\text{(i)}}{\leq} \underset{s_{t+1}}{\mathbb E}\left[ Q^B_t(s_{t+1},b^*) - Q^A_t(s_{t+1},b^*) \right]\\ &\leq \norm{u_t^{BA}}, \end{align*} where (i) follows from the definition of $b^*$. Thus we can conclude that \begin{equation*} \left| \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right] \right| \leq \norm{u_t^{BA}}. \end{equation*} Then, we continue to bound \eqref{eq:pf1LemGq}, and obtain \begin{align*} \left|\mathbb E[F_t(s,a)|\mathcal F_t] \right| &= \left|\frac{1}{2} u_t^{BA}(s,a) + \frac{\gamma}{2} \underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\right|\\ &\leq \frac{1}{2}\norm{u_t^{BA}} + \frac{\gamma}{2}\left|\underset{s_{t+1}}{\mathbb E}\left[ Q^A_t(s_{t+1},b^*) - Q^B_t(s_{t+1},a^*) \right]\right|\\ &\leq \frac{1+\gamma}{2} \norm{u_t^{BA}}, \end{align*} for all $(s,a)$ pairs.
Hence, $\norm{\mathbb E[F_t|\mathcal F_t]}\leq \frac{1+\gamma}{2} \norm{u_t^{BA}}$. \end{proof} Applying~\Cref{lem:uBAdyn}, we write the dynamics of $u_t^{BA}(s,a)$ in the form of a classical SA algorithm driven by a martingale difference sequence as follows: \begin{equation*} u_{t+1}^{BA}(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t F_t(s,a) = (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a)), \end{equation*} where $h_t(s,a) = \mathbb E[F_t(s,a)|\mathcal F_t]$ and $z_t(s,a) = F_t(s,a) - \mathbb E[F_t(s,a)|\mathcal F_t]$. Then, we obtain $\mathbb E [z_t(s,a)|\mathcal F_t] = 0$ and $\norm{h_t} \leq \frac{1+\gamma}{2}\norm{u_t^{BA}}$ following from Lemma \ref{lem:uBAdyn}. We define $u^*(s,a) = 0$, and treat $h_t$ as an operator over $u^{BA}_t$. Then $h_t$ satisfies the contraction property \begin{equation}\label{eq:uContraction} \norm{h_t-u^*} \leq \gamma'\norm{u_t^{BA}-u^*}, \end{equation} where $\gamma'=\frac{1+\gamma}{2}\in(0,1)$. Based on this SA formulation, we bound $u_t^{BA}(s,a)$ block-wise in the next step. \subsubsection{Step 2: Constructing sandwich bounds on $u_t^{BA}$} We derive lower and upper bounds on $u_t^{BA}$ via two sequences $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ in the following lemma. \begin{lemma}\label{lem:uBAsanwich} Let $\hat{\tau}_q$ be such that $\norm{u_t^{BA}}\leq G_q$ for all $t\geq\hat{\tau}_q$. Define $Z_{t;\hat{\tau}_q}(s,a), X_{t;\hat{\tau}_q}(s,a)$ as \begin{align*} Z_{t+1;\hat{\tau}_q}(s,a) &= (1-\alpha_t)Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a), \quad \text{with } Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = 0;\\ X_{t+1;\hat{\tau}_q}(s,a) &= (1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q, \quad \text{with } X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q,\gamma'=\frac{1+\gamma}{2}. \end{align*} Then for any $t\geq\hat{\tau}_q$ and state-action pair $(s,a)$, we have \begin{equation*} -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) \leq u_t^{BA}(s,a) \leq X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a).
\end{equation*} \end{lemma} \begin{proof} We proceed by induction. For the initial condition $t=\hat{\tau}_q$, $\norm{u_{\hat{\tau}_q}^{BA}}\leq G_q$ implies $-G_q\leq u_{\hat{\tau}_q}^{BA} \leq G_q$. We assume the sandwich bound holds for time $t$. It remains to check that the bound also holds for $t+1$. At time $t+1$, we have \begin{align*} u_{t+1}^{BA}(s,a) &= (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\leq (1-\alpha_t)( X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) ) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\overset{\text{(i)}}{\leq} \left[(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'\norm{u_t^{BA}}\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &\leq \left[(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &= X_{t+1;\hat{\tau}_q}(s,a) + Z_{t+1;\hat{\tau}_q}(s,a), \end{align*} where (i) follows from Lemma \ref{lem:uBAdyn}. Similarly, we can bound the other direction as \begin{align*} u_{t+1}^{BA}(s,a) &= (1-\alpha_t)u_t^{BA}(s,a) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\geq (1-\alpha_t)( -X_{t;\hat{\tau}_q}(s,a) + Z_{t;\hat{\tau}_q}(s,a) ) + \alpha_t (h_t(s,a) + z_t(s,a))\\ &\geq \left[-(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) - \alpha_t \gamma'\norm{u_t^{BA}}\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &\geq \left[-(1-\alpha_t) X_{t;\hat{\tau}_q}(s,a) - \alpha_t \gamma'G_q\right] + \left[(1-\alpha_t) Z_{t;\hat{\tau}_q}(s,a) + \alpha_t z_t(s,a)\right]\\ &= -X_{t+1;\hat{\tau}_q}(s,a) + Z_{t+1;\hat{\tau}_q}(s,a). \end{align*} \end{proof} \subsubsection{Step 3: Bounding $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ for block $q+1$} We bound $X_{t;\hat{\tau}_q}$ and $Z_{t;\hat{\tau}_q}$ in \Cref{lem:Xt} and \Cref{lem:ZlDiff} below, respectively. Before that, we first introduce the following technical lemma which will be useful in the proof of~\Cref{lem:Xt}.
\begin{lemma}\label{lem:prodHelp} Fix $\omega\in(0,1)$. Let $0 < t_1 < t_2$. Then we have \begin{equation*} \prod_{i=t_1}^{t_2} \left( 1-\frac{1}{i^\omega} \right) \leq \exp\left( -\frac{t_2 - t_1}{t_2^\omega} \right). \end{equation*} \end{lemma} \begin{proof} Since $\ln(1-x)\leq -x$ for any $x\in (0,1)$, we have \begin{equation*} \ln\left[ \prod_{i=t_1}^{t_2} \left( 1-\frac{1}{i^\omega} \right) \right] \leq -\sum_{i=t_1}^{t_2}i^{-\omega}\leq -\int_{t_1}^{t_2} t^{-\omega}dt = -\frac{t_2^{1-\omega} - t_1^{1-\omega}}{1-\omega}. \end{equation*} Thus we have \begin{equation*} \prod_{i=t_1}^{t_2} \left( 1-\frac{1}{i^\omega} \right) \leq \exp\left( -\frac{t_2^{1-\omega} - t_1^{1-\omega}}{1-\omega} \right). \end{equation*} Define $f(t) := t^{1-\omega}$. Observe that $f(t)$ is an increasing concave function. Then we have \begin{align*} t_2^{1-\omega} - t_1^{1-\omega} &\geq f'(t_2)(t_2-t_1) = (1-\omega)t_2^{-\omega} (t_2 - t_1), \end{align*} which immediately yields the result. \end{proof} We now derive a bound for $X_{t;\hat{\tau}_q}$. \begin{lemma}\label{lem:Xt} Fix $\kappa\in (0,1)$ and $\Delta\in(0, e-2)$. Let $\{G_q\}$ be defined in~\Cref{lem:Gq}. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose that $X_{t;\hat{\tau}_q}(s,a) \leq G_q$ for any $t \geq \hat{\tau}_q$. Then for any $t\in[\hat{\tau}_{q+1}, \hat{\tau}_{q+2})$, given $\hat{\tau}_{q+1} = \hat{\tau}_q + \frac{2c}{\kappa}\hat{\tau}_q^\omega$ with $\hat{\tau}_1\geq \left(\frac{1}{1-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}$ and $c\geq \frac{\ln(2+\Delta)+1/\hat{\tau}_1^\omega}{1-\ln(2+\Delta)-1/\hat{\tau}_1^\omega}$, we have \begin{equation*} X_{t;\hat{\tau}_q}(s,a) \leq \left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q.
\end{equation*} \end{lemma} \begin{proof} Observe that $X_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = G_q = \gamma' G_q + (1-\gamma')G_q := \gamma' G_q + \rho_{\hat{\tau}_q} $. We can rewrite the dynamics of $X_{t;\hat{\tau}_q}(s,a)$ as \begin{equation*} X_{t+1;\hat{\tau}_q}(s,a) = (1-\alpha_t)X_{t;\hat{\tau}_q}(s,a) + \alpha_t \gamma'G_q = \gamma'G_q + (1-\alpha_t)\rho_t, \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$. By the definition of $\rho_t$, we obtain \begin{align*} \rho_t &= (1-\alpha_{t-1})\rho_{t-1} = \dots = (1-\gamma')G_q\prod_{i=\hat{\tau}_q}^{t-1}(1-\alpha_i)\\ &= (1-\gamma')G_q\prod_{i=\hat{\tau}_q}^{t-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(i)}}{\leq} (1-\gamma')G_q\prod_{i=\hat{\tau}_q}^{\hat{\tau}_{q+1}-1}\left(1-\frac{1}{i^\omega}\right)\\ &\overset{\text{(ii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{\hat{\tau}_{q+1}-1-\hat{\tau}_q}{(\hat{\tau}_{q+1}-1)^\omega} \right) \leq (1-\gamma')G_q\exp\left( -\frac{\hat{\tau}_{q+1}-1-\hat{\tau}_q}{\hat{\tau}_{q+1}^\omega} \right)\\ &= (1-\gamma')G_q\exp\left( -\frac{\frac{2c}{\kappa}\hat{\tau}_q^\omega-1 }{\hat{\tau}_{q+1}^\omega} \right) = (1-\gamma')G_q\exp\left( -\frac{2c}{\kappa}\left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega + \frac{1}{\hat{\tau}_{q+1}^\omega} \right)\\ &\overset{\text{(iii)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{2c}{\kappa}\frac{1}{1+\frac{2c}{\kappa}} + \frac{1}{\hat{\tau}_{1}^\omega} \right) \overset{\text{(iv)}}{\leq} (1-\gamma')G_q\exp\left( -\frac{c}{1+c} + \frac{1}{\hat{\tau}_{1}^\omega} \right), \end{align*} where (i) follows because each factor $1-\frac{1}{i^\omega}$ lies in $(0,1)$ and $t\geq \hat{\tau}_{q+1}$, so dropping the factors with $i\geq\hat{\tau}_{q+1}$ does not decrease the product, (ii) follows from Lemma \ref{lem:prodHelp}, (iii) follows because $\hat{\tau}_q\geq\hat{\tau}_1$ and \begin{equation*} \left(\frac{\hat{\tau}_q}{\hat{\tau}_{q+1}}\right)^\omega \geq \frac{\hat{\tau}_q}{\hat{\tau}_{q+1}} = \frac{\hat{\tau}_q}{\hat{\tau}_{q} + \frac{2c}{\kappa}\hat{\tau}_q^\omega}\geq \frac{1}{1+\frac{2c}{\kappa}}, \end{equation*} and (iv) follows because
$\frac{2c}{\kappa}\geq c$. Next, observing the conditions that $\hat{\tau}_1^\omega\geq\frac{1}{1-\ln(2+\Delta)}$ and $c\geq \frac{1}{1-\ln(2+\Delta)-1/\hat{\tau}_1^\omega}-1$, we have \begin{equation*} \frac{c}{1+c} - \frac{1}{\hat{\tau}_{1}^\omega} \geq \ln(2+\Delta). \end{equation*} Thus we have $\rho_t\leq \frac{1-\gamma'}{2+\Delta}G_q$. Finally, we complete the proof by further observing that $1-\gamma' = 2\xi$. \end{proof} Since we have bounded $X_{t;\hat{\tau}_q}(s,a)$ by $\left(\gamma' + \frac{2}{2+\Delta}\xi\right)G_q$ for all $t\geq\hat{\tau}_{q+1}$, it remains to bound $Z_{t;\hat{\tau}_q}(s,a)$ by $\left(1-\frac{2}{2+\Delta}\right)\xi G_q$ for block $q+1$, which will further yield $\norm{u_t^{BA}}\leq(\gamma'+\xi)G_q = (1-\xi)G_q = G_{q+1}$ for any $t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2})$ as desired. Unlike $X_{t;\hat{\tau}_q}(s,a)$, which is a deterministic monotonic sequence, $Z_{t;\hat{\tau}_q}(s,a)$ is stochastic. We need to capture the probability for a bound on $Z_{t;\hat{\tau}_q}(s,a)$ to hold for block $q+1$. To this end, we introduce a different sequence $\{Z_{t;\hat{\tau}_q}^l (s,a)\}$ given by \begin{equation}\label{eq:Zl} Z^l_{t;\hat{\tau}_q}(s,a) = \sum_{i=\hat{\tau}_q}^{\hat{\tau}_q+l}\alpha_i\prod_{j=i+1}^{t-1}(1-\alpha_j) z_i(s,a) := \sum_{i=\hat{\tau}_q}^{\hat{\tau}_q+l} \phi_i^{q,t-1} z_i(s,a), \end{equation} where $\phi_i^{q,t-1} = \alpha_i\prod_{j=i+1}^{t-1}(1-\alpha_j)$. By the definition of $Z_{t;\hat{\tau}_q}(s,a)$, one can check that $Z_{t;\hat{\tau}_q}(s,a) = Z^{t-1-\hat{\tau}_q}_{t;\hat{\tau}_q}(s,a) $. Thus we have \begin{equation}\label{eq:ZZl} Z_{t;\hat{\tau}_q}(s,a) = Z_{t;\hat{\tau}_q}(s,a) - Z_{\hat{\tau}_q;\hat{\tau}_q}(s,a) = \sum_{l=1}^{t-1-\hat{\tau}_q} (Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a)) + Z^{0}_{t;\hat{\tau}_q}(s,a). \end{equation} In the following lemma, we capture an important property of $Z^{l}_{t;\hat{\tau}_q}(s,a)$ defined in \eqref{eq:Zl}.
\begin{lemma}\label{lem:ZlDiff} For any $t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2})$ and $1\leq l\leq t-1-\hat{\tau}_q$, $Z^{l}_{t;\hat{\tau}_q}(s,a)$ is a martingale sequence and satisfies \begin{equation}\label{eq:ZlvsZ} \lvert Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{2V_{\max}}{\hat{\tau}_q^\omega}. \end{equation} \end{lemma} \begin{proof} To show the martingale property, we observe that \begin{align*} \mathbb E[Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a)|\mathcal F_{\hat{\tau}_q + l -1}] &= \mathbb E[ \phi_{\hat{\tau}_q + l}^{q,t-1} z_{\hat{\tau}_q + l}(s,a)|\mathcal F_{\hat{\tau}_q + l -1} ]\\ &= \phi_{\hat{\tau}_q + l}^{q,t-1} \mathbb E[ z_{\hat{\tau}_q + l}(s,a)|\mathcal F_{\hat{\tau}_q + l -1} ] = 0, \end{align*} where the last equality follows from the definition of $z_t(s,a)$. In addition, based on the definition of $\phi_i^{q,t-1}$ in \eqref{eq:Zl}, which requires $i\geq\hat{\tau}_q$, we have \begin{equation*} \phi_i^{q,t-1}=\alpha_i\prod_{j=i+1}^{t-1}(1-\alpha_j)\leq \alpha_i\leq \frac{1}{\hat{\tau}_q^\omega}. \end{equation*} Further, since $|F_t|\leq\frac{2R_{\max}}{1-\gamma}=V_{\max}$, we obtain $|z_t(s,a)|=|F_t - \mathbb E[F_t|\mathcal F_t]|\leq 2V_{\max}$. Thus \begin{equation*} \lvert Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \rvert = \phi_{\hat{\tau}_q + l}^{q,t-1} |z_{\hat{\tau}_q + l}(s,a)|\leq \frac{2 V_{\max}}{\hat{\tau}_q^\omega}. \end{equation*} \end{proof} Lemma \ref{lem:ZlDiff} guarantees that $Z^{l}_{t;\hat{\tau}_q}(s,a)$ is a martingale sequence, which allows us to apply Azuma's inequality stated below. \begin{lemma}\label{lem:azuma} \citep{azuma1967weighted} Let $X_0,X_1,\dots,X_n$ be a martingale sequence such that for each $1\leq k\leq n$, \begin{equation*} |X_k-X_{k-1}| \leq c_k, \end{equation*} where $c_k$ is a constant that may depend on $k$.
Then for all $n\geq 1$ and any $\epsilon>0$, \begin{equation*} \mathbb P[|X_n-X_0|>\epsilon] \leq 2\exp\left( -\frac{\epsilon^2}{2\sum_{k=1}^n c_k^2} \right). \end{equation*} \end{lemma} By Azuma's inequality and the relationship between $Z_{t;\hat{\tau}_q}(s,a)$ and $Z^l_{t;\hat{\tau}_q}(s,a)$ in~\eqref{eq:ZZl}, we obtain \begin{align*} &\mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \hat{\epsilon}| t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\leq 2\exp\left( -\frac{\hat{\epsilon}^2}{2\sum_{l=1}^{t-\hat{\tau}_q-1} \left( Z^l_{t;\hat{\tau}_q}(s,a) - Z^{l-1}_{t;\hat{\tau}_q}(s,a) \right)^2 + 2(Z^{0}_{t;\hat{\tau}_q}(s,a))^2} \right)\\ &\quad\overset{\text{(i)}}{\leq} 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(t-\hat{\tau}_q)V_{\max}^2} \right) \leq 2\exp\left( -\frac{\hat{\epsilon}^2\hat{\tau}_q^{2\omega}}{8(\hat{\tau}_{q+2}-\hat{\tau}_q)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\leq} 2\exp\left( -\frac{\kappa^2\hat{\epsilon}^2\hat{\tau}_q^{\omega}}{32c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from Lemma \ref{lem:ZlDiff}, and (ii) follows because \begin{equation*} \hat{\tau}_{q+2}-\hat{\tau}_q = \frac{2c}{\kappa}\hat{\tau}_{q+1}^\omega+\frac{2c}{\kappa}\hat{\tau}_{q}^\omega = \frac{2c}{\kappa}\left(\hat{\tau}_{q}+\frac{2c}{\kappa}\hat{\tau}_{q}^\omega\right)^\omega+\frac{2c}{\kappa}\hat{\tau}_{q}^\omega\leq \frac{2c}{\kappa}\left(2+\frac{2c}{\kappa}\right)\hat{\tau}_{q}^\omega=\frac{4c(c+\kappa)}{\kappa^2}\hat{\tau}_q^\omega. \end{equation*} \subsubsection{Step 4: Unionizing all blocks and state-action pairs} \label{subsec:proofProp1} Now we are ready to prove~\Cref{lem:Gq} by taking a union of probabilities over all blocks and state-action pairs. Before that, we introduce the following two preliminary lemmas, which will be used multiple times in the sequel.
\begin{lemma}\label{lem:unionBound} Let $\{X_i\}_{i\in\mathcal{I}}$ be a set of random variables. Fix $\epsilon>0$. If for any $i\in\mathcal{I}$, we have $\mathbb P(X_i\leq\epsilon) \geq 1-\delta$, then \begin{equation*} \mathbb P(\forall i\in\mathcal{I}, X_i\leq \epsilon) \geq 1- |\mathcal{I}|\delta. \end{equation*} \end{lemma} \begin{proof} By the union bound, we have \begin{align*} \mathbb P(\forall i\in\mathcal{I}, X_i\leq \epsilon) = 1-\mathbb P\left(\bigcup_{i\in\mathcal{I}} \{X_i>\epsilon\} \right) \geq 1- \sum_{i\in\mathcal{I}}\mathbb P(X_i > \epsilon) \geq 1-|\mathcal{I}|\delta. \end{align*} \end{proof} \begin{lemma}\label{lem:tauHelp} Fix positive constants $a,b$ satisfying $2ab\ln ab > 1$. If $\tau\geq 2ab\ln ab$, then \begin{equation*} \tau^b\exp\left( -\frac{2\tau}{a} \right) \leq \exp\left( -\frac{\tau}{a} \right). \end{equation*} \end{lemma} \begin{proof} Let $c=ab$. If $\tau\leq c^2$, we have \begin{equation*} c\ln\tau \leq c\ln c^2 = 2c\ln c\leq \tau. \end{equation*} If $\tau\geq c^2$, we have \begin{equation*} c\ln \tau \leq \sqrt{\tau}\ln\tau \leq \sqrt{\tau}\sqrt{\tau} = \tau, \end{equation*} where the last inequality follows from $\ln\tau = 2\ln\sqrt{\tau}\leq\sqrt{\tau}$, using $2\ln x\leq x$ for all $x>0$. Therefore, we obtain $c\ln\tau = ab\ln\tau \leq \tau$. Thus $\tau^b\leq \exp\left( \frac{\tau}{a} \right)$, which implies the lemma. \end{proof} \textbf{Proof of~\Cref{lem:Gq}}\\ Based on the results obtained above, we are ready to prove~\Cref{lem:Gq}.
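The inequality in~\Cref{lem:tauHelp} also admits a quick numerical sanity check. The sketch below uses illustrative constants $a$ and $b$ that are not tied to the analysis; it only verifies that the claimed inequality holds once $\tau\geq 2ab\ln ab$.

```python
import math

def tau_help_holds(a: float, b: float, tau: float) -> bool:
    # Inequality of the technical lemma: tau^b * exp(-2*tau/a) <= exp(-tau/a),
    # which, after taking logarithms, is equivalent to a*b*ln(tau) <= tau.
    return tau ** b * math.exp(-2 * tau / a) <= math.exp(-tau / a)

# Illustrative constants; any a, b > 0 with 2*a*b*ln(a*b) > 1 qualify.
a, b = 5.0, 1.0
threshold = 2 * a * b * math.log(a * b)   # the lemma requires tau >= 2ab*ln(ab)
assert threshold > 1
assert all(tau_help_holds(a, b, tau)
           for tau in (threshold, threshold + 1, 10 * threshold, 1e6))
```

Since the left-hand side decays like $\exp(-2\tau/a)$ while the right-hand side decays like $\exp(-\tau/a)$, the inequality only tightens as $\tau$ grows beyond the threshold.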
Applying Lemma \ref{lem:unionBound}, we have \begin{align*} &\mathbb P\left[ \forall(s,a), \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\xi G_q \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A|(\hat{\tau}_{q+2} - \hat{\tau}_{q+1}) \cdot \mathbb P\left[ \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert > \frac{\Delta}{2+\Delta}\xi G_q \Big\rvert t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}) \right]\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2c}{\kappa}\hat{\tau}_{q+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 G_q^2\hat{\tau}_q^\omega}{32c(c+\kappa)V_{\max}^2} \right)\\ &\quad\geq 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 G_q^2\hat{\tau}_q^\omega}{32c(c+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{q=0}^n |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\hat{\tau}_{q}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^\omega}{32c(c+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\sum_{q=0}^n |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_q^\omega}{64c(c+\kappa)V_{\max}^2} \right)\\ &\quad\overset{\text{(iii)}}{\geq} 1- \frac{4c(n+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows because $G_q \geq G_n \geq \sigma\epsilon $, (ii) follows from Lemma \ref{lem:tauHelp} by substituting that 
$a=\frac{64c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }, b=1$ and observing \begin{align*} \hat{\tau}_q^\omega&\geq\hat{\tau}_1^\omega\geq \frac{128c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\ln\left(\frac{64c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\sigma^2\xi^2\epsilon^2 }\right) = 2ab\ln ab, \end{align*} and (iii) follows because $\hat{\tau}_q \geq \hat{\tau}_1$. Finally, we complete the proof of \Cref{lem:Gq} by observing that $X_{t;\hat{\tau}_q}$ is a deterministic sequence and thus \begin{align*} &\mathbb P\left[ \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall q\in [0,n], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \lvert Z_{t;\hat{\tau}_q}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\xi G_q \right]. \end{align*} \subsection{Part II: Conditionally bounding $\norm{Q^A_t - Q^*}$} \label{subsec:PartII} In this part, we upper bound $\norm{Q^A_t - Q^*}$ block-wise by a decreasing sequence $\{D_k\}_{k\geq 0}$, conditioned on the following two events: for a fixed positive integer $m$, we define \begin{align} E &:= \left\{ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t}\leq \sigma D_{k+1} \right\}, \label{eq:eventA}\\ F &:= \{ \forall k\in [1,m+1], I^A_k\geq c\tau_{k}^\omega \},\label{eq:eventB} \end{align} where $I^A_k$ denotes the number of iterations updating $Q^A$ at epoch $k$, $\tau_{k+1}$ is the starting iteration index of the $(k+1)$th block, and $\omega$ is the decay parameter of the polynomial learning rate. Roughly, Event $E$ requires that the difference between the two Q-estimators is bounded appropriately, and Event $F$ requires that $Q^A$ is sufficiently updated in each block.
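To illustrate the scale of the quantities appearing in Event $E$, the following sketch computes the sequence $D_k=(1-\beta)^k V_{\max}/\sigma$ with the choices $\sigma=(1-\gamma)/(2\gamma)$ and $\beta=(1-\gamma(1+\sigma))/2$ made in the next proposition, and counts how many blocks satisfy $D_m\geq\epsilon$. The numerical values of $\gamma$, $V_{\max}$, and $\epsilon$ are illustrative only.

```python
import math

# Illustrative parameters (in the analysis, V_max is determined by R_max and gamma).
gamma, V_max, epsilon = 0.9, 10.0, 0.1

sigma = (1 - gamma) / (2 * gamma)
beta = (1 - gamma * (1 + sigma)) / 2
# With this choice of sigma, beta simplifies to (1-gamma)/4.
assert abs(beta - (1 - gamma) / 4) < 1e-12

def D(k: int) -> float:
    # Block-wise bound D_k = (1-beta)^k * V_max / sigma.
    return (1 - beta) ** k * V_max / sigma

# Number of valid blocks: the largest m with D_m >= epsilon.
m = 0
while D(m + 1) >= epsilon:
    m += 1
assert D(m) >= epsilon > D(m + 1)
# m scales as log(V_max/(sigma*epsilon)) / log(1/(1-beta)).
assert m <= math.log(V_max / (sigma * epsilon)) / math.log(1 / (1 - beta))
```

The last assertion makes explicit that the number of blocks needed to reach accuracy $\epsilon$ grows only logarithmically in $1/\epsilon$.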
\begin{proposition}\label{lem:conditionalBound} Fix $\epsilon>0, \kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Consider synchronous double Q-learning under a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Let $\{G_q\}_{q\geq0}, \{\hat{\tau}_q\}_{q\geq0}$ be defined in~\Cref{lem:Gq}. Define $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Let $\tau_k=\hat{\tau}_k$ for $k\geq0$. Suppose that $c\geq \frac{\kappa(\ln(2+\Delta) + 1/\tau_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\tau_1^\omega)}$ and $\tau_1$ as the finishing time of the first block satisfies \begin{equation*} \tau_1\geq \max\left\{\left(\frac{1}{\kappa-\ln(2+\Delta)}\right)^{\frac{1}{\omega}}, \left( \frac{32c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln \left(\frac{16c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) \right)^{\frac{1}{\omega}} \right\}. \end{equation*} Then for any $m$ such that $D_m\geq\epsilon$, we have \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t- Q^*}\leq D_{k+1} |E,F \right]\\ &\quad\geq 1 - \frac{4c(m+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right), \end{align*} where the events $E,F$ are defined in \eqref{eq:eventA} and \eqref{eq:eventB}, respectively. \end{proposition} The proof of~\Cref{lem:conditionalBound} consists of the following four steps. \subsubsection{Step 1: Designing $\{D_k\}_{k\geq 0}$} The following lemma establishes the relationship (illustrated in~\Cref{fig:DkGk}) between the block-wise bounds $\{G_q\}_{q\geq 0}$ and $\{D_k\}_{k\geq 0}$ and their block separations, such that Event $E$ occurs with high probability as a result of~\Cref{lem:Gq}. 
\begin{lemma}\label{lem:couple} Let $\{G_q\}$ be defined in~\Cref{lem:Gq}, and let $D_k = (1-\beta)^k\frac{V_{\max}}{\sigma}$ with $\beta = \frac{1-\gamma(1+\sigma)}{2}$ and $\sigma = \frac{1-\gamma}{2\gamma}$. Then we have \begin{align*} &\mathbb P\left[\forall q\in [0,m], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t}\leq G_{q+1} \right]\\ &\quad\leq \mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^B_t - Q^A_t}\leq \sigma D_{k+1} \right], \end{align*} given that $\tau_k = \hat{\tau}_k$. \end{lemma} \begin{proof} Based on our choice of $\sigma$, we have \begin{equation*} \beta = \frac{1-\gamma(1+\sigma)}{2} = \frac{1-\gamma\cdot\frac{1+\gamma}{2\gamma}}{2} = \frac{1-\gamma}{4} = \xi. \end{equation*} Therefore, the decay rate of $D_k$ is the same as that of $G_q$. Further, since $G_0=\sigma D_0$, the sequence $\{\sigma D_k\}$ upper-bounds $\{G_q\}$ at all times, as long as each epoch has the same starting and ending points. \end{proof} In Lemma \ref{lem:couple}, we set $G_k = \sigma D_k$ for every block $k$ and $\xi=\beta=\frac{1-\gamma}{4}$ by a careful design of $\sigma$. In fact, one can choose any value of $\sigma\in(0,(1-\gamma)/\gamma)$ and design a corresponding relationship between $\tau_k$ and $\hat{\tau}_k$ as long as the sequence $\{\sigma D_k\}$ upper-bounds $\{G_q\}$ at all times. For simplicity of presentation, we keep the design in Lemma \ref{lem:couple}. \subsubsection{Step 2: Characterizing the dynamics of $Q^A_t(s,a) - Q^*(s,a)$ } We characterize the dynamics of the iteration residual $r_{t}(s,a):=Q^A_t(s,a) - Q^*(s,a)$ as an SA algorithm in~\Cref{lem:residualDynamics} below. Since not all iterations contribute to the error propagation due to the random update between the two Q-estimators, we introduce the following notation to label the valid iterations. \begin{definition}\label{def:TA} We define $T^A$ as the collection of iterations updating $Q^A$.
In addition, we denote $T^A(t_1, t_2)$ as the set of iterations updating $Q^A$ between time $t_1$ and $t_2$. That is, \begin{equation*} T^A(t_1, t_2) = \left\{ t: t\in [t_1, t_2] \text{ and } t\in T^A \right\}. \end{equation*} Correspondingly, the number of iterations updating $Q^A$ between time $t_1$ and $t_2$ is the cardinality of $T^A(t_1, t_2)$ which is denoted as $|T^A(t_1,t_2)|$. \end{definition} \begin{lemma}\label{lem:residualDynamics} Consider double Q-learning in Algorithm \ref{alg:doubleQ}. Then we have \begin{equation*} r_{t+1}(s,a) \!=\! \left\{\begin{aligned} & r_t(s,a), \quad t \notin T^A;\\ & (1\!-\!\alpha_{t}) r_{t}(s,a) \!+\! \alpha_{t} (\mathcal T Q_{t}^A(s,a)\!-\!Q^*(s,a)) \!+\! \alpha_{t} w_{t}(s,a) \!+\! \alpha_{t}\gamma u_{t}^{BA}(s',a^*), t\in T^A, \end{aligned} \right. \end{equation*} where $w_{t}(s,a) = \mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a), u_{t}^{BA}(s,a) = Q_{t}^B(s,a) - Q_{t}^A(s,a)$. \end{lemma} \begin{proof} Following from Algorithm \ref{alg:doubleQ} and for $t\in T^A$, we have \begin{align*} &Q_{t+1}^A(s,a)\\ &\quad= Q_{t}^A(s,a) + \alpha_{t}(R_{t} + \gamma Q_{t}^B(s',a^*) - Q^A_{t}(s,a) )\\ &\quad= (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t}\left( R_{t} + \gamma Q_{t}^A(s',a^*) \right) + \alpha_{t}\left(\gamma Q_{t}^B(s',a^*) - \gamma Q_{t}^A(s',a^*) \right)\\ &\quad\overset{\text{(i)}}{=} (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t}\left( \mathcal T_{t} Q_{t}^A(s,a) + \gamma u_{t}^{BA}(s',a^*) \right)\\ &\quad= (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t} \mathcal T Q_{t}^A(s,a) + \alpha_{t} (\mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a))+ \alpha_{t}\gamma u_{t}^{BA}(s',a^*)\\ &\quad= (1-\alpha_{t}) Q_{t}^A(s,a) + \alpha_{t} \mathcal T Q_{t}^A(s,a) + \alpha_{t} w_{t}(s,a) + \alpha_{t}\gamma u_{t}^{BA}(s',a^*), \end{align*} where (i) follows because we denote $\mathcal T_{t} Q_{t}^A(s,a) = R_{t} + \gamma Q_{t}^A(s',a^*)$. By subtracting $Q^*$ from both sides, we complete the proof. 
\end{proof} \subsubsection{Step 3: Constructing sandwich bounds on $r_t(s,a)$} We provide upper and lower bounds on $r_t$ by constructing two sequences $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ in the following lemma. \begin{lemma}\label{lem:rtSandwich} Let $\tau_k$ be such that $\norm{r_t}\leq D_k$ for all $t\geq\tau_k$. Suppose that we have $\norm{u_t^{BA}}\leq \sigma D_k$ with $\sigma = \frac{1-\gamma}{2\gamma}$ for all $t\geq\tau_k$. Define $W_{t;\tau_k}(s,a)$ as \begin{equation*} W_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &W_{t;\tau_k}(s,a), \quad t\notin T^A;\\ &(1-\alpha_t)W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a), \quad t\in T^A, \end{aligned}\right. \end{equation*} where $W_{\tau_k;\tau_k}(s,a) = 0$ and define $Y_{t;\tau_k}(s,a)$ as \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} &Y_{t;\tau_k}(s,a), \quad t\notin T^A;\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k, \quad t\in T^A, \end{aligned}\right. \end{equation*} where $Y_{\tau_k;\tau_k}(s,a) = D_k$ and $\gamma''=\gamma(1+\sigma)$. Then for any $t\geq\tau_k$ and state-action pair $(s,a)$, we have \begin{equation*} -Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a) \leq r_t(s,a) \leq Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a). \end{equation*} \end{lemma} \begin{proof} We proceed by induction. For the initial condition $t=\tau_k$, we have $\norm{r_{\tau_k}}\leq D_k$, and thus it holds that $-D_k \leq r_{\tau_k}(s,a) \leq D_k$. We assume the sandwich bound holds for time $t\geq\tau_k$. It remains to check whether this bound holds for $t+1$. If $t\notin T^A$, then $r_{t+1}(s,a) = r_t(s,a), W_{t+1;\tau_k}(s,a)=W_{t;\tau_k}(s,a), Y_{t+1;\tau_k}(s,a)=Y_{t;\tau_k}(s,a)$. Thus the sandwich bound still holds.
If $t\in T^A$, we have \begin{align*} r_{t+1}(s,a) &= (1-\alpha_t) r_t(s,a) + \alpha_t( \mathcal T Q_t^A(s,a) - Q^*(s,a) ) + \alpha_t w_t(s,a) + \alpha_t\gamma u_t^{BA}(s',a^*)\\ &\leq (1-\alpha_t) (Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) + \alpha_t\norm{ \mathcal T Q_t^A - Q^*}\\ &\quad + \alpha_t w_t(s,a) + \alpha_t\gamma \norm{u_t^{BA}}\\ &\overset{\text{(i)}}{\leq} (1-\alpha_t) (Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) + \alpha_t \gamma \norm{r_t}\\ &\quad + \alpha_t w_t(s,a) + \alpha_t\gamma \norm{u_t^{BA}}\\ &\overset{\text{(ii)}}{\leq} (1-\alpha_t) Y_{t;\tau_k}(s,a) + \alpha_t \gamma(1+\sigma)D_k + (1-\alpha_t) W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a)\\ &\leq Y_{t+1;\tau_k}(s,a) + W_{t+1;\tau_k}(s,a), \end{align*} where (i) follows from the contraction property of the Bellman operator, and (ii) follows from the condition $\norm{u_t^{BA}}\leq \sigma D_k$. Similarly, we can bound the other direction as \begin{align*} r_{t+1}(s,a) &= (1-\alpha_t) r_t(s,a) + \alpha_t( \mathcal T Q_t^A(s,a) - Q^*(s,a) ) + \alpha_t w_t(s,a) + \alpha_t\gamma u_t^{BA}(s',a^*)\\ &\geq (1-\alpha_t) (-Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) - \alpha_t\norm{ \mathcal T Q_t^A - Q^*}\\ &\quad + \alpha_t w_t(s,a) - \alpha_t\gamma \norm{u_t^{BA}}\\ &\geq (1-\alpha_t) (-Y_{t;\tau_k}(s,a) + W_{t;\tau_k}(s,a)) - \alpha_t \gamma \norm{r_t}\\ &\quad + \alpha_t w_t(s,a) - \alpha_t\gamma \norm{u_t^{BA}}\\ &\geq -(1-\alpha_t) Y_{t;\tau_k}(s,a) - \alpha_t \gamma(1+\sigma)D_k + (1-\alpha_t) W_{t;\tau_k}(s,a) + \alpha_t w_t(s,a)\\ &\geq -Y_{t+1;\tau_k}(s,a) + W_{t+1;\tau_k}(s,a). \end{align*} \end{proof} \subsubsection{Step 4: Bounding $Y_{t;\tau_k}(s,a)$ and $W_{t;\tau_k}(s,a)$ for epoch $k+1$} \label{subsec:proofProp2} Similarly to Steps 3 and 4 in Part I, we conditionally bound $\norm{r_t}\leq D_k$ for $t\in [\tau_{k}, \tau_{k+1})$ and $k=0,1,2,\dots$ by the induction arguments followed by the union bound.
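The induction above can be illustrated on a one-dimensional toy instance: we run the recursion for $r_t$ with a surrogate drift term bounded by $\gamma''D_k$ alongside the sequences $Y$ and $W$, and check the sandwich bound at every step. All constants below (contraction factor, noise level, horizon) are illustrative and not from the analysis; for simplicity, every iteration updates the estimator, i.e., $T^A$ contains all $t$.

```python
import random

random.seed(0)

# Toy one-dimensional sandwich check: r_{t+1} = (1-a_t) r_t + a_t (h_t + w_t),
# where |h_t| <= gpp * |r_t| plays the role of the contracting Bellman term
# and w_t is bounded zero-mean noise. Illustrative constants throughout.
gpp, D_k, omega, tau_k = 0.75, 1.0, 0.8, 10
r, Y, W = 0.5 * D_k, D_k, 0.0          # |r_{tau_k}| <= D_k at the block start
for t in range(tau_k, 500):
    a = 1.0 / t ** omega               # polynomial learning rate
    h = gpp * r                        # satisfies |h| <= gpp * D_k since |r| <= D_k
    w = random.uniform(-0.1, 0.1)      # bounded zero-mean noise
    r = (1 - a) * r + a * (h + w)
    Y = (1 - a) * Y + a * gpp * D_k
    W = (1 - a) * W + a * w
    assert -Y + W <= r <= Y + W        # the sandwich bound of the lemma
```

As in the proof, the deterministic sequence $Y$ absorbs the drift term while $W$ accumulates exactly the noise, so the sandwich holds pathwise for any realization of the bounded noise.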
We first bound $Y_{t;\tau_k}(s,a)$ and $W_{t;\tau_k}(s,a)$ in \Cref{lem:Yt} and~\Cref{lem:WlDiff}, respectively. \begin{lemma}\label{lem:Yt} Fix $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$. Let $\{D_k\}$ be defined in Lemma \ref{lem:couple}. Consider synchronous double Q-learning using a polynomial learning rate $\alpha_t = \frac{1}{t^\omega}$ with $\omega\in(0,1)$. Suppose that $Y_{t;\tau_k}(s,a) \leq D_k$ for any $t \geq \tau_k$. At block $k$, we assume that there are at least $c\tau_k^\omega$ iterations updating $Q^A$, i.e., $|T^A(\tau_k,\tau_{k+1})|\geq c\tau_k^\omega$. Then for any $t\in[\tau_{k+1},\tau_{k+2})$, we have \begin{equation*} Y_{t;\tau_k}(s,a) \leq \left(\gamma'' + \frac{2}{2+\Delta}\beta\right)D_k. \end{equation*} \end{lemma} \begin{proof} Since we have defined $\tau_k=\hat{\tau}_k$ in Lemma \ref{lem:couple}, we have $\tau_{k+1} = \tau_k + \frac{2c}{\kappa}\tau_k^\omega$. Observe that $Y_{\tau_k;\tau_k}(s,a) = D_k = \gamma'' D_k + (1-\gamma'')D_k := \gamma'' D_k + \rho_{\tau_k} $. We can rewrite the dynamics of $Y_{t;\tau_k}(s,a)$ as \begin{equation*} Y_{t+1;\tau_k}(s,a) = \left\{\begin{aligned} & Y_{t;\tau_k}(s,a), \quad t\notin T^A\\ &(1-\alpha_t)Y_{t;\tau_k}(s,a) + \alpha_t \gamma''D_k = \gamma''D_k + (1-\alpha_t)\rho_t, \quad t\in T^A \end{aligned}\right. \end{equation*} where $\rho_{t+1} = (1-\alpha_t)\rho_t$ for $t\in T^A$. 
By the definition of $\rho_t$, we obtain \begin{align} \rho_t &= \rho_{\tau_k}\prod_{i\in T^A(\tau_k, t-1)}(1-\alpha_i) = (1-\gamma'')D_k\prod_{i\in T^A(\tau_k, t-1)}(1-\alpha_i)\nonumber\\ &= (1-\gamma'')D_k\prod_{i\in T^A(\tau_k, t-1)}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(i)}}{\leq} (1-\gamma'')D_k\prod_{i\in T^A(\tau_k, \tau_{k+1}-1)}\left(1-\frac{1}{i^\omega}\right) \label{eq:issue1}\\ &\overset{\text{(ii)}}{\leq} (1-\gamma'')D_k\prod_{i=\tau_{k+1}-c\tau_k^\omega}^{\tau_{k+1}-1}\left(1-\frac{1}{i^\omega}\right) \overset{\text{(iii)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{(\tau_{k+1}-1)^\omega} \right) \nonumber\\ &\leq (1-\gamma'')D_k\exp\left( -\frac{c\tau_k^\omega-1}{\tau_{k+1}^\omega} \right) = (1-\gamma'')D_k\exp\left( -c\left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega + \frac{1}{\tau_{k+1}^\omega} \right) \nonumber\\ &\overset{\text{(iv)}}{\leq} (1-\gamma'')D_k\exp\left( -\frac{c}{1+\frac{2c}{\kappa}} + \frac{1}{\tau_{1}^\omega} \right), \nonumber \end{align} where (i) follows because each factor lies in $(0,1)$ and $t\geq \tau_{k+1}$, (ii) follows because $|T^A(\tau_{k}, \tau_{k+1}-1)|\geq c\tau_k^\omega$ and the factors $1-\frac{1}{i^\omega}$ are increasing in $i$, where $T^A(t_1,t_2)$ and $|T^A(t_1,t_2)|$ are defined in Definition \ref{def:TA}, (iii) follows from Lemma \ref{lem:prodHelp}, and (iv) holds because $\tau_k\geq\tau_1$ and \begin{equation*} \left(\frac{\tau_k}{\tau_{k+1}}\right)^\omega \geq \frac{\tau_k}{\tau_{k+1}} = \frac{\tau_k}{\tau_{k} + \frac{2c}{\kappa}\tau_k^\omega}\geq \frac{1}{1+\frac{2c}{\kappa}}. \end{equation*} Next we check the value of the power $-\frac{c}{1+\frac{2c}{\kappa}} + \frac{1}{\tau_{1}^\omega}$. Since $\kappa\in(\ln 2,1)$ and $\Delta\in(0, e^{\kappa}-2)$, we have $\ln (2+\Delta)\in (0, \kappa)$. Further, observing $\tau_1^\omega > \frac{1}{\kappa-\ln(2+\Delta)}$, we obtain $\ln(2+\Delta) + \frac{1}{\tau_1^\omega} \in (0,\kappa)$.
Last, since $c\geq\frac{\kappa}{2}\left( \frac{1}{1-\frac{\ln(2+\Delta) + 1/\tau_1^\omega}{\kappa}} - 1 \right)=\frac{\kappa(\ln(2+\Delta) + 1/\tau_1^\omega)}{2(\kappa-\ln(2+\Delta) - 1/\tau_1^\omega)}$, we have $-\frac{c}{1+\frac{2c}{\kappa}} + \frac{1}{\tau_{1}^\omega}\leq -\ln(2+\Delta)$. Thus, we have $\rho_t\leq \frac{1-\gamma''}{2+\Delta}D_k$. Finally, we finish our proof by further observing that $1-\gamma'' = 2\beta$. \end{proof} It remains to bound $|W_{t;\tau_k}(s,a)|\leq \left(1-\frac{2}{2+\Delta}\right)\beta D_k$ for $t\in [\tau_{k+1},\tau_{k+2})$. Combining the bounds of $Y_{t;\tau_k}$ and $W_{t;\tau_k}$ yields $(\gamma''+\beta)D_k = (1-\beta)D_k=D_{k+1}$. Since $W_{t;\tau_k}$ is stochastic, we need to derive the probability for the bound to hold. To this end, we first rewrite the dynamics of $W_{t;\tau_k}$ defined in Lemma \ref{lem:rtSandwich} as \begin{equation*} W_{t;\tau_k}(s,a) = \sum_{i\in T^A(\tau_k, t-1)} \alpha_i\underset{j\in T^A(i+1, t-1)}{\Pi} (1-\alpha_j)w_i(s,a). \end{equation*} Next, we introduce a new sequence $\{W_{t;\tau_k}^l(s,a)\}$ as \begin{equation*} W^l_{t;\tau_k}(s,a) = \sum_{i\in T^A(\tau_k, \tau_k+l)} \alpha_i\underset{j\in T^A(i+1, t-1)}{\Pi} (1-\alpha_j)w_i(s,a). \end{equation*} Thus we have $W_{t;\tau_k}(s,a) = W^{t-1-\tau_k}_{t;\tau_k}(s,a)$. Then we have the following lemma. \begin{lemma}\label{lem:WlDiff} For any $t\in[\tau_{k+1}, \tau_{k+2}]$ and $1\leq l \leq t-\tau_k-1$, $\{W_{t;\tau_k}^l(s,a)\}$ is a martingale sequence and satisfies \begin{equation*} \lvert W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \rvert \leq \frac{V_{\max}}{\tau_k^\omega}. \end{equation*} \end{lemma} \begin{proof} Observe that \begin{equation*} W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) = \left\{\begin{aligned} &0, \quad \tau_k+l-1\notin T^A;\\ &\alpha_{\tau_k+l}\underset{j\in T^A(\tau_k+l+1, t-1)}{\Pi} (1-\alpha_j)w_{\tau_k+l}(s,a), \quad \tau_k+l-1\in T^A. \end{aligned} \right. 
\end{equation*} Since $\mathbb E[w_t|\mathcal F_{t-1}]=0$, we have \begin{align*} \mathbb E\left[ W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) | \mathcal F_{\tau_k+l-1} \right]=0. \end{align*} Thus $\{W_{t;\tau_k}^l(s,a)\}$ is a martingale sequence. In addition, since $l\geq 1$ and $\alpha_t\in (0,1)$, we have \begin{equation*} \alpha_{\tau_k+l}\underset{j\in T^A(\tau_k+l+1, t-1)}{\Pi} (1-\alpha_j)\leq \alpha_{\tau_k+l}\leq \alpha_{\tau_k} = \frac{1}{\tau_k^\omega}. \end{equation*} Further, we obtain $|w_t(s,a)| = |\mathcal T_{t} Q_{t}^A(s,a) - \mathcal T Q_{t}^A(s,a)| \leq\frac{2R_{\max}}{1-\gamma} = V_{\max}$. Thus \begin{equation*} \lvert W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \rvert \leq \alpha_{\tau_k+l}|w_{\tau_k+l}(s,a)| \leq \frac{V_{\max}}{\tau_k^\omega}. \end{equation*} \end{proof} Next, we bound $W_{t;\tau_k}(s,a)$. Fix $\tilde{\epsilon}>0$. Then for any $t\in[\tau_{k+1},\tau_{k+2})$, we have \begin{align*} &\mathbb P\left[ |W_{t;\tau_k}(s,a)|>\tilde{\epsilon} | t\in[\tau_{k+1},\tau_{k+2}),E,F \right]\\ &\quad\overset{\text{(i)}}{\leq} 2\exp\left( \frac{-\tilde{\epsilon}^2}{2\underset{l:\tau_k+l-1\in T^A(\tau_k, t-1)}{\sum}\left( W^l_{t;\tau_k}(s,a) - W^{l-1}_{t;\tau_k}(s,a) \right)^2 + 2(W^{\min(T^A(\tau_k, t-1))}_{t;\tau_k}(s,a))^2 } \right)\\ &\quad\overset{\text{(ii)}}{\leq} 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(|T^A(\tau_k,t-1)|+1)V_{\max}^2} \right) \overset{\text{(iii)}}{\leq} 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(t+1-\tau_k)V_{\max}^2} \right)\\ &\quad\leq 2\exp\left( -\frac{\tilde{\epsilon}^2\tau_k^{2\omega}}{2(\tau_{k+2}-\tau_k)V_{\max}^2} \right) \overset{\text{(iv)}}{\leq} 2\exp\left( -\frac{\kappa^2\tilde{\epsilon}^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from Lemma \ref{lem:azuma}, (ii) follows from Lemma \ref{lem:WlDiff}, (iii) follows because $|T^A(t_1,t_2)|\leq t_2 - t_1 + 1$ and (iv) holds because \begin{equation*} \tau_{k+2} - \tau_k =
\frac{2c}{\kappa}\tau_{k+1}^\omega + \frac{2c}{\kappa}\tau_k^\omega = \frac{2c}{\kappa}\left( \tau_k + \frac{2c}{\kappa}\tau_k^\omega \right)^\omega + \frac{2c}{\kappa}\tau_k^\omega \leq \frac{4c(c+\kappa)}{\kappa^2}\tau_k^\omega. \end{equation*} \textbf{Proof of~\Cref{lem:conditionalBound}}\\ Now we bound $\norm{r_t}$ by combining the bounds of $Y_{t;\tau_k}$ and $W_{t;\tau_k}$. Applying the union bound in Lemma \ref{lem:unionBound} yields \begin{align} &\mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \lvert W_{t;\tau_k}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\beta D_k|E,F \right] \nonumber\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A|(\tau_{k+2}-\tau_{k+1}) \cdot \mathbb P\left[ \lvert W_{t;\tau_k}(s,a) \rvert > \frac{\Delta}{2+\Delta}\beta D_k \Big\rvert t\in[\tau_{k+1},\tau_{k+2}),E,F \right] \nonumber\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2c}{\kappa}\tau_{k+1}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right) \nonumber\\ &\quad\geq 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\tau_{k}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 D_k^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right) \nonumber\\ &\quad\overset{\text{(i)}}{\geq} 1 - \sum_{k=0}^m |\mathcal S||\mathcal A| \frac{2c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\tau_{k}^\omega \cdot 2\exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{8c(c+\kappa)V_{\max}^2} \right)\label{eq:issue2}\\ &\quad\overset{\text{(ii)}}{\geq} 1 - \frac{4c}{\kappa}\left(1+\frac{2c}{\kappa}\right)\sum_{k=0}^m |\mathcal S||\mathcal A| \cdot \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_k^{\omega}}{16c(c+\kappa)V_{\max}^2} \right) \nonumber\\ &\quad\geq 1 - 
\frac{4c(m+1)}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right), \nonumber \end{align} where (i) follows because $D_k\geq D_m\geq \epsilon$, and (ii) follows from Lemma \ref{lem:tauHelp} by substituting $a=\frac{16c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }, b=1$ and observing that \begin{align*} \tau_k^{\omega}&\geq\hat{\tau}_1^{\omega}\geq \frac{32c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\ln \left(\frac{16c(c+\kappa)V_{\max}^2}{\kappa^2\left(\frac{\Delta}{2+\Delta}\right)^2\beta^2\epsilon^2 }\right) = 2ab\ln ab. \end{align*} Note that $Y_{t;\tau_k}(s,a)$ is deterministic. We complete this proof by observing that \begin{align*} &\mathbb P\left[ \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^*}\leq D_{k+1} | E,F\right]\\ &\quad\geq \mathbb P\left[ \forall(s,a), \forall k\in [0,m], \forall t\in[\tau_{k+1},\tau_{k+2}), \lvert W_{t;\tau_k}(s,a) \rvert \leq \frac{\Delta}{2+\Delta}\beta D_k|E,F \right]. \end{align*} \subsection{Part III: Bounding $\norm{Q^A_t - Q^* }$} \label{subsec:proofThm1} We combine the results in the first two parts, and provide a high probability bound on $\norm{r_t}$ with further probabilistic arguments, which exploit the high probability bounds on $\mathbb P(E)$ in~\Cref{lem:Gq} and $\mathbb P(F)$ in the following lemma. \begin{lemma}\label{lem:halfQA} Let the sequence $\tau_k$ be the same as given in Lemma \ref{lem:couple}, i.e. $\tau_{k+1} = \tau_k + \frac{2c}{\kappa}\tau_k^\omega$ for $k\geq 1$. Then we have \begin{equation*} \mathbb P\left[\forall k\in [1,m], I^A_k\geq c\tau_{k}^\omega \right] \geq 1- m \exp\left( -\frac{(1-\kappa)^2c\tau_1^\omega}{\kappa} \right). \end{equation*} where $I^A_k$ denotes the number of iterations updating $Q^A$ at epoch $k$. 
\end{lemma} \begin{proof} Whether $Q^A$ or $Q^B$ is updated at a given iteration is determined by a fair coin flip. To be specific, at iteration $t$ we define the Bernoulli random variable \begin{equation*} J^A_t = \left\{ \begin{aligned} & 1, \quad\text{updating } Q^A;\\ & 0, \quad\text{updating } Q^B. \end{aligned} \right. \end{equation*} Clearly, the events are independent across iterations. Therefore, for a given epoch $[\tau_k, \tau_{k+1})$, $I^A_k = \sum_{t=\tau_k}^{\tau_{k+1}-1} J^A_t$ is a binomial random variable satisfying the distribution $Binomial(\tau_{k+1}-\tau_k, 0.5)$. In the following, we use the tail bound of a binomial random variable. That is, if a random variable $X\sim Binomial(n,p)$, by Hoeffding's inequality we have $\mathbb P(X\leq x)\leq \exp\left(-\frac{2(np-x)^2}{n}\right)$ for $x< np$, which implies $\mathbb P(X\leq \kappa np)\leq \exp\left(-2np^2(1-\kappa)^2\right)$ for any fixed $\kappa\in(0,1)$. If $k=0$, $I^A_0\sim Binomial(\tau_1, 0.5)$. Thus the tail bound yields \begin{equation*} \mathbb P \left[I^A_0\leq \frac{\kappa}{2}\cdot\tau_1\right] \leq \exp\left( -\frac{(1-\kappa)^2\tau_1}{2} \right). \end{equation*} If $k\geq1$, since $\tau_{k+1}-\tau_k = \frac{2c}{\kappa}\tau_k^\omega $, we have $I^A_k\sim Binomial\left( \frac{2c}{\kappa}\tau_k^\omega, 0.5 \right)$. Thus the tail bound of a binomial random variable gives \begin{equation*} \mathbb P \left[I^A_k\leq \frac{\kappa}{2}\cdot \frac{2c}{\kappa}\tau_k^\omega\right] \leq \exp\left( -\frac{(1-\kappa)^2c\tau_k^\omega}{\kappa} \right). \end{equation*} Then by the union bound, we have \begin{align*} \mathbb P \left[\forall k\in[1,m], I^A_k\geq c\tau_k^\omega\right] &=\mathbb P \left[\forall k\in[1,m], I^A_k\geq \frac{\kappa}{2}\cdot \frac{2c}{\kappa}\tau_k^\omega\right]\\ &\geq 1 - \sum_{k=1}^m\exp\left( -\frac{(1-\kappa)^2c\tau_k^\omega}{\kappa} \right)\\ &\geq 1- m \exp\left( -\frac{(1-\kappa)^2c\tau_1^\omega}{\kappa} \right).
\end{align*} \end{proof} We further give the following~\Cref{lem:totalIter} and~\Cref{lem:iteration} before proving~\Cref{thm:syncDQ}. \Cref{lem:totalIter} characterizes the number of blocks to achieve $\epsilon$-accuracy given $D_k$ defined in Lemma \ref{lem:couple}. \begin{lemma}\label{lem:totalIter} Let $D_{k+1}=(1-\beta)D_k$ with $\beta = \frac{1-\gamma}{4}, D_0 = \frac{2\gamma V_{\max}}{1-\gamma}$. Then for $m\geq\frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}$, we have $D_m \leq \epsilon$. \end{lemma} \begin{proof} By the definition of $D_k$, we have $D_k = \left( 1 - \beta \right)^k D_0$. Then we obtain \begin{equation*} D_k\leq\epsilon \Longleftrightarrow \left( 1 - \beta \right)^k D_0 \leq \epsilon \Longleftrightarrow \frac{1}{(1-\beta)^k} \geq \frac{D_0}{\epsilon} \Longleftrightarrow k \geq \frac{\ln(D_0/\epsilon)}{\ln(1/(1-\beta))}. \end{equation*} Further observe that $\ln \frac{1}{1-x}\geq x$ if $x\in(0,1)$, so that $\frac{\ln(D_0/\epsilon)}{\ln(1/(1-\beta))}\leq \frac{1}{\beta}\ln\frac{D_0}{\epsilon}$. Thus it suffices to take \begin{equation*} k \geq\frac{1}{\beta}\ln \frac{D_0}{\epsilon} = \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}. \end{equation*} \end{proof} From the above lemma, it suffices to find the starting time at epoch $m^*=\left\lceil \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$. The next lemma is useful to calculate the total iterations given the initial epoch length and number of epochs. \begin{lemma}\label{lem:iteration} \citep[Lemma 32]{even2003learning} Consider a sequence $\{x_k\}$ satisfying \begin{equation*} x_{k+1} = x_k + c x_k^\omega = x_1 + \sum_{i=1}^k c x_i^\omega. \end{equation*} Then for any constant $\omega\in(0,1)$, we have \begin{equation*} x_k = O\left( (x_1^{1-\omega} + c k)^\frac{1}{1-\omega} \right) = O\left( x_1 + (ck)^{\frac{1}{1-\omega}} \right).
\end{equation*} \end{lemma} \textbf{Proof of Theorem \ref{thm:syncDQ}}\\ Now we are ready to prove Theorem \ref{thm:syncDQ} based on the results obtained so far.\\ Let $m^*=\Big\lceil \frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\Big\rceil$, then $G_{m^*-1}\geq\sigma\epsilon, D_{m^*-1}\geq \epsilon$. Thus we obtain \begin{align*} &\mathbb P(\norm{Q^A_{\tau_{m^*}}(s,a) - Q^* } \leq \epsilon)\\ & \geq\mathbb P\left[ \forall k\in [0,m^*- 1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} \right]\\ & = \mathbb P\left[ \forall k\in [0,m^*- 1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |E,F \right]\cdot\mathbb P(E\cap F)\\ & \geq \mathbb P\left[ \forall k\in [0,m^*-1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |E,F \right]\\ &\quad \cdot (\mathbb P(E)+\mathbb P(F)-1)\\ &\overset{\text{(i)}}{\geq}\mathbb P\left[ \forall k\in [0,m^*- 1], \forall t\in[\tau_{k+1},\tau_{k+2}), \norm{Q^A_t - Q^* }\leq D_{k+1} |E,F \right]\\ &\quad\cdot \left(\mathbb P\left[ \forall q\in [0, m^*- 1], \forall t\in[\hat{\tau}_{q+1},\hat{\tau}_{q+2}), \norm{Q^B_t - Q^A_t }\leq G_{q+1} \right] + \mathbb P(F) - 1\right)\\ &\overset{\text{(ii)}}{\geq}\left[ 1 - \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right) \right]\\ &\quad\cdot\!\left[ 1 \!-\! \frac{4cm^*}{\kappa}\left(1\!+\!\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right) \!-\! m^*\! 
\exp\!\left( -\frac{(1\!-\!\kappa)^2c\hat{\tau}_1^{\omega}}{\kappa} \right) \right]\\ &\geq 1- \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta} \right)^2\beta^2 \epsilon^2\tau_1^{\omega}}{16c(c+\kappa)V_{\max}^2} \right)\\ &\quad - \frac{4cm^*}{\kappa}\left(1\!+\!\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2\left( \frac{\Delta}{2+\Delta}\right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^\omega}{64c(c+\kappa)V_{\max}^2} \right) - m^* \exp\left( -\frac{(1-\kappa)^2c\hat{\tau}_1^{\omega}}{\kappa} \right)\\ &\overset{\text{(iii)}}{\geq} 1- \frac{12cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2} \right), \end{align*} where (i) follows from~\Cref{lem:couple}, (ii) follows from~\Cref{lem:Gq} and~\ref{lem:conditionalBound} and (iii) holds due to the fact that \begin{align*} \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \!=\! \max&\left\{ \frac{4cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A|, m^* \right\},\\ \frac{\kappa^2(1\!-\!\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2}\!\leq\! \min&\left\{ \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\beta^2 \epsilon^2\hat{\tau}_1^{\omega}}{16c(c+\kappa)V_{\max}^2}, \frac{(1\!-\!\kappa)^2\hat{\tau}_1^{\omega}}{\kappa}, \frac{\kappa^2\!\left( \frac{\Delta}{2+\Delta} \right)^2\!\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2}\right\}. 
\end{align*} By setting \begin{equation*} 1- \frac{12cm^*}{\kappa}\left(1+\frac{2c}{\kappa}\right)|\mathcal S||\mathcal A| \exp\left( -\frac{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2\hat{\tau}_1^{\omega}}{64c(c+\kappa)V_{\max}^2} \right) \geq 1-\delta, \end{equation*} we obtain \begin{equation*} \hat{\tau}_1 \geq \left( \frac{64c(c+\kappa)V_{\max}^2}{\kappa^2(1-\kappa)^2\left( \frac{\Delta}{2+\Delta} \right)^2\xi^2 \sigma^2\epsilon^2}\ln \frac{12cm^*|\mathcal S||\mathcal A|(2c+\kappa)}{\kappa^2\delta} \right)^{\frac{1}{\omega}}. \end{equation*} Considering the conditions on $\hat{\tau}_1$ in~\Cref{lem:Gq} and~\Cref{lem:conditionalBound}, we choose \begin{equation*} \hat{\tau}_1 = \Theta\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{m^*|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} \right). \end{equation*} Finally, applying the number of epochs $m^*=\left\lceil\frac{4}{1-\gamma}\ln \frac{2\gamma V_{\max}}{\epsilon(1-\gamma)}\right\rceil$ and Lemma \ref{lem:iteration}, we conclude that it suffices to let \begin{align*} T&=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{m^*|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^4\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{2c}{\kappa}\frac{1}{1-\gamma} \ln\frac{\gamma V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|V_{\max}^2\ln(\frac{V_{\max}}{(1-\gamma)\epsilon})}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ V_{\max}}{(1-\gamma)\epsilon} \right)^{\frac{1}{1-\omega}} \right)\\ &=\Omega\left( \left( \frac{V_{\max}^2}{(1-\gamma)^4\epsilon^2}\ln \frac{|\mathcal S||\mathcal A|V_{\max}^2}{(1-\gamma)^5\epsilon^2\delta} \right)^{\frac{1}{\omega}} + \left(\frac{1}{1-\gamma} \ln\frac{ V_{\max}}{(1-\gamma)\epsilon}
\right)^{\frac{1}{1-\omega}} \right), \end{align*} to attain an $\epsilon$-accurate Q-estimator. \input{SupplementaryBC}
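The epoch and accuracy schedules used in the proof above are easy to sanity-check numerically. The following standalone sketch (all constants are illustrative stand-ins, not values prescribed by the analysis) verifies that $D_k=(1-\beta)^kD_0$ falls below $\epsilon$ after the stated $m^*$ epochs, and that the epoch-start recursion $\tau_{k+1}=\tau_k+\frac{2c}{\kappa}\tau_k^\omega$ stays below the closed-form growth rate consistent with Lemma \ref{lem:iteration}:

```python
import math

# Illustrative values only (not from the analysis above).
gamma, eps = 0.9, 0.01
V_max = 1.0 / (1.0 - gamma)            # assumes rewards bounded by 1
beta = (1.0 - gamma) / 4.0
D0 = 2.0 * gamma * V_max / (1.0 - gamma)

# Number of epochs from the lemma: D_{m*} <= eps.
m_star = math.ceil(4.0 / (1.0 - gamma) * math.log(D0 / eps))
assert (1.0 - beta) ** m_star * D0 <= eps

# Epoch-start recursion tau_{k+1} = tau_k + (2c/kappa) * tau_k^omega.
c, kappa, omega = 1.0, 0.5, 0.8
a = 2.0 * c / kappa
tau = tau_1 = 100.0
for k in range(1, m_star + 1):
    tau += a * tau ** omega
    # Concavity of x^{1-omega} gives tau_k^{1-omega} <= tau_1^{1-omega}
    # + (1-omega)*a*k, matching the polynomial rate in the lemma.
    assert tau <= (tau_1 ** (1 - omega) + (1 - omega) * a * k) ** (1 / (1 - omega))
```

The in-loop assertion is the discrete analogue of integrating $dx/dk = a\,x^\omega$, which is where the $\left(x_1^{1-\omega}+ck\right)^{1/(1-\omega)}$ scaling comes from.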
\section{Introduction} \label{sec:Introduction} The discovery of the 125-GeV scalar resonance at the LHC~\cite{ATLAS:2012yve,CMS:2012qbp} has established a particle consistent with the Standard Model (SM) Higgs boson in terms of particle content. Nonetheless, as there remain several experimental observations that call for new physics explanations, the exact structure of the electroweak sector is still under intense exploration. One example is the deviations from the SM predictions for the ${hff}$, ${hVV}$, ${hZ\gamma}$ and ${h\gamma\gamma}$ couplings, as given in Refs.~\cite{CMS:2018uag,ATLAS:2019nkf}, that still allow a beyond-SM interpretation. Another example is the electroweak baryogenesis problem, the success of which requires the occurrence of a strong first-order electroweak phase transition (EWPT). However, according to non-perturbative lattice computations~\cite{Kajantie:1996mn,Gurtler:1997hr,Csikor:1998eu}, the electroweak symmetry breaking (EWSB) of the SM only occurs through a smooth crossover around the temperature $T\sim100$~GeV. Thus, extensions to the SM Higgs sector are called for. In this work, we study the Georgi-Machacek (GM) model~\cite{Georgi:1985nv,Chanowitz:1985ug}, which introduces one complex and one real scalar triplet in a way that preserves the custodial symmetry at tree level after EWSB. The model predicts the existence of several Higgs multiplets, whose mass eigenstates form one quintet ($H_5$), one triplet ($H_3$), and two singlets ($H_1$ and $h$) under the custodial symmetry, thus leading to rich Higgs phenomenology. For example, enhancements in the $hWW$ and $hZZ$ couplings compared to the SM predictions can be achieved through the additional triplet-gauge interactions, and considerable deviations from the SM predictions for the di-Higgs production rates can also be induced through the modification to the Higgs self-couplings as well as the new contribution from $H_1$ through the singlet mixing.
The model also has the capability of providing Majorana masses to neutrinos through the triplet vacuum expectation values (VEVs). Moreover, as we show in this study, the GM model can generate strong first-order EWPTs while satisfying all the current collider measurement constraints in certain regions of parameter space, and can further lead to detectable stochastic gravitational wave (GW) backgrounds through the bubble dynamics between the symmetric and broken phases~\cite{Caprini:2015zlo,Caprini:2019egz}. These salient features of the model have motivated in recent years a series of studies on collider phenomenology~\cite{Chiang:2013rua,Chiang:2012cn,Chiang:2014bia,Chiang:2015kka,Chiang:2015amq,Chiang:2015rva,Logan:2015xpa,Degrande:2017naf,Logan:2017jpr,Chang:2017niy} as well as the EWPT~\cite{Chiang:2014hia,Zhou:2018zli}. To explore the parameter space of the GM model that satisfies essential theoretical bounds and experimental constraints from various LHC and Tevatron measurements, we perform Bayesian Markov-Chain Monte Carlo (MCMC) global fits in the model with \texttt{HEPfit}~\cite{DeBlas:2019ehy}. Compared with the previous work~\cite{Chiang:2018cgb}, we have updated the experimental data and refined several fitting setups to achieve more constraining results. With the parameter samples extracted from the region that satisfies all the mentioned constraints, we go on to calculate the EWPT characteristics by employing a high-temperature approximation for the thermal effective potential, and predict the GW backgrounds induced from the bubble dynamics. The structure of this paper is as follows. In Sec.~\ref{sec:The Georgi-Machacek Model}, we review the GM model and give the theoretical constraints to be imposed on the model. In Sec.~\ref{sec:Constraints}, we choose the model Lagrangian parameters as our scanning parameters and set their prior distributions. We then show step by step how various theoretical and experimental constraints restrict the parameter space.
Based on the scanning result, we further find the parameter sets that will lead to sufficiently strong first-order EWPTs in Sec.~\ref{sec:Electroweak Phase Transition and Gravitational Waves}. We calculate the associated GW spectra and make a comparison with the sensitivities of several proposed GW experiments. Moreover, we use these parameter sets to make predictions for the most promising constraining/discovering modes at the LHC in Sec.~\ref{sec:Predictions}. Finally, we discuss and summarize our findings in Sec.~\ref{sec:Discussions and Summary}. \section{The Georgi-Machacek Model} \label{sec:The Georgi-Machacek Model} The electroweak (EW) sector of the GM model comprises one isospin doublet scalar field with hypercharge $Y=1 / 2$\footnote{We adopt the hypercharge convention such that $Q=T_3+Y$.}, one complex isospin triplet scalar field with $Y=1,$ and one real isospin triplet scalar field with $Y=0$. These fields are denoted respectively by\footnote{The sign conventions for the charge conjugate fields are $\phi^-=(\phi^+)^*$, $\chi^-=(\chi^+)^*$, $\xi^-=(\xi^+)^*$ and $\chi^{--}=(\chi^{++})^*$.} \begin{equation}\label{eq:scalar field} \begin{array}{l} \phi=\left(\begin{array}{c} \phi^{+} \\ \phi^{0} \end{array}\right), \quad \chi=\left(\begin{array}{c} \chi^{++} \\ \chi^{+} \\ \chi^{0} \end{array}\right), \quad \xi=\left(\begin{array}{c} \xi^{+} \\ \xi^{0} \\ -\left(\xi^{+}\right)^{*} \end{array}\right) \end{array} ~, \end{equation} where the neutral components before the EWSB are parametrized as $\phi^{0}=\left(h_{\phi}+i a_{\phi}\right)/\sqrt{2}$, $\chi^{0}=\left(h_{\chi}+i a_{\chi}\right)/\sqrt{2}$, and $\xi^{0}=h_{\xi}$. 
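The electric charges displayed on the multiplet components follow from the hypercharge convention $Q=T_3+Y$ stated above. A trivial consistency check (written out only for illustration) confirms the assignments for all three multiplets:

```python
# Check Q = T3 + Y for every component of phi (Y=1/2), chi (Y=1), xi (Y=0),
# matching the superscripts on the fields defined in the text.
multiplets = {
    "phi": dict(Y=0.5, T3=[+0.5, -0.5], Q=[+1, 0]),          # phi+, phi0
    "chi": dict(Y=1.0, T3=[+1, 0, -1], Q=[+2, +1, 0]),        # chi++, chi+, chi0
    "xi":  dict(Y=0.0, T3=[+1, 0, -1], Q=[+1, 0, -1]),        # xi+, xi0, xi-
}
for name, m in multiplets.items():
    for t3, q in zip(m["T3"], m["Q"]):
        assert t3 + m["Y"] == q
```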
A global $\mathrm{SU}(2)_{L} \times \mathrm{SU}(2)_{R}$ symmetry, which is explicitly broken by the Yukawa and the hypercharge-$U(1)$ gauge interactions, is imposed on the Higgs potential at tree level. The potential can be succinctly expressed by introducing the $\mathrm{SU}(2)_{L} \times \mathrm{SU}(2)_{R}$-covariant forms of the fields: \begin{equation} \begin{aligned} \Phi & \equiv\left(\epsilon_{2} \phi^{*}, \phi\right)=\left(\begin{array}{cc} \left(\phi^{0}\right)^{*} & \phi^{+} \\ -\left(\phi^{+}\right)^{*} & \phi^{0} \end{array}\right), \quad \text {with } \epsilon_{2}=\left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right) ~, \\ \Delta & \equiv\left(\epsilon_{3} \chi^{*}, \xi, \chi\right)=\left(\begin{array}{ccc} \left(\chi^{0}\right)^{*} & \xi^{+} & \chi^{++} \\ -\left(\chi^{+}\right)^{*} & \xi^{0} & \chi^{+} \\ \left(\chi^{++}\right)^{*} & -\left(\xi^{+}\right)^{*} & \chi^{0} \end{array}\right), \quad \text { with } \epsilon_{3}=\left(\begin{array}{ccc} 0 & 0 & 1 \\ 0 & -1 & 0 \\ 1 & 0 & 0 \end{array}\right) ~.
\end{aligned} \end{equation} The Lagrangian of the EW sector is given by \begin{equation} \mathcal{L}=\frac{1}{2} \operatorname{tr}\left[\left(D^{\mu} \Phi\right)^{\dagger} \left(D_{\mu} \Phi\right)\right]+\frac{1}{2} \operatorname{tr}\left[\left(D^{\mu} \Delta\right)^{\dagger} \left(D_{\mu} \Delta\right)\right]-V(\Phi, \Delta) ~, \end{equation} with the most general potential invariant under the gauge and global $\mathrm{SU}(2)_{L} \times \mathrm{SU}(2)_{R} \times \mathrm{U(1)_Y}$ symmetries as \begin{equation} \begin{aligned} V(\Phi, \Delta)=& \frac{1}{2} m_{1}^{2} \operatorname{tr}\left[\Phi^{\dagger} \Phi\right]+\frac{1}{2} m_{2}^{2} \operatorname{tr}\left[\Delta^{\dagger} \Delta\right]+\lambda_{1}\left(\operatorname{tr}\left[\Phi^{\dagger} \Phi\right]\right)^{2}+\lambda_{2}\left(\operatorname{tr}\left[\Delta^{\dagger} \Delta\right]\right)^{2} \\ &+\lambda_{3} \operatorname{tr}\left[\left(\Delta^{\dagger} \Delta\right)^{2}\right]+\lambda_{4} \operatorname{tr}\left[\Phi^{\dagger} \Phi\right] \operatorname{tr}\left[\Delta^{\dagger} \Delta\right]+\lambda_{5} \operatorname{tr}\left[\Phi^{\dagger} \frac{\sigma^{a}}{2} \Phi \frac{\sigma^{b}}{2}\right] \operatorname{tr}\left[\Delta^{\dagger} T^{a} \Delta T^{b}\right] \\ &+\mu_{1} \operatorname{tr}\left[\Phi^{\dagger} \frac{\sigma^{a}}{2} \Phi \frac{\sigma^{b}}{2}\right]\left(P^{\dagger} \Delta P\right)_{a b}+\mu_{2} \operatorname{tr}\left[\Delta^{\dagger} T^{a} \Delta T^{b}\right]\left(P^{\dagger} \Delta P\right)_{a b} ~, \end{aligned} \end{equation} where $\sigma^a$ and $T^a$ are the $2\times2$ and $3\times3$ representations of the $\mathrm{SU}(2)$ generators, and the matrix $P$, which rotates $\Delta$ into the Cartesian basis, is given by \begin{equation*} P=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc} -1 & i & 0 \\ 0 & 0 & \sqrt{2} \\ 1 & i & 0 \end{array}\right) ~. 
\end{equation*} The vacuum potential is given by \begin{equation}\label{potential at 0} V_0=\frac{m_{1}^{2}}{2} v_{\phi}^{2}+\frac{3}{2}m_{2}^{2} v_{\Delta}^{2}+\lambda_{1} v_{\phi}^{4}+\frac{3}{2}\left(2 \lambda_{4}+\lambda_{5}\right) v_{\phi}^{2} v_{\Delta}^{2}+3\left(\lambda_{3}+3 \lambda_{2}\right) v_{\Delta}^{4}+\frac{3}{4} \mu_{1} v_{\phi}^{2} v_{\Delta}+6 \mu_{2} v_{\Delta}^{3} ~, \end{equation} \label{eq:vacuum alignment} where the VEVs\footnote{As elucidated in Ref.~\cite{Chen:2022ocr}, one has to choose ``aligned'' triplet VEVs for the custodially symmetric potential. Assuming misaligned VEVs would lead to undesirable Goldstone and tachyonic modes in the model.} \begin{equation}\label{eq:custudial symmetry} \left\langle h_{\phi}\right\rangle=v_{\Phi}, \quad \left\langle h_{\chi}\right\rangle= \sqrt{2} v_{\Delta}, \quad \left\langle h_{\xi}\right\rangle=v_{\Delta} \end{equation} preserve the custodial $\mathrm{SU}(2)_V$ symmetry by breaking the $\mathrm{SU}(2)_L\times \mathrm{SU}(2)_R$ symmetry diagonally, and satisfy $v=\sqrt{v_\phi^2+8v_\Delta^2}\simeq 246$~GeV. The tadpole conditions are given by \begin{equation} \frac{\partial V(\Phi, \Delta)}{\partial h_{\phi}}\Bigg\vert_0=\frac{\partial V(\Phi, \Delta)}{\partial h_{\chi}}\Bigg\vert_0=\frac{\partial V(\Phi, \Delta)}{\partial h_{\xi}}\Bigg\vert_0=0 ~. \end{equation} Since the last two conditions are equivalent, we eventually have two linearly independent conditions: \begin{equation}\label{eq:tadpole conditions} \begin{aligned} m_{1}^{2} &=-4 \lambda_{1} v_{\Phi}^{2}-6 \lambda_{4} v_{\Delta}^{2}-3 \lambda_{5} v_{\Delta}^{2}-\frac{3}{2} \mu_{1} v_{\Delta} ~, \\ m_{2}^{2} &=-12 \lambda_{2} v_{\Delta}^{2}-4 \lambda_{3} v_{\Delta}^{2}-2 \lambda_{4} v_{\Phi}^{2}-\lambda_{5} v_{\Phi}^{2}-\mu_{1} \frac{v_{\Phi}^{2}}{4 v_{\Delta}}-6 \mu_{2} v_{\Delta} ~. 
\end{aligned} \end{equation} We further define \begin{equation} M_{1}^{2} \equiv-\frac{v}{\sqrt{2} \cos \beta} \mu_{1}, \quad M_{2}^{2} \equiv-3 \sqrt{2} \cos \beta v \mu_{2} \end{equation} to simplify the notations, where $\tan{\beta}=v_\phi/\left(2\sqrt{2}v_\Delta\right)$. Before we discuss the mass spectrum of the scalars, it is convenient to classify them according to their custodial $\mathrm{SU}(2)_V$ isospins. We decompose the $\mathbf{2}\otimes\mathbf{2}$ representation $\Phi$ and the $\mathbf{3}\otimes\mathbf{3}$ representation $\Delta$ into irreducible $\mathbf{1}\oplus\mathbf{3}$ and $\mathbf{1}\oplus\mathbf{3}\oplus\mathbf{5}$ representations, respectively. In general, the two singlet fields and the two triplet fields can further mix respectively with each other, and three Nambu-Goldstone (NG) modes to be eaten by the weak gauge bosons are produced from the latter mixing. The physical quintet $(H_5^{\pm\pm},H_5^{\pm},H_5^{0})$, the physical triplet $(H_3^{\pm}, H_3^{0})$, and the two physical singlets $(H_1,h)$ can be related to the original fields via \begin{equation}\label{flavor eigenstates} \begin{array}{l} H_{5}^{++}=\chi^{++}, \quad H_{5}^{+}=\frac{1}{\sqrt{2}}\left(\chi^{+}-\xi^{+}\right), \quad H_{5}^{0}=\sqrt{\frac{1}{3}} h_{\chi}-\sqrt{\frac{2}{3}} h_{\xi}, \\ H_{3}^{+}=-\cos \beta \phi^{+}+\sin \beta \frac{1}{\sqrt{2}}\left(\chi^{+}+\xi^{+}\right), \quad H_{3}^{0}=-\cos \beta a_{\phi}+\sin \beta a_{\chi}, \\ h=\cos \alpha h_{\phi}-\frac{\sin \alpha}{\sqrt{3}}\left(\sqrt{2} h_{\chi}+h_{\xi}\right), \quad H_{1}=\sin \alpha h_{\phi}+\frac{\cos \alpha}{\sqrt{3}}\left(\sqrt{2} h_{\chi}+h_{\xi}\right)~, \end{array} \end{equation} where the mixing angle $\alpha \in \left( -\pi/2 , \pi/2 \right)$ is given by \begin{equation}\label{alpha} \tan 2 \alpha=\frac{2\left(M^{2}\right)_{12}}{\left(M^{2}\right)_{22}-\left(M^{2}\right)_{11}} ~, \end{equation} with \begin{equation} \begin{array}{l} \left(M^{2}\right)_{11}= 8 \lambda_1 v_\phi^2 =8 \lambda_{1} 
v^{2} \sin ^{2} \beta ~, \\ \left(M^{2}\right)_{22}=\left(3 \lambda_{2}+\lambda_{3}\right) v^{2} \cos ^{2} \beta+M_{1}^{2} \sin ^{2} \beta-\frac{1}{2} M_{2}^{2} ~, \\ \left(M^{2}\right)_{12}=\sqrt{\frac{3}{2}} \sin \beta \cos \beta\left[\left(2 \lambda_{4}+\lambda_{5}\right) v^{2}-M_{1}^{2}\right] ~. \end{array} \end{equation} The mass eigenvalues are then given by \begin{equation}\label{masses} \begin{aligned} m_{H_{5}}^{2} & \equiv m_{H_{5}^{\pm\pm}}^{2}=m_{H_{5}^{\pm}}^{2}=m_{H_{5}^{0}}^{2}=\left(M_{1}^{2}-\frac{3}{2} \lambda_{5} v^{2}\right) \sin ^{2} \beta+\lambda_{3} v^{2} \cos ^{2} \beta+M_{2}^{2} ~, \\ m_{H_{3}}^{2} & \equiv m_{H_{3}^{\pm}}^{2}=m_{H_{3}^{0}}^{2}=M_{1}^{2}-\frac{1}{2} \lambda_{5} v^{2} ~, \\ m_{H_{1}}^{2} & =\left( M^{2}\right)_{11} \sin ^{2} \alpha+\left(M^{2}\right)_{22} \cos ^{2} \alpha+2 \left(M^{2}\right)_{12} \sin \alpha \cos \alpha ~, \\ m_{h}^{2} & =\left(M^{2}\right)_{11} \cos ^{2} \alpha+\left(M^{2}\right)_{22} \sin ^{2} \alpha-2 \left(M^{2}\right)_{12} \sin \alpha \cos \alpha ~, \end{aligned} \end{equation} where we identify $h$ as the 125-GeV SM-like Higgs. We remark here that because of the preserved custodial symmetry at tree level, the quintet and triplet mass spectra are degenerate, respectively. The first thing we now observe is the modification to the trilinear Higgs self-coupling, which is given by \begin{equation} \begin{aligned} g_{h h h} =& 24 \cos^{3}\alpha \lambda_{1} v_{\phi}+6 \cos \alpha \sin^{2}\alpha v_{\phi}\left(2 \lambda_{4}+\lambda_{5}\right)\\&+\frac{3}{2} \sqrt{3} \cos^{2}\alpha \sin\alpha\left[4 v_{\Delta}\left(-2 \lambda_{4}-\lambda_{5}\right)-\mu_{1}\right] -4 \sqrt{3} \sin^{3}\alpha\left[\mu_{2}+2 v_{\Delta}\left(3 \lambda_{2}+\lambda_{3}\right)\right] ~, \end{aligned} \end{equation} where the SM counterpart is given by $g^{\rm SM}_{hhh}=3m_h^2/v$. 
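As a quick numerical cross-check of the singlet sector, one can verify that the mixing angle $\alpha$ defined above indeed diagonalizes the $2\times2$ mass matrix, i.e. that $m_{H_1}^2$ and $m_h^2$ computed from the rotation formulas coincide with the closed-form eigenvalues of $(M^2)$. The benchmark numbers below are purely illustrative, not a fit point of this paper:

```python
import math

# Illustrative benchmark inputs (hypothetical, not from the fit).
v = 246.0
lam1, lam2, lam3, lam4, lam5 = 0.1, 0.2, 0.1, 0.3, -0.2
M1sq, M2sq = 400.0**2, 150.0**2
beta = math.atan(10.0)                 # tan(beta) = v_phi / (2*sqrt(2)*v_Delta)
sb, cb = math.sin(beta), math.cos(beta)

# Singlet mass-matrix entries (M^2)_{11,22,12} as given in the text.
M11 = 8.0 * lam1 * v**2 * sb**2
M22 = (3.0 * lam2 + lam3) * v**2 * cb**2 + M1sq * sb**2 - 0.5 * M2sq
M12 = math.sqrt(1.5) * sb * cb * ((2.0 * lam4 + lam5) * v**2 - M1sq)

# Mixing angle from tan(2 alpha) = 2 (M^2)_{12} / ((M^2)_{22} - (M^2)_{11}).
alpha = 0.5 * math.atan2(2.0 * M12, M22 - M11)
sa, ca = math.sin(alpha), math.cos(alpha)

# Masses from the rotation formulas ...
mH1sq = M11 * sa**2 + M22 * ca**2 + 2.0 * M12 * sa * ca
mhsq  = M11 * ca**2 + M22 * sa**2 - 2.0 * M12 * sa * ca

# ... must coincide with the closed-form eigenvalues of the 2x2 matrix.
mean = 0.5 * (M11 + M22)
r = math.hypot(0.5 * (M11 - M22), M12)
eigs = sorted([mean - r, mean + r])
for got, want in zip(sorted([mhsq, mH1sq]), eigs):
    assert abs(got - want) < 1e-6 * abs(want)
```

The same check also confirms that the trace $m_h^2+m_{H_1}^2=(M^2)_{11}+(M^2)_{22}$ is preserved by the rotation.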
On the other hand, the singlet mixing also leads to \begin{equation} \begin{aligned} g_{H_{1} h h}=& 24 \lambda_{1} \cos^2{\alpha} \sin{\alpha} v_{\phi}+8 \sqrt{3} \cos{\alpha} \sin^2{\alpha} v_{\Delta}\left(\lambda_{3}+3 \lambda_{2}\right)\\ &+2\left[\sqrt{3} \cos{\alpha} v_{\Delta}\left(3 \cos^2{\alpha}-2\right)+\sin{\alpha} v_{\phi}\left(1-3 \cos^2{\alpha}\right)\right]\left(2 \lambda_{4}+\lambda_{5}\right) \\ &+\frac{\sqrt{3}}{2} \mu_{1} \cos{\alpha}\left(3 \cos^2{\alpha}-2\right)+4 \sqrt{3} \mu_{2} \cos{\alpha} \sin^2{\alpha} ~. \end{aligned} \end{equation} Because of these two couplings, the di-Higgs production rate predicted by the GM model can be considerably different from the SM prediction, making it one of the most interesting channels to be studied. Moreover, the couplings of $h$ to the SM fermions $f$ and weak gauge bosons $V = W,Z$ are modified respectively as \begin{equation} \begin{aligned} g_{h f \bar{f}} & =\kappa_F \times g_{h f \bar{f}}^{\mathrm{SM}}~,\\ g_{h V V} &= \kappa_V \times g_{h V V}^{\mathrm{SM}}~, \end{aligned} \end{equation} with \begin{equation} \begin{aligned} \kappa_F &= \frac{\cos{\alpha}}{\sin{\beta}}~, \\ \kappa_V &= \sin{\beta} \cos{\alpha}-\sqrt{\frac{8}{3}} \cos{\beta} \sin{\alpha}~, \end{aligned} \end{equation} which are sensitive to the current Higgs measurements. In particular, the GM model is arguably the simplest custodially symmetric model whose $\kappa$'s can be larger than unity. Also, because one major contribution to the di-Higgs production is a box diagram with an internal top-quark loop, the modifications to the Yukawa couplings also have a large impact on this process. Here we briefly comment on the decoupling limit of the GM model\footnote{We note that the model does not have the limit of alignment without decoupling.}, which is an important region for the global fit as no conclusive discovery of new physics has been made to date.
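Before turning to that limit, the coupling modifiers above can be illustrated numerically (the mixing angles used here are hypothetical, not fit results): in the limit $v_\Delta\to0$, i.e. $\beta\to\pi/2$, with $\alpha\to0$, both $\kappa_F$ and $\kappa_V$ reduce to unity, while a sizable triplet VEV with negative $\alpha$ can push $\kappa_V$ above unity:

```python
import math

def kappa_F(alpha, beta):
    """h-to-fermion coupling modifier in the GM model."""
    return math.cos(alpha) / math.sin(beta)

def kappa_V(alpha, beta):
    """h-to-weak-boson coupling modifier in the GM model."""
    return (math.sin(beta) * math.cos(alpha)
            - math.sqrt(8.0 / 3.0) * math.cos(beta) * math.sin(alpha))

# SM limit: alpha -> 0 and beta -> pi/2 (vanishing triplet VEV).
assert abs(kappa_F(0.0, math.pi / 2) - 1.0) < 1e-12
assert abs(kappa_V(0.0, math.pi / 2) - 1.0) < 1e-12

# A hypothetical point with a sizable triplet VEV and negative alpha:
# kappa_V exceeds unity, which pure doublet or singlet extensions cannot achieve.
assert kappa_V(-0.2, 1.4) > 1.0
```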
The decoupling limit of the GM model is achieved when $v_\Delta\to0$ and $\mu_1\to0$, as a result of which we have \begin{equation} \cos{\beta} \to 0, \quad \alpha \to 0, \quad M_1^2 \gg v^2 \text{~and~} \quad M_2^2 \to 0 ~. \end{equation} In this limit, the scalar masses reduce to \begin{equation} \begin{aligned} m_{H_5}^2 &\to -\frac{3}{2} \lambda_5 v^2 + M_1^2 + M_2^2~, \\ m_{H_3}^2 &\to -\frac{1}{2} \lambda_5 v^2 + M_1^2~, \\ m_{H_1}^2 &\to M_1^2 - \frac{1}{2} M_2^2~, \\ m_h^2 &\to 8 \lambda_1 v^2 ~, \end{aligned} \end{equation} where only $h$ remains at the electroweak scale and acts exactly like the SM Higgs boson. Additionally, the mass spectrum of the exotic Higgs bosons satisfies the relation \begin{equation}\label{eq:mass hierarchy} 2m_{H_1}^2=3m_{H_3}^2-m_{H_5}^2 ~. \end{equation} We now discuss the theoretical constraints on the parameter space. We consider three different sets of constraints at tree level: the vacuum stability or the bounded from below (BFB) condition, the perturbative unitarity condition, and the unique vacuum condition\footnote{We remark that the theoretical bounds implemented in this work are conservative. The loop corrections may break these constraints~\cite{Chiang:2018cgb}. Since our focus is on the LHC constraints, we adopt the more relaxed bounds on the theory side.}. The BFB condition ensures that there is a stable vacuum in the potential. As noted in Ref.~\cite{Hartling:2014zca}, the BFB constraint can be satisfied as long as the quartic terms of the scalar potential remain positive for all possible field configurations, and can be guaranteed by satisfying the following conditions: \begin{equation} \begin{array}{l} \lambda_{1}>0 ~, \\ \lambda_{2}>\left\{\begin{array}{l} -\frac{1}{3} \lambda_{3} \text { for } \lambda_{3} \geq 0 ~, \\ -\lambda_{3} \quad \text { for } \lambda_{3}<0 ~, \end{array}\right.
\\ \lambda_{4}>\left\{\begin{array}{l} -\frac{1}{2} \lambda_{5}-2 \sqrt{\lambda_{1}\left(\frac{1}{3} \lambda_{3}+\lambda_{2}\right)} \quad \text { for } \lambda_{5} < 0 \text { and } \lambda_{3} \geq 0 ~, \\ -\omega_{+}(\zeta) \lambda_{5}-2 \sqrt{\lambda_{1}\left(\zeta \lambda_{3}+\lambda_{2}\right)} \text { for } \lambda_{5} < 0 \text { and } \lambda_{3}<0 ~, \\ -\omega_{-}(\zeta) \lambda_{5}-2 \sqrt{\lambda_{1}\left(\zeta \lambda_{3}+\lambda_{2}\right)} \text { for } \lambda_{5} \geq 0 ~, \end{array}\right. \end{array} \end{equation} where $\omega \in\left[\omega_{-}, \omega_{+}\right]$, and \begin{equation} \omega_{\pm}(\zeta)=\frac{1}{6}(1-B) \pm \frac{\sqrt{2}}{3}\left[(1-B)\left(\frac{1}{2}+B\right)\right]^{1/2} ~, \end{equation} with \begin{equation} B \equiv \sqrt{\frac{3}{2}\left(\zeta-\frac{1}{3}\right)} \in[0,1], \text{ and } \zeta \in [\frac{1}{3},1] ~. \end{equation} The perturbative unitarity condition requires that the largest zeroth partial-wave mode of all $2\to2$ scattering channels be smaller than $1/2$ at high energies. Such constraints of the GM model were first studied in Ref.~\cite{Aoki:2007ah} and shown to be \begin{equation} \begin{aligned} &\left|6 \lambda_{1}+7 \lambda_{3}+11 \lambda_{2}\right| \pm \sqrt{\left(6 \lambda_{1}-7 \lambda_{3}-11 \lambda_{2}\right)^{2}+36 \lambda_{4}^{2}}<4 \pi ~, \\ &\left|2 \lambda_{1}-\lambda_{3}+2 \lambda_{2}\right| \pm \sqrt{\left(2 \lambda_{1}+\lambda_{3}-2 \lambda_{2}\right)^{2}+\lambda_{5}^{2}}<4 \pi ~, \\ &\left|\lambda_{4}+\lambda_{5}\right|<2 \pi, \quad\left|2 \lambda_{3}+\lambda_{2}\right|<\pi ~, \end{aligned} \end{equation} in the high-energy limit. The unique vacuum condition~\cite{Hartling:2014zca} requires that there be no alternative global minimum in the scalar potential to the custodially-conserving vacuum. 
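For parameter scans, the tree-level unitarity conditions above translate directly into a simple acceptance predicate. The sketch below (with arbitrary test points, not scan output) keeps the $+$ branch of each $\pm$, which is the binding one:

```python
import math

def unitarity_ok(l1, l2, l3, l4, l5):
    """Tree-level perturbative unitarity conditions quoted in the text,
    taking the '+' branch of each '+/-' (the binding case)."""
    return (abs(6*l1 + 7*l3 + 11*l2)
            + math.sqrt((6*l1 - 7*l3 - 11*l2)**2 + 36*l4**2) < 4*math.pi
            and abs(2*l1 - l3 + 2*l2)
            + math.sqrt((2*l1 + l3 - 2*l2)**2 + l5**2) < 4*math.pi
            and abs(l4 + l5) < 2*math.pi
            and abs(2*l3 + l2) < math.pi)

assert unitarity_ok(0.1, 0.1, 0.1, 0.1, 0.1)       # weakly coupled point passes
assert not unitarity_ok(2.0, 2.0, 2.0, 2.0, 2.0)   # strongly coupled point fails
```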
To examine this condition, we first parametrize the triplet VEVs as \begin{equation} \operatorname{Re} \chi^{0}=\frac{1}{\sqrt{2}} \sin \theta, \quad \xi^{0}=\cos \theta ~, \end{equation} where $\theta \in [-\pi, \pi]$. Then, we scan over the $\theta$ interval and check whether there is a deeper point in the potential than the custodially-conserving limit lying at $\theta=\pi/4$. \section{Global Fitting and Experimental Constraints} \label{sec:Constraints} In our global fits in the GM model, we utilize the \texttt{HEPfit} package, which is based upon a Bayesian statistics approach. Bayes' theorem states that \begin{equation} p(\vec{p} \mid \vec{d}, m)=\frac{p(\vec{d} \mid \vec{p}, m) \times p(\vec{p} \mid m)}{p(\vec{d} \mid m)} ~, \end{equation} where $p(\vec{d} \mid \vec{p}, m)$ is the likelihood, $p(\vec{p} \mid m)$ is the prior\footnote{Conceptually, a prior can either merely specify the pre-knowledge of the parameter distributions, or it can further embed the behavior of the model. For example, the tadpole conditions given in Eq.~(\ref{eq:tadpole conditions}) can have non-physical solutions, such as duplicate vacua or imaginary VEVs (note that we have chosen the phase convention such that $v_\phi,v_\Delta$ are both real and positive). The exclusion of such data points can either be thought of as part of the prior or as part of the likelihood. In this work, we choose to interpret this in the former way, and consequently, the likelihood contains only the theoretical and experimental constraints.}, $p(\vec{d} \mid m)$ is the evidence, and $p(\vec{p} \mid \vec{d},m)$ is the posterior. These probability distributions are described by the model parameters $\vec{p}$, the data $\vec{d}$, and the prior knowledge $m$, which is defined by the mean values and variances of the input parameters.
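The posterior-sampling step can be illustrated conceptually with a toy one-dimensional Metropolis-Hastings chain (this is only a schematic sketch, not \texttt{HEPfit}'s actual implementation): the chain draws from $p(\theta\mid d)\propto p(d\mid\theta)\,p(\theta)$ built from a Gaussian likelihood and a uniform prior:

```python
import math
import random

random.seed(1)

# Toy setup: Gaussian likelihood around a "measurement" d, uniform prior on [-3, 3].
def log_posterior(theta, d=1.0, sigma=0.5):
    if abs(theta) > 3.0:
        return -math.inf               # outside the uniform prior's support
    return -0.5 * ((d - theta) / sigma) ** 2

theta, chain = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)
    # Metropolis acceptance step on the log posterior.
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    chain.append(theta)

# After burn-in, the chain concentrates near the data point.
posterior_mean = sum(chain[5000:]) / len(chain[5000:])
assert abs(posterior_mean - 1.0) < 0.15
```

In the actual fit, $\theta$ is replaced by the seven-dimensional input-parameter vector and the log-likelihood sums the theoretical-constraint and collider-measurement terms.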
Thus, in addition to the experimental data that determine the likelihood, a prior that specifies the \textit{a priori} distributions of the model parameters is also required, in which we can freely embed our pre-knowledge of the model. As alluded to earlier, a similar global fit was performed in Ref.~\cite{Chiang:2018cgb}. This work differs from it in the following ways. First, the theoretical constraints are refined according to Ref.~\cite{Aoki:2007ah} (as discussed in Sec.~\ref{sec:The Georgi-Machacek Model}) and the experimental data are updated. Second, we focus on the parameter space where the exotic Higgs masses are reachable according to the LHC sensitivity. Finally, we change our scheme for the input parameters to achieve more stable numerical computations. We now address the details of the global fit. \subsection{Prior choices and mass constraints} In a typical Bayesian fit, it is important to select a reasonable prior, lest the fit lead to unwanted statistical biases or non-physical results, while at the same time embedding our pre-understanding of the model into the fit. In our work, we choose the following seven potential parameters as the input parameters: $\lambda_2$, $\lambda_3$, $\lambda_4$, $\lambda_5$, $\mu_1$, $\mu_2$ and $m_2^2$. We make this change compared to Ref.~\cite{Chiang:2018cgb} because of the limited floating-point precision of computers, which could cause the inference of quartic couplings from the physical masses and VEVs to suffer from serious error propagation. This is especially important to our fit since all of the theoretical constraints are imposed on the dimensionless parameters, which places a relatively high demand on numerical precision. We choose the priors of the dimensionless parameters to be uniform within the bounds specified by the perturbative unitarity conditions~\cite{Hartling:2014zca}.
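Such a unitarity-bounded flat prior can be realized by rejection sampling, as in the sketch below. The flat sampling ranges, and in particular the treatment of $\lambda_1$ (which is not one of our seven input parameters), are illustrative assumptions patterned on the prior table:

```python
import math
import random

def unitarity_ok(l1, l2, l3, l4, l5):
    """High-energy perturbative unitarity bounds quoted in the text.

    Each inequality must hold for both signs of the square root, so the
    '+' branch is the binding one.
    """
    c1 = abs(6*l1 + 7*l3 + 11*l2) + math.sqrt((6*l1 - 7*l3 - 11*l2)**2 + 36*l4**2)
    c2 = abs(2*l1 - l3 + 2*l2) + math.sqrt((2*l1 + l3 - 2*l2)**2 + l5**2)
    return (c1 < 4*math.pi and c2 < 4*math.pi
            and abs(l4 + l5) < 2*math.pi and abs(2*l3 + l2) < math.pi)

def draw_lambdas(rng=random):
    """Rejection-sample (lambda_1, ..., lambda_5) uniformly within the bounds.

    lambda_1..lambda_4 are drawn from (-pi, pi) and lambda_5 from (-3pi, 3pi);
    these ranges are our illustrative choice, not the exact fit setup.
    """
    while True:
        ls = [rng.uniform(-math.pi, math.pi) for _ in range(4)] \
             + [rng.uniform(-3*math.pi, 3*math.pi)]
        if unitarity_ok(*ls):
            return ls
```

Rejection sampling preserves flatness of the prior on the allowed region, at the cost of discarding proposals that violate the bounds.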
As for the other couplings, we choose to make them Gaussian-distributed and, therefore, they are in general unbounded. Moreover, we choose the $m_2^2$ prior to be uniformly distributed on a logarithmic scale. Finally, because we only focus on the mass ranges accessible to the LHC in the near future, we impose auxiliary single-sided Gaussian constraints on the heavy scalar masses. The summary of the prior choices is given in Table~\ref{tab:priors setting}. \begin{table} \begin{tabular}{l|llll} \hline Parameters \qquad \qquad& Feature\qquad \qquad &Shape\qquad \qquad &Mean\qquad \qquad& Error/Range\\ \hline\hline \textbf{Input} & & &Priors\\ \hline $m_2^2$ / $\mathrm{GeV}^2$ & $\log$ & Gaussian &$10^{-2}$& $\left(10^{-4}, 10^{8}\right)$ \\ $\lambda_2$ & linear & Uniform & -- &$\left(-\pi, \pi\right)$ \\ $\lambda_3$ & linear & Uniform & -- &$\left(-\pi, \pi \right)$ \\ $\lambda_4$ & linear & Uniform & -- &$\left(-\pi, \pi \right)$ \\ $\lambda_5$ & linear & Uniform & -- &$\left(-3\pi, 3\pi \right)$ \\ $\mu_1$ / $\mathrm{GeV}$ & linear & Gaussian &$0$& $\left(-5\times10^{3}, 5\times10^{3}\right)$ \\ $\mu_2$ / $\mathrm{GeV}$ & linear & Gaussian &$0$& $\left(-5\times10^{3}, 5\times10^{3}\right)$ \\ \hline \textbf{Auxiliary} &&& Priors \\ \hline $m_{H_{1,3,5}}$ / $\mathrm{GeV}$ & $\mathbb{R}_+$ & Gaussian &$10^{-2}$&$(10^{-2},10^3)$ \\ \hline \hline \end{tabular} \caption{Input parameters and corresponding prior choices, as well as the auxiliary constraints on the new scalar masses in our global fit.} \label{tab:priors setting} \end{table} \subsection{Experimental data from the colliders} We mainly consider data from the LHC Higgs signal strength measurements and exotic scalar searches as our experimental constraints, supplemented with a few data sets from the Tevatron. Based upon those used in Ref.~\cite{Chiang:2018cgb}, we update with the latest data.
We show in Table~\ref{tab:signalstrengthinputs} in Appendix~\ref{appendix:exp list} the current sensitivity of each individual channel for the Higgs signal strengths. The new data that we add are quoted from Refs.~\cite{ATLAS-CONF-2020-027,ATLAS-CONF-2020-045,Aad:2020jym,CMS-PAS-HIG-17-026,ATLAS:2020qcv,CMS-PAS-HIG-19-014}. We define $\hat{\sigma}$ to be the ratio of the smallest uncertainty among all individual measurements in one table cell of Table~\ref{tab:signalstrengthinputs} ($\sigma_{min}$) to the weight of the corresponding production channel ($w$)\footnote{ For example, the smallest uncertainty of the 13-TeV $gg\to h\to WW$ measurements is given by Ref.~\cite{ATLAS-CONF-2020-027}, which gives the signal strength $\mu=1.08^{+0.19}_{-0.18}$, and thus $\sigma_{min}=0.18$. The weight $w$ is $100\%$ in this case, and eventually we have $\hat{\sigma}=\sigma_{min}/w=0.18$. As such, the corresponding cell in Table~\ref{tab:signalstrengthinputs} is painted green according to the color scheme shown under the table. }. We then use $\hat{\sigma}$ to give an estimate of the current sensitivity of each individual channel. We remark that $\hat{\sigma}$ relies on the individual measurements instead of the combined ones, and thus this quantity is only intended to deliver a rough precision estimate for each channel. The direct search data are listed in Appendix~\ref{appendix:exp list}, with the old data (Tables~\ref{tab:neutral heavy higgs with fermions},\ref{tab:neutral heavy higgs with bosons},\ref{tab:neutral heavy higgs with higgs} and \ref{tab:charged heavy higgs with charged scalars}) separated from the new ones (Tables~\ref{tab:new neutral direct searches} and \ref{tab:new charged direct searches}). \subsection{Global fit results} We show in Fig.~\ref{Fig:alpha-vDelta with different constraints} the results of the global fits, with different constraints imposed, in the $\alpha$-$v_\Delta$ plane.
With our chosen prior, most of the data accumulate around the origin, which corresponds to the decoupling limit, as shown in Fig.~\ref{Fig:alpha-vDelta with different constraints}(a). After we impose the theoretical constraints, the data start to show a tendency towards the region around $\kappa_{F}\sim1$ in Fig.~\ref{Fig:alpha-vDelta with different constraints}(b). This is because the theoretical constraints tend to suppress the magnitudes of $\lambda_i$, which would in turn exclude the region where $M^2_{12}\to0$ or $M^2_{22}-M^2_{11}\gg1$ and thus cause the posterior of $\alpha$ to disfavor the point 0. Moreover, since the upper bound imposed on $m_{H_3}$ when $\alpha>0$, which is given by~\cite{Chiang:2015amq} \begin{equation} m_{H_{3}}^{2} \leq \frac{1}{2}\left(4 \lambda_{4}+\lambda_{5}\right) v^{2}+\sqrt{\frac{2}{3}} \frac{\sin \alpha \cos \alpha}{\sin \beta \cos \beta} m_{h}^{2} ~, \end{equation} is suppressed by the theoretical constraints, the $\alpha>0$ region is in tension with our prior setting that favors large exotic scalar masses. As a result, the region where $\kappa_F\sim1$, which has already been favored by the prior, becomes dominant in the posterior distribution. Once the Higgs signal strength constraints are applied, the allowed parameter space becomes noticeably restricted, as shown in Fig.~\ref{Fig:alpha-vDelta with different constraints}(c). The region around $\alpha\sim0$ becomes excluded because the signal strengths in the $WW$ and $ZZ$ channels are measured to be larger than the SM predictions, thus favoring the region where $\kappa_{V}>1.0$. Finally, in Fig.~\ref{Fig:alpha-vDelta with different constraints}(d), we observe that the direct search data further exclude more of the region where $\kappa_{V}>1.1$ and $\kappa_{F}>1.0$ because the data points with larger $H_{3,5}^\pm$ branching ratios to certain channels, as discussed in Ref.~\cite{Chiang:2015amq}, are excluded by the experiments.
\begin{figure}[htb] \centering \includegraphics[width=1\textwidth]{figures/alpha_vDelta-rh.pdf} \caption{Normalized posterior distributions in the $\alpha$-$v_\Delta$ plane with (a) only the prior imposed, (b) theoretical constraints imposed, (c) theoretical constraints and Higgs signal strength constraints imposed, (d) theoretical constraints, Higgs signal strength constraints, and direct search constraints imposed. The dashed and solid curves represent the contours of $\kappa_{f}$ and $\kappa_{V}$, respectively.} \label{Fig:alpha-vDelta with different constraints} \end{figure} Before closing this section, we would like to add a remark on the $m_1^2$ parameter. While $m_1^2$ has to be negative in the SM to generate a non-trivial vacuum, this is not necessary for the GM model, as the VEV in the $\phi_r$ direction can be induced by the interactions between $\Phi$ and $\Delta$. We find from the results of the global fit that $m_1^2$ is bounded from above at $\sim9000$~GeV$^2$. As $m_1^2$ increases, stronger interactions between the doublet and triplet fields are required to induce a VEV in the $\phi_r$ direction, and eventually this requirement runs up against the theoretical constraints. This phenomenon is crucial to the discussion of EWPT in the next section. \section{Electroweak Phase Transition and Gravitational Waves} \label{sec:Electroweak Phase Transition and Gravitational Waves} In this section, we discuss the EWPTs and the spectrum of induced GWs in the GM model. At high temperatures, thermal corrections dominate the total potential and stabilize it at the origin, where the electroweak symmetry is preserved. When the temperature drops to a critical temperature $T_C$, at which the potential develops another minimum degenerate with the one at the origin, a non-trivial symmetry-breaking phase $\vec{h}(T=T_C)$ starts to form.
If there exists a sufficiently high and wide potential barrier between the symmetric-phase vacuum and the broken-phase vacuum, then a first-order phase transition would take place. As the temperature further decreases, the potential barrier also lowers while the potential difference between the true and false vacua increases, eventually leading to bubble nucleation in the field plasma. Collisions of these vacuum bubbles induce the production of stochastic GWs. In the following, we discuss the details of these dynamics in the GM model. \subsection{Electroweak phase transitions} In our study, we assume that the EWPT takes place at a sufficiently high temperature such that the one-loop thermal corrections dominate over the Coleman-Weinberg potential, allowing an expansion of the thermal corrections to $\mathcal{O}\left(T^2\right)$. The overall potential at $T>0$ is then given by \begin{equation}\label{eq:HT potential} V^{HT}_T(\vec{h}, T) = V_0(\vec{h}) + \frac{1}{2} \left( \Sigma_\phi h_\phi^2 + \Sigma_\chi h_\chi^2 + \Sigma_\xi h_\xi^2\right) T^2 ~, \end{equation} where $V_0$ is the tree-level potential, $\vec h = (h_\phi, h_\xi, h_\chi)$, and the thermal mass contributions are \begin{align} \label{thermal masses contribution} \begin{split} \Sigma_{\phi} &= \frac{3 g^{2}}{16}+\frac{g^{\prime 2}}{16}+2\lambda_{1}+\frac{3 \lambda_{4}}{2}+\frac{1}{4} y_{t}^{2} \csc ^{2}{\beta} ~, \\ \Sigma_{\chi} &= \frac{g^{2}}{2}+\frac{g^{\prime 2}}{4}+\frac{11 \lambda_{2}}{3}+\frac{7 \lambda_{3}}{3}+\frac{2\lambda_{4}}{3} ~,\\ \Sigma_{\xi} &= \frac{g^{2}}{2}+\frac{11 \lambda_{2}}{3}+\frac{7 \lambda_{3}}{3}+\frac{2\lambda_{4}}{3} ~, \end{split} \end{align} with $y_t=\sqrt{2}m_t/v_\phi$ being the top Yukawa coupling, and $g$ and $g^\prime$ being respectively the $SU(2)_L$ and $U(1)_Y$ gauge couplings. Assuming that the custodial symmetry is still preserved at $T>0$, we set $h_\xi=h_\chi/\sqrt{2}=h_\Delta$.
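A direct transcription of Eq.~(\ref{eq:HT potential}) and the thermal masses above reads as follows; this is an illustrative sketch in Python, not the actual implementation used in our scan:

```python
import math

def thermal_sigmas(g, gp, l1, l2, l3, l4, yt, beta):
    """Thermal mass coefficients Sigma_phi, Sigma_chi, Sigma_xi (csc = 1/sin)."""
    s_phi = 3*g**2/16 + gp**2/16 + 2*l1 + 1.5*l4 + 0.25*yt**2/math.sin(beta)**2
    s_chi = g**2/2 + gp**2/4 + 11*l2/3 + 7*l3/3 + 2*l4/3
    s_xi  = g**2/2 + 11*l2/3 + 7*l3/3 + 2*l4/3
    return s_phi, s_chi, s_xi

def V_HT(h_phi, h_delta, T, V0, sigmas):
    """High-T potential in the custodial direction h_xi = h_chi/sqrt(2) = h_Delta.

    V0(h_phi, h_delta) is the tree-level potential supplied by the caller.
    """
    s_phi, s_chi, s_xi = sigmas
    h_chi, h_xi = math.sqrt(2.0)*h_delta, h_delta
    return V0(h_phi, h_delta) + 0.5*(s_phi*h_phi**2 + s_chi*h_chi**2 + s_xi*h_xi**2)*T**2
```

The custodial relation reduces the three field directions to the two coordinates $(h_\phi, h_\Delta)$ used throughout this section.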
Obviously, the potential minimum approaches $\vec{v}=(v_\Phi, v_\Delta)$ as $T$ decreases. Fig.~\ref{Fig:transition path} shows a schematic example of the phase transition tunneling paths in the $h_\Delta-h_\phi$ plane in the GM model\footnote{While most benchmarks from our global fit give concave paths, there are also benchmarks that give either straight or convex paths.}. The thermal potential $\left(\Sigma_\phi h_\phi^2 + \Sigma_\chi h_\chi^2 + \Sigma_\xi h_\xi^2\right)T^2/2$, especially the $h_\phi^2$ term, is the primary source of the potential barriers, since it can lift the potential much higher than $V_0$ when $\vec{h}$ is small and $T$ is high. On the other hand, $V_0$ plays the main role in determining the shape of the tunneling path, which is crucial to the phase transition characteristics. \begin{figure}[htb] \centering \includegraphics[width=0.8\textwidth]{figures/transition_path.pdf} \caption{A schematic example of the first-order phase transition tunneling paths in the $(h_\phi$-$h_\Delta)$ plane. The peak of the potential barrier at $T=T_C$ is denoted by $\vec{v}_p$, $\vec{v}_C$ represents the potential minimum at $T_C$, and $\vec{v}$ is the EW minimum at $T = 0$. The red solid curve represents the phase transition tunneling path, and the red dashed curve represents the extension of the tunneling path towards $\vec{v}$. The color map and the contours illustrate the potential distribution around the tunneling path at $T=T_C$.} \label{Fig:transition path} \end{figure} We divide the EWPT calculation into two steps. First, we run a preselection to derive the critical VEVs ($\vec{v}_C$'s) and $T_C$'s of the data generated by \texttt{HEPfit} by numerically solving the equations $V_{T}^{HT}(\vec{v}_C, T_C)=V_{T}^{HT}(\vec{0}, T_C)$ and $\nabla V_{T}^{HT}=\vec{0}$.
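The preselection step amounts to root-finding on the degeneracy and stationarity conditions. A minimal sketch using \texttt{scipy} is given below; the finite-difference gradient, step size, and starting guess are our own illustrative choices:

```python
from scipy.optimize import fsolve

def preselect(V, x0, eps=1e-4):
    """Solve V(v_C, T_C) = V(0, T_C) and grad V(v_C, T_C) = 0.

    V(h_phi, h_delta, T) is the high-T potential; x0 is an initial guess
    for (h_phi, h_delta, T_C). Returns the critical VEV and temperature.
    """
    def eqs(x):
        hp, hd, T = x
        # central finite-difference gradient in the two field directions
        dVp = (V(hp + eps, hd, T) - V(hp - eps, hd, T)) / (2*eps)
        dVd = (V(hp, hd + eps, T) - V(hp, hd - eps, T)) / (2*eps)
        degeneracy = V(hp, hd, T) - V(0.0, 0.0, T)
        return [dVp, dVd, degeneracy]
    return fsolve(eqs, x0)
```

Like any local root-finder, this needs a reasonable initial guess to land on the non-trivial (broken-phase) solution rather than the origin.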
Since the preselection is only a simple procedure to pin down the $v_C$'s and $T_C$'s of the data, we then use \texttt{cosmoTransitions}~\cite{Wainwright:2011kj} to determine the order of the EWPT as well as to calculate the bubble dynamics. To ensure the validity of the high-$T$ expansion, we focus on the data points with $T_C>60$~GeV. Roughly 10\% of the data points are found to generate strong first-order EWPTs. Among all the points generated by \texttt{HEPfit}, we have found no two-step EWPTs as claimed in Ref.~\cite{Zhou:2018zli}, possibly because Ref.~\cite{Zhou:2018zli} did not perform as comprehensive a phenomenological scan of parameters as this study. Fig.~\ref{Fig:transition points with different constraints} is a scatter plot of the $\vec{v}_C$'s calculated using the aforementioned preselection method under different constraints. We also present the $\vec{v}_C$ data that pass all the mentioned constraints and are further determined by \texttt{cosmoTransitions} to be of first-order and second-order phase transitions. After we impose the theoretical constraints, we observe that the BFB condition would exclude the data with $|\vec{v}_C| > v$ (the region to the right of the dashed curve), as can be seen by comparing the distributions of the green (perturbative unitarity and unique vacuum constraints imposed) and gray (all theoretical constraints imposed) data points. This is because the $V_0$'s of these excluded data points are not bounded from below when $|\vec{h}|\to\infty$, and hence the $V^{HT}_T$'s would develop $\vec{v}_C$'s beyond $\vec{v}$ when $T$ increases. If we further impose either the Higgs signal strength or direct search constraint, the allowed range for $h_\Delta^{min}$ becomes even more restricted. The experimental constraints are thus responsible for the smaller $v_C$'s of the strong first-order EWPTs and the limitation on the values of $v_C/T_C$.
This implies that the collider measurements are in fact good probes of the EWPT behavior of the GM model. We will illustrate this in more detail in Sec.~\ref{sec:Predictions}. \begin{figure}[htb] \centering \includegraphics[width=0.7\textwidth]{figures/hmin-scatter.pdf} \caption{Scatter plot of $\vec{v}_C$ screened by the preselection method under different constraints. The light-gray, green, gray, red, orange and light blue points denote the data that pass the prior, the perturbative unitarity (P) and unique vacuum (U) constraints, the theoretical constraints, the theoretical and direct search constraints, the theoretical and Higgs signal strength constraints, and all of the above-mentioned constraints, respectively. The dark-blue first-order and blue second-order phase transition data points also pass all the constraints, and are further processed by \texttt{cosmoTransitions}, all with $T_C>60$~GeV. The black dashed curve denotes the contour of $|\vec{v}_{C}|=v$.} \label{Fig:transition points with different constraints} \end{figure} We also illustrate the impact of the $m_1^2$ term in $V_0$ on $v_C$ and $v_C/T_C$ in Fig.~\ref{Fig:Tc-m1sq}. The black hatched region is first excluded because of the failure of the high-$T$ expansion. Some of the points falling within the red hatched region can give rise to first-order phase transitions. As $m_1^2$ increases, $V_0$ becomes shallower in the $h_\phi$-direction, implying that the thermal corrections needed to lift the broken phase to the critical value are smaller, thus tending towards a lower $T_C$. Meanwhile, an increasing $m_1^2$ also lengthens the potential barrier and thus the transition path, which in turn enhances the phase transition strength $v_C/T_C$. Based on the same argument, we can see that as $m_1^2$ decreases, $T_C$ then tends to increase and $v_C/T_C$ tends to decrease.
Consequently, as can be seen from the plot, most of the first-order phase transitions occur around $T_C\approx70$~GeV and are strong, with some of their $v_C/T_C$ reaching $2.5$--$4$ when $m_1^2\sim -2500$~GeV$^2$. \begin{figure}[htb] \centering \includegraphics[width=0.85\textwidth]{figures/Tc_m1sq.png} \caption{Scatter plot of the data points that pass all of the theoretical and experimental constraints in the $T_C$-$m_1^2$ plane. The color bar indicates the value of $v_C/T_C$. The first-order EWPT data points are contained in the red hatched region. We remark that the data points in the black hatched region violate the high-$T$ assumption. Also, $v_C$'s and $T_C$'s are derived using the preselection method, while the first-order EWPT data are further processed with \texttt{cosmoTransitions}.} \label{Fig:Tc-m1sq} \end{figure} \subsection{Gravitational waves} We now discuss the GWs induced from the bubble dynamics during EWPTs. The information of the stochastic GWs generated by the bubble dynamics of the strong first-order phase transitions can be completely accessed with two primary parameters: $\alpha_\mathrm{GW}$ and $\beta_\mathrm{GW}/H_n$~\cite{Kamionkowski:1993fg}. $\alpha_\mathrm{GW}$ is the ratio of the latent heat to the radiation energy density in the plasma \begin{equation} \alpha_\mathrm{GW}=\frac{\epsilon\left(T_{n}\right)}{\rho_{\mathrm{rad}}\left(T_{n}\right)} ~, \end{equation} where $\rho_\mathrm{rad} = \pi^2 g_\ast T^4/30$, with $g_{\ast}=100$ the effective number of relativistic degrees of freedom at the time of GW generation, and \begin{equation} \epsilon(T)=T \frac{\partial \Delta V^{HT}_{T}(T)}{\partial T}-\Delta V^{HT}_{T}(T) ~, \end{equation} with $\Delta V^{HT}_{T}(T) \equiv V^{HT}_{T}(\vec{h}(T), T) - V^{HT}_{T}(\vec{0}, T)$ being the potential difference between the broken phase and the symmetric phase at temperature $T$. $\alpha_\mathrm{GW}$ is related to the maximum available energy budget for GW emissions.
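Given the potential difference $\Delta V^{HT}_T(T)$ along the transition, the definition of $\alpha_\mathrm{GW}$ above can be evaluated numerically, e.g. with a central finite difference for the temperature derivative (an illustrative sketch; the step size is our choice):

```python
import math

def alpha_gw(delta_V, Tn, g_star=100.0, dT=1e-3):
    """Latent-heat parameter alpha_GW = epsilon(Tn) / rho_rad(Tn).

    delta_V(T) = V(broken phase, T) - V(symmetric phase, T); the derivative
    d(delta_V)/dT is approximated by central finite differences.
    """
    dVdT = (delta_V(Tn + dT) - delta_V(Tn - dT)) / (2*dT)
    eps = Tn*dVdT - delta_V(Tn)            # epsilon(T) as defined in the text
    rho_rad = math.pi**2 * g_star * Tn**4 / 30.0
    return eps / rho_rad
```
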
Next, by assuming that the percolation takes place soon after the nucleation of the true vacua, which leads to the commonly used condition $T_{\ast} \simeq T_n$ where $T_\ast$ is the GW generation temperature and $T_n$ represents the nucleation temperature~\cite{Espinosa:2008kw,Ellis:2020awk}, $\beta_\mathrm{GW}/H_n$ is defined as \begin{equation}\label{eq:beta_H} \frac{\beta_{GW}}{H_{n}}=\left.T_{n} \frac{d}{dT} \left(\frac{S_{3}(T)}{T}\right)\right|_{T=T_{n}} ~, \end{equation} where $S_3$ denotes the three-dimensional on-shell Euclidean action of the instanton. As $\beta_\mathrm{GW}/H_n$ is the inverse ratio of the first-order EWPT duration to the universe expansion time scale, it defines the characteristic frequency of the GW spectrum produced from the phase transition. The main sources of the GWs generated during EWPTs are bubble collisions, sound waves, and turbulence, which have been well studied in the literature~\cite{Caprini:2015zlo,Cai:2017tmh}. According to the numerical estimations performed in Refs.~\cite{Huber:2008hg,Jinno:2016vai,Hindmarsh:2015qta,Caprini:2009yp,Espinosa:2010hh,Breitbach:2018ddu}, the GW spectra are given by \begin{equation}\label{eq:GW amp} \begin{aligned} h^{2} \Omega_{\mathrm{col}}(f) &=1.67 \times 10^{-5}\left(\frac{H_{n}}{\beta_\mathrm{GW}}\right)^{2}\left(\frac{\kappa_{\mathrm{col}} \alpha_\mathrm{GW}}{1+\alpha_\mathrm{GW}}\right)^{2}\left(\frac{100}{g_{*}}\right)^{\frac{1}{3}}\left(\frac{0.11 v_{w}^{3}}{0.42+v_{w}^{2}}\right) \frac{3.8\left(f / f_{\mathrm{col}}\right)^{2.8}}{1+2.8\left(f / f_{\mathrm{col}}\right)^{3.8}} ~,\\ h^{2} \Omega_{\mathrm{sw}}(f) &=2.65 \times 10^{-6}\left(\frac{H_{n}}{\beta_\mathrm{GW}}\right)\left(\frac{\kappa_{\mathrm{sw}} \alpha_\mathrm{GW}}{1+\alpha_\mathrm{GW}}\right)^{2}\left(\frac{100}{g_{*}}\right)^{\frac{1}{3}} v_{w}\left(\frac{f}{f_{\mathrm{sw}}}\right)^{3}\left(\frac{7}{4+3\left(f / f_{\mathrm{sw}}\right)^{2}}\right)^{7 / 2} ~,\\ h^{2} \Omega_{\mathrm{turb}}(f) &=3.35 \times 10^{-4}\left(\frac{H_{n}}{\beta_\mathrm{GW}}\right)\left(\frac{\kappa_{\mathrm{turb}} \alpha_\mathrm{GW}}{1+\alpha_\mathrm{GW}}\right)^{\frac{3}{2}}\left(\frac{100}{g_{*}}\right)^{1 / 3} v_{w} \frac{\left(\frac{f}{f_{\mathrm {turb}}}\right)^{3}}{\left(1+\frac{f}{f_{\mathrm{turb}}}\right)^{\frac{11}{3}} \left(1+\frac{8 \pi f}{H_{0}}\right)} ~, \end{aligned} \end{equation} where $\kappa_\mathrm{col}$, $\kappa_\mathrm{sw}$ and $\kappa_\mathrm{turb}$ are the transformation efficiencies of the first-order phase transition energy to kinetic energy, bulk motion of the fluid and turbulence, respectively, given by \begin{equation} \begin{aligned} \kappa_\mathrm{col}&=\frac{1}{1+0.715 \alpha_\mathrm{GW}}\left[0.715 \alpha_\mathrm{GW}+\frac{4}{27} \sqrt{\frac{3 \alpha_\mathrm{GW}}{2}}\right]~,\\ \kappa_\mathrm{sw} &= \frac{\alpha_\mathrm{GW}}{0.73+0.083\sqrt{\alpha_\mathrm{GW}}+\alpha_\mathrm{GW}}~,\\ \kappa_\mathrm{turb} &= \xi_\mathrm{turb} \kappa_\mathrm{sw} ~, \end{aligned} \end{equation} with the fraction of turbulent bulk motion ($\xi_\mathrm{turb}$) assumed to be about $10\%$. The red-shifted peak frequencies of the GW spectra are given by \begin{equation}\label{eq:GW freq} \begin{aligned} f_{\mathrm{col}} &=16.5 \times 10^{-3} \mathrm{mHz}\left(\frac{0.62}{1.8-0.1 v_{w}+v_{w}^{2}}\right)\left(\frac{\beta_\mathrm{GW}}{H_{n}}\right)\left(\frac{T_{n}}{100 \mathrm{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}} ~,\\ f_{\mathrm{sw}} &=1.9 \times 10^{-2} \mathrm{mHz} \frac{1}{v_{w}}\left(\frac{\beta_\mathrm{GW}}{H_{n}}\right)\left(\frac{T_{n}}{100 \mathrm{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}} ~,\\ f_{\mathrm {turb}} &=2.7 \times 10^{-2} \mathrm{mHz} \frac{1}{v_{w}}\left(\frac{\beta_\mathrm{GW}}{H_{n}}\right)\left(\frac{T_{n}}{100 \mathrm{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}} ~, \end{aligned} \end{equation} where we take the bubble wall velocity $v_w\sim1$.
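As an example of how these templates are evaluated in practice, the sound-wave contribution of Eq.~(\ref{eq:GW amp}) and its peak frequency from Eq.~(\ref{eq:GW freq}) can be coded as follows (a sketch; note the conversion of $f_{\mathrm{sw}}$ from mHz to Hz):

```python
import math

def h2_omega_sw(f, alpha, beta_over_H, Tn, vw=1.0, g_star=100.0):
    """Sound-wave GW spectrum h^2 Omega_sw(f) and peak frequency f_sw [Hz]."""
    kappa_sw = alpha / (0.73 + 0.083*math.sqrt(alpha) + alpha)
    # 1.9e-2 mHz = 1.9e-5 Hz
    f_sw = 1.9e-2 * 1e-3 / vw * beta_over_H * (Tn/100.0) * (g_star/100.0)**(1/6)
    shape = (f/f_sw)**3 * (7.0/(4.0 + 3.0*(f/f_sw)**2))**3.5
    amp = (2.65e-6 * (1.0/beta_over_H) * (kappa_sw*alpha/(1.0 + alpha))**2
           * (100.0/g_star)**(1/3) * vw)
    return amp * shape, f_sw
```

By construction the shape factor equals one at $f = f_{\mathrm{sw}}$, so the returned amplitude at the peak frequency is the overall prefactor.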
Recent studies indicate that the contribution to the total GW spectrum from bubble collisions is negligible as very little energy is deposited in the bubble walls~\cite{Bodeker:2017cim}. In the following, we will restrict ourselves to the case of non-runaway bubbles, where the GWs can be effectively produced by the sound waves and turbulence. Fig.~\ref{Fig:GW sensitivity} shows the GW spectra, represented by the pink band based upon our two thousand data points, and the power-law integrated sensitivities of various GW experiments. We can see that the stronger the phase transition strength is, the larger the GW amplitude and the lower the peak frequency are. This can be derived from Eqs.~(\ref{eq:GW amp}) and (\ref{eq:GW freq}): when the phase transition strength is stronger, $T_C$ tends to be lower as implied in Fig.~\ref{Fig:Tc-m1sq}, and so does $T_n$, which leads to a larger $\alpha_\mathrm{GW} \propto T_n^{-2}$ and a smaller $\beta_\mathrm{GW}/H_n \propto T_n$. The result shows that GWs induced from the strong first-order EWPTs of the GM model can possibly be detected by \texttt{LISA}~\cite{2017arXiv170200786A}, \texttt{Taiji}~\cite{Ruan:2018tsw}, \texttt{DECIGO}~\cite{Sato:2017dkf} and \texttt{BBO}~\cite{Crowder:2005nr} for $v_C/T_C\in[1,3.5]$. \begin{figure}[htb] \centering \includegraphics[width=0.85\textwidth]{figures/GW_detection.pdf} \caption{Spectra of GWs induced by the strong first-order EWPT (SFOEWPT) data, as well as the power-law integrated sensitivities of various GW experiments. } \label{Fig:GW sensitivity} \end{figure} The detectability of the GW signals is evaluated by the corresponding signal-to-noise ratio (SNR)~\cite{Breitbach:2018ddu,Caprini:2015zlo}, given by \begin{equation} \rho=\sqrt{\mathcal{N} \mathcal{T}_{\mathrm{obs}} \int_{f_{\min }}^{f_{\max }} d f\left[\frac{h^{2} \Omega_{\mathrm{GW}}(f)}{h^{2} \Omega_\mathrm{exp }(f)}\right]^{2}} ~, \end{equation} where $h^2\Omega_\mathrm{exp}$ is the effective noise energy density.
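On a discrete frequency grid, the SNR integral above can be evaluated as in this sketch (the year-to-second conversion and the trapezoidal quadrature are our choices):

```python
import numpy as np

def snr(f, h2_omega_gw, h2_omega_exp, n_det=2, t_obs_yr=4.0):
    """Signal-to-noise ratio of a stochastic GW background against detector noise.

    f is the frequency grid [Hz]; the Omega arrays are evaluated on that grid.
    n_det = 1 for auto-correlated and 2 for cross-correlated experiments.
    """
    t_obs = t_obs_yr * 3.156e7            # observation time in seconds
    ratio2 = (np.asarray(h2_omega_gw) / np.asarray(h2_omega_exp))**2
    # trapezoidal rule for the integral over frequency
    integral = 0.5 * np.sum((ratio2[1:] + ratio2[:-1]) * np.diff(f))
    return float(np.sqrt(n_det * t_obs * integral))
```

A signal is then declared detectable when this $\rho$ exceeds the threshold $\rho_\mathrm{thr}$ of the experiment.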
$\mathcal{N}$ is the number of independent observatories of the experiment, which equals one for the auto-correlated experiments and two for the cross-correlated experiments. $\mathcal{T}_\mathrm{obs}$ is the duration of the observation in years, assumed here to be four for each experiment as done in Ref.~\cite{Breitbach:2018ddu}. We summarize our assumptions and the features of the interferometers in Table~\ref{tab:GW experiments}. We then extract the GW SNR thresholds assuming $T_n=65$ GeV from the documentation of the experiments. In Fig.~\ref{Fig:GW detectability}, the GW SNR thresholds are illustrated in the $\alpha_\mathrm{GW}$-$\beta_\mathrm{GW}/H_n$ plane, on which we also scatter our data. The data in the regions to the right of the curves are above the SNR thresholds of the corresponding GW observatories. As can be seen in the plot, most of our data are detectable and able to be separated from the instrumental noise in \texttt{BBO} and \texttt{DECIGO}. \begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline Experiment & Frequency range & $\rho_{\mathrm{thr }}$ & $\mathcal{N}$ & $\mathcal{T}_{\mathrm{obs }}[\mathrm{yrs}]$ & Refs. \\ \hline LISA & $10^{-5}-1 \mathrm{~Hz}$ & 10 & 1 & 4 & \cite{LISA:2017pwj,Robson:2018ifk} \\ DECIGO & $10^{-3}-10^{2} \mathrm{~Hz}$ & 10 & 2 & 4 & \cite{Sato:2017dkf,Yagi:2011yu,Yagi:2013du} \\ BBO & $10^{-3}-10^{2} \mathrm{~Hz}$ & 10 & 2 & 4 & \cite{Crowder:2005nr,Yagi:2011yu,Yagi:2013du} \\ \hline \end{tabular} \caption{Summary of the parameters and assumptions used for the projected space-based interferometers. \texttt{Taiji} is not listed here because it has not released its effective noise energy density.} \label{tab:GW experiments} \end{table} \begin{figure}[htb] \centering \includegraphics[width=0.8\textwidth]{figures/GW_sensitivity.pdf} \caption{Scatter points of our data in the $\alpha_\mathrm{GW}$-$\beta_\mathrm{GW}/H_n$ plane. The color bar denotes $T_n$ of the data.
The dashed curves represent the SNR thresholds of the listed GW experiments. The data on the RHS of the curves are detectable in the corresponding experiments.} \label{Fig:GW detectability} \end{figure} \section{Predictions} \label{sec:Predictions} In this section, we summarize and predict some of the most important and experimentally promising observables with our data, including those that are discussed in Section~\ref{sec:Constraints} and those that can further generate strong first-order EWPTs and GWs through bubble dynamics, as discussed in Section~\ref{sec:Electroweak Phase Transition and Gravitational Waves}. In the following plots, the former are presented with gray scatter points and the latter are shown with colored histograms. Fig.~\ref{Fig:alpha-vDelta with all constraints} shows the prediction in the $\alpha$-$v_\Delta$ plane. The strong first-order phase transition data accumulate around $v_\Delta\in[15,20]$~GeV and $\alpha\in[-15^\circ,-10^\circ]$, corresponding to $\kappa_F\sim1$ and $\kappa_V\in(1.0,1.1)$, while they are mostly confined within $v_\Delta\in[5,25]$~GeV and $\alpha\in[-25^\circ,0^\circ]$. No data show up in the decoupling region because a SM-like potential could only induce a smooth crossover rather than strong first-order EWPTs. \begin{figure}[htb] \centering \includegraphics[width=0.8\textwidth]{figures/alphaGM_deg_vDelta.pdf} \caption{The predictions of our data in the $\alpha$-$v_\Delta$ plane. The gray scatter points represent the data that pass the \texttt{HEPfit}-level constraints, while the 2D colored histogram denotes the number density of the data that can further induce strong first-order EWPTs with $T_C>60$~GeV. The same plotting scheme is applied to all the following plots.
The dashed curves and solid curves represent the contours of $\kappa_{F}$ and $\kappa_{V}$, respectively.} \label{Fig:alpha-vDelta with all constraints} \end{figure} In Fig.~\ref{Fig:rh_gaga-rh_Zga}, we show the prediction in the $\kappa_{\gamma\gamma}$-$\kappa_{Z\gamma}$ plane, where $\kappa_{Z\gamma}$ and $\kappa_{\gamma\gamma}$ are the ratios of the loop-induced $hZ\gamma$ and $h\gamma\gamma$ couplings to the respective SM predictions. Compared to the result given in Ref.~\cite{Chiang:2018cgb}, we no longer observe any data points around $\kappa_{Z\gamma}\sim0.1$ after imposing the Higgs signal strength constraints; we find that this region is ruled out by the new 13-TeV $h\to Z\gamma$ measurements~\cite{CMS-PAS-HIG-19-014,ATLAS:2020qcv}. Our results show that the $hZ\gamma$ and $h\gamma\gamma$ couplings are positively correlated and, while most data give $\kappa_{Z\gamma}\sim 1.04$ and $\kappa_{\gamma\gamma}\sim 1.05$ at the \texttt{HEPfit}-level, the peak in the $\kappa_{\gamma\gamma}$-$\kappa_{Z\gamma}$ plane starts to approach $(1.11,1.07)$ after we require strong first-order EWPTs. Thus, a more precise measurement of these couplings can be a good probe of the EWPT behavior of the GM model. \begin{figure}[htb] \centering \includegraphics[width=0.8\textwidth]{figures/rh_gaga-rh_Zga.pdf} \caption{Prediction of our data in the $\kappa_{\gamma\gamma}$-$\kappa_{Z\gamma}$ plane. The plotting scheme is the same as Fig.~\ref{Fig:alpha-vDelta with all constraints}.} \label{Fig:rh_gaga-rh_Zga} \end{figure} We define the mass differences and the mass squared differences respectively as $\Delta m_{ij} \equiv m_{H_i}-m_{H_j}$ and $\Delta m_{ij}^2 \equiv m_{H_i}^2-m_{H_j}^2$, for $i,j=1,3,5$. Fig.~\ref{Fig:mass differences} shows various mass relations according to our scan results.
As indicated by the gray points in Fig.~\ref{Fig:mass differences}(a), the constrained parameter space tends towards $\Delta m_{13}\in (-70,120)$~GeV, $\Delta m_{35} \in (-120,450)$~GeV, and $\Delta m_{15} \in (-200,550)$~GeV. We find that $\Delta m_{35}$ reaches its minimum when $m_{H_1} \sim 700$~GeV and its maximum when $m_{H_1} \sim 850$~GeV. After imposing the requirements of strong first-order EWPTs, all the data exclusively predict the mass hierarchy $m_{H_1} > m_{H_3} > m_{H_5}$, and most of them prefer a mass difference of around 50~GeV between $H_1$ and $H_3$ and around 100~GeV between $H_3$ and $H_5$. Such a mass hierarchy would limit certain scalar decay modes, such as $H^0_3 \to H^0_1 Z$, which has been searched for in the experiments $A_{13}^{\phi Z}$ and $A_{13b}^{\phi Z}$ as defined in Appendix~\ref{appendix:exp list}, and thus is another good probe of the EWPT behavior of the model. Fig.~\ref{Fig:mass differences}(b) shows the distribution in the $\Delta m_{15}^2$-$\Delta m_{13}^2$ plane, to be compared with the mass relation predicted in the decoupling limit, given by Eq.~\eqref{eq:mass hierarchy} and indicated by the dot-dashed line. Fig.~\ref{Fig:mass differences}(c) illustrates that the $H_5$ mass falls in the range of $[150,1500]$~GeV for the data points with strong first-order EWPTs. When $m_{H_5} \lesssim 500$~GeV, a larger mass gap between $H_1$ and $H_5$ is possible, with $m_{H_1}$ falling around $650$~GeV. \begin{figure}[htb] \centering \includegraphics[width=.47\linewidth]{figures/Dm1-3_Dm3-5.pdf} \includegraphics[width=.47\linewidth]{figures/Dmsq1-5_Dmsq1-3.pdf} \\ \vspace{-0.6cm}\hspace{1.cm}(a) \hspace{7cm} (b) \\ \includegraphics[width=.47\linewidth]{figures/mH5_Dm1-5.pdf} \\ \vspace{-0.6cm} \hspace{.5cm}(c) \caption{Predictions of our data in (a) the $\Delta m_{13}$-$\Delta m_{35}$ plane, (b) the $\Delta m^2_{15}$-$\Delta m^2_{13}$ plane, and (c) the $m_{H_5}$-$\Delta m_{15}$ plane.
The plotting scheme is the same as Fig.~\ref{Fig:alpha-vDelta with all constraints}. The slope of the dashed line in plot (a) is one, and that of the dot-dashed line in plot (b) is $1/2$.} \label{Fig:mass differences} \end{figure} The di-Higgs production cross sections are calculated with \texttt{Hpair}~\cite{Dawson:1998py} and illustrated in Fig.~\ref{Fig:sigmagg sensitivity}. We also show the current 95\% confidence level (C.L.) upper limit given by ATLAS~\cite{ATLAS-CONF-2018-043}\footnote{The latest CMS constraint~\cite{CMS-PAS-HIG-17-030} is looser than the ATLAS constraint.}. We observe that, except for a small patch of the parameter space with $g^{GM}_{hhh}/g^{SM}_{hhh}\sim1.2$, most of our data survive the ATLAS constraint and correspond to $g^{GM}_{hhh}/g^{SM}_{hhh}\in[1.4, 2.0]$. Though they constitute a relatively small portion of the data, the points with $g^{GM}_{hhh}/g^{SM}_{hhh} \in (1.1,1.4)$ predict $\sigma_{GM}(gg \to hh) / \sigma_{SM}(gg \to hh) \sim {\cal O}(2-20)$, to which future experiments are sensitive. However, most of our data points have $g^{GM}_{hhh}/g^{SM}_{hhh}$ above $\sim 1.5$ with $\sigma_{GM}(gg \to hh) / \sigma_{SM}(gg \to hh)$ being smaller than the current sensitivity by at least one order of magnitude. \begin{figure}[ht!] \centering \includegraphics[width=0.8\textwidth]{figures/ghhh_ratio_sigma_gg-hist.png} \caption{Predictions of the di-Higgs production cross sections against $g^{GM}_{hhh}/g^{SM}_{hhh}$, as well as the ATLAS bound at 95\% C.L. The plotting scheme is the same as Fig.~\ref{Fig:alpha-vDelta with all constraints}.} \label{Fig:sigmagg sensitivity} \end{figure} Finally, we show some of the most constraining direct search channels for the GM model in Fig.~\ref{Fig:Direct Search constraints}.
We also show the corresponding upper limits given by ATLAS and CMS, including $A^{2\ell 2L}_{13,2}$, $C^{2\ell 2X}_{13}$, $C^{\ell^\pm \ell^\pm}_{13,1}$, $C^{\ell^\pm \ell^\pm}_{13,2}$, $A^{WZ}_{13}$, $C^{WZ}_{13,1}$, $A^{WZ}_{13,2}$, $A^{WZ}_{13,3}$, $A^{bbZ}_{13}$, $C^{bbZ}_{13,1}$ and $C^{bbZ}_{13,2}$. From these results, we observe that the constraints from the $H_3^0$ channels are stronger than those from the $H^\pm_3$ channels, while the $H^\pm_5$ channels impose stronger constraints than the $H_5^0$ and $H_5^{\pm\pm}$ channels do. We can see that most of the mass ranges favored by the strong first-order EWPT data points are highly constrained, and thus these collider measurements also serve as good probes to the EWPT behavior of the GM model. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/wlog10_gg_H_ZZ_llllOllvv_TH13_mH1.pdf} \includegraphics[width=0.48\textwidth]{figures/wlog10_pp_H1_ZZ_TH13_mH3.pdf} \\ \vspace{-0.7cm} (a) \hspace{7.0cm} (b) \\ \includegraphics[width=0.48\textwidth]{figures/wlog10_VV_H5ppmm_WW_TH13_mH5.pdf} \includegraphics[width=0.48\textwidth]{figures/wlog10_WZ_H5pm_WZ_TH13_mH5.pdf} \\ \vspace{-0.7cm} (c) \hspace{7cm} (d) \\ \includegraphics[width=0.48\textwidth]{figures/wlog10_gg_H3_hZ_bbZ_TH13_mH3.pdf} \hspace{-0.5cm} \includegraphics[width=0.08\textwidth]{figures/colorbar.pdf} \\ \vspace{-0.7cm} (e) \caption{The predictions of the most constraining direct search channels. The colored curves indicate the 95\% C.L. limits imposed by the LHC measurements.} \label{Fig:Direct Search constraints} \end{figure} \section{Discussions and Summary} \label{sec:Discussions and Summary} We have performed global fits for the GM model with \texttt{HEPfit} to acquire the allowed phase space. The considered constraints include theoretical bounds of vacuum stability, perturbative unitarity, and the unique vacuum, as well as experimental data of Higgs signal strengths and direct searches for exotic Higgs bosons. 
We calculate $v_C$'s and $T_C$'s for the allowed phase space screened by \texttt{HEPfit} using the preselection method under the high-$T$ assumption, and then process the data with $v_C>60$~GeV by utilizing the \texttt{cosmoTransitions} package. Based upon the scan results, we calculate the GW spectra induced by the bubble dynamics during the EWPT. By comparing the results obtained at different levels of constraints in \texttt{HEPfit}, we demonstrate the tendency of each constraint level in the $\alpha$-$v_\Delta$ plane and identify the favored $\kappa_F$-$\kappa_V$ region. In particular, we find that there is an accumulation of data points around $(\kappa_F, \kappa_V)\sim(0.99,1.05)$ at all levels. In the vicinity of this point, $\kappa_F$ is almost one, and thus the cross sections of the ggF, bbH, and ttH production modes are nearly identical to the respective SM predictions. Moreover, since $\kappa_V\geq1$, the cross section of the VBF production mode would be enhanced, and so would the partial widths of the $h\to VV$ decays. Therefore, the $WW$ and $ZZ$ signal strengths are mostly enhanced within this region. We also study the previously unexplored region where $m_1^2>0$ and find that a nonzero $v_\phi$ can still be induced from the interactions between $\Phi$ and $\Delta$, although such a scenario is disfavored by the study of EWPT. We find that the experimental constraints impose a relatively strong bound on the $\vec{v}_C$ distributions, especially in the $h_\Delta$ direction. Furthermore, we show the impact of $V_0$ on $T_C$ and $v_C/T_C$; in particular, $m_1^2$ has a major impact on the depth of the overall potential and thus on the EWPT characteristics.
In the calculation of the induced GW spectra, we find that the peak frequency lies roughly within $[10^{-4},10^{-1}]$~Hz and the corresponding amplitude $h^2 \Omega_{\mathrm{GW}}$ can reach up to $10^{-10}$, which could possibly be detected by \texttt{LISA}, \texttt{Taiji}, \texttt{ALIA} or \texttt{BBO} in the near future. We calculate $\kappa_{Z\gamma}$ and find that the strong first-order EWPT phase space only affords a small deviation from the SM prediction. We also observe that the strong first-order EWPT data points all prefer the ``inverted'' mass hierarchy, $m_{H_1}>m_{H_3}>m_{H_5}$, with the masses lying within $[0.5,1.5]$~TeV. Finally, we list some of the most constraining or physically interesting experiments, including the di-Higgs production and several direct searches for exotic scalars. According to the \texttt{HEPfit} results, the di-Higgs production cross sections range from $0.3$ to $30$ times the SM prediction, and most data still lie below the sensitivity of the latest ATLAS measurement~\cite{ATLAS-CONF-2018-043}. The direct search channels we choose to show are $gg\to H_1 \to ZZ$, $pp \to H_1 \to ZZ$, $VV\to H^\pm_5 \to W^\pm W^\pm$, $W^\pm Z \to H_5^\pm \to W^\pm Z$ and $gg\to H_3 \to h Z \to bbZ$ at $\sqrt{s}=13$~TeV, which serve as the most promising probes to the GM model in near-future LHC experiments. \section*{Acknowledgments} The authors would like to thank Otto Eberhardt, Ayan Paul, and Eibun Senaha for some technical help. This research was supported in part by the Ministry of Science and Technology of Taiwan under Grant No.~MOST-108-2112-M-002-005-MY3.
\subsection{Generalization to higher dimensions -- Compactifying on an $N$-dimensional torus $T^N$} One generalization is to obtain the cMERA on an $N$-dimensional torus $T^N$ from $\mathbb{R}^N$, since an $N$-torus can be thought of as an orbifold of $\mathbb{R}^N$. This is done by repeating the computation, with a sum over images in each of the $N$ directions separately. To be precise, on an $N$-torus, a field $\phi(x)$ is expressible as \begin{equation} \phi(x) = \sum_{\{n_i\}} \left(\prod_{i}^N \frac{1}{\sqrt{l_{c_i}}}\right) e^{i k_{\{n_i\}}\cdot x} \phi(\{n_{i}\}), \end{equation} where $x$ is an $N$-component vector, and $k_{\{n_i\}}$ is also an $N$-component vector given by \begin{equation} k_{\{n_i\}} =\{ \frac{2\pi n_i}{l_{c_i}} \}, \qquad n_i \in \mathbb{Z}, \end{equation} and $l_{c_i}$ is the periodicity of each cycle $i$ in $T^N$. A similar expression applies to $\pi(x)$. The cMERA entangler would take the form \begin{align} &K_{T^N}(s) \\ &=\frac{1}{2} \int_{T^N} d^Nx d^N y g_{T^N}(s, x -y) [\pi(x) \phi(y) + \phi(x)\pi(y)] \\ &=\frac{1}{2}\sum_{\{n_i \in \mathbb{Z}\}} g_{T^N}(s, \{n_i\}) [\pi(\{-n_i\}) \phi(\{ n_i \}) \nonumber\\ &+ \phi(\{-n_i\}) \pi(\{ n_i \})]. \end{align} Here $g_{T^N}(s,\{n_i\})$ again corresponds to the Fourier transform of $g_{T^N}(s, x -y)$. The method of images would imply that one can generate an admissible entangler on $T^N$ using the entangler on $\mathbb{R}^N$, i.e. \begin{equation} g_{T^N}(s,x) = \sum_{\{n_i\}} g_{\mathbb{R}^N}(s, x_{\{n_i\}}), \qquad x_{\{n_i\}} \equiv \{x_i + n_i l_{c_i}\}. \end{equation} On $T^2$, for example, this procedure generates doubly periodic functions. The method of images is widely applied in two-dimensional CFTs to generate correlation functions on $T^2$ in minimal models, which admit free field representations. \vspace{0.5cm} \begin{center} \textit{B. Open boundaries} \end{center} \vspace{0.5cm} One can also generate systems with boundaries using the method of images.
Consider for concreteness a 1-dimensional quantum system on the half-line, i.e., with a boundary at $x=0$. The effect of the boundary can be treated as a mirror, in which inserting an operator $\mathcal{O}(x,t)$ in the presence of the boundary is equivalent to inserting a pair of operators $\mathcal{O}(x,t)$ and $\mathcal{O}(-x,t)$ in the theory defined on the real line with no boundaries. In complex coordinates $z = t+ i x$ and $\bar z = t- i x$, this gives \begin{equation} \label{eq:folding} \langle \mathcal{O}(z, \bar z) \cdots \rangle_{UHP} = \langle \mathcal{O}(z) \bar{\mathcal{O}}^P(z^*) \cdots \rangle_{\mathbb{R}^2}, \end{equation} where $\bar{\mathcal{O}}^P(z^*)$ corresponds to the parity transformed anti-holomorphic part of the operator $\mathcal{O}(z,\bar z)$. The parity transformation turns the anti-holomorphic operator into a holomorphic one, now inserted at $z^*$. As an example of such a parity transformation, consider a free bosonic field $\phi(z)$. Then \begin{equation} (\partial_{\bar z} \bar \phi)^P ( z^*) \equiv \eta_P \partial_z \phi(z^*), \end{equation} where $\eta_P = \pm1$, depending on the precise boundary condition that is imposed. For example, the Dirichlet boundary condition that sets $\phi(z,\bar z) =0$ at the boundary located on the real line gives $\eta_P = 1$, whereas the Neumann boundary condition that sets the derivative of $\phi$ to zero gives $\eta_P = -1$. We note that in this context of conformal boundary conditions, the method of images encapsulated in (\ref{eq:folding}) works beyond free field theories because in two-dimensional CFTs holomorphic and anti-holomorphic conformal transformations are independent. This fact thus allows one to generate cMERA entanglers applicable to theories with boundaries from entanglers on the real line using the method of images. Consider the specific case of a free boson $\phi(x)$ again. Suppose we impose the Neumann boundary condition at $x=0$ \begin{equation} \partial_x\phi(0) = 0.
\end{equation} The mode expansions of $\phi_{\mathbb{R}>0}(x)$ and $\pi_{\mathbb{R}>0}(x)$ would then be given by \begin{align} &\phi_{\mathbb{R}>0}(x) = \frac{2}{\sqrt{2\pi}}\int_0^ \infty dk \cos(k x) \phi_{\mathbb{R}>0}(k), \\ & \pi_{\mathbb{R}>0}(x) = \frac{2}{\sqrt{2\pi}}\int_0^ \infty dk \cos(k x) \pi_{\mathbb{R}>0}(k). \end{align} We note that this implies \begin{equation} \label{eq:reflect} \phi_{\mathbb{R}>0}(x) = \frac{1}{2}(\phi_{\mathbb{R}} (x) + \phi_{\mathbb{R}}(-x)), \end{equation} and that \begin{equation} \phi_{\mathbb{R}>0}(k) = \frac{1}{2}(\phi_{\mathbb{R}}(k) + \phi_{\mathbb{R}}(-k)), \end{equation} and similarly for $\pi_{\mathbb{R}>0}(x)$ and its Fourier modes. In other words, we have selected a boundary condition that imposes that the Fourier modes are even in $k$. Consider the entangler characterized by the function $g(s, x)$ discussed in the main text. On the half-line, it is given by \begin{align} &\tilde K_{\mathbb{R}>0}(s) = \frac{1}{2} \int_{\mathbb{R}^2>0} dx dy \, \tilde g_{\mathbb{R}>0}(x,y) \bigg(\phi_{\mathbb{R}>0}(x) \pi_{\mathbb{R}>0}(y) \nonumber \\ &+ \pi_{\mathbb{R}>0}(x) \phi_{\mathbb{R}>0}(y)\bigg). \end{align} Now using (\ref{eq:reflect}), we notice that this can be unfolded into an integral over $\mathbb{R}^2$, \begin{equation} \tilde K_{\mathbb{R}>0}(s) = \frac{1}{2} \int_{\mathbb{R}^2} dx dy \, \tilde g(x,y) \left( \phi(x) \pi(y) + \pi(x) \phi(y)\right), \end{equation} if we define \begin{equation} \tilde g(-|x|,|y|) = \tilde g(|x|,-|y|) = \tilde g(-|x|,-|y|) = \tilde g(|x|, |y|). \end{equation} This implies that the desired entangler on the half-line can be generated from the entangler on the real line, $\tilde g_{\mathbb{R}}(x-y)$, by \begin{align} &\tilde g_{\mathbb{R}>0}(x,y) = \frac{1}{4} (\tilde g_{\mathbb{R}}(x-y) + \tilde g_{\mathbb{R}}(-x-y) \nonumber \\ &+ \tilde g_{\mathbb{R}}(x+y) + \tilde g_{\mathbb{R}}(-x+y)).
\end{align} In general, beyond free field theories, the method of images is not expected to work. We note, however, that in large-$N$ theories, such as CFTs with an AdS holographic dual, correlation functions at finite temperature can be obtained from those at zero temperature by the method of images -- at least for sufficiently low temperatures. Therefore, there is hope that the method of images, which maps a cMERA to another cMERA defined on a compact space, can be applied even when the theory concerned is interacting.
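Both image constructions above -- summing over translations to compactify a direction, and averaging over reflections to introduce a boundary -- can be illustrated numerically. The profile $g_{\mathbb{R}}$ below is an arbitrary Gaussian stand-in, not a solution of the cMERA flow:

```python
import numpy as np

# Toy illustration of the method of images (g_R is an arbitrary stand-in
# profile, not an actual cMERA entangler solution).

l_c = 2.0  # assumed circumference of the compactified direction

def g_R(u):
    """Profile on the real line."""
    return np.exp(-u**2)

def g_T(x, n_max=50):
    """Circle profile: sum g_R over the translated images x + n*l_c."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum(g_R(x + n * l_c))

def g_half(x, y):
    """Half-line kernel: average g_R(x - y) over reflections x -> -x, y -> -y."""
    return 0.25 * (g_R(x - y) + g_R(-x - y) + g_R(x + y) + g_R(-x + y))

# Periodicity on the circle, up to the truncation error of the image sum:
assert abs(g_T(0.3) - g_T(0.3 + l_c)) < 1e-12

# The folded kernel is even in each argument separately, as required for
# an entangler built from the cosine modes on the half-line:
x, y = 0.4, 1.1
assert np.isclose(g_half(x, y), g_half(-x, y))
assert np.isclose(g_half(x, y), g_half(x, -y))
```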
\section{Introduction} The geometry of a stellarator can be optimized to improve confinement and stability. Many of the figures of merit that are commonly included in stellarator optimization, such as neoclassical transport and magnetohydrodynamic (MHD) stability, are most conveniently calculated under the assumption that nested toroidal magnetic surfaces exist. Moreover, most of the physics codes in the stellarator community use the data format of the Variational Moments Equilibrium Code (VMEC) \cite{VMEC1983}, in which the existence of nested magnetic surfaces is assumed. However, magnetic surfaces are not guaranteed to exist in stellarators, as magnetic islands and chaotic field regions may be present. The extent of islands and chaos is another function of the magnetic geometry that can be optimized\cite{HansonCary,CaryHanson}. Islands and chaos can be diagnosed using other magnetic field representations or MHD equilibrium codes, but then physics objectives that presume the existence of surfaces are not straightforward to calculate. In stellarator optimization it would therefore be ideal to have the best of both worlds, taking advantage of codes that assume the existence of surfaces and use the VMEC data format, while at the same time controlling islands. In this paper we demonstrate one approach to achieving these goals. The approach here involves simultaneously using two magnetic field representations, one that assumes the existence of surfaces and one that does not. In particular, we use VMEC and the Stepped Pressure Equilibrium Code (SPEC)\cite{SPEC,SPEC2}, which both compute three-dimensional MHD equilibria. VMEC has been widely used in the stellarator community since the 1980s, so a large number of transport and stability codes are available that analyze the numerical solutions it produces. VMEC operates by minimizing the MHD energy subject to two constrained radial profiles and the constraint of nested magnetic surfaces. 
Hence magnetic islands and chaos cannot be represented. SPEC is a newer code in which the toroidal domain is divided into a number of nested annular regions, with the pressure constant in each region. The magnetic field is constrained to be tangent to the toroidal boundary surfaces of each region, and pressure balance is enforced across these interfaces. Within each of the regions, there is no constraint that magnetic surfaces exist, and so islands and chaos can be represented. While VMEC has been used as part of stellarator optimization for decades, the present work is the first time SPEC has been used in optimization. In the new optimization approach presented here, both VMEC and SPEC are run at each evaluation of the objective function. If any islands are present in the SPEC solution, SPEC and VMEC necessarily will not agree exactly on the magnetic field. A measure of magnetic island width is computed from the SPEC solution and included as a penalty in the objective function. Motivated by Refs.~\onlinecite{HansonCary,CaryHanson}, the measure we use is the residue \cite{Greene}, a real number that can be computed for any periodic field line, which is zero when the island width vanishes. Due to this island width penalty, the islands are eliminated during the optimization so that the VMEC and SPEC representations agree by the end. At the same time, the objective function also includes quantities derived from the VMEC solution. In particular, here we minimize the deviation from quasisymmetry, a continuous symmetry in the field strength that ensures guiding-center confinement\cite{Nuhrenberg}, based on a conversion of the VMEC solution to Boozer coordinates\cite{BoozerCoordinates}. At the start of the optimization, this VMEC-derived part of the objective function may be somewhat inaccurate because the islands were ignored in its computation. 
But by the end of the optimization, since the islands have been eliminated by the residue penalty, the quasisymmetry term is accurate. In this way, we can take advantage of calculations that are based on the existence of magnetic surfaces, and of the many available codes that postprocess VMEC equilibrium files, while fully accounting for the possibility of magnetic islands. Compared to earlier island healing calculations done for the National Compact Stellarator Experiment (NCSX) project \cite{Hudson2001, Hudson2002PPCF, Hudson2002PRL, Hudson2003}, there are several differences in the approach here, such as the use of different equilibrium codes, physics objectives, and measures of island width. Whereas we optimize island widths and other quantities concurrently, the NCSX approach did not use optimization; instead a nonlinear system of equations was solved in which quantities other than island width (such as MHD instability growth rates) were held fixed\cite{Hudson2003}. \section{Methods} Here we will consider the first stage of the two-stage optimization procedure used for the design of experiments such as W7-X\cite{Klinger2017} and HSX\cite{Anderson}. In this first stage, the parameter space for optimization is the shape of a toroidal boundary magnetic surface. Specifically, the parameter space consists of the Fourier modes $\{R_{m,n},\,Z_{m,n}\}$ of the boundary toroidal surface \begin{eqnarray} R(\theta,\phi) &=& \sum_{m,n}R_{m,n}\cos(m\theta-n_{\mathrm{fp}} n\phi), \\ Z(\theta,\phi) &=& \sum_{m,n}Z_{m,n}\sin(m\theta-n_{\mathrm{fp}} n\phi) \nonumber \end{eqnarray} where $\phi$ is the standard toroidal angle, $\theta$ is any poloidal angle, $n_{\mathrm{fp}}$ is the number of field periods, and we have assumed stellarator symmetry. We exclude the major radius $R_{0,0}$ from the parameter space, in order to fix the spatial scale.
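As a concrete illustration, the boundary parametrization above can be evaluated directly from a table of mode amplitudes. This is a minimal sketch, not SIMSOPT code, and the amplitudes below are made-up placeholders rather than an optimized shape:

```python
import numpy as np

# Sketch of evaluating the stellarator-symmetric boundary surface from
# Fourier amplitudes {R_mn, Z_mn}; the amplitudes are hypothetical.
nfp = 2  # number of field periods, as in the example of this paper

R_mn = {(0, 0): 1.0, (1, 0): 0.1, (1, 1): 0.02}   # placeholder values
Z_mn = {(1, 0): 0.1, (1, 1): -0.02}

def surface_RZ(theta, phi):
    """Evaluate R(theta, phi) and Z(theta, phi) from the mode tables."""
    R = sum(a * np.cos(m * theta - nfp * n * phi) for (m, n), a in R_mn.items())
    Z = sum(a * np.sin(m * theta - nfp * n * phi) for (m, n), a in Z_mn.items())
    return R, Z

# Stellarator symmetry: (theta, phi) -> (-theta, -phi) maps R -> R, Z -> -Z
R1, Z1 = surface_RZ(0.3, 0.7)
R2, Z2 = surface_RZ(-0.3, -0.7)
assert np.isclose(R1, R2) and np.isclose(Z1, -Z2)
```

The cosine-only $R$ and sine-only $Z$ series encode stellarator symmetry automatically, which the final assertion checks.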
Here we will not consider the second stage of the two-stage approach, in which coils are optimized to produce the boundary surface resulting from the first stage. In the future, the method of this paper could also be used in a single-stage optimization, in which the parameter space consists of coil shapes, and free-boundary equilibria are used. For simplicity, we will consider a configuration with no plasma current or pressure. This choice minimizes the computational cost because a single radial domain can be used in SPEC. In the future, the procedure here could be applied to configurations with nonzero plasma current and pressure, using multiple radial domains in SPEC. The numerical example is carried out using SIMSOPT, a new software framework for stellarator optimization\cite{SIMSOPT,SIMSOPT_Repo}. The optimization is driven in python, using the default algorithm (trust region reflective) for nonlinear least-squares optimization from the scipy package\cite{scipy}. Gradients for the minimization are calculated with forward finite differences, using MPI for concurrent function evaluations. The initial state is an axisymmetric circular-cross-section torus, and the number of field periods $n_{\mathrm{fp}}$ is set at two. We first carry out a preliminary optimization without SPEC or residues. The reason for this preliminary optimization is that the $\iota$ profile evolves significantly at the beginning, causing resonances to enter and leave the domain. In this case it is awkward to include residues in the objective function, because the objective would not be a continuous function of the parameters. 
The objective function for the preliminary optimization is \begin{equation} \label{eq:objective0} f =(A-6)^2 + (\iota_0-0.39)^2 + (\iota_a-0.42)^2 + 2\sum_{m,n} (B_{m,n}/B_{0,0})^2 \end{equation} where $A$ is the effective aspect ratio as defined in VMEC, $\iota_0$ and $\iota_a$ are the rotational transform at the magnetic axis and edge, $B_{m,n}$ is the amplitude of the $\cos(m\vartheta-n\varphi)$ Fourier mode of the field strength in Boozer coordinates $(\vartheta,\varphi)$ for the flux surface with normalized toroidal flux $s=0.5$, and only $n \ne 0$ modes are included in the sum. All terms in the objective are computed from the VMEC solution, with the $B_{m,n}$ values computed by postprocessing of the VMEC solution with the BOOZ\_XFORM code\cite{boozxform}. The aspect ratio is included in the objective because if not, the quasisymmetry term can be reduced to zero by increasing the aspect ratio to infinity. The rotational transform terms are included in the objective because if there are no constraints on $\iota$, true axisymmetry is an optimum. The factor of 2 in the quasisymmetry term of (\ref{eq:objective0}) is chosen based on experience to give the best optimum. Using a script similar to the one described shortly for the combined VMEC-SPEC optimization, the optimization is performed in a series of three steps, as the size of the parameter space and resolution parameters are increased. For the combined VMEC-SPEC optimization, the objective function is \begin{eqnarray} \label{eq:objective} f &=&(A-6)^2 + (\iota_0-0.39)^2 + (\iota_a-0.42)^2 \\ && + 2\sum_{m,n} (B_{m,n}/B_{0,0})^2 + 2R_X^2 + 2R_O^2 \nonumber \end{eqnarray} where $R_X$ and $R_O$ are the residues\cite{Greene} for the X- and O-points of the primary island chain. The residues are computed from the SPEC solution using Newton's method to find periodic field lines with the desired helicity, then integrating along these field lines to compute the tangent map.
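For reference, Greene's residue is obtained from the trace of this tangent map; a minimal sketch in our own notation (not the SPEC routine) is:

```python
import numpy as np

# Sketch of Greene's residue from the tangent map of a periodic field line,
# R = (2 - tr M) / 4, where M is the 2x2 linearization of the field-line
# return map about the orbit. This is our own illustration, not SPEC code.

def residue(M):
    return (2.0 - np.trace(M)) / 4.0

# A parabolic (pure shear) tangent map has zero residue, i.e. the island
# width vanishes:
shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])
assert residue(shear) == 0.0

# An elliptic O-point, e.g. a rotation by angle w, has 0 < R < 1:
w = 0.3
rotation = np.array([[np.cos(w), -np.sin(w)],
                     [np.sin(w),  np.cos(w)]])
assert 0.0 < residue(rotation) < 1.0
```

Driving both residues of the primary chain toward zero is thus equivalent to shrinking the island to zero width.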
The weight factor of 2 in the residue terms of (\ref{eq:objective}) is chosen by experimentation to yield a good optimum. The SIMSOPT python driver script to define and solve the minimization problem is shown in Fig.~\ref{fig:code}, as several features are noteworthy. On line 14, the \texttt{Spec} object is configured to use the same boundary \texttt{Surface} object as the \texttt{Vmec} instance. Therefore when the shape of this single surface is modified during the optimization, the outputs of both VMEC and SPEC change accordingly. The objective function (\ref{eq:objective}) is specified in lines 16-34. Also, since the optimization problem is defined with a script, any other desired scripting elements can be included. Here this capability is used to define a series of three optimization stages, in which the size of the parameter space (the maximum $m$ and $n$ values of the $\{R_{m,n},\,Z_{m,n}\}$ to vary) is increased at each step, along with the numerical resolution parameters of the codes. The former is valuable to avoid getting stuck in a poor local minimum, and the latter improves computational efficiency. In lines 36-41 it can be seen that for the first step, the boundary amplitudes $\{R_{m,n},\,Z_{m,n}\}$ are varied for $m = 0 \ldots 3$ and $n = -3 \ldots 3$. In the second step, the maximum $m$ and $|n|$ are increased to 4, and in the third step, the maximum $m$ and $|n|$ are increased to 5. (For the preliminary optimization, the corresponding maximum mode numbers were 1, 2, and 3.) \begin{figure} \includegraphics[width=\columnwidth]{20210530-01-combinedVmecSpecOpt_code} \caption{\label{fig:code} SIMSOPT driver script for the combined VMEC-SPEC optimization.} \end{figure} \section{Results} The configurations before and after the final optimization are shown in Figs.~\ref{fig:xsections}, \ref{fig:poincare}, and \ref{fig:3D}. 
In the figures, the configuration resulting from the preliminary optimization, used as input for the combined VMEC-SPEC optimization, is labelled ``Before optimization''. Fig.~\ref{fig:poincare} shows this initial configuration has a significant island chain at the $\iota=2/5$ resonance. Therefore, VMEC and SPEC do not agree on the internal flux surface shapes near the islands for this configuration. It can also be seen in Fig.~\ref{fig:poincare} that the optimization has successfully eliminated the islands. Indeed, the magnitudes of the residues have been reduced from $2\times 10^{-3}$ to $2\times 10^{-6}$. The two codes agree very well on the internal surface shapes by the end of the optimization. Therefore calculations for the final configuration based on the VMEC solution, such as the Boozer coordinate transformation, can be trusted. \begin{figure} \includegraphics[width=\columnwidth]{20210530-01-015-combinedVmecSpecOpt_xsections2} \caption{\label{fig:xsections} Cross-sections of the plasma before and after the combined VMEC-SPEC optimization.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{20210530-01-015-combinedVmecSpecOpt_poincare2} \caption{\label{fig:poincare} Poincar\'e plots computed from the SPEC solution (colored points), and VMEC magnetic surfaces (black lines). The two codes agree on the surfaces after the combined VMEC-SPEC optimization.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{m20210610_01_combinedVMECSPECOptimization_3D} \caption{\label{fig:3D} The optimized configuration. Color indicates the magnetic field strength, and field lines are shown in black.} \end{figure} Fig.~\ref{fig:iota} displays the rotational transform profiles at the beginning and end of the optimization. For this figure, $\iota$ was computed by following field lines in the SPEC solution, starting from an array of points on the inboard midplane between the magnetic axis and computational boundary.
A flat region in this $\iota$ profile for the initial configuration reflects the presence of a magnetic island. The $\iota$ profile ranges from 0.39 to 0.42 for both the initial and optimized configurations, so the islands were not eliminated by shifting the resonance out of the domain, but rather by tuning of the resonant field. \begin{figure} \includegraphics[width=\columnwidth]{20210530-01-015-combinedVmecSpecOpt_iotaPlot.pdf} \caption{\label{fig:iota} Rotational transform profile computed by following field lines in the SPEC solution, before and after the optimization. The islands at $\iota=2/5$ are eliminated even though the rational surface is still in the domain. } \end{figure} The final configuration also has extremely good quasiaxisymmetry, especially on the $s=0.5$ surface where symmetry was optimized. This can be seen in the straight horizontal contours of $|B|$ in Fig.~\ref{fig:boozPlot}. Any deviation from symmetry is not perceptible in the figure. By contrast, analogous figures of $|B|$ on a surface for previously published quasisymmetric configurations have almost always shown visible ripples in the $|B|$ contours. Examples include Figs.~5-6 of Ref.~\onlinecite{Beidler}, Fig.~2 of Ref.~\onlinecite{CFQS}, and Fig.~4 in Ref.~\onlinecite{Henneberg}. The only previously published configurations we are aware of without clear curvature of the $|B|$ contours are those of Fig.~21 in Ref.~\onlinecite{LandremanSengupta2019}, which are much higher aspect ratio ($\ge 78$). \begin{figure} \includegraphics[width=\columnwidth]{20210530-01-015-combinedVmecSpecOpt_boozPlot} \caption{\label{fig:boozPlot} Magnetic field strength for the optimized stellarator shape, computed from VMEC and BOOZ\_XFORM, showing good quasisymmetry.} \end{figure} The quality of quasisymmetry in the optimized configuration can also be seen in Fig.~\ref{fig:boozPlot2}, which shows the dependence of the $|B_{m,n}|$ amplitudes on minor radius. 
At $s=0.5$, where symmetry was optimized, there is a striking notch in the $n\ne 0$ modes where their amplitude becomes extremely small, $< 10^{-5}$ of the mean field. This finding supports the conjecture that it may be possible to obtain quasisymmetry exactly on an isolated magnetic surface\cite{GB2}. At the plasma edge, where symmetry-breaking $B_{m,n}$ modes are largest, the largest nonsymmetric mode remains smaller than all of the symmetric modes for $m=0,\ldots,3$. \begin{figure} \includegraphics[width=\columnwidth]{20210530-01-015-combinedVmecSpecOpt_boozPlot2} \caption{\label{fig:boozPlot2} Fourier amplitudes $|B_{m,n}(s)|$ of the magnetic field strength with respect to the Boozer angles for the optimized configuration, computed from VMEC and BOOZ\_XFORM. The configuration evidently has good quasisymmetry, especially on the $s=0.5$ surface.} \end{figure} \section{Discussion} In this work we have demonstrated a method for optimizing a stellarator's geometry to eliminate magnetic islands while simultaneously optimizing other objectives that assume the existence of magnetic surfaces. This makes it possible for optimizations to include physics codes that use equilibria from the VMEC code, or other equilibrium codes that assume the existence of nested magnetic surfaces, while also ensuring good surface quality. This method can be applied in the future to configurations with nonuniform pressure by using multiple radial domains in SPEC. While quasi-axisymmetry was the main objective in the example here, the method is equally applicable to other objectives. Another extension of this work could be to use a measure of flux surface quality or island width other than residues. In principle any such measure could be used in the procedure of this paper. Several such alternative measures include Mather's\cite{Mather,Meiss} $\Delta W$, the estimate by Cary and Hanson\cite{CaryHansonIslandWidth}, and converse KAM\cite{Meiss,converseKAM}.
\begin{acknowledgments} We gratefully acknowledge discussions with and assistance from Aaron Bader, David Bindel, Benjamin Faber, Andrew Giuliani, Stuart Hudson, Rogerio Jorge, Thomas Kruger, Zhisong Qu, Jonathan Schilling, Florian Wechsung, and Mike Zarnstorff. This work was supported by a grant from the Simons Foundation (560651, ML). BM and CZ are supported by the U.S. Department of Energy under Contract No. DE-AC02-09CH11466 through the Princeton Plasma Physics Laboratory. \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are openly available in Zenodo at \url{http://doi.org/10.5281/zenodo.5035515}.
\section*{Introduction} Much interest has been generated recently in lattice models for euclidean quantum gravity based on dynamical triangulations \cite{mig3,amb3,gross,mig4,amb4,brug4,us4,smit4}. The study of these models was prompted by the success of the same approach in the case of two dimensions, see for example \cite{david}. The primary input to these models is the ansatz that the partition function describing the fluctuations of a continuum geometry can be approximated by performing a weighted sum over all simplicial manifolds or triangulations $T$. \begin{equation} Z=\sum_{T}\rho\left(T\right) \label{eqn1} \end{equation} In all the work conducted so far the topology of the lattice has been restricted to the sphere $S^d$. The weight function $\rho\left(T\right)$ is taken to be of the form \begin{equation} \rho\left(T\right)=e^{-\kappa_d N_d +\kappa_0 N_0} \end{equation} The coupling $\kappa_d$ represents a bare lattice cosmological constant conjugate to the total volume (the number of $d$-simplices $N_d$), whilst $\kappa_0$ plays the role of a bare Newton constant coupled to the total number of nodes $N_0$. We can rewrite eqn. \ref{eqn1} by introducing the entropy function $\Omega_d\left(N_d, \kappa_0\right)$, which counts the number of triangulations with volume $N_d$ weighted by the node term. This is the primary object of interest in this note. \begin{equation} Z=\sum_{N_d} \Omega_d\left(N_d, \kappa_0\right) e^{-\kappa_d N_d} \end{equation} For this partition sum to exist it is crucial that the entropy function $\Omega_d$ increase no faster than exponentially with the volume. For two dimensions this is known to be the case \cite{2dbound}, but until recently the only evidence for this in higher dimensions came from numerical simulation. Indeed, for four dimensions there is some uncertainty in the status of this bound \cite{usbound,ambbound,smit4}.
With this in mind we have conducted a high statistics study of the three dimensional model at $\kappa_0=0$, extending the simulations reported in \cite{varsted} by an order of magnitude in lattice volume. In the course of preparing this manuscript we received a paper \cite{boulatov} in which an argument for the bound in three dimensions is given. Whilst we observe a rather slow approach to the asymptotic, large volume limit, our results are entirely consistent with the existence of such a bound. However, the predicted variation of the mean node number with volume is not seen; rather, the data support a rather slow power law approach to the infinite volume limit. If we write $\Omega_3\left(N_3\right)$ as \begin{equation} \Omega_3\left(N_3\right)=ae^{\kappa_3^c\left(N_3\right) N_3} \end{equation} the effective critical cosmological constant $\kappa_3^c$ is allowed to depend on the volume, and a bound implies that $\kappa_3^c\to {\rm const} < \infty$ as $N_3\to\infty$. In contrast, for a model where the entropy grew more rapidly than exponentially $\kappa_3^c$ would diverge in the thermodynamic limit. To control the volume fluctuations we add a further term to the action of the form $\delta S=\gamma\left(N_3-V\right)^2$. Lattices with $N_3\sim V$ are distributed according to the correct Boltzmann weight up to correction terms of order $O\left(1\over \sqrt{\gamma}V \right)$, where we use $\gamma=0.005$ in all our runs. This error is much smaller than our statistical errors and can hence be neglected. Likewise, as a first approximation, we can set $\kappa_3^c$ equal to its value at the mean of the volume distribution $V$, which allows us to compute the expectation value of the volume exactly since the resultant integral is now a simple Gaussian.
We obtain \begin{equation} \left\langle N_3\right\rangle = {1\over 2\gamma}\left(\kappa_3^c\left(V\right) -\kappa_3\right)+V \label{eqn2} \end{equation} Equally, by measuring the mean volume $\left\langle N_3\right\rangle$ for a given input value of the coupling $\kappa_3$ we can estimate $\kappa_3^c\left(V\right)$ for a set of mean volumes $V$. The algorithm we use to generate a Monte Carlo sample of three dimensional lattices is described in \cite{simon}. We have simulated systems with volumes up to $128000$ 3-simplices, using up to $400000$ MC sweeps (a sweep is defined as $V$ {\it attempted} elementary updates of the triangulation, where $V$ is the average volume). Our results for $\kappa_3^c\left(V\right)$, computed this way, are shown in fig. \ref{fig1} as a function of $\ln V$. The choice of the latter scale is particularly apt as the presence of a factorial growth in $\Omega_3$ would be signaled by a logarithmic component to the effective $\kappa_3^c\left(V\right)$. As the plot indicates, there is no evidence for this. Indeed, the best fit we could make corresponds to a {\it convergent} power law \begin{equation} \kappa_3^c\left(V\right)=\kappa_3^c\left(\infty\right) + a V^{-\delta} \end{equation} If we fit all of our data we obtain best fit parameters $\kappa_3^c\left(\infty\right)=2.087(5)$, $a=-3.29(8)$ and $\delta=0.290(5)$ with a corresponding $\chi^2$ per degree of freedom $\chi^2=1.3$ at $22\%$ confidence (solid line shown). Leaving off the smallest lattice $V=500$ yields a statistically consistent fit with an even better $\chi^2=1.1$ at $38\%$ confidence. We have further tested the stability of this fit by dropping either the small volume data ($V=500-2000$ inclusive), the large volume data ($V=64000-128000$ inclusive) or intermediate volumes ($V=8000-24000$). In each of these cases the fits were good and yielded fit parameters consistent with our quoted best fit to all the data.
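The Gaussian approximation behind eqn. \ref{eqn2} is easy to verify numerically. The sketch below sums the Boltzmann weight $e^{(\kappa_3^c-\kappa_3)N_3-\gamma(N_3-V)^2}$ directly and compares the resulting mean volume with the closed-form prediction; the coupling values are purely illustrative, not the measured ones.

```python
import math

# Illustrative couplings (NOT the measured values): the sampled volume
# distribution is proportional to exp(kc*N - k3*N - gamma*(N - V)**2).
kc, k3, gamma, V = 1.95, 1.90, 0.005, 64000.0

# Direct numerical expectation value over a window around V; the
# distribution has width ~ 1/sqrt(2*gamma) ~ 10, so +/-200 is ample.
norm = mean = 0.0
for N in range(int(V) - 200, int(V) + 200):
    # shift the exponent by its value at N = V for numerical stability
    w = math.exp((kc - k3) * (N - V) - gamma * (N - V) ** 2)
    norm += w
    mean += N * w
mean /= norm

# Gaussian prediction of eqn (2): <N3> = (kc - k3)/(2*gamma) + V
predicted = (kc - k3) / (2 * gamma) + V
```

With these values both numbers come out at $V + 5$, confirming that the quadratic volume-control term simply shifts the mean by $(\kappa_3^c-\kappa_3)/2\gamma$.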
Furthermore, these numbers are consistent with the earlier study \cite{varsted}. We are thus confident that this power law is empirically a very reasonable parameterisation of the approach to the thermodynamic limit. Certainly, our conclusions must be that the numerical data {\it strongly} favour the existence of a bound. One might object that the formula used to compute $\kappa_3^c$ is only approximate (we have neglected the variation of the critical coupling over the range of fluctuation of the volumes). This, in turn, might yield finite volume corrections which are misleading. To check for this we have extracted $\kappa_3^c$ directly from the measured distribution of 3-volumes $Q\left(N_3\right)$. To do this we computed a new histogram $P\left(N_3\right)$ \begin{equation} P\left(N_3\right)=Q\left(N_3\right)e^{\kappa_3 N_3+\gamma\left(N_3-V\right)^2} \end{equation} As an example we show in fig. \ref{fig2} the logarithm of this quantity as a function of volume for $V=64000$. The gradient of the straight line fit shown is an unbiased estimator of the critical coupling $\kappa_3^c\left(64000\right)$. The value of $1.9516(10)$ compares very favourably with the value $\kappa_3^c\left( 64000\right)=1.9522(12)$ obtained using eqn. \ref{eqn2}. Indeed, this might have been anticipated since we might expect corrections to eqn. \ref{eqn2} to be of magnitude $O\left(V^{-\left(1+\delta\right)}\right)$, which even for the smallest volumes used in this study is again much smaller than our statistical errors. In addition to supplying a proof of the exponential bound in \cite{boulatov}, Boulatov also conjectures a relation between the mean node number and volume in the crumpled phase of the model (which includes our node coupling $\kappa_0=0$). This has the form \begin{equation} \left\langle N_0/V\right\rangle=c_1 + c_2 {\ln(V)\over V} \label{eqn3} \end{equation} Our data for this quantity are shown in fig. \ref{fig3}.
Whilst it appears that the mean coordination may indeed plateau for large volumes the approach to this limit seems not to be governed by the corrections envisaged in eqn. \ref{eqn3} -- it is simply impossible to fit the results of the simulation with this functional form. Indeed, the best fit we could obtain corresponds again to a simple converging power with small exponent $\left\langle N_0/V\right\rangle \sim b + cV^{-d}$. The fit shown corresponds to using all lattices with volume $V\ge 8000$ and yields $b=0.0045(1)$, $c=1.14(2)$ and power $d=0.380(3)$ ($\chi^2=1.6$). Fits to subsets of the large volume data yield consistent results. Finally, we show in fig. \ref{fig4}, a plot of the mean intrinsic size of the ensemble of simplicial graphs versus their volume. This quantity is just the average geodesic distance (in units where the edge lengths are all unity) between two randomly picked sites. The solid line is an empirical fit of the form \begin{equation} L_3=e + f\left(\ln{V}\right)^{g} \end{equation} Clearly, the behaviour is close to logarithmic (as appears also to be the case in four dimensions \cite{us4}), the exponent $g=1.047(3)$ from fitting all the data ($\chi^2=1.7$ per degree of freedom). This is indicative of the crumpled nature of the typical simplicial manifolds dominating the partition function at this node coupling. It is tempting to speculate that the true behaviour is simply logarithmic and the deviation we are seeing is due to residual finite volume effects. Alternatively, we can fit the data as a linear combination of the form \begin{equation} L_3=e + f\ln{V} +g\ln{\ln{V}} \end{equation} This gives a competitive fit with $e=-1.45(4)$, $f=1.438(4)$ and $g=-0.55(3)$ with $\chi^2=1.6$. One might be tempted to favour this fit on the grounds that it avoids the problem of a power close to but distinct from unity. However, the situation must remain ambiguous without further theoretical insight. 
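Fits of the form used above, linear in two coefficients but nonlinear in one exponent, reduce to a grid scan over the exponent combined with an ordinary least-squares solve. The sketch below illustrates the procedure on synthetic $\kappa_3^c(V)$ points generated from the quoted best-fit parameters; the volumes are illustrative stand-ins, not the measured data.

```python
import math

# Synthetic kappa_3^c(V) generated from the quoted best fit
# (kappa_inf = 2.087, a = -3.29, delta = 0.290); illustrative volumes.
kappa_inf_true, a_true, delta_true = 2.087, -3.29, 0.290
volumes = [500, 1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000]
kappa = [kappa_inf_true + a_true * v ** (-delta_true) for v in volumes]

def linear_fit(u, y):
    """Least-squares fit y = a + b*u; returns (a, b, rss)."""
    n = len(u)
    ubar, ybar = sum(u) / n, sum(y) / n
    b = sum((ui - ubar) * (yi - ybar) for ui, yi in zip(u, y)) / \
        sum((ui - ubar) ** 2 for ui in u)
    a = ybar - b * ubar
    rss = sum((a + b * ui - yi) ** 2 for ui, yi in zip(u, y))
    return a, b, rss

# Scan delta on a grid; for each delta the remaining problem is linear
# in (kappa_inf, a), so we keep the grid point with the smallest rss.
best = None
for k in range(50, 600):
    delta = k / 1000.0
    u = [v ** (-delta) for v in volumes]
    a0, b0, rss = linear_fit(u, kappa)
    if best is None or rss < best[0]:
        best = (rss, delta, a0, b0)

rss, delta_fit, kappa_inf_fit, a_fit = best
```

On noiseless synthetic points the scan recovers the generating parameters exactly; on real data one would inspect the $\chi^2$ profile along the $\delta$ grid instead of a single minimum.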
To summarise this brief note, we have obtained numerical results consistent with the existence of an exponential bound in a dynamical triangulation model of three dimensional quantum gravity. Thus, practical numerical studies can reveal the bound argued for in \cite{boulatov}. Our results also favour the existence of a finite (albeit large, $\sim 200$) mean coordination number in the infinite volume limit in the crumpled phase. However, the nature of the finite volume corrections to the latter appears very different from those proposed in \cite{boulatov}. Indeed, both for the critical coupling and the mean coordination number we observe large power law corrections with small exponent. Finally, we show data for the scaling of the mean intrinsic extent with volume which suggests a very large (possibly infinite) fractal dimension for the typical simplicial manifolds studied. This work was supported, in part, by NSF grant PHY 92-00148. Some calculations were performed on the Florida State University Cray YMP. \vfill \newpage
\section{Error analysis of the x-ray scattering spectra} For collective x-ray scattering, the electron temperature can be calculated by measuring the intensity ratio of the upshifted to downshifted x-ray plasmon peaks using the detailed balance relation \begin{eqnarray} \frac{S^+(k)}{S^-(k)} = e^{-\hbar \omega_p/k_B T} \label{detailed_balance_eq} \end{eqnarray} where $S^+(k)$ and $S^-(k)$ are the frequency-integrated intensities in the upshifted and downshifted plasmons, respectively, and $\hbar \omega_p \sim$ 18 eV is the plasmon energy shift. The signal in each plasmon must be distinguished from the generally stronger elastic peak, found centered at zero energy shift relative to the x-ray probe. The elastic peak is assumed to reflect the spectral distribution of the seeded FEL beam convolved with the resolution of the spectrometer. It is usually referred to as an instrument function. \begin{figure}[ht!] \includegraphics[width=0.4\textwidth]{fig2_prl.png} \caption{Forward scattering data from aluminum, as shown in Figure 2 of Sperling et al \cite{sperling2015free}. While the downshifted plasmon is evident in the region around 7960 eV in (a), the upshifted plasmon, which would be centered around 8000 eV, is a much more subtle effect. The fit of the upshifted plasmon in (b) uses an over-smoothed instrument function and neglects a consideration of the measurement uncertainties. } \label{fig2_prl} \end{figure} I show the main experimental result of Sperling et al in Figure \ref{fig2_prl}. In (a), the downshifted plasmon centered at 7960 eV is readily apparent. At a best focus FEL spot size of 2 $\mu$m FWHM, a weak upshifted plasmon was claimed to have been observed. This manifested as a small excess of the scattering data over the instrument function in the spectral region around 8000 eV. This is shown in detail in Fig. \ref{fig2_prl}(b). \begin{figure}[ht!]
\includegraphics[width=0.5\textwidth]{hexfig_0.pdf} \caption{Forward scattering data (blue) plotted against the reconstructed instrument function (orange). Error bars for each curve are shown as the shaded regions. The data shown in (a-c) were collected with the FEL at the nominal best focus position. For comparison, (d-f) were collected with the FEL significantly defocused. Over the spectral region containing a possible upshifted plasmon (8000-8020 eV) the uncertainties in the instrument function are larger than the differences between the two curves.} \label{fig1} \end{figure} The seeded FEL beam is known to have significant intensity and spectral fluctuations \cite{amann2012demonstration}, both in the central peak and in the pedestals located in the spectral region also containing the plasmons. The instrument function must be independently characterized for each experimental spectrum that is collected. In the present experiment, the FEL spectrum was not directly measured. Instead, a complementary GaAs spectrometer measured the non-collective x-ray spectrum at a scattering angle of 60\degree \cite{zastrau2014bent}. The FEL spectrum was assumed to be equivalent to the elastic peak measured by the GaAs spectrometer. This elastic peak was isolated by subtracting away the broad Compton peak, which was assumed to be parabolic in shape. The instrument function was reconstructed by convolving the extracted elastic peak with the resolution function of the HAPG spectrometer used to measure the forward scattering spectrum. Small systematic errors, for example in the baseline of the non-collective elastic peak or the intensity of the Compton peak, can potentially have large impacts on the derived instrument function. In addition, the resolution function of the HAPG spectrometer was not accurately measured. It was instead extracted by deconvolving a measurement of the Cu K$_{\alpha}$ spectrum from a theoretical model.
This reconstruction process for the instrument function was not independently validated to produce the correct result. There is a high likelihood that it introduced systematic errors to the analysis, which are especially concerning given the precision needed to measure the weak upshifted plasmon signal. The GaAs spectrometer measured the elastic peak with about ten times fewer photons than the HAPG spectrometer. The resolution of the HAPG crystal was about 10 times lower than that of the GaAs crystal. The convolution step over-smooths the natural shot noise in the instrument function. The smoothed instrument function, plotted as a dashed line in Fig. \ref{fig2_prl}, seems to suggest that the instrument function is known more precisely than it is. The error bars can be recovered by summing the absolute photon signal measured by the CCD detector on the GaAs spectrometer. The reconstructed instrument function can then be normalized to this level. The photon signal $N_{ph}$ is then calculated for each spectral bin, with associated error bars of $\sqrt{N_{ph}}$. In Figure \ref{fig1} I show scattering data in the spectral region which would contain any upshifted plasmon. Fig. \ref{fig1} (a-c) shows all the heated data collected at the nominal best focus; plot (c) is the data presented in Fig. 2 of Sperling et al that I reproduced as Fig. \ref{fig2_prl}. For comparison I also show data taken with the FEL defocused (d-f), where the aluminum should be unheated. In each of these cases, it can be seen that the random uncertainties in the instrument function are typically larger than the variations between the scattering and instrument function. Since the instrument function was measured with fewer total photons, it has to be scaled up by an arbitrary factor of 5-8 to match the intensity of the elastic peak of the forward scattering spectrum. The uncertainties must also be scaled by the same factor, yielding larger error bars on the instrument function relative to the scattering.
I caution that, even drawn with the appropriate error estimates, the solid blue line should not be interpreted as the true value of the instrument function. This curve is still inappropriately smoothed and should not be used to extract the plasmon intensity ratio. \subsection{Resampling analysis} I included the uncertainties in the analysis by resampling and fitting the spectral data. The forward scattering spectrum and instrument function were modeled as ensembles of Gaussian-distributed variables, with means and standard deviations equal to the measured values in each spectral bin. The two curves were resampled from their associated probability distributions and scaled to overlap the mean values in the elastic peaks. I then subtracted the instrument function from the scattering to isolate the plasmon signal. As can be seen in Fig. \ref{fig1}, the uncertainties in the peak intensity of the instrument function are large compared to the differences between the scattering and instrument function. The resampling causes the peak intensity of the instrument function to fluctuate. Since the instrument function is scaled to the scattering data, the relative difference between the scattering and instrument function in the upshifted region will also change. Therefore, scaling the instrument function introduces another random source of error to the fitting process that was not included in the original analysis. By averaging over many repetitions, we can include this variation in the fitting. \begin{figure}[ht!] \includegraphics[width=0.5\textwidth]{hex_fit_to_plasmon.pdf} \caption{Plotted are the resampled scattering spectra (blue), instrument function (orange), and the difference between the two measured spectra (green). Any evidence of an upshifted plasmon would appear in the subtracted spectrum.
The subtracted data are compared to the theoretical fits to the upshifted plasmon (red) from Sperling et al calculated for T = 6 eV (a-c) and 0.3 eV (d-f).} \label{fig2} \end{figure} I plot the resampled data in Figure \ref{fig2}. For the best focus (a-c) and defocused data (d-f) alike, it is difficult to spot by eye any significant differences between the resampled instrument function (orange) and the scattering data (blue). The situation is not much improved when looking at the isolated upshifted plasmon signal (green). The spectra are dominated by random noise, primarily from the poorly constrained instrument function. The plasmon signal does not show good qualitative agreement with the model fits in either case. I repeated the resampling procedure 100,000 times. For each trial, I evaluated the goodness-of-fit of both the 6 eV and 0.3 eV theoretical models. These models are identical to those presented in Sperling et al and Fig. \ref{fig2_prl}. By Eq. \ref{detailed_balance_eq}, the intensity of the upshifted plasmon at 0.3 eV is indistinguishable from room temperature. I used a $\chi^2$ test with N = 100 points spanning the spectral window in Fig. \ref{fig2} and $\nu$ = 97 degrees of freedom. The fitting parameters of the model are the temperature, the theoretical elastic to inelastic scattering ratio, and the scaling between the instrument function and scattering. For p=0.05, $\chi^2 = 121$ is the critical value. The results are summarized in Table \ref{chi_table}. \begin{table} \centering \begin{tabular}{ l | c | c | c| r } \hline Run & Focus & $<\chi^2_{\text{0.3 eV}}>$ & $<\chi^2_{\text{6 eV}}>$ & $<p>$ \\ \hline 225 & Best & 158 & \textbf{144} & 0.0014 \\ 224 & Best & 171 & \textbf{146} & 0.001 \\ 230 & Best & 146 & \textbf{123} & 0.048 \\ 226 & 10 $\mu$m & 173 & \textbf{144} & 0.0014\\ 227 & 10 $\mu$m & \textbf{133}& 154 & 0.0086\\ 229 & 30 $\mu$m & \textbf{125} & 127 & 0.042 \end{tabular} \caption{Summary of the fits to the forward scattering data.
$<\chi^2_{\text{0.3 eV}}>$ and $<\chi^2_{\text{6 eV}}>$ denote the $\chi^2$ averaged over the repetitions for the 0.3 eV and 6 eV models, respectively. The smallest value of $\chi^2$ is shown in bold. The corresponding average p-value, $<p>$, is shown for the best fit.} \label{chi_table} \end{table} The theoretical fits all fail to reach a significance level of p$>$0.05 on average. For the best focus data, the best case was run 230, where the 6 eV model gives $\chi^2 = 123$ and p=0.04. The two other best focus runs (224, 225) have much worse agreement with the 6 eV model, with $\chi^2 \sim$ 145. The defocused run 229 has similar agreement with both the 6 eV ($\chi^2$ = 127) and 0.3 eV ($\chi^2 = 125$) models. While this run slightly favors the low-temperature model, it fits the high temperature model better than runs 224 and 225 and is quite close to run 230. Additionally, the defocused data in run 226 have better agreement with the 6 eV model than the 0.3 eV model. Thus it is doubtful whether we can even establish if there is a difference in the upshifted plasmon region between the best focus and defocused data sets. Variations between the scattering data and instrument function may only reflect the inaccuracies in the reconstruction of the instrument function itself and noise in the extracted inelastic scattering. In contrast to the report of Sperling et al of an elevated temperature at best focus, the data in the upshifted region do not show good agreement with the 6 eV theoretical model. The fits fail to reach significance, and the best focus and defocused data do not show a systematic relationship to the models. The large variations in the spectral data overwhelm the sensitivity of the temperature measurement. An alternative explanation is that the data are not consistent with an elevated temperature in any case. As I show next, it is highly likely that the intensity at best focus was significantly less than what was described.
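Two of the numbers quoted above can be checked with a few lines of stdlib Python: the $\chi^2$ critical value for $\nu = 97$ (computed here via the Wilson-Hilferty approximation, my choice of method, not necessarily the one used in the analysis) and the detailed-balance intensity ratios implied by Eq. \ref{detailed_balance_eq} for the two model temperatures.

```python
import math

# (1) Upper-tail chi^2 critical value for nu = 97 at p = 0.05, via the
# Wilson-Hilferty approximation; z = 1.6449 is the standard-normal
# quantile for this significance level.
nu, z = 97, 1.6449
c = 2.0 / (9.0 * nu)
chi2_crit = nu * (1.0 - c + z * math.sqrt(c)) ** 3   # ~121

# (2) Detailed-balance intensity ratios S+/S- = exp(-hw_p / kT)
# for the two model temperatures, with hw_p = 18 eV.
hw_p = 18.0
ratio_6eV = math.exp(-hw_p / 6.0)    # ~0.05
ratio_03eV = math.exp(-hw_p / 0.3)   # ~1e-26: indistinguishable from zero
```

The approximation reproduces the quoted critical value of 121, and the 0.3 eV ratio confirms why that model is indistinguishable from room temperature.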
The data fail to show a meaningful upshifted plasmon signal because the aluminum was not significantly heated. \section{Experimental conditions} Successfully heating the aluminum sample with the FEL depends on focusing the x-rays to a small spot. The spot size was not measured in the experiment, leaving the actual experimental conditions largely unknown. The SCFLY simulations used to justify a temperature of 6 eV overestimated the x-ray fluence on target by at least a factor of 3.6. Assuming the temperature scales linearly with the fluence, the upshifted plasmon signal at best focus would be weaker by a factor of 5000. The aluminum was likely not heated to a point where an upshifted plasmon could have been detected over the noise in the measurements. The experiment used a set of beryllium compound refractive lenses, located approximately 4 m away from the interaction region, to focus the x-ray beam from a diameter of 800 $\mu$m to a nominal 1 $\mu$m. At the time of this experiment, the set of prefocusing lenses now located 100 m upstream of the target chamber \cite{heimann2016compound} had not been implemented at the LCLS. The x-ray beam overfilled the 800 $\mu$m aperture of the focusing lenses. The divergence was then about 2 $\mu$m FWHM/cm of defocusing. The best focus is about 1 $\mu$m FWHM and is typically located a few cm away from the calculated best-focus position \cite{sikorski2015focus}. This is a consequence of form errors in the beryllium lenses and positioning errors in the lens holders. \begin{figure} \includegraphics[width=0.5\textwidth]{fe_kbeta_zscan_int.pdf} \caption{Results from measuring the FEL spot at 7 keV using two-photon $K_{\beta}$ excitation. The experimental best focus at z=-223 mm maximized production of $K_{\beta}$ x-rays.
The calculated best focus position was at z=-205.5 mm.} \label{fig3} \end{figure} The FEL spot size can be optimized in situ by tuning the FEL energy below the K-edge of a transition metal and maximizing the two-photon excitation of the K$_\beta$ transition. In this process, photo-absorption of an x-ray creates a long-lived L-shell vacancy in an atom. This excited atom has a large cross section ($\sim$Mb/atom) to absorb a subsequent x-ray, promoting a K-shell electron to fill the L-shell vacancy. A characteristic K$_\beta$ x-ray is then emitted from the de-excitation of the atom. This process scales as the square of the x-ray intensity. This technique was used to optimize the x-ray focus in another recent experiment. Figure \ref{fig3} shows the results for scanning the x-ray focus position. An iron foil was irradiated with the FEL at a fixed energy of 7.05 keV. The x-ray production is maximized with the lenses at around z=-222 mm. The experimentally determined best focus was then characterized by examining the size of the craters ablated by the x-ray beam \cite{heimann2016compound}, yielding a spot size of 2.33 $\mu$m FWHM. In comparison, the calculated best focus for this configuration was estimated to be 0.4 $\mu$m FWHM at z=-205.5 mm, 16.5 mm away from the experimentally determined best focus. For the experiment in Sperling et al, the lenses were set at the calculated best-focus position, but the actual dimensions of the focal spot were not measured. The paper asserts that the x-ray spot size was 2 $\mu$m FWHM. Using the geometrical beam divergence, the x-ray spot size in the experiment should be quoted as $1^{+4}_{-0}$ $\mu$m FWHM. Without experimentally optimizing the focus, the x-ray intensity can then be expected to vary by a factor of $5^2$ relative to the optimum position. The SCFLY simulations used a value of 0.09 mJ for the x-ray energy per pulse incident on the target, corresponding to a per-pulse energy of 0.3 mJ and a transmission to the target chamber of 30\%.
The per-pulse energy was measured in the experiment to be 0.25 mJ. Results presented by Heimann et al indicate the transmission is 20\% \cite{heimann2016compound}, for an effective 0.05 mJ/pulse delivered to the target. Thus the energy input to the simulations was overestimated by a factor of 1.8. SCFLY is 0D and therefore does not take into account the variation of the x-ray dose through the thickness of the foil. Scattering from atoms near the back side of the foil, where the FEL intensity has been diminished by photo-attenuation, is weighted more heavily in the measured forward scattering spectrum. For a foil of thickness $d$, atoms at a depth $x$ are irradiated with an intensity \begin{eqnarray} I(x) = I_0 e^{-\mu x} \end{eqnarray} For a scattering angle $\theta$, the contribution to the scattering spectrum from a thickness $dx$ has to be weighted by the sample transmission \begin{equation} S(x) dx = I_0 e^{-\mu x} e^{-\mu (d-x)/\cos\theta} dx \end{equation} The depth-averaged dose is then \begin{eqnarray} S_{ave} = {1\over d} \int_0^d S(x) dx \end{eqnarray} Using the parameters from the experiment (d = 50 $\mu$m, $\mu$ = 132 cm$^{-1}$, $\theta$ = 24\degree), depth averaging reduces the effective x-ray dose by another 50\%. Considering the corrected pulse energy and depth averaging, the net x-ray fluence is therefore reduced by a factor of 3.6. Finally, the SCFLY calculations indicate a maximum temperature of 6 eV by the end of the pulse, which was claimed to be consistent with an upshifted plasmon and an experimental temperature of 6 eV. The experimental measurements are time-integrated over the duration of the FEL pulse. Measuring an average temperature of 6 eV is inconsistent with simulations suggesting a final temperature of 6 eV.
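The depth average above is straightforward to evaluate numerically with the quoted parameters; a minimal midpoint-rule check (taking $I_0 = 1$ so the result is the reduction factor relative to an unattenuated foil) confirms the factor of roughly one half.

```python
import math

# Parameters quoted in the text.
d = 50e-4                    # foil thickness: 50 um in cm
mu = 132.0                   # attenuation coefficient in cm^-1
theta = math.radians(24.0)   # scattering angle

# S(x) ~ exp(-mu*x) * exp(-mu*(d - x)/cos(theta)); average over depth
# with a midpoint sum, relative to an unattenuated foil (S = 1).
n = 10000
s_ave = sum(math.exp(-mu * x - mu * (d - x) / math.cos(theta))
            for x in ((i + 0.5) * d / n for i in range(n))) / n

# s_ave comes out near 0.5: depth averaging halves the effective dose.
```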
From the detailed balance relation, the time-averaged intensity ratio is \begin{eqnarray} \frac{S^+(k)}{S^-(k)} = \frac{\int_0^\infty e^{-\hbar \omega_p/k_B T(t)} FEL(t) dt }{\int_0^\infty FEL(t) dt} \end{eqnarray} where $FEL(t)$ and $T(t)$ are the FEL intensity and temperature temporal profiles. For a final temperature of 6 eV, the time-averaged SCFLY simulations would be consistent with an experimental intensity ratio of 0.01. An intensity ratio of 0.05 was claimed to be observed in the data. This would require a final temperature of 10 eV, or an even tighter x-ray focus than is likely possible. \subsection{Prevalence of successful x-ray heating} Given that the actual x-ray focal spot was not characterized, there is a large degree of uncertainty as to whether the chosen best focus position was small enough to create elevated temperatures in the aluminum sample. Assuming that the actual focal position is a Gaussian distributed variable centered at the calculated position, the probability that the focal spot had a FWHM of at most $d$ is \begin{eqnarray} P(d) = \frac{1}{\sqrt{2\pi}\sigma}\int_{-\ell}^\ell e^{-x^2/2\sigma^2} dx \\ \ell = d-d' \end{eqnarray} Here $\sigma$ = 4 $\mu$m and $d'$ is the best focus spot size of 1 $\mu$m. The probability that the focal spot was at most 2 $\mu$m FWHM in size is 20\%. To compensate for the overestimate of the x-ray energy on the target in the SCFLY simulations, the focal spot would have to be at most 1.2 $\mu$m FWHM. The probability for this to have occurred is 4\%. \section{Conclusion} The temperature measurement of Sperling et al is not supported by the data. The quality of the data was too poor to perform a meaningful measurement of the upshifted plasmon at the claimed conditions. As the spot size was not measured during the experiment, it is unknown what the actual conditions were at the position of the calculated best focus.
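The 20\% and 4\% probabilities quoted in the previous subsection follow directly from the Gaussian model for the focal-spot error; a minimal check using only the standard library:

```python
import math

def prob_spot_at_most(d, d_best=1.0, sigma=4.0):
    """Probability (Gaussian model) that the focal spot FWHM is at
    most d, in um; sigma is the equivalent spot-size spread derived
    from the beam divergence, d_best the best-focus spot size."""
    ell = d - d_best
    if ell <= 0:
        return 0.0
    # integral of the unit Gaussian over [-ell, ell]
    return math.erf(ell / (sigma * math.sqrt(2.0)))

p_2um = prob_spot_at_most(2.0)    # ~0.20, as quoted
p_12um = prob_spot_at_most(1.2)   # ~0.04, as quoted
```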
While it is possible that the x-ray spot size was small enough for an appreciable level of heating, the known errors in the focus make this very unlikely. Due to the complexity of the experimental setup and subsequent changes to the experimental facilities, later measurements of the x-ray spot size may not be directly applicable. The theoretical work that was presented may be sound, but it should not be considered to be validated by the experimental data. \begin{acknowledgments} The author wishes to acknowledge the assistance of LCG, HON, ADB, WF, and SBH. This work is dedicated to the memory of Mikhail V Konnik. \end{acknowledgments} \bibliographystyle{unsrt}
\section{Introduction} For local communities, political decisions with heavy financial consequences call for rigorous and detailed studies. The purpose of those studies is to provide Decision-Makers with forecasts and projections of their financial circumstances to come, for various sequences of projects responding to political goals and various ways to finance them. Producing those forecasts and projections is a delicate task for experts. Indeed, on the one hand, the factors constraining the projects of a local community are essentially laws, sound management rules, which fluctuate, and public opinion, which is fickle. On the other hand, the way those constraints are perceived by local community Decision-Makers is also time-varying. A local community therefore often calls on experts in order to strike the right balance. In practice, an expert works in close collaboration with the Decision-Makers of the local community he is engaged with, in order to take into account all the targets and perceived constraints of every project of the local community. His work consists in building financial plans (Prospective Budgets) which are in some sense optimized, consistent with the capacity of investment of the local community and, of course, compliant with the political goals of the Decision-Makers. For each Prospective Budget, the states of various indicators, relevant to the resulting financial health of the local community in future years, are given. As a result of this work, viable scenarios partially satisfying the political goals are proposed, among which, in an ideal situation, Decision-Makers may make a choice. Unfortunately, in a large number of non-ideal situations, constraints and goals cannot be satisfied together. In those cases the set of viable scenarios influences the evolution of the political goals and of the constraint perception, and a new iteration of the work process begins. The existing tools dedicated to this iterative work process are somewhat limited.
The goal of this paper is to set out a new tool to contribute to filling this gap, in a specific context we shall describe now. ~ Local communities usually need visibility on their budget over a time period of several years, linked to the characteristic duration of a political mandate. The main strict constraint, generally imposed by current legislation, is that the difference between receipts and expenditures cannot be negative. This is the balanced budget rule. In most countries this balanced budget is split into sub-budgets which are not necessarily balanced. However, each expenditure or receipt clearly belongs to a unique sub-budget. For instance, French local communities split their budget into an investment budget and an operating budget. A positive credit balance from the operating budget can be transferred to the investment budget. Our work is set within the French model of local community management, but the questions tackled and the tool we set out are clearly more general. ~ In order to explain the goal of the tool we build, we restrict ourselves to the particular case of building a Prospective Budget where, among the political goals, two objectives are to be reached. With the software environments available nowadays, a Prospective Budget may be worked out with the first objective achieved. In particular, the consequences on the factors involved in the second objective may be quantified. Of course, the same can be done exchanging the roles of the two objectives. Nonetheless, generally speaking, it is not possible to satisfy both objectives, since the constraints are too numerous. Schematically, it may be said that it is possible to bring out two Prospective Budgets $S^1$ and $S^2$, where $S^1$ satisfies the first objective, which is symbolized by the fact that indicator $V_1$ reaches a targeted value $\widetilde{V_1}$.
Prospective Budget $S^2$ satisfies the second objective, which is translated by $V_2= \widetilde{V_2}$ for a targeted value $\widetilde{V_2}$. In Prospective Budget $S^1$, $V_2\neq \widetilde{V_2}$ but is determined by the budget building process, which takes constraints into account and which is, in some sense, optimized. In a similar way, in Prospective Budget $S^2$, $V_1\neq \widetilde {V_1}$. Having those Prospective Budgets on hand, the next step consists in finding several alternative ones in which neither $V_1$ reaches $\widetilde{V_1}$ nor $V_2$ reaches $\widetilde{V_2}$, but which still satisfy the constraints and are globally more satisfactory. When this process is executed by an expert, the building of those alternative Prospective Budgets uses the tool once more, after letting the targeted values evolve, guided by the expert's knowledge and his interaction with Decision-Makers. \\ Yet, the new tool described in this paper has the ambition of automatically generating a collection of alternative Prospective Budgets and of presenting them in a usable way, so that Decision-Makers can choose the one that best fits their goals. \\ In order to create such a tool, we first identified that the question of finding several alternative Prospective Budgets can be formalized as finding a shape, corresponding to the extremum of a given fitness function, in a multi-dimensional space. Then we found out that the best type of optimization methods to tackle this shape search was Genetic Algorithms. Indeed, Genetic Algorithms have the capability to explore a given domain and, by nature, the result of a Genetic Algorithm is a set of solutions that optimizes the fitness function. \label{201310150919} For a review on Genetic Algorithms, we refer to {Goldberg \cite{citeulike:125978}}, {{Beasley}, {Bull} \& {Martin} \cite{BeasleyBullMartin1,BeasleyBullMartin2}} and {Davis \cite{davis:handbook}}. 
Using Genetic Algorithms for optimization problems is not new. Nevertheless, the algorithm we propose here has innovative aspects. The first one is that we look for an optimal object in a bounded box. \\ The other innovative aspect is that we look for the argument of an optimum which is not a single point but a shape in a relatively high-dimensional space. The use of Genetic Algorithms for shape optimization is classical, and many references exist on the subject. We refer for instance to {{De Jong} \cite{DeJong}} and {{Castro}, {Ant\`onio} \& {Sousa} \cite{Castro2004356}}. We also refer to articles that implement related population-based methods, namely Particle Swarm Optimization (see for instance {{Mattheck} \& {Burkhardt} \cite{MattheckBurkhardt}} and {{Fourie} \& {Groenwold} \cite{FourieGroenwold}}) and Fuzzy Controlled Genetic Algorithms (see for instance {{Soh} \& {Yang} \cite{SohYang}}), both used in structure optimization. But, in all those references, a Genetic Algorithm is used to drive the successive settings of the parameters of a software code in order to find the optimal solution. The methods do not use -\,contrary to what we do\,- the capability of Genetic Algorithms to directly build a shape in the space. Once it was established that Genetic Algorithms were the pertinent tool for our question, the problem had to be formalized so that they could be used. The literature concerning Genetic Algorithms in the context of financial optimization is rich (we refer for instance to Chen \cite{ChenSH:2002}). Besides, financial modeling is very active in the market finance sector (see for instance Goodman \& Stampfli \cite{GoodmanStampfli}, {Ilinski \cite{Ilinski}} and {Fama \cite{Fama1998283}}). However, it is much less productive for applications in the public sector (see for instance {{Musgrave \cite{Musgrave}} and Rosen \cite{Rosen}}). 
Finally, mathematical modeling for local community finance seems to be very sparse (see {Tiebout \cite{Tiebout}}). Hence, on the one hand, we had to develop a model of the local community finance system. On the other hand, we built a proper formalism (calling upon this model) to support our Genetic Like Algorithm. We now summarize this formalism. It can be considered that any given Prospective Budget is characterized uniquely by the two values $V_1$ and $V_2$. In other words, indicators $V_1$ and $V_2$ become variables on which the Prospective Budget depends. To simplify matters, $V_1$ and $V_2$ are both supposed to be $n$-dimensional, so that it can be assumed that $(V_1,V_2)\in{\mathbb{R}}^{2n}$. The Prospective Budget associated with values $V_1$ and $V_2$ is written $S(V_1,V_2)$. Of course, for some values of the variables, say $(V_1^f,V_2^f)$, Prospective Budget $S(V_1^f,V_2^f)$ does not satisfy the constraints. The constraints may then be seen as defining a sub-domain of the space ${\mathbb{R}}^{2n}$ in which the variables lie. Within this framework, the Prospective Budget $S^1$ described above is written $S(\widetilde{V_1},V_2^c)$, where $V_2^c$ is computed by the software environment. In the same way, Prospective Budget $S^2$ is written $S(V_1^c,\widetilde{V_2})$. \\ The method explores, in ${\mathbb{R}}^{2n}$ -- the space where the variables lie -- the intersection of a box containing the two points $(\widetilde{V_1},V_2^c)$ and $(V_1^c,\widetilde{V_2})$, associated with the budgets already on hand, with the sub-domain where the constraints are satisfied, in order to identify a shape joining $(\widetilde{V_1},V_2^c)$ and $(V_1^c,\widetilde{V_2})$ around which Prospective Budgets fit well what Decision-Makers expect, are in some sense optimized and satisfy the constraints. 
The box is built by considering in ${\mathbb{R}}^{2n}$ the middle point of $(\widetilde{V_1},V_2^c)$ and $(V_1^c,\widetilde{V_2})$ and by building at this point an orthonormal frame whose first vector is the normalization of vector $\overrightarrow{(\widetilde{V_1},V_2^c)(V_1^c,\widetilde{V_2})}$ -- joining $(\widetilde{V_1},V_2^c)$ to $(V_1^c,\widetilde{V_2})$. The other vectors of the frame are obtained by means of the Gram-Schmidt routine. \\ The method consists in defining a fitness function $F$ which integrates the Decision-Makers' political goals. We also have to build the sub-domain on which budgets satisfy the constraints. We define a method to encode the variables in the considered box. This coding calls, among other things, upon a by-product of the Gram-Schmidt routine. Then, we implemented a Genetic Like Algorithm which consists first in generating a collection of $N$ values $({V_1^l}^0,{V_2^l}^0)_{l=1,\dots,N}$ which are within the box and satisfy the constraints. For each value, Prospective Budget $S({V_1^l}^0,{V_2^l}^0)$ and its fitness $F(S({V_1^l}^0,{V_2^l}^0)) = F({V_1^l}^0,{V_2^l}^0)$ can be computed. By crossover, mutation and constraint management methods, usually combined in Genetic Algorithms, a new collection $({V_1^l}^1,{V_2^l}^1)_{l=1,\dots,N}$ (lying in the box and satisfying the constraints) is then generated. Iterating finally leads to the $k^\text{th}$ generation $({V_1^l}^k,{V_2^l}^k)_{l=1,\dots,N}$, which may be close to the sought shape in the intersection of the box and the sub-domain where the constraints are satisfied. This way of using a Genetic Like Algorithm appears to be new. \\ The main contributions of this paper are the construction of the models of the local community finance system and of the Alternative Financial Solutions seeking problem, which is done in section \ref{201310021035}, and the formalization of those models in a form -\,just evoked\,- allowing the use of a Genetic Like Algorithm. 
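To fix ideas, the generation loop just described (random initial collection in the box, then selection, crossover and mutation under constraint management) can be sketched as follows. This is a minimal illustration, not our actual implementation: the fitness and constraint test are placeholder functions, and all numerical choices (population size, mutation rate, elitism) are assumptions made for the sketch only.

```python
import random

N_DIM = 10          # 2n with n = 5: five Investment and five Tax values
POP_SIZE = 20       # N, the size of the collection
N_GENERATIONS = 30  # k, the number of generations

def fitness(v):
    # Placeholder fitness (an assumption, standing in for the budget
    # fitness F of the paper): peaks on the unit sphere.
    return -abs(sum(x * x for x in v) - 1.0)

def satisfies_constraints(v):
    # Placeholder constraint test: stay inside the box [-1, 1]^10.
    return all(-1.0 <= x <= 1.0 for x in v)

def random_individual():
    # Rejection sampling: draw in the box until the constraints hold.
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(N_DIM)]
        if satisfies_constraints(v):
            return v

def crossover(a, b):
    # One-point crossover on the coordinate lists.
    cut = random.randrange(1, N_DIM)
    return a[:cut] + b[cut:]

def mutate(v, rate=0.1, scale=0.1):
    # Gaussian perturbation of a few coordinates.
    return [x + random.gauss(0.0, scale) if random.random() < rate else x
            for x in v]

def next_generation(pop):
    graded = sorted(pop, key=fitness, reverse=True)
    parents = graded[:POP_SIZE // 2]      # truncation selection
    children = [graded[0]]                # elitism: keep the best
    while len(children) < POP_SIZE:
        child = mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
        if satisfies_constraints(child):  # constraint management
            children.append(child)
    return children

random.seed(0)
population = [random_individual() for _ in range(POP_SIZE)]
initial_best = max(fitness(v) for v in population)
for _ in range(N_GENERATIONS):
    population = next_generation(population)
best = max(population, key=fitness)
```

Note that the useful result here is the whole final population, not just `best`: its members spread over the near-optimal region, which is what allows a shape, rather than a single optimum, to be traced.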
This formalization and the writing of the Algorithm itself are given in section \ref{GLA}. In the last section, the method is tested, in particular on an operational problem, and gives good results. The demonstration of this capability of our method is also an important contribution. \section{Description of the Alternative Financial Solutions seeking problem} \label{201310021035} This section is devoted to the description of the kind of financial problems we tackle with our method and tool. We begin by introducing, from a systemic point of view, the workings of the French local community yearly budget. Then, we explain the problem of seeking alternative Multi-Year Prospective Budgets. The method for solving it is set out in section \ref{GLA}. \subsection{Local community Yearly Budget System working} \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{RepSysBudg.pdf} \caption{French local community Yearly Budget System working.} \label{figRepSystBudg} \end{center} \end{figure} Figure \ref{figRepSystBudg} depicts the schematic working of a French local community yearly budget. To explain this working, we adopt a systemic point of view allowing us to give a global and macroscopic description, without going into technical or semantic details, of what we call in the following the Yearly Budget System. \\ For readers interested in the French local community finance system, we refer to {\cite{QualiteComptable}} and {\cite{PlanM14}}. ~ Among the incomes contributing to the local community operating budget, there are essentially state allocations and local "Taxes". A local community cannot influence the state allocation level; however, setting the local Tax Level is within its own competences. As a consequence, we consider Taxes as an input of the Yearly Budget System. They lie at the top-right of Figure \ref{figRepSystBudg}. The other inputs of this system are linked with the "Current Debt". 
They are: the capital associated with this debt that remains due, the capital that needs to be repaid this year and the interests that have to be paid. Those amounts are defined by loan contracts of previous years. Those inputs are placed on the left-hand side of the figure. Of course, the local community cannot directly affect those inputs, but it acts on their values in the following years by contracting new loans or not. This is symbolized by the dashed line in the figure. \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{RepSysInv.pdf}\vspace{-1cm} \caption{Synthetic Yearly Budget System} \label{figRepSystSynt} \end{center} \end{figure} ~ Generally, a local community plans to collect operating receipts that allow it to face all operating expenditures and debt interests and that, once those expenditures are realized, leave a remaining amount that can be used for investment. This remaining amount is called "Gross Self-Financing Capacity". This "Gross Self-Financing Capacity" is used to repay the capital that needs to be repaid. The remainder, which is called "Net Self-Financing Capacity", contributes to the investment budget with the goal of topping up subsidies and loans to reach the Investment Level wanted by the community. This System generates a balanced budget. At the bottom of the figure, the "Capacity to Be Free of Debt" is mentioned. This indicator is computed from the capital that remains due and from the Gross Self-Financing Capacity. It is, by definition, the time (generally expressed in years) for the community to repay all the capital of its debt, if no other loans are contracted and if the Gross Self-Financing Capacity remains constant over the next years. This indicator is seen here as the output of the system. A generally-accepted maximum figure for the Capacity to Be Free of Debt is 15 years. ~ The Yearly Budget System is presented in Figure \ref{figRepSystSynt} as a synthetic diagram. 
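As an illustration of the definitions above, the Gross Self-Financing Capacity and the Capacity to Be Free of Debt can be computed as follows; the figures are hypothetical, chosen only to make the arithmetic visible.

```python
def gross_self_financing(operating_receipts, operating_expenditures,
                         debt_interests):
    # Amount left from the operating budget once operating expenditures
    # and debt interests are paid: the Gross Self-Financing Capacity.
    return operating_receipts - operating_expenditures - debt_interests

def capacity_to_be_free_of_debt(capital_remaining_due, gross_self_fin):
    # Years needed to repay all remaining capital if no new loan is
    # contracted and the Gross Self-Financing Capacity stays constant.
    return capital_remaining_due / gross_self_fin

# Hypothetical figures (in kEUR) for a small community:
gsf = gross_self_financing(operating_receipts=12_000,
                           operating_expenditures=9_500,
                           debt_interests=500)
capacity = capacity_to_be_free_of_debt(capital_remaining_due=18_000,
                                       gross_self_fin=gsf)
```

With these figures the community keeps 2{,}000 kEUR of Gross Self-Financing Capacity and would need 9 years to clear its debt, below the generally-accepted 15-year ceiling.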
This diagram illustrates that Current Debt, Taxes and Investment Level are seen as acting on the Yearly Budget System. Since Current Debt cannot be influenced directly, only Taxes and Investment Level are considered as active Inputs of the system. This makes Taxes and Investment Level the variables on which the Yearly Budget depends, and makes the Capacity to Be Free of Debt a result of the Yearly Budget or, in other words, an Output of the system. In the figure, only three years (\#1, \#2 and \#3) are represented; "\dots" symbolizes that further years follow. \begin{figure} \begin{center} \includegraphics[width=0.95\textwidth]{BudProsMultAnDirect.pdf}\vspace{-3mm} \caption{Multi-Year Prospective Budget System working.} \label{BudProsMultAnDirect} \end{center}\vspace{-5mm} \end{figure} \subsection{Multi-Year Prospective Budget Systems} \label{201310080908} From the Yearly Budget System, a multi-year budget may be built. Since those kinds of multi-year budgets are intended to explore possible futures under several assumptions, we call them Multi-Year Prospective Budgets. The functioning of such a Multi-Year Prospective Budget is depicted in Figure \ref{BudProsMultAnDirect}. On the left-hand side of this picture is drawn the Budget System of the first year. This diagram is the synthetic one, with arrows coming from the Current Debt box, the Tax box and the Investment Level box and an arrow going to the Capacity to Be Free of Debt box. Loans that are contracted during this first year affect the debt of the following years. This is symbolized by the arrow going from the Budget System of the first year to the Current Debt box of the second year. Going on, following the arrows, it is possible to create a Multi-Year Prospective Budget System for an arbitrary number of years. To formalize a bit, the investment leads to the realization of a set of projects, which belong to the list of all projects that the community wishes to carry out. 
Then, the Investment Level may be described using a list of numbers, whose length is the number of projects. Each number indicates whether its associated project will be realized or not and, if so, how close to or far from its targeted date it is carried out.\\ Thus, in the case where the Prospective Budget is considered over five years and five projects are considered, every possible Multi-Year Prospective Budget depends on ten values $(I_1, I_2, I_3, I_4,I_5,T_1,T_2,T_3,T_4,T_5)=((I_i)_{i=1,...,5},(T_i)_{i=1,...,5})$; five numbers $(I_i)_{i=1,...,5}$ giving information on project realization and thus indicating the Investment Level, and five Tax Levels: $(T_i)_{i=1,...,5}$, one for each year. Of course, there are structural constraints on the variables: the $T_i$ cannot be negative. The Prospective Budget corresponding to a given value $(\bar I_1,\bar I_2,\bar I_3,\bar I_4,\bar I_5, \bar T_1,\bar T_2,\bar T_3,\bar T_4,\bar T_5)= ((\bar I_i)_{i=1,...,5},(\bar T_i)_{i=1,...,5})$ of the Investment Level and Tax Levels is seen as a solution $S(\bar I_1,\bar I_2,\bar I_3,\bar I_4,\bar I_5, \bar T_1,\bar T_2,\bar T_3,\bar T_4,\bar T_5)=S(((\bar I_i)_{i=1,...,5},(\bar T_i)_{i=1,...,5}))$. The five Capacities to Be Free of Debt are seen as the image of a Prospective Budget under a mapping. They read: $(C_k(S(\bar I_1,\bar I_2,\bar I_3,\bar I_4,\bar I_5, \bar T_1,\bar T_2,\bar T_3,\bar T_4,\bar T_5)))_{k=1,...,5} = (C_k(S((\bar I_i)_{i=1,...,5},(\bar T_i)_{i=1,...,5})))_{k=1,...,5}$. \subsection{Alternative Prospective Budget seeking problem} Having this formalism on hand, we can insert in it the political goals of Decision-Makers, which are generally unreachable. At first, we present how part of the political goals may be incorporated to create two Multi-Year Prospective Budgets which are only partially satisfactory. Then we explain how, starting from those two Budgets, Alternative Prospective Budgets may be sought. 
Of course, those Budgets are also only partially satisfactory, but among the generated collection of Prospective Budgets, one may be preferred to the others and thus chosen. \\ We study the example of five-year Prospective Budgets involving five projects, but what is explained in the following is of course true for any number of years and projects. In an effort to remain schematic, we consider here that a Political Goal is a given collection $(\widetilde I_1,\widetilde I_2,\widetilde I_3,\widetilde I_4,\widetilde I_5, \widetilde C_1, \widetilde C_2, \widetilde C_3, \widetilde C_4, \widetilde C_5)= ((\widetilde I_i)_{i=1,...,5},(\widetilde C_i)_{i=1,...,5})$ of targeted Investment Levels and targeted Capacities to Be Free of Debt, translating into financial terms the projects the Decision-Makers plan to get realized and the level of financial sanity they want to reach. In addition, the Political Goal is complemented by a Tax Evolution Pattern, which expresses when Decision-Makers accept tax increases and when they prefer stable tax levels. With the help of a properly programmed software environment, it is possible to compute Prospective Budget $S((\widetilde I_i)_{i=1,...,5},(\widetilde T_i)_{i=1,...,5})$, whose every Yearly Budget is balanced and which satisfies the requested Political Goal, i.e. such that $C_k(S((\widetilde I_i)_{i=1,...,5}, $ $ (\widetilde T_i)_{i=1,...,5})) = \widetilde C_k$ for $k=1,...,5$. However, this view is much too naive, since Political Goal $((\widetilde I_i)_{i=1,...,5},(\widetilde C_i)_{i=1,...,5})$ generally requires Tax Levels that are incompatible with regulations, or simply not acceptable to Decision-Makers. \\ The work of Decision-Makers, assisted by public-finance experts, consists in relaxing Political Goal $((\widetilde I_i)_{i=1,...,5},(\widetilde C_i)_{i=1,...,5})$ so that it can be reached with acceptable Tax Levels. This is done with the help of a software environment. 
For instance, the SOFI software, published by MGDIS\footnote{\href{http://www.mgdis.fr/}{http://www.mgdis.fr/}}, provides solutions to this question by considering two problems. Those problems consist, in some sense, in inverting the routine presented in Subsection \ref{201310080908} and Figure \ref{BudProsMultAnDirect}, which described how Capacities to Be Free of Debt were obtained from chosen Tax and Investment Levels. \\ The first problem, depicted in Figure \ref{BudProsMultAnInpInv}, consists in considering as input the targeted Investment Level $(\widetilde I_i)_{i=1,...,5}$ and, having \begin{figure}[ht] \begin{center} \includegraphics[width=0.99\textwidth]{BudProsMultAnInpInv.pdf}\vspace{-3mm} \caption{Multi-Year Prospective Budget computed considering Investment Level as input and Tax Levels and Capacities to Be Free of Debt as outputs.} \label{BudProsMultAnInpInv} \end{center} \vspace{-5mm} \end{figure} set constraints on Tax Levels, in computing a Multi-Year Prospective Budget $S((\widetilde I_i)_{i=1,...,5},(T_i^c)_{i=1,...,5})$, whose every Yearly Budget is balanced, such that, for ${k=1,...,5}$, the Capacities to Be Free of Debt $C_k(S((\widetilde I_i)_{i=1,...,5},(T_i^c)_{i=1,...,5}))$ are as close as possible (in a given sense) to the Goals $\widetilde C_k$, and with Tax Levels $(T^c_i)_{i=1,...,5}$ that satisfy the constraints. \\ In Figure \ref{BudProsMultAnInpInv}, the diagram of Figure \ref{BudProsMultAnDirect} is used again. Arrows going from the Investment Level boxes (the investment levels are indicated as inputs) to the Tax and Capacity to Be Free of Debt boxes illustrate that the Taxes and Capacities to Be Free of Debt are computed from the chosen Investment Levels. (For readability, some arrows going from Investment Level boxes to Capacity to Be Free of Debt boxes are not drawn.) 
\\ \begin{figure}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{BudProsMultAnInpCap.pdf} \vspace{-3mm} \caption{Multi-Year Prospective Budget computed with Capacities to Be Free of Debt as inputs and Tax Levels and Investment Levels as outputs.} \label{BudProsMultAnInpCap} \end{center}\vspace{-5mm} \end{figure} The second problem we consider, represented in Figure \ref{BudProsMultAnInpCap}, takes the targeted Capacities to Be Free of Debt $(\widetilde C_k)_{k=1,...,5}$ as inputs and computes a Multi-Year Prospective Budget $S((I_i^c)_{i=1,...,5},(\widetilde T_i)_{i=1,...,5})$ such that $(\widetilde T_i)_{i=1,...,5}$ satisfies the constraints on Tax Levels, $C_k(S((I_i^c)_{i=1,...,5},(\widetilde T_i)_{i=1,...,5})) = \widetilde C_k$ for $k=1,...,5$, and the Investment Levels $(I_i^c)_{i=1,...,5}$ are as close as possible to the Goal $(\widetilde I_i)_{i=1,...,5}$. \\ In Figure \ref{BudProsMultAnInpCap}, the content of Figure \ref{BudProsMultAnDirect} is used again. However, in this case, the Capacities to Be Free of Debt are indicated as inputs. Arrows going from them to Investment Levels (not all drawn, for readability) and to Taxes symbolize that Investments and Taxes are consequences of the chosen Capacity to Be Free of Debt Levels. \\ Once those two solutions are set out, they may be evaluated by Decision-Makers, with regard, among other things, to the Tax Evolution Pattern. Other Multi-Year Prospective Budgets may then be built by modifying values within the Political Goal $((\widetilde I_i)_{i=1,...,5},(\widetilde C_i)_{i=1,...,5})$, after financial and political discussions between Decision-Makers and experts. This generation of Alternative Financial Solutions or Alternative Multi-Year Prospective Budgets can be tedious and long. Thus, only a small number of them can be generated. 
~ The purpose of the method and tool we propose here is to automate this generation of Alternative Financial Solutions or Alternative Multi-Year Prospective Budgets, and to set out a relatively large number of them, provided with indicators of their quality, so that Decision-Makers are able to make a choice between them. Roughly speaking, we can consider that Prospective Budgets $S((\widetilde I_i)_{i=1,...,5},(T_i^c)_{i=1,...,5})$ and $S((I_i^c)_{i=1,...,5},(\widetilde T_i)_{i=1,...,5})$ are associated with two points in a 10-dimensional space, and that the possible Alternative Financial Solutions or Alternative Multi-Year Prospective Budgets are gathered around a geometrical object joining those two points, which has to be sought and identified. \section{Genetic Like Algorithm} \label{GLA} On the basis of the model set out in the previous section, we can implement a Genetic Like Algorithm. Although using Genetic Algorithms in the context of optimization is not new, the algorithm proposed here has innovative aspects, as explained in the Introduction (see page \pageref{201310150919}). Among them is the fact that we look for an optimal object in a bounded box. The other innovative aspect is that we look for the optimum not as a single point but as a shape in a relatively high-dimensional space. For this, we strongly use the fact that the result of a Genetic Algorithm is a set of solutions located on the sought shape. \\ We continue with the case where the number of years and the number of projects are both five. \subsection{Dimensionless problem setting} In order to manage variables and results that are dimensionless and of order one, we first rescale the problem. For this purpose, we introduce a characteristic Investment Level value $\textsc{i}$, a characteristic Capacity to Be Free of Debt $\textsc{c}$ and a characteristic Tax Level $\textsc{t}$. 
For instance we can choose \begin{gather} \label{111} \begin{aligned} \textsc{i} = &\frac{1}{10} \big(\widetilde I_1 + \widetilde I_2 + \widetilde I_3 + \widetilde I_4 + \widetilde I_5 + I_1^c + I_2^c + I_3^c + I_4^c + I_5^c \big) =\frac{1}{10} \sum_{i=1}^{5} \big(\widetilde I_i + I_i^c\big), \\ \textsc{t} = &\frac{1}{10} \big(\widetilde T_1 + \widetilde T_2 + \widetilde T_3 + \widetilde T_4 + \widetilde T_5 + T_1^c + T_2^c + T_3^c + T_4^c + T_5^c \big) =\frac{1}{10} \sum_{i=1}^{5} \big(\widetilde T_i + T_i^c\big), \\ \textsc{c} = &\frac{1}{10} \sum_{k=1}^{5}\Big(C_k(S((\widetilde I_i)_{i=1,...,5},(T_i^c)_{i=1,...,5})) + C_k(S((I_i^c)_{i=1,...,5},(\widetilde T_i)_{i=1,...,5}))\Big), \end{aligned} \end{gather} which are the means of the values reached by the two Prospective Budgets we have on hand. Then we define the dimensionless variables and results \begin{gather} \label{222} {\cal I}_i = \frac{I_i }{\textsc{i}}, ~~~ {\cal T}_i = \frac{T_i}{\textsc{t}} \text{ ~ and ~ } \\ \label{223} {\cal C}_k({\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})) = \frac{C_k({S}((\textsc{i}\,{\cal I}_i)_{i=1,...,5},(\textsc{t}\,{\cal T}_i)_{i=1,...,5}))}{\textsc{c}}. \end{gather} These variables are subject to organic constraints: \begin{gather} \label{331} {\cal C}_k ({\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})) \geq 0, ~~~ \text{ ~ for ~ } k=1,...,5, \end{gather} which translate the fact that the Capacity to Be Free of Debt is a duration. There are also constraints linked with legal rules, various regulations and what is politically admissible. These read: \begin{gather} \label{330} {\cal T}_i \leq {\cal T}_i^\textrm{max}({\cal T}_1,\dots, {\cal T}_{i-1}), \text{ ~ for ~ } i=1,...,5, \\ \label{332} {\cal C}_k({\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})) \leq {\cal C}^\textrm{max}, ~~~ \text{ ~ for ~ } k=1,...,5. 
\end{gather} Those constraints, when expressed in dimensionless variables, involve maximum values ${\cal C}^\textrm{max}$ and $({\cal T}^\textrm{max}_i)_{i=1,...,5}$ which essentially do not depend on the size of the concerned local community. Inequalities (\ref{332}) express the fact that, in each year, the Capacity to Be Free of Debt is limited by common rules. The ${\cal T}^\textrm{max}_i$ in (\ref{330}) depend on the Tax Levels of the previous years and are both imposed by law, which restricts tax evolution, and prescribed by what community Decision-Makers exclude. ~ \noindent{\bf Remark - }One may ask whether the fact that every Yearly Budget of every Multi-Year Prospective Budget needs to be balanced has to enter the constraint collection. The answer is that we work under the assumption that any Multi-Year Prospective Budget, for instance computed using SOFI, from any variable collection $(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})$ generates loans, and consequently a set of Capacities to Be Free of Debt ${\cal C}_k({\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5}))$, ensuring that all its Yearly Budgets are balanced. ~ In dimensionless variables, the dimensionless Political Goals and other quantities are expressed as \begin{gather} \label{333} \widetilde{\cal I}_i = \frac{\widetilde I_i}{\textsc{i}}, ~~~ \widetilde{\cal C}_i = \frac{\widetilde C_i}{\textsc{c}}, ~~~ \widetilde{\cal T}_i = \frac{\widetilde T_i}{\textsc{t}}, ~~~ {\cal I}_i^c = \frac{I_i^c}{\textsc{i}}, ~~~ {\cal T}_i^c = \frac{T_i^c}{\textsc{t}} \text{ ~ and ~ } {\cal C}_i^c = \frac{C_i^c}{\textsc{c}}, \end{gather} and we have on hand two dimensionless Prospective Budgets ${\cal S}((\widetilde {\cal I}_i)_{i=1,...,5},({\cal T}_i^c)_{i=1,...,5})$ and ${\cal S}(({\cal I}^c_i)_{i=1,...,5},(\widetilde {\cal T}_i)_{i=1,...,5})$, which are associated with two points in a 10-dimensional space, located not far from the origin. 
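A minimal sketch of this rescaling, with hypothetical figures standing in for the two Prospective Budgets on hand:

```python
# Hypothetical values (five years each) from budget S^1 (targeted
# quantities, tilde) and budget S^2 (computed quantities, superscript c):
I_tilde = [100.0, 120.0, 80.0, 90.0, 110.0]   # targeted Investment Levels
I_c     = [ 90.0, 100.0, 70.0, 95.0, 105.0]   # computed Investment Levels
T_tilde = [ 10.0,  10.5,  11.0, 11.0, 11.5]   # targeted Tax Levels
T_c     = [ 10.2,  10.8,  11.2, 11.4, 11.6]   # computed Tax Levels

# Characteristic values: means over the two budgets on hand.
i_char = sum(I_tilde + I_c) / 10.0
t_char = sum(T_tilde + T_c) / 10.0

# Dimensionless variables, of order one by construction.
I_dimless = [I / i_char for I in I_tilde]
T_dimless = [T / t_char for T in T_tilde]
```

The same division by a characteristic value would be applied to the Capacities to Be Free of Debt once they are computed from a budget.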
\subsection{Fitness choice} Among the criteria that may enter the fitness definition are the targeted values $\widetilde{\cal I}_i$ and $\widetilde{\cal C}_i$ and the Tax Evolution Pattern. We begin by explaining how a Tax Evolution Pattern may be modeled in dimensionless variables. In the case where the number of years is 5, it is a collection of 5 non-negative values $({\cal A}_k)_{k=1,...,5}$ such that \begin{gather} \label{444} \sum_{k=1}^{5}{\cal A}_k =1, \end{gather} with the property that ${\cal A}_k = {\cal A}_{k+1}$ in the case of a desired stability of the Tax Level between years number $k$ and number $(k+1)$, and ${\cal A}_k < {\cal A}_{k+1}$ in the case of a planned increase. Then, measuring how far a given Multi-Year Prospective Budget ${\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})$ is from the Tax Evolution Pattern reduces to computing \begin{gather} \label{555} F_\textrm{\tiny T} (({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})= \phi_\textrm{\tiny T}\Bigg( \sum_{k=1}^{5}\left| \frac{{\cal T}_k}{\sum_{i=1}^5 {\cal T}_i} -{\cal A}_k\right| \Bigg), \end{gather} where $\phi_\textrm{\tiny T}$ is a non-increasing function such that $\phi_\textrm{\tiny T}(0)=1$ and $\lim_{x\to+\infty}\phi_\textrm{\tiny T}(x)=0$. In this definition, the division by ${\sum_{i=1}^5 {\cal T}_i}$ ensures that the values which are compared with the ${\cal A}_k$ range between 0 and 1. 
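A sketch of $F_\textrm{\tiny T}$, taking $\phi_\textrm{\tiny T}(x)=e^{-x}$ as one admissible choice (an assumption made for the sketch; any non-increasing function with the stated properties would do):

```python
import math

def phi(x):
    # A simple admissible choice (an assumption, not imposed by the
    # text): non-increasing, phi(0) = 1, vanishing at infinity.
    return math.exp(-x)

def fitness_tax(T, A):
    # F_T: compare the normalized Tax Levels with the Tax Evolution
    # Pattern A (non-negative values summing to 1).
    total = sum(T)
    return phi(sum(abs(t / total - a) for t, a in zip(T, A)))

# A pattern asking for stable taxes in years 1-3, then an increase:
A = [0.18, 0.18, 0.18, 0.23, 0.23]

perfect = [18.0, 18.0, 18.0, 23.0, 23.0]   # exactly proportional to A
off     = [23.0, 23.0, 18.0, 18.0, 18.0]   # increase at the wrong time
```

A tax trajectory exactly proportional to the pattern reaches the maximal value 1; a trajectory increasing at the wrong time scores strictly less.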
Ways to measure how far a Multi-Year Prospective Budget is from the Political Goals are the computations of \begin{multline} \label{666} F_\textrm{\tiny I} (({\cal I}_i)_{i=1,...,5})= \phi_\textrm{\tiny I}\Bigg(\sum_{k=1}^{5}\left| {\cal I}_k-\widetilde{\cal I}_k \right| \Bigg) \text{ and } \\ F_\textrm{\tiny C} (({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})= \phi_\textrm{\tiny C}\Bigg(\sum_{k=1}^{5}\left| {\cal C}_k({\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})) -\widetilde{\cal C}_k \right|\Bigg), \end{multline} where $\phi_\textrm{\tiny I}$ and $\phi_\textrm{\tiny C}$ have definitions similar to that of $\phi_\textrm{\tiny T}$. ~ With these three functions $F_\textrm{\tiny T}$, $F_\textrm{\tiny I}$ and $F_\textrm{\tiny C}$, and defining three non-negative constants $\gamma_\textrm{\tiny T}$, $\gamma_\textrm{\tiny I}$ and $\gamma_\textrm{\tiny C}$ whose sum is not too far from 1, we choose the following Fitness Function \begin{multline} \label{777} F(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5}) = \\ \gamma_\textrm{\tiny T} F_\textrm{\tiny T} (({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5}) + \gamma_\textrm{\tiny I} F_\textrm{\tiny I} (({\cal I}_i)_{i=1,...,5}) + \gamma_\textrm{\tiny C} F_\textrm{\tiny C} (({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5}), \end{multline} and the larger $F(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})$, the better the solution ${\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})$. ~ With the material we have built, we can reformulate the question of seeking Alternative Financial Solutions as follows: we want to exhibit a collection of $N$ points $(({{\cal I}^{l}_i}^{\&})_{i=1,...,5},({{\cal T}^{l}_i}^{\&})_{i=1,...,5})$ such that ${\cal S}(({{\cal I}^{l}_i}^{\&})_{i=1,...,5},({{\cal T}^{l}_i}^{\&})_{i=1,...,5})$ satisfies constraints (\ref{330}), (\ref{331}) and (\ref{332}), and with Fitness value $F(({{\cal I}^{l}_i}^{\&})_{i=1,...,5},({{\cal T}^{l}_i}^{\&})_{i=1,...,5})$ as large as possible. 
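The combined Fitness Function can be sketched in the same spirit. The weights and target values below are hypothetical, the three $\phi$ functions are all taken as $e^{-x}$ (an assumed choice), and the Capacities are passed directly instead of being computed from a budget:

```python
import math

def phi(x):
    # Assumed common shape for phi_T, phi_I, phi_C: non-increasing,
    # phi(0) = 1, vanishing at infinity.
    return math.exp(-x)

def fitness_total(I, T, C, I_target, C_target, A,
                  g_T=0.3, g_I=0.35, g_C=0.35):
    # Weighted combination of the three criteria; the weights are a
    # hypothetical choice summing to 1.
    F_T = phi(sum(abs(t / sum(T) - a) for t, a in zip(T, A)))
    F_I = phi(sum(abs(i - i_t) for i, i_t in zip(I, I_target)))
    F_C = phi(sum(abs(c - c_t) for c, c_t in zip(C, C_target)))
    return g_T * F_T + g_I * F_I + g_C * F_C

A = [0.2] * 5                        # stable taxes every year
I_star = [1.0, 1.1, 0.9, 1.0, 1.0]   # dimensionless targets (hypothetical)
C_star = [1.2, 1.1, 1.0, 0.9, 0.8]

exact = fitness_total(I_star, [1.0] * 5, C_star, I_star, C_star, A)
late_rise = fitness_total(I_star, [1.0, 1.0, 1.0, 1.0, 2.0],
                          C_star, I_star, C_star, A)
```

A budget meeting all three criteria exactly scores 1; deviating from the Tax Evolution Pattern lowers the score.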
\subsection{Frame building by Gram-Schmidt routine} In the 10-dimensional vector space, for two vectors ${\cal W}=(({\cal J}_i)_{i=1,...,5},({\cal D}_i)_{i=1,...,5})$ and ${\cal W'}=(({\cal J}'_i)_{i=1,...,5},({\cal D}'_i)_{i=1,...,5})$, the following inner product and norm naturally exist: \begin{gather} \label{888} \langle{\cal W},{\cal W}' \rangle = \sum_{i=1}^{5}\big({\cal J}_i {\cal J}'_i + {\cal D}_i {\cal D}'_i\big) \text{ ~ and ~ } \|{\cal W} \| = \sqrt{\langle{\cal W},{\cal W} \rangle}. \end{gather} Moreover, the space is provided with its canonical basis: \begin{gather} \label{991} \begin{aligned} &{\mathbf e}_1 =((1,0,0,0,0), (0,0,0,0,0)), & &{\mathbf e}_2 =((0,1,0,0,0),(0,0,0,0,0)), \\ &{\mathbf e}_3 =((0,0,1,0,0),(0,0,0,0,0)), ~ \dots, & &{\mathbf e}_{10} =((0,0,0,0,0),(0,0,0,0,1)). \end{aligned} \end{gather} On the one hand, from the points $((\widetilde {\cal I}_i)_{i=1,...,5}, ({\cal T}_i^c)_{i=1,...,5})$ and $(({\cal I}_i^c)_{i=1,...,5},(\widetilde {\cal T}_i)_{i=1,...,5})$, we can build the first vector of the frame by normalizing the vector linking those two points. This vector is: \begin{gather} \label{999} {\mathbf g}_1 = \frac{\breve {\mathbf g}_1}{\| \breve {\mathbf g}_1\|}, \text{ ~ where ~ } \breve {\mathbf g}_1= ((\widetilde {\cal I}_i- {\cal I}_i^c )_{i=1,...,5},({\cal T}_i^c -\widetilde {\cal T}_i )_{i=1,...,5}). \end{gather} On the other hand, we look for the index ${i_\textrm{b}}$ such that the absolute value of the inner product of ${\mathbf g}_1$ with ${\mathbf e}_ {i_\textrm{b}}$ is as large as possible, i.e. such that \begin{gather} \label{AA1} \big|\langle {\mathbf g}_1,{\mathbf e}_ {i_\textrm{b}}\rangle\big| = \max_{i=1,...,10} \big|\langle {\mathbf g}_1,{\mathbf e}_ {i}\rangle\big|. 
\end{gather} The basis is then built by induction: once $j$ orthonormal vectors are obtained, the $(j+1)^\text{th}$ is obtained by removing from ${\mathbf e}_{\eta(i_\textrm{b}+j)}$ its projections onto the vectors of the new basis already computed and by renormalizing, or in other words, by computing \begin{gather} \label{AAA} {\mathbf g}_{j+1} = \frac{\breve {\mathbf g}_{j+1}}{\| \breve {\mathbf g}_{j+1}\|}, \text{ ~ where ~ } \breve {\mathbf g}_{j+1} = {\mathbf e}_ {\eta(i_\textrm{b}+j)} - \sum_{p=1}^{j} \langle {\mathbf e}_ {\eta(i_\textrm{b}+j)}, {\mathbf g}_{p}\rangle {\mathbf g}_{p}, \end{gather} where $\eta(i) = i$ if $1\leq i \leq 10$ and $\eta(i) = i-10$ if $11\leq i \leq 20$. Once all the $({\mathbf g}_{j})_{j=1,...,10}$ are obtained, they form an orthonormal basis of the vector space whose first vector lies along the straight line linking the two points associated with the two dimensionless Prospective Budgets we have on hand. With the help of this basis, we will build the box in which we will seek the targeted geometrical object and the coding of Prospective Budgets. ~ Let ${\mathbf B} $ be the $10\times 10$ matrix such that, if ${\cal W}=(({\cal J}_i)_{i=1,...,5},({\cal D}_i)_{i=1,...,5})$ is a vector expressed in the canonical frame, \begin{gather} \label{BBB} {\cal U} = ({\cal U}_1, \dots, {\cal U}_{10})= {\mathbf B}{\cal W}, \end{gather} gives its coordinates within frame $({\mathbf g}_{j})_{j=1,...,10}$. The $i^\text{th}$ column of ${\mathbf B}$ is made of the coordinates of ${\mathbf e}_{i}$ within the new frame and the $i^\text{th}$ column of ${\mathbf B}^{-1}={\mathbf B}^T$ is made of the coordinates of ${\mathbf g}_{i}$ within the canonical frame. In the following we will consider that ${\mathbf B}$ and its inverse matrix ${\mathbf B} ^{-1}$ are known. 
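The frame-building routine just described can be sketched as follows, in a dimension-agnostic way (the example points are hypothetical; indices are 0-based, so the cyclic shift plays the role of $\eta$):

```python
import math

def gram_schmidt_frame(p1, p2):
    # Orthonormal frame whose first vector is the normalization of the
    # vector joining p1 to p2; the remaining vectors are built from the
    # canonical basis, starting after the pivot index i_b (the canonical
    # direction most aligned with g_1) and cycling, as in the text.
    n = len(p1)
    g1 = [b - a for a, b in zip(p1, p2)]
    norm = math.sqrt(sum(x * x for x in g1))
    frame = [[x / norm for x in g1]]
    i_b = max(range(n), key=lambda i: abs(frame[0][i]))
    for j in range(1, n):
        e = [0.0] * n
        e[(i_b + j) % n] = 1.0            # e_{eta(i_b + j)}, 0-based
        for g in frame:                   # remove projections
            dot = sum(x * y for x, y in zip(e, g))
            e = [x - dot * y for x, y in zip(e, g)]
        norm = math.sqrt(sum(x * x for x in e))
        frame.append([x / norm for x in e])
    return frame

# A small 4-dimensional example (hypothetical points):
p1 = [1.0, 0.0, 2.0, 0.5]
p2 = [0.0, 1.0, 2.5, 1.5]
frame = gram_schmidt_frame(p1, p2)
```

Skipping ${\mathbf e}_{i_\textrm{b}}$ itself is what keeps the routine well-defined: that canonical direction is the one nearly absorbed by ${\mathbf g}_1$, so removing projections from it could leave a near-zero vector.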
\subsection{Box building and coding} The geometrical object will be sought within a box containing the points $((\widetilde {\cal I}_i)_{i=1,...,5},$ $({\cal T}_i^c)_{i=1,...,5})$ and $(({\cal I}^c_i)_{i=1,...,5},(\widetilde {\cal T}_i)_{i=1,...,5})$ associated with the two dimensionless Prospective Budgets we have on hand. The box we chose is the cube centered at the middle point ${\cal M}$ of $[((\widetilde{\cal I}_i)_{i=1,...,5},({\cal T}_i^c)_{i=1,...,5}),$ $(({\cal I}^c_i)_{i=1,...,5},(\widetilde {\cal T}_i)_{i=1,...,5})]$, whose coordinates are \begin{gather} \label{CCC} {\cal M} = \Bigg(\bigg(\frac{{\cal I}^c_i+\widetilde {\cal I}_i}{2} \bigg)_{i=1,...,5},\bigg(\frac{{\cal T}_i^c+\widetilde {\cal T}_i}{2} \bigg)_{i=1,...,5} \Bigg), \end{gather} with edges being $\{2\|\breve{\mathbf g}_1\| {\mathbf g}_1,(\|\breve{\mathbf g}_1\| {\mathbf g}_i)_{i=2,...,10}\}$, where $\breve{\mathbf g}_1$ is defined by formula (\ref{999}).\\ Two opposite faces of this hypercubic box are orthogonal to the straight line linking the points $((\widetilde {\cal I}_i)_{i=1,...,5},$ $({\cal T}_i^c)_{i=1,...,5})$ and $(({\cal I}^c_i)_{i=1,...,5},(\widetilde {\cal T}_i)_{i=1,...,5})$. Another (and more usable) way to characterize the box is to say that it is the image of $[-1,1]\times[-\frac12,\frac12]^{9}$ under the mapping \begin{gather} \label{DDD} {\cal P} \mapsto {\cal M} +\|\breve{\mathbf g}_1\| \, {\mathbf B} ^{-1} {\cal P}, \end{gather} whose inverse is \begin{gather} \label{EEE} {\cal R} \mapsto \frac{1}{\|\breve{\mathbf g}_1\|}{\mathbf B}( {\cal R} - {\cal M}), \end{gather} where ${\cal M}$ is given by (\ref{CCC}), $\breve{\mathbf g}_1$ by (\ref{999}) and matrix ${\mathbf B}$ by (\ref{BBB}). ~ Moreover, this transformation gives a coding of any solution ${\cal S}({\cal R})$ by a point in $[-1,1]\times[-\frac12,\frac12]^{9}$. 
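As an illustration, mappings (\ref{DDD}) and (\ref{EEE}) translate into the following pair of Python functions. This is a sketch: the center `M`, the orthonormal matrix `B` and the scale factor $\|\breve{\mathbf g}_1\|$ are assumed to be available.

```python
import numpy as np

def to_budget_space(P, M, B, scale):
    """Mapping (DDD): sends P in [-1,1] x [-1/2,1/2]^9 to a point of the box;
    B is orthonormal, so B^{-1} = B^T."""
    return M + scale * (B.T @ P)

def to_box_coordinates(R, M, B, scale):
    """Inverse mapping (EEE): codes a point R of the box by box coordinates."""
    return (B @ (R - M)) / scale
```

Because ${\mathbf B}$ is orthogonal, applying one mapping after the other returns the starting point exactly (up to floating-point rounding).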
Hence, without any supplementary effort, we have two codings at our disposal: a solution ${\cal S}({\cal R})$ may be coded by its directly interpretable values $({\cal R}_{1},\dots,{\cal R}_{10})=$ $(({\cal I}_i)_{i=1,...,5},$ $({\cal T}_i)_{i=1,...,5})$ or by the collection of values $({\cal P}_{1},\dots,{\cal P}_{10})$ that are the coordinates of point ${\cal P}= ({1}/{\|\breve{\mathbf g}_1\|})$ ${\mathbf B}( {\cal R} - {\cal M}) \in [-1,1]\times[-\frac12,\frac12]^{9}$. Generically, in the following, we will denote the coding by ${\cal Q} = ({\cal Q}_{1},\dots,{\cal Q}_{10})$. It will designate the coding by ${\cal P}$ or ${\cal R}$ or any other. \subsection{Initial Prospective Budget collection} Fixing the number of members of the collection, and denoting this number by $N$, a collection of $N$ points ${{\cal P}^{l}}^{0}=$ $({{\cal P}_{1}^{l}}^{0},\dots,{{\cal P}_{\hspace{-1pt}10}^ {l}}^{\hspace{-3pt}0})$, for ${l=1,...,N}$, of $[-1,1]\times[-\frac12,\frac12]^{9}$ is generated randomly. The initial collection of solutions is then ${{\cal R}^{l}}^{0}=$ $({{\cal R}_{1}^{l}}^{0},\dots,{{\cal R}_{\hspace{-1pt}10}^ {l}}^{\hspace{-4pt}0})$, for ${l=1,...,N}$, where ${{\cal R}^{l}}^{0}= {\cal M} +\|\breve{\mathbf g}_1\| \, {\mathbf B}^{-1} {{\cal P}^{l}}^{0}$. We require that every dimensionless Prospective Budget ${{\cal R}^{l}}^{0}$ satisfies all the constraints (\ref{331}), (\ref{330}) and (\ref{332}). This means that the random generation has to run until $N$ dimensionless Prospective Budgets satisfying the constraints are obtained. Generically, the coding of these initial Prospective Budgets will be denoted by ${{\cal Q}^{l}}^{0} = ({{\cal Q}_{1}^{l}}^{0},\dots,{{\cal Q}_{10}^{l}}^{\hspace{-4pt}0})$. \subsection{Constraint management} For a collection made of $2N$ individuals, we will manage constraints by integrating them into the Fitness Function. 
This consists in adding to the Fitness Function defined by (\ref{777}) the following quantity, or a quantity of this kind, \begin{multline} \label{FFF} -\phi\Bigg( - \sum_{k=1}^5 \min( {\cal C}_k({\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})), 0)\\ +\sum_{k=1}^{5} \max ({\cal C}_k({\cal S}(({\cal I}_i)_{i=1,...,5},({\cal T}_i)_{i=1,...,5})) - {\cal C}^\textrm{max},0 ) +\sum_{k=1}^{5} \max( {\cal T}_k -{\cal T}^\textrm{max}_k,0) \Bigg), \end{multline} for a non-decreasing function $\phi$ such that $\phi(0)=0$ and $\lim_{x\to+\infty}\phi(x) =1$, multiplied by a factor $\gamma$ relatively large compared to 1. This penalization decreases the fitness value of the Prospective Budgets that do not respect the constraints and thus diminishes their chance of passing the selection to come. \subsection{Algorithm to produce next generation} Having on hand the $m$-th collection of Prospective Budgets ${\cal S}({{\cal R}^{l}}^{m})$, for ${l=1,...,N}$, and their codings ${{\cal Q}^{l}}^{m}$, a new collection ${\cal S}({{\cal R}^{l}}^{m+1})$, with codings ${{\cal Q}^{l}}^{m+1}$, is generated by a usual Genetic Algorithm Like routine that we now briefly describe.\\ In a first step, couples of codings of the Prospective Budgets are randomly formed. Then for any couple $({{\cal Q}^{l}}^{m},{{\cal Q}^{k}}^{m}) = (({{\cal Q}_{1}^{l}}^{m},\dots,{{\cal Q}_{10}^{l}}^{\hspace{-3pt}m}),({{\cal Q}_{1}^{k}}^{m},\dots,{{\cal Q}_{10}^{k}}^{\hspace{-3pt}m}))$, an integer $i_a$ is randomly chosen in $\{1,2,\dots,10\}$ and the two codings $({{\cal Q}_{1}^{l}}^{m},\dots,{{\cal Q}_{i_a-1}^{l}}^{\hspace{-14pt}m\hspace{7pt}}, {{\cal Q}_{i_a}^{k}}^{\hspace{-2pt}m},\dots,{{\cal Q}_{10}^{k}}^{\hspace{-3pt}m})$ and $({{\cal Q}_{1}^{k}}^{m},\dots,{{\cal Q}_{i_a-1}^{k}}^{\hspace{-12pt}m\hspace{5pt}}, {{\cal Q}_{i_a}^{l}}^{\hspace{-2pt}m},\dots,{{\cal Q}_{10}^{l}}^{\hspace{-5pt}m})$ are generated. 
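The first step (a one-point crossover) may be sketched as follows, with codings represented as plain lists of ten floats. This is an illustrative Python version, not the C\#/AForge code actually used later in the paper.

```python
import random

def crossover(q_l, q_k):
    """Draw i_a in {1,...,10} and exchange the tails of the two codings
    from position i_a on (one-point crossover)."""
    i_a = random.randint(1, 10)
    child_1 = q_l[:i_a - 1] + q_k[i_a - 1:]
    child_2 = q_k[:i_a - 1] + q_l[i_a - 1:]
    return child_1, child_2
```

Whatever the cut point, the two children redistribute the parents' genes position by position, so no gene is lost or duplicated.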
At the end of this first step, we have on hand $2N$ points: the ${{\cal Q}^{l}}^{m}$, for ${l=1,...,N}$, and all the ones generated as described previously, which are denoted ${{\cal Q}^{l}}^{m}$ for ${l=N+1,...,2N}$.\\ The second step consists in making some codings mutate. For this, a small integer $i_b$, ranging, say, between 0 and $N/50$, is randomly generated. Then, $i_b$ codings are randomly chosen among the codings generated in the first step, i.e. among the ${{\cal Q}^{l}}^{m}$ with ${l=N+1,...,2N}$. For each of them, an integer $i_c$ is randomly chosen in $\{1,2,\dots,10\}$, a number $\nu$ ranging between $-1$ and $1$ is also randomly generated, and the $i_c$-th component of the concerned coding is incremented by $\nu$.\\ In the third step, the Fitness Function is evaluated on every coding ${{\cal Q}^{l}}^{m}$, for ${l=1,...,2N}$, resulting from the first two steps. In order to do so, it is necessary to determine the ${{\cal R}^{l}}^{m}$, for ${l=1,...,2N}$, associated with the ${{\cal Q}^{l}}^{m}$, for ${l=1,...,2N}$, the Prospective Budgets ${\cal S}({{\cal R}^{l}}^{m})$, for ${l=1,...,2N}$, and finally, the Fitnesses (penalized by constraints) $F({{\cal R}^{l}}^{m})$ for ${l=1,...,2N}$.\\ The objective of the fourth step is to randomly select $N$ codings among all the codings ${{\cal Q}^{l}}^{m}$, for ${l=1,...,2N}$, that were brought out by the previous steps, with the principle that the higher the fitness of a coding, the more likely it is to be selected. In addition, we can use an elitism routine which consists in deterministically choosing the $N_\textrm{elit}$ codings that provide the best scores with respect to the Fitness Function.\\ In practice, in order to implement the routine just described, we used the library "AForge.Genetics". We conducted a study to measure the impact of the values of the "CrossRate" and "MutationRate" parameters of this library. This study showed that the default values were convenient. Hence we used them. 
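The selection step, together with the elitism routine, can be sketched as below. This is an illustrative Python version; fitness values are assumed non-negative in this sketch, which the actual penalized fitness need not guarantee.

```python
import random

def select(codings, fitnesses, N, n_elit=2):
    """Keep the n_elit best codings deterministically (elitism), then draw
    the remaining N - n_elit by fitness-proportional selection.
    Fitness values are assumed non-negative in this sketch."""
    order = sorted(range(len(codings)), key=lambda l: fitnesses[l], reverse=True)
    kept = [codings[l] for l in order[:n_elit]]
    kept += random.choices(codings, weights=fitnesses, k=N - n_elit)
    return kept
```

A coding with a high fitness may be drawn several times, which is the usual behavior of fitness-proportional selection; the elitist part guarantees that the best codings are never lost.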
\subsection{Resizing of obtained solutions} After several iterations of the algorithm just described, we have on hand a collection of $N$ codings ${{\cal Q}^{l}}^{\&}$, for ${l=1,...,N}$. Then, the ${{\cal R}^{l}}^{\&}=(({{\cal I}^{l}_i}^{\&})_{i=1,...,5},({{\cal T}^{l}_i}^{\&})_{i=1,...,5})$, for ${l=1,...,N}$, are deduced. If the coding is the directly interpretable one, there is nothing to do. If the coding is based on points ${\cal P}\in[-1,1]\times[-\frac12,\frac12]^{9}$, one needs to apply transformation (\ref{DDD}). Then, for ${l=1,...,N}$, the resized values $(({{\cal I}^{l}_i}^{\&})_{i=1,...,5},({{T}^{l}_i}^{\&})_{i=1,...,5})$ are computed by inverting formula (\ref{222}), and the real Prospective Budgets $S(({{\cal I}^{l}_i}^{\&})_{i=1,...,5},({{T}^{l}_i}^{\&})_{i=1,...,5})$ are also computed. \section{Tests} The method is now tested on several examples. First, it is tested on one-dimensional problems in order to set out its capability to exhibit the point where the maximum of the fitness is located and, in the case when the fitness reaches its maximal value on a plateau, to produce a population which is essentially located on the interval whose image is the plateau. Second, it is tested on the example of local community finances that is described in the previous sections. \subsection{Test on a one-dimensional problem with a Fitness Function having one maximum} The Fitness Function considered here is a function with one maximum, which is the sum of two quadratic ones. More precisely, defining \begin{gather} h_1(x) = \frac12 \max\big(1 -30 \,(x-0.45)^2,0\big), ~~ h_2(x) = \frac12 \max\big(1 -30\, (x-0.55)^2,0\big), \end{gather} which are given in Figure \ref{1dWithoutPlateau} at the top and in the middle, the Fitness Function is \begin{gather} F =h_1 +h_2 , \end{gather} defined on $[0,1]$ and drawn at the bottom of Figure \ref{1dWithoutPlateau}. 
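These definitions are straightforward to reproduce. The following short Python check (illustrative, not part of the paper's implementation) confirms that the unique argument of the maximum of $F$ is $x=0.5$:

```python
def h1(x):
    return 0.5 * max(1 - 30 * (x - 0.45) ** 2, 0)

def h2(x):
    return 0.5 * max(1 - 30 * (x - 0.55) ** 2, 0)

def F(x):
    return h1(x) + h2(x)

# Scan a fine grid of [0,1]: the maximum is reached at x = 0.5,
# halfway between the maxima of h1 and h2.
grid = [i / 1000 for i in range(1001)]
x_best = max(grid, key=F)
```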
We have chosen those functions in order to obtain a function $F$ supported in $(0,1)$ and with values ranging in $[0,1]$. \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm,bb = 1cm 9.5cm 20cm 20cm, clip=true ]{1dWithoutPlateau} \caption{Fitness Function (bottom) with one maximum, which is the sum of two functions (top and middle).} \label{1dWithoutPlateau} \end{center} \end{figure} A simplified version of the method described above was implemented on this example, with the maxima of $h_1(x)$ and $h_2(x)$ playing roles analogous to those played by points $((\widetilde {\cal I}_i)_{i=1,...,5}, ({\cal T}_i^c)_{i=1,...,5})$ and $(({\cal I}_i^c)_{i=1,...,5},$ $(\widetilde {\cal T}_i)_{i=1,...,5})$ in the above described method. On this example, the method works and gives, after 500 generations, a collection of points which is very concentrated around $x=0.5$, the argument of the maximum of the fitness function. Nonetheless, its efficiency was compared to that of optimization methods using a similar Genetic Like Algorithm but not involving the two points around which the argument of the maximum is sought. The method built here was not more efficient. This seems to lead to the conclusion that the contribution of this method is not to be sought in this direction. \subsection{Test on a one-dimensional problem with a Fitness Function having a maximum plateau} One of the original capabilities of the method described in this paper is that it can give a good representation of the argument of the maximum of the fitness function when this argument is an interval. \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm,bb = 1cm 11cm 20cm 19cm, clip=true ]{f1f2g1g2} \caption{Functions $f_1$ (top left), $f_2$ (top right), $g_1$ (bottom left) and $g_2$ (bottom right).} \label{f1f2g1g2}\vspace{-5mm} \end{center} \end{figure} To illustrate this capability, a fitness function having an interval as argument of its maximum will be built. 
\\ This fitness function is the sum of two other functions that each have one maximum. The result of this sum is a function which has a plateau whose span is smaller than the interval defined by the maximum localizations of the two functions it is the sum of. \\ In practice, in a first place, functions $f_1$ and $f_2$ defined by \begin{gather} f_1(x) = 1 -10\, |x-0.45| \text{ and } f_2(x) = 1 -10\, |x-0.55|, \end{gather} and drawn at the top of Figure \ref{f1f2g1g2} are considered. Functions \begin{gather} g_1(x) = \min\big(1 - 10\,(x-0.48), 1\big) \text{ and } g_2(x) = \min\big(1 - 10\,(0.52-x), 1\big), \end{gather} are also considered. \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm,bb = 1cm 9.5cm 20cm 20cm, clip=true ]{1dWithSmallPlateau} \caption{The Fitness Function (bottom) with maxima on a plateau which is strictly included in the interval made by the argument-of-the-maximum localizations of the two functions (top and middle) it is the sum of.} \label{1dWithSmallPlateau} \end{center} \end{figure} Finally, the Fitness Function which is considered is \begin{gather} \label{201303292236} F = l_1 + l_2. \end{gather} It is the sum of the two functions: \begin{gather} l_1= 0.7 \max\big(0.5 f_1 + 0.5 g_1, 0\big) \text{ and } l_2 = 0.7 \max\big(0.5 f_2 + 0.5 g_2, 0\big). \end{gather} Functions $l_1$ and $l_2$ are drawn in Figure \ref{1dWithSmallPlateau} at the top and in the middle; each of them has a unique argument of its maximum. Fitness Function $F$ is given at the bottom of this Figure and detailed in Figure \ref{1dWithSmallPlateauDetail}. As announced, its maximum makes up a plateau over an interval ($[0.475, 0.525]$) strictly included in the interval ($[0.45, 0.55]$) made by the maximum localizations of the two functions $l_1$ and $l_2$. 
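The construction is easy to check numerically. The following illustrative Python transcription of the definitions reproduces the plateau value $0.910$ reported in Table \ref{tabPlateau}, on points sampled inside the plateau:

```python
def f1(x): return 1 - 10 * abs(x - 0.45)
def f2(x): return 1 - 10 * abs(x - 0.55)
def g1(x): return min(1 - 10 * (x - 0.48), 1)
def g2(x): return min(1 - 10 * (0.52 - x), 1)
def l1(x): return 0.7 * max(0.5 * f1(x) + 0.5 * g1(x), 0)
def l2(x): return 0.7 * max(0.5 * f2(x) + 0.5 * g2(x), 0)
def F(x): return l1(x) + l2(x)

# F is constant on the sampled interior of the plateau, and strictly
# smaller at x = 0.45 and x = 0.55, the maximum localizations of l1 and l2.
plateau_values = [F(x) for x in (0.48, 0.49, 0.50, 0.51, 0.52)]
```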
\begin{figure}[htbp] \begin{center} \includegraphics[width=10cm,bb = 1cm 10.5cm 20cm 19cm, clip=true ]{1dWithSmallPlateauDetail} \caption{Detail of the fitness with maximum on a plateau which is strictly included in the interval made by the argument-of-the-maximum localizations of the two functions it is the sum of.} \label{1dWithSmallPlateauDetail} \end{center} \end{figure} \\ A simplified version of the method built in section \ref{GLA} was implemented on this example to locate the arguments of the maximum of Fitness Function $F$. In this implemented method, the arguments of the maximum of $l_1(x)$ and $l_2(x)$ play the roles that points $((\widetilde {\cal I}_i)_{i=1,...,5}, ({\cal T}_i^c)_{i=1,...,5})$ and $(({\cal I}_i^c)_{i=1,...,5},$ $(\widetilde {\cal T}_i)_{i=1,...,5})$ play in the more general method. \\ In this simplified version, the collection at each generation is made of 35 points. The resulting collection after 500 generations of the algorithm is given in Table \ref{tabPlateau}. 
\begin{table}[htdp] \caption{The collection of points after 500 generations when the Fitness Function is given by \eqref{201303292236}.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline\hline $x$ & 0.489 & 0.491 & 0.491 & 0.500 & 0.492 & 0.489 & 0.487 \\ \hline $F$ & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 \\ \hline\hline $x$ & 0.506 & 0.492 & 0.493 & 0.490 & 0.491 & 0.497 & 0.497 \\ \hline $F$ & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 \\ \hline\hline $x$ & 0.494 & 0.493 & 0.489 & 0.496 & 0.489 & 0.494 & 1.471 \\ \hline $F$ & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 & 0.000 \\ \hline\hline $x$ & -0.475 & 0.464 & 0.462 & 0.485 & 0.492 & 0.501 & 0.488 \\ \hline $F$ & 0.000 & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 & 0.910 \\ \hline\hline $x$ & 0.489 & 0.492 & 0.496 & 0.492 & 0.435 & 0.534 & 0.484 \\ \hline $F$ & 0.910 & 0.910 & 0.910 & 0.910 & 0.646 & 0.860 & 0.910 \\ \hline\hline \end{tabular} \end{center} \label{tabPlateau} \end{table} The collection is well distributed over the interval ($[0.475, 0.525]$) whose image is the plateau of Fitness Function $F$. In particular, this final collection does not exhibit concentrations that could mistakenly orient interpretation towards the conclusion that the Fitness Function has isolated maxima. \\ This capability of the method is very important for tackling operational problems. This is what is done in the last test. \subsection{Test on an operational problem} \label{SubsecTOP} \begin{figure}[htbp] ~\hspace{-1.3cm} \includegraphics[width=17.5cm, bb = 2.5cm 17.8cm 18.5cm 27.5cm, clip=true]{Fig10D1} \caption{The quite liberal solution. (The software has a French interface; the translations are: Capacit\'e de d\'esendettement = Capacity to Be Free of Debt, Variation des taux = Tax Increase, Produit fiscal direct = Taxes, D\'epenses r\'eelles de fonctionnement = Operating Expenditures, Recettes r\'eelles de fonctionnement = Operating Revenues, Epargne brute = Operating Budget Excess.) 
} \label{LabelFig10D1} \end{figure} \begin{figure}[htbp] ~\hspace{-1.3cm} \includegraphics[width=17.5cm, bb = 2.5cm 17.8cm 18.5cm 27.5cm, clip=true]{Fig10D2} \caption{The careful solution.} \label{LabelFig10D2} \end{figure} The method set out in section \ref{GLA} is now tested in a realistic situation. The experiment uses a software product, called SOFI, dedicated to the optimization of local communities' budgets, with a French interface. In this subsection, we describe the test and in subsection \ref{SubsecPCCT}, we discuss technical aspects concerning the way we tuned the method and the way we prevented the problems we met from appearing. \\ In order to keep the study as realistic as possible, the budget model and numbers that have been chosen are drawn from an actual customer of the company MGDIS. The city council (the name of the city is not cited for confidentiality reasons) used SOFI to simulate the impact of a dozen projects on the budget of the community. Using this application, a financial expert (Thomas Hody) provided us with two Multi-Year Prospective Budgets. They correspond to ideal but not reachable situations. The results of our Genetic Algorithm based optimization process have been validated by the same person as a solution that indeed improves the use of the community's financial resources. The first solution, presented in Figure \ref{LabelFig10D1}, is quite liberal, with all projects being realized and the taxes increased at the maximum rate of 7\% in the first three years (see the second line of the second part of the table in Figure \ref{LabelFig10D1}). 
The Capacity to Be Free of Debt ratio remains in the acceptable range (see the "Capacit\'e de d\'esendettement" line at the bottom of the table in Figure \ref{LabelFig10D1}).\\ The second solution we picked is a much more careful one, with only the top priority projects being done, and a very limited tax increase applied so that the capacity to be free of debt ratio remains below 15 years, which is the prudential limit. Figure \ref{LabelFig10D2} shows the values for this second solution.\\ Figure \ref{LabelFig10D3} shows a representation of the proceedings of the projects for the careful solution. A color code helps spot the priorities of the projects (from high priority to low priority: red, orange, yellow, blue). It should be noted that, in our example, priority-one projects account for the vast majority of the budget. \begin{figure}[htbp] ~\hspace{-1.3cm} \includegraphics[width=17cm, bb = 2.5cm 19.5cm 18.5cm 27.5cm, clip=true]{Fig10D3} \caption{The proceedings of the projects for the careful solution.} \label{LabelFig10D3} \end{figure} The coding follows these rules: \begin{itemize} \item The first five genes code the five projects of lower priority (the priority-one projects are always active). \item The coding is equally distributed for each unit: below one half, the project is inactive; above one half, the project is active. \item The next five genes contain the evolution of the tax, written as a double. \item We consider the limits to be 0\% at the minimum and 7\% at the maximum. Thus, the decoding applies a modulo-0.07 operation to the values. \end{itemize} The fitness function that was used works as follows: \begin{itemize} \item The number of projects brings a linear satisfaction and accounts for 25\% of the final fitness. \item The evolution of the tax is best at 0\% and worst at 7\%. \item Its average evolution accounts for 10\% of the global fitness, and its evolution in the last two years accounts for 5\%. 
\item The capacity to be free of debt is optimal at 0 years and worst at 15 years and more. This accounts for 25\% of the final grading of the solution. \item The capacity to save money is optimal at 5\%, and accounts for 25\% of the global fitness. \item No variation at all (in the tax evolution) gives the best results, and this part of the grade accounts for the remaining 10\% of the global fitness. \end{itemize} The language used is C\#, and the Genetic Algorithm framework is the one from AForge, which is an Open Source project. The definition of the two vectors corresponding to the solution points is programmed as: \begin{gather} \begin{aligned} \text {\tt Vector v1 = } & \text {\tt new Vector(new double[] \{ 0.75, 0.75, 0.75, 0.75, 0.75,} \\ & \text {\tt 0.07, 0.07, 0.07, 0.00, 0.00 \});} \\ \text{\tt Vector v2 = } & \text {\tt new Vector(new double[] \{ 0.25, 0.25, 0.25, 0.25, 0.25,} \\ & \text {\tt 0.03, 0.02, 0.02, 0.00, 0.00 \}); } \end{aligned} \end{gather} By constraining the solutions into a hypercube using the method described in this paper, we reach an optimum which has been validated by a finance professional as a reasonable solution for the community budget. The corresponding coding is: \begin{gather} \begin{aligned} & \text {\tt [1.03949069282959, 0.19207769961155, 0.51186133809657, } \\ & \text {\tt 0.205107769055541, 0.785367264162938, -0.254824609597842,} \\ & \text {\tt 0.378497610225784, 1.04175330250962, 0.590217152232071, } \\ & \text {\tt -11.383992284572]}. \end{aligned} \end{gather} These values correspond to: \begin{gather} \begin{aligned} & \text{Project 1: OFF, Project 2: OFF, Project 3: ON, Project 4: OFF, Project 5: ON, } \\ & \text{Tax evolution: 3.48\%, 2.85\%, 6.18\%, 3.02\%, 3.39\%.} \end{aligned} \end{gather} The prudential ratios are respected, as the solution in the graphic simulator of Figure \ref{LabelFig10D4} shows (Capacity to Be Free of Debt, abbreviated as CDD in French, must remain under 15 years). 
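To make the coding rules concrete, here is an illustrative Python decoding of the chromosome above. The wrap-around of the project genes into $[0,1)$ before thresholding, and the use of Python's modulo convention for the tax genes, are assumptions of this sketch: they reproduce the reported project activations and the tax rates of years 2 to 4, while the two negative genes (years 1 and 5) depend on the remainder convention of the actual C\# implementation and are left aside.

```python
def decode(chromosome):
    """Illustrative decoding of a 10-gene coding. Genes 1-5: the project is
    active when the gene, wrapped into [0,1), exceeds one half. Genes 6-10:
    yearly tax increase, reduced modulo 0.07 so that it stays in [0%, 7%).
    The wrap-around and the remainder convention are assumptions of this
    sketch, not a transcription of the actual C# implementation."""
    projects = [(g % 1.0) > 0.5 for g in chromosome[:5]]
    taxes = [g % 0.07 for g in chromosome[5:]]  # Python's % yields values in [0, 0.07)
    return projects, taxes

best_coding = [1.03949069282959, 0.19207769961155, 0.51186133809657,
               0.205107769055541, 0.785367264162938, -0.254824609597842,
               0.378497610225784, 1.04175330250962, 0.590217152232071,
               -11.383992284572]
projects, taxes = decode(best_coding)
```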
\begin{figure}[htbp] ~\hspace{-1.3cm} \includegraphics[width=17.5cm, bb = 2.5cm 17.8cm 18.5cm 27.5cm, clip=true]{Fig10D4} \caption{The optimal solution.} \label{LabelFig10D4} \end{figure} The fitness obtained is 59\%, and the 500 generations took 17 minutes and 32 seconds to be simulated on the reference machine. Interestingly, an independent computation with only 100 generations, which took 3 minutes and 32 seconds, showed a final fitness of 58\%, so the convergence is quite quick for this particular example. The corresponding chromosome was: \begin{gather} \begin{aligned} & \text {\tt [1.11266759532518, 0.0838990704498006, 0.565754259647956, } \\ & \text {\tt 0.440107396614116, 0.813652694225311, 0.642521321773529, } \\ & \text {\tt 0.575349082741285, 0.447334636593206, 0.488786454202292, } \\ & \text {\tt 0.990373758336907]} \end{aligned} \end{gather} In terms of budget, this means: \begin{gather} \begin{aligned} & \text{Project 1: OFF, Project 2: OFF, Project 3: ON, Project 4: OFF, Project 5: ON,} \\ & \text{Tax evolution: 1.25\%, 1.53\%, 2.73\%, 6.88\%, 1.04\%}. \end{aligned} \end{gather} One will notice that the activation of the different projects is the same as in the other solution, whereas the chosen Tax Evolution pattern is quite different. A quick conclusion would be that the fitness depends more on the project activations than on the tax evolution, but this would require a robustness analysis, which was not the subject of the present study. The most interesting part of the result is that, along the generations, the solutions found are quite concentrated, as shown in Figure \ref{LabelFig10D5}. \begin{figure}[htbp] ~\hspace{-1.3cm} \includegraphics[width=17.5cm, bb = 2.5cm 19.8cm 18.5cm 27.5cm, clip=true]{Fig10D5} \caption{The final population.} \label{LabelFig10D5} \end{figure} This should be compared to the initial Genetic Algorithm optimization without the Gram-Schmidt projection. 
\begin{figure}[htbp] ~\hspace{-1.3cm} \includegraphics[width=17.5cm, bb = 2.5cm 19.8cm 18.5cm 27.5cm, clip=true]{Fig10D6} \caption{The initial population.} \label{LabelFig10D6} \end{figure} The effect of using this particular technique is that the solutions are found within a constrained set of solutions, without the user needing to explain how it is constrained: one simply proposes two solutions surrounding the searched one. An effect on the rapidity of the convergence was expected as an additional result of the study, but we could not demonstrate any noticeable or provable effect on this factor. Further studies need to be done, with a high volume of tests, in order to determine whether the two-point approach helps the convergence of the Genetic Algorithms or not, and under which conditions on the fitness and the coding of the chromosomes. \subsection{Parameter choices for the test} \label{SubsecPCCT} Of course, the justification of the parameters chosen for the tuning of the Genetic Algorithms engine would make for an entire article on its own, so we will just provide a justification for running tests like the one described in subsection \ref{SubsecTOP}. More precisely, in this subsection, we report on a detailed study that allowed us to tune the parameters in this context. The first part of the validation of the values used for auto-shuffling, crossing, mutation and random selection rates is simply that they are the default values provided by the Open Source component AForge.Genetics. One can safely assume that these values have been chosen to be a correct fit for general situations. \\ The second part of the validation completes the first one, as the previous hypothesis has indeed been verified by the complete study, the outcome of which is briefly described below. The auto-shuffling parameter allows for a dispersion of the chromosomes after selection. 
This is useful when the selection method forces the list of upcoming chromosomes to be sorted, which can result in lower-quality crossing of the chromosomes thereafter. The "false" parameter is indeed, in our case, the best in terms of convergence speed (external parameters like the size of the population have no impact on the result), as shown in Figure \ref{A-AutoShuffling}. \begin{figure}[htbp] \begin{center} \includegraphics[width=9cm, bb = 0cm 0cm 14.8cm 5.9cm, clip=true]{A-AutoShuffling} \caption{Convergence with and without auto-shuffling.} \label{A-AutoShuffling} \end{center} \end{figure} The impact of the crossing rate has also been studied in the same conditions (accounting for variations of other parameters), and led to the same conclusion that the default value of 0.75, provided by AForge, is quite optimal (see Figure \ref{B-CrossingRate}). \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm, bb = 0cm 0cm 14.8cm 6.8cm, clip=true]{B-CrossingRate} \caption{Influence of crossing rate.} \label{B-CrossingRate} \end{center} \end{figure} The next parameter analyzed is the mutation rate, and again, the default value of 0.1 is close to ideal in our case of study (see Figure \ref{C-MutationRate}). \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm, bb = 0cm 0cm 14.8cm 6.8cm, clip=true]{C-MutationRate} \caption{Influence of mutation rate.} \label{C-MutationRate} \end{center} \end{figure} \noindent The random selection rate is not used by the tailored selection algorithm created for the budget optimization engine. Finally, the impact of the size of the population on the convergence speed has also been analyzed (see Figure \ref{D-PopulationSize}), and brought us to use a value of 50, which was considered an optimal trade-off between speed and memory use. 
\\ \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm, bb = 0cm 0cm 16.3cm 6.8cm, clip=true]{D-PopulationSize} \caption{Influence of population size.} \label{D-PopulationSize} \end{center} \end{figure} As a side note, the robustness tests around the four parameters required several hundred hour-long tests to be run, in order to simulate all the different possible combinations. This has been achieved by creating a small software component dedicated to running jobs on colleagues' computers at night, collecting data and centralizing the results back on the author's computer in the morning. This ad-hoc mechanism has also been used for the actual simulation, in order to test its robustness. \\ Working with Genetic Algorithms involves two competing risks that must be continuously balanced: under-fitting and over-fitting. In an under-fitting simulation, the generation algorithm introduces too much variety in the chromosomes for the fitness algorithm to restrain the population to a fitter one than the previous one. The over-fitting problem is the exact reverse, where the fitness algorithm completely stifles the expansion of the possible domain created by the generation algorithm, thus resulting in a lack of genetic variety, potentially leaving more ideal genes out of the simulation. The under-fitting problem is addressed head-on by the very principle proposed in the current paper, which is to reduce the exploration domain to one restricted to the immediate neighborhood of two relatively satisfactory points. The restriction on the domain is achieved by choosing the chromosomes in constrained multi-dimensional boxes, the corresponding business values then being retrofitted into the standard domain of values by using the Gram-Schmidt routine. The over-fitting problem is addressed by carefully choosing the selection method among commonly-accepted ones. 
On very continuous problems, the "Elite" selection mode achieves the best results, by quickly removing the poor genes from the pool. On discontinuous problems like the ones linked to budget optimization, it is better not to be too harsh on the selection, and to adopt a more exploratory selection mode, like "Roulette". In the "Roulette" method, the chromosomes are not simply eradicated if they do not rank highly with respect to the fitness method, but simply have a lower chance of being selected for the next generation. This results in a more flexible approach, where the exploration of unknown domains is allowed, but more or less quickly forbidden if it does not bring an improvement on the fitness. The question of the tuning between the "Elite" and the "Roulette" parts of the selection algorithm could bear a complete study on its own. In the present study, this ratio has been set to a balanced default value of half/half, after a considerable number of nightly robustness tests showed that increasing the exploratory part did not bring any better solution. After these tests, the ratio was kept for all subsequent simulations. It would of course be possible to optimize the computation time by slowly decreasing this ratio towards a more Elite-oriented algorithm, but the improvement in computational time (whose cost was extremely low, all simulations being run on low-range PCs) would not make up for the risk of not detecting a better solution for the budget. This kind of adjustment is left for further study. \section{Conclusion} In this paper, a method based on Genetic Algorithms to build a collection of Financial Solutions from two acceptable ones is set out and explored.\\ The requirement that the collection be sought in a neighborhood of the two acceptable solutions is handled by a Gram-Schmidt routine that comfortably builds a box surrounding them. 
This routine also provides a way of coding the solutions that can be used in the Genetic Like Algorithm.\\ The method is then tested on simplified one-dimensional problems to show that it has the capability to locate the argument of the maximum of a Fitness Function and to generate a collection of solutions which is distributed over the set of all the arguments of the maximum when the Fitness Function reaches its maximum on a plateau. This last capability is the important one in view of the targeted operational applications, which concern the financial strategy of local communities.\\ Finally, the method is tested on an example of the targeted operational applications and gives interesting and promising results. It seems to be a potential alternative, or a support, to the heavy protocol (involving many meetings with experts and decision makers) used to set out a suitable Financial Solution for local communities. \\ \noindent {\bf Acknowledgements -} The authors thank the referee, who pointed out some shortcomings in the first version of the paper and thereby permitted a substantial improvement. \bibliographystyle{plain}
\section{Introduction} Flavor Physics probes new phenomena by either searching for small deviations from theoretical predictions based on the Standard Model (SM) or by measuring quantities which are highly suppressed within the SM. Searches for small deviations from the SM are performed using large strange, charm or bottom hadron samples, mostly by kaon experiments or $B$ factories. Measurements of highly suppressed quantities, such as CP violation phases and asymmetries in the neutral $B_s$-meson system or searches for rare $B$ decays, are performed with the hope that new physics effects would be large enough to significantly affect the measured quantities and so lead to observations of deviations from the SM expectations. The D0 and CDF detectors at the Fermilab Tevatron have each accumulated more than 9~fb$^{-1}$ of integrated luminosity. The corresponding large datasets enable the two experiments to perform unprecedented studies of heavy flavor hadron properties. We present recent D0 and CDF measurements, focusing on rare decays and CP violation in $B$-meson decays. \section{Search for $B^0_s \rightarrow \mu^{+} \mu^{-}$ Decays} Processes involving Flavor Changing Neutral Currents (FCNC) provide excellent opportunities to search for evidence of new physics since in the SM they are forbidden at tree level and can only occur through higher order loop diagrams. Two such processes are the decays $B_{s/d} \rightarrow \mu^{+} \mu^{-}$. In the SM, these decays are both Cabibbo and helicity suppressed. Their branching ratios are predicted with~10\% accuracy~\cite{buras} as: $BR(B_{s} \rightarrow \mu^{+} \mu^{-}) = (3.2 \pm 0.2) \times 10^{-9}$ and $BR(B_{d} \rightarrow \mu^{+} \mu^{-}) = (1.0 \pm 0.1) \times 10^{-10}$. These predictions are one order of magnitude smaller than the current experimental sensitivity. Enhancements to the expected $B_s \rightarrow \mu^{+} \mu^{-}$ branching fraction occur in a variety of different new physics models.
For example, in supersymmetry (SUSY) models, new supersymmetric particles can increase the branching fraction $BR(B_{s} \rightarrow \mu^{+} \mu^{-})$ by several orders of magnitude at large tan$(\beta)$, the ratio of vacuum expectation values of the Higgs doublets~\cite{choudhury}. In the minimal supersymmetric standard model (MSSM), the enhancement is proportional to tan$^6(\beta)$. For large tan$(\beta)$, this search is one of the most sensitive probes of new physics available at the Tevatron experiments. Using 6.1~fb$^{-1}$, the D0 experiment published~\cite{D0_Bs_mumu} in 2010 an upper limit on the $B_{s} \rightarrow \mu^{+} \mu^{-}$ branching ratio of $51 \times 10^{-9}$ at 95\% C.L. The CDF experiment has performed a recent update~\cite{CDF_Bs_mumu} of the analysis using 7~fb$^{-1}$ of integrated luminosity, which supersedes the previous CDF published result~\cite{CDF_Bs_mumu_2fb} based on 2~fb$^{-1}$ of data. In addition to increasing the size of the data set, the sensitivity of this analysis is improved by another 20\% by including events which cross regions of the tracker where the trigger efficiency is rapidly changing and by including events with muons in the forward regions. Other improvements include the use of a better neural network (NN) discriminant that provides approximately twice the background rejection for the same signal efficiency. The events are collected using a set of dimuon triggers and must satisfy either of two sets of requirements corresponding to different topologies: CC events have both muon candidates detected in the central region (CMU), while CF events have one central muon and another muon detected in the forward region (CMX). The baseline selection requires high quality muon candidates with transverse momentum relative to the beam direction of $p_T > 2.0 (2.2)$~GeV/c in the central (forward) region.
The muon pairs are required to have an invariant mass in the range $4.669 < $M$(\mu\mu) < 5.969$~GeV/c$^2$ and are constrained to originate from a common well-measured three-dimensional (3D) vertex. A likelihood method together with an energy loss based selection are used to further suppress contributions from hadrons misidentified as muons. A fraction of the total number of background and simulated signal events is used to train a NN to discriminate signal from background events. The remainder are used to test for NN over-training and to determine the signal and background efficiencies. To exploit the difference in the M$(\mu\mu)$ distributions between signal and background and the improved suppression of combinatorial background at large NN output ($\nu_{NN}$), the data are divided into sub-samples in the ($\nu_{NN}$, M$(\mu\mu)$) plane. The CC and CF samples are each divided into 40 sub-samples. There are eight bins in $\nu_{NN}$. Within each $\nu_{NN}$ bin, five M$(\mu\mu)$ bins are employed, each 24 MeV/c$^2$ wide, centered on the world average $B_s$ ($B_d$) mass. The number of observed events is compared to the number expected in all 80 sub-samples for the $B_d$ search region. The data are consistent with the background expectations and yield an observed limit of \begin{center} $BR(B_d \rightarrow \mu^+\mu^-)~<~6.0~(5.0)~\times~10^{-9}$ at 95\% (90\%) C.L. \end{center} The results for the $B_s$ region are shown in Fig.~\ref{fig:Bs_mumu}. There is an excess of events over background concentrated in the region with $\nu_{NN}$~$>~0.97$. The p-value for the background-only hypothesis is 0.27\%. If we consider only the two highest NN bins, the p-value becomes 0.66\%. If $B_s \rightarrow \mu^+\mu^-$ events are included in the pseudo-experiments at the SM level ($BR = 3.2 \times 10^{-9}$), a p-value of 1.9\% (4.1\%) is obtained using all (only the highest 2) NN bins.
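The sub-sample construction described above (eight $\nu_{NN}$ bins times five 24~MeV/c$^2$ mass bins per topology) can be sketched in a few lines. This is purely illustrative: the $\nu_{NN}$ bin edges below are hypothetical placeholders, since the text does not list them, and the $B_s$ mass is an assumed PDG-era value; only the mass-bin width and bin counts come from the text.

```python
# Sketch (illustrative, not the experiment's code) of the CDF sub-sample
# grid: 8 NN bins x 5 mass bins = 40 sub-samples per topology (CC, CF).
M_BS = 5.3667   # world-average B_s mass in GeV/c^2 (assumed PDG-era value)
WIDTH = 0.024   # 24 MeV/c^2 mass-bin width, as quoted in the text
NN_EDGES = [0.70, 0.76, 0.82, 0.88, 0.94, 0.97, 0.987, 0.995, 1.0]  # hypothetical edges
M_EDGES = [M_BS + WIDTH * (k - 2.5) for k in range(6)]  # 5 bins centered on M_BS

def subsample(nn, m):
    """Map a (nu_NN, dimuon mass) pair to its (nn_bin, mass_bin) index pair."""
    inn = next((i for i in range(8) if NN_EDGES[i] <= nn < NN_EDGES[i + 1]), None)
    im = next((j for j in range(5) if M_EDGES[j] <= m < M_EDGES[j + 1]), None)
    return (inn, im) if inn is not None and im is not None else None
```

With these placeholder edges, an event with $\nu_{NN} = 0.98$ at the assumed $B_s$ mass falls in the sixth NN bin and the central mass bin.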
\begin{figure}[tb] \includegraphics[width=80mm]{bs_mumu_plot.eps} \includegraphics[width=80mm]{BsLimitsAug2011.eps} \caption{ Comparison of background, SM prediction, and observations, separating CC and CF, but combining the five lowest NN bins, in $B_s \rightarrow \mu^+ \mu^-$ decays (left). Comparison between the $B_s \rightarrow \mu^+\mu^-$ allowed branching fractions at CDF, LHCb, CMS, and the SM prediction (right).} \label{fig:Bs_mumu} \end{figure} A log-likelihood fit is used to determine the $BR(B_s \rightarrow \mu^+\mu^-)$ most consistent with the data in the $B_s$ search region: \begin{center} $BR(B_s \rightarrow \mu^+\mu^-) = (1.8 ^{+1.1}_{-0.9}) \times 10^{-8}$. \end{center} Additionally, 90\% C.L. bounds are set on the branching fraction: \begin{center} $4.6 \times 10^{-9} < BR(B_s \rightarrow \mu^+\mu^-) < 3.9 \times 10^{-8}$. \end{center} The CDF experiment investigated the excess in the $0.97~<$~$\nu_{NN}$~$<~0.987$~bin, which appears to be a statistical fluctuation of the background, as there is no significant expectation of $B_s \rightarrow \mu^+\mu^-$ signal consistent with the observation in the two highest NN bins. The same events, the same fits and the same methodologies are used for both $B_s$ and $B^0$ searches. Since the data in the $B^0$ search region show no excess, problems with the background estimates are ruled out. The only peaking background in this mass region is from $B \rightarrow h^+ h'^-$ decays, whose contribution to the $B^0$ search window is ten times larger than to the $B_s$ search window. However, no data excess is seen in the $B^0$ search window. The NN studies find no evidence of over-training or $\nu_{NN} - m_{\mu\mu}$ correlations and no evidence for mis-modeling of the $\nu_{NN}$ shape. The most plausible explanation for the data excess in the $0.97~<$~$\nu_{NN}$~$<~0.987$ bin is a statistical fluctuation.
The Tevatron results are consistent with the recent measurements from the LHCb experiment, $BR(B_s \rightarrow \mu^+\mu^-) < 12 (15) \times 10^{-9}$ at 90 (95)\% CL~\cite{LHCb_Bs_mumu}, and the CMS experiment, $BR(B_s \rightarrow \mu^+\mu^-) < 16 (19) \times 10^{-9}$ at 90 (95)\% CL~\cite{CMS_Bs_mumu}. \section{Flavor Changing Neutral Currents in $b \rightarrow s \mu \mu$ Decays} Rare decays of bottom hadrons mediated by the flavor-changing neutral current (FCNC) process $b \rightarrow s \mu \mu$ occur in the SM through higher order amplitudes. A variety of beyond-the-standard-model (BSM) theories, on the other hand, favor enhanced rates for these FCNC decays. One can obtain rich information about the $b \rightarrow s \mu \mu$ dynamics by measuring the branching ratios, their dependence on the di-lepton mass distributions, and the angular distributions of the decay products~\cite{b_smumu_br}. The CDF experiment has analyzed the following decays governed by the $b \rightarrow s \mu \mu$ transition: \begin{center} $\Lambda^0_b \rightarrow \Lambda \mu^+\mu^-$ \\ $B^0_s \rightarrow \phi \mu^+\mu^-$ \\ $B^+ \rightarrow K^+ \mu^+\mu^-$ \\ $B^0 \rightarrow K^{*0}(892) \mu^+\mu^-$ \\ $B^0 \rightarrow K^0 \mu^+\mu^-$ and \\ $B^+ \rightarrow K^{*+}(892) \mu^+\mu^-$. \end{center} In addition to the branching fractions and differential branching fractions of these decays, the angular distributions in $B \rightarrow K^{(*)} \mu^+\mu^-$ decays are measured as well. The analysis is based on a dataset corresponding to 6.8~fb$^{-1}$ of integrated luminosity. Previous iterations used 4.4~fb$^{-1}$ and 924~pb$^{-1}$, respectively~\cite{b_smumu_old}. The results include the first observation of the baryonic FCNC decay $\Lambda^0_b \rightarrow \Lambda \mu^+\mu^-$ and the first measurement of its branching fraction and of the differential branching fraction as a function of squared dimuon mass.
Fig.~\ref{fig:Lb_Lmumu} shows the invariant mass distribution of $\Lambda \mu^+\mu^-$ from $\Lambda^0_b$ decays. The branching fractions, which are among the most precise determinations in $b \rightarrow s \mu \mu$ decays, are measured as follows: \begin{center} $BR(\Lambda^0_b \rightarrow \Lambda \mu^+\mu^-) = [1.73 \pm 0.42 (\rm stat.) \pm 0.55 (\rm syst.) ] \times 10^{-6}$ \\ $BR(B^0_s \rightarrow \phi \mu^+\mu^-) = [1.47 \pm 0.24 (\rm stat.) \pm 0.46 (\rm syst.) ] \times 10^{-6}$ \\ $BR(B^+ \rightarrow K^+ \mu^+\mu^-) = [0.46 \pm 0.04 (\rm stat.) \pm 0.02 (\rm syst.) ] \times 10^{-6}$ \\ $BR(B^0 \rightarrow K^{*0}(892) \mu^+\mu^-) = [1.02 \pm 0.10 (\rm stat.) \pm 0.06 (\rm syst.) ] \times 10^{-6}$ \\ $BR(B^0 \rightarrow K^0 \mu^+\mu^-) = [0.32 \pm 0.10 (\rm stat.) \pm 0.02 (\rm syst.) ] \times 10^{-6}$ \\ $BR(B^+ \rightarrow K^{*+}(892) \mu^+\mu^-) = [0.95 \pm 0.32 (\rm stat.) \pm 0.08 (\rm syst.) ] \times 10^{-6}$. \end{center} \begin{figure}[tb] \includegraphics[width=50mm]{Lambda_b_mass.eps} \includegraphics[width=55mm]{summary_fl_6bin_all_prl.eps} \includegraphics[width=55mm]{summary_afb_6bin_all_prl.eps} \caption{ $\Lambda^0_b \rightarrow \Lambda \mu^+\mu^-$ candidate mass distribution (left); the signal significance is 5.8 Gaussian sigma. Parameters $F_L$ (center) and $A_{FB}$ (right) as a function of the di-muon invariant mass in $B \rightarrow K^* \mu^+ \mu^-$ decays (simultaneous fit of the $K^{*0}$ and $K^{*+}$ channels). The red curves represent the SM expectations~\cite{b_smumu_sm,Bobeth_2010wg} while the blue curves correspond to a supergravity model with large tan$(\beta)$~\cite{b_smumu_bsm}.
} \label{fig:Lb_Lmumu} \end{figure} The full differential decay distribution for the decay $B \rightarrow K^{(*)} \mu^+\mu^-$ is described by four independent kinematic variables: the di-muon invariant mass squared $q^2$, the angle $\theta_{\mu}$ between the muon $\mu^{+/-}$ direction and the direction opposite to the $B/{\bar B}$-meson in the di-muon rest frame, the angle $\theta_{K}$ between the kaon direction and the direction opposite to the $B$-meson in the $K^*$ rest frame, and the angle $\phi$ between the two planes formed by the di-muon and the $K-\pi$ systems. The distributions of $\theta_{\mu}$, $\theta_K$, and $\phi$ are projected from the full differential decay distribution and can be parametrized with four angular observables, $A_{FB}$, $F_L$, $A_T^{(2)}$ and $A_{im}$: \begin{center} $\frac{1}{\Gamma} \frac{d\Gamma}{d\cos(\theta_K)} = \frac{3}{2} F_L \cos^2\theta_K + \frac{3}{4} (1-F_L) (1-\cos^2\theta_K)$, \\ $\frac{1}{\Gamma} \frac{d\Gamma}{d\cos(\theta_{\mu})} = \frac{3}{4} F_L (1-\cos^2\theta_{\mu}) + \frac{3}{8} (1-F_L) (1+\cos^2\theta_{\mu}) + A_{FB} \cos\theta_{\mu}$, \\ $\frac{1}{\Gamma} \frac{d\Gamma}{d\phi} = \frac{1}{2\pi} [ 1+ \frac{1}{2} (1-F_L) A_T^{(2)} \cos2\phi + A_{im} \sin2\phi ] $, \\ \end{center} where $\Gamma = \Gamma (B \rightarrow K^* \mu^+ \mu^-)$, $A_{FB}$ is the muon forward-backward asymmetry, $F_L$ is the $K^*$ longitudinal polarization fraction, $A_T^{(2)}$ is the transverse polarization asymmetry, and $A_{im}$ is the T-odd CP asymmetry of the transverse polarizations. Fig.~\ref{fig:Lb_Lmumu} shows the agreement between $F_L$, $A_{FB}$ and $A_T^{(2)}$ as a function of the di-muon invariant mass and the SM expectations~\cite{b_smumu_sm,Bobeth_2010wg}. The angular analysis results are among the most precise measurements to date. The right-handed current sensitive observables $A_T^{(2)}$ and $A_{im}$ are measured for the first time.
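As a quick cross-check of the projected angular distributions above, the following sketch integrates each one-dimensional PDF numerically and verifies that it is normalized to unity. The parameter values are purely illustrative, and the $\phi$ distribution is taken with a $1/(2\pi)$ prefactor so that it is normalized over $[0, 2\pi)$.

```python
# Numerical normalization check (illustrative sketch) of the projected
# angular PDFs for B -> K* mu+ mu-; observable values are arbitrary examples.
import math

FL, AFB, AT2, AIM = 0.5, 0.1, 0.2, 0.05  # illustrative observable values

def pdf_cos_theta_k(c):
    return 1.5 * FL * c**2 + 0.75 * (1.0 - FL) * (1.0 - c**2)

def pdf_cos_theta_mu(c):
    return 0.75 * FL * (1.0 - c**2) + 0.375 * (1.0 - FL) * (1.0 + c**2) + AFB * c

def pdf_phi(phi):
    # normalized over [0, 2*pi) with a 1/(2*pi) prefactor
    return (1.0 + 0.5 * (1.0 - FL) * AT2 * math.cos(2.0 * phi)
            + AIM * math.sin(2.0 * phi)) / (2.0 * math.pi)

def integrate(f, a, b, n=20000):
    """Midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

print(integrate(pdf_cos_theta_k, -1.0, 1.0))   # ~1
print(integrate(pdf_cos_theta_mu, -1.0, 1.0))  # ~1
print(integrate(pdf_phi, 0.0, 2.0 * math.pi))  # ~1
```

The $A_{FB}$, $A_T^{(2)}$ and $A_{im}$ terms integrate to zero over their full ranges, so each distribution stays normalized for any allowed observable values.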
\section{Like-Sign Dimuon Asymmetry} The D0 collaboration presents an updated measurement~\cite{dimuon_asym} of the like-sign dimuon charge asymmetry in semi-leptonic decays of $b$-hadrons using a data sample corresponding to 9~fb$^{-1}$ of integrated luminosity. The like-sign dimuon asymmetry is defined as: \begin{center} $A_{sl}^b = \frac{N_b^{++} - N_b^{--}}{N_b^{++} + N_b^{--}} = C_d a_{sl}^d + C_s a_{sl}^s$, \end{center} where $N_b^{++}$ and $N_b^{--}$ are the numbers of events containing two muons of the same charge, produced in semi-leptonic decays of $b$-hadrons. The asymmetries $a_{sl}^q$, where $q = s/d$, are defined as: \begin{center} $a_{sl}^q = \frac{\Gamma(\bar{B}_q^0 \rightarrow \ \mu^+ X) - \Gamma(B_q^0 \rightarrow \ \mu^- X)}{\Gamma(\bar{B}_q^0 \rightarrow \ \mu^+ X) + \Gamma(B_q^0 \rightarrow \ \mu^- X)} = \frac{\Delta \Gamma_q}{\Delta M_q}$ tan$(\phi_q)$, \end{center} where $\phi_q$ is a CP violating phase and $\Delta M_q$ and $\Delta \Gamma_q$ are the mass and width differences between the eigenstates of the time propagation operator of the neutral $B^0_q$ systems. The coefficients $C_d$ and $C_s$ depend on the mean mixing probabilities and on the production rates of the $B^0$ and $B_s^0$ mesons~\cite{HFAG}: \begin{center} $C_d = 0.594 \pm 0.022$ and $C_s = 0.406 \pm 0.022$. \end{center} Using the SM predictions for $a_{sl}^q$~\cite{aslq_th}, one finds \begin{center} $A_{sl}^b(SM) = (-0.028^{+0.005}_{-0.006})\%$. \end{center} These theoretical uncertainties are negligible compared to the experimental sensitivity. A previous D0 analysis based on 6.1~fb$^{-1}$ of integrated luminosity~\cite{dimuon_asym_6fb} revealed a dimuon asymmetry 3.2 standard deviations away from the SM expectation: \begin{center} $A_{sl}^b = (-0.00957 \pm 0.00251 (\rm stat.) \pm 0.00146 (\rm syst.))$.
\end{center} The updated measurement not only benefits from the increase in integrated luminosity from 6.1~fb$^{-1}$ to 9~fb$^{-1}$, but also includes analysis improvements: a 13\% increase in the data sample due to a looser muon longitudinal momentum selection and a 20\% reduction in kaon and pion decay-in-flight backgrounds. In addition, muon impact parameter studies support the hypothesis that the muons are indeed from $B$ decays. The new result is 3.9 standard deviations from the SM expectation: \begin{center} $A_{sl}^b = (-0.787 \pm 0.172 (\rm stat.) \pm 0.093 (\rm syst.) )\%$, \end{center} and it represents one of the most intriguing current deviations from the SM in high energy physics. \begin{figure}[tb] \includegraphics[width=65mm]{d0_bs_mass.eps} \hskip1cm \includegraphics[width=90mm]{BsMass_5.2fb.eps} \caption{ $K^+K^-\mu^+\mu^-$ invariant mass distribution from $B_s \rightarrow J/\psi \phi$ candidates at D0 (left) and CDF (right). } \label{bs_mass} \end{figure} \section {CP Violation in $B_s \rightarrow J/\psi \phi$ Decays } While CP violation has been well-measured and found to agree with the SM expectations in kaon and in most $B$-meson decays, the study of CP violation in decays of $B_s$ mesons is still in its early stages, with the first results from $B_s \rightarrow J/\psi \phi$ decays reported by the CDF and D0 collaborations in the last couple of years~\cite{cdf_beta_s_prl,paulini,d0_beta_s_prl}. In these decays, CP violation occurs through the interference between the decay amplitudes with and without mixing. In the SM the relative phase between the decay amplitudes with and without mixing is $\beta_s^{SM}={\rm arg}(-V_{ts}V_{tb}^{*}/V_{cs}V_{cb}^{*})$ and it is expected to be very small~\cite{bigi-sanda,Ref:lenz}. New physics contributions manifested in the $B_s^0$ mixing amplitude may alter this mixing phase by a quantity $\phi_s^{NP}$ leading to an observed mixing phase $2\beta_s^{J/\psi\,\phi} = 2\beta_s^{SM} - \phi_s^{NP}$.
Large values of the observed $\beta_s^{J/\psi\,\phi}$ would be an indication of physics beyond the SM~\cite{Ref:lenz,Ref:hou,Ref:ligeti,theory}. It is interesting to note that certain SUSY models with large tan($\beta$) predict enhanced $BR(B_s \rightarrow \mu \mu)$ for large CP violating mixing phase in $B_s \rightarrow J/\psi \phi$ decays~\cite{Altmannshofer}. Early measurements of the CP violation parameter $\beta_s$ from the CDF~\cite{paulini} and D0~\cite{d0_beta_s_prl} collaborations showed small deviations from the SM~\cite{bigi-sanda,Ref:lenz}; however, a combination~\cite{cdf_d0_comb} of CDF and D0 analyses, based on 2.8~fb$^{-1}$ of integrated luminosity, revealed a deviation of slightly more than two standard deviations with respect to the SM predictions. More recent updates of these measurements were performed by both the CDF~\cite{cdf_bs_5.2fb} and the D0~\cite{d0_bs_8.0fb} experiments using data samples corresponding to 5.2~fb$^{-1}$ and 8.0~fb$^{-1}$ of integrated luminosity, respectively. The CDF experiment has a $B_s$ yield of $\approx 6500$ signal events, while the D0 experiment reports a yield of $\approx 5500$ signal events, as shown in Fig.~\ref{bs_mass}. The updated measurements show better agreement with the SM expectation. The deviations are at the one standard deviation level for each experiment. The CDF experiment finds~\cite{cdf_bs_5.2fb} that the CP violation phase $\beta_s$ is within the ranges $[0.02, 0.52] \cup [1.08, 1.55]$~radians at the 68\% CL. The corresponding D0 result is~\cite{d0_bs_8.0fb} $\beta_s = 0.28^{+0.19}_{-0.18}$~radians or $\phi_s = -2\beta_s = -0.55^{+0.38}_{-0.36}$~radians. The two dimensional confidence regions in the $\beta_s-\Delta\Gamma_s$ plane are shown in Fig.~\ref{bs_dg_contours}. Apart from increasing the sample sizes, each experiment includes in the analysis the s-wave contribution from $B_s \rightarrow J/\psi K^+K^-$, where the $K^+K^-$ pair is in an s-wave state.
The s-wave could be either the $f_0(980)$ state or a non-resonant $K^+K^-$ state. The s-wave contribution to the CDF analysis is found to be less than 6.7\% at the 95\% CL while the corresponding D0 fraction is $(17.3 \pm 3.6)\%$. The difference between the two s-wave contributions in the two analyses is still to be understood. The $B_s$ mean lifetime, $\tau_s$, the decay width difference between the $B_s$ mass eigenstates, $\Delta \Gamma_s$, the polarization fractions in the transversity basis $|A_{0}(0)|^2$ and $|A_{\parallel}(0)|^2$ and the strong phases $\varphi_{\parallel} = {\rm arg}(A_{\parallel}(0) A_0^*(0))$, $\varphi_{\perp} = {\rm arg}(A_{\perp}(0) A_0^*(0))$ and $\varphi_s = {\rm arg}(A_s(0) A_0^*(0))$ are investigated. The CDF results are: \begin{center} $c\tau_s = 458.6 \pm 7.6 (\rm stat.) \pm 3.6 (\rm syst.) \mu$m \\ $\Delta \Gamma_s = 0.075 \pm 0.035 (\rm stat.) \pm 0.01 (\rm syst.)$~ps$^{-1}$ \\ $|A_{||}(0)|^2 = 0.231 \pm 0.014 (\rm stat.) \pm 0.015 (\rm syst.)$ \\ $|A_{0}(0)|^2 = 0.524 \pm 0.013 (\rm stat.) \pm 0.015 (\rm syst.)$ \\ $\varphi_{\perp} = 2.95 \pm 0.64 (\rm stat.) \pm 0.07 (\rm syst.)$ \end{center} Due to the low s-wave fraction present in the CDF data, the analysis has no sensitivity to the relative phase $\varphi_s$ between the s-wave amplitude $A_s(0)$ and the amplitude $A_0(0)$. The uncertainties of the phase $\varphi_{\parallel}$ are still being investigated. The corresponding D0 results, including both statistical and systematic uncertainties, are: \begin{center} $\tau_s = 1.443^{+0.038}_{-0.035}$~ps \\ $\Delta \Gamma_s = 0.163^{+0.065}_{-0.064} $~ps$^{-1}$ \\ $|A_{||}(0)|^2 = 0.231^{+0.024}_{-0.030}$ \\ $|A_{0}(0)|^2 = 0.558^{+0.017}_{-0.019}$ \\ $\delta_{\parallel} = 3.15 \pm 0.22$ \\ $\cos(\delta_{\perp}-\delta_s) = 0.11^{+0.27}_{-0.25}$ \end{center} The Tevatron experiments have pioneered the exploration of CP violation in the neutral $B_s$ system.
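Since CDF quotes its lifetime result as $c\tau_s$ in $\mu$m while the D0 number is given in ps, a direct comparison requires a unit conversion. A minimal sketch:

```python
# Unit-conversion sketch: c*tau in micrometers -> lifetime in picoseconds.
C_UM_PER_PS = 299.792458  # speed of light in micrometers per picosecond

def ctau_um_to_ps(ctau_um):
    """Convert a c*tau value in micrometers to a lifetime in picoseconds."""
    return ctau_um / C_UM_PER_PS

print(ctau_um_to_ps(458.6))  # CDF central value -> ~1.53 ps
```

This puts the CDF central value at about 1.53~ps, in the same units as the D0 lifetime quoted above.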
The two experiments have strongly constrained the possible size of new physics contributions and will further restrict it with the full Run II data sample. Since the time of this presentation, the LHCb experiment has presented~\cite{beta_s_lhcb} an updated analysis of $B_s \rightarrow J/\psi \phi$ decays, providing competitive uncertainties on the CP violation parameter $\beta_s$, which was found to be in agreement with the Tevatron results and also with the SM prediction. \begin{figure}[tb] \includegraphics[width=90mm]{d0_bs_dg.eps} \hskip1cm \includegraphics[width=65mm]{tag_contour_systadjust.eps} \caption{ Confidence regions in the $\phi_s-\Delta \Gamma_s$ plane from D0 (left) and in the $\beta_s-\Delta \Gamma_s$ plane from CDF (right). When large CP violation effects from new physics are present, $\phi_s$ and $\beta_s$ are related by the simple equation $\phi_s = -2 \beta_s$. } \label{bs_dg_contours} \end{figure} \section{Study of $B_s \rightarrow J/\psi f_0(980)$ Decays} Due to the small SM value of the phase $\phi_s = {\rm arg}(-M^s_{12}/\Gamma^s_{12}) = (4.2 \pm 1.4) \times 10^{-3}$~radians~\cite{Ref:lenz}, the $B_s$ mass eigenstates and the CP eigenstates coincide to a good approximation. Here $M^s_{12}$ and $\Gamma^s_{12}$ are the off-diagonal elements of the mass and decay matrices which describe the time evolution of the neutral $B_s$ system. The measurement of the mean $B_s$ lifetime decaying to a CP eigenstate provides directly the lifetime of the corresponding mass eigenstate. If new physics has large contributions to $\phi_s$, then the mass and CP eigenstates are no longer the same. In this case, the measured lifetime corresponds to the weighted average of the lifetimes of the two mass eigenstates, with weights depending on the size of the CP violating phase $\phi_s$~\cite{theory}.
The measurement of the $B_s$ lifetime in a final state which is a CP eigenstate provides constraints on the width difference, $\Delta \Gamma_s$, and on the CP violating phase in $B_s$ mixing, $\phi_s$~\cite{Fleischer_Knegjens,Fazio}. Since the final state in the decay $B_s \rightarrow J/\psi f_0(980)$ with $f_0 \rightarrow \pi^+ \pi^-$ is a CP eigenstate, this decay can be used to measure the CP violating phase $\beta_s = {\rm arg}[(-V_{ts}V^*_{tb})/(V_{cs}V^*_{cb})]$ without performing an angular analysis~\cite{stone}. In the case of large new physics CP-violating effects in mixing, it holds that $\phi_s \simeq -2\beta_s$. A measurement of the phase $\beta_s$ in $B_s \rightarrow J/\psi f_0(980), f_0 \rightarrow \pi^+ \pi^-$ decays was already performed by the LHCb experiment~\cite{beta_s_lhcb}. Further interest in the decay $B_s \rightarrow J/\psi f_0(980)$ with $f_0 \rightarrow K^+ K^-$ is generated by the possibility of solving the $\beta_s$ ambiguity by using the interference between the p-wave in $B_s \rightarrow J/\psi \phi$ decays and the s-wave in $B_s \rightarrow J/\psi f_0(980)$ decays. With a sample of 3.8~fb$^{-1}$ containing $502 \pm 37(\rm stat.) \pm 18(\rm syst.)$ signal events, the CDF experiment measures~\cite{cdf_f0} \begin{center} $R_{f_0/\phi} = \frac{BF(B_s \rightarrow J/\psi f_0(980)) \times BF(f_0(980) \rightarrow \pi^+ \pi^-)}{BF(B_s \rightarrow J/\psi \phi) \times BF(\phi \rightarrow K^+K^-)} = 0.257 \pm 0.020 (\rm stat.) \pm 0.014(\rm syst.)$, \end{center} from which \begin{center} $BF(B_s \rightarrow J/\psi f_0(980)) \times BF(f_0(980) \rightarrow \pi^+ \pi^-) = (1.63 \pm 0.12(\rm stat.) \pm 0.09(\rm syst.) \pm 0.50(\rm PDG)) \times 10^{-4}$ \end{center} is derived. This is the most precise determination of $R_{f_0/\phi}$ to date.
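The derived branching-fraction product above follows from multiplying $R_{f_0/\phi}$ by the normalization-mode product $BF(B_s \rightarrow J/\psi \phi) \times BF(\phi \rightarrow K^+K^-)$. The sketch below reproduces the arithmetic; the two normalization branching fractions are assumed PDG-era inputs, not values quoted in the text.

```python
# Arithmetic sketch behind the derived branching-fraction product:
# BF(Bs -> J/psi f0) x BF(f0 -> pi pi)
#   = R_{f0/phi} x BF(Bs -> J/psi phi) x BF(phi -> K+ K-).
R_F0_PHI = 0.257         # CDF measurement quoted in the text
BF_BS_JPSI_PHI = 1.3e-3  # assumed PDG-era BF(Bs -> J/psi phi)
BF_PHI_KK = 0.489        # assumed PDG-era BF(phi -> K+ K-)

bf_product = R_F0_PHI * BF_BS_JPSI_PHI * BF_PHI_KK
print(bf_product)  # ~1.6e-4, consistent with the quoted (1.63 +- ...) x 10^-4
```

The large third uncertainty labeled (PDG) in the quoted result reflects the uncertainty on these normalization inputs.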
The corresponding D0 measurement, using 8~fb$^{-1}$ of integrated luminosity, with $498 \pm 74$ signal candidates, yields a relative branching fraction of \begin{center} $R_{f_0/\phi} = \frac{BF(B_s \rightarrow J/\psi f_0(980)) BF(f_0(980) \rightarrow \pi^+ \pi^-)}{BF(B_s \rightarrow J/\psi \phi) BF(\phi \rightarrow K^+K^-)} = 0.210 \pm 0.032 (\rm stat.) \pm 0.036(\rm syst.)$. \end{center} Fig.~\ref{fig:cdf_f0} shows the CDF and D0 $\mu^+\mu^-\pi^+\pi^-$ invariant mass from $B_s \rightarrow J/\psi f_0(980)$, $f_0(980) \rightarrow \pi^+ \pi^-$ candidates. Both CDF and D0 results are in good agreement with the results from LHCb, $R_{f_0/\phi} = 0.252^{+0.046}_{-0.032}(\rm stat.)^{+0.027}_{-0.033}(\rm syst.)$~\cite{lhcb_f0}, and from Belle, $R_{f_0/\phi} = 0.206^{+0.055}_{-0.034}(\rm stat.) \pm 0.052 (\rm syst.)$~\cite{belle_f0}. \begin{figure}[tb] \includegraphics[width=75mm]{BsJpsifnot_fit.eps} \hskip1cm \includegraphics[width=75mm]{d0_f0.eps} \caption{ Invariant mass of $\mu^+\mu^-\pi^+\pi^-$ from $B_s \rightarrow J/\psi f_0(980)$, $f_0(980) \rightarrow \pi^+ \pi^-$ candidates from the CDF experiment (left) and from the D0 experiment (right).} \label{fig:cdf_f0} \end{figure} In addition to the relative branching fraction, $R_{f_0/\phi}$, the CDF experiment also measured the mean lifetime of the $B_s$ meson in $B_s \rightarrow J/\psi f_0(980)$ decays, $\tau(B_s \rightarrow J/\psi f_0(980)) = 1.70^{+0.12}_{-0.11}(\rm stat.) \pm 0.03(\rm syst.)$~ps. This result is in good agreement with theoretical expectations as well as with other determinations of the $B_s^H$ lifetime. \section{Branching Fraction, Polarization and CP Violation in $B_s \rightarrow \phi \phi$ Decays} Studies of charmless $B_s \rightarrow \phi \phi$ decays were first performed by the CDF experiment.
We present the first measurements of the branching ratio and of the polarization fractions, and a search for CP violation~\cite{CDF_phiphi_pol}, in these decays using data corresponding to 2.9~fb$^{-1}$ of integrated luminosity. Charmless $B_s$ decays are still to be fully understood. They offer the possibility to test our current theoretical understanding and represent promising ways to search for physics beyond the Standard Model. The $B_s \rightarrow \phi \phi$ decay is part of the so-called $B \rightarrow VV$ family in which the initial state $B$-meson is a pseudo-scalar (spin 0) and the final state $VV$ contains two vector mesons (spin 1). In particular the final state of the $B_s \rightarrow \phi \phi$ decay is a superposition of CP eigenstates depending on the orbital angular momenta of the two $\phi$ mesons. Such decays can be used to measure the $B_s$ decay width difference ($\Delta \Gamma_s$) and the phase responsible for CP violation in the interference between decays with and without mixing. To conserve the total angular momentum in $B_s \rightarrow \phi \phi$ decays, the relative orbital angular momentum between the two $\phi$ mesons in the final state must be either 0, 1 or 2. In the angular momentum space, there are various bases which can be used to analyze decays of pseudo-scalars to two vector mesons, but any formalism involves three independent amplitudes for the three different polarizations of the decay products in the final state. Measuring the polarization fractions amounts to an important test of the corresponding theoretical predictions. Within the SM, the dominant process that contributes to the $B_s \rightarrow \phi \phi$ decay is the $b \rightarrow s \bar{s} s$ penguin diagram. The same penguin amplitude appears in other $B \rightarrow VV$ processes which exhibit significant discrepancies between the measured polarization fractions and the SM predictions.
Explanations involving both new physics scenarios as well as newly accounted SM effects have been suggested to explain the observations. However, none of the existing scenarios is convincing enough. To solve this ``polarization puzzle'' it is important to study as many $B \rightarrow VV$ decays as available. The first polarization analysis of $B_s \rightarrow \phi \phi$ decays, performed by the CDF experiment, is presented here together with an updated measurement of the $B_s \rightarrow \phi \phi$ branching fraction. The $B_s \rightarrow \phi \phi$ invariant mass distribution is shown in Fig.~\ref{fig_bs_phi_phi_tp}. The ratio of branching fractions is determined: \begin{center} $\frac{BR(B_s \rightarrow \phi\phi)}{BR(B_s \rightarrow J/\psi \phi)} = [1.78 \pm 0.14 (\rm stat.) \pm 0.20 (\rm syst.)] \times 10^{-2}$ \end{center} Using the experimental value of the $B_s \rightarrow J/\psi \phi$ branching ratio we obtain: \begin{center} $BR(B_s \rightarrow \phi\phi) = [2.32 \pm 0.18(\rm stat.) \pm 0.26 (\rm syst.) \pm 0.78 (\rm br)] \times 10^{-5}$, \end{center} using the $BR(B_s \rightarrow J/\psi \phi)$ from~\cite{bs_jpsiphi_br}, which contributes the dominant uncertainty, labeled (br). This result is compatible with the initial observation~\cite{Bs_to_phiphi_180pb}, with substantial improvement on the statistical uncertainty. The result is also compatible with recent theoretical calculations~\cite{TH_br1} and~\cite{TH_br2}. The polarization fractions and the strong phase $\delta_{||} = {\rm arg}(A_{||}A^*_{0})$ are measured as: \begin{center} $|A_0|^2 = 0.348 \pm 0.041 (\rm stat.) \pm 0.021 (\rm syst.)$ \\ $|A_{||}|^2 = 0.287 \pm 0.043 (\rm stat.) \pm 0.011 (\rm syst.)$ \\ $|A_{\perp}|^2 = 0.365 \pm 0.044 (\rm stat.) \pm 0.027 (\rm syst.)$ \\ cos$(\delta_{||}) = -0.91^{+0.15}_{-0.13} (\rm stat.) \pm 0.09 (\rm syst.)$ \end{center} The longitudinal and transverse polarization fractions are: \begin{center} $ f_L = 0.348 \pm 0.041 (\rm stat.) 
\pm 0.021 (\rm syst.), $ \\ $ f_T = 0.652 \pm 0.041 (\rm stat.) \pm 0.021 (\rm syst.) $ \end{center} It is clear from this measurement that the SM expected amplitude hierarchy $|A_0| \gg |A_{||}| \simeq |A_{\perp}|$ is not valid in $B_s \rightarrow \phi \phi$ decays. Instead, the observed relation between the polarization amplitudes is given by: $|A_0| \simeq |A_{||}| \gtrsim |A_{\perp}|$, which is similar to the measurements for the $\bar b \rightarrow \bar s$ penguin transition of $B \rightarrow \phi K^*$ decays~\cite{ct8}, which were at the origin of the polarization puzzle. The results are compared with various theoretical predictions of the polarization amplitudes. We find that the central values are consistent within the uncertainty ranges with the expectations of QCD factorization~\cite{TH_br1} and with~\cite{HaiYangCheng_new}, while they are not in good agreement with the expectation of perturbative QCD~\cite{TH_br2}. Although the $B_s \rightarrow \phi \phi$ data sample size does not allow the investigation of the mixing induced CP-violation, a class of CP-violating effects which can reveal the presence of NP are the Triple Product ($TP$) correlations~\cite{phiphi_tp}. $TP$'s are defined as: $TP = {\vec p} \cdot ({\vec q_1} \times {\vec q_2})$ where ${\vec p}$ is a momentum, and ${\vec q_1}$ and ${\vec q_2}$ can be either the spins or momenta of the decay particles. Triple products are odd variables under time reversal (T), and therefore they constitute potential signals of CP violation. The $TP$ asymmetry is defined as: \begin{center} $A_{TP} = \frac{\Gamma(TP>0) - \Gamma(TP<0)}{\Gamma(TP>0) + \Gamma(TP<0)}$, \end{center} where $\Gamma$ is the decay rate of the process in question. Most of these $TP$ asymmetries are expected to be small in the SM, but can be enhanced in the presence of NP in the decay. In the untagged case the $TP$ asymmetries are proportional to the so-called ``true'' $TP$ asymmetry, which is a true CP-violating effect.
In what follows, for brevity, we refer to them as $TP$ only. In the $B_s \rightarrow \phi \phi$ decays, there are two Triple Products: $TP_2$ is proportional to ${\rm Im}(A_{||} A_{\perp})$ and $TP_1$ is related to ${\rm Im}(A_0 A_{\perp})$. $TP_2$ can be probed through the observable $u = \cos\varphi \sin\varphi$, where $\varphi$ is the angle between the two $\phi$ meson decay planes. The asymmetry in $u$, $A_u$, is proportional to the asymmetry of $TP_2$ and is defined as: $A_u = (N^+ - N^-) / (N^+ + N^-)$, where $N^+ (N^-)$ is the number of events with $u>0$ ($u<0$). In a similar way, we define an asymmetry $A_v$ for the variable $v = \sin\varphi$ if $\cos\theta_1 \cos\theta_2>0$ and $v = \sin(-\varphi)$ if $\cos\theta_1 \cos\theta_2 \le 0$. The asymmetry $A_v$ is proportional to the asymmetry of $TP_1$. The $u$ and $v$ distributions are shown in Fig.~\ref{fig_bs_phi_phi_tp} for sideband-subtracted signal events. \begin{figure}[tb] \includegraphics[width=65mm]{MassProj_400x400_p1.eps} \includegraphics[width=50mm]{u_distr.eps} \includegraphics[width=50mm]{v_distr.eps} \caption{ Invariant mass of $\phi(\rightarrow K^+K^-) \phi(\rightarrow K^+K^-)$ (left). The $u$ (center) and $v$ (right) distributions in $B_s \rightarrow \phi \phi$ for sideband-subtracted signal events.} \label{fig_bs_phi_phi_tp} \end{figure} The measured asymmetries of the two T-odd helicity angle functions are: \begin{center} $A_u = -0.007 \pm 0.064 (\rm stat.) \pm 0.018 (\rm syst.)$, and \\ $A_v = -0.120 \pm 0.064 (\rm stat.) \pm 0.016 (\rm syst.)$. \end{center} The first asymmetry, $A_u$, is well consistent with zero within experimental uncertainties, while the second one, $A_v$, is 1.8 standard deviations from zero considering both statistical and systematic uncertainties. These asymmetries constrain the size of two T-violating true Triple Product asymmetries of the $B_s \rightarrow \phi \phi$ decay, expected to be null in the SM.
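The counting asymmetry defined above can be illustrated with a toy calculation. The sketch below (purely illustrative, not the experiment's code) builds $TP = {\vec p} \cdot ({\vec q_1} \times {\vec q_2})$ for random, T-symmetric momentum configurations and checks that the resulting asymmetry is consistent with zero, as expected in the absence of CP violation.

```python
# Toy illustration of a triple-product asymmetry:
#   TP = p . (q1 x q2),  A_TP = (N(TP>0) - N(TP<0)) / (N(TP>0) + N(TP<0)).
import random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def tp_asymmetry(events):
    """events: iterable of (p, q1, q2) three-vector triplets."""
    signs = [dot(p, cross(q1, q2)) for p, q1, q2 in events]
    n_pos = sum(1 for s in signs if s > 0)
    n_neg = sum(1 for s in signs if s < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# T-symmetric toy sample: isotropic random momenta give A_TP ~ 0
random.seed(1)
rand_vec = lambda: tuple(random.uniform(-1.0, 1.0) for _ in range(3))
events = [(rand_vec(), rand_vec(), rand_vec()) for _ in range(20000)]
print(tp_asymmetry(events))  # ~0 within statistical fluctuations
```

Swapping ${\vec q_1}$ and ${\vec q_2}$ flips the sign of every $TP$ and hence of the asymmetry, reflecting the T-odd character of the observable.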
\section{CP Violation in $B \rightarrow DK$ Decays} The branching fractions and CP asymmetries of $B^- \rightarrow D^0 K^-$ modes allow a theoretically clean way of measuring the CKM angle $\gamma$, which is the least well-known CKM angle, with uncertainties of about 10--20 degrees. In particular, the ADS method~\cite{ads1,ads2} makes use of modes where the $D^0$ decays in the doubly-Cabibbo-suppressed (DCS) mode $D^0 \rightarrow K^+ \pi^-$. The large interference between the decays in which the $B^-$ decays to $D^0 K^-$ through a color-favored $b \rightarrow c$ transition, followed by the DCS decay $D^0 \rightarrow K^+ \pi^-$, and those in which the $B^-$ decays to $\bar{D}^0 K^-$ through a color-suppressed $b \rightarrow u$ transition, followed by the Cabibbo-favored (CF) decay $\bar{D}^0 \rightarrow K^+\pi^-$, can lead to measurable CP asymmetries, from which the $\gamma$ angle can be extracted. The observables of the ADS method are: \begin{center} $R_{ADS}(K) = \frac{BR(B^- \rightarrow [K^+ \pi^-]_D K^-) + BR(B^+ \rightarrow [K^- \pi^+]_D K^+)}{BR(B^- \rightarrow [K^- \pi^+]_D K^-) + BR(B^+ \rightarrow [K^+ \pi^-]_D K^+)}$ \\ $A_{ADS}(K) = \frac{BR(B^- \rightarrow [K^+ \pi^-]_D K^-) - BR(B^+ \rightarrow [K^- \pi^+]_D K^+)}{BR(B^- \rightarrow [K^+ \pi^-]_D K^-) + BR(B^+ \rightarrow [K^- \pi^+]_D K^+)}$ \\ $R^{\pm}(K) = \frac{BR(B^{\pm} \rightarrow [K^{\mp} \pi^{\pm}]_D K^{\pm})}{BR(B^{\pm} \rightarrow [K^{\pm} \pi^{\mp}]_D K^{\pm})}$ \\ \end{center} $R_{ADS}(K)$ and $A_{ADS}(K)$ are related to the $\gamma$ angle through the relations: \begin{center} $R_{ADS}(K) = r_D^2 + r_B^2 + 2 r_B r_D \cos\gamma \cos(\delta_B + \delta_D)$ and \\ $A_{ADS}(K) = 2 r_B r_D \sin\gamma \sin(\delta_B + \delta_D) / R_{ADS}(K)$ \end{center} where $r_B = |A(b \rightarrow u) / A(b \rightarrow c)|$ and $\delta_B = {\rm arg}[A(b \rightarrow u) / A(b \rightarrow c)]$. $r_D$ and $\delta_D$ are the corresponding amplitude ratio and strong phase difference of the $D$ meson decay amplitudes.
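These relations are easy to explore numerically; the following sketch of the standard ADS relations is illustrative only (parameter values are hypothetical, angles in radians):

```python
import math

def ads_observables(r_b, r_d, gamma, delta_b, delta_d):
    """Standard ADS relations for R_ADS and A_ADS (angles in radians)."""
    r_ads = (r_d**2 + r_b**2
             + 2.0 * r_b * r_d * math.cos(gamma) * math.cos(delta_b + delta_d))
    a_ads = (2.0 * r_b * r_d * math.sin(gamma)
             * math.sin(delta_b + delta_d)) / r_ads
    return r_ads, a_ads

def max_asymmetry(r_b, r_d):
    """A_ADS(max) = 2 r_B r_D / (r_B^2 + r_D^2)."""
    return 2.0 * r_b * r_d / (r_b**2 + r_d**2)
```

When $\cos\gamma \cos(\delta_B + \delta_D) = 0$ and $\sin\gamma \sin(\delta_B + \delta_D) = 1$, the asymmetry reaches its maximum size, as discussed in the text.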
As can be seen from the expressions above, $A_{ADS} ({\rm max}) = 2r_B r_D / (r_B^2 + r_D^2)$ is the maximum size of the asymmetry. For given values of $r_B(\pi)$ and $r_D$, sizeable asymmetries may also be found for $B^- \rightarrow D^0 \pi^-$ decays, so interesting observables are: \begin{center} $R_{ADS}(\pi) = \frac{BR(B^- \rightarrow [K^+ \pi^-]_D \pi^-) + BR(B^+ \rightarrow [K^- \pi^+]_D \pi^+)}{BR(B^- \rightarrow [K^- \pi^+]_D \pi^-) + BR(B^+ \rightarrow [K^+ \pi^-]_D \pi^+)}$ \\ $A_{ADS}(\pi) = \frac{BR(B^- \rightarrow [K^+ \pi^-]_D \pi^-) - BR(B^+ \rightarrow [K^- \pi^+]_D \pi^+)}{BR(B^- \rightarrow [K^+ \pi^-]_D \pi^-) + BR(B^+ \rightarrow [K^- \pi^+]_D \pi^+)} $ \\ $R^{\pm}(\pi) = \frac{BR(B^{\pm} \rightarrow [K^{\mp} \pi^{\pm}]_D \pi^{\pm})}{BR(B^{\pm} \rightarrow [K^{\pm} \pi^{\mp}]_D \pi^{\pm})}$ \end{center} The CDF experiment presents an ADS analysis~\cite{cdf_b_dk} on a data sample corresponding to 7~fb$^{-1}$ of integrated luminosity. An extended maximum likelihood fit that combines mass and particle identification information is used to statistically separate the $B^- \rightarrow D K^-$ contributions from the $B^- \rightarrow D\pi^-$ signals and from the combinatorial and physics backgrounds. The $B^- \rightarrow D \pi^-$ signal is reconstructed with a statistical significance of $3.6\sigma$. The suppressed signals $B^- \rightarrow D K^-$ are reconstructed with a significance of $3.2\sigma$, including systematics. The plots in Fig.~\ref{BDK} show the $B$ invariant mass distribution for positive and negative charges of the suppressed sample. \begin{figure}[tb] \includegraphics[width=55mm]{dcs_pos_bg_togheter.eps} \includegraphics[width=55mm]{dcs_neg_bg_togheter.eps} \includegraphics[width=55mm]{c_global_Mpipi_projection_log.eps} \caption{ Invariant mass distribution of $B^- \rightarrow D_{\rm suppressed} h^-$ for positive (left) and negative charges (center); the $h^-$ hadron is either a $K^-$ or a $\pi^-$.
Invariant mass of $\pi^+ \pi^-$ from $B \rightarrow h^+ h^-$ candidates, where the $h$ hadron is either a pion or a kaon (right).} \label{BDK} \end{figure} The ratios of the suppressed to favored branching fractions are measured as: \begin{center} $R_{ADS}(K) = [22.0 \pm 8.6(\rm stat.) \pm 2.6(\rm syst.)] \times 10^{-3}$, \\ $R^+(K) = [42.6 \pm 13.7(\rm stat.) \pm 2.8(\rm syst.)] \times 10^{-3}$, \\ $R^-(K) = [ 3.8 \pm 10.3(\rm stat.) \pm 2.7(\rm syst.)] \times 10^{-3}$, \\ $R_{ADS}(\pi) = [ 2.8 \pm 0.7(\rm stat.) \pm 0.4(\rm syst.)] \times 10^{-3}$, \\ $R^+(\pi) = [ 2.4 \pm 1.0(\rm stat.) \pm 0.4(\rm syst.)] \times 10^{-3}$, \\ $R^-(\pi) = [ 3.1 \pm 1.1(\rm stat.) \pm 0.4(\rm syst.)] \times 10^{-3}$, \end{center} as well as the direct CP-violating asymmetries \begin{center} $A_{ADS}(K) = -0.82\pm 0.44(\rm stat.) \pm 0.09(\rm syst.)$, \\ $A_{ADS}(\pi) = 0.13\pm 0.25(\rm stat.) \pm 0.02(\rm syst.)$. \end{center} The results are in agreement with, and competitive with, those from the $B$ factories~\cite{ads_bfac} and the LHCb experiment~\cite{ads_lhcb}. \section{Two Body Charmless B Decays} The decay modes of $B$-mesons into pairs of charmless pseudo-scalar mesons are effective probes of the quark-mixing (CKM) matrix and are sensitive to potential new physics effects. Their branching fractions and CP asymmetries can be predicted with good accuracy and compared to the rich experimental data available for $B_u$ and $B_d$ mesons produced in large quantities in $\Upsilon(4S)$ decays~\cite{bhh1}. Measurements of the similar modes predicted for the $B_s$ meson are important to supplement our understanding of $B$-meson decays. The measurement of observables from both strange and non-strange $B$-mesons allows a cancellation of hadronic uncertainties, thus enhancing the precision of the extraction of physics parameters from experimental data~\cite{bhh2,bhh3,bhh4,bhh5}.
A combination of $B^0 \rightarrow \pi^+ \pi^-$ and $B_s \rightarrow K^+K^-$ observables has been proposed as a way to directly determine the phase of the $V_{ub}$ element of the CKM matrix (angle $\gamma$), or alternatively as a test of our understanding of the dynamics of $B$ hadron decays, when compared with other determinations of $\gamma$~\cite{bhh6}. The $B_s \rightarrow K^-\pi^+$ mode can also be used to measure $\gamma$~\cite{bhh3}, and its CP asymmetry is a powerful model-independent test~\cite{bhh7} of the source of the direct CP asymmetry recently observed in the $B^0 \rightarrow K^+ \pi^-$ mode~\cite{bhh8}. The $B_s \rightarrow \pi^+ \pi^-$ mode proceeds only through annihilation diagrams, which are currently poorly known and a source of significant uncertainty in many theoretical calculations~\cite{bhh9}. Its features are similar to those of the $B^0 \rightarrow K^+ K^-$ mode, but it has a larger predicted branching fraction~\cite{bhh10}; a measurement of both modes would allow a determination of the strength of penguin annihilation~\cite{bhh4}. Channels previously investigated by the CDF experiment are $B_s \rightarrow K^+K^-$~\cite{cdf_bhh1}, $B_s \rightarrow K^- \pi^+$, $\Lambda^0_b \rightarrow p \pi^-$ and $\Lambda^0_b \rightarrow p K^-$~\cite{cdf_bhh2}, and the corresponding asymmetries $A_{CP}(B_s \rightarrow K^- \pi^+)$, $A_{CP}(\Lambda^0_b \rightarrow p \pi^-)$ and $A_{CP}(\Lambda^0_b \rightarrow p K^-)$~\cite{cdf_bhh3}. Recently, the CDF experiment has established~\cite{bs_pipi} the first evidence for $B_s \rightarrow \pi^+ \pi^-$ decays and has set bounds on the branching fraction of the $B^0 \rightarrow K^+K^-$ decay mode. Fig.~\ref{BDK} shows the invariant mass of $\pi^+ \pi^-$ from $B \rightarrow h^+ h^-$ candidates. The $B_s \rightarrow \pi^+ \pi^-$ and $B^0 \rightarrow K^+K^-$ signal yields are $94 \pm 28 (\rm stat.) \pm 11 (\rm syst.)$ and $120 \pm 49 (\rm stat.) \pm 42 (\rm syst.)$ events, respectively.
The branching fractions are measured as: \begin{center} $BR(B_s \rightarrow \pi^+ \pi^-) = (0.57 \pm 0.15 (\rm stat.) \pm 0.10 (\rm syst.)) \times 10^{-6}$, \\ $BR(B^0 \rightarrow K^+ K^-) = (0.23 \pm 0.10 (\rm stat.) \pm 0.10 (\rm syst.)) \times 10^{-6}$, \\ $BR(B^0 \rightarrow K^+ K^-) \in [0.05, 0.46] \times 10^{-6}$ at $90\%$~C.L. \end{center} \section{Observation of $\Xi^0_b$} The Tevatron experiments, D0 and CDF, have made major contributions to $b$-baryon spectroscopy, with the observations of the $\Xi^{-}_{b} (dsb)$~\cite{bb1}, $\Sigma_b^{*} (uub, ddb)$~\cite{bb2} and $\Omega_b (ssb)$~\cite{bb2} baryons. We report the observation by the CDF experiment of an additional heavy baryon, $\Xi^0_b (usb)$~\cite{cdf_Xi_b}, and the measurement of its mass. The measurement uses 4.2~fb$^{-1}$ of integrated luminosity. The $\Xi^0_b$ baryon is observed through its decay \begin{center} $\Xi_b^0 \rightarrow \Xi_c^+ \pi^-$, where \\ $\Xi_c^+ \rightarrow \Xi^- \pi^+ \pi^+$, $\Xi^- \rightarrow \Lambda \pi^-$ and $\Lambda \rightarrow p \pi^-$. \end{center} In addition, the $\Xi_b^-$ baryon is observed through the similar decay chain \begin{center} $\Xi_b^- \rightarrow \Xi_c^0 \pi^-$, where \\ $\Xi_c^0 \rightarrow \Xi^- \pi^+$, $\Xi^- \rightarrow \Lambda \pi^-$ and $\Lambda \rightarrow p \pi^-$. \end{center} The $\Xi_b^0$ and $\Xi_b^-$ candidate mass distributions are shown in Fig.~\ref{fig:xi}. There are $25.3^{+5.6}_{-5.4}$~$\Xi_b^0$ candidates and $25.8^{+5.5}_{-5.2}$~$\Xi_b^-$ candidates with measured masses of $5787.8 \pm 5.0 (\rm stat.) \pm 1.3 (\rm syst.)$~MeV/c$^2$ and $5796.7 \pm 5.1 (\rm stat.) \pm 1.4 (\rm syst.)$~MeV/c$^2$, respectively. The $\Xi_b^0$ signal significance is greater than 6$\sigma$. Neither of these decay channels has been reported previously, and the reconstruction of $\Xi_b^0$ is the first observation of this baryon in any channel.
\begin{figure}[tb] \includegraphics[width=65mm]{Fig18b.eps} \hskip0.5in \includegraphics[width=65mm]{Fig18a.eps} \caption{ Invariant mass distribution of $\Xi_c^+ \pi^-$ (left) and $\Xi_c^0 \pi^-$ (right) with overlaid fit projection. } \label{fig:xi} \end{figure} \section{Conclusions} The D0 and CDF experiments are continuing to produce a rich and exciting program in heavy flavor physics: interesting effects in the same-sign dimuon asymmetry and in $B_s \rightarrow \mu^+ \mu^-$ decays, as well as the best measurements of the CP-violating phase $\beta_s/\phi_s$. Many interesting results will benefit from the increasing data samples. It is anticipated that each of the two Tevatron experiments will accumulate approximately 10~fb$^{-1}$ of integrated luminosity by the end of the Tevatron run.
\section{Acknowledgements} We acknowledge the Natural Sciences and Engineering Research Council (Canada) and the EU Project CHARMING (Contract No. FP7-288786) for funding the work presented in this paper. \end{document}
\subsection{ICL of the \NMARacro model}
The ICL criterion of the \LBMacro extended to the \NMARacro missingness process presented in Section \ref{sect:lbmextended} has the following asymptotic form for $\Nnone$ and $\Nntwo$:
\begin{align}
ICL(\Nnq, \Nnl) =& {\max_{\Nthetab}}\; \log \Prob{\NXob,\NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb; \Nthetab} - \frac{\Nnq\Nnl}{2} \log\p{\Nnone\Nntwo} \nonumber\\
& - \frac{\Nnq-1}{2} \log\p{\Nnone} - \frac{\Nnl-1}{2} \log\p{\Nntwo} \label{annex:eq:icl}\\
& + \Nnone \log\p{2\pi} - \log\p{\Nnone} + \Nntwo \log\p{2\pi} - \log\p{\Nntwo} \nonumber\\
& + o(\log\Nnone) + o(\log\Nntwo) \nonumber \enspace.
\end{align}
\begin{proof}
With independent latent variables and independent priors on the parameters, the ICL criterion reads
\begin{align}
ICL &= \llikli\p{\NXob, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb} \nonumber\\
&= \log \int \Prob{\sachant{\NXob, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb}{\Nthetab}} \Prob{\Nthetab}\mathrm{d}\Nthetab \nonumber\\
&= \log\int\Prob{\sachant{\NXob}{\NYoneb,\NYtwob,\NAb,\NBb,\NPb,\NQb,\Npib,\Nmu}}\Prob{\Npib}\Prob{\Nmu}\mathrm{d}\Npib \mathrm{d}\Nmu \label{annex:eq:iclfull}\\
&\quad+ \log\int\Prob{\sachant{\NYoneb}{\Nalphaoneb}}\Prob{\Nalphaoneb}\mathrm{d}\Nalphaoneb + \log\int\Prob{\sachant{\NYtwob}{\Nalphatwob}}\Prob{\Nalphatwob}\mathrm{d}\Nalphatwob \nonumber\\
&\quad + \log\int\Prob{\sachant{\NAb}{\NsigmaA}}\Prob{\NsigmaA}\mathrm{d}\NsigmaA + \log\int\Prob{\sachant{\NBb}{\NsigmaB}}\Prob{\NsigmaB}\mathrm{d}\NsigmaB\nonumber \\
&\quad + \log\int\Prob{\sachant{\NPb}{\NsigmaP}}\Prob{\NsigmaP}\mathrm{d}\NsigmaP + \log\int\Prob{\sachant{\NQb}{\NsigmaQ}}\Prob{\NsigmaQ}\mathrm{d}\NsigmaQ\nonumber \enspace.
\end{align}
As in the ICL developed by \cite{keribinicl} for the standard \LBMacro, we set non-informative Dirichlet distribution $\mathcal{D}(a,...,a)$ priors on $\Nalphaoneb$ and $\Nalphatwob$:
\begin{align*}
\log\Prob{\NYoneb} &=\log\int\Prob{\sachant{\NYoneb}{\Nalphaoneb}}\Prob{\Nalphaoneb;a}\mathrm{d}\Nalphaoneb \\
&= \log\int \prod_{\Ni\Nq}{\p{\Nalphaone_\Nq}^{\NYone_{\Ni\Nq}}} \frac{1}{\mathcal{B}(a)}\prod_{\Nq}{\p{\Nalphaone_\Nq}^{a-1}}\mathrm{d}\Nalphaoneb \\
&= \log \mathcal{B}(a+\sum_{\Ni}{\NYoneb_\Ni}) - \log {\mathcal{B}(a)} \\
&= \sum_\Nq{\log\Gamma(\NYone_{\!:\Nq}+a)}+\log\Gamma(\Nnq a)-\log\Gamma(\Nnone+\Nnq a)-\Nnq\log\Gamma(a) \enspace,
\end{align*}
where {$\NYone_{\!:\Nq} = \sum_\Ni\NYone_{\Ni\Nq}$}. The Stirling approximation {$\log \Gamma(x)= x\log x - x -\frac{1}{2}\log x + o(\log x)$} leads to the following asymptotic development of $\log \Prob{\NYoneb}$:
\begin{align*}
\log \Prob{\NYoneb} &= \sum_\Nq{\log \Gamma(\NYone_{:\Nq}+a) } - \log \Gamma(\Nnone+\Nnq a) +o(\log \Nnone) \\
&= \sum_{\Nq}{\p{\NYone_{\!:\Nq}+a-\frac{1}{2}} \log\NYone_{\!:\Nq}} - \Nnone \\
& \quad - \p{\Nnone \log\Nnone + \Nnq a \log\Nnone - \Nnone -\frac{1}{2}\log\Nnone} +o(\log \Nnone) \enspace.
\end{align*}
With the non-informative Jeffreys prior $a=\frac{1}{2}$, this gives:
\begin{align}
\log \Prob{\NYoneb} &= \sum_{\Nq}{\NYone_{\!:\Nq} \log(\frac{1}{\Nnone}\NYone_{\!:\Nq})} - \frac{\Nnq-1}{2} \log\Nnone +o(\log \Nnone) \nonumber\\
&= \underset{\Nalphaoneb}{\max} \log \Prob{\NYoneb; \Nalphaoneb} - \frac{\Nnq-1}{2}\log \Nnone +o(\log \Nnone) \label{equ:asymptoticY1} \enspace.
\end{align}
Similarly, we get:
\begin{align}
\log \Prob{\NYtwob} &= \sum_\Nl{\log \Gamma(\NYtwo_{:\Nl}+a) } + \log \Gamma(\Nnl a) - \log \Gamma(\Nntwo+\Nnl a) - \Nnl \log \Gamma(a) \nonumber\\
&= \underset{\Nalphatwob}{\max} \log \Prob{\NYtwob; \Nalphatwob} - \frac{\Nnl-1}{2}\log \Nntwo +o(\log \Nntwo) \label{equ:asymptoticY2} \enspace,
\end{align}
where {$\NYtwo_{:\Nl} = \sum_\Nj\NYtwo_{\Nj\Nl}$}. We set non-informative InverseGamma($\beta$, $\beta$) distributions (as $\beta$ tends to zero) as priors on $\NsigmaA$, $\NsigmaB$, $\NsigmaP$ and $\NsigmaQ$:
\begin{align*}
\log\Prob{\NAb} &= \log\int{\Prob{\NAb|\NsigmaA}\Prob{\NsigmaA;\beta}\; \mathrm{d}\NsigmaA}\\
&= \log\int \p{2\NsigmaA}^{-\frac{\Nnone}{2}} \exp\p{-\frac{\sum{\NA_\Ni^2}}{2\NsigmaA}}\; \frac{\beta^\beta}{\Gamma(\beta)} \exp\p{-\frac{\beta}{\NsigmaA}} \p{\NsigmaA}^{-\beta-1} \; \mathrm{d}\NsigmaA \\
&= \log\frac{\beta^\beta}{\Gamma(\beta)} 2^{\p{-\frac{\Nnone}{2}}}\int \NsigmaA{}^{\p{-\frac{\Nnone}{2}-\beta-1}} \exp\p{-\frac{2\beta+\sum{\NA_\Ni^2}}{2}\cdot \frac{1}{\NsigmaA}} \; \mathrm{d}\NsigmaA \\
&= \log\frac{\beta^\beta}{\Gamma(\beta)} 2^{\beta} \; \p{2\beta+\sum{\NA_\Ni^2}}^{\p{-\frac{\Nnone}{2}-\beta}} \; \Gamma(\frac{\Nnone}{2}+\beta) \enspace.
\end{align*}
To consider a non-informative InverseGamma($\beta$, $\beta$) distribution, we perform a first-order Taylor expansion as $\beta$ tends to 0:
\begin{equation*}
\log \Prob{\NAb} \approx \log \Gamma(\frac{\Nnone}{2}) + \log \beta - \frac{\Nnone}{2} \log(\sum{\NA_\Ni^2}) \enspace.
\end{equation*}
Using the Stirling approximation of $\Gamma(x)$, we get the following asymptotic development of $\log\Prob{\NAb}$:
\begin{align}
\log\Prob{\NAb} &= \frac{\Nnone}{2} \log \Nnone - \frac{\Nnone}{2} - \frac{1}{2} \log \Nnone - \frac{\Nnone}{2} \log \sum{\NA_\Ni^2} +o(\log \Nnone) \nonumber\\
&= \underset{\NsigmaA}{\max} \log \Prob{\NAb; \NsigmaA} + \frac{\Nnone}{2}\log(2\pi) - \frac{1}{2} \log \Nnone +o(\log \Nnone) \enspace.
\label{equ:asymptoticA}
\end{align}
Similarly, we get:
\begin{align}
\log \Prob{\NBb} = \underset{\NsigmaB}{\max} \log \Prob{\NBb; \NsigmaB} + \frac{\Nnone}{2}\log(2\pi) - \frac{1}{2} \log \Nnone +o(\log \Nnone) \nonumber\\
\log \Prob{\NPb} = \underset{\NsigmaP}{\max} \log \Prob{\NPb; \NsigmaP} + \frac{\Nntwo}{2}\log(2\pi) - \frac{1}{2} \log \Nntwo +o(\log \Nntwo) \label{equ:asymptoticBPQ} \\
\log \Prob{\NQb} = \underset{\NsigmaQ}{\max} \log \Prob{\NQb; \NsigmaQ} + \frac{\Nntwo}{2}\log(2\pi) - \frac{1}{2} \log \Nntwo +o(\log \Nntwo) \nonumber \enspace.
\end{align}
Using the standard BIC approximation, we have
\begin{align}
\log \Prob{\NXob|\NYoneb, \NYtwob} &= \log \int \Prob{\NXob|\NYoneb, \NYtwob, \Npib, \Nmu}\Prob{\Npib}\Prob{\Nmu} \mathrm{d}\Npib \mathrm{d}\Nmu\nonumber\\
&=\; \max_{\Npib, \Nmu} \log \Prob{\NXob|\NYoneb, \NYtwob; \Npib, \Nmu} - \frac{\Nnq\Nnl}{2}\log(\Nnone\Nntwo) \label{equ:bic} \\
&\quad + o(\log \Nnone)+o(\log \Nntwo) \nonumber \enspace.
\end{align}
The ICL criterion \ref{annex:eq:icl} is directly derived from Equations \ref{annex:eq:iclfull}, \ref{equ:asymptoticY1}, \ref{equ:asymptoticY2}, \ref{equ:asymptoticA}, \ref{equ:asymptoticBPQ} and \ref{equ:bic}.
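The Dirichlet-multinomial asymptotics used in this proof can be checked numerically. The sketch below is illustrative only (the counts are arbitrary; it assumes the Jeffreys prior $a=1/2$), comparing the exact marginal with the maximized multinomial log-likelihood minus the $\frac{k-1}{2}\log n$ penalty:

```python
from math import lgamma, log

def exact_log_marginal(counts, a=0.5):
    """Exact log P(Y) under a symmetric Dirichlet(a, ..., a) prior."""
    n, k = sum(counts), len(counts)
    return (sum(lgamma(c + a) for c in counts) + lgamma(k * a)
            - lgamma(n + k * a) - k * lgamma(a))

def asymptotic_log_marginal(counts):
    """Maximized multinomial log-likelihood minus the (k-1)/2 log n penalty."""
    n, k = sum(counts), len(counts)
    return sum(c * log(c / n) for c in counts) - 0.5 * (k - 1) * log(n)
```

For large class counts the two quantities agree up to an $o(\log n)$ remainder, as stated in the proof.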
\end{proof}
\subsection{ICL of the \LBMacro with MAR data}
We consider the following \LBMacro extended with the MAR missingness process:
\begin{align*}
&\text{Latent Block Model} \\
&\quad \NYone_\Ni \iidsim \multinomial1{\Nalphaoneb}, \qquad \Nalphaoneb \in \mathbf{S}_{\Nnq-1} \\
&\quad \NYtwo_\Nj \iidsim \multinomial1{\Nalphatwob}, \qquad \Nalphatwob \in \mathbf{S}_{\Nnl-1} \\
&\quad \p{\sachant{\NXc_{\Ni\Nj}}{\NYone_{\Ni}=\Nq, \NYtwo_{\Nj} = \Nl}} \indsim \extbernoulli{}{\Npi_{\Nq\Nl}}, \qquad \Npi_{ql} \in [0,1]\\
&\text{MAR data model} \\
&\quad\NA_\Ni \iidsim \norm0{\NsigmaA}, \qquad \NsigmaA \in \mathds{R}_{+}^{*} \\
&\quad\NP_\Nj \iidsim \norm0{\NsigmaP}, \qquad \NsigmaP \in \mathds{R}_{+}^{*} \\
&\quad\p{\sachant{ M_{\Ni\Nj} }{\NA_\Ni,\NP_\Nj}} \indsim \bernoulli{\logistic\p{\Nmu+\NA_\Ni +\NP_\Nj}} \\
&\text{Observations are generated according to:} \\
&\quad\p{\sachant{ \NXo_{\Ni\Nj} }{ \NXc_{\Ni\Nj}, M_{\Ni\Nj}}} = \left\{ \begin{array}{cll} \NXc_{\Ni\Nj} &\text{ if } \quad \NM_{\Ni\Nj} = 1 \\ \NNA &\text{ if } \quad \NM_{\Ni\Nj} = 0 \\ \end{array} \right.
\end{align*}
The ICL of this model has the following asymptotic form for $\Nnone$ and $\Nntwo$:
\begin{align}
ICL(\Nnq, \Nnl) =\;& \max_{\Nthetab}\;\log\Prob{\NXob,\NYoneb,\NYtwob,\NAb, \NPb;\Nthetab} - \frac{\Nnq\Nnl}{2}\log\p{\Nnone\Nntwo} \nonumber\\
& - \frac{\Nnq-1}{2} \log\p{\Nnone} - \frac{\Nnl-1}{2} \log\p{\Nntwo} \label{annex:eq:iclmar}\\
& + \frac{1}{2}\p{\Nnone \log\p{2\pi} - \log\p{\Nnone} + \Nntwo \log\p{2\pi} - \log\p{\Nntwo}} \nonumber\\
& + o(\log\Nnone) + o(\log\Nntwo) \nonumber \enspace.
\end{align}
\section{Computing the criterion $\mathcal{J}\p{\NRX, \Ntheta}$}\label{annexcriteria}
\input{annex_criteria}
\section{Initialization of the VEM algorithm with spectral clustering}
\label{annex:init}
\input{annex_init.tex}
\section{Asymptotic form of the Integrated Completed Likelihood}\label{annexicl}
\input{annex_icl}
\section{Supplemental figures for estimations}
\label{annex:estimation}
\input{annex_estimation}
\section{Supplemental figures for the French national assembly votes analysis}
\label{annex:asnt}
Figure~\ref{fig:reordered_asnt_small} displays the reordered matrix of votes derived from a block clustering with a small number of classes. Such a simplification may be helpful for identifying global trends. With this model, the three MP classes are broadly identified as gathering the right-wing (first class) and left-wing (second class) opposition parties, the last class being formed of the political groups supporting the government. The opposition systems appear clearly: on the texts from classes A and E, the votes contrast the membership to the opposition parties versus the governmental alliance, whereas on texts from classes C and D, they separate the left-wing from the right-wing oppositions. Class B gathers various texts on topics of rather general agreement pertaining to social or health matters.
\begin{figure}
\centering
\includegraphics[height=5cm,trim={0cm 1.75cm 0cm 0cm},clip]{img/3_5_asnt.pdf}%
\includegraphics[height=4.85cm,trim={5.5cm 0.6cm 1.5cm 0cm},clip]{img/3_5_prob.pdf}
\caption{Left: matrix of votes reordered according to the row and column classes, for the MNAR LBM with 3 MP classes and 5 text classes. The red lines delineate class boundaries. The counts of MPs belonging to the three most represented political groups in each MP class are given on the left.
Right: summary of the inferred opinions (expressed or not) for all classes of texts and MPs, as given by the estimated probability to support a text in each block of the reordered matrix. }
\label{fig:reordered_asnt_small}
\end{figure}
Going back to the model selected by ICL described in Section~\ref{sec:realdata}, we analyze the text propensities to be voted upon and to be positively perceived by nonvoters. These propensities are encoded in the values of the latent variables $\NPb$ and $\NQb$. Figure~\ref{fig:asnt_nu_c_nu_d_grp} displays the scatter plot of $\NnuP_\Nj$ and $\NnuQ_\Nj$, the maximum \textit{a posteriori} estimates of $\NP_\Nj$ and $\NQ_\Nj$ under the variational distribution, for all voted texts. The abscissa $\NnuPb$ reflects the mobilization on the texts, with higher mobilization for higher values, and the ordinate $\NnuQb$ represents the additional effect of mobilizing specifically supporting voters. The fourteen-cluster membership of texts (there is no obvious relevant classification for texts) is indicated by the plotting symbol.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{img/asnt_nu_c_nu_d_grp.pdf}
\caption{Maximum \textit{a posteriori} estimates of the text propensities ($\NnuP_\Nj$, $\NnuQ_\Nj$), with their clustering class memberships. $\NnuP_\Nj$ drives the MAR effect and $\NnuQ_\Nj$ drives the \NMARacro one. }
\label{fig:asnt_nu_c_nu_d_grp}
\end{figure}
Some relationships between missingness and membership of text classes emerge from this plot. A first cluster of texts appears in the positive quadrant, with texts mainly proposed by the government, categorized in text classes A and B. A second, smaller cluster, on the upper left, is mainly formed by texts categorized in class D, voted positively by few voters. All these texts are related to the same law project regarding housing and were voted over a short period (between 06/03/2018 and 06/08/2018).
The largest cluster, on the lower part of the graph, gathers most of the remaining texts, which would tend to be voted negatively by nonvoters. These texts were proposed by either the right-wing or left-wing opposition, and get little support from a vast majority of MPs. Note also that the small group of highly voted texts, on the right-hand side, is made of texts belonging to six text classes. This reflects the fact that our model does not link the \NMARacro effect to the LBM memberships.
\section{Introduction}
Biclustering or co-clustering simultaneously groups the rows and the columns of a data matrix. Co-clustering has found applications in many areas such as genomic analysis \citep{PONTES2015163, Kluger03spectralbiclustering}, text analysis \citep{Dhillon, SELOSSE2020107315}, collaborative filtering \citep{recsyscoclustering, shanbanerjee}, or political analysis \citep{Latouche_2011,wyse2012block}. Co-clustering methods can be divided into categories such as, but not limited to, spectral methods \citep{dhillon2001cocluster, Kluger03spectralbiclustering}, mutual information methods \citep{Dhillon}, modularity-based methods \citep{labiod2011}, non-negative matrix tri-factorization \citep{trinmf} or model-based methods. Among the model-based methods, the Latent Block Model \citep{lbmgg, govaert2010, lomet2012, keribin:hal-00802764} relies on mixtures, assuming that the observations are generated from finite mixture components in rows and columns. Most standard methods of clustering or co-clustering presuppose complete information and cannot be applied with missing data, or may provide misleading conclusions when missingness is informative.
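A minimal simulation shows why informative missingness misleads naive estimates computed on observed entries alone. The sketch is entirely illustrative (the single-block matrix and the propensities `mu0`, `mu1` are our assumptions, not a model from the literature):

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
n1, n2 = 200, 100
# Full binary data matrix: a single Bernoulli(0.3) block, for simplicity.
x_full = [[int(random.random() < 0.3) for _ in range(n2)] for _ in range(n1)]

# Informative (MNAR) mask: the log-odds of being observed depend on the
# value itself (hypothetical propensities mu0 for zeros, mu1 for ones).
mu0, mu1 = -1.0, 1.0
mask = [[int(random.random() < logistic(mu1 if x == 1 else mu0)) for x in row]
        for row in x_full]

# Naive estimate of P(X=1) from observed entries only is biased upward,
# since ones are observed more often than zeros.
observed = [x for xr, mr in zip(x_full, mask) for x, m in zip(xr, mr) if m == 1]
naive_rate = sum(observed) / len(observed)
true_rate = sum(map(sum, x_full)) / (n1 * n2)
```

Under a MAR or MCAR mechanism the same naive estimate would be (asymptotically) unbiased; the bias appears precisely because the observation probability depends on the unobserved value.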
A careful examination of the data generating process is necessary for the processing of missing values, which requires identifying the type of missingness \citep{rubin}: Missing Completely At Random (\MCARacro) refers to the mechanism in which the probability of being missing depends neither on the variable of interest nor on any other observed variable; in Missing At Random (MAR), the probability of being missing depends on some observed data but is still independent of the non-observed data; finally, Missing Not At Random (\NMARacro) refers to the mechanism in which the probability of being missing depends on the actual value of the missing data. Under the MAR hypothesis, no information on the generation of data can be extracted from its absence, but under a \NMARacro assumption, this absence is informative, and ignoring this information in likelihood-based imputation methods may lead to strong biases in estimation \citep{little2019statistical}. Missing Not At Random is also known as non-ignorable missingness, as opposed to the ignorable missingness of \MCARacro and MAR settings, since the absence of data is assumed to convey some information. In this paper, we aim at clustering the rows and columns of a data matrix whose entries are missing not at random. Equivalently, we consider the clustering of the vertices of a bipartite graph whose edges are missing not at random. For this purpose, we introduce a co-clustering model that combines a \NMARacro missingness model with the Latent Block Model (LBM). In Section \ref{sec:lbm}, we review the Latent Block Model introduced by \cite{lbmgg}. In Section \ref{sec:missingnessmodel}, we introduce our model, a LBM extended to a \NMARacro missingness process, and propose, in Section \ref{sec:inference}, a variational EM algorithm to infer its parameters. We also introduce, in Section \ref{sec:modelselection}, an Integrated Completed Likelihood (ICL) criterion to tackle model selection.
We then conduct experiments on synthetic datasets in Section \ref{sec:simulatedata} to show that the overall approach is relevant to co-cluster \NMARacro data. Finally, an analysis of the voting records of the lower house of the French Parliament is presented in Section \ref{sec:realdata}.
\subsection{Related Works}
To the best of our knowledge, all existing co-clustering methods consider that missing data is either \MCARacro or MAR \citep{SELOSSE2020106866, jacques:hal-01448299, Papalexakis}, except the one proposed by \cite{corneli:hal-01978174}, used to co-cluster ordinal data. Their model is very parsimonious, as it assumes that both data and missingness depend only on the row and column clusters. In this setting, they are able to consider \NMARacro data, even if they suppose that missingness depends only indirectly on the value of the data. The model we propose is less parsimonious, thus more flexible, as it supposes that missingness depends both on the value of the data and on the row and column indexes (not only on their respective cluster indexes). In addition, our missing data model can easily be reused for any other statistical co-clustering model, as it is weakly coupled to the generative model of the full data matrix. In the simple clustering framework, only a few mixture models handling \NMARacro data have been proposed. \citet{Marlin} combine a multinomial mixture clustering model, used as a complete data model, with a missingness model of type \NMARacro. They propose two versions of their missingness model. The first one, called CPT-v, models the data observation probability depending only on the underlying value of the data. The second one, called Logit-vd, allows the probability of a data entry to be missing to depend both on the value of the underlying data and the characteristics of the column, giving more flexibility to the model.
Our missingness model respects the symmetry of the co-clustering problem by depending identically on the characteristics of the rows and columns. \citet{Kim} propose Bayesian-BM/OR, a simple mixture model of binomials in a Bayesian formalism. The \NMARacro missingness is modeled by three factors, related to the row, the column and the data value, all three being modeled by Bernoulli variables combined by an ``or'' logical operator. The choice of this missingness model is motivated by algorithmic considerations that are not relevant for co-clustering models. \citet{tabouy}, from a graph perspective, deal with non-observed dyads during the sampling of a network and the ensuing issues in the inference of the stochastic block model. They propose three different \NMARacro sampling designs, in which observing dyads depends either on their underlying value, on the class, or on the degree of the nodes. The Stochastic Block Model, though similar to the \LBM we use, is not usable for co-clustering purposes. Also related to missing data, though not to clustering, \NMARacro data are considered in the Matrix Factorization framework. \citet{Steck} derives a weighted MF model and optimizes the parameters based on a metric that is robust to \NMARacro data. \citet{hernandez} use a double probabilistic MF model; one is for the complete data and one for the missing data, where user and item propensities are both modeled with low-rank matrices. \citet{schnabel2016recommendations} propose an empirical risk minimization framework to derive a propensity-scored matrix factorization method that can account for selection bias.
\section{The \LBM}
\label{sec:lbm}
The \LBM (\LBMacro) is a {\em co-clustering} model that classifies jointly the rows and the columns of a data matrix \citep{lbmgg}.
This probabilistic generative model assumes a double partition on the rows and the columns of a $(\Nnone \times \Nntwo)$ data matrix $\NXb$ that corresponds to a strong structure of the matrix in homogeneous blocks. This structure is unveiled by reordering the rows and columns according to their respective cluster index; for $\Nnq$ row clusters and $\Nnl$ column clusters, the reordering reveals $\Nnq\times\Nnl$ homogeneous blocks in the data matrix.
%
Note that we adopt here the original view where the data matrix is interpreted as a data table. The binary matrix $\NXb$ can also be interpreted as the biadjacency matrix of a bipartite graph, whose two sets of vertices correspond to the rows and columns of the data matrix. In this interpretation, $\NX_{\Ni\Nj} = 1$ if an edge is present between ``row node'' $\Ni$ and ``column node'' $\Nj$, and $\NX_{\Ni\Nj} = 0$ otherwise.
\ifgraph
As originally introduced, the \LBMacro considers that $\NXb$ is a random binary matrix. From a graph inference point of view, this binary matrix $\NXb$ ($\Nnone \times \Nntwo$) is the adjacency matrix of a bipartite graph with $\Nnone$ nodes of type (1) and $\Nntwo$ nodes of type (2). A random variable $\NX_{\Ni\Nj}$ is associated with each pair of nodes ($\Ni$,$\Nj$), respectively of type (1) and type (2), coding the presence or absence of an edge between $\Ni$ and $\Nj$: $\NX_{\Ni\Nj} = 1$ if the edge is present and $\NX_{\Ni\Nj} = 0$ otherwise.
\fi
For the $(\Nnone \times \Nntwo)$ data matrix $\NXb$, two partitions are defined by the latent variables $\NYoneb$ and $\NYtwob$, with $\NYoneb$ being the $\Nnone \times \Nnq$ indicator matrix of the latent row clusters ($\NYone_{\Ni\Nq} = 1$ if row $\Ni$ belongs to group $\Nq$ and $\NYone_{\Ni\Nq} = 0$ otherwise), and $\NYtwob$ being the $\Nntwo \times \Nnl$ indicator matrix of the latent column clusters.
The group indicator of row $\Ni$ will be denoted $\NYoneb_\Ni$, and similarly, the group indicator of column $\Nj$ will be denoted $\NYtwob_\Nj$. The \LBMacro makes several assumptions on the dependencies:
%
\paragraph{Independent row and column clusters} The latent variables $\NYoneb$ and $\NYtwob$ are \textit{a priori} independent.
\begin{equation*}
p(\NYoneb, \NYtwob) = p(\NYoneb)p(\NYtwob) \enspace.
\end{equation*}
Note that \textit{a priori} independence does not imply \textit{a posteriori} independence: given the data matrix $\NXb$, the two partitions are (hopefully) not independent.
%
\paragraph{Independent and identically distributed row clusters} The latent variables $\NYoneb$ are independent and follow a multinomial distribution $\multinomial1{\Nalphaoneb}$, where $\Nalphaoneb=(\Nalphaone_1,...,\Nalphaone_\Nnq )$ is the vector of mixing proportions of the rows:
\begin{align*}
&p(\NYoneb; \Nalphaoneb) = \prod_\Ni{p(\NYoneb_\Ni; \Nalphaoneb)} \\
&p(\NYone_{\Ni\Nq}=1; \Nalphaoneb) = \Nalphaone_{\Nq} \enspace,
\end{align*}
with $\Nalphaoneb \in S_{(\Nnq-1)}=\{\Nalphaoneb\in\mathbb{R}_+^{\Nnq}| \sum_\Nq{\Nalphaone_\Nq=1}\}$.
%
\paragraph{Independent and identically distributed column clusters} Likewise, the latent variables $\NYtwob$ are independent and follow a multinomial distribution $\multinomial1{\Nalphatwob}$, where $\Nalphatwob~=~(\Nalphatwo_1,...,\Nalphatwo_\Nnl )$ is the vector of mixing proportions of the columns:
\begin{align*}
&p(\NYtwob; \Nalphatwob) = \prod_\Nj{p(\NYtwob_\Nj; \Nalphatwob)} \\
&p(\NYtwo_{\Nj\Nl}=1; \Nalphatwob) = \Nalphatwo_{\Nl} \enspace,
\end{align*}
with $\Nalphatwob \in S_{(\Nnl-1)}$.
%
\paragraph{Given row and column clusters, independent and identically distributed block entries} Given the row and column clusters $(\NYoneb,\NYtwob)$, the entries $\NX_{\Ni\Nj}$ are independent and follow a Bernoulli distribution of parameter $\Npib=(\Npi_{\Nq\Nl};\Nq=1,...,\Nnq; \Nl=1,...,\Nnl)$: all elements of a block follow the same probability distribution.
\begin{align*} &p(\sachant{\NXb}{\NYoneb,\NYtwob; \Npib}) = \prod_{\Ni\Nj}{p\p{\sachant{\NX_{\Ni\Nj}}{\NYoneb_\Ni,\NYtwob_\Nj; \Npib}}} \\ &p\p{\sachant{\NX_{\Ni\Nj}=1}{\NYone_{\Ni\Nq}\NYtwo_{\Nj\Nl}=1; \Npib}} = \Npi_{\Nq\Nl} \enspace. \end{align*} To summarize, the parameters of the \LBMacro are $\theta = (\Nalphaoneb, \Nalphatwob, \Npib)$ and the probability mass function of $\NXb$ can be written as: \begin{equation*} p\p{\NXb; \theta} = \sum_{(\NYoneb,\NYtwob) \in I \times J}{ \p{\prod_{\Ni,\Nq}{{\Nalphaone_\Nq}^{\NYone_{\Ni\Nq}}}} \p{\prod_{\Nj,\Nl}{{\Nalphatwo_\Nl}^{\NYtwo_{\Nj\Nl}}}} \p{\prod_{\Ni,\Nj,\Nq,\Nl}{\phi(\NX_{\Ni\Nj}}; \Npi_{\Nq\Nl})^{\NYone_{\Ni\Nq} \NYtwo_{\Nj\Nl} }} } \enspace, \end{equation*} where {$\phi(\NX_{\Ni\Nj}; \Npi_{\Nq\Nl}) = \Npi^{\NX_{\Ni\Nj}}_{\Nq\Nl}(1-\Npi_{\Nq\Nl})^{1-\NX_{\Ni\Nj}}$} is the mass function of a Bernoulli variable and where $I$ (resp. $J$) denotes the set of all possible partitions of rows (resp. columns) into $\Nnq$ (resp. $\Nnl$) groups. \begin{figure}[tb] \begin{framed} \centering \begin{minipage}{.3\textwidth} \centering \begin{tikzpicture}[scale=1, every node/.style={scale=1.3}] \node[circle, draw=black] (Y1) at (0,0) {$\NYoneb_\Ni$}; \node[circle,draw=black] (Y2) at (2,0) {$\NYtwob_\Nj$}; \node[circle,draw=black, inner sep=1pt] (X) at (1,-1.5) {$\NX_{\Ni\Nj}$}; \draw[-{Latex[length=3mm, width=2mm]}] (Y1) -- (X); \draw[-{Latex[length=3mm, width=2mm]}] (Y2) -- (X); \end{tikzpicture} \end{minipage} \begin{minipage}{.6\textwidth} \[ \left\{ \begin{array}{l} \forall \Ni,\;\NYoneb_\Ni \iidsim \multinomial1{\Nalphaoneb} \\ \forall \Nj,\; \NYtwob_\Nj \iidsim \multinomial1{\Nalphatwob}\\ \forall \Ni,\Nj, \;\sachant{\NX_{\Ni\Nj}}{\NYone_{\Ni\Nq}=1, \NYtwo_{\Nj\Nl} = 1} \;\indsim\;\extbernoulli{}{\Npi_{\Nq\Nl}} \end{array} \right. 
\] \[ \text{with } \Nalphaoneb \in \mathbf{S}_{\Nnq-1},\; \Nalphatwob \in \mathbf{S}_{\Nnl-1} \text{ and } \Npi_{ql} \in [0,1] \] \end{minipage} \label{fig:graphiclbmbernouilli} \end{framed} \captionof{figure}{Summary of the standard \LBM with binary data.} \end{figure}
\section{Extension to Informative Missing Data}
\label{sec:missingnessmodel}
The standard \LBM does not accommodate missing observations, that is, it requires the data matrix $\NXb$ to be fully observed. This section introduces our missingness model, which will be coupled to the \LBMacro, thereby making it possible to process missing data. We start by introducing some notation: from now on, $\NXob$ will denote the \guill{partially observed} data matrix, with missing entries, whereas $\NXcb$ denotes the \guill{full} (unobserved) data matrix, without missing entries. The partially observed matrix $\NXob$ is identical to the full matrix $\NXcb$ except for the missing entries; $\NXob$ takes its values in $\{0,1, \NNA\}$, where $\NNA$ denotes a missing value. It will be convenient to introduce a binary mask matrix $\NMb$ that indicates the {\em non-missing} entries of $\NXob$: if $\NM_{\Ni\Nj}=0$, then $\NXo_{\Ni\Nj}=\NNA$; if $\NM_{\Ni\Nj}=1$, then $\NXo_{\Ni\Nj}=\NXc_{\Ni\Nj}$.
\subsection{Models of Missingness}
The three main types of missingness are \MCAR (\MCARacro), \MAR (\MARacro), and \NMAR (\NMARacro). We propose here a model for each missingness type. Instead of directly modeling the probability of being missing, we will model a real variable that defines the log-odds of this probability. This log-odds will be called here the ``propensity'' to be missing.
\iffalse When facing \NMARacro data, most existing missingness models are making an \MCARacro or \MARacro assumption for computational simplicity reasons during the inference process. Indeed any generative model combined with a \MCARacro or \MARacro missingness model can be trained separately as the likelihood of the overall model is factorizable between the two models.
Under the \MCARacro and \MARacro hypothesis, no information on the generation of data can be extracted from an absence of data, but under a \NMARacro assumption, this absence is informative, and ignoring this information may lead to strong biases in estimations that may in turn drastically affect the classifications. From these considerations, we propose an extension of the \LBMacro model enabling to deal with Not Missing At Random missingness data. The resulting model is made up of two distinct parts: the \LBMacro used to model the \guill{full} data matrix and the \NMARacro model. The two models are merged together and can not be trained separately because of their mutual dependency; however for clarity reasons, we first describe the model of the missingness process and then its use with the \LBMacro.
\fi
%
\paragraph{\MCAR (\MCARacro)} Missingness does not depend on data, whether observed or not. A simple model of missingness is obtained by assuming that every entry of $\NXob$ has the same propensity of being missing. This is modeled by a single propensity parameter $\Nmu$. The graphical representation of this model is shown in Figure~\ref{CMARmodel}. \begin{figure} \centering \begin{tikzpicture}[scale=0.8, every node/.style={scale=1}] \node (mu) at (1.75,2) {$\Nmu$}; \node[circle,draw=black] (M) at (0,2) {${M}_{\Ni\Nj}$}; \node[circle,draw=black] (Xc) at (-2,2) {$\NXc_{\Ni\Nj}$}; \node[circle,draw=black, inner sep=0.5ex] (X) at (-1,0.25) {$\NXo_{\Ni\Nj}$}; \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (mu); \draw[{Latex[length=3mm, width=2mm]}-] (X) -- (Xc); \draw[{Latex[length=3mm, width=2mm]}-] (X) -- (M); \end{tikzpicture} \caption{\label{CMARmodel}Graphical representation of the \MCARacro model. The partially observed entry $\NXo_{\Ni\Nj}$ is generated by the corresponding entries of (i) the full matrix $\NXc_{\Ni\Nj}$ and (ii) the binary mask $\NM_{\Ni\Nj}$.
The binary mask $\NMb$ does not depend on $\NXcb$ and is defined here from a single global effect parameter $\Nmu$.} \end{figure}
\paragraph{\MAR (\MARacro)} Missingness depends on the observed data, but not on the unobserved data. The previous missingness model can be enlarged by allowing the propensity of missingness to depend on the row and column indexes. To do so, we introduce a latent variable $\NA_\Ni$ for every row, collected in $\NAb$, and a latent variable $\NP_\Nj$ for every column, collected in $\NPb$. For the sake of simplicity, all latent variables $\NA_\Ni$ and $\NP_\Nj$ are assumed independent. They allow deviations from the global propensity $\Nmu$. The graphical representation of this model is shown in Figure~\ref{MARmodel}. \begin{figure} \centering \begin{tikzpicture}[scale=0.8, every node/.style={scale=1}] \node (mu) at (1.75,2) {$\Nmu$}; \node[circle,draw=black] (M) at (0,2) {${M}_{\Ni\Nj}$}; \node[circle,draw=black] (Xc) at (-2,2) {$\NXc_{\Ni\Nj}$}; \node[circle,draw=black, inner sep=0.5ex] (X) at (-1,0.25) {$\NXo_{\Ni\Nj}$}; \node[circle,draw=black, inner sep=1.25ex] (A) at (-1,3.75) {$\NA_\Ni$}; \node[circle,draw=black, inner sep=1.25ex] (P) at (1,3.75) {$\NP_\Nj$}; \draw[{Latex[length=3mm, width=2mm]}-] (X) -- (Xc); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (A); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (P); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (mu); \draw[{Latex[length=3mm, width=2mm]}-] (X) -- (M); \end{tikzpicture} \caption{\label{MARmodel} Graphical representation of the \MARacro model. The partially observed entry $\NXo_{\Ni\Nj}$ is generated by the corresponding entries of (i) the full matrix $\NXc_{\Ni\Nj}$ and (ii) the binary mask $\NM_{\Ni\Nj}$.
The binary mask $\NMb$ does not depend on $\NXcb$ and is defined by a global effect parameter $\Nmu$ and two latent variables $\NAb$ and $\NPb$ that enable deviations from $\Nmu$.} \end{figure}
\paragraph{\NMAR (\NMARacro)} Missingness here depends on unobserved data: the probability of observing the entries of the matrix depends on their values, whether observed or not. We equip the previous model with two additional sets of latent variables to adapt the propensity of each entry of the data matrix to the unobserved data, that is, to $\NXc_{\Ni\Nj}$. These new row and column latent variables, $\NBb$ and $\NQb$, adjust the propensity of missingness according to the actual value of $\NXc_{\Ni\Nj}$. The graphical representation of this model is shown in Figure~\ref{MNARmodel}. \begin{figure} \centering \begin{tikzpicture}[scale=0.8, every node/.style={scale=1}] \node[circle,draw=black, inner sep=1.25ex] (A) at (-2.2,3.75) {$\NA_\Ni$}; \node[circle,draw=black, inner sep=1.25ex] (B) at (-0.8,3.75) {$\NB_\Ni$}; \node[circle,draw=black, inner sep=1.25ex] (P) at (0.8,3.75) {$\NP_\Nj$}; \node[circle,draw=black, inner sep=1.25ex] (Q) at (2.2,3.75) {$\NQ_\Nj$}; \node (mu) at (1.75,2) {$\Nmu$}; \node[circle,draw=black] (Xc) at (-2,2) {$\NXc_{\Ni\Nj}$}; \node[circle,draw=black, inner sep=0.5ex] (X) at (-1,0.25) {$\NXo_{\Ni\Nj}$}; \draw[{Latex[length=3mm, width=2mm]}-] (X) -- (Xc); \node[circle,draw=black] (M) at (0,2) {$M_{\Ni\Nj}$}; \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (Xc); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (A); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (B); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (P); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (Q); \draw[{Latex[length=3mm, width=2mm]}-] (M) -- (mu); \draw[{Latex[length=3mm, width=2mm]}-] (X) -- (M); \end{tikzpicture} \caption{\label{MNARmodel} Graphical representation of the \NMARacro model.
The partially observed entry $\NXo_{\Ni\Nj}$ is generated by the corresponding entries of (i) the full matrix $\NXc_{\Ni\Nj}$ and (ii) the binary mask $\NM_{\Ni\Nj}$. The binary mask $\NMb$ depends on $\NXcb$ and is defined by a global effect parameter $\Nmu$, two latent variables $\NAb$ and $\NPb$ that enable deviations from $\Nmu$, and two latent variables $\NBb$ and $\NQb$, which drive the deviations from the MAR model.} \end{figure} We model the latent variables $\NAb$, $\NBb$, $\NPb$, and $\NQb$ with Gaussian distributions centered at zero with free variances $\NsigmaA$, $\NsigmaB$, $\NsigmaP$, and $\NsigmaQ$, respectively: \begin{equation*} \left\{ \begin{array}{l} \forall \Ni,\qquad \NA_\Ni \iidsim \norm0{\NsigmaA} ,\quad \NB_\Ni \iidsim \norm0{\NsigmaB}\\ \forall \Nj,\qquad \NP_\Nj \iidsim \norm0{\NsigmaP} ,\quad \NQ_\Nj \iidsim \norm0{\NsigmaQ} \end{array} \right. \enspace. \end{equation*} The global parameter $\Nmu$ and the latent variables define the propensity of missingness, that is, the log-odds of being missing as follows: \begin{equation} \forall \Ni, \Nj \quad \Nodd =\left\{ \begin{array}{l} \Nmu+\NA_\Ni +\NB_\Ni+\NP_\Nj+\NQ_\Nj \quad \text{if} \quad \NXc_{\Ni\Nj}=1 \\ \Nmu+\NA_\Ni -\NB_\Ni+\NP_\Nj-\NQ_\Nj \quad \text{if} \quad \NXc_{\Ni\Nj}=0 \\ \end{array} \right. \enspace. \end{equation} % Then, given this propensity, every element $\NM_{\Ni\Nj}$ of the mask matrix is independent and follows a Bernoulli distribution: \begin{equation} \forall \Ni, \Nj \quad \sachant{ M_{\Ni\Nj} }{\NA_\Ni,\NB_\Ni,\NP_\Nj,\NQ_\Nj, \NXc_{\Ni\Nj} } \; \indsim \; \bernoulli{\logistic\p{\Nodd}} \enspace, \end{equation} with {$\logistic(x)= 1/(1+\exp(-x))$}. Note that, if we omit the latent variables $\NB_\Ni$ and $\NQ_\Nj$, the missingness model follows the \MARacro assumption since $\Nodd$, and thus $M_{\Ni\Nj}$, is then independent of $\NXc_{\Ni\Nj}$. If we also omit the latent variables $\NA_\Ni$ and $\NP_\Nj$, the missingness model follows the \MCARacro assumption. 
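To make the mechanism concrete, the mask generation described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the implementation used in the paper: the function and variable names are ours, and the $\sigma$ parameters are treated here as standard deviations.

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_mask(Xc, mu, sigma_a, sigma_b, sigma_p, sigma_q, rng):
    """Sample the binary mask M given the full matrix Xc.

    The log-odds of observing entry (i, j) is mu + A_i + B_i + P_j + Q_j when
    Xc[i][j] = 1, and mu + A_i - B_i + P_j - Q_j when Xc[i][j] = 0: the B and Q
    effects make missingness depend on the (possibly unobserved) value.
    """
    n1, n2 = len(Xc), len(Xc[0])
    A = [rng.gauss(0.0, sigma_a) for _ in range(n1)]
    B = [rng.gauss(0.0, sigma_b) for _ in range(n1)]
    P = [rng.gauss(0.0, sigma_p) for _ in range(n2)]
    Q = [rng.gauss(0.0, sigma_q) for _ in range(n2)]
    M = [[0] * n2 for _ in range(n1)]
    for i in range(n1):
        for j in range(n2):
            sign = 1 if Xc[i][j] == 1 else -1
            log_odds = mu + A[i] + sign * B[i] + P[j] + sign * Q[j]
            M[i][j] = 1 if rng.random() < logistic(log_odds) else 0
    return M
```

Setting \texttt{sigma\_b} and \texttt{sigma\_q} to zero recovers the \MARacro model, and additionally setting \texttt{sigma\_a} and \texttt{sigma\_p} to zero recovers the \MCARacro model.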
This model of missingness can be used for several applications. One of these, collaborative filtering, uses the history of user ratings to build a recommendation system. For this application, an \MCARacro modeling means that the probability of observing a rating for a particular item depends neither on the user nor on the item; an \MARacro modeling means that missingness can depend on the user or the item; for example, some people give their opinion more often than others. The \MARacro simplifying assumption is often used in collaborative filtering. However, \cite{Marlin07} show that there is often a dependency between the rating frequency and the underlying preference level, lending support to the hypothesis that ratings are generated by an MNAR process, where missingness depends on the actual rating that would be given. Some people give their opinion more often when they are satisfied and others when they are dissatisfied. Most collaborative filtering methods do not have a principled method for extracting information from missing data, which can lead to strong biases in estimations that may in turn drastically affect predictions \citep{hernandez}. Our missingness model makes it possible to account for the users' propensity to give their opinion, and for the items' propensity to be rated, that is, their popularity. These propensities could also reflect exogenous factors such as price; for example, more expensive items could be evaluated more often.
\subsection{\LBMacro with MNAR data}
\label{sect:lbmextended}
We extend the standard \LBMacro to \NMARacro data using the previous missingness model. Given the full matrix $\NXcb$ and the mask matrix $\NMb$, all the elements of the observed matrix $\NXob$ are independent and determined as follows: \begin{equation} \p{\sachant{ \NXo_{\Ni\Nj} }{ \NXc_{\Ni\Nj}, M_{\Ni\Nj}}} = \left\{ \begin{array}{cll} \NXc_{\Ni\Nj} &\text{ if } \quad \NM_{\Ni\Nj} = 1 \\ \NNA &\text{ if } \quad \NM_{\Ni\Nj} = 0 \\ \end{array} \right. \enspace.
\end{equation} Figure~\ref{fig:graphiclbm} summarizes the \LBMacro extended to MNAR data. Since $\NXob$ takes its values in $\{0,1,\NNA\}$, the same model can be rewritten with a categorical distribution directly using the latent variables of both models: \begin{align} \forall \Ni,\;\NYoneb_\Ni &\iidsim \multinomial1{\Nalphaoneb} \nonumber\\ \forall \Nj,\; \NYtwob_\Nj &\iidsim \multinomial1{\Nalphatwob}\nonumber\\ \forall \Ni,\;\NA_\Ni &\iidsim \norm0{\NsigmaA} \nonumber\\ \label{eq:reform} \forall \Ni,\;\NB_\Ni &\iidsim \norm0{\NsigmaB} \\ \forall \Nj,\;\NP_\Nj &\iidsim \norm0{\NsigmaP} \nonumber\\ \forall \Nj,\;\NQ_\Nj &\iidsim \norm0{\NsigmaQ} \nonumber\\ \forall \Ni,\Nj, \; \sachant{ \NXo_{\Ni\Nj} }{\NYone_{\Ni\Nq}=1,\NYtwo_{\Nj\Nl}=1,\NA_\Ni,\NB_\Ni,\NP_\Nj,\NQ_\Nj } &\indsim \categorial{ \begin{bmatrix}0\\1\\\NNA\end{bmatrix} }{ \begin{bmatrix} p_0 \\ p_1 \\ 1- p_0 - p_1 \end{bmatrix} }\nonumber \end{align} with \begin{align} p_0 &= \p{1-\Npi_{\Nq\Nl}}\logistic\p{\Nmu + \NA_\Ni- \NB_\Ni +\NP_\Nj - \NQ_\Nj} \label{eq:p2} \\ p_1 &= \Npi_{\Nq\Nl}\logistic\p{\Nmu + \NA_\Ni+ \NB_\Ni +\NP_\Nj + \NQ_\Nj} \label{eq:p1} \enspace.
\end{align} \begin{figure} \begin{framed} \centering \begin{minipage}{1.\textwidth} \centering \begin{tikzpicture}[scale=0.8, every node/.style={scale=1}] \node[circle, draw=black] (Y1) at (0,0) {$\NYoneb_\Ni$}; \node[circle,draw=black] (Y2) at (2,0) {$\NYtwob_\Nj$}; \node[circle,draw=black] (A) at (4,0) {$\NA_\Ni$}; \node[circle,draw=black] (B) at (6,0) {$\NB_\Ni$}; \node[circle,draw=black] (P) at (8,0) {$\NP_\Nj$}; \node[circle,draw=black] (Q) at (10,0) {$\NQ_\Nj$}; \draw[color=black!30!white] (-1,1) rectangle (3,-3); \node[scale=0.7, color=black!30!white] at (-0.58,-2.8) {\LBMacro}; \draw[color=black!30!white, dashed] (0,-3) -- (0,-1) -- (3.3,-1) -- (3.3,1) -- (10.8,1) -- (10.8,-3) -- (0,-3); \node[scale=0.7, color=black!30!white] at (9.,-2.81) {MNAR missingness model}; \node[circle,draw=black] (Xc) at (1,-2) {$\NXc_{\Ni\Nj}$}; \node[circle,draw=black] (M) at (7,-2) {$M_{\Ni\Nj}$}; \node[circle,draw=black] (Xo) at (4,-4.5) {$\NXo_{\Ni\Nj}$}; \draw[-{Latex[length=3mm, width=2mm]}] (Y1) -- (Xc); \draw[-{Latex[length=3mm, width=2mm]}] (Y2) -- (Xc); \draw[-{Latex[length=3mm, width=2mm]}] (A) -- (M); \draw[-{Latex[length=3mm, width=2mm]}] (B) -- (M); \draw[-{Latex[length=3mm, width=2mm]}] (P) -- (M); \draw[-{Latex[length=3mm, width=2mm]}] (Q) -- (M); \draw[-{Latex[length=3mm, width=2mm]}] (Xc) -- (Xo); \draw[-{Latex[length=3mm, width=2mm]}] (Xc) -- (M); \draw[-{Latex[length=3mm, width=2mm]}] (M) -- (Xo); \end{tikzpicture} \end{minipage}\\\bigskip \begin{minipage}{1.\textwidth} \begin{align} &\text{Latent Block Model} \nonumber\\ &\quad \NYoneb_\Ni \iidsim \multinomial1{\Nalphaoneb}, \qquad \Nalphaoneb \in \mathbf{S}_{\Nnq-1} \nonumber\\ &\quad \NYtwob_\Nj \iidsim \multinomial1{\Nalphatwob}, \qquad \Nalphatwob \in \mathbf{S}_{\Nnl-1} \nonumber\\ &\quad \p{\sachant{\NXc_{\Ni\Nj}}{\NYone_{\Ni\Nq}=1, \NYtwo_{\Nj\Nl} = 1}} \indsim \extbernoulli{}{\Npi_{\Nq\Nl}}, \qquad \Npi_{ql} \in (0,1) \nonumber\\ &\text{MNAR model} \nonumber\\ &\quad\NA_\Ni \iidsim \norm0{\NsigmaA},
\qquad \NsigmaA \in \mathds{R}_{+}^{*} \nonumber\\ &\quad\NB_\Ni \iidsim \norm0{\NsigmaB}, \qquad \NsigmaB \in \mathds{R}_{+}^{*} \nonumber\\ &\quad\NP_\Nj \iidsim \norm0{\NsigmaP}, \qquad \NsigmaP \in \mathds{R}_{+}^{*} \nonumber\\ &\quad\NQ_\Nj \iidsim \norm0{\NsigmaQ}, \qquad \NsigmaQ \in \mathds{R}_{+}^{*} \nonumber\\ &\quad\p{\sachant{ M_{\Ni\Nj} }{\NA_\Ni,\NB_\Ni,\NP_\Nj,\NQ_\Nj, \NXc_{\Ni\Nj}=1 }} \indsim \bernoulli{\logistic\p{\Nmu+\NA_\Ni +\NB_\Ni+\NP_\Nj+\NQ_\Nj}} \nonumber\\ &\quad\p{\sachant{ M_{\Ni\Nj} }{\NA_\Ni,\NB_\Ni,\NP_\Nj,\NQ_\Nj, \NXc_{\Ni\Nj}=0 }} \indsim \bernoulli{\logistic\p{\Nmu+\NA_\Ni -\NB_\Ni+\NP_\Nj-\NQ_\Nj}} \nonumber\\ &\text{Observations are generated according to:} \nonumber\\ &\quad\p{\sachant{ \NXo_{\Ni\Nj} }{ \NXc_{\Ni\Nj}, M_{\Ni\Nj}}} = \left\{ \begin{array}{cll} \NXc_{\Ni\Nj} &\text{ if } \quad \NM_{\Ni\Nj} = 1 \\ \NNA &\text{ if } \quad \NM_{\Ni\Nj} = 0 \\ \end{array} \right. \nonumber
%
\end{align} \end{minipage} \end{framed} \captionof{figure}{Graphical view and summary of the \LBM extended to the MNAR missingness process. The observed entry $\NXo_{\Ni\Nj}$ is generated from the information carried by the class and propensity variables of row $\Ni$ and of column $\Nj$.} \label{fig:graphiclbm} \end{figure}
\section{Inference in the extended \LBMacro}
\label{sec:inference}
The dependency between the full data matrix $\NXcb$ and the mask matrix $\NMb$ requires a joint inference of the \LBMacro with the \NMARacro model. As the standard maximum likelihood approach cannot be applied directly, we adopt a strategy based on a variational EM. \bigskip During inference, we use the reformulation (Equation~\ref{eq:reform}). We can split our random variables into two sets: the set of unobserved latent variables and the set of observed variables consisting of $\NXob$ only.
An observation of $\NXob$ only is called the incomplete data, and an observation of $\NXob$ together with the latent variables $\NAb$, $\NBb$, $\NPb$, $\NQb$, $\NYoneb$ and $\NYtwob$ is called the complete data. Given the incomplete data, our objective is to infer the model parameters $\Ntheta$ via maximum likelihood {$\Nthetahat=\argmax_\Ntheta p(\NXob;\Ntheta)$}. We resort to the Expectation Maximization (EM) algorithm to maximize $p(\NXob; \Ntheta)$ without explicitly calculating it. The EM algorithm iteratively applies the following two steps: \begin{description}[align=left] \item[E-step] Expectation step: from the current estimate $\Ntheta^{(t)}$ of $\Ntheta$, compute the criterion {$\mathcal{Q}(\Ntheta|\Ntheta^{(t)})$} defined as the expectation of the complete log-likelihood, conditionally on the observations $\NXob$: \[\mathcal{Q}(\Ntheta|\Ntheta^{(t)})=\expectation_{\sachant{\NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb}{\NXob, \Ntheta^{(t)}}}\brackets{\llikli\p{\NXob, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb; \Ntheta}}\] \item[M-step] Maximization step: find the parameters that maximize {$\mathcal{Q}(\Ntheta|\Ntheta^{(t)})$}. \\ \[\Ntheta^{(t+1)}=\argmax_{\Ntheta}\;\mathcal{Q}(\Ntheta|\Ntheta^{(t)})\] \end{description} The computation of the expected complete log-likelihood at the E-step requires the posterior distribution of the latent variables $p(\NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb|\NXob)$, which is intractable, because the search space of the latent variables is combinatorially too large. This problem is well known in the context of co-clustering; for the \LBM, \cite{celeuxd85, keribin:hal-00802764} propose a stochastic E-step with Monte Carlo sampling, but this strategy is not suited to large-scale problems. We follow the original strategy proposed by \citet{lbmgg}, which relies on a variational formulation of the problem, since it is more efficient in high dimension.
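To illustrate the E-step / M-step alternation on a model where both steps are tractable in closed form (unlike the \LBMacro, whose posterior is intractable), here is a minimal EM for a two-component univariate Gaussian mixture with unit variances; this toy model and all names in the sketch are ours:

```python
import math
import random

def em_gaussian_mixture(xs, n_iter=100):
    """Toy EM for a two-component 1-D Gaussian mixture with unit variances.
    Returns the mixing proportion of component 1 and the two estimated means."""
    alpha, mu = 0.5, [min(xs), max(xs)]
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point,
        # computed with the current parameter estimates
        resp = []
        for x in xs:
            w0 = (1 - alpha) * math.exp(-0.5 * (x - mu[0]) ** 2)
            w1 = alpha * math.exp(-0.5 * (x - mu[1]) ** 2)
            resp.append(w1 / (w0 + w1))
        # M-step: closed-form maximizers of the expected complete log-likelihood
        s = sum(resp)
        alpha = s / len(xs)
        mu[1] = sum(r * x for r, x in zip(resp, xs)) / s
        mu[0] = sum((1 - r) * x for r, x in zip(resp, xs)) / (len(xs) - s)
    return alpha, mu

# Data drawn from 0.4 N(-2, 1) + 0.6 N(2, 1): EM recovers the parameters
rng = random.Random(1)
xs = [rng.gauss(-2, 1) for _ in range(400)] + [rng.gauss(2, 1) for _ in range(600)]
alpha_hat, mu_hat = em_gaussian_mixture(xs)
```

In the \LBMacro, the E-step posterior cannot be written in such a closed form, which is what motivates the variational approach below.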
\subsection{Variational EM} The variational EM (VEM) \citep{Jordan,Jaakkola00tutorialon} introduces $q(\cdot)$, a parametric inference distribution defined over the latent variables $\NYoneb$, $\NYtwob$, $\NAb$, $\NBb$, $\NPb$, $\NQb$, and optimizes the following lower bound on the log-likelihood of the incomplete data: \begin{equation} \mathcal{J}\p{q, \Ntheta} = \llikli\p{\NXob; \Ntheta} - KL\p{q(\cdot) \parallel p(\cdot | \NXob; \Ntheta)} \enspace, \end{equation} where $KL$ stands for the Kullback-Leibler divergence and $q(\cdot)$ denotes the variational distribution over the latent variables $\NYoneb$, $\NYtwob$, $\NAb$, $\NBb$, $\NPb$ and $\NQb$. It can be shown that $\mathcal{J}\p{q, \Ntheta}$ is a concave function of the variational distribution $q$ and that its maximum is reached for $q(\cdot) = p(\cdot | \NXob; \Ntheta)$. Thus, maximizing the criterion $\mathcal{J}$ is equivalent to minimizing the discrepancy between $q(\cdot)$ and $p(\cdot | \NXob; \Ntheta)$, as measured by the Kullback-Leibler divergence, and is also equivalent to maximizing the likelihood. The minimization of this Kullback-Leibler divergence requires exploring the whole space of latent distributions; in terms of complexity, this is as difficult as the initial problem. The criterion $\mathcal{J}\p{q, \Ntheta}$ can also be expressed as the sum of a negative \guill{energy} and the entropy of $q$, hence its name \guill{negative variational free energy} in analogy with the thermodynamic free energy: \begin{equation} \label{eq:criterionj} \mathcal{J}\p{q, \Ntheta} = \mathcal{H}(q) + \expectation_q \brackets{\llikli\p{\NXob, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb; \Ntheta}} \enspace, \end{equation} where $\mathcal{H}(q)$ is the entropy of the variational distribution and $ \expectation_q$ is the expectation with respect to the variational distribution. The criterion $\mathcal{J}$ becomes tractable if the exploration is restricted to a subspace, denoted $\NRX$, of the latent distributions.
However, this comes at a cost: the maximum found will only be a lower bound on the initial criterion: \begin{equation} \mathcal{J}\p{q, \Ntheta} \geq \mathcal{J}\p{\NRX, \Ntheta} \end{equation} $\mathcal{J}\p{\NRX, \Ntheta}$ is also known as the \guill{Evidence Lower BOund} (ELBO), a name that emphasizes the lower-bound property on the evidence of the data. A wise choice of the restriction on the variational distribution leads to a tractable computation of the criterion. We choose the following shapes for the variational posteriors of the latent variables: \begin{align*} \forall&\Ni& \NYoneb_\Ni|\NXob &\underset{\NRX}\sim\multinomial1{\Ntauone_\Ni} \\ \forall&\Nj& \NYtwob_\Nj|\NXob &\underset{\NRX}\sim\multinomial1{\Ntautwo_\Nj} \\ \forall&\Ni& \NA_\Ni|\NXob &\underset{\NRX}\sim\norm{\NnuA_\Ni}{\NrhoA_\Ni} \\ \forall&\Ni& \NB_\Ni|\NXob &\underset{\NRX}\sim\norm{\NnuB_\Ni}{\NrhoB_\Ni} \\ \forall&\Nj& \NP_\Nj|\NXob &\underset{\NRX}\sim\norm{\NnuP_\Nj}{\NrhoP_\Nj} \\ \forall&\Nj& \NQ_\Nj|\NXob &\underset{\NRX}\sim\norm{\NnuQ_\Nj}{\NrhoQ_\Nj} \enspace. \end{align*} We also impose the conditional independence of the latent variables to get a feasible computation of the entropy and of the negative \guill{energy} (Equation~\ref{eq:criterionj}) under $\NRX$. This conditional independence is widely known as the \guill{mean field approximation} \citep{Parisi}.
We finally get the following fully factorized shape: \begin{align*} \NRX &=\textstyle \prod_{\Ni=1}^{\Nnone}{\multinomial{1}{\Ntauone_\Ni}}\;\times \;\; \prod_{\Nj=1}^{\Nntwo}{\multinomial{1}{\Ntautwo_\Nj}} \\ &\textstyle\quad\times \prod_{\Ni=1}^{\Nnone}{\norm{\NnuA_\Ni}{\NrhoA_\Ni}}\times \prod_{\Ni=1}^{\Nnone}{\norm{\NnuB_\Ni}{\NrhoB_\Ni}} \nonumber\\ &\textstyle\quad \times \prod_{\Nj=1}^{\Nntwo}{\norm{\NnuP_\Nj}{\NrhoP_\Nj}}\times \prod_{\Nj=1}^{\Nntwo}{\norm{\NnuQ_\Nj}{\NrhoQ_\Nj}} \nonumber \enspace, \end{align*} \noindent where $\gamma = (\Ntauoneb, \Ntautwob,\NnuAb,\NrhoAb, \NnuBb, \NrhoBb, \NnuPb, \NrhoPb, \NnuQb, \NrhoQb )$ denotes the concatenation of the parameters of the restricted variational distribution $\NRX$. \bigskip The new criterion $\mathcal{J}\p{\gamma, \Ntheta}$ to be optimized from now on is: \begin{equation} \mathcal{J}\p{\gamma, \Ntheta} = \mathcal{H}(\NRX) + \expectation_{\NRX}\left[\llikli\p{\NXob, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb; \Ntheta}\right] \label{eq:criteriaJ} \end{equation} and the estimates of the model parameters $\Nthetahat$ are inferred as: \begin{equation} \Nthetahat = \underset{\Ntheta}{\argmax\;} \p{\underset{\gamma}{\max\;} \mathcal{J}\p{\gamma, \Ntheta}} \enspace. \end{equation} This double maximization is carried out with an iterative strategy and can be seen as an extension of the EM algorithm. The two steps are described in Algorithm~\ref{algo:vem}.
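As a sanity check, the two expressions of the criterion $\mathcal{J}$, namely the log-likelihood minus the Kullback-Leibler divergence and the entropy plus the expected complete log-likelihood, can be verified numerically on a toy model with a single binary latent variable (a self-contained sketch; the toy model and all names in it are ours, standing in for the intractable \LBMacro posterior):

```python
import math

# Toy model with one binary latent variable z and one observation x:
# p(z=1) = 0.3, p(x=1 | z=0) = 0.2, p(x=1 | z=1) = 0.9; we observe x = 1.
joint = [0.7 * 0.2, 0.3 * 0.9]             # p(x=1, z) for z = 0, 1
evidence = sum(joint)                      # p(x=1)
posterior = [p / evidence for p in joint]  # p(z | x=1)

def criterion_J(q):
    """Entropy of q plus expected complete log-likelihood under q."""
    return sum(qz * (math.log(joint[z]) - math.log(qz))
               for z, qz in enumerate(q) if qz > 0)

def kl(q, p):
    """Kullback-Leibler divergence KL(q || p) for distributions over z."""
    return sum(qz * math.log(qz / p[z]) for z, qz in enumerate(q) if qz > 0)

q = [0.5, 0.5]  # an arbitrary variational distribution
# The two expressions of the criterion coincide: J = log p(x) - KL(q || p(.|x))
assert abs(criterion_J(q) - (math.log(evidence) - kl(q, posterior))) < 1e-12
# The bound is tight when q equals the true posterior
assert abs(criterion_J(posterior) - math.log(evidence)) < 1e-12
```

The same identity underlies the VEM iterations: improving $q$ reduces the KL term, and improving $\Ntheta$ raises the evidence.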
\begin{algorithm}[tb] \SetAlgoLined \textbf{Input}: observed data $\NXob$; numbers $\Nnq$ and $\Nnl$ of row groups and column groups \; - Initialize $\gamma^{(0)}$ and $\Ntheta^{(0)}$\; - \While{not convergence of criterion $\mathcal{J}$}{ VE-step: find the variational parameters $\gamma^{(t+1)}$ that optimize $\mathcal{J}\p{\gamma, \Ntheta^{(t)}}$ \[\gamma^{(t+1)} = \underset{\gamma}{\argmax\;} \mathcal{J}\p{\gamma, \Ntheta^{(t)}}\] M-step: find the model parameters $\Ntheta^{(t+1)}$ that optimize $\mathcal{J}\p{\gamma^{(t+1)}, \Ntheta}$: \[ \Ntheta^{(t+1)} = \underset{\Ntheta}{\argmax\;} \mathcal{J}\p{\gamma^{(t+1)}, \Ntheta}\] } \KwResult{$\Ntheta$ and $\gamma$: model and variational parameters} \caption{Variational Expectation Maximization algorithm.} \label{algo:vem} \end{algorithm}
\subsection{Computation of the variational criterion}
The restriction on the space of the variational distribution simplifies the computation of $\mathcal{H}(\NRX)$, as entropy is additive across independent variables: \begin{multline*} \mathcal{H}(\NRX) = - \sum_{\Ni\Nq}{ \Ntauone_{\Ni\Nq} \log\Ntauone_{\Ni\Nq}} - \sum_{\Nj\Nl}{ \Ntautwo_{\Nj\Nl} \log\Ntautwo_{\Nj\Nl}} + \frac{1}{2} \sum_{\Ni}{\log\p{2\pi e \NrhoA_\Ni }} \\ + \frac{1}{2} \sum_{\Ni}{\log\p{2\pi e \NrhoB_\Ni }} + \frac{1}{2} \sum_{\Nj}{\log\p{2\pi e \NrhoP_\Nj }} + \frac{1}{2} \sum_{\Nj}{\log\p{2\pi e \NrhoQ_\Nj }} \enspace.
\end{multline*} The independence of latent variables makes it possible to rewrite the expectation of the complete log-likelihood as: \begin{multline}\label{equ:expectationall} \expectation_{\NRX}\brackets{\llikli\p{\NXob, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb}} = \;\expectation_{\NRX}\brackets{\llikli\p{\NYoneb}}\\ + \expectation_{\NRX}\brackets{\llikli\p{\NYtwob}} +\expectation_{\NRX}\brackets{\llikli\p{\NAb}} + \expectation_{\NRX}\brackets{\llikli\p{\NBb}} \\ + \expectation_{\NRX}\brackets{\llikli\p{\NPb}} + \expectation_{\NRX}\brackets{\llikli\p{\NQb}} +\expectation_{\NRX}\brackets{\llikli\p{ \sachant{\NXob}{\NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb}} } \enspace. \end{multline} Despite the variational approximation, the expectation of the complete log-likelihood~\eqref{equ:expectationall} cannot be computed exactly, as its last term involves an expectation under $\NRX$ of nonlinear functions: \begin{multline}\label{equ:expectation_cond} \expectation_{\NRX}\brackets{\llikli\p{ \sachant{\NXob}{\NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb}}} = \hspace{-10pt} \sum_{\Ni\Nj\Nq\Nl:\NXo_{\Ni\Nj}=0} \hspace{-10pt} \Ntauone_{\Ni\Nq}\Ntautwo_{\Nj\Nl} \expectation_{\NRX}\brackets{\log\p{ p_0}} \\ + \hspace{-10pt} \sum_{\Ni\Nj\Nq\Nl:\NXo_{\Ni\Nj}=1} \hspace{-10pt} \Ntauone_{\Ni\Nq}\Ntautwo_{\Nj\Nl} \expectation_{\NRX}\brackets{\log\p{p_1}} + \hspace{-10pt} \sum_{\Ni\Nj\Nq\Nl:\NXo_{\Ni\Nj}=\NNA} \hspace{-10pt} \Ntauone_{\Ni\Nq}\Ntautwo_{\Nj\Nl} \expectation_{\NRX}\brackets{\log\p{ 1 - p_0 -p_1 }} \enspace, \end{multline} with $p_0$ and $p_1$ defined in Equations \eqref{eq:p2}--\eqref{eq:p1}. These expectations can be approximated by the delta method \cite[p. 79]{deltamethod}. Using a first-order Taylor expansion would lead to a criterion without a maximum, so we use a second-order Taylor expansion. The full expression of the criterion is given in Appendix~\ref{annexcriteria}.
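To give a concrete picture of the approximation, consider the expectation of $\log\logistic(Z)$ for a single Gaussian variable $Z \sim \mathcal{N}(\nu, \rho)$, a one-dimensional analogue of the terms above (in the criterion, the argument of the logistic function aggregates several latent variables; all names in this sketch are ours):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def delta2_E_log_logistic(nu, rho):
    """Second-order delta-method approximation of E[log logistic(Z)] for
    Z ~ N(nu, rho), using (log logistic)''(x) = -logistic(x)(1 - logistic(x))."""
    s = logistic(nu)
    return math.log(s) - 0.5 * rho * s * (1.0 - s)

def quadrature_E_log_logistic(nu, rho, n=20001, width=10.0):
    """Reference value by trapezoidal quadrature against the N(nu, rho) density."""
    sd = math.sqrt(rho)
    lo, hi = nu - width * sd, nu + width * sd
    h = (hi - lo) / (n - 1)
    total = 0.0
    for k in range(n):
        x = lo + k * h
        w = h if 0 < k < n - 1 else 0.5 * h  # trapezoid endpoint weights
        density = math.exp(-((x - nu) ** 2) / (2.0 * rho)) / math.sqrt(2.0 * math.pi * rho)
        total += w * math.log(logistic(x)) * density
    return total
```

The zeroth/first-order approximation $\log\logistic(\nu)$ ignores the variance of $Z$ entirely; the second-order correction accounts for it through the concavity of $\log\logistic$.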
\subsection{Maximization of the variational criterion}
The VEM Algorithm~\ref{algo:vem} alternates between a maximization with respect to the variational parameters $\Ngamma$ and a maximization with respect to the model parameters $\Ntheta$. For our model, there is no explicit solution for these two maximizations of the criterion $\mathcal{J}\p{\gamma, \Ntheta}$, which are carried out by the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm. We used automatic differentiation to compute the gradients needed for L-BFGS and for the Taylor series used in the variational criterion. We chose the Autograd library from HIPS and the autograd submodule of PyTorch \citep{NEURIPS2019}. These libraries rely on a reverse accumulation computational graph to compute exact gradients. Their high efficiency, even with large graphs, thanks to GPU acceleration, makes them particularly well suited to the VEM algorithm.
\subsection{Initialization}
\label{initproc}
VEM is not guaranteed to converge to a global optimum. EM-like algorithms are known to be sensitive to initialization, particularly when applied to models with a discrete latent space, and may get stuck in unsatisfactory local maxima \citep{Biernacki2003,Baudry2015}. A simple solution consists in training for a few iterations from several random initializations, and then pursuing optimization from the solution with the highest value of the variational criterion \citep[see, e.g., the small EM strategy for mixtures in][]{Baudry2015}. This exploration strategy spends a great deal of computing resources to produce only a few good estimates. Another solution is to rely on simpler clustering methods, such as k-means or spectral clustering, to initialize the algorithm \citep{Shireman}. For the Stochastic Block Model, a close relative of the Latent Block Model for graphs, \citet{Rohe_2011} prove the consistency of spectral clustering for identifying its parameters.
Following this idea, we initialize our algorithm with a double spectral clustering, on the rows and on the columns of similarity matrices, using the absolute eigenvalues of the Laplacian as in \citet{Rohe_2011}. Although this method is not designed for \NMARacro data, it can be expected to provide a satisfactory initialization of the \LBM if the missingness is not predominant. The parameters of our missingness model cannot be initialized with this procedure; they are randomly initialized. The overall initialization procedure is described in Appendix~\ref{annex:init}.
\section{Model selection}
\label{sec:modelselection}
\subsection{Integrated Completed Likelihood criterion (ICL)}
\label{sec:criterion}
ICL, inspired by the Bayesian Information Criterion, was originally proposed to select a relevant number of classes for mixture models \citep{ICLbiernacki}. It was extended to select an appropriate number of (row and column) clusters in the standard \LBM \citep{keribinicl}: for $\Nnq$ row classes and $\Nnl$ column classes, the criterion reads \begin{align} ICL(\Nnq, \Nnl) & = \llikli\p{\NXb, \NYoneb, \NYtwob} \nonumber \\ & = \log \int \Prob{\sachant{\NXb, \NYoneb, \NYtwob}{\Ntheta; \Nnq, \Nnl}} \Prob{\Ntheta; \Nnq, \Nnl}\mathrm{d}\Ntheta \enspace, \end{align} with $\Prob{\Ntheta; \Nnq, \Nnl}$ the prior distribution of the parameters. By taking into account the latent variables $\NYoneb, \NYtwob$, ICL is a clustering-oriented criterion, whereas BIC or AIC are driven by the faithfulness to the distribution of $\NXb$ \citep{ICLbiernacki}. For the \LBMacro with \NMARacro missingness, ICL requires priors on the parameters of the missingness model. We chose independent non-informative InverseGamma($\beta$, $\beta$) distributions (with $\beta$ tending to zero) for the parameters $\NsigmaA$, $\NsigmaB$, $\NsigmaP$ and $\NsigmaQ$. As in \citet{keribinicl}, we use non-informative Dirichlet priors on the parameters $\Nalphaoneb$ and $\Nalphatwob$ of the mixing proportions of classes.
ICL reads
\begin{align}
ICL(\Nnq, \Nnl) &= \llikli\p{\NXb, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb} \\
&= \log \int \Prob{\sachant{\NXb, \NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb}{\Ntheta; \Nnq, \Nnl}} \Prob{\Ntheta; \Nnq, \Nnl}\mathrm{d}\Ntheta \nonumber
\end{align}
\begin{proposition}
The ICL criterion for the \LBMacro extended to the \NMARacro missingness process presented in Section~\ref{sect:lbmextended} has the following asymptotic form for a data matrix of size $\Nnone \times \Nntwo$:
\begin{align*}
ICL(\Nnq, \Nnl) =&\; \underset{\Ntheta}{\max}\; \log \Prob{\NXob,\NYoneb, \NYtwob, \NAb, \NBb, \NPb, \NQb; \Ntheta} \nonumber\\
& - \frac{\Nnq\Nnl}{2} \log\p{\Nnone\Nntwo} - \frac{\Nnq-1}{2} \log\p{\Nnone} - \frac{\Nnl-1}{2} \log\p{\Nntwo} \label{equ:iclAsympt}\\
& + \Nnone \log\p{2\pi} - \log\p{\Nnone} + \Nntwo \log\p{2\pi} - \log\p{\Nntwo} \nonumber\\
& + o(\log\Nnone) + o(\log\Nntwo) \enspace.
\end{align*}
See proof in Appendix~\ref{annexicl}.
\end{proposition}
Since the maximum of the complete log-likelihood required to calculate the ICL is not available, in practice it is replaced by the lower bound provided by the variational approximation (see equation~\ref{eq:criteriaJ}). An ICL criterion for the \LBMacro with MAR missing data can be constructed in the same way, allowing for comparison with the \NMARacro model (see details in Appendix~\ref{annexicl}).

\section{Experiments on simulated data}
\label{sec:simulatedata}

Simulated data provide all the elements needed to assess clustering algorithms in controlled settings: using controlled datasets, we can properly test the ability of an algorithm to recover the known underlying structure.

\subsection{Assessing the difficulty of a co-clustering task}

In co-clustering, several loss functions are suited for measuring the discrepancy between the underlying classes ($\NYoneb$, $\NYtwob$) and some predictions ($\NYonebtild$, $\NYtwobtild$).
For our experiments, we will use the measure defined by \citet{lbmgg}, that is, the ratio of misclassified entries in the data matrix:
\begin{equation*}
\Nlossitem{\NYoneb, \NYtwob,\NYonebtild, \NYtwobtild} = \overbrace{\Nlossrow{\NYoneb, \NYonebtild}}^{1 - \frac{1}{\Nnone} \sum_\Ni\Kronecker{\NYone_\Ni, \NYonetild_\Ni} } +\overbrace{\Nlosscol{\NYtwob, \NYtwobtild}}^{1 - \frac{1}{\Nntwo} \sum_\Nj\Kronecker{\NYtwo_\Nj, \NYtwotild_\Nj} } -\;\Nlossrow{\NYoneb, \NYonebtild} \;\Nlosscol{\NYtwob, \NYtwobtild}
\end{equation*}
where $\Kronecker{}$ is the Kronecker delta. In standard clustering, the difficulty of a task is often assessed by its Bayes risk, that is, by the minimum of the expectation of the loss function, which is typically approximated by Monte Carlo on simulated data.

Co-clustering poses specific difficulties. Adding more rows or more columns alters the difficulty of the task, because the dimensions of the spaces where the clustering is performed are expanded. The duality between the rows and the columns implies that the size of the matrix is a characteristic of a co-clustering problem. In other words, given a fixed generative distribution, the difficulty of the task decreases as the matrix size increases, in contrast to simple clustering, where the difficulty, as measured by the Bayes risk, remains constant when more examples (that is, rows) are added. A simple Monte Carlo approximation of the risk consists in averaging over many statistical units. In simple clustering, this means generating a great number of rows in a data matrix. In co-clustering, the statistical unit is the whole matrix, implying that a Monte Carlo approximation of the risk is obtained by generating a great number of data matrices, which entails a high computational cost. Furthermore, estimating the Bayes risk from a single data matrix is highly variable: the risk may be very different between two data matrices of the same size generated from the same distribution.
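For concreteness, the loss defined above is straightforward to compute. The sketch below is an illustration, not the authors' code, and it assumes that any label switching between the true and predicted classes has already been resolved:

```python
import numpy as np

def l_item(y_row, y_col, y_row_pred, y_col_pred):
    """Ratio of misclassified entries of the data matrix:
    l_row + l_col - l_row * l_col, where l_row (resp. l_col) is the
    fraction of misclassified rows (resp. columns)."""
    l_row = np.mean(np.asarray(y_row) != np.asarray(y_row_pred))
    l_col = np.mean(np.asarray(y_col) != np.asarray(y_col_pred))
    return l_row + l_col - l_row * l_col
```

An entry $(i,j)$ of the matrix is well classified only when both its row and its column are, hence the product term that removes the doubly counted entries.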
Hence the usual notion of Bayes risk is not appropriate for co-clustering. \citet{Lomet12} argue that conditioning the Bayes risk on the observed matrix is more appropriate. They give a protocol to simulate data matrices in which the difficulty of the clustering task is controlled by the following {\em conditional Bayes risk}:
\begin{equation}\label{eq:condBayesrisk}
r_{item}(\NYonebtild, \NYtwobtild) = \expectation\brackets{ \sachant{\Nlossitem{ \NYoneb, \NYtwob, \NYonebtild, \NYtwobtild }} {\NXob}} \enspace,
\end{equation}
where the expectation is taken over $\NYoneb, \NYtwob$ only and $\NYonebtild,\NYtwobtild$ are the clusterings returned by the {\em conditional Bayes classifier}, that is, the maximum \textit{a posteriori}:
\begin{equation*}
\p{\NYonebtild,\NYtwobtild} = \argmin_\p{\NYoneb, \NYtwob} r_{item}(\NYoneb, \NYtwob) = \argmax_\p{\NYoneb, \NYtwob} \sum_{\Ni\Nj} p\p{\sachant{\NYone_\Ni, \NYtwo_\Nj}{\NXob}} \enspace.
\end{equation*}
\citet{Lomet12} released data sets, with different sizes and difficulties, simulated from the \LBM. Using their protocol, we generated new data according to the \LBMacro with an \NMARacro missingness process. Data sets are generated according to the \LBMacro with three row and column classes, with parameters
\begin{equation}\label{eq:LBMparam}
\Nalphaoneb = \Nalphatwob = \begin{pmatrix} \sfrac13 \\ \sfrac13 \\ \sfrac13 \end{pmatrix} \qquad\text{and}\qquad \Npib = \begin{pmatrix} \epsilon & \epsilon & 1-\epsilon \\ \epsilon & 1-\epsilon & 1-\epsilon \\ 1-\epsilon & 1-\epsilon & \epsilon \\ \end{pmatrix} \enspace,
\end{equation}
where $\epsilon$ defines the difficulty of the clustering task. The parameters of the \NMARacro process are
\begin{equation}\label{eq:MNARparam}
\Nmu = 1 , \quad \NsigmaA = 1 , \quad \NsigmaB = 1 , \quad \NsigmaP = 1 , \quad \NsigmaQ = 1 \enspace,
\end{equation}
which gives an average proportion of 35\% of missing values.
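The fully observed part of this generative process can be sketched as follows. This is a minimal illustration of a Bernoulli latent block model with the block connectivity matrix parameterized by the difficulty $\epsilon$ as above; the missingness (censoring) step, which involves $\Nmu$ and the four variance parameters, is omitted here since its exact form is given elsewhere in the paper:

```python
import numpy as np

def sample_lbm(n1, n2, alpha1, alpha2, pi, seed=0):
    """Draw (X, Y1, Y2) from a Bernoulli latent block model:
    row classes Y1 ~ Multinomial(alpha1), column classes Y2 ~ Multinomial(alpha2),
    and X[i, j] | Y1[i] = q, Y2[j] = l ~ Bernoulli(pi[q, l])."""
    rng = np.random.default_rng(seed)
    y1 = rng.choice(len(alpha1), size=n1, p=alpha1)
    y2 = rng.choice(len(alpha2), size=n2, p=alpha2)
    # pi[np.ix_(y1, y2)] broadcasts the block probabilities to an n1 x n2 grid
    X = rng.binomial(1, np.asarray(pi)[np.ix_(y1, y2)])
    return X, y1, y2

eps = 0.1  # difficulty parameter: smaller values give better-separated classes
pi = np.array([[eps, eps, 1 - eps],
               [eps, 1 - eps, 1 - eps],
               [1 - eps, 1 - eps, eps]])
X, y1, y2 = sample_lbm(200, 300, [1 / 3] * 3, [1 / 3] * 3, pi)
```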
\subsection{Analyzing the classification of the proposed inference}
\label{sec:classifinference}

We test here the ability of the proposed inference scheme to recover row and column classes. To conduct the experiments, we generate an initial data matrix of size $\Nnone=\Nntwo=500$ with a conditional Bayes risk of $5\%$, set by choosing $\epsilon$ in \eqref{eq:LBMparam} by trial and error. The size of this matrix is then progressively reduced, removing rows and columns, to increase the difficulty of the classification task. The conditional Bayes risk is re-estimated on each sub-matrix to provide a reference. Our algorithm is then run on these data matrices using 20 initializations for each run, as described in Section~\ref{initproc}. We then predict the row and column classes $(\NYoneb, \NYtwob)$ with their maximum \textit{a posteriori} estimators on the variational distribution. This whole process is repeated 20 times, leading to the results presented in Figure~\ref{fig:error_classif_size}. As expected, the conditional Bayes risk decreases as the data matrices grow. The predictions returned by our algorithm follow the same pattern, with a gap to the conditional Bayes risk that diminishes as the data matrices grow. Appendix~\ref{annex:estimation} provides additional experimental results that show consistent estimations of the model parameters.
\begin{figure}
\centering
\scalebox{0.5}{\input{img/error_classif_size.pgf}}
\caption{Classification error with respect to the size of the data matrix (lower is better); \textcolor[rgb]{0.121569,0.466667,0.705882}{$\filledstar$} is the median of the conditional Bayes risk; \textcolor[rgb]{0.172549,0.627451,0.172549}{$\filledmedtriangleup$} is the median prediction error obtained by our algorithm.}
\label{fig:error_classif_size}
\end{figure}

\subsection{Analyzing the benefit of an \NMARacro model versus a MAR model for \NMARacro data}
\label{sec:expnmar}

The importance of using the right missingness model is tested by comparing the classifications returned by an \LBMacro with and without an \NMARacro model. A data set is generated according to the \LBMacro with \NMARacro values, where the parameters $\Nalphaoneb$, $\Nalphatwob$ and $\Npib$ of the \LBMacro are fixed as in \eqref{eq:LBMparam}, and $\epsilon$ is chosen in order to get a conditional Bayes risk of 12\% for data matrices of size $\Nnone = \Nntwo = 100$; the \NMARacro model parameters $\Nmu$, $\NsigmaA$ and $\NsigmaP$ are all set to one, which gives an average proportion of 35\% of missing values. Several data matrices are generated using these parameters while varying the value of the $\NsigmaB$ and $\NsigmaQ$ parameters that govern the \NMARacro effects; these variations affect neither the conditional Bayes risk nor the proportion of missing values. For each data matrix, we train the \LBMacro with either the \MARacro or the \NMARacro model. This process is repeated 20 times, starting from the generation of a new fully observed data matrix. The medians of the classification errors $l_{item}$ are presented in Figure~\ref{fig:error_classif_nmar} as a function of the \NMARacro effect.
They are essentially constant and close to the conditional Bayes risk for the \LBMacro with the \NMARacro model, whereas the \LBMacro with the \MARacro model is badly affected by \NMARacro data, eventually leading to a classification close to a totally random allocation\footnote{With equal class proportions, the expected classification error of a random allocation is $\frac{\Nnq-1}{\Nnq} + \frac{\Nnl-1}{\Nnl} - \frac{\Nnq-1}{\Nnq} \frac{\Nnl-1}{\Nnl}$, that is, 0.89 here where $\Nnq=\Nnl=3$.}. Ignoring the nature of the missingness process leads here to strong biases in estimation that in turn drastically affect classification. Thankfully, the ICL criterion may be of great help for selecting the right missingness model, as shown in Section~\ref{sec:icl_exp_missing_model}.

\begin{figure}
\centering
\scalebox{0.5}{\input{img/error_classif_nmar.tex}}%
\caption{Classification error with respect to an increase of the \NMARacro effect (lower is better); \textcolor[rgb]{1.000000,0.498039,0.054902}{$\filledstar$} is the median prediction error obtained with the MAR model; \textcolor[rgb]{0.172549,0.627451,0.172549}{$\filledmedtriangleup$} is the median prediction error obtained with the \NMARacro model.}
\label{fig:error_classif_nmar}
\end{figure}

\subsection{Analyzing the ability of the model selection criterion to select the adequate number of classes}

We reuse the parameters \eqref{eq:LBMparam} and \eqref{eq:MNARparam} to analyze the behavior of the ICL criterion. We consider different sizes of data matrices, between (30,30) and (150,150), with varying difficulty for each matrix size, with a conditional Bayes risk \eqref{eq:condBayesrisk} of respectively 5\%, 12\% and 20\%. The results in Figure~\ref{iclresults} show that, as expected, the ICL criterion tends to select the right number of classes more often as the data matrices get larger and also when classes are more separated.
We also observe that the ICL criterion tends to be conservative for small data matrices, by underestimating the number of classes. This may be because the size of the matrix is not large enough for the asymptotic approximation to be valid, and/or it may stem from the approximations used to compute the log-likelihood $\mathcal{J}$ (variational restriction and delta method).

\begin{figure}
\centering
\setlength{\extrarowheight}{1pt}%
\scalebox{0.82}{\begin{tabular}{c|c|c|cccc|cccc|cccc|}
\multicolumn{3}{@{}c}{} & \multicolumn{4}{@{}c@{}}{$r_{item}(\NYonebtild, \NYtwobtild) =5\%$} & \multicolumn{4}{@{}c@{}}{$r_{item}(\NYonebtild, \NYtwobtild) =12\%$} & \multicolumn{4}{c}{$r_{item}(\NYonebtild, \NYtwobtild) =20\%$} \\[1ex]
\cline{4-15}
\multicolumn{3}{c|}{} & \multicolumn{4}{c|}{$\Nnl$} & \multicolumn{4}{c|}{$\Nnl$} & \multicolumn{4}{c|}{$\Nnl$} \\
\cline{4-15}
\multicolumn{3}{c|}{} & 2&3&4&5 & 2&3&4&5 &2&3&4&5 \\
\cline{2-15}
\multirow{4}{*}{$\Nnone=\Nntwo=30$} & \multirow{4}{*}{$\Nnq$} &2&9&2&&&13&3&&&14&2&&\\
& & 3 &1& \boxed{7}&&&2& \boxed{2}&&&2& \boxed{1}&&\\
& & 4 &&&&&&&&&&&&\\
& & 5 &&&&1&&&&&&&&1\\
\cline{2-15}
\multirow{4}{*}{$\Nnone=\Nntwo=40$} & \multirow{4}{*}{$\Nnq$} &2&4&2&&&17&1&&&17&1&&\\
& & 3 && \boxed{14}&&&1& \boxed{1}&&&1& \boxed{1}&&\\
& & 4 &&&&&&&&&&&&\\
& & 5 &&&&&&&&&&&&\\
\cline{2-15}
\multirow{4}{*}{$\Nnone=\Nntwo=50$} & \multirow{4}{*}{$\Nnq$} &2&1&&&&11&2&&&15&1&2&\\
& & 3 &2& \boxed{17}&&&& \boxed{7}&&&1& \boxed{1}&&\\
& & 4 &&&&&&&&&&&&\\
& & 5 &&&&&&&&&&&&\\
\cline{2-15}
\multirow{4}{*}{$\Nnone=\Nntwo=75$} & \multirow{4}{*}{$\Nnq$} &2&1&&&&9&&&&13&2&&\\
& & 3 &1& \boxed{17}&&&1& \boxed{9}&&&& \boxed{4}&&\\
& & 4 &&&&&1&&&&&&&\\
& & 5 &1&&&&&&&&&&&1\\
\cline{2-15}
\multirow{4}{*}{$\Nnone=\Nntwo=100$} & \multirow{4}{*}{$\Nnq$} &2&&&&&2&&&&11&&&\\
& & 3 && \boxed{19}&&&& \boxed{18}&&&1& \boxed{7}&1&\\
& & 4 &&&&&&&&&&&&\\
& & 5 &&&&1&&&&&&&&\\
\cline{2-15}
\multirow{4}{*}{$\Nnone=\Nntwo=150$} &
\multirow{4}{*}{$\Nnq$} &2&1&&&&&1&&&5&1&&\\
& & 3 && \boxed{14}&2&&1& \boxed{18}&&&1& \boxed{11}&&\\
& & 4 &&1&&&&&&&&&&\\
& & 5 &&&&2&&&&&&&&2\\
\cline{2-15}
\end{tabular}}
\caption{\label{iclresults} Number of $(\Nnq,\Nnl)$ models selected by the ICL criterion among 20 data matrices, for different difficulties, as measured by the conditional Bayes risk, and different matrix sizes. All matrices are generated with the same number of row and column classes: $\Nnq=\Nnl=3$. }
\end{figure}

\subsection{Analyzing the ability of the model selection criterion to select the adequate missingness model}
\label{sec:icl_exp_missing_model}

We use the models fitted in Section~\ref{sec:expnmar} to analyze the ability of the ICL criterion to select the right missingness model (\NMARacro or MAR). The difference in ICL between the MAR and \NMARacro models is computed for each data matrix, assuming that the right numbers of classes $(\Nnq,\Nnl)$ are known. The results, presented in Figure~\ref{fig:icl_mar_nmar}, show that ICL rightly opts for the \NMARacro model almost everywhere, demonstrating the ability of this criterion to select the adequate missingness model. The MAR model is only chosen for some experiments with the lowest \NMARacro effect ($\NsigmaB=\NsigmaQ=0.01$), where the prediction performances are almost identical (see Figure \ref{fig:error_classif_nmar}), with a median difference in ICL of $-0.51$ (the \MARacro model is chosen in 13 of the 20 repetitions).

\begin{figure}[tb]
\centering
\scalebox{0.7}{\input{img/icl_nmar_mar.tex}}
\caption{Difference in ICL between the MAR and \NMARacro models with respect to an increase of the \NMARacro effect; \textcolor[rgb]{0.121569,0.466667,0.705882}{$\filledstar$} is the median.
The \NMARacro model is selected when the difference in ICL is positive.}
\label{fig:icl_mar_nmar}
\end{figure}

\section{Experiments on real data}
\label{sec:realdata}

We consider voting records\footnote{Votes from the French National Assembly are available from \url{http://data.assemblee-nationale.fr/travaux-parlementaires/votes}.} from the lower house of the French Parliament (\textit{Assemblée Nationale}). This dataset gathers the results of the 1256 ballots held in 2018, in which the 577 French members of parliament (MPs) voted on the procedural motions and amendments of the 15th legislature (June 2017). For each text, the vote of each MP is recorded as a 4-level categorical response: ``yes'', ``no'', ``abstained'' or ``absent''. Using our model, we bring out some relevant groups of texts and MPs, as well as some structure in the behavior of nonvoters. We gather the data in a matrix where each row represents an MP and each column represents a text. To use our model, we reduced the 4 response levels to 3 (``yes'', ``no'', ``missing''), assuming that merging the ``abstained'' and ``absent'' categories would not substantially affect the underlying missingness process (``abstained'' votes represent about 4\% of the expressed votes; ``missing'' responses represent 85\% of all votes). At the lower house of the French Parliament, MPs may group together according to their political affinities. Groups with fewer than 15 members, or MPs who choose to be independent, are gathered under the ``Non inscrits'' (NI) label, resulting in a heterogeneous range of political hues within this group. The names of the groups and their frequencies are detailed in Figure~\ref{fig:hemicycle}.
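The reduction from four response levels to three can be sketched as follows; this is an illustrative snippet with variable names of our own choosing, not code from the study:

```python
import numpy as np

def recode_votes(raw):
    """Map the 4-level responses onto {1.0: yes, 0.0: no, nan: missing},
    merging the 'abstained' and 'absent' categories into 'missing'."""
    raw = np.asarray(raw)
    out = np.full(raw.shape, np.nan)
    out[raw == "yes"] = 1.0
    out[raw == "no"] = 0.0
    return out

# Toy example: rows are MPs, columns are ballots.
votes = [["yes", "absent"], ["no", "abstained"]]
coded = recode_votes(votes)
```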
\begin{figure}
\centering
\resizebox{0.45\textwidth}{!}{\input{img/hemicycle}}
\begin{minipage}[b]{.5\textwidth}
\baselineskip=0.5\baselineskip
{\tiny\begin{center} Political groups from left-wing to right-wing \end{center}
FI (17): France Insoumise \\
GDR (16): Groupe de la Gauche démocrate et républicaine \\
SOC (29): Socialistes \\
LT (19): Libertés et territoires \\
LaREM (304): La République En Marche \\
MODEM (46): Mouvement démocrate\\
UDI-AGIR (28): Les Constructifs\\
LR (104): Les Républicains \\
NI (13): Non inscrits (mixed left and right wings) }
\end{minipage}
\caption{Hemicycle of the political groups of the French National Assembly}
\label{fig:hemicycle}
\end{figure}

The ICL criterion, used to select both the numbers of classes and the type of missingness, favors an \NMARacro missingness with $\Nnq=14$ MP classes and $\Nnl=14$ text classes against a MAR model with 19 MP classes and 23 text classes. The reordered data matrix derived from this block clustering is displayed in Figure~\ref{fig:reordered_asnt}. Fewer classes lead to over-aggregated components hiding the subtleties of the network, but since they still correspond to well-identified groups and are more amenable to visual analysis, we provide them as additional material in Appendix~\ref{annex:asnt}. In Figure~\ref{fig:reordered_asnt}, the classes of MPs are coherent with their political orientation: classes 0 and 1 are mainly made up of left-wing MPs from the groups SOC, FI, GDR and LT; classes 2 and 3 are mainly made up of right-wing MPs from LR; and classes 6 to 13 are mainly made up of centrist MPs from LaREM and MODEM, who are known to be political allies. Classes of texts can be analyzed with the available metadata. A bipartite opposition system appears from classes A and C. Texts from class A are the original articles of law proposed by the government and are unsurprisingly voted positively by MP classes 6 to 13, as they are from the same political mould as the French government.
Texts from class C are mainly amendments proposed by minority groups and are voted positively by both the left wing (classes 0 and 1) and the right wing (classes 2 and 3), and negatively by the MPs supporting the government (classes 6 to 13). The left and right wings are nevertheless divided on usual issues, such as the immigration regulation amendments gathered in classes G and M, or the general economic matters gathered in classes H and I.

\begin{figure}[hbt!]
\centering
\includegraphics[width=1.\textwidth]{img/plots_anst_14_14_all.pdf}
\caption{Left: matrix of votes reordered according to the row and column classes, for the \NMARacro LBM model selected by ICL, with 14 MP classes and 14 text classes. The red lines delineate class boundaries. The counts of MPs belonging to their political groups in each MP class are given on the left. Right: summary of the inferred opinions (expressed or not) for all classes of texts and MPs, as given by the estimated probability to support a text in each block of the reordered matrix. }
\label{fig:reordered_asnt}
\end{figure}

In our model, the latent variables $\NAb$ and $\NBb$ characterize the propensity of MPs to cast a vote. Figure~\ref{fig:nu_a_nu_b_icl} displays the scatter plot of $\NnuA_\Ni$ and $\NnuB_\Ni$, the maximum \textit{a posteriori} estimates of $\NA_\Ni$ and $\NB_\Ni$ for all MPs under the variational distribution. The abscissa represents the propensity to vote\footnote{%
More rigorously, the abscissa represents the {\em global deviation from the average} propensity to vote. }, with higher values of $\NnuAb$ corresponding to a higher propensity to vote, and the ordinate $\NnuBb$ represents the additional effect of casting a vote when supporting the text. The membership of MPs to their political group is indicated by the plotting symbol.
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
\includegraphics[width=1.\textwidth]{img/asnt_nu_a_nu_b_grp.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\includegraphics[width=1\textwidth]{img/ICL_3d.pdf}
\end{minipage}
\captionof{figure}{Left: maximum \textit{a posteriori} estimates of the MPs' propensities ($\NnuA_\Ni$, $\NnuB_\Ni$), with their political group memberships. $\NnuA_\Ni$ drives the MAR effect and $\NnuB_\Ni$ drives the \NMARacro one. Right: ICL curve. The maximum is reached for $\Nnq=14$ and $\Nnl=14$.}
\label{fig:nu_a_nu_b_icl}
\end{figure}

We see two obvious clusters separated along the vertical axis $\NnuBb$: the bottom cluster is essentially formed by MPs from the LaREM and MODEM political groups, which support the government, whereas the top cluster is formed by the opposition political groups.
%
The $\NnuBb$ estimates for the opposition cluster are positive, meaning that these MPs come to parliament to vote positively. This behavior is not surprising because the MPs of the opposition parties are outnumbered by the MPs supporting the government, so they must be diligent if they want their tabled motions or amendments passed.
%
The dependency between the political groups and the \NMARacro effect encoded in the estimates $\NnuBb$, which is confirmed by an ANOVA test (with a p-value below numerical precision), supports the claim that the missingness patterns captured by our model are relevant for the problem at hand.
%
A similar analysis is developed on texts in Appendix~\ref{annex:asnt}.

\section{Conclusion}

In many estimation problems, the absence of data conveys some information on the underlying phenomenon that should be exploited for its modeling. We propose a co-clustering model that accounts for this absence of data; it aims at retrieving groups of rows and columns based on the complete data matrix instead of considering only the partitioning of the observed data matrix.
This model consists of two building blocks: a co-clustering model (\LBM) of the full data matrix, and a missingness model that manages the censoring that produces the observed data matrix. This missingness model preserves the symmetry of the co-clustering model by allowing two \NMARacro effects, one on the rows and the other on the columns. The overall model of the observed data matrix results from the combination of the model of the complete data matrix with the missingness model. We used variational techniques and the Delta method to obtain a tractable approximation of a lower bound on the observed log-likelihood. We proposed a model selection criterion to select both the number of classes and the type of missingness (\MARacro versus \NMARacro). Our experiments on synthetic datasets show that ignoring an informative missingness can lead to catastrophic co-clustering estimates, supporting the value of using expressive missingness models on such data. We also illustrate the use of our model on a real-world case where the missingness model provides an interesting basis for analyzing and interpreting the motivations of nonvoters. Our model should also be useful in other fields such as ecology, where the probability of observing an interaction between species derives from some factors that also explain the true interactions \citep{vazquez2009uniting}, or collaborative filtering, where the probability of observing a rating depends on the actual rating that would be given by the user \citep{Marlin07}. In the latter application, the data sizes generally encountered in recommendation would require computational improvements in inference. Another useful direction for future work is to extend our model to non-binary data.
\section{Introduction}

Audio signal processing has the potential to become a useful diagnostic tool, particularly in the field of respiratory disorder detection \cite{rao_acoustic_2019}. Studies have shown that respiratory conditions such as pertussis, asthma and pneumonia can be automatically diagnosed using algorithms that analyze the cough sounds produced by patients \cite{Pramono2016, Amrulloh2015}. The benefit of using audio processing and Machine Learning (ML) algorithms to classify cough sounds is that the diagnosis can be performed quickly and easily by a device such as a smartphone, thus reducing the workload of medical professionals and supporting ubiquitous analysis. Since one of the most common symptoms of the novel coronavirus disease (COVID-19) is a dry cough \cite{whochina}, there has been significant interest in leveraging such algorithms to quickly and unobtrusively screen for the virus \cite{xia_covid-19_nodate, nessiem_detecting_2021, laguarta_covid-19_2020}. In order to train ML algorithms to screen for COVID-19 from cough sounds, a large amount of cough sound data is necessary. However, since the disease is a highly contagious airborne pathogen \cite{lotfi_covid-19_2020}, collecting such cough sound data from COVID-19 positive individuals requires significant effort and sanitary precautions to ensure the safety of those involved. To overcome such data collection limitations, several groups around the world have relied on crowdsourcing, a paradigm in which internet users upload their own cough sounds and report whether or not they had been diagnosed with the condition \cite{xia_covid-19_nodate, laguarta_covid-19_2020, orlandic_coughvid_2021, sharma_coswara_2020}. However, such user-labeled data may suffer from mislabeling, as some extensive datasets reported that thousands of uploaded recordings did not contain cough sounds at all \cite{xia_covid-19_nodate, orlandic_coughvid_2021}.
As a validation step of the crowdsourced recordings, the creators of the COUGHVID dataset enlisted four expert physicians to listen to a number of cough recordings and diagnose any audible respiratory disorders (e.g., COVID-19, upper and lower respiratory infections) \cite{orlandic_coughvid_2021}. However, the general trend was that the four experts did not agree on the COVID-19 diagnosis. Disagreement between physicians is common in the medical field; a study of medical referrals noted that only 12\% of final diagnoses agreed with the initial diagnoses, and 21\% of final diagnoses significantly differed from the initial ones \cite{van_such_extent_2017}. Therefore, extra care must be taken to overcome the label ambiguity of any crowdsourced cough audio databases, as the expert disagreement and user mislabeling can lead to erroneous classification. The winners of the 2017 PhysioNet/CinC Challenge on ECG signal classification observed that expert annotation inconsistencies in physiological data can be alleviated through manual re-labeling, thus leading to significant improvements in classifier performance on unseen data \cite{teijeiro_abductive_2018}. However, manual re-labeling of cough sounds is difficult to perform without extensive medical training. Furthermore, manually re-labeling such an extensive dataset would require significant time and effort. Semi-supervised learning (SSL) is an ML paradigm that can be used to automate the re-labeling process of biomedical signals \cite{zhu_speech_2021, deng_semisupervised_2018}. While SSL is often used in conjunction with Deep Learning models for medical inference \cite{zhu_speech_2021, guan_who_2018, raghu_direct_nodate, tanno_learning_2019}, it can also be used with classical ML approaches in which the extracted features leverage domain knowledge to shed light on the inner workings of the classifier.
In this work, we utilize a state-of-the-art SSL technique based on explainable ML models that integrates the knowledge of a variable number of human annotators. This technique uses the cough sound recordings of the COUGHVID dataset that were labeled by expert physicians to train three classifiers, where each one models the medical knowledge of a different expert. Next, we overcome the issue of expert label scarcity by generating pseudo-labels on the entire database using each expert model. Then, the outcomes of these models are compared alongside crowdsourced user labels to identify a subset of cough recordings with the highest probability of originating from either COVID-19 positive or healthy individuals. Thus, we overcome the issues of crowdsourced data mislabeling and expert label inconsistency by identifying a high-quality subsample of datapoints -- with a threefold increase in feature separability compared to the user-labeled data, as well as a more significant difference in the power spectral densities of the two cough classes ($p = 1.2 \times 10^{-64}$) -- which can be used to train future cough classifiers. The subsample of cough audio recordings identified through our SSL approach was subsequently made available to the public for further classifier development and ML exploration. To assess the intra-class consistency of this data, we quantify the class separability of standard audio features extracted from COVID-19 versus healthy coughs in the SSL labeling scheme compared to that of the expert labels and crowdsourcing labels of the COUGHVID dataset. Finally, we demonstrate how this data can be used to train cough audio signal classifiers by training a final COVID-19 cough detection model and comparing its classification accuracy to that of the fully supervised models.
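The subset-selection step outlined above can be sketched as follows. This is a deliberately simplified strict-unanimity rule, shown for illustration only; the exact criterion used in this work may differ:

```python
import numpy as np

def consensus_subset(user_labels, expert_preds):
    """Return the indices of recordings for which every expert model's
    pseudo-label matches the crowdsourced user label (strict unanimity).
    expert_preds has shape (n_experts, n_samples)."""
    expert_preds = np.asarray(expert_preds)
    user_labels = np.asarray(user_labels)
    unanimous = (expert_preds == expert_preds[0]).all(axis=0)
    agrees_user = expert_preds[0] == user_labels
    return np.where(unanimous & agrees_user)[0]

# Toy example: 3 expert models, 4 recordings; only recordings 0 and 3
# are kept, since all experts and the user agree on them.
kept = consensus_subset([1, 0, 1, 0],
                        [[1, 0, 0, 0],
                         [1, 0, 1, 0],
                         [1, 1, 0, 0]])
```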
As a result, in addition to applying SSL to the relatively novel task of COVID-19 screening from cough sounds, this work aims to provide an automated approach for increasing the labeling quality of biosignal datasets, which can be applied to many other pathologies.

\section{Related Works}

While most cough classification algorithms focus on fully supervised ML approaches \cite{Pramono2016, xia_covid-19_nodate, laguarta_covid-19_2020}, semi-supervised learning (SSL) has scarcely been applied to the task \cite{xue_exploring_2021}. In this paradigm, unlabeled data is exploited to augment the dataset and enhance the performance of the classifier, thus providing ample training data and overcoming the issue of label sparsity \cite{lee2013pseudo, zhang_semi-supervised_2012}. Semi-supervised audio classification algorithms have outperformed fully-supervised models both for audio signal categorization \cite{zhang_semi-supervised_2012} and cough detection \cite{hoa_semi-supervised_2011} tasks. Furthermore, Han et al. found that incorporating SSL into sound classification enabled a 52.2\% reduction in the human annotations necessary to achieve results comparable to fully-supervised methods \cite{han_semi-supervised_2016}. A simple, state-of-the-art SSL technique is Pseudo-Label, in which an initial model trained on the labeled samples classifies the unlabeled samples, which are then assigned pseudo-labels based on which predicted class has the highest probability \cite{lee2013pseudo}. Then, a final model is trained using both the original labels and pseudo-labels as ground truth. While this method shows significant performance gains over classical supervised learning on the benchmark MNIST dataset, it does not address the issue of label ambiguity present when the labels of different annotators do not match.
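The Pseudo-Label scheme just described can be sketched with any base classifier. The toy nearest-centroid model below is purely illustrative; note that the original method also weights the contribution of the unlabeled loss during training, which this simplified sketch omits:

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in for any classifier exposing fit/predict."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Squared Euclidean distance of each sample to each class centroid
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=-1)
        return self.classes_[d.argmin(axis=1)]

def pseudo_label_fit(model, X_lab, y_lab, X_unlab):
    """Pseudo-Label: fit on the labeled data, assign each unlabeled sample
    its predicted class, then refit the model on the union of both sets."""
    model.fit(X_lab, y_lab)
    y_pseudo = model.predict(X_unlab)
    return model.fit(np.vstack([X_lab, X_unlab]),
                     np.concatenate([y_lab, y_pseudo]))

X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.5, 0.5], [9.5, 9.5]])
final_model = pseudo_label_fit(NearestCentroid(), X_lab, y_lab, X_unlab)
```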
In addition to overcoming label scarcity, SSL approaches have also proven successful in alleviating the burden of inconsistent, ambiguous, and erroneous labels on ML classification tasks \cite{zhou_brief_2018}. Considering the example of 3D image segmentation tasks, semi-supervised models have been shown to outperform fully supervised ones both in the presence of human mislabeling and added random noise \cite{lv_semi-supervised_2012}. Furthermore, SSL has been widely utilized in speech emotion recognition, a field that suffers from sparse, inconsistent labeling by multiple untrained annotators \cite{deng_semisupervised_2018, zhu_speech_2021}. In particular, Zhu et al. devised an iterative, semi-supervised scheme using the ambiguous emotion annotations of six to twelve annotators and concluded that sufficient training data and moderately reliable labels at the onset of training can significantly improve the classification performance with respect to fully supervised training \cite{zhu_speech_2021}. Recent works leveraging SSL to overcome inconsistencies in expert physicians' labels utilize an approach in which each expert is modeled by a Deep Neural Network, and then the outputs of these expert models are combined to generate a final label for each sample \cite{guan_who_2018, raghu_direct_nodate, tanno_learning_2019}. For example, Li et al. applied this approach to electronic medical record entity recognition by training five distinct models, expanding them to the whole dataset using Pseudo-Label, and then using a majority voting algorithm to generate the final labels \cite{li_semi-supervised_2021}. Furthermore, Guan et al. used individual expert modeling for diabetic retinopathy classification and found that this approach outperformed the Expectation-Maximization algorithm traditionally used for weighting the accuracies of multiple raters \cite{dawid_maximum_1979}.
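A minimal sketch of such a majority-vote combination follows; it is illustrative only, and ties here resolve to the smallest label, whereas the cited works may handle them differently:

```python
import numpy as np

def majority_vote(pred_matrix):
    """Combine expert model outputs: one label per sample by majority vote.
    pred_matrix: integer array of shape (n_experts, n_samples)."""
    pred_matrix = np.asarray(pred_matrix)
    n_samples = pred_matrix.shape[1]
    labels = np.empty(n_samples, dtype=int)
    for j in range(n_samples):
        values, counts = np.unique(pred_matrix[:, j], return_counts=True)
        labels[j] = values[counts.argmax()]  # first (smallest) label wins ties
    return labels
```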
This promising SSL expert modeling technique has not yet been applied to the field of cough audio signal classification. Our approach leverages the insights from previous works on SSL to overcome the labeling inconsistencies of the COUGHVID dataset and identify a subset of cough audio samples with consistent labels. We use a pseudo-label-based SSL scheme to expand the expert labels onto unlabeled segments of the dataset, and then combine the expert labels to identify the final dataset. Similarly to \cite{zhu_speech_2021}, we analyze the trade-offs between label consistency and training data size when selecting the final SSL approach. As opposed to previous works that rely on Deep Learning \cite{zhu_speech_2021, deng_semisupervised_2018, guan_who_2018, raghu_direct_nodate, tanno_learning_2019, li_semi-supervised_2021}, which may be difficult to interpret and therefore not amenable to sensitive medical classification tasks, we rely on classical ML algorithms using state-of-the-art audio feature computation. Furthermore, we determine the feature importances of the SSL model and compare them with those of the models based on user or expert labels, in order to assess the similarities and differences between the approaches. \section{Methods} \subsection{Methodology Overview} \begin{figure}[ht] \centering \includegraphics[width=0.4\linewidth]{figures/Coughvid_ssl_method_v2.png} \caption{An illustration of the model development methodology, showing the different subsets of the COUGHVID dataset used at each stage. The supervised training, semi-supervised learning, and testing procedures are described in Sections \ref{sec:ml_opt}, \ref{sec:ssl}, and \ref{sec:testing}, respectively.} \label{fig:methodology} \end{figure} One of the challenges of performing COVID-19 classification based on user-labeled and expert-annotated data is label ambiguity.
Since the COUGHVID dataset is crowdsourced, it cannot be known with absolute certainty whether the cough recordings labeled as COVID-19 or healthy truly originated from people with or without the condition, respectively. Furthermore, the experts' cough diagnoses exhibited a Fleiss' Kappa score of 0.07 \cite{Fleiss1971}, meaning that there was only slight agreement between the four experts about the cough diagnoses \cite{orlandic_coughvid_2021}. As shown in Fig. \ref{fig:methodology}, we assessed the label consistency in each of the COVID-vs-healthy classification schemes provided by the dataset (i.e., users, experts) by extracting audio signal features and training ML models based on each set of labels. Then, the semi-supervised learning (SSL) approach was employed to produce a final classifier. Therefore, the following ML models were developed and compared in terms of various classification accuracy metrics on their respective labeling schemes: \begin{enumerate} \item \textit{User Crowdsourcing Model:} In this classifier, the recordings in the positive class were self-labeled as ``COVID-19" by the users who uploaded them. Similarly, the negative class recordings were labeled as ``healthy" by the users. \item \textit{Expert [1,2,4] Model:} Three separate models were developed, corresponding to the labels of Experts 1, 2, and 4. Since we see from Table \ref{tab:label_counts} that Expert 3 labeled only one recording as COVID-19, there is not enough information for an ML model to generalize reliably. Therefore, this expert's labels are omitted from consideration in further analysis. The positive class was made up of recordings labeled by each expert as ``COVID-19", and the corresponding negative class was labeled as ``healthy\_cough". \item \textit{SSL Model:} In order to combine the knowledge from both the users and the experts into one model, semi-supervised learning was used.
In this approach, the expert models and user labels were used to filter the dataset and determine the subset of coughs with the highest probability of being COVID-19 positive and healthy. The details of the implementation are described in Section \ref{sec:ssl}. \end{enumerate} \subsection{Dataset Description} \label{sec:coughvid} This analysis uses the COUGHVID crowdsourcing dataset, which is a vast repository of cough audio samples originating from diverse participants located across the globe \cite{orlandic_coughvid_2021}. The dataset is made up of user-uploaded cough recordings, many of which contain a status label indicating whether the user claimed to be diagnosed with COVID-19, exhibiting symptoms, or healthy at the time of recording. As an additional validation step, four expert physicians each labeled 1,000 cough recordings to diagnose potential respiratory disorders (i.e., COVID-19, upper respiratory infection) that are audible in the recordings. Each expert reported spending approximately 10 hours to label the cough sounds, which exemplifies the significant time and effort human labeling takes for such a task. An expanded version of the training dataset was used, containing recordings uploaded from April 2020 to October 2021. There are about 34,500 recordings in this dataset, 20,644 of which contain user status labels. Both the expert labels and the testing dataset described in \cite{orlandic_coughvid_2021} are unchanged in this work. Table \ref{tab:label_counts} displays the value counts of the cough sounds labeled as COVID-19-positive and healthy by the users and each of the experts. It should be noted that the coughs labeled by the users and experts are not mutually exclusive, and 150 coughs were annotated by all experts to assess the level of agreement between the physicians. 
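The slight inter-rater agreement quoted above (Fleiss' Kappa of 0.07) is computed from exactly this kind of overlapping annotation set. A minimal sketch of the statistic, assuming a per-item matrix of category counts in which every item is rated by the same number of raters:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_items x n_categories) matrix, where
    counts[i, j] is the number of raters assigning item i to category j."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]  # raters per item (assumed constant)
    # Per-item observed agreement and chance agreement
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_j = counts.sum(axis=0) / counts.sum()
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)
```

For the 150 recordings annotated by all four experts, the input would be a $150 \times k$ count matrix over the $k$ diagnosis categories.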
\subsection{Cough Audio Signal Pre-Processing} Since the COUGHVID dataset contains some recordings that do not capture cough audio, the cough classifier developed in \cite{orlandic_coughvid_2021} was used to remove non-cough recordings from consideration. Furthermore, only recordings with a cough classifier output greater than 0.8 were used in this work. \begin{table} \centering \caption{Recording Counts in the COUGHVID Training Dataset} \begin{tabular}{|l|l|l|} \hline Label Origin & Healthy & COVID-19 \\ \hline Users & 15,476 & 1,315 \\ \hline Expert 1 & 259 & 279 \\ \hline Expert 2 & 67 & 285 \\ \hline Expert 3 & 199 & 1 \\ \hline Expert 4 & 221 & 84 \\ \hline \end{tabular} \label{tab:label_counts} \end{table} As an initial pre-processing step, all of the cough recordings were normalized to their maximum absolute value such that the signal values range from -1 to 1. This enables a fair comparison of the RMS power of different signal segments and provides numerical stability. Next, a 4th order Butterworth lowpass filter with a cutoff frequency of 6 kHz was applied. Subsequently, the recordings were downsampled to 12 kHz. This filtering was performed to reduce high-frequency noise and increase the computational efficiency of all further signal processing and feature extraction algorithms. The cutoff frequency was chosen because visual analysis of the cough signal spectra revealed that most of the signal power lies below 6 kHz. Furthermore, past cough sound classification algorithms used cutoff frequencies ranging from 4 kHz to 8 kHz \cite{Pramono2016,Chatrzarrin2011,Drugman2011AssessmentPublication}, so an intermediate value was chosen for this work. \subsection{Cough Segmentation} \label{sec:cough_seg} Once the recordings were pre-processed, a custom cough segmentation algorithm was employed to isolate each individual cough event present in a given recording.
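The pre-processing chain described in Section \ref{sec:cough_seg}'s preceding subsection (peak normalization, a 4th-order Butterworth lowpass at 6 kHz, downsampling to 12 kHz) can be sketched with SciPy; the helper name and the input sampling rate are illustrative assumptions, not the COUGHVID implementation:

```python
import numpy as np
from scipy.signal import butter, sosfilt, resample_poly

def preprocess(x, fs_in, fs_out=12_000, fc=6_000):
    """Peak-normalize to [-1, 1], lowpass at fc, then resample to fs_out."""
    x = np.asarray(x, dtype=float)
    x = x / np.max(np.abs(x))                       # peak normalization
    sos = butter(4, fc, btype="low", fs=fs_in, output="sos")
    x = sosfilt(sos, x)                             # 4th-order Butterworth LPF
    # Rational-rate resampling, e.g. 48 kHz -> 12 kHz is up=1, down=4
    g = np.gcd(fs_out, fs_in)
    return resample_poly(x, fs_out // g, fs_in // g)
```

Using second-order sections (`output="sos"`) keeps the filter numerically stable, and `resample_poly` applies its own anti-aliasing filter during the rate change.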
The segmentation algorithm exploits cough physiology to divide each recording into its constituent cough sounds. This algorithm enables feature extraction on each cough, thus suppressing silence and extraneous low-amplitude sounds like breathing. Furthermore, the algorithm can be used to perform a simple Signal-to-Noise Ratio (SNR) calculation, as well as aggregation of the ML classifier labels of all coughs originating from the same recording. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/segmentation.pdf} \caption{A step-by-step illustration of the cough segmentation procedure. A hysteresis comparator was applied to the signal power to detect the sound bursts, which enables us to consistently discard the segments without relevant inputs for the subsequent ML process.} \label{fig:cough_seg} \end{figure} The algorithm is depicted in Fig. \ref{fig:cough_seg} on a recording of a breath, two coughs, another breath, and two more coughs. First, the signal is squared to compute its power. Next, a hysteresis comparator is applied to extract the sudden bursts in sound amplitude that arise from coughing. This means that potential cough candidates are determined to be regions started by the signal exceeding the upper threshold and ended by the signal going below the lower threshold. A tolerance of 10 ms is applied to the thresholds, meaning that the signal should either exceed the upper threshold or go below the lower threshold for at least 10 ms for a cough onset or offset to be recorded. The lower and upper hysteresis thresholds were set to 0.1 and 2 times the RMS signal power, respectively. These multipliers were empirically determined through analysis of a variety of cough audio signals. Next, the cough segments were analyzed and discarded based on the physiological limitations of cough duration. The cough is composed of three segments: inspiration, compression, and expiration. 
The latter two have known timing constraints: the compressive phase, during which inhaled air is compressed in the lungs to increase lung pressure, typically lasts 200 ms \cite{Chang2006TheCough}. The expiratory phase is initiated by a brief opening of the glottis (30-50 ms), causing the loudest phase of the cough sound and rapid airflow, followed by 200-500 ms of lower respiratory airflow \cite{Chang2006TheCough}. Therefore, the minimum possible cough sound length is approximately 230 ms. We consequently discard any cough sound candidates shorter than 200 ms, and we include the 200 ms before and after the cough candidate in each segmented cough to capture any low-amplitude noise caused during the compressive and expiratory phases. \iffalse \begin{equation} SNR = 20 \times \log_{10}(\frac{\sqrt{\frac{1}{|x_{cough}|} \sum_{x(n) \epsilon x_{cough}} x(n)^2}}{\sqrt{\frac{1}{|x_{noise}|} \sum_{x(n) \epsilon x_{noise}} x(n)^2}}) \label{eq:snr} \end{equation} \fi The cough segmentation algorithm was subsequently used to eliminate cough recordings with significant background noise. An estimate of the SNR was calculated for each signal as described in \cite{orlandic_coughvid_2021} by comparing the RMS signal power of the cough segments of a recording to that of the non-cough segments. The training and testing datasets were further filtered by retaining only the recordings with a SNR greater than 5, above which cough sounds were clearly more prominent than background noise. The cough segmentation and SNR estimation algorithms are available in the \href{https://c4science.ch/diffusion/10770/}{COUGHVID public git repository}\footnote[1]{https://c4science.ch/diffusion/10770/} to foster reproducibility. \subsection{Feature Extraction} \label{section:feature_extraction} \iffalse \begin{table*}[] \centering \caption{\label{tab:features} Extracted Audio Features} \begin{tabular}{|l|l|l|l|} \hline Feature Class & Domain & Count & Details\\ \hline MFCC\cite{Pramono2016} & Mel Freq. 
& 26 & Mean and St. Dev of 13 MFCCs over time \\ \hline \multirow{2}{*}{EEPD}\cite{Chatrzarrin2011} & \multirow{2}{*}{Time} & \multirow{2}{*}{19} & BPF intervals in 50-1000 Hz; \\ &&& See Chatrazzin et al. \cite{Chatrzarrin2011} for details\\ \hline \multirow{6}{*}{Spectral Features\cite{Pramono2016, Sharma2020}} & \multirow{6}{*}{Freq.} & \multirow{6}{*}{11} & Dominant Frequency, Spectral Centroid, \\ & & & Spectral Rolloff, Spectral Spread, Spectral \\ & & & Skewness, Spectral Kurtosis, Spectral Bandwidth, \\ & & & Spectral Flatness, Spectral St. Dev, \\ & & & Spectral Slope Spectral Decrease \\ \hline RMS Power\cite{Pramono2016} & Time & 1 & None \\ \hline Zero Crossing Rate\cite{Pramono2016} & Time & 1 & None\\ \hline Crest Factor\cite{Pramono2016} & Time & 1 & None \\ \hline Signal Length & Time & 1 & None \\ \hline \multirow{2}{*}{Power Spectral Density}\cite{Alvarez2019AMachineHearing} & \multirow{2}{*}{Freq.} & \multirow{2}{*}{Variable} & Frequency bandse selected through \\ & & & training data PSD analysis \\ \hline \end{tabular} \end{table*} \fi The set of 60 features extracted from each cough audio segment are the same as those computed in the cough classification algorithm in \cite{orlandic_coughvid_2021}. These features are a mixture of time, frequency, and Mel frequency domain computations that were chosen due to their previous implementation in automatic cough sound classification ML algorithms \cite{Pramono2016, Chatrzarrin2011}. These features provide both general information about the signal spectra, as well as detailed computations regarding specific frequency bands, thus allowing the subsequent feature elimination step to select only the relevant features to each classification task. The code used for feature extraction is available to the public in the COUGHVID repository \cite{orlandic_coughvid_2021}. 
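The exact 60-feature implementation is the one in the COUGHVID repository; purely as an illustration, a few of the simpler time- and frequency-domain descriptors named above (RMS power, zero-crossing rate, crest factor, spectral centroid) can be computed from a mono cough segment as follows:

```python
import numpy as np

def basic_cough_features(x, fs=12_000):
    """A few of the time/frequency-domain descriptors used for cough
    classification; the full set also includes MFCCs, EEPD bands, and
    further spectral shape features."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2 * fs  # crossings per second
    crest = np.max(np.abs(x)) / rms
    # Spectral centroid: magnitude-weighted mean frequency
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)
    return {"rms": rms, "zcr": zcr, "crest": crest, "centroid": centroid}
```

Each segmented cough yields one such feature vector, which is what the classifiers in Section \ref{sec:ml_opt} consume.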
For each classification task described in Section \ref{sec:ml_opt}, an additional set of Power Spectral Density (PSD) features were selected by inspecting the averaged PSD of each class in the training dataset, divided by the total average signal power such that the PSD curve is normalized to a unit area. The frequency bands displaying a large variation between the average normalized PSDs of the two classes were noted, and the bandpowers within these frequency ranges were added as features. This produced a variable number of PSD features for each classifier. Some of the user-labeled data in the COUGHVID dataset contains user metadata information, such as their reported age, gender, and presence of respiratory disorders. In order to provide the model with some user-specific information to assist in classification, the binary gender value was added to the feature set. In case no gender information was provided, a gender identification model was developed using the ML model optimization procedure described in Section \ref{sec:ml_opt}. The model was trained using the training data subset containing gender labels, and resulted in a classifier with an area under the receiver operating characteristic curve (AUC) of 0.8 on the testing dataset described in Section \ref{sec:testing}. This classifier was used to assign a gender of ``male" or ``female" to any cough in the dataset for which this value was not provided. \subsection{Model Comparison and Optimization} \label{sec:ml_opt} \iffalse \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/ml_pipeline_v2.png} \caption{ML model and training pipeline used to develop each classifier.} \label{fig:ml_pipeline} \end{figure} \fi For each classification task described in Section \ref{sec:coughvid}, a ML model was trained to distinguish COVID-19 from healthy coughs based on the feature vectors of the training dataset. \iffalse The overall ML model selection and optimization pipeline is illustrated in Fig. 
\ref{fig:ml_pipeline}. \fi Prior to optimization, the features were standardized by removing the mean and scaling to unit variance. Next, we compared the efficacy of seven different state-of-the-art binary classification ML algorithms: Logistic Regression (LR), K Nearest Neighbors (KNN), Decision Tree Classifier (DTC), Gaussian Naive Bayes (GNB), Random Forests (RF), eXtreme Gradient Boosting (XGB), and Linear Discriminant Analysis (LDA), most of which were implemented in the Python scikit-learn library \cite{noauthor_scikit-learn_nodate}. To ensure a fair comparison between the different algorithms, the hyperparameters of each model were tuned simultaneously using Tree-structured Parzen Estimates (TPE) \cite{NIPS2011_4443}. The objective of the TPE procedure was to find the combination of hyperparameters that produced the highest mean area under the receiver operating characteristic curve (AUC) across 5 cross-validation (CV) folds. The utilized CV procedure was a 5-fold GroupShuffleSplit \cite{noauthor_sklearnmodel_selectiongroupshufflesplit_nodate}; in each CV fold, 20\% of the recordings were randomly selected and used for validation, and the remaining recordings were used for training. The segmented coughs that comprised these recordings were correspondingly assigned to training or validation. This ensured that no coughs originating from the same recording were included in both the training and validation sets of each fold, thereby maintaining the generalizability of our results to unseen cough recordings. As shown in Table \ref{tab:label_counts}, there is a significant class imbalance in each of the classification tasks. This issue was addressed using the Synthetic Minority Over-Sampling Technique (SMOTE) \cite{chawla_smote_2002}, which was employed to generate synthetic training samples from linear combinations of the minority class in the training dataset. 
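The recording-grouped cross-validation described above can be sketched with scikit-learn's \texttt{GroupShuffleSplit}; the toy data, and the note on where SMOTE would be applied, are illustrative:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy stand-ins: 10 recordings, each contributing 3 segmented coughs
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))          # one feature row per cough
y = rng.integers(0, 2, size=30)       # COVID-19 (1) vs. healthy (0)
groups = np.repeat(np.arange(10), 3)  # recording ID of each cough

cv = GroupShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
for train_idx, val_idx in cv.split(X, y, groups=groups):
    # No recording contributes coughs to both sides of the split
    assert not set(groups[train_idx]) & set(groups[val_idx])
    # ... oversample (X[train_idx], y[train_idx]) with SMOTE, then fit ...
```

Because `test_size` applies to the groups, 20\% of the \emph{recordings}, not of the individual coughs, land in each validation fold.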
This procedure produced a balanced sample of COVID-19 and healthy labeled training coughs by creating synthetic samples of the minority class of the training data within each CV fold. Following TPE, the final mean and standard deviation AUC scores of all of the optimized models were analyzed. The model with the highest mean AUC was chosen, and its learning curve was analyzed to determine if the model was underfitting or overfitting, and whether or not the results converged to a consistent performance with the amount of data available. In the case of overfitting, Recursive Feature Elimination with Cross-Validation (RFECV) was performed on the optimized model to recursively remove the weakest features of the model. This technique has the potential to reduce the variance of the model through the elimination of weak features, but risks increasing the bias of the model by potentially eliminating important features \cite{Munson2009OnBagging}. Finally, the same TPE procedure was used to re-optimize the hyperparameters of the model with a reduced feature set. An advantage of cough segmentation is that it enables aggregation of the classifier outputs of coughs originating from the same recording, which potentially enhances the accuracy of the classifier. Each recording was segmented into $N$ cough sounds, and each cough was processed separately by the trained classifier. This resulted in a series of classifier output probabilities $[p_1, p_2, ... , p_N]$, corresponding to the probability that each cough signal is COVID-19 positive. Since this diagnosis cannot change from one cough to the next, the probabilities can be combined to form one classifier output per recording, $p_{total}$. 
We employed two different aggregation techniques, both based on the logit score of each cough: \subsubsection{Logit Mean} \begin{equation} p_{total} = \frac{1}{N} \sum_{i=1}^{N} \log\left(\frac{p_i}{1-p_i}\right) \label{eq:logit_mean} \end{equation} \subsubsection{Logit Median} \begin{equation} p_{total} = \mathrm{Median}\left[ \log\left(\frac{p_i}{1-p_i}\right) \right] \label{eq:logit_median} \end{equation} Once the optimized model was selected, one final CV split was generated to form a training and validation dataset. The model was trained on this reduced training dataset, and the ROC curve was plotted using both of the logit aggregation methods. The aggregation method with the highest AUC value for the validation set was selected. Furthermore, the optimal classifier decision threshold was determined for computing further accuracy metrics by selecting the aggregated logit threshold with the highest geometric mean of the model's sensitivity and specificity. These final hyperparameters were noted for use in testing, and the model was re-trained using the full training dataset. The final model was tested on the private, unseen COUGHVID test set, as described in Section \ref{sec:testing}. \subsection{Semi-Supervised Learning (SSL)} \label{sec:ssl} Instead of relying solely on the potentially noisy user labels or the often contradictory expert labels described in Section \ref{sec:coughvid}, an SSL approach was used to overcome the issues of label inconsistency and ambiguity by identifying a subset of consistent training data with a high probability of belonging to COVID-19-positive or healthy subjects. At a high level, this method aims to distill the knowledge of each expert onto samples that the expert did not annotate, similarly to what is done in the state-of-the-art Pseudo-Label method \cite{lee2013pseudo}.
Then, the agreement between the experts' models is used to identify a set of recordings with high label confidence, similarly to previous work on SSL applied to medical datasets with inconsistent labels \cite{guan_who_2018}. This is similar to the cross-voting methodology utilized by Li et al. \cite{li_semi-supervised_2021}, except that instead of randomly partitioning the data to train each model, the data is divided based on the expert that annotated it, thereby modeling each expert's medical expertise using ML. \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{figures/SSL_diagram.png} \caption{The SSL approach consists of 1) training three separate ML models based on each expert's labels, 2) using the models to classify the unlabeled samples, 3) training a new classifier with the samples exhibiting a significant agreement between the user labels and expert models, and 4) testing the final model.} \label{Fig:ssl_pipeline} \end{figure*} The semi-supervised learning methodology is illustrated in Fig. \ref{Fig:ssl_pipeline}. First, three distinct ML models were trained based on each expert's COVID-19 versus healthy cough labels and optimized based on the procedure in Section \ref{sec:ml_opt}. Each expert model aggregated the COVID-19 detection probabilities for the coughs of a given recording using the mean logit score in Equation \ref{eq:logit_mean}. Then, the optimal classification threshold of each model was applied on these scores to produce a binary COVID-19 or healthy label for each recording. This procedure resulted in three to four labels per recording: three labels from the expert models, and one user label for the recordings for which this information was provided. In the event that an expert labeled a given recording, the expert's original COVID-19 or healthy diagnosis was maintained rather than the output of the model corresponding to that expert. 
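A sketch of this per-recording label-propagation step, assuming trained scikit-learn-style expert models and per-model decision thresholds on the aggregated logit (the helper names are illustrative):

```python
import numpy as np

def recording_label(model, cough_features, threshold=0.0):
    """Aggregate one model's per-cough COVID-19 probabilities into a single
    binary label per recording via the mean logit (Eq. \\ref{eq:logit_mean})."""
    p = np.clip(model.predict_proba(cough_features)[:, 1], 1e-6, 1 - 1e-6)
    mean_logit = np.mean(np.log(p / (1 - p)))
    return int(mean_logit > threshold)

def propagate(expert_models, recordings, thresholds):
    """One label per (recording, expert model); where an expert's original
    annotation exists, it would override the corresponding model output."""
    return [[recording_label(m, rec, th)
             for m, th in zip(expert_models, thresholds)]
            for rec in recordings]
```

The clipping guards the logit against saturated probabilities of exactly 0 or 1.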
Next, a subset of high-confidence samples was identified by comparing the agreement between the expert models and the user labels. The recordings with a high degree of agreement for being either COVID-19 positive or healthy were used to train one final classifier, and the rest were discarded. In order to select the final dataset, three different agreement schemes were tested and assessed in terms of database size and class separation: \begin{enumerate} \item \textit{Universal Agreement:} All three expert models have the same label as the user label. This scheme limits the analysis to only the user-labeled datapoints. \item \textit{Expert Agreement:} All three expert models have the same label. This scheme bypasses the user label and can thus be applied to unlabeled samples in the dataset. \item \textit{Majority Agreement:} Either all three expert models have the same label, or two expert models have the same label as the user. This is the least conservative scheme, as it allows one disagreement or a missing user label. \end{enumerate} \subsection{Model Evaluation} \label{sec:testing} Once all of the models described in Section \ref{sec:ml_opt} were trained and optimized, they were tested on the COUGHVID private test set to determine their generalization capabilities on unseen recordings \cite{orlandic_coughvid_2021}. This is a set of 625 recordings that have been labeled by at least one expert, and most recordings contain a user status label. The success metric used is the AUC, as it is insensitive to class imbalance. For each classification task, the training data labeling scheme was also used for testing. For example, when a model was trained using the annotations of Expert 2, it was tested only on the subset of testing data that had been labeled by Expert 2.
In the case of the semi-supervised learning approach, the expert model label propagation procedure described in Section \ref{sec:ssl} was also performed on the testing set, and the final testing samples were identified using the same agreement scheme. Although the labels of the testing data may change between classification tasks, all data is drawn from the same set of recordings. Once the various success metrics of the classifiers were computed, we assessed the most important features contributing to each classifier's outcome using the Shapley additive explanation (SHAP) values. These are measures of the relative importance of each feature, indicating which feature domains and specific measures had the greatest influence on the model's decision \cite{Erikstrumbelj2010AnTheory}. \section{Results} \subsection{Semi-Supervised Learning Agreement Scheme Selection} First, we evaluate which agreement scheme among the expert models and user labels, described in Section \ref{sec:ssl}, strikes the optimal trade-off between training dataset coverage and label consistency. The three schemes were applied on the entire database and the number of remaining samples, both recordings and segmented coughs, of each class is reported in Table \ref{tab:agreement_ssl}. This analysis provides an idea of how much training and testing data is maintained, as insufficient data may result in significant overfitting in the final model. Furthermore, the features described in Section \ref{section:feature_extraction} were computed for all of the segmented cough signals in each scheme to assess how the agreement scheme affects the class separation, which is used as a proxy measure of the label consistency across expert models. To quantify this class separation, the Jensen-Shannon divergence of each feature distribution in the COVID-19 and healthy cough classes was computed and averaged across all of the computed features of the training data. 
This metric ranges from 0 to 1, with higher values corresponding to a larger class separation. These results are displayed in Table \ref{tab:agreement_ssl}, along with the same metrics computed on the user-labeled data subset. \begin{table} \centering \caption{Agreement Scheme Dataset Coverage} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \begin{tabular}[c]{@{}l@{}}Label\\Scheme\end{tabular} & \begin{tabular}[c]{@{}l@{}}Training\\Recs.\\(+)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Training\\Coughs\\(+)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Testing\\Recs.\\(+)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Testing\\Coughs\\(+)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Jensen-Shannon\\Divergence\end{tabular} \\ \hline User & \begin{tabular}[c]{@{}l@{}}10,850\\(720)\end{tabular} & \begin{tabular}[c]{@{}l@{}}25,227\\(1,716)\end{tabular} & \begin{tabular}[c]{@{}l@{}}287\\(163)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1,098\\(637)\end{tabular} & 0.00877 \\ \hline \begin{tabular}[c]{@{}l@{}}SSL \\Universal\end{tabular} & \begin{tabular}[c]{@{}l@{}}2325\\(14)\end{tabular} & \begin{tabular}[c]{@{}l@{}}5515\\(45)\end{tabular} & \begin{tabular}[c]{@{}l@{}}28\\(2)\end{tabular} & \begin{tabular}[c]{@{}l@{}}104\\(10)\end{tabular} & 0.0954 \\ \hline \begin{tabular}[c]{@{}l@{}}SSL \\Expert\end{tabular} & \begin{tabular}[c]{@{}l@{}}4128\\(98)\end{tabular} & \begin{tabular}[c]{@{}l@{}}9583\\(295)\end{tabular} & \begin{tabular}[c]{@{}l@{}}141\\(12)\end{tabular} & \begin{tabular}[c]{@{}l@{}}501\\(62)\end{tabular} & 0.0519 \\ \hline \begin{tabular}[c]{@{}l@{}}SSL \\Majority\end{tabular} & \begin{tabular}[c]{@{}l@{}}8331\\(285)\end{tabular} & \begin{tabular}[c]{@{}l@{}}20337\\(848)\end{tabular} & \begin{tabular}[c]{@{}l@{}}240\\(53)\end{tabular} & \begin{tabular}[c]{@{}l@{}}876\\(239)\end{tabular} & 0.0284 \\ \hline \end{tabular} \label{tab:agreement_ssl} \end{table} As Table \ref{tab:agreement_ssl} shows, in the universal agreement scheme, there were only 14 COVID-19 positive recordings 
remaining in the training dataset. This amount is insufficient for a model to generalize. As expected, the majority agreement scheme produces the largest number of training samples. Intuitively, the Jensen-Shannon divergence decreases as the agreement scheme gets less conservative, meaning that the universal agreement scheme exhibits the largest class separation across features while the majority scheme has less pronounced differences between features. However, this increase in class separation comes at the expense of a decrease in dataset coverage, so the scheme that conserves the most data is preferred. This decision is in line with the findings of Zhu et al., who noted that SSL schemes prioritizing a larger initial dataset with moderately consistent labels performed better than small datasets with very reliable labels \cite{zhu_speech_2021}. The majority agreement scheme was selected to identify the final COVID-19 and healthy cough samples. While this scheme has the smallest class separation of the three semi-supervised learning schemes, its Jensen-Shannon divergence is still more than three times higher than that of the user-labeled scheme. Furthermore, the number of training samples used in this agreement scheme is only 23\% smaller than that of the user-labeled scheme, meaning that the increase in class separability does not sacrifice much of the data coverage. The percentage of COVID-19-labeled coughs in the majority agreement scheme is 3.3\%, which is lower than the 6.2\% in the user-labeled dataset. However, this class imbalance is handled in training by applying the SMOTE method described in Section \ref{sec:ml_opt}. \subsection{Intra-Class Consistency Analysis} \begin{table} \centering \caption{Final Model Selection} \begin{tabular}{|l|l|l|} \hline Label Type & Model Used & Hyperparameters \\ \hline User & Linear Discriminant Analysis & None \\ \hline Exp.
1 & Logistic Regression & \begin{tabular}[c]{@{}l@{}}C=18.31,\\class\_weight=None,\\solver=`newton-cg'\end{tabular} \\ \hline Exp. 2 & Logistic Regression & \begin{tabular}[c]{@{}l@{}}C=0.01038,\\class\_weight=`balanced',\\solver=`newton-cg'\end{tabular} \\ \hline Exp. 4 & Logistic Regression & \begin{tabular}[c]{@{}l@{}}C=0.3306,\\class\_weight=None,\\solver=`lbfgs'\end{tabular} \\ \hline SSL & Logistic Regression & \begin{tabular}[c]{@{}l@{}}C=0.01009,\\class\_weight=`balanced',\\solver=`newton-cg'\end{tabular} \\ \hline \end{tabular} \label{tab:model_opt_results} \end{table} Once all three expert models were trained and optimized using the procedure in Section \ref{sec:ml_opt}, these labels were propagated onto both the training and testing datasets. By selecting the subset of recordings for which the majority of labels were in agreement, we expanded the expert knowledge, combined with user self-report labels, to identify training and testing samples that had a high probability of having correct labels. The final optimized models evaluated in this work are displayed in Table \ref{tab:model_opt_results}, complete with their respective hyperparameters selected through TPE. To assess the difference in audio properties of coughs labeled as COVID-19 and healthy in this new dataset compared to those of the user labels, the average normalized PSD curves of cough signals belonging to each class are plotted in Figures \ref{fig:psd}a and \ref{fig:psd}b.
\begin{figure} \centering \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{figures/user_classification_psd.pdf} \\[\abovecaptionskip] \small (a) User labeling scheme \end{tabular} \vspace{\floatsep} \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{figures/ssl_classification_psd.pdf} \\[\abovecaptionskip] \small (b) SSL labeling scheme \end{tabular} \caption{Average normalized PSD of all cough signals in the training dataset belonging to each class according to the user labels and SSL labels.}\label{fig:psd} \end{figure} The figures show a solid line, indicating the average PSD, as well as the 95\% confidence interval across all segmented cough audio samples of a given class in each labeling scheme. Fig. \ref{fig:psd}a shows very few differences between the spectra of the user-labeled COVID-19 and healthy coughs, with a slight variation in the 400-550 Hz and 1000-1500 Hz ranges. In comparison, the healthy-cough spectrum in Fig. \ref{fig:psd}b resembles that of the user-labeled scheme, but the differences between the COVID-19 and healthy coughs are far more pronounced. The bandpowers of the COVID-19 coughs are significantly higher in the 400-550 Hz and 1000-1500 Hz ranges than those of healthy coughs, with p-values of $1.4 \times 10^{-36}$ and $1.2 \times 10^{-64}$, respectively. This analysis highlights the substantial difference in spectral features of COVID-19 and healthy coughs identified through the semi-supervised learning approach. It is also consistent with the findings of Table \ref{tab:agreement_ssl}, which shows that on average, the chosen SSL dataset exhibits more than three times the class separability of the user-labeled data in terms of the average Jensen-Shannon divergence across all extracted audio features. To expand on the feature analysis, the top five most important features for each classifier, as determined by their SHAP values, are displayed in Table \ref{tab:shap}.
\begin{table} \centering \caption{SHAP Feature Ranking across Classifiers} \begin{tabular}{|l|l|l|l|l|l|} \hline \begin{tabular}[c]{@{}l@{}}Feature\\Ranking\\(SHAP)\end{tabular} & User & Expert 1 & Expert 2 & Expert 4 & SSL \\ \hline 1 & \begin{tabular}[c]{@{}l@{}}EEPD\\350-400\end{tabular} & \begin{tabular}[c]{@{}l@{}}Spectral\\Centroid\end{tabular} & Gender & \begin{tabular}[c]{@{}l@{}}Crest\\Factor\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC~\\Std. 0\end{tabular} \\ \hline 2 & \begin{tabular}[c]{@{}l@{}}EEPD\\600-650\end{tabular} & \begin{tabular}[c]{@{}l@{}}RMS\\Power\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Mean 7\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Mean 0\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Mean 1\end{tabular} \\ \hline 3 & \begin{tabular}[c]{@{}l@{}}RMS\\Power\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Std. 0\end{tabular} & \begin{tabular}[c]{@{}l@{}}PSD\\550-800\end{tabular} & \begin{tabular}[c]{@{}l@{}}Spectral\\Slope\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Mean 9\end{tabular} \\ \hline 4 & \begin{tabular}[c]{@{}l@{}}EEPD\\900-950\end{tabular} & \begin{tabular}[c]{@{}l@{}}Spectral\\Spread\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Std. 5\end{tabular} & \begin{tabular}[c]{@{}l@{}}Spectral\\Rolloff\end{tabular} & Gender \\ \hline 5 & \begin{tabular}[c]{@{}l@{}}Dominant\\Frequency\end{tabular} & \begin{tabular}[c]{@{}l@{}}Spectral\\Skewness\end{tabular} & \begin{tabular}[c]{@{}l@{}}EEPD~\\400-450\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Mean 9\end{tabular} & \begin{tabular}[c]{@{}l@{}}MFCC\\Mean 7\end{tabular} \\ \hline \end{tabular} \label{tab:shap} \end{table} When we analyze the three expert models, it is clear that there are few features in common between the classifiers and they each weigh features of different domains (i.e., time, frequency, and Mel frequency) with varying importance. 
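The per-classifier rankings in Table \ref{tab:shap} follow the usual global SHAP importance measure, which can be sketched as below; the matrix of SHAP values is assumed to be precomputed, e.g. with the `shap` package's explainers.

```python
import numpy as np

def shap_feature_ranking(shap_values, feature_names, top_k=5):
    """Rank features by mean absolute SHAP value, the conventional global
    importance score; `shap_values` is a (n_samples, n_features) matrix."""
    importance = np.abs(shap_values).mean(axis=0)
    order = np.argsort(importance)[::-1][:top_k]
    return [feature_names[i] for i in order]
```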
The semi-supervised learning classifier, on the other hand, has features in common with several expert models (MFCC standard deviation 0, MFCC mean 7, and gender). Furthermore, the majority of its top features are in the Mel frequency domain, which is meant to model how the human auditory system processes sound signals. \subsection{Open-Sourced SSL Dataset} In order to contribute to further research in the field of COVID-19 cough sound diagnosis, we have added the training labels obtained through our SSL majority agreement scheme to the latest version of the \href{https://zenodo.org/record/7024894#.YwjMAXZByUk}{COUGHVID dataset public Zenodo repository}. This version has been expanded to include all of the crowdsourced recordings obtained through October 2021, whereas the original dataset only contained recordings uploaded through December 2020. These new labels can be found in the newly added \texttt{status\_SSL} column of the \texttt{metadata\_compiled} CSV file. The new SSL scheme provides training labels for 1,018 recordings that were previously unlabeled by users or experts, which demonstrates the utility of SSL in exploiting data that had previously been unusable. Furthermore, there are 581 recordings that the users labeled with the ambiguous ``symptomatic" label, but for which the SSL model provides a ``COVID-19" or ``healthy" label. A mere 32 of these coughs were labeled by the SSL model as COVID-19 positive, which is plausible considering the COVID-19 infection rates during the recording period. Users of the COUGHVID dataset can use these new labels and the corresponding data samples to augment their COVID-19 cough classification models with highly consistent training data. The same SSL label expansion procedure was conducted for the private testing dataset described in Section \ref{sec:testing}, so users are welcome to test their models against these labels as ground truth, but must acknowledge that these labels are not confirmed by RT-PCR tests.
\subsection{ML Model Evaluation} \begin{table} \centering \caption{Model Testing Results} \begin{tabular}{|l|l|l|l|} \hline Model & CV AUC & \begin{tabular}[c]{@{}l@{}}Test AUC\\(Not Agg.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Test AUC\\(Agg.)\end{tabular} \\ \hline User & 0.591 & 0.564 & 0.562 \\ \hline Exp. 1 & 0.653 & 0.652 & 0.681 \\ \hline Exp. 2 & 0.669 & 0.663 & 0.743 \\ \hline Exp. 4 & 0.644 & 0.561 & 0.593 \\ \hline SSL & 0.883 & 0.763 & 0.797 \\ \hline \end{tabular} \label{tab:final_results} \end{table} To demonstrate how the SSL dataset can be used to train cough classification ML models, one final model was trained on the SSL labels using the procedure in Section \ref{sec:ml_opt}. The final testing results of each model are displayed in Table \ref{tab:final_results}, which reports the AUC obtained during cross-validation, non-aggregated testing (i.e., testing on every individual cough sound), and aggregated testing on each recording using one of the formulas in Equations \ref{eq:logit_mean} and \ref{eq:logit_median}. We observe an average 5.26\% increase in AUC between non-aggregated and aggregated testing. This implies that testing each cough separately and combining the results for each recording enhances the model's performance. Aggregating the probabilities of each cough sound in a recording may exploit the correlations between the coughs and diminish the effects of outlier cough sounds, thus providing a more robust classification than predicting each cough sound separately. The model trained on self-reported user data exhibited the worst performance. Reaching only a testing AUC of 0.562, it was scarcely better than a random classifier. We observe a high variance in the success of the expert models, with Expert 2 having the highest AUC and Expert 4 the lowest.
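Per-recording aggregation can be sketched as follows; averaging in logit space is an assumed rendering of the mean and median aggregation formulas referenced above, not their exact definitions.

```python
import math

def logit(p, eps=1e-9):
    p = min(max(p, eps), 1.0 - eps)  # clip to avoid infinite logits
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def aggregate_recording(cough_probs, how="mean"):
    """Fuse the per-cough COVID-19 probabilities of one recording into a
    single score by averaging in logit space (mean or median variant)."""
    logits = sorted(logit(p) for p in cough_probs)
    if how == "median":
        n = len(logits)
        mid = 0.5 * (logits[n // 2] + logits[(n - 1) // 2])
        return sigmoid(mid)
    return sigmoid(sum(logits) / len(logits))
```

Because logits are unbounded, a single confident cough influences the mean more than the median, which is one reason the two variants can behave differently on outlier coughs.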
The AUC of the semi-supervised classification method is, on average, 15.6\% higher than that of the expert models, and 29.5\% higher than that of the user model. The final SSL model utilized 32 of the available 66 features, which were selected through RFECV. The learning curve of the final SSL model is displayed in Fig. \ref{fig:learning_curve}, which shows the effect of varying the training data size (in terms of the number of segmented coughs) on the training and validation accuracy. The solid line depicts the mean scores across the five CV folds, and the shading around the line indicates one standard deviation from the mean. We can see that the validation accuracy converges at around 4,000 training coughs, indicating that the model has sufficient training samples to gain insights from the features. Furthermore, the relatively small 2\% gap between the training and validation scores indicates that the variance of the model is low, meaning that it is not over-fitting on the training samples. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figures/SSL_Expert_learning_curve_roc_auc_final.pdf} \caption{Learning curve of the final optimized SSL classifier displaying the training and cross-validation accuracy using varying sizes of the training dataset.} \label{fig:learning_curve} \end{figure} The ROC curve of the semi-supervised learning classifier is displayed in Fig. \ref{fig:roc}. The model had an AUC of 0.797 on the private testing set. To evaluate the classification accuracy of such a model, we compare its sensitivity and specificity to those of commonly-used at-home COVID-19 tests. The Direct Antigen Rapid Test (DART) for COVID-19 screening was reported to have a sensitivity of 78.9\% and a specificity of 97.1\% within 0 to 12 days from symptom onset \cite{harmon_validation_2021}. At a comparable sensitivity, our classifier exhibits a specificity of 65.8\%, which is a 32.8\% decrease from that of the DART tests.
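Selecting the operating point for this comparison amounts to sweeping the decision threshold along the ROC curve; a minimal sketch, with the function name and the exhaustive threshold sweep as assumptions:

```python
def specificity_at_sensitivity(y_true, scores, target_sens=0.789):
    """Sweep decision thresholds and return the best specificity whose
    sensitivity still reaches the target (here DART's reported 78.9%)."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    best_spec = 0.0
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= t)
        tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < t)
        if tp / pos >= target_sens:
            best_spec = max(best_spec, tn / neg)
    return best_spec
```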
This decrease in specificity could be justified by the inexpensive, ubiquitous, and non-invasive nature of an audio-based screening tool versus the traditional nasal swab. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figures/roc_with_DART.pdf} \caption{The final ROC curve of the semi-supervised classifier on the aggregated testing set. At the red point, the model achieves a sensitivity of 79.2\%, which is comparable to that of DART COVID-19 tests, and a specificity of 65.8\%.} \label{fig:roc} \end{figure} \section{Discussion} By integrating the knowledge of three medical experts with the user labels through the Pseudo-Label-based semi-supervised learning approach, we identified a subset of cough recordings that had a high probability of belonging to each class. We see from the averaged PSD curves of COVID-19 and healthy coughs in Fig. \ref{fig:psd}b that the majority-voting approach among the expert pseudo-labels successfully identified two classes of coughs with significantly different spectral characteristics and a high class separation in the extracted audio features. As each expert model and the user labels aimed to separate COVID-19 from healthy coughs, it can be postulated that these spectral characteristics are present in the underlying distributions of the two classes of cough sounds. Furthermore, these spectral differences were much less pronounced when the same analysis was performed for the user-labeled data in Fig. \ref{fig:psd}a. These figures illustrate the fact that the class separation was over three times higher in the SSL training data than in the user-labeled data in terms of the Jensen-Shannon divergence of the feature distributions. This increase in class separation did not come at a significant cost to the data coverage, as the training data size only decreased by 23\%.
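The separability measure used throughout is the Jensen-Shannon divergence between per-class feature histograms; a minimal sketch (base-2 variant, with an assumed smoothing constant):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2, so bounded by 1) between two
    normalized feature histograms, one per class."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Averaging this quantity across the histograms of all extracted audio features gives a single class-separability score for a labeling scheme.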
An analysis of the most important features of the expert and semi-supervised classifiers in Table \ref{tab:shap} revealed that there were few important features common among all expert classifiers, which may be reflective of the lack of expert agreement observed in \cite{orlandic_coughvid_2021}. This table also showed that the semi-supervised approach learned which features were important from most of the expert models and weighed these features heavily in its own classification outcome. Furthermore, the fact that the semi-supervised model relies almost entirely on MFCC features might imply that the classifier is learning to model the human auditory system, since these features model how humans perceive sound. Finally, an analysis of the model testing results in Table \ref{tab:final_results} reveals the drawbacks of classic supervised learning approaches, as well as the improved performance of SSL. First, we note that the model that was trained and tested on crowdsourced user labels achieved a testing AUC of only 0.562; such a low score implies that there is significant mislabeling present in the dataset. Next, we note a wide variance in the success of each expert model, with the AUC scores ranging from 0.593 to 0.743. This suggests that the labels tend to be inconsistent across experts, and even within each expert's own labeling. However, despite this label ambiguity, the semi-supervised modeling approach achieved a high final AUC score of 0.797, which was at least 7.3\% higher than that of any of the expert models. These results indicate that integrating the medical knowledge of multiple experts in a semi-supervised fashion results in a more robust, consistent classifier than supervised learning based on any individual expert's labels. Although the proposed SSL method showed improved model performance on the COUGHVID hidden test set, such an approach must be thoroughly validated on PCR-confirmed cough samples.
Furthermore, the algorithm is unable to account for concept drift due to the varying symptomatologies of the different COVID-19 virus variants. The data used in training was obtained through October 2021, whereas the Omicron variant -- which had a significantly lower rate of respiratory symptoms than previous variants -- was first reported in November 2021 \cite{bouzid_comparison_2022}. Therefore, these issues must be addressed in future work to accurately assess the clinical usefulness of such respiratory disorder classification models. \section{Conclusion} Labeling medical data requires significant time and effort from expert annotators. Moreover, this tedious process often leads to inconsistencies due to a lack of agreement between experts. This situation is a key obstacle when confronting new viruses, as happened with COVID-19. In this work, we have shown that using an SSL model development method, it is possible to overcome expert label scarcity and inconsistency -- as well as user mislabeling of crowdsourced medical datasets -- to identify a subset of data points with high class separability. We applied this approach for the first time to the task of COVID-19 screening from cough audio recordings and achieved a performance increase of 15.6\% over fully-supervised expert models and 29.5\% over crowdsourced user labeling. We also achieved a 3x increase in the class separability of the training data identified through semi-supervised learning compared to user labeling, while only decreasing the database coverage by 23\%. Furthermore, this high class separation can be associated with specific spectral components of the audio signals, which makes the models explainable from an acoustic perspective. Using the COUGHVID crowdsourcing dataset, we developed a signal processing pipeline that extracts state-of-the-art audio features from each segmented cough signal.
Then, a model optimization procedure fostering generalizability was employed to develop classifiers based on each expert's labels, as well as the user labels. The expert classifiers were tested on the whole dataset, and recordings for which the majority opinion of the expert models and the user label agreed were retained for training the final classifier, which enabled the usage of unlabeled samples. Such an approach can be employed for other respiratory disorder diagnosis tasks based on partially-labeled audio data, thus minimizing the need for excessive labeling from one expert and rather having several experts label small portions of the dataset. This way, the experts' opinions can be combined to account for labeling inconsistencies. Our proposed semi-supervised learning approach achieved the highest classifier performance of all of the tested models, with a final testing AUC of 0.797, which corresponds to a sensitivity of 79.2\% and a specificity of 65.8\%. Using recursive feature elimination, we selected a subset of 32 of the original 66 features for training the final model, thereby increasing the model's robustness by removing redundant or unnecessary features. SHAP analysis revealed that three of the five most important features of the final semi-supervised classifier were also significant in individual expert classifiers, which implies that the model successfully integrated each expert's medical knowledge. Such an approach is not specific to the COUGHVID dataset and can be used with any medical database that is only partially labeled by a set of experts. While the class separation is significant between the COVID-19 and healthy cough recordings identified through the expert pseudo-label comparison, we cannot be sure that these trends truly apply to COVID-19 positive and healthy coughs unless they are tested on cough recordings from RT-PCR-confirmed COVID-19 positive and negative individuals.
However, in the absence of an extensive, RT-PCR-validated dataset, our proposed approach can be easily used to improve the quality of large, crowdsourced COVID-19 cough databases and identify samples with consistent patterns in the cough recordings of each class. These re-labeled coughs can then be used to augment datasets of medically confirmed cough sounds to enhance the training data size and potentially improve the classification accuracy. \section{Acknowledgements} This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101017915 (DIGI-PREDICT), as well as the DeepHealth Project (GA No. 825111) and the Swiss NSF ML-Edge Project (GA No. 182009). T. Teijeiro is supported by a Maria Zambrano fellowship (MAZAM21/29) from the University of Basque Country and the Spanish Ministry of Universities, funded by the European Union-Next-GenerationEU. \bibliographystyle{IEEEtran}
\section{Introduction} In the past, timekeepers measured the time manually. The time given by a timer was assigned to competitors based on their starting number, and these competitors were then placed in order according to their achieved results and category. Later, manual timers were replaced by timers with automatic time-registers capable of capturing and printing out registered times. However, assigning the times to competitors based on their starting numbers was still done manually. This work could be avoided by using electronic measuring technology which, in addition to registering the time, also enables the registering of competitors' starting numbers. An expansion of RFID (Radio Frequency Identification) technology has helped this measuring technology become less expensive (\cite{web:ChampionChip2010,web:RFID2010}) and accessible to a wider range of users (e.g., sports clubs, organizers of sporting competitions). Moreover, such users are now also able to compete with time-measuring monopolies at smaller competitions. In addition to measuring technology, a flexible computer system is also needed to monitor the results. The proposed computer system enables the monitoring of different sporting competitions using various numbers of measuring devices and measuring points, together with the online recording of events and the writing of results, and offers both efficiency and security. The measuring device is dedicated to the registration of events and is triggered either automatically, when a competitor with an appropriate RFID tag crosses the measuring point that acts as an electromagnetic antenna field, or manually, when an operator presses the suitable button on a personal computer that acts as a timer. The control point is the place where the organizers want to monitor the results. Until now, each control point has required its own measuring device. However, modern electronic measuring devices now allow multiple control points to be handled simultaneously.
Moreover, each registered event can have a different meaning, depending on the situation within which it is generated. Therefore, an event is handled by the measuring system according to those rules that are valid for the control point. As a result, the number of control points (and measuring devices) can be reduced by using more complex measurements. Fortunately, the rules controlling events can be described easily with the use of a domain-specific language (DSL) \cite{Hudak:1996,Mernik:2005}. When using this DSL, measurements at different sporting competitions can be accomplished by an easy pre-configuration of the rules. A DSL is suited to an application domain and has certain advantages over general-purpose languages (GPL) within that specific domain \cite{Mernik:2005}. A GPL is dedicated to writing software over a wide range of application domains. General problems are usually solved using these languages. However, a programmer is necessary for changing the behavior of a program written in a GPL. On the other hand, the advantages of a DSL are reflected in its greater expressive power in a particular domain and, hence, increased productivity~\cite{Kosar:ESE:2011}, ease of use (even for those domain experts who are not programmers), and easier verification and optimization \cite{Mernik:2005}. This article presents a DSL called EasyTime, and its implementation. EasyTime is intended for controlling those agents responsible for recording events from the measuring devices into a database. Therefore, the agents are crucial elements of the proposed measuring system. To the best of the authors' knowledge, there is no comparable DSL for time measurement at sporting events, whilst some DSLs for the performance measurement of computer systems~\cite{Arpaia:2011,Pakin:2007}, as well as for general measurement systems~\cite{Kos:2011}, do indeed already exist. Finally, EasyTime has also been successfully employed in practice.
For instance, it measured times at the World Championship for the double ultra triathlon in 2009~\cite{Fister:2011}, and at a national championship in bicycle time-trials in 2010~\cite{Fister:2011}. The structure of the remainder of this article is as follows. In the second section, the problems that accompany time measurement at sporting competitions are illustrated. Focus is directed primarily at triathlon competitions, because they contain three disciplines that need to be measured, and also because of their lengthy durations. The design of the DSL EasyTime is briefly shown in section three. The implementation of the EasyTime compiler is described in the fourth section, whilst the fifth section explains the execution of a program written in EasyTime. Finally, the article is concluded with a short analysis of the work performed, and a look at future work. This paper extends a previous workshop paper \cite{Fister:2011a} by providing general guidelines on how to transform formal language specifications using denotational semantics into attribute grammars. These guidelines are made concrete using the EasyTime DSL. \section{Measuring Time in Sporting Competitions} In practice, time measurement in sporting competitions can be performed manually (classically or with a computer timer) or automatically (with a measuring device). The computer timer is a program that usually runs on a workstation (personal computer) and measures in real-time. Thereby, the processor clock is exploited, i.e., the rate at which the processor's instructions are interpreted. A computer timer enables the recording of events that are generated by a competitor crossing those measuring points (MP) in line with the measuring device. In that case, however, the event is triggered by an operator pressing the appropriate button on the computer.
The operator generates events in the form of $\langle\#,MP,TIME\rangle$, where $\#$ denotes the starting number of a competitor, $MP$ is the measuring point, and $TIME$ is the number of seconds since 1.1.1970 at 0:0:0 (timestamp). One computer timer represents one measuring point. Today, the measuring device is usually based on RFID technology \cite{Finkenzeller:2010}, where identification is performed using electromagnetic waves within a range of radio frequencies, and consists of the following elements: \begin{itemize} \item readers of RFID tags, \item primary memory, \item LCD monitor, \item numerical keyboard, and \item antenna fields. \end{itemize} Several antenna fields can be connected to the measuring device. One antenna field represents one measuring point. Each competitor generates an event by crossing the antenna field with a passive RFID tag that includes an identification number. This number is unique and differs from the starting number of the competitor. The event from the measuring device is represented in the form of $\langle\#,RFID,MP,TIME\rangle$, where the identification number of the RFID tag is added to the previously mentioned triplet. The measuring devices and the workstations running the computer timer can be connected to a local area network. Communication with the devices is performed by a monitoring program, i.e. an agent, that runs on the database server. This agent communicates with the measuring device via TCP/IP sockets and an appropriate protocol. Usually, the measuring devices support the $Telnet$ protocol, which is character-stream oriented and, therefore, easy to implement. The agent employs the file transfer protocol ($ftp$) to communicate with the computer timer. \subsection{Example: Measuring Time in Triathlons} Special conditions apply to triathlon competitions, where one competition consists of three disciplines. This article, therefore, devotes most of its attention to this problem.
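The two event forms above can be parsed as sketched below; the textual `<...>` serialization and the class layout are assumptions for illustration, since the devices actually emit character streams over their own protocols.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    """One registered event; `rfid` is None for computer-timer events."""
    number: int            # competitor's starting number '#'
    mp: int                # measuring point
    time: int              # seconds since 1.1.1970 at 0:0:0 (timestamp)
    rfid: Optional[int] = None

def parse_event(line):
    """Parse '<#,MP,TIME>' (computer timer) or '<#,RFID,MP,TIME>'
    (measuring device) into an Event."""
    fields = [int(f) for f in line.strip().strip("<>").split(",")]
    if len(fields) == 3:
        number, mp, time = fields
        return Event(number, mp, time)
    number, rfid, mp, time = fields
    return Event(number, mp, time, rfid)
```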
The triathlon competition is performed as follows: first, the athletes swim, then they ride a bicycle, and finally they run. In practice, all these activities are performed consecutively. However, the transition times, i.e. the time that elapses when a competitor shifts from swimming to bicycling, and from bicycling to running, are added to the summary result. There are various types of triathlon competitions that differ according to the lengths of their courses. In order to make things easier, organizers often employ round courses (laps) of shorter lengths instead of one long course. This increases the difficulty of measuring time, because the time for each lap needs to be measured. Measuring time in triathlon competitions can be divided into nine control points (Fig.~\ref{pic:slika_1}). The control point (CP) is a location on the triathlon course where the organizers need to check the measured time, which can be intermediate or final. When dealing with a double triathlon, there are 7.6 km of swimming, 360 km of bicycling, and 84 km of running. Hence the swimming course of 380 meters consists of 20 laps, the bicycling course of 3.4 kilometers contains 105 laps, and the running course of 1.5 kilometers has 55 laps (Fig.~\ref{pic:slika_1}). \begin{figure*}[htb] \vspace{-5mm} \begin{center} \includegraphics[scale=0.9]{Slika3a.pdf} \caption{Definition of control points in the triathlon} \label{pic:slika_1} \end{center} \vspace{-5mm} \end{figure*} Therefore, the final result for each competitor in a triathlon competition (CP8) consists of five final results: the swimming time SWIM (CP2-CP0), the time for the first transition TA1 (CP3-CP2), the time spent bicycling BIKE (CP5-CP3), the time for the second transition TA2 (CP6-CP5), and the time spent running RUN (CP8-CP6), as well as three intermediate results: the intermediate time for swimming (CP1), the intermediate time for bicycling (CP4), and the intermediate time for running (CP7).
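The arithmetic relating the control points to the final results can be sketched directly from these definitions; the dictionary representation of timestamps is an assumption for illustration.

```python
def triathlon_results(cp):
    """Derive the five final results from the control-point timestamps of
    one competitor; `cp` maps control point number -> timestamp."""
    return {
        "SWIM":  cp[2] - cp[0],   # swimming time (CP2-CP0)
        "TA1":   cp[3] - cp[2],   # first transition (CP3-CP2)
        "BIKE":  cp[5] - cp[3],   # bicycling time (CP5-CP3)
        "TA2":   cp[6] - cp[5],   # second transition (CP6-CP5)
        "RUN":   cp[8] - cp[6],   # running time (CP8-CP6)
        "TOTAL": cp[8] - cp[0],   # overall result (CP8)
    }
```

By construction, the five partial results sum to the overall result, since the transition times are included in the summary.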
At the intermediate control points, the current time INTER\_x and the number of remaining laps LAPS\_x are measured, where $x=\{1,2,3\}$ denotes the appropriate discipline (1=SWIM, 2=BIKE and 3=RUN). The DSL EasyTime was developed in order to achieve this goal, and has been employed in practice by conducting measurements at the World Championship in the Double Triathlon in 2009. Note that the measurements were realized according to Fig.~\ref{pic:slika_1}. The next sections present the design, implementation, and operation of EasyTime. \section{The Design of the EasyTime Domain-Specific Language} Typically, the development of a DSL consists of the following phases~\cite{Mernik:2005}: \begin{itemize} \item a domain analysis, \item a definition of an abstract syntax, \item a definition of a concrete syntax, \item a definition of formal semantics, and \item an implementation of the DSL. \end{itemize} Domain analysis provides an analysis of the application domain, i.e. measuring time in sporting competitions. The results of this analysis define those concepts of EasyTime that are typically represented within a feature diagram~\cite{Deursen:2002,Stuikys:2009}. The feature diagram also describes dependencies between the concepts of the DSL. Thus, each concept can be broken down into features and sub-features. In the case of EasyTime, the concept $race$ consists of the sub-features: $events$ (e.g., $swimming$, $bicycling$, and $running$), $control\ points$, $measuring\ time$, $transition$ $area$, and $agents$. Each $control\ point$ is described by its $starting$ and $finish$ line and at least one $lap$. In addition, the feature $transition\ area$ can be introduced as the difference between the finish and start times. Both $updating\ time$ and $decrementing\ laps$ are sub-features of $measuring\ time$. However, an $agent$ is needed for the processing of events received from the measuring device. It can act either $automatically$ or $manually$.
Note that during domain analysis not all of the identified concepts are useful for solving the actual problem. Hence, the identified concepts can be further classified into~\cite{Mauw:2004}: \begin{itemize} \item irrelevant concepts, those which are irrelevant to the actual problem; \item variable concepts, those which actually need to be described in the DSL program; and \item fixed concepts, those which can be built into the DSL execution environment. \end{itemize} Domain analysis identifies several variable and fixed concepts within the application domain that need to be mapped into EasyTime syntax and semantics~\cite{Mernik:2005}. First, the abstract syntax is defined as a context-free grammar. Each variable concept obtained from the domain analysis is mapped to a non-terminal in the context-free grammar; additionally, some new non-terminal and terminal symbols are defined. The translations of the EasyTime domain concepts to non-terminals are presented and explained in Table~\ref{tab:tab1}, whilst the abstract syntax is presented in Table~\ref{tab:X}. Note that the concepts \textit{Events} and \textit{Transition} are irrelevant for solving the actual problem and are not mapped into non-terminal symbols (denoted as \textit{none} in Table~\ref{tab:tab1}). Interestingly, a description of agents and measuring places cannot be found in other DSLs or GPLs, whilst attribute declaration is similar to variable declaration in many other programming languages. Note, however, the distinction that these variables are actually database attributes allocated for every competitor. Some statements, such as assignment, the conditional statement, and the compound statement, can be found in many other programming languages, whilst decrementing attributes and updating attributes are domain-specific constructs.
\begin{table*}[htb] \caption{Translation of the application domain concepts into a context-free grammar} \label{tab:tab1} \vspace{-5mm} \scriptsize \begin{center} \begin{tabular}{ l l l l } \hline Application domain concepts & Non-terminal & Formal sem. & Description \\ \hline Race & P & $\mathcal{CP}$ & Description of agents; control points; measuring \\ & & & places. \\ \hline Events (swimming, cycling, & none & none & Measuring time is independent from the type of an \\ running) & & & event. However, good attribute's identifier in control \\ & & & points description will resemble the type of an event. \\ \hline Transition area times & none & none & Can be computed as difference between events final \\ & & & and starting times. \\ \hline Control points (start, number & D & $\mathcal{D}$ & Description of attributes where start and finish time \\ of laps, finish) & & & will be stored as well as remaining laps. \\ \hline Measuring places (update time, & M & $\mathcal{CM}$ & Measuring place id; agent id, which will control this \\ decrement lap) & & & measuring place; specific actions (presented \\ & & & with new non-terminal S) which will be performed \\ & & & at this measuring place (e.g., decrement lap). \\ \hline Agents (automatic, manual) & A & $\mathcal{A}$ & Agent id; agent type (automatic, manual); agent sour- \\ & & & ce (file, ip). 
\\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table*} \begin{table}[htb] \caption{The abstract syntax of EasyTime} \label{tab:X} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline $P \in $ \textbf{Pgm} & & $A \in $ \textbf{Adec} \\ $D \in $ \textbf{Dec} & & $M \in $ \textbf{MeasPlace} \\ $S \in $ \textbf{Stm} & & $b \in $ \textbf{Bexp} \\ $a \in $ \textbf{Aexp} & & $n \in $ \textbf{Num} \\ $x \in $ \textbf{Var} & & $file \in $ \textbf{FileSpec} \\ $ip \in $ \textbf{IpAddress} & & \\ & & \\ $P$ & ::= & $A\ D\ M$ \\ $A$ & ::= & $n$ \textbf{manual} $file$ \textbar $\ n$ \textbf{auto} $ip$ \textbar $\ A_{1};A_{2}$ \\ $D$ & ::= & \textbf{var} $x := a$ \textbar $\ D_{1};D_{2}$ \\ $M$ & ::= & \textbf{mp}[$n_{1}$] $\rightarrow$ \textbf{agnt}[$n_{2}]\ S$ \textbar $\ M_{1};M_{2}$ \\ $S$ & ::= & \textbf{dec} $x$ \textbar \ \textbf{upd} $x$ \textbar $\ x := a$ \textbar $\ (b) \rightarrow S$ \textbar $\ S_{1};S_{2}$ \\ $b$ & ::= & \textbf{true} \textbar \ \textbf{false} \textbar $\ a_{1} == a_{2}$ \textbar $\ a_{1} != a_{2}$ \\ $a$ & ::= & $n$ \textbar $\ x$ \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} Although a language designer can proceed after domain analysis with informal or formal design patterns~\cite{Mernik:2005}, the formal design step is preferred, since it can identify problems before the DSL is actually implemented~\cite{Viroli:2011}. Moreover, formal specifications can be implemented automatically by language development systems, thus significantly reducing the implementation effort~\cite{Mernik:2005}. The meaning of the EasyTime language constructs is prescribed during the formal semantics phase. Each language construct, belonging to a syntactic domain, is mapped into an appropriate semantic domain (Table~\ref{tab:Y}) by the semantic functions $\mathcal{CP}$, $\mathcal{A}$, $\mathcal{D}$, $\mathcal{CM}$, $\mathcal{CS}$, $\mathcal{CB}$, and $\mathcal{CA}$ (Table~\ref{tab:tab6}).
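A minimal Python rendering of several equations of the semantic function $\mathcal{CS}$ from Table~\ref{tab:tab6} illustrates the translation into stack-machine code; the tuple encoding of statements is an assumption for illustration, and the agent-dependent \textbf{upd} case is omitted since it requires the agent environment.

```python
def compile_stm(stm):
    """Translate an EasyTime statement into stack-machine code in the
    style of CS: 'dec x' becomes FETCH x:DEC:STORE x, and sequencing
    concatenates the code of both statements."""
    op = stm[0]
    if op == "dec":                      # CS[[dec x]]
        return ["FETCH " + stm[1], "DEC", "STORE " + stm[1]]
    if op == ":=":                       # CS[[x := a]] = CA[[a]]:STORE x
        return compile_aexp(stm[2]) + ["STORE " + stm[1]]
    if op == "seq":                      # CS[[S1;S2]] = CS[[S1]]:CS[[S2]]
        return compile_stm(stm[1]) + compile_stm(stm[2])
    if op == "if":                       # CS[[(b) -> S]] = CB[[b]]:BRANCH(...)
        body = ":".join(compile_stm(stm[2]))
        return compile_bexp(stm[1]) + ["BRANCH(" + body + ",NOOP)"]
    raise ValueError("unknown statement: " + op)

def compile_aexp(a):
    """CA: numerals compile to PUSH n, variables to FETCH x."""
    return ["PUSH " + str(a)] if isinstance(a, int) else ["FETCH " + a]

def compile_bexp(b):
    """CB: note the operand order, CA[[a2]]:CA[[a1]]:EQ/NEQ."""
    if b in ("true", "false"):
        return [b.upper()]
    op, a1, a2 = b
    return compile_aexp(a2) + compile_aexp(a1) + (["EQ"] if op == "==" else ["NEQ"])
```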
\begin{table}[htb] \caption{Semantic domains} \label{tab:Y} \footnotesize \vspace{-5mm} \begin{center} \begin{tabular}{ | l l | } \hline \textbf{Integer}=$\{\ldots -3,-2,-1,0,1,2,3 \ldots\}$ & $n \in$ \textbf{Integer} \\ \textbf{Truth-Value}=$\{true,false\}$ & \\ \textbf{State}=\textbf{Var}$\rightarrow$\textbf{Integer} & $s \in$ \textbf{State} \\ \textbf{AType}=$\{manual,auto\}$ & \\ \textbf{Agents}=\textbf{Integer}$\rightarrow$\textbf{AType}$\ \times\ (FileSpec\ \cup\ IpAddress)$ & $ag \in$\textbf{Agents} \\ \textbf{Runners}=$(Id \times RFID \times LastName \times FirstName)^{*}$ & $r \in $ \textbf{Runners}\\ \textbf{DataBase}=$(Id \times Var_{1} \times Var_{2} \times \ldots \times Var_{n})^{*}$ & $db \in $ \textbf{DataBase}\\ \textbf{Code}=\textbf{String} & $c \in $ \textbf{Code} \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} \begin{table}[!htb] \caption{EasyTime formal semantics} \label{tab:tab6} \footnotesize \vspace{-5mm} \begin{center} \begin{tabular}{ | l l l | } \hline $\mathcal{CP}:\textbf{Pgm} \rightarrow \textbf{Runners}$ & $\rightarrow$ & \textbf{Code} $\times$ \textbf{Integer} $\times$ \textbf{DataBase} \\ $\mathcal{CP} \lsem A\ D\ M \rsem r$ & = & let $s= \mathcal{D} \lsem D \rsem \O$: \\ & & $\>\>\>db = $\textnormal{create\&insertDB}$(s,r)$ \\ & & in $(\mathcal{CM}\lsem M \rsem (\mathcal{A}\lsem A \rsem \O), db)$ \\ & & \\ $\mathcal{A}$~:~\textbf{Adec} $\rightarrow$ \textbf{Agents} & $\rightarrow$ & \textbf{Agents} \\ $\mathcal{A} \lsem n$ \textbf{manual} $ file\rsem ag$ & = & $ag [ n \rightarrow (manual, file) ]$ \\ $\mathcal{A} \lsem n$ \textbf{auto} $ ip\rsem ag$ & = & $ag [ n \rightarrow (auto, ip) ]$ \\ $\mathcal{A} \lsem A_{1};A_{2}\rsem ag$ & = & $\mathcal{A} \lsem A_{2} \rsem (\mathcal{A} \lsem A_{1} \rsem ag)$ \\ & & \\ $\mathcal{D}$~:~\textbf{Dec}$\rightarrow$\textbf{State} & $\rightarrow$ & \textbf{State} \\ $\mathcal{D} \lsem \textbf{var}\ x := a \rsem s$ & = & $s[x \rightarrow a]$ \\ $\mathcal{D} \lsem 
D_{1};D_{2} \rsem s$ & = & $\mathcal{D} \lsem D_{2} \rsem (\mathcal{D} \lsem D_{1} \rsem s)$\\ & & \\ $\mathcal{CM}$~:~\textbf{MeasPlace} $\rightarrow$ \textbf{Agents}&$\rightarrow$&\textbf{Code} $\times$ \textbf{Integer} \\ $\mathcal{CM} \lsem \textbf{mp}[n_{1}] \rightarrow \textbf{agnt}[n_{2}] S \rsem ag$&=&(WAIT $i:\mathcal{CS} \lsem S \rsem (ag, n_{2}), n_{1} )$ \\ $\mathcal{CM} \lsem M_{1}; M_{2} \rsem ag$&=&$\mathcal{CM} \lsem M_{1} \rsem ag: \mathcal{CM} \lsem M_{2} \rsem ag$ \\ & & \\ $\mathcal{CS}$~:~\textbf{Stm}$\rightarrow$ \textbf{Agents} $\times$ \textbf{Integer} & $\rightarrow$ & \textbf{Code} \\ $\mathcal{CS} \lsem$ \textbf{dec} $x \rsem (ag,n)$ & = & FETCH $x$:DEC:STORE $x$ \\ $\mathcal{CS} \lsem$ \textbf{upd} $x \rsem (ag,n)$ & = & FETCH $y$:STORE $x\ $ where \\ & & $y = \left\{\begin{array}{l l} \textnormal{accessfile}(ag(n)\downarrow 2) & \textnormal{if}\ ag(n)\downarrow 1 = manual \\ \textnormal{connect}(ag(n)\downarrow 2) & \textnormal{if}\ ag(n)\downarrow 1 = auto \\ \end{array}\right.$ \\ $\mathcal{CS} \lsem x := a \rsem(ag,n)$ & = & $\mathcal{CA}\lsem a\rsem$:STORE $x$ \\ $\mathcal{CS} \lsem (b)\rightarrow S \rsem(ag,n)$ & = & $\mathcal{CB} \lsem b\rsem$:BRANCH($\mathcal{CS}\lsem S\rsem (ag,n),NOOP$)\\ $\mathcal{CS} \lsem S_{1};S_{2} \rsem(ag,n)$ & = & $\mathcal{CS}\lsem S_{1}\rsem(ag,n):\mathcal{CS}\lsem S_{2}\rsem (ag,n)$ \\ & & \\ $\mathcal{CB}$~:~\textbf{Bexp} & $\rightarrow$ & \textbf{Code} \\ $\mathcal{CB} \lsem \textbf{true} \rsem $ & = & TRUE \\ $\mathcal{CB} \lsem \textbf{false} \rsem $ & = & FALSE \\ $\mathcal{CB} \lsem a_{1}==a_{2} \rsem$ & = & $\mathcal{CA} \lsem a_{2}\rsem:\mathcal{CA} \lsem a_{1}\rsem$:EQ \\ $\mathcal{CB} \lsem a_{1}!=a_{2} \rsem$ & = & $\mathcal{CA} \lsem a_{2}\rsem:\mathcal{CA} \lsem a_{1}\rsem$:NEQ \\ & & \\ $\mathcal{CA}$~:~\textbf{Aexp} & $\rightarrow$ & \textbf{Code} \\ $\mathcal{CA} \lsem n \rsem$ & = & PUSH $n$ \\ $\mathcal{CA}\lsem x \rsem$ & = & FETCH $x$ \\ \hline \end{tabular} \end{center}
\normalsize \vspace{-5mm} \end{table} \begin{algorithm}[htb] \caption{EasyTime program for measuring time in a triathlon competition as illustrated in Fig.~\ref{pic:slika_1}} \label{alg:prog} \scriptsize \begin{algorithmic}[1] \STATE 1 manual "abc.res"; \STATE 2 auto 192.168.225.100; \STATE \STATE var ROUND1 := 20; \STATE var INTER1 := 0; \STATE var SWIM := 0; \STATE var TRANS1 :=0; \STATE var ROUND2 := 105; \STATE var INTER2 :=0; \STATE var BIKE := 0; \STATE var TRANS2 :=0; \STATE var ROUND3 := 55; \STATE var INTER3 := 0; \STATE var RUN := 0; \STATE \STATE mp[1] $\rightarrow$ agnt[1] \{ \STATE \ \ (true) $\rightarrow$ upd SWIM; \STATE \ \ (true) $\rightarrow$ dec ROUND1; \STATE \} \STATE mp[2] $\rightarrow$ agnt[1] \{ \STATE \ \ (true) $\rightarrow$ upd TRANS1; \STATE \} \STATE mp[3] $\rightarrow$ agnt[2] \{ \STATE \ \ (true) $\rightarrow$ upd INTER2; \STATE \ \ (true) $\rightarrow$ dec ROUND2; \STATE \ \ (ROUND2 == 0) $\rightarrow$ upd BIKE; \STATE \} \STATE mp[4] $\rightarrow$ agnt[2] \{ \STATE \ \ (ROUND3 == 55) $\rightarrow$ upd TRANS2; \STATE \ \ (true) $\rightarrow$ upd INTER3; \STATE \ \ (true) $\rightarrow$ dec ROUND3; \STATE \ \ (ROUND3 == 0) $\rightarrow$ upd RUN; \STATE \} \end{algorithmic} \normalsize \end{algorithm} These semantic functions translate EasyTime constructs into instructions of a simple virtual machine. The meaning of the virtual machine instructions has been formally defined using operational semantics (Table~\ref{tab:am}) as the transition of configurations $\langle c, e, db, j \rangle$, where $c$ is a sequence of instructions, $e$ is the evaluation stack used to evaluate arithmetic and boolean expressions, $db$ is the database, and $j$ is the starting number of a competitor. More details of EasyTime syntax and semantics are presented in~\cite{Fister:2011}. This article focuses on the implementation phase, as presented in the next section.
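The transition rules of Table~\ref{tab:am} can be prototyped in a few lines of Java. The sketch below steps a configuration $\langle c, e, db, j \rangle$ through a handful of instructions, with the database modeled as a map from competitor Id to attribute values; names and the textual instruction encoding are our own assumptions, not the actual virtual machine.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the virtual machine of the operational semantics: e is the
// evaluation stack, db the database (Id -> (Var -> value)), j the current
// competitor.  Only a subset of the instructions is shown; illustrative only.
public class MiniVM {
    public Deque<Integer> e = new ArrayDeque<>();
    public Map<Integer, Map<String, Integer>> db;
    public int j;

    public MiniVM(Map<Integer, Map<String, Integer>> db, int j) {
        this.db = db; this.j = j;
    }

    public void step(String instr) {
        String[] p = instr.split(" ");
        switch (p[0]) {
            case "PUSH":  e.push(Integer.parseInt(p[1])); break;
            case "DEC":   e.push(e.pop() - 1); break;          // <DEC:c, z:e> |> <c, (z-1):e>
            case "FETCH": e.push(db.get(j).get(p[1])); break;  // select x from db where Id = j
            case "STORE": db.get(j).put(p[1], e.pop()); break; // update db set x = z where Id = j
            case "WAIT":  j = Integer.parseInt(p[1]); break;   // switch to competitor i
            default: throw new IllegalArgumentException(instr);
        }
    }

    public static void main(String[] args) {
        Map<Integer, Map<String, Integer>> db = new HashMap<>();
        Map<String, Integer> row = new HashMap<>();
        row.put("ROUND1", 20);
        db.put(1, row);
        MiniVM vm = new MiniVM(db, 1);
        // the compiled code of "dec ROUND1": FETCH x : DEC : STORE x
        for (String i : new String[] {"FETCH ROUND1", "DEC", "STORE ROUND1"})
            vm.step(i);
        System.out.println(db.get(1).get("ROUND1")); // prints 19
    }
}
```

Running the code generated for \textbf{dec} $x$ against one competitor row shows how the stack mediates between the database reads and writes.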
\begin{table*}[htb] \caption{The virtual machine specification} \label{tab:am} \begin{center} \vspace{-5mm} \scriptsize \begin{tabular}{ | l l l l | } \hline $\langle \textnormal{PUSH}\ n:c,e,db,j \rangle$ & $\triangleright$ & $\langle c,n:e,db,j \rangle$ & \\ $\langle \textnormal{TRUE}:c,e,db,j \rangle$ & $\triangleright$ & $\langle c,true:e,db,j \rangle$ & \\ $\langle \textnormal{FALSE}:c,e,db,j \rangle$ & $\triangleright$ & $\langle c,false:e,db,j \rangle$ & \\ $\langle \textnormal{EQ}:c,z_{1}:z_{2}:e,db,j \rangle$ & $\triangleright$ & $\langle c,(z_{1}==z_{2}):e,db,j \rangle$ & \textnormal{\ if}\ $z_{1},z_{2} \in \textbf{Int}$ \\ $\langle \textnormal{NEQ}:c,z_{1}:z_{2}:e,db,j \rangle$ & $\triangleright$ & $\langle c,(z_{1}!=z_{2}):e,db,j \rangle$ & \textnormal{\ if}\ $z_{1},z_{2} \in \textbf{Int}$ \\ $\langle \textnormal{DEC}:c,z:e,db,j \rangle$ & $\triangleright$ & $\langle c,(z-1):e,db,j \rangle$ & \textnormal{\ if}\ $z \in \textbf{Int}$ \\ $\langle \textnormal{WAIT}\ i:c,e,db,j \rangle$ & $\triangleright$ & $\langle c,e,db,i \rangle$ & \\ $\langle \textnormal{FETCH}\ x:c,e,db,j \rangle$ & $\triangleright$ & $\langle c,\textnormal{select}\ x\ \textnormal{from}\ db\ \textnormal{where}\ Id=j:e,db,j \rangle$ & \\ $\langle \textnormal{FETCH}\ accessfile(fn):c,e,db,j \rangle$ & $\triangleright$ & $\langle c,time:e,db,j \rangle$ & \\ $\langle \textnormal{FETCH}\ connect(ip):c,e,db,j \rangle$ & $\triangleright$ & $\langle c,time:e,db,j \rangle$ & \\ $\langle \textnormal{STORE}\ x:c,z:e,db,j \rangle$ & $\triangleright$ & $\langle c,e,\textnormal{update}\ db\ \textnormal{set}\ x=z\ \textnormal{where}\ Id=j,j \rangle$ & \textnormal{\ if}\ $z \in \textbf{Int}$ \\ $\langle \textnormal{NOOP}:c,e,db,j \rangle$ & $\triangleright$ & $\langle c,e,db,j \rangle$ & \\ $\langle \textnormal{BRANCH}(c_{1},c_{2}):c,t:e,db,j \rangle$ & $\triangleright$ & $\left\{\begin{matrix} \langle c_{1}:c,e,db,j \rangle \\ \langle c_{2}:c,e,db,j \rangle \end{matrix}\right.$ & $\begin{array}{l l} 
\\ \textnormal{if}\ t=true \\ \textnormal{otherwise} \\ \end{array}$ \\ \hline \end{tabular} \normalsize \vspace{-5mm} \end{center} \end{table*} The sample program written in EasyTime that covers measuring time in the double ultra triathlon is presented in Algorithm~\ref{alg:prog}. In lines 1-2 two agents are defined: agent no. 1 is manual and agent no. 2 is automatic. In lines 4-14 several variables (attributes in the database for each competitor) are defined and initialized appropriately. For example, from Figure~\ref{pic:slika_1} it can be seen that 20 laps are needed for the swimming course and $ROUND1$ is set to 20, 105 laps are needed for the bicycling course and $ROUND2$ is set to 105, and 55 laps are needed for the running course and $ROUND3$ is set to 55. Lines 16-19 define the first measuring place, which is controlled by manual agent no. 1. At this measuring place the intermediate swimming time must be updated in the database ($upd~SWIM$) and the number of laps must be decremented ($dec~ROUND1$). Lines 20-22 define the second measuring place, which is also controlled by manual agent no. 1. At this measuring place only the transition time must be stored in the database ($upd~TRANS1$). Lines 23-27 define the third measuring place, which is controlled by automatic agent no. 2. At this measuring place we must update the intermediate result for bicycling ($upd~INTER2$) and decrement the number of laps ($dec~ROUND2$). If a competitor has finished all 105 required laps ($ROUND2==0$), then the time spent on the bicycle must be stored in the database ($upd~BIKE$). Lines 28-33 define the fourth measuring place, which is also controlled by automatic agent no. 2. At this measuring place we must first check whether a competitor has just started running ($ROUND3==55$). If this is the case, we must record the transition time between bicycling and running ($upd~TRANS2$).
At this measuring place we must also update the intermediate result for running ($upd~INTER3$) and decrement the number of laps ($dec~ROUND3$). If a competitor has finished all 55 required laps ($ROUND3==0$), then the final time must be stored in the database ($upd~RUN$). \section{Implementation of the Domain-Specific Language EasyTime} \subsection {A LISA Compiler-Generator} One of the benefits of formal language specifications is the unique possibility of automatic language implementation. Although some compiler generators accept denotational semantics~\cite{Paulson:1982}, the generated compilers are mostly inefficient. Although many compiler-generators based on attribute grammars~\cite{Knuth:1968,Paakki:1995} exist today, we selected the LISA compiler-generator, which was developed at the University of Maribor in the late 1990s~\cite{Mernik:2002}. The LISA tool produces highly efficient Java source code for the scanner, parser, and interpreter or compiler. The lexical and syntactical parts of a language specification in LISA support various well-known formal methods, such as regular expressions and BNF~\cite{Aho:1972}. LISA provides two kinds of user interfaces: \begin{itemize} \item a graphic user interface (GUI) (Fig.~\ref{pic:LISA_GUI}), and \item a Web-Service user interface. \end{itemize} \begin{figure*}[htb] \begin{center} \includegraphics [scale=0.8] {LISA.png} \caption{LISA GUI} \label{pic:LISA_GUI} \end{center} \vspace{-5mm} \end{figure*} The main features of LISA are as follows: \begin{itemize} \item since it is written in Java, LISA works on all Java platforms, \item a textual or a visual environment, \item an Integrated Development Environment (IDE), where users can specify, generate, compile and execute programs on the fly, \item visual presentations of different structures, such as finite-state-automata, BNF, a dependency graph, a syntax tree, etc., \item modular and incremental language development~\cite{Mernik:2005a}.
\end{itemize} LISA specifications are based on attribute grammars (AGs)~\cite{Paakki:1995}, as introduced by D.E. Knuth~\cite{Knuth:1968}. An attribute grammar is a triple $AG=\langle G,A,R \rangle$, where $G$ denotes a context-free grammar, $A$ a finite set of attributes, and $R$ a finite set of semantic rules. In line with this, the LISA specifications (Table~\ref{tab:tab30}) include: \begin{itemize} \item lexical regular definitions (lexicon part in Table~\ref{tab:tab30}), \item attribute definitions (attributes part in Table~\ref{tab:tab30}), \item syntax rules (rule part before compute in Table~\ref{tab:tab30}), \item semantic rules (rule part after compute in Table~\ref{tab:tab30}), and \item operations on semantic domains (method part in Table~\ref{tab:tab30}). \end{itemize} \begin{table}[htb] \caption{LISA specifications} \label{tab:tab30} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l | } \hline \ \ {\bf language } $L_{1}$ $[$ {\bf extends} $L_{2}$, ..., $L_{N}$$]$ \{\\ \ \ \ \ {\bf lexicon} \{\\ \ \ \ \ \ \ $[[$P$]$ {\bf overrides} $|$ $[$P$]$ {\bf extends}$]$ R regular expr.\\ \ \ \ \ \ \ \ \ \vdots\\ \ \ \ \ \}\\ \ \ \ \ {\bf attributes} type A1, ..., AM\\ \ \ \ \ \ \ \vdots\\ \ \ \ {\bf rule} $[[$Y$]$ {\bf extends} $|$ $[$Y$]$ {\bf overrides}$]$ Z \{\\ \ \ \ \ \ \ X ::= X$_{11}$ X$_{12}$ ... X$_{1p}$ {\bf compute} \{\\ \ \ \ \ \ \ \ \ \ \ semantic functions \}\\ \ \ \ \ \ \ \ \ \vdots\\ \ \ \ \ \ \ $|$\\ \ \ \ \ \ \ \ \ X$_{r1}$ X$_{r2}$ ...
X$_{rt}$ {\bf compute} \{\\ \ \ \ \ \ \ \ \ \ \ semantic functions \} \\ \ \ \ \ \ \ ;\\ \ \ \ \ \ \ \}\\ \ \ \ \ \ \ \vdots\\ \ \ \ \ {\bf method} $[[$N$]$ {\bf overrides} $|$ $[$N$]$ {\bf extends}$]$ M \{\\ \ \ \ \ \ \ operations on semantic domains\\ \ \ \ \ \ \ \}\\ \ \ \ \ \vdots\\ \ \ \} \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} Lexical specifications for EasyTime in LISA (Fig.~\ref{pic:LISA_GUI}) are similar to those used in other compiler-generators, and are obtained from the EasyTime concrete syntax (Table~\ref{tab:tab17}). Note that in the rule part of the LISA specifications the terminal symbols that are defined by regular expressions in the lexical part are denoted with the symbol \# (e.g., \#Id, \#Int). The EasyTime concrete syntax is derived from the EasyTime abstract syntax (Table~\ref{tab:X}). The process of transforming abstract syntax into concrete syntax is straightforward, and is presented in~\cite{Fister:2011}. Semantic rules are written in LISA as regular Java assignment statements and are attached to a particular syntax rule. Hence, the rule part in LISA (Table~\ref{tab:tab30}) specifies the BNF production as well as the attribute computations attached to this production. Since the theory of attribute grammars is a standard topic of compiler science, it is assumed that the reader has basic knowledge of attribute grammars~\cite{Knuth:1968,Paakki:1995}.
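Conceptually, each LISA semantic rule computes an attribute of a tree node from the attributes of its children, just as a Java assignment would. A minimal hand-rolled sketch of this bottom-up computation of the synthesized attribute $code$ (our own illustration, not LISA's generated evaluator):

```java
// Hand-rolled sketch of synthesized-attribute evaluation: each node computes
// its `code` attribute from its children, as a LISA semantic rule (a Java
// assignment attached to a production) would.  Illustrative names only.
public class AttrSketch {
    public interface Node { String code(); } // synthesized attribute `code`

    // CA [[ n ]] = PUSH n        CA [[ x ]] = FETCH x
    public static Node num(int n)    { return () -> "PUSH " + n; }
    public static Node var(String x) { return () -> "FETCH " + x; }

    // CB [[ a1 == a2 ]] = CA [[ a2 ]] : CA [[ a1 ]] : EQ
    public static Node eq(Node a1, Node a2) {
        return () -> a2.code() + ":" + a1.code() + ":EQ";
    }

    public static void main(String[] args) {
        // the condition (ROUND2 == 0) from the sample program
        System.out.println(eq(var("ROUND2"), num(0)).code()); // prints PUSH 0:FETCH ROUND2:EQ
    }
}
```

The generated string matches the code that the semantic functions $\mathcal{CB}$ and $\mathcal{CA}$ of the formal semantics prescribe for this expression.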
\begin{table}[htb] \caption{The concrete syntax of EasyTime} \label{tab:tab17} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline PROGRAM & ::= & AGENTS DECS MES\_PLACES \\ AGENTS & ::= & AGENTS AGENT \textbar\ $\varepsilon$ \\ AGENT & ::= & \#Int auto \#ip ; \textbar\ \#Int manual \#file ; \\ DECS & ::= & DECS DEC \textbar\ $\varepsilon$ \\ DEC & ::= & var \#Id $:=$ \#Int ; \\ MES\_PLACES & ::= & MES\_PLACE MES\_PLACES \textbar\ MES\_PLACE \\ MES\_PLACE & ::= & mp[ \#Int ] $->$ agnt [ \#Int ] \{ STMTS \} \\ STMTS & ::= & STMT STMTS \textbar\ STMT \\ STMT & ::= & dec \#Id ; \textbar\ upd \#Id ; \textbar\ \#Id $:=$ EXPR ; \textbar\ ( LEXPR ) $->$ STMT \\ LEXPR & ::= & true \textbar\ false \textbar\ EXPR == EXPR \textbar\ EXPR != EXPR \\ EXPR & ::= & \#Int \textbar\ \#Id \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} \subsection{Translation scheme from denotational semantics to attribute grammars} The most difficult part of transforming the formal EasyTime specifications into LISA specifications consists of mapping denotational semantics into attribute grammars. This mapping can be described in a systematic manner, and can also be used for the implementation of other DSLs (e.g.,~\cite{Lukovic:2011}). It consists of the following steps, similar to the translation scheme from natural semantics into attribute grammars \cite{Attali1994}: \begin{enumerate} \item Identification of the syntactic and semantic domains in each semantic function of the denotational semantics. Identified syntactic domains must have their counterparts in non-terminals of the concrete syntax. Identified semantic domains must be represented appropriately, with suitable data structures (ty\-pes) in the chosen programming language. \item Identification of inherited and synthesized attributes for each non-terminal derived in step 1.
A semantic argument that is an input parameter of a semantic function is represented as an inherited attribute, while an output parameter is represented as a synthesized attribute. According to \cite{Knuth:1968}, the starting non-terminal should not have inherited attributes. Whilst LISA automatically infers whether an attribute is inherited or synthesized \cite{Knuth:1968}, the type of each attribute must be specified (Fig.~\ref{pic:LISA_GUI}). \item For all identified attributes attached to a particular non-terminal, semantic equations need to be developed that are in conformance with the semantic equations of the denotational semantics. In particular, semantic equations need to be written for each synthesized attribute of the left-hand side non-terminal and for each inherited attribute attached to non-terminals on the right-hand side. This rule is applied to every production of the concrete syntax. In this step the complete semantic equation is not yet written; only the existence of such an equation is identified. \item In the productions of the concrete syntax certain new non-terminals appear, which are a consequence of the transformation of abstract syntax into concrete syntax. These non-terminals also carry information that is needed for the computations. In this step such non-terminals are identified and the attached attributes are classified into inherited and synthesized. \item Finalizing the semantics for all identified semantic equations. These semantic equations need to be in conformance with the denotational semantics, and require careful examination of the semantic functions of the denotational semantics (e.g., $\mathcal{CP}$, $\mathcal{A}$, $\mathcal{D}$, $\mathcal{CM}$, $\mathcal{CS}$, $\mathcal{CB}$, and $\mathcal{CA}$ from Table~\ref{tab:tab6}). This step is the most demanding. \item In code generation, certain additional tests are usually performed which are sometimes not described in the formal semantics, in order to keep it at a proper abstraction level.
For example, only declared variables can be used in the expressions and commands of the language under development. Such additional tests require that new attributes are defined to carry the results of the tests, as well as existing attributes being propagated to the appropriate constructs (e.g., expressions, commands). An attribute grammar is finalized during this step. \end{enumerate} Note that the presented guidelines are general and not restricted to a particular class of attribute grammars~\cite{Knuth:1968,Paakki:1995} (e.g., S-attributed, L-attributed, ordered attribute grammars, absolutely non-circular attribute grammars). Actually, the class of the obtained attribute grammar can be identified only after the translation has been completely performed. \subsection{Translation scheme from EasyTime formal semantics to LISA} When applying the aforementioned rules to EasyTime, the following results are obtained after each step. Step 1: The following non-terminals from Table~\ref{tab:tab17} represent syntactic domains (Table~\ref{tab:X}): PROGRAM $\in$ \textbf{Pgm}, MES\_PLACES $\in$ \textbf{MeasPlace}, DECS $\in$ \textbf{Dec}, AGENTS $\in$ \textbf{Adec}, STMTS $\in$ \textbf{Stm}, etc. Semantic domains (Table~\ref{tab:Y}) such as \textbf{Integer}, \textbf{Truth-Value}, and \textbf{Code} have direct counterparts in the Java types int, boolean, and String. Semantic domains which are functions (e.g., \textbf{State}, \textbf{Agents}) can be modeled with the Java Hashtable type. For example, from Figure~\ref{pic:LISA_GUI} we can notice that the attribute $inState$, which represents the function \textbf{State}, is of type Hashtable. Using methods such as $put()$, $get()$, and $containsKey()$ we can, respectively, insert a new variable, obtain a variable's value, and check whether a variable is declared. Other semantic domains (e.g., Cartesian products) can easily be modeled with the rich Java type system.
Hence, in LISA the type of attributes regarding an attribute grammar can be any valid pre-defined or user-defined Java type. An example of the auxiliary operations on semantic domains (e.g., Hashtable) is presented in \cite{Fister:2011a}. \newpage Step 2: From $\mathcal{CP}:\textbf{Pgm} \rightarrow \textbf{Runners} \rightarrow$ \textbf{Code} $\times$ \textbf{Integer} $\times$ \textbf{DataBase} (Table \ref{tab:tab6}) it can be concluded that one inherited attribute (representing the parameter of type \textbf{Runners}) and three synthesized attributes (representing the parameters of type \textbf{Code}, \textbf{Integer}, and \textbf{DataBase}) need to be attached to the non-terminal PROGRAM. However, the starting non-terminal should not have inherited attributes~\cite{Knuth:1968,Paakki:1995}. From the definition of the semantic function $\mathcal{CP}$ (Table~\ref{tab:tab6}) it can be noticed that the input parameter of type \textbf{Runners} is only needed to create a database. Hence, both parameters (of type \textbf{Runners} and \textbf{DataBase}) can be omitted from the LISA specifications, and their functionality can be implemented externally. Moreover, it was decided to represent both the generated code and the identification number of the virtual machine, on which the code is going to be executed, as a single string \textbf{"}(Code, Integer)\textbf{"}. Hence, only one synthesized attribute, PROGRAM.code, is attached to the starting non-terminal PROGRAM.\\ From $\mathcal{A}:\textbf{Adec}\rightarrow \textbf{Agents} \rightarrow$ \textbf{Agents} (Table \ref{tab:tab6}) it can be concluded that one inherited and one synthesized attribute need to be attached to the non-terminal AGENTS. For this purpose AGENTS.inAG is an inherited attribute, and AGEN\-TS.outAG a synthesized attribute.
Both attributes are of type Hashtable, since the semantic domain \textbf{Agents} is a function, which can be modeled as a Hashtable.\\ From $\mathcal{D}:\textbf{Dec}\rightarrow \textbf{State} \rightarrow$ \textbf{State} (Table \ref{tab:tab6}) it can be concluded that one inherited and one synthesized attribute need to be attached to the non-terminal DECS. For this purpose DECS.inState is an inherited attribute, and DECS.outState a synthesized attribute. Both attributes are of type Hashtable, since the semantic domain \textbf{State} is a function, which can be modeled as a Hashtable.\\ From $\mathcal{CM}:\textbf{MeasPlace} \rightarrow \textbf{Agents} \rightarrow$ \textbf{Code} $\times$ \textbf{Integer} (Table \ref{tab:tab6}) it can be concluded that one inherited and two synthesized attributes need to be attached to the non-terminal MES\_PLACES. Again, it was decided to represent both the generated code and the identification number of the virtual machine as a string. For this purpose MES\_\-PLACES.inAG is an inherited attribute and MES\_PLACES.code is a synthesized attribute.\\ From $\mathcal{CS}:\textbf{Stm} \rightarrow$ \textbf{Agents} $\times$ \textbf{Integer} $\rightarrow$ \textbf{Code} (Table \ref{tab:tab6}) it can be concluded that two inherited and one synthesized attribute need to be attached to the non-terminal STMTS. For this purpose STMTS.inAG and STMTS.n are inherited attributes of type Hash\-table and int, respectively. The attribute STMTS.code is a synthesized attribute of type String. The attributes, inherited and synthesized, attached to the appropriate non-terminals are collated in Table~\ref{tab:tab18}.
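The role of the inherited attributes STMTS.inAG and STMTS.n can be illustrated with a short Java sketch: compiling \textbf{upd} $x$ must consult agent $n$ in the agent table to decide whether the time comes from a result file (manual agent) or a connection (auto agent), mirroring $\mathcal{CS} \lsem \textbf{upd}\ x \rsem (ag,n)$ from Table~\ref{tab:tab6}. The class and method names are illustrative assumptions, not the generated LISA code.

```java
import java.util.HashMap;
import java.util.Map;

// Why `upd x` needs the inherited attributes inAG and n: the generated FETCH
// depends on the type of agent n.  Illustrative only.
public class UpdCompiler {
    public static class Agent {                 // (AType, FileSpec | IpAddress)
        public final String type, source;
        public Agent(String type, String source) { this.type = type; this.source = source; }
    }

    // CS [[ upd x ]] (ag, n) = FETCH y : STORE x, where y depends on ag(n)
    public static String compileUpd(String x, Map<Integer, Agent> inAG, int n) {
        Agent ag = inAG.get(n);
        String y = ag.type.equals("manual")
                 ? "accessfile(" + ag.source + ")"
                 : "connect(" + ag.source + ")";
        return "FETCH " + y + ":STORE " + x;
    }

    public static void main(String[] args) {
        Map<Integer, Agent> inAG = new HashMap<>();
        inAG.put(1, new Agent("manual", "abc.res"));
        inAG.put(2, new Agent("auto", "192.168.225.100"));
        System.out.println(compileUpd("SWIM", inAG, 1));
    }
}
```

Without inAG flowing down the tree, a statement deep inside a measuring place could not resolve its controlling agent.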
\begin{table}[htb] \caption{Attributes of non-terminals representing syntactic domains from EasyTime formal semantics} \label{tab:tab18} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l | l | l | } \hline X & Inherited(X) & Synthesized(X)\\ \hline PROGRAM & & code\\ AGENTS & inAG & outAG\\ DECS & inState & outState\\ MES\_PLACES & inAG & code\\ STMTS & inAG, n & code\\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} Step 3: In this step semantic equations are given for each synthesized attribute of the left-hand side non-terminal, and for each inherited attribute of the right-hand side non-terminals. This procedure is applied to each production in the context-free grammar (Table \ref{tab:tab17}). The LISA specification fragment illustrated in Table~\ref{tab:tab19} indicates which semantic equations need to be developed. Let us explain the process for the first production. Since the left-hand side non-terminal PROGRAM has only one synthesized attribute, $code$ (Table~\ref{tab:tab18}), only one semantic equation must be defined (PROGRAM.code = ...;). The other non-terminals (AGENTS, DECS, MES\_PLACES) in the first production are on the right-hand side, and hence only the inherited attributes attached to those non-terminals must be defined (AGENTS.inAG = ...; DECS.\-inState = ...; MES\_\-PLACES.\-inAG = ...;). Note that the order of these semantic equations is irrelevant~\cite{Knuth:1968,Paakki:1995}.
\begin{table}[htb] \caption{Semantic equations under development that are obtained after Step 3} \label{tab:tab19} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline PROGRAM & ::= & AGENTS DECS MES\_PLACES compute \{ \\ & & \ \ \ \ \ \ \ \ AGENTS.inAG = ...; \\ & & \ \ \ \ \ \ \ \ DECS.inState = ...; \\ & & \ \ \ \ \ \ \ \ MES\_PLACES.inAG = ...; \\ & & \ \ \ \ \ \ \ \ PROGRAM.code = ...; \}; \\ & & \\ AGENTS & ::= & AGENTS AGENT compute \{ \\ & & \ \ \ \ \ \ \ \ AGENTS[1].inAG = ...; \\ & & \ \ \ \ \ \ \ \ AGENTS[0].outAG = ...; \};\\ & & \\ DECS & ::= & DECS DEC compute \{ \\ & & \ \ \ \ \ \ \ \ DECS[1].inState = ...; \\ & & \ \ \ \ \ \ \ \ DECS[0].outState = ...; \}; \\ & & \\ MES\_PLACES & ::= & MES\_PLACE MES\_PLACES compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACES[1].inAG = ...; \\ & & \ \ \ \ \ \ \ \ MES\_PLACES[0].code = ...; \}; \\ & & \\ STMTS & ::= & STMT STMTS compute \{ \\ & & \ \ \ \ \ \ \ \ STMTS[1].n = ...; \\ & & \ \ \ \ \ \ \ \ STMTS[1].inAG = ...; \\ & & \ \ \ \ \ \ \ \ STMTS[0].code = ...; \}; \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} \newpage Step 4: From Step 3 the following non-terminals can be identified, which appear in the concrete syntax (Table~\ref{tab:tab17}) but were left unidentified in Steps 1--3: AGENT, DEC, MES\_\-PLACE, and STMT (Table~\ref{tab:tab24}). If the structure of these non-terminals is simple (e.g., AGENT, DEC), then they carry only synthesized attributes, mostly representing lexical values (Table~\ref{tab:tab25}). Semantic equations can be derived immediately for those attributes. On the other hand, some non-terminals might be complex (e.g., MES\_\-PLACE, STMT), and inherited attributes attached to these non-terminals are also needed. The attributes are similar to those attached to the other non-terminals of the productions in which the new non-terminals appear (Table~\ref{tab:tab18}).
Moreover, semantic equations may no longer be simple (Table~\ref{tab:tab25}). For example, the attributes attached to the non-terminals MES\_\-PLACE and STMT (Table~\ref{tab:tab24}) are the same as those attached to the non-terminals MES\_PLACES and STMTS, respectively (Table~\ref{tab:tab18}). However, due to the semantics of the update statement (Table~\ref{tab:tab6}), another attribute $y$ is attached to the non-terminal STMT (Table~\ref{tab:tab24}). \begin{table}[htb] \caption{Attributes for additional non-terminals} \label{tab:tab24} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l | l | l | } \hline X & Inherited(X) & Synthesized(X) \\ \hline AGENT & & number, type, file\_ip \\ DEC & & name, value \\ MES\_PLACE & inAG & code \\ STMT & inAG, n & code, y \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} \begin{table}[htb] \caption{Semantic equations for additional non-terminals} \label{tab:tab25} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline AGENT & ::= & \#Int auto \#ip ; compute \{ \\ & & \ \ \ \ \ \ \ \ AGENT.number = Integer.valueOf(\#Int[0].value()).intValue(); \\ & & \ \ \ \ \ \ \ \ AGENT.type = "auto"; \\ & & \ \ \ \ \ \ \ \ AGENT.file\_ip = \#ip.value(); \}; \\ & & \\ DEC & ::= & var \#Id $:=$ \#Int ; compute \{ \\ & & \ \ \ \ \ \ \ \ DEC.name = \#Id.value(); \\ & & \ \ \ \ \ \ \ \ DEC.value = Integer.valueOf(\#Int.value()).intValue(); \}; \\ & & \\ MES\_PLACES & ::= & MES\_PLACE MES\_PLACES compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACE.inAG = ...; \}; \\ & & \\ MES\_PLACE & ::= & mp [ \#Int ] $->$ agnt [ \#Int ] \{ STMTS \} compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACE.code= ...; \}; \\ & & \\ STMTS & ::= & STMT STMTS compute \{ \\ & & \ \ \ \ \ \ \ \ STMT.n = ...; \\ & & \ \ \ \ \ \ \ \ STMT.inAG = ...; \}; \\ & & \\ STMT & ::= & upd \#Id ; compute \{ \\ & & \ \ \ \ \ \ \ \ STMT.y = ...; \\ & & \ \ \ \ \ \ \ \ STMT.code = ...; \}; \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm}
\end{table} Step 5: The reasoning in this step is explained only for the semantic functions $\mathcal{A}$ and $\mathcal{CM}$ (Table \ref{tab:tab6}), which are translated into attributes for the non-terminals AGENTS, AGENT, MES\_PLACES, and MES\_PLACE (Tables~\ref{tab:tab18} and~\ref{tab:tab24}). For the other semantic functions the reasoning is similar. The semantic equation $\mathcal{A} \lsem A_{1};A_{2}\rsem ag$ = $\mathcal{A} \lsem A_{2} \rsem$ $(\mathcal{A} \lsem A_{1} \rsem ag)$ (Table \ref{tab:tab6}) constructs $ag \in Agents$, a function from an integer denoting an agent into the agent's type (manual or auto) and the agent's ip or file. This function is described in LISA as presented in Table~\ref{tab:tab25a}. From Table~\ref{tab:tab25a} it can be noticed how the attribute $outAG$, which represents $ag \in Agents$, is constructed simply by calling the method $insert()$. The method $insert()$ will insert a new agent with a particular number, type, and file\_ip into the Hashtable. Note also how the missing equations from Step 3 have been developed. The net effect is that we are constructing a list, more precisely a hash table, of agents where we are recording the agent's number ($AGENT.number$), the agent's type ($AGENT.type$), and the agent's ip or file ($AGENT.file\_ip$) (see Step 4). The complete LISA specification for the semantic function $\mathcal{A}$ is shown in Algorithm \ref{alg:agent_lisa}.
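The auxiliary operations on semantic domains are presented in \cite{Fister:2011a}; a minimal reconstruction of what an $insert()$ along these lines can look like is sketched below. It copies the incoming agent table (so that attribute values stay immutable) and adds one agent keyed by its number, matching the equation $ag[n \rightarrow (type, file\_ip)]$; the Agent class and field names are our own assumptions.

```java
import java.util.Hashtable;

// Reconstruction sketch of the auxiliary insert() used in the AGENTS rule:
// ag[n -> (type, file_ip)].  Not the actual EasyTime helper code.
public class Agents {
    public static class Agent {
        public final int number;
        public final String type;   // "manual" or "auto"
        public final String fileIp; // FileSpec or IpAddress
        public Agent(int number, String type, String fileIp) {
            this.number = number; this.type = type; this.fileIp = fileIp;
        }
    }

    public static Hashtable<Integer, Agent> insert(Hashtable<Integer, Agent> ag, Agent a) {
        Hashtable<Integer, Agent> out = new Hashtable<>(ag); // copy: keep attributes value-like
        out.put(a.number, a);
        return out;
    }

    public static void main(String[] args) {
        Hashtable<Integer, Agent> ag = insert(new Hashtable<>(), new Agent(1, "manual", "abc.res"));
        ag = insert(ag, new Agent(2, "auto", "192.168.225.100"));
        System.out.println(ag.get(2).type); // prints auto
    }
}
```

Folding $insert()$ over the agent declarations of the sample program reproduces the agent table that the inherited attribute inAG later distributes to the measuring places.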
\begin{table}[htb] \caption{Semantic equation for AGENTS} \label{tab:tab25a} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline AGENTS & ::= & AGENTS AGENT compute \{ \\ & & \ \ \ \ \ \ \ \ AGENTS[1].inAG = AGENTS[0].inAG; \\ & & \ \ \ \ \ \ \ \ AGENTS[0].outAG = insert(AGENTS[1].outAG, \\ & & \ \ \ \ \ \ \ \ new Agent(AGENT.number, AGENT.type, AGENT.file\_ip)); \\ & & \ \ \ \ \ \ \ \ \} \\ & & \ \ \ \ $\mid$ epsilon compute \{ \\ & & \ \ \ \ \ \ \ \ AGENTS.outAG = AGENTS.inAG; \\ & & \ \ \ \ \ \ \ \ \}; \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} \begin{algorithm}[tbh] \caption{Translation of Agents into LISA specifications} \label{alg:agent_lisa} \scriptsize \begin{algorithmic}[1] \STATE rule Agents \{ \STATE \ \ \ \ AGENTS ::= AGENTS AGENT compute \{ \STATE \ \ \ \ \ \ \ \ AGENTS[1].inAG = AGENTS[0].inAG; \STATE \ \ \ \ \ \ \ \ AGENTS[0].outAG = insert(AGENTS[1].outAG, \STATE \ \ \ \ \ \ \ \ \ new Agent(AGENT.number, AGENT.type, AGENT.file\_ip)); \STATE \ \ \ \ \} \STATE \ \ \ \ $|$ epsilon compute \{ \STATE \ \ \ \ \ \ \ \ AGENTS.outAG = AGENTS.inAG; \STATE \ \ \ \ \}; \STATE \} \STATE rule AGENT \{ \STATE \ \ \ \ AGENT ::= \#Int manual \#file ; compute \{ \STATE \ \ \ \ \ \ \ \ AGENT.number = Integer.valueOf(\#Int[0].value()).intValue(); \STATE \ \ \ \ \ \ \ \ AGENT.type = "manual"; \STATE \ \ \ \ \ \ \ \ AGENT.file\_ip = \#file.value(); \STATE \ \ \ \ \}; \STATE \ \ \ \ AGENT ::= \#Int auto \#ip ; compute \{ \STATE \ \ \ \ \ \ \ \ AGENT.number = Integer.valueOf(\#Int[0].value()).intValue(); \STATE \ \ \ \ \ \ \ \ AGENT.type = "auto"; \STATE \ \ \ \ \ \ \ \ AGENT.file\_ip = \#ip.value(); \STATE \ \ \ \ \}; \STATE \} \end{algorithmic} \normalsize \end{algorithm} The reasoning for the semantic function $\mathcal{CM}$ is done in a similar manner.
The semantic equation $\mathcal{CM} \lsem M_{1}; M_{2} \rsem ag$ = $\mathcal{CM} \lsem M_{1} \rsem ag: \mathcal{CM} \lsem M_{2} \rsem ag$ (Table \ref{tab:tab6}) translates the first construct $M_1$ into code before performing the translation of the second construct $M_2$. This function is described in LISA, as represented in Table~\ref{tab:tab25b}, with the following meaning: The code for the first construct $\mathit{MES\_PLACE}$ is simply concatenated with the code from the second construct $MES\_PLACES[1]$. \begin{table}[htb] \caption{Semantic equation for MES\_PLACES} \label{tab:tab25b} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline MES\_PLACES & ::= & MES\_PLACE MES\_PLACES compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACES[0].code = MES\_PLACE.code + \\ & & \ \ \ \ \ \ \ \ "$\backslash$ n" + MES\_PLACES[1].code; \}; \\ MES\_PLACES & ::= & MES\_PLACE compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACES.code = MES\_PLACE.code \}; \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} The semantic equation $\mathcal{CM}\lsem \textbf{mp}[n_{1}]\rightarrow\textbf{agnt}[n_{2}] S \rsem ag$ = $(WAIT\ i:\mathcal{CS} \lsem S \rsem$ $(ag,n_{2}), n_{1})$ (Table \ref{tab:tab6}) is described in LISA, as presented in Table~\ref{tab:tab26}. \begin{table}[htb] \caption{Semantic equation for MES\_PLACE} \label{tab:tab26} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline MES\_PLACE & ::= & mp [ \#Int ] $->$ agnt [ \#Int ] \{ STMTS \} compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACE.code= "(WAIT i " + STMTS.code + \\ & & \ \ \ \ \ \ \ \ ", " + \#Int[0].value() + ")"; \}; \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} However, in this step the undefined semantic equations from steps 3 and 4 also need to be developed (Table~\ref{tab:tab27}). For example, a list of agents ($inAG$) needs to be propagated. 
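The string-building behaviour of these two code-generation equations can be sketched as follows (a hypothetical Python illustration; the function names are ours, and only the concatenation performed by the LISA rules is modelled):

```python
# Hypothetical sketch of the code generation for measuring places:
# each mp[n1] -> agnt[n2] { S } becomes "(WAIT i <code of S>, n1)", and the
# code of consecutive measuring places is concatenated with newlines.

def gen_mes_place(mp_number, stmts_code):
    """CM[[ mp[n1] -> agnt[n2] S ]] ag = (WAIT i CS[[S]](ag, n2), n1)."""
    return "(WAIT i " + stmts_code + ", " + str(mp_number) + ")"

def gen_mes_places(places):
    """CM[[ M1; M2 ]] ag = CM[[M1]] ag : CM[[M2]] ag -- newline-joined."""
    return "\n".join(gen_mes_place(n, code) for n, code in places)
```

Applied to the second measuring place of the running example, `gen_mes_place(2, 'FETCH accessfile("abc.res") STORE TRANS1')` produces exactly the line `(WAIT i FETCH accessfile("abc.res") STORE TRANS1, 2)`.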
\begin{table}[htb] \caption{Developing undefined semantic equations for MES\_PLACES} \label{tab:tab27} \vspace{-5mm} \footnotesize \begin{center} \begin{tabular}{ | l l l | } \hline MES\_PLACES & ::= & MES\_PLACE MES\_PLACES compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACE.inAG = MES\_PLACES[0].inAG; \\ & & \ \ \ \ \ \ \ \ MES\_PLACES[1].inAG = MES\_PLACES[0].inAG; \\ & & \ \ \ \ \ \ \ \ ... \}; \\ MES\_PLACES & ::= & MES\_PLACE compute \{ \\ & & \ \ \ \ \ \ \ \ MES\_PLACE.inAG = MES\_PLACES.inAG; \\ & & \ \ \ \ \ \ \ \ ... \}; \\ \hline \end{tabular} \end{center} \normalsize \vspace{-5mm} \end{table} Step 6: EasyTime also uses variables in statements, and additional checks must be performed to ensure that only declared variables appear in expressions and statements. For this reason an additional attribute $ok$ of type boolean has been introduced into the specifications. Moreover, to be able to check if a variable is declared, it is necessary to propagate the attribute $inState$ into the measuring places, statements, and expressions. The complete LISA specifications for MES\_PLACE are shown in Algorithm \ref{alg:mp_lisa}, which also uses the attributes $ok$ and $inState$. \begin{table}[htb] \caption{Semantic equations for the starting production} \label{tab:tab28} \footnotesize \vspace{-5mm} \begin{center} \begin{tabular}{ | l l l | } \hline PROGRAM & ::= & AGENTS DECS MES\_PLACES compute \{ \\ & & \ \ \ \ \ \ \ \ AGENTS.inAG = new Hashtable(); \\ & & \ \ \ \ \ \ \ \ DECS.inState = new Hashtable(); \\ & & \ \ \ \ \ \ \ \ MES\_PLACES.inAG = AGENTS.outAG; \\ & & \ \ \ \ \ \ \ \ MES\_PLACES.inState = DECS.outState; \\ & & \ \ \ \ \ \ \ \ PROGRAM.code = MES\_PLACES.ok ? 
"$\backslash$ n" + \\ & & \ \ \ \ \ \ \ \ MES\_PLACES.code + "$\backslash$ n" : "ERROR"; \}; \\ \hline \end{tabular} \end{center} \vspace{-5mm} \normalsize \end{table} \begin{algorithm}[tbh] \caption{Translation of MES\_PLACE into LISA specifications} \label{alg:mp_lisa} \scriptsize \begin{algorithmic}[1] \STATE rule Mes\_places \{ \STATE \ \ \ \ MES\_PLACES ::= MES\_PLACE MES\_PLACES compute \{ \STATE \ \ \ \ \ \ \ \ MES\_PLACE.inAG = MES\_PLACES[0].inAG; \STATE \ \ \ \ \ \ \ \ MES\_PLACES[1].inAG = MES\_PLACES[0].inAG; \STATE \ \ \ \ \ \ \ \ MES\_PLACE.inState = MES\_PLACES[0].inState; \STATE \ \ \ \ \ \ \ \ MES\_PLACES[1].inState = MES\_PLACES[0].inState; \STATE \ \ \ \ \ \ \ \ MES\_PLACES[0].ok = MES\_PLACE.ok \&\& MES\_PLACES[1].ok; \STATE \ \ \ \ \ \ \ \ MES\_PLACES[0].code = MES\_PLACE.code + "$\backslash$n" + MES\_PLACES[1].code; \STATE \ \ \ \ \}; \STATE MES\_PLACES ::= MES\_PLACE compute \{ \STATE \ \ \ \ \ \ \ \ MES\_PLACE.inAG = MES\_PLACES.inAG; \STATE \ \ \ \ \ \ \ \ MES\_PLACE.inState = MES\_PLACES.inState; \STATE \ \ \ \ \ \ \ \ MES\_PLACES.ok = MES\_PLACE.ok; \STATE \ \ \ \ \ \ \ \ MES\_PLACES.code = MES\_PLACE.code; \STATE \ \ \ \ \}; \STATE \} \STATE rule MES\_PLACE \{ \STATE \ \ \ \ MES\_PLACE ::= mp $\backslash[$ \#$\mathit{Int}$ $\backslash]$ $\backslash-\backslash>$ $\mathit{agnt}$ $\backslash[$ \#$\mathit{Int}$ $\backslash]$ $\backslash\{$ STMTS $\backslash\}$ compute \{ \STATE \ \ \ \ \ \ \ \ STMTS.inAG = MES\_PLACE.inAG; \STATE \ \ \ \ \ \ \ \ STMTS.inState = MES\_PLACE.inState; \STATE \ \ \ \ \ \ \ \ STMTS.n = Integer.valueOf(\#Int[1].value()).intValue(); \STATE \ \ \ \ \ \ \ \ MES\_PLACE.ok = STMTS.ok; \STATE \ \ \ \ \ \ \ \ MES\_PLACE.code = "(WAIT i " + STMTS.code + ", " + \#Int[0].value() + ")"; \STATE \ \ \ \ \}; \STATE \} \end{algorithmic} \normalsize \end{algorithm} Semantic equations for other production are obtained in a similar manner. 
Let us conclude this example by finalizing the semantic equations for the starting production (see also Table~\ref{tab:tab19}). The initial hash tables for agents ($AGENTS.inAG$) and declarations ($DECS.inState$) are empty (Table~\ref{tab:tab28}). Agents and declarations are constructed after visiting the subtrees represented by the non-terminals $AGENTS$ and $DECS$, and stored into the attributes $AGENTS.outAG$ and $DECS.outState$, which are passed to the subtree represented by the non-terminal $MES\_PLACES$. If all the syntactic constraints are satisfied ($MES\_PLACES.ok==true$), then the generated code is equal to the code produced by the subtree represented by the non-terminal $MES\_PLACES$. \section{Operation} Local organizers of sporting competitions were faced with two possibilities before developing EasyTime: \begin{itemize} \item to hire a specialized company to measure time, \item to measure time manually. \end{itemize} The former possibility is expensive, whilst the latter can be very unreliable. However, both objectives (i.e. inexpensiveness and reliability) can be fulfilled by EasyTime. On the other hand, producers of measuring devices usually deliver their units with software for collecting events into a database. These events then need to be post-processed (batch processed) to get the final results of the competitors. Although this batch processing can be executed whenever the organizer desires, real-time applications require online processing. Fortunately, EasyTime enables both kinds of event processing. Before a source program written in EasyTime can be used by the measuring system, it needs to be compiled. Note that the code generation \cite{Aho:1972} of a program in EasyTime is performed only if the parsing is finished successfully. Otherwise the compiler prints out an error message and stops. For each measuring place individually, the code is automatically generated by strictly following the rules, as defined in Section 3. 
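To give a flavour of how the generated virtual-machine code behaves when executed, the following hypothetical Python fragment interprets a small subset of the instruction set (the actual instructions and their semantics are defined in Table~\ref{tab:am}; only PUSH, FETCH, DEC and STORE are modelled here, as a stack machine over a variable store):

```python
# Hypothetical interpreter for a subset of the generated virtual-machine
# code. Only PUSH <n>, FETCH <var>, DEC and STORE <var> are modelled; e.g.
# "FETCH ROUND1 DEC STORE ROUND1" decrements the lap counter ROUND1 when a
# competitor crosses a measuring place.

def run(tokens, store):
    """Interpret a whitespace-tokenised instruction list against a store."""
    stack, i = [], 0
    while i < len(tokens):
        op = tokens[i]
        if op == "PUSH":
            stack.append(int(tokens[i + 1])); i += 2
        elif op == "FETCH":
            stack.append(store[tokens[i + 1]]); i += 2
        elif op == "STORE":
            store[tokens[i + 1]] = stack.pop(); i += 2
        elif op == "DEC":
            stack.append(stack.pop() - 1); i += 1
        else:
            raise ValueError("instruction not modelled in this sketch: " + op)
    return store
```

For example, `run("FETCH ROUND1 DEC STORE ROUND1".split(), {"ROUND1": 3})` leaves `ROUND1` at 2.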
An example of the generated code from Algorithm~\ref{alg:prog} for controlling the measurements, as illustrated by Fig.~\ref{pic:slika_1}, is presented in Table~\ref{tab:tab10}. Note that the generated code is saved into a database. The meaning of the particular virtual-machine instructions (e.g., WAIT, FETCH, STORE) is explained in Table~\ref{tab:am}. \begin{table}[htb] \caption{Translated code for the EasyTime program in Algorithm~\ref{alg:prog}} \label{tab:tab10} \begin{center} \vspace{-5mm} \small \begin{tabular}{ | l | } \hline (WAIT i FETCH accessfile("abc.res") STORE SWIM \\ FETCH ROUND1 DEC STORE ROUND1, 1) \\ \\ (WAIT i FETCH accessfile("abc.res") STORE TRANS1, 2) \\ \\ (WAIT i FETCH connect(192.168.225.100) STORE INTER2 \\ FETCH ROUND2 DEC STORE ROUND2 \\ PUSH 0 FETCH ROUND2 EQ BRANCH( FETCH \\ connect(192.168.225.100) STORE BIKE, NOOP), 3) \\ \\ (WAIT i FETCH connect(192.168.225.100) STORE INTER3 \\ PUSH 55 FETCH ROUND3 EQ BRANCH( FETCH \\ connect(192.168.225.100) STORE TRANS2, NOOP) \\ FETCH ROUND3 DEC STORE ROUND3 \\ PUSH 0 FETCH ROUND3 EQ BRANCH( FETCH \\ connect(192.168.225.100) STORE RUN, NOOP), 4) \\ \hline \end{tabular} \vspace{-5mm} \normalsize \end{center} \end{table} In essence, the generated code controls an agent by writing the events received from the measuring devices into the database. Normally, the program code is loaded from the database only once. That is, only the interpretation of the code could have any impact on the performance of a measuring system. Because this interpretation is not time-consuming, it cannot degrade the performance of the system. On the other hand, the precision of measuring time is handled by the measuring device and is not changed by the processing of events. In fact, the events can be processed as follows: \begin{itemize} \item batch: manual mode of processing, and \item online: automatic mode of processing. 
\end{itemize} When the first mode of processing is assumed, the agent reads and writes the events that are collected in a text file. Typically, events captured by a computer timer are processed in this mode. Here, the agent checks for the existence of the event text file that is configured in the agent statement. If it exists, the batch processing is started. When the processing is finished, the text file is archived and then deleted. The online processing is event oriented, i.e. each event generated by the measuring device is processed as it arrives. In both modes of processing, the agent works with the program PGM, the runner table RUNNERS, and the results table DATABASE, as can be seen in Fig.~\ref{pic:slika_3}. An initialization of the virtual machine is performed when the agent starts. The initialization consists of loading the program code from PGM. That is, the code is loaded only once. At the same time, the variables are initialized to their starting values. In order to ensure the reliability of EasyTime in practice, competitors are not allowed to go directly from swimming to running, because the course is complex and the competitor must go through both transition areas. In the case that a competitor skips over the next discipline, the referees disqualify him/her immediately. Actually, EasyTime is only of assistance to referees. Misuses of the triathlon rules do not have any impact on its operation. \begin{figure}[htb] \vspace{-5mm} \begin{center} \includegraphics [scale=0.85]{Fig3.pdf} % \caption{Executable environment of a program in EasyTime} \label{pic:slika_3} \end{center} \vspace{-5mm} \end{figure} After the development of EasyTime, another demand arose: drafting detection in triathlons. This problem is especially pronounced in cycling, where competitors wishing to improve their results ride in close-knit groups. In this way, competitors achieve a higher speed and save energy for later efforts. 
Typically, within such a group of competitors the hardest work is performed by the leading competitor, because he needs to overcome the air resistance. At the same time, other competitors may take a rest. Actually, a drafting violation arises when one competitor rides closer than 7 meters behind another for more than 20 seconds. Interestingly, this rule is only enforced during long-distance triathlons, whilst drafting is allowed over short distances. Any competitor who violates this drafting rule is punished by the referees with 5 minutes of elimination from the cycling race. The referees observe the race from motorcycles and determine the drafting violations according to their own judgment. In this sense, the assessment is very subjective. On the other hand, the referees can monitor only one competitor at a time. Consequently, an automatic system is needed for detecting drafting violations during triathlons. A drafting detection system is proposed in order to track this violation. This system is based on smart-phones because these incorporate the following features: information access via wireless networks and GPS navigation. Smart-phones need to be carried by competitors on their bicycles (Fig.~\ref{pic:sys}). These determine the competitors' current GPS positions and transmit them over wireless modems to a web-service. From the positions of all competitors, the web-service calculates whether a particular competitor is violating the drafting rule. In addition, these violations can be checked by the referees on motorcycles using their smart-phones. \begin{figure*}[htb] \begin{center} \includegraphics [scale=0.6] {IM3-eng.png} \caption{Proposed system for drafting detection in triathlons} \label{pic:sys} \end{center} \vspace{-5mm} \end{figure*} Normally, the organizers of triathlons demand the integration of EasyTime with the system for drafting detection. 
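The drafting rule stated above (closer than 7 meters for more than 20 seconds) can be sketched as a simple check over per-second distance samples. This is a hypothetical illustration; the web-service's actual algorithm is not specified in the text.

```python
# Hypothetical check of the drafting rule: a violation arises when the gap
# to the competitor in front stays below 7 m for more than 20 s. Samples
# are (time in s, gap in m) pairs, e.g. derived from per-second GPS records.

DRAFT_DIST_M = 7.0
DRAFT_TIME_S = 20.0

def drafting_violation(samples):
    start = None                      # time at which the gap first closed
    for t, gap in samples:
        if gap < DRAFT_DIST_M:
            if start is None:
                start = t
            if t - start > DRAFT_TIME_S:
                return True
        else:
            start = None              # gap opened again: reset the clock
    return False
```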
At first glance, this integration can be performed at the computer-system level, i.e., the mobile agent is added to the existing EasyTime agents. This mobile agent acts as a web-service and runs on an application server. Like EasyTime, it uses its own database. Each record in this database represents a competitor's current GPS position that can be defined as the tuple $\langle \#,x,y,z,t,l \rangle$, where $\#$ denotes the competitor's starting number, $x,y,z$ his current position within the UTM coordinate system, $t$ the registration time in the mobile device, and $l$ the calculated path-length. This length $l$ is obtained by projecting the current position of the $\#$-th competitor onto the line that connects the points gained by tracking the cycling course with a precise GPS device, at each second. This determines the competitor's current position along the course, from which the distance to the competitor in front of him is calculated. At the moment, both systems run on the same server separately. However, the further development of wireless technology and pervasive computing~\cite{Weiser:1991} indicates that EasyTime should have the ability to run on an application server as well. Interestingly, measuring time in biathlons represents another great challenge for EasyTime. Here, competitors ski on cross-country skis and stop at certain places to shoot at targets with rifles that they carry. In order to measure time during biathlons, EasyTime needs to be modified slightly. In line with this, two measuring devices are needed, as well as a special measuring device for counting hits. The first measuring device is dedicated to measuring the four laps of skiing, whilst the second is applied for counting the penalty laps. Each missed shot attracts one additional penalty lap. The measuring device for counting hits is described in EasyTime as a 'new agent'. This agent is responsible for setting the number of additional penalty laps to be measured using the second measuring device. 
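Returning to the drafting-detection database, the path length $l$ described above can be sketched as a standard point-to-polyline projection. This is hypothetical illustration code, not the paper's actual implementation; the course is modelled as a sequence of tracked 2-D points.

```python
# Hypothetical sketch of the path length l: project the current GPS position
# onto the polyline connecting the points recorded along the cycling course,
# and return the arc length up to the projected point.
import math

def path_length(position, course):
    """Project position onto the course polyline; return arc length l."""
    px, py = position
    best = (float("inf"), 0.0)            # (squared distance, arc length)
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(course, course[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg = math.hypot(dx, dy)
        # clamp the projection parameter to the segment
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg ** 2))
        cx, cy = x1 + t * dx, y1 + t * dy
        d2 = (px - cx) ** 2 + (py - cy) ** 2
        if d2 < best[0]:
            best = (d2, travelled + t * seg)
        travelled += seg
    return best[1]
```

Comparing the $l$ values of two competitors then gives the along-course gap used by the drafting check.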
In contrast to the static initialization of the laps' counter in EasyTime, a new requirement arises, i.e., a dynamic initialization of this laps' counter needs to be implemented. EasyTime could also be extended and used in some other application domains. For example, EasyTime could be employed as an electric shepherd for tracking livestock (cows, sheep, etc.) in the mountains. In this case, each animal would be labeled with an RFID tag that is checked when the animal crosses the measuring place twice a day: first, in the morning, when the animals leave their stalls and, second, in the evening, when they return to their stalls. Each crossing of the measuring place by an animal decrements a herd-size counter by one. Essentially, the EasyTime tracking system reports an error when the counter is not decreased to zero within a specified time interval. In order for this tracking system to work properly, the herd-size counter has to be initialized twice a day (for example, at 12:00 am and 12:00 pm). Additionally, EasyTime could be used in the clothing industry for tracking cloth through production. Clothing production consists of the following phases: preparing, sewing, ironing, adjusting, quality-control and packing~\cite{Fister:2008,Fister:2010}. A particular piece of clothing originates during the preparation stage, where the parts of the cutting patterns are collected into bundles, labeled with RFID tags, and delivered for sewing. This transition of the bundle into the sewing room represents the starting point for the EasyTime tracking system. The other control points are as follows: transition from the sewing room into ironing, transition from ironing into adjusting, transition from adjusting into quality-control, and transition from quality-control into the packing room, which represents the finishing point of clothing production. Note that these transitions act similarly to the transition areas in Ironman competitions. 
Usually, the cloth does not traverse production in a strictly one-way manner, because quality-control can return it to any of the previous production phases. In this case, EasyTime could be used for tracking errors during clothing production. \section{Conclusion} The flexibility of the measuring system is a crucial objective in the development of universal software for measuring time in sporting competitions. Therefore, the domain-specific language EasyTime was formally designed, which enables the quick adaptation of a measuring system to the new requests of different sporting competitions. Preparing the measuring system for a new sporting competition with EasyTime requires the following: changing the program's source code that controls the processing of an agent, compiling the source code, and restarting the agent. Using EasyTime in the real world has shown that, when measuring times in small sporting competitions, the organizers no longer need to employ specialized and expensive companies. On the other hand, EasyTime can reduce the heavy configuration tasks of a measuring system for larger competitions as well. In this paper, we explained how the formal semantics of EasyTime are mapped into LISA specifications, from which a compiler is automatically generated. Despite the fact that the mapping is not difficult, it is not trivial either, as some additional rules must be defined for attribute propagation. Moreover, we need to take care of error reporting (e.g., multiple declarations of variables). In future work, EasyTime could be replaced by a domain-specific modeling language (DSML) \cite{Sprinkle:2009,Stuikys:2010,Vitiutinas:2011} that could additionally simplify the programming of a measuring system.
\section{Introduction\label{sec:Introduction}} \subsection{\added{Shear flow instability in strongly stratified and viscous medium}} The wake behind a bluff body has long been studied as one of the fundamental open shear flows in hydrodynamic stability, but few have studied the flow in a stratified medium at low Reynolds numbers. Much of the earlier research focused on the stabilising effect of stratification. Many have demonstrated that stratification has a stabilising effect on shear flows, theoretically \citep{Koppel1964}, numerically \citep{Gage1968,Hazel1972} and experimentally \citep{Boyer1989}. All of the results confirmed \replaced{Howard's theorem \citep{Howard1961,Miles1961}, which stated }{the seminal finding by \mbox{\citet{Howard1961}} and \mbox{\citet{Miles1961}}, who demonstrated }that the stability property of a vertical shear flow under stable linear stratification is governed by the `local' Richardson number: \begin{equation} \Ri=N^{*2}\left(\frac{dU^{*}}{dZ^{*}}\right)^{-2},\label{eq:Howards_criterion} \end{equation} where the superscript $^*$ indicates dimensional quantities, $N^{*}$ is the Brunt-V\"{a}is\"{a}l\"{a} frequency, $U^*$ the base-flow velocity and $dU^*/dZ^*$ the shear rate of the base-flow with $Z^*$ being the vertical direction. \added{While a criterion on $\Ri$ for stability had already been conjectured by \citet{Richardson1926}, \citet{Prandtl1930} and \citet{Taylor1931}, it was} \citet{Howard1961} \added{who elegantly }showed that \added{the flow becomes stable} if the local Richardson number everywhere is greater than or equal to $1/4$\deleted{, the flow becomes stable}. The implication of (\ref{eq:Howards_criterion}) is that the buoyancy force would stabilise the flow, while the shear would destabilise it, such that there exists a critical $\Ri$ that determines the necessary (but not sufficient) condition for flow instability. 
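As a simple numerical illustration of this criterion (not taken from the cited references; the tanh profile and the value of $N$ are arbitrary choices), the local Richardson number of a uniformly stratified tanh shear layer attains its minimum where the shear is strongest:

```python
import math

def local_richardson(N, dUdZ):
    """Local Richardson number Ri = N^2 / (dU/dZ)^2."""
    return (N / dUdZ) ** 2

# Vertical shear layer U(Z) = tanh(Z), so dU/dZ = sech^2(Z), maximal at Z = 0.
Z = [0.01 * i for i in range(-300, 301)]
Ri_min = min(local_richardson(0.6, 1.0 / math.cosh(z) ** 2) for z in Z)
stable_by_howard = Ri_min >= 0.25     # min Ri = 0.36 >= 1/4: stable
```

With $N = 0.6$ the minimum local Richardson number is $0.36 \geq 1/4$, so the flow satisfies Howard's sufficient condition for stability; lowering $N$ to $0.3$ drops the minimum to $0.09$ and the criterion no longer guarantees stability.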
This criterion, however, applies only to the instabilities of vertical shear flows, as the strong stable stratification would inhibit vertical fluid motion. Indeed, when a shear flow exists in the horizontal plane, the stabilising mechanism is no longer at play. \citet{Blumen1971,Blumen1975} first demonstrated the existence of horizontal shear instability by considering a base-flow profile that contains both horizontal and vertical shear. In the context of geophysical fluid dynamics, this horizontal shear instability has been known as barotropic instability \citep[see ][]{PedloskyJoseph1987Gfd,Vallis2017}. Recently, such a horizontal shear instability has also been confirmed even in strongly stratified flow where the base flow is not exactly aligned to the vertical direction. For example, both the stability analysis of a tilted stratified inviscid Bickley jet \citep{Candelier2011} and the experiment of a tilted stratified cylinder wake \citep{Meunier2012} have shown that there exists a horizontal shear instability arising at very low Froude number even for $\Ri>1/4$ everywhere. In both of the investigations, the most unstable mode was shown to experience a branch switching behaviour as Froude number decreases (see figure 6 in \citet{Meunier2012} and figure 3 in \citet{Candelier2011}), indicating a new type of instability mode arising at low Froude number. With a further decrease of the Froude number, the low-Froude-number mode was found to be even more destabilised, and, interestingly, the mode structure appears to be quite similar to that of a typical inflectional instability. Furthermore, the experiment by \citet{Meunier2012} showed that the critical Reynolds number of the low-Froude-number mode returns to a value similar to that of a homogeneous wake (i.e. the high-Froude-number instability mode) as the Froude number reaches zero. 
\added{In the context of stratified turbulence, the horizontal shear has also been understood to play a significant role in turbulence production. \citet{Jacobitz1998} studied the effect of tilted uniform shear on turbulence under stable stratification. They found that the introduction of horizontal shear through tilting significantly increases turbulence production. \citet{Jacobitz2002} further compared turbulence statistics in horizontal uniform shear flow with those in a vertical one under strong stratification. He found that the horizontal shear flow exhibits significantly larger turbulent velocity and density fluctuations than the vertical one. These results point to the important role of the horizontal shear in turbulence production in a strongly stratified flow, setting out the understanding of the genesis of turbulence from the horizontal shear.} In the regime where the new instability mode arises in a tilted shear flow (i.e. at low Froude number), the vertical length scale of the system has been shown to be determined by the interplay between the buoyancy force from the stratification and the fluid viscous force. In particular, \citet{Billant2001} proposed that, at high Reynolds numbers, the vertical length scale is proportional to the Froude number, while challenging the earlier argument of \citet{Lilly1983}, who proposed that highly stratified flow at high Reynolds number can be described only in terms of the two-dimensional dynamics on the horizontal plane. With this new scaling argument, the recent observations in the tilted stratified flows lead us to raise the following questions on the low-Froude-number instability mode: \begin{enumerate} \item If Howard's criterion does not apply to the horizontal instability mode, how can this mode be destabilised with increasing stratification, while maintaining the typical form of inflectional instability (i.e. the von K\'arm\'an vortex street in the experiment of \citet{Meunier2012})? 
\item As a result of the scaling proposed by \citet{Billant2001}, three-dimensional zigzag instability has been previously reported in strongly stratified flows \citep[see][]{Billant2000a,Billant2000b,Billant2000c}. Why was no structure whose vertical length scale is proportional to the Froude number observed in the primary instability in the experiment by \citet{Meunier2012}? Is it possible to provide any theoretical justification that the low-Froude-number mode is inherently two dimensional in this particular case? In fact, this point may also be intricately linked to some `Squire-Yih-like' theorem for a flow configuration such as that of \citet{Meunier2012}. \item While the experimental data of \citet{Meunier2012} suggest that the critical Reynolds number of the low-Froude-number instability mode is roughly independent of the tilting angle, the stability analysis of \citet{Candelier2011} showed that the growth rate of the instability is strongly dependent on the tilting angle. Here, we note that the analysis by \citet{Candelier2011} was carried out by prescribing a constant base flow while the tilting angle varies. However, the question of how the base flow is changed with respect to the tilting angle remains unanswered. This issue might be critical to address the difference between the experimental result of \citet{Meunier2012} and the theoretical one of \citet{Candelier2011}. \end{enumerate} \subsection{\added{Contribution of the present study}} \added{The objective of the present study is to gain a better understanding of the low-Froude-number instability in tilted shear flows under strong stratification by addressing the questions above. To this end, we perform a linear stability analysis of viscous parallel wake flow under strong stratification, especially focusing on the scaling and emergence of the low-Froude-number instability. 
Particular emphasis of the present study is given to address the following points:} \begin{enumerate} \item \added{Derivation of the equation explicitly describing the horizontal shear-flow instability in the limit of low Froude number and low buoyancy Reynolds number;} \item \added{A Squire-like theorem for horizontal instability (i.e. barotropic instability) in a weakly tilted shear flow at low buoyancy Reynolds numbers;} \item \added{Spatio-temporal stability analysis of a tilted two-dimensional wake for qualitative comparison with the experimental data;} \item \added{The stabilisation mechanisms of horizontal shear flow instability with an increase of the Froude number;} \item \added{Identification of physical factors contributing to base-flow change in the experiment of \cite{Meunier2012} and the subsequent modelling of their roles. } \end{enumerate} \added{It is important to mention that these points clearly distinguish the present study from previous investigations, such as that of \cite{Candelier2011}, who studied tilted shear flow instability in the inviscid limit. Indeed, one of the key contributions of the present study is the theoretical elucidation of the non-trivial role of viscosity in the primary instability of tilted shear flows (see \S\ref{sec:scaling}), providing a more complete theoretical description of the recent experimental observation by \cite{Meunier2012}.} The paper is organised as follows. In \S \ref{sec:formulation}, the equations of motion are introduced and the linear stability analysis is formulated with its numerical method. In \S \ref{sec:scaling}, we simplify the set of linearised equations into an Orr-Sommerfeld type of equation in the limit of low Froude number. Based on the equation derived, we will provide a theoretical justification as to why the low-Froude-number instability would remain two dimensional while taking into account the length scale argument of \citet{Billant2001} and \citet{Brethouwer2007}. 
In \S \ref{sec:Result}, an absolute and convective instability analysis will be performed and its result will be compared with that of \citet{Meunier2012}. A further discussion will then follow to address the issue of how the low-Froude-number mode is destabilised with decreasing Froude number. Finally, we will discuss what the expected nature of the base flow would be to explain the discrepancy between the experimental observation of \citet{Meunier2012} and the stability analysis of \citet{Candelier2011}. A summary and concluding remarks of this paper will be given in \S\ref{sec:conclusion}. \section{Problem formulation \label{sec:formulation}} \subsection{Equations of motion} Given the flow configuration where the base flow is tilted against the direction of gravity, it is instructive to start by introducing the coordinate systems used in the present study. We will adopt the same Cartesian coordinate systems as those in \citet{Meunier2012}, as illustrated in figure \ref{fig:fig1}. Here, $(x^*,y^*,z^*)$ are the coordinates aligned with a two-dimensional bluff body (i.e. cylinder), with $x^*$ being the streamwise, $y^*$ the transverse, and $z^*$ the spanwise direction, respectively. (Note that, throughout the present study, the superscript $^{*}$ indicates dimensional quantities, while those without it are non-dimensionalised ones.) The $(x^*,y^*,z^*)$ coordinate system is set to be tilted against the laboratory one defined by the $(X^*,Y^*,Z^*)$ coordinates with an angle $\theta$. We shall assume that the base-flow (i.e. wake) profile remains unchanged in the $(x^*,y^*,z^*)$ coordinates, although this issue will be discussed later in \S \ref{subsec:corrections}. The relation between the two coordinate systems is then written as \begin{equation} (X^*,Y^*,Z^*)=(x^*,y^*\cos\theta+z^*\sin\theta,-y^*\sin\theta+z^*\cos\theta). \end{equation} In the $(x^*,y^*,z^*)$ coordinates, velocity is denoted by $\boldsymbol{u}^*=(u^*,v^*,w^*)$ and pressure by $p^*$. 
Similarly, in the $(X^*,Y^*,Z^*)$ coordinates, velocity is denoted by $\boldsymbol{U}^*=(U^*,V^*,W^*)$. Finally, gravity acts in the negative $Z^*$ direction, such that the density variation is imposed along the same direction. Under the Boussinesq approximation, the dimensionless equations of motion, defined in the $(x,y,z)$ coordinate system, are given as follows: \begin{subequations} \begin{equation} \frac{\partial\boldsymbol{u}}{\partial t}+({\boldsymbol{u}}\bcdot \bnabla) \boldsymbol{u}=-\bnabla p+\frac{1}{\Rey} \nabla^2 \boldsymbol{u}+b\hat{\boldsymbol{g}},\label{eq:non_dim_u} \end{equation} \begin{equation} \frac{\partial b}{\partial t}+(\boldsymbol{u}\bcdot\boldsymbol{\nabla})b=\frac{1}{\Rey \Sc} \nabla^2 b,\label{eq:non_dim_rho} \end{equation} \end{subequations} in which $\Rey={U}_{ref}^{*}D^{*}/\nu^{*}$ is the Reynolds number, $\Sc=\nu^{*}/\kappa^{*}$ the Schmidt number, $p$ the pressure, and $\hat{\boldsymbol{g}}$ the unit vector representing the direction of gravity. Here, $\nu^{*}$ is the kinematic viscosity, $\kappa^{*}$ the diffusivity, and $g^{*}$ the gravitational acceleration, while the reference length scale $D^*$ and the velocity scale ${U}_{ref}^*$ will be defined later with the introduction of the base-flow profile. For the density fluctuation, we consider the dimensionless buoyancy $b$, defined as $(g^{*}D^{*}/{U}_{ref}^{*2})(\rho/\rho_0)$, where $\rho$ is the non-dimensional density and $\rho_0=\rho|_{y=0}$ is the dimensionless density at the origin. Here, $\rho|_{y=0}$ is set equal to unity because $\rho^*|_{y=0}$ is used as the reference density for non-dimensionalisation. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Figure1.pdf} \caption{Sketch of the flow configuration and the coordinate systems. \label{fig:fig1}} \end{figure} A two-dimensional parallel wake is considered as an example of tilted shear flow. 
The wake flow is chosen to make a comparison with the experimental result in \citet{Meunier2012}, although we shall see that most of the arguments made in this paper are not restricted to the wake flow. The base-flow profile of the parallel wake in the present study is identical to the one in \citet{Monkewitz1988}: \begin{subequations}\label{eq:U_profile} \begin{equation} u_0(y;R,a)=1-R+2RF(y), \end{equation} with \begin{equation} F(y)=\{1+\sinh^{2a}(y\,\mathrm{arcsinh}1)\}^{-1}, \end{equation} \end{subequations} in which $u_0(y)$ is the streamwise velocity of the base flow, $a$ the shape parameter (or the stiffness parameter), $R$ the velocity ratio, defined by $R=(u_{c}-u_{\infty})/(u_{c}+u_{\infty})$, where $u_{c}$ and $u_{\infty}$ are the centreline and the freestream velocities, respectively. If $a\rightarrow\infty$, the profile becomes a top hat, whereas if $a$ is small, the profile becomes very smooth. Also, for $-1<R<0$, the profile depicts a wake with no counter flow, while, for $R<-1$, it becomes a wake with counter flow at the centre. From the base-flow profile in (\ref{eq:U_profile}), the reference velocity is defined as ${U}_{ref}^{*}=(u_{c}^{*}+u_{\infty}^{*})/2$. The reference length scale $D^*$ is defined to be $u_0^{*}(y^*=D^{*})={U}_{ref}^{*}$ ($u_0(y=1)=1$ equivalently). Finally, for the buoyancy, $b$, a stable linear stratification is considered as the basic state: i.e. $b_0=(g^{*}D^{*}/{U}_{ref}^{*2})(1+(d\rho/dZ)Z)$ where $d\rho/dZ$ is constant. Now, we consider a small perturbation around the basic state: \begin{equation} \qquad\boldsymbol{u}=\boldsymbol{u}_0(y)+\boldsymbol{u}',\qquad b=b_0+b',\label{eq:Perturb} \end{equation} where $'$ represents the perturbation variables, and $\boldsymbol{u_0}(y)=(u_0(y),0,0)$. 
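For reference, the profile (\ref{eq:U_profile}) is straightforward to evaluate numerically. The short Python sketch below is purely illustrative (the parameter values $a=1.34$ and $R=-1.105$ are those used later in the present study); it checks the normalisation $u_0(1)=1$, the centreline value $u_0(0)=1+R$ and the freestream value $u_0(\infty)=1-R$.

```python
import numpy as np

def wake_profile(y, R=-1.105, a=1.34):
    """Monkewitz (1988) parallel-wake profile u0(y; R, a), non-dimensional.

    The absolute value keeps the profile even in y for non-integer
    exponent 2a (an implementation choice consistent with the symmetry
    of the wake).
    """
    F = 1.0 / (1.0 + np.abs(np.sinh(y * np.arcsinh(1.0))) ** (2 * a))
    return 1.0 - R + 2.0 * R * F

# Normalisation u0(y=1) = 1 follows from sinh(arcsinh 1) = 1, so F(1) = 1/2;
# centreline u0(0) = 1 + R (counter-flow for R < -1), freestream u0 -> 1 - R.
```

With $R=-1.105$ the centreline velocity is $1+R=-0.105$ (a wake with counter flow), while the freestream value is $1-R=2.105$, consistent with the definition of the velocity ratio above.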
Then, the linearised equations of motion are given as \begin{subequations}\label{eq:lineareq} \begin{equation} \frac{\partial\boldsymbol{u}'}{\partial t} +(\boldsymbol{u}_0\bcdot\bnabla)\boldsymbol{u}' +(\boldsymbol{u}'\bcdot\bnabla)\boldsymbol{u}_0 =-\bnabla p'+ \frac{1}{\Rey} \nabla^2 \boldsymbol{u}' +b'\hat{\boldsymbol{g}},\label{eq:non_dim_u_Bos} \end{equation} \begin{equation} \frac{\partial b'}{\partial t} +(\boldsymbol{u}_0 \bcdot\bnabla)b' -(\boldsymbol{u}'\bcdot\bnabla)(Z/\Frou^{2}) =\frac{1}{\Rey \Sc}\nabla^2 b',\label{eq:non_dim_b} \end{equation} \end{subequations} where $\Frou=U_{ref}^*/(N^*D^*)$ with $N^*$ being the constant Brunt-V\"{a}is\"{a}l\"{a} frequency. Equations (\ref{eq:lineareq}) then admit the following normal-mode solution: \begin{equation} \left[\begin{array}{c} u'\\ v'\\ w'\\ b'\\ p' \end{array}\right]=\left[\begin{array}{c} \tilde{u}(y)\\ \tilde{v}(y)\\ \tilde{w}(y)\\ \tilde{b}(y)\\ \tilde{p}(y) \end{array}\right]\exp\{i(\alpha x+\beta z-\omega t)\},\label{eq:pert_Fourier} \end{equation} where $\alpha$ and $\beta$ are given real wavenumbers in the $x$ and $z$ directions, and $\omega$ is the complex frequency. 
We can then write (\ref{eq:lineareq}) and the continuity equation as: \begin{subequations}\label{eq:normalmode} \begin{equation} i\omega \tilde{u}=\mathcal{L}\tilde{u}+DU\tilde{v}+i\alpha\tilde{p},\label{eq:lin_u} \end{equation} \begin{equation} i\omega \tilde{v}=\mathcal{L}\tilde{v}+D\tilde{p}-\tilde{b}\sin\theta,\label{eq:lin_v} \end{equation} \begin{equation} i\omega \tilde{w}=\mathcal{L}\tilde{w}+i\beta\tilde{p}+\tilde{b}\cos\theta,\label{eq:lin_w} \end{equation} \begin{equation} i\omega \tilde{b}=\mathcal{L}_{\rho}\tilde{b}+\frac{\sin\theta}{\Frou^{2}}\tilde{v}-\frac{\cos\theta}{\Frou^{2}}\tilde{w},\label{eq:lin_rho} \end{equation} \begin{equation} i\alpha\tilde{u}+D\tilde{v}+i\beta\tilde{w}=0,\label{eq:lin_incompress} \end{equation} \end{subequations} where $D={d}/{dy}$, $k^{2}=\alpha^{2}+\beta^{2}$, $\mathcal{L}=iu_0\alpha-(D^2-k^2)/\Rey$ and $\mathcal{L}_{\rho}=iu_0\alpha-(D^2-k^2)/\Rey \Sc$. \subsection{Numerical Method} The equations (\ref{eq:normalmode}) are solved as an eigenvalue problem, in which $\omega$ becomes the eigenvalue and $(\tilde{u},\tilde{v},\tilde{w},\tilde{b},\tilde{p})^{T}$ is the corresponding eigenfunction. We discretise (\ref{eq:normalmode}) using a Chebyshev collocation method \citep{Weideman2000}, and solve the resulting numerical eigenvalue problem with the \texttt{eig} function in the \textsc{MATLAB} library. 
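The collocation approach can be sketched in a few lines (Python here, purely for illustration, rather than the \textsc{MATLAB} suite of \citet{Weideman2000} actually used; the differentiation matrix follows the standard Chebyshev construction). Applied to the model problem $D^2\tilde{u}=\lambda\tilde{u}$ with homogeneous Dirichlet conditions, whose exact eigenvalues are $\lambda_k=-(k\pi/2)^2$, the discretised eigenvalue problem recovers the leading eigenvalues accurately:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and grid on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))       # negative-sum trick for the diagonal
    return D, x

# Model eigenvalue problem u'' = lambda * u with u(+-1) = 0:
# exact spectrum lambda_k = -(k * pi / 2)**2, k = 1, 2, ...
D, x = cheb(48)
D2 = (D @ D)[1:-1, 1:-1]              # Dirichlet BCs imposed by row/column deletion
lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
```

In the actual solver, the same differentiation matrices (mapped onto $y\in[-60.6,60.6]$) are assembled into the linear operator of (\ref{eq:normalmode}) and passed to \texttt{eig}.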
All of the following results are computed \replaced{with up to 300}{with 100} mesh points with the wall-normal domain size of $y\in[-60.6,60.6]$\added{ -- such a large number of grid points was needed for the three-dimensional instability mode emerging at low $\Frou$ with non-zero tilting angle.} \added{Zero velocity perturbations and zero buoyancy fluctuation flux are imposed at the boundaries.} \deleted{We have checked the result using 300 mesh points, and it does not yield any discernible difference.} The numerical solver is also validated by comparing with the results in \citet{Hazel1972} by setting $\theta=0$ and with those in \citet{Candelier2011} by setting $\Rey\rightarrow \infty$. In the present study, mainly $\Rey$, $\Frou$ and the tilting angle $\theta$ are varied, while keeping $\Sc$ fixed at a value of 700. However, keeping $\Sc$ fixed is not a significant limitation\deleted{, as was previously discussed by \mbox{\citet{Meunier2012}}}. \added{Indeed, changing $\Sc$ was found not to yield any discernible change in the behaviour of the flow as long as it is sufficiently large.} The present analysis is performed for the wake velocity profile at $a=1.34$ and $R=-1.105$ \citep{Monkewitz1988} defined in the $(x,y,z)$ coordinates, and the effect of the wake velocity profile is discussed in \S \ref{subsec:AU} and \S \ref{subsec:corrections}. \section{Scaling analysis \label{sec:scaling}} \subsection{A low Froude number approximation \label{subsec:Asymptotic-argument}} Before proceeding to the numerical result of the stability analysis, we first examine the equations of motion in the low Froude number limit to explore any possible instability process. Equation (\ref{eq:lin_rho}), which contains $\Frou$, can be written as \begin{equation} v'\sin\theta=w'\cos\theta -\Frou^{2}\left[\frac{\partial}{\partial t}+u_0 \frac{\partial}{\partial x}-\frac{\nabla^2}{\Rey \Sc}\right]b'.
\label{eq:v_w_same_dev} \end{equation} If we take $\Frou \rightarrow 0$ and $\Frou^2/\Rey \Sc \rightarrow 0$ with the assumption of finite $\Rey$, equation (\ref{eq:v_w_same_dev}) yields \begin{equation} w'\cos\theta-v'\sin\theta=0. \label{eq:v_w_same} \end{equation} Now, it is not difficult to realise that the \replaced{left}{right}-hand side of (\ref{eq:v_w_same}) is simply the vertical velocity fluctuation $W'$ in the $(X,Y,Z)$ coordinates, indicating $W'=0$. Numerical results at $\Frou=0.01$ also confirm this observation, as shown in figure \ref{fig:fig2}. The result of (\ref{eq:v_w_same}) indicates the suppression of vertical velocity by the strong buoyancy force, such that the perturbation velocity field lies only on the horizontal plane. This suppression of the vertical velocity has also been well discussed in previous studies \citep[see][]{Candelier2011,Meunier2012}. Also, (\ref{eq:v_w_same_dev}) implies that $W'$ scales as $\Frou^2$ for small non-zero $\Frou$. For now, we shall proceed with our discussion while keeping $W'=0$ in our approximation, and its validity will be discussed in \S \ref{subsec:vert_length_scale} in relation to the previously proposed asymptotic scaling \citep{Billant2001,Brethouwer2007}. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{Figure2.pdf} \caption{Confirmation of the relation (\ref{eq:v_w_same}) at $\theta=30^{\circ}$ and $60^{\circ}$ using numerically calculated eigenmodes ($\Frou=0.01$ and $\Rey\rightarrow \infty$). \label{fig:fig2}} \end{figure} With (\ref{eq:v_w_same}), the $y$ and $z$ components of (\ref{eq:non_dim_u_Bos}) can be simplified into \begin{equation}\label{eq:non_dim_pb} \sin\theta \frac{\partial p'}{\partial y}- \cos\theta \frac{\partial p'}{\partial z}=b', \end{equation} implying that $w'$ and $b'$ in (\ref{eq:non_dim_u_Bos}) are explicitly given in terms of $v'$ and $p'$ with (\ref{eq:v_w_same}) and (\ref{eq:non_dim_pb}).
Then, the momentum equation of (\ref{eq:lineareq}) can be written as the following single equation for $v'$: \begin{subequations}\label{eq:v_OStot} \begin{equation} \left[(\frac{\partial}{\partial t}+u_0\frac{\partial}{\partial x}-\frac{\nabla^2}{\Rey})\nabla^2_\theta-\cos^2 \theta \frac{\partial^{2}u_0}{\partial y^{2}}\frac{\partial}{\partial x}\right]v'=0,\label{eq:v_OS} \end{equation} where \begin{equation} \nabla^2_\theta=\pardxx{} + \cos^2 \theta \pardyy{} + 2 \cos \theta \sin \theta \pardyz{} + \sin^2 \theta \pardzz{}. \end{equation} \end{subequations} Here, it can be realised that $\nabla^2_\theta=\partial^2 / \partial X^2 + \partial^2 / \partial Y^2$ in the $(X,Y,Z)$ coordinates, indicating that (\ref{eq:v_OS}) can be written as \begin{equation} \left[\left( \frac{\partial}{\partial t}+U_0\frac{\partial}{\partial X}-\frac{1}{\Rey}(\frac{\partial^{2}}{\partial X^{2}}+\frac{\partial^{2}}{\partial Y^{2}}+\frac{\partial^{2}}{\partial Z^{2}}) \right) (\frac{\partial^{2}}{\partial X^{2}}+\frac{\partial^{2}}{\partial Y^{2}})-\frac{\partial^{2}U_0}{\partial Y^{2}}\frac{\partial}{\partial X}\right]V'=0.\label{eq:v_lowFr_No_Modal} \end{equation} From (\ref{eq:v_lowFr_No_Modal}), we can make several important observations on the nature of instabilities arising in (\ref{eq:lineareq}) when $\Frou \rightarrow 0$: \begin{enumerate} \item Equation (\ref{eq:v_lowFr_No_Modal}) no longer depends on $\Frou$ or $b'$, implying that the density stratification cannot affect any instability arising from (\ref{eq:v_lowFr_No_Modal}). Furthermore, (\ref{eq:v_lowFr_No_Modal}) only contains the horizontal shear $\partial U_0/\partial Y$, suggesting the barotropic nature of the possible instability at $\Frou \rightarrow 0$ (i.e. instability in the horizontal plane).
\item The form of (\ref{eq:v_lowFr_No_Modal}) is very similar to the physical-space Orr-Sommerfeld equation in the $X$-$Y$ plane: in fact, it is identical to the Orr-Sommerfeld equation, except for the term with $\partial^{2}/\partial Z^{2}$ in (\ref{eq:v_lowFr_No_Modal}). Such a similarity strongly suggests that the given horizontal shear would admit an inflectional instability if $\partial^{2}U_0 / \partial Y^{2}$ changes sign at some $Y$ (i.e. if the horizontal base-flow profile contains an inflection point). \item In (\ref{eq:v_lowFr_No_Modal}), $\partial^{2}/\partial Z^{2}$ emerges in the viscous term, implying that any vertical variation in the velocity perturbation might be stabilising via viscous dissipation. Indeed, we shall see the purely stabilising effect of the $\partial^{2}/\partial Z^{2}$ term in \S \ref{subsec:Squire}. This observation also has an important implication for the vertical length scale, as we shall see in \S \ref{subsec:vert_length_scale}. \item Finally, in (\ref{eq:v_lowFr_No_Modal}), there is no explicit dependence on the tilting angle $\theta$ because all the $\theta$-dependent terms disappear by introducing $(X,Y,Z)$ coordinates. This indicates that the stability of (\ref{eq:lineareq}) at $\Frou \rightarrow 0$ would mainly be affected by the horizontal projection of the given base-flow shear (i.e. $\partial^{2}U_0 / \partial Y^{2}$ in (\ref{eq:v_lowFr_No_Modal})). From this observation, we shall discuss the implication for how the tilting angle $\theta$ affects the stability at $\Frou \rightarrow 0$ in \S \ref{subsec:corrections}. \end{enumerate} \subsection{\added{Primary instability of tilted shear flows at low $\Frou$} \label{subsec:2D}} \added{As suggested by \citet{Deloncle2007} and \citet{Candelier2011}, there is no Squire's theorem stating that the most unstable mode is two-dimensional in horizontal and tilted shear flows.
However, the numerical results of \citet{Deloncle2007} for inviscid horizontal shear flows and \citet{Candelier2011} for inviscid tilted Bickley jets showed that the most unstable mode arises when $\beta=0$ (i.e. when the mode is two-dimensional). The same is numerically true in the present viscous case, as we shall demonstrate both asymptotically and numerically.} \subsubsection{Squire's theorem for weakly tilted shear flows \label{subsec:Squire}} \deleted{As suggested by Deloncle et. al. (2007) and Candelier et al. (2011), there is no Squire's theorem stating that the most unstable mode is two-dimensional in horizontal and tilted shear flows. However, the numerical results of Deloncle et. al. (2007) for inviscid horizontal shear flows and Candelier et al. (2011) for inviscid tilted Bickley jets showed that the most unstable mode arises when $\beta=0$ (i.e. when the mode is two-dimensional). The same is numerically true in the present viscous case, as demonstrated in figure 3.} In this section, instead of relying on such numerical calculations, we attempt to mathematically demonstrate that the two-dimensional instability mode is always the most unstable one for small $\theta$ as long as the flow belongs to the regime where the low Froude number approximation in \S\ref{subsec:Asymptotic-argument} is valid. The main technical difficulty here is that the spanwise direction in the $(x,y,z)$ coordinates is not orthogonal to the direction of gravity. However, the low Froude number approximation of the equation of motion (\ref{eq:v_lowFr_No_Modal}) relieves this difficulty. Let us consider a spanwise uniform mode (i.e. $\beta=0$) in the $(x,y,z)$ coordinates\deleted{, as sketched in figure (deleted)}. This mode may then be interpreted as a tilted stack of horizontal modes in the $(X,Y,Z)$ coordinates\deleted{, as shown in figure (deleted)}. 
If the given mode is uniform in the $z$-direction, each of these horizontal modes should have exactly the same spatial shape, but with a $Y$-direction shift depending on the vertical location $Z$. In other words, the perturbation velocity homogeneous in the $z$-direction should satisfy the following relation: \begin{subequations}\label{eq:sym} \begin{equation}\label{eq:sym1} V'(X,Y,Z)=V'(X,Y-Z\tan\theta,0). \end{equation} The same is true for the base flow $U_0(Y,Z)(=u_0(y))$ as it is uniform in the $z$-direction: i.e. \begin{equation}\label{eq:sym2} U_0(Y,Z)=U_0(Y-Z\tan\theta,0). \end{equation} \end{subequations} Now, we assume that the base flow is weakly tilted (i.e. $\theta \ll 1$), so that we can introduce $\epsilon_\theta=\tan \theta$ and $Z_0=\epsilon_\theta Z$. Using the WKBJ approximation, we can write $V'(X,Y,Z)$ as \begin{equation} V'(X,Y,Z)=\tilde{V}(Y;Z_0) \exp \left[ \frac{i}{\epsilon_\theta} \int^{Z_0} k_Z(Z_0) d Z_0 -i\omega t + i k_X X \right], \end{equation} where $\omega$ is the eigenfrequency of the linear instability mode arising in the $(Y,Z)$ domain, $k_Z(Z_0)$ is the local wavenumber in the $Z$-direction and $k_X$ the streamwise wavenumber. At the leading order, (\ref{eq:v_lowFr_No_Modal}) becomes \begin{equation} \left[(-i\omega+ik_X U_0-\frac{D_Y^2-k_X^2-k_Z^2(Z_0)}{\Rey})(D_Y^2-k_X^2)-ik_X D_Y^2 U_0\right]\tilde{V}(Y;Z_0)=0,\label{eq:V_OS_wavenumber_labframe} \end{equation} where $D_Y=\partial/\partial Y$. Here, we note that, if $k_Z(Z_0)=0$, the stability property of (\ref{eq:V_OS_wavenumber_labframe}) does not change with $Z_0$ due to the nature of the base flow in (\ref{eq:sym2}). This then leads the eigenvalue $\omega$ in (\ref{eq:V_OS_wavenumber_labframe}) to satisfy the following relation: \begin{equation}\label{eq:omega} \omega=\omega_{2D}-\frac{ik_Z^2(Z_0)}{\Rey}, \end{equation} where $\omega_{2D}$ is the eigenfrequency obtained from (\ref{eq:V_OS_wavenumber_labframe}) by setting $k_Z(Z_0)=0$.
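The self-similar structure of (\ref{eq:omega}) -- the growth rate depends on $k_Z$ and $\Rey$ only through the combination $k_Z/\sqrt{\Rey}$ -- can be illustrated with a few lines of Python (an assumed model value of $\omega_{2D}$ is used here purely for illustration, not a solution of (\ref{eq:V_OS_wavenumber_labframe})):

```python
import numpy as np

# Assumed model value of the two-dimensional eigenfrequency (illustration only)
omega_2d = 0.8 + 0.2j

def omega(kZ, Re):
    """Viscous correction of the WKBJ relation: omega = omega_2D - i kZ^2 / Re."""
    return omega_2d - 1j * kZ ** 2 / Re

# At a fixed similarity variable s = kZ / sqrt(Re), the growth rate
# Im(omega) = Im(omega_2D) - s^2 is independent of Re: the curves collapse.
s = 0.7
rates = [omega(s * np.sqrt(Re), Re).imag for Re in (200, 400, 800, 1600)]
```

The same collapse is what figure \ref{fig:fig3}($b$) exhibits for the numerically computed wake modes in the viscous regime.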
Now, it is evident that having non-zero $k_Z(Z_0)$ would only decrease the value of $\omega_i$. Therefore, it is not difficult to find that the most unstable mode globally arising in the $(Y,Z)$ domain is given when $k_Z(Z_0)=0$ for every $Z_0$. Since the linear operator in (\ref{eq:V_OS_wavenumber_labframe}) is invariant under the transformation of $(Y;Z_0) \rightarrow (Y-Z_0\tan\theta,0)$ for $k_Z(Z_0)=0$, the eigenmode obtained from (\ref{eq:V_OS_wavenumber_labframe}) would satisfy (\ref{eq:sym1}) for given $k_X$, indicating that the two-dimensional mode in the $(x,y,z)$ coordinates is the most unstable in a slightly tilted horizontal shear flow. \deleted{We note that the argument here is valid only for small $\theta$ (i.e. slightly tilted case). For a strongly tilted case where the cylinder is close to horizontal, it may well break down, as the slowly varying assumption of the base flow in the $Z$-direction would not be valid any more. Indeed, Bosco \& Meunier (2014) has documented the appearance of another mode (Mode L) that is three-dimensional at high $\theta$. } \subsubsection{\added{Numerical analysis for highly tilted shear flows}} \floatsetup[figure]{style=plain,subcapbesideposition=top} \begin{figure} \centering{} \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure4a.pdf} } \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure4b.pdf} }\\ \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure4d.pdf} } \caption{Contour of the temporal growth rate ($\omega_{i,\max}$) of the most unstable mode in the real $\alpha-\beta$ plane \replaced{at $\Frou=0.01$, with $(a)$ $\Rey=7.8, \theta=30^\circ$, $(b)$ $\Rey=7.8, \theta=60^\circ$, $(c)$ $\Rey=50, \theta=85^\circ$. Here, the wavenumber of the most unstable mode is indicated with the cross symbol on each contour, and it is always given for $\beta=0$.}{($\Frou=0.01$, $\Rey=50$, and $\theta=60^{\circ}$). 
}\label{fig:fig4}} \end{figure} \added{We note that the theoretical argument in \S \ref{subsec:Squire} is valid only for small $\theta$ (i.e. the slightly tilted case). For a strongly tilted case where the cylinder is close to horizontal, the slowly varying assumption of the base flow in the $Z$-direction is no longer guaranteed. However, the numerical result, as shown in figure \ref{fig:fig4}, reveals that the two-dimensional mode indeed remains the most unstable even at a tilting angle as high as $\theta=85^\circ$. Therefore, the numerical result extends the theoretical argument for weakly tilted flow made in the previous section to the strongly tilted one. This observation is also consistent with \cite{Meunier2012}, who experimentally showed that the horizontal vortex shedding emerges as the primary instability for all tilting angles.} \subsubsection{\added{Numerical analysis for higher Froude number}} \floatsetup[figure]{style=plain,subcapbesideposition=top} \begin{figure} \centering{} \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure4exta.pdf} } \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure4extb.pdf} }\\ \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure4extc.pdf} } \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure4extd.pdf} } \caption{\added{Contour of the temporal growth rate ($\omega_{i,\max}$) of the most unstable mode in the real $\alpha-\beta$ plane at $(a,b)$ $\Rey=7.8$ and $(c,d)$ $\Rey=50$, with $(a,c)$ $\Frou=0.5$ and $(b,d)$ $\Frou=1$. Here, $\theta$ is $(a,b)$ $30^\circ$ and $(c,d)$ $60^\circ$. The wavenumber of the most unstable mode is indicated with the cross symbol on each contour, and it is always given for $\beta=0$.}\label{fig:fig4ext}} \end{figure} \added{We have also extended the numerical result to higher $\Frou$ for completeness, as shown in figure \ref{fig:fig4ext}. 
Although there seems to be another emerging three-dimensional mode at low $\alpha$ when $\theta=60^\circ$ and $\Rey=50$ (see the kinks in figures \ref{fig:fig4ext}$c$, $d$, which indicate a branch-switching behaviour), the two-dimensional inflectional instability mode still remains the most unstable, consistent with the observation of \citet{Meunier2012}. However, in this regime of parameters (i.e. relatively high buoyancy Reynolds number), it is important to note that the Squire-like theorem, which we demonstrated previously, does not precisely apply, as will be discussed in \S\ref{subsec:vert_length_scale}. Indeed, the recent work by \citet{Facchini2018a} has shown that a three-dimensional instability can arise in horizontal plane Couette flow where the inflectional instability mechanism is ruled out by its base flow. While we have not observed such a three-dimensional instability as the most unstable primary instability in the present study, we do not rule out such a possibility in other flow configurations where the buoyancy Reynolds number is not small. } \subsection{\added{Asymptotic regimes, vertical length scales and primary instabilities}\label{subsec:vert_length_scale}} The highly horizontal nature (i.e. very small vertical velocity) of the possible instability mode at $\Frou \rightarrow 0$ in the present study might remind one of the features of pancake vortical structures in a typical geophysical flow setting. However, given our discussion in \S\ref{subsec:Squire}, the primary instability in the present study would not vary vertically, unlike the pancake vortical structures. In fact, the key difference between the regime of the present analysis and that of geophysical flow originates from the strength of viscosity. In the present study, the vertical derivative in (\ref{eq:v_lowFr_No_Modal}) appears in the viscous dissipation term, implying that motions in the vertical direction would be correlated through viscous diffusion.
Therefore, the appropriate vertical length scale in this case should be determined by viscosity. By contrast, in the geophysical flow regime where pancake vortical structures typically emerge, the vertical length scale would be determined by the strength of the stratification, as was proposed by \citet{Billant2001}. The effect of viscosity in a strongly stratified medium has previously been discussed by \citet{Brethouwer2007}. They suggested the so-called buoyancy Reynolds number $\mathscr{R}= \Rey\Frou^2$ as the key parameter that determines the vertical length scale at low $\Frou$. If $\mathscr{R} \gg 1$, the viscous force is unimportant and the relevant vertical length scale becomes proportional to $\Frou$ \citep{Billant2001}. On the other hand, if $\mathscr{R} \ll 1$ as in the present case, the vertical length scale would be proportional to $\Rey^{-1/2}$, the regime more relevant to typical laboratory experiments. \floatsetup[figure]{style=plain,subcapbesideposition=top} \begin{figure} \centering{} \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure3a.pdf}\label{fig:fig3a} } \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure3b.pdf}\label{fig:fig3b} } \caption{Self-similarity in the dispersion relation for purely horizontal flow ($\theta=0^\circ$): ($a$) contour of temporal growth rate $\omega_i$ in the $\alpha-\beta/\Frou$ plane for $\Frou=0.05,0.1,0.2$ ($\Rey \rightarrow \infty$); ($b$) temporal growth rate as a function of $\beta/\sqrt{\Rey}$ for $\Rey=200,400,800,1600$ ($\alpha=0.8$ and $\Frou=0.001$).\label{fig:fig3}} \end{figure} The argument on the relevant vertical length scale can also be demonstrated numerically in the present wake flow. Figure \ref{fig:fig3} presents a temporal stability analysis for a set of low Froude numbers over a wide range of Reynolds numbers when $\theta=0^\circ$.
If the effect of viscosity is ignored to reach the regime of $\mathscr{R} \gg 1$, the growth rate shows self-similarity with respect to $\beta \Frou$ (figure \ref{fig:fig3a}) as was previously shown by \citet{Deloncle2007}. However, if viscosity is included so that the flow lies in the regime of $\mathscr{R} \ll 1$, the temporal growth rate exhibits self-similarity with respect to $\beta/\sqrt{\Rey}$ (figure \ref{fig:fig3b}), also consistent with (\ref{eq:omega}) (note that $\beta=k_Z(Z_0)$ for $\theta=0^\circ$). \added{In practice, $\mathscr{R} > O(10)$ is usually necessary to observe the layered pancake structures \citep{Lucas2017}}. The numerical observation here and the asymptotic analysis in \S\ref{subsec:Squire} now suggest that if the primary instability in a strongly stratified horizontal shear flow emerges at sufficiently low Reynolds number, it is quite likely to be two-dimensional. Here, it is important to remember that the typical inflectional instability arises at $\Rey\sim O(10)$ \citep{Ho1984}. Given the buoyancy Reynolds number argument, this implies that the primary instability, which originates from a horizontally inflectional base flow, should be two dimensional as long as $\Frou$ \added{is below $O(1)$}. This also explains why the low-Froude-number instability mode observed in the experiment of \citet{Meunier2012} was two dimensional -- their $\Frou\sim O(10^{-1}\added{-1})$ and $\Rey \sim O(10)$, thus $\mathscr{R}$ \replaced{is below $O(10)$}{becomes well below unity}. In this respect, it is worth mentioning the recent work by \citet{Facchini2018a}\added{,} where a new type of three-dimensional linear instability was reported in horizontal Couette flow. However, in this case, $\Frou \sim O(10^{-1}-1)$ and $\Rey \added{> O(10^3)}$.
Therefore, the buoyancy Reynolds number is $\mathscr{R} \sim O(\added{10^2-10^4})$, which does not belong to the asymptotic regime discussed in \replaced{this section}{ \S\ref{subsec:Asymptotic-argument} and \S3.2}. \deleted{Lastly, the above self-similarity in the vertical length scale can only be shown in the special case when $\theta=0^\circ$, where the base flow is not imposing another length scale to the flow. When the cylinder is tilted (i.e. $\theta \neq 0^\circ$), the geometry of the base flow would externally introduce another vertical length scale \mbox{\citep{Billant2001}} and this argument would not necessarily be valid.} \begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{lcccc} References & Shear & $\mathscr{R}(= \Rey\Frou^2)$ & Primary instability & Approaches \\[5pt] \citet{Billant2000a,Billant2000c} & H & $O(10)$ & 3D & EXP/LT \\ \citet{Deloncle2007} & H & $\infty$ & 2D/3D & LT \\ \citet{Lucas2017} & H & $O(1-10^2) $ & 2D/3D & LT/NT \\ \citet{Facchini2018a} & H & $O(10^2-10^4) $ & 3D & EXP/NS/LT \\ \citet{Candelier2011} & H/T & $\infty$ & 2D* & LT \\ \citet{Meunier2012} & H/T & $O(10^{-1}-10) $ & 2D & EXP/NS \\ Present study & H/T & $O(10^{-4} - 10) $ & 2D & LT with ST \end{tabular} \caption{\added{A summary of the two- or three-dimensional nature of primary instabilities observed in strongly stratified shear flows. Here, the acronyms indicate: H, horizontal; H/T, horizontal and tilted; 2D, two-dimensional; 3D, three-dimensional; EXP, experiment; NS, numerical simulation; LT, linear theory; NT, nonlinear theory; ST, Squire-like theorem. For the instabilities marked as `2D/3D', the most unstable mode remains two dimensional, although the three-dimensional modes have growth rates close to that of the two-dimensional one due to the $\beta \Frou$ scaling. 
For the instabilities marked as `2D*', the nature of three-dimensional modes was not fully explored, despite the potential importance of this mode due to high buoyancy Reynolds number.}} \label{tab1} \end{center} \end{table} \added{In table \ref{tab1}, the variety of instabilities arising in strongly stratified shear flows is summarised. Here, we note that the work by \citet{Billant2000a,Billant2000c,Billant2000b,Billant2001} is on the so-called `zig-zag' instability which arises from a vertical columnar vortex pair under strong stratification. Therefore, in this case, it is not relevant to study two-dimensional horizontal instability. In horizontal shear flows, such as the vertical columnar vortex pair \citep{Billant2000c}, the Bickley jet \citep{Deloncle2007,Candelier2011} and the sinusoidal shear flow \citep{Lucas2017}, it is evident that the primary instability is prone to a three-dimensional mode, which varies vertically, as long as the buoyancy Reynolds number $\mathscr{R}$ is greater than $O(1)$. In linear theory, this feature appears through the self-similar scaling of $\omega_i \sim f(\beta \Frou)$ where $\omega_i$ is the growth rate of the instability \citep{Deloncle2007}, and this instability subsequently develops into layered coherent structures in the turbulent regime underpinned by non-trivial equilibrium states of the given system \citep{Lucas2017}.} \added{Despite the relatively well-established importance of vertically varying structures in stratified shear flows, most previous linear stability analyses have shown that the two-dimensional instability mode is still the most unstable \cite[]{Deloncle2007,Candelier2011,Lucas2017}, except \citet{Facchini2018a} who showed that the primary instability in stratified horizontal Couette flow is three dimensional.
Here, it is important to note that, except for the Couette flow of \citet{Facchini2018a}, all the two-dimensional primary instabilities, reported by the previous studies and the present one, are inflectional ones, regardless of the value of buoyancy Reynolds number. In the present study, we have shown theoretically that such a two-dimensional instability should be most unstable if the buoyancy Reynolds number is sufficiently low and that this behaviour is linked with the self-similar scaling of $\omega_i \sim f(\beta \Rey^{-1/2})$ in this regime. This theoretical result is well supported by the experiment of \citet{Meunier2012}, and also suggests that such a two-dimensional instability may arise as the primary instability in laboratory experiments where the buoyancy Reynolds number is often quite small.} \added{Compared to the relatively well-studied horizontal shear flows, tilted shear flows have been much less studied. The numerical and experimental results in the present study and \citet{Meunier2012} suggested that the two-dimensional instability would still be most unstable if the buoyancy Reynolds number is sufficiently low and the shear flow admits an inflectional instability which typically arises at low Reynolds numbers (e.g. vortex shedding). At high buoyancy Reynolds number, \citet{Candelier2011} showed that such a two-dimensional instability is still most unstable for all tilting angles in the Bickley jet. However, it is not yet clear whether this nature, which appears to arise essentially from the presence of inflectional instability, would extend to the flows without any inflectional instability, such as Couette flow and uniform shear flow. Furthermore, in tilted shear flows, the presence of the tilting angle introduces another vertical length scale \citep{Billant2001}. 
Therefore, any theoretical foundations established in horizontal shear flows would not necessarily be valid for large tilting angles.} \section{Results and discussion \label{sec:Result}} \subsection{Absolute and convective nature of the primary instability\label{subsec:AU}} Using the numerical solver described in \S\ref{sec:formulation}, we now compute the neutral curve for absolute instability. Given the Squire-like theorem shown in \S \ref{subsec:2D}, we will focus on $\beta=0$ in the remainder of the paper. In particular, we will fix $\theta$ and vary $\Frou$ progressively to find the critical Reynolds number for absolute instability $\Rey_c$ for a given set of $\theta$ and $\Frou$. To efficiently find absolute instability, the secant method is used to seek the pinching point in the complex $\alpha$-plane (i.e. $d \omega/d \alpha=0$), which provides the absolute frequency $\omega_0=\omega(\alpha_0)$ where $\alpha_0$ is the absolute streamwise wavenumber. Two modes are found to be most absolutely unstable at a high and a low Froude number respectively, as shown in figure \ref{fig:fig5}. The stability trend is found to be very similar to that of figure 10 in \citet{Meunier2012}. They both show that as the Froude number decreases from the high $\Frou$ regime (say $\Frou>1$), the stratification tends to stabilise the flow. At some low $\Frou$, the low-Froude-number mode becomes important. As a consequence, a sharp cusp emerges in the neutral Reynolds number curve (i.e. the point where the blue (dark) and pink (light) lines meet each other in figure \ref{fig:fig5}), as was also observed in the experimental result \citep{Meunier2012}. With a further decrease of $\Frou$, the flow is more destabilised with a decrease of the critical Reynolds number.
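The saddle-point search described above can be sketched as follows (a Python illustration applied to a hypothetical analytic dispersion relation whose saddle point is known in closed form; the actual computation applies the same secant iteration to the numerically obtained $\omega(\alpha)$, with the pinching nature of the saddle checked separately following Briggs' criterion):

```python
def domega(omega, alpha, h=1e-6):
    """d(omega)/d(alpha) by a central difference in the complex alpha-plane."""
    return (omega(alpha + h) - omega(alpha - h)) / (2.0 * h)

def find_saddle(omega, a0, a1, tol=1e-10, maxit=50):
    """Secant iteration on f(alpha) = d(omega)/d(alpha) to locate the
    saddle point alpha_0 where d(omega)/d(alpha) = 0."""
    f0, f1 = domega(omega, a0), domega(omega, a1)
    for _ in range(maxit):
        a2 = a1 - f1 * (a1 - a0) / (f1 - f0)
        if abs(a2 - a1) < tol:
            return a2
        a0, f0 = a1, f1
        a1, f1 = a2, domega(omega, a2)
    return a1

# Hypothetical model dispersion relation with a saddle at alpha_0 = 1 + 0.5i;
# its absolute frequency is omega_0 = 0.25i (Im(omega_0) > 0: absolutely
# unstable in this model).
model = lambda a: 1j * (0.25 - (a - (1.0 + 0.5j)) ** 2)
alpha_0 = find_saddle(model, 0.8 + 0.3j, 1.2 + 0.6j)
omega_0 = model(alpha_0)
```

For each $(\theta,\Frou)$, the sign of the imaginary part of the converged $\omega_0$ then decides whether the flow is absolutely or convectively unstable, and the neutral curve $\Rey_c$ follows from tracking where it crosses zero.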
This destabilising behaviour with\replaced{ increasing stratification strength (decreasing $\Frou^2$) might sound counter-intuitive,}{ elevation of the stratification level does not sound well intuitively,} and it will be discussed shortly in \S \ref{subsec:finite_Fr}. \added{Lastly, this behaviour of absolute instability for $\beta=0$ is qualitatively the same as that of three-dimensional temporal instability (see Appendix \ref{sec:appen_three}).} \floatsetup[figure]{style=plain,subcapbesideposition=top} \begin{figure} \centering{} \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure5a.pdf}\label{fig:fig5a} } \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure5b.pdf}\label{fig:fig5b} } \caption{The critical Reynolds number for absolute instability with respect to the Froude number at $(a)$ $\theta=30^{\circ}$ and $(b)$ $\theta=60^{\circ}$.\label{fig:fig5}} \end{figure} Despite the good qualitative agreement in the behaviour of the critical Reynolds number with the experimental observation of \citet{Meunier2012}, there is an important difference in the behaviour of $\Rey_c$ with respect to $\theta$. In the present stability analysis, $\Rey_c$ at $\Frou \rightarrow 0$ increases as $\theta$ increases from $30^{\circ}$ to $60^{\circ}$ (blue/darker lines in figure \ref{fig:fig5} at $\Frou \rightarrow 0$). Such a trend was not observed in the experiment of \citet{Meunier2012}, who showed that $\Rey_c$ does not change considerably with such a change of $\theta$. In fact, the behaviour in the present stability analysis is rather similar to that in \citet{Candelier2011}. We will address this issue by proposing some simple explanations in \S \ref{subsec:corrections}. Finally, to ensure the robustness of the observation made here, the stability analysis for the low-Froude-number mode is repeated by considering a range of base-flow profiles.
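As a side note for readers wishing to reproduce the pinching-point search described above, the secant iteration on $d\omega/d\alpha$ can be sketched as follows. This is a minimal illustration, not the solver used here: the model dispersion relation, its coefficients and the starting guesses are all hypothetical, chosen so that the saddle point is known in closed form.

```python
# Sketch of a pinching-point (saddle-point) search: a secant iteration
# on g(alpha) = d(omega)/d(alpha) in the complex alpha-plane.
# The model dispersion relation below is hypothetical, with its saddle
# placed at alpha_star by construction.

def domega_dalpha(omega, alpha, h=1e-6):
    """Central finite difference of omega(alpha) along the real direction
    (equal to the complex derivative for an analytic omega)."""
    return (omega(alpha + h) - omega(alpha - h)) / (2.0 * h)

def find_saddle(omega, a0, a1, tol=1e-9, maxit=50):
    """Complex secant iteration solving d(omega)/d(alpha) = 0."""
    g0, g1 = domega_dalpha(omega, a0), domega_dalpha(omega, a1)
    for _ in range(maxit):
        a2 = a1 - g1 * (a1 - a0) / (g1 - g0)
        if abs(a2 - a1) < tol:
            return a2
        a0, g0 = a1, g1
        a1, g1 = a2, domega_dalpha(omega, a2)
    return a1

# model dispersion relation: omega = omega_0 + (omega_aa/2)(alpha - alpha_star)^2
alpha_star = 0.8 - 0.3j                       # prescribed saddle point
omega_model = lambda a: (0.1 + 0.05j) + 0.5 * (1.0 - 0.5j) * (a - alpha_star)**2

alpha_0 = find_saddle(omega_model, 0.5 + 0.0j, 0.6 + 0.0j)
omega_0 = omega_model(alpha_0)    # absolute frequency omega(alpha_0)
# Im(omega_0) > 0 at the pinching point would signal absolute instability.
```

In practice, $\omega(\alpha)$ would be evaluated by solving the eigenvalue problem at each complex $\alpha$, and the pinching nature of the saddle would still need to be verified.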
Figure \ref{fig:fig6} shows the contour of the critical Reynolds number obtained by varying $R$ and $a$ in (\ref{eq:U_profile}). As expected, decreasing $R$ (i.e. having more flow reversal) is found to enhance the absolutely unstable nature of the mode. For $a$ close to unity, increasing $a$ (i.e. a stiffer profile) destabilises the flow. The overall behaviour of the critical Reynolds number here is remarkably similar to that found in viscous wakes of homogeneous (i.e. non-stratified) fluid (see figure 4 of \citet{Monkewitz1988}), confirming that the low-Froude-number mode is indeed inflectional and similar to the wake instability in homogeneous fluid. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Figure6.pdf} \caption{Contour of the critical Reynolds number $\Rey_c$ for absolute instability in the $R-a^{-1}$ plane ($\Frou=0.01$ and $\theta=30^{\circ}$). Here, the region marked with `C' is convectively unstable, whereas that with `A' is absolutely unstable. \label{fig:fig6}} \end{figure} \subsection{Stabilisation of the low-Froude-number mode with increase of $\Frou$ \label{subsec:finite_Fr}} In figure \ref{fig:fig5}, we have observed that the low-Froude-number mode is stabilised as $\Frou$ increases. Since the entire analysis in \S \ref{subsec:Asymptotic-argument} is centred around $W'=0$ at $\Frou \rightarrow 0$ (i.e. (\ref{eq:v_w_same})), here we start our analysis by extending it to the next order. We first rewrite (\ref{eq:v_w_same_dev}) in the $(X,Y,Z)$ coordinates: \begin{equation} W' =\Frou^{2}\left[\frac{\partial}{\partial t}+U_0 \frac{\partial}{\partial X}-\frac{\nabla^2}{\Rey \Sc}\right]b', \label{eq:W_scaling} \end{equation} indicating that $W'\sim \Frou^2$.
If we take $\Frou^2=\epsilon \ll 1$, the following asymptotic expansion can be written: \begin{subeqnarray}\label{eq:eps_split} U'&=& U'_1+\epsilon U'_2 + \textit{O}(\epsilon^2),\\ V'&=& V'_1+\epsilon V'_2 + \textit{O}(\epsilon^2), \\ W'&=& W'_1+\epsilon W'_2 + \epsilon^2 W'_3 +\textit{O}(\epsilon^3), \\ b'&=& b'_1+\epsilon b'_2 + \textit{O}(\epsilon^2), \\ p'&=& p'_1+\epsilon p'_2 + \textit{O}(\epsilon^2). \end{subeqnarray} At $O(\epsilon^{-1})$, (\ref{eq:non_dim_rho}) then yields \begin{equation} W_1'=0, \end{equation} retrieving (\ref{eq:v_w_same}). At $\textit{O}(1)$, (\ref{eq:non_dim_u}), (\ref{eq:non_dim_rho}) and the continuity equation in the $(X,Y,Z)$ coordinates can be written as \begin{subequations}\label{eq:asym_0} \begin{eqnarray} \left[ \frac{\partial}{\partial t}+U_0 \frac{\partial}{\partial X}-\frac{\nabla^2}{\Rey} \right] U'_1 + V'_1\frac{dU_0}{dY} & = & -\pardX{p'_1}, \label{eq:asym_u_0} \\ \left[ \pardt{} +U_0 \pardX{} - \frac{\nabla^2}{\Rey} \right] V'_1 &=& -\pardY{p'_1} \label{eq:asym_v_0}, \\ b'_1+\pardZ{p'_1} &=& 0, \label{eq:asym_w_0}\\ \left[ \frac{\partial}{\partial t}+U_0 \frac{\partial}{\partial X}-\frac{\nabla^2}{\Rey \Sc} \right] b'_1 & = & W'_2, \label{eq:asym_b_0} \\ \pardX{U'_1}+\pardY{V'_1} & = & 0. \label{eq:asym_cont} \end{eqnarray} \end{subequations} Here, we note that (\ref{eq:asym_u_0}), (\ref{eq:asym_v_0}) and (\ref{eq:asym_cont}) are decoupled from (\ref{eq:asym_w_0}) and (\ref{eq:asym_b_0}). Furthermore, they can be combined to recover (\ref{eq:v_lowFr_No_Modal}), indicating that this is merely a different derivation of (\ref{eq:v_lowFr_No_Modal}) obtained in the $(X,Y,Z)$ coordinates. For the same reason, (\ref{eq:asym_w_0}) is also identical to (\ref{eq:non_dim_pb}), and it indicates the hydrostatic balance of $b_1'$ caused by the vertical velocity perturbation at $O(\epsilon)$ (i.e. $W_2'$).
Since (\ref{eq:asym_0}) is identical to the low-Froude-number approximation of (\ref{eq:lineareq}) given in \S\ref{subsec:Asymptotic-argument}, we further proceed to the next order. At $\textit{O}(\epsilon)$, the equations of motion are \begin{subequations}\label{eq:asym_1} \begin{eqnarray} \left[ \frac{\partial}{\partial t}+U_0 \frac{\partial}{\partial X}-\frac{\nabla^2}{\Rey} \right] U'_2 + V'_2\frac{dU_0}{dY} + W'_2\frac{dU_0}{dZ}& = & -\pardX{p'_2}, \label{eq:asym_u_2} \\ \left[ \pardt{} +U_0 \pardX{} - \frac{\nabla^2}{\Rey} \right] V'_2 &=& -\pardY{p'_2} \label{eq:asym_v_2}, \\ \left[ \pardt{} +U_0 \pardX{} - \frac{\nabla^2}{\Rey} \right] W'_2 &=& -\pardZ{p'_2}-b'_2 \label{eq:asym_w_2}, \\ \pardX{U'_2}+\pardY{V'_2}+ \pardZ{W'_2} &=& 0, \label{eq:asym_cont1}\\ \left[ \frac{\partial}{\partial t}+U_0 \frac{\partial}{\partial X}-\frac{\nabla^2}{\Rey \Sc} \right] b'_2 & = & W'_3 . \label{eq:asym_b_2} \end{eqnarray} \end{subequations} Now, it becomes evident that the key structural difference between (\ref{eq:asym_0}) and (\ref{eq:asym_1}) is the presence of $W'_2$ in (\ref{eq:asym_1}) -- indeed, if $W'_2=0$, the form of (\ref{eq:asym_0}) is identical to that of (\ref{eq:asym_1}). This implies that the presence of the non-zero vertical velocity (i.e. $W_2'$) is the key player in the stabilisation mechanism of the low-Froude-number mode on increasing $\Frou$ from a very small value. Furthermore, the form of (\ref{eq:asym_1}) suggests that there may be two possible stabilisation mechanisms mediated by $W'_2$: one is the modification of the shear instability through an interaction with the vertical shear (i.e. $W'_2 (dU_0/dZ)$ in (\ref{eq:asym_u_2})), and the other is the coupling with the stabilising buoyancy through (\ref{eq:asym_w_2}). Despite the useful physical insight gained here, it is difficult to solve (\ref{eq:asym_1}) even numerically. This is because of the unknown $W_3'$ in (\ref{eq:asym_b_2}), which will have to be obtained from the equations at $O(\epsilon^2)$.
Unfortunately, the form of the equations at $O(\epsilon^2)$ is exactly identical to that of (\ref{eq:asym_1}), requiring the vertical velocity perturbation at $O(\epsilon^3)$. In fact, this pattern repeats in the equations at all subsequent orders, creating a closure problem that makes it difficult to proceed with the current analysis any further. \subsection{Energy budget analysis \label{subsec:budget}} \floatsetup[figure]{style=plain,subcapbesideposition=top} \begin{figure} \centering \sidesubfloat[]{ \includegraphics[width=0.42\textwidth]{Figure7a.pdf} \label{fig:fig7a} } \sidesubfloat[]{ \includegraphics[width=0.42\textwidth]{Figure7b.pdf} \label{fig:fig7b} } \caption{Contributions of the production ($P_{uv}/E_u$) and the conversion to potential energy per unit kinetic energy to the growth rate $\omega_i$: $(a)$ $\theta=30^\circ, \alpha=0.4$; $(b)$ $\theta=60^\circ, \alpha=0.4$. Here, note that the difference between the (blue) thick line ($\omega_i$) and the (red) thin line ($(P_{UV}+P_{UW})/E_u$) indicates the contribution of $P_{Wb}/E_u$, while the difference between the (red) thin line and the dashed line ($P_{UV}/E_u$) indicates the contribution of $P_{UW}/E_u$. For reference, the horizontal dotted line shows the growth rate at $\Frou=0.01$. \label{fig:fig7}} \end{figure} Given the difficulty discussed in \S\ref{subsec:finite_Fr}, we now proceed to explore the precise stabilisation mechanisms using the numerical results. In particular, we perform an energy budget analysis of the perturbed state to understand the role of the two mechanisms discussed above. For simplicity, we shall now consider only the inviscid case.
We introduce the total energy of the perturbed state in the eigenvalue problem (\ref{eq:normalmode}), such that \begin{subequations} \begin{equation} E_{tot}=E_{u}+E_{b}, \end{equation} with \begin{equation} E_{u}=\int_{-\infty}^{\infty} |\tilde{u}|^2 + |\tilde{v}|^2 + |\tilde{w}|^2 ~dy,~~E_{b}= \int_{-\infty}^{\infty}\Frou^2|\tilde{b}|^2~dy, \label{eq:Full_5_energy_E} \end{equation} \end{subequations} where $E_u$ is the kinetic energy of the perturbed state and $E_b$ is the potential energy. The energy balance of (\ref{eq:normalmode}) is given by \begin{subequations}\label{eq:Full_5_energy_total} \begin{equation} \omega_{i} (E_{u}+E_{b})=P_{uv} , \end{equation} with \begin{equation} P_{uv}=-\int_{-\infty}^{\infty}\Real(\bar{\tilde{u}}\tilde{v}DU)~dy, \end{equation} \end{subequations} where $\bar{\cdot}$ indicates the complex conjugate (see Appendix for details). Equation (\ref{eq:Full_5_energy_total}) now indicates that $P_{uv}$ (i.e. the production by the given base flow) is the only source term of the instability. The contribution of the production $P_{uv}$ can be split between $E_u$ and $E_{b}$: the former becomes the kinetic energy of the instability mode, while the latter is the potential energy converted by buoyancy. Furthermore, from (\ref{eq:lin_rho}), the potential energy balance is written as \begin{subequations} \begin{equation} \omega_i E_{b}=-P_{vb}-P_{wb}, \end{equation} with \begin{equation} P_{vb}=\int_{-\infty}^{\infty}\Real(\bar{\tilde{v}}\sin\theta\tilde{b})dy,~~ P_{wb}=-\int_{-\infty}^{\infty}\Real(\bar{\tilde{w}}\cos\theta\tilde{b}) dy, \end{equation} \end{subequations} where $P_{vb}$ and $P_{wb}$ are the rates of potential energy generation by $v'$ and $w'$, respectively. Then, the kinetic energy balance of the instability mode becomes \begin{equation} \omega_{i} E_{u}=P_{uv}+P_{wb}+P_{vb}.
\label{eq:Full_5_energy_sim_nonzero} \end{equation} Given the importance of the vertical velocity perturbation discussed in \S\ref{subsec:finite_Fr}, we now convert (\ref{eq:Full_5_energy_sim_nonzero}) into the one in the $(X,Y,Z)$ coordinates. Using $\tilde{U}(y)=\tilde{u}$, $\tilde{V}(y)=\tilde{v}\cos \theta +\tilde{w}\sin \theta$ and $\tilde{W}(y)=\tilde{w}\cos \theta -\tilde{v}\sin \theta$, the kinetic energy balance in the $(X,Y,Z)$ coordinates is given by \begin{subequations}\label{eq:Full_5_energy_sim_nonzero_lab} \begin{equation} \omega_{i} E_{U}=P_{UV}+P_{UW}+P_{Wb}, \end{equation} where \begin{equation} E_{U}=\int_{-\infty}^\infty |\tilde{U}|^2 + |\tilde{V}|^2 + |\tilde{W}|^2 ~dy, \label{eq:Full_5_energy_E_UWV} \end{equation} \begin{equation} P_{UV}= - \int_{-\infty}^\infty \Real \left(\bar{\tilde{U}} \frac{dU_0}{dY} \tilde{V} \right)dy, \quad P_{UW}= - \int_{-\infty}^\infty \Real \left( \bar{\tilde{U}} \frac{dU_0}{dZ} \tilde{W} \right) dy, \end{equation} \begin{equation} P_{Wb}= - \int_{-\infty}^\infty \Real \left( \bar{\tilde{W}} \tilde{b} \right) dy. \end{equation} \end{subequations} Here, $E_U=E_u$, $P_{uv}=P_{UV}+P_{UW}$ and $P_{Wb}=P_{vb}+P_{wb}$. From (\ref{eq:Full_5_energy_sim_nonzero_lab}), it becomes clear that the kinetic energy of the instability mode is set by a balance of two mechanisms: 1) the production $P_{UV}+P_{UW}$ by the given base-flow shear in the horizontal and vertical directions; 2) the conversion to potential energy (or loss of kinetic energy) through $P_{Wb}$. Figure \ref{fig:fig7} shows the numerical results of how the production per given kinetic energy ($P_{uv}/E_u$) and the conversion to potential energy per given kinetic energy ($P_{Wb}/E_u$) contribute to the stabilisation of the low-Froude-number mode on increasing $\Frou$. As expected, $P_{Wb}$ is stabilising as $\Frou^2$ increases. It is also found that the production $P_{uv}$ significantly decreases as $\Frou$ is increased.
In particular, $P_{UW}$ takes energy from the instability (i.e. it is stabilising) on increasing $\Frou$, while $P_{UV}$ plays a destabilising role. \begin{figure} \centering \includegraphics[width=0.98\textwidth]{Figure8.pdf} \caption{The left panel shows the normalised value of $\bar{U}'W'$ (real part) of the most unstable mode, together with the base flow and its derivatives in $Z$ projected on the $Z$ axis, where each profile is normalised by its own maximum value. The right panel shows the spatial structure of $U'$ and $W'$ of the most unstable mode on the $X$-$Z$ plane, and the contour plot of the density perturbation $b'$ added to the linear and stable background stratification. In this figure, $\alpha=0.4$, $\Frou=0.5$ and $\theta=30^\circ$. Note that $W'$ is scaled up by an arbitrary factor for visualisation purposes. \label{fig:fig8} } \end{figure} To gain a more precise physical picture of the stabilisation mechanisms, the spatial mode structure of $U'$ and $W'$ in the $X$-$Z$ plane is shown for a relatively small $\Frou=0.5$ in figure \ref{fig:fig8}. As expected, the finite $\Frou$ allows for a vertical velocity fluctuation $W'$. The positive and negative $W'$ are well correlated with high and low buoyancy fluctuations $b'$, indicating transport of the buoyancy field by $W'$ (see $(X,Z)\simeq (0.3\pi/\alpha,~0.2)$ and $(X,Z)\simeq (0.8\pi/\alpha,~0.2)$ in figure \ref{fig:fig8}): i.e. the high $b'$ is transported from the lower region, where the basic-state buoyancy is high, while the low $b'$ is transported from the upper region, where the basic-state buoyancy is low. It is evident that such high and low buoyancy fluctuations would create upward and downward gravitational forces, whose directions are opposite to those of $W'$. Therefore, this mechanism would play a stabilising role, consistent with the action of $P_{Wb}$ in (\ref{eq:Full_5_energy_sim_nonzero_lab}).
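The frame decomposition underlying this budget can be verified numerically. The sketch below (our own illustration, not the code used for the paper) checks, for synthetic eigenfunctions on a finite grid, that the tilted-frame and laboratory-frame production terms add up consistently, $P_{uv}=P_{UV}+P_{UW}$ and $P_{vb}+P_{wb}=P_{Wb}$; the base-flow shear components $dU_0/dY=\cos\theta\,DU$ and $dU_0/dZ=-\sin\theta\,DU$ are an assumption here, inferred from the quoted velocity transformation.

```python
# Numerical check of the energy-budget identities P_uv = P_UV + P_UW and
# P_vb + P_wb = P_Wb, using synthetic (purely illustrative) eigenfunctions.
import cmath, math

def trapz(f, y):
    """Trapezoidal rule for samples f on the grid y."""
    return sum(0.5 * (f[i] + f[i + 1]) * (y[i + 1] - y[i]) for i in range(len(y) - 1))

theta = math.radians(30.0)
c, s = math.cos(theta), math.sin(theta)

# synthetic eigenfunctions and a Gaussian wake shear on a grid (illustrative only)
N = 2001
y = [-10.0 + 20.0 * i / (N - 1) for i in range(N)]
u = [cmath.exp(-yi**2) * (1.0 + 0.3j) for yi in y]
v = [cmath.exp(-yi**2) * (0.5 - 0.2j) * yi for yi in y]
w = [cmath.exp(-yi**2) * (0.1 + 0.4j) for yi in y]
b = [cmath.exp(-yi**2) * (0.2 - 0.1j) for yi in y]
DU = [-2.0 * yi * math.exp(-yi**2) for yi in y]   # d(u0)/dy in the tilted frame

# laboratory-frame velocities: U = u, V = v cos(t) + w sin(t), W = w cos(t) - v sin(t)
V = [vi * c + wi * s for vi, wi in zip(v, w)]
W = [wi * c - vi * s for vi, wi in zip(v, w)]

P_uv = -trapz([(ui.conjugate() * vi * DUi).real for ui, vi, DUi in zip(u, v, DU)], y)
P_UV = -trapz([(ui.conjugate() * Vi * (c * DUi)).real for ui, Vi, DUi in zip(u, V, DU)], y)
P_UW = -trapz([(ui.conjugate() * Wi * (-s * DUi)).real for ui, Wi, DUi in zip(u, W, DU)], y)

P_vb = trapz([(vi.conjugate() * s * bi).real for vi, bi in zip(v, b)], y)
P_wb = -trapz([(wi.conjugate() * c * bi).real for wi, bi in zip(w, b)], y)
P_Wb = -trapz([(Wi.conjugate() * bi).real for Wi, bi in zip(W, b)], y)

assert abs(P_uv - (P_UV + P_UW)) < 1e-9
assert abs(P_Wb - (P_vb + P_wb)) < 1e-9
```

Since $\cos\theta\,\tilde{V}-\sin\theta\,\tilde{W}=\tilde{v}$ pointwise, the identities hold exactly up to rounding, independently of the chosen profiles.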
It is also interesting to observe that the positive and negative $W'$ emerge in the regions where the streamwise velocity fluctuations are respectively positive and negative (see $(X,Z)\simeq (0.25\pi/\alpha,~0.2)$ and $(X,Z)\simeq (1.25\pi/\alpha,~0.2)$ in figure \ref{fig:fig8}). This suggests that the low-Froude-number inflectional instability mode interacts with $W'$, such that a positive correlation between $U'$ and $W'$ is generated, providing a stabilising mechanism through $P_{UW}$ in (\ref{eq:Full_5_energy_sim_nonzero_lab}). \subsection{Simple corrections of base flow for consistent comparison with experimental data \label{subsec:corrections}} \floatsetup[figure]{style=plain,subcapbesideposition=top} \begin{figure} \sidesubfloat[]{ \includegraphics[width=0.42\columnwidth]{Figure9a.pdf} \label{fig:fig9a} } \sidesubfloat[]{ \centering{} \includegraphics[width=0.42\columnwidth]{Figure9b.pdf} \label{fig:fig9b} } \caption{The most unstable frequency $\omega_{max}$ (complex) and streamwise wavenumber $\alpha_{max}$ (real) of the temporal instability of an inviscid wake as a function of $\theta$ ($\Frou=0.01$) with $(a)$ a fixed base-flow profile $u_0(y)$ on the $x$-$y$ plane and $(b)$ a fixed base-flow profile $U_0(Y)$ on the $X$-$Y$ plane. Note that, in $(a)$, the fitted $\cos \theta$ curves overlap with the corresponding numerical data.} \end{figure} As discussed in \S\ref{sec:Introduction}, \citet{Candelier2011} showed that the inviscid growth rate of the low-Froude-number instability mode is highly dependent on $\theta$, whereas the experimental result in the wake flow by \citet{Meunier2012} showed that $\Rey_c$ remains at a similar level at $\theta=30^\circ$ and $\theta=60^\circ$. Given the nature of the base flow considered so far (i.e.
the base flow being kept the same in the $(x,y,z)$ coordinates, as in \citet{Candelier2011}), it is not surprising to see that the critical Reynolds number for absolute instability in \S \ref{sec:Result} is found to vary considerably with $\theta$. Therefore, in the remaining part of this paper, we will explore what physical processes would need to be further modelled in order to make a consistent comparison between the present stability analysis and the experimental data of \citet{Meunier2012}. Here, we propose three possible physical origins that may explain the aforementioned difference between the present study and \citet{Meunier2012}: 1) the base flow being kept the same in the $(x,y,z)$ coordinates; 2) the effect of viscosity; 3) the effect of the changing horizontal cross-section of the given bluff body with $\theta$ in the experiment of \citet{Meunier2012}. It should be stressed that the propositions given here are based on physical understanding of the flow and on comparison with experiments. Therefore, they should be viewed as observation-based suggestions that may help to improve the modelling of the `real' base flow with the change of the tilting angle $\theta$ in the experiment of \citet{Meunier2012}. \subsubsection{Base flow on the horizontal plane \label{subsec:Base-Flow}} According to \citet{Candelier2011}, the stability of an inviscid tilted shear flow under the low-Froude-number approximation ($\Frou \rightarrow 0$) satisfies the following relationship: \begin{equation} \omega(\alpha,\Frou,\theta)=\omega(\alpha/\cos\theta,0,0)\cos\theta, \label{eq:inviscid_scaling_law} \end{equation} provided that the base flow is kept unchanged on the tilted plane (i.e. the $x$-$y$ plane). This dispersion relation can also be demonstrated numerically in the present case by taking the inviscid limit, as shown in figure \ref{fig:fig9a}.
Here, the dispersion relation (\ref{eq:inviscid_scaling_law}) can be interpreted as stating that the stability of a tilted base flow is the same as that of its projection on the horizontal plane. Indeed, if the base flow is kept the same on the tilted plane, the width (i.e. the length scale of the system) of the base flow projected on the horizontal plane is rescaled by a factor of $(\cos \theta)^{-1}$. This is mathematically equivalent to multiplying $Y$ in (\ref{eq:v_lowFr_No_Modal}) by this factor, recovering (\ref{eq:inviscid_scaling_law}) from (\ref{eq:v_lowFr_No_Modal}) in the inviscid limit. Therefore, if the base-flow projection on the horizontal plane $U_0(Y)$ is enforced to be unchanged with respect to $\theta$ (instead of a constant base flow $u_0(y)$ on the tilted plane), the inviscid stability at $\Frou\rightarrow0$ should be independent of $\theta$. This is also demonstrated numerically in figure \ref{fig:fig9b}. \subsubsection{Viscosity and bluff-body geometry \label{subsec:ellipse_effect}} The independence of $\omega$ from $\theta$, obtained by keeping the base flow on the horizontal plane unchanged with $\theta$, provides an important explanation of the discrepancy between the results of the stability analysis and the experiment. However, in the low-Reynolds-number regime where the experiment of \citet{Meunier2012} was performed, one should not ignore the effect of viscosity: given the strong vertical stratification, $\partial^2/\partial Z^2$ in the viscous dissipation term in (\ref{eq:v_lowFr_No_Modal}) can certainly introduce $\theta$-dependence. We have re-computed the viscous absolute instability at $\Frou = 0.01$ ($\Frou \rightarrow 0$) with an unchanging $U_0(Y)$ profile. It is found that the critical Reynolds number $\Rey_c$ is still dependent on the tilting angle $\theta$ in the viscous case, as shown in figure \ref{fig:fig10a} (blue line).
Figure \ref{fig:fig10b} shows that the corresponding absolute streamwise wavenumber $\alpha_0$ does not change much, similarly to figure \ref{fig:fig9b}. This suggests that the increase of the critical Reynolds number is likely to be caused by the viscous term in (\ref{eq:v_lowFr_No_Modal}). \floatsetup[figure]{style=plain,subcapbesideposition=top} \begin{figure} \centering{} \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure10a.pdf} \label{fig:fig10a} } \sidesubfloat[]{ \includegraphics[width=0.45\columnwidth]{Figure10b.pdf} \label{fig:fig10b} } \caption{The behaviour of the neutral absolute instability mode ($\Frou=0.01$): $(a)$ $\Rey_c$; $(b)$ $\omega_{0}$ and $\alpha_{0}$. The base-flow profile on the horizontal plane is kept the same as $\theta$ changes (see also text).} \end{figure} Given the effect of viscosity on the critical Reynolds number, it appears that keeping the base flow the same on the horizontal plane for different $\theta$ does not fully explain the different $\theta$-dependency of the instability between the stability analysis and the experiment. Therefore, we should finally consider the effect of the changing horizontal cross-sectional geometry with the tilting angle $\theta$, which would also be significant in a real bluff-body wake. It is evident that an increasing tilting angle would result in a more elongated horizontal cross-sectional body (an ellipse stretched in the $Y$ direction in the case of a circular cylinder). As such, we hypothesise that the more elongated the horizontal section of the given body is, the more the flow is destabilised. In other words, the more elongated horizontal cross-section in the $Y$ direction would increase the length scale of the given system, thereby increasing the effective Reynolds number of the system. This may then reduce $\Rey_c$ at higher $\theta$.
Now, we check this hypothesis by empirically correcting $\Rey_c$ from the viscous analysis with the formula \begin{equation}\label{eq:corr} \Rey_{correct}=\Rey_c C_{shape}(\theta)\cos\theta , \end{equation} where $C_{shape}$ is the correction factor for the elongated ellipse. Such a factor is assumed to be the ratio between the $\Rey_{c}$ of an elliptical cylinder and the $\Rey_{c}$ of a circular cylinder, which can be empirically calculated from the data of \citet{Thompson2014}. We note that, in (\ref{eq:corr}), the factor $\cos \theta$ is introduced so that the length scale used in \citet{Thompson2014} becomes consistent with that in the present study: \citet{Thompson2014} kept the width of the ellipse unchanged while shortening the streamwise length, whereas, in our analysis, the streamwise length is kept unchanged and the width is elongated. The corrected neutral Reynolds number $\Rey_{correct}$ is shown in figure \ref{fig:fig10a} (red dashed line), together with the uncorrected neutral Reynolds number $\Rey_c$ (blue solid line). The behaviour of $\Rey_{correct}$ now appears to be more consistent with that in \citet{Meunier2012}, as $\Rey_{correct}$ takes similar values at $\theta=30^{\circ}$ and $\theta=60^{\circ}$. While one may argue that the corrected result is qualitatively reconciled with the data of \citet{Meunier2012}, we should admit that this still remains a proposition that needs to be confirmed with experiments or simulations. In particular, a further examination appears to be required of how reliable such an ad-hoc correction is. The empirical law proposed by \citet{Meunier2012} suggested that $\Rey_c$ is not a strong function of $\theta$ at $\Frou \rightarrow 0$, but the corrected result still suggests a stabilising effect at higher $\theta$. \section{Conclusion \label{sec:conclusion}} This paper aims to gain a better understanding of the low-Froude-number mode using a linear stability analysis.
It has successfully reproduced the stability result of the experiment of \citet{Meunier2012}, in which there exists a branch switching from the high-Froude-number mode to the low-Froude-number one as $\Frou$ decreases. To better understand the nature of the low-Froude-number mode, we have put forward the use of the laboratory frame (i.e. the $(X, Y, Z)$ coordinate system). Using the laboratory frame and the approximation at $\Frou \rightarrow 0$, we have simplified the set of linearised equations of motion to (\ref{eq:v_lowFr_No_Modal}), which emerges in a form very similar to the Orr-Sommerfeld equation in physical space. Based on (\ref{eq:v_lowFr_No_Modal}), we have deduced that the low-Froude-number mode observed in \citet{Meunier2012} and in the present stability analysis is presumably a two-dimensional and horizontal (barotropic) inflectional instability. Using (\ref{eq:v_lowFr_No_Modal}) and the WKBJ approximation, we have shown that the most unstable mode is indeed two-dimensional as long as the tilting is weak\added{, while the numerical result extends the two-dimensional argument to all $\theta$ at low $\Frou$}. The physical understanding of this theoretical result is that any vertical variation in the small perturbation would only introduce more viscous dissipation, thereby stabilising the given instability. It is important to mention that, at $\theta=0^\circ$, this result is valid only in the regime of low buoyancy Reynolds number $\mathscr{R}=\Rey \Frou$ \citep[see also][]{Brethouwer2007}, where the self-similarity is observed with respect to $\beta \Rey^{-1/2}$. We also investigated how increasing $\Frou$ from $\Frou=0$ stabilises the system, especially given our understanding that the low-Froude-number mode is horizontal and two-dimensional at $\Frou=0$.
Using an asymptotic expansion and an energy budget analysis, we have observed that the emergence of a small vertical velocity at $\textit{O}(\Frou^2)$ plays the key role in the stabilisation of the flow on increasing $\Frou^2$. This stabilisation mechanism is found to be associated with the buoyancy, which paradoxically becomes stabilising on increasing the Froude number, and with the modification of the inflectional instability. We have also tried to explain the difference in the $\theta$-dependency of the low-Froude-number instability mode between \citet{Candelier2011} and \citet{Meunier2012} by proposing a suitable behaviour of the base flow with respect to the tilting angle. While the proposed base-flow corrections may yield a reasonable agreement between the stability analysis and the experiment, they still remain to be tested. \added{In this respect, a global stability analysis with a base flow obtained from a full numerical simulation would be highly desirable to confirm the propositions we have made in the present study. This would be an important next step towards a complete understanding of instabilities in tilted stratified bluff-body wakes.} \section*{Acknowledgement} L. F. gratefully acknowledges funding from the President's PhD Scholarship of Imperial College London. We would also like to thank Professor C. \replaced{P.}{C.} Caulfield and Dr P. Billant for the insightful discussions. L. F. is grateful to Dr P. Meunier, who shared his experience with the experiment.
\section{Introduction} The field of radiative corrections for multi-particle processes has received a lot of attention in the last few years, also thanks to new computational techniques~\cite{Ossola:2006us,Ellis:2009zw,Berger:2009ep} and to new tools~\cite{Ossola:2007ax,Giele:2008bc,vanHameren:2009dr,Hirschi:2011pa} that are nowadays available. Next-to-Leading-Order (NLO) QCD calculations for hadron-collider physics are needed for two reasons. Firstly, they are an important ingredient for reliably computing backgrounds in new physics searches, which very often rely on analyses performed in rather narrow corners of the phase space or in tails of distributions, where, on the one hand, not enough statistics is present to extract the background directly from the data and where, on the other hand, radiative corrections are expected to be large. Secondly, NLO calculations should always be preferred when measuring and constraining fundamental quantities of the Standard Model (SM), such as the mass of the Higgs particle (if found) and its couplings, $M_W$, $\alpha_S$ or $M_{\rm top}$. Both in the case of new physics searches, where the newly produced particles undergo long decay chains, and in the case of SM measurements, where the hard event is accompanied by a rather strong jet activity, multi-leg final states are expected as a typical signature. In this contribution, I review the main results that have been recently obtained on NLO QCD calculations for multi-leg final-state observables. \section{NLO processes needed at the LHC} At the 2007 Les Houches workshop, theorists and experimentalists agreed upon a list of processes worth knowing at NLO accuracy in QCD~\cite{Bern:2008ef}.
On the occasion of the following NLO multi-leg Les Houches workshop~\cite{Binoth:2010ra}, the job had already been almost accomplished, at least at the parton level, mainly thanks to new breakthrough techniques to compute the one-loop part of the NLO corrections~\cite{Ossola:2006us,Ellis:2009zw,Berger:2009ep}. For the reader's reference, I present, in table~\ref{tab:tab1}, the original list and the few entries added in 2009. \begin{table} \begin{center} \begin{tabular}{|lll|} \hline $pp \to W+j$ &$ pp \to t \bar t + 2j$&$ pp \to V + 3j$ \\ $ pp \to H+2j$&$ pp \to VVb \bar b$ &$ {pp \to t \bar t b\bar b}$ \\ $ pp \to VVV$ &$ pp \to VV + 2j$ &$ pp \to b \bar b b \bar b$ \\ \hline \hline $pp \to t \bar t t \bar t $ &$ pp \to 4j$&$ pp \to W + 4j$ \\ $ pp \to Z+3j$&$ pp \to W b \bar b j$ & \\ \hline \end{tabular} \end{center} \caption{The original 2007 Les Houches Wish List (top) and its 2009 update (bottom).} \label{tab:tab1} \end{table} At present, all the parton-level processes in table~\ref{tab:tab1}, except $pp \to 4j$, are known at NLO accuracy in QCD, in the sense that theory papers have been written containing NLO results and distributions. However, it has to be pointed out that, even if table \ref{tab:tab1} looks quite impressive, the final NLO product needed from an experimental point of view should be a usable code, fully automatic and matched with Parton Shower and Hadronization. Progress towards the complete automation of parton-level NLO predictions has been achieved very recently by the authors of the {\sc MadLoop} code of~\cite{Hirschi:2011pa}, where the ability of {\sc MadGraph}~\cite{Maltoni:2002qb} to compute amplitudes is merged with the OPP integrand reduction method implemented in {\sc CutTools} to automatically generate one-loop corrections. On the other hand, interfacing with the Parton Shower and Hadronization is possible within the MC@NLO~\cite{Frixione:2002ik} and POWHEG~\cite{Frixione:2007vw} frameworks.
Finally, the first attempt at automating {\em both} the NLO computations {\em and} the subsequent merging with the Parton Shower and Hadronization codes is under way, under the name a{\sc MC@NLO}~\cite{Frederix:2011zi,ref2}. \section{NLO Tools} It is evident that sophisticated programs are needed to compute multi-leg processes at NLO. The existing tools can be naturally divided into three categories, as listed in table \ref{tab:tab2}, namely codes based on Analytic Formulae, on traditional Feynman Diagram techniques and, finally, on OPP or Generalized Unitarity methods. \begin{table} \begin{center} \begin{tabular}{|l|} \hline\\ {\underline{Analytic Formulae:}} \\ \\ {\sc MCFM}~\cite{Campbell:2002tg} \\\\ {\underline{Feynman Diagrams:}} \\ \\ {\sc Bredenstein, Denner, Dittmaier, Pozzorini}~\cite{Bredenstein:2009aj} \\ {\sc FormCalc/LoopTools/FeynCalc}~\cite{Hahn:1998yk} \\ {\sc Golem}~\cite{Binoth:2010pb} \\\\ {\underline{OPP/Generalized Unitarity:}} \\ \\{\sc MadLoop}~\cite{Hirschi:2011pa} \\ {\sc Helac-NLO/Cuttools}~\cite{Ossola:2007ax,vanHameren:2009dr} \\ {\sc BlackHat/Sherpa}~\cite{Berger:2009ep} \\ {\sc Rocket}~\cite{Giele:2008bc} \\ {\sc Golem/Samurai}~\cite{Mastrolia:2010nb,Heinrich:2010ax}\\ \hline \end{tabular} \caption{Some available NLO tools. \label{tab:tab2}} \end{center} \end{table} As usual, most of the programs have been cross-checked, to establish their technical agreement. An example of such {\em tuned} comparisons is reported in table \ref{tab:tab3}, for the process $pp \to t\bar{t}b\bar{b}$. It is a remarkable fact that the two codes use two completely different techniques.
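As a quick illustrative aside (not part of either published calculation), the level of agreement in table \ref{tab:tab3} can be quantified by the statistical pull $|\sigma_1-\sigma_2|/\sqrt{\delta_1^2+\delta_2^2}$ of the two independent results. The sketch below, with the table's numbers hard-coded, shows that all entries agree within two combined Monte Carlo standard deviations, and that the NLO/LO $K$-factor for the full $pp$ channel is about $1.77$.

```python
# Illustrative compatibility check (not part of either published code):
# statistical pulls between the two independent calculations quoted in
# the tuned comparison, pull = |s1 - s2| / sqrt(d1^2 + d2^2).
import math

# (sigma, MC error) pairs in fb, read off the table
results = {
    "qq -> ttbb, LO":  ((85.522, 0.026), (85.489, 0.046)),
    "qq -> ttbb, NLO": ((87.698, 0.056), (87.545, 0.091)),
    "pp -> ttbb, LO":  ((1488.8, 1.2), (1489.2, 0.9)),
    "pp -> ttbb, NLO": ((2638.0, 6.0), (2642.0, 3.0)),
}

def pull(r1, r2):
    (s1, d1), (s2, d2) = r1, r2
    return abs(s1 - s2) / math.sqrt(d1**2 + d2**2)

pulls = {name: pull(*pair) for name, pair in results.items()}
# every entry agrees within two combined Monte Carlo standard deviations
assert all(p < 2.0 for p in pulls.values())

# NLO/LO K-factor of the full pp channel, roughly 1.77
k_factor = 2638.0 / 1488.8
```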
\begin{table} \begin{center} \begin{tabular}{|c | c c | c c |} \hline &&&& \\ Process & $\sigma^{\mbox{\footnotesize{LO}}}$ [fb] {\cite{Bredenstein:2009aj}} & $\sigma^{\mbox{\footnotesize{LO}}}$ [fb] {\cite{vanHameren:2009dr}} & $\sigma^{\mbox{\footnotesize{NLO}}}$ [fb] {\cite{Bredenstein:2009aj}} & $\sigma^{\mbox{\footnotesize{NLO}}}$ [fb] {\cite{vanHameren:2009dr}} \\ &&& & \\ \hline &&&&\\ $ q \bar{q}\rightarrow t\bar{t}b\bar{b} $ & 85.522(26) & 85.489(46) & 87.698(56) & 87.545(91) \\ &&&&\\ \hline &&&&\\ $ pp\rightarrow t\bar{t}b\bar{b} $ & 1488.8(1.2) & 1489.2(0.9) & 2638(6) & 2642(3) \\ &&&&\\ \hline \end{tabular} \end{center} \caption{Example of {\em tuned} comparisons between HELAC-NLO~\cite{vanHameren:2009dr} and the program of~\cite{Bredenstein:2009aj}. \label{tab:tab3}} \end{table} Analogous successful comparisons have been performed by the {\sc Golem} group and the team of Dittmaier, Kallweit and Uwer on $pp \to ZZ+j + X$~\cite{Binoth:2010ra}. The second, even more important purpose of such comparisons is the assessment of the theoretical accuracy at which a given process is known. In this second type of exercise, each program freely varies a few parameters (such as the renormalization and factorization scales). The quality of the LO prediction (at least for the shapes of the distributions) can also be assessed in this way. In fig.~\ref{fig:fig1} I report, as an example, the result of a comparison of {\sc BlackHat}, {\sc Rocket} and {\sc Sherpa} on $pp \to W + 3$ jets at NLO. \begin{figure} \begin{center} \includegraphics[scale=0.9]{comparison1.eps} \end{center} \caption{Comparisons on $pp \to W + 3$ jets: $p_t$ and rapidity of the 3rd jet. \label{fig:fig1}} \end{figure} As can be easily understood, with the advent of LHC data the particular techniques used to obtain the NLO results are becoming less and less important, and the interest is now shifting towards commonly accepted interfaces that merge the different parts of an NLO calculation.
As an example, an accord to interface Monte Carlo (MC) programs, which generate the real radiation, with programs providing the virtual one-loop contributions (OLP) can be found in~\cite{Binoth:2010xt} (the so-called Binoth Les Houches accord). In fig.~\ref{fig:fig2}, I show this accord at work between {\sc BlackHat/Rocket} on the OLP side and {\sc MadFKS}~\cite{Frederix:2009yq} on the MC side, in the case of $e^+e^- \to jets$ as implemented by Frederix, Maitre and Zanderighi~\cite{Binoth:2010ra}. \begin{figure} \begin{center} \includegraphics[scale=1]{mcolp2.eps} \caption{Results on $e^+e^- \to jets$ using the Binoth Les Houches accord. \label{fig:fig2}} \end{center} \end{figure} The Binoth Les Houches accord is also used by a{\sc MC@NLO}~\cite{Frederix:2011zi,ref2} to interface virtual and real corrections. Finally, it should be noted that the field is evolving so rapidly that new NLO processes are continuously being computed. As illustrative examples, I quote $pp \to W^+W^\pm jj$ in~\cite{Melia:2010bm,Melia:2011dw}, $pp \to t \bar t \to W^+W^-b \bar b$ including all off-shell effects in~\cite{Denner:2010jp,Bevilacqua:2010qb} and $pp \to Wjjjj$~\cite{Berger:2010zx}. \section{Conclusions} I have presented recent progress in our theoretical understanding of perturbative QCD at NLO. New automatic NLO tools exist nowadays to deal with the LHC data and to cope with the complexity of the present and forthcoming measurements. The further step of automatically interfacing such programs with Parton Shower and Hadronization has already been undertaken. \section*{Acknowledgment} Work supported by the Spanish MEC under project FPA2008-02984.
\section{Introduction} \label{intro} The numerical integration of the time-dependent Schr\"odinger equation (TDSE) has become the main theoretical approach for the quantitative study of a vast range of phenomena, including strong field processes in atoms and molecules, quantum collisions and chemical reactions. In strong field physics, current light sources can create ultrashort pulses of very high intensity, making the numerical solution of the TDSE unavoidable if accurate results are required. In the low frequency regime, where the photon energy is much lower than the ionization potential, the advent of high-intensity lasers has allowed detailed investigations of phenomena such as above-threshold ionization \cite{Muller}, high order harmonic generation \cite{L'Huillier}, multiphoton multiple ionization \cite{Hansch}, attosecond pulse generation \cite{Antoine}, molecular self-spectroscopy \cite{Corkum}, etc. In the high frequency regime, where the photon energy is of the order of or larger than the ionization potential, very intense coherent X-ray sources are under development. These are based either on the collective electronic response of a plasma to ultra-intense laser fields \cite{Borot} or on next-generation free electron lasers (FEL) such as the European XFEL project. The latter is expected to boost the average photon flux by about two orders of magnitude in comparison with already existing FELs. The interaction of atoms or molecules with intense X-ray pulses of femtosecond or subfemtosecond duration is expected to lead to highly non-linear processes which can no longer be described within perturbation theory, as is currently done.\\ In quantum collision theory, the interaction Hamiltonians do not usually depend explicitly on time. Time independent approaches such as the R-matrix \cite{Burke} or the S-matrix methods \cite{Joachain} suffice.
However, in many cases, the lack of knowledge of the asymptotic boundary conditions or the explicit introduction of time into the interaction Hamiltonian through the classical description of a heavy projectile makes the numerical solution of the corresponding TDSE more convenient \cite{Sidky}. Methods such as the time-dependent close-coupling method \cite{Pindzola} are particularly efficient. Nevertheless, when the quantum systems involved become more complex, as is often the case for chemical reactions or in condensed matter physics, the numerical solution of the TDSE is no longer possible. Different approaches are then necessary, such as the time-dependent density functional theory (TDDFT) \cite{Runge, Marques}. In that case, the Hamiltonian is replaced by the self-consistent Kohn-Sham Hamiltonian. In fact, TDDFT can be viewed as a reformulation of time-dependent quantum mechanics where the basic unknown is no longer the many-body wavefunction, but the time-dependent electron density \cite{Castro}. This density can be obtained from the solution of a set of one-body equations, the so-called Kohn-Sham equations, that have the same form as the usual TDSE.\\ The necessity of integrating numerically the TDSE motivates the development of efficient and accurate time pro\-pagators. Once the TDSE has been discretized in the spatial or/and energy domain by means of either a finite difference grid method or an approach based on spectral or finite element methods, the time integration of the TDSE reduces to the solution of a system of first order differential equations which may be written as follows: \begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}\mathbf{Y}=-\mathrm{i}\mathbf{H}(t)\mathbf{Y},\label{eq1} \end{equation} where $\mathbf{H}(t)$ is a matrix that depends explicitly on time. The main difficulty we have to face in solving such a system of ordinary differential equations is the fact that the spectrum of the matrix $\mathbf{H}$ is not bounded.
In general, the matrix $-\mathrm{i}\mathbf{H}$ has purely imaginary eigenvalues of very large modulus. These eigenvalues give rise to extremely fast oscillations of the true solution and usually determine the time step of the numerical time propagator. Another way to describe this problem is to say that the system behaves as a stiff system. Although there is no rigorous mathematical definition of stiffness, a system is said to be stiff in a given interval of integration if the numerical method is forced to use a step length which is excessively small in relation to the smoothness of the exact solution \cite{Lambert}. In addition, increasing the size of the system generates ever higher eigenvalues, thereby increasing its stiffness.\\ The problem associated with stiff systems is twofold: stability and accuracy. Each numerical method has an associated function, named the stability function, that determines the stability properties of the method and the range of time steps for which the numerical solution remains bounded. In the case of stiff systems, it can be shown, for instance, that none of the explicit methods of Runge-Kutta (R-K) type is stable. In that case, it is necessary to use an implicit R-K scheme. Note that, by contrast to an explicit method, which only requires matrix-vector products, all implicit schemes require solving systems of algebraic equations at each time step. For systems of considerable size, the computer time then rapidly becomes excessive. Fortunately, explicit schemes exist that are not of R-K type but that have the stability properties required for dealing with such stiffness problems. The accuracy problem is more delicate. If an appropriate integration method is used, the stability problem may be avoided but, for a reasonable step length, the solution components corresponding to the largest eigenvalues are approximated very inaccurately \cite{Lapidus}.
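The step-size restriction imposed by large purely imaginary eigenvalues can be made concrete with a short numerical sketch (the eigenvalue and time step below are hypothetical, chosen only for illustration): an explicit Euler step applied to a single mode of Eq. (\ref{eq1}) amplifies that mode at every step, whereas the exact propagator has modulus one.

```python
import numpy as np

# Single mode of dY/dt = -i*H*Y with a large (hypothetical) eigenvalue lam.
lam = 1.0e4   # high-energy eigenvalue, arbitrary units
dt = 1.0e-3   # a time step that is small on the scale of the smooth dynamics

# Explicit Euler amplification factor R(z) = 1 + z at z = -i*lam*dt:
R_euler = abs(1.0 - 1j * lam * dt)   # = sqrt(1 + (lam*dt)^2) > 1 for any dt

# The exact propagator exp(-i*lam*dt) is norm-preserving:
R_exact = abs(np.exp(-1j * lam * dt))

# Since R_euler > 1, the high-energy mode grows at every step, so dt must
# shrink like 1/lam even though the physically relevant solution is smooth.
```

The time step is thus dictated by the largest eigenvalue rather than by the smoothness of the solution, which is precisely the stiffness phenomenon described above.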
However, there is no mathematical tool which allows one to predict whether the numerical solution of a stiff system will be accurate or not. Very often, the highest eigenvalues, which correspond to very high energies, do not play any physical role, but this does not imply that the error made in calculating the corresponding high energy components of the full numerical solution will not affect the final result. In fact, it is important to proceed on a case-by-case basis.\\ In this contribution, we analyze in detail two explicit one-step integration schemes that have the required stability properties for dealing with stiff systems. The first method is due to Fatunla \cite{Fatunla1, Fatunla2} and the second one is a Krylov subspace method usually called the Arnoldi algorithm. The Arnoldi algorithm has already been used in many different contexts: strong field physics \cite{Smyth}, condensed matter physics \cite{Castro}, etc. However, as far as we know, no systematic study of its stability and accuracy properties exists so far. In order to test the accuracy of both methods, we use a predictor-corrector scheme in which the predictor is either Fatunla's method or Arnoldi's algorithm, while the corrector is a four-stage diagonally implicit Radau IIA method of order seven. Here, we consider the interaction of a quantum system with a strong and ultrashort electromagnetic pulse and test the three methods in the case of three different quantum systems: a model potential, atomic hydrogen and helium. We also examine the performance of these explicit schemes in a completely different context, namely the calculation of a fidelity function that measures the decoherence of two-electron quantum states when a time independent perturbation is applied to a planar two-electron quantum dot where both electrons are confined in an anharmonic potential. In fact this is a difficult problem, which exhibits a strong degree of stiffness although its Hamiltonian is time independent.
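For orientation, the Arnoldi algorithm used as a short-time propagator can be sketched in a few lines (a minimal illustration in Python; the small random Hermitian matrix, subspace dimension, and time step are assumptions for the demonstration, not the implementation used in this work): the state is projected onto the Krylov subspace built from $\mathbf{H}$ and the current state, and the matrix exponential is evaluated in that small subspace.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_step(H, psi, m, dt):
    """One Krylov step: psi(t+dt) ~ |psi| * V @ expm(-i*dt*h_m) @ e1.

    H   : Hermitian (N x N) matrix at this time step
    psi : current state vector
    m   : Krylov subspace dimension (normally m << N)
    """
    N = psi.size
    V = np.zeros((N, m + 1), dtype=complex)
    h = np.zeros((m + 1, m), dtype=complex)
    beta = np.linalg.norm(psi)
    V[:, 0] = psi / beta
    for j in range(m):                       # Arnoldi: Gram-Schmidt on H*v_j
        w = H @ V[:, j]
        for i in range(j + 1):
            h[i, j] = np.vdot(V[:, i], w)
            w -= h[i, j] * V[:, i]
        h[j + 1, j] = np.linalg.norm(w)
        if abs(h[j + 1, j]) < 1e-12:         # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / h[j + 1, j]
    # Propagate in the small subspace, then lift back to the full space.
    small = expm(-1j * dt * h[:m, :m])
    return beta * (V[:, :m] @ small[:, 0])

# Tiny demonstration on a random Hermitian matrix (hypothetical test case).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = (A + A.conj().T) / 2
psi0 = rng.standard_normal(6) + 0j
psi1 = arnoldi_step(H, psi0, m=6, dt=0.1)    # m = N: exact up to round-off
exact = expm(-1j * 0.1 * H) @ psi0
```

Because the projected matrix is Hermitian, the small exponential is unitary and the norm of the state is preserved, which is one of the attractive features of this class of propagators.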
Finally, let us mention that a comparison of different time propagation algorithms for the time dependent Schr\"odinger equation may be found in \cite{Leforestier}.\\ This article is organized as follows. Section II is devoted to the general formulation of the TDSE. After some preliminary remarks, we give and discuss the general spectral representation of the TDSE and finally define the stability function of a given algorithm. In section III, we introduce the various algorithms (Fatunla's method, Arnoldi's algorithm and the predictor-corrector scheme) in the context of our model potential. For both explicit schemes, we give their stability function and analyze in detail their accuracy in the case of the model potential. Section IV is devoted to the results obtained with Fatunla's and Arnoldi's methods for the model potential, the interaction of atomic hydrogen with both a high and a very low frequency strong laser field, and single ionization of helium. Finally, we consider the problem of the planar two-electron quantum dot. Unless otherwise stated, atomic units are used throughout this paper.\\ \section{General formulation of the TDSE} \label{tdse} \subsection{Preliminary remarks} \label{preliminary} Our aim is to study the interaction of a quantum system with an external time-dependent field. Solving numerically the corresponding TDSE proceeds in two steps: the discretization of the equation in the spatial or/and energy domain, and the time propagation of the solution. There are typically three ways of discretizing the TDSE: the finite difference grid (FDG) methods, the spectral methods and the finite element methods. The simplest approaches are the FDG methods. These methods, based on a spatial discretization, are essentially local. They are very often used because the subsequent time propagation involves solving very sparse systems of algebraic equations.
However, it is often tricky to extract information on how these methods account for the electronic structure of the quantum system under consideration, and some observables are sometimes difficult to calculate. Furthermore, these methods yield finite-order rates of convergence in terms of the number of spatial grid points. In other words, the errors go to zero as the inverse of this number raised to a power given by the order of the method.\\ The spectral methods, based on an energy discretization, are non-local. They consist in writing the solution as a truncated expansion in terms of $\mathcal{L}^2$ integrable functions. These functions form a complete basis set. Different choices of basis sets are possible, which usually depend on the physics of the problem. The most commonly used functions are the Hermite functions, the Coulomb Sturmian functions \cite{Rotenberg} and orthogonal polynomials. There are essentially two types of spectral methods: Galerkin and collocation \cite{Gottlieb}. In addition to the energy discretization used in the Galerkin method, the collocation method also involves a spatial discretization. However, by contrast to the FDG methods, the grid mesh points are not chosen arbitrarily: they are the abscissae of the Gaussian quadrature associated with the basis functions. The spectral approaches are well suited to a very accurate description of the bound and resonance states of the quantum system under consideration. This is particularly true for resonance states very close to the ionization thresholds \cite{Eiglsperger}. The convergence of the spectral methods in terms of the number of basis functions depends on the analytical properties of the solution. If the successive spatial derivatives of this solution do not exhibit singularities, the convergence is exponential. This means that the errors go to zero faster than any finite power of the inverse of the number of basis functions.
On the other hand, if the solution exhibits singularities in its successive derivatives and if the basis wavefunctions do not account for these singularities, the convergence is much slower. Typical examples of such singularities are the Kato cusps present in many-particle system wavefunctions \cite{Kato}. Another drawback of the spectral methods is the fact that the matrix associated with the Hamiltonian is, in most cases, not sparse. \\\\ The finite element methods, which are based on a subdivision of the whole spatial domain of integration into simple subdomains, are in fact closely related to the spectral methods. They differ, however, by the fact that the basis functions have bounded support, being therefore piecewise regular. In addition, these methods also yield finite-order rates of convergence, as in the case of the FDG methods. Piecewise Lagrange polynomials or B-splines are very often used as basis functions. In general, these methods are particularly efficient in describing the electronic continuum states of the system under consideration. In addition, singularities or large gradients in the solution can be treated by considering non-regular subdomains. These methods are very often used, especially those based on B-splines \cite{Bachau}, because the subsequent time propagation involves solving relatively sparse systems of equations, as with the FDG methods. In the present contribution, we use spectral or/and B-spline based methods in all the cases treated.
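As a small illustration of the collocation grids mentioned above (assuming, for concreteness, a Legendre polynomial basis), the mesh points are the Gauss-Legendre quadrature abscissae, and the associated $n$-point rule integrates polynomials of degree up to $2n-1$ exactly:

```python
import numpy as np

# Collocation points for a Legendre basis are the Gauss-Legendre abscissae;
# the n-point rule is exact for polynomials of degree <= 2n - 1.
n = 3
nodes, weights = np.polynomial.legendre.leggauss(n)

# Exactness check on x^4 over [-1, 1]: the exact integral is 2/5.
approx = np.sum(weights * nodes ** 4)
```

This exactness property is what makes the collocation grid "non-arbitrary": it is tied to the quadrature underlying the chosen basis.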
\\ \subsection{The spectral representation of the TDSE} \label{spectral} The TDSE for a quantum system interacting with an external field can be written as \begin{equation} \mathrm{i}\,\frac{{\partial\Psi \left( {\mathbf{r},t} \right)}}{{\partial t}} = H\left( {\mathbf{r},t} \right)\,\Psi \left( {\mathbf{r},t} \right),\label{eq2} \end{equation} where $\Psi \left( {\mathbf{r},t} \right)$ is the wavefunction of the system, $\mathbf{r}$ represents any set of $n$ spatial coordinates and $t$ is the time. The total Hamiltonian $H\left( {\mathbf{r},t} \right)$, which depends explicitly on time, is given by \begin{equation} H\left( {\mathbf{r},t} \right)=H_0\left( {\mathbf{r}} \right)+ V\left( {\mathbf{r},t} \right),\label{eq3} \end{equation} with $H_0\left( {\mathbf{r}} \right)$ the unperturbed Hamiltonian and $V\left( {\mathbf{r},t} \right)$ the time-dependent interaction potential (velocity form in all the cases treated here). Using a complete basis set $\left\{ {{f_i}\left(\mathbf{ r} \right)} \right\}$ of square integrable functions, we write the wavefunction $\Psi(\mathbf{r},t)$, the solution of equation (\ref{eq2}), as the following truncated expansion, \begin{equation} \Psi(\mathbf{r},t)=\sum_{i=1}^{N}\psi_{i}(t)f_{i}(\mathbf{r}), \label{eq4} \end{equation} where the expansion coefficients $\psi_{i}(t)$ are time-dependent. $N$ represents the number of terms in the expansion and is taken sufficiently large to represent the wavefunction to the desired accuracy. As a result, the TDSE is transformed into a matrix equation for the vector $\boldsymbol{\Psi}(t)=\{\psi_{i}(t)\}_{N}$, given by \begin{equation} \mathrm{i}\; \mathbf{B} \frac{\mathrm{d}}{\mathrm{d} t}\boldsymbol{\Psi}(t)=\mathbf{H}(t) \boldsymbol{\Psi}(t). 
\label{eq5} \end{equation} For a non-orthonormal basis, the overlap matrix $\mathbf{B}$ and the Hamiltonian $\mathbf{H}(t)$ have elements defined by \begin{eqnarray} \left[\mathbf{B}\right]_{ij}&=&\langle f_{i} | f_{j} \rangle, \label{eq6}\\ \left[\mathbf{H}\right]_{ij}&=&\langle f_{i} | H(\mathbf{r},t) | f_{j}\rangle. \label{eq7} \end{eqnarray} The time evolution of the wavepacket is then given by the solution of the following $N$-dimensional system of first order differential equations: \begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}\boldsymbol{\Psi}(t)=-\mathrm{i}\; \mathbf{B}^{-1}\mathbf{H}(t)\boldsymbol{\Psi}(t). \label{eq8} \end{equation} There is actually no need to evaluate explicitly the inverse of the overlap matrix $\mathbf{B}$. This matrix is always symmetric and positive definite, which allows a numerically stable and fast Cholesky decomposition. The action of $\mathbf{B}^{-1}$ on a vector can then be calculated straightforwardly by solving a very sparse system of algebraic equations. The vector $\boldsymbol{\Psi}(t)$ is said to be $\mathbf{B}$-normalized, its norm being given by \begin{equation} \boldsymbol{\Psi}^{\dagger} \cdot \mathbf{B} \cdot \boldsymbol{\Psi}=1. \label{eq9} \end{equation} Note that in the case of the FDG methods, a system of equations similar to system (\ref{eq5}) has to be solved, but the matrix $\mathbf{H}$ is no longer associated with the Hamiltonian. \\ \subsection{The boundary and asymptotic conditions} \label{asymptotic} In solving numerically the TDSE, the discretization method has to account correctly for the non-trivial problems of the boundary and asymptotic conditions. By way of illustration, let us consider the case of the ionization of atomic hydrogen by an intense low frequency laser field. The amplitude of the electron quiver motion determines the minimum spatial grid size or the minimum number of basis functions to be included.
For high intensities and very low frequencies, this amplitude may become of the order of thousands of atomic units, thereby requiring excessively long computational times. In addition, during the interaction process, ionization takes place and fast emitted electrons rapidly reach the boundaries of the computational domain. It is therefore important to choose appropriate boundary conditions to avoid spurious reflections of the wavefunction at these boundaries. Such reflections can be avoided by further increasing the size of the computational domain, but this rapidly becomes intractable. Instead, reflection problems can be overcome by introducing complex absorbing potentials \cite{DiMenza,Muga}. These potentials, however, are usually not completely reflection-free. A better approach is exterior complex scaling (ECS), in which the outgoing electron coordinate becomes complex beyond a certain distance from the nucleus that is larger than the amplitude of the quiver motion \cite{He,Scrinzi1}. \\ For single-electron systems, the extraction of the information on the differential probability densities does not cause any problem since the asymptotic behavior of the field-free continuum states is known. This contrasts with multi-electron systems, where the asymptotic behavior of the multiple continuum wavefunctions is unknown. In that case, one can either develop approximate expressions for these continuum wavefunctions or use more sophisticated time-dependent methods that circumvent the problem. When the outgoing electrons are sufficiently far from each other that their interaction becomes negligible, multiple continuum wavefunctions are usually approximated by a product of Coulomb functions \cite{Colgan,Laulan,Foumouo,Feist}. The validity of this approximation, which gives reliable results, is discussed in \cite{Malegat1}.
More sophisticated methods that avoid any projection of the final wavepacket on approximated multiple continuum wavefunctions have been developed. Palacios {\it et al.} \cite{Palacios} have derived a time-dependent method where the extraction of the information from the wavepacket is based on ECS. Malegat {\it et al.} extract the information from the total wavepacket after propagating semiclassically its Fourier components in space over very large distances \cite{Malegat2,Malegat1}. Scrinzi has extended the time-dependent surface flux method to single and double ionization of two-electron systems \cite{Scrinzi2}. Hutchinson {\it et al.} \cite{Hutchinson} are developing a time-dependent R-matrix approach that can describe the interaction of any (light) atomic system with short electromagnetic pulses. More recently, Hamido {\it et al.} have developed the so-called time-scaled coordinate (TSC) method \cite{Hamido}. This latter method, which is used in some of the cases treated in this contribution, consists in performing a time-dependent scaling of the radial coordinates of the electrons together with a phase transformation of the wavefunction. As a result, a harmonic potential appears in the scaled Hamiltonian, which confines the wavefunction in configuration space. It can be shown that, a relatively long time after the interaction, the wavefunction becomes stationary and its modulus directly gives the momentum distribution of the particles resulting from the fragmentation of the system. This method therefore clearly circumvents the above-mentioned difficulties.
It introduces, however, different length scales that need to be treated with multiresolution techniques and that influence the stability of the numerical time propagation scheme.\\ \subsection{The stability function} \label{stability} In order to analyze the stability of a one-step numerical time propagation scheme, it is convenient to consider the following standard test problem (Dahlquist's equation): \begin{equation} \frac{\mathrm{d}y}{\mathrm{d} t}=\lambda y, \label{eq10} \end{equation} where $\lambda$ is a constant. If we assume that $y(0)=\eta$, the solution of this equation is $y(t)=\eta\exp(\lambda t)$. Usually, a system of equations is said to be stiff when its Jacobian matrix has some eigenvalues with a very large negative real part. In the case of Eq.(\ref{eq10}), assuming that the real part of $\lambda$ is very large and negative leads to a solution that tends extremely rapidly to zero. We therefore have to look for the conditions that have to be imposed on the numerical time propagation scheme in order that the numerical solution $y_n=y(n\delta t)\rightarrow 0$ as $n\rightarrow\infty$, where $\delta t$ is the time step. By applying the one-step numerical time propagation scheme to Eq.(\ref{eq10}), we obtain \begin{equation} y_{n+1}=R(\lambda\delta t)y_n\label{eq11}, \end{equation} where $R(z)$ is the so-called stability function. In order that $y_n$ tends to zero as $n\rightarrow\infty$, we must impose $|R(\lambda\delta t)| < 1$, thereby implying some constraints on the time step $\delta t$. The set $S=\{z=\lambda\delta t\in\mathbb{C};\;|R(z)|\leq1\}$ is called the stability domain of the numerical scheme. The scheme is said to be A-stable if its stability domain contains the left half-plane $\mathbb{C}^-=\{z;\;Re\;z\leq 0\}$. It is L-stable if, apart from being A-stable, the stability function has the property $\lim_{Re(\lambda\delta t)\rightarrow -\infty}|R(\lambda\delta t)|=0$. L-stable methods are the most stable ones \cite{Lambert}.
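These definitions can be checked numerically on a few classical one-step schemes (a sketch using standard textbook stability functions, not the methods analyzed later in this work): explicit Euler has $|R(\mathrm{i}y)|>1$ away from the origin, the implicit midpoint rule satisfies $|R(\mathrm{i}y)|=1$ exactly, and backward Euler, being L-stable, strongly damps modes with a large negative real part.

```python
import numpy as np

def R_explicit_euler(z):
    return 1 + z                      # not A-stable

def R_backward_euler(z):
    return 1 / (1 - z)                # A-stable and L-stable: R -> 0 as Re z -> -inf

def R_implicit_midpoint(z):
    return (1 + z / 2) / (1 - z / 2)  # A-stable, |R(iy)| = 1: norm-preserving

z_imag = 1j * np.linspace(-10, 10, 201)   # purely imaginary test points
z_stiff = -1.0e6                          # large negative real part

mod_euler = np.abs(R_explicit_euler(z_imag))     # >= 1 everywhere on i*R
mod_mid = np.abs(R_implicit_midpoint(z_imag))    # == 1 on the imaginary axis
```

The midpoint rule's property $|R(\mathrm{i}y)|=1$ is precisely the norm-preservation constraint discussed in the next subsection, while the L-stable damping of backward Euler illustrates the limit in the definition above.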
\\ In the present case, the Jacobian of the system of equations we are interested in has large, purely imaginary eigenvalues. Although such systems behave like stiff systems, the analysis of the stability of the numerical scheme is more delicate. Suppose for instance that the numerical scheme we use is L-stable and that its stability domain covers the half-plane $\mathbb{C}^-$ as well as large parts of the right half-plane $\mathbb{C}^+$. Under these conditions, uninteresting fast oscillations of the true solution may be damped by the numerical scheme. However, the norm of the solution will not necessarily be preserved since $|R(\lambda\delta t)|\leq 1$. We must impose, as an additional constraint, that $|R(\lambda\delta t)|=1$. This means that if $\lambda$ is purely imaginary, $R(\lambda\delta t)=\exp(\lambda\delta t)$. Following Fatunla \cite{Fatunla2}, a numerical time propagation scheme is said to be exponentially fitted at a complex value $\lambda=\lambda _0$ if the stability function $R(\lambda\delta t)$ satisfies the relation \begin{equation} R(\lambda_0\delta t)=\exp(\lambda_0\delta t)\label{eq12}. \end{equation} \section{Time propagation algorithms} \label{algorithm} In this section we describe and compare the performance of two explicit one-step time propagation schemes, namely Fatunla's method and Arnoldi's algorithm, in terms of stability and accuracy. To test the accuracy of both schemes we use an implicit predictor-corrector (P-C) method. The predictor is either Fatunla's method or Arnoldi's algorithm, and the corrector is a four-stage Radau IIA implicit method of Runge-Kutta type.
Monitoring the time step during the time propagation with both explicit schemes is a key point, which will be addressed first within a simple one-dimensional model of one electron in a Gaussian potential of the form $V(x)=-V_{0}e^{-\beta x^2}$, where $V_{0}$ and $\beta$ are constants, exposed to a cosine-squared pulse.\\ In the following sections, these two algorithms will also be tested in two more demanding physical situations. The first situation is the interaction of a quantum system with a strong electromagnetic pulse. The quantum systems we shall be studying in that case are atomic hydrogen and helium, both treated in full dimensionality. The second physical situation is the time evolution of a two-electron wavepacket in a two-dimensional quantum dot. \subsection{Fatunla's method} \label{fatunla} The idea behind Fatunla's method is to take into account the intrinsic frequencies of the atom-field system by introducing interpolating oscillatory functions that approximate the solution of the TDSE. This allows one to deal with problems displaying eigenvalues that differ by many orders of magnitude. That explains why Fatunla's method has the capability to solve stiff equations, while requiring only matrix-vector products. More precisely, we write the first order differential equation (\ref{eq8}) as \begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}\boldsymbol{\Psi}(t)=-\mathrm{i}\; \mathbf{B}^{-1}\mathbf{H}(t)\boldsymbol{\Psi}(t)=\mathbf{f}(t,\boldsymbol{\Psi})\label{eq13}. \end{equation} The solution $\boldsymbol{\Psi}(t)$ over a subinterval $[t_{n},t_{n}+\delta t=t_{n+1}]$ is approximated by the interpolating oscillatory function \begin{equation} \widetilde{\mathbf{F}}(t)=(\mathbf{I}-e^{\boldsymbol{\Omega}_{1}t})\mathbf{a}-(\mathbf{I}-e^{-\boldsymbol{\Omega}_{2}t})\mathbf{b}+\mathbf{c}, \label{eq14} \end{equation} with $\mathbf{I}$ being the identity matrix.
$\boldsymbol{\Omega}_{1}$ and $\boldsymbol{\Omega}_{2}$ are diagonal matrices, usually called the stiffness matrices, and $\mathbf{a},\mathbf{b},\mathbf{c}$ are constant vectors. By demanding that the interpolating function (\ref{eq14}) coincides with the theoretical solution at the endpoints of the interval $[t_{n},t_{n+1}]$, and that it satisfies the differential equation at $t=t_{n}$, we arrive at the recursion formula, \begin{equation} \boldsymbol{\Psi}_{n+1}=\boldsymbol{\Psi}_{n}+\mathbf{R}\mathbf{f}_{n}+\mathbf{S}\mathbf{f}_{n}^{(1)}, \label{eq15} \end{equation} where we use the notation $\mathbf{f}_{n}=\mathbf{f}(t_{n},\boldsymbol{\Psi}_{n})$, $\mathbf{f}_{n}^{(1)}=\displaystyle{\left.\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{f}(t,\boldsymbol{\Psi})\right|_{t=t_{n}}}$. $\mathbf{R}$ and $\mathbf{S}$ represent diagonal matrices defined as \begin{equation} \mathbf{R}=\boldsymbol{\Omega}_{2}\boldsymbol{\Phi}-\boldsymbol{\Omega}_{1}\boldsymbol{\varXi}, \qquad \mathbf{S}=\boldsymbol{\Phi}+\boldsymbol{\varXi}. \label{eq16} \end{equation} $\boldsymbol{\Phi}$ and $\boldsymbol{\varXi}$ are diagonal matrices with non-zero entries given by \cite{Fatunla1,Fatunla2}, \begin{eqnarray} \Phi_{j}=\frac{e^{\Omega_{1,j}\delta t}-1}{\Omega_{1,j}\left(\Omega_{1,j}+\Omega_{2,j}\right)}, \label{eq17} \end{eqnarray} and \begin{eqnarray} \varXi_{j}=\frac{e^{-\Omega_{2,j}\delta t}-1}{\Omega_{2,j}\left(\Omega_{1,j}+\Omega_{2,j}\right)}. \label{eq18} \end{eqnarray} The recursion formula (\ref{eq15}) depends on the so far unknown stiffness matrices $\boldsymbol{\Omega}_{1}$ and $\boldsymbol{\Omega}_{2}$. These matrices can be written in terms of the function $\mathbf{f}_{n}$ and its derivatives up to third order at $t_{n}$.
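The construction of the stiffness matrices can be illustrated on a single component whose derivative mixes two known oscillation rates (a hypothetical scalar signal, not one of the physical systems studied here): evaluating Fatunla's quantities $D_j$ and $E_j$ from the derivatives $f^{(0)},\dots,f^{(3)}$, as given in Eqs. (\ref{eq19}) and (\ref{eq20}), recovers the two rates exactly, which is why the interpolant (\ref{eq14}) reproduces such signals without error.

```python
import numpy as np

# Hypothetical one-component signal whose time derivative mixes two
# oscillation rates: f(t) = c1*mu1*exp(mu1*t) + c2*mu2*exp(mu2*t).
mu1, mu2 = 3.0j, 0.5j        # two purely imaginary rates (illustrative)
c1, c2 = 1.0, 1.0

# Derivatives f^{(k)}(t_n) at t_n = 0 for k = 0,...,3:
f = np.array([c1 * mu1 ** (k + 1) + c2 * mu2 ** (k + 1) for k in range(4)])

# Fatunla's D and E, and the resulting stiffness values Omega_1, Omega_2:
den = f[1] * f[1] - f[0] * f[2]
D = (f[0] * f[3] - f[1] * f[2]) / den
E = (f[1] * f[3] - f[2] * f[2]) / den
Omega1 = 0.5 * (-D + np.sqrt(D * D + 4.0 * E))
Omega2 = Omega1 + D

# Omega1 equals mu1 and -Omega2 equals mu2: the two rates fed into the
# signal are recovered exactly from four derivative values.
```

This recovery of the local frequencies is what allows the scheme to take time steps far larger than an explicit Runge-Kutta method on oscillatory problems.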
The use of the Taylor expansion of $\boldsymbol{\Psi}_{n+1}=\boldsymbol{\Psi}(t_n+\delta t)$ and of the Maclaurin series $\displaystyle{e^{\boldsymbol{\Omega}_1\delta t} = \sum_{k = 0}^\infty \frac{\delta t^k}{k!}\,\boldsymbol{\Omega}_1^k}$ and $\displaystyle{e^{-\boldsymbol{\Omega}_2\delta t} = \sum_{k = 0}^\infty \frac{\delta t^k}{k!}\left(-1\right)^k\boldsymbol{\Omega}_2^k}$, substituted into the recursion relation (\ref{eq15}), leads to a simple system of algebraic equations for $\boldsymbol{\Omega}_1$ and $\boldsymbol{\Omega}_2$. The components of the stiffness matrices obtained after solving these equations read as \cite{Fatunla2}, \begin{eqnarray} \Omega_{1,j}&=&\frac{1}{2}\left(-D_{j}+\sqrt{D_{j}^{2}+4 E_{j}}\right), \nonumber \\ \Omega_{2,j}&=&\Omega_{1,j}+D_{j}, \label{eq19} \end{eqnarray} where $D_{j}$ and $E_{j}$ ($j=1,\ldots,N$) are given in terms of the components of the derivatives $\mathbf{f}_{n}^{(k)}$ ($k=0,1,2,3$), \begin{eqnarray} D_{j}&=&\frac{f_{n,j}^{(0)}f_{n,j}^{(3)}-f_{n,j}^{(1)}f_{n,j}^{(2)}}{f_{n,j}^{(1)}f_{n,j}^{(1)}-f_{n,j}^{(0)}f_{n,j}^{(2)}} ,\nonumber \\ E_{j}&=&\frac{f_{n,j}^{(1)}f_{n,j}^{(3)}-f_{n,j}^{(2)}f_{n,j}^{(2)}}{f_{n,j}^{(1)}f_{n,j}^{(1)}-f_{n,j}^{(0)}f_{n,j}^{(2)}}, \label{eq20} \end{eqnarray} provided that the denominator in Eq.(\ref{eq20}) is not zero. \\ Fatunla \cite{Fatunla2} has established that his method is L-stable and exponentially fitted to any complex value $\lambda$, i.e. its stability function is $R(\lambda\delta t)=\exp(\lambda\delta t)$, which gives optimal stability properties. Furthermore, it can be shown that the $j$th component of the local truncation error at $t=t_{n+1}$ is given by \begin{eqnarray} T_{n+1,j}=\frac{\delta t^{5}}{5!}\left[ f_{n,j}^{(4)}+\left(\Omega_{2,j}^{3}-\Omega_{2,j}^{2}\Omega_{1,j}+\Omega_{2,j}\Omega_{1,j}^{2}-\Omega_{1,j}^{3}\right) f_{n,j}^{(1)}\right.
\nonumber \\ \left.-\Omega_{1,j}\Omega_{2,j}\left(\Omega_{1,j}^{2}-\Omega_{1,j}\Omega_{2,j}+\Omega_{2,j}^{2}\right) f_{n,j}^{(0)}\right]+\mathcal{O}(\delta t^{6}) .\label{eq21} \end{eqnarray} \bigskip The implementation of Eq.(\ref{eq15}) to calculate $\boldsymbol{\Psi}_{n+1}$ requires the calculation of the function $\mathbf{f}_{n}$ and its first derivative $\mathbf{f}_{n}^{(1)}$ at each value of $t_{n}$, together with the stiffness matrices $\boldsymbol{\Omega}_{1}$ and $\boldsymbol{\Omega}_{2}$ needed to obtain the matrices $\mathbf{R}$ and $\mathbf{S}$. We also calculate the truncation error $\mathbf{T}_{n+1}$ to control the size of the integration step by imposing lower and upper bounds on $\left|\mathbf{T}_{n+1}\right|$. Note that to calculate the truncation error, we also need to evaluate $\mathbf{f}_{n}^{(4)}$. \begin{figure}[h] \begin{center} \includegraphics[width=11cm,height=8cm]{fig1.pdf} \end{center} \caption{(Color online) Evolution of the time step in Fatunla's method (blue line) for our Gaussian model problem. The cosine square pulse envelope (red line) is also shown on an arbitrary scale. The Gaussian potential parameters are $V_{0}=1$ a.u. and $\beta=1$ a.u. and we use an electromagnetic pulse with $I=10^{14}$ Watt/cm$^2$ peak intensity, $\omega=0.7$ a.u. photon energy and a duration of 10 optical cycles.} \label{fig1} \end{figure} The stiffness parameters carry the intrinsic information on the natural oscillations of the system. For this reason, Fatunla's scheme can afford larger values of the time step than other explicit methods of Runge-Kutta type \cite{Madronero}.\\ \begin{figure}[b!] \begin{center} \includegraphics[width=11cm,height=8cm]{fig2.pdf} \end{center} \caption{(Color online) Absolute error in the norm $\Delta=\left\vert\,\vert\Psi(\mathbf{r},t)\vert-\vert\Psi(\mathbf{r},t_{0})\vert\,\right\vert$ on a logarithmic scale for different lower and upper bounds of the truncation error. The parameters of the Gaussian model problem are the same as in Fig.\ref{fig1}.
The inset is a blow-up of the region at the beginning of the time propagation.} \label{fig2} \end{figure} In Fig.\ref{fig1}, we show the evolution of the time step in Fatunla's propagation for our Gaussian model problem. The pulse envelope is also plotted in arbitrary units to illustrate the duration of the pulse. We see that the time step becomes increasingly large after the end of the pulse, reaching values of around $2$ a.u. at the end of the total propagation ($500$ a.u. of time). It is clear that the most demanding part of the propagation, and therefore the most time consuming one, is during the interaction of the pulse with the system. This observation is important when it is necessary to propagate the wavefunction up to large distances after the end of the pulse, as is the case when the TSC method is used. In the results shown in Fig.\ref{fig1}, the time step is adapted according to the condition $10^{-14}<\lvert \mathbf{T}_{n+1} \rvert < 10^{-9}$, that is, if the truncation error is lower than the lower bound $10^{-14}$ then we increase the time step, and if it is higher than the upper bound $10^{-9}$ it is decreased. With this choice, the overall conservation of the norm is about $10^{-5}$, which is enough for the model problem we are studying. For many physical problems, this level of accuracy in the norm is sufficient but, if a higher accuracy is needed, one might expect that it suffices to shift the bounds of the truncation error. However, as shown in Fig.\ref{fig2}, such a conclusion is not correct. In Fig.\ref{fig2}, we consider three different constraints on the truncation error and calculate, on a logarithmic scale, the absolute error on the norm denoted by $\Delta$ as a function of time. This error is defined as the absolute value of the difference between 1 and the norm at time $t$. In these three cases, the time propagation is started with the same time step, namely $10^{-3}$ a.u.
This time step always increases while the truncation error is smaller than the prescribed lower bound and decreases if the truncation error is above the upper bound. In all three cases, we observe a significant loss of accuracy in $\Delta$ at the very beginning of the time propagation. As described by Madro\~nero and Piraux \cite{Madronero}, this is due to initially very small values of the denominators in Eq.(\ref{eq20}), which lead to inaccurate values of the stiffness matrix elements and of the truncation error. This problem is therefore intrinsic to Fatunla's method and leads to difficulties in correctly controlling the time step. In fact, if we keep the time step constant from the beginning, we have a much better control of $\Delta$. We have checked that this is true even in the field-free case. On the other hand, we see from the inset in Fig.\ref{fig2} that, in general, we maintain a higher accuracy when the constraint on the truncation error is more severe. In addition, we also observe several small jumps in $\Delta$ whose magnitudes are much smaller than the jump in $\Delta$ at the beginning of the time propagation. We attribute these jumps to an accumulation of roundoff errors. Indeed, we expect more roundoff errors when the constraint on the truncation error is the most severe, since a smaller time step leads to a larger amount of calculations. Note that the jump observed in the red continuous line corresponds to a change of only one digit in the accuracy of the norm. The overall relative accuracy we obtain, even for the most severe constraint on the truncation error, is of the order of $10^{-5}$. To achieve a greater accuracy, it is necessary to use a fixed and very small time step. These results show clearly that the achievable accuracy of the adaptive time step approach in Fatunla's method has a lower bound for a given initial time step.
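The step-size control used above amounts to a simple rule, sketched below; the bounds are those quoted in the text, while the growth and reduction factors are illustrative assumptions, not values from our codes.

```python
def adapt_step(dt, err, lo=1e-14, hi=1e-9, grow=1.2, shrink=0.5):
    """Adapt the integration step from bounds on the truncation error.
    The factors grow/shrink are illustrative choices only."""
    if err < lo:
        return dt * grow      # error below lower bound: increase the step
    if err > hi:
        return dt * shrink    # error above upper bound: decrease the step
    return dt                 # otherwise keep the current step
```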
As a result, the use of Fatunla's method rests on a compromise between the computer time required and the accuracy needed. In the following, we consider the interaction of helium with a strong laser pulse. In that case, the accuracy on the norm reduces to about 4 significant digits when Fatunla's method is used. This prevents us from calculating the probability of single ionization in various channels where this probability is less than $10^{-4}$, for field intensities currently used in the experiments.\\ In conclusion, Fatunla's method allows one to treat stiff problems while fully exploiting the main advantage of explicit schemes, namely that it only involves matrix-vector multiplications. However, it has its own limitations. \subsection{Krylov subspace method} \label{arnoldi} In this section we consider a powerful method to propagate the TDSE solution, which provides both accurate solutions and stable propagation. It uses projection techniques on Krylov subspaces \cite{Saad}. This approach was introduced by Arnoldi \cite{Arnoldi} for the calculation of the eigenvalues of a matrix. Here we briefly recall how it is used as a time propagator \cite{Park} to solve the differential equation (\ref{eq5}). Since the overlap matrix is positive-definite, we can use the Cholesky decomposition $\mathbf{B}=\mathbf{U}^{\dagger}\mathbf{U}$ to form an orthonormal basis defining the new coefficients $\mathbf{\Phi}=\mathbf{U}\boldsymbol{\Psi}$. The TDSE for these coefficients is written in the form, \begin{equation} \frac{\mathrm{d}\mathbf{\Phi}(t)}{\mathrm{d}t}=-\mathrm{i} \mathbf{\widehat{H}}(t)\mathbf{\Phi}(t) ,\label{eq22} \end{equation} where $\mathbf{\widehat{H}}=(\mathbf{U}^{\dagger})^{-1}\mathbf{H} \mathbf{U}^{-1}$.
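This change of basis can be illustrated with a short NumPy/SciPy sketch; the random matrices below are toy stand-ins for the overlap matrix $\mathbf{B}$ and the Hamiltonian $\mathbf{H}$, not the actual matrices of our basis sets.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
N = 6
# Toy hermitian Hamiltonian and positive-definite overlap matrix
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = 0.5 * (M + M.conj().T)
B = np.eye(N) + 0.1 * (M @ M.conj().T)          # positive definite

U = cholesky(B, lower=False)                    # B = U^dagger U
# H_hat = (U^dagger)^{-1} H U^{-1}, formed via two triangular solves
tmp = solve_triangular(U.conj().T, H, lower=True)
H_hat = solve_triangular(U.conj().T, tmp.conj().T, lower=True).conj().T

# The transformed Hamiltonian is again hermitian
print(np.allclose(H_hat, H_hat.conj().T))
```

Since $\mathbf{\widehat{H}}$ inherits the hermiticity of $\mathbf{H}$, the propagator $e^{-\mathrm{i}\mathbf{\widehat{H}}\delta t}$ acting on the coefficients $\mathbf{\Phi}$ is unitary.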
If we assume that the time interval is sufficiently small that the Hamiltonian may be treated as constant over a time step $\delta t$, it is straightforward to show that Eq.(\ref{eq22}) has the solution \begin{equation} \mathbf{\Phi}(t+\delta t)=e^{-\mathrm{i} \mathbf{\widehat{H}}(t)\delta t} \mathbf{\Phi}(t). \label{eq23} \end{equation} If $\mathbf{\widehat{H}}$ is diagonalizable and can be written as $\mathbf{\widehat{H}}=\mathbf{E}\mathbf{\Lambda}\mathbf{E}^{-1}$, where $\mathbf{\Lambda}$ is a diagonal matrix with the eigenvalues $\lambda_{i}$ of $\mathbf{\widehat{H}}$ on the main diagonal and $\mathbf{E}$ is the matrix with the corresponding eigenvectors of $\mathbf{\widehat{H}}$ as its columns, then Eq.(\ref{eq23}) can be reexpressed as follows \begin{equation} \mathbf{\Phi}(t+\delta t)=\mathbf{E} e^{-\mathrm{i}\mathbf{\Lambda}(t)\delta t} \mathbf{E}^{-1} \mathbf{\Phi}(t) .\label{eq24} \end{equation} However, for very large $N$ such a full diagonalization is computationally very demanding. Instead, we can define the exponential in Eq.(\ref{eq23}) through a Taylor expansion of the form, \begin{equation} \mathbf{\Phi}(t+\delta t)=\left(\mathbf{I}-\mathrm{i} \delta t \mathbf{\widehat{H}}(t)+... + \frac{(-\mathrm{i} \delta t)^{k}}{k!} \mathbf{\widehat{H}}^{k}(t)+...\right)\mathbf{\Phi}(t). \label{eq25} \end{equation} We use the successive matrix products as a basis set forming a Krylov subspace spanned by $(m+1)$ linearly independent vectors, denoted by \begin{equation} K_{m+1}=\mathrm{span}\{\mathbf{\Phi},\mathbf{\widehat{H}}\mathbf{\Phi},...,\mathbf{\widehat{H}}^{m}\mathbf{\Phi}\}. \label{eq26} \end{equation} To build the Krylov subspace, we first use Gram-Schmidt orthogonalization of the initial vectors $\{\mathbf{\Phi},\mathbf{\widehat{H}}\mathbf{\Phi},...,\mathbf{\widehat{H}}^{m}\mathbf{\Phi}\},$ to obtain an orthonormal basis $\{\mathbf{q}_{0},\mathbf{q}_{1},...,\mathbf{q}_{m}\}$.
The procedure starts with $\mathbf{q}_{0}=\mathbf{\Phi}/\vert \mathbf{\Phi}\vert$, where the norm is defined as $\vert \mathbf{\Phi}\vert= \sqrt{\mathbf{\Phi}^{\dagger}\cdot\mathbf{\Phi}}$. The $\mathbf{q}_{k}$ are obtained by calculating $\mathbf{\widehat{H}}\mathbf{q}_{k-1}$ and then orthonormalizing each vector with respect to $\mathbf{q}_{0},...,\mathbf{q}_{k-1}$. If we define $\mathbf{Q}$ to be a matrix formed by the $m+1$ column vectors $(\mathbf{q}_{0},...,\mathbf{q}_{m})$, we finally get \begin{equation} \mathbf{\widehat{H}} \mathbf{Q}=\mathbf{Q} \mathbf{h},\label{eq27} \end{equation} giving \begin{equation} \mathbf{h}=\mathbf{Q}^{\dagger}\mathbf{\widehat{H}}\mathbf{Q}. \label{eq28} \end{equation} We see here that $\mathbf{h}$ is the Krylov subspace representation of the full Hamiltonian $\mathbf{\widehat{H}}$, and that in this procedure, we obtain simultaneously the Krylov vectors $\mathbf{q}_{0},...,\mathbf{q}_{m}$. Arnoldi's algorithm is general and applies to non-hermitian matrices. It reduces the dense matrix $\mathbf{\widehat{H}}$ to an upper Hessenberg matrix $\mathbf{h}$ which, in the particular case of hermitian matrices, is symmetric tridiagonal. In this latter case, Lanczos has shown that this matrix can be obtained by means of a simple recursion formula. However, this formula is known to be problematic when the size of the Krylov subspace is large, because the orthogonality of the Krylov vectors is rapidly lost \cite{Saad}. This is the reason why we do not use this algorithm in the present case. Once we obtain the orthonormal Krylov subspace $\mathbf{Q}$ and the representation $\mathbf{h}$ of the Hamiltonian, it can easily be shown that Eq.(\ref{eq23}) can be written as \begin{equation} \mathbf{\Phi}(t+\delta t)=\mathbf{Q} e^{-\mathrm{i}\mathbf{h}\delta t} \mathbf{Q}^{\dagger} \mathbf{\Phi}(t).
\label{eq29} \end{equation} The matrix $\mathbf{h}$ for all our case studies is tridiagonal, and its size never exceeds $100 \times 100$, so the calculation of the exponential through direct diagonalization, as in Eq.(\ref{eq24}), is straightforward.\\ In actual numerical calculations, the Arnoldi algorithm \cite{Saad} requires some modifications. After a new Krylov vector $\mathbf{q}_{j+1}$ has been calculated, we ensure that its norm is equal to one by re-checking the orthogonality against the previously calculated vectors, repeating the Gram-Schmidt procedure if necessary. In principle the orthogonality condition determines the maximum size of the Krylov subspace, and the algorithm can be used with a number $m-1$ of vectors. Also, if we start generating the Krylov vectors from the ground state of the system, then $\mathbf{\Phi}(t=t_{0})$ is an eigenstate of the Hamiltonian, making it impossible to build a linearly independent set of Krylov vectors. To solve this problem, instead of using the vector $\mathbf{\Phi}(t=t_{0})$ as a starting point, we use a modified vector $\mathbf{\Phi}(t=t_{0})+\boldsymbol{\varrho}$, with $\boldsymbol{\varrho}$ a vector of random entries no larger than $10^{-10}$.\\ \begin{figure}[tbp] \begin{center} \includegraphics[width=11cm,height=9cm]{fig3.pdf} \end{center} \caption{Number of Krylov vectors required to obtain convergence of the final propagated vector for different values of the (fixed) time step. The parameters of the Gaussian model problem are the same as in Fig.\ref{fig1}.} \label{fig3} \end{figure} By construction (see Eq.(\ref{eq23})), the stability function associated with the numerical time propagator based on Arnoldi's algorithm is given by $R(\lambda\delta t)=\exp(\lambda\delta t)$. As a result, it has exactly the same stability properties as Fatunla's algorithm. However, it is worth remembering that Eq.(\ref{eq23}) is only valid if the Hamiltonian is time independent.
It is therefore a good approximation only for small values of $\delta t$. In the present case, there are two types of errors. The first one is directly related to Arnoldi's algorithm for the calculation of the exponential of a matrix. This type of error has been discussed in detail by Saad \cite{Saad2} and later by Hochbruck and Lubich \cite{Hochbruck}. We have checked that this type of error is always negligible and does not depend on the time step. The second type of error is due to assuming that the Hamiltonian does not depend explicitly on time over the time step $\delta t$. We estimated this type of error by calculating $\|\mathrm{d}\mathbf{\Psi}/\mathrm{d}t+\mathrm{i}\mathbf{H}\mathbf{\Psi}\|$ and checked that, as expected, it is of order $\delta t^{2}$. Another way to estimate this type of error is to compare our results with those obtained with an Arnoldi-based method that takes the time dependence of the Hamiltonian explicitly into account. This can be done by using a Magnus expansion of the time evolution operator \cite{Magnus,Iserles}. However, this method requires very time-consuming calculations beyond the scope of this contribution. On the other hand, as already noted by other authors \cite{Park}, enlarging the size of the Krylov space allows larger time steps to be considered. In Fig.\ref{fig3}, we give the number of Krylov vectors necessary to obtain convergence of our results as a function of the time step used in the calculations, for the case of the one-dimensional Gaussian model potential with the same parameters as in Fig.\ref{fig1}. In our calculations, the time step $\delta t$ is kept constant during the propagation.
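A bare-bones sketch of one Krylov propagation step, Eqs.(\ref{eq26})-(\ref{eq29}), is given below. The Arnoldi loop is the textbook version with plain Gram-Schmidt, omitting the re-orthogonalization safeguards discussed above, and the random hermitian matrix is only a stand-in for $\mathbf{\widehat{H}}$.

```python
import numpy as np
from scipy.linalg import expm

def krylov_step(H, phi, dt, m):
    """Propagate phi by exp(-i H dt) projected on an m-dimensional Krylov space."""
    N = len(phi)
    Q = np.zeros((N, m), dtype=complex)
    h = np.zeros((m, m), dtype=complex)
    beta = np.linalg.norm(phi)
    Q[:, 0] = phi / beta
    for k in range(1, m):
        w = H @ Q[:, k - 1]
        for j in range(k):                  # Gram-Schmidt against previous vectors
            h[j, k - 1] = np.vdot(Q[:, j], w)
            w -= h[j, k - 1] * Q[:, j]
        h[k, k - 1] = np.linalg.norm(w)
        Q[:, k] = w / h[k, k - 1]
    w = H @ Q[:, m - 1]                     # last column of h
    for j in range(m):
        h[j, m - 1] = np.vdot(Q[:, j], w)
    # Eq. (29): phi(t+dt) = Q exp(-i h dt) Q^dagger phi = beta * Q exp(-i h dt) e_1
    return beta * Q @ expm(-1j * h * dt)[:, 0]

# Toy hermitian "Hamiltonian"
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
H = 0.5 * (M + M.T)
phi = np.zeros(200); phi[0] = 1.0
out = krylov_step(H, phi.astype(complex), 0.05, m=20)
exact = expm(-1j * H * 0.05) @ phi
print(np.linalg.norm(out - exact))   # small for sufficiently many Krylov vectors
```

For a hermitian matrix the small representation $\mathbf{h}$ is (numerically) tridiagonal, so the exponential in the last line is cheap to evaluate by direct diagonalization.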
The choice of the optimal value of the time step and of the corresponding dimension of the Krylov space is therefore the result of a compromise aimed at reducing the computer time.\\ Used in this way as an explicit approach, the Arnoldi method requires only matrix-vector and scalar products, which makes it time efficient, as is the case for Fatunla's method. Furthermore, this particular scheme is norm-conserving, which provides a basic check of the propagation, even though it also means that the norm cannot easily be used to quantify the numerical error. In the following sections the accuracy of both methods is tested in various situations by using a high-order predictor-corrector method. \subsection{A predictor-corrector method} \label{predcor} In this subsection, we briefly describe the predictor-corrector (P-C) scheme we use to test the accuracy of both explicit methods described above. The predictor is either Fatunla's or Arnoldi's algorithm. The corrector is a fully implicit method of Runge-Kutta type which, here, is a four-stage Radau IIA method of order 7. \\ In a general Runge-Kutta method, the numerical solution $\mathbf{\Psi}_{n+1}$ of Eq.(\ref{eq8}) at a given time $t=t_{n+1}$ is obtained from the solution $\mathbf{\Psi}_n$ at time $t=t_n$ as \begin{equation} \boldsymbol{\Psi}_{n+1}=\boldsymbol{\Psi}_{n}+\delta t \sum_{i=1}^{s}b_{i}f(t_{i},\mathbf{Y}_{i}), \label{eq30} \end{equation} where $\delta t$ is the time step and $f(t_{i},\mathbf{Y}_{i})=-\mathrm{i} \mathbf{B}^{-1}\mathbf{H}(t_{i})\mathbf{Y}_{i}$ with $t_{i}=t_{n}+c_{i}\delta t$. $b_{i}$ and $c_{i}$ are coefficients defining the Runge-Kutta method for a number $s$ of stages. We assume that the solution vector $\mathbf{\Psi}$ is of dimension $N$. The quantities $\mathbf{Y}_{i}$ estimate the solution $\mathbf{\Psi}$ at the intermediate times $t_i$.
They are obtained by solving the following linear ($sN\times sN$) system of equations \begin{equation} \mathbf{Y}_{i}=\boldsymbol{\Psi}_{n}+\delta t \sum_{k=1}^{s}a_{ik}f(t_{k},\mathbf{Y}_{k}), \label{eq31} \end{equation} where the $a_{ik}$ are again given by the method. Solving such a system represents the main difficulty of an implicit Runge-Kutta scheme. If this scheme is used for the corrector, we could in principle avoid solving such a system of equations by using an iterative procedure in which we replace the vector $\mathbf{Y}_k$ in the right hand side of Eq.(\ref{eq31}) by $\mathbf{Y}^{(j-1)}_k$, where $j$ gives the order of the iteration process. At order $0$, $\mathbf{Y}^{(0)}_k$ is provided by the predictor. However, such an iterative procedure is not stable. Instead, we follow a different iterative procedure that has been developed by van der Houwen and Sommeijer \cite{Houwen1,Houwen2}. By introducing a diagonal matrix whose entries are calculated to guarantee optimum stability properties, they transform the ($sN\times sN$) system (\ref{eq31}) into a set of uncoupled ($N\times N$) systems of equations that can be solved in parallel at each iteration. More precisely, they rewrite Eq.(\ref{eq31}) as follows \begin{equation} \mathbf{Y}_{i}^{(j)}-\delta t \, d_{ii} \, f(t_{i},\mathbf{Y}_{i}^{(j)})=\boldsymbol{\Psi}_{n}+ \delta t \sum_{k=1}^{s}(a_{ik}-d_{ik})f(t_{k},\mathbf{Y}_{k}^{(j-1)}), \label{eq32} \end{equation} where the $d_{ik}$ are the entries of the diagonal matrix. The iterations in Eq.(\ref{eq32}) start with $\mathbf{Y}_{i}^{(0)}$, provided by the predictor. The iteration scheme is performed up to a value $j=$\textbf{max}$_{cor}$ for which we have convergence. Then we can replace $\mathbf{Y}_{i}=\mathbf{Y}_{i}^{(\textbf{max}_{cor})}$ in Eq. (\ref{eq30}) to obtain the solution at $t=t_{n+1}$.
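The iteration (\ref{eq32}) can be illustrated on a scalar linear problem. For brevity, the sketch below uses the two-stage Radau IIA method of order 3 rather than the four-stage order-7 scheme, and it takes $d_{ii}=a_{ii}$ as a simple illustrative choice; the actual diagonal entries of \cite{Houwen1,Houwen2} are optimized for stability.

```python
import numpy as np

# Two-stage Radau IIA (order 3); the paper uses the 4-stage order-7 variant
a = np.array([[5/12, -1/12], [3/4, 1/4]])
b = np.array([3/4, 1/4])
d = np.diag(np.diag(a))           # illustrative diagonal matrix, d_ii = a_ii

lam = -1.0                        # scalar linear test problem y' = lam * y

def pc_step(y, dt, max_cor=50):
    """One step of the iterated scheme, Eqs. (30)-(32), for f = lam*y.
    Each uncoupled stage equation is solved exactly since f is linear."""
    Y = np.full(2, y, dtype=complex)                  # crude predictor Y^(0) = y
    for _ in range(max_cor):
        rhs = y + dt * (a - d) @ (lam * Y)            # right-hand side of Eq. (32)
        Y = rhs / (1.0 - dt * np.diag(d) * lam)       # uncoupled stage solves
    return y + dt * b @ (lam * Y)                     # Eq. (30)

y, dt = 1.0 + 0j, 0.1
for _ in range(10):
    y = pc_step(y, dt)
print(abs(y - np.exp(lam * 1.0)))   # small: the iterated scheme is order 3
```

Once the iteration has converged, the result coincides with the fully implicit Radau solution; in the full problem each stage solve is an $(N\times N)$ linear system rather than a scalar division.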
Once we have calculated $\mathbf{\Psi}_{n+1}$, we can evaluate its norm and use its conservation as a criterion to monitor the size of the time step.\\ \begin{figure}[h] \begin{center} \includegraphics[width=11cm,height=8cm]{fig4.pdf} \end{center} \caption{(Color online) Number of iterations in the Bi-CGSTAB and their multiplicities during the propagation with the electromagnetic pulse. The parameters of the Gaussian model problem are $V_{0}=4$ a.u., $\beta=0.1$ a.u., with a pulse of peak intensity $I=10^{16}$ Watt/cm$^2$, photon energy $\omega=0.5$ a.u. and duration of 8 optical cycles.} \label{fig4} \end{figure} \begin{figure}[b!] \begin{center} \includegraphics[width=11cm,height=8cm]{fig5.pdf} \end{center} \caption{(Color online) Time step evolution for the P-C method during the propagation with the electromagnetic pulse. The parameters of the Gaussian model problem are the same as in Fig.\ref{fig4}.} \label{fig5} \end{figure} Using the P-C method requires solving a large number of $(N\times N)$ systems of equations. To solve these systems, we use an iterative method known as the biconjugate gradient stabilized method (Bi-CGSTAB) \cite{Vorst}. In order to reduce drastically the number of iterations, we use a pre-conditioner based on an incomplete LU factorization. In Fig.\ref{fig4}, we consider the case of our one-dimensional Gaussian model potential with $V_0=4$ a.u. and $\beta=0.1$ a.u. and a pulse of frequency $\omega=0.5$ a.u., duration of 8 optical cycles and peak intensity $I=10^{16}$ Watt/cm$^2$. We show, in Fig.\ref{fig4}, the multiplicity as a function of the number of iterations in the Bi-CGSTAB during the interaction. By multiplicity, we mean the number of times a given number of iterations is repeated during the whole propagation.
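In SciPy, the preconditioned Bi-CGSTAB solver can be sketched as follows on a toy sparse system; the tridiagonal matrix is only a stand-in for the actual stage matrices of the corrector.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy sparse system standing in for one of the (N x N) stage systems
N = 500
rng = np.random.default_rng(2)
A = sp.diags([np.full(N - 1, -1.0), 4.0 + rng.random(N), np.full(N - 1, -1.0)],
             offsets=[-1, 0, 1], format="csc")
b = rng.standard_normal(N)

# Incomplete LU factorization used as a pre-conditioner
ilu = spla.spilu(A)
M = spla.LinearOperator((N, N), matvec=ilu.solve)

x, info = spla.bicgstab(A, b, M=M)   # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

With a good pre-conditioner the iteration count stays small, but the extra factorization and triangular solves have a cost of their own, which is the trade-off discussed below.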
It can be seen that without a pre-conditioner the number of iterations can grow significantly before reaching convergence, while with the preconditioned Bi-CGSTAB the number of iterations stays below five. This reduces the computational time needed by 25\%. However, care must be taken when including a pre-conditioner since, by increasing the number of operations, it may increase the computational time even though it accelerates convergence. As mentioned above, the corrector scheme is iterated up to $j=$\textbf{max}$_{cor}$, where convergence is achieved. In Fig.\ref{fig5} we plot the evolution of the time step during the propagation for different values of the maximum number of iterations \textbf{max}$_{cor}$ in the corrector. We see that, as we increase this maximum number of iterations, the value of the time step becomes larger. The relative error in the norm, which is the same for all the calculations, is of the order of $10^{-11}$. Moreover, the computational time with \textbf{max}$_{cor}$=100 is half the time consumed for \textbf{max}$_{cor}$=10, because it allows the use of a much larger time step. It is therefore advisable to use large values of \textbf{max}$_{cor}$ to speed up the calculations. \section{Results} \subsection{Model potential} In this section we first present results for our case study of the one-dimensional Gaussian potential taking $V_0=1$ and $\beta=1$. The electron wavepacket is expanded in a basis of 200 B-splines and we use the time scaled coordinate method during the propagation \cite{Hamido}. \begin{figure}[!h] \begin{center} \includegraphics[width=10cm,height=6.6cm]{fig6.pdf} \end{center} \caption{(Color online) Energy distribution for the Gaussian model potential. The time propagation uses Fatunla's propagator with adaptive time step and Arnoldi's propagator with five Krylov vectors and a fixed time step $\delta t = 0.3$ a.u.
The parameters of the model problem are $V_0=1$ and $\beta=1$ with a pulse of peak intensity $I=10^{14}$ Watt/cm$^2$, photon energy $\omega=0.7$ a.u. and a duration of $10$ optical cycles. The relative difference between both curves is of the order of $10^{-3}$.} \label{fig6} \end{figure} We run our codes on an Intel Xeon 5140 processor at 2.33 GHz (32 GB RAM). We choose a pulse of frequency $\omega=0.7$ a.u. and a full duration of 90 a.u. of time, which corresponds to 10 optical cycles. The peak intensity is $I=10^{14}$ Watt/cm$^2$. \begin{figure}[!h] \begin{center} \includegraphics[width=9.5cm,height=6.8cm]{fig7.pdf} \end{center} \caption{(Color online) Energy distribution for the Gaussian model potential. The time propagation uses Fatunla's propagator with adaptive time step and Arnoldi's propagator with $20$ Krylov vectors and a fixed time step $\delta t = 0.1$ a.u. The parameters of the model problem are $V_0=1$ and $\beta=1$ with a pulse of peak intensity $I=10^{14}$ Watt/cm$^2$, photon energy $\omega=0.1$ a.u. and duration of $10$ optical cycles. The relative difference between both curves is of the order of $10^{-3}$.} \label{fig7} \end{figure} \begin{figure}[!b] \begin{center} \includegraphics[width=12cm,height=10cm]{fig8.pdf} \end{center} \caption{Energy distribution for the Gaussian model potential. The time propagation uses Fatunla's propagator with adaptive time step and the predictor-corrector method. The parameters of the model problem are $V_0=4.0$ and $\beta=0.1$ with a pulse of peak intensity $I=10^{16}$ Watt/cm$^2$, photon energy $\omega=0.5$ a.u. and duration of $8$ optical cycles.} \label{fig8} \end{figure} In Fig.\ref{fig6}, the energy distribution is calculated by propagating the scaled wavepacket to a stationary state until a time of 1500 a.u., when convergence is achieved. The results shown are obtained using the two explicit propagators.
Both methods converge to the same result, but Fatunla's propagator uses 2.3 s of computer time with an adaptive time step while Arnoldi's propagator, using five Krylov vectors and a fixed time step $\delta t = 0.3$ a.u., takes 6.2 s. For these values of intensity and frequency both methods easily give the correct result. However, Arnoldi's method performs poorly from a computer-time point of view. This can be understood by referring to Fig.\ref{fig1}, where we show that Fatunla's propagator allows the use of ever larger time steps, particularly after the end of the pulse, while Arnoldi's propagator keeps the same time step throughout the propagation.\\ \begin{figure}[tbp] \begin{center} \includegraphics[width=12cm,height=10cm]{fig9.pdf} \end{center} \caption{Energy distribution for the Gaussian model potential. The time propagation uses Arnoldi's propagator with fixed time step and the P-C method with adaptive time step. The parameters of the Gaussian model problem are as in Fig.\ref{fig8}.} \label{fig9} \end{figure} To check how these methods behave in a more challenging case, we consider the same model potential with a pulse of frequency 0.1 a.u. with the same number of optical cycles and peak intensity. In this case, the total pulse duration is equal to 630 a.u. We see in Fig.\ref{fig7} that both methods give identical results. These results are obtained after propagating the wavepacket up to a time of 2500 a.u. The running time with Fatunla's propagator is equal to 958.27 s while in this case Arnoldi's propagator performs better, using 553.69 s for a subspace of 20 Krylov vectors and a fixed time step $\delta t = 0.1$ a.u. In general, Arnoldi's propagator performs better than Fatunla's propagator for long pulses.\\ To further probe these methods we increase the number of bound states supported by our potential by choosing ${V_0} = 4$ and $\beta=0.1$. The pulse has a frequency $\omega=0.5$ a.u.
with a duration of 100.53 a.u. that corresponds to 8 optical cycles. The peak intensity is $I=10^{16}$ Watt/cm$^2$. In this case, we use 1700 B-splines to propagate the wavepacket up to a time of 5000 a.u. Fig.\ref{fig8} shows the energy distribution obtained using Fatunla's propagator (straight line) and the predictor-corrector scheme (squares), which is used to test the accuracy of Fatunla's method. Comparison of these two methods shows that Fatunla's propagator remains accurate down to values of $10^{-5}$ in the energy distribution. The TSC approach is used with an asymptotic scaling factor of 0.1. Fatunla's propagator takes 379.36 s while the P-C method with adaptive time step takes 1801.49 s. Fatunla's method thus uses remarkably less computer time and is adequate as long as no more than six digits of accuracy are required. Fig.\ref{fig9} shows the energy distribution obtained with Arnoldi's propagator for the same parameters as in Fig.\ref{fig8}. We show the results obtained with Arnoldi's approach for two different fixed time steps and compare these results with those obtained with the P-C scheme. The Krylov subspace contains 20 vectors and the wavepacket is again propagated up to 5000 a.u. The circles show the results for a time step $\delta t=0.03$ a.u. and the stars for $\delta t=0.3$ a.u. We note that increasing the time step leads to less accurate results by comparison with the P-C method. The Arnoldi scheme takes 5957.19 s for a time step of 0.03 a.u. and 598.11 s for a time step of 0.3 a.u.\\ \subsection{Hydrogen Atom} We now apply these methods to the more complex case of the interaction of the hydrogen atom with a cosine square laser pulse. We use a spectral method based on the expansion of the wavefunction in a basis of Coulomb Sturmian functions \cite{Madronero}, without implementing the TSC method. Unless otherwise stated, we performed all calculations on a laptop (with an INTEL core 2 duo processor of 2.4 GHz).
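For reference, a cosine square pulse of given peak intensity can be generated as sketched below. The envelope convention and the standard conversion $I\,[\mathrm{W/cm^2}]\simeq 3.51\times 10^{16}\,E_{0}^{2}$ (with $E_{0}$ in a.u.) are assumptions of this sketch and may differ in detail from the pulse definition used in our codes.

```python
import numpy as np

def cos2_pulse(t, I_wcm2, omega, n_cycles):
    """Electric field of a cosine-square pulse in atomic units.
    Assumed form: E(t) = E0 cos^2(pi*(t - T/2)/T) cos(omega*t) on [0, T]."""
    E0 = np.sqrt(I_wcm2 / 3.51e16)         # peak field amplitude in a.u.
    T = n_cycles * 2.0 * np.pi / omega     # total pulse duration
    env = np.where((t >= 0) & (t <= T),
                   np.cos(np.pi * (t - T / 2) / T) ** 2, 0.0)
    return E0 * env * np.cos(omega * t)

# 10 cycles at omega = 0.7 a.u. last about 90 a.u., as quoted in the text
t = np.linspace(0.0, 90.0, 1000)
E = cos2_pulse(t, 1e14, 0.7, 10)
print(round(2 * np.pi * 10 / 0.7, 1))      # pulse duration, approximately 89.8 a.u.
```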
The first pulse we use has a frequency of 0.7 a.u., a duration of 10 optical cycles and an intensity $I=10^{14}$ Watt/cm$^2$, as in the case of Fig.\ref{fig6}. Under these rather simple conditions, we use 10 angular momenta. The non-linear parameter $\kappa$ of the Coulomb Sturmian functions is taken equal to 0.3 a.u. Fatunla's and Arnoldi's algorithms produce the converged energy distribution shown in Fig.\ref{fig10}. The calculations carried out with Fatunla's propagator and an adaptive time step take 10.50 s of computer time. The integration performed with Arnoldi's method takes 13.72 s; for a time step of $\delta t=0.05$ a.u. and 100 Coulomb Sturmian functions per angular momentum, it needs only 5 Krylov vectors. In this case Arnoldi's method is slower than Fatunla's method. This is related to the number of basis-set functions used. As this number increases, higher eigenvalues are generated in the Hamiltonian spectrum, thereby increasing the stiff character of the system of equations to solve. In that case, more Krylov vectors have to be included to maintain the accuracy of the results. \begin{figure}[!ht] \begin{center} \includegraphics[width=10cm,height=7cm]{fig10.pdf} \end{center} \caption{(Color online) Energy distribution resulting from the interaction of the hydrogen atom with a cosine square pulse. Fatunla's and Arnoldi's propagators are used. The pulse has a peak intensity I=10$^{14}$ Watt/cm$^2$, a frequency $\omega=0.7$ a.u. and a duration of 10 optical cycles. The basis set used is a set of 100 Coulomb Sturmian functions per angular momentum. Ten angular momenta are included and the non-linear parameter $\kappa$ of the Coulomb Sturmian functions is equal to 0.3.
The Arnoldi propagator uses 5 Krylov vectors and a time step of $\delta t = 0.05$ a.u. The relative difference between both curves is of the order of $10^{-3}$.} \label{fig10} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=10cm,height=6.8cm]{fig11.pdf} \end{center} \caption{(Color online) Energy distribution resulting from the interaction of the hydrogen atom with a cosine square pulse. Arnoldi's propagator is used. The pulse parameters are as in Fig.\ref{fig10}, using 100 Coulomb Sturmian functions per angular momentum. Ten angular momenta are included and the non-linear parameter $\kappa$ of the Coulomb Sturmian functions is equal to 0.3. For a time step of $\delta t = 0.05$ a.u., we compare results when 5 and 4 Krylov vectors are used.} \label{fig11} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=10cm,height=7.2cm]{fig14.pdf} \end{center} \caption{(Color online) Energy distribution resulting from the interaction of the hydrogen atom with a cosine square pulse. Arnoldi's propagator is used. The pulse has a peak intensity I=10$^{14}$ Watt/cm$^2$, a frequency $\omega=0.114$ a.u. and a duration of 20 optical cycles. We use a set of 600 Coulomb Sturmian functions per angular momentum. Ten angular momenta are taken into account and the non-linear parameter of the Coulomb Sturmian functions $\kappa=0.3$. The Arnoldi propagator uses 25 Krylov vectors and a time step of $\delta t = 0.05$ a.u. The relative difference between both curves is of the order of $10^{-3}$.} \label{fig14} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=10cm,height=7.2cm]{fig15.pdf} \end{center} \caption{(Color online) Energy distribution resulting from the interaction of the hydrogen atom with a cosine square pulse. Arnoldi's propagator is used. The pulse has a peak intensity I=10$^{14}$ Watt/cm$^2$, a frequency $\omega=0.0228$ a.u. and a duration of 4 optical cycles.
The basis set used is a set of 1200 Coulomb Sturmian functions per angular momentum. 80 angular momenta are taken into account and the non-linear parameter $\kappa$ of the Coulomb Sturmian functions is equal to 0.3. The Arnoldi propagator uses 70 Krylov vectors and a time step of $\delta t = 0.05$ a.u.} \label{fig15} \end{figure} In Fig.\ref{fig11} we illustrate the effect of reducing the number of Krylov vectors $n_k$ from 5 to 4. It is surprising to see that the propagator gives a completely flat spectrum when the dimension of the Krylov space is insufficient. Fig.\ref{fig11} shows that, for a basis set of 100 Coulomb Sturmian functions per angular momentum, accurate results for the energy distribution require a minimum of 5 Krylov vectors.\\ These calculations performed in a Coulomb Sturmian basis can be further tested by varying the non-linear parameter $\kappa$. If instead of $\kappa=0.3$ we use $\kappa=0.4$, all the other parameters remaining the same, we again obtain a completely flat energy distribution. By increasing the value of the non-linear parameter $\kappa$, the value of the eigenenergies increases, thereby increasing the stiff character of the problem. To successfully reproduce an accurate energy distribution we would now need to increase the number of Krylov vectors. If, on the other hand, we keep the value of $\kappa$ equal to 0.3 and increase the number of basis functions, converged results are only obtained when 8 Krylov vectors are used. The increase in the number of Coulomb Sturmians generates higher eigenenergies, thereby increasing again the stiff character of the system. The eigenvalues of the matrix \textbf{h} range from the eigenvalue of the initial state (by construction) to approximately the highest one of the matrix \textbf{H}. In summary, any change which results in a higher maximum eigenvalue of \textbf{H} necessitates an increase in the number of Krylov subspace vectors required for convergence.
It is interesting to note that to get an accurate spectrum, one of the eigenvalues of \textbf{h} must converge to 0.2, which corresponds to the position of the maximum of the spectrum, as expected from energy conservation ($0.2=-0.5+\omega$). If none of the eigenvalues converges to 0.2, the spectrum is completely flat because all the eigenvalues of \textbf{h}, which are usually very high except for the first one, do not contribute significantly to the spectrum. In addition, it is important to stress that decreasing the time step does not modify the minimum number of Krylov vectors to be used.\\ In Fig.\ref{fig14} we compare the performance of Arnoldi's and Fatunla's methods for a more difficult case. We consider a pulse of frequency $\omega=0.114$ a.u. and a duration of 20 optical cycles. The pulse intensity is the same as before, I=10$^{14}$ Watt/cm$^2$. We use a basis set of 600 Coulomb Sturmian functions per angular momentum. 10 angular momenta are included in the calculations and the non-linear parameter $\kappa=0.3$. Both energy distributions agree but Fatunla's scheme, which needs a very small time step, takes 66004 s of computer time while Arnoldi's method takes 1419 s with 25 Krylov vectors and a time step of 0.05 a.u. This case illustrates clearly that Arnoldi's algorithm copes in an efficient way with the stiffness of the problem by increasing the size of the Krylov subspace.\\ In Fig.\ref{fig15} we show results obtained for the challenging case of a pulse of very low frequency $\omega=0.0228$ a.u. and a duration of 4 optical cycles for the same intensity as before. To reproduce the energy distribution we need to use 1200 Coulomb Sturmian functions per angular momentum. 80 angular momenta are included in the calculations and the non-linear parameter $\kappa$ of the Coulomb Sturmian functions is equal to 0.3. For this rather stiff problem Arnoldi's algorithm has to include a minimum of 70 Krylov vectors for a time step $\delta t = 0.05$ a.u.
The calculation takes 24 hours on an 8-processor cluster using OpenMP. Fatunla's algorithm also reproduces the same energy distribution but the computer time used is more than four times larger. In fact, we observe that for larger scale problems where the degree of stiffness is important, Fatunla's method requires time steps that become prohibitively small thereby increasing the computational time. \subsection{Helium Atom} \begin{figure}[!ht] \begin{center} \includegraphics[width=12cm,height=8cm]{fig16.pdf} \end{center} \caption{Single ionization spectrum resulting from the interaction of a helium atom with a short cosine square pulse. Arnoldi's propagator is used. The pulse has a peak intensity of I=10$^{14}$ Watt/cm$^2$, a frequency $\omega=2.1$ a.u. and a duration of 6 optical cycles. The basis set consists of 140 B-spline functions of order 7 per electron angular momentum. The total angular momentum L=0,1,2 and the maximum value of the individual electron angular momentum is three. The box size is 150 a.u. The Arnoldi propagator uses 40 Krylov vectors.} \label{fig16} \end{figure} In this subsection, we show briefly results for the single ionization of helium by an intense electromagnetic pulse as an example of a more challenging problem. Following the remarks above, we perform here the calculations using only Arnoldi's algorithm. As mentioned before, Fatunla's algorithm is not accurate enough to calculate cross sections in various single ionization channels. The pulse has a peak intensity I=10$^{14}$ Watt/cm$^2$, a frequency $\omega=2.1$ a.u. and a duration of 6 optical cycles. The wavefunction is expanded in a basis set that uses 140 B-spline functions of order 7 per electron angular momentum \cite{Bachau}. Three values of the total angular momentum (L=0,1,2) are taken into account and the maximum value of the individual electron angular momentum is three. The box size is 150 a.u.
The step size during the interaction with the pulse is fixed at 0.01 a.u., while after the interaction the propagation uses a step size of 1 a.u. The calculations are performed with 40 Krylov vectors. It takes 31 hours to run on a cluster with 10 Intel Xeon L5520 2.26 GHz processors using MPI (Message Passing Interface) and 3 GB of RAM per processor. Fig.\ref{fig16} shows the results obtained for the energy distribution of the single ionization of helium. As expected, we observe a dominant peak at 1.2 a.u., which corresponds to energy conservation. The spectrum is obtained by projecting the wave packet after the end of the pulse on a product of a Coulomb wave of the screened nucleus times a bound state of He$^{+}$. \subsection{Quantum dot} In this last subsection, we consider a different problem where the choice of a very efficient explicit time propagator turns out to be crucial. The system under consideration is a model for a planar two-electron quantum dot with an anharmonic confining potential. The properties of quantum dots closely resemble those of atoms or molecules. Optical lattices, which can be viewed as an array of quantum dots, and well-established methods from semiconductor physics make quantum dots easily accessible. A confinement of the electrons to a two-dimensional plane is justified, in particular for solid state quantum dots, where the electron gas is localized on a parallel plane between two layers of different semiconductors. The Hamiltonian for this problem is of the form \begin{equation} H_{\varepsilon} = {H_1} + {H_2} + {V_{{\mathop{\rm int}} }},\label{eq33} \end{equation} where the indices $1$ and $2$ refer to the two electrons. ${V_{{\mathop{\rm int}} }} =\displaystyle{ \frac{1}{{\,{r_{1\,2}}}}}$, with $ {{r_{1\,2}}}$ being the inter-electronic distance.
The Hamiltonians $H_j$ are given by, \begin{equation} {H_j} = \frac{1}{2} \mathbf{p}_j^{{\kern 1pt} 2} + \frac{{{\omega ^2}}}{2} \mathbf{r}_j^{{\kern 1pt} 2} + \varepsilon {\left( { \mathbf{r}_j^{{\kern 1pt} 2}} \right)^2}, \label{eq34} \end{equation} with $\omega$ the harmonic frequency and $\varepsilon$ the strength of the anharmonic perturbation. $\mathbf{r}_j$ and $\mathbf{p}_j$ are the coordinate and momentum of electron $j$, respectively. For $\varepsilon\equiv 0$ our model coincides with the well-known Hooke's atom, which is separable in the centre-of-mass and relative coordinates. The Schr\"odinger equation can be regularized \cite{Schroeter1} using the Jacobian of a suitable parabolic coordinate transformation. We then write the resulting equation in terms of circular creation and annihilation operators. A set of selection rules is obtained determining the coupling between basis states and the matrix elements, according to the principal quantum numbers of the harmonic oscillators. The TDSE, \begin{equation} H\;\Psi \left( {{{\bf{r}}_{\,1}},{{\bf{r}}_{\,2}},t} \right) = \mathrm{i} \frac{\partial }{{\partial \,t}}\;\Psi \left( {{{\bf{r}}_{\,1}},{{\bf{r}}_{\,2}},t} \right), \label{eq35} \end{equation} is solved to obtain $\Psi \left( {{{\bf{r}}_{\,1}},{{\bf{r}}_{\,2}},t} \right)$, with $H$ given in Eq.(\ref{eq33}). The question of decoherence of these quantum states can be studied through the quantum fidelity, which gives the overlap of the solutions of the TDSE, with and without the potential ${V_{{\mathop{\rm anharmonic}} } =\displaystyle{ \varepsilon {\left(\left( { \mathbf{r}_1^{{\kern 1pt} 2}} \right)^2+\left( { \mathbf{r}_2^{{\kern 1pt} 2}} \right)^2\right)}}}$. The perturbation potential ${V_p =\displaystyle{ \left( { \mathbf{r}_1^{{\kern 1pt} 2}} \right)^2+\left( { \mathbf{r}_2^{{\kern 1pt} 2}} \right)^2}}$ breaks the separability of Hooke's atom.
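The anharmonic Hamiltonian and the fidelity just introduced can be illustrated on a one-dimensional, one-electron analogue of Eq.(\ref{eq34}), i.e.\ $H_\varepsilon = p^2/2 + \omega^2 x^2/2 + \varepsilon\, x^4$, built in the harmonic-oscillator eigenbasis with ladder operators. This is a hedged toy sketch with our own function names, not the regularized two-electron code described above; since the Hamiltonian is time independent, exact matrix exponentiation can be used here.

```python
import numpy as np
from scipy.linalg import expm

def hamiltonian(nbasis, omega, eps):
    """1D analogue of Eq. (34): H = p^2/2 + omega^2 x^2/2 + eps * x^4,
    as a matrix in the (truncated) harmonic-oscillator eigenbasis."""
    a = np.diag(np.sqrt(np.arange(1, nbasis)), 1)   # annihilation operator
    x = (a + a.T) / np.sqrt(2.0 * omega)            # position operator
    h0 = omega * np.diag(np.arange(nbasis) + 0.5)   # harmonic part (diagonal)
    return h0 + eps * np.linalg.matrix_power(x, 4)  # anharmonic perturbation

def fidelity(nbasis, omega, eps, t):
    """F_eps(t) = |<Psi_0(t)|Psi_eps(t)>|^2, starting from the ground
    state of the unperturbed oscillator (hbar = 1)."""
    H0 = hamiltonian(nbasis, omega, 0.0)
    He = hamiltonian(nbasis, omega, eps)
    psi0 = np.zeros(nbasis, dtype=complex)
    psi0[0] = 1.0                                   # unperturbed ground state
    pa = expm(-1j * t * H0) @ psi0                  # unperturbed evolution
    pb = expm(-1j * t * He) @ psi0                  # perturbed evolution
    return abs(np.vdot(pa, pb)) ** 2

# small-perturbation estimate of the susceptibility, from F ~ 1 - chi * eps^2
chi = (1.0 - fidelity(40, 1.0, 1e-2, 1.0)) / 1e-2 ** 2
```

The basis must be truncated at an `nbasis` large enough that the $x^4$ matrix is well represented for the states actually populated; this truncation is the 1D counterpart of the basis-size issues discussed for the full problem.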
We note that the Hamiltonian $H_{\varepsilon}$ in this case is not explicitly dependent on time and so it is different in nature from the Hamiltonians that we treated in previous examples. For a general Hamiltonian $H_0$ and a small real parameter $\varepsilon$ that represents the strength of the perturbation, we write \begin{equation} {H_\varepsilon } = {H_0} + \varepsilon {\kern 1pt} {V_{{\mathop{\rm p}} }}.\label{eq36} \end{equation} The quantum fidelity $F_\varepsilon$ at time $t$ is defined as, \begin{equation} {F_\varepsilon }\left( t \right) = {\left| {\left\langle {{\Psi _0}\left( t \right)} \right|\left. {{\Psi _\varepsilon }\left( t \right)} \right\rangle } \right|^2},\label{eq37} \end{equation} where $\Psi _\varepsilon$ and $\Psi _0$ are the quantum states propagated with Eq.(\ref{eq35}) for a perturbed and non-perturbed Hamiltonian, respectively. We can expand the quantum fidelity in terms of the perturbation parameter $\varepsilon$ \cite{Gorin}, as, \begin{equation} {F_\varepsilon }\left( t \right) = 1 - \chi \left( t \right){\varepsilon ^2} + O\left( {{\varepsilon ^4}} \right),\label{eq38} \end{equation} with $ \chi \left( t \right)$ being the quantum susceptibility. Taking the first two terms, we evaluate ${F_\varepsilon }\left( t \right)$ up to order ${\varepsilon ^2}$, which is valid for fidelities close to unity. \begin{figure}[!b] \begin{center} \includegraphics[width=12cm,height=8cm]{fig17.pdf} \end{center} \caption{(Color online) Susceptibility $ \chi \left( t \right)$ calculated for a quantum dot with $\omega=$1.0 a.u., using Arnoldi's propagator. We took 5 Krylov vectors and compared results for two different values of the time step.} \label{fig17} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=12cm,height=8cm]{fig18.pdf} \end{center} \caption{(Color online) Susceptibility $ \chi \left( t \right)$ calculated for a quantum dot with $\omega=$1.0 a.u., using Arnoldi's propagator.
We take two different combinations of time step and of the dimension of the Krylov subspace used, to illustrate the compromise between these two quantities.} \label{fig18} \end{figure} For our particular case $H_0=H_{\varepsilon=0}$ and consequently $V_{{\mathop{\rm p}} }=\displaystyle{ {\left(\left( { \mathbf{r}_1^{{\kern 1pt} 2}} \right)^2+\left( { \mathbf{r}_2^{{\kern 1pt} 2}} \right)^2\right)}}$. The observable calculated in this problem is the susceptibility $\chi \left( t \right)$ and we take the harmonic frequency to be $\omega=$1.0 a.u. and the perturbation parameter to be $\varepsilon=10^{-5}$. We study the evolution of the initial bound state of energy E=7 a.u. and vanishing angular momentum, singlet state with even parity \cite{Schroeter2}. The total number of functions in the basis set is 2370. The integration of the TDSE was first attempted using Fatunla's method. The stiffness of this problem forces the adaptive time step to become excessively small (of the order of $10^{-5}$) so that the computer time needed by the method becomes of the order of several days instead of seconds. Furthermore, the accuracy necessary to represent the effect of very small perturbations on the system could not be achieved. As a consequence we used Arnoldi's integrator, testing different combinations of the values of the time step and of the dimension of the Krylov subspace. It is worth stressing that the time evolution operator calculated within Arnoldi's method is essentially exact since the total Hamiltonian is time independent. However, the stiffness of the problem, which is very strong because of the anharmonic character of the potential, is expected to impose important constraints on the time step. In Fig.\ref{fig17} we show results for the quantum susceptibility using 5 Krylov vectors and two different time steps. This shows that, in order to get converged results, we need a time step of $\delta t = 10^{-4}$ a.u. or smaller, leading to a computational time of 4 hours.
The same calculation performed with the P-C method took 17 days, 8 hours and 29 minutes. Fig.\ref{fig18} shows results for the observable $\chi \left( t \right)$ under the same conditions as in Fig.\ref{fig17} but using Krylov subspaces of higher dimension ($n_k=7$ and $n_k=9$). For $n_k=7$ converged results were obtained with a step size of $\delta t = 5 \times10^{-4}$ a.u. leading to a computational time of 45 minutes. However, this figure illustrates the compromise to be achieved between the size of the time step used and the dimension of the Krylov subspace. For $n_k=9$, a time step of $\delta t = 10^{-3}$ a.u. leads to a calculation taking 30 minutes of computer time only. The choice of the time step and of the Krylov subspace dimension therefore needs to be balanced: one searches for the largest time step for which the propagation still converges, so that fewer iterations are required. These calculations performed with the P-C method take 9 days, 17 hours and 14 minutes for $n_k=7$ and $\delta t = 5 \times10^{-4}$ a.u., and 8 days, 5 hours and 43 minutes for $n_k=9$ and $\delta t = 10^{-3}$ a.u. The computer used in these calculations was a single core of an Intel(R) Core(TM) 2 Quad CPU Q9400 (2.66 GHz) with 8 GB main memory. \section{CONCLUSIONS} In this contribution, we addressed the problem of the numerical integration of the time-dependent Schr\"odinger equation describing physical processes whose complexity requires the use of state-of-the-art methods. The problem can be reduced to the solution of a system of first-order differential equations. The main difficulties we have to face are the size of the system and its stiff character, which results from the presence of very high energy eigenvalues in the Hamiltonian spectrum. These difficulties impose important constraints on the choice of the time propagator. Given the size of the system, this time propagator must be explicit.
This means that it involves only matrix-vector products instead of solving large systems of algebraic equations at each time step, as is the case for implicit methods. In addition, this propagator must have optimum stability and accuracy properties to cope with the stiffness of the system. We have analyzed and compared the performance of two one-step explicit time propagators, namely Fatunla's and Arnoldi's algorithms. It turns out that both of these methods share the same optimum stability properties. Nevertheless, we show that their accuracy properties differ significantly in most of the problems that we treat here. As a matter of fact, the accuracy of the method depends essentially on the stiffness of the system to be solved, which determines the appropriate choice of the propagator. \\ In all the problems considered here, the relative accuracy of Fatunla's method is always limited to about $10^{-6}$. In some cases, this might be sufficient but we should not forget that when the degree of stiffness increases, the adaptive time step becomes excessively small, making the method inapplicable. By contrast, highly accurate results are obtained with Arnoldi's algorithm in all cases treated here. However, for a given time step, there is a minimal number of Krylov vectors to take into account. If the actual number used is smaller than this minimal number, generally there is an abrupt transition and the results are wrong, giving a flat spectrum (in some cases this transition is not so abrupt but is rapid nevertheless). On the other hand, when the degree of stiffness is high, this minimal number may become very large, thereby imposing strong limitations on the applicability of the method. This is the case when the spacing between grid points becomes very small or, for spectral methods, when the size of the basis set is very large. In applying Arnoldi's scheme, it is therefore important to try to reduce the stiffness as much as possible.
An obvious way to achieve this is to move to the atomic basis in which the Hamiltonian is diagonal and to eliminate the highest energy eigenvalues which, in principle, do not play any physical role. In that case, however, the ac-Stark shift of the levels will not be evaluated accurately. In addition, our calculations in the case of the Gaussian potential model clearly show that the electron energy spectrum calculated with Arnoldi's algorithm deteriorates. \section{ACKNOWLEDGEMENTS} A.L.F. gratefully acknowledges the financial support of the IISN (Institut Interuniversitaire des Sciences Nucl\'eaires) through contract No. 4.4.504.10, "Atoms, ions and radiation; Experimental and theoretical study of fundamental mechanisms governing laser-atom interactions and of radiative and collisional processes of astrophysical and thermonuclear relevance". F.M.F. and P.F.O'M thank the Universit\'e catholique de Louvain (UCL) for financially supporting several stays at the Institute of Condensed Matter and Nanosciences of the UCL. They also thank the European network COST (Cooperation in Science and Technology) through the Action CM1204 "XUV/X-ray light and fast ions for ultrafast chemistry (XLIC)" for financing one short-term scientific mission at UCL. Computational resources have been provided by the supercomputing facilities of the UCL and the Consortium des Equipements de Calcul Intensif en F\'ed\'eration Wallonie Bruxelles (CECI) funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11. \newpage
\section{Introduction} In this paper we provide a computation algorithm that obtains a global solution for the maximum rank correlation (MRC) estimator using the mixed integer programming (MIP) approach. The MRC estimator was first proposed by \citet{han1987non} to estimate the generalized regression model: \eq{ y_i = D \circ F(x'_i\beta,\epsilon), \label{eq:generlaize-regression} } where $D:\mathbb{R}\mapsto\mathbb{R}$ is non-degenerate monotonic and $F:\mathbb{R}^2\mapsto\mathbb{R}$ is strictly monotonic in each argument. The object of interest is the linear index parameter $\beta$. The model is general enough to include a binary choice model, a censored regression model, and a proportional hazards model as special cases. \citet{han1987non} proposed to estimate $\beta$ by maximizing Kendall's rank correlation coefficient: \eq{ \what{\beta} = \argmax_{\beta \in \mathcal{B}} \frac{1}{n(n-1)}\sum_{i=1}^n \sum_{j\neq i} \indf{x_i' \beta > x_j' \beta } \indf{y_i > y_j }, \label{eq: obj-mrc1} }where $\indf{\cdot}$ is an indicator function. He showed the consistency of the MRC estimator, and \citet{sherman1993limiting} later proved the $\sqrt{n}$-consistency and asymptotic normality. The flexible model structure leads to various extensions of the MRC estimator: for example, a quantile index model (\cite{khan2001two}), a generalized panel model (\cite{abrevaya2000rank}), a rank estimation of a nonparametric function (\cite{chen2002rank}), a functional coefficient model (\cite{shin2010local}), a random censoring model (\cite{khan2007partial}), and a partial linear model (\cite{abrevaya2011rank}). There exist various semiparametric estimators in the class of single-index models (see, for example, the recent work by \citet{ahn2018simple} and the references therein). Compared to them, the MRC estimator has the following advantages.
First, it does not require any bandwidth selection since it does not involve any nonparametric estimation components. Second, it can be applied to various models without much modification (see, e.g.,~the references above and \citet*{khan2019inference} for the multinomial models). Finally, it is \emph{point robust} in the sense that it provides a nontrivial identified set that includes the true parameter value when sufficient conditions for point identification are not satisfied (see the discussion in \citet{khan2018discussion} for details). Implementing the MRC estimator poses computational challenges in practice, where the grid search method is not feasible. First, the objective function in \eqref{eq: obj-mrc1} is not differentiable in $\beta$ and we cannot apply a gradient-based optimization algorithm. Second, the objective function is not concave. Therefore, any solution found by a numerical algorithm may be only a local optimum rather than a global one. This difficulty is well described in \citet{chay1998estimation}, where they apply Powell's conjugate directions method, the simplex method with multiple starting points, and the piece-wise grid search method repeatedly to achieve a better solution in the empirical application. Even after these repeated searches, we are not sure whether the current solution is the global optimum. Finally, the objective function is a second-order U-process and requires $O(n^2)$ computations for a single evaluation. \citet{abrevaya1999computation} shows that the computation order can be reduced to $O(n\log n)$ by adopting the binary search tree structure. However, the fundamental local solution issue still remains. The contribution of this paper is twofold. First, we propose a new computation algorithm that guarantees a global solution of the MRC estimator. We achieve this goal by transforming all indicator functions into binary parameters to be estimated along with additional constraints.
We show that the proposed mixed integer programming (MIP) problem is equivalent to the original optimization problem. Although MIP is still an NP-hard (non-deterministic polynomial-time) problem (see, e.g.~\citet{wolsey1998integer} and \citet{johnson1978densest} for details), we use a modern MIP solver and confirm that it is feasible to get the solution within a reasonable time budget. The additional advantage of the MIP approach is that it provides us with the gap between the objective function value at the current best solution and the bound of the possible global maximum at any time point of the computation procedure. By this MIP gap, we can measure the quality of the interim solution when the time limit prevents us from waiting for the convergence of the procedure. Second, we consider an application of the best subset rank prediction and analyze the prediction performance. Building on \citet{chen2018best}, we derive a non-asymptotic bound of the tail probability of the predictive performance measure. Since the objective function is defined as a second-order U-process, we develop a new technique to derive the finite-sample tail probability bound for higher-order U-processes. We review some related literature. The MIP procedure has recently been adopted in various applications in econometrics and statistics. \cite{florios2008exact} show that the maximum score estimator of \citet{manski1975maximum} can be reformulated as an MIP structure. \citet*{bertsimas2016best} consider the best subset selection problem and show that the MIP algorithm outperforms other penalty based methods in terms of achieving sparse solutions with good predictive power. \citet{chen2018best, chen2018exact} investigate the binary prediction problem with variable selection and the instrumental variable quantile regression in the MIP formulation. \citet{kitagawa2018should} apply the MIP procedure when they estimate the personalized optimal welfare policy.
Finally, \citet{lee2018factor} develop an MIP computation algorithm to estimate a two-regime regression model when the regime is determined by multi-dimensional factors. To the best of our knowledge, however, this is the first paper in the literature to apply the MIP approach when the objective function is defined as a higher-order U-process. The remainder of the paper is organized as follows. In Section 2, we propose the MIP computation algorithm for the maximum rank correlation estimator. We show that the proposed algorithm is equivalent to the original optimization problem and illustrate how it can achieve a feasible solution. In Section 3, we consider the best subset rank prediction problem and derive the non-asymptotic tail probability bound of the performance measure. In Section 4, we demonstrate the superior performance of the proposed MIP algorithm by applying it to the female labor participation data of \cite{mroz1987sensitivity}. Additional numerical evidence is provided through Monte Carlo simulation studies in Section 5. We provide some concluding remarks in Section 6. \section{Exact Computation via Mixed Integer Optimization} In this section we describe the computational challenges of the maximum rank correlation (MRC) estimator and propose a new algorithm to compute a global solution of it. We illustrate the advantage of the new algorithm by investigating a simple numerical example. We first discuss the computational difficulties of the MRC estimator. Recall that the MRC estimator is defined as follows: \eq{ \what{\beta} = \argmax_{\beta \in \mathcal{B}} \frac{1}{n(n-1)} \sum_{i=1}^n \sum_{j\neq i} \indf{x_i' \beta > x_j' \beta} \indf{y_i > y_j}, \label{eq:obj-mrc} }where $\mathcal{B}$ is the parameter space of $\beta$ and $\indf{\cdot}$ is an indicator function. Note that the objective function is neither differentiable nor concave in $\beta$.
Furthermore, it is defined as a second-order U-process, which requires $O(n^2)$ computations for each evaluation of a candidate parameter value.\footnote{\citet{abrevaya1999computation} proposes a nice algorithm that reduces the computation order to $O(n\log n)$ by using the binary search tree. However, it still does not guarantee the global solution.} As a result, we cannot apply any gradient-based optimization algorithm. Researchers usually adopt a simplex-based algorithm such as the Nelder-Mead method in MRC applications. However, it is difficult to get the global solution even with multiple starting points since the objective function is not globally concave. A grid search algorithm would give more robust solutions but the curse of dimensionality makes it infeasible in most cases when the dimension of $x$ is larger than 2. In this paper we propose an alternative computational algorithm that is based on the MIP procedure. Let $x_{ij} := x_i - x_j$ be a pairwise difference of $x_i$ and $x_j$. Let $\eps$ be a small positive number, e.g.\ $\eps=10^{-6}$, to denote an effective zero. Consider the following mixed integer programming problem: for $i,j =1,\ldots, n$ and $i \neq j$ \eq{ &\left(\what{\beta}, \left\{\what{d}_{ij}\right\}\right) = \argmax_{\beta, \{d_{ij}\}} \frac{1}{n(n-1)}\sum_{i=1}^n \sum_{j\neq i} d_{ij} \indf{y_i > y_j} \label{eq:obj-mio}\\ &\mbox{subject to} \notag \\ & \hskip60pt \beta \in \mathcal{B} \\ & \hskip60pt (d_{ij} - 1) M_{ij} < x_{ij}'\beta\le d_{ij} M_{ij} \label{eq:const2} \\ & \hskip60pt d_{ij} \in \{0,1\} \label{eq:const3} }where $ M_{ij} = \max_{\beta \in \mathcal{B}} \left\vert x_{ij}' \beta \right\vert + \eps $.
\footnote{Note that $M_{ij}$ is not a user-chosen tuning parameter as it is the empirical bound of $\vert x_{ij}'\beta\vert$ determined by the parameter space and the data.} Since the objective function in \eqref{eq:obj-mio} is a linear function of the binary variables $\{d_{ij}\}$, the formulation becomes a mixed integer linear programming problem. We check the equivalence between the original problem in \eqref{eq:obj-mrc} and the MIP problem in \eqref{eq:obj-mio}--\eqref{eq:const3}. Consider that the MIP problem chooses $\what{d}_{ij}=1$ for some $i,j$. Then, the constraint \eqref{eq:const2} implies that the estimate for $\what{\beta}$ should satisfy $0<x_{ij}'\what{\beta} \le M_{ij}$, which is equivalent to $x_i'\what{\beta} > x_j'\what{\beta}$ for a large enough $M_{ij}$. Similarly, $\what{d}_{ij}=0$ is equivalent to $x_i'\what{\beta} \le x_j'\what{\beta}$. In sum, the constraint forces ${d}_{ij}=1\{x_{ij}'\beta >0\} = 1\{x_i'\beta > x_j'\beta\}$ given any $\beta \in \mathcal{B}$. Therefore, we can compute the global solution $\what{\beta}$ for \eqref{eq:obj-mrc} by solving the equivalent MIP problem in \eqref{eq:obj-mio}--\eqref{eq:const3}. These two optimization problems give us the same numerical results but the MIP procedure has a clear computational advantage over the original problem, which is illustrated below. Modern numerical solvers such as CPLEX and Gurobi make it possible to solve a large-scale MIP problem by adopting branch-and-bound type approaches. We provide a heuristic explanation of how a vanilla branch-and-bound algorithm reduces the computational burden, followed by a numerical example. Consider a binary tree representation for all possible values of $\{d_{ij}\}$ (for example, see Figure \ref{fig:binary-tree}). A bottom node of the tree represents a different possible solution for $\{d_{ij}\}$, and $\beta$ can be easily solved by the linear programming procedure since $d_{ij}$ is fixed there.
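The equivalence just established can be verified numerically on a tiny one-dimensional example: a grid search over the original objective \eqref{eq:obj-mrc} and a brute-force enumeration of the feasible binary patterns $\{d_{ij}\}$ attain the same maximum. The sketch below uses hypothetical data and, since only one coefficient is free, replaces the LP feasibility step with a simple interval intersection; it is illustrative only, not the MIP solver itself.

```python
import numpy as np
from itertools import product

# Hypothetical data; the first coefficient is normalized to 1 and beta_2 in [-5, 5]
y = np.array([2.0, 1.0, 0.0])
X = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 0.0]])
LO, HI = -5.0, 5.0
n = len(y)

# pairwise differences x_ij = x_i - x_j for the pairs with y_i > y_j
pairs = [(i, j) for i in range(n) for j in range(n) if i != j and y[i] > y[j]]
coef = [(X[i, 0] - X[j, 0], X[i, 1] - X[j, 1]) for i, j in pairs]

def grid_max():
    """Direct grid maximization of the original rank-correlation objective."""
    grid = np.linspace(LO, HI, 4001)
    return max(sum(a + b * t > 0 for a, b in coef) for t in grid) / (n * (n - 1))

def enum_max():
    """MIP-style enumeration: keep only binary patterns d consistent with
    some beta_2 in [LO, HI] (interval intersection plays the LP role here)."""
    best = 0
    for d in product([0, 1], repeat=len(coef)):
        lo, hi, ok = LO, HI, True
        for dp, (a, b) in zip(d, coef):
            # d = 1 requires a + b*beta_2 > 0; d = 0 requires a + b*beta_2 <= 0
            if b > 0:
                lo, hi = (max(lo, -a / b), hi) if dp else (lo, min(hi, -a / b))
            elif b < 0:
                lo, hi = (lo, min(hi, -a / b)) if dp else (max(lo, -a / b), hi)
            else:
                ok = ok and ((a > 0) == bool(dp))
        if ok and lo < hi:
            best = max(best, sum(d))
    return best / (n * (n - 1))
```

Both routines return the same maximized value, illustrating that maximizing over $(\beta, \{d_{ij}\})$ subject to the big-$M$ constraints reproduces the original maximum.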
However, we have $2^{n(n-1)}$ bottom nodes in total and the brute-force approach is still infeasible with a standard sample size. The branch-and-bound approach helps eliminate a partition of the final nodes systematically. Suppose that we are in a node located in the middle of the tree, where only a part of $\{d_{ij}\}$ is fixed. Let $U^*$ be the current best objective function value.\footnote{An initial solution can be achieved from the linear programming problem at any bottom node of the tree.} Now we solve a subproblem after relaxing all $\{d_{ij}\}$ that are not fixed by parent nodes into \emph{continuous} variables on the interval $[0,1]$. This relaxed subproblem can be solved easily by the linear programming procedure since it does not contain integer parameters anymore. There are two cases where we can reduce the computational burden. First, the objective function value of the relaxed problem, say $U^*_R$, is less than or equal to $U^*$. Since the objective function value of the original subproblem can never exceed that of the relaxed subproblem, we cannot achieve a better result than $U^*$ by solving any bottom nodes below the current node. Thus, we can drop all of them from the computation list. Second, $U^*_R > U^*$ and the solution of the relaxed problem satisfies the binary restriction for $\{d_{ij}\}$. This solution coincides with that of the original subproblem. Then, we update $U^* = U^*_R$ and can drop all bottom nodes below it from the computation list. While moving on to a child node, we solve a relaxed subproblem repeatedly and drop a partition of the bottom nodes from our computation list. We provide a simple numerical example to illustrate how the branch-and-bound algorithm works. \begin{example}\label{ex:bab} Consider a sample of $\{(y_i, x_{1i}, x_{2i})\}_{i=1}^4=\{(1,0,2), (0,1,0), (0,1,1), (0,0.5,2)\}$. We normalize $\beta_1=1$ and set the parameter space for $\beta_2$ as $[-5,5]$.
There are only three paired observations that satisfy the condition $\{y_i>y_j\}$ and the MIP problem becomes \eqs{ & \argmax_{\beta_2, d_{12}, d_{13}, d_{14}} \frac{1}{12} \left( d_{12} + d_{13} + d_{14} \right)\\ & \hskip10pt \mbox{subject to} \notag \\ & \hskip30pt \beta_2 \in [-5,5] \\ & \hskip30pt (d_{12} - 1) \cdot 11 < -1 + 2\beta_2 \le d_{12}\cdot 11 \\ & \hskip30pt (d_{13} - 1) \cdot 6 < -1 + \beta_2 \le d_{13}\cdot 6 \\ & \hskip30pt (d_{14} - 1) \cdot 1 < -0.5 \le d_{14}\cdot 1 \\ & \hskip30pt d_{12}, d_{13}, d_{14} \in \{0,1\}. } Figure \ref{fig:binary-tree} shows the binary tree representation and the brute-force approach requires solving 8 linear programming problems at the bottom nodes. We set $U^* = - \infty$ and solve the first relaxed subproblem at the child node of $d_{12}=1$ (the first right branch in Figure \ref{fig:binary-tree}). The solution for this relaxed subproblem is $(\beta_2, d_{13}, d_{14})=(5,1,0)$ with the objective function value $U^*_R=2/12$. Since $U^*_R>U^*$ and $(d_{13}, d_{14})$ satisfies the binary restriction, we update $U^*=2/12$ and drop all the nodes below $d_{12}=1$. We next look at the relaxed subproblem at $d_{12}=0$ (the first left branch in Figure \ref{fig:binary-tree}). A solution is $(\beta_2, d_{13}, d_{14})=(1/2, 11/12, 0)$ with the objective function value $U^*_R=23/144$. Since $U^*_R < U^*$, we can drop all the nodes below $d_{12}=0$. Recall that any objective function value from the bottom nodes under $d_{12}=0$ cannot be larger than $23/144$. Therefore, we achieve the solution by solving only two linear programming problems out of the total eight problems. \end{example} \begin{figure}[t] \caption{Binary Tree Representation of $\{d_{ij}\}$} \label{fig:binary-tree} \centerline{\includegraphics[scale=.9]{figures/mio_tree.eps}} \footnotesize Note: The triplet at the bottom of the decision tree denotes a possible choice for $(d_{12},d_{13},d_{14})$. For example, $010$ means $(d_{12},d_{13},d_{14})=(0,1,0)$.
\end{figure} Finally, we have some remarks on the implementation of the MIP procedure in \eqref{eq:obj-mio}--\eqref{eq:const3}. First, $M_{ij}$ can be computed by solving a separate linear programming problem for each pair and saving those values. Alternatively, we can also set a big number $M_{max}$ for all $i,j$, which is large enough to cover the absolute bound of the index $ x_{ij}'\beta$. Since the objective function is a second-order U-process, there are $O(n^2)$ values of $M_{ij}$ to compute; in our simulation studies it is usually faster to impose a constant $M_{max}$ for all $M_{ij}$ than to solve a linear programming problem for each $i,j$. Second, it is well-known in the numerical optimization literature that any strict inequality should be switched into a weak inequality with some numerical precision bound. Thus, we change the strict inequality in \eqref{eq:const2} into \eqs{ (d_{ij} - 1) M_{ij} + \eps \le x_{ij}'\beta\le d_{ij} M_{ij}. } Third, when there are many tied observations in the dependent variable, we can reduce the computation cost substantially by vectorizing paired observations and dropping the tied pairs, as we have observed in Example \ref{ex:bab}. \section{Best Subset Rank Prediction}\label{sec:theory} In this section we consider an application to a rank prediction problem. The goal is to find a linear index model that gives the best rank prediction of $y$ given $x$. The dimension of $x$ is potentially large and the model selection turns out to be the selection of the best predictors, i.e.~the best subset. We propose an $\ell_0$-constraint maximum rank correlation estimation procedure and show that the MIP method in \eqref{eq:obj-mrc}--\eqref{eq:const3} can be immediately extended to this estimation problem. Building on \citet{chen2018best}, we also provide a non-asymptotic bound on the rank prediction error. This bound implies that the dimension of $x$ can grow exponentially fast if the best subset size grows slowly enough, e.g.~at a polynomial rate.
Suppose that we have a training set of $\{(y_i,x'_i): i=1,\ldots,n\}$, where $y$ can be either discrete or continuous. We want to learn the rank prediction rule for $y$ as well as to select the best $s$ predictors among $x$'s. Let $x=(x_1,x_{-1}')$ be a $(p+1)$-dimensional covariate vector, where we know that $x_1$ should be included in the predictor set. Let $\Vert \cdot \Vert_0$ be the $\ell_0$-norm, i.e.\ $\Vert \beta \Vert_0$ is the number of non-zero elements of the vector $\beta$. For any $k\neq l$, we propose the following prediction rule: \eqs{ R_{\beta}(x_k,x_l) = 1\{ x_{1,k} + x_{-1,k}'\beta > x_{1,l} + x_{-1,l}'\beta \}, }where $R_{\beta}(x_k,x_l)=1$ implies that $y_k$ is predicted to be larger than $y_l$. When we are given the whole prediction set $\{x_l: l=1,\ldots, n_p\}$, the rank of $y_k$ is predicted by $\sum_{l=1}^{n_p} R_\beta(x_k,x_l)$. Let $P$ be the joint distribution of $(y,x)$ and $Q:=P\times P$ be the corresponding product measure. Then, we choose the prediction rule by maximizing a sample analogue of \eqs{ S(\beta) := Q\left[1\{y_k > y_l\} = R_{\beta}(x_k,x_l)\right]. } Recall that we also want to select the best $s$ predictors out of the total $p$ covariates of $x_{-1}$. Therefore, the prediction rule composed of the best $s$ predictors can be achieved by solving the following $\ell_0$-constraint optimization problem: \eq{ \max_{\beta \in \mathcal{B}_s} S_n(\beta), \label{eq:max_problem} }where $\mathcal{B}_s : = \{\beta \in R^p: \Vert \beta \Vert_0 \le s \}$ and \eq{ S_n(\beta) = \frac{2}{n(n-1)} \sum_{i =1}^n \sum_{j > i} 1\{ 1\{y_i> y_j \} = R_{\beta}(x_i,x_j) \}.\label{eq:def-of-S_n} } We evaluate the performance of the predictor by the following measure: \eqs{ U_n := S^*_{s} - S(\what{\beta}), }where $S^*_{s}:=\sup_{\beta \in \mathcal{B}_s} S(\beta)$ and $\what{\beta}$ is the solution of the constraint maximization problem defined in \eqref{eq:max_problem}--\eqref{eq:def-of-S_n} above.
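To make the prediction rule and the $\ell_0$-constrained maximization concrete, the following pure-Python sketch computes the sample rank score $S_n$ and brute-forces the best subset over a coarse grid. The function names, the grid, and the toy data are our own illustrative assumptions; the paper's actual procedure replaces the grid search by the exact MIP formulation.

```python
from itertools import combinations, product

def S_n(beta, y, X):
    """Sample rank score: fraction of pairs i < j where the predicted ordering
    1{x_i'b > x_j'b} matches 1{y_i > y_j}; the coefficient on x_1 is fixed at one."""
    idx = [x[0] + sum(b * v for b, v in zip(beta, x[1:])) for x in X]
    n = len(y)
    hits = sum((y[i] > y[j]) == (idx[i] > idx[j])
               for i in range(n) for j in range(i + 1, n))
    return 2.0 * hits / (n * (n - 1))

def best_subset_rank(y, X, s, grid):
    """l0-constrained maximizer over a coarse grid: for each size-s support of
    the p free coefficients, grid-search the active coordinates (the grid should
    contain 0.0 so that supports of size < s are covered as well)."""
    p = len(X[0]) - 1
    best_val, best_beta = -1.0, None
    for support in combinations(range(p), s):
        for vals in product(grid, repeat=s):
            beta = [0.0] * p
            for h, v in zip(support, vals):
                beta[h] = v
            val = S_n(beta, y, X)
            if val > best_val:
                best_val, best_beta = val, tuple(beta)
    return best_val, best_beta
```

On a toy sample whose ranking is driven entirely by $x_1$, the search recovers a sparse $\beta$ with $S_n=1$; the grid is only a stand-in for the exact optimization in the MIP problem below.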
Note that $U_n \ge 0$ by the definition of $S^*_{s}$ and that a good prediction rule results in a small value of $U_n$ with a high probability. In the next theorem, we provide a non-asymptotic bound on $U_n$. Let $a \vee b := \max\{a,b\}$ and $r_n:= s\ln(p\vee n) \vee 1$. \begin{theorem}\label{thm-main} Suppose that $s \ge 1$. For any $\sigma>0$, there exists a universal constant $D_{\sigma}$ such that \eq{ \Pr \left( U_n > 4\sqrt{\frac{D_{\sigma}r_n}{n}} \right) \le \exp(-2 \sigma r_n) \label{eq:thm1-main} }provided that \eq{ &(12s + 12) \ln (D_{\sigma} r_n) \le r_n + (24s+24) \ln2, \label{eq:thm1-con1}\\ &\left(8s + \frac{17}{2} \right) \ln (D_{\sigma} r_n) + (16s + 16)(9 \ln 2 +1) \le r_n. \label{eq:thm1-con2} } \end{theorem} Theorem \ref{thm-main} shows that the tail probability of $U_n$ decreases exponentially in $r_n$. The probability bound in \eqref{eq:thm1-main} is non-asymptotic and holds for every $n$ if the two inequality conditions \eqref{eq:thm1-con1}--\eqref{eq:thm1-con2} hold. Compared to the non-asymptotic bound of the best subset selection in \citet{chen2018best}, Theorem \ref{thm-main} requires an additional condition \eqref{eq:thm1-con2} to bound the second order degenerate U-process. However, focusing on the leading terms, we confirm that both conditions hold if \eq{ 12\left(\ln s + \ln D_{\sigma} + \ln\left(\ln(p \vee n)\right)\right) \le \frac{1}{2} \ln (p \vee n ). } Note that Theorem \ref{thm-main} implies that $E(U_n) = O(n^{-1/2}\sqrt{s\ln(p\vee n)}) = o(1)$ if $s\ln(p\vee n)=o(n)$. Therefore, the best subset rank prediction performs well even when $p$ grows exponentially provided that $s$ increases slowly, e.g.\ at a polynomial rate. We finish this section by formulating the $\ell_0$-constraint optimization problem as an MIP problem. Let $x_{-1,ij}:=x_{-1,i} - x_{-1,j}$ as before.
For $i,j=1,\ldots,n$, $i\neq j$, $h=1,\ldots,p$, we consider the following constraint MIP problem: \eq{ & \left(\what{\beta}, \left\{\what{d}_{ij}\right\}, \left\{\what{e}_{h}\right\}\right) = \argmax_{\beta, \{d_{ij}\}, \left\{{e}_{h}\right\}} \frac{2}{n(n-1)} \sum_{i =1}^n \sum_{j > i} \Big[ (1-\indf{y_i > y_j}) + (2\cdot \indf{y_i > y_j}-1) \cdot d_{ij} \Big] \label{eq:obj-predict}\\ &\mbox{subject to} \notag \\ & \hskip60pt (d_{ij} - 1) M_{ij} < (x_{1,i} - x_{1,j}) + x_{-1,ij}'\beta \le d_{ij} M_{ij} \label{eq:const1-predict} \\ & \hskip60pt e_h \underline{\beta}_h \le \beta_h \le e_h \overline{\beta}_h \label{eq:const2-predict} \\ & \hskip60pt \sum_{h=1}^p e_{h} \le s \label{eq:const3-predict} \\ & \hskip60pt d_{ij} \in \{0,1\} \label{eq:const4-predict} \\ & \hskip60pt e_{h} \in \{0,1\}, h \in \{1,\ldots,p\} \label{eq:const5-predict} }where $\underline{\beta}_h$ and $\overline{\beta}_h$ are the lower bound and the upper bound of $\beta_h$, respectively. The constraint MIP problem in \eqref{eq:obj-predict}--\eqref{eq:const5-predict} is equivalent to the original constraint optimization problem. The objective function in \eqref{eq:obj-predict} is numerically the same as $S_n$ since $d_{ij}$ is identical to $R_{\beta}$ for each $\beta$. Furthermore, the constraint \eqref{eq:const2-predict} makes $\beta_h=0$ whenever $e_h=0$. Thus, the $\ell_0$-norm constraint $\Vert \beta \Vert_0 \le s$ is achieved by the constraints \eqref{eq:const2-predict}, \eqref{eq:const3-predict} and \eqref{eq:const5-predict}. Note that the objective function can also be written in the familiar rank correlation form: \eqs{ \frac{2}{n(n-1)} \sum_{i=1}^n \sum_{j>i} \Big[ 1(y_i>y_j) d_{ij} + 1(y_i\le y_j) (1-d_{ij}) \Big], }which is equivalent to \eqref{eq:obj-predict}. \section{Empirical Illustration} In this section we illustrate the advantage of the MIP procedure in an empirical application.
We revisit the female labor force participation application in \cite{mroz1987sensitivity} and estimate the binary choice model using the generalized regression model in \eqref{eq:generlaize-regression}. Specifically, the composite functions are defined as $F(x'\beta,\eps):=x'\beta+\eps$ and $D(A):=1\{A>0\}$ so that it becomes a semiparametric binary choice model: \eqs{ y_i = 1\{x_i'\beta + \eps_i > 0\}, }where the distribution of $\eps_i$ is not specified. The parameter of interest is $\beta$ and we estimate it using the maximum rank correlation estimator defined in \eqref{eq:obj-mrc}. The outcome variable, $y_i$, is 1 if the woman participated in the labor force and 0 otherwise. We choose the following seven variables from the data for the covariate $x_i$: the number of kids younger than 6 ($kidslt6$), the number of kids aged between 6 and 18 ($kidsge6$), years of education ($educ$), family income minus her income ($nwifeinc$) in \$1,000, years of experience ($exper$), experience squared ($expersq$), and age ($age$). We randomly draw 100 observations out of 753 for this computation exercise. Table \ref{tb: summary} reports summary statistics of both samples and we confirm that there is not much difference in terms of the mean and the standard deviation of each variable. We normalize the coefficient of $kidslt6$ to be -1. Note that the grid search method is infeasible given the sample size and the number of regressors in this application. \begin{table}[thp] \begin{center} \caption{Summary Statistics} \label{tb: summary} \begin{tabular}{lrrrr} \hline Variable Names& Mean & Std. Dev. & Mean & Std. Dev.
\\ \hline & \multicolumn{2}{c}{\underline{Subsample}} & \multicolumn{2}{c}{\underline{Original Sample}} \\ Labor Participation & 0.55 & 0.50 & 0.57 & 0.50 \\ \\ kidslt6 & 0.22 & 0.52 & 0.24 & 0.52 \\ kidsge6 & 1.54 & 1.31 & 1.35 & 1.32 \\ educ & 11.74 & 2.12 & 12.29 & 2.28 \\ nwifeinc & 19.66 & 10.64 & 20.13 & 11.63 \\ exper & 10.83 & 8.30 & 10.63 & 8.07 \\ age & 42.92 & 8.15 & 42.54 & 8.07 \\ \hline Sample Size & \multicolumn{2}{c}{100}& \multicolumn{2}{c}{753}\\ \hline \end{tabular} \end{center} \footnotesize \renewcommand{\baselineskip}{11pt} \textbf{Note:} The data set is from \citet{mroz1987sensitivity}. The original sample was collected from the Panel Study of Income Dynamics in 1975. The variable names are explained in the main text. \end{table} Table \ref{tb: application} summarizes the estimation results. First, we estimate the model using the mixed integer programming procedure (MIP) with a time budget of 600 seconds. To compare its performance with the existing methods, we also estimate it using the following five methods: the Nelder-Mead simplex method with an initial value from OLS (Nelder-Mead 1), the Nelder-Mead method with multiple initial values until the time budget of 600 seconds is reached (Nelder-Mead 2), the iterative grid search method (Iter-Grid), the simulated annealing method (SANN), and the Markov Chain Monte Carlo (MCMC) method in \citet{chernozhukov2003mcmc}. The parameter space was set to be $\mathcal{B}=[-10,10]^6$. The random starting points of Nelder-Mead 2 were generated from the uniform distribution on $\mathcal{B}$. We use 2,001 equi-spaced grid points for each parameter for Iter-Grid. The Nelder-Mead method has been adopted in the applications of the MRC estimator, where the grid search is infeasible (for example, see \citet{cavanagh1998rank}, \citet{abrevaya2003pairwise}, \citet{khan2009inference}).
A more sophisticated version of the iterative grid search method is introduced by \cite{wang2007note} and it is adopted in \citet*{fan2020rank} for their simulation studies with multi-dimensional regressors. The estimation result in Table \ref{tb: application} reveals several advantages of MIP over the existing alternative algorithms. First, MIP achieves the best objective function value among the candidate estimation methods within a reasonable time budget. Second, some estimates of $\hat{\beta}$ by alternative algorithms are qualitatively different from the solution of MIP. The coefficient estimate of $kidsge6$ by Nelder-Mead 1 has the opposite sign. The estimates of $educ$ by alternative algorithms show much larger effects than MIP does. MCMC shows the closest result, although it is still suboptimal. Third, the Nelder-Mead algorithm with multiple starting points for 600 seconds does not improve the result. In fact, the objective function value of Nelder-Mead 2 is lower than that of Nelder-Mead 1, which uses only the single OLS starting value. Finally, Figure \ref{fig:obj-values} shows how difficult the optimization problem is. We plot the empirical objective function values over the convex combinations of the two $\beta$ estimates of MIP and Nelder-Mead 1. We can confirm that the objective function is not concave and that there exist many local maxima even between these two estimates. \begin{table}[thp] \begin{center} \caption{Female Labor Participation} \label{tb: application} \resizebox{\textwidth}{!}{ \begin{tabular}{lrrrrrrrrr} \hline Method & Obj. & Time (sec.)
& kidslt6 & kidsge6 & educ & nwifeinc & exper & expersq & age \\ \hline MIP & 0.2140 & 600.33 & -1.0000 & -0.1523 & 0.0775 & -0.0066 & 0.0480 & 0.0008 & -0.0696 \\ Nelder-Mead 1 & 0.2087 & 0.27 & -1.0000 & 0.0385 & 0.2812 & -0.0147 & 0.2061 & -0.0028 & -0.0533 \\ Nelder-Mead 2 & 0.2026 & 609.39 & -1.0000 & -1.3480 & 9.4946 & -0.9316 & 8.9737 & -0.1376 & -2.0547 \\ Iter-Grid & 0.1989 & 4.62 & -1.0000 & -0.3800 & 2.9300 & -0.0400 & 1.5500 & -0.0100 & -0.2200 \\ SANN & 0.2018 & 6.43 & -1.0000 & 1.6951 & 2.1155 & -0.4312 & -0.2469 & 0.3394 & -0.4344 \\ MCMC & 0.2129 & 2.36 & -1.0000 & -0.1342 & 0.1522 & -0.0077 & 0.0816 & 0.0006 & -0.0788 \\ \hline \end{tabular}} \end{center} \footnotesize \renewcommand{\baselineskip}{11pt} \textbf{Note:} MIP denotes the mixed integer programming method. Nelder-Mead 1 and 2 denote the Nelder-Mead simplex methods with an initial value from OLS and multiple random initial values given the time budget of 600 seconds. Iter-Grid denotes the iterative grid search method with an initial value from OLS. SANN denotes the simulated annealing method. MCMC denotes the Markov Chain Monte Carlo method in \citet{chernozhukov2003mcmc}. The unit of computation time is seconds. \end{table} \begin{figure}[thp] \begin{center} \caption{Objective Function Values}\label{fig:obj-values} \includegraphics[scale=0.6]{figures/fig_nm1.pdf} \end{center} \footnotesize \renewcommand{\baselineskip}{11pt} \textbf{Note:} The empirical objective function values are plotted over the convex combinations of two $\beta$ estimates of MIP and Nelder-Mead 1: $\hat{\beta}_{\alpha} = \alpha \cdot \hat{\beta}_{MIP} + (1-\alpha) \cdot \hat{\beta}_{NM1}$ for $\alpha \in [0,1]$, where $\hat{\beta}_{MIP}$ and $\hat{\beta}_{NM1}$ are MIP and Nelder-Mead 1 estimates, respectively. \end{figure} In sum, inference based on inferior local solutions could lead researchers to imprecise or incorrect conclusions in practice, although the theoretical properties of the MRC estimator are robust.
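The construction behind Figure \ref{fig:obj-values} is easy to reproduce: evaluate the empirical rank objective along convex combinations of two estimates. The sketch below is generic and hedged; the data and the two endpoint estimates are placeholders, not the actual MIP and Nelder-Mead 1 output.

```python
def mrc_obj(beta, y, X):
    """Empirical rank objective at beta; the coefficient on the first regressor
    is normalized to -1, as with kidslt6 in the application."""
    idx = [-x[0] + sum(b * v for b, v in zip(beta, x[1:])) for x in X]
    n = len(y)
    return 2.0 * sum((y[i] > y[j]) == (idx[i] > idx[j])
                     for i in range(n) for j in range(i + 1, n)) / (n * (n - 1))

def profile(b_a, b_b, y, X, steps=101):
    """Objective values along beta_alpha = alpha*b_a + (1 - alpha)*b_b, alpha in [0, 1]."""
    out = []
    for t in range(steps):
        alpha = t / (steps - 1)
        beta = [alpha * u + (1.0 - alpha) * v for u, v in zip(b_a, b_b)]
        out.append((alpha, mrc_obj(beta, y, X)))
    return out
```

Plotting the second coordinate of `profile` against $\alpha$ for two competing estimates exposes the local maxima sitting between them, which is exactly why local search methods can stall far from the global solution.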
\section{Monte Carlo Simulations} In this section we investigate the performance of the proposed MIP algorithm for the MRC estimator via Monte Carlo simulation studies. We focus on the achieved objective function value and the computation time in this section. All simulations are carried out on a computer equipped with AMD Ryzen Threadripper 1950X 16-Core processor and 64 Gigabytes of RAM. \begin{figure}[hp] \caption{Loss/Tie/Win Ratio (Binary)} \label{fg:win-bin} \centering \begin{tabular}{cc} $\underline{n=50}$ & $\underline{n=100}$ \\ \includegraphics[scale=0.3]{figures/plot_n50_k2_binary.pdf} \includegraphics[scale=0.3]{figures/plot_n50_k3_binary.pdf} & \includegraphics[scale=0.3]{figures/plot_n100_k2_binary.pdf} \includegraphics[scale=0.3]{figures/plot_n100_k3_binary.pdf}\\ $\underline{n=200}$ & $\underline{n=400}$ \\ \includegraphics[scale=0.3]{figures/plot_n200_k10_binary.pdf} \includegraphics[scale=0.3]{figures/plot_n200_k20_binary.pdf} & \includegraphics[scale=0.3]{figures/plot_n400_k10_binary.pdf} \includegraphics[scale=0.3]{figures/plot_n400_k20_binary.pdf}\\ \multicolumn{2}{c}{\includegraphics[scale=0.85]{figures/legend.pdf}} \end{tabular} \end{figure} We consider the following two regression models: for $i=1,\ldots, n$, \eq{ & \mbox{Binary Regression: } \hskip14pt y_i = 1\{x_i'\beta + \eps_i > 0\} \label{eq:binary}\\ & \mbox{Censored Regression: } \hskip3pt y_i = \max\{x_i'\beta + \eps_i, 0\} } where $x_i$ is a $k$-dimensional vector generated from $N(0,I_k)$, $\eps_i$ is an error term generated from $N(0,0.25^2)$, and $\beta$ is a parameter of interest. The true parameter value is set to be $\beta_0=(1,\ldots,1)$. Recall that we do not know the true transformation function (binary or censored) of the data generating process when we estimate $\beta$ by the MRC estimator. For identification, we normalize the first coefficient of $\beta$ to be 1. 
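The two Monte Carlo designs above can be generated in a few lines; the sketch below is a minimal rendering of the data-generating process, where the function name and the seeding convention are our own.

```python
import random

def simulate(n, k, model="binary", seed=0):
    """Draw {(y_i, x_i)}: x_i ~ N(0, I_k), eps_i ~ N(0, 0.25^2), beta_0 = (1,...,1);
    y_i = 1{x_i'beta_0 + eps_i > 0} (binary) or max{x_i'beta_0 + eps_i, 0} (censored)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in range(k)]
        latent = sum(x) + rng.gauss(0.0, 0.25)   # x'beta_0 with beta_0 the 1-vector
        y = float(latent > 0.0) if model == "binary" else max(latent, 0.0)
        data.append((y, x))
    return data
```

The MRC estimator is then applied to the simulated $(y_i, x_i)$ without knowledge of which of the two transformations produced $y$.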
We compare the performance of the MIP algorithm with that of the Nelder-Mead algorithm (NM), the iterative grid search (Iter-Grid), the simulated annealing (SANN), and the Markov Chain Monte Carlo method (MCMC). For all methods, the parameter space is set to be $\mathcal{B}=[-10,10]^{k-1}$. The time budget is set to be 600 seconds. The Nelder-Mead algorithm with repeated random starting points (Nelder-Mead 2 in the previous section) does not perform better (especially for large $k$) than Nelder-Mead 1 and is dropped in these simulation studies. We first consider small-scale designs and check if the MIP algorithm achieves the global objective function. We set the sample size and the dimension of regressors to be $n=(50,100)$ and $k=(2,3)$, respectively for these small-scale designs. We next extend them into $n=(200,400)$ and $k=(10,20)$ and check the performance of the MIP algorithm in the limited time budget. Therefore, we consider 8 different designs in total in each regression model (binary/censored). We conduct 10 replications of each simulation design. \begin{figure}[hp] \caption{Loss/Tie/Win Ratio (Censored)} \label{fg:win-cen} \centering \begin{tabular}{cc} $\underline{n=50}$ & $\underline{n=100}$ \\ \includegraphics[scale=0.3]{figures/plot_n50_k2_censored.pdf} \includegraphics[scale=0.3]{figures/plot_n50_k3_censored.pdf} & \includegraphics[scale=0.3]{figures/plot_n100_k2_censored.pdf} \includegraphics[scale=0.3]{figures/plot_n100_k3_censored.pdf}\\ $\underline{n=200}$ & $\underline{n=400}$ \\ \includegraphics[scale=0.3]{figures/plot_n200_k10_censored.pdf} \includegraphics[scale=0.3]{figures/plot_n200_k20_censored.pdf} & \includegraphics[scale=0.3]{figures/plot_n400_k10_censored.pdf} \includegraphics[scale=0.3]{figures/plot_n400_k20_censored.pdf}\\ \multicolumn{2}{c}{\includegraphics[scale=0.85]{figures/legend.pdf}} \end{tabular} \end{figure} Figures \ref{fg:win-bin}--\ref{fg:win-cen} report the Loss/Tie/Win ratios of each alternative algorithm against MIP. 
In the graph, `Loss' means the objective function value of the algorithm is lower than that of MIP. `Tie' and `Win' are defined similarly. Overall, MIP outperforms the alternative methods. In the case of Binary Regression in Figure \ref{fg:win-bin}, MIP always achieves an equal or better objective function value than the alternative methods in all designs except one draw in $n=400, k=10$. In the small-scale designs ($n=50, 100$ and $k=2,3$), SANN performs similarly to MIP but the performance of MIP dominates in the large-scale designs ($n=200,400$, $k=10,20$). It is interesting that MIP performs better as $k$ increases when $n=400$. As we confirm in Table \ref{tb:bin_time} below, MIP finds a more precise solution (lower MIP gap) in a substantially shorter time when $n=400$ and $k=20$ than when $n=400$ and $k=10$. We observe similar patterns in Censored Regression in Figure \ref{fg:win-cen}. The outperformance of MIP is clearer in the large-scale designs ($n=200,400$ and $k=10,20$). The overall performance of MIP in Censored Regression is better than that in Binary Regression when $n=200$. However, when $n=400$, MIP finds a worse solution than its competitors in about 10--20\% of the replications. This is because the implied parameter space of $d_{ij}$ has a much larger dimension in Censored Regression than in Binary Regression, as there are fewer tied pairs of $(y_i,y_j)$. Recall that $d_{ij}$ is multiplied by 0 in the objective function when $y_i$ and $y_j$ are tied and we do not need to estimate such a $d_{ij}$.
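The computational point about ties can be seen directly: only pairs with $y_i \neq y_j$ require a binary $d_{ij}$, and a binary outcome generates far more ties than a censored one. A toy count (the two samples below are made up purely for illustration):

```python
def effective_pairs(y):
    """Number of pairs (i, j), i < j, with y_i != y_j; only these pairs need
    a binary variable d_ij in the MIP formulation."""
    n = len(y)
    return sum(y[i] != y[j] for i in range(n) for j in range(i + 1, n))

binary_y = [1, 0, 1, 0, 0, 1]                   # ties within the 0s and within the 1s
censored_y = [0.0, 0.7, 1.3, 0.0, 2.1, 0.4]     # ties only at the censoring point
print(effective_pairs(binary_y), effective_pairs(censored_y))   # 9 vs 14 of 15 pairs
```

With six observations there are 15 pairs in total; the binary sample needs a $d_{ij}$ for only 9 of them, while the censored sample needs 14, which is why the censored designs carry a heavier integer-programming load.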
\begin{table}[hp] \begin{center} \caption{Computation Time and MIP Gap (Binary)} \label{tb:bin_time} \begin{tabular}{lrrrrr} \hline & MIP & NM & Iter-Grid & SANN & MCMC \\ \hline \multicolumn{6}{c}{\underline{$n=50, k=2$}}\\ Max & 0.08 (0.00) & 0.19 & 0.12 & 0.80 & 0.59 \\ Median & 0.04 (0.00) & 0.00 & 0.07 & 0.79 & 0.45 \\ \\ \multicolumn{6}{c}{\underline{$n=50, k=3$}}\\ Max & 0.09 (0.00) & 0.00 & 0.26 & 0.97 & 0.51 \\ Median & 0.07 (0.00) & 0.00 & 0.17 & 0.84 & 0.47 \\ \\ \multicolumn{6}{c}{\underline{$n=100, k=2$}}\\ Max & 0.47 (0.00) & 0.00 & 0.13 & 2.11 & 0.71 \\ Median & 0.26 (0.00) & 0.00 & 0.13 & 2.06 & 0.69 \\ \\ \multicolumn{6}{c}{\underline{$n=100, k=3$}}\\ Max & 18.85 (0.00) & 0.00 & 0.66 & 2.23 & 0.75 \\ Median & 0.64 (0.00) & 0.00 & 0.40 & 2.16 & 0.71 \\ \\ \multicolumn{6}{c}{\underline{$n=200, k=10$}}\\ Max & 28.81 (0.00) & 0.08 & 14.86 & 17.80 & 4.45 \\ Median & 0.27 (0.00) & 0.05 & 14.29 & 17.27 & 4.24 \\ \\ \multicolumn{6}{c}{\underline{$n=200, k=20$}}\\ Max & 0.50 (0.00) & 0.14 & 67.11 & 22.78 & 6.13 \\ Median & 0.37 (0.00) & 0.12 & 34.18 & 22.02 & 5.68 \\ \\ \multicolumn{6}{c}{\underline{$n=400, k=10$}}\\ Max & 607.64 (0.29) & 0.32 & 77.06 & 60.26 & 13.89 \\ Median & 600.53 (0.09) & 0.20 & 49.24 & 58.55 & 13.16 \\ \\ \multicolumn{6}{c}{\underline{$n=400, k=20$}}\\ Max & 1.87 (0.00) & 0.70 & 326.98 & 92.20 & 22.00 \\ Median & 1.53 (0.00) & 0.54 & 211.61 & 91.18 & 21.25 \\ \hline \end{tabular} \end{center} \renewcommand{\baselineskip}{11pt} \textbf{Note:} MIP denotes the mixed integer programming method, NM the Nelder-Mead simplex method, Iter-Grid the iterative grid search method, SANN the simulated annealing method, and MCMC the Markov Chain Monte Carlo method. The MIP gaps are given in parentheses under the MIP column. The units for Time and the MIP gap are seconds and percent, respectively.
\end{table} \begin{table}[hp] \begin{center} \caption{Computation Time and MIP Gap (Censored)} \label{tb:cen_time} \begin{tabular}{lrrrrr} & MIP & NM & Iter-Grid & SANN & MCMC \\ \hline \multicolumn{6}{c}{\underline{$n=50, k=2$}}\\ Max & 0.21 (0.00) & 0.00 & 0.10 & 1.14 & 0.51 \\ Median & 0.15 (0.00) & 0.00 & 0.09 & 1.04 & 0.49 \\ \\ \multicolumn{6}{c}{\underline{$n=50, k=3$}}\\ Max & 8.80 (0.00) & 0.01 & 0.53 & 1.30 & 0.62 \\ Median & 2.78 (0.00) & 0.00 & 0.28 & 1.03 & 0.51 \\ \\ \multicolumn{6}{c}{\underline{$n=100, k=2$}}\\ Max & 20.18 (0.00) & 0.00 & 0.17 & 3.11 & 1.00 \\ Median & 5.82 (0.00) & 0.00 & 0.17 & 2.91 & 0.88 \\ \\ \multicolumn{6}{c}{\underline{$n=100, k=3$}}\\ Max & 600.26 (0.34) & 0.01 & 2.08 & 6.94 & 2.22 \\ Median & 317.32 (0.00) & 0.01 & 1.19 & 6.59 & 1.83 \\ \\ \multicolumn{6}{c}{\underline{$n=200, k=10$}}\\ Max & 600.26 (2.13) & 0.24 & 34.38 & 25.24 & 6.17 \\ Median & 600.24 (1.21) & 0.09 & 21.98 & 23.79 & 5.63 \\ \\ \multicolumn{6}{c}{\underline{$n=200, k=20$}}\\ Max & 602.52 (1.11) & 0.29 & 110.4 & 32.48 & 8.28 \\ Median & 600.31 (0.88) & 0.21 & 79.79 & 31.23 & 7.61 \\ \\ \multicolumn{6}{c}{\underline{$n=400, k=10$}}\\ Max & 623.87 (1.99) & 0.55 & 114.97 & 103.47 & 22.15 \\ Median & 606.45 (1.72) & 0.40 & 88.23 & 93.78 & 20.69 \\ \\ \multicolumn{6}{c}{\underline{$n=400, k=20$}}\\ Max & 610.85 (6.04) & 1.35 & 490.84 & 167.22 & 38.82 \\ Median & 602.57 (1.16) & 0.98 & 331.36 & 152.99 & 36.03 \\ \hline \end{tabular} \end{center} \renewcommand{\baselineskip}{11pt} \textbf{Note:} See the note under Table \ref{tb:bin_time} for details. \end{table} Tables \ref{tb:bin_time}--\ref{tb:cen_time} provide some summary statistics of the computation time and the MIP gap. We first discuss the result of Binary Regression in Table \ref{tb:bin_time}. In small-scale designs, MIP requires about the same computation time as the alternative algorithms and it finds the global solution in less than a second except $n=100$ and $k=3$. 
In large-scale designs MIP is still able to find the global solution within the allocated time budget of 600 seconds, except when $n=400$ and $k=10$. In that design MIP hits the time limit of 600 seconds more often and the MIP gap does not achieve 0\%, i.e.\ we are not sure whether the solution is global or not. However, the gap size is quite small and less than 1\%. It is noteworthy that MIP performs much better in terms of the MIP gap when $k$ is bigger in large-scale designs. The computation time is even dramatically reduced when $n=400$. We turn our attention to the result of Censored Regression in Table \ref{tb:cen_time}. As we discussed above, Censored Regression requires more computation time than Binary Regression and it mostly reaches the time limit of 600 seconds already when $n=100$ and $k=3$. In large-scale designs, we observe MIP gaps larger than 1\%, which may explain why the solutions of MIP are sometimes worse than those of the alternative algorithms. Other patterns are quite similar to those in Binary Regression, including that the performance of MIP becomes better when $k$ is higher in large-scale designs. In sum, the performance of the proposed MIP algorithm for the MRC estimator is satisfactory. It always finds the global solution in small-scale designs, where the existing methods quite often fail to do so. Furthermore, it performs better than the alternative algorithms even in large-scale designs by spending a feasible amount of computation time. The MIP gap also provides useful guidance on the quality of the solution in hand when a researcher must stop searching for the global solution because of a time limit. \section{Conclusion} In this paper we propose a feasible computation method to get a global solution for the maximum rank correlation estimator of \cite{han1987non} using mixed integer programming (MIP).
We show that the proposed MIP method outperforms the alternative methods in the empirical example of female labor-force participation. One advantage of the proposed MIP method is that it can be easily extended to many constraint rank maximization problems, as illustrated in the best subset rank prediction problem, where we also prove that the non-asymptotic bound of the tail probability decays exponentially. This result sheds light on research into high-dimensional rank estimation models, which we leave for future work. \bibliographystyle{chicago}
\section{Introduction} One of the most intriguing problems in classical field theory has been the construction of a finite range spin--2 theory with local Lorentz-invariance and self interactions. The first linear theory introduced by Pauli and Fierz \cite{bib:Pauli-Fierz} proved to be incompatible with observations in the small mass limit \cite{bib:vdvz}. Although a non-linear screening mechanism can establish agreement with the predictions of general relativity in this limit \cite{Vainshtein:1972sx}, a generic non-linear completion of Pauli-Fierz theory contains an extra degree of freedom, the Boulware-Deser mode, which violates unitarity \cite{Boulware:1973my}. This issue was resolved only recently by de Rham, Gabadadze and Tolley (dRGT) \cite{deRham:2010kj} who identified the particular tuning which successfully eliminates the dynamics of the Boulware-Deser mode, yielding the long-sought massive spin--2 field theory with five degrees of freedom. The dRGT theory in the diffeomorphism invariant formulation is constructed out of two metrics: the physical metric $g_{\mu\nu}$ to which the matter couples directly, and the non-dynamical fiducial metric with the restricted form \begin{equation} f_{\mu\nu} \equiv \eta_{ab}\partial_\mu \phi^a \partial_\nu\phi^b\,, \label{eq:fmunudefined} \end{equation} where the four scalar fields $\phi^a$ with $a=0,1,2,3$ enjoy Poincar\'e invariance in the field space with metric $\eta_{ab} = {\rm diag}(-1,1,1,1)$. These scalar fields correspond to the gravitational analogue of the St\"uckelberg trick; for a non-trivial field configuration, the fiducial metric corresponds to flat space-time in a given coordinate system, breaking all four of the diffeomorphisms that define general relativity. In particular, in the unitary gauge where $\phi^a= \delta^a_\mu x^\mu$, one has $f_{\mu\nu}= \eta_{\mu\nu}$.
The graviton mass terms are then constructed by contracting the physical metric and the fixed reference metric through the combination $\sqrt{g^{-1} f}$, where the square-root is shorthand for tensor exponent $1/2$. The dRGT tuning allows only four independent combinations of various powers of $\sqrt{g^{-1} f}$ in the action. Being a modified gravity theory in the IR, the dRGT theory attracted considerable attention, particularly in the context of late time cosmology to address the (new) cosmological constant problem with self-accelerating solutions. In particular, the theory was found to forbid a Friedmann--Lema\^itre--Robertson--Walker (FLRW) cosmology with a flat spatial geometry \cite{D'Amico:2011jj}, while solutions with negative curvature \cite{Gumrukcuoglu:2011ew} were found to suffer from a non-linear instability \cite{DeFelice:2012mx}. The flat FLRW solutions for the physical metric can be found by allowing an inhomogeneous fiducial metric \cite{inhomogeneous} but these solutions are also plagued by instabilities \cite{instability}. The only stable cosmological solutions are expected to break homogeneity and/or isotropy (see e.g. \cite{D'Amico:2011jj,bib:anisotropicfriedmann}). Motivated by the stability of Minkowski solution and the non-existence of a flat FLRW solution, Ref.~\cite{D'Amico:2012zv} introduced an extension of dRGT theory by including an extra scalar field, the ``quasidilaton'', which is associated with the global symmetry \begin{equation} \sigma\to\sigma+\sigma_0\,,\qquad \phi^a \to e^{-\sigma_0/\Mpl}\phi^a\,. \label{eq:quasidilatonsymmetry} \end{equation} Under this transformation, $e^{\sigma/\Mpl}\sqrt{g^{-1} f}$ is invariant and is used to build the mass term for graviton. As a result, the quasidilaton field $\sigma$ acts like the conformal mode of $f_{\mu\nu}$. As the Minkowski background in dRGT is an allowed solution, the conformally flat FLRW background in the quasidilaton theory is therefore permitted. 
An interesting property of this cosmology is that it flows to a late time attractor solution \cite{Anselmi:2015zva} where the quasidilaton field follows the evolution of e--foldings of expansion. On this attractor, the contribution of the mass term to the total energy density of the universe is effectively a cosmological constant, i.e. it is a self-accelerating solution. Studies of perturbative stability around this background revealed that scalar perturbations are always unstable in the UV \cite{Gumrukcuoglu:2013nza,D'Amico:2013kya}. Although the decoupling limit analysis suggests the existence of stable cosmologies, these correspond to space-times which are homogeneous and isotropic only approximately in the full theory \cite{Gabadadze:2014kaa}. The instability of FLRW solutions can be avoided by adding an extra term to the fiducial metric \eqref{eq:fmunudefined} \begin{equation} \tilde{f}_{\mu\nu} \equiv \eta_{ab}\partial_\mu \phi^a \partial_\nu\phi^b - \frac{\alpha_\sigma}{m^2}\,\partial_\mu \left({e^{-\sigma/\Mpl}}\right)\,\partial_\nu \left({e^{-\sigma/\Mpl}}\right)\,. \label{eq:extended-fiducial} \end{equation} Although the action constructed with the above fiducial metric is known as ``extended quasidilaton theory'' \cite{DeFelice:2013tsa}, the second term above is still allowed by the quasidilatonic transformations \eqref{eq:quasidilatonsymmetry} and preserves the dRGT tuning \cite{Mukohyama:2013raa}. The FLRW background dynamics in extended quasidilaton theory is dramatically different from that in the original theory, although at late times it flows to the same attractor independently of $\alpha_\sigma$~\footnote{See Ref.~\cite{Kahniashvili:2014wua} for a restricted background analysis, and also the end of Sec.~\ref{sec:kination} for our comment on this work.}. Apart from a finite region of the parameter space, the late time self-accelerating vacuum solutions are stable \cite{DeFelice:2013tsa,DeFelice:2013dua}.
Given that the cosmological vacuum can be stable, the next step is to investigate the effect of matter fields. In the presence of a matter sector consisting of a single canonical scalar field with a potential, Ref.~\cite{Motohashi:2014una} showed that the stability conditions in vacuum are preserved, although they become time dependent, potentially threatening the stability of a general FLRW background. As an example, the authors considered a scalar field with vanishing potential (occasionally called a ``kination'' field); in this case, the available parameter space for a stable background at early times becomes of measure zero, leading to an IR ghost in the gravity sector. However, Ref.~\cite{Heisenberg:2015voa} argued that the stability conditions get modified if the system is sufficiently far away from the late time fixed point when the kination field dominates in the early stages. In this paper we undertake a detailed investigation of the stability of perturbations in the presence of matter. The goal of the paper is twofold: first, in the presence of a perfect fluid, we demonstrate the massive gravity analogue of the Jeans instability, which manifests itself as an IR ghost (in the matter sector); second, we revisit the purely kinetic scalar field example of Ref.~\cite{Motohashi:2014una} to show that the IR ghost (in the gravity sector) is outside the regime of validity of the effective field theory (EFT). The paper is organized as follows: in Sec.~\ref{sec:theory-and-background} we review the extended quasidilaton theory and discuss the background evolution in the presence of a generic k-essence field. In Sec.~\ref{sec:perturbations}, we introduce cosmological perturbations and obtain the stability conditions.
In particular, we show the presence of an IR ghost when the scalar field can be interpreted as an analogue perfect fluid; this ghost instability in the IR is shown to be harmless in Sec.~\ref{sec:canonicaltrans}, where we perform a canonical transformation to change the variable to the density perturbations. In Sec.~\ref{sec:kination}, we consider the case of a canonically normalized scalar field, focusing on the case of a kination field and show that the IR ghost instability associated with it will be outside the reach of the EFT. We conclude with a discussion in Sec.~\ref{sec:discussion}. \section{Cosmological background} \label{sec:theory-and-background} In this Section, we consider the minimal action for the extended quasidilaton field, along with a generic scalar field, minimally coupled to the physical metric, to represent the matter sector. In this construction, we study the evolution of the cosmological background on the late time de Sitter attractor. \subsection{The theory} We start by reviewing the extended quasidilaton theory. The action in the Einstein frame is given by\footnote{It should be noted that we have omitted two additional terms allowed by the dRGT tuning, namely the ${\cal L}_0$ term, which corresponds to adding a bare cosmological constant, and the ${\cal L}_1 = [{\cal K}]$ term, which is absent to prevent tadpoles. We also remark that one of the four parameters in the mass term $m$, $\alpha_2$, $\alpha_3$ and $\alpha_4$ is redundant.
Although it is common to choose $\alpha_2=1$ in the literature, we find that working with three $\alpha_n$ parameters allows us to present our results in an accessible manner.} \begin{equation} {S}=\int d^4x \sqrt{-g}\left\{\frac{\Mpl^2}{2} \left[R + 2 m^2 \left(\alpha_2 {\cal L}_2+ \alpha_3 {\cal L}_3+\alpha_4 {\cal L}_4\right)\right]- \frac{\Omega}{2}\partial_\mu\sigma\partial^\mu\sigma + P(X,\chi)\right\}\,, \label{eq:action} \end{equation} where $\Omega$ is a free parameter that controls the strength of the coupling between the quasidilaton and the massive graviton. The graviton mass terms above are tuned in the fashion of dRGT \cite{deRham:2010kj} \begin{eqnarray} {\cal L}_{2} & \equiv & \frac{1}{2}\,([{\cal K}]^{2}-[{\cal K}^{2}])\,,\\ {\cal L}_{3} & \equiv & \frac{1}{6}\,([{\cal K}]^{3}-3[{\cal K}][{\cal K}^{2}]+2[{\cal K}^{3}])\,,\\ {\cal L}_{4} & \equiv & \frac{1}{24}\,([{\cal K}]^{4}-6[{\cal K}]^{2}[{\cal K}^{2}]+3[{\cal K}^{2}]^{2} +8[{\cal K}][{\cal K}^{3}]-6[{\cal K}^{4}])\,, \end{eqnarray} which are written in terms of the traces of powers of the building block tensor \begin{equation} {\cal K}^{\mu}{}_{\nu}=\delta^{\mu}{}_{\nu}-e^{\sigma/\Mpl}\left(\sqrt{g^{-1}\tilde{f}}\right)_{\ \ \nu}^{\mu}\,, \end{equation} where we used the extended fiducial metric $\tilde{f}_{\mu\nu}$ defined in Eq.~\eqref{eq:extended-fiducial}. The scalar field $\sigma$ is the quasidilaton field associated with the global symmetry in Eq.~\eqref{eq:quasidilatonsymmetry}. Finally, the matter Lagrangian is taken to be an arbitrary function of the scalar field $\chi$ and its canonical kinetic term \begin{equation} X \equiv -\frac{1}{2}\partial_\mu\chi\partial^\mu\chi\,. \end{equation} This scalar field can be used as a model for an irrotational fluid with the analogue pressure, energy density and sound speed given by \cite{ArmendarizPicon:2000ah} \begin{equation} P = P (X,\chi) \,,\qquad \rho \equiv 2P_{,X} X - P, \qquad c_s^2 \equiv \frac{P_{,X}}{\rho_{,X}}\,. 
\label{eq:fluid} \end{equation} \subsection{Background equations of motion} \label{sec:bg-general} We now look for homogeneous, isotropic and spatially flat solutions. In order to preserve these symmetries at the level of perturbations, the two background metrics need to respect them in the same coordinate system. To this end, we take a homogeneous quasidilaton configuration $\sigma= \sigma(t)$ and choose the background St\"uckelberg fields to coincide with the unitary gauge configuration \begin{equation} \phi^a = \delta^a_i x^i + \delta^a_0 f(t)\,, \end{equation} where for later convenience, $f(t)$ is kept arbitrary to preserve time reparametrization invariance. In this background, the extended fiducial metric (\ref{eq:extended-fiducial}) becomes \begin{equation} ds_{\tilde{f}}^2 = -n(t)^2 dt^2 + \delta_{ij}dx^idx^j\,, \label{eq:extended-fiducial-FLRW} \end{equation} where we defined the lapse function through \begin{equation} n(t)^2 \equiv \dot{f}^2 +\frac{\alpha_\sigma}{\Mpl^2m^2}\,{\rm e}^{-2 \sigma/\Mpl}\,\dot{\sigma}^2\,. \label{eq:lapsedef} \end{equation} For the dynamical metric $g$, we adopt the flat FLRW ansatz, \begin{equation} ds_g^2=-N(t)^2 dt^2 +a(t)^2 \delta_{ij} dx^idx^j\,. \end{equation} Finally, we assume a $\chi$ field condensate $\chi=\chi(t)$, with analogue pressure, energy density and sound speed given by \eqref{eq:fluid}, evaluated on the background value $X= \frac{\dot\chi^2}{2\,N^2}$.
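As a consistency check of the lapse definition \eqref{eq:lapsedef}, one can insert the background St\"uckelberg and quasidilaton configurations into the extended fiducial metric \eqref{eq:extended-fiducial} and verify that its $00$-component equals $-n(t)^2$. The following sympy sketch is ours, not part of the original derivation; all symbol names are illustrative.

```python
# Verify that, for phi^0 = f(t), phi^i = x^i and sigma = sigma(t), the
# 00-component of the extended fiducial metric reproduces -n(t)^2 with
# n^2 = fdot^2 + (alpha_sigma/(Mpl^2 m^2)) e^{-2 sigma/Mpl} sigmadot^2.
import sympy as sp

t = sp.symbols('t', real=True)
Mpl, m, alpha_sigma = sp.symbols('Mpl m alpha_sigma', positive=True)
f = sp.Function('f')(t)
sigma = sp.Function('sigma')(t)

# eta_ab dphi^a dphi^b contributes -fdot^2 to the 00-component (the spatial
# Stueckelberg fields carry no time dependence); the extension term involves
# the time derivative of exp(-sigma/Mpl).
ftilde_00 = -sp.diff(f, t)**2 - (alpha_sigma/m**2)*sp.diff(sp.exp(-sigma/Mpl), t)**2

n_sq = sp.diff(f, t)**2 + (alpha_sigma/(Mpl**2*m**2))*sp.exp(-2*sigma/Mpl)*sp.diff(sigma, t)**2

assert sp.simplify(ftilde_00 + n_sq) == 0
```

The spatial components receive no contribution from the extension term for a homogeneous $\sigma$, so $\tilde f_{ij}=\delta_{ij}$ as in \eqref{eq:extended-fiducial-FLRW}.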
Bringing all these ans\"atze together, we find, up to total derivatives, the following mini-superspace action: \begin{equation} \frac{S_{\rm mss}}{V} = \Mpl^2\int dt \,a^3 N\,\Bigg\{-\frac{3\,\dot{a}^2}{a^2N^2}-m^2\left[U(\xi)+\frac{r-1}{4}\,\xi\,U'(\xi)\right]+\frac{1}{\Mpl^2} \left[ \frac{\Omega \,\dot{\sigma}^2}{2\,N^2}+P \right]\Bigg\}\,, \label{eq:minisuperspace} \end{equation} where we defined \begin{equation} \xi(t)\equiv \frac{{\rm e}^{\sigma/\Mpl}}{a}\,,\quad r(t)\equiv \frac{a\,n}{N} = \frac{a\,\sqrt{\dot{f}^2 +\frac{\alpha_\sigma}{\Mpl^2m^2}\,{\rm e}^{-2 \sigma/\Mpl}\,\dot{\sigma}^2}}{N}\,,\quad U(\xi) \equiv -6 \,\alpha_2 (\xi-1)^2 +4\,\alpha_3 (\xi-1)^3 -\alpha_4 (\xi-1)^4\,. \label{eq:definitions} \end{equation} We now compute the background equations of motion by varying the action (\ref{eq:minisuperspace}) with respect to $N$, $a$, $\sigma$, $\chi$ and $f$. We remark that one of the equations of motion other than $\delta S/\delta N$ can be written as a combination of the others through the contracted Bianchi identity \begin{equation} \frac{\partial}{\partial t} \frac{\delta S_{\rm mss}}{\delta N} = \sum_{q=a,\sigma,\chi,f} \frac{\dot{q}}{N}\frac{\delta S_{\rm mss}}{\delta q}\,, \label{eq:bianchi} \end{equation} which is satisfied off-shell. We start with the Friedmann equation, obtained by varying the action with respect to $N$, \begin{equation} 3\,H^2 = m^2\rho_{m}+\frac{\rho}{\Mpl^2}+\frac{\Omega}{2}\left(H+\frac{\dot\xi}{N\,\xi}\right)^2\,, \label{eq:EQN} \end{equation} where we defined the dimensionless energy density of the mass term as \begin{equation} \rho_m \equiv U(\xi)-\frac{\xi}{4}U'(\xi)\,.
\end{equation} The equation for the acceleration is derived by varying the action \eqref{eq:minisuperspace} with respect to $a$, then using \eqref{eq:EQN}: \begin{equation} \frac{2\,\dot H}{N} = m^2J\,\xi\,(r-1)-\frac{\rho+P}{\Mpl^2}-\Omega\,\left(H+\frac{\dot\xi}{N\,\xi}\right)^2\,, \label{eq:EQA} \end{equation} with \footnote{In the frequently used 3-parameter setting with $\alpha_2=1$, the three functions $\rho_m$, $J$ and $U'(\xi)$ are no longer independent, satisfying \begin{equation} J\Big\vert_{\alpha_2=1} = \xi+\frac{\xi}{(\xi-1)^2}\left(\rho_m+\frac{U'}{4\,\xi}\right)\,.\nonumber \end{equation} \label{fn:J--a2=1}} \begin{equation} J\equiv \frac{1}{3}\,\frac{\partial}{\partial \xi}\left(U(\xi)-\frac{\xi}{4}U'(\xi)\right)\,. \end{equation} The equation of motion for the quasidilaton field $\sigma$ can be written as a second order equation for the ratio $\xi$ defined in Eq.\eqref{eq:definitions} \begin{equation} \frac{\Omega}{N\,a^3}\,\frac{d}{dt}\left[a^3\left(H+\frac{\dot\xi}{N\,\xi}\right)\right] -\frac{\alpha_\sigma}{4\,N\,\xi\,a^4}\,\frac{d}{dt}\left[\frac{a^4 U'(\xi)}{r}\left(H+\frac{\dot\xi}{N\,\xi}\right)\right]=m^2\,\xi\left[3\,J\,(r-1)-U'(\xi)r\right]\,. \label{eq:EQS} \end{equation} The matter field, which is minimally coupled to the physical metric, obeys the usual conservation equation \begin{equation} \frac{\dot\rho}{N}+3\,H\,(\rho+P)=0\,. \label{eq:EQC} \end{equation} Using the definitions (\ref{eq:fluid}) and the above equation of motion, we can also derive the useful relations \begin{eqnarray} \dot{P} &=& -3\,N\,c_s^2H\,(\rho+P)+\dot{\chi}(P_{,\chi}-c_s^2\rho_{,\chi})\,,\nonumber\\ \dot{P}_{,\chi} &=& -c_s^2 (\rho_{,\chi}+P_{,\chi})\left(3\,H\,N + \frac{\dot{\chi}\,\rho_{,\chi}}{\rho+P}\right)+\dot{\chi}P_{,\chi\chi}\,, \label{eq:dotP} \end{eqnarray} where a subscript ``$,\chi$'' denotes differentiation with respect to $\chi$ and the time derivative in $\dot{P}_{,\chi}$ always acts after the $\chi$ derivative. 
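The relation quoted in Footnote~\ref{fn:J--a2=1} between $J$, $\rho_m$ and $U'(\xi)$ at $\alpha_2=1$ can be verified symbolically. The sympy sketch below is ours; it simply encodes the definitions of $U$, $\rho_m$ and $J$ from \eqref{eq:definitions}.

```python
import sympy as sp

xi, a3, a4 = sp.symbols('xi alpha_3 alpha_4', real=True)

# U(xi) from eq:definitions with the common choice alpha_2 = 1
U = -6*(xi - 1)**2 + 4*a3*(xi - 1)**3 - a4*(xi - 1)**4
Up = sp.diff(U, xi)
rho_m = U - xi*Up/4                       # dimensionless mass-term density
J = sp.Rational(1, 3)*sp.diff(rho_m, xi)  # definition of J

# identity quoted in the footnote
J_fn = xi + xi/(xi - 1)**2*(rho_m + Up/(4*xi))

assert sp.simplify(J - J_fn) == 0
```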
Finally, we calculate the equation of motion for the temporal St\"uckelberg field. Although this equation can be computed through the contracted Bianchi identity \eqref{eq:bianchi}, it is nevertheless useful to obtain it via variation of the action with respect to $f(t)$. Observing that the only $f(t)$ dependence in the action (\ref{eq:minisuperspace}) comes from the function $r(t)$ which contains its first derivative, the St\"uckelberg equation of motion can be readily integrated to give \begin{equation} \frac{m^2\Mpl^2\,\dot{f}}{4\,n} \, U'(\xi) \xi= \frac{\kappa}{a^4}\,, \label{eq:eqstuck0} \end{equation} where $\kappa$ is an integration constant whose contribution redshifts as $a^{-4}$ and the quartic polynomial in $\xi$ is given by \begin{equation} \xi\,U'(\xi) = 4\,\xi\,(\xi-1)\left[-3\,\alpha_2+3\,\alpha_3(\xi-1)-\alpha_4(\xi-1)^2\right]\,. \label{eq:eqstuck} \end{equation} \subsection{Late time attractor} \label{sec:bg-fixedpoint} At late times, as the right hand side of Eq.\eqref{eq:eqstuck0} redshifts away, the system approaches a fixed point solution at $\xi=\xi_{\rm fp}$ given by one of the roots of the quartic polynomial $\xi\,U'(\xi)$. The solution $\xi_{\rm fp}=0$ is unphysical as it leads to strong coupling \cite{D'Amico:2012zv}. The other trivial solution, $\xi_{\rm fp}=1$ does not give rise to self-acceleration so we will also disregard it.\footnote{It should be noted however that $\xi_{\rm fp}=1$ is still a valid fixed point where all five graviton degrees of freedom are dynamical, acquire non-trivial masses and are subject to stability conditions as given in \cite{DeFelice:2013tsa,DeFelice:2013dua}. 
Although this branch still gives a phenomenology different than general relativity, we will instead concentrate on the fixed points that exhibit self-acceleration.} We are thus left with the remaining two roots of $U'(\xi)$, which are \begin{equation} \xi_{\rm fp} =\xi_\pm \equiv 1 + \frac{3\,\alpha_3}{2\,\alpha_4} \pm \sqrt{\frac{9\,\alpha_3^2}{4\,\alpha_4^2}-\frac{3\,\alpha_2}{\alpha_4}}\,. \label{eq:Xpm} \end{equation} The constancy of the ratio $\xi$ on the fixed point completely determines the evolution of the background quasidilaton field $\sigma$ through its definition \eqref{eq:definitions}, giving \begin{equation} \frac{\dot{\sigma}}{N\,\Mpl}\Bigg\vert_{\rm fp} = H \left(1+ \frac{\dot{\xi}}{N\,\xi\,H}\right)\Bigg\vert_{\rm fp}= H\,. \label{eq:sigmafp} \end{equation} Thus, in the late asymptotic regime, the contribution to the expansion from the mass term acts as an effective cosmological constant $m^2\rho_m$, while the contribution of the quasidilaton kinetic energy modifies the effective strength of gravitational interactions. The equations of motion \eqref{eq:EQN}, \eqref{eq:EQA}, \eqref{eq:EQS} on the fixed point attractor thus become \begin{eqnarray} \frac{(6-\Omega)}{2}\,H^2 &=& \frac{\rho}{\Mpl^2}+m^2\rho_m\,,\nonumber\\ \frac{\dot{H}}{N} &=& -\frac{3\,(P+\rho)}{\Mpl^2 (6-\Omega)}\,,\nonumber\\ r &=& 1 + \frac{\Omega}{m^2\,(6-\Omega)\,J\,\xi}\,\left[(6-\Omega)H^2 - \frac{P+\rho}{\Mpl^2}\right]\,, \label{eq:BG-FIXEDPOINT} \end{eqnarray} whereas Eq.\eqref{eq:EQC} continues to hold. We stress that the background dynamics on the attractor is independent of the extension parameter $\alpha_\sigma$, so these equations are also valid in the original quasidilaton theory with $\alpha_\sigma=0$ \cite{D'Amico:2012zv}. \subsection{Allowed parameter space} \label{sec:parameter} Let us now discuss the regime of parameters where we can have a sensible evolution. From the Friedmann equation, i.e. 
the first of \eqref{eq:BG-FIXEDPOINT}, the positivity of the effective gravitational constant imposes \begin{equation} 6-\Omega>0\,, \label{eq:cond1} \end{equation} putting an upper bound on the coefficient of the quasidilaton kinetic term, $\Omega$. As we will also show in Sec.\ref{sec:scalars}, this parameter is also bounded from below, $\Omega>0$, by the stability requirements of the scalar sector \eqref{eq:condUV}. From here on, we will assume that the effective cosmological constant $m^2\rho_m$ is the main source for the present day accelerated expansion, or less restrictively, that $m^2\rho_m>0$. This leads to the following condition: \begin{equation} (6-\Omega)\,H^2 - 2\,\frac{\rho}{\Mpl^2}>0\,. \label{eq:cond2} \end{equation} In the perturbative analysis, we will encounter terms of the form $(6-\Omega)H^2 - (P+\rho)/\Mpl^2$ whose sign will be crucial in determining the stability. For a fluid with equation of state $w \equiv P/\rho$, satisfying the dominant energy condition $-1<w<1$, we find that (\ref{eq:cond2}) implies \begin{equation} (6-\Omega)H^2 - \frac{(1+w)\rho}{\Mpl^2}>0 \,. \label{eq:cond3} \end{equation} This regime is relevant for the standard constituents of $\Lambda$CDM cosmology. Alternatively, we can express the condition (\ref{eq:cond3}) using the second of Eq.\eqref{eq:BG-FIXEDPOINT} \begin{equation} 3\,H^2+\frac{\dot{H}}{N} > 0\,. \label{eq:cond3b} \end{equation} We remark that in the presence of a bare cosmological constant, one only needs to shift $U(\xi)\to U(\xi)+\Lambda_{\rm bare}/m^2$, effectively absorbing $\Lambda_{\rm bare}$ into the definition of $\rho_m$, and the background equations \eqref{eq:BG-FIXEDPOINT} will still be valid. From Eq.\eqref{eq:lapsedef}, the positivity of the quantity $(\dot{f}/n)^2$ gives \begin{equation} \frac{\dot{f}^2}{n^2}=1- \frac{\alpha_\sigma H^2}{m^2\xi^2 r^2}\left(1+\frac{\dot{\xi}}{N\,\xi\,H}\right)^2 > 0\,, \label{eq:fdot} \end{equation} which translates into an upper bound on the parameter $\alpha_\sigma$.
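The identity in Eq.~\eqref{eq:fdot} follows from \eqref{eq:lapsedef} and the definitions \eqref{eq:definitions} by writing $\sigma = \Mpl\ln(a\,\xi)$. A sympy sketch of the algebra (ours; the time derivatives $\dot f$, $\dot a$, $\dot\xi$ are treated as independent symbols at a fixed instant):

```python
import sympy as sp

# Independent background quantities at a fixed instant: a, xi, N, and the
# time derivatives fd = df/dt, ad = da/dt, xid = dxi/dt.
a, xi, N, Mpl, m, als = sp.symbols('a xi N Mpl m alpha_sigma', positive=True)
fd, ad, xid = sp.symbols('fd ad xid', real=True)

sigma_dot = Mpl*(ad/a + xid/xi)     # from sigma = Mpl*log(a*xi)
exp_factor = 1/(a**2*xi**2)         # e^{-2 sigma/Mpl}
n_sq = fd**2 + als/(Mpl**2*m**2)*exp_factor*sigma_dot**2   # eq:lapsedef
H = ad/(N*a)
r_sq = a**2*n_sq/N**2               # r = a n / N

lhs = fd**2/n_sq
rhs = 1 - als*H**2/(m**2*xi**2*r_sq)*(1 + xid/(N*xi*H))**2  # eq:fdot

assert sp.simplify(lhs - rhs) == 0
```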
On the late time attractor, this is simply \begin{equation} \frac{\alpha_\sigma H^2}{m^2\xi^2}< r^2 \,,\quad {\rm for~}\xi=\xi_{\rm fp}\,. \label{eq:alphabarup} \end{equation} As the stability of scalar perturbations in vacuum requires $\alpha_\sigma>0$ \cite{DeFelice:2013tsa} (see also Sec.\ref{sec:scalars}), the ratio $\dot{f}/{n}$ is always constrained to be less than unity from \eqref{eq:fdot}, regardless of the details of the evolution. Along with this information, we see that the only way the past history can accommodate the growth of the right hand side of Eq.\eqref{eq:eqstuck0} is if the ratio satisfies $\xi>1$. This is due to the polynomial dependence on $\xi$ in \eqref{eq:eqstuck0}: since $\xi$ cannot cross the root $\xi =1$ in the past, the fixed point solution will be consistent only if $\xi_{\rm fp}>1$ \cite{Anselmi:2015zva}. Finally, requiring a positive $m^2\rho_m$ and imposing that the tensor graviton mass \eqref{eq:MGW2}, which also corresponds to squared sound speed of vector gravitons \eqref{eq:vectorsoundspeed}, is positive in the late time acceleration domination stage, the allowed parameter regions are: \begin{eqnarray} {\cal P}_{++} &:& \Big\{\xi_{\rm fp} = \xi_+ \,,\qquad \alpha_2m^2>0 \,,\qquad \frac{\alpha_3}{\alpha_2}>0 \,,\qquad 0<\frac{\alpha_4}{\alpha_2}<\frac{2\,\alpha_3^2}{3\,\alpha_2^2} \Big\} \,,\nonumber\\ {\cal P}_{+-} &:& \Big\{\xi_{\rm fp} = \xi_+ \,,\qquad \alpha_2m^2<0 \,,\qquad \frac{\alpha_3}{\alpha_2}\leq0 \,,\qquad \frac{\alpha_4}{\alpha_2}<0 \Big\} \,,\nonumber\\ {\cal P}_{--} &:& \Big\{\xi_{\rm fp} = \xi_-\,, \qquad \alpha_2m^2<0 \,,\qquad \frac{\alpha_3}{\alpha_2}>0 \,,\qquad 0\leq\frac{\alpha_4}{\alpha_2}<\frac{3\,\alpha_3^2}{4\,\alpha_2^2} \Big\}\,, \label{eq:allparam} \end{eqnarray} where the first subscript of ${\cal P}$ corresponds to the fixed point solution $\xi_\pm$ while the second subscript denotes the sign of the parameter $\alpha_2 m^2$. 
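Both the factorized form \eqref{eq:eqstuck} and the self-accelerating roots \eqref{eq:Xpm} underlying the parameter regions above can be cross-checked symbolically. The sympy sketch below is ours and not part of the analysis:

```python
import sympy as sp

xi = sp.symbols('xi', real=True)
a2, a3, a4 = sp.symbols('alpha_2 alpha_3 alpha_4', positive=True)

U = -6*a2*(xi - 1)**2 + 4*a3*(xi - 1)**3 - a4*(xi - 1)**4
Up = sp.diff(U, xi)

# factorized form of xi U'(xi), eq:eqstuck
factored = 4*xi*(xi - 1)*(-3*a2 + 3*a3*(xi - 1) - a4*(xi - 1)**2)
assert sp.expand(xi*Up - factored) == 0

# the two self-accelerating roots, eq:Xpm
root = sp.sqrt(9*a3**2/(4*a4**2) - 3*a2/a4)
for xi_fp in (1 + 3*a3/(2*a4) + root, 1 + 3*a3/(2*a4) - root):
    assert sp.simplify(sp.expand(Up.subs(xi, xi_fp))) == 0
```

Together with the trivial roots $\xi=0$ and $\xi=1$, these exhaust the zeros of the quartic $\xi\,U'(\xi)$.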
Notice that there is no allowed region for ${\cal P}_{-+}$.\footnote{The parameter region ${\cal P}_{++}$ is in agreement with those in \cite{Anselmi:2015zva}, where they fixed $\alpha_2=1$ and only considered $m^2>0$.} It is also worthwhile to mention that in all three allowed parameter regions, the function $J$ satisfies \begin{equation} m^2 J >0\,. \label{eq:Jpositive} \end{equation} This condition, combined with \eqref{eq:cond1}, \eqref{eq:cond3} and the third of \eqref{eq:BG-FIXEDPOINT}, gives \begin{equation} \frac{r-1}{\Omega}>0\,. \label{eq:r-1} \end{equation} As we will later show in Eq.\eqref{eq:condUV}, the stability of scalar perturbations imposes $\Omega>0$, thus reducing the above inequality to $r>1$. \section{Cosmological Perturbations} \label{sec:perturbations} We now introduce perturbations, by decomposing them based on how they transform under spatial rotations. Thanks to the background symmetry, the tensor, vector and scalar sectors decouple at the linear level. The metric perturbations are given by \begin{eqnarray} \delta g_{00} &=& -2\,N^2\,\Phi\,,\nonumber\\ \delta g_{0i} &=& N\,a\,\left(\partial_i B+B_i\right)\,,\nonumber\\ \delta g_{ij} &=& a^2 \left[2\,\delta_{ij}\psi +\left(\partial_i\partial_j-\frac{\delta_{ij}}{3}\partial^k\partial_k\right)E+\partial_{(i}E_{j)}+h_{ij}\right]\,, \end{eqnarray} where the latin indices are raised by $\delta^{ij}$ and $\delta^{ij}h_{ij} = \partial^ih_{ij} = \partial^i E_i = \partial^i B_i=0$. The two scalar fields in the system are perturbed as \begin{equation} \sigma= \sigma_0 + \Mpl \delta\sigma\,,\qquad \chi = \chi_0 + \Mpl \delta\chi\,. \end{equation} In the remainder of the paper, we omit the subscript $~_0$ of the background quantities for the sake of clarity. We also fix the residual time reparametrization invariance by choosing the physical time coordinate, i.e. $N=1$. Finally, we keep the St\"uckelberg fields to be purely background, i.e. 
$\delta \phi^a=0$, thus exhausting the gauge freedom completely. In this set-up, there are in total $12$ degrees of freedom (dof) in the system and no gauge symmetries: $2$ dof in the tensor sector ($h_{ij}$), $4$ dof in the vector sector ($B_i$, $E_i$) and $6$ dof in the scalar sector ($\Phi$, $B$, $\psi$, $E$, $\delta\sigma$, $\delta\chi$). Out of these, $2$ scalar ($\Phi$, $B$) and $2$ vector ($B_i$) dof are non-dynamical. Furthermore, the dRGT tuning allows us to integrate out one more combination. Eventually, we will be left with $2$ tensor dof, $2$ vector dof and $3$ scalar dof. These correspond to the $5$ polarizations of the massive spin--2 field, the quasidilaton and the matter field. We now present the analysis of the perturbations for each sector independently. \subsection{Tensor modes} The action quadratic in tensor perturbations is obtained, up to boundary terms, as \begin{equation} S^{(2)}_{\rm tensor} = \frac{\Mpl^2}{8}\int d^3k\,dt\,a^3\,\left[\dot{h}_{ij}^\star \dot{h}^{ij}-\left(\frac{k^2}{a^2}+M_{GW}^2\right)h_{ij}^\star h^{ij}\right]\,, \end{equation} where the tensor mass is given by \begin{equation} M_{GW}^2\equiv\frac{m^2\xi}{\xi-1}\left[[r-2+(2r-1)\xi]J-\frac{(r-1)\xi^2}{(\xi-1)^2}\rho_m\right]\,. \label{eq:MGW2} \end{equation} The stability of the tensor modes is reminiscent of the vacuum case studied in Ref.~\cite{Gumrukcuoglu:2013nza}. \footnote{The tensor graviton mass found in (\ref{eq:MGW2}) is in agreement with the one given in \cite{Gumrukcuoglu:2013nza} for the vacuum de Sitter solution. In Ref.\cite{Gumrukcuoglu:2013nza}, the parameter choice $\alpha_2=1$ is made, which, using the expression for $J$ on the fixed point from Footnote~\ref{fn:J--a2=1}, implies $\rho_m = (J-\xi)(1-\xi)^2/\xi$, while the vacuum equations of motion with $\dot{H}=0$ give $m^2 J=\Omega H^2/[(r-1)\xi]$.} They do not exhibit gradient or ghost-like instabilities, although if $M_{GW}^2<0$, it is possible to have a tachyonic instability. 
On the other hand, just like in the vacuum case, the time-scale of the instability is of the order of $H^{-1}$, so it takes the age of the universe to develop. However, as we will see, the stability of vector modes also relies on the positivity of $M_{GW}^2$, so we will restrict our discussion to the parameter space discussed in Sec.\ref{sec:parameter}. \subsection{Vector modes} The action for the vector perturbations is found to be \begin{equation} S^{(2)}_{\rm vector} = \frac{\Mpl^2}{16}\int d^3k\,dt\,k^2a^3 \left\{ \dot{E}_i^\star\dot{E}^i-\frac{2}{a}\left(\dot{E}_i^\star B^i+B_i^\star \dot{E}^i\right)-M_{GW}^2 E_i^\star E^i +\frac{4}{a^2}\left[1+ \frac{2\,\Omega\,a^2(3\,H^2+\dot{H})}{3\,k^2(r^2-1)}\right]B_i^\star B^i \right\}\,. \end{equation} Solving for the non-dynamical degree $B_i$, we get \begin{equation} B_i = \frac{3\,k^2a\,(r^2-1)}{4\,\Omega\,a^2(3\,H^2+\dot{H})+6\,k^2(r^2-1)}\,\dot{E}_i\,, \end{equation} replacing which, the action becomes \begin{equation} S^{(2)}_{\rm vector} = \frac{\Mpl^2}{16}\int d^3k\,dt\,k^2a^3 \left[ \left(1+\frac{3\,\tfrac{k^2}{a^2}\,(r^2-1)}{2\,\Omega(3\,H^2+\dot{H})}\right)^{-1} \dot{E}_i^\star\dot{E}^i-M_{GW}^2 E_i^\star E^i \right]\,. \end{equation} In order to avoid ghost instability, the following condition needs to be satisfied \begin{equation} 1+\frac{3\,\tfrac{k^2}{a^2}\,(r^2-1)}{2\,\Omega(3\,H^2+\dot{H})} > 0\,. \end{equation} The quantity $(3 H^2+\dot{H})$ is positive for a fluid with $w<1$ \eqref{eq:cond3b}. Thus, in order to have positive kinetic term at all momenta, one needs to impose $(r^2-1)/\Omega>0$. This is precisely the condition we get in the parameter space discussed in Sec.\ref{sec:parameter}, i.e. from Eq.\eqref{eq:r-1}. We can also calculate the propagation speed of the vector modes by expanding their frequency in the subhorizon limit, $\omega_V^2 = c_V^2 k^2/a^2 + {\cal O}(k^0)$. 
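The elimination of the non-dynamical mode $B_i$ above can be reproduced with a real scalar stand-in for one vector polarization. In the sympy sketch below (ours; $X$ stands in for $3H^2+\dot H$ and the overall prefactor $\Mpl^2 k^2 a^3/16$ is dropped), solving for $B$ and substituting back yields the quoted kinetic coefficient:

```python
import sympy as sp

a, k, Om, X, r = sp.symbols('a k Omega X r', positive=True)
Edot, B, MGW2, E = sp.symbols('Edot B M_GW2 E', real=True)

# Real-field stand-in for the vector Lagrangian density.
L = (Edot**2 - (4/a)*Edot*B - MGW2*E**2
     + (4/a**2)*(1 + 2*Om*a**2*X/(3*k**2*(r**2 - 1)))*B**2)

# integrate out the non-dynamical B
Bsol = sp.solve(sp.diff(L, B), B)[0]
B_text = 3*k**2*a*(r**2 - 1)*Edot/(4*Om*a**2*X + 6*k**2*(r**2 - 1))
assert sp.simplify(Bsol - B_text) == 0

# the reduced kinetic coefficient matches the quoted inverse bracket
L_red = sp.expand(L.subs(B, Bsol))
kin = sp.together(L_red.coeff(Edot, 2))
kin_text = 1/(1 + 3*(k**2/a**2)*(r**2 - 1)/(2*Om*X))
assert sp.simplify(kin - kin_text) == 0
```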
Requiring that there is no gradient instability imposes \begin{equation} c_V^2 = \frac{3\,M_{GW}^2(r^2-1)}{2\,\Omega\,(3\,H^2+\dot{H})} >0\,. \label{eq:vectorsoundspeed} \end{equation} Again, the parameter region \eqref{eq:allparam} already assumes $M_{GW}^2>0$, so the vector modes do not have gradient instability either. \subsection{Scalar modes} \label{sec:scalars} Unfortunately, it is not practical to present the details of the scalar sector calculation, nor is it very informative, due to the complicated and opaque expressions. However, we sketch here the steps of the calculation. Once the action quadratic in scalar perturbations is calculated and the plane wave expansion is introduced, there are 6 degrees of freedom: $\Phi$, $B$, $\psi$, $E$, $\delta\sigma$ and $\delta\chi$. As a first step, we note that the modes $\Phi$ and $B$ appear without time derivatives in the quadratic action, and their equations of motion give \begin{eqnarray} \Phi &=& \frac{3\,c_s^2}{(6-\Omega)(3\,c_s^2 H^2+\dot{H})}\left[ -\frac{\rho_{,\chi}\delta\chi}{\Mpl}+ \frac{\Mpl\,\dot{H}(6-\Omega)}{3\,\dot{\chi}c_s^2}\,\delta\dot{\chi}+\frac{2\,k^2}{a^2}\left(\psi+\frac{k^2}{6}\,E+a\,H\,B\right) \right.\nonumber\\ &&\left. \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad- \frac{\Omega(3\,H^2+\dot{H})}{r-1}(\delta\sigma-\psi)-H(\Omega\,\delta\dot{\sigma}-6\,\dot{\psi})\right]\,, \nonumber\\ B &=& \frac{1}{a\,H(3\,H^2+\dot{H})}\left[[3(r^2-1-\bar{\alpha})H^2-\bar{\alpha}\dot{H}]\delta\sigma-\frac{\Mpl(6-\Omega)(r^2-1)\,H\,\dot{H}}{\Omega\dot{\chi}}\,\delta\chi \right.\nonumber\\ &&\left.
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +\frac{6(r^2-1)H}{\Omega}\,\left(-H\,\Phi+\dot{\psi}+\frac{k^2}{6}\dot{E}\right)\right]\,,\nonumber\\ \label{eq:solPhiB} \end{eqnarray} where \begin{equation} \rho_{,\chi} \equiv \frac{\partial}{\partial\chi}\left(2\,P_{,X}X-P\right)=2\,P_{,X\chi}X-P_{,\chi}\,, \end{equation} and we defined \begin{equation} \bar{\alpha} \equiv \frac{H^2\,\alpha_\sigma}{m^2\,\xi^2}\,. \end{equation} Although the above solutions are the most suitable ones for presentation, they are still coupled at this level. By using Eqs.(\ref{eq:solPhiB}) to simultaneously solve for $B$ and $\Phi$, one can obtain an action which depends now on 4 modes, $\psi$, $E$, $\delta\sigma$ and $\delta\chi$. As we mentioned earlier, one combination corresponds to the Boulware-Deser mode, rendered non-dynamical by the dRGT tuning. By redefining the fields through \begin{equation} Y_1 \equiv \delta \sigma -\left(\psi+\frac{k^2}{6}\,E\right)\,,\qquad Y_2 \equiv \delta\chi - \frac{\dot{\chi}}{\Mpl\,H}\,\left(\psi+\frac{k^2}{6}\,E\right)\,,\qquad Y_3 \equiv \frac{k}{2}\,E\,, \label{eq:scalarbasis} \end{equation} the remaining degree $\psi$ becomes non-dynamical and can be integrated out. Eventually, we obtain an action of the form \begin{equation} S^{(2)}_{\rm scalar}= \frac{\Mpl^2}{2}\int d^3k \,dt\,a^3 \,\left(\dot{Y}^\dagger\,K\,\dot{Y} + \dot{Y}^\dagger\,N\,Y- Y^\dagger\,N\,\dot{Y}-Y^\dagger \,M \,Y\right)\,, \end{equation} where $K$, $M$ and $N$ are $3\times3$ real, time-dependent matrices with $K^T=K$, $M^T=M$ and $N^T=-N$. At this stage the components are not suitable for presentation, although we will present some of the necessary components for the stability discussion. The signature of the eigenvalues of the kinetic matrix indicates whether the corresponding modes are ghost--like or not. 
Introducing a rotated basis \begin{equation} Z = R^{-1}\,Y\,, \end{equation} with the rotation matrix \begin{equation} R = \left( \begin{array}{ccc} 0& 0 & 1\\\\ 1 &-\frac{K_{23}}{K_{22}}&\frac{K_{12}K_{33}-K_{13}K_{23}}{K_{23}^2-K_{22}K_{33}} \\\\ 0 & 1 &\frac{K_{13}K_{22}-K_{12}K_{23}}{K_{23}^2-K_{22}K_{33}} \end{array} \right)\,, \label{eq:basisrotate} \end{equation} the kinetic matrix in this new basis becomes diagonal \begin{equation} R^T\,K\,R = {\rm diag}\left(K_{22}\,,\; K_{33}-\frac{K_{23}^2}{K_{22}}\,,\;\frac{{\rm det}K}{K_{22}K_{33}-K_{23}^2}\right) \equiv {\rm diag}\left(\kappa_1\,,\;\kappa_2\,,\;\kappa_3\,\right) \,. \label{eq:diagonalkinetic} \end{equation} The eigenvalues are as follows: \begin{eqnarray} \kappa_1 &=& \frac{\Mpl^2}{\dot{\chi}^2}\,\left[-\frac{3\,c_s^2}{(6-\Omega)\dot{H}}-\frac{\Omega\,(3\,H^2+\dot{H})}{H^2\,\left[\frac{12\,k^2}{a^2}(\bar{\alpha}-1)+\Omega\,(6-\Omega)\,(3\,H^2+\dot{H})\right]}\right]^{-1}\,, \nonumber\\ \kappa_2 &=& \frac{4\,\Omega\,(3\,H^2+\dot{H})k^2}{r^2\left[\frac{12\,k^2}{a^2}(\bar{\alpha}-1)+ \Omega(6-\Omega)(3\,H^2+\dot{H})\right]} \left[ \bar{\alpha} + \frac{\Omega(r^2-\bar{\alpha})(6-\Omega)(3\,H^2+\dot{H})}{\frac{12\,k^2}{a^2}\,(r-1)^2} \right]\,, \nonumber\\ \kappa_3 &=& \Omega + \frac{\Omega\,(6-\Omega)^2(r^2-\bar{\alpha})(3\,H^2+\dot{H})}{\frac{12\,k^2}{a^2}(r-1)^2} \,\left[ \bar{\alpha}+\frac{\Omega(r^2-\bar{\alpha})(6-\Omega)(3\,H^2+\dot{H})}{\frac{12\,k^2}{a^2}\,(r-1)^2} \right]^{-1}\,. \label{eq:kineticeig} \end{eqnarray} Before studying these exact expressions, let us first analyze the subhorizon limit. This will allow us to determine whether the cosmology is UV stable. 
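That the rotation \eqref{eq:basisrotate} indeed brings a generic symmetric kinetic matrix to the diagonal form \eqref{eq:diagonalkinetic} can be checked symbolically. The sympy sketch below is ours:

```python
import sympy as sp

# generic symmetric 3x3 kinetic matrix with entries K11, K12, ..., K33
K = sp.Matrix(3, 3, lambda i, j: sp.Symbol('K%d%d' % (min(i, j) + 1, max(i, j) + 1)))
D = K[1, 2]**2 - K[1, 1]*K[2, 2]

# rotation matrix of eq:basisrotate
R = sp.Matrix([
    [0, 0, 1],
    [1, -K[1, 2]/K[1, 1], (K[0, 1]*K[2, 2] - K[0, 2]*K[1, 2])/D],
    [0, 1, (K[0, 2]*K[1, 1] - K[0, 1]*K[1, 2])/D],
])

KR = sp.simplify(R.T*K*R)

# off-diagonal entries vanish
assert KR[0, 1] == 0 and KR[0, 2] == 0 and KR[1, 2] == 0
# diagonal entries match eq:diagonalkinetic
assert sp.simplify(KR[0, 0] - K[1, 1]) == 0
assert sp.simplify(KR[1, 1] - (K[2, 2] - K[1, 2]**2/K[1, 1])) == 0
assert sp.simplify(KR[2, 2] - K.det()/(K[1, 1]*K[2, 2] - K[1, 2]**2)) == 0
```

The last eigenvalue is the Schur complement of the lower-right $2\times2$ block, which is why it carries the determinant of $K$.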
The kinetic matrix is diagonal at leading order, with $K_{12}={\cal O}(k^{-2})$, $K_{13}=K_{23}={\cal O}(k^{-1})$ and \begin{equation} K_{11} = \Omega + {\cal O}(k^{-2})\,,\qquad K_{22}= \frac{\Mpl^2(6-\Omega)(-\dot{H})}{3\,\dot{\chi}^2c_s^2}+ {\cal O}(k^{-2})\,,\qquad K_{33}= \frac{\bar{\alpha}\,a^2\Omega \,(3\,H^2+\dot{H})}{3\,r^2(\bar{\alpha}-1)}+ {\cal O}(k^{-2})\,. \label{eq:subhorizonK} \end{equation} The absence of ghosts in the UV, i.e. the positivity of the kinetic terms, brings further conditions on the parameters. From the background equations of motion \eqref{eq:BG-FIXEDPOINT} and the positivity of the gravitational constant \eqref{eq:cond1}, while assuming the dominant energy condition for the $\chi$ fluid \eqref{eq:cond3b}, we find that \begin{equation} \Omega > 0 \,,\qquad \frac{\bar{\alpha}}{\bar{\alpha}-1}>0\,. \label{eq:condUV} \end{equation} The second condition can be satisfied if $\bar\alpha<0$ or $\bar\alpha>1$; in particular, $\bar\alpha$ is strictly non-zero \cite{Gumrukcuoglu:2013nza,D'Amico:2013kya}. The mixing matrix has only one component at order $k$: \begin{equation} N_{12}={\cal O}(k^0)\,,\qquad N_{13} = \frac{\bar{\alpha}\,\Omega\,(3\,H^2+\dot{H})}{6\,(\bar{\alpha}-1)\,H\,r}\,k+{\cal O}(k^{-1})\,,\qquad N_{23}={\cal O}(k^{-1})\,. \end{equation} Finally, the mass matrix, at order $k^2$, is also diagonal, with $M_{12}={\cal O}(k^0)$, $M_{13}=M_{23}=M_{33}={\cal O}(k)$ and \begin{equation} M_{11} = -\frac{\Omega(3\,H^2+\bar{\alpha}\,\dot{H})}{3(\bar{\alpha}-1)a^2\,H^2}\,k^2+{\cal O}(k)\,,\qquad M_{22} = \frac{\Mpl^2(6-\Omega)(-\dot{H})}{3\,a^2\,\dot{\chi}^2}\,k^2+{\cal O}(k)\,. \end{equation} Using these, we can calculate the sound speed of each eigenmode in the subhorizon limit by solving \begin{equation} {\rm det} \left[ -K\,\omega^2 -i\,\omega\,(2\,N+\dot{K})+(M+\dot{N})\right] = 0\,, \end{equation} where $\omega^2 = \mathbb{1}\left[C_S^2 k^2/a^2 + {\cal O}(k^0)\right]$.
Since $\dot{K}$ at order $k$ and $\dot{N}$ at order $k^2$ vanish, they do not contribute and we find the sound speeds for the three dynamical degrees as \begin{equation} C_{S,I}^2=1\,,\qquad C_{S,II}^2=0\,,\qquad C_{S,III}^2=c_s^2\,. \label{eq:cs2} \end{equation} The first two degrees coincide with the modes in the vacuum case \cite{DeFelice:2013tsa} while the last mode clearly corresponds to the matter perturbation. Thus we have established that the stability conditions for the scalar sector in the UV are the same as in the vacuum case, with the additional requirement that the equation of state for the matter field satisfies $-1<w<1$. We now turn to the small momentum limit, $k\ll a\,H$. From Eq.(\ref{eq:kineticeig}), we find \begin{eqnarray} \kappa_1 &=& \frac{\Mpl^2H^2(6-\Omega)(-\dot{H})}{(3\,c_s^2H^2+\dot{H})\dot{\chi}^2}+{\cal O}(k^2) \,, \nonumber\\ \kappa_2 &=& \frac{\Omega\,a^2(r^2-\bar{\alpha})(3\,H^2+\dot{H})}{3\,(r-1)^2r^2}+{\cal O}(k^2)\,, \nonumber\\ \kappa_3 &=&6+{\cal O}(k^2)\,. \label{eq:condIR} \end{eqnarray} The requirement that the last two eigenvalues are positive, along with the conditions (\ref{eq:condUV}), gives \begin{equation} 0<\Omega<6\,,\qquad 1<\bar{\alpha}<r^2\,, \label{eq:conditions} \end{equation} where we discarded the option $\bar{\alpha}<0$, which satisfies both IR and UV no-ghost conditions, but makes the kinetic terms $\kappa_2$ and/or $\kappa_3$ negative at intermediate momenta. It can be verified from Eq.(\ref{eq:kineticeig}) that for an expansion satisfying $\dot{H}<0$ and $3\,H^2+\dot{H}>0$, the conditions (\ref{eq:conditions}) are sufficient to make the kinetic terms $\kappa_2$ and $\kappa_3$ positive at any momenta. We also remark that the upper bounds for $\bar{\alpha}$ and $\Omega$ coincide with the ones imposed by the existence requirement of the background solution \eqref{eq:alphabarup} and the positive gravitational constant \eqref{eq:cond1}, respectively.
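Both the subhorizon sound speeds \eqref{eq:cs2} and the small-momentum limits \eqref{eq:condIR} can be cross-checked symbolically. In the sympy sketch below (ours; $X$ stands for $3H^2+\dot H$, the ${\cal O}(k)$ entry $M_{33}$ is dropped consistently with the leading-order counting, and overall positive prefactors are ignored), the decoupled mode 2 propagates with $c_s^2$ while the coupled 1--3 block yields $C_S^2=1$ and $C_S^2=0$:

```python
import sympy as sp

k, a, H, Om, ab, r, Mpl, cs, chid = sp.symbols(
    'k a H Omega alpha_bar r Mpl c_s chidot', positive=True)
Hd = sp.symbols('Hdot', real=True)
w2 = sp.symbols('omega2')
X = 3*H**2 + Hd

# --- UV: leading-order components (eq:subhorizonK and the N, M expansions)
K11 = Om
K22 = Mpl**2*(6 - Om)*(-Hd)/(3*chid**2*cs**2)
K33 = ab*a**2*Om*X/(3*r**2*(ab - 1))
N13 = ab*Om*X*k/(6*(ab - 1)*H*r)
M11 = -Om*(3*H**2 + ab*Hd)*k**2/(3*(ab - 1)*a**2*H**2)
M22 = Mpl**2*(6 - Om)*(-Hd)*k**2/(3*a**2*chid**2)

# decoupled mode: omega^2 = M22/K22 = c_s^2 k^2 / a^2
assert sp.simplify(M22/K22 - cs**2*k**2/a**2) == 0

# 1-3 block of det[-K w^2 - 2 i w N + M] = 0 (M33 neglected)
det13 = (-K11*w2 + M11)*(-K33*w2) - 4*w2*N13**2
sols = sp.solve(sp.Eq(det13, 0), w2)
nonzero = [s for s in sols if s != 0]
assert any(s == 0 for s in sols)                  # C_S^2 = 0 branch
assert sp.simplify(nonzero[0] - k**2/a**2) == 0   # C_S^2 = 1 branch

# --- IR: k -> 0 limits of the exact eigenvalues (eq:kineticeig)
bracket = ab + Om*(r**2 - ab)*(6 - Om)*X/((12*k**2/a**2)*(r - 1)**2)
kappa2 = (4*Om*X*k**2/(r**2*((12*k**2/a**2)*(ab - 1) + Om*(6 - Om)*X)))*bracket
kappa3 = Om + (Om*(6 - Om)**2*(r**2 - ab)*X/((12*k**2/a**2)*(r - 1)**2))/bracket
assert sp.simplify(sp.limit(kappa2, k, 0)
                   - Om*a**2*(r**2 - ab)*X/(3*(r - 1)**2*r**2)) == 0
assert sp.simplify(sp.limit(kappa3, k, 0) - 6) == 0
```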
On the other hand, the first kinetic eigenvalue, which corresponds to matter field perturbations, can be positive at arbitrarily low momenta only if \begin{equation} 3 + \frac{\dot{H}}{c_s^2H^2}>0\,. \label{eq:condnontrivial} \end{equation} An alternative approach to obtain this condition is to use the exact expressions (\ref{eq:kineticeig}). Then, it is evident that there is a critical momentum \begin{equation} \frac{k_c}{a} = \sqrt{-\frac{\Omega(6-\Omega)(3\,H^2+\dot{H})\left(3+\frac{\dot{H}}{c_s^2\,H^2}\right)}{36(\bar{\alpha}-1)}}\,, \label{eq:criticalk} \end{equation} below which the second (negative) term in the denominator of $\kappa_1$ dominates. When $k=k_c$, the kinetic term $\kappa_1$ diverges; if the mass matrix and non-linear interactions are finite at this point, the degree of freedom corresponding to $\kappa_1$ is weakly coupled. In other words, the transition from a stable to a ghost degree of freedom can proceed dynamically within the regime of validity of the EFT. To impose stability of all modes with arbitrary $k$, the critical momentum needs to be imaginary, i.e. the condition (\ref{eq:condnontrivial}) should be satisfied. For a scalar field with a canonical kinetic term and a field dependent potential, we have $P(X,\chi)=X-V(\chi)$ with unit sound speed, so the condition \eqref{eq:condnontrivial} coincides with the weak energy condition \eqref{eq:cond3b}. In this very simple scalar field theory, the field perturbation is not a ghost under the already assumed conditions. This case is discussed in detail in Sec.\ref{sec:kination}. However, it is straightforward to devise a case where \eqref{eq:condnontrivial} no longer holds. If we consider the $\chi$ fluid to be non-relativistic matter with an effective equation of state $w=0$, then during the matter dominated stage we have $\dot{H}=-3\,H^2/2$. In order to have no ghost degrees of freedom at any momenta, the condition reduces to $c_s^2>1/2$, which is clearly relativistic. 
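As a sanity check, the threshold just quoted can be reproduced with a short computer-algebra sketch (we assume \texttt{sympy} is available; the variable names are ours): substituting $\dot{H}=-3(1+w)H^2/2$ for a $w$-dominated expansion into the condition \eqref{eq:condnontrivial} and solving for $c_s^2$ gives $c_s^2>(1+w)/2$, which reduces to $c_s^2>1/2$ for $w=0$.

```python
import sympy as sp

# No-ghost condition (eq:condnontrivial): 3 + Hdot/(cs2*H^2) > 0,
# evaluated on a background dominated by a fluid with constant w,
# for which Hdot = -3*(1+w)*H^2/2.
cs2, w, H = sp.symbols('cs2 w H', positive=True)
Hdot = -sp.Rational(3, 2) * (1 + w) * H**2

cond = sp.together(3 + Hdot / (cs2 * H**2))
cs2_min = sp.solve(sp.Eq(cond, 0), cs2)[0]

print(cs2_min)             # threshold sound speed squared: (1 + w)/2
print(cs2_min.subs(w, 0))  # 1/2: matter domination requires cs^2 > 1/2
```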
More generally, let us consider a perfect fluid with constant equation of state, and assume $\chi\to\chi+\chi_0$ invariance in the action. In this case, we have $c_s^2=w$, and the expression for the critical momentum \eqref{eq:criticalk} can be written as \begin{equation} \frac{k_c}{a\,H} = \frac{\vert 1-w\vert}{4} \sqrt{\frac{\Omega(6-\Omega)}{w\,(\bar\alpha-1)}}\,, \end{equation} which can be imaginary only if \begin{equation} w<0\,. \end{equation} However, this choice renders the second kinetic term in \eqref{eq:subhorizonK} negative in the UV for the perfect fluid limit $c_s^2=w$. Finally, for a dust fluid with $w=0$, the critical momentum diverges to infinity, so na\"ively matter perturbations are ghost--like at any momenta. This is an IR instability of the matter which exhibits itself as a ghost instability. As we argue in the next Section, its presence can be traced back to the choice of the field perturbation as the fundamental variable. We note that this instability is remarkably different from the IR instability found in \cite{Motohashi:2014una}. Specifically, the instability in this Section turns up in the matter sector, when $1/\kappa_1$ crosses zero. In contrast, the instability in \cite{Motohashi:2014una} emerges in the gravity sector, when $\kappa_2$ crosses zero. The latter case will be discussed in detail in Sec.\ref{sec:kination}. \section{Exorcising the ghostly matter perturbations} \label{sec:canonicaltrans} The matter perturbations becoming ghost--like in the IR does not necessarily imply a catastrophic instability of the cosmological model \cite{Gumrukcuoglu:2016jbh}. Instead, it may be an indication of a physical feature associated with a mild instability. To clarify this point, we now perform a canonical transformation and treat the energy density perturbation of the matter field as the fundamental variable. 
In this picture it becomes clear that the IR ghost instability that we encountered in the perfect fluid analogue is actually a classical instability that becomes relevant below some momentum scale, much like the Jeans instability. Instead of using the scalar field perturbation as the fundamental variable, we therefore choose energy density perturbations. For the scalar field with Lagrangian $P(X,\chi)$, the matter energy density is given by the second of Eq.\eqref{eq:fluid}, which at first order in perturbations reads \begin{equation} \bar{\delta \rho} = \frac{\rho+P}{c_s^2}\left(\Mpl\,\frac{\delta\dot\chi}{\dot\chi}-\Phi\right) + \Mpl\,\delta\chi\left(P_{,X\chi}\dot{\chi}^2-P_{,\chi}\right)\,. \label{eq:drhodefined} \end{equation} Inspecting the non-reduced action quadratic in scalar perturbations, we find that the only occurrence of $|\delta\dot\chi|^2$ is in the combination $c_s^2 \bar{\delta\rho}^2/(2(\rho+P))$. With this information, we introduce an auxiliary field $\delta\rho$ to the action as follows: \begin{equation} \tilde{S}^{(2)}_{\rm scalar} = S^{(2)}_{\rm scalar} -\int d^3k\,dt\,a^3 \frac{c_s^2}{2\,(\rho+P)}\,\vert \delta\rho-\bar{\delta\rho}\vert^2\,, \label{eq:newscalaraction} \end{equation} where $\bar{\delta\rho}$ is the expression given in \eqref{eq:drhodefined} in terms of the scalar field perturbations, while $\delta\rho$ is the newly introduced auxiliary variable. The coefficient of the second term is chosen such that the coefficient of the $|\delta\dot{\chi}|^2$ term in $\tilde{S}^{(2)}_{\rm scalar}$ vanishes. Notice that varying the above action with respect to $\delta\rho^\star$, we find \begin{equation} \delta\rho =\bar{\delta\rho}\,, \end{equation} and the two actions $\tilde{S}^{(2)}_{\rm scalar}$ and $S^{(2)}_{\rm scalar}$ coincide. However, we instead vary the action \eqref{eq:newscalaraction} with respect to the scalar field perturbation $\delta\chi$, whose time derivatives can now be removed by adding boundary terms. 
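The completion-of-the-square trick behind \eqref{eq:newscalaraction} can be illustrated on a toy one-dimensional Lagrangian (a hypothetical example with our own symbols, not the model's action): subtracting a complete square in an auxiliary variable cancels the velocity-squared term, while the auxiliary equation of motion returns the original Lagrangian.

```python
import sympy as sp

# Toy analogue of the auxiliary-field trick: start from L = (a/2) qdot^2
# and subtract (a/2)(rho - qdot)^2.  The qdot^2 term cancels, so the new
# Lagrangian is linear in qdot; varying rho gives rho = qdot and the
# original Lagrangian is recovered.  (Illustration only.)
a, qdot, rho = sp.symbols('a qdot rho')
L = a / 2 * qdot**2
L_aux = sp.expand(L - a / 2 * (rho - qdot)**2)

print(L_aux)                                      # linear in qdot
rho_sol = sp.solve(sp.diff(L_aux, rho), rho)[0]
print(rho_sol)                                    # qdot
print(sp.simplify(L_aux.subs(rho, rho_sol) - L))  # 0
```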
Its equation of motion gives \begin{equation} \delta\chi = -\frac{\dot\chi}{\Mpl(\rho+P){\cal Q}^2}\left[\delta\dot\rho+3(1+c_s^2)H\,\delta\rho+(\rho+P)\left(\frac{k^2B}{a}+3\,\dot{\psi}\right)\right]\,, \label{eq:soldchi} \end{equation} where we defined \begin{equation} {\cal Q}^2 \equiv \frac{k^2}{a^2}+9\,H^2 \left(c_s^2-\frac{\dot{P}}{\dot{\rho}}\right) =\frac{k^2}{a^2}+ \frac{3\,H\,\dot\chi}{\rho+P} \Bigg((1+c_s^2)P_{,\chi}-c_s^2P_{,X\chi}\dot\chi^2\Bigg)\,. \label{eq:Q2def} \end{equation} We note that for a shift symmetric matter action, invariant under $\chi\to\chi+\chi_0$, the second term in parentheses vanishes and the quantity ${\cal Q}$ coincides with the physical momentum. Using the solution \eqref{eq:soldchi} back in the action, we recover the same number of variables as the starting point, removing $\delta\chi$ in favor of $\delta \rho$. This procedure is equivalent to a canonical transformation \cite{DeFelice:2015moy}. From this point on, the calculation proceeds as in Sec.\ref{sec:scalars}. We first integrate out the modes $\Phi$ and $B$ whose equations of motion now read: \begin{eqnarray} \Phi &=& \frac{1}{(6-\Omega)H^2}\left[ -\frac{\delta\rho}{\Mpl^2}+ \frac{2\,k^2}{a^2}\left(\psi+\frac{k^2}{6}\,E+a\,H\,B\right) - \frac{\Omega(3\,H^2+\dot{H})}{r-1}(\delta\sigma-\psi)-H(\Omega\,\delta\dot{\sigma}-6\,\dot{\psi})\right]\,, \nonumber\\ B &=& \frac{\Omega\,a\,{\cal Q}^2}{H\,\left[ \Omega\,a^2{\cal Q}^2(3\,H^2+\dot{H})- k^2(6-\Omega)(r^2-1)\dot{H}\right]} \nonumber\\ &&\qquad\times \left[[3(r^2-1-\bar{\alpha})H^2-\bar{\alpha}\dot{H}]\delta\sigma -\frac{3(r^2-1)H}{\Omega\,{\cal Q}^2\Mpl^2} \left[\delta\dot\rho+3(1+c_s^2)H\,\delta\rho+3\,(\rho+P)\dot{\psi}\right] \right.\nonumber\\ &&\left. \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +\frac{6(r^2-1)H}{\Omega}\,\left(-H\,\Phi+\dot{\psi}+\frac{k^2}{6}\dot{E}\right)\right]\,. 
\label{eq:solPhiBcanonical} \end{eqnarray} After using the solutions for $\Phi$ and $B$ in the action, we perform a linear transformation on the field basis and define: \begin{equation} \tilde{Y}_1 \equiv \delta \sigma -\left(\psi+\frac{k^2}{6}\,E\right)\,,\qquad \tilde{Y}_2 \equiv \delta\rho - \frac{\dot{\rho}}{H}\,\left(\psi+\frac{k^2}{6}\,E\right)\,,\qquad \tilde{Y}_3 \equiv \frac{k}{2}\,E\,, \label{eq:scalarbasiscanonical} \end{equation} and as in Sec.\ref{sec:scalars}, the field $\psi$ becomes non-dynamical as a consequence of dRGT tuning. Once this field is integrated out, the action becomes formally \begin{equation} \tilde{S}^{(2)}_{\rm scalar}= \frac{\Mpl^2}{2}\int d^3k \,dt\,a^3 \,\left(\dot{\tilde{Y}}^\dagger\,\tilde{K}\,\dot{\tilde{Y}} + \dot{\tilde{Y}}^\dagger\,\tilde{N}\,\tilde{Y}- \tilde{Y}^\dagger\,\tilde{N}\,\dot{\tilde{Y}}-\tilde{Y}^\dagger \,\tilde{M} \,\tilde{Y}\right)\,, \end{equation} where matrices $\tilde{K}$, $\tilde{M}$ and $\tilde{N}$ are $3\times3$ time-dependent real matrices. The kinetic matrix $\tilde{K}$ can be diagonalized in the manner described in Eqs.\eqref{eq:basisrotate}-\eqref{eq:diagonalkinetic}, giving the eigenvalues \begin{eqnarray} \tilde\kappa_1 &=& \frac{3\,a^2}{\Mpl^4(6-\Omega)(-\dot{H})}\,\left[\frac{(-\dot{H})\,(\bar\alpha-1)(6-\Omega)^2}{\frac{12\,k^2}{a^2}(\bar{\alpha}-1)+\Omega\,(6-\Omega)\,(3\,H^2+\dot{H})} +\frac{{\cal Q}^2}{k^2/a^2}\right]^{-1}\,, \nonumber\\ \tilde\kappa_2 &=& \frac{4\,\Omega\,(3\,H^2+\dot{H})k^2}{r^2\left[\frac{12\,k^2}{a^2}(\bar{\alpha}-1)+ \Omega(6-\Omega)(3\,H^2+\dot{H})\right]} \left[ \bar{\alpha} + \frac{\Omega(r^2-\bar{\alpha})(6-\Omega)(3\,H^2+\dot{H})}{\frac{12\,k^2}{a^2}\,(r-1)^2} \right]\,, \nonumber\\ \tilde\kappa_3 &=& \Omega + \frac{\Omega\,(6-\Omega)^2(r^2-\bar{\alpha})(3\,H^2+\dot{H})}{\frac{12\,k^2}{a^2}(r-1)^2} \,\left[ \bar{\alpha}+\frac{\Omega(r^2-\bar{\alpha})(6-\Omega)(3\,H^2+\dot{H})}{\frac{12\,k^2}{a^2}\,(r-1)^2} \right]^{-1}\,. 
\label{eq:kineticeigcanonical} \end{eqnarray} We notice that the canonical transformation did not affect the two kinetic terms corresponding to the scalar graviton and quasidilaton perturbations, so we have $\tilde{\kappa}_2=\kappa_2$ and $\tilde{\kappa}_3=\kappa_3$. On the other hand, the first kinetic term has changed with respect to \eqref{eq:kineticeig}. Let us discuss the stability conditions by considering the subhorizon limit of the system. Although the initial system was shown to be UV stable in Sec.\ref{sec:scalars}, we wish to make sure that the canonical transformation did not flip the sign of the first kinetic term in the UV. For large momenta, the kinetic eigenvalue is: \begin{equation} \tilde{\kappa}_1 = \frac{3\,a^2}{\Mpl^4(6-\Omega)(-\dot H)} + {\cal O}(k^{-2})\,, \end{equation} where we used that ${\cal Q}=k/a +{\cal O}(k^0)$ in this limit. This kinetic term is always positive for a matter fluid with equation of state $w >-1$. Expanding the other matrices for large momenta, one can also show that the sound speeds of all three perturbations are still given by \eqref{eq:cs2}, so the transformation did not introduce any gradient instability either. On the other hand, the qualitative result at intermediate momenta, and in particular in the IR, is now dramatically different from the case in Sec.\ref{sec:scalars}. If ${\cal Q}$ is non-vanishing in the limit $k\to0$, i.e. if the second term in Eq.\eqref{eq:Q2def} is non-zero, then it determines the sign of the kinetic term, \begin{equation} \tilde{\kappa}_1 = \frac{3\,k^2}{\Mpl^4(6-\Omega)(-\dot H)\,\left(c_s^2 - \frac{\dot{P}}{\dot{\rho}}\right)} + {\cal O}(k^{4})\,. \end{equation} In this case, the no-ghost condition for the matter $c_s^2 > \dot{P}/\dot{\rho}$ can be written as: \begin{equation} \dot{\chi} \left[(1+c_s^2)P_{,\chi}-c_s^2P_{,X\chi}\dot\chi^2\right]>0\,. 
\label{eq:matter-stability-general} \end{equation} It should be noted that if this IR stability condition is satisfied, then the kinetic term $\tilde{\kappa}_1$ is manifestly positive at any given momentum, including intermediate ones. Although our scalar field action is general, it is not meaningful to impose the condition \eqref{eq:matter-stability-general} on arbitrary matter sectors, whose action may inherently be unstable regardless of its coupling to massive gravity. The stability conditions should be evaluated for the specific problem at hand. As an example, we consider a shift-symmetric k-essence field with $P(X,\chi)=P(X)$. This is a relevant example as it can be used to model irrotational fluids. In this case, ${\cal Q} =k/a$ and both terms in the first of Eq.\eqref{eq:kineticeigcanonical} are manifestly positive for $0<\Omega<6$, $\bar\alpha>1$ and $-1<w<1$. Remarkably, the IR ghost which was present for any perfect fluid in the analysis of Sec.\ref{sec:scalars} is removed when we use the density perturbation as the variable. As a second example, we consider a canonical scalar field with a potential $V$, i.e. \begin{equation} P(X,\chi) = X - V(\chi)\,. \end{equation} In this case, the no-ghost condition \eqref{eq:matter-stability-general} becomes: \begin{equation} \dot{\chi}\, V_{,\chi}(\chi) = \partial_t V(\chi) <0\,. \end{equation} In other words, if the potential decreases with time, there is no ghost instability in the matter sector at any momenta. It is interesting to note that for this example, the analysis in Sec.\ref{sec:scalars} did not reveal any ghost instability for the field perturbation, as we found that $\kappa_1$ in Eq.~\eqref{eq:kineticeig} is manifestly positive as long as $c_s^2=1$. We stress that a canonical transformation does not change the nature of an instability or the physical observables in general. An IR ghost instability in one picture can correspond to a tachyonic instability in another. 
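The reduction of the no-ghost condition \eqref{eq:matter-stability-general} to $\partial_t V<0$ for the canonical scalar can be checked symbolically (a short sketch assuming \texttt{sympy}; the symbols below are ours):

```python
import sympy as sp

# Check that the matter no-ghost condition
#   chidot * [ (1 + cs2) P_chi - cs2 * P_Xchi * chidot^2 ] > 0
# reduces to dV/dt < 0 for a canonical scalar P = X - V(chi),
# which has cs2 = 1 and P_Xchi = 0.
chi, chidot = sp.symbols('chi chidot', real=True)
V = sp.Function('V')

cs2 = 1
P_chi = -sp.diff(V(chi), chi)   # P_{,chi} = -V'(chi)
P_Xchi = 0

lhs = chidot * ((1 + cs2) * P_chi - cs2 * P_Xchi * chidot**2)
dVdt = sp.diff(V(chi), chi) * chidot   # chain rule: dV/dt = V' chidot

print(sp.simplify(lhs + 2 * dVdt))  # 0, so lhs > 0  <=>  dV/dt < 0
```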
The crucial point is to determine whether the instability is safe or not for the relevant cases. We proved the stability of the system in the UV and thus any potential instability in the matter sector is bound to appear in the IR in the form of a tachyonic instability in terms of fluid perturbations. We argue that such an IR instability is harmless as it would be a manifestation of the Jeans instability. \section{Canonical scalar field with a potential} \label{sec:kination} We now examine closely the case when the matter field has a canonical kinetic term and a potential. This is the scenario previously considered by Ref.\cite{Motohashi:2014una}, which corresponds to \begin{equation} P(X,\chi)=X-V(\chi)\,, \label{eq:Canonicalscalar} \end{equation} and leads to the energy density $\rho = X+V$ and sound speed $c_s^2=1$. In this construction, the scalar field is not assumed to be an analogue of a fluid, so we will be using our results from Sec.\ref{sec:scalars}. For this example, we can use the background equations of motion \eqref{eq:BG-FIXEDPOINT} to replace $\dot{H}$ and $V(\chi)$ in terms of $H$ and $\dot{\chi}$, and rewrite the eigenvalues \eqref{eq:kineticeig} of the kinetic matrix as \begin{eqnarray} \kappa_1 &=& (6-\Omega)\,\left(\frac{\Omega(6-\Omega-\Xi^2)+4(\bar{\alpha}-1)\frac{k^2}{a^2H^2}}{\Omega(6-\Omega-\Xi^2)^2+4\,(\bar{\alpha}-1)(6-\Omega)\frac{k^2}{a^2H^2}}\right)\,,\nonumber\\ \kappa_2 &=& \frac{\Omega\,a^2H^2\,(6-\Omega-\Xi^2)}{(6-\Omega)r^2(r-1)^2} \left( \frac{\Omega(r^2-\bar{\alpha})(6-\Omega-\Xi^2)+4\,\bar{\alpha}(r-1)^2 \frac{k^2}{a^2H^2}}{\Omega(6-\Omega-\Xi^2)+4\,(\bar{\alpha}-1)\frac{k^2}{a^2H^2}} \right)\,,\nonumber\\ \kappa_3 &=& 2\,\Omega\,\left(\frac{3\,(r^2-\bar{\alpha})(6-\Omega-\Xi^2)+2\,\bar{\alpha}(r-1)^2\frac{k^2}{a^2H^2}}{\Omega\,(r^2-\bar{\alpha})(6-\Omega-\Xi^2)+4\,\bar{\alpha}(r-1)^2\frac{k^2}{a^2H^2}}\right)\,, \label{eq:kineticeigscalar} \end{eqnarray} where in the spirit of Ref.~\cite{Motohashi:2014una}, we defined the 
dimensionless quantity \begin{equation} \Xi \equiv \frac{\dot{\chi}}{\Mpl H}\,. \end{equation} The no-ghost conditions for this case are then simply: \begin{equation} 0<\Omega<6\,,\qquad 1<\bar{\alpha}<r^2\,,\qquad 6-\Omega-\Xi^2 >0\,. \label{eq:Huconditions} \end{equation} It can be verified that the latter condition in this set-up corresponds to \eqref{eq:cond3b} for $\Omega<6$. The main focus of Ref.\cite{Motohashi:2014una} is on the second of (\ref{eq:Huconditions}), which provides a time dependent range for the $\alpha_\sigma$ parameter. The concern is that even when one starts in a stable regime, the system can evolve into an instability. This is indeed a valid concern and Ref.\cite{Motohashi:2014una} actually provides a specific example where it is justified. This example is a special case of the model \eqref{eq:Canonicalscalar} where the potential vanishes, i.e. a kinetic energy dominated scalar field. This allows the background evolution to reduce to two equations: \begin{equation} 3\,H^2+\dot{H} = \frac{6\,m^2\rho_m}{6-\Omega}\,, \end{equation} and \begin{equation} r-1 = \frac{2\,\Omega\,\rho_m}{(6-\Omega)J\,\xi}\,, \end{equation} hence $r$ is constant. During the kination dominated stage the expansion rate redshifts as $a^{-3}$, so $\bar\alpha \propto a^{-6}$. Therefore at early times, $\bar\alpha$ will grow very rapidly with respect to the constant $r$, eventually violating the stability condition $\bar{\alpha}<r^2$. The kinetic term from which this condition arises is the second of \eqref{eq:kineticeigscalar}, which can be rewritten for the massless case as \begin{equation} \kappa_2 = \frac{2\,\Omega\,a^2m^2\rho_m}{(6-\Omega)r^2(r-1)^2}\left[\frac{\Omega(r^2-\bar\alpha)m^2\rho_m+2\,\bar\alpha(r-1)^2 k^2/a^2}{\Omega\,m^2\rho_m +2\,(\bar\alpha-1) k^2/a^2}\right]\,. \end{equation} We see that the problem arises when the numerator vanishes. 
Assuming that $\bar\alpha$ can actually exceed the value of $r^2$, the modes with physical momentum \begin{equation} \frac{k}{a} > p_c \equiv \sqrt{\frac{\Omega\,m^2\rho_m(\bar\alpha-r^2)}{2\,(r-1)^2\bar\alpha}}\,, \end{equation} will have a positive kinetic term. As the ratio $(\bar\alpha-r^2)/\bar{\alpha}$ converges to unity at early times, the sooner the condition $\bar\alpha<r^2$ breaks, the fewer modes will be ghost--like. Despite the apparent severity of this problem, the specific example of a kination dominated universe is non-representative. In this case, since $r$ is constant, going sufficiently early in time will result in $\bar\alpha$ coinciding with the constant $r^2$, making the kinetic term $\kappa_2$ cross zero. On the other hand, as one goes back in time the expansion rate $H$ will eventually grow as high as the effective field theory cutoff of the theory $\Lambda_3 = (m^2\Mpl)^{1/3}$. Assuming $m\sim H_0$, the effective theory is valid up to $\Lambda_3 \sim 10^{-40}\Mpl \simeq 10^{-19} {\rm MeV}$. For a conservative scenario where thermalization occurs shortly after inflation, the expansion rate will reach $\Lambda_3$ already during the radiation dominated stage.\footnote{To put this cutoff scale into perspective, the QCD phase transition occurs at $H\sim 10^{-16}{\rm MeV}$ (assuming $g_*\sim 100$ and $T=150\,{\rm MeV}$).} Since this time is well after any kination field can be relevant, the kinetic terms of all perturbations will be of ${\cal O}(1)$ and positive. Therefore, except for the IR ghost in the matter sector, which is equivalent to the standard Jeans instability, no ghost will emerge within the regime of validity of the effective field theory. Although we can avoid the specific problem with the kination field in a standard cosmological scenario, it is still not clear whether the evolution would drive the system beyond the stability bounds, even if $r$ is time dependent. 
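The scaling argument above can be made concrete with a small numerical illustration (all parameter values below are arbitrary choices for illustration, not fits to the model): with $r$ constant and $\bar\alpha\propto a^{-6}$, the bound $\bar\alpha<r^2$ is violated before some scale factor, and the critical momentum $p_c$ saturates as the ratio $(\bar\alpha-r^2)/\bar\alpha$ approaches unity.

```python
import numpy as np

# Illustration of the kination argument: alpha_bar grows as a^{-6}
# toward early times while r stays constant, so the stability bound
# alpha_bar < r^2 is eventually violated.  Omega, r, m2rho and the
# value of alpha_bar at a = 1 are arbitrary illustrative numbers.
Omega, r, m2rho = 1.0, 2.0, 1.0
alpha_bar_0 = 0.5 * r**2                     # stable at a = 1

a = np.logspace(-2, 0, 400)                  # scale factor
alpha_bar = alpha_bar_0 * a**(-6)

a_cross = (alpha_bar_0 / r**2)**(1.0 / 6.0)  # alpha_bar = r^2 here
unstable = alpha_bar > r**2                  # i.e. a < a_cross

# critical physical momentum; modes with k/a > p_c keep a positive
# kinetic term, modes below it are ghost-like
p_c = np.sqrt(Omega * m2rho * (alpha_bar[unstable] - r**2)
              / (2.0 * (r - 1.0)**2 * alpha_bar[unstable]))

print(a_cross)    # bound violated for a < a_cross
print(p_c.max())  # saturates at sqrt(Omega*m2rho/2)/(r-1) as a -> 0
```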
On the other hand, the appearance of the ghost is qualitatively different from the IR ghost in the matter sector discussed at the end of Sec.\ref{sec:scalars}. In the case of a violation of $\bar\alpha<r^2$, the change of sign in $\kappa_2$ is a result of a vanishing numerator, meaning that the corresponding degree of freedom becomes infinitely strongly coupled. Thus the strong coupling scale is zero at the critical momentum $k/a=p_c$. In other words, even if, at low energies, a matter source starts to drive the system away from stability, we cannot reliably talk about the regime where the ghost in Ref.\cite{Motohashi:2014una} would show up. The situation actually has another aspect. The IR stability bound $\bar\alpha<r^2$ is also the condition \eqref{eq:alphabarup} that is required by the existence of the cosmological solution. If this condition is violated, we get an inconsistency leading to an imaginary $\dot{f}/n$. Then the question is, if one has an evolution (such as the kination domination) that forces the quantity $\dot{f}^2/n^2$ to cross zero, whether our ans\"atze would still hold and the equations would prevent this, or whether we would depart from the cosmological construction. Ref.~\cite{Heisenberg:2015voa} addressed this problem by pointing out that at early times, there is a departure from the fixed point solution, giving rise to a non-zero $\dot{\xi}$, which may prevent the evolution from reaching the $\dot{f}/n=0$ point. In fact, as $r$ can no longer be solved for algebraically away from the fixed point, this may resolve the issue of a constant $r$ in the kination example as well. However, understanding the behavior of the solutions away from the fixed point is challenging. The reason is that in the full set of equations, the evolution of $r$ is given by \eqref{eq:EQS}, after combining it with Eqs.~\eqref{eq:EQA} and \eqref{eq:eqstuck0}. 
Although one can evolve the equations numerically starting with initial conditions away from the fixed point, the coefficient of the $\dot{r}$ term vanishes on the fixed point and the numerical evolution would hit a singularity as the late time asymptotic solution is approached. In fact, this is the same property that allows us to determine $r$ algebraically at late times. Although Ref.~\cite{Kahniashvili:2014wua} considered the background evolution away from the fixed point, the authors actually considered a very restricted part of the evolution, where they set $\dot{f}/n={\rm constant}$, effectively reducing the St\"uckelberg constraint to the original quasidilaton one. The equation that is solved in Ref.~\cite{Kahniashvili:2014wua} to evolve $r$ comes directly from the constancy of $\dot{f}/{n}$, so the singularity at the fixed point is never encountered. Another possibility is that the system may evolve towards a fixed point with $\dot{f}/N=0$ as the r.h.s. of \eqref{eq:eqstuck0} approaches zero. In this case the final configuration with $\dot{f}/N=0$ does not allow us to choose the unitary gauge for the time variable. Nonetheless the extended fiducial metric defined in \eqref{eq:extended-fiducial-FLRW} remains regular as long as $\dot{\sigma}/N\ne 0$. By assuming that $\dot{\xi}/N\to 0$ as $\dot{f}/N\to 0$, one obtains \begin{equation} \frac{1}{\Mpl}\frac{\dot{\sigma}}{N} \to H\,,\quad r \to \frac{\sqrt{\alpha_{\sigma}}}{m}\frac{H}{\xi}\,, \end{equation} and thus the extended fiducial metric is indeed regular at late time. By further assuming that $\dot{H}\to 0$, $\rho\to 0$ and $P\to 0$, one finds a new de Sitter fixed point specified by constant values of $H$ and $\xi$, which are determined by the following two equations derived from the background equations of motion: \begin{equation} 3\left(1-\frac{\Omega}{6}\right) H^2 = m^2\rho_m\,, \quad m\left(\sqrt{\alpha_{\sigma}}H-m\xi\right)J = \Omega H^2\,. 
\end{equation} Whether this new de Sitter fixed point solution is stable or not is an interesting problem for future work. The moral of this Section is as follows. The instability associated with the purely kinetic scalar field is irrelevant in realistic cosmology, although with other matter fields one should still be wary of violating the stability conditions, which also determine whether the background solutions exist. In order to understand this problem better, we need a detailed and complete numerical study, which is beyond the scope of this paper. \section{Discussion} \label{sec:discussion} In this paper we studied the perturbative stability of the extended quasidilaton cosmology, in the presence of a matter sector that consists of a generic k-essence type scalar field. The analysis was restricted to the late time attractor solution where the graviton mass term acts like a cosmological constant and the quasidilaton field modifies the gravitational constant. We found that in the UV the solution is stable for a wide range of parameters. Conversely, in the IR two distinct types of ghosts can emerge, depending on the dominant fields in the matter sector. When the scalar field behaves like a perfect fluid, we found that the scalar field perturbations become ghost--like in the IR. However, unlike the UV ghosts, this is not a signal of a catastrophic instability; we showed that this ghost can be removed by an appropriate choice of matter variable, namely the density perturbation. On the other hand, our analysis is not sufficient to verify that this IR ghost in the field perturbation becomes a classical tachyonic instability for the density perturbation. The main reason is that we have a system of three fields, all coupled to each other with coefficients that are time dependent. In the Lagrangian picture, it is generically impossible to decouple these to obtain the full frequency eigenvalues. 
Although this may be achievable in the Hamiltonian picture, it is technically involved and beyond the scope of the paper. In addition to showing that ghost modes can be avoided, we have also shown that the gradient parts of the eigenfrequencies do not lead to any instabilities, which should be read as a proof of the stability of the system in the UV. Any potential instability in the matter sector is thus bound to appear in the IR, in the form of a tachyonic instability. We argue that such an IR instability is harmless as it would be a manifestation of the Jeans instability in the context of massive gravity. The second type of IR ghost emerges in the gravity sector. Although the two stability conditions corresponding to this sector have the same form as in the vacuum case \cite{DeFelice:2013tsa}, the conditions now become time dependent, especially at early times, due to the contribution from the background matter fields. In particular, we considered the example of Ref.~\cite{Motohashi:2014una}, where a purely kinetic energy (kination) scalar field can lead to a violation of one of the stability conditions, resulting in an IR ghost. We found that this specific scenario can be avoided in standard cosmology as any kination field would be subdominant when the energy of the universe surpasses the EFT scale. Moreover, in more general scenarios where the matter evolution drives the system away from stability, the model becomes infinitely strongly coupled and the IR instability does not appear within the regime of validity of the EFT. On the other hand, this type of evolution raises some questions about the background evolution. Even before such a strong coupling is reached, the evolution needs to break the existence requirement for background solutions in Eq.\eqref{eq:alphabarup}. Such an outcome can be avoided if the full system of equations prevents the evolution from ever reaching this problematic point, e.g. by driving the solution away from the fixed point. 
Such a resolution was considered in Ref.~\cite{Heisenberg:2015voa}, although it is challenging to verify this scenario in a concrete set-up. In order to fully determine the fate of the background solutions, it is necessary to develop a detailed numerical analysis of the full equations of motion, away from the late time attractor. If this open problem can be addressed, the extended quasidilaton theory would be the simplest massive gravity theory which can accommodate a stable, realistic cosmology. \acknowledgments We thank Tina Kahniashvili for clarifying their calculations in Ref.~\cite{Kahniashvili:2014wua}. AEG acknowledges support by STFC grant ST/L00044X/1. KK is supported by the UK Science and Technologies Facilities Council grants ST/K00090X/1 and ST/N000668/1 and the European Research Council through grant 646702 (CosTesGrav). The work of SM was supported by Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) No. 24540256, and by World Premier International Research Center Initiative (WPI), MEXT, Japan.
\section{INTRODUCTION} \renewcommand{\theequation}{1.\arabic{equation}} \setcounter{equation}{0} We abbreviate SE for the Schr\"odinger equation and WDW for the Wheeler-deWitt equation. \section{EXACT UNCERTAINTY AND THE SE} \renewcommand{\theequation}{2.\arabic{equation}} \setcounter{equation}{0} The exact uncertainty principle of Hall and Reginatto is discussed at length in \cite{c1,h1,h2,h3,r1}. Basically following e.g. \cite{h1,h3} one defines Fisher information via $(\bullet)\,\,F_x=\int dx P(x)[\partial_xlog(P(x))]^2$ and a Fisher length by $\delta x=F_x^{-1/2}$ where $P(x)$ is a probability density for a 1-D observable x. The Cram\'er-Rao inequality says $Var(x)\geq F_x^{-1}$ or simply $\Delta x\geq\delta x$. For a quantum situation with $P(x)=|\psi(x)|^2$ and $\psi$ satisfying a SE one finds immediately \begin{equation}\label{1.1} F_x=\int dx|\psi|^2\left[\frac{\psi'}{\psi}+\frac{\bar{\psi}'}{\bar{\psi}}\right]^2= \end{equation} $$=4\int dx\bar{\psi}'\psi'+\int dx|\psi|^2\left[\frac{\psi'}{\psi}-\frac {\bar{\psi}'}{\bar{\psi}}\right]^2 =\frac{4}{\hbar^2}\left[<p^2>_{\psi}-<p_{cl}^2>_{\psi}\right]$$ where $p_{cl}=(\hbar/2i)[(\psi'/\psi)-(\bar{\psi}'/\bar{\psi})]$ is the classical momentum observable conjugate to x ($\sim S_X$ for $\psi=Rexp(iS/\hbar)$). Setting now $p=p_{cl}+p_{nc}$ one obtains after some calculation $(\blacklozenge)\,\,F_x=(4/\hbar^2)(\Delta p_{nc})^2=1/(\delta x)^2\Rightarrow \delta x\Delta p_{nc}=\hbar/2$ as a relation between nonclassicality and Fisher information. Note $<p>_{\psi}=<p_{cl}>_{\psi}$, $\partial_t|\psi|^2+\partial_x[|\psi|^2m^{-1}p_{cl}]=0$ from the SE, and $(\Delta x)(\Delta p)\geq (\delta x)(\Delta p)\geq (\delta x)(\Delta p_{nc})$. \\[3mm]\indent We recall also that from \eqref{1.1} $F_x$ is proportional to the difference of a quantum and a classical kinetic energy. Thus $(\hbar^2/4)F_x(1/2m)=(1/2m) <p^2>_{\psi}-(1/2m)<p^2_{cl}>_{\psi}$ and $E_F=(\hbar^2/8m)F_x$ is added to $E_{cl}$ to get $E_{quant}$. 
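For a concrete example, a Gaussian density $P(x)$ with width $\sigma$ has $F_x=1/\sigma^2$ exactly, so $Var(x)\,F_x=1$ and the Cram\'er-Rao bound is saturated. This is easy to verify numerically (a sketch assuming \texttt{numpy}; the width is an arbitrary illustrative value):

```python
import numpy as np

# Numerical check of the Fisher information / Cramer-Rao relation for
# a Gaussian P(x): F_x = 1/sigma^2, Var(x) = sigma^2, so Var * F_x = 1.
sigma = 1.7
x = np.linspace(-15, 15, 60001)
dx = x[1] - x[0]
P = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dlogP = np.gradient(np.log(P), x)   # d/dx log P = -x/sigma^2
F_x = np.sum(P * dlogP**2) * dx     # Fisher information
var = np.sum(P * x**2) * dx         # Var(x); the mean is zero

print(F_x * sigma**2)   # ~ 1
print(var * F_x)        # ~ 1: the bound Var(x) >= 1/F_x is saturated
```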
By deBroglie-Bohm (dBB) theory there is a quantum potential \begin{equation}\label{1.2} Q=\frac{\hbar^2}{8m}\left[\left(\frac{P'}{P}\right)^2-2\frac{P''}{P}\right];\,\, P=|\psi|^2 \end{equation} and evidently $(\bigstar)\,\,<Q>_{\psi}=\int PQdx=(\hbar^2/8m)F_x$ (upon neglecting the boundary integral term at $\pm\infty$ - i.e. $P'\to 0$ at $\pm\infty$). \\[3mm]\indent Now the exact uncertainty principle (cf. \cite{h1,h3,r1}) looks at momentum fluctuations $(\clubsuit)\,\,p=\nabla S+f$ with $<f>=\bar{f}=0$ and replaces a classical ensemble energy $<E>_{cl}$ by ($P\sim|\psi|^2$) \begin{equation}\label{1.3} <E>=\int dx P\left[(2m)^{-1}\overline{|\nabla S+f|^2}+V\right]=<E>_{cl}+ \int dx P\frac{\overline {f\cdot f}}{2m} \end{equation} Upon making an assumption of the form $(\spadesuit)\,\,\overline{f\cdot f}= \alpha(x,P,S,\nabla P,\nabla S,\cdots)$ one looks at a modified Hamiltonian $(\bullet\bullet)\,\,\tilde{H}_q[P,S]=\tilde{H}_{cl}+\int dx P(\alpha/2m)$. Then, assuming \begin{enumerate} \item Causality - i.e. $\alpha$ depends only on $S,P$ and their first derivatives \item Independence for fluctuations of noninteracting uncorrelated ensembles \item $f\to L^Tf$ for invertible linear coordinate transformations $x\to L^{-1}x$ \item Exact uncertainty - i.e. $\alpha=\overline{f\cdot f}$ is determined solely by uncertainty in position \end{enumerate} one arrives at \begin{equation}\label{1.4} \tilde{H}_q=\tilde{H}_{cl}+c\int dx\frac{\nabla P\cdot\nabla P}{2m P} \end{equation} and putting $\hbar=2\sqrt{c}$ with $\psi=\sqrt{P}exp(iS/\hbar)$ a SE is obtained (cf. Sections 4 and 5 for more detail). 
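The relation $(\bigstar)$ can likewise be checked numerically for a Gaussian $P(x)$, for which both sides equal $\hbar^2/(8m\sigma^2)$ (a sketch assuming \texttt{numpy}; the units $\hbar=m=1$ and the width are arbitrary choices for illustration):

```python
import numpy as np

# Numerical check of (bigstar): <Q> = (hbar^2/8m) F_x for a Gaussian
# P(x), using the 1-D quantum potential of eq. (1.2).
hbar = m = 1.0
sigma = 0.9
x = np.linspace(-10, 10, 40001)
dx = x[1] - x[0]
P = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

Pp = np.gradient(P, x)
Ppp = np.gradient(Pp, x)
Q = (hbar**2 / (8 * m)) * ((Pp / P)**2 - 2 * Ppp / P)   # eq. (1.2)
avg_Q = np.sum(P * Q) * dx

F_x = np.sum(Pp**2 / P, ) * dx   # Fisher info, integrand P * (P'/P)^2
print(avg_Q)                     # ~ 1/(8 sigma^2) for this Gaussian
print((hbar**2 / (8 * m)) * F_x)
```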
\\[3mm]\indent As pointed out in \cite{c2} in the SE situation with Q as in \eqref{1.2}, in 3-D one has \begin{equation}\label{1.5} \int PQd^3x\sim -\frac{\hbar^2}{8m}\int\left[2\Delta P-\frac{1}{P}(\nabla P)^2\right]d^3x=\frac{\hbar^2}{8m}\int\frac{1}{P}(\nabla P)^2d^3x \end{equation} since $\int_{\Omega}\Delta Pd^3x=\int_{\partial\Omega}\nabla P\cdot{\bf n}d\Sigma$ can be assumed zero for $\nabla P=0$ on $\partial\Omega$. Hence (cf. Section 5 for more precision) \begin{theorem} Given that any quantum potential for the SE has the form \eqref{1.2} (with $\nabla P=0$ on $\partial\Omega$) it follows that the quantization can be identified with momentum fluctuations of the type studied in \cite{h3} and thus has information content as described by the Fisher information. \end{theorem} \section{WDW} \renewcommand{\theequation}{3.\arabic{equation}} \setcounter{equation}{0} The same sort of arguments can be applied to the WDW equation (cf. \cite{c2, h1,h2,p2,r1,s1}). Thus take an ADM situation \begin{equation}\label{2.1} ds^2=-(N^2-h^{ij}N_iN_j)dt^2+2N_idx^idt+h_{ij}dx^idx^j \end{equation} and assume dynamics generated by an action $(\blacklozenge\bl)\,\,A=\int dt[\tilde{H}+\int {\mathfrak D}hP\partial_tS]$. One will have equations of motion $(\bigstar\bgs)\,\, \partial_tP=\delta\tilde{H}/\delta S$ and $\partial_tS=-\delta\tilde{H}/\delta P$ (cf. \cite{c1,h2}). A suitable ``classical" Hamiltonian is \begin{equation}\label{2.2} \tilde{H}_c[P,S]= \int{\mathfrak D}hPH_0\left[h_{ij},\frac{\delta S}{\delta h_{ij}}\right]; \end{equation} $$H_0= \int dx\left[N\left(\frac{1}{2}G_{ijk\ell}\pi^{ij}\pi^{k\ell}+V(h_{ij})\right)- 2N_i\nabla_j\pi^{ij}\right]$$ where $G_{ijk\ell}$ is the deWitt (super)metric $(\clubsuit\clubsuit)\,\, G_{ijk\ell}=(1/\sqrt{h})(h_{ik}h_{j\ell}+h_{i\ell}h_{jk}-h_{ij}h_{k\ell})$ and $V\sim \hat{c}\sqrt{h}(2\Lambda-{}^3R)$. Then thinking of $\pi^{ij}=\delta S/\delta h_{ij}+ f^{ij}$ and e.g. 
$\tilde{H}_q=\tilde{H}_c+(1/2)\int{\mathfrak D}hP\int dx NG_{ijk\ell} \overline{f^{ij}f^{k\ell}}$ one arrives via exact uncertainty at a Fisher information contribution (cf. \cite{f1,f2}) \begin{equation}\label{2.3} \tilde{H}_q[P,S]=\tilde{H}_{c}+\frac{c}{2}\int{\mathfrak D}h\int dx NG_{ijk\ell}\frac{1}{P}\frac {\delta P}{\delta h_{ij}}\frac{\delta P}{\delta h_{k\ell}} \end{equation} with $\hbar=2\sqrt{c}$ and $\psi=\sqrt{P}exp(iS/\hbar)$ resulting in (for $N=1$ and $N_i=0$) \begin{equation}\label{2.4} \left[-\frac{\hbar^2}{2}\frac{\delta}{\delta h_{ij}}G_{ijk\ell}\frac{\delta}{\delta h_{k\ell}}+V\right]\psi=0 \end{equation} with a sandwich ordering ($G_{ijk\ell}$ in the middle). In general there are also constraints \begin{equation}\label{2.5} \frac{\delta \psi}{\delta N}=\frac{\delta \psi}{\delta N_i}=\partial_t\psi=0;\,\,\nabla_j\left(\frac{\delta \psi}{\delta h_{ij}}\right)=0 \end{equation} We note here (keeping $N=1$ with $N_i=0$) \begin{equation}\label{2.6} \frac{\delta}{\delta h_{ij}}\left(G_{ijk\ell}\frac{\delta}{\delta h_{k\ell}}\sqrt{P}e^{iS/\hbar}\right)= \left[\frac{\delta G_{ijk\ell}}{\delta h_{ij}}\left( \frac{1}{2}P^{-1/2}\frac{\delta P}{\delta h_{k\ell}}+\frac{iP^{1/2}}{\hbar} \frac{\delta S}{\delta h_{k\ell}}\right)+\right. 
\end{equation} $$+G_{ijk\ell}\left\{-\frac{1}{4}P^{-3/2}\frac{\delta P}{\delta h_{k\ell}}\frac {\delta P}{\delta h_{ij}}+\frac{1}{2}P^{-1/2}\frac{\delta^2P}{\delta h_{k\ell}\delta h_{ij}}- \frac{P^{1/2}}{\hbar^2}\frac{\delta S}{\delta h_{k\ell}}\frac{\delta S}{\delta h_{ij}}+\right.$$ $$\left.\left.+\frac{i}{2\hbar}P^{-1/2}\left(\frac{\delta P}{\delta h_{k\ell}}\frac{\delta S}{\delta h_{ij}}+\frac{\delta S}{\delta h_{k\ell}}\frac{\delta P}{\delta h_{ij}}\right) +\frac{iP^{1/2}}{\hbar}\frac{\delta^2S}{\delta h_{k\ell}\delta h_{ij}}\right\}\right]e^{iS/\hbar}$$ Therefore writing out the real and imaginary parts of the WDW equation gives \begin{equation}\label{2.7} -\frac{\hbar^2}{4P}\frac{\delta}{\delta h_{ij}}\left[G_{ijk\ell}\frac{\delta P}{\delta h_{k\ell}}\right]+ \end{equation} $$+\frac{\hbar^2}{8P^2}G_{ijk\ell}\frac{\delta P}{\delta h_{k\ell}} \frac{\delta P}{\delta h_{ij}}+\frac{1}{2}G_{ijk\ell}\frac{\delta S}{\delta h_{k\ell}} \frac{\delta S}{\delta h_{ij}}+V=0;$$ $$2P\frac{\delta G}{\delta h_{ij}}\frac{\delta S}{\delta h_{k\ell}}+G\left(\frac{\delta P}{\delta h_{k\ell}}\frac{\delta S}{\delta h_{ij}}+ \frac{\delta S}{\delta h_{k\ell}}\frac{\delta P}{\delta h_{ij}}\right)+2PG\frac {\delta^2S}{\delta h_{k\ell}\delta h_{ij}}=0$$ \indent It is useful here to compare with $-(\hbar^2/2m)\psi''+V\psi=0$ which for $\psi=Rexp(iS/\hbar)$ yields \begin{equation}\label{2.8} \frac{1}{2m}S_x^2+V+Q=0;\,\,Q=-\frac{\hbar^2}{2m}\frac{R''}{R}=\frac{\hbar^2}{8m} \left[\left(\frac{P'}{P}\right)^2-\frac{2P''}{P}\right] \end{equation} along with $\partial_x(R^2S')=\partial_x(PS')=0$ (leading to (2.5)). 
The analogues here are then in particular \begin{equation}\label{2.9} \frac{1}{2m}S_x^2\sim \frac{1}{2}G_{ijk\ell}\frac{\delta S}{\delta h_{k\ell}}\frac{\delta S}{\delta h_{ij}};\,\,Q=\frac{\hbar^2}{8m}\left[\left(\frac{P'}{P}\right)^2-\frac{2P''}{P} \right]\sim \end{equation} $$\sim-\frac{\hbar^2}{4P}\frac{\delta}{\delta h_{ij}}\left[G_{ijk\ell}\frac{\delta P}{\delta h_{k\ell}}\right]+\frac{\hbar^2}{8P^2}G_{ijk\ell}\frac{\delta P}{\delta h_{k\ell}} \frac{\delta P}{\delta h_{ij}}$$ We note that the Q term arises directly from \begin{equation}\label{2.10} Q=-\frac{\hbar^2}{2}P^{-1/2}\frac{\delta}{\delta h_{ij}}\left(G_{ijk\ell}\frac{\delta P^{1/2}} {\delta h_{k\ell}}\right) \end{equation} and hence \begin{equation}\label{2.11} \int {\mathfrak D}f\,PQ=-\frac{\hbar^2}{2}\int {\mathfrak D}f P^{1/2}\frac{\delta}{\delta h_{ij}} \left(G_{ijk\ell}\frac{\delta P^{1/2}}{\delta h_{k\ell}}\right) \end{equation} But from $\int {\mathfrak D}f\delta[\,\,\,]=0$ one has (cf. (4.3)) \begin{equation}\label{2.12} \int{\mathfrak D}f P^{1/2}\frac{\delta}{\delta h_{ij}}\left(G_{ijk\ell}\frac{\delta P^{1/2}}{\delta h_{k\ell}}\right)=-\int{\mathfrak D}f\frac{\delta P^{1/2}}{\delta h_{ij}}G_{ijk\ell}\frac {\delta P^{1/2}}{\delta h_{k\ell}} \end{equation} This suggests heuristically (see Section 4 for more details of proof and Section 5 for more precision) \begin{theorem} Given a WDW equation of the form (3.4) with associated quantum potential given via (3.10) (or (3.9)) it follows that the quantum potential can be expressed via momentum fluctuations as in (3.3) (for $N=1$). \end{theorem} \section{SOME FUNCTIONAL CALCULUS} \renewcommand{\theequation}{4.\arabic{equation}} \setcounter{equation}{0} We go here to \cite{c1,h2,h13,n1} and will first sketch the derivation of (3.4) following \cite{h1,h2} (cf. also \cite{c1}). The relevant functional calculus goes as follows. 
One defines a functional F of fields $f$ and sets \begin{equation}\label{4.1} \delta F=F[f+\delta f]-F[f]=\int dx\frac{\delta F}{\delta f_x}\delta f_x \end{equation} Here e.g. $dx\sim d^4x$ and in the space of fields there is assumed to be a measure ${\mathfrak D}f$ such that $\int {\mathfrak D}f\equiv\int {\mathfrak D}f'$ for $f'=f+h$ (cf. \cite{b1,h2}). Then evidently $(\spadesuit\spadesuit)\,\,\int {\mathfrak D}f(\delta F/\delta f)=0$ when $\int {\mathfrak D}f\,F[f]<\infty$. Indeed \begin{equation}\label{4.2} 0=\int {\mathfrak D}f(F[f+\delta f]-F[f])=\int dx\delta f_x\left(\int{\mathfrak D}f\frac{\delta F}{\delta f_x}\right) \end{equation} and this provides an integration by parts formula \begin{equation}\label{4.3} \int {\mathfrak D}f\,P\left(\frac{\delta F}{\delta f}\right)=-\int {\mathfrak D}f\,\left(\frac{\delta P}{\delta f}\right)F \end{equation} for $P[f]$ a probability density functional. Classically a probability density functional arises in discussing an ensemble of fields and conservation of probability requires \begin{equation}\label{4.4} \partial_tP+\sum_a\int dx\frac{\delta}{\delta f_x^a}\left.\left(P\frac{\delta H}{\delta g_x^a}\right|_{g=\delta S/\delta f}\right)=0 \end{equation} where $g_x^a$ is the momentum corresponding to $f_x^a$ and one assumes a motion equation \begin{equation}\label{4.5} \partial_tS+H\left(f,\frac{\delta S}{\delta f},t\right)=0 \end{equation} The equations of motion here are then \begin{equation}\label{4.6} \partial_tP=\frac{\Delta\tilde{H}}{\Delta S};\,\,\partial_tS=-\frac{\Delta\tilde{H}}{\Delta P} \end{equation} where $(\bullet\bullet\bullet)\,\,\tilde{H}(P,S,t)=<H>=\int{\mathfrak D}f PH(f,(\delta S/\delta f),t)$. 
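A finite-dimensional illustration of the integration by parts formula (4.3) (our sketch, with the translation-invariant functional measure ${\mathfrak D}f$ replaced by Lebesgue measure $dx$ in one dimension, so no boundary terms survive for decaying $PF$):

```python
import math

# 1-D analogue of (4.3): for a probability density P and a decaying smooth F,
#   int P (dF/dx) dx = - int (dP/dx) F dx,
# since int d/dx (P F) dx = 0 when P F vanishes at the ends of the range.
def P(x):    # standard Gaussian probability density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def dP(x):
    return -x * P(x)

def F(x):    # an arbitrary smooth decaying "functional"
    return math.sin(x) * math.exp(-x * x / 4)

def dF(x):
    return (math.cos(x) - 0.5 * x * math.sin(x)) * math.exp(-x * x / 4)

N, a, b = 4000, -12.0, 12.0
h = (b - a) / N
xs = [a + i * h for i in range(N + 1)]
lhs = sum(P(x) * dF(x) for x in xs) * h
rhs = -sum(dP(x) * F(x) for x in xs) * h
assert abs(lhs - rhs) < 1e-10   # integration by parts with no boundary term
```

The translation invariance of the measure plays the same role here as $\int{\mathfrak D}f\equiv\int{\mathfrak D}f'$ does in (4.2).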
The variational theory here involves functionals $I[F]=\int {\mathfrak D} f\,\xi(F,\delta F/\delta f)$ and one can write \begin{equation}\label{4.7} \Delta I=I[F+\Delta F]-I[F]=\int{\mathfrak D}f\left[\frac{\partial\xi}{\partial F}\Delta F+\int dx\left(\frac {\partial\xi}{\partial(\delta F/\delta f_x)}\right)\frac{\delta (\Delta F)}{\delta f_x}\right]= \end{equation} $$=\int{\mathfrak D} f\left[\frac{\partial\xi}{\partial F}-\int dx\frac{\delta}{\delta f_x}\left(\frac {\partial\xi}{\partial(\delta F/\delta f_x)}\right)\right]\Delta F+$$ $$+\int dx\int{\mathfrak D}f\frac{\delta}{\delta f_x}\left[\left(\frac{\partial\xi}{\partial(\delta F/\delta f_x)}\right)\Delta F\right]$$ Assuming the term $\int{\mathfrak D}f[\,\,\,]\Delta F$ is finite the last integral vanishes and one obtains $(\blacklozenge\bl\blacklozenge)\,\,\Delta I=\int {\mathfrak D}f(\Delta I/\Delta F)\Delta F$, thus defining a variational derivative \begin{equation}\label{4.8} \frac{\Delta I}{\Delta F}=\frac{\partial\xi}{\partial F}-\int dx\frac{\delta}{\delta f_x}\left(\frac {\partial\xi}{\partial(\delta F/\delta f_x)}\right) \end{equation} In the Hamiltonian theory one can work with a generating function S such that $(\bigstar\bgs\bigstar)\,\,g=\delta S/\delta f$ and $\partial_tS+H(f,\delta S/\delta f,t)=0$ (HJ equation) and solving this is equivalent to $\partial_tf=\delta H/\delta g$ and $\partial_tg=-\delta H/\delta f$ (cf. \cite{h2}). Once S is specified the momentum density $g$ is determined via $g=\delta S/\delta f$ and an ensemble of fields is specified by a probability density functional $P[f]$ (and not by a phase space density functional $\rho[f,g]$). In the HJ formulation one writes $(\clubsuit\clubsuit\clubsuit)\,\,V_x[f]=\partial f_x/\partial t=(\delta H/\delta g)|_{g=\delta S/\delta f}$ and hence, since probability is conserved ($\partial_t\int {\mathfrak D}f\,P=0$), the associated continuity equation is \begin{equation}\label{4.9} \partial_tP+\int dx\frac{\delta}{\delta f_x}[PV_x]=0 \end{equation} provided $<V_x>$ is finite. 
\\[3mm]\indent Now after deriving (2.4) one proceeds as follows to produce a SE. The Hamiltonian formulation gives $(\spadesuit\spadesuit\spadesuit)\,\,\partial_tP=\Delta\tilde{H}/\Delta S$ and $\partial_tS=-\Delta\tilde{H}/\Delta P$ where the ensemble Hamiltonian is \begin{equation}\label{4.10} \tilde{H}=\tilde{H}[P,S,t]=<H>=\int{\mathfrak D}f PH[f,\delta S/\delta f,t] \end{equation} where P and S are conjugate variables. The equations $(\spadesuit\spadesuit\spadesuit)$ arise from $\Delta\tilde{A}=0$ where $\tilde{A}=\int dt[-\tilde{H}+\int {\mathfrak D}fS\partial_tP]$. One specializes here to quadratic Hamiltonian functions \begin{equation}\label{4.11} H_c[f,g,t]=\sum_{a,b}\int dx\, K_x^{ab}[f]g_x^ag_x^b+V[f] \end{equation} and to this is added a term as in (2.4) (which does not depend on S) to get $\tilde{H}$. Hence from $(\spadesuit\spadesuit\spadesuit)$ with $\partial_tf_x=\delta H_c/\delta g_x$ one obtains following (4.9) \begin{equation}\label{4.12} \partial_tP+\int dx\frac{\delta}{\delta f_x}\left[P\frac{\delta H}{\delta g_x}\right]_{g=\delta S/\delta f}=0 \end{equation} (cf. (4.8)). The other term in $\tilde{H}$ is simply \begin{equation}\label{4.13} (\hbar^2/4)\int {\mathfrak D}f\int dx\, PK_x^{ab}(\delta P/\delta f_x^a)(\delta P/\delta f_x^b)(1/P^2) \end{equation} and this provides a contribution to the HJ equation via $\partial_tS=-\Delta\tilde{H}/\Delta P$ which will have the form \begin{equation}\label{4.14} Q=-\hbar^2P^{-1/2}\int dx\frac{\delta}{\delta f_x^a}\left(K_x^{ab}\frac{\delta P^{1/2}} {\delta f_x^b}\right) \end{equation} corresponding to (3.10) (note $K\sim G/2$ for $N=1$). We note further then from (3.12) \begin{equation}\label{4.15} Q\sim \frac{\hbar^2}{2}\int dx G_{ijk\ell}\frac{\delta P^{1/2}}{\delta h_{ij}}\frac{\delta P^{1/2}} {\delta h_{k\ell}}\sim\frac{\hbar^2}{8}\int dx G_{ijk\ell}\frac{1}{P}\frac{\delta P} {\delta h_{ij}}\frac{\delta P}{\delta h_{k\ell}} \end{equation} as in (3.3). 
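As a check on the coefficients relating (4.15) to (3.3) (our remark, not in the original): since the chain rule gives

```latex
\frac{\delta P^{1/2}}{\delta h_{ij}}=\frac{1}{2P^{1/2}}\,\frac{\delta P}{\delta h_{ij}}
\quad\Longrightarrow\quad
\frac{\hbar^2}{2}\,G_{ijk\ell}\,\frac{\delta P^{1/2}}{\delta h_{ij}}\,\frac{\delta P^{1/2}}{\delta h_{k\ell}}
=\frac{\hbar^2}{8}\,G_{ijk\ell}\,\frac{1}{P}\,\frac{\delta P}{\delta h_{ij}}\,\frac{\delta P}{\delta h_{k\ell}}
```

and with $\hbar=2\sqrt{c}$, i.e. $c=\hbar^2/4$, the right hand side is exactly the $\frac{c}{2}G_{ijk\ell}\frac{1}{P}\delta P\,\delta P$ Fisher integrand appearing in (3.3).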
Hence Theorem 3.1 is established under the hypotheses indicated concerning ${\mathfrak D}f$ etc. \section{ENTROPY AND FISHER INFORMATION} \renewcommand{\theequation}{5.\arabic{equation}} \setcounter{equation}{0} We recall first (cf. \cite{b2,c1,c8}) that the relation between the SE and the quantum potential (QP) is not 1-1. The QP Q depends on the wave function $\psi=Rexp(iS/\hbar)$ via $Q=-(\hbar^2/2m)(\Delta R/R)$ for the SE and thus the solution of a quantum HJ equation, involving S and R (via Q), requires the companion ``continuity'' equation to determine S and R (and thence $\psi$). There is some lack of uniqueness since Q determines R only up to the non-uniqueness of solutions of $\Delta R+(2m/\hbar^2)QR=0$ and even then the HJ equation $S_t+\cdots=0$ could introduce still another arbitrary function (cf. \cite{c1,c8}). Thus to indicate precisely what is said in Theorems 2.1 and 3.1 we rephrase this in the form \begin{theorem} In Theorem 2.1 we see that given a SE described via a probability distribution $P\,\,(=|\psi|^2)$ one can identify this equation as a quantum model arising from a classical Hamiltonian $\tilde{H}_{cl}$ perturbed by a Fisher information term as in (2.4). Thus the quantization involves an information content with entropy significance (cf. here \cite{c2,o1} for entropy connections). This suggests that any quantization of $\tilde{H}_{cl}$ arises (or can arise) through momentum perturbations related to Fisher information and it also suggests that $P=|\psi|^2$ (with $\int Pd^3x=1$) should be deemed a requirement for any solution $\psi$ of the related SE (note $\int Pd^3x =1$ eliminates many putative counterexamples). Thus once P is specified as a probability distribution for a wave function $\psi=\sqrt{P}exp(iS/\hbar)$ arising from a SE corresponding to a quantization of $\tilde{H}_{cl}$, then Q can be expressed via Fisher information. 
Similarly given Q as a Fisher information perturbation of $\tilde{H}_{cl}$ (arising from momentum fluctuations involving P as in (2.4)) there is a unique wave function $\psi=\sqrt{P}exp(iS/\hbar)$ satisfying the corresponding SE. \end{theorem} \begin{theorem} For Theorem 3.1 let us assume there exists a suitable ${\mathfrak D}f$ as in Section 4, which is a measure in the (super)space of fields $h$. Then there is an integration by parts formula (4.3) which removes the need for considering surface terms in integrals $\int d^4x$ (cf. \cite{d1} for cautionary remarks about Green's theorem, etc.). Consequently given a WDW equation of the form (3.4) with corresponding Q as in (3.10) (and $\psi= \sqrt{P}exp(iS/\hbar)$), one can show that the equation can be modelled on a perturbation of a classical $\tilde{H}_c$ via a Fisher information type perturbation as in (3.3) (cf. here \cite{c1,f1,f2}). Here P represents a probability density of fields $h_{ij}$ which determine $G_{ijk\ell}$ (and V incidentally) and the very existence of a quantum equation (i.e. WDW) seems to require entropy type input via Fisher information fluctuation of fields. This suggests that quantum gravity requires a statistical spacetime (an idea that has appeared before - cf. \cite{c1}). \end{theorem} \indent We sketch now some material from \cite{c3,c17} supporting the idea of a statistical geometrodynamics (SGD). Here one builds a model of SGD based on (i) positing that the geometry of space is of statistical origin and is explained in terms of the distinguishability Fisher-Rao (FR) metric and (ii) assuming the dynamics of the geometry is derived solely from principles of inference. There is no external time but an intrinsic one \`a la \cite{b12}. A scale factor $\sigma(x)$ is required to assign a Riemannian geometry and it is conjectured that it can be chosen so that the evolving geometry of space sweeps out a 4-D spacetime. 
The procedure defines only a conformal geometry but that is entirely appropriate d'apr\`es \cite{y2}. One uses the FR metric in two ways, one to distinguish neighboring points and the other to distinguish successive states. Consider then a ``cloud'' of dust with coordinate values $y^i\,\,(i=1,2,3)$ and estimates $x^i$, with $p(y|x)dy$ the probability that the particle labeled $x^i$ should have been labeled $y^i$ (the FR metric encodes the use of probability distributions instead of structureless points). One writes \begin{equation}\label{5.1} \frac{p(y|x+dx)-p(y|x)}{p(y|x)}=\frac{\partial log[p(y|x)]}{\partial x^i}dx^i \end{equation} \begin{equation}\label{5.2} d\lambda^2=\int d^3yp(y|x)\frac{\partial log[p(y|x)]} {\partial x^i}\frac{\partial log[p(y|x)]}{\partial x^j}dx^idx^j=\gamma_{ij}dx^idx^j \end{equation} and $d\lambda^2=0\iff dx^i=0$. The FR metric $\gamma_{ij}$ is the only local Riemannian metric reflecting the underlying statistical nature of the manifold of distributions $p(y|x)$ and a scale factor $\sigma$ giving a metric $g_{ij}(x)=\sigma(x)\gamma_{ij}(x)$ is needed for a Riemannian metric (cf. \cite{c3,c17}). Also the metric $d\lambda^2$ is related to the entropy of $p(y|x+dx)$ relative to $p(y|x)$, namely \begin{equation}\label{5.3} S[p(y|x+dx)|p(y|x)]=-\int d^3yp(y|x+dx)log\frac{p(y|x+dx)}{p(y|x)}=-\frac{1}{2}d\lambda^2 \end{equation} and maximizing the relative entropy S is equivalent to minimizing $d\lambda^2$. One thinks of $d\lambda$ as a spatial distance in specifying that the reason that particles at $x$ and $x+dx$ are considered close is because they are difficult to distinguish. To assign an explicit $p(y|x)$ one assumes the relevant information is given via $<y^i>=x^i$ and the covariance matrix $<(y^i-x^i)(y^j-x^j)>=C^{ij}(x)$; this leads to \begin{equation}\label{5.4} p(y|x)=\frac{C^{1/2}}{(2\pi)^{3/2}}exp\left[-\frac{1}{2}C_{ij}(y^i-x^i)(y^j-x^j)\right] \end{equation} where $C^{ik}C_{kj}=\delta^i_j$ and $C=det(C_{ij})$. 
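As a sanity check (ours, not in the original) on (5.2) and on the identification $\gamma_{ij}(x)=C_{ij}(x)$ made after (5.5), the one-dimensional version of the Gaussian (5.4) gives $\gamma=C$:

```python
import math

# 1-D version of (5.4): p(y|x) = sqrt(C/2pi) exp(-C (y-x)^2 / 2),
# with C the inverse variance.  The FR metric (5.2) in the estimate x
# should come out gamma = C.  (Toy numerical illustration.)
C = 2.5

def p(y, x):
    return math.sqrt(C / (2 * math.pi)) * math.exp(-C * (y - x) ** 2 / 2)

def dlogp_dx(y, x):      # d/dx log p(y|x) = C (y - x)
    return C * (y - x)

x0 = 0.3
N, a, b = 4000, x0 - 12.0, x0 + 12.0
h = (b - a) / N
gamma = sum(p(a + i * h, x0) * dlogp_dx(a + i * h, x0) ** 2
            for i in range(N + 1)) * h
assert abs(gamma - C) < 1e-6   # gamma_ij = C_ij in the 1-D case
```

Analytically this is just $\int p\,C^2(y-x)^2\,dy=C^2\cdot\mathrm{Var}(y)=C$, since the variance is $1/C$.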
Subsequently to each $x$ one associates a probability distribution \begin{equation}\label{5.5} p(y|x,\gamma)=\frac{\gamma^{1/2}(x)}{(2\pi)^{3/2}}exp\left[-\frac{1}{2}\gamma_{ij} (x)(y^i-x^i)(y^j-x^j)\right] \end{equation} where $\gamma_{ij}(x)=C_{ij}(x)$ (extreme curvature situations are avoided here). One deals with a conformal geometry described via $\gamma_{ij}$ and a scale factor $\sigma(x)$ will be needed to compare uncertainties at different points; the choice of $\sigma$ should then be based on making motion ``simple''. \\[3mm]\indent Thus define a macrostate via \begin{equation}\label{5.6} P[y|\gamma]=\prod_xp(y(x)|x,\gamma_{ij}(x))= \end{equation} $$=\left[\prod_x\frac{\gamma^{1/2}(x)}{(2\pi)^{3/2}} \right]exp\left[-\frac{1}{2}\sum_x\gamma_{ij}(x)(y^i-x^i)(y^j-x^j)\right]$$ Once a dust particle in an earlier state $\gamma$ is identified with the label $x$ one assumes that this particle can be assigned the same label $x$ as it evolves into the later state $\gamma+\Delta\gamma$ (equilocal comoving coordinates). Then the change between $P[y|\gamma+\Delta\gamma]$ and $P[y|\gamma]$ is denoted by $\Delta L$ and is measured via their relative entropy (this is a form of Kullback-Leibler entropy - cf. 
\cite{c1}) \begin{equation}\label{5.7} S[\gamma+\Delta\gamma|\gamma]=-\int\left(\prod_xdy(x)\right)P[y|\gamma+\Delta\gamma]log\frac{P[y|\gamma+ \Delta\gamma]}{P[y|\gamma]}=-\frac{1}{2}\Delta L^2 \end{equation} Since $P[y|\gamma]$ and $P[y|\gamma+\Delta\gamma]$ are products one can write \begin{equation}\label{5.8} S[\gamma+\Delta\gamma|\gamma]=\sum_xS[\gamma(x)+\Delta\gamma(x)|\gamma(x)]= \end{equation} $$=-\frac{1}{2}\sum_x\Delta\ell^2(x); \,\,\Delta\ell^2(x)=g^{ijk\ell}\Delta\gamma_{ij}(x)\Delta\gamma_{k\ell}(x)$$ where, using \eqref{5.5} \begin{equation}\label{5.9} g^{ijk\ell}=\int d^3yp(y|x,\gamma)\frac{\partial log[p(y|x,\gamma)]}{\partial\gamma_{ij}} \frac{\partial log[p(y|x,\gamma)]}{\partial \gamma_{k\ell}}= \end{equation} $$=\frac{1}{4}\left(\gamma^{ik}\gamma^{j\ell}+\gamma^{i\ell}\gamma^{jk}\right)$$ Then $\Delta L^2=\sum_x\Delta\ell^2(x)$ can be written as an integral if we note that the density of distinguishable distributions is $\gamma^{1/2}$. Thus the number of distinguishable distributions, or distinguishable points in the interval $dx$ is $dx\gamma^{1/2}$ ($dx\sim d^3x$) and one has \begin{equation}\label{5.10} \Delta L^2=\int dx\gamma^{1/2}\Delta\ell^2=\int dx\gamma^{1/2}g^{ijk\ell}\Delta\gamma_{ij} \Delta\gamma_{k\ell} \end{equation} Thus the effective number of distinguishable points in the interval $dx$ is finite (due to the intrinsic fuzziness of space). 
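The contraction in (5.9) can be verified in the one-component case (our illustration): with a single metric component $\gamma$ the formula predicts $g=\frac14(\gamma^{-1}\gamma^{-1}+\gamma^{-1}\gamma^{-1})=1/(2\gamma^2)$:

```python
import math

# 1-D sanity check of (5.9): p(y|gamma) = sqrt(gamma/2pi) exp(-gamma y^2/2),
# and the Fisher metric in the parameter gamma should equal 1/(2 gamma^2).
g0 = 1.7

def p(y):
    return math.sqrt(g0 / (2 * math.pi)) * math.exp(-g0 * y * y / 2)

def dlogp_dgamma(y):     # d/dgamma log p = 1/(2 gamma) - y^2/2
    return 1.0 / (2 * g0) - y * y / 2

N, a, b = 4000, -15.0, 15.0
h = (b - a) / N
g_fisher = sum(p(a + i * h) * dlogp_dgamma(a + i * h) ** 2
               for i in range(N + 1)) * h
assert abs(g_fisher - 1.0 / (2 * g0 * g0)) < 1e-6
```

Analytically, using $<y^2>=1/\gamma$ and $<y^4>=3/\gamma^2$, the expectation is $\frac{1}{4\gamma^2}-\frac{1}{2\gamma^2}+\frac{3}{4\gamma^2}=\frac{1}{2\gamma^2}$, in agreement with the symmetrized formula.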
Now to describe the change $\Delta\gamma_{ij}(x)$ one introduces an arbitrary time parameter $t$ along a trajectory \begin{equation}\label{5.11} \Delta\gamma_{ij}=\gamma_{ij}(t+\Delta t,x)-\gamma_{ij}(t,x)=\partial_t\gamma_{ij}\Delta t \end{equation} Thus $\partial_t\gamma_{ij}$ is the ``velocity" of the metric and \eqref{5.10} becomes \begin{equation}\label{5.12} \Delta L^2=\int dx\gamma^{1/2}g^{ijk\ell}\partial_t\gamma_{ij}\partial_t\gamma_{k\ell}\Delta t^2 \end{equation} \\[3mm]\indent Now go to an arbitrary coordinate frame where equilocal points at $t$ and $t+\Delta t$ have coordinates $x^i$ and $\tilde{x}^i=x^i-\beta^i(x)\Delta t$. Then the metric at $t+\Delta t$ transforms into $\tilde{\gamma}_{ij}$ with \begin{equation}\label{5.13} \gamma_{ij}(t+\Delta t,x)=\tilde{\gamma}_{ij}(t+\Delta t,x)-(\nabla_i\beta_j+\nabla_j\beta_i)\Delta t \end{equation} where $\nabla_i\beta_j=\partial_i\beta_j-\Gamma^k_{ij}\beta_k$ is the covariant derivative associated to the metric $\gamma_{ij}$. In the new frame, setting $\tilde{\gamma}_{ij}(t+\Delta t,x)-\gamma_{ij} (t,x)=\Delta\gamma_{ij}$ one has \begin{equation}\label{5.14} \Delta_{\beta}\gamma_{ij}=\Delta\gamma_{ij}-(\nabla_i\beta_j+\nabla_j\beta_i)\Delta t\sim \Delta_{\beta}\gamma_{ij}=\dot{\gamma}_{ij}\Delta t \end{equation} $$\dot{\gamma}_{ij}=\partial_t\gamma_{ij}-\nabla_i\beta_j-\nabla_j\beta_i$$ leading to \begin{equation}\label{5.15} \Delta_{\beta}L^2=\int dx\gamma^{1/2}g^{ijk\ell}\dot{\gamma}_{ij}\dot{\gamma}_{k\ell}\Delta t^2 \end{equation} \indent Next one addresses the problem of specifying the best matching criterion, i.e. what choice of $\beta^i$ provides the best equilocality match. This is treated as a problem in inference and asks for minimum $\Delta_{\beta}L^2$ over $\beta$. 
Hence one gets \begin{equation}\label{5.16} \delta (\Delta_{\beta}L^2)=2\int dx\gamma^{1/2}g^{ijk\ell}\dot{\gamma}_{ij}\,\delta\dot{\gamma}_{k\ell}\,\Delta t^2 =0\Rightarrow \end{equation} $$\Rightarrow \nabla_{\ell}(2g^{ijk\ell}\dot{\gamma}_{ij})=0\equiv \nabla_{\ell}\dot{\gamma}^{k\ell}=0$$ (using \eqref{5.9} and $\dot{\gamma}^{k\ell}=\partial_t\gamma^{k\ell}+\nabla^k\beta^{\ell}+\nabla^{\ell} \beta^k$). These equations determine the shifts $\beta^i$ giving the best matching and equilocality for the geometry $\gamma_{ij}$ and alternatively they could be considered as constraints on the allowed change $\Delta\gamma_{ij}=\partial_t\gamma_{ij}\Delta t$ for given shifts $\beta^i$. In describing a putative entropic dynamics one assumes now e.g. continuous trajectories with each factor in $P[y|\gamma]$ evolving continuously through intermediate states labeled via $\omega(x)=\omega\zeta(x)$ where $\zeta(x)$ is a fixed positive function and $0<\omega<\infty$ is a variable parameter (some kind of many fingered time \`a la Schwinger, Tomonaga, Wheeler, et al). It is suggested that the dynamics be determined by an action \begin{equation}\label{5.17} J=\int_{t_i}^{t_f}dt\int dx\gamma^{1/2}[g^{ijk\ell}\dot{\gamma}_{ij}\dot{\gamma}_{k\ell}]^{1/2} \end{equation} The similarities to ``standard'' geometrodynamics are striking. \subsection{INFORMATION DYNAMICS} We go here to \cite{c5,c6} and consider the idea of introducing some kind of dynamics in a reasoning process. 
One looks at the Fisher metric defined by \begin{equation}\label{7.39} g_{\mu\nu}=\int_Xd^4x\, p_{\theta}(x)\left(\frac{1}{p_{\theta}(x)}\frac{\partial p_{\theta}(x)}{\partial\theta^{\mu}}\right) \left(\frac{1}{p_{\theta}(x)}\frac{\partial p_{\theta}(x)}{\partial\theta^{\nu}}\right) \end{equation} and constructs a Riemannian geometry via \begin{equation}\label{7.40} \Gamma_{\lambda\mu}^{\sigma}=\frac{1}{2}g^{\nu\sigma}\left(\frac{\partial g_{\mu\nu}}{\partial\theta^{\lambda}}+ \frac{\partial g_{\lambda\nu}}{\partial\theta^{\mu}}-\frac{\partial g_{\mu\lambda}}{\partial \theta^{\nu}}\right); \end{equation} $$R^{\lambda}_{\mu\nu\kappa}=\frac{\partial\Gamma^{\lambda}_{\mu\nu}}{\partial\theta^{\kappa}}-\frac{\partial\Gamma^{\lambda}_{\mu\kappa}} {\partial\theta^{\nu}}+\Gamma^{\eta}_{\mu\nu}\Gamma^{\lambda}_{\kappa\eta}-\Gamma^{\eta}_{\mu\kappa}\Gamma^{\lambda}_{\nu\eta}$$ Then the Ricci tensor is $R_{\mu\kappa}=R^{\lambda}_{\mu\lambda\kappa}$ and the curvature scalar is $R=g^{\mu\kappa}R_{\mu\kappa}$. The dynamics associated with this metric can then be described via functionals \begin{equation}\label{7.41} J[g_{\mu\nu}]=-\frac{1}{16\pi}\int\sqrt{g(\theta)}R(\theta)d^4\theta \end{equation} leading upon variation in $g_{\mu\nu}$ to equations \begin{equation}\label{7.42} R^{\mu\nu}(\theta) -\frac{1}{2}g^{\mu\nu}(\theta)R(\theta)=0 \end{equation} Contracting with $g_{\mu\nu}$ gives $R-2R=-R=0$, i.e. $R=0$, and hence the vacuum Einstein equations $R^{\mu\nu}(\theta)=0$. J is also invariant under $\theta\to\theta+\epsilon(\theta)$ and variation here plus contraction leads to a contracted Bianchi identity. Constraints can be built in by adding terms $(1/2)\int\sqrt{g} T^{\mu\nu}g_{\mu\nu}d^4\theta$ to $J[g_{\mu\nu}]$. If one fixes a given probability distribution $p(x)$ with variables $\theta^{\mu}$ attached to give $p_{\theta}(x)$ then this could conceivably describe some gravitational metric based on quantum fluctuations for example. 
As examples a Euclidean metric is produced in 3-space via Gaussian $p(x)$ and complex Gaussians will give a Lorentz metric in 4-space. \section{OTHER FORMS OF WDW} \renewcommand{\theequation}{6.\arabic{equation}} \setcounter{equation}{0} In general there are many approaches to WDW and we cite in particular \cite{a1,a2,a7,a3,b12,b81,c1,c2,c7, d1,g4,g2,g3,h4,k3,k5,k2,k1,k4,k6,k17,m1,n1,n2,n3,p1,p2,p3,p4,r2,r3,r4,s3,s1,s4,s14,s13, s2,t1,w1,w2,w3,y1,y2}. In particular (for $\phi$ a matter field) the theory of \cite{p3,p4} leads to a Bohmian form \begin{equation}\label{6.1} \left\{-\hbar^2\left[\kappa G_{ijk\ell}\frac{\delta}{\delta h_{ij}}\frac{\delta}{\delta h_{k\ell}} +\frac{1}{2}h^{-1/2}\frac{\delta^2}{\delta \phi^2}\right]+V\right\}\psi(h_{ij},\phi)=0; \end{equation} $$V=h^{1/2}\left[-\kappa^{-1}({}^3R-2\Lambda)+\frac{1}{2}h^{ij}\partial_i\phi\partial_j\phi+U(\phi)\right]$$ involving (for $A^2\sim P$) \begin{equation}\label{6.2} \kappa G_{ijk\ell}\frac{\delta S}{\delta h_{ij}}\frac{\delta S}{\delta h_{k\ell}}+\frac{1}{2}h^{-1/2}\left(\frac{\delta S}{\delta \phi}\right)^2+V+Q=0; \end{equation} $$Q=-\frac{\hbar^2}{A}\left(\kappa G_{ijk\ell}\frac{\delta^2A}{\delta h_{ij}\delta h_{k\ell}}+\frac{h^{-1/2}}{2}\frac {\delta ^2A}{\delta\phi^2}\right)$$ where the unregularized expression for Q above depends on the regularization and factor ordering prescribed for the WDW equation. In addition to (6.2) one has \begin{equation}\label{6.3} \kappa G_{ijk\ell}\frac{\delta}{\delta h_{ij}}\left(A^2\frac{\delta S}{\delta h_{k\ell}}\right)+\frac{h^{-1/2}}{2} \frac{\delta}{\delta\phi}\left(A^2\frac{\delta S}{\delta\phi}\right)=0 \end{equation} Other Bohmian situations are indicated in \cite{c1} and we are preparing a survey article. \newpage
\section{Introduction} In the Internet of Things (IoT) era, humans have been increasingly removed from the surveillance loop in favor of a connected ecosystem of edge devices performing vision-based tasks \cite{ref:al2018survey}. Automatic analysis is the only viable option given the huge amount of data continuously collected from different IoT edge devices. For example, resource-constrained unmanned aerial vehicles (UAVs) or image sensors can be used as surveillance devices for detecting forest fires~\cite{ref:firedrone} or infrastructure damages after natural disasters~\cite{ref:koch2015review}. In these scenarios, autonomous UAVs or edge devices collect data that may be sent to other edge devices or to the cloud for automated machine learning (ML) based analysis. According to the 2019 Embedded Markets Study \cite{EESurvey}, 43\% of IoT applications incorporating advanced technologies are using embedded vision and 32\% are using machine learning. However, using these IoT devices often requires meeting the tight storage, energy and/or communication bandwidth constraints, while maintaining the effectiveness of surveillance. 
\input{fig_latex/iceTeaFlow.tex} Image compression can address these needs in edge devices that operate in constrained environments and at the same time reduce network traffic \cite{azar2019energy}. Compressed images are easier to store and more energy efficient to transmit long-range. An ideal image compression technique for IoT applications should: \begin{itemize} \item Optimize for machine-to-machine communication and machine-based interpretation in diverse IoT applications (e.g., pattern recognition or feature extraction on the image). Visual perception by human users should be given less importance. \item Aim for minimizing the communication bandwidth, as IoT is creating 1000X more dense networking requirements \cite{AiIoT1000,iotSpectrumSurvey}, often driven by image/video communication. \item Minimize the overall energy and space requirements on resource-constrained edge devices. \end{itemize} The standard image compression methods, such as JPEG \cite{ref:jpeg}, JPEG 2000~\cite{ref:jpeg2000}, and WebP~\cite{ref:webp}, are tailored to maintain good human-perceivable visual quality and were not designed with IoT applications in mind. Properties of IoT applications which can be leveraged to obtain increased compression are as follows: \begin{itemize} \item The image domain is biased based on the application and on each specific edge image sensor device. The bias can be divided into two categories: (1) color distribution bias and (2) common pattern bias. We define patterns as segment outlines in an image. This information can be learned and utilized. \item Depending on the application, specific entities of the images may hold greater value with respect to the rest of the image. Such applications, therefore, have a region of interest bias which can be learned and utilized. \item Coarse-grained ML tasks prevalent in IoT applications can tolerate extreme levels of compression. 
\end{itemize} Building on these observations, we propose MAGIC, a \textbf{M}achine le\textbf{A}rning \textbf{G}uided \textbf{I}mage \textbf{C}ompression framework for achieving extreme levels of image compression in IoT systems while maintaining sufficient accuracy for coarse-grained AI tasks. MAGIC consists of three major steps: (1) knowledge acquisition, (2) encoding, and (3) decoding. During knowledge acquisition, application- and domain-specific information such as color distribution, common pattern bias, and region of interest bias is extracted in the form of (1) a color quantization dictionary, (2) a common pattern dictionary, and (3) a machine learning model that can intelligently represent image segments as a set of common pattern dictionary entries. During the encoding stage, an image is segmented into non-overlapping triangles using an efficient Delaunay triangulation (DT) method. The ML model, which we call the pattern prediction model, and the common pattern dictionary from the knowledge acquisition stage are used to guide the image segmentation process. Finally, colors are assigned by averaging the pixel colors within each triangle and quantizing them based on the color quantization dictionary, which is constructed by analyzing the color distribution of the domain using k-means. The decode phase operates similarly by reconstructing the segments using DT and assigning colors from the color quantization dictionary. We have implemented MAGIC as a fully configurable framework that can be used to compress images from a given dataset. We evaluate MAGIC extensively using two publicly available datasets, fire detection \cite{ref:fireDS} and building crack detection \cite{ref:crackDS}, and observe promising performance. For the building crack detection dataset, at a 1.06\% accuracy loss, we obtained 22.09x more compression with respect to the source images.
For the fire detection dataset, at a 2.99\% accuracy loss, we obtained 42.65x more compression with respect to the source images. We show up to $\sim$167x more compression than the source at a higher accuracy loss ($\sim$13\%). Furthermore, we analyze the variability in compressed image size and the energy requirements of MAGIC. The rest of the paper is organized as follows: Section~\ref{sec:related} discusses the background of vision in IoT and related work in compression. Section~\ref{sec:motivation} provides the motivation for this work, and Section~\ref{sec:methodology} introduces the proposed methodology. Section~\ref{sec:results} presents the evaluation and comparison of MAGIC with JPEG 2000 and WebP. Section~\ref{sec:discussion} discusses possible extensions and improvements. Section~\ref{sec:conclusion} concludes the paper. \section{Background \& Related Works}\label{sec:related} In this section, we give a brief introduction to vision tasks in IoT applications and discuss state-of-the-art compression techniques. \subsection{Computer Vision in IoT Applications} IoT applications are gaining popularity in several spheres such as industry, home, healthcare, retail, transport, and even security \cite{iotFaceDetection}. Many applications in these domains involve capturing images at the edge device and transmitting them to the cloud or other edge devices for analysis. For example: \begin{itemize} \item UAV-based fire detection techniques have been proposed which use optical remote sensing \cite{arialFireDetection}. \item Detecting infrastructure damage in a post-disaster scenario using UAV imaging is being investigated in \cite{UAVRoofHole_IoT, UAV_earthquake_IoT}. \item IoT image sensors and computer vision techniques are widely used for flood monitoring, warning, and damage mitigation \cite{floodIoT}. \end{itemize} Image sensors and intelligent data analysis are two key aspects of surveillance-based IoT applications.
Additionally, security-oriented IoT surveillance applications actively rely on computer vision to detect anomalies \cite{iotFaceDetection}. \subsection{Need for Image Compression in IoT Vision} Different IoT applications require sensing image data at the edge and transmitting it to other edge devices or the cloud for analysis. These edge devices operate under strict space, energy, and bandwidth constraints. Compressing images not only directly reduces the space and network traffic requirements but can also reduce energy consumption. IoT-based communication is expected to reach 50\% of network traffic by 2025 \cite{AiIoT1000}. For example, a typical 4G network is designed to support thousands of devices' worth of traffic in a region. However, with the increase in the number of IoT devices being connected to the network, it may become impossible to serve all devices simultaneously and efficiently. Therefore, compression at the edge can help reduce network stress. The energy required to transmit data increases with distance \cite{sadler2006data}. For long-range transmission devices such as the MaxStream XTend (at 500 mW transmit power), the energy required to transmit one byte can exceed that of 1 million clock cycles' worth of computation \cite{sadler2006data}. Hence, even with the cost of additional computation, compression can ultimately lead to less overall energy expenditure. For all these reasons, image compression is a vital step in any IoT vision application. \input{fig_latex/color_Dist} \input{fig_latex/crackSegExample} \subsection{State-of-the-art Image Compression Techniques} \input{tables/comparison} The image compression techniques proposed over the years can be divided primarily into two categories: (1) lossless compression techniques and (2) lossy compression techniques.
Lossless compression techniques such as arithmetic coding (\cite{arithmeticCoding_1,arithmeticCoding_2}) and Huffman coding (\cite{huffman}) aim to completely preserve the content under compression, but generally at the cost of significant mathematical computation \cite{imageCompSurvey}. Coarse-grained ML tasks do not need such fidelity; therefore, lossy compression techniques are preferred. As the name suggests, lossy compression allows for variable data loss to achieve higher rates of compression. The data lost is generally not perceivable by humans. Lossy compression techniques include JPEG \cite{ref:jpeg}, JPEG 2000 \cite{ref:jpeg2000}, and WebP \cite{ref:webp}, which perform quantization in the frequency domain using techniques such as the discrete wavelet transform and the discrete cosine transform. Another class of lossy compression performs quantization in the spatial domain, such as the triangulation-based image compression most recently proposed in \cite{ref:marwood2018representing}. This compression technique relies on the DT of a set of points in the image matrix to construct the image out of non-overlapping triangles during both encoding and decoding. In this way, triangulation allows for sending minimal amounts of information at the cost of slightly more encoding/decoding time. While we also use DT, our compression algorithm and compression goals are vastly different from those of \cite{ref:marwood2018representing}. Machine learning has been used to further improve compression \cite{ref:rippel2017real, ref:toderici2017full, ref:Li_2018_CVPR, ref:balle2018variational}. In all these works, the goal is to maximize visual quality metrics (PSNR, MS-SSIM, and artifact removal) while minimizing bits per pixel (BPP). However, large, complex neural networks are not ideal for use in edge devices. More recently, researchers have targeted image compression optimized for ML accuracy rather than human-perceived quality. Liu et al.
propose DeepN-JPEG \cite{ref:deepnjpeg}, which modifies JPEG's quantization table for deep neural network (DNN) accuracy over human visual quality. DeepN-JPEG targets generalized AI models and achieves only 3.5x compression compared to source images, whereas our approach can achieve up to 42.65x more compression than the source. Similarly, in \cite{ref:medical}, Liu et al. modify JPEG 2000 to extract frequencies relevant for neural network (NN) based segmentation of 3D medical images. Weber et al. develop a recurrent neural network (RNN) based compression scheme with the aim of maximizing the accuracy of generalized classifiers and investigate the accuracies of several classifiers on images compressed for human perception versus machine perception \cite{ref:weber2019lossy}. In Table~\ref{comparison}, we qualitatively compare MAGIC with relevant state-of-the-art image compression techniques. MAGIC distinguishes itself as the only image compression technique targeted at coarse-grained ML vision tasks in IoT applications. The compression range of MAGIC is higher than that of other techniques because it is designed to leverage domain knowledge. \section{Motivation} \label{sec:motivation} Most IoT applications designed to perform a particular automated vision task will have some bias in the images being captured and analyzed. The amount of bias will depend on the application and the sensory edge device in use. For a given application, the images will have (1) a pixel color distribution bias depending on the environment in which the images are captured and (2) a pattern bias due to the prevalence of certain common objects in the images. Apart from the image set bias, the IoT application may have its own bias toward certain objects and features that are relevant for the ML analysis task. \subsection{Color Distribution Bias} Image color bias will exist to some extent in any domain-specific IoT application.
Apart from the application-level color distribution bias, there may be bias attributable to the physical location of the device. Such location bias is more easily observed for stationary devices. Harnessing the bias for each device separately may be beneficial, but in this paper we limit our study to the application-level image color distribution bias. We plot the pixel color distributions for the forest fire dataset \cite{ref:fireDS} and the building crack dataset \cite{ref:crackDS} in Fig.~\ref{color_Dist}. We can clearly observe that certain regions of the Red, Green, and Blue spectra are more represented than others. This bias appears even more pronounced if we consider the joint Red-Green-Blue distribution. If we take advantage of this bias by limiting the color space to one tuned for the specific application, then we can compress more. \subsection{Common Pattern Bias} \input{fig_latex/FireSegExample} The images captured and analyzed by task-specific IoT applications will have pattern (image segment outline) bias because of the nature of the objects present in the images. For a building crack detection application, the images will consist of cracked and uncracked surfaces (Fig.~\ref{CrackSegExample}), and for a forest fire surveillance application, the images will consist of trees and occasional fires (Fig.~\ref{fire_seg_example}). Just like color distribution bias, common pattern bias will exist at both the application level and the device location level. If we could capture and store these domain-specific repeating patterns in a dictionary, for example, then we could save space by storing dictionary entry indices instead of concrete data. \subsection{Region of Interest Bias} Certain objects/regions in the image may hold more importance depending on the IoT task.
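The per-channel distributions plotted in Fig.~\ref{color_Dist} can be accumulated with a few lines of code. The following is a minimal numpy sketch; the function name and bin count are illustrative, not from the paper:

```python
import numpy as np

def channel_histograms(images, bins=32):
    """Accumulate per-channel (R, G, B) intensity histograms over a set of images."""
    hists = np.zeros((3, bins), dtype=np.int64)
    edges = np.linspace(0, 256, bins + 1)
    for img in images:                      # img: H x W x 3 uint8 array
        for c in range(3):
            h, _ = np.histogram(img[..., c], bins=edges)
            hists[c] += h
    return hists

# Toy example: one mostly-green 4x4 image skews the Green histogram.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 1] = 200                           # strong green component
hists = channel_histograms([img])
```

Applied to a whole learning dataset, peaks in these histograms reveal the over-represented color regions that the color quantization dictionary later exploits.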
If the image can be compressed based on application-specific requirements, then we can save important regions at higher quality while sacrificing other regions. For example, assume an IoT application designed to detect green cars among green and blue cars. Using common pattern bias knowledge alone, we cannot distinguish between green and blue cars; both will be stored at the same level of quality. But with the extra region of interest bias knowledge, we can save space by learning to represent only the green cars at high quality. \section{Methodology}\label{sec:methodology} \input{algo/calibration.tex} In this section, we present our learning-guided compression technique (MAGIC) targeted at coarse-grained ML vision tasks in intelligent IoT ecosystems. Fig.~\ref{flow} illustrates the overall flow. As in any other compression technique, there is a procedure for encoding the image and a procedure for decoding it. Additionally, to take advantage of the bias present in the application domain, we propose a knowledge acquisition procedure. In this paper, we focus on the first aspect of domain knowledge learning, namely color distribution bias. The other two areas of domain knowledge (pattern bias and ROI bias) are not strictly learned: the common pattern dictionary (for segmentation bias) is statically generated, and the pattern prediction model (for ROI bias) is trained with automated supervision. However, the algorithms are implemented such that human supervision and learning in the other two domain knowledge areas can easily be included in the future. We now describe the three major steps of MAGIC in greater detail. \subsection{Knowledge Acquisition} Before compression is carried out, the knowledge acquisition procedure is used to analyze a set of sample images from the given use-case and learn common features that can be reused during compression.
This learning stage allows for more efficient image compression. To capture the application-specific domain knowledge, we use the following constructs and techniques. \subsubsection{Color Quantization Dictionary} We construct a dictionary of the most frequently occurring colors for a specific application. Colors are then represented as entries in the dictionary instead of as standard 24-bit RGB values. The number of entries in the dictionary can be controlled by the user. To construct the color dictionary, we first extract the color distribution from a set of domain-specific sample images and then apply unsupervised machine learning (k-means) to extract the colors that are strong representatives of the entire color space. The color quantization dictionary is used during the encoding and decoding phases to represent the image. Algo.~\ref{alg:calibration} describes in detail how the color quantization dictionary is constructed. \subsubsection{Common Pattern Dictionary} Compressing an image with MAGIC involves segmenting it into representative triangles using Delaunay triangulation (DT). The triangle segments are determined from the points sprayed on the 2D image plane. Hence, patterns in an image segment can be represented as a set of points in a 2D plane. The forest fire images in Fig.~\ref{fire_seg_example} illustrate this process. The common pattern dictionary is a data structure for saving the regularly occurring spray point patterns in an image segment. The patterns are indexed in the dictionary such that a higher index is associated with more complex detail. The pattern dictionary can be statically generated, to increase compression robustness across different image domains, or learned during the knowledge acquisition phase, to be more in tune with the application domain.
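To illustrate how sprayed points define the triangle segments, the sketch below triangulates a hypothetical point spray for a single 64x64 block with SciPy's Delaunay implementation (the paper does not name the DT library it uses; the corner-plus-interior spray here is illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical sprayed points on a 64x64 block: four corners plus interior points.
rng = np.random.default_rng(0)
corners = np.array([[0, 0], [0, 63], [63, 0], [63, 63]])
interior = rng.integers(1, 63, size=(8, 2))
points = np.vstack([corners, interior])

tri = Delaunay(points)   # O(M log M) triangulation of the M sprayed points
# tri.simplices is a (T, 3) array of point indices, one row per triangle.
```

The same triangulation is recomputed on the decoder side from the same points, which is why only the points (not the triangles) need to be communicated.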
\input{algo/splitTriangle} \subsubsection{Machine Learning Model for Pattern Prediction} We train a machine learning model that learns to represent the segments of an image as a set of patterns from the common pattern dictionary. Similar to other compression techniques, we operate on `blocks' of an image and must partition the image. Each block needs to be assigned a point spray pattern entry from the common pattern dictionary during encoding. The assignment can be based on how much texture detail the image block has or on the importance of the image block for a given application. MAGIC employs the trained ML model (the pattern prediction model) to assign an image block to an entry from the common pattern dictionary. Iterative heuristic-driven DT segmentation methods have time complexity $\mathcal{O}(IM\log M)$, where $I$ is the number of iterations and $M$ is the maximum number of points used for computing the DT. Our pattern prediction model can provide the points in $\mathcal{O}(1)$, followed by a single DT of complexity $\mathcal{O}(M\log M)$. Therefore, the pattern prediction model has two benefits: (1) the ML-guided assignment of an image block to a specific pattern dictionary entry is faster than determining the segmentation pattern of the image block by iterative heuristic means, and (2) the ML model can be trained to retain more detail for specific image blocks that may be important for the given visual task. \subsubsection{Knowledge Acquisition Algorithm} Before communication can start between a sender entity and a receiver entity, we must construct the above three components during the knowledge acquisition phase. The pattern prediction model must reside on the sender (encoder) side; the common pattern dictionary and the color quantization dictionary must reside on both the sender and receiver sides. Algo.~\ref{alg:calibration} defines the knowledge acquisition process used to construct these components.
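The statically generated common pattern dictionary described in the knowledge acquisition algorithm has a simple structure: entry $i$ holds exactly $i$ randomly sprayed points in a ($bDim$ x $bDim$) block. A minimal sketch, with an illustrative entry count (the evaluation uses 4096 entries):

```python
import numpy as np

def make_pattern_dict(num_entries, bdim, seed=0):
    """Statically generate a common pattern dictionary: entry i holds i random
    (row, col) points inside a bdim x bdim block. Higher indices carry more detail."""
    rng = np.random.default_rng(seed)
    return [rng.integers(0, bdim, size=(i, 2)) for i in range(num_entries)]

pat_dict = make_pattern_dict(num_entries=16, bdim=64)
```

Because both sides hold this dictionary, the encoder transmits only the entry index per block, and the decoder regenerates the identical point spray.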
We collect a set of sample images (the learning dataset) that approximately represents the nature of the images to be communicated. In line 3, the common pattern dictionary is generated. For this iteration of MAGIC, the dictionary is generated such that the entry indexed $i$ has exactly $i$ points sprayed randomly in a ($bDim$ x $bDim$) block. For each image, we construct the $pointArr$ (the set of points on the 2D image plane), which determines the segmentation. The $pointArr$ is initially populated with grid points sprayed uniformly based on the parameter $grid$ (line 6 using Algo.~\ref{alg:gridSpray}) and edge points determined by an edge detection algorithm (line 7); in our case, we use Canny edge detection. We add more points to the $pointArr$ by repeatedly splitting triangles whose standard deviation of pixel intensity is greater than $th$ (lines 10-12 using Algo.~\ref{alg:splitTriangle}). This process captures more information, but we note that it may in some cases produce unnecessary detail and ultimately less compression. Therefore, we keep at most one point in the $pointArr$ for every ($pw$ x $pw$) non-overlapping window (line 13). We then perform DT to obtain the triangle list (line 15). For each triangle in the triangle list, we obtain the average color and update the $colorFreq$, which holds the frequency of each triangle color encountered across all the images (lines 16-21). $cb$ (the number of bits for representing colors) is a user input that controls the size of the color quantization dictionary. We divide the image into blocks of dimension ($bDim$ x $bDim$) and compute the common pattern dictionary ($patDict$) entry index that best corresponds to the point spray pattern of each block (line 25). The $dictInd$ and the RGB block ($blockList[j]$) act as the label and the input data, respectively, for training our pattern prediction model (lines 26-27).
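The pruning step of line 13, which keeps at most one point per non-overlapping ($pw$ x $pw$) window, can be sketched as follows. This uses a simple first-point-wins policy; the algorithm's actual tie-breaking rule is not specified in the text:

```python
def prune_points(point_arr, pw):
    """Keep at most one point per non-overlapping (pw x pw) window.
    Points are (row, col) pairs; the first point seen in each window wins."""
    seen, kept = set(), []
    for r, c in point_arr:
        window = (r // pw, c // pw)       # which pw x pw window this point falls in
        if window not in seen:
            seen.add(window)
            kept.append((r, c))
    return kept

pts = [(0, 0), (1, 1), (0, 7), (9, 9)]
print(prune_points(pts, pw=4))   # → [(0, 0), (0, 7), (9, 9)]
```

Here (1, 1) is dropped because it shares the top-left 4x4 window with (0, 0); larger $pw$ removes more points and thus yields fewer triangles and more compression.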
We cluster the entries (weighted by their frequency) in the $colorFreq$ using the k-means algorithm \cite{scikit-learn}. The number of clusters is $2^{cb}$. The cluster representatives are assigned an index and collectively form the color quantization dictionary ($colorDict$). In this way, we employ unsupervised machine learning to leverage domain-specific color distribution information. The model training process depends on the ML model architecture selected for the domain-specific pattern prediction task. After the knowledge acquisition phase completes, the application is ready to encode (compress) and decode images. \input{algo/gridSpray} \subsection{Encoding Procedure} \input{algo/encoder.tex} Algo.~\ref{alg:encode_main} defines the image encoding process at the sender side. We divide the given image into blocks based on the dimension specified by $bDim$ (line 2). For each block, we predict the pattern dictionary entry to use with the help of the pattern prediction model (line 5). The label predicted by the ML model is divided by the input $d$, a tunable parameter that allows for dynamic image quality; higher values of $d$ are associated with higher compression rates. The predicted labels for each block are appended to the $labelsArr$ (line 6). For the label predicted for a specific block, we fetch the associated point spray pattern from the common pattern dictionary ($patDict$) and append the points to the $pointArr$ after computing their absolute positions with respect to the image (lines 8-11). The $pointArr$ is next populated with grid points sprayed uniformly based on the parameter $grid$ (line 13 using Algo.~\ref{alg:gridSpray}). We perform DT to obtain the $triangleList$ in line 14. For each triangle in the $triangleList$, we compute the average color ($avgColor$) and find its closest match ($quantColor$) in the color quantization dictionary ($colorDict$). The $quantColor$ is appended to the $colorList$.
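The frequency-weighted clustering above maps directly onto scikit-learn's KMeans, which accepts per-sample weights. A minimal sketch with a toy frequency table (the dictionary layout and helper name are illustrative, not the paper's implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_color_dict(color_freq, cb, seed=0):
    """Cluster observed triangle colors, weighted by frequency, into 2**cb
    representative colors (the color quantization dictionary)."""
    colors = np.array(list(color_freq.keys()), dtype=float)    # (n, 3) RGB rows
    weights = np.array(list(color_freq.values()), dtype=float)
    km = KMeans(n_clusters=2 ** cb, n_init=10, random_state=seed)
    km.fit(colors, sample_weight=weights)
    return km.cluster_centers_                                  # (2**cb, 3)

# Toy colorFreq: mostly reds and greens; cb = 1 gives a 2-entry dictionary.
freq = {(200, 10, 10): 50, (210, 12, 8): 40, (10, 220, 15): 60, (8, 230, 12): 45}
color_dict = build_color_dict(freq, cb=1)
```

With two well-separated color groups, the two cluster centers land near the weighted means of the red and green groups, so each triangle color can later be replaced by a 1-bit index.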
The final encoded image consists of the following, converted and packed as bits: \begin{itemize} \item $img.rows$: The number of pixel rows in the image (16~bits). \item $img.col$: The number of pixel columns in the image (16~bits). \item $grid$: The number of pixels to skip between two grid points sprayed (16~bits). \item $bDim$: The dimension of the image block to use (16~bits). \item $labelsArr$: $\log_{2}$($patDict$ size) bits for each entry. \item $colorList$: $\log_{2}$($colorDict$ size) bits for each entry. \end{itemize} The encoded image ($encImg$) is returned. \subsection{Decoding Procedure} \input{algo/decoder.tex} Algo.~\ref{alg:decode_main} defines the image decoding process at the receiver side. Based on the encoding format, $rows$, $cols$, $grid$, $bDim$, $labelArr$, and $colorList$ are extracted from the encoded image ($encImg$) in line 2. For each label in the $labelArr$, we fetch the associated point spray pattern from the pattern dictionary and append the points to the $pointArr$ after computing their absolute positions with respect to the image and the block index ($bIndex$) (lines 6-8). The $pointArr$ is next populated with grid points sprayed uniformly based on the parameter $grid$ (line 11 using Algo.~\ref{alg:gridSpray}). We perform DT to obtain the $triangleList$ in line 12. We initialize a blank image with the obtained dimensions in line 14. For each triangle in the $triangleList$, we obtain the RGB color ($trueColor$) from the color quantization dictionary using the corresponding entry in the $colorList$ (line 16). We color the pixels in $recImg$ for the given triangle using $trueColor$ (line 17). The final decoded/recovered image ($recImg$) is returned from this method. \section{Results}\label{sec:results} \input{fig_latex/CrackAcc} \input{fig_latex/fireAcc} MAGIC compression is designed to excel in autonomous, task-specific IoT applications where image analysis is performed by machine learning models.
To quantitatively analyze the effectiveness of MAGIC for IoT applications, we pick two use-cases: \begin{enumerate} \item \textbf{Forest fire surveillance}~\cite{ref:fireDS}. \item \textbf{Infrastructure analysis}~\cite{ref:crackDS}. \end{enumerate} In the next few subsections, we describe the experimental setup and compare the accuracy of MAGIC-compressed images to that of JPEG 2000 and WebP under different quality factor (QF) settings. We use ImageMagick's convert command for JPEG 2000 and WebP compression, which accepts quality factors from 1 to 100, with 1 resulting in the highest compression \cite{imageMagick}. We explore the effect of the MAGIC input parameters $pw$, $d$, and $cb$ on the rate of compression and on accuracy. Finally, we introduce a computation-to-transmission energy cutoff for analyzing the energy efficiency of MAGIC. \subsection{Experimental Setup} \input{fig_latex/EntropyFeatureExample} The neural network architecture for the domain-specific ML models is shown in Fig.~\ref{magicEntropyUltraSmallNet}. We obtain separate model weights by training with Keras \cite{chollet2015keras} for each dataset and each setting of the knowledge acquisition parameters (which control the level of compression). The input to the neural network is the flattened per-pixel local entropy feature of the 64x64 image blocks. The entropy of a pixel is defined as the number of bits required to represent the local grey-scale distribution of a defined neighbourhood \cite{scikit-image}. A higher entropy value correlates with higher diversity and higher information density. We use a neighbourhood of 5 pixels to train our models. Fig.~\ref{EntropyFeatureExample} shows a visual representation of the entropy feature of a sample image. The output of the neural network pattern prediction model is used to compute the entry in the common pattern dictionary to be assigned to the input image block.
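The per-pixel local entropy feature is available off the shelf as scikit-image's rank entropy filter; the plain-numpy sketch below shows the underlying computation on a square neighbourhood (a simplification of the paper's 5-pixel neighbourhood, for illustration only):

```python
import numpy as np

def local_entropy(gray, radius=2):
    """Per-pixel entropy (in bits) of the grey-level histogram inside a square
    neighborhood; a plain-numpy stand-in for scikit-image's rank entropy filter."""
    h, w = gray.shape
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            patch = gray[max(0, r - radius):r + radius + 1,
                         max(0, c - radius):c + radius + 1]
            _, counts = np.unique(patch, return_counts=True)
            p = counts / counts.sum()
            out[r, c] = -(p * np.log2(p)).sum()
    return out

# A flat region has zero entropy; pixels near an edge have positive entropy.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255
ent = local_entropy(img)
```

Flat blocks thus produce near-zero feature vectors, and textured blocks produce large ones, which is exactly the signal the small pattern prediction network needs to choose a dictionary entry.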
For both the building crack detection and forest fire detection tasks, we use a statically generated point spray pattern dictionary containing 4096 entries, such that entry $i$ has exactly $i$ points sprayed randomly in a 64x64 block. Hence, using an entry with a high value of $i$ is equivalent to capturing more information in the image block. \input{fig_latex/fireDS.tex} \subsection{Evaluation up to the Lowest Quality Factor} \input{fig_latex/magicEntropyUltraSmallNet} \input{tables/UltraLowComb} \subsubsection{Infrastructure Analysis} We construct two randomly sampled, disjoint sets of 2000 images each, for knowledge acquisition and evaluation, respectively. Each set contains 1000 images from the positive (crack) class and 1000 images from the negative (no crack) class. For the knowledge acquisition parameters~(Algo.~\ref{alg:calibration}), we use block dimension ($bDim$) 64, number of iterations ($iterLimit$) 10, prune window size ($pw$) 4 and 8, grid dimension ($grid$) $ceil((rows+cols)/20)$, triangle standard deviation splitting threshold ($th$) 5, and $cb$ 8. We compress the 2000 sampled evaluation images using MAGIC with compression parameters~(Algo.~\ref{alg:encode_main}) block dimension ($bDim$) 64, $d$ from 1 up to 12 in separate instances, and grid dimension ($grid$) $ceil((rows+cols)/20)$, along with the domain-specific pattern prediction model ($model$) and the color quantization dictionary obtained. To compare with MAGIC, we compress the same images with JPEG 2000 and WebP from QF 1 to 10. We obtain a separate dataset for each JPEG 2000, WebP, and MAGIC setting. Fig.~\ref{BuildingCrackDS} shows sample images from the compressed datasets. For each dataset, we extract features from the second fully connected (fc2) layer of a pretrained VGG-16 \cite{ref:vgg} to train and test a support vector machine for the classification task, using 30-fold cross-validation (20/80 test/train splits).
As Fig.~\ref{CrackAcc} shows, MAGIC is able to compress beyond JPEG 2000 QF=1 while maintaining comparable classification accuracy. The MAGIC images in the dataset compressed with $d = 12$ and $pw = 8$ are on average 22.09x smaller (1.06\% accuracy loss) than the source dataset (ACC=98.97\%, BPP=0.9479), 2.51x smaller (0.24\% accuracy loss) than JPEG 2000 QF=1 (ACC=98.15\%, BPP=0.1080), and 1.98x smaller (1.69\% accuracy loss) than WebP QF=1 (ACC=99.60\%, BPP=0.0851). \subsubsection{Forest Surveillance} From the forest fire dataset \cite{ref:fireDS}, we extract 643 images, of which 227 contain fire and 416 contain no fire. We ignore the images that are not relevant to forests. We use 20 images from the dataset (10 from each class) to perform the knowledge acquisition procedure. As knowledge acquisition parameters~(Algo.~\ref{alg:calibration}), we use block dimension ($bDim$) 64, number of iterations ($iterLimit$) 10, prune window size ($pw$) 5 and 8, grid dimension ($grid$) $ceil((rows+cols)/20)$, triangle standard deviation splitting threshold ($th$) 5, and $cb$ 8. The domain-specific pattern prediction model is trained in the same manner as for the infrastructure analysis task. We compress the remaining 623 images (excluding the knowledge acquisition learning set) using MAGIC with compression parameters~(Algo.~\ref{alg:encode_main}) block dimension ($bDim$) 64, $d$ from 1 through 12, and grid dimension ($grid$) $ceil((rows+cols)/20)$, along with the domain-specific pattern prediction model ($model$) and the color quantization dictionary obtained from the knowledge acquisition stage. Again, we obtain a separate dataset for each JPEG 2000 (QF 1 to 10), WebP (QF 1 to 10), and MAGIC setting. Fig.~\ref{fireDS} shows sample images from the fire dataset for JPEG 2000, WebP, and MAGIC. We extract features for each dataset as for the building crack dataset and carry out classification using a support vector machine with 30-fold cross-validation (20/80 test/train splits).
As seen in Fig.~\ref{fireAcc}, we observe the same trend as in the previous dataset. The MAGIC images compressed with $d = 8$ and $pw = 8$ are on average 42.65x smaller (2.99\% accuracy loss) than the source dataset (ACC=97.17\%, BPP=1.864), 2.32x smaller (1.20\% accuracy loss) than JPEG 2000 QF=1 (ACC=95.38\%, BPP=0.1014), and 5.85x smaller (3.18\% accuracy loss) than WebP QF=1 (ACC=97.36\%, BPP=0.2559). \subsection{Evaluation beyond the Lowest Quality Factor} WebP and JPEG 2000 are unable to compress beyond QF=1 without some level of pre-processing. MAGIC, on the other hand, naturally achieves a very large compression range. In Table~\ref{PushMore}, we evaluate MAGIC at extreme levels of compression using smaller $cb$ bit sizes. We can compress up to $\sim$167x more than the source at $\sim$13\% accuracy loss for the fire dataset and $\sim$69x more than the source at $\sim$6\% accuracy loss for the building crack dataset. Depending on the application requirements, MAGIC can gracefully trade off accuracy for lower BPP using the parameters exposed to the user. This extreme level of compression is possible because of MAGIC's ability to leverage domain knowledge. \input{fig_latex/BuldDS_WithComp.tex} \subsection{MAGIC Time \& Energy Analysis} \input{tables/buld_fullPowerRes} In its current implementation, MAGIC takes longer to compress images than JPEG 2000 and WebP. However, as shown above, MAGIC achieves a higher compression rate while still performing well on coarse-grained machine vision classification tasks. To explore the potential energy savings of MAGIC compression, we introduce a threshold, the C/T Cutoff (inspired by Sadler et al.~\cite{sadler2006data}), for determining the computation-to-transmission energy consumption ratio beyond which MAGIC is beneficial for the overall energy consumption of a given resource-constrained computing system.
The C/T Cutoff for MAGIC compression (for a specific set of parameters) can be computed using Equation~\ref{eq1}, where $E_1$ is the average MAGIC encoding time, $E_2$ is the average encoding time of the competitor method (JPEG 2000, WebP), $I_1$ is the average image size under MAGIC, $I_2$ is the average image size under the competitor method (JPEG 2000, WebP), and $f$ is the CPU clock frequency. The setup time during encoding is due to loading the libraries and initializing the Python environment; in an amortized analysis of a batch operation, the setup time can be considered negligible. For MAGIC compression (with a specific set of parameters) to save energy compared to other compression standards, the operating device must have a C/T value greater than MAGIC's C/T Cutoff. Tables~\ref{buld_fullPower_res} and \ref{fire_fullPower_res} list the C/T Cutoffs for different MAGIC compression settings for the building crack detection and forest fire detection datasets, respectively. We use $f$ = 3.7 GHz for computing the C/T cutoff values. Any device with a C/T value greater than the cutoff will benefit (in terms of operational power consumption) from using MAGIC with respect to the method being compared against (JPEG 2000, WebP). For example, in Table~\ref{fire_fullPower_res}, with MAGIC (pw=8, cb=2, d=1) the JPEG 2000 (JP2K) C/T cutoff is 0.497, which means the energy for transmitting 1 byte must be greater than the execution energy of 0.497 million clock cycles (CC) in a system for MAGIC to have higher energy savings than JPEG 2000. \input{equations/powerEqn} \section{Discussion}\label{sec:discussion} In this section, we investigate the properties of the current embodiment of MAGIC. We note that the MAGIC framework can be improved across many dimensions. We make preliminary studies of these possibilities and explore future extensions of MAGIC in this section.
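Equation~\ref{eq1} itself lives in an external file. A plausible reading, consistent with the variables defined above, is that the cutoff equals the extra encoding cycles MAGIC spends divided by the bytes it saves. The sketch below is an assumption in that spirit, not the paper's exact formula, and the numbers are toy values rather than measured results:

```python
def ct_cutoff(e1, e2, i1, i2, f):
    """Hypothetical C/T cutoff in million clock cycles per byte: extra encoding
    cycles MAGIC spends, f*(e1 - e2), divided by the bytes it saves, (i2 - i1).
    e1, e2: average encoding times (s); i1, i2: average image sizes (bytes);
    f: CPU clock frequency (Hz). Assumed form, not the paper's Equation (1)."""
    return f * (e1 - e2) / ((i2 - i1) * 1e6)

# Toy numbers: MAGIC takes 1 s longer per image but saves 4 kB, at f = 3.7 GHz.
cutoff = ct_cutoff(e1=2.0, e2=1.0, i1=1000, i2=5000, f=3.7e9)
print(cutoff)   # → 0.925 (million clock cycles per byte)
```

Under this reading, a device whose per-byte transmission energy exceeds the energy of 0.925 million clock cycles of computation would come out ahead using MAGIC in this toy scenario.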
\subsection{Variation in Compression Ability} For consistent overall system performance, the image compression technique should produce compressed images with low size variability. We compress the images using JPEG 2000, WebP, and MAGIC, and generate box plots showing the variation of BPP for the sampled distributions in Fig.~\ref{firebppbox} and Fig.~\ref{crackbppbox}. We observe that MAGIC provides lower variation in BPP than JPEG 2000 and WebP. Owing to the parameters of the knowledge acquisition and encoding phases, specifically $pw$, MAGIC has fine control over the compressed image size. Hence, MAGIC can provide steady performance even in biased scenarios where other techniques may not compress well. \input{tables/fire_fullPowerRes} \subsection{Improving Prediction Accuracy} Post-processing the MAGIC images or using a more powerful pattern prediction model can improve the prediction accuracy by about 1-2\%. Images compressed using MAGIC contain triangulation artifacts. One way to remove these artifacts is to recursively subdivide the triangles and compute the approximate color of each sub-triangle from the colors of the triangle and its neighbors. Using this technique, we were able to increase the classification accuracy. However, the post-processing adds computation at the decoder end. If the decoder system resides in the cloud, this step can be used to squeeze out extra performance. As explained earlier, we use entropy features for training and applying our neural network models, but we have noticed that VGG-16 fc2 features perform slightly better. Using a VGG-inspired large convolutional neural network for the domain-specific point prediction task also improves performance slightly. However, we intentionally use simple entropy features and a small neural network to boost speed and to reduce energy consumption and space requirements.
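The recursive-subdivision smoothing described above can be sketched as one level of midpoint subdivision: each triangle is split into four sub-triangles, and each corner sub-triangle is re-colored as a blend of the triangle's color and the colors of the neighbors across its two adjacent edges. The data layout and the 50/50 blend weights are illustrative assumptions, not the exact implementation.

```python
# Hedged sketch of one subdivision level for triangulation-artifact smoothing.

def midpoints(tri):
    (ax, ay), (bx, by), (cx, cy) = tri
    return ((ax+bx)/2, (ay+by)/2), ((bx+cx)/2, (by+cy)/2), ((cx+ax)/2, (cy+ay)/2)

def blend(c1, c2, w=0.5):
    """Per-channel weighted average of two colors (50/50 is an assumption)."""
    return tuple(w*a + (1-w)*b for a, b in zip(c1, c2))

def subdivide(tri, color, neighbor_colors):
    """Split tri = (a, b, c) at edge midpoints into four sub-triangles.

    neighbor_colors[i] is the color across edge i (edge 0 = a-b, 1 = b-c,
    2 = c-a); on the hull, pass the triangle's own color.
    """
    a, b, c = tri
    mab, mbc, mca = midpoints(tri)
    return [
        # each corner sub-triangle blends toward its two adjacent neighbors
        ((a, mab, mca), blend(color, blend(neighbor_colors[0], neighbor_colors[2]))),
        ((mab, b, mbc), blend(color, blend(neighbor_colors[0], neighbor_colors[1]))),
        ((mca, mbc, c), blend(color, blend(neighbor_colors[1], neighbor_colors[2]))),
        # the central sub-triangle keeps the original color
        ((mab, mbc, mca), color),
    ]
```

Applying the function recursively to the returned sub-triangles gives progressively smoother transitions, at the cost of the extra decoder-side computation noted above.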
In an application where time, space, and energy are not constrained, we can opt for more complex feature extraction methods and larger neural network architectures for domain-specific point prediction. \subsection{Time Complexity and Performance Improvements} The time complexity of the encoder (Algo.~\ref{alg:encode_main}) and decoder (Algo.~\ref{alg:decode_main}) algorithms simplifies to $\mathcal{O}(N+M\log M+TR)$. The major contributors in encoding are $\mathcal{O}(N)$ for tiling (line 2), $\mathcal{O}(M\log M)$ for DT (line 14), and $\mathcal{O}(TR)$ for triangle color calculation (line 17; the pixels associated with a triangle are determined by searching in a rectangle circumscribing the triangle), where $N$ is the number of pixels in the image, $M$ is the number of points sprayed, $T$ is the number of triangles, and $R$ is the dimension of the bounding rectangle of the biggest triangle. For decoding, the contributors are $\mathcal{O}(N)$ for computing the absolute positions of the predicted points (lines 6-8), $\mathcal{O}(M\log M)$ for DT (line 12), and $\mathcal{O}(TR)$ for triangle color assignment/drawing (line 17). In both algorithms, we expect the $\mathcal{O}(M\log M)$ DT step to consume the most time. The time complexity of the knowledge acquisition algorithm (Algo.~\ref{alg:calibration}) simplifies to $\mathcal{O}(KN\log N + KIM\log M + KITR + SVC + PQ)$. The major contributors are $\mathcal{O}(KN\log N)$ for Canny edge detection over all $K$ images (line 7), $\mathcal{O}(KIM\log M + KITR)$ for the split operation across all $K$ images (line 11), $\mathcal{O}(SVC)$ for color dictionary computation using the k-means algorithm (line 29), and $\mathcal{O}(PQ)$ for training the point prediction model (line 30).
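The $\mathcal{O}(TR)$ step can be made concrete with a small sketch: the pixels belonging to a triangle are found by scanning the rectangle that circumscribes it and keeping those inside, after which the triangle's color is the mean over its member pixels. The sign-based point-in-triangle test and the image-as-mapping layout are implementation assumptions.

```python
# Hedged sketch of the bounding-rectangle triangle-color step.

def pixels_in_triangle(a, b, c):
    """Integer pixel coordinates inside (or on the edge of) triangle (a, b, c)."""
    def sign(p, q, r):
        return (p[0]-r[0])*(q[1]-r[1]) - (q[0]-r[0])*(p[1]-r[1])
    x0, x1 = min(a[0], b[0], c[0]), max(a[0], b[0], c[0])
    y0, y1 = min(a[1], b[1], c[1]), max(a[1], b[1], c[1])
    out = []
    for x in range(int(x0), int(x1) + 1):      # scan the circumscribing rectangle
        for y in range(int(y0), int(y1) + 1):
            d1, d2, d3 = sign((x, y), a, b), sign((x, y), b, c), sign((x, y), c, a)
            has_neg = d1 < 0 or d2 < 0 or d3 < 0
            has_pos = d1 > 0 or d2 > 0 or d3 > 0
            if not (has_neg and has_pos):      # all same sign (or zero): inside
                out.append((x, y))
    return out

def triangle_color(pixels, image):
    """Approximate triangle color = mean intensity over its member pixels."""
    vals = [image[p] for p in pixels]
    return sum(vals) / len(vals)
```

The double loop over the bounding rectangle is exactly the $\mathcal{O}(R^2)$-per-triangle (hence $\mathcal{O}(TR)$ aggregate, with $R$ as defined above) cost attributed to lines 17 of the two algorithms.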
$N$, $M$, $T$, $R$ retain their earlier meanings; additionally, $K$ is the number of images in the $imgList$, $I$ is the $iterLimit$, $S$ is the iteration limit of the k-means algorithm, $V$ is the number of points in the $colorFreq$ map, $C$ is the number of centroids specified for k-means, $P$ is the number of training samples in $trainX$ and $trainY$, and $Q$ is the number of training epochs of the point prediction model. The runtime performance of both the encoder and decoder can be improved through parallelization, hardware implementation, and code tweaking. Many of the block operations, such as block feature extraction and point spray pattern prediction, can be easily parallelized. A hardware implementation can provide the most speedup and may reduce energy consumption as well. Future work will focus on improving the time and energy performance of MAGIC through these means. \input{fig_latex/crack_bppVar_box} \input{fig_latex/fire_bppVar_box} \subsection{Manual Region-of-Interest Guided Compression} As previously described, ROI bias can be automatically captured by training the pattern prediction model with supervised images. Beyond learning the region-of-interest bias, MAGIC offers manual ROI-based compression. With this feature, users can specify additional regions of an image to be retained at higher quality. In Fig.~\ref{ROI_Example_MAGIC}, we see an example of ROI-guided image compression where the fire region is designated manually as a region-of-interest. Note that the image region with the fire retains much more information than the remaining regions. \subsection{Extension of MAGIC to video compression} A video can be thought of as a collection of images, so MAGIC can be extended to process videos as well. Depending on the sampling rate of the image sensor, adjacent video frames have very little content difference. Taking this into consideration, we can save more in terms of space, computation, and transmission.
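One way to realize these savings is a set-delta update across frames, mirroring the frame-to-frame reuse of Eqns.~\ref{eq2} and \ref{eq3}: frame $N$ is represented by frame $N-1$'s structures minus the obsolete entries plus the new ones. Treating spray patterns and colors as hashable set elements is an assumption of this sketch.

```python
# Hedged sketch of delta-encoding adjacent video frames from their
# MAGIC components (OP = obsolete entries, NP = newly introduced entries).

def delta_update(prev_entries, obsolete, new):
    """entries[N] = (entries[N-1] - OP) | NP, for patterns or triangle colors."""
    return (prev_entries - obsolete) | new

def delta_cost(obsolete, new):
    """Only the two delta sets need to be transmitted for frame N."""
    return len(obsolete) + len(new)
```

When adjacent frames differ little, both delta sets are small, so the per-frame transmission cost drops well below re-encoding each frame from scratch.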
The two main components of a MAGIC-encoded image are $labelsArr$ and $colorDict$. We can represent frame[$N$] by reusing the $colorDict$ and $labelsArr$ of frame[$N-1$]. In Eqn.~\ref{eq2}, $OP$ is the set of obsolete point spray patterns that are no longer present in the new frame and $NP$ is the set of new point spray patterns introduced in the new frame. Similarly, as shown in Eqn.~\ref{eq3}, $colorDict$[$N-1$] can be modified by removing the obsolete triangle colors and introducing the colors of the new triangles of frame $N$. Future work will investigate and formalize the MAGIC flow applied to video. \input{equations/videoEquation} \input{fig_latex/ROI_Example_MAGIC} \section{Conclusion} \label{sec:conclusion} The increasing use of intelligent edge devices in diverse IoT applications, specifically for a multitude of computer vision tasks, calls for an image compression technique that meets the unique requirements of these applications: primarily, a high compression ratio while maintaining machine vision accuracy at an acceptable level. The MAGIC framework presented in this paper addresses this need. We have shown that effective use of domain knowledge learned with ML can provide high compression in resource-constrained edge applications while preserving the features needed for machine vision. The proposed framework is flexible for application in diverse domains and scalable to large image sizes. Our experiments on coarse-grained ML tasks using two datasets highlight the effectiveness of MAGIC. We achieve up to 42.65x higher compression than the source (beyond JPEG 2000 and WebP) while achieving similar accuracy. We compute the computation-to-transmission energy cutoffs to demonstrate at what point MAGIC-compressed images become more energy efficient than standard techniques. Further, we show low compression variance compared to standard image compression techniques.
With the use of a common pattern dictionary, the proposed ML-based compression procedure can be easily extended to recognizing coarse-grained patterns on edge devices. Moreover, it can potentially be extended to video compression, where domain knowledge is expected to play an even stronger role. Future work will investigate these extensions as well as further improvements in the performance of MAGIC in a variety of edge applications. \bibliographystyle{IEEEtran}
\section{Revealing facts} Cold periods of the Pleistocene ice ages are correlated with large quantities of inclusions in the ice. A survey of Antarctic ice cores showed that in cold periods the dust masses are larger by one to two orders of magnitude than in warm periods \cite{EPICA800,Lambert}. An analysis with detailed time resolution quantitatively correlates low temperatures with high impurity content \cite{Lambert}. The distribution of dust mass versus grain size shows a clear distinction between small and large grains, with the boundary at a diameter of about 4 $\mu$m \cite{Steffensen,Delmonte}. The terrestrial origin of the large grains has been unambiguously determined by mineralogical methods \cite{Biscaye}. For the large grains the mass distribution versus grain size is irregular and varies from one period to the next \cite{Steffensen}. In contrast, the small grains show a bell-shaped mass distribution, which can be easily parametrized and which varies little between cold periods. In order to determine the origin of the small grains, the isotope distributions of Sr and Nd have been compared with those of samples from many regions of the globe \cite{Delmonte}. An agreement was found between material from Antarctica and from Patagonia. In Table 1b of \cite{Delmonte} the samples of South America are dated; those of the Pampas have ages between 10 and 25 kyr BP, while the samples from other regions are without dates. It was concluded that the small grains have been transported from Patagonia to Antarctica \cite{Delmonte,Gaiero}. However, it is also possible that the grains from the two regions have the same extraterrestrial origin. \section{Extraterrestrial atoms} The 100 ka period is a dominant feature of the Pleistocene ice ages.
While the ellipticity of Earth's orbit has a 100 ka periodicity, the corresponding variation of the insolation is minute, so that a large amplification would be required to explain the observed temperature variations of this period. R.A. Muller and G.J. MacDonald \cite{Muller1,Muller2} pointed out that the inclination of Earth's orbit (the angle between Earth's orbital plane and the invariant plane of the planetary system, which is perpendicular to its angular momentum) also has an approximate 100 ka period. They postulated a disk-shaped cloud circling the Sun. A change of the angle between Earth's orbital plane and the plane of the cloud then modulates the solar irradiation of the Earth. The cloud would also produce a correlation between cold periods and impurity content in ice cores \cite{Lambert}, provided that the cloud extends beyond Earth's orbit. In a Milankovitch theory without the cloud, the inclination is irrelevant, since the Sun radiates isotropically. Woelfli et al.\,\cite{latitude} focused their attention on another unexplained fact, namely the existence of remains of mammoths deep in Arctic East Siberia. It indicates that this region had a lower latitude in the Pleistocene. The authors developed a model for a rapid geographic shift of the poles; this shift terminates the Pleistocene. As a by-product of the model, a cloud of atoms and ions circling the Sun had to exist for a time of the order of a few million years. This agrees with the observed total duration of the Pleistocene ice ages \cite{Tiedemann}, i.e.\ about 2.5 to 3 Ma. In contrast, the Milankovitch theory has no time limit backward or forward. \section{Mass distribution of the grains} The extraterrestrial ions and atoms enter the high atmosphere with planetary velocity. They are stopped and form molecules. These diffuse and coagulate with others. Clusters move downwards, occasionally joining others, until they finally reach the ground.
As a didactic illustration we imitated such a process in a simple computer model. It contains six vertical layers of a two-dimensional square net with 8 lattice points on each horizontal line. The horizontal lines are closed into a circle to avoid boundaries. The top layer receives particles at random positions, which combine with any cluster that may exist there. Then, at a random position of the net, any cluster that may exist there makes a side step with a probability that decreases with cluster size, combining with any cluster it may encounter. Finally, at random positions of the net, clusters make a downward step with a probability that increases with cluster size, also combining with any cluster they may find. This sequence is repeated a million times. Fig.\ 1 shows a plot of the mass that reaches the ground layer for each cluster size. The fluctuations vary from run to run, but the curve consistently shows a bell shape. Smaller-than-typical clusters are rare, because small clusters sink slowly and have time to grow before they reach the ground. Similarly, large clusters diffuse downwards with good speed and have no time to grow to excessive size. We do not intend to make this simple program look more realistic, since this could only show once more that in a complex model open parameters can be adjusted to fit experimental data. A determination of realistic parameters from the chemistry of the atmosphere is beyond our capabilities. The purpose of our simple program is merely to make a bell-shaped mass distribution plausible. \section{Proposed experimental test} In the model of the Pleistocene ice ages of Woelfli et al.\,\cite{latitude}, the atoms or ions of the disk-shaped cloud around the Sun were emitted from a planetary object in an extremely eccentric orbit. This object was hot from tidal work and solar radiation. Particle emission from it is limited by the escape velocity.
This favours light particles and leads to an isotope effect within an atomic species. Magnesium, which has three stable isotopes, should be a good candidate for a test. Being light, it is easily emitted, so that extraterrestrial atoms would outnumber those of terrestrial origin. A measurement of the isotopic distribution of Mg from small clusters of a cold period could reveal its extraterrestrial origin. \begin{figure}[htbp] \begin{center} \includegraphics[width=4in]{WBFig.pdf} \caption{{\bf Mass distribution versus cluster size in a didactic model.}} \end{center} \end{figure}
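The didactic lattice model of the mass-distribution section can be reproduced in a short sketch. The text does not specify the step probabilities, so the forms below (side-step probability falling as $1/\mathrm{size}$, downward-step probability rising with size) are illustrative assumptions, and fewer iterations than the quoted million are used for speed.

```python
import random
from collections import Counter

# Hedged sketch of the didactic deposition model: 6 vertical layers,
# 8 columns, periodic horizontally; clusters coagulate on every move.

LAYERS, COLS, STEPS = 6, 8, 100_000

def run(seed=0):
    rng = random.Random(seed)
    grid = [[0] * COLS for _ in range(LAYERS)]  # cluster size per site, 0 = empty
    ground_mass = Counter()                     # size -> total mass reaching ground

    for _ in range(STEPS):
        # 1) a new particle arrives at a random position of the top layer
        grid[0][rng.randrange(COLS)] += 1

        # 2) a random site: its cluster side-steps with probability ~ 1/size
        #    (an assumed form decreasing with size), merging on contact
        i, j = rng.randrange(LAYERS), rng.randrange(COLS)
        s = grid[i][j]
        if s and rng.random() < 1.0 / s:
            k = (j + rng.choice((-1, 1))) % COLS  # periodic boundary
            grid[i][k] += s
            grid[i][j] = 0

        # 3) a random site: its cluster sinks with probability rising with
        #    size (assumed form s / (s + 4)), merging on contact
        i, j = rng.randrange(LAYERS), rng.randrange(COLS)
        s = grid[i][j]
        if s and rng.random() < s / (s + 4.0):
            if i == LAYERS - 1:
                ground_mass[s] += s               # cluster reaches the ground
            else:
                grid[i + 1][j] += s
            grid[i][j] = 0

    return ground_mass
```

Plotting `run()` as mass versus cluster size yields a single-peaked histogram of the kind shown in the figure: very small clusters rarely reach the ground, and very large ones sink before growing further.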
\chapter{Dirac algebra and trace theorems} \label{Acko} In this Appendix we define the Dirac gamma matrices and collect their properties. We also show spin sums, some Fierz identities, and the proof of the identity $\;\overline{\nu_{L}^{c}}\nu_{R}^{c} = \overline{\nu_{L}}\nu_{R}$. \section{Gamma matrices} \label{gamas} The so-called Dirac representation of the gamma matrices is given by \begin{eqnarray} \gamma^{0} = \left( \begin{array}{rr} {\bf I} & 0 \\ 0 & -{\bf I} \end{array} \right) ,\;\;\; \gamma^{i} = \left( \begin{array}{rr} 0 & \sigma^{i} \\ - \sigma^{i} & 0 \end{array} \right) ,\;\;\; \gamma_{5}\;\;\equiv\;\;i \gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\;\; = \;\; \left( \begin{array}{rr} 0 & {\bf I} \\ {\bf I} & 0 \end{array} \right), \end{eqnarray} where ${\bf I}$ is the $2 \times 2$ identity matrix and the Pauli matrices $\sigma^{i}$ are defined as \begin{eqnarray} \sigma^{1} = \left( \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \right) ,\; \sigma^{2} = \left( \begin{array}{rr} 0 & -i \\ i & 0 \end{array} \right) ,\; \sigma^{3} = \left( \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right) . \end{eqnarray} Some properties of the gamma matrices follow: \begin{eqnarray} \gamma_{0} & = & \gamma^{0},\;\;\;\;\;\gamma_{i}\;\;=\;\;-\gamma^{i}, \nonumber \\ {\gamma_{0}}^{\dagger} & = & \gamma_{0}, \nonumber \\ \gamma_{0} \gamma_{0} & = & {\bf 1}, \nonumber \\ \gamma_{0} {\gamma_{\mu}}^{\dagger} \gamma_{0} & = & \gamma_{\mu}, \nonumber \\ \gamma_{0}^{T} & = & \gamma_{0}, \nonumber \\ \gamma_{5}^{T} & = & \gamma_{5}, \nonumber \\ \gamma_{5}^{2} & = & {\bf 1}, \nonumber \\ \nu_{1}^{T} \: \Gamma \: \nu_{2} & = & - {\nu_{2}}^{T} \: \Gamma^{T} \:\nu_{1}, \end{eqnarray} where $\nu_{1}, \nu_{2}$ are spinors, ${\bf 1}$ is the $4 \times 4$ unit matrix, and $\Gamma$ represents a product of gamma matrices.
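The representation above lends itself to a quick numerical check. The following NumPy sketch (a verification aid only, not part of the derivation) confirms the defining relations just listed, together with the Clifford algebra and the first trace theorem quoted below.

```python
import numpy as np

# Numerical check of the Dirac-representation gamma matrices.

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], complex),      # sigma^1
         np.array([[0, -1j], [1j, 0]], complex),   # sigma^2
         np.array([[1, 0], [0, -1]], complex)]     # sigma^3

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g = [g0] + [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in sigma]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

metric = np.diag([1, -1, -1, -1])                  # g^{mu nu} = diag(+,-,-,-)

assert np.allclose(g0 @ g0, np.eye(4))             # gamma_0 gamma_0 = 1
assert np.allclose(g5 @ g5, np.eye(4))             # gamma_5^2 = 1
assert np.allclose(g5, np.block([[0 * I2, I2], [I2, 0 * I2]]))
for mu in range(4):
    # gamma_0 gamma_mu^dagger gamma_0 = gamma_mu
    assert np.allclose(g0 @ g[mu].conj().T @ g0, g[mu])
    for nu in range(4):
        # Clifford algebra: {gamma^mu, gamma^nu} = 2 g^{mu nu}
        assert np.allclose(g[mu] @ g[nu] + g[nu] @ g[mu],
                           2 * metric[mu, nu] * np.eye(4))
        # trace theorem: Tr(gamma^mu gamma^nu) = 4 g^{mu nu}
        assert np.isclose(np.trace(g[mu] @ g[nu]), 4 * metric[mu, nu])
print("all identities verified")
```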
Trace theorems: \begin{eqnarray} Tr({\bf 1}) & = & 4, \nonumber \\ Tr(\gamma^{\mu}\gamma^{\nu}) & = & 4 g^{\mu\nu}, \nonumber \\ Tr(\gamma^{\mu}\gamma^{\nu}\gamma^{\lambda}\gamma^{\sigma}) & = & 4\left( g^{\mu\nu}g^{\lambda\sigma} - g^{\mu\lambda}g^{\nu\sigma} + g^{\mu\sigma}g^{\nu\lambda}\right), \nonumber \\ Tr(\gamma_{5}) & = & 0, \nonumber \\ Tr(\gamma_{5}\gamma^{\mu}\gamma^{\nu}) & = & 0, \nonumber \\ Tr(\gamma_{5}\gamma^{\mu}\gamma^{\nu}\gamma^{\lambda}\gamma^{\sigma}) & = & 4 i \epsilon^{\mu\nu\lambda\sigma}, \nonumber \\ Tr(odd\;number\;of\;gamma\;matrices) & = & 0. \end{eqnarray} Product rules: \begin{eqnarray} \gamma^{\mu}\gamma^{\nu} + \gamma^{\nu}\gamma^{\mu} & = & 2 g^{\mu\nu} , \nonumber \\ \gamma_{\mu}\gamma^{\mu} & = & 4, \nonumber \\ \gamma_{\mu}\gamma^{\nu}\gamma^{\mu} & = & - 2 \gamma^{\nu}, \nonumber \\ \gamma_{\mu}\gamma^{\nu}\gamma^{\lambda}\gamma^{\mu} & = & 4 g^{\nu\lambda}, \nonumber \\ \gamma_{\mu}\gamma^{\nu}\gamma^{\lambda}\gamma^{\sigma}\gamma^{\mu} & = & - 2 \gamma^{\sigma}\gamma^{\lambda}\gamma^{\nu}, \nonumber \\ \gamma_{5}\gamma^{\mu} + \gamma^{\mu}\gamma_{5} & = & 0 \nonumber \\ g^{\mu\nu}g_{\mu\nu} & = & 4. \end{eqnarray} An explicit representation of the charge conjugation matrix C is \begin{eqnarray} C = i \gamma^{2} \gamma^{0} & = & \left( \begin{array}{rrrr} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right),\;\;\;\;\; C^{T}C^{T}\;\; = \;\; - {\bf 1}. 
\end{eqnarray} \section{Spin sums} \label{spins} \begin{eqnarray} \sum_{s}u_{\alpha}(p,s)\overline{u_{\beta}}(p,s) & = & {({\not p} + m)}_{\alpha\beta}\;, \nonumber \\ \sum_{s}v_{\alpha}(p,s)\overline{v_{\beta}}(p,s) & = & {({\not p} - m)}_{\alpha\beta}\;, \nonumber \\ \sum_{\lambda}\epsilon_{\mu}(p,\lambda)\epsilon_{\nu}^{*}(p,\lambda) & = & - g_{\mu\nu} + \frac{p_{\mu}p_{\nu}}{M_{V}^{2}}\;, \end{eqnarray} where $u_{\alpha}(p,s), v_{\alpha}(p,s)$ are spinors with momentum $p$, spin $s$ and mass $m$; $\epsilon_{\mu}(p,\lambda)$ is a polarization vector of a weak boson $V = W, Z$ with momentum $p$, spin $\lambda$ and mass $M_{V}$. \section{Fierz identities used for the calculation of the boxes} \begin{eqnarray} \big[\bar u \gamma_{\alpha} (1-\gamma_{5}) \gamma_{\epsilon} \gamma_{\gamma} u \big] \times \big[{\overline v_{e}} \gamma^{\alpha} (1-\gamma_{5}) \gamma^{\epsilon} \gamma^{\gamma} v_{\nu_{e}} \big] & = & 16 \big[\bar u \gamma_{\mu} (1-\gamma_{5}) u \big] \nonumber \\ & \times & \big[{\overline v_{e}} \gamma^{\mu} (1-\gamma_{5}) v_{\nu_{e}} \big], \\ \big[\bar u \gamma_{\alpha} (1-\gamma_{5}) \gamma_{\epsilon} \gamma_{\gamma} u \big] \times \big[{\overline v_{e}} \gamma^{\gamma} (1-\gamma_{5}) \gamma^{\epsilon} \gamma_{\alpha} v_{\nu_{e}} \big] & = & 4 \big[\bar u \gamma_{\mu} (1-\gamma_{5}) u \big] \nonumber \\ & \times & \big[{\overline v_{e}} \gamma^{\mu} (1-\gamma_{5}) v_{\nu_{e}} \big]. 
\end{eqnarray} \section{Proof of the identity $\;\;\;\overline{\nu_{L}^{c}}\nu_{R}^{c} = \overline{\nu_{L}}\nu_{R}$} \label{proof} Using the definition of the charge conjugate field, \begin{eqnarray} \nu^{c} & = & C\gamma_{0}\nu^{*}, \;\;\;(C= i \gamma^{2} \gamma^{0}) \nonumber \\ \overline{\nu^{c}} & = & \nu^{T} C , \end{eqnarray} we get \begin{eqnarray} \overline{\nu_{L}^{c}}\nu_{R}^{c} & = & \left[\frac{1-\gamma_{5}}{2} \nu^{c}\right]^{\dagger} \gamma_{0} \frac{1+\gamma_{5}}{2} \nu^{c} = {\nu^{c}}^{\dagger}{\frac{1-\gamma_{5}}{2}}^{\dagger} \gamma_{0} \frac{1+\gamma_{5}}{2}\nu^{c} \nonumber \\ & = & {\nu^{c}}^{\dagger} \gamma_{0}\frac{1+\gamma_{5}}{2} \frac{1+\gamma_{5}}{2} \nu^{c} = \overline{\nu^{c}}\frac{1+\gamma_{5}}{2}\nu^{c} \nonumber \\ & = & \nu^{T} C \frac{1+\gamma_{5}}{2} C \gamma_{0}\nu^{*} = - {\nu^{*}}^{T} \left[C \frac{1+\gamma_{5}}{2} C \gamma_{0}\right]^{T}\nu \nonumber \\ & = & - \nu^{\dagger}\left[\gamma_{0} C^{T} \frac{1+\gamma_{5}}{2} C^{T} \right]\nu = - \overline{\nu} C^{T} \frac{1+\gamma_{5}}{2} C^{T}\nu \nonumber \\ & = & - \overline{\nu} C^{T}C^{T} \frac{1+\gamma_{5}}{2} \nu = \overline{\nu} \frac{1+\gamma_{5}}{2} \nu \nonumber \\ & = & \overline{\nu_{L}}\nu_{R}. \end{eqnarray} \chapter{Couplings of $\nu^{'}$ and $N$ to Higgs} \label{Becko} We begin with some useful properties of the rotation matrix $G$ (see Eq. \ref{matrixg}). From $GG^{\dagger} = G^{\dagger}G = 1$ we have \begin{eqnarray} \label{relationsg} U_{1}U_{1}^{\dagger} + U_{2}U_{2}^{\dagger} = 1, &\;\;\;\; \;\;\;\;& U_{1}^{\dagger}U_{1} + U_{3}^{\dagger}U_{3} = 1, \nonumber \\ U_{3}U_{3}^{\dagger} + U_{4}U_{4}^{\dagger} = 1, &\;\;\;\; \;\;\;\;& U_{2}^{\dagger}U_{2} + U_{4}^{\dagger}U_{4} = 1, \nonumber \\ U_{1}U_{3}^{\dagger} + U_{2}U_{4}^{\dagger} = 0, &\;\;\;\; \;\;\;\;& U_{1}^{\dagger}U_{2} + U_{3}^{\dagger}U_{4} = 0, \nonumber \\ U_{3}U_{1}^{\dagger} + U_{4}U_{2}^{\dagger} = 0, &\;\;\;\; \;\;\;\;& U_{2}^{\dagger}U_{1} + U_{4}^{\dagger}U_{3} = 0. 
\end{eqnarray} Further, from (see Sec. \ref{diagon3}) \begin{eqnarray} U_{1}D + U_{2}M & = & 0, \nonumber \\ U_{3}D + U_{4}M & = & M^{'}, \end{eqnarray} we get \begin{eqnarray} U_{2}^{\dagger}U_{1}D + U_{2}^{\dagger}U_{2}M & = & 0, \nonumber \\ U_{4}^{\dagger}U_{3}D + U_{4}^{\dagger}U_{4}M & = & U_{4}^{\dagger}M^{'}. \end{eqnarray} Adding these two equations and using Eq. \ref{relationsg} we find the following relation between $M$ and $M{'}$: \begin{equation} \label{mmu} M = U_{4}^{\dagger}M^{'}. \end{equation} To derive the Lagrangian describing the couplings of $\nu^{'}$ and $N$ to $H$, ${\cal L}_{H}$, we rewrite Eq. \ref{yukawa1} as \begin{eqnarray} {\cal L} & = & - \frac{g_{2}}{\sqrt{2} M_{W}} \left(\overline{\nu_{L}}\:\overline{l_{L}}\right)D \tilde{\Phi}n_{R} + h.c.\;, \end{eqnarray} where $D$ is a $3 \times 3$ matrix in family space and, (see Eqs. \ref{higgsplus}, \ref{higgsminus}), \begin{eqnarray} \tilde{\Phi} = i \tau_{2} \Phi^{*} = \left( \begin{array}{c} {\phi^{0}}^{*} \\ -\phi^{-} \end{array} \right) = \left( \begin{array}{c} \frac{1}{\sqrt{2}}(v + H - i \chi) \\ - \phi^{-} \end{array} \right). \end{eqnarray} Selecting the $H$ part we get \begin{eqnarray} {\cal L}_{H} & = & - \frac{g_{2}}{2 M_{W}}\overline{\nu_{L}} D n_{R}H + h.c.\;. \end{eqnarray} In the next step we add and subtract a term: \begin{eqnarray} {\cal L}_{H} & = & - \frac{g_{2}}{2 M_{W}}\overline{\nu_{L}} D n_{R}H + h.c. \nonumber \\ & - & \frac{g_{2}}{2 M_{W}}\overline{S_{L}} M n_{R}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}}\overline{S_{L}} M n_{R}H + h.c.\;. \end{eqnarray} The first two lines of this relation can be compared with Eq. \ref{fa3}. We can now use the results of Sec. \ref{diagon3}, which give \begin{eqnarray} {\cal L}_{H} & = & - \frac{g_{2}}{2 M_{W}} \overline{S_{L}^{'}}M^{'}n_{R}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}} \left(\overline{\nu_{L}^{'}}U_{2} + \overline{S_{L}^{'}}U_{4}\right) M n_{R} H + h.c. 
\nonumber \\ & = & - \frac{g_{2}}{2 M_{W}}\overline{S_{L}^{''}}T M^{'} Z^{\dagger} n_{R}^{''}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}}\left(\overline{\nu_{L}^{'}}U_{2}M Z^{\dagger} + \overline{S_{L}^{''}}T U_{4} M Z^{\dagger}\right) n_{R}^{''}H + h.c.\;. \end{eqnarray} Now we use $M = U_{4}^{\dagger} M^{'}$ (see Eq. \ref{mmu}) and $M^{''} = T M^{'} Z^{\dagger}$ (see Eq. \ref{mprimed}): \begin{eqnarray} {\cal L}_{H} & = & - \frac{g_{2}}{2 M_{W}}\overline{S_{L}^{''}}T M^{'} Z^{\dagger} n_{R}^{''}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}}\left(\overline{\nu_{L}^{'}}U_{2}U_{4}^{\dagger} M^{'} Z^{\dagger} + \overline{S_{L}^{''}}T U_{4} U_{4}^{\dagger} M^{'} Z^{\dagger}\right) n_{R}^{''}H + h.c. \nonumber \\ & = & - \frac{g_{2}}{2 M_{W}}\overline{S_{L}^{''}}M^{''} n_{R}^{''}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}}\left(\overline{\nu_{L}^{'}}U_{2}U_{4}^{\dagger} T^{\dagger} T M^{'} Z^{\dagger} + \overline{S_{L}^{''}}T U_{4} U_{4}^{\dagger} T^{\dagger} T M^{'} Z^{\dagger}\right) n_{R}^{''}H + h.c. \nonumber \\ & = & - \frac{g_{2}}{2 M_{W}}\overline{S_{L}^{''}}M^{''} n_{R}^{''}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}}\left(\overline{\nu_{L}^{'}}U_{2}U_{4}^{\dagger} T^{\dagger} M^{''} + \overline{S_{L}^{''}}T U_{4} U_{4}^{\dagger} T^{\dagger} M^{''}\right) n_{R}^{''}H + h.c.\;. \end{eqnarray} Using $U_{2}U_{4}^{\dagger} = - U_{1}U_{3}^{\dagger}$ and $U_{4}U_{4}^{\dagger} = 1 - U_{3}U_{3}^{\dagger}$, see Eq. \ref{relationsg}, \begin{eqnarray} {\cal L}_{H} & = & - \frac{g_{2}}{2 M_{W}}\overline{S_{L}^{''}}M^{''} n_{R}^{''}H + h.c. 
\nonumber \\ & + & \frac{g_{2}}{2 M_{W}}\big[\overline{\nu_{L}^{'}}(-U_{1}U_{3}^{\dagger}) T^{\dagger} M^{''} + \overline{S_{L}^{''}}T (1 - U_{3} U_{3}^{\dagger}) T^{\dagger} M^{''}\big] n_{R}^{''}H + h.c.\; ,\;\;\; \end{eqnarray} and putting \begin{eqnarray} K_{L} & = & U_{1}^{\dagger},\;\;\;\;\; K_{H}=U_{3}^{\dagger}T^{\dagger}, \end{eqnarray} we get \begin{eqnarray} {\cal L}_{H} & = & - \frac{g_{2}}{2 M_{W}}\overline{S_{L}^{''}}M^{''} n_{R}^{''}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}}\overline{\nu_{L}^{'}}(-K_{L}^{\dagger}K_{H}) M^{''} n_{R}^{''}H + h.c. \nonumber \\ & + & \frac{g_{2}}{2 M_{W}} \overline{S_{L}^{''}} (1 - K_{H}^{\dagger} K_{H}) M^{''} n_{R}^{''}H + h.c. \nonumber \\ & = & - \frac{g_{2}}{2 M_{W}} \overline{S_{L}^{''}}(K_{H}^{\dagger} K_{H}) M^{''} n_{R}^{''}H + h.c. \nonumber \\ & - & \frac{g_{2}}{2 M_{W}} \overline{\nu_{L}^{'}}(K_{L}^{\dagger}K_{H}) M^{''} n_{R}^{''}H + h.c. \nonumber \\ & = & - \frac{g_{2}}{2 M_{W}} \overline{N_{L}}(K_{H}^{\dagger} K_{H}) M^{''} N_{R}H + h.c. \nonumber \\ & - & \frac{g_{2}}{2 M_{W}} \overline{\nu_{L}^{'}}(K_{L}^{\dagger}K_{H}) M^{''} N_{R}H + h.c. \nonumber \\ & = & - \frac{g_{2}}{2 M_{W}}\overline{N} (K_{H}^{\dagger} K_{H}) M_{N} N H \nonumber \\ & - & \frac{g_{2}}{2 M_{W}} \overline{\nu^{'}}(K_{L}^{\dagger}K_{H}) M_{N} \frac{1+\gamma_{5}}{2}N H \nonumber \\ & - & \frac{g_{2}}{2 M_{W}} \overline{N}(K_{H}^{\dagger}K_{L}) M_{N} \frac{1-\gamma_{5}}{2}\nu^{'} H. \end{eqnarray} The couplings of $\chi, \phi^{+}$ and $\phi^{-}$ are found by analogy. \chapter{Feynman rules} \label{Cecko} We list here the Feynman rules needed for the computation of the non-SM diagrams contributing to the processes studied in this thesis. They are given in the 't~Hooft-Feynman gauge (see Sec. \ref{quant1}). The rules for the vertices correspond to the interaction Lagrangians of Sec. \ref{inter3}. The SM case is obtained in the limit \begin{equation} K_{H} \rightarrow 0, \;\;\;\;\; K_{L} \rightarrow 1. 
\end{equation} In vertices, where applicable, the arrows indicate in addition to the flow of the charge also the flow of momenta. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,8) \put(0.3,+0.3){\mbox{\epsfxsize=1.2in\epsffile{frules1.eps}}} \put(2.6,7.72){$+ \;i e \gamma_{\mu}$} \put(2.6,6.58){$+ \;\frac{ie}{s_{W}c_{W}}\gamma_{\mu} \left[\left(-\frac{1}{2}+s_{W}^{2}\right) \frac{(1-\gamma_{5})}{2}+s_{W}^{2}\frac{(1+\gamma_{5})}{2}\right]$} \put(2.6,5.47){$+ \;\frac{ie}{4s_{W}c_{W}} \left(K_{L}^{\dagger}K_{L}\right)_{ij} \gamma_{\mu}(1-\gamma_{5})$} \put(2.6,4.27){$+ \;\frac{ie}{4s_{W}c_{W}}\left(K_{L}^{\dagger}K_{H} \right)_{ia}\gamma_{\mu}(1-\gamma_{5})$} \put(2.6,3.03){$+ \;\frac{ie}{4s_{W}c_{W}}\left(K_{H}^{\dagger}K_{H} \right)_{ab}\gamma_{\mu}(1-\gamma_{5})$} \put(2.6,1.78){$+ \;\frac{ig_{2}}{2\sqrt{2}}(K_{L})_{li} \gamma_{\mu}\left(1-\gamma_{5}\right)$} \put(2.6,0.65){$+ \;\frac{ig_{2}}{2\sqrt{2}}(K_{H})_{la} \gamma_{\mu}\left(1-\gamma_{5}\right)$} \end{picture} \end{center} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,7.5) \put(0.3,+0.7){\mbox{\epsfxsize=1.2in\epsffile{frules2.eps}}} \put(2.6,6.79){$- \;\frac{ig_{2}}{2M_{W}}M_{N}\left(K_{L}^{\dagger} K_{H}\right)_{ia}\frac{\left(1+\gamma_{5}\right)}{2}$} \put(2.6,5.72){$- \;\frac{ig_{2}}{2M_{W}}M_{N}\left(K_{H}^{\dagger} K_{L}\right)_{ai}\frac{\left(1-\gamma_{5}\right)}{2}$} \put(2.6,4.60){$- \;\frac{g_{2}}{2M_{W}}M_{N}\left(K_{L}^{\dagger} K_{H}\right)_{ia}\frac{\left(1+\gamma_{5}\right)}{2}$} \put(2.6,3.44){$+ \;\frac{g_{2}}{2M_{W}}M_{N}\left(K_{H}^{\dagger} K_{L}\right)_{ai}\frac{\left(1-\gamma_{5}\right)}{2}$} \put(2.6,2.24){$- \;\frac{ig_{2}}{2M_{W}}M_{N}\left(K_{H}^{\dagger} K_{H}\right)_{ab}$} \put(2.6,1.11){$- \;\frac{g_{2}}{2M_{W}}M_{N}\left(K_{H}^{\dagger}K_{H} \right)_{ab}\gamma_{5}$} \end{picture} \end{center} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,7.5) 
\put(0.3,+0.7){\mbox{\epsfxsize=1.5in\epsffile{frules3.eps}}} \put(2.6,6.88){$+ \;\frac{ig_{2}}{\sqrt{2}M_{W}}\left(K_{H}\right)_{la} \left[ M_{N}\frac{(1+\gamma_{5})}{2} - m_{l}\frac{(1-\gamma_{5})}{2}\right]$} \put(2.6,5.88){$- \;\frac{ig_{2}m_{l}}{\sqrt{2}M_{W}}\left(K_{L}\right)_{li} \frac{(1-\gamma_{5})}{2}$} \put(2.6,4.86){$+ \;\frac{ig_{2}}{\sqrt{2}M_{W}}\left(K_{H}^{\dagger} \right)_{al} \left[ M_{N}\frac{(1-\gamma_{5})}{2} - m_{l}\frac{(1+\gamma_{5})}{2}\right]$} \put(2.6,3.82){$- \;\frac{ig_{2}m_{l}}{\sqrt{2}M_{W}}\left(K_{L}^{\dagger} \right)_{il} \frac{(1+\gamma_{5})}{2}$} \put(2.6,2.53){$- \;\frac{iec_{W}}{s_{W}}\left[g_{\nu\lambda} (p_{1}-p_{2})_{\mu} +g_{\lambda\mu}(p_{2}-p_{3})_{\nu}\right.$} \put(2.6,2.25){$+ \;\left.g_{\mu\nu}(p_{3}-p_{1})_{\lambda}\right]$} \put(2.6,1.17){$+ \;ig_{2}M_{W}g^{\mu\nu}$} \end{picture} \end{center} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,7.5) \put(0.3,+0.7){\mbox{\epsfxsize=1.3in\epsffile{frules4.eps}}} \put(2.6,6.67){$0$} \put(2.6,5.58){$+\frac{ig_{2}}{2}\left(p_{1}-p_{-}\right)_{\mu}$} \put(2.6,4.47){$+\frac{g_{2}}{2}\left(p_{-}-p_{2}\right)_{\mu}$} \put(2.6,3.32){$-\frac{ig_{2}}{2}\frac{1-2s_{W}^{2}}{c_{W}} \left(p_{-}-p_{+}\right)_{\mu}$} \put(2.6,2.19){$-ig_{2}M_{W}\frac{s_{W}^{2}}{c_{W}}g^{\mu\nu}$} \put(2.6,1.07){$-ig_{2}M_{W}\frac{s_{W}^{2}}{c_{W}}g^{\mu\nu}$} \end{picture} \end{center} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,7.5) \put(0.3,+1.5){\mbox{\epsfxsize=1.5in\epsffile{frules5.eps}}} \put(2.6,7.25){$+\frac{i}{\not q - m}$} \put(2.6,6.21){$-\frac{ig^{\alpha\beta}}{q^{2}}$} \put(2.6,5.25){$-\frac{ig^{\alpha\beta}}{q^{2}-M_{V}^{2}}$} \put(2.6,4.18){$+\frac{i}{q^{2}-M_{W}^{2}}$} \put(2.6,3){$+\frac{i}{q^{2}-M_{Z}^{2}}$} \put(2.6,1.8){$+\frac{i}{q^{2}-M_{H}^{2}}$} \end{picture} \end{center} \end{figure} \chapter{Dimensional regularization and some useful integrals} \label{Decko} \section{Dimensional 
regularization} \label{dimreg} Before one-loop amplitudes with divergent momentum integrals can be renormalized, they have to be regularized. Regularization defines integrals, parametrizes their divergences, and separates their finite parts. Dimensional regularization \cite{thooftdim} defines integrals by analytically continuing them from 4-dimensional to $n$-dimensional space-time. The computation of integrals in $n$ dimensions typically yields (see Eq. \ref{lambda}) \begin{eqnarray} \Lambda_{V}(0) & = & -\frac{\alpha_{137}}{4\pi}\left( \frac{2}{\epsilon} + finite \;\;constants\right), \;\;\;\;\epsilon = 4 - n, \nonumber \end{eqnarray} that is, the divergence is parametrized as a simple pole at $n = 4$. In our calculations, we used the following momentum integrals in $n$ dimensions: \begin{eqnarray} \label{momint} I_{0}(l) & = & \int \frac{d^{n}k}{(2\pi)^{n}} \frac{1}{\big(k^{2} + 2k\cdot s + t\big)^{l}} \; = \; \frac{i (- \pi)^{n/2}}{(2\pi)^{n}} \frac{\Gamma(l- n/2)}{\Gamma(l)} \frac{1}{(t-s^{2})^{(l- n/2)}}\nonumber \\ & \equiv & N(l) \frac{1}{(t-s^{2})^{(l- n/2)}}, \\ I_{\mu}(l) & = & \int \frac{d^{n}k}{(2\pi)^{n}} \frac{k_{\mu}}{\big(k^{2} + 2k\cdot s + t\big)^{l}} \; = \; - s_{\mu} I_{0}(l), \\ I_{\mu\nu}(l) & = & \int \frac{d^{n}k}{(2\pi)^{n}} \frac{k_{\mu}k_{\nu}}{\big(k^{2} + 2k\cdot s + t\big)^{l}} \; = \; I_{0}(l) \Big[ s_{\mu}s_{\nu} + \frac{1}{2}g_{\mu\nu}(t-s^{2}) \nonumber \\ & \times & \frac{1}{l - n/2 -1}\Big]. 
\end{eqnarray} Specifically, for $l = 2,3$ we have \begin{eqnarray} \label{endva} N(2) & = & \frac{i (- \pi)^{\frac{n}{2}}}{(2\pi)^{n}} \frac{\Gamma(2- \frac{n}{2})}{\Gamma(2)} \; = \; \frac{i (- 1)^{\frac{n}{2}}}{(4\pi)^{2}} \Big(1+\frac{\epsilon}{2}\ln 4\pi\Big) \Big(\frac{2}{\epsilon} - \gamma\Big), \\ N(3) & = & \frac{i (- \pi)^{\frac{n}{2}}}{(2\pi)^{n}} \frac{\Gamma(3- \frac{n}{2})}{\Gamma(3)} \; = \; \frac{i (- 1)^{\frac{n}{2}}}{2(4\pi)^{2}} \Big(1+\frac{\epsilon}{2}\ln 4\pi\Big) \Big(1- \frac{\epsilon}{2}\gamma\Big), \end{eqnarray} where, up to terms of order $\epsilon$, we used \begin{eqnarray} \Gamma\Big(2-\frac{n}{2}\Big) = \Gamma\Big(2- \frac{4 - \epsilon}{2}\Big) = \Gamma\Big(\frac{\epsilon}{2}\Big) = \frac{2}{\epsilon} - \gamma, \nonumber \\ \Gamma\Big(3-\frac{n}{2}\Big) = \Gamma\Big(1+ \frac{\epsilon}{2}\Big) = \frac{\epsilon}{2}\Gamma\Big(\frac{\epsilon}{2}\Big) = 1- \frac{\epsilon}{2}\gamma, \end{eqnarray} where $\gamma \doteq 0.5772$ is the Euler-Mascheroni constant. In $n = 4 - \epsilon$ dimensions, $\alpha$ becomes a dimensionful quantity: \begin{eqnarray} \label{alfamu} \alpha = \frac{e^{2}}{4\pi} \rightarrow \alpha \mu^{\epsilon} = \alpha \Big(1 + \frac{\epsilon}{2} \ln \mu^{2} + ... \Big), \end{eqnarray} where $\mu$ is an arbitrary mass scale. The combination of Eq. \ref{endva} and Eq. \ref{alfamu} yields \begin{eqnarray} \label{alfamu1} \alpha \mu^{\epsilon} \Big(1+\frac{\epsilon}{2}\ln 4\pi\Big) \Big(\frac{2}{\epsilon} - \gamma\Big) & = & \alpha \Big(\frac{2}{\epsilon} - \gamma + \ln 4\pi + \ln \mu^{2} \Big) \;\;=\;\; \alpha \Delta_{\mu}. \end{eqnarray} It is this $\Delta_{\mu}$, rather than $\frac{2}{\epsilon}$ alone, that is usually regarded as the parameterization of the divergence, since the factors $\gamma$, $\ln 4\pi$ and $\ln \mu^{2}$ are always present along with $\frac{2}{\epsilon}$ and they all cancel together in renormalized quantities.
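The expansions of $\Gamma(\epsilon/2)$ and $\Gamma(1+\epsilon/2)$ quoted above are truncated at order $\epsilon$, and their accuracy is easy to check numerically. A minimal sketch in Python (the sampled values of $\epsilon$ are illustrative):

```python
import math

# Euler-Mascheroni constant, gamma ~ 0.5772 as quoted in the text
EULER_GAMMA = 0.5772156649015329

def gamma_expansion_errors(eps):
    """Compare Gamma(eps/2) and Gamma(1 + eps/2) with the truncated
    expansions 2/eps - gamma and 1 - (eps/2)*gamma used in N(2), N(3)."""
    exact_pole = math.gamma(eps / 2)          # Gamma(2 - n/2) with n = 4 - eps
    approx_pole = 2 / eps - EULER_GAMMA
    exact_one = math.gamma(1 + eps / 2)       # Gamma(3 - n/2)
    approx_one = 1 - (eps / 2) * EULER_GAMMA
    return abs(exact_pole - approx_pole), abs(exact_one - approx_one)

# The neglected terms are O(eps), so the error shrinks linearly with eps.
for eps in (1e-2, 1e-3, 1e-4):
    err_pole, err_one = gamma_expansion_errors(eps)
    print(f"eps={eps:g}: pole error {err_pole:.2e}, Gamma(1+eps/2) error {err_one:.2e}")
```

The error of the $\Gamma(\epsilon/2)$ expansion decreases linearly in $\epsilon$, confirming that the dropped terms are of order $\epsilon$ and are harmless once the limit $n \rightarrow 4$ is taken.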
We also use other variants of the $\Delta$ symbol: \begin{eqnarray} \label{deltas} \Delta & = & \frac{2}{4-n}-\gamma-\ln \pi \;=\;\frac{2}{\epsilon} -\gamma-\ln \pi, \nonumber \\ \Delta_{\mu} & = & \frac{2}{\epsilon}-\gamma + \ln 4\pi + \ln \mu^{2}, \nonumber \\ \Delta_{m} & = & \frac{2}{\epsilon}-\gamma + \ln 4\pi - \ln \frac{m^{2}}{\mu^{2}}. \end{eqnarray} To cast momentum integrals into the form of Eq. \ref{momint}, the following Feynman parameterization is used: \begin{eqnarray} \frac{1}{a_{0}a_{1}a_{2}...a_{n}} & = & \Gamma (n+1) \int_{0}^{1} dx_{1} \int_{0}^{x_{1}} dx_{2} ... \int_{0}^{x_{n-1}} dx_{n} \nonumber \\ & \times & \frac{1}{[a_{0}+(a_{1}-a_{0})x_{1}+...+(a_{n}-a_{n-1})x_{n}]^{n+1}}. \end{eqnarray} Higher powers of $a_{i}$ are obtained by differentiating with respect to the corresponding parameter. In this work, these specific expressions were used: \begin{eqnarray} \frac{1}{ab} & = & \int_{0}^{1} dx \frac{1}{[a+(b-a)x]^{2}}, \\ \frac{1}{abc} & = & 2 \int_{0}^{1}dx \int_{0}^{x} dy \frac{1}{[a+(b-a)x+(c-b)y]^{3}}, \\ \label{pabc} \frac{1}{abc} & = & 2 \int_{0}^{1}dx \int_{0}^{1} dy \frac{y}{\big[axy + by(1-x) + c(1-y)\big]^{3}}, \\ \frac{1}{abc^{2}} & = & 6 \int_{0}^{1}dx \int_{0}^{x} dy \frac{y}{[a+(b-a)x+(c-b)y]^{4}}.
\end{eqnarray} In $n$ dimensions, the algebra of the Dirac matrices described in Appendix A is generalized as follows: \begin{eqnarray} \label{nalgebra} \gamma^{\mu}\gamma^{\nu} + \gamma^{\nu}\gamma^{\mu} & = & 2 g^{\mu \nu}, \nonumber \\ g^{\mu \nu} g_{\mu \nu} & = & n, \nonumber \\ \gamma_{\mu}\gamma^{\mu} & = & n, \nonumber \\ Tr(\gamma^{\mu} \gamma^{\nu}) & = & n g^{\mu \nu}, \nonumber \\ Tr(\gamma^{\mu}\gamma^{\nu}\gamma^{\lambda}\gamma^{\sigma}) & = & n \left( g^{\mu\nu}g^{\lambda\sigma} - g^{\mu\lambda}g^{\nu\sigma} + g^{\mu\sigma}g^{\nu\lambda}\right), \nonumber \\ \gamma_{\mu}\gamma^{\nu}\gamma^{\mu} & = & (2 - n) \gamma^{\nu}, \nonumber \\ \gamma_{\mu}\gamma^{\nu}\gamma^{\lambda}\gamma^{\mu} & = & 4 g^{\nu\lambda} + (n - 4) \gamma^{\nu}\gamma^{\lambda}, \nonumber \\ \gamma_{\mu}\gamma^{\nu}\gamma^{\lambda}\gamma^{\sigma}\gamma^{\mu} & = & - 2 \gamma^{\sigma}\gamma^{\lambda}\gamma^{\nu} - (n -4) \gamma^{\nu}\gamma^{\lambda}\gamma^{\sigma}, \nonumber \\ \gamma_{5}\gamma^{\mu} + \gamma^{\mu}\gamma_{5} & = & 0. \end{eqnarray} \section{The computation of ${\cal I}_{0}, {\cal I}_{1}(m), {\cal I}_{2}(m)$ and ${\cal I}_{3}(m)$ integrals} We start with two useful integrals one often encounters during the computation of momentum integrals: \begin{eqnarray} \int_{0}^{1} dx \ln (Cx + D) & = & \ln (C + D) + \frac{D}{C} \ln \frac{C + D}{D} - 1, \nonumber \\ \nonumber \\ \int_{0}^{1} dx \big[x \ln (Ex + F)\big] & = & \frac{(E+F)^{2}}{2E^{2}} \ln (E+F) - \frac{F^{2}}{2E^{2}} \ln F - \frac{1}{2E^{2}} \frac{(E+F)^{2}}{2} \nonumber \\ & + & \frac{1}{4}\frac{F^{2}}{E^{2}} - \frac{F(E+F)}{E^{2}}\ln (E+F) + \frac{FE}{E^{2}} \nonumber \\ & + & \frac{F^{2}\ln F}{E^{2}}. 
\end{eqnarray} To compute ${\cal I}_{0}, {\cal I}_{1}(m), {\cal I}_{2}(m)$ and ${\cal I}_{3}(m)$ we note that these integrals (defined in \linebreak Sec.~\ref{secbox}) are related via \begin{equation} {\cal I}_{2}(M_{Z}) = {\cal I}_{1}(M_{Z}) + M_{N}^{2} \;\; {\cal I}_{3}(M_{Z}), \end{equation} and the integral ${\cal I}_{0}$ is obtained as a special case of ${\cal I}_{1}(m)$ for $M_{N} \rightarrow 0$. To calculate the ${\cal I}_{1}(m)$ integral, we use the parameterization of Eq. \ref{pabc}: \begin{eqnarray} \frac{1}{abc} & = & 2 \int_{0}^{1}dx \int_{0}^{1}dy \frac{y}{\big[axy + by(1-x) + c(1-y)\big]^{3}}. \nonumber \end{eqnarray} The term in the denominator is \begin{eqnarray} axy + by(1-x) + c(1-y) & = & (k^{2} - M_{W}^{2})xy + (k^{2} - m^{2})(y-xy) \nonumber \\ & + & (k^{2} - M_{N}^{2})(1-y) \nonumber \\ & = & k^{2} - M_{W}^{2}xy + m^{2}xy - m^{2}y \nonumber \\ & - & M_{N}^{2}(1-y). \end{eqnarray} Using the momentum integral Eq. \ref{momint}, we can write for ${\cal I}_{1}(m)$ \begin{eqnarray} \frac{i}{(4\pi)^{2}}{\cal I}_{1}(m) & = & \int \frac{d^{4}k}{{(2\pi)}^{4}} \frac{1}{(k^{2}-M_{N}^{2})(k^{2}-M_{W}^{2})(k^{2}-m^{2})} \nonumber \\ & = & 2 \int_{0}^{1}dx \int_{0}^{1}dy \int \frac{d^{4}k}{(2\pi)^{4}} \nonumber \\ & \times & \frac{y}{\big[k^{2} - M_{W}^{2}xy + m^{2}xy - m^{2}y - M_{N}^{2}(1-y) \big]^{3}} \nonumber \\ & = & \frac{i}{(4\pi)^{2}}\int_{0}^{1}dx \int_{0}^{1}dy \nonumber \\ & \times & \frac{y}{\big(m^{2} - M_{W}^{2}\big)yx - m^{2}y - M_{N}^{2}(1-y)}. \end{eqnarray} First we integrate over the $x$ parameter: \begin{eqnarray} {\cal I}_{1}(m) & = & \int_{0}^{1}dy\:y \int_{0}^{1}dx \frac{-1}{\big(M_{W}^{2} - m^{2}\big)yx + m^{2}y + M_{N}^{2}(1-y)} \nonumber \\ & = & \int_{0}^{1}dy\:y \int_{0}^{1}dx \frac{-1}{Ax+B} \; = \; - \int_{0}^{1}dy\:y \frac{1}{A}\big[\ln (A+B) - \ln B\big] \nonumber \\ & = & \int_{0}^{1}dy\:y \frac{1}{\big(m^{2} - M_{W}^{2}\big)y}\big[\ln (M_{W}^{2}y + M_{N}^{2} \; - \; M_{N}^{2}y) \big. \nonumber \\ & - & \big.
\ln (m^{2}y + M_{N}^{2} - M_{N}^{2}y)\big], \end{eqnarray} and then the integration over $y$: \begin{eqnarray} {\cal I}_{1}(m) & = & \int_{0}^{1}dy \frac{1}{\big(m^{2} - M_{W}^{2}\big)}\big[\ln (M_{W}^{2}y + M_{N}^{2} - M_{N}^{2}y) - \ln (m^{2}y + M_{N}^{2} - M_{N}^{2}y)\big] \nonumber \\ & = & \frac{1}{\big(m^{2} - M_{W}^{2}\big)}\Big[\int_{0}^{1}dy \ln (Cy + D) - \int_{0}^{1}dy \ln (Ey + F)\Big] \nonumber \\ & = & \frac{1}{\big(m^{2} - M_{W}^{2}\big)}\Big[\ln (C + D)+ \frac{D}{C}\ln \frac{C+D}{D} - 1 - \ln (E + F) \Big. \nonumber \\ & - & \Big. \frac{F}{E}\ln \frac{E+F}{F} + 1 \Big] \nonumber \\ & = & \frac{1}{\big(m^{2} - M_{W}^{2}\big)} \Big[\ln \frac{M_{W}^{2}}{m^{2}} + \frac{M_{N}^{2}}{M_{W}^{2} - M_{N}^{2}} \ln \frac{M_{W}^{2}}{M_{N}^{2}} - \frac{M_{N}^{2}}{m^{2} - M_{N}^{2}} \ln \frac{m^{2}}{M_{N}^{2}} \Big]. \end{eqnarray} The ${\cal I}_{3}(m)$ integral is found using similar steps: \begin{eqnarray} {\cal I}_{3}(m) & = & \frac{{(4\pi)}^{2}}{i}\int \frac{d^{4}k}{{(2\pi)}^{4}} \frac{1}{(k^{2}-M_{N}^{2})^{2}(k^{2}-M_{W}^{2})(k^{2}-m^{2})} \nonumber \\ & = & 6 \int \frac{d^{4}k}{{(2\pi)}^{4}} \int_{0}^{1}dx \int_{0}^{x}dy \frac{y}{\big[k^{2}-M_{W}^{2}+(M_{W}^{2}-m^{2})x+(m^{2}-M_{N}^{2})y\big]^{4}} \nonumber \\ & = & \int_{0}^{1}dx \int_{0}^{x}dy \frac{y}{\big[(M_{W}^{2}-m^{2})x -M_{W}^{2} +(m^{2}-M_{N}^{2})y\big]^{2}} \nonumber \\ & = & \frac{1}{m^{2}-M_{W}^{2}} \biggl\{ \frac{1}{M_{N}^{2}-M_{W}^{2}} + \frac{M_{W}^{2} \ln \frac{M_{W}^{2}}{M_{N}^{2}}}{{(M_{N}^{2}-M_{W}^{2})}^{2}} -\frac{1}{M_{N}^{2}-m^{2}} \biggr. \nonumber \\ & - & \biggl. \frac{m^{2} \ln \frac{m^{2}}{M_{N}^{2}}}{{(M_{N}^{2}-m^{2})}^{2}} \biggr\}. 
\end{eqnarray} \chapter{Renormalization constants, unrenormalized self-energies and 't Hooft scalar integrals} \label{Ecko} \section{Renormalization constants} \label{recons} It is convenient to define the following linear combinations of the renormalization constants ($i=1,2$): \begin{eqnarray} \delta Z_{i}^{\gamma} & = & s_{W}^{2}\;\delta Z_{i}^{W}\;+\;c_{W}^{2}\;\delta Z_{i}^{B}, \nonumber \\ \delta Z_{i}^{Z} & = & c_{W}^{2}\;\delta Z_{i}^{W}\;+\;s_{W}^{2}\;\delta Z_{i}^{B}, \nonumber \\ \delta Z_{i}^{\gamma Z} & = & \frac{c_{W}s_{W}}{c_{W}^{2}-s_{W}^{2}} \left(\delta Z_{i}^{Z}\;-\;\delta Z_{i}^{\gamma}\right), \nonumber \\ \delta Z_{V}^{f} & = & (\delta Z_{L}^{f} + \delta Z_{R}^{f})/2\;,\;\;\;\;\;\;\; \delta Z_{A}^{f}\; = \; (\delta Z_{L}^{f} - \delta Z_{R}^{f})/2. \end{eqnarray} The renormalized self-energies are obtained from unrenormalized ones by adding appropriate counterterms: \begin{eqnarray} \label{rselfe} \hat{\Sigma}^{\gamma}\left(p^{2}\right) & = & \Sigma^{\gamma}\left(p^{2}\right) \;+\;\delta Z_{2}^{\gamma}\;p^{2}, \nonumber \\ \hat{\Sigma}^{Z}\left(p^{2}\right) & = & \Sigma^{Z}\left(p^{2}\right) \;-\;\delta M_{Z}^{2}\;+\;\delta Z_{2}^{Z}\;\left(p^{2}-M_{Z}^{2}\right), \nonumber \\ \hat{\Sigma}^{W}\left(p^{2}\right) & = & \Sigma^{W}\left(p^{2}\right) \;-\;\delta M_{W}^{2}\;+\;\delta Z_{2}^{W}\;\left(p^{2}-M_{W}^{2}\right), \nonumber \\ \hat{\Sigma}^{\gamma Z}\left(p^{2}\right) & = & \Sigma^{\gamma Z} \left(p^{2}\right) \;-\;\delta Z_{2}^{\gamma Z}p^{2}\;+\;\left(\delta Z_{1}^{\gamma Z}\;- \;\delta Z_{2}^{\gamma Z}\right)M_{Z}^{2}, \nonumber \\ \hat{\Sigma}^{f}\left(p\right) & = & {\not p}\left(\Sigma_{V}^{f}(p^{2}) + \delta Z_{V}^{f}\right)\;+\;{\not p}\gamma_{5}\left(\Sigma_{A}^{f}(p^{2}) - \delta Z_{A}^{f}\right) \nonumber \\ & + & m_{f}\left(\Sigma_{S}^{f}(p^{2}) - \delta Z_{V}^{f} - \frac{\delta m_{f}}{m_{f}}\right) , \end{eqnarray} where $\Sigma_{V}^{f}, \Sigma_{A}^{f}$ and $\Sigma_{S}^{f}$ are the vector, axial-vector and scalar parts of the
fermion self-energy, respectively (see Eq. \ref{hasthe}). The renormalized electromagnetic, weak neutral and charged current vertices are given by \begin{eqnarray} \label{rvertexa} \hat{\Gamma}^{\gamma ff} & = & \Gamma^{\gamma ff} \;+\; ie_{137}\gamma_{\mu} \left(\delta Z_{1}^{\gamma} - \delta Z_{2}^{\gamma} + \delta Z_{V}^{f} - \delta Z_{A}^{f}\gamma_{5}\right) \nonumber \\ & - & ie_{137}\gamma_{\mu}(v_{f}-a_{f}\gamma_{5})\left(\delta Z_{1}^{\gamma Z} - \delta Z_{2}^{\gamma Z}\right), \nonumber \\ \hat{\Gamma}^{Zff} & = & \Gamma^{Zff} \;+\; ie_{137}\gamma_{\mu} (v_{f}-a_{f}\gamma_{5})\left(\delta Z_{1}^{Z} - \delta Z_{2}^{Z}\right)\;-\;ie_{137}\gamma_{\mu}\left(\delta Z_{1}^{\gamma Z} - \delta Z_{2}^{\gamma Z}\right) \nonumber \\ & & + ie_{137}\gamma_{\mu}\left(v_{f}\delta Z_{V}^{f} + a_{f}\delta Z_{A}^{f}\right)\;-\;ie_{137}\gamma_{\mu}\gamma_{5}\left(v_{f}\delta Z_{A}^{f} + a_{f}\delta Z_{V}^{f}\right), \nonumber \\ \hat{\Gamma}^{Wl\nu} & = & \Gamma^{Wl\nu} \;+\; i\frac{e_{137}}{2\sqrt{2}s_{W}} \gamma_{\mu}(1-\gamma_{5})\left(1+\delta Z_{1}^{W}-\delta Z_{2}^{W}+\delta Z_{L}^{f}\right). \end{eqnarray} The renormalization constants are obtained from the OS renormalization conditions Eqs. 
\ref{RC1}-\ref{RC2} (we show only constants needed for our calculations): \begin{eqnarray} \label{rconstants} \delta M_{W}^{2} & = & Re\;\Sigma^{W}(M_{W}^{2}), \nonumber \\ \delta M_{Z}^{2} & = & Re\;\Sigma^{Z}(M_{Z}^{2}), \nonumber \\ \delta Z_{2}^{\gamma} & = & - \frac{\partial \Sigma^{\gamma}}{\partial p^{2}}(0), \nonumber \\ \delta Z_{1}^{\gamma} & = & - \frac{\partial \Sigma^{\gamma}}{\partial p^{2}}(0)\;-\;\frac{s_{W}}{c_{W}}\frac{\Sigma^{\gamma Z}(0)}{M_{Z}^{2}}, \nonumber \\ \delta Z_{2}^{Z} & = & - \frac{\partial \Sigma^{\gamma}}{\partial p^{2}}(0)\;-\;2\frac{c_{W}^{2}-s_{W}^{2}}{s_{W}c_{W}}\frac{\Sigma^{\gamma Z}(0)}{M_{Z}^{2}}\;+\;\frac{c_{W}^{2}-s_{W}^{2}}{s_{W}^{2}} \left(\frac{\delta M_{Z}^{2}}{M_{Z}^{2}}-\frac{\delta M_{W}^{2}}{M_{W}^{2}} \right), \nonumber \\ \delta Z_{1}^{Z} & = & - \frac{\partial \Sigma^{\gamma}}{\partial p^{2}}(0)\;-\;\frac{3c_{W}^{2}-2s_{W}^{2}}{s_{W}c_{W}}\frac{\Sigma^{\gamma Z}(0)}{M_{Z}^{2}}\;+\;\frac{c_{W}^{2}-s_{W}^{2}}{s_{W}^{2}} \left(\frac{\delta M_{Z}^{2}}{M_{Z}^{2}}-\frac{\delta M_{W}^{2}}{M_{W}^{2}} \right), \nonumber \\ \delta Z_{2}^{W} & = & - \frac{\partial \Sigma^{\gamma}}{\partial p^{2}}(0)\;-\;2\frac{c_{W}}{s_{W}}\frac{\Sigma^{\gamma Z}(0)}{M_{Z}^{2}}\;+\;\frac{c_{W}^{2}}{s_{W}^{2}}\left(\frac{\delta M_{Z}^{2}}{M_{Z}^{2}}-\frac{\delta M_{W}^{2}}{M_{W}^{2}} \right), \nonumber \\ \delta Z_{V}^{f} & = & - \Sigma_{V}^{f}(m_{f}^{2}) - m_{f}^{2} \big[2 \Sigma^{f'}_{V}(m_{f}^{2}) + 2 \Sigma^{f'}_{S}(m_{f}^{2})\big],\;\;\;\;\;\Sigma^{f'}_{V,S}(m_{f}^{2})\;=\; \frac{\partial \Sigma_{V,S}^{f}}{\partial p^{2}}(m_{f}^{2}), \nonumber \\ \delta Z_{A}^{f} & = & + \Sigma_{A}^{f}(m_{f}^{2}). \end{eqnarray} \section{Unrenormalized self-energies in the SM} Below we present complete SM gauge boson self-energies corresponding to Figs. \ref{pfd} - \ref{wfd}. They were calculated in Ref. \cite{key6}. For the definition of the function $F$ see Sec. \ref{scalari}; for the definition of the $\Delta$ factors see Eq. 
\ref{deltas}; $s = p^{2}$, where $p$ is the 4-momentum of the gauge boson; $w=M_{W}^{2}, z=M_{Z}^{2}, h=M_{H}^{2}$. \begin{eqnarray} \label{agama} \Sigma^{\gamma}(s) & = & \frac{\alpha}{4\pi}\Big\{\frac{4}{3}\sum_{f} Q_{f}^{2}\Big[ s\Delta_{f} + (s+2m_{f}^{2})F(p;m_{f},m_{f}) - \frac{s}{3}\Big]\Big. \nonumber \\ & - & \Big. 3 s \Delta_{W} - (3s+4w)F(p;M_{W},M_{W})\Big\}, \\ \nonumber \\ \label{asedem} \Sigma^{\gamma Z}(s) & = & \frac{\alpha}{4\pi}\Big\{-\frac{4}{3}\sum_{f} Q_{f} v_{f}\Big[ s\Delta_{f} + (s+2m_{f}^{2})F(p;m_{f},m_{f}) - \frac{s}{3}\Big]\Big. \nonumber \\ & + & \frac{1}{c_{W}s_{W}}\Big[\Big(3 c_{W}^{2} + \frac{1}{6}\Big)s+2w \Big] \Delta_{W} \nonumber \\ & + & \Big.\frac{1}{c_{W}s_{W}}\Big[\Big(3 c_{W}^{2} + \frac{1}{6}\Big)s+ \Big(4c_{W}^{2}+\frac{4}{3}\Big)w\Big] F(p;M_{W},M_{W}) + \frac{s}{9c_{W}s_{W}}\Big\}, \\ \nonumber \\ \label{azet} \Sigma^{Z}(s) & = & \frac{\alpha}{4\pi}\Big\{\frac{4}{3}\sum_{l=e,\mu,\tau} 2a_{l}^{2}s\Big(\Delta_{l} + \frac{5}{3} - \ln\Big(-\frac{s}{m_{l}^{2}} -i\epsilon\Big)\Big)\Big. \nonumber \\ & + & \frac{4}{3}\sum_{f\neq\nu}\Big[(v_{f}^{2}+a_{f}^{2})\Big(s\Delta_{f}+ (s+2m_{f}^{2})F(p;m_{f},m_{f})-\frac{s}{3}\Big)\Big. \nonumber \\ & - & \Big.\frac{3}{8c_{W}^{2}s_{W}^{2}}m_{f}^{2} (\Delta_{f}+F(p;m_{f},m_{f}))\Big] \nonumber \\ & + & \Big[\Big(3-\frac{19}{6s_{W}^{2}}+\frac{1}{6c_{W}^{2}}\Big)s+ \Big(4+\frac{1}{c_{W}^{2}}-\frac{1}{s_{W}^{2}}\Big)M_{Z}^{2}\Big] \Delta_{W} \nonumber \\ & + & \Big[\left(-c_{W}^{4}(40s+80w) + (c_{W}^{2}-s_{W}^{2})^{2}(8w+s) + 12w\right) F(p;M_{W},M_{W}) \Big. 
\nonumber \\ & + & \Big(10z-2h+s+\frac{(h-z)^{2}}{s}\Big)F(p;M_{H},M_{Z})- 2h\ln\frac{h}{w} - 2z\ln\frac{z}{w} \nonumber \\ & + & (10z-2h+s)\Big(1-\frac{h+z}{h-z}\ln\frac{M_{H}}{M_{Z}} - \ln \frac{M_{H}M_{Z}}{w} \Big) \nonumber \\ & + & \Big.\Big.\frac{2}{3}s\Big(1 + (c_{W}^{2}-s_{W}^{2})^{2} - 4c_{W}^{2}\Big)\Big]\frac{1}{12c_{W}^{2}s_{W}^{2}}\Big\}, \\ \nonumber \\ \label{adablju} \Sigma^{W}(s) & = & \frac{\alpha}{4\pi} \frac{1}{s_{W}^{2}}\Big\{\frac{1}{3} \sum_{l=e,\mu,\tau}\Big[\Big(s-\frac{3}{2}m_{l}^{2}\Big)\Delta_{l}\Big.\Big. \nonumber \\ & + & \Big.\Big(s-\frac{m_{l}^{2}}{2} - \frac{m_{l}^{4}}{2s}\Big)F(p;0,m_{l}) + \frac{2}{3}s - \frac{m_{l}^{2}}{2}\Big] \nonumber \\ & + & \sum_{q-doublets} \frac{1}{3} \Big[ \frac{\Delta_{+}}{2}\Big(s - \frac{5}{2}m_{+}^{2} + \frac{m_{-}^{2}}{2}\Big) + \frac{\Delta_{-}}{2} \Big(s - \frac{5}{2}m_{-}^{2} + \frac{m_{+}^{2}}{2}\Big)\Big. \nonumber \\ & + & \Big(s- \frac{m_{+}^{2}+m_{-}^{2}}{2} - \frac{(m_{+}^{2}-m_{-}^{2})^{2}}{2s}\Big) F(p;m_{+},m_{-}) \nonumber \\ & + & \Big. \Big(s - \frac{m_{+}^{2}+m_{-}^{2}}{2}\Big)\Big(1 - \frac{m_{+}^{2}+m_{-}^{2}}{m_{+}^{2}-m_{-}^{2}} \ln\frac{m_{+}}{m_{-}}\Big) -\frac{s}{3}\Big] \nonumber \\ & - & \Big[ \frac{19}{2} s + 3w\Big(1 - \frac{s_{W}^{2}}{c_{W}^{2}}\Big)\Big] \frac{\Delta_{W}}{3} \nonumber \\ & + & \Big[ s_{W}^{4} z -\frac{c_{W}^{2}}{3}\Big(7z + 7w +10s - 2\frac{(z-w)^{2}}{s}\Big)\Big.
\nonumber \\ & - & \Big.\frac{1}{6}\Big( w + z -\frac{s}{2} - \frac{(z-w)^{2}}{2s}\Big)\Big] F(p;M_{Z},M_{W}) \nonumber \\ & + & \frac{s_{W}^{2}}{3}\Big(-4w-10s + \frac{2w^{2}}{s}\Big) F(p;0,M_{W}) \nonumber \\ & + & \frac{1}{6}\Big(5w -h +\frac{s}{2} + \frac{(h-w)^{2}}{2s} \Big) F(p;M_{H},M_{W}) \nonumber \\ & + & \Big[\frac{c_{W}^{2}}{3}\Big(7z+7w+10s-4(z-w)\Big) - s_{W}^{4}z + \frac{1}{6}\Big(2w-\frac{s}{2}\Big)\Big]\frac{3z}{z-w} \ln\frac{z}{w} \nonumber \\ & - & \Big(\frac{2}{3}w+\frac{s}{12}\Big)\frac{h}{h-w}\ln\frac{h}{w} - \frac{c_{W}^{2}}{3}\Big(7z +7w+\frac{32}{3}s\Big)+s_{W}^{4}z \nonumber \\ & + & \Big.\frac{1}{6}\Big(\frac{5}{3}s+4w-z-h\Big) - \frac{s_{W}^{2}}{3}\Big(4w + \frac{32}{3}s\Big)\Big\}. \end{eqnarray} \section{'t Hooft scalar integrals} \label{scalari} Here we define various $C, B, A$ and $F$ functions and reduce them to scalar integrals $C_{0}$, $B_{0}$ and $A_{0}$. For the calculation of $C_{0}$ and $B_{0}$ we refer the reader to the original work of 't Hooft and Veltman, Ref. \cite{thooft}. The $C_{0}$ function is defined as (with finite parts indicated by the superscript): \begin{eqnarray} \label{cnula} C_{0}(m_{1},m_{2},m_{3}) & \equiv & C_{0}(p_{1},p_{2};m_{1},m_{2},m_{3}) \; \equiv \; C_{0}^{fin}(m_{1},m_{2},m_{3}) \nonumber \\ & = & -\int \frac{d^{n}q}{i\pi^{2}} \frac{1}{D}, \end{eqnarray} where \begin{eqnarray} D & = & (q^{2}-m_{1}^{2}+i\epsilon)\;\big[(q-p_{1})^{2}-m_{2}^{2} +i\epsilon\big]\; \big[(q-p_{1}-p_{2})^{2}-m_{3}^{2}+i\epsilon\big]. \end{eqnarray} The functions $C_{ij}$ are defined by: \begin{eqnarray} C_{\mu} & = & -\int \frac{d^{n}q}{i\pi^{2}}\frac{q_{\mu}}{D} = -p_{1\mu}C_{11} - p_{2\mu}C_{12}, \nonumber \\ \nonumber \\ C_{\mu\nu} & = & -\int \frac{d^{n}q}{i\pi^{2}} \frac{q_{\mu}q_{\nu}}{D} \nonumber \\ & = & p_{1\mu}p_{1\nu}C_{21} + p_{2\mu}p_{2\nu}C_{22} + (p_{1\mu}p_{2\nu} + p_{1\nu}p_{2\mu})C_{23} - g_{\mu\nu}C_{24}. 
\end{eqnarray} The functions $C_{11}, C_{24}, C_{23}$ are reduced (in the limit $p_{1}^{2} = p_{2}^{2} = m_{l}^{2} \ll (p_{1}+p_{2})^{2}=M_{Z}^{2}$, applicable for our considerations of the leptonic decays of the Z boson) to: \begin{eqnarray} C_{11}(m_{1},m_{2},m_{3}) & = & C_{11}^{fin}(m_{1},m_{2},m_{3}) = -\frac{1}{M_{Z}^{2}}[f_{2}C_{0}(m_{1},m_{2},m_{3}) \nonumber \\ & - & B_{0}^{fin}(p_{1}+p_{2};m_{1},m_{3}) + B_{0}^{fin}(p_{1};m_{1},m_{2})], \nonumber \\ \nonumber \\ C_{24}(m_{1},m_{2},m_{3}) & = & \frac{1}{4}\Delta + C_{24}^{fin}(m_{1},m_{2},m_{3}), \nonumber \\ C_{24}^{fin}(m_{1},m_{2},m_{3}) & = & [m_{1}^{2}C_{0}(m_{1},m_{2},m_{3}) + f_{1}C_{11}(m_{1},m_{2},m_{3}) \nonumber \\ & + & B_{1}^{fin}(p_{1}+p_{2};m_{1},m_{3})](-\frac{1}{2}) + \frac{1}{4}, \nonumber \\ \nonumber \\ C_{23}(m_{1},m_{2},m_{3}) & = & C_{23}^{fin}(m_{1},m_{2},m_{3}) = -\frac{1}{M_{Z}^{2}} [B_{1}^{fin}(p_{1}+p_{2};m_{1},m_{3}) \nonumber \\ & + & B_{0}^{fin}(p_{2};m_{2},m_{3}) + f_{1}C_{11}(m_{1},m_{2},m_{3})] \nonumber \\ & + & C_{24}^{fin}(m_{1},m_{2},m_{3}) \frac{2}{M_{Z}^{2}}, \end{eqnarray} where \begin{eqnarray} f_{2} & = & M_{Z}^{2}+m_{2}^{2}-m_{3}^{2}, \nonumber \\ f_{1} & = & m_{1}^{2}-m_{2}^{2}. 
\end{eqnarray} The functions $B_{0}, B_{1}$ are defined as: \begin{eqnarray} \label{abfunc} B_{0}(p;m_{1},m_{2}) & = & \int \frac{d^{n}q}{i\pi^{2}} \frac{1}{(q^{2}-m_{1}^{2}+i\epsilon)\big[(q-p)^{2}-m_{2}^{2}+i\epsilon\big]} \nonumber \\ & = & \Delta + B_{0}^{fin}(p;m_{1},m_{2}), \nonumber \\ B_{0}^{fin}(p;m_{1},m_{2}) & = & -\int_{0}^{1} dx\; \ln \big[p^{2}x^{2} + m_{1}^{2} - (p^{2}+m_{1}^{2}-m_{2}^{2})x\big], \nonumber \\ \nonumber \\ B_{\mu}(p;m_{1},m_{2}) & = & \int \frac{d^{n}q}{i\pi^{2}} \frac{q_{\mu}}{(q^{2}-m_{1}^{2}+i\epsilon)\big[(q-p)^{2}-m_{2}^{2}+i\epsilon \big]} \;\;=\;\;-p_{\mu}B_{1}, \nonumber \\ B_{1}(p;m_{1},m_{2}) & = & -\frac{1}{2}\Delta + B_{1}^{fin}(p;m_{1},m_{2}), \nonumber \\ B_{1}^{fin}(p;m_{1},m_{2}) & = & \int_{0}^{1} dx \; \ln \big[p^{2}x^{2} + m_{1}^{2}-(p^{2}+m_{1}^{2}-m_{2}^{2})x\big] x, \nonumber \\ \nonumber \\ B_{\mu\nu}(p;m_{1},m_{2}) & = & \int \frac{d^{n}q}{i\pi^{2}} \frac{q_{\mu}q_{\nu}}{(q^{2}-m_{1}^{2}+i\epsilon)\big[(q-p)^{2}-m_{2}^{2} +i\epsilon\big]} \nonumber \\ & = & p_{\mu}p_{\nu}B_{21} - g_{\mu\nu}B_{22}. \end{eqnarray} The functions $B_{1}, B_{21}$ and $B_{22}$ can be reduced to \begin{eqnarray} B_{1} & = & \frac{1}{2p^{2}}\big\{ -A_{0}(m_{1})+A_{0}(m_{2}) - (p^{2} + m_{1}^{2} - m_{2}^{2})B_{0}\big\}, \nonumber \\ B_{21} & = & \frac{1}{3p^{2}}\big\{-A_{0}(m_{2})-2(p^{2}+m_{1}^{2}- m_{2}^{2})B_{1} -m_{1}^{2}B_{0}-1/2(m_{1}^{2}+m_{2}^{2}-p^{2}/3)\big\}, \nonumber \\ B_{22} & = & \frac{1}{6}\big\{+A_{0}(m_{2})-(p^{2}+m_{1}^{2}-m_{2}^{2})B_{1} -2m_{1}^{2}B_{0}-(m_{1}^{2}+m_{2}^{2}-p^{2}/3)\big\}. \end{eqnarray} The functions $A$ are defined as \begin{eqnarray} A_{0}(m) & = & - \int \frac{d^{n}q}{i\pi^{2}}\frac{1}{(q-p)^{2}-m^{2}} \;\;\; = \;\;\; - \int \frac{d^{n}q}{i\pi^{2}}\frac{1}{q^{2}-m^{2}} \nonumber \\ & = & - m^{2}(\Delta - \ln m^{2} + 1), \nonumber \\ \nonumber \\ A_{\mu}(p,m) & = & \int \frac{d^{n}q}{i\pi^{2}}\frac{q_{\mu}}{(q-p)^{2}-m^{2}} \;\;\; = \;\;\; - p_{\mu}A_{0}(m). 
\end{eqnarray} Relations between F and B functions: \begin{eqnarray} \label{baf} F(p;m_{1},m_{2}) & = & -1 + \frac{m_{1}^{2}+m_{2}^{2}} {m_{1}^{2}-m_{2}^{2}}\;\ln \frac{m_{1}}{m_{2}} + \ln m_{1} + \ln m_{2}+ B_{0}(p;m_{1},m_{2}), \nonumber \\ F(p;0,m) & = & -1 + \ln m^{2} + B_{0}(p;0,m), \nonumber \\ B_{1}(p;m_{1},m_{2}) & = & \frac{m_{2}^{2}-m_{1}^{2}}{2} \frac{F(p;m_{1},m_{2})}{p^{2}} - \frac{1}{2}B_{0}(p;m_{1},m_{2}). \end{eqnarray} For $s = p^{2}$ small with respect to $m_{1}^{2}, m_{2}^{2}, m^{2}$, we have \begin{eqnarray} \label{males} F(p;m_{1},m_{2}) & = & \frac{s}{{(m_{1}^{2}-m_{2}^{2})}^{2}}\bigg[ \frac{m_{1}^{2}+m_{2}^{2}}{2} - \frac{m_{1}^{2}m_{2}^{2}}{m_{1}^{2}-m_{2}^{2}} \ln \frac{m_{1}^{2}}{m_{2}^{2}}\biggr], \nonumber \\ B_{0}(p;m_{1},m_{2}) & = & 1 - \frac{m_{1}^{2}+m_{2}^{2}} {m_{1}^{2}-m_{2}^{2}}\ln \frac{m_{1}}{m_{2}} -\ln m_{1} -\ln m_{2} +O(s), \nonumber \\ B_{0}(p;0,m) & = & 1 -2\ln m +O(s), \nonumber \\ B_{1}(p;m_{1},m_{2}) & = & \frac{1}{2}\frac{1}{m_{2}^{2}-m_{1}^{2}}\bigg[ \frac{m_{1}^{2}+m_{2}^{2}}{2} - \frac{m_{1}^{2}m_{2}^{2}}{m_{1}^{2}-m_{2}^{2}} \ln \frac{m_{1}^{2}}{m_{2}^{2}}\biggr] -\frac{1}{2}B_{0}(p;m_{1},m_{2}), \nonumber \\ B_{1}(p;0,m) & = & -\frac{1}{4} + \ln m + O(s), \nonumber \\ B_{1}(p;m,0) & = & -\frac{3}{4} + \ln m + O(s). \end{eqnarray} Finally, in photon loops we encounter functions with regularized photon mass \linebreak $m_{\lambda}~\rightarrow~0$: \begin{eqnarray} \label{bphoton} \displaystyle \left. B_{0}(p;m_{\lambda},m_{l}) \right|_{p^{2}=m_{l}^{2}} & = & 2-2\ln m_{l}, \nonumber \\ \displaystyle \left. B_{1}(p;m_{\lambda},m_{l}) \right|_{p^{2}=m_{l}^{2}} & = & -\frac{1}{2} + \ln m_{l}, \nonumber \\ \displaystyle \left. B_{1}(p;m_{l},m_{\lambda}) \right|_{p^{2}=m_{l}^{2}} & = & -\frac{3}{2} + \ln m_{l}, \nonumber \\ \displaystyle \left. 
\frac{\partial B_{0}}{\partial p^{2}}(p;m_{\lambda},m_{l}) \right|_{p^{2}=m_{l}^{2}} & = & \frac{1}{m_{l}^{2}}\Big(-1 - \ln \frac{m_{\lambda}}{m_{l}}\Big), \nonumber \\ \displaystyle \left. \frac{\partial B_{1}}{\partial p^{2}}(p;m_{\lambda},m_{l}) \right|_{p^{2}=m_{l}^{2}} & = & - \frac{1}{2 m_{l}^{2}}. \end{eqnarray} \chapter{Introduction} \section{The problem of small neutrino masses} \label{tposnm} Why are macroscopic things around us massive? We know that they, as composite objects, get their mass from the mass of their constituents and from the interactions among the constituents. But if the constituents are elementary particles, where is {\it their} mass coming from? For theoretical physicists mass represents a rather unwelcome pollution of their elegant massless theories. They believe that the world is essentially symmetric and that it is the mass that makes this symmetry hard to see (the symmetry is hidden or spontaneously broken). Therefore they build massless theories possessing beautiful symmetries and then break these symmetries to give particles their masses. The exact mechanism of symmetry breaking is not known and may not be known for a long time. It is one of the most attractive problems of particle physics. Some argue that we have to wait for the ultimate solution until physics at the Planck scale ($10^{19}$ GeV) is understood; others suggest optimistically that $10^{5}$ GeV might do. The ultimate theory should predict the masses of all elementary particles in agreement with experiment. It is believed that the origin of the weak boson masses (associated with the electroweak symmetry breaking) is better understood than that of the fermion masses (flavour symmetry breaking). Concentrating on the fermions, we list the quark and lepton content of the so-called standard model of electroweak interactions in Table \ref{tparticles}, along with their masses \cite{griff}. \begin{table}[htb] \begin{center} \begin{tabular}{|l|c|} \hline 1.
& Mass [MeV] \\ \hline $u$ & 4.2 \\ $d$ & 7.5 \\ $\nu_{e}$ & 0 \\ $e$ & 0.5110 \\ \hline \end{tabular} $\;\;\;\;\;\;$ \begin{tabular}{|l|c|} \hline 2. & Mass [MeV] \\ \hline $c$ & 1 100 \\ $s$ & 150 \\ $\nu_{\mu}$ & 0 \\ $\mu$ & 105.6 \\ \hline \end{tabular} $\;\;\;\;\;\;$ \begin{tabular}{|l|c|} \hline 3. & Mass [GeV] \\ \hline $t$ & 176 \\ $b$ & 4.2 \\ $\nu_{\tau}$ & 0 \\ $\tau$ & 1.784 \\ \hline \end{tabular} \end{center} \caption{The three families of quarks ($u,c,t,d,s,b$) and leptons (neutrinos $\nu_{e}, \nu_{\mu}, \nu_{\tau}$; charged leptons $e, \mu, \tau$) and their masses according to the standard model of electroweak interactions} \label{tparticles} \end{table} These masses are not predicted by the standard model. The quark and charged lepton masses represent an experimental input, $9$ parameters out of the total $17$ present in the standard model \footnote{Of the remaining 8 parameters, the four CKM mixing matrix parameters are also closely related to the origin of the quark masses.}. The neutrinos are postulated as massless. This postulate, however, is based on the assumption that the neutrino is the only fermion without a right-handed field - an asymmetry going against the spirit of symmetric theories which work with both left-handed and right-handed fields. Therefore the massless neutrinos are not natural in the standard model and none of the twelve fermion masses is actually predicted. While the ultimate solution for mass prediction may be far from us, it is worthwhile to think of partial steps which could bring us closer to it. A popular strategy aims at the reduction of the twelve independent mass parameters. 
From grand unified theories (GUT's), which describe strong and electroweak interactions as different manifestations of a single force, there are hints that the fermion masses are related to each other by simple formulae at very high energies (GUT scale $\sim 10^{16}$ GeV) \footnote{For example in the simplest version of $SO(10)$ GUT, all masses in a given family are equal at the GUT scale.} and that it is in the transition to our low-energy world \footnote{By low energies we mean here energies up to a few hundred GeV and by very low energies those up to a few GeV.} that masses pick up different corrections and end up in the array of seemingly unrelated numbers shown in Table~\ref{tparticles}. In the past, efforts to partially explain the fermion masses in grand unified models ran into great difficulties with the problem of small neutrino masses. In the discussion above, we dismissed the way the standard model postulates massless neutrinos. What, then, is the experimental basis for the claim that the neutrino masses are small, possibly zero? Ironically, even though neutrinos are possibly the second most abundant elementary species in the Universe, we do not know exactly what their mass is. This is partly due to the smallness of this mass and partly because it is so hard to detect neutrinos (a typical cross section is $10^{-43} \mbox{ cm}^{2}$ for a $10$ MeV neutrino, more than twenty orders of magnitude below the cross section for an electron of the same energy). Experiments set the following upper limits on the masses of the electron-, muon- and tau-type neutrinos \cite{pdb}: \begin{equation} \begin{array}{lcr} m_{\nu_{e}} & < & 7.2 \;{\rm eV} \\ m_{\nu_{\mu}} & < & 270 \;{\rm keV} \\ m_{\nu_{\tau}} & < & 31 \;{\rm MeV} \end{array} \end{equation} The neutrino masses are, even at their allowed maxima, strikingly small when compared with the masses of the charged leptons and quarks within their families.
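The size of this gap can be quantified directly. The following Python sketch takes the limits above together with the charged-lepton masses of Table \ref{tparticles} (all numbers as quoted in the text) and computes how many orders of magnitude each neutrino mass limit lies below its charged partner:

```python
import math

EV_PER_MEV = 1e6

# Upper limits on the neutrino masses (in eV) and the masses of the
# charged leptons of the same family (in MeV), as quoted in the text.
families = {
    "e":   {"nu_limit_eV": 7.2,   "charged_MeV": 0.511},
    "mu":  {"nu_limit_eV": 270e3, "charged_MeV": 105.6},
    "tau": {"nu_limit_eV": 31e6,  "charged_MeV": 1784.0},
}

def orders_below(nu_limit_eV, charged_MeV):
    """Orders of magnitude separating the neutrino mass limit from the
    charged-lepton mass of the same family."""
    return math.log10(charged_MeV * EV_PER_MEV / nu_limit_eV)

for name, f in families.items():
    gap = orders_below(f["nu_limit_eV"], f["charged_MeV"])
    print(f"nu_{name}: at least {gap:.1f} orders of magnitude below m_{name}")
```

The first family shows the largest gap (close to five orders of magnitude, and even more when compared with the few-MeV quark masses), with the effect less pronounced in the second and third families.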
For example, within the first family of the standard model, $u$ and $d$ quarks have masses of a few MeV, and the electron has a mass of $0.511$ MeV (see Table \ref{tparticles}). The simple formulae relating the fermion masses, which we mentioned earlier, put different fermion masses on the same scale: it is thus conceivable in GUT's that the masses of the first family are scattered around a few MeV, but the mass of the electron-type neutrino is a deep mystery, being at least five orders of magnitude below this natural scale for the first family. The difference of five orders cannot be explained by the corrections masses pick up in running from the GUT scale to low energies. Something else must be involved. Similar behaviour, although perhaps less pronounced, can be observed among the second and third family members. \section{Solutions to the problem} \label{sttp} Before we start to discuss possible solutions to our problem, let us be more specific about the relation between GUT's and low-energy theories, such as the standard model. While the standard model \cite{key1,key2,key3,key4} has a natural scale of $10^{2}$ GeV, GUT models \cite{gut} describe physics actively operating at the GUT scale of $10^{16}$ GeV; at the same time, they should explain low-energy data at least as well as the standard model. GUT models, although more elegant than the standard model, are also more complicated: they have a richer gauge structure and symmetry breaking sector and, especially in the case of $E_{6}$, a fermion sector with more particles. A thorough discussion of neutrino masses in these models is beyond the scope of this work. Here we are mainly interested in the phenomenology of neutral heavy leptons at low energies.
Fortunately, at low energies most of the extraneous baggage associated with the complicated structure of GUT models has very little impact - it is integrated out and the remaining effective theory often represents just a minimal extension of the standard model of electroweak interactions. This is in fact no big surprise. It simply reflects a very good agreement of experimental data with the standard model and the fact that GUT's extend the standard model rather than replace it. The physics active at high energies thus decouples at the scales currently accessible to us. For example, in the model studied in this thesis, it is just two extra neutrino fields per family which enlarge the standard model. Therefore we will focus our discussion on these minimal extensions, referring to unification models as motivation for a particular simple extension of the standard model. Interestingly, basic directions in the theoretical treatment of neutrino masses can be followed with just minor extensions of the fermion sector of the standard model, leaving the gauge structure $SU(2)_{L}\times U(1)_{Y}$ and symmetry breaking sector intact. The simplest model with massive neutrinos one can immediately think of is a straightforward extension of the standard model. One can introduce the right-handed neutrino field missing in the standard model and treat the neutrino in the same way as all other fermions - as a massive Dirac particle. This means that the neutrino mass is still not predicted and the simplest model thus fails to address the problem of the small neutrino masses. A possible solution was found by Yanagida and Gell-Mann, Ramond and Slansky in the famous see-saw mechanism \cite{guts,seesaw}. A simple low-energy see-saw model has the same fermion content as the simplest model just described; however, mass terms violating the total lepton number $L$ are allowed.
This leads to the description of neutrinos as Majorana fermions rather than Dirac ones; neutral heavy leptons (NHL's) are introduced into this theory as a necessary ingredient. In Chapter 2 we describe the difference between Majorana and Dirac fermions more formally. Here it suffices to say that a Dirac neutrino is a particle like all other fermions, with left-handed and right-handed particle and antiparticle states, while a Majorana neutrino is a particle which is its own antiparticle and therefore comes in just two states, described by a left-handed and a right-handed field. The neutrino is the only particle of the fermion content of the standard model that can possibly be described as a Majorana particle because it is neutral. The see-saw mechanism comes with the following relation for the mass of a neutrino $m_{\nu}$: \begin{eqnarray} \label{ss1} m_{\nu} = D^{2}/M_{N}. \end{eqnarray} The mass $D$ is a typical family mass (say $1-2$ MeV for the first family) and $M_{N}$ is the NHL mass. This relation tells us that neutrinos become very light with respect to $D$ due to a very large mass $M_{N}$. Their mass is not predicted ($M_{N}$ is unknown); nevertheless, its smallness is understood, since the very large mass scale required for $M_{N}$ is naturally expected in GUT's. Whether neutrinos are Dirac or Majorana particles is an issue by itself. Experiments which try to address it rely on the fact that Majorana neutrinos break lepton number conservation, whereas Dirac neutrinos respect it. A clear answer would be provided by the observation of neutrinoless double beta decay, a so far unobserved process mediated only by Majorana neutrinos~\footnote{Another possibility of proving the Majorana nature of the neutrino was suggested as a result of theoretical studies of the origin of the lepton symmetry breaking. This symmetry can be broken explicitly (it is not a symmetry of the Lagrangian at any energy or temperature); it can also be broken spontaneously. 
If the lepton number $L$ is a global symmetry which is broken spontaneously, a massless, pseudoscalar Majoron arises \cite{majoron}. The discovery of this particle would prove the Majorana nature of the neutrino.} \cite{betabeta}. Theories with massive neutrinos are popular since some motivation for nonzero neutrino masses comes also from outside particle physics. Massive neutrinos could explain the mystery of missing solar neutrinos \cite{solarnu} through matter enhanced time dependent neutrino oscillations, the so-called MSW effect \cite{msw}. In cosmology, massive neutrinos could explain at least part of the dark matter puzzle. This possibility gained support after COBE data on the anisotropy of the cosmic microwave background radiation were analysed: there are hints that some $10 - 30$ \% of the dark matter is hot \cite{hot}, and neutrinos with masses between $2$ and $7$ eV make a good candidate. Cosmology further constrains (subject to plausible assumptions) the mass of each of the three neutrinos to $m_{\nu} < 25$ eV (see Ref. \cite{mohapatra}, Sec. 15.3.1). These astrophysical and cosmological indications are quite appealing, and some authors argue on their basis that an $SO(10)$ GUT with three nearly degenerate Majorana neutrino masses of about $2$ eV could be the correct unified theory \cite{hints}. The see-saw mechanism is not the only one which addresses the issue of the smallness of the neutrino mass. An alternative was worked out in the class of models wherein neutrinos remain massless while other neutral leptons can acquire a large mass. Here it is argued that there can be some global symmetry present, such as lepton number $L$, which prevents neutrinos from becoming massive. One such model could arise as a low-energy limit of a superstring-inspired $E_{6}$ GUT \cite{vallemo}. The superstring inspiration consists of yet another (left-handed) neutrino field added to the fermion content of the standard model. 
Therefore we have three neutrino fields per family: the standard left-handed one, the right-handed one required by GUT's and the one needed by superstrings. This field content and the lepton number symmetry \footnote{We note that a conserved lepton number is also present in the standard model and in the simplest model with massive neutrinos. However, in the former the conserved $L$ is a consequence of the missing right-handed neutrino field and in the latter, the two neutrino fields per family are not enough to keep neutrinos massless.} give rise to three massless neutrinos and to three massive Dirac NHL's, as described in Chapter 3. In this thesis I study phenomenological implications of this model (henceforth called 'our' model), with emphasis put on signatures of NHL's in precision data from the LEP collider at CERN (leptonic widths of the Z boson) and the Tevatron collider at Fermilab (the mass of the W boson, $M_{W}$). Both theories with massive neutrinos and our model with massless neutrinos naturally exhibit some of the properties familiar from the quark sector of the standard model. For instance, neutrino mixing may arise with an analogue of the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix for the leptonic sector. Individual lepton family numbers are expected to be violated, as are lepton universality and CP symmetry. The total lepton number $L$ has already been discussed. What is not predicted by our model, in contrast with see-saw models, are time dependent neutrino oscillations and neutrinoless double beta decay. The physics of light neutrinos is thus richer in see-saw models. On the other hand, the physics of neutral heavy leptons looks more promising in our model. The NHL contribution to low-energy observables is proportional to their mixings, and these depend on the ratio $\frac{D}{M_{N}}$. In see-saw models this ratio is normally very small since the ratio $\frac{D^{2}}{M_{N}} = m_{\nu}$ is very small. 
In our model with massless neutrinos there is no such restriction and, consequently, mixings can be relatively large. To show that the differences between theories with massive neutrinos and our model should not be taken too seriously, we note that there is a variant of our model where the lepton number symmetry is slightly broken and the neutrinos are given a small mass \cite{vallemo,garval}, and there are variants of see-saw models where the restriction from the ratio $\frac{D^{2}}{M_{N}}$ is avoided (assuming certain symmetries in the neutrino mass matrix) and possibly large mixings of NHL's arise as a result \cite{pilaftsis2,Ng1}. \section{The phenomenology of NHL's} To investigate the phenomenology of NHL's more closely, let us be more specific about how they enter low-energy observables via their mixings. For simplicity we neglect interfamily mixings in this section. The key point, formally derived and discussed in Chapters 2 and 3, is that the neutrinos taking part in weak interactions, $\nu_{l}\; (l = e, \mu, \tau)$, are no longer states of definite mass but are combinations of light (massless in our model) neutrinos $\nu^{'}$ and NHL's $N$: \begin{equation} \nu_{l} = K_{L}\:\nu^{'} \; + \; K_{H} N. \end{equation} Here $K_{L}$ and $K_{H}$ are mixing parameters related through $K_{L}^{2} + K_{H}^{2} = 1$. The chance of finding the signature of an NHL is proportional to the size of $K_{H}$, which is equal to the ratio we discussed in the previous section, $K_{H} = \frac{D}{M_{N}}$. This picture is model independent, whether NHL's come from our model or a see-saw one. In the standard model we have $K_{H} = 0, K_{L} = 1$ and the weak eigenstates $\nu_{l}$ become identical with the mass eigenstates $\nu^{'}$. The eigenstates $\nu_{l}$ interact weakly by coupling to the mediators of electroweak interactions, the W and Z bosons. Through the mixing $K_{H}$, so do NHL's. 
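As a rough numerical sketch of this picture (the mass values below are purely illustrative, not taken from any fit), one can tabulate the mixing parameters for a given Dirac mass $D$ and NHL mass $M_{N}$:

```python
import math

def mixing_parameters(d_mass_gev, m_nhl_gev):
    """One-family mixing with interfamily mixings neglected:
    K_H = D / M_N and the normalization K_L^2 + K_H^2 = 1."""
    k_h = d_mass_gev / m_nhl_gev
    k_l = math.sqrt(1.0 - k_h ** 2)
    return k_l, k_h

# Illustrative (hypothetical) values: a first-family Dirac mass
# D ~ 1 MeV and an NHL just below the Z mass, M_N ~ 90 GeV.
k_l, k_h = mixing_parameters(1.0e-3, 90.0)
print(k_h)                   # tiny mixing, ~ 1e-5
print(k_l ** 2 + k_h ** 2)   # normalization check, ~ 1.0
```

In see-saw models $K_{H}$ is tied to $m_{\nu}$ and is this small; in our model the normalization is the only constraint, so $K_{H}$ can be considerably larger.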
For example, the simplest way to discover NHL's would be to produce them directly at the Z boson factories (the LEP collider at CERN and the SLC at SLAC). However, for NHL's to be produced directly, their mass would have to be smaller than the Z boson mass, which is the energy available at the Z factories. Since there has been so far no evidence for NHL production, we conclude that either they are more massive than the Z, or their mixing is small enough to suppress their production to such a degree that they escape detection. Another method to probe mixings of NHL's is an indirect one, through the measurement of the mixing parameter $K_{L}$ of the light neutrinos in, for example, pion and beta decays. NHL's are not produced in the decays; nevertheless, their existence could be revealed if $K_{L}$ is far enough from 1 to reduce the decay rates beyond the experimental uncertainties. These so-called universality constraints give us the best limits on mixings. While direct production is sensitive to NHL masses only up to the mass of the Z boson $M_{Z}$, indirect methods are sensitive only to mixings, not masses. What if the NHL is heavier than the Z boson? Can we obtain some information on its mass? When particle physicists face the problem of probing masses of hypothetical particles larger than the energy currently available, they study the contributions of these particles in radiative corrections (loops) to some observables. Since loops are higher order terms of perturbation theory, they represent just a small correction to the lowest order (tree-level) calculation, and often their size is not greater than the experimental uncertainties. For example, only recently have precision experiments seen some evidence for the genuine electroweak loop corrections of the standard model. Despite their smallness, loop corrections in combination with precision data can impose important restrictions on the parameter space of various models. 
The major part of my thesis studies how NHL's contribute via loops to the regular leptonic decays of the Z boson, to a lepton universality breaking parameter (both studied in precision tests at LEP) and to the mass of the W boson. This is a novel approach to the study of NHL's in loops. The previous studies concentrated mainly on flavour-violating decays, such as $\mu \rightarrow e \gamma$ at very low energies or $Z \rightarrow e^{+}\mu^{-}$ at the Z factories' energy. We argue that our calculations probe NHL masses and mixings more efficiently than these traditional studies of flavour-violating processes. The limits on the NHL mass we obtain from the leptonic decays of the Z boson and the lepton universality breaking parameter are comparable to the limit derived from considerations of the breakdown of perturbation theory, discussed in Sec. \ref{breakdown}. This work is organized as follows: In Chapter 2 we treat models of neutrino mass formally. After specifying the classical Lagrangian of the standard model, we investigate how the peculiar nonzero energy density of the vacuum associated with the existence of a fundamental scalar Higgs field accommodates fermion masses in the standard model. We then move on to describe neutrino masses in $SU(2)_{L}\times U(1)_{Y}$ models beyond the standard model; we consider the simplest extension of the standard model leading to massive Dirac neutrinos and a see-saw model with Majorana neutrinos as examples. We also briefly describe neutrino mass in grand unified models and show where the motivation for our model comes from. In Chapter 3 we discuss our model, a superstring-inspired $SU(2)_{L}\times U(1)_{Y}$ model of neutrino mass, in detail. We define the fermion content and the neutrino mass matrix and show how massless neutrinos and NHL's arise through the diagonalization of the mass matrix in the case of one family and also in the general case of three families. 
The mixing matrix is described and phenomenologically relevant mixing parameters are defined. In the second part of that chapter we review existing constraints on NHL's. As a prerequisite for one-loop calculations, the standard model at the one-loop level is discussed formally in Chapter 4. A key ingredient of the calculations is the renormalization of the standard model, and we spend some time dealing with its salient features. In Chapter 5 we revisit the flavour-violating leptonic decays of the Z boson in our model. Our results are followed by a discussion of flavour-violating processes in general. In Chapter 6 we study the impact NHL's have through loops on flavour-conserving leptonic decays of the Z boson, lepton universality breaking in these decays, and the W boson mass. One-loop corrections are classified and calculated and the most important diagrams are identified. Violation of the decoupling theorem by the NHL's and its relevance is discussed. The implicit dependence of our results on muon decay is clarified. Chapter 7 completes the calculations of the previous chapter by considering the full set of diagrams contributing to muon decay. In the last chapter we conclude by summarizing our main results. \chapter{The models of neutrino mass} In this chapter we treat the models of neutrino mass formally. We start with the standard model, then we proceed with models beyond the SM. As discussed in the Introduction, we concentrate mainly on minimal extensions of the standard model - models with $SU(2)_{L} \times U(1)_{Y}$ gauge structure. In Sec. \ref{classical1} we present the classical electroweak Lagrangian defining the standard model. We state rather than discuss the principles upon which it is based. 
The purpose is to specify our notation and to give a practical reference point for the discussions of the models of neutrino mass (Chapters 2 and 3) and for one-loop calculations (Chapters 5, 6, 7) \footnote{One-loop calculations actually require the quantization and the subsequent extension of the classical Lagrangian by additional terms; this is discussed in Chapter 4, which directly precedes the one-loop calculations in Chapters 5, 6 and 7.}. In Sec. \ref{fermion2} we examine the fermion masses in the standard model, in Sec. \ref{neutrino3} we describe the generation of neutrino masses in simple extensions of the standard model (a simple model with Dirac neutrinos and a see-saw model with Majorana neutrinos) and finally, in Sec. \ref{grand4} we briefly deal with neutrino masses in grand unified models. The superstring-inspired minimal extension of the standard model, motivated in this last section, is discussed in detail in Chapter 3. \section{Classical electroweak Lagrangian} \label{classical1} The standard model of electroweak interactions (SM) \cite{key1,key2,key3,key4} is a gauge theory based on a local $SU(2)_{L} \times U(1)_{Y}$ symmetry group, which describes electromagnetic and weak interactions as manifestations of a single electroweak force. The $SU(2)_{L}$ part is the group of the weak isospin $I$ and the $U(1)_{Y}$ part is the group of the weak hypercharge $Y$. The quantum numbers $I_{3}$ (the third component of $I$) and $Y$ are related to the electric charge $Q$ via \begin{eqnarray} Q & = & I_{3} + \frac{Y}{2}. \end{eqnarray} The $SU(2)_{L} \times U(1)_{Y}$ symmetry is spontaneously broken to $U(1)_{em}$ (the group of the electric charge $Q$) and particle masses are generated by the nonsymmetric vacuum. We now list the fermion content of the SM \footnote{In fact, we already did so in Table \ref{tparticles}; here we use a more formal description.}. 
The particles, represented by chiral (left-handed and right-handed) fields, form three families and therefore can be represented as three-component vectors in family space. With the theoretical treatment of fermion masses in mind we differentiate between the field content of an unbroken electroweak theory and that of a broken one. The left-handed lepton fields of the unbroken theory, \begin{eqnarray} \psi_{L} & \equiv & (\psi_{1_{L}},\psi_{2_{L}},\psi_{3_{L}}) \; = \; \left[ \left( \begin{array}{c} \nu_{e} \\ e \end{array} \right)_{L},\; \left( \begin{array}{c} \nu_{\mu} \\ \mu \end{array} \right)_{L},\; \left( \begin{array}{c} \nu_{\tau} \\ \tau \end{array} \right)_{L}\right] , \end{eqnarray} transform as doublets ($I = \frac{1}{2}$) under $SU(2)_{L}$ with the hypercharge $Y = -1$. In short, their quantum numbers are ($\frac{1}{2}, -1$). The right-handed lepton fields (0,-2) are \begin{eqnarray} \psi_{R} & \equiv & (\psi_{1_{R}},\psi_{2_{R}},\psi_{3_{R}}) \; = \; (e_{R},\mu_{R},\tau_{R}). \end{eqnarray} Note there are no right-handed neutrino fields in the SM. The left-handed ($\frac{1}{2}, \frac{1}{3}$) and the right-handed up ($0, \frac{4}{3}$) and down ($0, -\frac{2}{3}$) quark fields of the unbroken theory are respectively \begin{eqnarray} q_{L}^{'} & \equiv & (q_{1_{L}}^{'},q_{2_{L}}^{'},q_{3_{L}}^{'}) \; = \; \left[ \left( \begin{array}{c} u_{1}^{'} \\ d_{1}^{'} \end{array} \right)_{L},\; \left( \begin{array}{c} u_{2}^{'} \\ d_{2}^{'} \end{array} \right)_{L},\; \left( \begin{array}{c} u_{3}^{'} \\ d_{3}^{'} \end{array} \right)_{L}\right], \nonumber \\ \nonumber \\ u_{R}^{'} & \equiv & (u_{1_{R}}^{'},u_{2_{R}}^{'},u_{3_{R}}^{'}), \nonumber \\ d_{R}^{'} & \equiv & (d_{1_{R}}^{'},d_{2_{R}}^{'},d_{3_{R}}^{'}). 
\end{eqnarray} The quark fields (weak eigenstates) of the broken theory are different from the fields of the unbroken theory (and also from the quark mass eigenstates) : \begin{eqnarray} \label{weakstate} q_{L} & \equiv & (q_{1_{L}},q_{2_{L}},q_{3_{L}}) \; = \; \left[ \left( \begin{array}{c} u \\ \tilde{d} \end{array} \right)_{L},\; \left( \begin{array}{c} c \\ \tilde{s} \end{array} \right)_{L},\; \left( \begin{array}{c} t \\ \tilde{b} \end{array} \right)_{L}\right], \nonumber \\ \nonumber \\ u_{R} & \equiv & (u_{1_{R}},u_{2_{R}},u_{3_{R}}) \; = \; (u_{R},c_{R},t_{R}), \nonumber \\ \tilde{d_{R}} & \equiv & (\tilde{d_{1_{R}}},\tilde{d_{2_{R}}},\tilde{d_{3_{R}}}) \; = \; (\tilde{d_{R}},\tilde{s_{R}},\tilde{b_{R}}), \end{eqnarray} where $\;\;\tilde{d} = V_{CKM} d\;\;$ are weak eigenstates of the broken theory obtained from mass eigenstates $d$ through the Cabibbo-Kobayashi-Maskawa (CKM) matrix $V_{CKM}$ (see also Eq. \ref{charcur}). The classical electroweak Lagrangian is the sum of the fermion part, the gauge part and the Higgs part: \begin{eqnarray} {\cal L}_{EW} = {\cal L}_{G} + {\cal L}_{F} + {\cal L}_{H}. \end{eqnarray} The fermion part, which describes the fermions and their interactions, is given by \begin{eqnarray} \label{fermionl} {\cal L}_{F} & = & i \sum_{j=1}^{3}\left\{\overline{\psi_{j_{L}}}\; \gamma^{\mu}\;{\cal D}_{\mu}\; \psi_{j_{L}} + \overline{\psi_{j_{R}}}\;\gamma^{\mu}\;{\cal D}_{\mu} \;\psi_{j_{R}} + \overline{q_{j_{L}}}\; \gamma^{\mu}\;{\cal D}_{\mu}\; q_{j_{L}} \right. \nonumber \\ & + & \left. 
\overline{u_{j_{R}}}\;\gamma^{\mu}\;{\cal D}_{\mu}\;u_{j_{R}} + \overline{d_{j_{R}}}\;\gamma^{\mu}\;{\cal D}_{\mu}\;d_{j_{R}}\right\}, \end{eqnarray} where \begin{eqnarray} {\cal D}_{\mu} \psi_{L} & = & \left[ \left(\partial_{\mu} + i\frac{g_{1}}{2} Y B_{\mu}\right){\bf I} - i\frac{g_{2}}{2} \mbox{{\boldmath $\vec{\tau}$}} \cdot \vec{W_{\mu}} \right] \psi_{L}, \\ {\cal D}_{\mu} \psi_{R} & = & \left(\partial_{\mu} + i\frac{g_{1}}{2} Y B_{\mu}\right) \psi_{R}, \end{eqnarray} are the covariant derivatives for left-handed and right-handed fields respectively. These derivatives ensure the gauge invariance of ${\cal L}_{F}$ by introducing a weak isospin triplet of gauge fields $\vec{W_{\mu}} \equiv (W_{\mu}^{1},W_{\mu}^{2},W_{\mu}^{3})$ and a weak isospin singlet gauge field $B_{\mu}$. The gauge fields interact with the fermions with strength $g_{2}$, the $SU(2)_{L}$ coupling constant, and with strength $g_{1}$, the $U(1)_{Y}$ coupling constant. ${\bf I}$ is a $2 \times 2$ unit matrix in isospin space and $\frac{1}{2}\mbox{{\boldmath $\vec{\tau}$}}$ are the generators of $SU(2)_{L}$ transformations in the two-dimensional representation; $\mbox{{\boldmath $\vec{\tau}$}}$ are the Pauli matrices (see Appendix \ref{Acko}). Weak isospin $I$ is defined through the eigenvalue $I(I+1)$ of the operator $\left( \frac{1}{2} \mbox{{\boldmath $\vec{\tau}$}}\right)^{2}$ and $I_{3}$ is the eigenvalue of the operator $\frac{1}{2} \mbox{{\boldmath $\tau_{3}$}}$. 
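The quantum numbers quoted above can be cross-checked against the relation $Q = I_{3} + Y/2$; a minimal sketch (the field labels are ad hoc, and exact rational arithmetic is used to avoid rounding):

```python
from fractions import Fraction as F

def charge(i3, y):
    """Electric charge from the relation Q = I3 + Y/2."""
    return i3 + y / 2

# (I3, Y) assignments for the first-family fields listed in the text.
fields = {
    "nu_e_L": (F(1, 2), F(-1)),   # lepton doublet, upper component
    "e_L":    (F(-1, 2), F(-1)),  # lepton doublet, lower component
    "e_R":    (F(0), F(-2)),      # right-handed charged lepton
    "u_L":    (F(1, 2), F(1, 3)),
    "d_L":    (F(-1, 2), F(1, 3)),
    "u_R":    (F(0), F(4, 3)),
    "d_R":    (F(0), F(-2, 3)),
}
for name, (i3, y) in fields.items():
    print(name, charge(i3, y))   # 0, -1, -1, 2/3, -1/3, 2/3, -1/3
```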
The gauge part of the Lagrangian describes the gauge fields and their self-interactions; it is given by \begin{eqnarray} {\cal L}_{G} = - \frac{1}{4} W^{a}_{\mu \nu} W^{\mu \nu ,a} -\frac{1}{4} B_{\mu \nu} B^{\mu \nu}, \end{eqnarray} where $ a=1,2,3 $ is the $SU(2)$ index and \begin{eqnarray} W^{a}_{\mu \nu} & = & \partial_{\mu} W_{\nu}^{a} -\partial_{\nu} W_{\mu}^{a} + g_{2} \: \epsilon_{abc} \: W_{\mu}^{b} W_{\nu}^{c}, \nonumber \\ B_{\mu \nu} & = & \partial_{\mu} B_{\nu} - \partial_{\nu} B_{\mu}, \end{eqnarray} are the field strength tensors for the isotriplet $W_{\mu}^{a}$ and the isosinglet $B_{\mu}$ fields respectively. The Higgs part of the Lagrangian, responsible for spontaneous electroweak symmetry breaking, is the sum of two terms: \begin{eqnarray} {\cal L}_{H} = {\cal L}_{HG} + {\cal L}_{HF}. \end{eqnarray} Here ${\cal L}_{HG}$ describes the Higgs-gauge interactions and ${\cal L}_{HF}$ the Higgs-fermion or so-called Yukawa interactions. ${\cal L}_{HG}$ has the form \begin{eqnarray} {\cal L}_{HG} = ({\cal D}_{\mu}\Phi)^{\dagger} ({\cal D}^{\mu}\Phi) - V(\Phi), \end{eqnarray} where \begin{eqnarray} \Phi & = & \left( \begin{array}{c} \phi^{+} \\ \phi^{0} \end{array} \right), \nonumber \\ \nonumber \\ {\cal D}_{\mu}\Phi & = & \left[(\partial_{\mu} + i\frac{g_{1}}{2} Y B_{\mu}){\bf I} - i\frac{g_{2}}{2} \mbox{{\boldmath $\vec{\tau}$}} \cdot \vec{W_{\mu}}\right]\Phi, \nonumber \\ \nonumber \\ V(\Phi) & = & -\mu^{2}\Phi^{\dagger}\Phi + \lambda(\Phi^{\dagger}\Phi)^{2}, \;\;\;\; \lambda > 0. \end{eqnarray} ${\cal D}_{\mu}\Phi$ is the covariant derivative for the $Y = 1$ Higgs doublet $\Phi$ $(\frac{1}{2},1)$ with the charged component $\phi^{+}$ and the neutral component $\phi^{0}$; $V(\Phi)$ is the Higgs potential, constructed so that it can lead to a vacuum in which the average value (vacuum expectation value) of the Higgs doublet, denoted $\langle \Phi \rangle$, is nonzero. 
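The value of the vacuum expectation value quoted below follows from minimizing $V(\Phi)$; filling in this one step:

```latex
\frac{\partial V}{\partial\,(\Phi^{\dagger}\Phi)}
  \;=\; -\mu^{2} + 2\lambda\,(\Phi^{\dagger}\Phi) \;=\; 0
\quad\Longrightarrow\quad
\langle \Phi^{\dagger}\Phi \rangle \;=\; \frac{\mu^{2}}{2\lambda}
  \;=\; \frac{v^{2}}{2}
\quad\Longrightarrow\quad
v \;=\; \frac{\mu}{\sqrt{\lambda}}.
```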
To keep $U(1)_{em}$ unbroken, it is the neutral component $\phi^{0}$ which develops the vacuum expectation value: \begin{eqnarray} \label{vev} \langle \Phi \rangle & = & \frac{1}{\sqrt{2}}\left( \begin{array}{c} 0 \\ v \end{array} \right), \;\;\;\; v = \frac{\mu}{\sqrt{\lambda}}. \end{eqnarray} The symmetry is broken spontaneously because the electroweak Lagrangian is symmetric under $SU(2)_{L} \times U(1)_{Y}$ transformations but the lowest energy state, the vacuum, is not (here $\langle \Phi \rangle$ is not symmetric). The Higgs doublet can now be written as \begin{eqnarray} \label{higgsplus} \Phi & = & \left( \begin{array}{c} \phi^{+} \\ \phi^{0} \end{array} \right) = \left( \begin{array}{c} \phi^{+} \\ \frac{1}{\sqrt{2}}(v+H+i\chi) \end{array} \right), \end{eqnarray} where $\phi^{\pm}$ and $\chi$ are unphysical Higgs fields and $H$ is the physical Higgs field. The spontaneous symmetry breaking gives rise to the massive gauge fields $W_{\mu}^{\pm}$ and $Z_{\mu}$, the mediators of weak charged and neutral interactions, leaving massless the photon field $A_{\mu}$, the mediator of electromagnetic interactions: \begin{eqnarray} W_{\mu}^{\pm} & = & \frac{1}{\sqrt{2}}\left(W_{\mu}^{1} \mp i W_{\mu}^{2}\right), \nonumber \\ Z_{\mu} & = & + \cos \theta_{W} W_{\mu}^{3} + \sin \theta_{W} B_{\mu}, \nonumber \\ A_{\mu} & = & - \sin \theta_{W} W_{\mu}^{3} + \cos \theta_{W} B_{\mu}. \end{eqnarray} The $W^{\pm}$ mass $M_{W}$ and the $Z$ mass $M_{Z}$ are given by \begin{eqnarray} M_{W} & = & \frac{v}{2} g_{2}, \;\;\;\;\;\; M_{Z} \; = \; \frac{v}{2} \sqrt{g_{1}^{2}+g_{2}^{2}}. \end{eqnarray} The Weinberg angle $\theta_{W}$ is defined as \begin{eqnarray} \cos \theta_{W} & = & \frac{M_{W}}{M_{Z}} \; = \; \frac{g_{2}}{\sqrt{g_{1}^{2} + g_{2}^{2}}}. 
\end{eqnarray} The electric charge $e=\sqrt{4\pi \alpha}$ can be expressed as \begin{eqnarray} e = \frac{g_{1}g_{2}}{\sqrt{g_{1}^{2} + g_{2}^{2}}}, \end{eqnarray} or \begin{eqnarray} g_{2} = \frac{e}{\sin \theta_{W}},\;\;\;\;\;\;g_{1} = \frac{e}{\cos \theta_{W}}. \end{eqnarray} The second term of the Higgs part of the Lagrangian, ${\cal L}_{HF}$, is discussed in the next section. \section{Fermion masses in the SM} \label{fermion2} The spontaneous symmetry breaking is responsible also for fermion masses. The starting point is ${\cal L}_{HF}$ which describes the Yukawa interactions between fermions and the Higgs doublet: \begin{eqnarray} \label{yukawa} {\cal L}_{HF} & = & - \sum_{i=1}^{3} \sum_{j=1}^{3}\left[\tilde{G}_{ij} \overline{u_{i_{R}}^{'}} (\tilde{\Phi}^{\dagger}q_{j_{L}}^{'}) + G_{ij}\overline{d_{i_{R}}^{'}}(\Phi^{\dagger}q_{j_{L}}^{'})\right] + h.c. \nonumber \\ & - & \;\;\;\;\sum_{i=1}^{3}\left[\makebox[1.03in][c]{ } \;\;\;\: + h_{i} \overline{\psi_{i_{R}}} (\Phi^{\dagger}\psi_{i_{L}})\right] +h.c., \end{eqnarray} where \begin{eqnarray} \label{higgsminus} \tilde{\Phi} & = & i \tau_{2} \Phi^{*} = \left( \begin{array}{c} {\phi^{0}}^{*} \\ -\phi^{-} \end{array} \right) \end{eqnarray} is the $Y = -1$ Higgs doublet $(\frac{1}{2},-1)$ and $\tilde{G}_{ij},G_{ij}, h_{i}$ are arbitrary Yukawa couplings which are free parameters in the SM. The purpose of the empty space in the second line of Eq. \ref{yukawa} will be clarified below. To generate masses, one substitutes in Eq. \ref{yukawa} the vacuum expectation value $\langle \Phi \rangle$ (see Eq. \ref{vev}) for $\Phi$. Thus the first term (plus its h.c.) in the first line of Eq.~\ref{yukawa} gives mass to $u,c,t$ quarks; the second term gives mass to $d,s,b$ quarks and the term in the second line gives mass to charged leptons. Let us study the charged lepton case first. We get the electron mass $m_{e}$ from the second line of Eq. \ref{yukawa} for $i = 1$. 
The substitution of $\langle \Phi \rangle$ yields \begin{eqnarray} & - & \frac{1}{\sqrt{2}}h_{1}\overline{e_{R}} {\left( \begin{array}{c} 0 \\ v \end{array} \right)}^{\dagger} \left( \begin{array}{c} \nu_{e_{L}} \\ e_{L} \end{array} \right) -\frac{1}{\sqrt{2}}h_{1} \overline{\left( \begin{array}{c} \nu_{e_{L}} \\ e_{L} \end{array} \right)} \left( \begin{array}{c} 0 \\ v \end{array} \right)e_{R} \; \nonumber \\ \nonumber \\ & = & \; -\frac{1}{\sqrt{2}}h_{1}\overline{e_{R}}\:v\:e_{L} -\frac{1}{\sqrt{2}}h_{1}\overline{e_{L}}\:v\:e_{R} \; = \; -\frac{1}{\sqrt{2}}h_{1} v \left( \overline{e_{R}}e_{L}+ \overline{e_{L}}e_{R}\right) \; \nonumber \\ \nonumber \\ & \equiv & \; - m_{e}\left( \overline{e_{R}}e_{L}+ \overline{e_{L}}e_{R}\right) \; = \; - m_{e}\overline{e}e, \end{eqnarray} which is the familiar form of the Dirac mass term. Without any inter-generation couplings in the lepton part of Eq. \ref{yukawa} (Yukawa couplings $h_{i}$ are simple numbers as opposed to matrices $\tilde{G}_{ij},G_{ij}$), there are no mixings among leptons in the SM \footnote{When discussing fermion masses, one cannot avoid the question of possible mixings among fermions. It is because we look for mass effects in various weak processes where the states of definite weak quantum numbers (weak interaction eigenstates) participate rather than the states of definite mass (mass eigenstates). Mixings then relate mass eigenstates to weak eigenstates. Further, mixings and masses are connected through their common origin derived from Yukawa couplings and the vacuum expectation value of the Higgs doublet. For these reasons we will study mixings along with masses.}. As a result, lepton family numbers (flavours) are separately conserved and there are no lepton flavour-violating processes. The total lepton number $L$ is also conserved since it is the sum of lepton family numbers. 
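The size of the couplings the SM must accommodate is easy to see by inverting $m_{e} = \frac{1}{\sqrt{2}}h_{1}v$ numerically; the value $v \approx 246$ GeV used below is the measured vacuum expectation value, an input not quoted in the text:

```python
import math

v = 246.0        # GeV, electroweak vacuum expectation value (measured input)
m_e = 0.511e-3   # GeV, electron mass

# Invert m_e = h_1 v / sqrt(2) for the Yukawa coupling h_1.
h_1 = math.sqrt(2.0) * m_e / v
print(h_1)   # ~ 3e-6: a free parameter the SM merely accommodates
```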
Quark masses are more involved, because inter-generation couplings are allowed ($ \tilde{G}_{ij},G_{ij}$ are nondiagonal matrices in flavour space). As a consequence, $q^{'}_{j_{L}}, u^{'}_{i_{R}}$ and $d^{'}_{i_{R}}$, the weak eigenstates of the unbroken theory, are different from the mass eigenstates $u,c,t$ and $d,s,b$. They are related through the unitary matrices $A_{L}, A_{R}, B_{L}$, $B_{R}$~\cite{barger}: \begin{eqnarray} \left( \begin{array}{c} u^{'}_{1_{L,R}} \\ u^{'}_{2_{L,R}} \\ u^{'}_{3_{L,R}} \end{array} \right) = A_{L,R} \left( \begin{array}{c} u_{L,R} \\ c_{L,R} \\ t_{L,R} \end{array} \right) ,\;\;\;\;\;\;\;\;\;\; \left( \begin{array}{c} d^{'}_{1_{L,R}} \\ d^{'}_{2_{L,R}} \\ d^{'}_{3_{L,R}} \end{array} \right) = B_{L,R} \left( \begin{array}{c} d_{L,R} \\ s_{L,R} \\ b_{L,R} \end{array} \right). \end{eqnarray} To generate quark masses we again substitute $\langle \Phi \rangle$ for $\Phi$, now in the first line of Eq. \ref{yukawa}. We obtain the mass matrices $\frac{v}{\sqrt{2}}\tilde{G}_{ij}$ and $\frac{v}{\sqrt{2}}G_{ij}$, which are diagonalized by the matrices $A$ and $B$ to yield the masses $m_{u}, ..., m_{b}$ of the $u, ..., b$ quarks: \begin{eqnarray} \frac{v}{\sqrt{2}}A_{R}^{-1}\tilde{G}A_{L} = \left( \begin{array}{lll} m_{u} & 0 & 0 \\ 0 & m_{c} & 0 \\ 0 & 0 & m_{t} \end{array} \right) ,\;\;\;\;\; \frac{v}{\sqrt{2}}B_{R}^{-1}GB_{L} = \left( \begin{array}{lll} m_{d} & 0 & 0 \\ 0 & m_{s} & 0 \\ 0 & 0 & m_{b} \end{array} \right). \end{eqnarray} Mixings arise in the charged current interactions of quarks: the quark charged current Lagrangian (part of ${\cal L}_{F}$, Eq. 
\ref{fermionl}) is given as \begin{eqnarray} \label{charcur} {\cal L}_{cc} & = & \frac{g_{2}}{\sqrt{2}}W^{\mu} \overline{(u^{'}_{1_{L}},u^{'}_{2_{L}},u^{'}_{3_{L}})} \gamma_{\mu} \left( \begin{array}{c} d^{'}_{1_{L}} \\ d^{'}_{2_{L}} \\ d^{'}_{3_{L}} \end{array} \right) \; = \; \frac{g_{2}}{\sqrt{2}}W^{\mu}\overline{(u_{L},c_{L},t_{L})} \:A_{L}^{\dagger}B_{L}\:\gamma_{\mu} \left( \begin{array}{c} d_{L} \\ s_{L} \\ b_{L} \end{array} \right), \nonumber \\ \end{eqnarray} where $V_{CKM} \equiv A_{L}^{\dagger}B_{L}$ is the $3\times 3$ unitary CKM mixing matrix \cite{ckm}. It is a nondiagonal matrix inducing transitions between families in charged current interactions. Acting on the mass eigenstates $d, s, b$, it gives us the weak eigenstates $\tilde{d}, \tilde{s}, \tilde{b}$ (see Eq. \ref{weakstate}). There is no mixing in the neutral current Lagrangian, hence no flavour-changing neutral currents at the lowest order of perturbation theory, although they can arise at the one-loop level. For neutrino masses, there is an empty space in the second line of Eq. \ref{yukawa} because no right-handed neutrino fields $\nu_{R}$ are included. Thus, there cannot be nonzero neutrino masses in the SM \footnote{There actually could be nonzero neutrino masses without right-handed neutrino fields if the Higgs sector of the SM were appropriately extended \cite{mohapatra}.}. The generation of fermion masses in the SM, as we have just described it, is considered to be the least satisfactory part of the SM. Each mass enters as an unknown parameter (Yukawa couplings are not predicted) which has to be supplied by experiment. The SM accommodates fermion masses rather than predicting them. The problem of fermion masses, and neutrino masses in particular, has been a top priority for particle physicists for some time now. 
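The structure $V_{CKM} = A_{L}^{\dagger}B_{L}$ just described can be illustrated numerically with real (orthogonal) stand-ins for the unitary diagonalization matrices; the rotation angles below are arbitrary, chosen purely for illustration:

```python
import math

def rot(i, j, theta, n=3):
    """n x n rotation in the (i, j) plane -- a real special case of a
    unitary flavour-rotation matrix."""
    m = [[float(r == c) for c in range(n)] for r in range(n)]
    m[i][i] = m[j][j] = math.cos(theta)
    m[i][j] = -math.sin(theta)
    m[j][i] = math.sin(theta)
    return m

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

# Hypothetical diagonalization matrices for the up and down quark sectors.
A_L = rot(0, 1, 0.3)
B_L = matmul(rot(0, 1, 0.5), rot(1, 2, 0.2))

# V_CKM = A_L^dagger B_L; for real matrices the dagger is a transpose.
V = matmul(transpose(A_L), B_L)

# Unitarity is automatic (a product of unitary matrices is unitary) ...
VtV = matmul(transpose(V), V)
# ... but V itself is nondiagonal, so it mixes the quark families.
print(V[0][1])
```

Unitarity of $V$ survives by construction, while its nondiagonal entries are what induce the family transitions in Eq. \ref{charcur}.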
\section{Neutrino masses in $SU(2)_{L}\times U(1)_{Y}$ models beyond the SM} \label{neutrino3} As noted in the Introduction, basic directions in the theoretical treatment of neutrino masses can be followed in the class of models based on the same symmetry group as the SM, on $SU(2)_{L} \times U(1)_{Y}$. This fixes the gauge sector; the fermion content and the Higgs (symmetry breaking) sector offer some freedom which is used by different models within the class. Here we also keep the symmetry breaking sector of the SM untouched and extend only the fermion sector. We examine two such models in this section and the third one, our model, in Chapter 3. \subsection{A simple model of neutrino mass} \label{asimple31} In this straightforward extension of the SM one postulates one right-handed neutrino field $\nu_{R}$ per family with the $SU(2)_{L}\times U(1)_{Y}$ quantum numbers (0,0). Neutrinos are then treated in the same manner as all other fermions in the SM. The presence of right-handed neutrino fields allows new Yukawa interactions, \begin{eqnarray} {\cal L}_{new} & = & - \sum_{i=1}^{3} \sum_{j=1}^{3}\tilde{h}_{ij} \overline{\nu_{i_{R}}} (\tilde{\Phi}^{\dagger}\psi_{j_{L}}) + h.c.\;. \end{eqnarray} This is the term missing from Eq. \ref{yukawa}. Neutrinos acquire Dirac mass by analogy with up type quarks in the SM (see Sec. \ref{fermion2}); the only minor difference is that here we do not introduce mixings among the charged leptons. Neutrino mass eigenstates are then different from the weak eigenstates $\nu_{e},\nu_{\mu},\nu_{\tau}$, leading to neutrino mixing and the violation of family lepton numbers. The shortcoming of this model is that it provides no answer to the problem of the smallness of neutrino masses. We can make the masses small by tuning the Yukawa couplings $\tilde{h}_{ij}$, but this is not satisfactory, as there is no good reason why the $\tilde{h}_{ij}$ should themselves be small. 
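The amount of tuning required can be quantified with the same inversion used for the electron mass; the $1$ eV Dirac neutrino below is hypothetical, and $v \approx 246$ GeV is the measured vacuum expectation value (an input not quoted in the text):

```python
import math

v = 246.0  # GeV, electroweak vacuum expectation value (measured input)

def yukawa_for_dirac_mass(m_gev):
    """Yukawa coupling needed to accommodate a Dirac mass m = h v / sqrt(2)."""
    return math.sqrt(2.0) * m_gev / v

h_electron = yukawa_for_dirac_mass(0.511e-3)  # electron, m_e = 0.511 MeV
h_neutrino = yukawa_for_dirac_mass(1.0e-9)    # hypothetical 1 eV Dirac neutrino
print(h_neutrino)                # ~ 6e-12
print(h_neutrino / h_electron)   # ~ 2e-6: six more orders of tuning
```

Even relative to the already tiny electron Yukawa coupling, the neutrino coupling must be suppressed by a further six orders of magnitude, with nothing in the model to explain why.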
\subsection{See-saw mechanism in an $SU(2)_{L}\times U(1)_{Y}$ model, Majorana neutrinos and Majorana NHL's} \label{see-saw32} Charged fermions are formally described by Dirac spinors. Neutrinos are described in the same way in the simple extension discussed above. However, because neutrinos are neutral, another possibility opens up. They could be Majorana particles. To illustrate the difference between Dirac and Majorana neutrinos, let us decompose the Dirac mass term into its components, which form the so-called Majorana basis of a matrix representation of mass terms (see Ref. \cite{mohapatra}, Sec. 4.5), \begin{eqnarray} \label{decomp} m \;\overline{\nu}\nu & = & m \:\left(\overline{\nu_{L}}\nu_{R} + \overline{\nu_{R}}\nu_{L} \right) = m \;\overline{\nu_{L}} \nu_{R} + h.c. = \frac{1}{2} m \:\left(\overline{\nu_{L}}\nu_{R}+\overline{\nu_{L}^{c}} \nu_{R}^{c}\right) + h.c. \; = \nonumber \\ \nonumber \\ & = & \frac{1}{2}\left(\overline{\nu^{c}_{L}}\;\overline{\nu_{L}}\right) \left( \begin{array}{ll} 0 & m \\ m & 0 \end{array} \right) \left( \begin{array}{c} \nu_{R} \\ \nu_{R}^{c} \end{array} \right) + h.c. \nonumber \\ & = & \frac{1}{2}\left(\overline{\nu_{L}}\;\overline{\nu_{L}^{c}}\right) \left( \begin{array}{ll} 0 & m \\ m & 0 \end{array} \right) \left( \begin{array}{c} \nu_{R}^{c} \\ \nu_{R} \end{array} \right) + h.c., \end{eqnarray} where $\nu^{c} = C\gamma_{0}\nu^{*}$ ($C= i \gamma^{2} \gamma^{0}$) is the charge conjugate field of $\nu$, $\nu_{R}^{c} \equiv \frac{1}{2}(1+\gamma_{5})\nu^{c}$ is the charge conjugate of the field $\nu_{L}$, and $\nu_{L}^{c}$ is the charge conjugate of the field $\nu_{R}$. In the above, we used the identity $\overline{\nu_{L}^{c}}\nu_{R}^{c} = \overline{\nu_{L}}\nu_{R}$, proven in Appendix \ref{proof}. From Eq. \ref{decomp} it is obvious that the Dirac mass term has a very special mass matrix in the Majorana basis, namely, the two diagonal terms are zero. Can we make these two matrix elements nonzero ? 
The answer is yes, if we are willing to accept the violation of the total lepton number $L$, or equivalently baryon minus lepton ($B-L$) number \footnote{The relevance of $B-L$, rather than $L$, is discussed in Ref. \cite{mohapatra}, Sec. 2.4.}. We already broke individual lepton family numbers and there is nothing sacred about $B-L$ symmetry either. In an $SU(2)_{L}\times U(1)_{Y}$ see-saw model we introduce the following mass matrix (written for the case of one family), \begin{eqnarray} \label{majorana} -{\cal L}_{mass} & = & \frac{1}{2}\left(\overline{\nu_{L}}\;\overline{n_{L}^{c}}\right) \left( \begin{array}{ll} 0 & D \\ D & M \end{array} \right) \left( \begin{array}{c} \nu_{R}^{c} \\ n_{R} \end{array} \right) + h.c., \end{eqnarray} so the fermion content is the same as that of the simple model of Sec. \ref{asimple31} \footnote{Note that in Eq. \ref{decomp} we use notation $\nu_{L}, \nu_{R}$ for left-handed and right-handed chiral fields respectively; in contrast, here we use $n_{R}$ rather than $\nu_{R}$ for the right-handed field. The reason lies in the fact that for a Dirac neutrino two independent fields $\nu_{L}, \nu_{R}$ combine to form a single particle, while in this case $\nu_{L}$ with its partner $\nu_{R}^{c}$ form (in the limit $M \gg D$) a light Majorana neutrino and $n_{R}$ with its partner $n_{L}^{c}$ form a Majorana NHL; hence we use a different notation for the field describing a different particle.} , but here we allow Majorana mass terms breaking $B-L$ number conservation \footnote{For $B-L$ to be conserved, ${\cal L}_{mass}$ must be invariant under the following transformations: $\nu_{L} \rightarrow e^{- i (B - L) \alpha} \nu_{L} = e^{- i \alpha} \nu_{L};\; n_{R} \rightarrow e^{- i \alpha} n_{R};\; n^{c}_{L} \rightarrow e^{+ i \alpha} n^{c}_{L};\; \overline{n^{c}_{L}} \rightarrow e^{- i \alpha} \overline{n^{c}_{L}} $. 
The term in Eq.~\ref{blbreak} transforms as $\overline{n^{c}_{L}} n_{R} \rightarrow e^{- i \alpha} e^{- i \alpha} \: \overline{n^{c}_{L}} n_{R} \neq \overline{n^{c}_{L}} n_{R}$, i.e., it breaks $B-L$ conservation.}, \begin{eqnarray} \label{blbreak} \frac{1}{2}M \:\overline{n_{L}^{c}}n_{R} + h.c.\;. \end{eqnarray} The matrix ${\cal M} = \left( \begin{array}{ll} 0 & D \\ D & M \end{array} \right)$ now describes two massive Majorana neutrinos rather than a single Dirac one. To see that, we have to diagonalize ${\cal M}$ (e.g. Ref. \cite{mohapatra}, Sec.~5.1.4), \begin{eqnarray} {\cal M} & = & O^{T} \left( \begin{array}{ll} m_{1} & 0 \\ 0 & m_{2} \end{array} \right) \left( \begin{array}{rl} -1 & 0 \\ 0 & 1 \end{array} \right) O, \end{eqnarray} where $m_{1,2} = \frac{1}{2}\big(\sqrt{M^{2}+4 D^{2}}\mp M \big)$ are the masses of the two Majorana neutrinos. In the above, \begin{eqnarray} O & = & \left( \begin{array}{lr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right), \;\;\;\;\; \tan 2\theta = \frac{2D}{M}, \end{eqnarray} is an orthogonal rotation matrix defining massive Majorana neutrinos $\nu^{'}, N$ as \begin{eqnarray} \left( \begin{array}{c} \nu^{'}_{L} \\ N_{L} \end{array} \right) \equiv O \left( \begin{array}{c} \nu_{L} \\ n_{L}^{c} \end{array} \right), \;\;\;\;\; \left( \begin{array}{c} \nu^{'}_{R} \\ N_{R} \end{array} \right) \equiv \left( \begin{array}{rl} -1 & 0 \\ 0 & 1 \end{array} \right) O \left( \begin{array}{c} \nu_{R}^{c} \\ n_{R} \end{array} \right). \end{eqnarray} From here we can show \begin{eqnarray} \nu^{'} & = & \nu^{'}_{L}+ \nu^{'}_{R} \; = \; \cos \theta (\nu_{L}-\nu_{R}^{c}) - \sin \theta ( n_{L}^{c}- n_{R}) \; = \; -\nu^{'c}, \nonumber \\ N & = & N_{L}+ N_{R} \; = \; \sin \theta (\nu_{L}+\nu_{R}^{c}) + \cos \theta ( n_{L}^{c}+ n_{R}) \; = \; N^{c}, \end{eqnarray} that is, $\nu^{'}$ and $N$ are their own charge conjugates, their own antiparticles; therefore they are Majorana neutrinos. 
We see how this model explains the small neutrino masses when we assume that $M \gg D$. In this limit, the masses $m_{1}$ of $\nu^{'}$ and $m_{2}$ of $N$ become \begin{eqnarray} \label{see} m_{1} \doteq \frac{D^{2}}{M}, \;\;\;\;\; m_{2} \doteq M; \end{eqnarray} and using $\sin 2\theta \doteq \tan 2\theta \doteq 2\theta = \frac{2 D}{M}$, we find for the weak eigenstate $\nu_{L}$ \begin{eqnarray} \label{ss2} \nu_{L} & \doteq & \nu^{'}_{L} + \frac{D}{M} N_{L} \; \doteq \; \nu^{'}_{L}. \end{eqnarray} Eq. \ref{see} is the famous see-saw mass relation (see also Eq. \ref{ss1}), whereby a weakly interacting neutrino, $\nu_{L} \doteq \nu^{'}_{L}$, gets very light compared to the typical family fermion mass $D$ thanks to the very large Majorana mass $M$. Assuming $D \sim m_{\tau}$, $M$ has to be greater than about $10^{8} \; {\rm GeV}$ in order to meet the cosmological bound $m_{\nu} < 25$ eV (see Sec.~\ref{sttp}). It looks as though we have replaced the problem of the smallness of the neutrino mass with another one, the problem of the large mass $M$. Indeed, in the context of an $SU(2)_{L} \times U(1)_{Y}$ model, the origin of the large mass $M$ is a mystery. At this point we invoke our motivational grounds, the unification models (see the next section). There are in fact large scales in these models associated with the unification energies. The $SU(2)_{L} \times U(1)_{Y}$ see-saw model could be a low-energy limit of some GUT theory. In this thesis we are specifically interested in NHL's, described in this section by the field $N$ with the mass $m_{2} \equiv M$. From Eqs. \ref{see}, \ref{ss2} it is obvious that NHL's in this model are, first, too heavy to be observed directly in the near future, and second, their contribution to left-handed weak eigenstates is so small that there is little hope to see even their indirect effects. See-saw models tend to be phenomenologically uninteresting. 
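The see-saw relation of Eq.~\ref{see} is easy to verify numerically. The following sketch (toy, dimensionless numbers chosen so that double precision resolves the tiny light eigenvalue; the physical case has $D \sim m_{\tau}$ and $M \gtrsim 10^{8}$ GeV) diagonalizes the $2\times 2$ mass matrix of Eq.~\ref{majorana}:

```python
import numpy as np

# Toy hierarchy M >> D; the physical masses are |eigenvalues| of the matrix
D, M = 1.0, 1.0e4

calM = np.array([[0.0, D],
                 [D,   M]])
m1, m2 = np.sort(np.abs(np.linalg.eigvalsh(calM)))

# Exact spectrum: m_{1,2} = (sqrt(M^2 + 4 D^2) -/+ M)/2
assert np.isclose(m1, 0.5 * (np.sqrt(M**2 + 4 * D**2) - M))
assert np.isclose(m2, 0.5 * (np.sqrt(M**2 + 4 * D**2) + M))

# See-saw limit: the light mass is driven down to D^2/M, the heavy one up to M
assert np.isclose(m1, D**2 / M, rtol=1e-6)
assert np.isclose(m2, M, rtol=1e-6)
```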
There are, however, models with special forms of the Dirac and Majorana mass matrices (in the general case of $n$ families, masses $D$ and $M$ in Eq. \ref{majorana} become Dirac and Majorana $n \times n$ mass matrices) that avoid this suppression \cite{pilaftsis2,Ng1}. For instance, Pilaftsis \cite{pilaftsis2} finds a relation among the elements of the $D$ and $M$ matrices that leads to massless neutrinos at tree level, with small Majorana masses generated radiatively. The cosmological constraint on the scale $M$ is much weaker in this model and, consequently, the mixing of NHL's ($K_{H} \sim D/M$) is not suppressed. We shall refer to such models as see-saw models with enhanced mixings. Although calculations in this work were carried out in the context of a superstring-inspired model, our analysis is qualitatively valid also for this class of see-saw models. \section{Neutrino mass in grand unified models} \label{grand4} Here we briefly touch on the question of neutrino mass in grand unified models (GUT's). A nice short review of the subject can be found in Ref. \cite{mohapatra}; the case of $SO(10)$ is discussed also in Ref. \cite{seesaw}, and that of $E_{6}$ in Ref. \cite{strings}. There are currently 15 known chiral fermion fields per generation: $e_{L}, e_{R}, \nu_{e}$ and twelve $u$ and $d$ quark fields. In the simplest GUT model, $SU(5)$, these $15$ fields are assigned to $\{10\}$ and $\overline{\{5\}}$-dimensional representations. There is no right-handed neutrino postulated; therefore, one cannot generate a Dirac mass for the neutrino, nor is it possible to generate a Majorana mass as described in Sec. \ref{see-saw32}. One can still generate a Majorana mass without a right-handed neutrino if an appropriate Higgs field is introduced. The problem is that this Higgs field is introduced {\it ad hoc} and, as a result, neutrino masses do not arise in $SU(5)$ naturally. Moreover, $SU(5)$ is ruled out by the proton decay measurements \cite{proton}. 
The next popular group is $SO(10)$. This group contains left-right symmetric $SU(2)_{L} \times SU(2)_{R} \times SU(4)_{C}$ as its subgroup, which implies automatically the right-handed neutrino. The number of chiral fermion fields per generation is thus $16$, filling the fundamental $\{16\}$ representation. With a right-handed neutrino in the fundamental representation, neutrino masses in $SO(10)$ can arise naturally via the see-saw mechanism (see Sec. \ref{see-saw32}). The actual values of neutrino masses are sensitive to the Majorana mass matrix $M$ (see Eq. \ref{majorana}), which in turn can tell us about the particular branch of the $SO(10)$ breaking down to low-energy $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y}$. $SO(10)$ predicts a lower rate for proton decay than does $SU(5)$. Finally, a lot of attention is paid to $E_{6}$ based GUT's \cite{strings}. This is thanks to their superstring connections. Green and Schwarz \cite{green} showed that string theory in ten dimensions is anomaly free for the gauge group $E_{8} \times E^{'}_{8}$ and that the compactification of the additional six dimensions can result in the breaking of $E_{8}$ down to $E_{6}$, which becomes an effective GUT group. The fundamental representation of $E_{6}$ is $\{27\}$-dimensional, implying $27$ chiral fermion fields per generation, $11$ more than we had in $SO(10)$. These eleven fields must be new particles, often referred to as exotics. They include a colour triplet weak isosinglet quark and its antiparticle and five new leptons. Of the new leptons, four (two charged and two neutral) form two weak isodoublets and the fifth one is a weak isosinglet. Curiously, the superstring-inspired $E_{6}$ model experiences certain difficulties in understanding the small neutrino masses \cite{vallemo,bernabeu1} : there are no appropriate Higgs fields to provide the large Majorana mass $M$ for the see-saw mass matrix (see Eq.~\ref{majorana}) and therefore the see-saw mechanism does not operate here. 
Interesting solutions to this problem suggest that besides the fundamental $\{27\}$-plet there exists an additional, $E_{6}$ singlet neutral fermion field $S_{L}$. At low energies, $S_{L}$, along with the right-handed neutrino $n_{R}$, can enrich the neutral lepton spectrum of the SM. The other three neutral exotic leptons decouple from the low-energy spectrum. The mass matrix formed by $\nu_{L}, n_{R}$ and $S_{L}$ offers an alternative to the see-saw mechanism in generating naturally light (in fact massless) neutrinos. The phenomenological implications of such a superstring-inspired low-energy model, which is just a minimal extension of the SM, are studied in this thesis. The model itself is described in detail in the next chapter. \chapter{A superstring-inspired $SU(2)_{L} \times U(1)_{Y}$ model of neutrino mass} In this thesis we study phenomenological aspects of an $SU(2)_{L} \times U(1)_{Y}$ model, which extends the neutral fermion spectrum of the SM by two new fields, the right-handed neutrino $n_{R}$ and a left-handed field $S_{L}$. The model could arise as a low-energy limit of a superstring-inspired $E_{6}$ GUT \cite{vallemo,bernabeu1}; it was also suggested as a low-energy limit of a supersymmetry-inspired $SO(10)$ GUT \cite{wolfe}. Superstring-inspired GUT's have an interesting problem with neutrino masses (see the discussion in Sec.~\ref{grand4}): the see-saw mechanism does not apply here and unacceptably large neutrino masses arise as a consequence \cite{vallemo,bernabeu1}. The existence of the field $S_{L}$ was suggested as a potential solution to this problem. $S_{L}$ is an $E_{6}$ singlet which may be present in superstring models. At low energies it can remain in the neutral fermion spectrum along with the right-handed neutrino $n_{R}$ and the usual left-handed neutrino $\nu_{L}$. 
These three fields together with imposed $B - L$ conservation form a mass matrix leading to an alternative to the see-saw mechanism in addressing the problem of the smallness of neutrino masses. In this chapter we define the model and give a detailed treatment of neutrino masses and mixing matrix, and the neutrino interaction Lagrangian. \section{Fermion content and mass matrix} \label{content1} In this superstring-inspired model we keep, in line with introductory arguments in Sec. \ref{neutrino3}, the gauge sector and the Higgs sector of the SM untouched. The fermion content is enlarged by two neutrino fields, $n_{R}$ and $S_{L}$, per family. Their $SU(2)_{L}\times U(1)_{Y}$ quantum numbers are $(0,0)$. The field $n_{R}$ is a right-handed neutrino, while $S_{L}$ is an $E_{6}$ singlet neutrino field. In a single family, we thus have the following leptons (given with their quantum numbers): \begin{equation} \begin{array}{cccc} \left( \begin{array}{c} \nu_{e} \\ e \end{array} \right)_{L} & e_{R} & n_{R} & S_{L} \\ \\ \Big(\frac{1}{2},-1\Big) & (0,-2) & (0,0) & (0,0) \end{array} \end{equation} The definition of the model is completed by specifying the mass matrix ${\cal M}$. In the Majorana basis it is given by \begin{eqnarray} \label{ourmatrix} -{\cal L}_{mass} & = & \frac{1}{2}{\cal M} \; = \; \frac{1}{2}\left(\overline{\nu_{L}}\;\overline{n_{L}^{c}}\;\overline{S_{L}} \right) \left( \begin{array}{lll} 0 & D & 0 \\ D^{T} & 0 & M^{T} \\ 0 & M & 0 \end{array} \right) \left( \begin{array}{c} \nu_{R}^{c} \\ n_{R} \\ S^{c}_{R} \end{array} \right) + h.c.. \end{eqnarray} Each $\nu_{L},n_{R},S_{L}$ represents now a collection of three fields, one for each family, e.g. $\nu_{L} = (\nu_{e},\nu_{\mu},\nu_{\tau})$ is the vector of the three SM weak eigenstate neutrinos. $D$ and $M$ are $3 \times 3$ Dirac mass matrices. The top diagonal element must vanish unless we extend the symmetry breaking sector of our model. A weak isotriplet Higgs field could allow this term. 
However, we retain the symmetry breaking sector of the SM. The middle element is zero due to the absence of the appropriate Higgs fields that would provide the Majorana mass. This is enforced by imposed $B-L$ number conservation, which is also responsible for all other zeros in the mass matrix ${\cal M}$. Only terms preserving the $B-L$ number, $\overline{\nu_{L}}\;D\;n_{R}+ h.c.$ and $\overline{S_{L}}\;M\;n_{R} + h.c.$, remain (see footnote 7, p. 23 on $B-L$ conservation). To find the physical neutrino states of the model, we have to diagonalize the mass matrix ${\cal M}$. We will do it within a single family first and then we will generalize the procedure for the three families. \subsection{Diagonalization of ${\cal M}$ for a single family} \label{diagon2} In the case of a single family, $\nu_{L},n_{R},S_{L}$ represent each only one field and matrices $D,M$ become simple numbers. We perform the following rotation, \begin{eqnarray} \label{rotation} \left( \begin{array}{c} \nu_{L} \\ n_{L}^{c} \\ S_{L} \end{array} \right) \equiv O \left( \begin{array}{c} \nu_{L}^{'} \\ n_{L}^{c} \\ S_{L}^{'} \end{array} \right),\;\;\;\;\;\;\;\;\;\; \left( \begin{array}{c} \nu_{R}^{c} \\ n_{R} \\ S^{c}_{R} \end{array} \right) \equiv O \left( \begin{array}{c} \nu_{R}^{'c} \\ n_{R} \\ S^{'c}_{R} \end{array} \right), \end{eqnarray} where \begin{eqnarray} O & = & \left( \begin{array}{ccc} c_{\theta} & 0 & s_{\theta} \\ 0 & 1 & 0 \\ - s_{\theta} & 0 & c_{\theta} \end{array} \right), \;\;\;\;\; c_{\theta} = \cos \theta, \; s_{\theta} = \sin \theta, \; \tan \theta =\frac{D}{M}. \end{eqnarray} The mass matrix ${\cal M}$ becomes \begin{eqnarray} {\cal M} & = & \left(\overline{\nu_{L}^{'}}\; \overline{n_{L}^{c}}\; \overline{S_{L}^{'}} \right) O^{T} \left( \begin{array}{lll} 0 & D & 0 \\ D & 0 & M \\ 0 & M & 0 \end{array} \right) O \left( \begin{array}{c} \nu_{R}^{'c} \\ n_{R} \\ S^{'c}_{R} \end{array} \right) + h.c. 
\nonumber \\ \nonumber \\ & = & \left(\overline{\nu_{L}^{'}}\; \overline{n_{L}^{c}}\; \overline{S_{L}^{'}} \right) \left( \begin{array}{ccc} 0 & D c_{\theta} - M s_{\theta} & 0 \\ D c_{\theta} - M s_{\theta} & 0 & D s_{\theta} + M c_{\theta} \\ 0 & D s_{\theta} + M c_{\theta} & 0 \end{array} \right) \left( \begin{array}{c} \nu_{R}^{'c} \\ n_{R} \\ S^{'c}_{R} \end{array} \right) + h.c. \nonumber \\ \nonumber \\ & = & \left(\overline{\nu_{L}^{'}}\; \overline{n_{L}^{c}}\; \overline{S_{L}^{'}} \right) \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & \sqrt{D^{2} + M^{2}} \\ 0 & \sqrt{D^{2} + M^{2}} & 0 \end{array} \right) \left( \begin{array}{c} \nu_{R}^{'c} \\ n_{R} \\ S^{'c}_{R} \end{array} \right) + h.c. , \end{eqnarray} yielding a massless neutrino $\nu^{'}$. Moreover, we recognize the submatrix \begin{eqnarray} \left(\overline{n_{L}^{c}}\; \overline{S_{L}^{'}} \right) \left( \begin{array}{cc} 0 & \sqrt{D^{2} + M^{2}} \\ \sqrt{D^{2} + M^{2}} & 0 \end{array} \right) \left( \begin{array}{c} n_{R} \\ S^{'c}_{R} \end{array} \right) \end{eqnarray} as the matrix representation of a Dirac mass term (see Eq. \ref{decomp}) in the Majorana basis. Indeed, putting \begin{eqnarray} \label{identif} n_{L}^{c} \equiv N^{c}_{L},\; \; \; S^{'c}_{R} \equiv N^{c}_{R},\; \; \; n_{R} \equiv N_{R},\; \; \; S_{L}^{'} \equiv N_{L}, \end{eqnarray} we reproduce Eq. \ref{decomp} and therefore, besides the massless neutrino $\nu^{'}$, we generate a Dirac neutral heavy lepton $N$ with the mass $M^{'} = \sqrt{D^{2} + M^{2}}$. The weak eigenstate $\nu_{L}$ is given by \begin{eqnarray} \nu_{L} & = & \cos \theta \; \nu^{'}_{L} + \sin \theta \; S_{L}^{'} \nonumber \\ & = & \frac{M}{\sqrt{D^{2} + M^{2}}} \; \nu^{'}_{L} + \frac{D}{\sqrt{D^{2} + M^{2}}} \; N_{L} \nonumber \\ & \equiv & K_{L} \nu^{'}_{L} + K_{H} N_{L}, \end{eqnarray} where $K_{L}, K_{H}$ are mixing factors (matrices in the case of three families) for massless neutrinos and NHL's, respectively. 
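The single-family result, one exactly massless neutrino plus a Dirac NHL of mass $M^{'} = \sqrt{D^{2}+M^{2}}$ (which appears in the Majorana basis as a pair of $\pm M^{'}$ eigenvalues), can be checked directly. A numerical sketch with illustrative values of $D$ and $M$:

```python
import numpy as np

D, M = 1.0, 50.0   # illustrative Dirac and singlet mass scales

calM = np.array([[0.0, D,   0.0],
                 [D,   0.0, M  ],
                 [0.0, M,   0.0]])
eig = np.sort(np.linalg.eigvalsh(calM))

Mp = np.hypot(D, M)   # M' = sqrt(D^2 + M^2)
# One exactly massless neutrino; the Dirac NHL shows up as the pair +/- M'
assert np.allclose(eig, [-Mp, 0.0, Mp], atol=1e-10)

# Mixing factors of the weak eigenstate nu_L
K_L, K_H = M / Mp, D / Mp
assert np.isclose(K_L**2 + K_H**2, 1.0)
assert np.isclose(K_H, D / M, rtol=1e-3)   # K_H ~ D/M for M >> D
```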
For $ M \gg D $ the mass of the NHL, $M^{'}$, and the weak eigenstate $\nu_{L}$ are approximately \begin{eqnarray} \label{eigenmix} M^{'} & \doteq & M, \nonumber \\ \nu_{L} & \doteq & \nu^{'}_{L} + \frac{D}{M} N_{L}, \end{eqnarray} that is, the mixing of NHL's is $K_{H} \doteq \frac{D}{M}$. \subsection{Mass matrix diagonalization in case of three families} \label{diagon3} We leave the matrix representation of the mass matrix observing that \begin{eqnarray} \label{fa3} -{\cal L}_{mass} & = & \frac{1}{2}{\cal M} \; = \; \frac{1}{2}\left(\overline{\nu_{L}}\;\overline{n_{L}^{c}}\;\overline{S_{L}} \right) \left( \begin{array}{lll} 0 & D & 0 \\ D^{T} & 0 & M^{T} \\ 0 & M & 0 \end{array} \right) \left( \begin{array}{c} \nu_{R}^{c} \\ n_{R} \\ S^{c}_{R} \end{array} \right) + h.c. \nonumber \\ & = & \frac{1}{2}\left(\overline{\nu_{L}}\;D\;n_{R} + \overline{n_{L}^{c}}\; D^{T} \; \nu_{R}^{c} + \overline{n_{L}^{c}}\;M^{T}\;S^{c}_{R} + \overline{S_{L}}\;M\;n_{R} \right) + h.c. \nonumber \\ & = & \frac{1}{2}\left(\overline{\nu_{L}}\;D\;n_{R} + \overline{\nu_{L}}\;D\;n_{R} + \overline{S_{L}}\;M\;n_{R} + \overline{S_{L}}\;M\;n_{R} \right) + h.c. \nonumber \\ & = & \overline{\nu_{L}}\;D\;n_{R} + \overline{S_{L}}\;M\;n_{R} + h.c. \;. \end{eqnarray} In the above we used the identity (for which the proof is almost identical with that for $\overline{\nu_{L}^{c}}\nu_{R}^{c} = \overline{\nu_{L}}\nu_{R}$, see Appendix \ref{proof}) \begin{equation} \overline{n_{L}^{c}}D^{T}\nu_{R}^{c} = \overline{\nu_{L}} D n_{R}\;. 
\end{equation} Performing the following rotation \begin{eqnarray} \label{matrixg} \left( \begin{array}{c} \nu_{L}^{'} \\ S_{L}^{'} \end{array} \right) = G \left( \begin{array}{c} \nu_{L} \\ S_{L} \end{array} \right) = \left( \begin{array}{cc} U_{1} & U_{2} \\ U_{3} & U_{4} \end{array} \right) \left( \begin{array}{c} \nu_{L} \\ S_{L} \end{array} \right), \end{eqnarray} where $G$ is a unitary matrix, we get \begin{eqnarray} -{\cal L}_{mass} & = & \left( \overline{\nu_{L}^{'}} U_{1} + \overline{S_{L}^{'}}U_{3}\right)D n_{R} + \left( \overline{\nu_{L}^{'}} U_{2} + \overline{S_{L}^{'}}U_{4}\right) M n_{R}+ h.c. \nonumber \\ & = & \overline{\nu_{L}^{'}}\left(U_{1} D + U_{2} M\right) n_{R}+ h.c. +\overline{S_{L}^{'}}\left(U_{3}D + U_{4} M\right)n_{R} + h.c.\;. \end{eqnarray} We put (compare with $ D \cos \theta - M \sin \theta = 0$ for a single generation case) \begin{equation} U_{1} D + U_{2} M = 0, \end{equation} hence \begin{eqnarray} -{\cal L}_{mass} & = & \overline{S_{L}^{'}}\left(U_{3}D + U_{4} M\right)n_{R} + h.c. = \overline{S_{L}^{'}}M^{'}n_{R} + h.c.\;. \end{eqnarray} There is no mass term for $\nu^{'}_{L}$, therefore $\nu^{'}_{L}$ is a massless neutrino. $M^{'}$, unlike in the one family case, has to be further diagonalized with rotations ($Z, T$ unitary matrices) in the NHL basis: \begin{eqnarray} S_{L}^{''} = T S_{L}^{'},\;\;\;\; n_{R}^{''} = Z n_{R}, \end{eqnarray} yielding \begin{eqnarray} \label{mprimed} -{\cal L}_{mass} & = & \overline{S_{L}^{''}}\left(TU_{3}DZ^{\dagger} + TU_{4}MZ^{\dagger}\right)n_{R}^{''} + h.c. \nonumber \\ & = & \overline{ S_{L}^{''}}M^{''} n_{R}^{''} + h.c., \end{eqnarray} such that $M^{''}$ is diagonal. Identifying (see also Eq. \ref{identif}) \begin{eqnarray} S_{L}^{''} & \equiv & N_{L}, \nonumber \\ n_{R}^{''} & \equiv & N_{R}, \end{eqnarray} we arrive at \begin{eqnarray} -{\cal L}_{mass} & = & \overline{N} M^{''} N \; = \; M_{N_{4}} \overline{N_{4}}N_{4} + M_{N_{5}} \overline{N_{5}}N_{5} + M_{N_{6}} \overline{N_{6}}N_{6}. 
\end{eqnarray} Here $M^{''}$ is diagonal with elements $M_{N_{4}}, M_{N_{5}}, M_{N_{6}}$ being masses of three Dirac NHL's $N_{4}, N_{5}, N_{6}$. The weak eigenstate vector, $\nu_{L}$, is given by \begin{eqnarray} \label{weakeig} \nu_{L} \equiv \nu_{l} \equiv \left(\nu_{e},\nu_{\mu},\nu_{\tau}\right) & = & U_{1}^{\dagger}\nu_{L}^{'} + U_{3}^{\dagger}S_{L}^{'} = U_{1}^{\dagger}\nu_{L}^{'} + U_{3}^{\dagger}T^{\dagger}S_{L}^{''} \nonumber \\ & \equiv & K_{L}\nu_{L}^{'} + K_{H}S_{L}^{''} = K_{L} \nu_{L}^{'} + K_{H}N_{L}. \end{eqnarray} \subsection{Discussion of mass eigenstates} \label{discus3} The diagonalization of the mass matrix yielded three massless neutrinos $\nu_{L}^{'} \equiv \left(\nu_{1_{L}}^{'},\nu_{2_{L}}^{'},\nu_{3_{L}}^{'}\right)$ along with three Dirac NHL's $N \equiv \left(N_{4},N_{5},N_{6}\right)$ with mass $M_{N} \sim M$. The masslessness of the neutrinos is the consequence of the assumed $B-L$ symmetry. This symmetry also prevents neutrinos from acquiring small masses in radiative corrections. The neutrinos are massless due to $B-L$ symmetry also in the SM, but the difference is that in the SM the $B-L$ symmetry is an automatic consequence of the missing right-handed neutrino fields, while here this symmetry is imposed with right-handed neutrinos present. Note that massless neutrinos imply there are no time dependent neutrino oscillations and no neutrinoless double beta decays. In the light of arguments for massive neutrinos (see Sec. \ref{sttp}) it may seem surprising that this model yields massless neutrinos. However, small neutrino masses can be generated in a variant of our model by introducing a small Majorana mass term $\mu$ \cite{vallemo,garval} in the mass matrix ${\cal M}$ in Eq. 
\ref{ourmatrix} : \begin{eqnarray} \label{muelement} -{\cal L}_{mass} & = & \frac{1}{2}{\cal M} \; = \; \frac{1}{2}\left(\overline{\nu_{L}}\;\overline{n_{L}^{c}}\;\overline{S_{L}} \right) \left( \begin{array}{lll} 0 & D & 0 \\ D^{T} & 0 & M^{T} \\ 0 & M & \mu \end{array} \right) \left( \begin{array}{c} \nu_{R}^{c} \\ n_{R} \\ S^{c}_{R} \end{array} \right) + h.c.\;.\;\;\;\; \end{eqnarray} Besides, now that we have a superstring motivation for the field content of our model, there is nothing unusual about massless neutrinos either. The unnaturalness of the SM treatment of neutrino masses is removed, dark matter has other candidates and the solar neutrino puzzle has alternative explanations. Whether neutrinos have any mass at all may actually be of secondary interest for a theorist who is trying to come up with some plausible explanation of the low experimental limits on this mass. The weak eigenstates in our model ($\nu_{L}$) are dominated by massless neutrinos $\nu_{L}^{'}$ with a small admixture ($K_{H} \sim D/M$, see Eq. \ref{eigenmix}) of NHL's $N$. In see-saw models, the NHL mixing is generally suppressed due to the scale $M$ (see Eq. \ref{see} and the discussion afterwards) \footnote{An exception are the see-saw models with enhanced mixings discussed at the end of Sec. \ref{see-saw32}.}, which has to be very large to explain small neutrino masses, $m_{\nu} \sim \frac{D^{2}}{M}$, dictated by experiments and cosmological arguments. The NHL mixing in our model is not, however, restricted by the dependence of neutrino masses $m_{\nu^{'}}$ on scales $D$ and $M$ (there is no such dependence as $m_{\nu^{'}} = 0$). Therefore, the scale $M$ can be much lower than in the case of see-saw models and hence rates for many interesting phenomena can be large. This means that signatures of NHL's might be found even at current accelerator energies and luminosities. Our model is thus attractive not only conceptually, but also practically. 
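The effect of the small $\mu$ entry can also be seen numerically. Since the determinant of the perturbed matrix is $-D^{2}\mu$ while the heavy pair contributes a factor of about $-(D^{2}+M^{2})$, the light eigenvalue is approximately $\mu D^{2}/(D^{2}+M^{2})$: linear in $\mu$ and decoupled from the see-saw relation. A one-family toy sketch (illustrative numbers, not a fit):

```python
import numpy as np

D, M, mu = 1.0, 50.0, 1.0e-4   # mu: small B-L breaking Majorana entry

calM = np.array([[0.0, D,   0.0],
                 [D,   0.0, M  ],
                 [0.0, M,   mu ]])
m_light = np.abs(np.linalg.eigvalsh(calM)).min()

# det(calM) = -D^2 mu, heavy pair product ~ -(D^2 + M^2), hence
# m_light ~ mu D^2/(D^2 + M^2): proportional to mu, not to D^2/M
assert np.isclose(m_light, mu * D**2 / (D**2 + M**2), rtol=1e-2)
```

In the limit $\mu \rightarrow 0$ the massless neutrino of the unperturbed matrix in Eq.~\ref{ourmatrix} is recovered.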
\section{Properties of the mixing matrix} \label{properties3} The weak interaction eigenstates $\nu_{l}$ are related to six mass eigenstates $\nu^{'}, N$ via a $3 \times 6$ mixing matrix $K$ with components $K_{l \alpha}$; $l = e, \mu, \tau$ and $\alpha = \nu^{'}_{1}, \nu^{'}_{2}, \nu^{'}_{3}, N_{4}, N_{5}, N_{6}$ (see Eq. \ref{weakeig}) \begin{eqnarray} \left( \begin{array}{c} \nu_{e} \\ \nu_{\mu} \\ \nu_{\tau} \end{array} \right) & = & {\left( \begin{array}{llllll} K_{e\nu^{'}_{1}} & K_{e\nu^{'}_{2}} & K_{e\nu^{'}_{3}} & K_{eN_{4}} & K_{eN_{5}} & K_{eN_{6}} \\ K_{\mu\nu^{'}_{1}} & K_{\mu\nu^{'}_{2}} & K_{\mu\nu^{'}_{3}} & K_{\mu N_{4}} & K_{\mu N_{5}} & K_{\mu N_{6}} \\ K_{\tau\nu^{'}_{1}} & K_{\tau\nu^{'}_{2}} & K_{\tau\nu^{'}_{3}} & K_{\tau N_{4}} & K_{\tau N_{5}} & K_{\tau N_{6}} \\ \end{array} \right)} \left( \begin{array}{c} \nu^{'}_{L} \\ N_{L} \end{array} \right) \nonumber \\ & \equiv & (K_{L} \; K_{H}) \left( \begin{array}{c} \nu^{'}_{L} \\ N_{L} \end{array} \right) ; \;\;\;\; \nu^{'} \; = \; \left( \begin{array}{c} \nu^{'}_{1} \\ \nu^{'}_{2} \\ \nu^{'}_{3} \end{array} \right), \;\; N \; = \; \left( \begin{array}{c} N_{4} \\ N_{5} \\ N_{6} \end{array} \right). \end{eqnarray} Alternatively, we can write \footnote{Where not indicated in this work, indices $i,j,k$ run through $1,2,3$ and $a,b,c$ through $4,5,6$.} \begin{eqnarray} \label{alter} \nu_{l} & = & \sum_{i=1,2,3}\big(K_{L}\big)_{li} \nu^{'}_{i_{L}} + \sum_{a=4,5,6} \big(K_{H}\big)_{la} N_{a_{L}} \; = \; \big(K_{L}\big)_{li} \nu^{'}_{i_{L}} + \big(K_{H}\big)_{la} N_{a_{L}}. \end{eqnarray} A quick inspection tells us the matrix $K$ has $3 \times 6$ complex parameters = $36$ degrees of freedom. Unitarity implies an important property often used throughout this work, \begin{eqnarray} \label{KLKH} K_{L}K^{\dagger}_{L} +K_{H}K^{\dagger}_{H} = 1. \end{eqnarray} This property reduces the number of degrees of freedom by $9$ to $27$. 
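The unitarity property just stated is simply row orthonormality of the full $6\times 6$ rotation whose top three rows form $K$. A quick numerical sketch (a random unitary matrix standing in for the physical mixing), which also exhibits the Schwarz-type inequality obeyed by the mixing factors built from $K_{H}$, since $K_{H}K_{H}^{\dagger}$ is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random 6x6 unitary standing in for the rotation to the six mass eigenstates
z = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(z)

K = U[:3, :]                    # 3x6 mixing matrix: top three rows
K_L, K_H = K[:, :3], K[:, 3:]   # light- and heavy-sector blocks

# Row orthonormality: K_L K_L^dagger + K_H K_H^dagger = 1
assert np.allclose(K_L @ K_L.conj().T + K_H @ K_H.conj().T, np.eye(3))

# K_H K_H^dagger is positive semidefinite, hence |ll'_mix|^2 <= ll_mix l'l'_mix
mix = K_H @ K_H.conj().T
for l in range(3):
    for lp in range(3):
        assert abs(mix[l, lp])**2 <= mix[l, l].real * mix[lp, lp].real + 1e-12
```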
Further elimination of unphysical parameters via redefinition (rephasing) \footnote{This operation is also done in the SM when one parametrizes the CKM matrix.} of physical mass eigenstates leaves us $3^{2}$ angles and ${(3-1)}^{2}$ phases \cite{branco}. This allows for possible lepton flavour violation, universality violation and CP violation. The mixing factor which typically governs flavour-conserving processes, $ll_{mix}$, is given by \begin{eqnarray} ll_{mix} & = & \sum_{a=4,5,6} \big(K_{H}\big)_{la} \big(K_{H}^{\dagger}\big)_{al}\;\; ; \makebox[.5in] [c] { } l= e, \mu, \tau \end{eqnarray} and the flavour-violating mixing factor $l{l^{'}}_{mix}$ is defined as \begin{eqnarray} l{l^{'}}_{mix} & = & \sum_{a=4,5,6} \big(K_{H}\big)_{la} \big(K_{H}^{\dagger}\big)_{al^{'}}\;\; ; \makebox[.5in] [c] { } l,l^{'} = e, \mu, \tau, \makebox[.2in] [c] { }l \neq l^{'}. \end{eqnarray} Further, the following important inequality holds \begin{eqnarray} \label{ineq} |{l{l^{'}}_{mix}}|^2 & \leq & {ll}_{mix}\:\:{l^{'}l^{'}}_{mix}, \makebox[.5in] [c] { } l \neq l^{'}. \end{eqnarray} This implies that one might observe nonstandard effects in flavour-conserving processes even if they are absent in flavour-violating processes. \section{Interaction Lagrangians} \label{inter3} The charged and neutral current Lagrangians are obtained from the corresponding terms in the SM Lagrangian substituting for $\nu_{l}$ from Eq. \ref{alter}. The charged current Lagrangian is given by \begin{eqnarray} \label{ccur} {\cal L}_{cc} & = & \frac{1}{2 \sqrt{2}} g_{2} W^{\mu} \sum_{l=e, \mu ,\tau} \Big\{ \; \sum_{i} \bar {l} \gamma_{\mu} (1-\gamma_{5}) \big(K_{L}\big)_{li} \nu^{'}_{i} + \sum_{a} \bar{l} \gamma_{\mu} (1-\gamma_{5}) \nonumber \\ & \times & \big(K_{H}\big)_{la} N_{a} \Big\} + h.c. \end{eqnarray} and the neutral current Lagrangian as \begin{eqnarray} {\cal L}_{nc} & = & \frac{g_{2}}{4c_{W}} Z^{\mu} \sum_{i,a} \bar{\nu_{i}}^{'} {(K_{L}^{\dagger}K_{H})}_{ia} \gamma_{\mu} (1-\gamma_{5})N_{a} + h.c. 
\nonumber \\ & + & \frac{g_{2}}{4c_{W}} Z^{\mu} \sum_{a,b} \bar{N_{a}} {(K_{H}^{\dagger}K_{H})}_{ab} \gamma_{\mu} (1-\gamma_{5})N_{b} \nonumber \\ & + & \frac{g_{2}}{4c_{W}} Z^{\mu} \sum_{i,j} \bar{\nu_{i}}^{'} {(K_{L}^{\dagger}K_{L})}_{ij} \gamma_{\mu} (1-\gamma_{5})\nu_{j}^{'}. \end{eqnarray} We will also need Lagrangians with neutrinos and NHL's interacting with the Higgs $H$ and with unphysical Higgs $\phi^{+},\phi^{-}$ and $\chi$. The starting point is the Yukawa Lagrangian describing the interactions of neutrino fields $\nu_{l}, n_{R}$ with the Higgs doublet~$\tilde{\Phi}$, \begin{eqnarray} \label{yukawa1} {\cal L} & = & - (\overline{\nu_{l}}\:\:\overline{l_{L}})\:\tilde h \: \tilde{\Phi}\:n_{R} + h.c., \end{eqnarray} where $\tilde h$ is a matrix (in family space) of Yukawa couplings. The $S_{L}$ field does not couple to $\tilde{\Phi}$, but rather might couple to a new $SU(2)_{L} \times U(1)_{Y}$ singlet Higgs field (responsible also for mass $M$) present in some superstring models \cite{bernabeu1}. We do not introduce such a field. 
The physical Higgs part, derived in Appendix \ref{Becko}, is given by \begin{eqnarray} {\cal L}_{H} & = & - \frac{g_{2}}{2 M_{W}}\overline{N}(K_{H}^{\dagger}K_{H}) M_{N} N H \nonumber \\ & - & \frac{g_{2}}{2 M_{W}}\overline{\nu^{'}}(K_{L}^{\dagger}K_{H})M_{N} \frac{1+\gamma_{5}}{2}N H \nonumber \\ & - & \frac{g_{2}}{2 M_{W}}\overline{N}(K_{H}^{\dagger}K_{L})M_{N} \frac{1-\gamma_{5}}{2}\nu^{'} H, \end{eqnarray} the unphysical neutral Higgs $\chi$ part, \begin{eqnarray} {\cal L}_{\chi} & = & + i \frac{g_{2}}{2 M_{W}} \overline{N}(K_{H}^{\dagger}K_{H}) M_{N} \gamma_{5} N \chi \nonumber \\ & + & i \frac{g_{2}}{2 M_{W}}\overline{\nu^{'}}(K_{L}^{\dagger}K_{H})M_{N} \frac{1+\gamma_{5}}{2}N \chi \nonumber \\ & - & i \frac{g_{2}}{2 M_{W}}\overline{N}(K_{H}^{\dagger}K_{L})M_{N} \frac{1-\gamma_{5}}{2}\nu^{'} \chi, \end{eqnarray} and the unphysical charged Higgs $\phi^{+},\phi^{-}$ parts \begin{eqnarray} {\cal L}_{\phi^{-}} & = & + \frac{g_{2}}{\sqrt{2} M_{W}}\overline{e_{L}}K_{H} M_{N}N_{R}\phi^{-} + O\big(\frac{m_{l}}{M_{W}}\big) + h.c. \;. \end{eqnarray} Feynman rules corresponding to these Lagrangians are listed in Appendix \ref{Cecko}. \section{Review of existing constraints on NHL's} \label{review3} Constraints on neutral heavy lepton masses and mixings come from three different sources. {\bf i)} First, there is the possibility of direct production of NHL's. 
At $e^{+}e^{-}$ colliders such as LEP I or SLC, they could be produced in Z decays: \footnote{In the rest of this work we drop the prime from $\nu^{'}$.} \begin{eqnarray} Z \rightarrow N_{a} + \nu \end{eqnarray} and subsequently decay via neutral or charged currents: \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.5) \put(0.1,+0.125){\mbox{\epsfxsize=5.5in\epsffile{rev1.eps}}} \end{picture} \end{center} The rate for $Z$ decays into an NHL and a light neutrino has been given previously \cite{Dittmar} as \begin{eqnarray} \Gamma(Z \rightarrow N_{a} + \nu) & = & a_{mix} (1-\frac{M_{N_{a}}^{2}}{M_{Z}^{2}})(1+\frac{M_{N_{a}}^{2}}{2 M_{Z}^{2}})\Gamma(Z \rightarrow \nu + \nu), \end{eqnarray} where \begin{eqnarray} a_{mix} & = & \sum_{l=e,\mu,\tau} {|\left(K_{H}\right)_{la}|}^{2}. \end{eqnarray} The subsequent NHL decay rate (for $M_{N} \leq M_{W}$) is then given by \begin{eqnarray} \Gamma_{N} & = & a_{mix} (\frac{M_{N}}{m_{\mu}})^{5} \Phi_{l}\Gamma_{\mu}, \end{eqnarray} where $\Gamma_{\mu}$ is the muon decay rate and $\Phi_{l}$ is the effective number of decay channels available to the NHL \cite{Gronau}. LEP data probe NHL mixings (more effectively than the indirect constraints discussed below) for NHL masses up to $80$ GeV \cite{Dittmar}. NHL production at $pp$ supercolliders was studied in Ref. \cite{Gour}. It was concluded that the CERN Large Hadron Collider (LHC) has the potential to push the limits on $a_{mix}$ below the LEP constraints for NHL masses up to $110$ GeV. {\bf ii)} Second, there are constraints on NHL mixing parameters from a variety of low energy experiments and from experiments at LEP I where neutral heavy leptons are not directly present.
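As an aside, the mass dependence of the direct-production formulas in i) above can be made concrete with a few lines of arithmetic; the sketch below uses purely illustrative placeholder values of $a_{mix}$ and $\Phi_{l}$, not fitted numbers:

```python
# Kinematic factor of Gamma(Z -> N_a + nu) relative to Gamma(Z -> nu nu),
# and the (M_N/m_mu)^5 scaling of the NHL width.  The a_mix and Phi_l
# arguments below are purely illustrative placeholders.
GAMMA_MU = 3.0e-19   # muon width in GeV (hbar / 2.197 microseconds), approximate
M_MU = 0.1057        # muon mass in GeV
M_Z = 91.19          # Z mass in GeV

def z_suppression(m_n):
    """Factor (1 - M_N^2/M_Z^2)(1 + M_N^2/(2 M_Z^2)) multiplying Gamma(Z -> nu nu)."""
    x = (m_n / M_Z) ** 2
    return (1.0 - x) * (1.0 + x / 2.0)

def nhl_width(m_n, a_mix, phi_l):
    """Gamma_N = a_mix (M_N/m_mu)^5 Phi_l Gamma_mu, valid for M_N <= M_W."""
    return a_mix * (m_n / M_MU) ** 5 * phi_l * GAMMA_MU

print(z_suppression(0.0))   # no suppression for a massless NHL
print(z_suppression(M_Z))   # rate vanishes at the kinematic threshold
# A factor-10 increase in M_N raises the NHL width by 10^5:
print(nhl_width(40.0, 1e-3, 7.0) / nhl_width(4.0, 1e-3, 7.0))
```

The steep $M_{N}^{5}$ growth of $\Gamma_{N}$ is consistent with the prompt NHL decays assumed in the direct searches cited above.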
NHL's, however, do affect observables indirectly: owing to the unitarity properties of the mixing matrix $K$, a nonzero NHL mixing slightly reduces the couplings of light neutrinos from their SM values \footnote{For example, in the case of the $We\nu$ vertex, the coupling is changed from its SM value of $1$ to $K_{L}$; see Eq.~\ref{ccur}.}, thus affecting rates for nuclear $\beta$ decays, $\tau$ and $\pi$ decays, and for $Z$ decays. The following upper limits are consistent with experiment \cite{Nardi}: \begin{eqnarray} \label{limits1} ee_{mix} & \leq & 0.0071 \nonumber \\ \mu\mu_{mix} & \leq & 0.0014 \nonumber \\ \tau\tau_{mix} & \leq & 0.033 \end{eqnarray} The limit on $\tau\tau_{mix}$ is improved to $\leq 0.024$ if the invisible width of the Z boson is included in the analysis \cite{Nardi}. The limits in Eq. \ref{limits1} are model independent and hold for any value of the NHL mass. They arise from a global analysis of results including lepton universality experiments, CKM matrix unitarity tests, $W$ mass measurements and neutral current data from LEP I experiments. Note that the LEP I neutral current data analysis did not include NHL loop effects but, rather, only coupling constant modifications due to mixing. We consider NHL loop effects in this work. Since the limit on the parameter $\tau\tau_{mix}$, being the least stringent, plays the most important role in our analysis, we will pay further attention to its source. It comes from the $\mu - \tau$ universality test based on the $\tau$ leptonic decays compared to the $\mu$ leptonic decays. The result of the test is given as the ratio of the couplings of $\tau$ and $\mu$ to the W boson, $g_{\tau}/g_{\mu}$ (in the SM we have $g_{\tau} = g_{\mu} = g_{2}$).
The ratio is found from \begin{eqnarray} \frac{\Gamma(\tau \rightarrow e \nu \nu) / \Gamma^{SM}(\tau \rightarrow e \nu \nu)} {\Gamma(\mu \rightarrow e \nu \nu) / \Gamma^{SM}(\mu \rightarrow e \nu \nu)} & = & \Big(\frac{g_{\tau}}{g_{\mu}}\Big)^{2} \; = \; \frac{1-\tau\tau_{mix}}{1-\mu\mu_{mix}}. \end{eqnarray} Setting $\mu\mu_{mix}=0$, we get \begin{eqnarray} \tau\tau_{mix} & = & 1 - \Big(\frac{g_{\tau}}{g_{\mu}}\Big)^{2}, \end{eqnarray} with \cite{Nardi} \begin{eqnarray} \Big(\frac{g_{\tau}}{g_{\mu}}\Big)^{2} & = & 0.989 \pm 0.016. \end{eqnarray} {\bf iii)} Finally, the NHL masses and mixings can be constrained via their contribution in loops to various processes. The calculation, at the one-loop level of perturbation theory, is naturally more involved than the mostly tree-level considerations required for the direct and indirect constraints. In return, we can probe regions of the mixings vs NHL mass parameter space that are currently inaccessible to the direct and indirect methods. For example, as we will see, we can place upper limits on the NHL mass. We caution, though, that these limits depend on the mixings and will be relaxed should tighter bounds on the mixings be achieved. Still, this is an improvement over the direct and indirect methods, which are blind to NHL masses larger than $M_{Z}$. There are two classes of these processes, lepton flavour-violating and lepton flavour-conserving. Lepton flavour-violating decays have been considered a prime candidate for the manifestation of new physics for many decades.
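As a quick cross-check of the numbers quoted in ii), the central value of $\tau\tau_{mix}$ and the combined off-diagonal limits of Eq. \ref{limits2} can be reproduced numerically, assuming (our reading of Eq. \ref{ineq}) that the inequality has the Schwarz form $|l_{1}l_{2\,mix}| \leq \sqrt{l_{1}l_{1\,mix}\;l_{2}l_{2\,mix}}$:

```python
import math

# Diagonal mixing limits quoted in Eq. (limits1):
EE, MUMU, TAUTAU = 0.0071, 0.0014, 0.033

# Central value of tautau_mix from the mu-tau universality test:
G_RATIO_SQ = 0.989                  # (g_tau/g_mu)^2, central value
tautau_central = 1.0 - G_RATIO_SQ   # = 0.011; the quoted bound 0.033 includes the error

# Off-diagonal limits, assuming Eq. (ineq) has the Schwarz form
# |l1l2_mix| <= sqrt(l1l1_mix * l2l2_mix):
emu = math.sqrt(EE * MUMU)
mutau = math.sqrt(MUMU * TAUTAU)
etau = math.sqrt(EE * TAUTAU)
print(round(tautau_central, 3))                           # 0.011
print(round(emu, 4), round(mutau, 4), round(etau, 3))     # 0.0032 0.0068 0.015
```

The three square roots indeed reproduce the limits $0.0032$, $0.0068$ and $0.015$ quoted in Eq. \ref{limits2}.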
They include so far unobserved, so-called rare decays of $\mu$ and $\tau$ leptons and $\mu\:-\:e$ conversion in nuclei (A,Z): \begin{equation} \label{fvdecay3} \begin{array}{c} \mu \rightarrow e \gamma,\;\;\;\;\;\tau \rightarrow e \gamma,\;\;\;\;\;\tau \rightarrow \mu \gamma, \;\;\;\;\;\;\; \\ \mu, \tau \rightarrow e e^{+} e^{-},\;\;\;\;\;\tau \rightarrow \mu e^{+} e^{-}, \;\;\;\;\;\tau \rightarrow e \mu^{+} \mu^{-}, \;\;\;\;\;\tau \rightarrow \mu \mu^{+} \mu^{-}, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ \mu^{-}(A,Z) \rightarrow e^{-}(A,Z), \nonumber \end{array} \end{equation} and the Z boson decays \begin{eqnarray} \label{fvz3} Z \rightarrow e^{\pm}\mu^{\mp},\;\;\;\;\;Z \rightarrow e^{\pm}\tau^{\mp},\;\;\;\;\;Z \rightarrow \mu^{\pm}\tau^{\mp}. \end{eqnarray} We will discuss these decays in Chapter 5; here we at least mention the decay $\mu \rightarrow e \gamma$, which has undergone intensive experimental scrutiny; its stringent upper limit places a tight constraint on the mixing parameter $e\mu_{mix}$. For illustration, one of the diagrams contributing to $\mu \rightarrow e \gamma$ is shown in Fig. \ref{muegama}. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.8) \put(1.6,+0.125){\mbox{\epsfxsize=2.5in\epsffile{rev2.eps}}} \end{picture} \end{center} \caption{A one-loop diagram leading to $\mu \rightarrow e \gamma$ decay} \label{muegama} \end{figure} In Sec. \ref{fvple} we will show that $\mu \rightarrow e \gamma$ gives the following upper limit on the mixing parameter $e \mu_{mix}$: \begin{eqnarray} \label{limits3} |e\mu_{mix}| & \leq & 0.00024. \end{eqnarray} By combining the indirect constraints obtained from the global analysis (see Eq.~\ref{limits1}) with the inequality relations of Eq. \ref{ineq}, one obtains the following upper limits on the mixing factors \begin{eqnarray} \label{limits2} |e\mu_{mix}| & \leq & 0.0032 \nonumber \\ |\mu\tau_{mix}| & \leq & 0.0068 \nonumber \\ |e\tau_{mix}| & \leq & 0.015.
\end{eqnarray} For the mixings $\mu\tau_{mix}$ and $e\tau_{mix}$, these are the strongest available constraints. The second class consists of lepton flavour-conserving processes with NHL's in loops. The main part of this thesis (Chapters 6 and 7) is devoted to two of these processes: $Z \rightarrow l^{+}l^{-}$, with the partial leptonic width $\Gamma_{ll}$ and the universality breaking parameter $U_{br}$ as observables; and $\mu \rightarrow e \nu_{e} \nu_{\mu}$, with the W mass $M_{W}$ as observable. We will argue that the flavour-conserving processes can be competitive with and even have some advantages over the flavour-violating ones. \chapter{Standard model at the one-loop level} As a prerequisite for the one-loop calculations in Chapters 5, 6 and 7, we discuss here the standard model of electroweak interactions at the one-loop level. The classical electroweak Lagrangian was specified in Sec. \ref{classical1}. One-loop corrections require treatment within the framework of quantum field theory: the classical Lagrangian has to be quantized and extended to include some new terms. We present these quantum field theoretical `amendments' to the classical Lagrangian in Sec. \ref{quant1}. One-loop corrections calculated from the full Lagrangian typically suffer from divergences. A systematic way of removing these divergences, the renormalization of the SM, is discussed in Sec. \ref{renor}. There are many different schemes used to renormalize the SM. We opted for the on-shell renormalization scheme of W. Hollik \cite{key6,key9}, introduced in Sec. \ref{onshell1}. \section{Quantization} \label{quant1} Quantum field theory requires two more terms to be added to the classical Lagrangian, ${\cal L}_{{\it gfix}}$ and ${\cal L}_{{\it ghost}}$: \begin{eqnarray} {\cal L}_{EW} = {\cal L}_{G} + {\cal L}_{F} + {\cal L}_{H} + {\cal L}_{{\it gfix}} + {\cal L}_{{\it ghost}}.
\end{eqnarray} The gauge fixing term ${\cal L}_{{\it gfix}}$ is required in order to define meaningful propagators of the gauge fields, which are otherwise singular \cite{key5}. The linear gauge fixing of the 't~Hooft type is given by \cite{key6} \begin{eqnarray} {\cal L}_{{\it gfix}} = -\frac{1}{2}\left(F_{\gamma}^{2} + F_{Z}^{2} + 2F_{+}F_{-}\right), \end{eqnarray} where \begin{eqnarray} F_{\pm} & = & \frac{1}{\sqrt{\xi^{W}}}\left(\partial^{\mu}W_{\mu}^{\pm} \mp i M_{W}\xi^{W}\phi^{\pm}\right), \nonumber \\ F_{Z} & = & \frac{1}{\sqrt{\xi^{Z}}}\left(\partial^{\mu}Z_{\mu} - M_{Z}\xi^{Z} \chi\right), \nonumber \\ F_{\gamma} & = & \frac{1}{\sqrt{\xi^{\gamma}}}\partial^{\mu}A_{\mu}, \end{eqnarray} and $\xi^{W}, \xi^{Z}, \xi^{\gamma}$ are gauge fixing parameters. In the 't Hooft type gauge the vector boson propagators have the form ($V = W, Z$) \begin{eqnarray} \label{bosonprop} \frac{i}{k^{2}-M_{V}^{2}+i\epsilon}\left(-g^{\mu\nu} + \frac{\left(1-\xi^{V}\right) k^{\mu}k^{\nu}}{k^{2}-\xi^{V}M_{V}^{2} + i\epsilon}\right), \end{eqnarray} and propagators of the unphysical Higgs particles $\phi^{\pm},\chi$ are given by \begin{eqnarray} \label{unphysprop} \frac{i}{k^{2}-\xi^{V}M_{V}^{2}+i\epsilon}. \end{eqnarray} The unitary gauge is defined by $\xi^{V}\rightarrow \infty$. We can see that in this gauge the unphysical Higgs fields freeze out (their propagators vanish) and only physical particles appear in Feynman diagrams. In this work we use the Feynman gauge defined by $\xi^{V} = \xi^{\gamma} = 1$. In this gauge unphysical Higgs fields are present, but the positive trade-off is the particularly simple form of the gauge boson propagators of Eq. \ref{bosonprop}. The ${\cal L}_{{\it ghost}}$ term \cite{key5,key6,key7} is specific to nonabelian theories where the one-loop self-energies of the gauge bosons computed from ${\cal L}_{G} + {\cal L}_{F} + {\cal L}_{H} + {\cal L}_{{\it gfix}}$ do not satisfy gauge invariance and unitarity.
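The $\xi^{V}$ behaviour described above (the $k^{\mu}k^{\nu}$ term of Eq. \ref{bosonprop} dropping out in the Feynman gauge, and the unphysical Higgs propagator of Eq. \ref{unphysprop} vanishing in the unitary gauge limit) can be checked with a short numerical sketch; the kinematic values used are arbitrary:

```python
# Gauge-parameter behaviour of the 't Hooft-gauge propagators (overall i and
# Lorentz structure dropped; the k^2 and M_V^2 values are arbitrary).
K2 = 10.0      # k^2, illustrative
M2 = 6464.0    # M_V^2, illustrative (roughly M_W^2)

def kk_coeff(xi):
    """Coefficient (1 - xi)/(k^2 - xi M_V^2) of the k^mu k^nu term in Eq. (bosonprop)."""
    return (1.0 - xi) / (K2 - xi * M2)

def unphys_higgs_prop(xi):
    """Unphysical-Higgs propagator 1/(k^2 - xi M_V^2) of Eq. (unphysprop)."""
    return 1.0 / (K2 - xi * M2)

assert kk_coeff(1.0) == 0.0                    # Feynman gauge: k^mu k^nu term drops out
assert abs(unphys_higgs_prop(1e12)) < 1e-12    # unitary gauge limit: freeze-out
print("gauge checks passed")
```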
${\cal L}_{{\it ghost}}$ removes this difficulty with scalar anticommuting ghost fields $u^{\pm},u^{Z},u^{\gamma}$ (Faddeev-Popov ghosts) which appear naturally in Faddeev-Popov quantization based on the path-integral method \cite{key8}. The $u^{\pm},u^{Z}$ propagators are the same as the propagators of the unphysical Higgs fields, Eq. \ref{unphysprop}, while the $u^{\gamma}$ propagator is given by \begin{eqnarray} \frac{i}{k^{2}+i\epsilon}. \end{eqnarray} \section{Renormalization} \label{renor} In ${\cal L}_{EW}$ there are five independent parameters (showing only lepton Yukawa couplings $h_{i}$ and counting them as one): \begin{eqnarray} g_{2},\;g_{1},\; \lambda,\;v,\;h_{i}. \nonumber \end{eqnarray} After symmetry breaking we can replace them by an equivalent set (counting $m_{f}$ as one) \begin{eqnarray} \label{parameters} e,\;M_{W},\;M_{Z},\;M_{H},\;m_{f}, \end{eqnarray} where $e^{2}/4\pi = \alpha$ and masses are those of $W,Z$ and Higgs bosons and of a fermion, respectively. Originally these parameters were identified with their physical values ($\alpha = 1/137$, $M_{Z} = 91.1884$ GeV, etc.). However, loop corrections calculated in terms of the physical values of these parameters diverge, and the parameters themselves are modified: their physical values are shifted by an infinite amount. Renormalization takes care of these (so-called ultraviolet) infinities through the reexamination of the meaning of the Lagrangian parameters in Eq. \ref{parameters}. We will illustrate the procedure using the electric charge $e$. In an effort to get to the core of the one-loop renormalization of the SM, we present at times simplified versions of the SM formulae. The reader is made aware of the simplifications in a series of footnotes. \subsection{Electric charge renormalization} The piece of ${\cal L}_{EW}$ defining the electric charge is the interaction term of the QED Lagrangian \begin{eqnarray} {\cal L}_{em} & = & e\:\overline{l}\gamma_{\mu} l A^{\mu}.
\end{eqnarray} We identify $\alpha = e^{2}/4\pi$ with its physical value $1/137$ \footnote{In this section we enforce $\alpha = 1/137.036$ using notation $\alpha_{137}, e_{137}$} measured in low-energy (Thomson limit $k^{2} = (p-q)^{2} \rightarrow 0$, see diagram below) electron scattering. The photon - charged lepton vertex corresponding to ${\cal L}_{em}$ is given by \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.2) \put(1.25,+0.125){\mbox{\epsfxsize=1.3in\epsffile{ren1.eps}}} \put(3.5,0.6){$\equiv \Gamma^{0}= i e_{137} \gamma_{\mu}.$} \end{picture} \end{center} One-loop corrections to this vertex, calculated in terms of $e_{137}$, are \footnote{Strictly speaking, lepton self-energies, $\gamma$ self-energy and $\gamma - Z$ mixing also contribute to this vertex at the one-loop level. They are, however, each renormalized independently (in the on-shell scheme, for example, they vanish in the Thomson limit) and they will not change the essence of our arguments.} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.5) \put(.1,+0.125){\mbox{\epsfxsize=3.0in\epsffile{ren2.eps}}} \put(3.0,0.6){$\equiv \Gamma^{1}= - i e_{137} \gamma_{\mu}\left[\Lambda_{V}(0) + F_{V}(k^{2})\right] + ...,$} \end{picture} \end{center} where ellipses ... represent terms with Lorentz structure different from $\gamma_{\mu}$ \footnote{$\gamma_{\mu}\gamma_{5}$ and $(p+q)_{\mu}$, renormalized independently}, $F_{V}$ is the form factor split off so that $F_{V}(0) = 0$ and $\Lambda_{V}(0)$ is given by \footnote{This form is exact only for the diagram with $\gamma$ in the loop, the other two diagrams have the divergence multiplied by some combinations of $s_{W}^{2}$. 
There is also an infrared divergence present in the former diagram that we do not deal with here.} \begin{eqnarray} \label{lambda} \Lambda_{V}(0) & = & -\frac{\alpha_{137}}{4\pi}\left( \frac{2}{\epsilon} + finite \;\;constants\right), \end{eqnarray} where $2/\epsilon$ is the ultraviolet divergence ($\epsilon \rightarrow 0$) regularized by dimensional regularization (see Appendix \ref{Decko}). The one-loop corrected vertex $\Gamma$ is thus given by \begin{eqnarray} \label{gamma} \Gamma = \Gamma^{0} + \Gamma^{1} & = & i e_{137} \gamma_{\mu}\left[1 - \Lambda_{V}(0) - F_{V}(k^{2})\right] + ...\;. \end{eqnarray} At low energies (Thomson limit $k^{2}\rightarrow 0$) \begin{eqnarray} \label{thomson} \Gamma(k^{2}\rightarrow 0) & = & i e_{137} \gamma_{\mu}\left[1 - \Lambda_{V}(0)\right] + ..., \end{eqnarray} hence the charge is changed from its tree-level value in $\Gamma^{0}$ by an infinite amount: \begin{eqnarray} \label{charge} e_{137} & \rightarrow & e_{137}\left[1 - \Lambda_{V}(0)\right]. \end{eqnarray} To explain this difficulty, we note that the quantity measured by experiment as $e_{137}$ is a {\it loop-corrected} charge. But the right-hand side of Eq.~\ref{charge} {\it is} a (one-loop) corrected charge; therefore the quantity multiplying the $1 - \Lambda_{V}(0)$ factor, though denoted $e_{137}$, cannot itself be the physical charge $e_{137}$. This implies the charge in the electromagnetic Lagrangian cannot be identified with its physical value. The correct approach is to admit that the independent parameters appearing in ${\cal L}_{EW}$ are in fact `bare', unrenormalized quantities \begin{eqnarray} e^{b},\;M_{W}^{b},\;M_{Z}^{b},\;M_{H}^{b},\;m_{f}^{b}, \end{eqnarray} different from the physical values. The vertex $\Gamma$ is then given by (compare with Eq.~\ref{gamma}) \begin{eqnarray} \label{gamabare} \Gamma & = & \Gamma^{0} + \Gamma^{1} \; = \; i e^{b} \gamma_{\mu}\left[1 - \Lambda_{V}(0) - F_{V}(k^{2})\right] + ...
\nonumber \\ & = & i e^{b} \gamma_{\mu}\left[1 + \frac{\alpha^{b}}{4\pi}\left( \frac{2}{\epsilon} + finite \;\;constants\right) - F_{V}(k^{2})\right] + ... \;. \end{eqnarray} Bare parameters are unambiguously fixed by the requirement that they lead to correct physical values. For the electric charge we demand that (in view of the discussion above) \begin{eqnarray} \label{thomsonbare} \Gamma(k^{2}\rightarrow 0) = i e^{b} \gamma_{\mu}\left[1 - \Lambda_{V}(0) \right] + ... & = & ie_{137}\gamma_{\mu}, \end{eqnarray} or equivalently \begin{eqnarray} \label{chargebare} e^{b}\left[1-\Lambda_{V}(0)\right] & = & e_{137}. \end{eqnarray} From here (after plugging in $\Lambda_{V}(0)$ calculated in terms of $\alpha^{b}$) we can solve for $e^{b}$, \footnote{We work to order $O(e_{137}\alpha_{137})$ in Sec. \ref{renor}. Higher order terms are neglected.} \begin{eqnarray} \label{eb} e^{b} & = & e_{137}\left[1-\frac{\alpha_{137}}{4\pi}\left( \frac{2}{\epsilon} + finite\;\; constants\right)\right]. \end{eqnarray} Using Eqs. \ref{chargebare} and \ref{eb}, Eq. \ref{gamabare} can be written as \begin{eqnarray} \Gamma & = & i e_{137}\gamma_{\mu}\left[1 - F_{V}(k^{2})\right] + ..., \end{eqnarray} where $F_{V}(k^{2})$ is of the order $O(\alpha_{137})$. The infinity is thus removed from the vertex $\Gamma$, i.e., the electromagnetic vertex (and the charge) is renormalized. \subsection{Renormalization schemes} The loop calculations are rarely carried out in terms of bare parameters. A widely used technique is to split the bare charge $e^{b}$, the bare fermion mass $m_{f}^{b}$ and the bare boson mass $M^{b}$ as \begin{eqnarray} \label{split} e^{b} & = & \hat{e} + \delta e, \nonumber \\ m_{f}^{b} & = & \hat{m_{f}} + \delta m_{f}, \nonumber \\ {M^{b}}^{2} & = & \hat{M}^{2} + \delta M^{2}, \end{eqnarray} where $\hat{e}, \hat{m_{f}}, \hat{M}$ are the renormalized (finite) charge, fermion mass, and gauge boson mass, and $\delta e, \delta m_{f}, \delta M^{2}$ are infinite corrections, the so-called counterterms.
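The order counting behind Eq. \ref{eb} (solving Eq. \ref{chargebare} for $e^{b}$ and keeping terms to $O(\alpha_{137})$) can be illustrated numerically; here $\Lambda_{V}(0)$ is replaced by a small real number, since its actual (divergent) value plays no role in the bookkeeping:

```python
# Bookkeeping check for Eq. (eb): the exact solution e_b = e_137/(1 - Lambda)
# agrees with the first-order solution e_137 (1 + Lambda) up to O(Lambda^2).
# Lambda stands in for Lambda_V(0); its divergent part is irrelevant here.
E137 = 0.30286   # e = sqrt(4 pi / 137), approximate

def e_bare_exact(lam):
    return E137 / (1.0 - lam)

def e_bare_first_order(lam):
    return E137 * (1.0 + lam)

for lam in (1e-2, 1e-3, 1e-4):
    diff = abs(e_bare_exact(lam) - e_bare_first_order(lam))
    # ratio tends to 1, i.e. the neglected piece is exactly O(Lambda^2):
    print(lam, diff / (E137 * lam ** 2))
```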
This split introduces a degree of freedom, as there is no unique way to perform it. Renormalized charge and mass can take on different finite values including the physical ones. This freedom leads in practice to many different ways of splitting the bare parameters, i.e., to many different renormalization schemes (RS). The difference between two renormalized charges coming from two different RS is small, of the order $O(\alpha_{137})$, as every renormalized charge is chosen to be equal to $e_{137}$ in the lowest order: \begin{eqnarray} \hat{e} & = & e_{137}\left[1 + O(\alpha_{137})\right]. \end{eqnarray} With the substitution of Eq. \ref{split}, the ${\cal L}_{em}$ becomes \begin{eqnarray} {\cal L}_{em} = e^{b}\;\overline{l}\gamma_{\mu} l A^{\mu} & = & {\hat e}\;\overline{l}\gamma_{\mu} l A^{\mu} + \delta e\;\overline{l}\gamma_{\mu} l A^{\mu}, \end{eqnarray} where the second term is called the counterterm Lagrangian. The calculation of the vertex $\Gamma$ now leads to \footnote{Note $\delta e/{\hat e}$ is of the order $O(\hat{\alpha}) = O(\alpha_{137})$.} \footnote{From now on we will use $\Gamma = i {\hat e} \gamma_{\mu}\left[1 - \Lambda_{V}(0) - F_{V}(k^{2})\right] $ for the unrenormalized vertex, and ${\hat \Gamma}$ for the renormalized vertex, ${\hat \Gamma} = \Gamma +$ counterterm. The counterterm contains besides $\delta e$ also wave function renormalization factors, not considered until Sec. \ref{frenor}.} \begin{eqnarray} \label{gammacounter} {\hat \Gamma} = \Gamma^{0} + \Gamma^{1} & = & i {\hat e} \gamma_{\mu}\left[1 - \Lambda_{V}(0) + \delta e/{\hat e} - F_{V}(k^{2})\right] + ...\;. \end{eqnarray} The conditions of Eqs. \ref{thomsonbare}, \ref{chargebare} are now given by \begin{eqnarray} \label{thomsoncounter} {\hat \Gamma}(k^{2}\rightarrow 0) = i {\hat e} \gamma_{\mu}\left[1 - \Lambda_{V}(0) + \delta e/{\hat e} \right] + ... & = & i e_{137} \gamma_{\mu}, \\ \label{chargecounter} {\hat e}\left[1-\Lambda_{V}(0)+\delta e/{\hat e} \right] & = & e_{137}. 
\end{eqnarray} At this point we can illustrate two different approaches to the choice of renormalization scheme. {\bf i)} If we prefer to use some particular value of ${\hat e}$ in the calculation, say \begin{eqnarray} {\hat e} = e_{137}\left[1 + b\;\alpha_{137}\right], \end{eqnarray} the counterterm $\delta Z \equiv \delta e/{\hat e}$ is consequently fixed by (see Eq. \ref{chargecounter}) \begin{eqnarray} \delta Z \equiv \delta e/{\hat e} & = & \frac{e_{137}}{{\hat e}} + \Lambda_{V}(0) - 1. \end{eqnarray} In fact, the most popular scheme in electroweak calculations is an on-shell scheme (OS) where \begin{eqnarray} \label{OS} {\hat e} \equiv {\hat e}^{OS} & = & e_{137}\;\;\;\;\;(b=0), \nonumber \\ \delta Z^{OS} \equiv \delta e/{\hat e}^{OS} & = & \Lambda_{V}(0)\; = \; -\frac{\alpha_{137}}{4\pi}\left( \frac{2}{\epsilon} + finite \;\;constants\right), \end{eqnarray} and all masses assume their physical, on-shell values (${\hat M_{Z}}^{OS} = M_{Z} = 91.1884$ GeV, etc.). Eq. \ref{thomsoncounter} with ${\hat e} = e_{137}$, \begin{eqnarray} {\hat \Gamma}(k^{2}\rightarrow 0) & = & i e_{137} \gamma_{\mu}, \end{eqnarray} is called an on-shell renormalization condition. {\bf ii)} A different approach is to start with fixing the counterterm $\delta e$ instead of ${\hat e}$. For instance, we may require that $\delta e$ ($\delta m_{f}, \delta M^{2}$ likewise) only contain infinities (no finite terms) \footnote{Compare this with OS where $\delta e/\hat{e}^{OS} = \Lambda_{V}(0)$ contains both infinite and finite terms.}. This is the essence of the minimal subtraction (MS) scheme.
One chooses \begin{eqnarray} \label{MS} \delta Z^{MS} \equiv \delta e/{\hat e}^{MS} & = & -\frac{\hat{\alpha}^{MS}}{4\pi}\frac{2}{\epsilon} \; \doteq \; -\frac{\alpha_{137}}{4\pi}\frac{2}{\epsilon}, \end{eqnarray} and the charge is consequently given by \begin{eqnarray} {\hat e}\equiv {\hat e}^{MS} & = & \frac{e_{137}}{[1-\Lambda_{V}(0)+ \delta Z^{MS}]} \;\doteq\; \frac{e_{137}}{1+\frac{\alpha_{137}}{4\pi}\times finite \;\;\;terms}, \end{eqnarray} hence the electric charge ${\hat e}^{MS}$ differs from $e_{137}$ and likewise masses $\hat{m_{f}}^{MS}, \hat{M}^{MS}$ do not assume their on-shell values. The MS scheme is frequently used in quantum chromodynamics where on-shell quark masses are not well defined anyway. \subsection{Mass renormalization} The analysis performed above for the electric charge is essentially valid also for masses. The difference is in the form of renormalization condition. Masses can be defined as the poles of the propagators. For instance, gauge boson propagators $V = W,Z$ have poles at bare mass ${M_{V}^{b}}^{2}$: \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.2) \put(0.7,+0.6){\mbox{\epsfxsize=1.3in\epsffile{ren4.eps}}} \put(2.9,0.6){$\equiv P^{0}= \frac{-ig^{\mu\nu}}{k^{2}-{M_{V}^{b}}^{2} +i\epsilon}.$} \end{picture} \end{center} The one-loop correction to $P^{0}$ is given by \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.2) \put(0.7,+0.3){\mbox{\epsfxsize=1.3in\epsffile{ren5.eps}}} \put(2.9,0.6){$\equiv P^{1}= \frac{-ig^{\mu\alpha}}{k^{2}-{M_{V}^{b}}^{2} +i\epsilon} (-i\:\Sigma_{V}\:g_{\alpha \beta})\frac{-ig^{\beta\nu}} {k^{2}-{M_{V}^{b}}^{2}+i\epsilon}.$} \end{picture} \end{center} For a close-up of the blob $-i\:\Sigma_{V}g_{\alpha \beta}$ (unrenormalized vector boson self-energy tensor), see Figs. \ref{zfd}, \ref{wfd} and the relevant discussion in Chapter 6. The one-loop corrected renormalized \footnote{Here, we mean renormalized as far as mass is concerned. 
There is still one divergence remaining in $\Sigma_{V}$ which will be removed only after the field renormalization, see Sec. \ref{frenor}. Therefore we will withhold the notation ${\hat P}$ until then.} propagator is thus (compare with Eq. \ref{gammacounter}) \begin{eqnarray} \label{propagator} P = P^{0} + P^{1} & = & \frac{-ig^{\mu\nu}}{k^{2}-{M_{V}^{b}}^{2}+i\epsilon} \left[1 - \frac{\Sigma_{V}}{k^{2}-{M_{V}^{b}}^{2}+i\epsilon}\right] \nonumber \\ & \cong & \frac{-ig^{\mu\nu}}{k^{2}-{M_{V}^{b}}^{2}+i\epsilon} \frac{1}{\left(1 + \frac{\Sigma_{V}}{k^{2}-{M_{V}^{b}}^{2}+i\epsilon}\right)} \; = \; \frac{-ig^{\mu\nu}}{k^{2}-{M_{V}^{b}}^{2}+\Sigma_{V}(k^{2})+i\epsilon} \nonumber \\ & = & \frac{-ig^{\mu\nu}}{k^{2}-{\hat M}_{V}^{2}-\delta M_{V}^{2} + \Sigma_{V}(k^{2})+i\epsilon}. \end{eqnarray} We demand that the poles (masses) of renormalized propagators remain at their physical values regardless of the choice of ${\hat M}_{V}$ or $\delta M_{V}^{2}$ (compare with Eq. \ref{thomsoncounter}): \begin{eqnarray} P(k^{2} \rightarrow M_{V}^{2}) \;=\; \frac{-ig^{\mu\nu}}{k^{2}-{\hat M}_{V}^{2}-\delta M_{V}^{2} + \Sigma_{V}(M_{V}^{2})+i\epsilon} & = & \frac{-ig^{\mu\nu}}{k^{2}-{\hat M}_{V}^{{OS}^{2}}+i\epsilon}.\;\;\;\; \end{eqnarray} To put the mass on shell we have to take \begin{eqnarray} \label{os} {\hat M}_{V} = {\hat M}_{V}^{OS} \; \equiv \; M_{V}, \end{eqnarray} or equivalently \begin{eqnarray} \label{orequiv} \Sigma_{V}(M_{V}^{2}) - \delta M_{V}^{2} = 0. \end{eqnarray} \subsection{Field renormalization} \label{frenor} For physical S-matrix elements, the renormalization of the five parameters in Eq.~\ref{parameters} is all that is required. However, if one wishes to also have finite Green functions, then the renormalization of the fields is also required (see, e.g., Ref. \cite{key6}). While the particle masses are given by the poles of the propagators, the normalization of the fields is given by the residues of the propagators. 
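The geometric-series step in Eq. \ref{propagator}, where $1 - \Sigma_{V}/(k^{2}-{M_{V}^{b}}^{2})$ is resummed into $1/(k^{2}-{M_{V}^{b}}^{2}+\Sigma_{V})$, holds up to terms of order $\Sigma_{V}^{2}$; a small numerical sketch with arbitrary values:

```python
# The Dyson/geometric-series step of Eq. (propagator): expanding
# 1/(D + S) as (1/D)(1 - S/D) is accurate up to O(S^2), with
# D = k^2 - M^2 and S = Sigma_V.  Values below are arbitrary.
def expanded(D, S):
    return (1.0 / D) * (1.0 - S / D)

def resummed(D, S):
    return 1.0 / (D + S)

D = 50.0   # k^2 - M^2, away from the pole
for S in (1.0, 0.1, 0.01):
    diff = abs(expanded(D, S) - resummed(D, S))
    print(S, diff / (S ** 2 / D ** 3))   # ratio close to 1: difference is O(S^2)
```

Algebraically the difference is $S^{2}/[D^{2}(D+S)]$, confirming that the two forms agree to the order kept in the text.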
For gauge boson fields, for example, we have: \begin{eqnarray} \frac{-i g^{\mu\nu}}{k^{2}-M_{V}^{2}+i\epsilon} \end{eqnarray} with the residue equal to one (ignoring $-i g^{\mu\nu}$). The residue (field normalization) is changed by the loop corrections. To show that, we expand $\Sigma_{V}(k^{2})$ about $k^{2} = M_{V}^{2}$: \begin{eqnarray} \Sigma_{V}(k^{2}) & = & \Sigma_{V}(M_{V}^{2}) + (k^{2}-M_{V}^{2})\Sigma_{V}^{'}(M_{V}^{2}) + ..., \nonumber \\ & & \Sigma_{V}^{'} \equiv \partial \Sigma_{V}/ \partial k^{2}, \end{eqnarray} and substitute it into the propagator of Eq. \ref{propagator}: \begin{eqnarray} P & = & \frac{-i g^{\mu\nu}}{k^{2}-{\hat M}_{V}^{2}-\delta M_{V}^{2} + \Sigma_{V}(M_{V}^{2})+(k^{2}-M_{V}^{2})\Sigma_{V}^{'}(M_{V}^{2}) + ... + i\epsilon}. \end{eqnarray} Applying the on-shell condition Eqs. \ref{os}, \ref{orequiv} we get \begin{eqnarray} P & = & \frac{-i g^{\mu\nu}}{k^{2}-M_{V}^{2} +(k^{2}-M_{V}^{2})\Sigma_{V}^{'}(M_{V}^{2}) + ... + i\epsilon} \nonumber \\ & \cong & \frac{-i g^{\mu\nu}}{(k^{2}-M_{V}^{2}+ i\epsilon)} \frac{1}{[1+\Sigma_{V}^{'}(M_{V}^{2}) + ...]}. \end{eqnarray} with the (divergent) residue $1/[1+\Sigma_{V}^{'}(M_{V}^{2})]$ at $k^{2} \rightarrow M_{V}^{2}$. The problem is fixed by the field counterterms generated by the substitution \begin{eqnarray} V_{\mu} & \rightarrow & Z_{V}^{1/2}V_{\mu} \;=\; (1+\delta Z_{V})^{1/2}V_{\mu} \; \doteq \; (1+\frac{1}{2} \delta Z_{V})V_{\mu}. \end{eqnarray} The field counterterm $ \delta Z_{V}$ modifies the propagator as follows \footnote{ \begin{eqnarray} M_{V}^{2} V_{\mu} V^{\mu} & \rightarrow & (1 + \delta Z_{V}) M_{V}^{2} V_{\mu} V^{\mu} , \nonumber \\ (\Box + M_{V}^{2})V_{\mu} & \rightarrow & (1 + \delta Z_{V}) (\Box + M_{V}^{2})V_{\mu}. \nonumber \end{eqnarray}} \begin{eqnarray} \label{hatp1} {\hat P} \;=\; \frac{1}{1+\delta Z_{V}}P \;=\; \frac{1}{(1+\delta Z_{V})} \frac{-i g^{\mu\nu}}{(k^{2}-M_{V}^{2}+ i\epsilon)} \frac{1}{[1+\Sigma_{V}^{'}(M_{V}^{2})]}. 
\end{eqnarray} To renormalize the fields (enforce their normalization) we demand that the residues of the field propagators be equal to one at the poles. This implies the following condition (higher-order terms neglected): \begin{eqnarray} \label{os2} \Sigma_{V}^{'}(M_{V}^{2})+\delta Z_{V} & = & 0. \end{eqnarray} It is easy to see from Eqs. \ref{propagator}, \ref{hatp1} that before being put on shell, ${\hat P}$ in terms of the renormalized self-energy ${\hat \Sigma}_{V}$ is given by \begin{eqnarray} \label{hatp} {\hat P} & = & \frac{-i g^{\mu\nu}}{k^{2}-\hat{M}_{V}^{2} +{\hat \Sigma}_{V}(k^{2})+i\epsilon}, \\ \label{hatsigma} {\hat \Sigma}_{V}(k^{2}) & = & \Sigma_{V}(k^{2})-\delta M_{V}^{2}+ \delta Z_{V}(k^{2}-\hat{M}_{V}^{2}), \end{eqnarray} so that the on-shell renormalization conditions Eqs. \ref{orequiv}, \ref{os2} become \begin{eqnarray} \label{sigmam} {\hat \Sigma}_{V}(M_{V}^{2}) & = & 0, \\ \label{parsigmam} \frac{\partial {\hat \Sigma}_{V}(M_{V}^{2})}{\partial k^{2}} & = & 0. \end{eqnarray} Before we go further, one remark is in order. So far we have been discussing a simplified renormalization of some parameters at the one-loop level. In the next section we will stay at the one-loop level; however, we will present the full set of counterterms and OS renormalization conditions required for the processes studied in this thesis. To prove the renormalizability of the SM, it has to be shown that the infinities one encounters in loop calculations to any order can be removed by a finite number of counterterms. This was done by 't Hooft in Refs. \cite{thooftsmren,thooftdim} for the general case of non-abelian theories with spontaneous symmetry breaking. \section{The on-shell scheme of W. Hollik} \label{onshell1} There are many renormalization schemes used in the calculation of loop corrections by different authors. They are distinguished in the first place by the choice of independent input parameters.
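The content of the on-shell conditions Eqs. \ref{sigmam}, \ref{parsigmam} can be illustrated with a toy self-energy, chosen (hypothetically) as ${\hat \Sigma}_{V}(k^{2}) = c\,(k^{2}-M_{V}^{2})^{2}$, which satisfies both conditions; the propagator then keeps its pole at $M_{V}^{2}$ with unit residue:

```python
# Toy check of the on-shell conditions (sigmam), (parsigmam): the
# hypothetical self-energy c (k^2 - M^2)^2 vanishes at k^2 = M^2 together
# with its first derivative, so the pole stays at M^2 with unit residue.
M2 = 8100.0   # M_V^2, illustrative
C = 0.37      # arbitrary coefficient

def sigma_hat(k2):
    return C * (k2 - M2) ** 2

def residue_estimate(eps):
    """(k^2 - M^2)/(k^2 - M^2 + Sigma-hat) evaluated at k^2 = M^2 + eps."""
    return eps / (eps + sigma_hat(M2 + eps))

for eps in (1.0, 1e-3, 1e-6):
    print(eps, residue_estimate(eps))   # tends to 1 as k^2 -> M^2
```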
The choice $e,\;M_{W},\;M_{Z},\;M_{H},\;m_{f}$ that we are using is only one of several possible. Given the set of input parameters, there are still infinitely many possibilities for choosing the renormalized quantities ${\hat e}$, ${\hat m}$. The OS scheme is the most popular and natural in the standard model of electroweak interactions. Even then, within the OS scheme itself, there are many different approaches to renormalization. For instance, some authors opt for field renormalization, others do not; and those who do use different numbers of field renormalization constants. In this thesis we follow the OS scheme ($e,M_{W},M_{Z},M_{H},m_{f}$) of Wolfgang Hollik \cite{key6,key9}. We introduce multiplicative renormalization constants for each free parameter and each symmetry multiplet of fields \footnote{Multiplicative renormalization and only one constant per multiplet guarantees the gauge invariance of the counterterm Lagrangian. To make a connection with Eq. \ref{split}, note that \begin{eqnarray} {\hat e} & \rightarrow & Z\;{\hat e}\;=\;(1+\delta Z)\;{\hat e}\;=\;{\hat e} + \delta e.
\nonumber \end{eqnarray}} at the level of the unbroken theory: \begin{eqnarray} \label{multi} W_{\mu}^{a} & \rightarrow & \left(Z_{2}^{W}\right)^{\frac{1}{2}}W_{\mu}^{a} \nonumber \\ B_{\mu} & \rightarrow & \left(Z_{2}^{B}\right)^{\frac{1}{2}}B_{\mu} \nonumber \\ \psi_{j_{L}} & \rightarrow & \left(Z_{L}^{j}\right)^{\frac{1}{2}}\psi_{j_{L}} \nonumber \\ \psi_{j_{R}} & \rightarrow & \left(Z_{R}^{j}\right)^{\frac{1}{2}}\psi_{j_{R}} \nonumber \\ \Phi & \rightarrow & \left(Z^{\Phi}\right)^{\frac{1}{2}}\Phi \nonumber \\ g_{2} & \rightarrow & Z_{1}^{W}\left(Z_{2}^{W}\right)^{-\frac{3}{2}}g_{2} \nonumber \\ g_{1} & \rightarrow & Z_{1}^{B}\left(Z_{2}^{B}\right)^{-\frac{3}{2}}g_{1} \nonumber \\ v & \rightarrow & \left(Z^{\Phi}\right)^{\frac{1}{2}}(v-\delta v) \nonumber \\ \lambda & \rightarrow & Z^{\lambda} \left(Z^{\Phi}\right)^{-2}\lambda \nonumber \\ h_{j} & \rightarrow & \left(Z^{\Phi}\right)^{-\frac{1}{2}}Z_{1}^{j}h_{j}, \end{eqnarray} ten constants in all (counting Yukawa couplings as one). Five of them are associated with fields and five with coupling constants. To generate the counterterm Lagrangian $\delta {\cal L}$, the renormalization constants are expanded as \begin{eqnarray} \label{deltaz} Z = 1 + \delta Z, \end{eqnarray} and Eqs. \ref{multi} - \ref{deltaz} are applied to ${\cal L}_{EW}$. The counterterms added to the unrenormalized quantities then yield the renormalized self-energies given in Appendix~\ref{Ecko}, Eq. \ref{rselfe}; and the renormalized electromagnetic, weak neutral and charged current vertices given in Eq. \ref{rvertexa}. These renormalized expressions can be compared with Eq. \ref{gammacounter} and Eq. \ref{hatsigma}. The ten independent counterterm constants are fixed by the nine on-shell renormalization conditions \footnote{The condition on $\hat{\Sigma}^{f}(k)$ in Eq. \ref{RC2} below fixes both $\delta Z_{L}$ and $\delta Z_{R}$ constants.}. The first set of conditions puts the masses on-shell (compare with Eq. 
\ref{sigmam}) \footnote{Only real parts of self-energies enter these conditions. The imaginary parts are finite.}: \begin{eqnarray} \label{RC1} \hat{\Sigma}^{W}(M_{W}^{2})\;\;\; =\;\;\; \hat{\Sigma}^{Z}(M_{Z}^{2}) \;\;\; = \;\;\; \hat{\Sigma}^{H}(M_{H}^{2})\;\;\; = \;\;\; \hat{\Sigma}^{f}(m_{f}^{2})\;\;\; = \;\;\;0, \end{eqnarray} where $\hat{\Sigma}^{W}, \hat{\Sigma}^{Z}, \hat{\Sigma}^{H}$ and $\hat{\Sigma}^{f}$ are the $W, Z,$ Higgs and fermion renormalized self-energies, respectively; the second set of conditions is the generalization of the QED electric charge renormalization: \begin{eqnarray} \label{RC2} \hat{\Gamma}^{\gamma ee}(k^{2} \rightarrow 0) \; \equiv \; \hat{\Gamma}(k^{2} \rightarrow 0) & = & i e \gamma_{\mu} \nonumber \\ \nonumber \\ \hat{\Sigma}^{\gamma Z}(k^{2} \rightarrow 0) & = & 0 \nonumber \\ \nonumber \\ \displaystyle \left. \left[\frac{\partial}{\partial k^{2}}\hat{\Sigma}^{\gamma}\left(k^{2}\right)\right] \right|_{k^{2}=0} & = & 0 \nonumber \\ \nonumber \\ \displaystyle \left. \frac{1}{{\not k}-m_{f}}\hat{\Sigma}^{f}(k) \right|_{{\not k}=m_{f}} & = & 0 \nonumber \\ \nonumber \\ \displaystyle \left. \left[\frac{\partial}{\partial k^{2}}\hat{\Sigma}^{H}(k^{2})\right] \right|_{k^{2}=M_{H}^{2}} & = & 0, \end{eqnarray} where $\hat{\Sigma}^{\gamma}$ and $\hat{\Sigma}^{\gamma Z}$ are the renormalized photon self-energy and photon-Z mixing, respectively. The conditions involving $\hat{\Sigma}^{f}, \hat{\Gamma}^{\gamma ee}$ and $\hat{\Sigma}^{\gamma}$ come directly from QED. The derivative conditions can be compared with Eq. \ref{parsigmam} derived for the W and Z bosons. When writing down renormalization conditions, one has to be careful not to violate Ward (Slavnov-Taylor) identities \cite{wst}. These consequences of gauge symmetry also relate renormalization constants to one another and can be used as a cross-check of the consistency of the renormalization conditions.
In the set above, for example, Ward identities make the axial part of $\hat{\Gamma}^{\gamma ee}$ vanish in the Thomson limit \cite{key6,key9}. The renormalization constants calculated from Eqs. \ref{RC1}, \ref{RC2} are given in Appendix \ref{Ecko}, Eq. \ref{rconstants}. \chapter{Lepton flavour-violating processes} Among the processes with NHL's in the loops, the lepton flavour-violating decays have so far received a lot more attention \cite{Ng1,bernabeu1,Ng2,ggjv,Ilakovac,Jarlskog,Valle2,Korner,pilaftsis1} than the flavour-conserving processes \cite{bernabeu2,melo}. One of the reasons could be a certain preconception that the experimental signature of flavour violation is `much more dramatic'. It is our intention to show in this and the next chapter that in many cases this expectation is rather naive. Another probable reason (this time justified) is that the calculation of the flavour-violating processes is simpler, with a smaller number of contributing diagrams and no need to actually renormalize. We will demonstrate this in Sec. \ref{fvzb1} in the case of the flavour-violating decays of the Z boson. These rare processes were studied in the context of our model previously \cite{bernabeu1,Valle2}; however, the limit of large NHL mass was not fully investigated. This was pointed out in Ref. \cite{Korner}, where the branching ratios for $Z \rightarrow l_{1}^{-} l_{2}^{+}\; (e^{\pm}\mu^{\mp}, \mu^{\pm} \tau^{\mp}, e^{\pm} \tau^{\mp})$ were derived in the see-saw model of Ref. \cite{pilaftsis2}. We therefore reexamine the flavour-violating leptonic decays of the Z boson in our model, carefully treating the case of a large NHL mass. The diagrams are very similar to those for the flavour-conserving leptonic decays of the Z boson discussed in Chapter 6 and we will borrow some results from there; our intention here is to focus on the typical features of the flavour-violating processes rather than on the calculational details. In Sec.
\ref{fvple} we continue with the discussion of the sensitivity of the flavour-violating processes in general to the presence of NHL's. We will refer to this discussion later, when in the main part of this work - the calculation of the flavour-conserving processes in Chapters 6 and 7 - we will confront our results with those for flavour-violating processes. \section{Flavour-violating leptonic decays of the Z boson} \label{fvzb1} In the SM, the CKM matrix gives rise to the flavour-violating hadronic decays of the Z boson at the one-loop level. In our model, by analogy, the mixing matrix $K$ (see Sec. \ref{properties3}) induces the flavour-violating decays $Z \rightarrow l_{1}^{-} l_{2}^{+}$ at the one-loop level \footnote{This feature is not exclusive to our model. Lepton flavour-violating processes are typical for many other nonstandard models with mixing in the lepton sector.}. One-loop Feynman diagrams generating these decays are given in Fig. \ref{odlisny}. There is no tree-level contribution since there is no mixing between the charged leptons in the neutral current Lagrangian. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3) \put(.3,+0.125){\mbox{\epsfxsize=5.0in\epsffile{fv1.eps}}} \end{picture} \end{center} \caption{One-loop diagrams for flavour-violating leptonic decays of the Z boson.} \label{odlisny} \end{figure} We will be studying how these graphs contribute to the observable, the width $\Gamma_{l_{1}^{-}l_{2}^{+}}$, in particular the dependence of the width on parameters from the neutrino sector of our model - mixings and NHL masses. The analysis can be simplified (without sacrificing the salient features) by assuming the three NHL's are degenerate, with mass $M_{N}$. \subsection{The amplitude and the width for $Z \rightarrow l_{1}^{-} l_{2}^{+}$} The total amplitude ${\cal M}$ is given by the sum of partial amplitudes corresponding to the graphs of Fig. 
\ref{odlisny}a-j (subscripts in ${\cal M}_{W N} ...$ refer to particles in the loop) \footnote{We work in the Feynman gauge, see Sec. \ref{quant1}.}: \begin{eqnarray} {\cal M} & = & +ie\epsilon_{\mu}\gamma^{\mu}(1-\gamma_{5})\frac{\alpha}{4\pi} \Big\{ k_{1}{\cal M}_{W N} - k_{1}{\cal M}_{W \nu} + k_{1}{\cal M}_{\phi N} + k_{4}{\cal M}_{\nu \nu W} \nonumber \\ & + & k_{3}{\cal M}_{\nu NW} + k_{3}{\cal M}_{N\nu W} + k_{2}{\cal M}_{NNW} + k_{1}{\cal M}_{WWN} - k_{1}{\cal M}_{WW\nu} \nonumber \\ & + & k_{1}{\cal M}_{\phi \phi N} + k_{1}{\cal M}_{\phi WN} + k_{2}{\cal M}_{NN \phi} \Big\}, \end{eqnarray} where $k_{1}, k_{2}, k_{3}, k_{4}$ are mixing factors to be derived shortly and $\epsilon_{\mu}$ is the polarization four-vector of the $Z$ boson. The functions ${\cal M}_{W N}, ...$ depend on the masses and momenta of the internal and external particles. ${\cal M}_{WN}$ is the sum of diagrams \ref{odlisny}a, \ref{odlisny}b with $N$ in the loop, ${\cal M}_{W \nu}$ is the sum of \ref{odlisny}a, \ref{odlisny}b with $\nu$ in the loop, ${\cal M}_{\phi N}$ is the sum of diagrams \ref{odlisny}c, \ref{odlisny}d and ${\cal M}_{\phi W N}$ is the sum of equal contributions from Fig.~\ref{odlisny}h and \ref{odlisny}i. Like diagrams \ref{odlisny}a, \ref{odlisny}b, diagram \ref{odlisny}f comes in with both massless $\nu$'s and NHL's in the loop. Diagram \ref{odlisny}e comes in with all four combinations of neutral lepton types. A sample calculation of one function, ${\cal M}_{NN \phi}$, will be given in Sec. \ref{irreduciblev}.
Here we simply state results for all functions ($\frac{m_{l}^{2}}{M_{W}^{2}}$ terms neglected): \begin{eqnarray} \label{amplitudes} {\cal M}_{WN} & = & \frac{-\frac{1}{2}+s_{W}^{2}}{4 s_{W}^{3}c_{W}} \left[\frac{1}{2}-\Delta_{\mu} +\ln{M_{W}^{2}} + f({\cal X})\right], \nonumber \\ {\cal M}_{W \nu} & = & \frac{-\frac{1}{2}+s_{W}^{2}}{4 s_{W}^{3}c_{W}} \left[\frac{1}{2}-\Delta_{\mu} +\ln{M_{W}^{2}} \right], \nonumber \\ {\cal M}_{\phi N} & = & \frac{-\frac{1}{2}+s_{W}^{2}}{4 s_{W}^{3}c_{W}} \left[-\frac{1}{2}-\Delta_{\mu} +\ln{M_{W}^{2}} + f({\cal X})\right] \frac{{\cal X}}{2}, \nonumber \\ {\cal M}_{abW} & = & -\frac{1} {8 s_{W}^{3} c_{W}}\Big\{2M_{Z}^{2}\big[C_{23}(M_{a},M_{W},M_{b}) + C_{11}(M_{a},M_{W},M_{b})\big] \nonumber \\ & + & 2-4C_{24}^{fin}(M_{a},M_{W},M_{b})-\Delta_{\mu} \Big\}, \;\;\; a,b = N, \nu \mbox{ ; } M_{\nu} = 0, \nonumber \\ {\cal M}_{WWa} & = & - \frac{3 c_{W}} {4 s_{W}^{3}}\Big\{\frac{2}{3}M_{Z}^{2}\Big[-C_{11}(M_{W},M_{a},M_{W}) -C_{23}(M_{W},M_{a},M_{W}) \nonumber \\ & - & C_{0}(M_{W},M_{a},M_{W})\Big] +4C_{24}^{fin}(M_{W},M_{a},M_{W})-\frac{2}{3}+ \Delta_{\mu}\Big\}, \nonumber \\ {\cal M}_{\phi \phi N} & = & - \frac{1} {2 s_{W}^{3}} \frac{1-2s_{W}^{2}}{2c_{W}}{\cal X} \Big[C_{24}^{fin}(M_{W},M_{N},M_{W}) + \frac{1}{4}\Delta_{\mu}\Big], \nonumber \\ {\cal M}_{\phi WN} & = & +\frac{ M_{W}^{2}} {2 s_{W}c_{W}}{\cal X}C_{0}(M_{W},M_{N},M_{W}), \nonumber \\ {\cal M}_{NN \phi} & = & + \frac{ M_{W}^{2}}{8 s_{W}^{3} c_{W}}{\cal X}^{2} C_{0}(M_{N},M_{W},M_{N}), \end{eqnarray} where \begin{eqnarray} \label{calex} {\cal X} & \equiv & \frac{M_{N}^{2}}{M_{W}^{2}}, \;\;\;\;\;\;\;\;s_{W}\;\equiv\;\sin \theta_{W}, \;\;\;\;\;\;\;\;c_{W}\;\equiv\;\cos \theta_{W}, \nonumber \\ \Delta_{\mu} & = & \frac{2}{\epsilon}-\gamma+\ln 4\pi + \ln \mu^{2}, \nonumber \\ f({\cal X}) & = & \frac{{\cal X}^{2}\ln {\cal X}}{({\cal X}-1)^{2}} + \frac{{\cal X}}{1-{\cal X}}.
\end{eqnarray} Our results are written in terms of the 't Hooft-Veltman integrals $C_{0}, C_{24}, C_{11}$, $C_{23}$~\cite{thooft} defined in Appendix \ref{scalari}. They contain both finite and infinite parts, regularized by dimensional regularization \cite{thooftdim} (see Appendix \ref{Decko}). Infinite parts are parametrized by $\Delta_{\mu}$; the regulator $\epsilon \rightarrow 0$ is not to be confused with the polarization four-vector $\epsilon_{\mu}$. We now evaluate the mixing parameters. The parameter $k_{1}$ comes from diagrams with one NHL in the loop, $k_{2}$ comes from diagrams with two NHL's, $k_{3}$ from diagrams with one NHL and one massless neutrino, and $k_{4}$ from the graph with two massless neutrinos. The case of one massless neutrino is trivial (the mixing factor is simply $-k_{1}$) and is not shown below. Starting with terms that come directly from the Feynman rules of Appendix \ref{Cecko}, we work our way through to the final form using the properties of the mixing matrix $K$ from Sec. \ref{properties3} (if not explicitly shown, repeated indices are summed over): \begin{eqnarray} \label{fvmixings} k_{1} & = & \big(K_{H}\big)_{la}\big(K_{H}^{\dagger}\big)_{al^{'}} \;=\; ll^{'}_{mix}, \nonumber \\ k_{2} & = & \big(K_{H}\big)_{la}\big(K_{H}^{\dagger}K_{H}\big)_{ab} \big(K_{H}^{\dagger}\big)_{bl^{'}} \;=\; \sum_{m=e,\mu,\tau}\big(K_{H}\big)_{la} \big(K_{H}^{\dagger}\big)_{am} \big(K_{H}\big)_{mb} \big(K_{H}^{\dagger}\big)_{bl^{'}} \nonumber \\ & = & \sum_{m=e,\mu,\tau}lm_{mix}ml^{'}_{mix}, \nonumber \\ k_{3} & = & \big(K_{L}\big)_{li}\big(K_{L}^{\dagger}K_{H}\big)_{ia} \big(K_{H}^{\dagger}\big)_{al^{'}} \;=\; \sum_{m=e,\mu,\tau}\big(K_{L}\big)_{li} \big(K_{L}^{\dagger}\big)_{im} \big(K_{H}\big)_{ma} \big(K_{H}^{\dagger}\big)_{al^{'}} \nonumber \\ & = & \sum_{m}\left[\delta_{lm} - \big(K_{H}\big)_{lb} \big(K_{H}^{\dagger}\big)_{bm}\right]\big(K_{H}\big)_{ma} \big(K_{H}^{\dagger}\big)_{al^{'}} \;=\; ll^{'}_{mix} - \sum_{m} lm_{mix} ml^{'}_{mix} \nonumber \\ & = & k_{1}-k_{2},
\nonumber \\ k_{4} & = & \big(K_{L}\big)_{li}\big(K_{L}^{\dagger}K_{L}\big)_{ij} \big(K_{L}^{\dagger}\big)_{jl^{'}} \;=\; - 2k_{1} + k_{2}. \end{eqnarray} For $k_{4}$ we show only the initial and final steps. To address the question of infinities, we note that we do not actually have to renormalize. Indeed, we easily observe that the mass-independent divergences (in fact, any terms independent of mass) cancel in the sums \begin{eqnarray} {\cal M}_{W W \nu} + {\cal M}_{W W N}, \nonumber \\ {\cal M}_{N \nu W} + {\cal M}_{\nu N W} + {\cal M}_{N N W} + {\cal M}_{\nu \nu W}, \nonumber \\ {\cal M}_{W \nu} + {\cal M}_{W N}. \end{eqnarray} The origin of this so-called GIM cancellation can be traced back to the unitarity of the mixing matrix $K$ \footnote{This cancellation is referred to as the GIM cancellation since it has the same origin as the cancellations due to the CKM matrix in $K^{0} \rightarrow \mu^{+} \mu^{-}$ which led to the postulation of the $c$ quark by Glashow, Iliopoulos and Maiani (GIM) \cite{key4}.}. The remaining divergent amplitudes have their divergence multiplied by the mass term ${\cal X}$, so the GIM cancellation does not apply to them. However, these divergences vanish in the sum of the mass-dependent diagrams, \begin{eqnarray} {\cal M}_{\phi \phi N} + {\cal M}_{\phi N}. \end{eqnarray} Using Eq.
\ref{fvmixings} it can be shown that the width for the flavour-violating decays of the Z boson to $l_{1}^{-}l_{2}^{+}$ is given in terms of $k_{1}$ and $k_{2}$ as \begin{eqnarray} \Gamma_{l_{1}^{-}l_{2}^{+}} & = & \frac{2}{3} \frac{\alpha^{3}}{(4\pi)^{2}} M_{Z} |k_{1}{\cal M}_{1}+k_{2}{\cal M}_{2}|^{2}, \end{eqnarray} where \begin{eqnarray} {\cal M}_{1} & = & {\cal M}_{\phi WN} + {\cal M}_{\phi \phi N} - {\cal M}_{WW\nu} + {\cal M}_{WWN} + {\cal M}_{N \nu W} + {\cal M}_{\nu N W} - 2 {\cal M}_{\nu \nu W} \nonumber \\ & - & {\cal M}_{W \nu} + {\cal M}_{\phi N} + {\cal M}_{WN}, \nonumber \\ {\cal M}_{2} & = & {\cal M}_{N N \phi} - {\cal M}_{N \nu W} - {\cal M}_{\nu N W} + {\cal M}_{\nu \nu W} + {\cal M}_{N N W}. \end{eqnarray} The amplitude squared can be written as \begin{eqnarray} |k_{1}{\cal M}_{1}+k_{2}{\cal M}_{2}|^{2} & = & |k_{1}|^{2}|{\cal M}_{1}|^{2} + |k_{2}|^{2}|{\cal M}_{2}|^{2} + 2 Re\left(k_{1}k_{2}^{*}{\cal M}_{1}{\cal M}_{2}^{*}\right). \end{eqnarray} The mixing factors $k_{1}, k_{2}$ are process dependent and the following relations hold between CP conjugate final states: \begin{eqnarray} k_{1,2}\;\;\;\equiv\;\;\; k_{1,2}(l_{1}^{-}l_{2}^{+}) & = & k_{1,2}^{*}(l_{1}^{+}l_{2}^{-}), \end{eqnarray} implying \footnote{Note the difference \begin{eqnarray} Re\left\{k_{1}(l_{1}^{-}l_{2}^{+}) \; k_{2}^{*}(l_{1}^{-}l_{2}^{+}) \; {\cal M}_{1}{\cal M}_{2}^{*}\right\} & - & Re\left\{k_{1}(l_{1}^{+}l_{2}^{-}) \; k_{2}^{*}(l_{1}^{+}l_{2}^{-}) \; {\cal M}_{1}{\cal M}_{2}^{*}\right\} \nonumber \\ & = & -2 Im\left( k_{1}k_{2}^{*}\right)Im \left({\cal M}_{1}{\cal M}_{2}^{*}\right) \nonumber \end{eqnarray} may lead to a CP violating asymmetry \begin{eqnarray} \eta \equiv \frac{\Gamma_{l_{1}^{-}l_{2}^{+}} - \Gamma_{l_{1}^{+}l_{2}^{-}}}{\Gamma_{Z}} & = & - \frac{8}{3} \frac{\alpha^{3}}{(4\pi)^{2}} \frac{M_{Z}}{\Gamma_{Z}} Im\left( k_{1}k_{2}^{*}\right)Im \left({\cal M}_{1}{\cal M}_{2}^{*}\right). 
\nonumber \end{eqnarray} We found that the maximum value allowed, $\eta \leq 2.2 \times 10^{-14}$ for $e\tau$ mode at $M_{N} = 5$ TeV, is very small (see experimental limits in Eq. \ref{brra}).} \begin{eqnarray} Re\left\{k_{1}(l_{1}^{-}l_{2}^{+}) \; k_{2}^{*}(l_{1}^{-}l_{2}^{+}) \; {\cal M}_{1}{\cal M}_{2}^{*}\right\} & + & Re\left\{k_{1}(l_{1}^{+}l_{2}^{-}) \; k_{2}^{*}(l_{1}^{+}l_{2}^{-}) \; {\cal M}_{1}{\cal M}_{2}^{*}\right\} \nonumber \\ & = & 2 Re\left(k_{1}k_{2}^{*}\right)Re\left({\cal M}_{1}{\cal M}_{2}^{*}\right), \end{eqnarray} giving the total rate for $Z \rightarrow l_{1}^{+}l_{2}^{-} + l_{1}^{-}l_{2}^{+}$ \begin{eqnarray} \label{gfviol} \Gamma_{l_{1}^{-}l_{2}^{+} + l_{1}^{+}l_{2}^{-}} & = & \frac{4}{3} \frac{\alpha^{3}}{(4\pi)^{2}} M_{Z} \left\{|k_{1}|^{2}|{\cal M}_{1}|^{2} + |k_{2}|^{2}|{\cal M}_{2}|^{2} + 2 Re\left(k_{1}k_{2}^{*}\right) \right. \nonumber \\ & \times & \left. Re\left( {\cal M}_{1}{\cal M}_{2}^{*}\right)\right\}. \end{eqnarray} \subsection{Approximate relations in the limit of large NHL mass} \label{arit} While we can easily see how the width $\Gamma_{l_{1}^{-}l_{2}^{+} + l_{1}^{+}l_{2}^{-}}$ depends on mixing factors, the dependence on the NHL mass $M_{N}$ is obscured by the algebraic complexity of the 't~Hooft - Veltman integrals. Fortunately, in the most interesting case, which is that of a large $M_{N}$, the amplitudes become particularly simple. It is the most interesting case since the signal is the largest due to quadratic nondecoupling effects. This means that some of the diagrams give rise to terms $O(M_{N}^{2})$. These effects, as well as the question of how high we can go with the mass $M_{N}$ without disturbing perturbation theory, will be discussed in Secs. \ref{appelc} and \ref{breakdown}. For now, we just state that by large NHL mass we mean $M_{Z} < M_{N} < 5$~TeV. 
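Before writing down the approximate amplitudes, it is instructive to look at the function $f({\cal X})$ of Eq. \ref{amplitudes}: expanding the exact expression, $f({\cal X}) \rightarrow \ln {\cal X} - 1$ for ${\cal X} \gg 1$, while the apparent pole at ${\cal X} = 1$ cancels, leaving $f(1) = 1/2$. A minimal numerical check (a Python sketch; the asymptotic forms here are our own expansion of Eq. \ref{amplitudes}, not taken from the references):

```python
import math

def f(X):
    # f(X) from Eq. (amplitudes): X^2 ln X / (X-1)^2 + X / (1-X)
    return X**2 * math.log(X) / (X - 1.0)**2 + X / (1.0 - X)

# Large-X behaviour: f(X) -> ln X - 1, the correction falling off as (ln X)/X
X = 1.0e8
assert abs(f(X) - (math.log(X) - 1.0)) < 1e-5

# The apparent pole at X = 1 cancels between the two terms; f(1) = 1/2
assert abs(f(1.0 + 1e-4) - 0.5) < 1e-2
```

The logarithmic growth of $f$ is harmless; the dangerous, quadratic growth in $M_{N}$ comes from the explicit factors of ${\cal X}$ multiplying the $\phi$-exchange amplitudes.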
In the limit of a large NHL mass $M_{N}$, the amplitudes exhibit the following behaviour: \begin{eqnarray} \label{aproxm} {\cal M}_{WWN} & = & - \frac{3 c_{W}} {4 s_{W}^{3}} \Big\{4C_{24}^{fin}(M_{W},M_{N},M_{W})-\frac{2}{3}+ \Delta_{\mu}\Big\}, \nonumber \\ {\cal M}_{abW} & = & -\frac{1} {8 s_{W}^{3} c_{W}}\Big\{2-4C_{24}^{fin}(M_{a},M_{W},M_{b})-\Delta_{\mu} \Big\}, \\ & & a,b \:=\: N,\nu\;;\;\nu,N\;;\;N,N \mbox{ ; } \;\;M_{\nu} = 0. \nonumber \end{eqnarray} These formulae differ by less than one percent from the exact ones in Eq. \ref{amplitudes} at $M_{N} = 500$ GeV, and the difference decreases with rising $M_{N}$ to less than $0.1$ percent at $M_{N} = 5000$ GeV. In the same limit, the $C$ functions behave as \begin{eqnarray} \label{aproxc} C_{0}(M_{W},M_{N},M_{W}) & = & \frac{1}{M_{N}^{2}}\Big[\ln{{\cal X}} + 2 \sqrt{4c_{W}^{2} - 1}\left(\theta - \frac{\pi}{2}\right) + 1 \nonumber \\ & + & O\left({\cal X}^{-1}\right) \Big], \;\;\;\;\; \theta = \arctan{\sqrt{4c_{W}^{2} - 1}}, \nonumber \\ C_{0}(M_{N},M_{W},M_{N}) & = & \frac{1}{M_{N}^{2}}\Big[1 + O\left({\cal X}^{-1}\right) \Big], \nonumber \\ C_{24}^{fin}(M_{W},M_{N},M_{W}) & = & \frac{3}{8} - \frac{1}{4} \ln M_{N}^{2} + O\left({\cal X}^{-1}\right), \end{eqnarray} and the $C_{24}^{fin}$ function of any other combination of arguments involving $M_{N}$ also varies slowly, as $\ln{M_{N}^{2}}$. With the help of Eqs. \ref{aproxm}, \ref{aproxc}, we can see that there are three amplitudes in Eq. \ref{amplitudes} with nondecoupling behaviour, namely a quadratic dependence on the NHL mass. They are ${\cal M}_{NN\phi}, {\cal M}_{\phi\phi N}$ and ${\cal M}_{\phi N}$. However, as numerical calculations show, ${\cal M}_{\phi\phi N} \rightarrow - {\cal M}_{\phi N}$ for large $M_{N}$, leaving ${\cal M}_{NN\phi}$ as the only amplitude with nondecoupling behaviour.
${\cal M}_{NN\phi}$ gives the dominant contribution to ${\cal M}_{2}$ and, moreover, it ensures that for large $M_{N}$ \begin{eqnarray} |k_{2}|^{2}|{\cal M}_{2}|^{2} & > & |k_{1}|^{2}|{\cal M}_{1}|^{2}, \end{eqnarray} despite the fact that $|k_{2}|$ is quadratic in the small mixings while $|k_{1}|$ is linear. In Refs. \cite{bernabeu1,Valle2} the authors neglected terms proportional to $|k_{2}|$; therefore, their results do not apply in the large $M_{N}$ limit. \subsection{Numerical results} \label{numeres} In the numerical calculations, the term $Re\left(k_{1}k_{2}^{*}\right)$ from Eq. \ref{gfviol} was treated as follows. We only have limits on $|k_{1}|, |k_{2}|$ (input parameters for our calculations), not on the real and imaginary parts of $k_{1}, k_{2}$. Thus, for given $|k_{1}|, |k_{2}|$, the real part of $k_{1}k_{2}^{*}$ can vary as \begin{eqnarray} - |k_{1}||k_{2}| & \leq & Re\left(k_{1}k_{2}^{*}\right) \;\;\;\leq \;\;\; |k_{1}||k_{2}|. \end{eqnarray} In our calculations we set \begin{eqnarray} Re\left(k_{1}k_{2}^{*}\right) & = & \delta |k_{1}||k_{2}|, \end{eqnarray} and $\delta$ is varied between $-1$ and $+1$ as an independent input parameter. To find a numerical value of the parameter $|k_{2}|^{2}$, we express it in terms of $ll_{mix}$ and ${l_{1}l_{2}}_{mix}$, \begin{eqnarray} |k_{2}|^{2} & \equiv & |k_{2}|^{2}(l_{1}^{-}l_{2}^{+}) \;\; = \;\; \left({l_{1}l_{1}}_{mix} + {l_{2}l_{2}}_{mix}\right)^{2} |{l_{1}l_{2}}_{mix}|^{2} + |{l_{1}l_{3}}_{mix}|^{2}|{l_{3}l_{2}}_{mix}|^{2} \nonumber \\ & + & 2\left({l_{1}l_{1}}_{mix} + {l_{2}l_{2}}_{mix}\right) Re\left\{{l_{1}l_{2}^{*}}_{mix} \; {l_{1}l_{3}}_{mix} \;{l_{3}l_{2}}_{mix} \right\}. \end{eqnarray} The smallness of $e\mu_{mix}$ effectively removes some of the terms.
For the $e\mu$ final state, the first and the third terms above are negligible, leaving \begin{eqnarray} \label{quartica4} |k_{2}|^{2}(e\mu) & \doteq & |{e\tau}_{mix}|^{2}|{\tau\mu}_{mix}|^{2}, \end{eqnarray} while for the $e\tau$ and $\mu\tau$ sectors we have \begin{eqnarray} \label{quarticb4} |k_{2}|^{2}(e\tau) & \doteq & \left({ee}_{mix} + {\tau\tau}_{mix}\right)^{2} |{e\tau}_{mix}|^{2}, \nonumber \\ |k_{2}|^{2}(\mu\tau) & \doteq & \left({\mu\mu}_{mix} + {\tau\tau}_{mix}\right)^{2} |{\mu\tau}_{mix}|^{2}. \end{eqnarray} The maximally allowed mixings (Eqs. \ref{limits1}, \ref{limits2}, \ref{limits3}) imply $|k_{1}| =$ $(0.00024,$ $0.015, 0.0068)$ and $|k_{2}| =$ $(0.0001,$ $0.0006, 0.00023)$ for the $e\mu, e\tau, \mu\tau$ modes, respectively. As noted before, we assume degenerate NHL's with mass $M_{N}$. The gauge boson masses used in the numerical calculations are $M_{Z} = 91.1884$ GeV \cite{mt1} and $M_{W} = 80.410$~GeV \cite{mw1}. The total decay width of the Z boson is taken as $\Gamma_{Z} = 2.4963$ GeV \cite{mt1}. The results are shown in Fig. \ref{fviolfig}a,b. They show how the branching ratio $ BR\left(l_{1}^{\pm}l_{2}^{\mp}\right) \equiv \Gamma_{l_{1}^{+}l_{2}^{-} + l_{1}^{-}l_{2}^{+}}/\Gamma_{Z}$ varies with the NHL mass. In Fig. \ref{fviolfig}a we set $\delta = -1$; in Fig. \ref{fviolfig}b, $\delta = +1$. The graphs start at $M_{N} = 100$ GeV. For NHL masses less than $M_{W}$, the rates are negligibly small. A sudden rise in the branching ratio just above $M_{N} = 1$ TeV for the $e\tau$ and $\mu\tau$ modes in Fig. \ref{fviolfig}a signals that at this point the $|k_{2}|^{2}|{\cal M}_{2}|^{2}$ term overtakes the $|k_{1}|^{2}|{\cal M}_{1}|^{2}$ term and the nondecoupling behaviour (generated by the Feynman graph of Fig. \ref{odlisny}j) becomes dominant.
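The mixing algebra behind these numbers can be cross-checked with a short script. The sketch below first verifies the identities $k_{3} = k_{1} - k_{2}$ and $k_{4} = -2k_{1} + k_{2}$ of Eq. \ref{fvmixings} for a generic $K = (K_{L}, K_{H})$, taken as the upper $3\times 6$ block of a random unitary matrix (a stand-in for the model's actual mixing matrix, not the matrix itself), and then reproduces the quoted $|k_{1}|$, $|k_{2}|$ values from individual mixing limits which we take to be $ee_{mix} = 0.0071$, $\mu\mu_{mix} = 0.0014$, $\tau\tau_{mix} = 0.033$, $e\mu_{mix} = 0.00024$, $e\tau_{mix} = 0.015$, $\mu\tau_{mix} = 0.0068$; since Eqs. \ref{limits1}-\ref{limits3} lie outside this section, these values are assumptions of the sketch:

```python
import random

random.seed(7)

def dot(u, v):
    """Hermitian inner product <u|v>."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def gram_schmidt(rows):
    """Orthonormalize complex rows -> rows of a unitary matrix."""
    basis = []
    for v in rows:
        for b in basis:
            c = dot(b, v)
            v = [vi - c * bi for vi, bi in zip(v, b)]
        n = abs(dot(v, v)) ** 0.5
        basis.append([vi / n for vi in v])
    return basis

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dag(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

# Generic 6x6 unitary; its upper 3x6 block plays the role of K = (K_L, K_H)
U = gram_schmidt([[complex(random.gauss(0, 1), random.gauss(0, 1))
                   for _ in range(6)] for _ in range(6)])
K_L = [row[:3] for row in U[:3]]
K_H = [row[3:] for row in U[:3]]

H = mm(K_H, dag(K_H))   # (K_H K_H^dagger)_{ll'} = ll'_mix
L = mm(K_L, dag(K_L))   # K_L K_L^dagger = 1 - K_H K_H^dagger by unitarity

l, lp = 0, 1            # an off-diagonal (flavour-violating) entry
k1 = H[l][lp]
k2 = mm(H, H)[l][lp]
k3 = mm(L, H)[l][lp]
k4 = mm(L, L)[l][lp]
assert abs(k3 - (k1 - k2)) < 1e-12       # Eq. (fvmixings)
assert abs(k4 - (-2 * k1 + k2)) < 1e-12  # Eq. (fvmixings), l != l'

# Assumed individual mixing limits (see lead-in); reproduce |k1|, |k2|
ee, mumu, tautau = 0.0071, 0.0014, 0.033
emu, etau, mutau = 0.00024, 0.015, 0.0068
k1_lim = (emu, etau, mutau)              # = (0.00024, 0.015, 0.0068)
k2_lim = (etau * mutau,                  # e-mu: dominated by the tau term
          (ee + tautau) * etau,          # e-tau, Eq. (quarticb4)
          (mumu + tautau) * mutau)       # mu-tau, Eq. (quarticb4)
assert all(abs(a - b) < 5e-6 for a, b in
           zip(k2_lim, (0.0001, 0.0006, 0.00023)))
```

The first two assertions hold for any unitary $K$, which is precisely the unitarity behind the GIM cancellation discussed above.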
We predict the following branching ratio limits for $M_{N} = 5$ TeV and $\delta = +1$: \begin{eqnarray} BR_{th}(Z \rightarrow e^{\pm}\mu^{\mp}) & < & 3.3 \times 10^{-8}, \nonumber \\ BR_{th}(Z \rightarrow e^{\pm}\tau^{\mp}) & < & 1.4 \times 10^{-6}, \nonumber \\ BR_{th}(Z \rightarrow \mu^{\pm}\tau^{\mp}) & < & 2.2 \times 10^{-7}. \end{eqnarray} These results are similar to those of Ref. \cite{pilaftsis1}, where the calculation was done in the context of a see-saw model with enhanced mixings. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,7) \put(.1,+0.325){\mbox{\epsfxsize=5.0in\epsffile{fviol.eps}}} \end{picture} \end{center} \caption{The branching ratio $Z \rightarrow l_{1}^{\pm}l_{2}^{\mp}$ as a function of $M_{N}$ for (a) $\delta = -1$, (b) $\delta = +1$.} \label{fviolfig} \end{figure} The current experimental upper limits on the branching ratios are \cite{pdb} \begin{eqnarray} \label{brra} BR_{exp}(Z \rightarrow e^{\pm}\mu^{\mp}) & < & 6 \times 10^{-6}, \;\;\;\;\;95 \%\;\;{\rm C.L.}, \nonumber \\ BR_{exp}(Z \rightarrow e^{\pm}\tau^{\mp}) & < & 1.3 \times 10^{-5}, \;\;\;\;\;95 \%\;\;{\rm C.L.}, \nonumber \\ BR_{exp}(Z \rightarrow \mu^{\pm}\tau^{\mp}) & < & 1.9 \times 10^{-5}, \;\;\;\;\;95 \%\;\;{\rm C.L.}\;. \end{eqnarray} Our prediction for the $e\tau$ mode is thus at least one order of magnitude below the experimental limit. \section{Flavour-violating processes at very low energies} \label{fvple} How does this compare with flavour-violating processes at very low energies? The rare decay $\mu \rightarrow e \gamma$ (see Fig. \ref{muegama}) is very well measured and supplies us with a stringent limit on $|e\mu_{mix}|$ (see Eq. \ref{limits3}), which we use as an input parameter for our calculations. We now derive this limit. The decay $\mu \rightarrow e \gamma$ was studied in the context of our model and see-saw models with enhanced mixings by several authors \cite{Ng1,Ng2,ggjv,Ilakovac,Jarlskog}.
In our model, with mass-degenerate NHL's, the $\mu \rightarrow e \gamma$ branching ratio is \cite{ggjv} \begin{eqnarray} \label{brmeg} BR(\mu \rightarrow e \gamma) & = & \frac {3\alpha}{32\pi}{|e\mu_{mix}|}^2 {|F_{\gamma}({\cal X})|}^{2}, \end{eqnarray} where \begin{eqnarray} F_{\gamma}({\cal X}) & = & - {\cal X}\frac{1- 5{\cal X}-2{\cal X}^{2}}{(1-{\cal X})^{3}} + \frac{6{\cal X}^{3}}{(1-{\cal X})^{4}}\ln {\cal X}, \;\;{\cal X} = \frac{M_{N}^{2}}{M_{W}^{2}}, \end{eqnarray} is an NHL-mass-dependent form factor. For NHL masses $M_{N} > 500$ GeV, which we ultimately consider, the form factor becomes independent of the mass, \begin{eqnarray} \label{formfactor} F_{\gamma}({\cal X}) \rightarrow -2. \end{eqnarray} This is another example of the nondecoupling behaviour ($F_{\gamma}({\cal X})$ does not vanish) of NHL's \footnote{It is instructive to see how this result arises from a dimensional analysis argument. The effective Lagrangian for $\mu \rightarrow e \gamma$ is given by \cite{chengli} \begin{eqnarray} {\cal L}_{eff} & = & T({\cal X}){\overline e}_{L}\sigma_{\lambda \nu} \mu_{R} F^{\lambda \nu}, \nonumber \end{eqnarray} where the field operators ${\overline e}_{L}, \mu_{R}, F^{\lambda \nu}$ have mass dimensions 3/2, 3/2 and 2, respectively; hence $T({\cal X})$ has to have dimension $-1$. For Fig. \ref{muegama}, the large-mass ($M_{N}$) dominance thus suggests $T({\cal X}) \sim \frac{1}{M_{N}}$ on dimensional grounds. However, there is also a possibility of $T({\cal X}) \sim \frac{m_{\mu}}{M_{N}^{2}}$; this is indeed what happens since it is $m_{\mu}$ (or $m_{e}$) which gives the right helicity flip to yield the required ${\cal L}_{eff}$ (these points can be best understood after writing down the amplitude for the graph). The amplitude for Fig. \ref{muegama} thus decouples quadratically. There is another graph where two internal W's are replaced by the unphysical Higgs $\phi$.
The large $M_{N}$ behaviour is in this case boosted by $M_{N}$-dependent couplings of NHL's to $\phi$'s, so this dominant graph yields $T({\cal X}) \sim F_{\gamma}({\cal X}) \sim \frac{m_{\mu}}{M_{N}^{2}} \frac{M_{N}^{2}}{M_{W}^{2}} \sim const$, in agreement with Eq. \ref{formfactor}.}. This is the mildest case of nondecoupling; in the previous section we encountered amplitudes with a quadratic dependence on the NHL mass. Given the current experimental limit on the $\mu \rightarrow e \gamma$ branching ratio \cite{pdb}, \begin{eqnarray} BR(\mu \rightarrow e \gamma) & \leq & 4.9 \times 10^{-11}\;\;\;\;\;90\%\; {\rm C.L.}, \end{eqnarray} Eqs. \ref{brmeg}, \ref{formfactor} yield an upper limit on the mixing of $|e\mu_{mix}| \leq 0.00024$, given previously as Eq. \ref{limits3}. On the other hand, the experimental limits on $\tau \rightarrow e \gamma$ and $\tau \rightarrow \mu \gamma$ \cite{pdb}, \begin{eqnarray} BR_{exp}(\tau \rightarrow e \gamma) & < & 1.2 \times 10^{-4}, \;\;\;\;\;90 \%\;\;{\rm C.L.}\;, \nonumber \\ BR_{exp}(\tau \rightarrow \mu \gamma) & < & 4.2 \times 10^{-6}, \;\;\;\;\;90 \%\;\;{\rm C.L.}\;, \end{eqnarray} are much weaker. The predicted rate for both $e \gamma$ and $\mu \gamma$ modes is \cite{ggjv} \begin{eqnarray} BR_{th}(\tau \rightarrow e \gamma, \mu \gamma) & = & 7 \times 10^{-7}, \;\;\;\;{\rm for}\;\;\;\;M_{N} > 500 \;{\rm GeV}. \end{eqnarray} However, the limits on mixing parameters used in Ref. \cite{ggjv} are now out of date. With the current limits, the predicted rate would be smaller by at least one order of magnitude, implying that the theoretical result is two orders of magnitude below the experimental upper limit for the $\mu \gamma$ mode and about three orders for the $e \gamma$ mode. This explains why we had to use the indirect limits of Eq. \ref{limits2} for $\mu\tau_{mix}$ and $e\tau_{mix}$.
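The $|e\mu_{mix}|$ limit just quoted follows from inverting Eq. \ref{brmeg} in the large-$M_{N}$ limit $F_{\gamma} \rightarrow -2$ of Eq. \ref{formfactor}. A few-line sketch (taking $\alpha = 1/137.036$ as an assumed input):

```python
import math

alpha = 1.0 / 137.036      # fine-structure constant (assumed input)
BR_limit = 4.9e-11         # experimental 90% C.L. limit on BR(mu -> e gamma)
F_gamma = -2.0             # large-M_N limit of the form factor, Eq. (formfactor)

# Invert BR = (3 alpha / 32 pi) |emu_mix|^2 |F_gamma|^2 for the mixing
emu_mix = math.sqrt(BR_limit / ((3.0 * alpha / (32.0 * math.pi)) * F_gamma**2))
assert abs(emu_mix - 2.4e-4) < 1e-5   # reproduces |emu_mix| <= 0.00024
```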
Another well-measured muon decay mode is $\mu \rightarrow e^{-}e^{-}e^{+}$, with \cite{pdb} \begin{eqnarray} \label{mueee4} BR_{exp}(\mu \rightarrow e^{-}e^{-}e^{+}) & < & 1.0 \times 10^{-12},\;\;\;\;\;90 \% {\rm C.L.}\;. \end{eqnarray} This process was considered in Refs. \cite{Ng1,Ng2,Ilakovac,Jarlskog}. The calculation shows the same quadratic nondecoupling we encountered in the lepton flavour-violating decays of the Z boson. Ref. \cite{Jarlskog} gives (with an assumption discussed therein) the following constraint for the parameters of the superstring-inspired (our) model: \begin{eqnarray} \label{mnlimit4} ee_{mix}|e\mu_{mix}| & \leq & 0.93 \times 10^{-5} \frac{1 {\rm TeV}^{2}}{M_{N}^{2}({\rm TeV}^{2})}, \end{eqnarray} which for $M_{N} \geq 3$ TeV is competitive with a constraint implied by Eqs. \ref{limits1}, \ref{limits2}, \ref{limits3}: \begin{eqnarray} ee_{mix}|e\mu_{mix}| & \leq & 0.17 \times 10^{-5}. \end{eqnarray} Also considered in Refs. \cite{Ng1,Ng2,Jarlskog} is $\mu - e$ conversion in nuclei, $\mu^{-}(A,Z) \rightarrow e^{-}(A,Z)$. The constraint on the product $ee_{mix}|e\mu_{mix}|$ \cite{Jarlskog} is similar to the one above. For the flavour-violating decays of the tau into three leptons ($\tau \rightarrow$ $e^{-}e^{-}e^{+},$ $e^{-}\mu^{-}\mu^{+},$ etc.) there is to my knowledge no calculation studying the large (TeV) NHL mass limit in the context of our model. Within the see-saw model of Ref. \cite{pilaftsis2}, Pilaftsis predicts, with the current limits on mixings and for $M_{N} = 3$ TeV \cite{pilaftsis1}: \begin{eqnarray} BR_{th}(\tau \rightarrow e^{-}e^{-}e^{+}) & = & 5 \times 10^{-7}, \nonumber \\ BR_{th}(\tau \rightarrow e^{-}\mu^{-}\mu^{+}) & = & 3 \times 10^{-7}.
\end{eqnarray} The current experimental limits are \cite{pdb} \begin{eqnarray} BR_{exp}(\tau \rightarrow e^{-}e^{-}e^{+}) & < & 1.4 \times 10^{-5},\;\;\;\;\;90 \%\;\;{\rm C.L.}\;,\nonumber \\ BR_{exp}(\tau \rightarrow e^{-}\mu^{-}\mu^{+}) & < & 1.4 \times 10^{-5},\;\;\;\;\;90 \%\;\;{\rm C.L.}\;. \end{eqnarray} Finally, the hadronic decay modes of the $\tau$ lepton, $\tau \rightarrow l \eta, l\pi^{0}$ \cite{ggjv}, are disfavoured by loose experimental limits, e.g. $BR(\tau \rightarrow \mu^{-} \pi^{0}) < 4.4 \times 10^{-5}$ \cite{pdb}. In conclusion, to probe large NHL masses, we would have to push the experimental upper limits down by at least one order of magnitude for the flavour-violating leptonic decays of the Z boson, and by one to two orders of magnitude for the flavour-violating decays of the $\tau$ lepton. This most likely requires increased high luminosity running at the LEP~I energy and a $\tau$ factory \cite{tfactory}. Note that the dominant contribution to the total rate for $Z \rightarrow l_{1}^{+}l_{2}^{-} + l_{1}^{-}l_{2}^{+}$, $|k_{2}|^{2}|{\cal M}_{2}|^{2}$, depends quartically on the small mixings (see Eqs. \ref{quartica4}, \ref{quarticb4}) and also quartically on the NHL mass $M_{N}$. Further mass-independent limits on mixings will therefore suppress this dominant contribution rather quickly, unless $M_{N}$ is very large. \chapter{Lepton flavour-conserving processes} \label{chap6} In this and the following chapter we will examine two lepton flavour-conserving processes: i) $Z \rightarrow l^{+}l^{-} \; (l = e, \mu, \tau)$ with observables $\Gamma_{ll}$ (the width) and $U_{br}$ (the universality-breaking parameter); and ii) $\mu \rightarrow e \nu \nu$ with the observable $M_{W}$ (the W boson mass). We will show that these observables probe the mixings vs NHL mass parameter space of our model in many respects more efficiently than the flavour-violating decays discussed in the previous chapter. We work to the one-loop ($O(\alpha)$) level of perturbation theory. In Sec.
\ref{sectree} we classify, closely following the SM case of Ref. \cite{key6}, one-loop corrections to $Z \rightarrow l^{+}l^{-}$ into three groups - oblique, vertex and QED corrections. Each group is then individually studied in Secs. \ref{secqed} - \ref{secver}. We note that a large number of contributing diagrams comes directly from the SM without being modified by NHL's. In such cases we use the SM results of Ref. \cite{key6}. As far as non-SM contributions are concerned, we present a detailed calculation of two Feynman diagrams (one oblique and one vertex) and a summary of results for the remaining ones. Divergent results are then renormalized as discussed in Chapter 4. Secs. \ref{secimp} - \ref{breakdown} are less technical and hopefully more intriguing. We show the impact of loop corrections to a $\mu$-decay on the Z width, calculate the W mass $M_{W}$, discuss the violation of the decoupling theorem and the quadratic dependence of the loop corrections on the NHL mass, and the limitations of the perturbative calculations. Our numerical results are presented in Sec. \ref{seresu} and discussed in Sec. \ref{conc}. We investigate here only a part of the mixings vs NHL mass parameter space by setting $ee_{mix} = \mu\mu_{mix} = 0$. The full space is studied in Chapter 7. \section{$\bf Z \rightarrow l^{+}l^{-}$: the tree-level and the corrections} \label{sectree} The tree-level leptonic width of the Z boson in the SM is given by \begin{eqnarray} \label{treew} \Gamma_{0} & = & \frac{\alpha}{3} M_{Z}(v_{l}^{2} + a_{l}^{2}); \end{eqnarray} with $v_{l} = (-1+4s_{W}^{2})/(4s_{W}c_{W})$ and $a_{l} = -1/(4s_{W}c_{W})$ being, respectively, the vector and axial vector couplings of the charged leptons to $Z$. We neglected terms proportional to $m_{l}^{2}/M_{W}^{2}$. In this approximation, as a consequence of the lepton universality of the SM, the partial widths for all three modes ($ee, \mu\mu, \tau\tau$) are equal. 
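As a quick numerical illustration of Eq. \ref{treew} (a sketch with assumed inputs: $\alpha = 1/137.036$ and the on-shell relation $s_{W}^{2} = 1 - M_{W}^{2}/M_{Z}^{2}$ evaluated at the measured $M_{Z} = 91.1884$ GeV, $M_{W} = 80.410$ GeV; the $\Gamma_{0}$ quoted in the next paragraph differs slightly, presumably because it uses the $M_{W}$ predicted at $m_{t} = 176$ GeV rather than the measured value):

```python
import math

alpha = 1.0 / 137.036          # assumed on-shell fine-structure constant
M_Z, M_W = 91.1884, 80.410     # GeV (values used elsewhere in the text)

s2 = 1.0 - M_W**2 / M_Z**2     # on-shell sin^2(theta_W)
sw, cw = math.sqrt(s2), math.sqrt(1.0 - s2)

v_l = (-1.0 + 4.0 * s2) / (4.0 * sw * cw)   # vector coupling of Eq. (treew)
a_l = -1.0 / (4.0 * sw * cw)                # axial vector coupling

Gamma0 = (alpha / 3.0) * M_Z * (v_l**2 + a_l**2) * 1.0e3   # in MeV
assert 80.0 < Gamma0 < 82.0    # ~81 MeV, cf. Gamma_0 = 81.45 MeV below
```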
One-loop corrected leptonic decays of the Z boson in the SM were thoroughly discussed by W. Hollik in Ref. \cite{key6}. He parametrizes the leptonic width as \begin{eqnarray} \label{oneloop} \Gamma_{ll} & = & \frac{\Gamma_{0} + \delta{\hat \Gamma_{ll}}}{1+{\hat \Pi}_{Z} (M_{Z}^{2})}(1+\delta_{QED}). \end{eqnarray} The one-loop electroweak corrections include Z boson propagator (so-called oblique) corrections ${\hat \Pi}_{Z}$; vertex corrections $\delta{\hat \Gamma_{ll}}$ and QED corrections $\delta_{QED}$. To give the reader some feeling for the numbers involved, we note that the SM prediction with $M_{Z} = 91.1884$ GeV, $m_{t} = 176$ GeV and $M_{H} = 200$ GeV is \begin{eqnarray} \Gamma_{0} & = & 81.45 \; {\rm MeV}, \nonumber \\ \Gamma_{ll} & = & 84.03 \; {\rm MeV}, \end{eqnarray} i.e., loops account for $\Gamma_{ll} - \Gamma_{0} \doteq 2.5$ MeV. The current experimental value under the assumption of lepton universality is \cite{mt1} \begin{eqnarray} \label{gamaexp} \Gamma_{ll}^{exp} & = & 83.93 \pm 0.14 \; {\rm MeV}. \end{eqnarray} Without assuming universality \cite{mt1}, \begin{eqnarray} \label{pwidths} \Gamma_{ee}^{exp} & = & 83.92 \pm 0.17 \; {\rm MeV}, \nonumber \\ \Gamma_{\mu\mu}^{exp} & = & 83.92 \pm 0.23 \; {\rm MeV}, \nonumber \\ \Gamma_{\tau\tau}^{exp}& = & 83.85 \pm 0.29 \; {\rm MeV}. \end{eqnarray} In our model, Eqs. \ref{treew}, \ref{oneloop} keep the same form. It is ${\hat \Pi}_{Z}$ and $\delta{\hat \Gamma}_{ll}$ which are modified by the contribution of NHL's. Also, $\Gamma_{0}$ is modified (via $s_{W}$) in an indirect way (see Sec. \ref{secimp}); the QED parameter $\delta_{QED}$ is not affected by NHL's. We now address these corrections one by one, starting with $\delta_{QED}$. \section{QED corrections} \label{secqed} QED corrections (Fig. \ref{qedfd}) form a gauge invariant subset and therefore can be treated independently of the genuine electroweak corrections \cite{key6}. The graphs of Fig. \ref{qedfd} were calculated in Ref. 
\cite{qed} where the results were shown to modify the Z~width by a factor $\delta_{QED}$ (see Eq. \ref{oneloop}), \begin{eqnarray} \delta_{QED} & = & \frac{3 \alpha}{4 \pi}. \end{eqnarray} Our inclusion of NHL's has no impact on this SM result. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.5) \put(.1,+0.325){\mbox{\epsfxsize=5.0in\epsffile{ch51.eps}}} \end{picture} \end{center} \caption{QED corrections.} \label{qedfd} \end{figure} \section{Z-propagator corrections ${\hat \Pi}_{Z}$} \label{seczprop} Z-propagator corrections ${\hat \Pi}_{Z}$ are related to the real part of the renormalized Z self-energy ${\hat \Sigma}_{Z}$ via \begin{eqnarray} {\hat \Pi}_{Z}(M_{Z}^{2}) & = & \frac{\partial \; {\cal R}e \:{\hat \Sigma}_{Z}}{\partial p^{2}}(M_{Z}^{2}), \end{eqnarray} where $p$ is the 4-momentum of the Z boson. ${\hat \Sigma}_{Z}$ includes, besides the unrenormalized Z self-energy $\Sigma_{Z}$, through the renormalization constant $\delta Z_{2}^{Z}$ (see Eq. \ref{rconstants}), also all other unrenormalized gauge boson self-energies $\Sigma_{W}, \Sigma_{\gamma}$ and $\Sigma_{\gamma Z}$. The diagrams contributing to these self-energies are in Figs. \ref{pfd} - \ref{wfd}. The photon self-energy (Fig. \ref{pfd}) and the photon - Z mixing energy (Fig. \ref{pzfd}) are not modified by the NHL's (the sum of the fermion loops runs over all fermions except neutrinos) and therefore we will use the SM analytical formulae of Refs.~\cite{key6,cernlib} given in Eqs. \ref{agama}, \ref{asedem}. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3.0) \put(.9,+0.4){\mbox{\epsfxsize=3.5in\epsffile{fd3.eps}}} \end{picture} \end{center} \caption{Photon self-energy.} \label{pfd} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3.0) \put(.9,+0.125){\mbox{\epsfxsize=3.5in\epsffile{fd1.eps}}} \end{picture} \end{center} \caption{Photon-Z mixing energy.} \label{pzfd} \end{figure} The Z self-energy (Fig. 
\ref{zfd}) and the W self-energy (Fig. \ref{wfd}), are modified in our model as NHL's enter the fermion loops. The non-SM graphs from Figs. \ref{zfd}, \ref{wfd} are shown explicitly in Fig. \ref{obliquefd}. They include the graphs with massless neutrinos (no NHL's), since these differ from the SM in the mixing factors. We will calculate these graphs and the resulting amplitudes will replace the SM neutrino contribution in Eq. \ref{azet}. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,2.6) \put(.6,+0.125){\mbox{\epsfxsize=4.5in\epsffile{fd2.eps}}} \end{picture} \end{center} \caption{Z boson self-energy.} \label{zfd} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,4) \put(0.6,+0.4){\mbox{\epsfxsize=4.5in\epsffile{fd4.eps}}} \end{picture} \end{center} \caption{W boson self-energy.} \label{wfd} \end{figure} \begin{figure}[t] \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3) \put(.25,+0.325){\mbox{\epsfxsize=5.5in\epsffile{fd5.eps}}} \end{picture} \end{center} \caption{Non-SM loops with NHL's and massless neutrinos.} \label{obliquefd} \end{figure} $\Sigma_{Z}$ is associated with the transverse ($g^{\mu\nu}$) part of the Z self-energy tensor $\Sigma_{Z}^{\mu\nu}$: \begin{eqnarray} \Sigma_{Z}^{\mu\nu} & = & g^{\mu\nu}\Sigma_{Z} + p^{\mu}p^{\nu} \tilde{\Sigma}_{Z}. \end{eqnarray} The longitudinal part $\tilde{\Sigma}_{Z}$ does not contribute to S-matrix elements \cite{jeger} and we will not consider it here. The non-SM part of the Z (unrenormalized) self-energy tensor $\Sigma_{Z}^{\mu\nu}$ is the sum of four terms corresponding to Figs. $\ref{obliquefd} \;a - d$ : \begin{eqnarray} \Sigma_{Z}^{\mu\nu} & = & \Sigma_{Z}^{\mu\nu}(M_{N},M_{N}) + \Sigma_{Z}^{\mu\nu}(M_{N},0) + \Sigma_{Z}^{\mu\nu}(0,M_{N}) + \Sigma_{Z}^{\mu\nu}(0,0). \end{eqnarray} We evaluate the contribution of one of these terms, $\Sigma_{Z}^{\mu\nu}(M_{N},M_{N})$ in detail below as an example. 
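The Dirac-algebra backbone of the evaluation that follows is the standard trace identity $Tr\,\{\gamma^{\mu} \not a \gamma^{\nu} \not b\} = 4\,(a^{\mu}b^{\nu} + a^{\nu}b^{\mu} - g^{\mu\nu}\, a \cdot b)$ (the $\gamma_{5}$ piece vanishes only under the loop integral). It can be checked numerically with explicit $4 \times 4$ gamma matrices; a minimal sketch in the Dirac representation with metric $g = {\rm diag}(+,-,-,-)$:

```python
import numpy as np

# Dirac matrices in the Dirac representation; metric g = diag(+1,-1,-1,-1)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]).astype(complex) for s in (sx, sy, sz)]
g = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(v):
    # v_mu gamma^mu for a four-vector v given with upper indices
    return sum(g[m, m] * v[m] * gamma[m] for m in range(4))

rng = np.random.default_rng(0)
a, b = rng.normal(size=4), rng.normal(size=4)   # random test momenta
a_dot_b = a @ g @ b

for mu in range(4):
    for nu in range(4):
        lhs = np.trace(gamma[mu] @ slash(a) @ gamma[nu] @ slash(b))
        rhs = 4.0 * (a[mu] * b[nu] + a[nu] * b[mu] - g[mu, nu] * a_dot_b)
        assert abs(lhs - rhs) < 1e-10
print("trace identity verified")
```

With $a = q - p$ and $b = q$, doubling by the merged $(1-\gamma_{5})$ factors reproduces the $8\big[(q-p)_{\mu}q_{\nu} - g_{\mu\nu}(q-p)q + q_{\mu}(q-p)_{\nu}\big]$ used in the calculation below.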
Divergent integrals are regularized and evaluated in $n$ dimensions using the technique of dimensional regularization due to 't Hooft and Veltman \cite{thooftdim} (see Appendix \ref{Decko}). Following the Feynman rules of Appendix \ref{Cecko}, the contribution of Fig. $\ref{obliquefd}a$ to the Z self-energy tensor is \begin{eqnarray} \label{SZHH} -i\; \Sigma_{Z}^{\mu\nu}(M_{N},M_{N}) & = & - \sum_{a,b=4,5,6}\int \frac{d^{n}q} {(2\pi)^{n}} Tr \Big\{\frac{ie}{4s_{w}c_{w}}\big(K_{H}^{\dagger}K_{H}\big)_{ab}\gamma_{\mu} \big(1-\gamma_{5}\big) \;\;\;\;\; \Big. \nonumber \\ & \times & \Big. \frac{i}{\not q - \not p - M_{N}} \frac{ie}{4s_{w}c_{w}}\big(K_{H}^{\dagger}K_{H}\big)_{ba}\gamma_{\nu} \big(1-\gamma_{5}\big)\frac{i}{\not q - M_{N}} \Big\} \nonumber \\ \nonumber \\ & = & - \frac{e^{2}}{16s_{W}^{2}c_{W}^{2}} \sum_{a,b}\big(K_{H}^{\dagger}K_{H}\big)_{ab} \big(K_{H}^{\dagger}K_{H}\big)_{ba} \int \frac{d^{n}q}{(2\pi)^{n}} \nonumber \\ & \times & \frac{Tr \big\{\gamma_{\mu}\big(1-\gamma_{5}\big) \big(\not q - \not p + M_{N}\big) \gamma_{\nu}\big(1-\gamma_{5}\big)\big(\not q + M_{N}\big)\big\}} {\big[(q-p)^{2} - M_{N}^{2}\big]\big[q^{2} - M_{N}^{2}\big]}. \nonumber \\ \end{eqnarray} In the above we sum over NHL's of all three families ($a,b = 4,5,6$). Using the relations and theorems of Appendix \ref{gamas} we now evaluate the trace: \begin{eqnarray} Tr \; \big\{...\big\} & \equiv & Tr \; \big\{\gamma_{\mu}\big(1-\gamma_{5}\big)\big(\not q - \not p + M_{N}\big) \gamma_{\nu}\big(1-\gamma_{5}\big)\big(\not q + M_{N}\big)\big\} \nonumber \\ & = & 2 \; Tr \; \big\{\gamma_{\mu}\big(1-\gamma_{5}\big)\big(\not q - \not p\big) \gamma_{\nu} \big(\not q + M_{N}\big)\big\} \nonumber \\ & = & 2 \; Tr \; \big\{\gamma_{\mu}\big(\not q - \not p\big)\gamma_{\nu}\not q\big\} - 2 \; Tr \; \big\{\gamma_{\mu}\gamma_{5}\big(\not q - \not p\big)\gamma_{\nu} \not q \big\}. 
\end{eqnarray} The trace with $\gamma_{5}$ does not contribute: \begin{eqnarray} Tr \; \big\{\gamma_{\mu}\gamma_{5}\big(\not q - \not p\big)\gamma_{\nu} \not q \big\} & = & 4 i \; \epsilon_{\mu\alpha\nu\beta} \; (q - p)^{\alpha}q^{\beta} \; = \; 0, \end{eqnarray} using $\epsilon_{\mu\alpha\nu\beta}\;q^{\alpha}q^{\beta}\; = \; 0$ and $\int_{q}q^{\alpha}...\; = \; p^{\alpha}...\;\;\;$. The original trace is thus given by \begin{eqnarray} Tr \; \big\{... \big\} & = & 2 \; Tr \; \big\{\gamma_{\mu}\big(\not q - \not p\big)\gamma_{\nu}\not q \big\} \nonumber \\ & = & 8 \big[(q-p)_{\mu}q_{\nu} - g_{\mu\nu}(q-p) q + q_{\mu}(q-p)_{\nu} \big]. \end{eqnarray} We plug this result back in Eq. \ref{SZHH}: \begin{eqnarray} -i\; \Sigma_{Z}^{\mu\nu}(M_{N},M_{N}) & = &- \frac{e^{2}}{2s_{W}^{2}c_{W}^{2}} k_{HH} \nonumber \\ & \times & \int \frac{d^{n}q}{(2\pi)^{n}} \frac{(q-p)_{\mu}q_{\nu} - g_{\mu\nu}(q-p) q + q_{\mu}(q-p)_{\nu}} {\big[(q-p)^{2} - M_{N}^{2}\big]\big[q^{2} - M_{N}^{2}\big]} \nonumber \\ & = & - \frac{e^{2}}{2s_{W}^{2}c_{W}^{2}}k_{HH} \frac{i {\pi}^{2}}{(2 \pi)^{n}} \Big\{ -p_{\mu} B_{\nu}(p;M_{N},M_{N}) \nonumber \\ & - & p_{\nu}B_{\mu}(p;M_{N},M_{N}) + g_{\mu\nu}p_{\alpha}B^{\alpha}(p;M_{N},M_{N}) \nonumber \\ & + & 2 B_{\mu\nu}(p;M_{N},M_{N}) - g_{\mu\nu}g_{\alpha\beta}B^{\alpha\beta}(p;M_{N},M_{N})\Big\}, \end{eqnarray} where $k_{HH} \equiv \sum \big(K_{H}^{\dagger}K_{H}\big)_{ab}\big(K_{H}^{\dagger}K_{H}\big)_{ba}$ and functions $B_{\mu}, B_{\mu\nu}$ are 't Hooft scalar \linebreak $n$-dimensional integrals defined in Appendix \ref{scalari}. Using (see Eq. \ref{abfunc}) \begin{eqnarray} B_{\mu} & = & - p_{\mu} B_{1}, \;\;\;\;\;\;\;\;\;\; B_{\mu\nu} \;\; = \;\; p_{\mu}p_{\nu}B_{21} - g_{\mu\nu}B_{22}, \end{eqnarray} the $n$-dimensional space-time relation (see Eq. 
\ref{nalgebra}) \begin{eqnarray} g^{\mu \nu} g_{\mu \nu} & = & n, \end{eqnarray} and recalling that only the transverse (terms with $g_{\mu\nu}$) part contributes to \linebreak S-matrix elements, we get \begin{eqnarray} -i\; \Sigma_{Z}^{\mu\nu}(M_{N},M_{N}) & = &- \frac{e^{2}}{2s_{W}^{2}c_{W}^{2}} k_{HH} \frac{i {\pi}^{2}}{(2 \pi)^{n}}g_{\mu\nu}\Big[ -p^{2}B_{1} - p^{2}B_{21} -2B_{22} \Big. \nonumber \\ & + & \Big. n B_{22}\Big]. \end{eqnarray} With the help of formulae from Appendix \ref{scalari}, we arrive at the following result for the self-energy $\Sigma_{Z}(M_{N},M_{N})$: \begin{eqnarray} -i\; \Sigma_{Z}(M_{N},M_{N}) & = &- \frac{e^{2}}{2s_{W}^{2}c_{W}^{2}} k_{HH} \frac{i {\pi}^{2}}{(2 \pi)^{n}}\frac{1}{3}\Big\{\Big(- 3 M_{N}^{2} + p^{2}\Big)\Delta \Big. \nonumber \\ & + & \Big. \Big[2 A_{0}^{fin}(M_{N}) + 2 M_{N}^{2} -\frac{p^{2}}{3} + \Big(p^{2} - M_{N}^{2} \Big) B_{0}^{fin}(p;M_{N},M_{N})\Big]\Big\} \nonumber \\ & = & - \frac{e^{2}}{2s_{W}^{2}c_{W}^{2}}k_{HH} \frac{i {\pi}^{2}}{(2 \pi)^{n}}\frac{1}{3}\Big\{\Big(- 3 M_{N}^{2} + p^{2}\Big)\Delta \Big. \nonumber \\ & + & \Big. \Big[2M_{N}^{2} \ln M_{N}^{2} - \frac{p^{2}}{3} + \Big(F(p;M_{N},M_{N}) - \ln M_{N}^{2}\Big) \nonumber \\ & \times & \Big(p^{2} - M_{N}^{2}\Big)\Big]\Big\}, \end{eqnarray} where the function $F$ is related to the function $B_{0}$ by Eq. \ref{baf} and $A_{0}^{fin}(m) = - m^{2}(- \ln m^{2} + 1)$. The divergence is displayed as a pole at $n = 4$ (see Appendix \ref{dimreg}): \begin{eqnarray} \Delta & = & \frac{2}{4-n} - \gamma - \ln \pi \; = \; \frac{2}{\epsilon} - \gamma - \ln \pi. \end{eqnarray} In $n$ dimensions, $\alpha$ becomes a dimensionful quantity and we should make the following replacement: \begin{eqnarray} \alpha = \frac{e^{2}}{4\pi} & \rightarrow & \alpha \mu^{\epsilon} \;\; = \;\; \alpha \Big(1 + \frac{\epsilon}{2}\ln \mu^{2} + O(\epsilon^{2}) + ... \Big), \end{eqnarray} where $\mu$ is an arbitrary mass.
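As a numerical sanity check of this $\epsilon$-expansion bookkeeping, one can verify that $\big(1+\frac{\epsilon}{2}\ln\mu^{2}\big)\big(1+\epsilon\ln 2\pi\big)\big(\frac{2}{\epsilon}-\gamma-\ln\pi\big) - \frac{2}{\epsilon} \rightarrow -\gamma + \ln 4\pi + \ln\mu^{2}$ as $\epsilon \rightarrow 0$, i.e. the finite pieces reassemble into the combination that appears below as $\Delta_{\mu}$. A minimal sketch ($\mu = 3$ is an arbitrary test scale):

```python
import math

gamma_E = 0.5772156649015329   # Euler-Mascheroni constant

def finite_part(eps, mu):
    """Finite part (pole 2/eps removed) of the product of the O(eps) expansion
    factors of alpha*mu^eps and (2*pi)^(-n) with Delta = 2/eps - gamma - ln(pi)."""
    prefactor = (1.0 + 0.5 * eps * math.log(mu ** 2)) * (1.0 + eps * math.log(2.0 * math.pi))
    return prefactor * (2.0 / eps - gamma_E - math.log(math.pi)) - 2.0 / eps

mu = 3.0
target = -gamma_E + math.log(4.0 * math.pi) + math.log(mu ** 2)
val = finite_part(1e-7, mu)
print(val, target)   # the two numbers agree to better than 1e-6
```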
Together with the expansion \begin{eqnarray} \frac{1}{(2\pi)^{n}} & = & \frac{1}{(2\pi)^{4-\epsilon}} \;\; = \;\; \frac{1}{(2\pi)^{4}}(1 + \epsilon \ln 2 \pi + O(\epsilon^{2}) + ... ), \end{eqnarray} this yields \begin{eqnarray} \frac{1}{(2\pi)^{n}} \alpha\; \Delta & = & \frac{1}{(2\pi)^{4}} \alpha\; \Big( \frac{2}{\epsilon} -\gamma + \ln 4\pi + \ln \mu^{2}\Big) \;\; = \;\; \frac{1}{(2\pi)^{4}} \alpha \; \Delta_{\mu} \nonumber \\ & = & \frac{1}{(2\pi)^{4}} \alpha \; (\Delta_{m} + \ln m^{2}), \;\;\;\Delta_{m} \;=\; \frac{2}{\epsilon} -\gamma + \ln 4\pi + \ln \frac{\mu^{2}}{m^{2}}.\;\;\; \end{eqnarray} The self-energy thus becomes \begin{eqnarray} \Sigma_{Z}(M_{N},M_{N}) & = & \frac{\alpha}{8\pi}\frac{1}{s_{W}^{2}c_{W}^{2}} k_{HH}\frac{1}{3}\Big\{\Big(-3M_{N}^{2} + p^{2}\Big) \Big(\Delta_{M_{N}} + \ln M_{N}^{2}\Big) \nonumber \\ & + & \Big. \Big[2M_{N}^{2} \ln M_{N}^{2} - \frac{p^{2}}{3} + \Big(F(p;M_{N},M_{N}) - \ln M_{N}^{2}\Big)\Big(p^{2} - M_{N}^{2} \Big)\Big]\Big\} \nonumber \\ & = & \frac{\alpha}{8\pi}\frac{1}{s_{W}^{2}c_{W}^{2}} k_{HH}\Big\{\Delta_{M_{N}}\Big(\frac{p^{2}}{3} - M_{N}^{2}\Big) -\frac{p^{2}}{9} \nonumber \\ & + & \frac{1}{3} F(p;M_{N},M_{N})\Big(p^{2} - M_{N}^{2} \Big)\Big\}. \end{eqnarray} For the other three contributions we get, following the same steps, \begin{eqnarray} \Sigma_{Z}(M_{N},0) & = & \frac{\alpha}{8\pi}\frac{1}{s_{W}^{2}c_{W}^{2}} k_{HL}\Big\{\Delta_{M_{N}}\Big(\frac{p^{2}}{3} - \frac{M_{N}^{2}}{2}\Big) + \frac{2}{9}p^{2} - \frac{M_{N}^{2}}{6} \Big. \nonumber \\ & + & \Big. 
F(p;M_{N},0)\Big(\frac{p^{2}}{3} - \frac{M_{N}^{2}}{6} - \frac{M_{N}^{4}}{6p^{2}}\Big)\Big\}, \nonumber \\ \Sigma_{Z}(0,M_{N}) & = & \Sigma_{Z}(M_{N},0), \nonumber \\ \Sigma_{Z}(0,0) & = & \frac{\alpha}{8\pi}\frac{1}{s_{W}^{2}c_{W}^{2}}k_{LL} \frac{p^{2}}{3}\Big(\Delta_{m} + F(p;m,m) - \frac{1}{3} \Big), \end{eqnarray} where $m^{2} \ll p^{2}$, otherwise $m$ can be arbitrary since $F(p;m,m) \; = \; 1 - \ln (-p^{2}/m^{2} \linebreak -~i \epsilon )$ and therefore $\Delta_{m} + F(p;m,m)$ is independent of $m$. The mixing factors $k_{HH}, k_{HL}$ and $k_{LL}$ can be cast into a more convenient form by converting $K_{L}$ matrices into $K_{H}$ matrices with the help of Eq. \ref{KLKH}: \begin{eqnarray} \label{khhmixi} k_{HH} & = & \sum_{a,b} \big(K_{H}^{\dagger}K_{H}\big)_{ab} \big(K_{H}^{\dagger}K_{H}\big)_{ba} \;\; = \;\; \sum_{a,b,l,j} \big(K_{H}^{\dagger}\big)_{al}\big(K_{H}\big)_{lb} \big(K_{H}^{\dagger}\big)_{bj}\big(K_{H}\big)_{ja} \nonumber \\ & = & \sum_{a,b,l,j} \big(K_{H}^{*}\big)_{la}\big(K_{H}\big)_{ja} \big(K_{H}\big)_{lb}\big(K_{H}^{*}\big)_{jb} \;\; = \;\; ee_{mix}^{2} + |e\mu_{mix}|^{2} + |e\tau_{mix}|^{2} \nonumber \\ & & + |e\mu_{mix}|^{2} + \mu\mu_{mix}^{2} + |\mu\tau_{mix}|^{2} + |e\tau_{mix}|^{2} + |\mu\tau_{mix}|^{2} + \tau\tau_{mix}^{2}, \\ \nonumber \\ k_{HL} & = & \sum_{a,i} \big(K_{H}^{\dagger}K_{L}\big)_{ai}\big(K_{L}^{\dagger}K_{H}\big)_{ia} \;\;=\;\; \sum_{a,i,j,k} \big(K_{H}^{\dagger}\big)_{aj} \big(K_{L}\big)_{ji} \big(K_{L}^{\dagger}\big)_{ik}\big(K_{H}\big)_{ka} \nonumber \\ & = & \sum_{a,j,k} \big(K_{H}^{\dagger}\big)_{aj} \delta_{jk} \big(K_{H}\big)_{ka} - \sum_{a,b,j,k} \big(K_{H}^{\dagger}\big)_{aj}\big(K_{H}\big)_{jb} \big(K_{H}^{\dagger}\big)_{bk}\big(K_{H}\big)_{ka} \nonumber \\ & = & \sum_{a,k} \big(K_{H}^{\dagger}\big)_{ak}\big(K_{H}\big)_{ka} - \sum_{a,b} \big(K_{H}^{\dagger}K_{H}\big)_{ab} \big(K_{H}^{\dagger}K_{H}\big)_{ba} \nonumber \\ & = & ee_{mix} + \mu\mu_{mix} + \tau\tau_{mix} - k_{HH}, \\ \nonumber \\ k_{LL} & = & 
\sum_{i,j} \big(K_{L}^{\dagger}K_{L}\big)_{ji}\big(K_{L}^{\dagger}K_{L}\big)_{ij} \;\;=\;\; \sum_{i,j,k,l} \big(K_{L}^{\dagger}\big)_{jk}\big(K_{L}\big)_{ki} \big(K_{L}^{\dagger}\big)_{il}\big(K_{L}\big)_{lj} \nonumber \\ & = & ... \; = \; 3 - 2 \big(ee_{mix} + \mu\mu_{mix} + \tau\tau_{mix}\big) + k_{HH}. \end{eqnarray} The total Z self-energy in our model is obtained by cutting out the neutrino contribution from the total Z self-energy in the SM (the first line of Eq. \ref{azet}) and replacing it with the sum $ \Sigma_{Z}(M_{N},M_{N}) + 2 \Sigma_{Z}(M_{N},0) + \Sigma_{Z}(0,0)$. The W self-energy calculation goes along the same lines yielding \begin{eqnarray} \Sigma_{W}(M_{N},m_{l}) & = &\frac{\alpha}{12\pi s_{W}^{2}}\Big\{ \sum_{l=e,\mu,\tau} ll_{mix} \Big[\frac{\Delta^{M_{N}}}{2}\Big(p^{2}-\frac{5}{2} M_{N}^{2}- \frac{m_{l}^{2}}{2}\Big) \Big. \Big. \Big. \nonumber \\ & + & \Big. \frac{\Delta^{m_{l}}}{2}\Big(p^{2} - \frac{5}{2}m_{l}^{2} - \frac{M_{N}^{2}}{2}\Big) \nonumber \\ & + & \Big(p^{2} - \frac{M_{N}^{2}+m_{l}^{2}}{2}-\frac{(M_{N}^{2}-m_{l}^{2})^{2}}{2p^{2}}\Big) F(p;M_{N},m_{l}) \nonumber \\ & + & \Big. \Big. \Big(p^{2}-\frac{M_{N}^{2}+m_{l}^{2}}{2}\Big)\Big(1-\frac{M_{N}^{2}+m_{l}^{2}} {M_{N}^{2}-m_{l}^{2}}\ln\frac{M_{N}}{m_{l}}\Big)-\frac{p^{2}}{3}\Big]\Big\}, \nonumber \\ \Sigma_{W}(0,m_{l}) & = & \frac{\alpha}{12\pi s_{W}^{2}}\Big\{ \sum_{l=e,\mu,\tau} (1 - ll_{mix})\Big[\Big(p^{2}-\frac{3}{2}m_{l}^{2}\Big) \Delta^{m_{l}} \Big. \Big. \nonumber \\ & + & \Big. \Big. \Big(p^{2}-\frac{m_{l}^{2}}{2}- \frac{m_{l}^{4}}{2p^{2}}\Big)F(p;0,m_{l}) +\frac{2}{3}p^{2}-\frac{m_{l}^{2}}{2}\Big] \Big\}, \end{eqnarray} for the diagrams of Figs. $\ref{obliquefd}$e,f respectively. The total W self-energy in our model is obtained by cutting out the lepton contribution from the total W self-energy in the SM (the first two lines of Eq. \ref{adablju}) and replacing it with the sum $\Sigma_{W}(M_{N},m_{l}) + \Sigma_{W}(0,m_{l})$. 
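The three trace identities for $k_{HH}$, $k_{HL}$ and $k_{LL}$ above can be verified numerically for an arbitrary small mixing matrix. A sketch, assuming only the constraint $K_{L}K_{L}^{\dagger} = I - K_{H}K_{H}^{\dagger}$ of Eq. \ref{KLKH} (realized here via a Hermitian square root) and writing $ll^{\prime}_{mix} = (K_{H}K_{H}^{\dagger})_{ll^{\prime}}$; the numerical entries of $K_{H}$ are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random small 3x3 heavy-sector mixing matrix K_H (entries ~ 0.1, hypothetical)
KH = 0.1 * (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
P = KH @ KH.conj().T                       # (K_H K_H^dagger)_{lj}: the ll'_mix factors

# Build a light-sector K_L consistent with K_L K_L^dagger = I - K_H K_H^dagger
w, V = np.linalg.eigh(np.eye(3) - P)
KL = V @ np.diag(np.sqrt(w)) @ V.conj().T  # Hermitian square root

k_HH = np.trace((KH.conj().T @ KH) @ (KH.conj().T @ KH)).real
k_HL = np.trace((KH.conj().T @ KL) @ (KL.conj().T @ KH)).real
k_LL = np.trace((KL.conj().T @ KL) @ (KL.conj().T @ KL)).real

trace_mix = np.trace(P).real               # ee_mix + mumu_mix + tautau_mix
assert np.isclose(k_HH, np.sum(np.abs(P) ** 2))    # sum of |ll'_mix|^2
assert np.isclose(k_HL, trace_mix - k_HH)
assert np.isclose(k_LL, 3.0 - 2.0 * trace_mix + k_HH)
print("mixing-factor identities verified")
```

The same algebra underlies the closed forms above: $k_{HL} = {\rm Tr}\big[(I - K_{H}K_{H}^{\dagger})K_{H}K_{H}^{\dagger}\big]$ and $k_{LL} = {\rm Tr}\big[(I - K_{H}K_{H}^{\dagger})^{2}\big]$.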
The self-energies are then renormalized using (see Eq. \ref{rselfe}) \begin{eqnarray} \hat{\Sigma}_{Z}\left(p^{2}\right) & = & \Sigma_{Z}\left(p^{2}\right) \;-\;\delta M_{Z}^{2}\;+\;\delta Z_{2}^{Z}\;\left(p^{2}-M_{Z}^{2}\right), \nonumber \\ \hat{\Sigma}_{W}\left(p^{2}\right) & = & \Sigma_{W}\left(p^{2}\right) \;-\;\delta M_{W}^{2}\;+\;\delta Z_{2}^{W}\;\left(p^{2}-M_{W}^{2}\right), \nonumber \end{eqnarray} with renormalization constants given by Eq. \ref{rconstants}. Note that the form of these equations is the same as in the SM. In order to better see the dependence of ${\hat \Pi}_{Z}$ on $M_{N}$ for $M_{N} \gg M_{W}$ (this is the limit we are ultimately interested in, see Sec. \ref{appelc}), we split ${\hat \Pi}_{Z}$ as \begin{eqnarray} {\hat \Pi}_{Z} & = & {\hat \Pi}_{Z}^{SM} + {\hat \Pi}_{Z}^{NHL}, \end{eqnarray} where ${\hat \Pi}_{Z}^{SM}$ is the SM limit of ${\hat \Pi}_{Z}$ and ${\hat \Pi}_{Z}^{NHL}$ is the correction due to NHL's. Expanding the $F$ functions in powers of $M_{N}^{2}$, we obtain in the limit of $M_{N} \gg M_{W}$ \begin{eqnarray} \label{aprox1} {\hat \Pi}_{Z}^{NHL} & = & \frac{\alpha}{\pi}\Big\{\frac{c_{W}^{2}-s_{W}^{2}}{16 s_{W}^{4}}\frac{M_{N}^{2}}{M_{W}^{2}}k_{HH} + O(\ln M_{N}^{2}/M_{W}^{2}) + ...\Big\}. \end{eqnarray} Although this formula looks very simple, one important caveat applies: the leading term is suppressed by the mixing parameter. Indeed, the $k_{HH}$ mixing is quadratic in $\tau\tau_{mix}$, while some of the $O(\ln M_{N}^{2}/M_{W}^{2})$ terms are only linear. As a result, a few of them are comparable in size to the leading term of the $M_{N}^{2}$ expansion for NHL masses up to $\sim 1$ TeV. We illustrate this point in Table \ref{t1}. Here we show numerical predictions for the (exact) oblique parameter ${\hat \Pi}_{Z}$ and compare them with the approximate parameter ${\hat \Pi}_{Zappx} = {\hat \Pi}_{Z}^{SM} + {\hat \Pi}_{Zappx}^{NHL}$ where ${\hat \Pi}_{Zappx}^{NHL}$ is the first term in Eq.
\ref{aprox1} \footnote{The dependence of ${\hat \Pi}_{Z}^{SM}$ on the NHL mass has its origin in a different value of the input parameter $M_{W}$ (as calculated from $G_{\mu}$, see Sec. \ref{secimp}) for different NHL masses. Thus the formula for ${\hat \Pi}_{Z}^{SM}$ comes from the SM, but the choice of $M_{W}$ comes from our model, not the SM.}. The contribution of higher order terms from Eq. \ref{aprox1} is given by the difference $d = {\hat \Pi}_{Z} - {\hat \Pi}_{Zappx}$. Input numbers used are $M_{Z} = 91.1884$ GeV, $M_{H} = 200$~GeV, $m_{t} = 176$ GeV, $\tau\tau_{mix} = 0.033$ and $ee_{mix} = \mu\mu_{mix} = 0$. \begin{table}[htb] \begin{center} \begin{tabular}{|l|r|r|r|r|r|} \hline $M_{N}$ & 0.5 TeV & 1 TeV & 3 TeV & 5 TeV & \\ \hline ${\hat \Pi}_{Z}^{SM}$ & - 4.299 & - 4.297 & - 4.265 & - 4.199 & $\times 10^{-2}$ \\ ${\hat \Pi}_{Z}$ & - 4.313 & - 4.298 & - 4.054 & - 3.526 & $\times 10^{-2}$ \\ ${\hat \Pi}_{Zappx} = {\hat \Pi}_{Z}^{SM} + {\hat \Pi}_{Zappx}^{NHL}$ & - 4.292 & - 4.270 & - 4.013 & - 3.479 & $\times 10^{-2}$ \\ ${\hat \Pi}_{Zappx}^{NHL}$ & 0.007 & 0.027 & 0.252 & 0.720 & $\times 10^{-2}$ \\ $d = {\hat \Pi}_{Z} - {\hat \Pi}_{Zappx}$ & - 0.021 & - 0.028 & - 0.041 & - 0.047 & $\times 10^{-2}$ \\ \hline \end{tabular} \end{center} \caption{Comparison of ${\hat \Pi}_{Zappx}$ with ${\hat \Pi}_{Z}$} \label{t1} \end{table} For $M_{N} = 0.5$ TeV, ${\hat \Pi}_{Zappx}^{NHL}$ contributes $0.007 \times 10^{-2}$ of the total difference between ${\hat \Pi}_{Z}$ and ${\hat \Pi}_{Z}^{SM}$ (${\hat \Pi}_{Z}$ - ${\hat \Pi}_{Z}^{SM}$ = d + ${\hat \Pi}_{Zappx}^{NHL}$). The higher order terms contribute more (with opposite sign), $d = - 0.021 \times 10^{-2}$. At $1$ TeV, ${\hat \Pi}_{Zappx}^{NHL} = 0.027 \times 10^{-2}$ is comparable with $|d| = 0.028 \times 10^{-2}$ and at $3$ TeV it already dominates. Overall, ${\hat \Pi}_{Zappx}$ differs from ${\hat \Pi}_{Z}$ by approximately $1 \%$ in the considered range of NHL masses. 
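The leading term of Eq. \ref{aprox1} that defines ${\hat \Pi}_{Zappx}^{NHL}$ is easy to reproduce numerically. A sketch (assuming $\alpha = 1/137.036$ and the on-shell $s_{W}^{2}$ with $M_{W} = 80.41$ GeV held fixed; since the thesis recomputes $M_{W}$ from $G_{\mu}$ for each $M_{N}$, the entries drift by a few per cent at the largest masses):

```python
import math

alpha = 1.0 / 137.036
M_Z, M_W = 91.1884, 80.41            # GeV; M_W held fixed here (assumption)
sW2 = 1.0 - (M_W / M_Z) ** 2
cW2 = 1.0 - sW2
k_HH = 0.033 ** 2                    # tautau_mix = 0.033, ee_mix = mumu_mix = 0

def pi_z_nhl(M_N):
    # Leading term of Eq. (aprox1): (alpha/pi) (c_W^2-s_W^2)/(16 s_W^4) (M_N/M_W)^2 k_HH
    return (alpha / math.pi) * (cW2 - sW2) / (16.0 * sW2 ** 2) \
           * (M_N / M_W) ** 2 * k_HH

for M_N, table in [(500, 0.007e-2), (1000, 0.027e-2), (3000, 0.252e-2), (5000, 0.720e-2)]:
    print(f"M_N = {M_N/1000:.1f} TeV: {pi_z_nhl(M_N):.5f}  (Table: {table:.5f})")
```

With $M_{W}$ held fixed, the sketch agrees with the ${\hat \Pi}_{Zappx}^{NHL}$ row of Table \ref{t1} to within roughly $5\%$.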
\section{Vertex factor $\delta{\hat \Gamma}_{ll}$} \label{secver} The vertex factor \begin{eqnarray} \delta{\hat \Gamma}_{ll} & = & \frac{2}{3} \alpha M_{Z} \Big\{ v_{l}\Big[ {\cal R}e\: {\hat F}_{V}(M_{Z}^{2}) - {\hat \Pi}_{\gamma Z}(M_{Z}^{2})\Big] + a_{l} {\cal R}e\: {\hat F}_{A}(M_{Z}^{2}) \Big\}, \end{eqnarray} is the source of the lepton universality breaking in $\Gamma_{ll}$. It includes, besides irreducible vertex corrections, also lepton wave function renormalization and the renormalized mixing energy ${\hat \Sigma}_{\gamma Z}$. The irreducible vertices (Fig. \ref{vertexfd}) and lepton wave function corrections (Fig. \ref{selffd}) are absorbed in the renormalized formfactors ${\hat F}_{V}, {\hat F}_{A}$ while the mixing energy (Fig. \ref{pzfd}) comes in as \begin{eqnarray} {\hat \Pi}_{\gamma Z}(p^{2}) & = & \frac{{\cal R}e\; {\hat \Sigma}_{\gamma Z}(p^{2})}{p^{2}}. \end{eqnarray} As in the case of ${\hat \Sigma}_{Z}$, ${\hat \Sigma}_{\gamma Z}$ (given in Appendix \ref{recons}) includes besides $\Sigma_{\gamma Z}$ also $\Sigma_{W}$ and $\Sigma_{Z}$ (through the renormalization constants $\delta Z_{1}^{\gamma Z}, \delta Z_{2}^{\gamma Z}$). This is the source of the $M_{N}$ dependence of ${\hat \Sigma}_{\gamma Z}$, in spite of $\Sigma_{\gamma Z}$ being a purely SM quantity. With $\Sigma_{W}$, $\Sigma_{Z}$ calculated in Sec. \ref{seczprop} we get in the limit of $M_{N} \gg M_{W}$ in the leading order an expression similar to Eq. \ref{aprox1}: \begin{eqnarray} \label{aprox3} {\hat \Pi}_{\gamma Z}(M_{Z}^{2}) & = & {\hat \Pi}_{\gamma Z}(M_{Z}^{2})^{SM} - \frac{\alpha}{\pi}\Big\{\frac{c_{W}}{16 s_{W}^{3}} \frac{M_{N}^{2}}{M_{W}^{2}}k_{HH} + O\Big(\ln \frac{M_{N}^{2}}{M_{W}^{2}}\Big) + ...\Big\}. 
\end{eqnarray} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3) \put(.38,+0.325){\mbox{\epsfxsize=5.0in\epsffile{ch57.eps}}} \end{picture} \end{center} \caption{Irreducible vertex corrections.} \label{vertexfd} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3) \put(.38,+0.325){\mbox{\epsfxsize=5.0in\epsffile{ch58.eps}}} \end{picture} \end{center} \caption{Lepton self-energies.} \label{selffd} \end{figure} \subsection{Irreducible vertices} \label{irreduciblev} Here we examine irreducible vertices. As an example we calculate the contribution to the unrenormalized form factors $F_{V}, F_{A}$ of the diagram of Fig. \ref{vertexfd}f, which we redraw in Fig. \ref{convenfd} to show our convention of momenta flow ($p_{1}, p_{2}, p=p_{1}+p_{2}, q$) and charge flow (arrows on internal lines). \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,2) \put(1.5,+0.325){\mbox{\epsfxsize=2.5in\epsffile{ch59.eps}}} \end{picture} \end{center} \caption{Momenta and charge flow convention.} \label{convenfd} \end{figure} \pagebreak Following the Feynman rules of Appendix \ref{Cecko}, the vertex $V^{\mu}_{NN\phi}$ is given by (terms $m_{l}^{2}/M_{W}^{2}$ neglected) \begin{eqnarray} V^{\mu}_{NN\phi} & = & \sum_{a,b} \int \frac{d^{n}q}{(2\pi)^{n}} \frac{i}{(q-p_{1})^{2}-M_{W}^{2}} \frac{+ig_{2}}{\sqrt{2}M_{W}} \big(K_{H} \big)_{la}M_{N}\frac{1+\gamma_{5}}{2} \frac{i}{\not q - M_{N}} \nonumber \\ & \times & \frac{+ie}{4s_{W}c_{W}} \big(K_{H}^{\dagger}K_{H}\big)_{ab} \gamma_{\mu}\big(1-\gamma_{5}\big)\frac{i}{\not q - \not p_{1} -\not p_{2} - M_{N}}\frac{+ig_{2}}{\sqrt{2}M_{W}} \big(K_{H}^{\dagger}\big)_{bl} M_{N} \nonumber \\ & \times & \frac{1-\gamma_{5}}{2}. 
\end{eqnarray} Introducing a shorthand $l_{2} \equiv \sum \big(K_{H}\big)_{la} \big(K_{H}^{\dagger}K_{H}\big)_{ab}\big(K_{H}^{\dagger}\big)_{bl}$, collecting numerical factors at the front and merging $1 \pm \gamma_{5}$ factors we get \begin{eqnarray} \label{shorthand} V^{\mu}_{NN\phi} & = & - \frac{e^{3}}{32 s_{W}^{3}c_{W}} \frac{M_{N}^{2}}{M_{W}^{2}} l_{2} \nonumber \\ & \times & \int \frac{d^{n}q}{(2\pi)^{n}} \frac{4M_{N}^{2} \gamma^{\mu} (1-\gamma_{5})} {(q^{2}-M_{N}^{2}) \big[(q-p_{1})^{2}-M_{W}^{2}\big] \big[(q-p_{1}-p_{2})^{2}-M_{N}^{2}\big]}.\;\;\; \end{eqnarray} Now we can identify the integral as a finite $C_{0}$ function (see Eq. \ref{cnula}): \begin{eqnarray} V^{\mu}_{NN\phi} & = & - \frac{e^{3}}{8 s_{W}^{3}c_{W}}\frac{M_{N}^{4}}{M_{W}^{2}}l_{2} \frac{-i \pi^{2}}{(4\pi)^{2}\pi^{2}} C_{0}(M_{N},M_{W},M_{N}) \gamma^{\mu} (1-\gamma_{5}) \nonumber \\ & = & + \:ie \gamma^{\mu} (1-\gamma_{5})\frac{\alpha}{4\pi} l_{2} \frac{M_{W}^{2}}{8 s_{W}^{3}c_{W}} \frac{M_{N}^{4}}{M_{W}^{4}} C_{0}(M_{N},M_{W},M_{N}) \nonumber \\ & = & + \:ie \gamma^{\mu} (1-\gamma_{5})\frac{\alpha}{4\pi}l_{2}{\cal M}_{NN\phi}. \end{eqnarray} The total contribution of the irreducible vertex diagrams and the definition of the unrenormalized form factors $F_{V}, F_{A}$ is given by \begin{eqnarray} V^{\mu} & = & +ie\gamma^{\mu}F_{V} - ie\gamma^{\mu}\gamma_{5}F_{A} \nonumber \\ & = & +ie\gamma^{\mu}\frac{\alpha}{4\pi} \Big\{ l_{1}{\cal M}_{\phi WN} + l_{2}{\cal M}_{NN \phi} + l_{1}{\cal M}_{\phi \phi N} - (1-l_{1}){\cal M}_{WW\nu} \Big. \nonumber \\ & + & \Big. l_{1}{\cal M}_{WWN} + l_{3}{\cal M}_{N\nu W} + l_{3}{\cal M}_{\nu NW} + l_{4}{\cal M}_{\nu \nu W} + l_{2}{\cal M}_{NNW} - \big(c_{L}^{3} + c_{R}^{3}\big) {\cal M}_{llZ} \Big\} \nonumber \\ & - & ie\gamma^{\mu}\gamma_{5}\frac{\alpha}{4\pi} \Big\{ l_{1}{\cal M}_{\phi WN} + l_{2}{\cal M}_{NN \phi} + l_{1}{\cal M}_{\phi \phi N} - (1-l_{1}){\cal M}_{WW\nu} + l_{1}{\cal M}_{WWN} \Big. \nonumber \\ & + & \Big. 
l_{3}{\cal M}_{N\nu W} + l_{3}{\cal M}_{\nu NW} + l_{4}{\cal M}_{\nu \nu W} + l_{2}{\cal M}_{NNW} - \big(c_{L}^{3} - c_{R}^{3}\big) {\cal M}_{llZ} \Big\}, \end{eqnarray} where \begin{eqnarray} c_{L} & = & - \frac{1}{2} + s_{W}^{2} ,\;\;\;\;\;\;\;c_{R}\;=\;s_{W}^{2}, \nonumber \\ {\cal M}_{llZ} & = & + \frac{1}{2 s_{W}^{3} c_{W}^{3}} \Big[2M_{Z}^{2}\big(C_{23}(m_{l},M_{Z},m_{l}) + C_{11}(m_{l},M_{Z},m_{l})\big) + 2 \Big. \nonumber \\ & - & \Big. 4 C_{24}^{fin}(m_{l},M_{Z},m_{l}) - \Delta_{\mu} \Big] , \end{eqnarray} $\Delta_{\mu}$ is given in Eq. \ref{deltas} and ${\cal M}_{\phi WN}$, ... were defined before (see Eq. \ref{amplitudes}). The mixing factors are obtained from the flavour-violating ones (see Eq. \ref{fvmixings}) by setting $l = l^{'}$: \begin{eqnarray} \label{elka} l_{1} & = & \big(K_{H}\big)_{la}\big(K_{H}^{\dagger}\big)_{al} \;=\; ll_{mix}, \nonumber \\ l_{2} & = & \big(K_{H}\big)_{la}\big(K_{H}^{\dagger}K_{H}\big)_{ab} \big(K_{H}^{\dagger}\big)_{bl} \;=\; |le_{mix}|^{2} + |l\mu_{mix}|^{2} + |l\tau_{mix}|^{2}, \nonumber \\ l_{3} & = & \big(K_{L}\big)_{li}\big(K_{L}^{\dagger}K_{H}\big)_{ia} \big(K_{H}^{\dagger}\big)_{al} \;=\; l_{1} - l_{2}, \nonumber \\ l_{4} & = & \big(K_{L}\big)_{li}\big(K_{L}^{\dagger}K_{L}\big)_{ij} \big(K_{L}^{\dagger}\big)_{jl} \;=\; 1 - 2 l_{1} + l_{2}. \end{eqnarray} Renormalized form factors ${\hat F}_{V}, {\hat F}_{A}$ are defined (using Eq. \ref{rvertexa}) as \begin{eqnarray} {\hat V}^{\mu} & = & +i e \gamma^{\mu} {\hat F}_{V} - i e \gamma^{\mu} \gamma_{5} {\hat F}_{A} \nonumber \\ & = & + i e \gamma^{\mu} \{F_{V} + v_{l}(\delta Z_{1}^{Z} - \delta Z_{2}^{Z}) - (\delta Z_{1}^{\gamma Z} - \delta Z_{2}^{\gamma Z}) + (v_{l}\:\delta Z_{V}^{l} + a_{l}\:\delta Z_{A}^{l})\} \nonumber \\ & - & i e \gamma^{\mu} \gamma_{5} \{F_{A} + a_{l}(\delta Z_{1}^{Z} - \delta Z_{2}^{Z}) + (v_{l}\:\delta Z_{A}^{l} + a_{l}\:\delta Z_{V}^{l})\}. 
\end{eqnarray} The check of the cancellation of divergences in ${\hat V}^{\mu}$ shows that the divergence of Fig.~\ref{vertexfd}c is cancelled by those of Figs. \ref{selffd}c,d and the divergence of Fig. \ref{vertexfd}g by those of Figs. \ref{selffd}e,f. The sum of the remaining divergences of Figs. \ref{vertexfd}d,e is cancelled by the sum of the divergences of Figs. \ref{selffd}a,b plus those associated with the counterterms $\delta Z_{1}^{Z} - \delta Z_{2}^{Z}$ and $\delta Z_{1}^{\gamma Z} - \delta Z_{2}^{\gamma Z}$ (these come from the bosonic loops of the photon-Z mixing energy). \subsection{Lepton self-energies} \label{seclep} We define lepton self-energies $\Sigma^{l}$ at the one-loop level as shown in Fig. \ref{lepself} \begin{figure}[hbtp] \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.3) \put(.1,+0.325){\mbox{\epsfxsize=3.0in\epsffile{ch511.eps}}} \put(3.4,0.55){$ \equiv \frac{i}{\not p} + \frac{i}{\not p} \; i \; \Sigma^{l} \; \frac{i}{\not p} \;\; = \;\; \frac{i}{\not p} - \frac{i}{\not p} \; \Sigma^{l} \; \frac{1}{\not p},$} \end{picture} \end{center} \caption{The definition of the lepton self-energy $\Sigma^{l}$} \label{lepself} \end{figure} , i.e., we call the blob $i\:\Sigma^{l}$, rather than $- i\:\Sigma^{l}$ (Hollik's convention). $\Sigma^{l}$ has the form \begin{eqnarray} \label{hasthe} \Sigma^{l}(p) & = & {\not p} \Sigma_{V}^{l}(p^{2}) + {\not p} \gamma_{5} \Sigma_{A}^{l}(p^{2}) + m_{l} \Sigma_{S}^{l}(p^{2}) \nonumber \\ & = & {\not p} \frac{1-\gamma_{5}}{2} \Sigma_{L}^{l}(p^{2}) + {\not p} \frac{1+\gamma_{5}}{2} \Sigma_{R}^{l}(p^{2}) + m_{l} \Sigma_{S}^{l}(p^{2}), \end{eqnarray} where $\Sigma_{V,A,S,L,R}^{l}(p^{2})$ are vector, axial vector, scalar, left-handed and right-handed parts respectively. Renormalized charged lepton self-energies ${\hat \Sigma^{l}}$ (see Eq. \ref{rselfe}) do not contribute to the renormalized vector and axial vector formfactors ${\hat F}_{V}, {\hat F}_{A}$. Indeed, the graphs of Fig. 
\ref{selffd} give zero after the on-shell renormalization conditions (see Eq.~$\ref{RC1}$) are applied. However, the unrenormalized lepton self-energies $\Sigma^{l}$ do contribute to the formfactors ${\hat F}_{V}, {\hat F}_{A}$ through the renormalization constants $\delta Z_{V}^{l}, \delta Z_{A}^{l}$. The relation between these renormalization constants and $\Sigma^{l}$ can be found in Eq.~\ref{rconstants}. The charged lepton self-energy is the sum of four parts corresponding to loops with $W-N, W-\nu, \phi-N$ and $Z-l$ respectively (see Fig. \ref{selffd}): \begin{eqnarray} \Sigma^{l}(p) & = & \Sigma_{WN}^{l} + \Sigma_{W\nu}^{l} + \Sigma_{\phi N}^{l} + \Sigma_{Zl}^{l}, \end{eqnarray} where \begin{eqnarray} \label{leptonself1} \Sigma_{WN}^{l} & = & - \frac{\alpha}{32\pi s_{W}^{2}}l_{1} [1- 2 \Delta_{\mu} + 2 \ln M_{W}^{2} + 2 f({\cal X})] \not p (1-\gamma_{5}), \nonumber \\ \Sigma_{W\nu}^{l} & = & - \frac{\alpha}{32\pi s_{W}^{2}}(1-l_{1}) [1 - 2 \Delta_{\mu} + 2 \ln M_{W}^{2}] \not p (1-\gamma_{5}), \nonumber \\ \Sigma_{\phi N}^{l} & = & + \frac{\alpha}{32\pi s_{W}^{2}}l_{1} {\cal X} [ \Delta_{\mu} + \frac{1}{2} - \ln M_{W}^{2} - f({\cal X})] \not p (1-\gamma_{5}), \nonumber \\ \Sigma_{Zl}^{l} & = & - \frac{\alpha}{16\pi s_{W}^{2}c_{W}^{2}}c_{L}^{2} [ 1 - 2 \Delta_{\mu} + 2 \ln M_{Z}^{2} ] \not p (1-\gamma_{5}) \nonumber \\ & - & \frac{\alpha}{16\pi s_{W}^{2}c_{W}^{2}}c_{R}^{2} [ 1 - 2 \Delta_{\mu} + 2 \ln M_{Z}^{2} ] \not p (1+\gamma_{5}), \nonumber \\ f({\cal X}) & = & \frac{{\cal X}^{2}}{({\cal X}-1)^{2}}\ln {\cal X} + \frac{{\cal X}}{1-{\cal X}}, \;\;\;\;\;\;\; {\cal X} \; = \; \frac{M_{N}^{2}}{M_{W}^{2}}. \end{eqnarray} These expressions are evaluated at $p^{2} = m_{l}^{2}$ as required by $\delta Z_{V}^{l}, \delta Z_{A}^{l}$ (see Eq. \ref{rconstants}), and terms $\frac{m_{l}^{2}}{M_{W}^{2}}$ were neglected. Note the $\Sigma_{Zl}^{l}$ part is a pure SM result. 
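The function $f({\cal X})$ above is smooth at ${\cal X} = 1$ despite its apparent singularity (its limit there is $1/2$), and for ${\cal X} \gg 1$ it grows only logarithmically, $f({\cal X}) \approx \ln {\cal X} - 1$; the quadratic nondecoupling of $\Sigma_{\phi N}^{l}$ therefore comes from the explicit prefactor ${\cal X} = M_{N}^{2}/M_{W}^{2}$ rather than from $f$ itself. A quick numerical sketch:

```python
import math

def f(X):
    """f(X) = X^2/(X-1)^2 * ln(X) + X/(1-X), as in Eq. (leptonself1)."""
    return X ** 2 / (X - 1.0) ** 2 * math.log(X) + X / (1.0 - X)

# Smooth at X = 1: the two singular pieces cancel, leaving f -> 1/2
print(f(1.001))              # ~0.5003

# Logarithmic growth for X >> 1: f(X) ~ ln(X) - 1
for X in (1e2, 1e4, 1e6):
    print(X, f(X), math.log(X) - 1.0)
```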
For completeness, we also give $\Sigma_{\gamma l}^{l}$: \begin{eqnarray} \label{leptonself2} \Sigma_{\gamma l}^{l} & = & \frac{\alpha}{4\pi}\big[\Delta_{\mu} - 1 + 2 B_{0}^{fin}(p;m_{\lambda},m_{l}) + 2 B_{1}^{fin}(p;m_{\lambda},m_{l})\big] \not p \nonumber \\ & - & \frac{\alpha}{\pi}\big[\Delta_{\mu} - \frac{1}{2} + B_{0}^{fin}(p;m_{\lambda},m_{l})\big] m_{l}. \end{eqnarray} Here $m_{\lambda}$ is the regulator photon mass and $B_{0}^{fin}, B_{1}^{fin}$ (evaluated at $p^{2} = m_{l}^{2}$) are given in Eq. \ref{bphoton}. $\Sigma_{\gamma l}^{l}$ is, in this case, part of the QED subset treated independently of the genuine electroweak corrections (see Sec. \ref{secqed}). However, in Chapter 7, this photonic correction will be included, together with the genuine electroweak corrections, as part of the total lepton self-energy and the counterterm $\delta Z_{L}^{l}$. \subsection{Form factors ${\hat F}_{V}, {\hat F}_{A}$ in the limit of large NHL mass} \label{secform} Studying the behaviour of ${\hat F}_{V}, {\hat F}_{A}$ in the limit $M_{N} \gg M_{W}$, we observe, in agreement with Sec. \ref{arit}, three sources of quadratic nondecoupling: ${\cal M}_{N N \phi}, {\cal M}_{\phi \phi N}$ and ${\cal M}_{\phi N}$ (which contains $\Sigma^{l}_{\phi N}$). The quadratic nondecoupling of these amplitudes is also expected from dimensional analysis. For illustration, for ${\cal M}_{N N \phi}$ we get from Eq. \ref{shorthand} \begin{eqnarray} {\cal M}_{N N \phi} & \sim & M_{N}^{4} \int \frac{d^{n}q}{(q^{2}-M_{N}^{2}) \big[(q-p_{1})^{2}-M_{W}^{2}\big] \big[(q-p_{1}-p_{2})^{2}-M_{N}^{2}\big]}. \end{eqnarray} Setting $n=4$ and neglecting all masses and momenta except $M_{N}$, we obtain \begin{eqnarray} {\cal M}_{N N \phi} & \sim & M_{N}^{4} \int \frac{d^{4}q}{(q^{2}-M_{N}^{2}) q^{2} (q^{2}-M_{N}^{2})}.
\end{eqnarray} The integral has to be of the form $(M_{N})^{k}$; power counting yields $k = -2$, so \begin{eqnarray} {\cal M}_{N N \phi} & \sim & M_{N}^{4} M_{N}^{-2} \; = \; M_{N}^{2}. \end{eqnarray} The exact result is (see Eqs. \ref{amplitudes}, \ref{aproxc}) \begin{eqnarray} {\cal M}_{N N \phi} & = & + \frac{1}{8s_{W}^{3}c_{W}}\frac{M_{N}^{2}}{M_{W}^{2}} + ... \; . \end{eqnarray} As mentioned in Sec. \ref{fvzb1}, the contribution of ${\cal M}_{\phi \phi N}$ cancels with that of ${\cal M}_{\phi N}$, leaving ${\cal M}_{N N \phi}$ as the only amplitude with the nondecoupling behaviour. We will now shed more light on this curious cancellation. The way to proceed is to replace the Z boson by the photon in the diagrams corresponding to these amplitudes and to use a Ward identity \cite{key6}, which relates the vertex form factors $F_{V,A}^{\gamma}$ evaluated at $(p_{1}+p_{2})^{2}=0$ to the electron self-energies represented by the counterterms $\delta Z_{V,A}$: \begin{eqnarray} \label{wardi} F_{V,A}^{\gamma}(0) + \delta Z_{V,A} & = & \frac{1}{4s_{W}c_{W}} \frac{\Sigma_{\gamma Z}(0)}{M_{Z}^{2}}, \end{eqnarray} where $\Sigma_{\gamma Z}(0)$ is the term originating in the bosonic loops of the $\gamma$-Z mixing. At zero $M_{N}$ the graphs with the unphysical Higgs $\phi$ are negligible; however, with rising $M_{N}$ two graphs come to dominate the left-hand side of Eq. \ref{wardi}: the vertex ${\cal M}_{\phi \phi N}^{\gamma}$ and the self-energy ${\cal M}_{\phi N}^{\gamma}$. Since the right-hand side of Eq. \ref{wardi} is not affected by the NHL's, it remains constant and (very) small with respect to ${\cal M}_{\phi \phi N}^{\gamma}$ or ${\cal M}_{\phi N}^{\gamma}$ at $M_{N} = O({\rm TeV})$. Hence the only way to satisfy the above formula is to have ${\cal M}_{\phi N}^{\gamma} = - {\cal M}_{\phi \phi N}^{\gamma}$ in the limit of large $M_{N}$. If we now return from the photon to the Z boson, it suffices to check how the Feynman rules for the vertices change.
It turns out that \begin{eqnarray} {\cal M}_{\phi N} \; \equiv \; {\cal M}_{\phi N}^{Z} & = & c {\cal M}_{\phi N}^{\gamma}, \\ {\cal M}_{\phi \phi N} \; \equiv \; {\cal M}_{\phi \phi N}^{Z} & = & c {\cal M}_{\phi \phi N}^{\gamma}, \end{eqnarray} with the same constant $c$ (a simple function of the Weinberg angle) in both equations, so we conclude that \begin{eqnarray} {\cal M}_{\phi N} & = & - {\cal M}_{\phi \phi N}. \end{eqnarray} Note that for the photon case the form factors $F_{V,A}^{\gamma}$ are evaluated at $(p_{1}+p_{2})^{2}=0$, while for the Z case $F_{V,A}$ are calculated at $(p_{1}+p_{2})^{2} = M_{Z}^{2}$; this is acceptable, since in the limit of large $M_{N}$ the Z mass can be neglected. This cancellation implies that the form factors are dominated by the single amplitude ${\cal M}_{N N \phi}$: \begin{eqnarray} \label{aprox4} {\hat F}_{V} \; = \; {\hat F}_{A} & = & \frac{\alpha}{4\pi} l_{2} \frac{1}{8s_{W}^{3}c_{W}}{\cal X} + ... \; . \end{eqnarray} As in the case of the oblique parameter ${\hat \Pi}_{Z}$, we have to examine carefully the reliability of this approximation. We note again that the leading term is suppressed by the mixing parameter $l_{2} = \tau\tau_{mix}^{2}$ \footnote{Assuming $ee_{mix} = \mu\mu_{mix} = 0$ and the $Z \rightarrow \tau\tau$ mode}. On the other hand, some of the higher order terms (found in many of the irreducible vertex diagrams, not just ${\cal M}_{N N \phi}$) are only linear in $\tau\tau_{mix}$, which implies they can be larger than expected on the basis of an $M_{N}$ expansion. In Table \ref{t2} we show numerical predictions for the (exact) form factor ${\hat F}_{V}$ and compare them with the approximate parameter ${\hat F}_{Vappx} = {\hat F}_{V}^{SM} + {\hat F}_{Vappx}^{NHL}$, where ${\hat F}_{Vappx}^{NHL}$ is the leading term in Eq. \ref{aprox4}. The dependence of ${\hat F}_{V}^{SM}$ on the NHL mass has its origin in a different value of the input parameter $M_{W}$ (as calculated from $G_{\mu}$, see Sec.
\ref{secimp}) for different NHL masses. Input numbers used are $M_{Z} = 91.1884$~GeV, $M_{H} = 200$ GeV, $m_{t} = 176$ GeV, $\tau\tau_{mix} = 0.033$ and $ee_{mix} = \mu\mu_{mix} = 0$. \begin{table}[htb] \begin{center} \begin{tabular}{|l|r|r|r|r|r|} \hline $M_{N}$ & 0.5 TeV & 1 TeV & 3 TeV & 5 TeV & \\ \hline ${\hat F}_{V}^{SM}$ & 1.938 & 1.939 & 1.949 & 1.971 & $\times 10^{-3}$ \\ ${\hat F}_{V}$ & 2.056 & 2.247 & 3.525 & 5.903 & $\times 10^{-3}$ \\ ${\hat F}_{Vappx} = {\hat F}_{V}^{SM} + {\hat F}_{Vappx}^{NHL}$ & 1.971 & 2.071 & 3.150 & 5.345 & $\times 10^{-3}$ \\ ${\hat F}_{Vappx}^{NHL}$ & 0.033 & 0.132 & 1.201 & 3.374 & $\times 10^{-3}$ \\ $d = {\hat F}_{V} - {\hat F}_{Vappx}$ & 0.085 & 0.176 & 0.375 & 0.558 & $\times 10^{-3}$ \\ \hline \end{tabular} \end{center} \caption{Comparison of ${\hat F}_{Vappx}$ with ${\hat F}_{V}$ } \label{t2} \end{table} For $M_{N} = 0.5$ TeV, ${\hat F}_{Vappx}^{NHL}$ contributes $0.033 \times 10^{-3}$ to the total difference between ${\hat F}_{V}^{SM}$ and ${\hat F}_{V}$. This is less than the contribution of the higher order terms, $d = 0.085 \times 10^{-3}$. At $3$ TeV, however, the roles are reversed, with ${\hat F}_{Vappx}^{NHL}$ dominating as expected. Overall, ${\hat F}_{Vappx}$ differs from ${\hat F}_{V}$ by approximately $4 \%$ at $0.5$ TeV and by $9.5 \%$ at $5$ TeV. This is worse than the $1 \%$ found for the ${\hat \Pi}_{Z}$ parameter and signals that the terms linear in the mixing are more important in this case. Indeed, the second largest non-SM contribution to ${\hat F}_{V}$ comes from the graph of Fig. \ref{vertexfd}d (associated with the amplitude ${\cal M}_{WWN}$). It is given by (see Eqs. \ref{aproxm}, \ref{aproxc}) \begin{eqnarray} -\frac{\alpha}{4 \pi}l_{1}\frac{3c_{W}}{4s_{W}^{3}}\Big(\frac{5}{6} - \ln M_{N}^{2}\Big), \end{eqnarray} and even at $5$ TeV it is as large as $2.02 \times 10^{-3}$, compared to ${\hat F}_{Vappx}^{NHL} = 3.374 \times 10^{-3}$.
Other terms are also relatively large, partly cancelling the ${\cal M}_{WWN}$ effect to produce the comparatively small difference of $9.5 \%$. \section{Imprecise $M_{W}$, precise $G_{\mu}$} \label{secimp} Our on-shell renormalization scheme takes $\alpha, M_{Z}$ and $M_{W}$ as input parameters (see Sec. \ref{onshell1}). However, the direct measurement of $M_{W}$ \cite{mw1}, \begin{eqnarray} M_{W} & = & 80.410 \pm 0.180 \; {\rm GeV}, \end{eqnarray} as opposed to that of $M_{Z}$ \cite{mt1}, \begin{eqnarray} M_{Z} & = & 91.1884 \pm 0.0022 \; {\rm GeV}, \end{eqnarray} is not yet precise enough for its use as an input parameter. To appreciate this point, let us examine the sensitivity of $\Gamma_{ll}$ to $M_{W}$. It comes mainly from the tree-level formula (loops are suppressed by factors of $\alpha$) \begin{eqnarray} \Gamma_{0} & = & \frac{\alpha}{3} M_{Z}(v_{l}^{2} + a_{l}^{2}) \;=\;\frac{\alpha}{3} M_{Z}\frac{1-4s_{W}^{2}+8s_{W}^{4}}{8s_{W}^{2}c_{W}^{2}}, \end{eqnarray} where $c_{W}^{2} = M_{W}^{2}/M_{Z}^{2}$. It turns out that $\sigma_{M_{W}} = 0.18$ GeV induces $\sigma_{\Gamma_{0}} \doteq \sigma_{\Gamma_{ll}} \doteq 1$~MeV. This is very large compared to the error on the experimental value (see Eq. \ref{gamaexp}) \begin{eqnarray} \Gamma_{ll}^{exp} & = & 83.93 \pm 0.14 \; {\rm MeV}. \nonumber \end{eqnarray} As a consequence, $M_{W}$, even though convenient as an input parameter for one-loop calculations, is usually replaced by the more precisely measured muon decay constant $G_{\mu}$ \cite{pdb}: \begin{eqnarray} G_{\mu} & = & 1.16637 \; (2)\times 10^{-5} \; {\rm GeV}^{-2}. \end{eqnarray} The replacement is done in such a way that $M_{W}$ is still kept in our formulae for one-loop self-energies and vertices, but its actual value is no longer taken from the direct measurement; it is instead calculated from $G_{\mu}$.
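The sensitivity quoted above is easy to reproduce numerically. The sketch below evaluates the tree-level formula only, with $\alpha = 1/137.036$; it is an illustration, not the full one-loop calculation:

```python
ALPHA = 1 / 137.036   # standard input set of this chapter
MZ = 91.1884          # GeV

def gamma0(mw):
    """Tree-level leptonic width Gamma_0 in GeV, with c_W^2 = M_W^2/M_Z^2."""
    cw2 = mw**2 / MZ**2
    sw2 = 1.0 - cw2
    return ALPHA / 3 * MZ * (1 - 4*sw2 + 8*sw2**2) / (8 * sw2 * cw2)

mw, sigma_mw = 80.410, 0.180
print(gamma0(mw) * 1e3)                            # about 81 MeV at tree level
print((gamma0(mw + sigma_mw) - gamma0(mw)) * 1e3)  # of order 1 MeV
```

A shift of order 1 MeV from $\sigma_{M_{W}}$ alone indeed swamps the 0.14 MeV experimental error on $\Gamma_{ll}$.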
To calculate $M_{W}$ from $G_{\mu}$, one first computes the $\mu$-decay rate to one-loop level in the Fermi model (this defines $G_{\mu}$) and then equates it to the one-loop calculation of the same quantity in the SM \cite{key6}. The result is a formula relating $M_{W}$ to $G_{\mu}$: \begin{eqnarray} \label{MGSM} M_{W}^{2} s_{W}^{2} & = & \frac{\pi\alpha}{\sqrt 2 G_{\mu} (1-\Delta r^{SM})}, \end{eqnarray} with $\Delta r^{SM}$ (the notation $\Delta r$ is reserved for our model) representing loop effects in $\mu$-decay, \begin{eqnarray} \Delta r^{SM} & = & \Delta r^{SM}(\alpha,M_{W},M_{Z},M_{H},m_{t}) \nonumber \\ & = & \frac{{\cal R}e \:{\hat \Sigma}_{W}^{SM}(0)}{M_{W}^{2}} + \frac{\alpha}{4\pi s_{W}^{2}} \Big(6 + \frac{7-4s_{W}^{2}}{2s_{W}^{2}} \ln c_{W}^{2}\Big) \nonumber \\ & = & \frac{{\cal R}e \:{\hat \Sigma}_{W}^{SM}(0)}{M_{W}^{2}} + \delta_{V}^{SM}. \end{eqnarray} $M_{W}$ is found from Eq. \ref{MGSM} by an iterative procedure. The reason we discuss this in some detail is that our model modifies the relation between $G_{\mu}$ and $M_{W}$ at both the tree and one-loop levels; thus NHL's, besides contributing directly to $\Gamma_{ll}$ through Z-decay loops, also contribute indirectly via $M_{W}$. This complication in the calculation of $\Gamma_{ll}$ is something we can actually benefit from, since $M_{W}$ provides another observable sensitive to NHL's. In our model, Eq.~\ref{MGSM} is modified as \begin{eqnarray} \label{MG} M_{W}^{2} s_{W}^{2} & = & \frac{\pi\alpha}{\sqrt{2} G_{\mu} (1-\Delta r)} (1-\frac{1}{2}ee_{mix}-\frac{1}{2}\mu\mu_{mix}), \end{eqnarray} where $1-\frac{1}{2}ee_{mix}-\frac{1}{2}\mu\mu_{mix}$ represents the tree-level modification \footnote{$\Gamma_{\mu}$ is modified by $1-ee_{mix}-\mu\mu_{mix}$ and we take the square root of $\Gamma_{\mu}$ to get Eq. \ref{MG}.}. The loop quantity $\Delta r$ is calculated from the diagrams depicted in Figs. \ref{boxmuon} - \ref{vermuon} and the diagrams with the corrected W propagator.
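The iterative procedure can be sketched as follows. Here $\Delta r$ is represented by a constant, illustrative, roughly SM-sized value; in the actual calculation it is recomputed from the loop expressions at every step:

```python
import math

MZ = 91.1884   # GeV
A = 37.281     # GeV, with A^2 = pi*alpha/(sqrt(2) G_mu)  (standard input set)

def solve_mw(delta_r_of_mw, mw=80.0, tol=1e-10):
    """Iterate Eq. (MGSM): solve M_W^2 (1 - M_W^2/M_Z^2) = A^2/(1 - Delta r)
    for the larger root, updating Delta r(M_W) until M_W converges."""
    while True:
        C = A**2 / (1.0 - delta_r_of_mw(mw))
        mw_new = math.sqrt(0.5 * MZ**2 * (1.0 + math.sqrt(1.0 - 4.0 * C / MZ**2)))
        if abs(mw_new - mw) < tol:
            return mw_new
        mw = mw_new

# with an illustrative, M_W-independent Delta r of about 0.035:
print(solve_mw(lambda mw: 0.035))   # close to 80.4 GeV
```

Note that at $\Delta r = 0$ the same quadratic gives $M_{W} \approx 80.9$ GeV, so the loop correction pulls $M_{W}$ down by roughly half a GeV for this choice.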
We devote the next chapter to the detailed calculation of $\Delta r$ for arbitrary values of the mixings $ll_{mix}$. Here we make a specific choice of mixings which will help us to avoid most of the $\mu$-decay loop diagrams: we put $ee_{mix} = \mu\mu_{mix} = 0$. This choice also implies $e\mu_{mix} = e\tau_{mix} = \mu\tau_{mix} = 0$ and leaves $\tau\tau_{mix}$ as the only nonzero mixing parameter. Of the non-SM $\mu$-decay loops, the vertex corrections, neutrino self-energy corrections and the boxes are all proportional to either $ee_{mix}$ or $\mu\mu_{mix}$, and therefore only the $W$ propagator correction ${\hat \Sigma}_{W}$ (see Sec. \ref{seczprop}) remains to modify $\Delta r^{SM}$: \begin{eqnarray} \label{delrv} \Delta r & = & \frac{{\cal R}e \:{\hat \Sigma}_{W}(0)}{M_{W}^{2}} + \delta_{V} \; = \; \frac{{\cal R}e \:{\hat \Sigma}_{W}(0)}{M_{W}^{2}} + \delta_{V}^{SM}. \end{eqnarray} In the limit of $M_{N} \gg M_{W}$ we obtain an expression similar to Eq. \ref{aprox1}, \begin{eqnarray} \label{aprox2} \Delta r & = & \Delta r^{SM} - \frac{\alpha}{\pi}\Big\{\frac{c_{W}^{2}}{16 s_{W}^{4}} k_{HH} \frac{M_{N}^{2}}{M_{W}^{2}} + O(\ln M_{N}^{2}/M_{W}^{2}) + ...\Big\}. \end{eqnarray} Neglecting terms beyond $O(M_{N}^{2}/M_{W}^{2})$ results in an error of less than $2.5 \%$ for $M_{N} < 5$ TeV. \section{Violation of the decoupling theorem} \label{appelc} The reader may wonder whether NHL loop effects can be noticeable at all. After all, these loops are suppressed by factors such as $ll_{mix}$ or even $ll_{mix}^{2}$ (see for example Eq. \ref{aprox2}, where $k_{HH} = \tau\tau_{mix}^{2}$) compared to their SM counterparts. Although this reasoning is correct, it is not complete. In spite of the smallness of the mixing parameters, the NHL loop effects can actually be larger than the SM ones. The reason is the (possibly) large NHL mass and a violation of the decoupling theorem.
The decoupling theorem was established and proven by Appelquist and Carazzone in Ref. \cite{ac}. It describes how the heavy particles of a renormalizable theory $A$ enter the low-energy theory $B$ \cite{dono}: {\it All effects of the heavy particle in the low-energy theory B appear either as a renormalization of the coupling constants or else are suppressed by powers of the heavy particle mass.} For instance, the heavy $W$ and $Z$ bosons of the SM (theory $A$) decouple from low-energy QED (theory~$B$). There are, however, cases in spontaneously broken theories when the decoupling theorem is violated. A well-known example can be found in the SM itself with respect to the top quark. There are two diagrams (Fig. \ref{toblique}) which exhibit a quadratic dependence on the top quark mass and therefore do not vanish as $m_{t} \rightarrow \infty$. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.5) \put(.5,+0.225){\mbox{\epsfxsize=4.5in\epsffile{ch510.eps}}} \end{picture} \end{center} \caption{Diagrams with the top quark nondecoupling.} \label{toblique} \end{figure} The nondecoupling effects are easily visible in this case: the diagrams of Fig. \ref{toblique} led to indirect bounds on the top quark mass from LEP I observables. The bounds, $m_{t} = 170 \pm 10 {}^{+17}_{-19} \; {\rm GeV}\:$ \cite{mt1}, are actually competitive with the value obtained so far from direct observation at CDF at Fermilab, $m_{t} = 176 \pm 8 \pm 10 \; {\rm GeV}\:$ \cite{mt2}. The NHL's also exhibit nondecoupling, as can be seen in Eqs. \ref{aprox1}, \ref{aprox3}, \ref{aprox2}, where the dominant terms are quadratically dependent on $M_{N}$. We note, however, that the nondecoupling of NHL's is a consequence of our treatment of the mixings as parameters independent of $M_{N}$.
If we replaced $\tau\tau_{mix}$ with $\tau\tau_{mix} \sim \frac{D^{2}}{M_{N} ^{2}}$ ($K_{H} \sim \frac{D}{M_{N}}$), the dominant terms would change from $\sim \tau\tau_{mix}^{2} M_{N}^{2}$ (see Eqs. \ref{aprox1}, \ref{aprox3}, \ref{aprox2}) to \begin{eqnarray} \tau\tau_{mix}^{2} M_{N}^{2} & \rightarrow & \frac{D^{4}}{M_{N} ^{4}} M_{N}^{2} \; = \; \frac{D^{4}}{M_{N} ^{2}}, \end{eqnarray} and decoupling would be recovered. The analogy with the top quark is useful in another respect. We can make a naive estimate of how large an NHL mass $M_{N}$ should be to produce an effect comparable to that of the top quark. For example, oblique corrections rise with the top quark mass as ${\hat \Pi}_{Z}^{SM} \sim m_{t}^{2}/M_{W}^{2}$, \footnote{According to Okun \cite{okun}, this point is subtle: a large positive contribution from the top quark is cancelled by a large negative contribution from all other virtual particles. As a result, genuine electroweak corrections are negligible and the bulk of the $\sim 2.5$ MeV loop corrections is associated with the running coupling constant $\alpha$.} while in our model, the dependence on $M_{N}$ (besides the dependence on $m_{t}$) is \begin{eqnarray} {\hat \Pi}_{Z} & \sim & k_{HH} \frac{M_{N}^{2}}{M_{W}^{2}} \; \doteq\; \tau\tau_{mix}^{2} \frac{M_{N}^{2}}{M_{W}^{2}}, \end{eqnarray} yielding \begin{eqnarray} m_{t}^{2} & \sim & \tau\tau_{mix}^{2}M_{N}^{2}. \end{eqnarray} As a result, for $\tau\tau_{mix} = 0.033$, $M_{N} \doteq 30\; m_{t} \;\doteq\; 5 $ TeV is required to produce a 'top size' effect. We note this is not a necessary condition for observing NHL effects given the current mixings. As our numerical results indicate, even NHL's with masses $M_{N} \sim 3$ TeV could lead to non-SM effects in $\Gamma_{ll}$. In the same sense, we do not have to change the top mass by $175$ GeV to see a conflict with the data.
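The arithmetic behind this estimate is simple enough to check directly (a sketch using this chapter's input values):

```python
mt, MW = 176.0, 80.41   # GeV
tau_mix = 0.033

# 'top size' condition: tau_mix^2 * M_N^2 ~ m_t^2
MN = mt / tau_mix
print(MN / 1e3, MN / mt)   # about 5.3 TeV, i.e. roughly 30 top masses

# the two oblique factors are then comparable (equal by construction here):
print((mt / MW) ** 2, tau_mix ** 2 * (MN / MW) ** 2)
```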
\section{Heavy NHL's and perturbation theory breakdown} \label{breakdown} With one-loop corrections rising as $M_{N}^{2}/M_{W}^{2}$, we have to know at what value of $M_{N}$ these corrections become comparable in size to the tree-level contribution. At this point the theory becomes strongly interacting and the perturbative treatment fails. Alternatively, we can study the transition to the strongly interacting regime through the tree-level width of the NHL. We investigate this latter approach here and draw an analogy with the strongly interacting Higgs. There are three decay modes of an NHL, $N_{a} \rightarrow W^{\pm} + l^{\mp}, N_{a} \rightarrow Z + \nu_{i}$ and $N_{a} \rightarrow H + \nu_{i}$, open for $M_{N} > M_{W}, M_{Z}, M_{H}$. They are represented by the diagrams of Fig. \ref{nhldecay}. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,1.75) \put(.3,+0.325){\mbox{\epsfxsize=5.0in\epsffile{chap51.eps}}} \end{picture} \end{center} \caption{Decay modes of an NHL.} \label{nhldecay} \end{figure} In the limit $M_{N} \gg M_{W}, M_{Z}, M_{H}$, the partial decay widths are given by \nopagebreak \footnote{The partial width for $N_{a} \rightarrow W^{\pm} + l^{\mp}$ is the same as in the see-saw model of Ref.
\cite{pilaftsis2}, while the widths for $N_{a} \rightarrow Z + \nu_{i}$ and $N_{a} \rightarrow H + \nu_{i}$ are half of their see-saw counterparts.} \begin{eqnarray} \sum_{l=e,\mu,\tau}\Gamma\;(N_{a} \rightarrow W^{\pm} + l^{\mp}) & = & \frac{\alpha}{16 s_{W}^{2}} a_{mix}\frac{M_{N}^{3}}{M_{W}^{2}}, \nonumber \\ \sum_{i=1,2,3}\Gamma\;(N_{a} \rightarrow H + \nu_{i}) & = & \frac{\alpha}{32 s_{W}^{2}} a_{mix} \frac{M_{N}^{3}}{M_{W}^{2}}, \nonumber \\ \sum_{i=1,2,3}\Gamma\;(N_{a} \rightarrow Z + \nu_{i}) & = & \frac{\alpha}{32 s_{W}^{2}}a_{mix} \frac{M_{N}^{3}}{M_{W}^{2}}, \end{eqnarray} where the following relations were used: \begin{eqnarray} \sum_{i} |\left(K_{L}^{\dagger}K_{H}\right)_{ia}|^{2} & \doteq & \sum_{l} |\left(K_{H}\right)_{la}|^{2} \; = \; a_{mix}. \end{eqnarray} The total tree-level width of the NHL is the sum of the partial widths: \begin{eqnarray} \Gamma_{N} & = & \frac{3\alpha}{16 s_{W}^{2}} a_{mix}\frac{M_{N}^{3}}{M_{W}^{2}}. \end{eqnarray} For comparison, the tree-level width of a very heavy Higgs is given by \begin{eqnarray} \Gamma_{H} & = & \frac{3\alpha}{32 s_{W}^{2}} \frac{M_{H}^{3}}{M_{W}^{2}}. \end{eqnarray} Both $\Gamma_{N}$ and $\Gamma_{H}$ rise swiftly with the particle mass, and at some critical point they become larger than the masses $M_{N}, M_{H}$ themselves, a clear indication of a perturbative breakdown. The tree-level formulae are then no longer appropriate. We can get a safe estimate of the critical mass by demanding that \begin{eqnarray} \Gamma_{N,H} & \leq & \frac{1}{2}M_{N,H}. \end{eqnarray} With $a_{mix} \leq ee_{mix} + \mu\mu_{mix} + \tau\tau_{mix} \doteq \tau\tau_{mix} = 0.033$, we obtain \begin{eqnarray} M_{N}^{crit} & \sim & 4 \; {\rm TeV}, \end{eqnarray} and for the Higgs we get the well-known bound of $M_{H}^{crit} \sim 1$ TeV \footnote{A similar value is obtained from considerations of perturbative unitarity violation.}. Note that $M_{N}^{crit}$ is not to be interpreted as an upper bound on the NHL mass.
Its sole purpose is to make us aware of the limitations of the perturbative treatment. \section{Results} \label{seresu} In this section we present our numerical results. Our FORTRAN program is written as a modification of the routines from the CERN electroweak library \cite{cernlib} used for the SM predictions of LEP I parameters. This applies in particular to the oblique parameter, ${\hat \Pi}_{Z}$, with most of the contributing diagrams (see Figs. \ref{pfd} - \ref{wfd}) being SM. The non-SM contributions were implemented as specified in Sec. \ref{seczprop}. The vertex factor $\delta{\hat \Gamma}_{ll}$ was derived independently of the CERN electroweak library, including the SM diagrams. As a 'standard set' of input parameters, we used $M_{Z} = 91.1884$ GeV, $\alpha ^{-1} = 137.036$, $A \equiv \big(\frac{\pi \alpha}{\sqrt{2} G_{\mu}}\big)^{1/2} = 37.281 \: {\rm GeV}$. Also part of this set are $M_{H} = 200$ GeV and $m_{t} = 176$ GeV. Below we state explicitly when values different from the standard set are used, e.g., $M_{H}, m_{t}$ in Figs. \ref{numw1}, \ref{nummw}a. To keep the discussion simple (without losing any qualitative features), we have reduced the free parameter space by assuming degenerate masses for the three NHL's. In this chapter, we have also imposed restrictions on the mixing parameters. We assume that $ee_{mix}$ and $\mu \mu_{mix}$ are very small relative to $\tau \tau_{mix}$. The model- and NHL-mass-independent limits quoted in Eq. \ref{limits1} are more stringent for $e$ and $\mu$ than for $\tau$. In addition, our assumption is also partially \footnote{Partially, because small $e\mu_{mix}$ does not necessarily imply small $ee_{mix}$ and $\mu \mu_{mix}$.} supported by the smallness of $e\mu_{mix}$ (see Eq. \ref{limits3}), as determined from $\mu \rightarrow e \gamma$, in combination with the inequality Eq. \ref{ineq}.
This neglect of $ee_{mix}$ and $\mu \mu_{mix}$ proves practically useful in that many of the muon decay loops (boxes and vertex corrections, but not the $W$ oblique correction) are eliminated as a result. The general case of arbitrary $ee_{mix}$, $\mu \mu_{mix}$ is considered in the next chapter. We present results mainly for the NHL mass range $0.5 \: {\rm TeV} \leq M_{N} \leq 5$ TeV, as motivated by the non-decoupling arguments given in Sec. \ref{appelc}. Given the relative complexity of the formulae involved, we looked for ways of checking them beyond the special care taken in their derivation. First, a logical step is to run our program with all mixing parameters set to zero, to see if it reduces to the SM as predicted by the CERN electroweak library. For the standard set of input parameters we get precise agreement, see Table~\ref{t3}. \begin{table}[bthp] \begin{center} \begin{tabular}{|l|c|c|c|} \hline $ $ & ${\hat \Pi}_{Z}^{SM}$ & $\delta{\hat \Gamma}_{ll}^{SM}$ & $\Gamma_{ll}^{SM}$ [MeV] \\ \hline CERN library & -4.29769 $\times 10^{-2}$ & -1.16976 $\times 10^{-3}$ & 84.0297 \\ Our program & -4.29769 $\times 10^{-2}$ & -1.16976 $\times 10^{-3}$ & 84.0297 \\ \hline \end{tabular} \end{center} \caption{SM limit of our model} \label{t3} \end{table} Second, we checked that infinities cancelled out in all renormalized quantities. Third, throughout this chapter we have tried to isolate the dominant contributions in the limit of large NHL mass and represent them by simple formulae. Here, we collect these approximations and see if we can understand the behaviour of the partial leptonic widths on their basis. Thus, collecting Eqs.
\ref{oneloop}, \ref{aprox1}, \ref{aprox3} and \ref{aprox4}, we get \begin{eqnarray} \label{aprox5} \Gamma_{ll} & = & \frac{\Gamma_{0} + \delta{\hat \Gamma_{ll}}}{1+{\hat \Pi}_{Z} (M_{Z}^{2})}(1+\delta_{QED}) \; \doteq \; \Gamma_{ll}^{appx} \nonumber \\ & = & \frac{\Gamma_{0} + \delta{\hat \Gamma_{Z}}^{SM} + \frac{2}{3}\alpha M_{Z} \big(v_{l}{\hat F}_{Vappx}^{NHL} + a_{l}{\hat F}_{Vappx}^{NHL} - v_{l} {\hat \Pi}_{\gamma Zappx}^{NHL}\big)} {1 + {\hat \Pi}_{Z}^{SM} + {\hat \Pi}_{Zappx}^{NHL}} \nonumber \\ & \times & (1+ \delta_{QED}), \end{eqnarray} where \begin{eqnarray} {\hat \Pi}_{Zappx}^{NHL} & = & \frac{\alpha}{\pi}\frac{c_{W}^{2}-s_{W}^{2}}{16 s_{W}^{4}}\frac{M_{N}^{2}}{M_{W}^{2}}k_{HH}, \nonumber \\ {\hat \Pi}_{\gamma Zappx}^{NHL} & = & - \frac{\alpha}{\pi}\frac{c_{W}}{16 s_{W}^{3}} \frac{M_{N}^{2}}{M_{W}^{2}}k_{HH}, \nonumber \\ {\hat F}_{Vappx}^{NHL} & = & \frac{\alpha}{4\pi} \frac{1}{8s_{W}^{3}c_{W}}\frac{M_{N}^{2}}{M_{W}^{2}} l_{2} . \end{eqnarray} The third, fourth and fifth terms in the numerator of $\Gamma_{ll}^{appx}$ become increasingly negative with rising $M_{N}$ and therefore make the partial width smaller. The denominator rises with $M_{N}$, which also makes the partial width smaller. On the other hand, the tree-level result, $\Gamma_{0}$, whose indirect dependence on NHL's via $M_{W}$ was discussed in Sec. \ref{secimp}, rises with $M_{N}$ (see Table \ref{t4}). As a result, depending on the particular values of the mixing parameters $l_{2}$ and $k_{HH}$, the partial width can either increase or decrease with the NHL mass. An example of the latter case can be found in Table \ref{t4}, which shows (for the standard set of input parameters, $\tau\tau_{mix} = 0.033$ and the above assumptions on $ee_{mix}, \mu\mu_{mix}$) the $Z \rightarrow \tau\tau$ partial width $\Gamma_{\tau\tau}$ for different NHL masses. Also shown are $\Gamma_{0}$ and $\Gamma_{\tau\tau}^{appx}$, the prediction of Eq. \ref{aprox5}.
\begin{table}[bthp] \begin{center} \begin{tabular}{|l|r|r|r|r|} \hline $M_{N}$ & 0.5 TeV & 1 TeV & 3 TeV & 5 TeV \\ \hline $\Gamma_{0}$ [MeV] & 81.43 & 81.45 & 81.82 & 82.60 \\ $\Gamma_{\tau\tau}$ [MeV] & 83.99 & 83.94 & 83.62 & 83.02 \\ $\Gamma_{\tau\tau}^{appx}$ [MeV] & 84.00 & 83.97 & 83.70 & 83.18 \\ \hline \end{tabular} \end{center} \caption{$\Gamma_{0}$ and $\Gamma_{\tau\tau}$ as a function of $M_{N}$ } \label{t4} \end{table} Once we can predict the partial leptonic widths, we can take advantage of that and study the universality breaking parameter defined as \cite{bernabeu2} \begin{eqnarray} U_{br} & = & \displaystyle \left| \frac{\Gamma_{\tau\tau} - \Gamma_{ee}} {\Gamma_{\tau\tau} + \Gamma_{ee}} \right| \; = \; \left| \frac{\delta{\hat \Gamma_{\tau\tau}} - \delta{\hat \Gamma_{ee}}} {2 \Gamma_{0} + \delta{\hat \Gamma_{\tau\tau}} + \delta{\hat \Gamma_{ee}}} \right|. \end{eqnarray} The new feature here is that the universality breaking parameter depends only on the vertex factor $\delta{\hat \Gamma_{ll}}$: the indirect influence of $M_{W}$ via $\Gamma_{0}$ is suppressed, and the direct oblique corrections cancel out in the ratio. Thus $U_{br}$ represents an independent quantity, complementary to the partial leptonic widths, for the study of the NHL's. The experimental limit on the universality breaking parameter can be derived from the limits on the partial leptonic widths. From Eq. \ref{pwidths} we get \begin{eqnarray} \frac{\Gamma_{\tau\tau}}{\Gamma_{ee}} & = & 0.9991 \pm 0.0040. \end{eqnarray} This ratio implies \begin{eqnarray} U_{br}^{exp} & \doteq & \displaystyle \left| \frac{\Gamma_{\tau\tau} - \Gamma_{ee}} {\Gamma_{ee} + \Gamma_{ee}} \right| \; = \; \left| \frac{\Gamma_{\tau\tau}}{ 2 \Gamma_{ee}} - \frac{1}{2} \right| \; = \; 0.00045 \pm 0.00200, \end{eqnarray} leading to the upper limit (at the $2 \sigma$ level) \begin{eqnarray} U_{br}^{exp} & < & 0.00445.
\end{eqnarray} The third experimental parameter sensitive to NHL's is the W mass, $M_{W}$. Under the assumptions made above it is sensitive only to the $W$ oblique corrections and thus complements the former two quantities. The $Z$ leptonic widths are given as a function of NHL mass in Figs. \ref{numw1} - \ref{numw3}. We mostly show $\Gamma_{\tau\tau}$, as the most NHL-sensitive mode under the stated assumptions on the mixing parameters. $\Gamma_{ee} = \Gamma_{\mu\mu}$ is also plotted in Figs. \ref{numw1}a and \ref{numw2}b. The dashed lines represent the $1\sigma$ variation about the current experimental results for the individual $Z$ leptonic widths $\Gamma_{\tau \tau}^{exp}$ or $\Gamma_{ee}^{exp}$, see Eq. \ref{pwidths}. In Fig. \ref{numw1}a, the widths $\Gamma_{ee}, \Gamma_{\tau\tau}$ are shown for three values of the top quark mass, $m_{t} = 158, 176$ and $194$ GeV. The mixing was fixed at $\tau\tau_{mix} = 0.033$. The striking difference between the two modes arises from the competition between $\Gamma_{0}$ on one side and $\delta{\hat \Gamma_{Z}}$ and ${\hat \Pi}_{Z}$ on the other (see the discussion above). For the $\tau\tau$ mode, the mixing parameter $l_{2} = k_{HH} = \tau\tau_{mix}^{2}$, while for the $ee$ mode, $k_{HH} = \tau\tau_{mix}^{2}$ and $l_{2} = 0$ (see Eq. \ref{elka}). A vanishing $l_{2}$ eliminates the third and fourth terms in the numerator in the second line of Eq. \ref{aprox5}, and the rising $\Gamma_{0}$ dominates as a result. In Fig. \ref{numw1}b, we again set $\tau\tau_{mix} = 0.033$ and show the $Z$ width to $\tau^{+}\tau^{-}$ for three values of the Higgs mass, $M_{H} = 100, 200, 800$ GeV. The dependence on the top quark mass and the weak dependence on the Higgs mass are inherited from the SM, as expected. In Fig. \ref{numw2} we vary the mixing from $\tau\tau_{mix} = 0.02$ to $\tau\tau_{mix} = 0.07$. Fig.~\ref{numw2}a shows $\Gamma_{\tau\tau}$, Fig. \ref{numw2}b shows $\Gamma_{ee}$.
The quadratic dependence on the mixing parameter, anticipated from Eq. \ref{aprox5}, is confirmed. Fig. \ref{numw3} is the only figure where we look at a 'lower energy' scale. It is a continuation of Fig. \ref{numw2}a to the lower NHL mass range. As expected, the partial width is well within the $1 \sigma$ band and the NHL effects are negligible. The universality breaking factor $U_{br}$ is plotted in Fig. \ref{numubr} as a function of $M_{N}$, with the mixing parameter varied about $\tau\tau_{mix} = 0.033$. The $1\sigma$ experimental limit on $U_{br}$ is indicated as the dashed line. Finally, we present the NHL mass dependence of the $W$ mass in Figs. \ref{nummw}a,b. The top quark mass is varied about $m_{t} = 176$ GeV in Fig. \ref{nummw}a, while the mixing is held constant at $\tau\tau_{mix} = 0.033$. In Fig. \ref{nummw}b, the mixing is varied about $\tau\tau_{mix} = 0.033$ for a fixed top quark mass. \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,6.5) \put(.1,+0.4){\mbox{\epsfxsize=5.0in\epsffile{results1.eps}}} \end{picture} \end{center} \caption{Z leptonic width as a function of $M_{N}$ in the interval $0.5$ TeV $\leq M_{N} \leq 5$~TeV for (a) fixed mixing parameter $\tau\tau_{mix}$, fixed Higgs mass and different values of $m_{t}$; both $Z \rightarrow \tau\tau$ and $Z \rightarrow ee$ modes shown, (b) fixed mixing parameter $\tau\tau_{mix}$, fixed $m_{t}$ and different values of the Higgs mass $M_{H}$; only $Z \rightarrow \tau\tau$ mode shown. 
The dashed lines represent the $1 \sigma$ band about the current experimental value $\Gamma_{\tau\tau}^{exp} = 83.85 \pm 0.29$~MeV.} \label{numw1} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,6.5) \put(.1,+0.4){\mbox{\epsfxsize=5.0in\epsffile{results2.eps}}} \end{picture} \end{center} \caption{Z leptonic width as a function of $M_{N}$ in the interval $0.5$ TeV $\leq M_{N} \leq 5$~TeV for fixed $m_{t}$, fixed Higgs mass and different values of the mixing parameter, (a) $Z \rightarrow \tau\tau$ mode, (b) $Z \rightarrow ee$ mode. The dashed lines represent the $1 \sigma$ band about the current experimental value (a) $\Gamma_{\tau\tau}^{exp} = 83.85 \pm 0.29$ MeV, (b) $\Gamma_{ee}^{exp} = 83.92 \pm 0.17$~MeV.} \label{numw2} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3) \put(.1,+0.325){\mbox{\epsfxsize=5.0in\epsffile{results3.eps}}} \end{picture} \end{center} \caption{$Z \rightarrow \tau\tau$ width as a function of $M_{N}$ in the interval $50$ GeV $\leq M_{N} \leq 500$~GeV for a fixed top mass $m_{t} = 176$ GeV and different values of the mixing parameter $\tau\tau_{mix}$. The dashed lines represent the $1 \sigma$ band about the current experimental value $\Gamma_{\tau\tau}^{exp} = 83.85 \pm 0.29$ MeV.} \label{numw3} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,7) \put(.1,+0.325){\mbox{\epsfxsize=5.0in\epsffile{results4.eps}}} \end{picture} \end{center} \caption{Universality breaking parameter $U_{br}$ as a function of $M_{N}$ for fixed top quark mass ($m_{t}=176$ GeV) and different values of the mixing parameter.
The dashed line represents the $1\sigma$ experimental limit ($< 0.00245$).} \label{numubr} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,6.5) \put(.1,+0.4){\mbox{\epsfxsize=5.0in\epsffile{results5.eps}}} \end{picture} \end{center} \caption{W mass as a function of $M_{N}$ for (a) fixed mixing parameter ($\tau\tau_{mix}=0.033$) and different values of $m_{t}$, (b) fixed top quark mass ($m_{t}=176$ GeV) and different values of the mixing parameter. The dashed lines represent the $1\sigma$ band about the current experimental value $M_{W} = 80.410 \pm 0.180$ GeV.} \label{nummw} \end{figure} \section{Discussion and Conclusions}\label{conc} Our primary consideration here has been the inclusion of neutral heavy leptons in the calculation of the flavour-conserving $Z$ decays to charged leptons at the one-loop level. The dependence of the $Z$ leptonic widths on the NHL mass, $M_N$, and on the mixing parameter $\tau \tau_{mix}$ was given in Figs. \ref{numw1} - \ref{numw3}. We see that for the experimentally allowed upper limit $\tau \tau_{mix} = 0.033$, and assuming $ee_{mix} = \mu\mu_{mix} = 0, m_{t} = 176$~GeV, the $Z$ decay width to $\tau$ leptons becomes sensitive to NHL masses of about $4.3$~TeV at the $2\sigma$ level. Curiously, this is an upper limit, \begin{equation} \label{upper1} M_{N} \leq 4.3 \; {\rm TeV}, \end{equation} rather than a lower one. The cause is the nondecoupling of the NHL's. Apart from this comparison of each leptonic width prediction with experiment, we can also exploit the lepton flavour universality violation which takes place in the model. The universality breaking ratio, $U_{br}$ (see Fig. \ref{numubr}), leads to a still better upper limit, \begin{equation} \label{upper2} M_{N} \leq 3.8 \; {\rm TeV}, \end{equation} at the $2\sigma$ level.
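A rough scaling illustration (not a calculation performed in the thesis): since the dominant NHL corrections grow as $(\tau\tau_{mix})^{2} M_{N}^{2}/M_{W}^{2}$, a measurement of fixed precision probes masses up to $M_{N}^{max} \propto 1/\tau\tau_{mix}$. The short Python sketch below encodes this rule, anchored to the $4.3$~TeV reach quoted above; the function name and the exact linear rescaling are illustrative assumptions.

```python
# Illustrative sketch only: the dominant NHL correction scales as
# mix^2 * M_N^2 / M_W^2, so a fixed experimental precision probes
# NHL masses up to M_N_max proportional to 1/mix.  Anchored to the
# 2-sigma reach quoted above (4.3 TeV at tau tau_mix = 0.033).

REF_MIX = 0.033        # current upper limit on tau tau_mix
REF_REACH_TEV = 4.3    # 2-sigma mass reach from Gamma(Z -> tau tau)

def mass_reach_tev(mix):
    """Rough 2-sigma NHL mass reach for a flavour-conserving mixing `mix`."""
    return REF_REACH_TEV * REF_MIX / mix

print(mass_reach_tev(0.033))    # reproduces the anchor point, 4.3 TeV
print(mass_reach_tev(0.0165))   # halving the mixing doubles the reach
```

Halving the mixing doubles the reach, which is why the limits above degrade quickly if the mixings turn out to lie below their current bounds.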
The importance of $U_{br}$ is underlined by the fact that it is sensitive only to the vertex parameter $\delta{\hat \Gamma_{ll}}$, unlike $\Gamma_{ll}$, which besides the vertex parameter also depends on the Z oblique corrections and, indirectly via the $W$ boson mass, on the W oblique corrections. Thus the universality breaking complements the Z leptonic partial widths as far as sensitivity to NHL's is concerned. The $W$ boson mass also exhibits some sensitivity to NHL parameters, arising from the mixing factor modifications and the presence of one-loop diagrams containing NHL's, as described in Sec. \ref{secimp}. From Figs. \ref{nummw}a,b we see that the sensitivity of the $W$ mass, currently measured as $M_{W} = 80.410 \pm 0.180$ GeV \cite{mw1}, to NHL's depends to a large degree on the top mass. The experimental error on $M_W$ might be expected to come down to about $0.05$ GeV once LEP II measures $W$ pair production \cite{mw2}. To conclude this chapter, the Z boson flavour-conserving leptonic widths, along with the lepton universality breaking parameter and the W mass, represent some of the best quantities sensitive to the NHL mass. This applies especially to NHL's that are far too heavy to be produced directly at present colliders. The only way to probe their mass in this case is via their loop contributions. Much of the previous effort in NHL studies has so far been concentrated on flavour-violating processes, either at very low energies ($\mu, \tau$ decays, see Eq. \ref{fvdecay3}), or at $Z$-peak energy ($Z$ decays, see Eq. \ref{fvz3} and Sec. \ref{fvzb1}). We feel there are at least two reasons which give the processes studied in this thesis a distinct advantage over the flavour-violating decays. First, we were able to reduce the allowed region in the parameter space of mixings and NHL mass here (see Eqs. \ref{upper1}, \ref{upper2}) using the {\it current} experimental data. The only flavour-violating process that competes with the limits of Eqs.
\ref{upper1}, \linebreak \ref{upper2} is the decay $\mu \rightarrow e e e$, which sets an upper limit on the NHL mass (see Eqs. \ref{mueee4}, \ref{mnlimit4}, assuming $ee_{mix} = 0.0071, |e\mu_{mix}| = 0.00024$) at the $2\sigma$ level \begin{equation} M_{N} \leq 3 \;{\rm TeV}. \end{equation} The flavour-violating decay rates for $\tau$ and $Z$ are below the current experimental sensitivity (see Secs. \ref{numeres}, \ref{fvple}). Second, the inequality Eq. \ref{ineq} can further suppress the flavour-violating processes against the flavour-conserving ones via the 'conspiracy of the phases' in the sum of complex terms making up the flavour-violating parameters $ll^{'}_{mix}$. In this case, flavour-conserving processes are the only way to probe very heavy NHL's. \chapter{Muon decay and W mass} \label{chap7} Here we generalize our analysis from the previous chapter by considering the case of arbitrary mixings $ee_{mix}, \mu\mu_{mix}$ and $\tau\tau_{mix}$. In this case non-SM box, vertex and self-energy diagrams contributing to the muon decay (see Figs. \ref{boxmuon} - \ref{vermuon}) may become important for the calculation of $M_{W}$. This is in contrast to the previous chapter where, as a result of the assumption $ee_{mix} = \mu\mu_{mix} = 0$, only oblique corrections (corrections to the W propagator) had to be considered (see Sec. \ref{secimp}). Still, we assume in this chapter $e\mu_{mix} = 0$ and $\mu\tau_{mix} = 0$. The last parameter, $\mu\tau_{mix}$, does not contribute to the muon decay. The neglect of $e\mu_{mix}$ is supported by experiment (see Eq. \ref{limits3}), and the neglect of $\mu\tau_{mix}$ is motivated by our intention to keep the discussion as straightforward as possible. \section{Box diagrams} \label{secbox} The set of box diagrams contributing to the muon decay is depicted in Fig. \ref{boxmuon}.
\begin{figure}[hbtp] \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3) \put(.3,+0.325){\mbox{\epsfxsize=5.0in\epsffile{ch61.eps}}} \end{picture} \end{center} \caption{Box diagrams for muon decay} \label{boxmuon} \end{figure} As an example, we will calculate the diagrams of Fig. \ref{boxmuon}e. These two were chosen since, as we will show, they exhibit quadratic nondecoupling. The amplitude for the diagram with the Higgs boson $H$ is given by (we sum over NHL's $N_{a}, N_{b}$ with $M_{N_{a}} = M_{N_{b}} = M_{N}$ and we neglect external momenta in the internal propagators) \begin{eqnarray} {\cal M}_{\phi N H N} & = & \sum_{a,b} \int \frac{d^{4} k}{(2\pi)^{4}} \overline{u_{\nu_{\mu}}}\frac{-i g_{2}}{2}\frac{M_{N}}{M_{W}}\big(K_{L}^{\dagger}K_{H}\big)_{ia} \frac{1+\gamma_{5}}{2}\frac{i}{\not k - M_{N}}\frac{ig_{2}}{2\sqrt{2}} \frac{M_{N}}{M_{W}}\big(K_{H}^{\dagger}\big)_{a\mu} \nonumber \\ & \times & (1-\gamma_{5})u_{\mu}\;\;\overline{v_{e}}\frac{ig_{2}}{2\sqrt{2}} \frac{M_{N}}{M_{W}}\big(K_{H}\big)_{eb}(1+\gamma_{5}) \frac{i}{\not k - M_{N}}\frac{-i g_{2}}{2}\frac{M_{N}}{M_{W}}\big(K_{H}^{\dagger}K_{L}\big)_{bj} \nonumber \\ & \times & \frac{1-\gamma_{5}}{2}v_{\nu_{e}}\frac{i}{k^{2}-M_{W}^{2}} \frac{i}{k^{2}-M_{H}^{2}} \nonumber \\ \nonumber \\ & = & \frac{g_{2}^{4}}{128}\frac{M_{N}^{4}}{M_{W}^{4}}\sum_{a,b} \big(K_{L}^{\dagger}K_{H}\big)_{ia}\big(K_{H}^{\dagger}\big)_{a\mu} \big(K_{H}\big)_{eb}\big(K_{H}^{\dagger}K_{L}\big)_{bj} \int \frac{d^{4} k}{(2\pi)^{4}} \nonumber \\ & \times & \frac{\big[\overline{u_{\nu_{\mu}}}(1+\gamma_{5})(\not k + M_{N})(1-\gamma_{5}) u_{\mu}\big]\big[\overline{v_{e}}(1+\gamma_{5})(\not k + M_{N})(1-\gamma_{5}) v_{\nu_{e}}\big]} {\big(k^{2}-M_{N}^{2}\big)^{2} \big(k^{2}-M_{W}^{2}\big) \big(k^{2}-M_{H}^{2}\big)} \nonumber \\ \nonumber \\ & = & \frac{g_{2}^{4}}{32}\frac{M_{N}^{4}}{M_{W}^{4}}k_{mix}\int \frac{d^{4} k}{(2\pi)^{4}} \frac{\big[\overline{u_{\nu_{\mu}}}(1+\gamma_{5})\not k
u_{\mu}\big]\big[\overline{v_{e}}(1+\gamma_{5}) \not k v_{\nu_{e}}\big]} {\big(k^{2}-M_{N}^{2}\big)^{2} \big(k^{2}-M_{W}^{2}\big) \big(k^{2}-M_{H}^{2}\big)}, \end{eqnarray} where \begin{eqnarray} k_{mix} \equiv \big(K_{L}^{\dagger}K_{H}\big)_{ia}\big(K_{H}^{\dagger}\big)_{a\mu} \big(K_{H}\big)_{eb}\big(K_{H}^{\dagger}K_{L}\big)_{bj} & = & {\big(K_{L}^{\dagger}\big)_{i\mu} \big(K_{L}\big)_{ej}}ee_{mix}\mu\mu_{mix}.\;\;\; \end{eqnarray} Using $\int k^{\epsilon}k^{\nu} ... = \frac{1}{4} g^{\epsilon \nu} \int k^{2} ...\;\;$ we get \begin{eqnarray} {\cal M}_{\phi N H N} & = & \frac{g_{2}^{4}}{32}\frac{M_{N}^{4}}{M_{W}^{4}}k_{mix} \big[\overline{u_{\nu_{\mu}}}(1+\gamma_{5})\gamma_{\epsilon} u_{\mu}\big]\big[\overline{v_{e}}(1+\gamma_{5}) \gamma_{\nu} v_{\nu_{e}}\big] \nonumber \\ & \times & \frac{1}{4}g^{\epsilon \nu}\frac{i}{(4 \pi)^{2}}{\cal I}_{2} (M_{H}), \end{eqnarray} where ${\cal I}_{2}(m)$ is the integral \begin{eqnarray} {\cal I}_{2}(m) & = & \frac{{(4\pi)}^{2}}{i}\int \frac{d^{4}k}{{(2\pi)}^{4}} \frac{k^{2}}{(k^{2}-M_{N}^{2})^{2}(k^{2}-M_{W}^{2})(k^{2}-m^{2})}. \end{eqnarray} In the next step we use the tree-level amplitude \begin{eqnarray} {\cal M}_{tree} & = & - \frac{i g_{2}^{2}}{8 M_{W}^{2}} \big[\overline{u_{\nu_{\mu}}} (1+\gamma_{5})\gamma_{\alpha}u_{\mu}\big] \big[\overline{v_{e}}(1+\gamma_{5}) \gamma^{\alpha} v_{\nu_{e}}\big] {\big(K_{L}^{\dagger}\big)_{i\mu} \big(K_{L}\big)_{ej}}, \end{eqnarray} and $g_{2} = e/s_{W}$ to obtain \begin{eqnarray} {\cal M}_{\phi N H N} & = & - \frac{\alpha}{4\pi}ee_{mix}\mu\mu_{mix}\frac{1}{16s_{W}^{2}} \frac{M_{N}^{4}}{M_{W}^{2}}{\cal I}_{2}(M_{H}){\cal M}_{tree}. \end{eqnarray} The integral ${\cal I}_{2}(M_{H})$ is calculated in Appendix \ref{Decko}.
The result is \begin{eqnarray} {\cal I}_{2}(M_{H}) & = & \frac{1}{M_{H}^{2}-M_{W}^{2}} \biggl\{ \frac{1}{1-\frac{M_{W}^{2}}{M_{N}^{2}}} + \frac{\frac{M_{W}^{4}}{M_{N}^{4}} \ln \frac{M_{W}^{2}}{M_{N}^{2}}} {{(1-\frac{M_{W}^{2}}{M_{N}^{2}})}^{2}} -\frac{1}{1-\frac{M_{H}^{2}}{M_{N}^{2}}} \nonumber \\ & - & \frac{\frac{M_{H}^{4}}{M_{N}^{4}} \ln \frac{M_{H}^{2}}{M_{N}^{2}}} {{(1-\frac{M_{H}^{2}}{M_{N}^{2}})}^{2}} \biggr\}. \end{eqnarray} The amplitude ${\cal M}_{\phi N \chi N}$ is obtained from ${\cal M}_{\phi N H N}$ by replacing $M_{H}$ with $M_{Z}$. The total contribution of the box diagrams (Figs. \ref{boxmuon} a-g) is \begin{eqnarray} {\cal M}_{box} & = & {\cal M}_{ZeW\mu} +{\cal M}_{W \nu Z\nu} + {\cal M}_{W\nu Z N} + {\cal M}_{W N Z\nu} + {\cal M}_{W N Z N} \nonumber \\ & + & {\cal M}_{\phi N Z N} + {\cal M}_{W N H N} + {\cal M}_{W N \chi N} + {\cal M}_{\phi N H N} + {\cal M}_{\phi N \chi N} \nonumber \\ & + & {\cal M}_{Z \nu W \mu} + {\cal M}_{Z N W \mu} + {\cal M}_{W e Z\nu} + {\cal M}_{W e Z N} \nonumber \\ \nonumber \\ & = & {\cal M}_{tree}\frac{\alpha}{4 \pi}\Biggl\{ \frac{-1}{4s_{W}^{2}c_{W}^{2}}M_{W}^{2}\biggl[ 4{(-\frac{1}{2}+s_{W}^{2})}^{2}{\cal I}_{0} + {\cal I}_{0}(1-{\mu \mu}_{mix}) (1-ee_{mix}) \Biggr. \biggr. \nonumber \\ & + & {\cal I}_{1}(M_{Z})(1-{\mu \mu}_{mix})ee_{mix} +{\cal I}_{1}(M_{Z}){\mu \mu}_{mix}(1-ee_{mix}) +{\cal I}_{2}(M_{Z}) ee_{mix} \nonumber \biggr. \\ & \times & \biggl. {\mu \mu}_{mix} \biggr] + \frac{1}{4 s_{W}^{2}}M_{N}^{4}\biggl[ \frac{1}{c_{W}^{2}} {\cal I}_{3}(M_{Z}) + {\cal I}_{3}(M_{H}) + {\cal I}_{3}(M_{Z}) - \frac{1}{4 M_{W}^{2}} {\cal I}_{2}(M_{H})\biggr. 
\nonumber \\ & - & \frac{1}{4 M_{W}^{2}}{\cal I}_{2}(M_{Z})\biggr]ee_{mix} {\mu\mu}_{mix} +\frac{2(-\frac{1}{2}+s_{W}^{2})}{s_{W}^{2} c_{W}^{2}}M_{W}^{2} \biggl[ {\cal I}_{0}(1-ee_{mix}) \nonumber \\ & + & {\cal I}_{1}(M_{Z})ee_{mix} + {\cal I}_{0}(1-{\mu \mu}_{mix}) + {\cal I}_{1}(M_{Z}){\mu \mu}_{mix} \biggr] \Biggr\}, \end{eqnarray} where the integrals ${\cal I}_{0}, {\cal I}_{1}(m), {\cal I}_{3}(m)$ are \begin{eqnarray} {\cal I}_{0} & = & \frac{{(4\pi)}^{2}}{i}\int \frac{d^{4}k}{{(2\pi)}^{4}} \frac{1}{k^{2}(k^{2}-M_{W}^{2})(k^{2}-M_{Z}^{2})} \; = \; \frac{1}{M_{Z}^{2}-M_{W}^{2}}\ln \frac{M_{W}^{2}}{M_{Z}^{2}}, \\ & & \nonumber \\ {\cal I}_{1}(m) & = & \frac{{(4\pi)}^{2}}{i}\int \frac{d^{4}k}{{(2\pi)}^{4}} \frac{1}{(k^{2}-M_{N}^{2})(k^{2}-M_{W}^{2})(k^{2}-m^{2})} \nonumber \\ & & \nonumber \\ & = & \frac{1}{m^{2}-M_{W}^{2}} \biggl\{\ln\frac{M_{W}^{2}}{m^{2}} + \frac{M_{N}^{2}}{M_{W}^{2}-M_{N}^{2}} \ln \frac{M_{W}^{2}}{M_{N}^{2}} - \frac{M_{N}^{2}}{m^{2}-M_{N}^{2}} \ln \frac{m^{2}}{M_{N}^{2}}\biggr\},\;\;\;\;\; \\ & & \nonumber \\ {\cal I}_{3}(m) & = & \frac{{(4\pi)}^{2}}{i}\int \frac{d^{4}k}{{(2\pi)}^{4}} \frac{1}{(k^{2}-M_{N}^{2})^{2}(k^{2}-M_{W}^{2})(k^{2}-m^{2})} \nonumber \\ & & \nonumber \\ & = & \frac{1}{m^{2}-M_{W}^{2}} \biggl\{ \frac{1}{M_{N}^{2}-M_{W}^{2}} + \frac{M_{W}^{2} \ln \frac{M_{W}^{2}}{M_{N}^{2}}}{{(M_{N}^{2}-M_{W}^{2})}^{2}} -\frac{1}{M_{N}^{2}-m^{2}} \nonumber \\ & - & \frac{m^{2} \ln \frac{m^{2}}{M_{N}^{2}}}{{(M_{N}^{2}-m^{2})}^{2}} \biggr\}. \end{eqnarray} For the calculation of these integrals see Appendix \ref{Decko}. 
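The closed forms above can be spot-checked numerically. The Python sketch below (an illustration, not part of the original analysis) Wick-rotates ${\cal I}_{0}$ and ${\cal I}_{2}(m)$ into one-dimensional Euclidean integrals, evaluates them with Simpson's rule, and compares with the closed-form expressions; masses are in GeV and the choice $M_{N} = 1$~TeV is arbitrary.

```python
import math

MW2 = 80.41**2    # M_W^2 (GeV^2), value used in the text
MZ2 = 91.187**2   # M_Z^2 (GeV^2)
MN2 = 1000.0**2   # M_N = 1 TeV, an illustrative choice

def simpson01(g, n=4000):
    """Composite Simpson rule on [0, 1] (n even)."""
    h = 1.0 / n
    s = g(0.0) + g(1.0)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * g(i * h)
    return s * h / 3.0

def i0_num():
    # After Wick rotation: I_0 = -int_0^inf du / ((u+MW2)(u+MZ2)),
    # mapped to t in [0,1) via u = MW2*t/(1-t), du = (u+MW2)^2/MW2 dt.
    def g(t):
        t = min(t, 1.0 - 1e-12)      # avoid u = infinity at the endpoint
        u = MW2 * t / (1.0 - t)
        return -(u + MW2) / (MW2 * (u + MZ2))
    return simpson01(g)

def i2_num(m2):
    # After Wick rotation: I_2 = -int_0^inf u^2 du / ((u+MN2)^2 (u+MW2)(u+m2)),
    # mapped via u = MN2*t/(1-t); the (u+MN2)^2 factors cancel in the Jacobian.
    def g(t):
        t = min(t, 1.0 - 1e-12)
        u = MN2 * t / (1.0 - t)
        return -u * u / (MN2 * (u + MW2) * (u + m2))
    return simpson01(g)

def i2_exact(m2):
    # Closed form of I_2(m) quoted in the text.
    a, b = MW2 / MN2, m2 / MN2
    return (1.0 / (m2 - MW2)) * (1.0 / (1.0 - a)
                                 + a * a * math.log(a) / (1.0 - a)**2
                                 - 1.0 / (1.0 - b)
                                 - b * b * math.log(b) / (1.0 - b)**2)

i0_exact = math.log(MW2 / MZ2) / (MZ2 - MW2)
print(i0_num() / i0_exact)           # close to 1
print(i2_num(MZ2) / i2_exact(MZ2))   # close to 1
```

Both ratios come out equal to one to high accuracy, confirming the closed-form results term by term.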
In the limit of $M_{N} \gg M_{W}$, the leading contribution to ${\cal I}_{2}(m)$ is very simple, \begin{eqnarray} {\cal I}_{2}^{appx}(m) & = & - \frac{1}{M_{N}^{2}}, \end{eqnarray} implying the $M_{N}^{2}$ dependence of ${\cal M}_{\phi N H N}$ and ${\cal M}_{\phi N \chi N}$, \begin{eqnarray} \label{pnhn} {\cal M}_{\phi N H N}^{appx} & = & {\cal M}_{tree} \frac{\alpha}{4 \pi} \frac{1}{16s_{W}^{2}}\frac{M_{N}^{2}}{M_{W}^{2}} ee_{mix} \mu\mu_{mix}. \end{eqnarray} The leading contribution to ${\cal I}_{3}(m)$ goes as $\sim 1/M_{N}^{4}$ and to ${\cal I}_{1}(m)$ as $\sim 1/M_{N}^{2}$; therefore no other box depends quadratically on the NHL mass. The last box diagram, Fig. \ref{boxmuon}h, can be written as \cite{key6}: \begin{eqnarray} {\cal M}_{\gamma e W \mu} & = & {\cal M}_{tree} \frac{\alpha}{4 \pi} \Big(\ln \frac{M_{W}}{m_{e}} + \ln \frac{M_{W}}{m_{\mu}} - 2 \ln \frac{m_{e}}{\lambda} - 2 \ln \frac{m_{\mu}}{\lambda} + \frac{9}{2} \Big) + ... \;\;\; \end{eqnarray} The ellipses denote additional terms discussed in Ref. \cite{key6}. \section{Neutrino self-energy and its renormalization} \label{secneus} One half of the neutrino self-energy diagrams contributing to muon decay is shown in Fig. \ref{selfmuon}. The corresponding self-energy is denoted as $\Sigma^{\nu_{\mu}}$. The other half consists of the same loops sitting on the bottom neutrino leg, with the corresponding self-energy $\Sigma^{\nu_{e}}$. In all these diagrams, we sum over the internal massless neutrinos $\nu_{k}, k=1,2,3$. In principle, the graphs with $\nu_{k}$ replaced by $N_{a}$ are also present; however, they are suppressed by the large mass $M_{N}$.
\begin{figure}[hbtp] \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,2) \put(.4,+0.325){\mbox{\epsfxsize=5.0in\epsffile{ch63.eps}}} \end{picture} \end{center} \caption{Neutrino self-energy diagrams for muon decay} \label{selfmuon} \end{figure} Without derivation, we present results for the unrenormalized neutrino self-energy $\Sigma^{\nu_{l}}$ ($l = e, \mu$). It has the form \begin{equation} \Sigma^{\nu_{l}} = \frac{1}{2}\Sigma_{L}^{\nu_{l}} \not p (1-\gamma_{5}), \end{equation} where $\Sigma_{L}^{\nu_{l}}$ receives the following contributions from the diagrams of Fig. \ref{selfmuon}: \begin{eqnarray} \label{neuself} \Sigma_{L}^{\nu_{l}} & = & \Sigma_{L}^{H}(p)+\Sigma_{L}^{\chi}(p)+\Sigma_{L}^{Z,N}(p) +\Sigma_{L}^{Z,\nu}(p)+ \Sigma_{L}^{W}(p) \nonumber \\ & = & \frac{\alpha}{2\pi}(1-ll_{mix})\Biggl\{ \frac{1}{8s_{W}^{2}} ll_{mix} \frac{M_{N}^{2}}{M_{W}^{2}} \biggl[ \frac{1}{2}\Delta_{\mu}+ B_{0}^{fin}(p;M_{H},M_{N})+B_{1}^{fin}(p;M_{H},M_{N})\biggr]\nonumber \\ & + & \frac{1}{8s_{W}^{2}}ll_{mix} \frac{M_{N}^{2}}{M_{W}^{2}} \biggl[ \frac{1}{2}\Delta_{\mu}+ B_{0}^{fin}(p;M_{Z},M_{N})+B_{1}^{fin}(p;M_{Z},M_{N})\biggr]\nonumber \\ & + & \frac{1}{4s_{W}^{2}c_{W}^{2}}ll_{mix} \biggl[ \frac{1}{2}\Delta_{\mu}-\frac{1}{2}+ B_{0}^{fin}(p;M_{Z},M_{N})+B_{1}^{fin}(p;M_{Z},M_{N})\biggr]\nonumber \\ & + & \frac{1}{4s_{W}^{2}c_{W}^{2}}(1-ll_{mix}) \biggl[ \frac{1}{2}\Delta_{\mu}-\frac{1}{2}+ B_{0}^{fin}(p;M_{Z},0)+B_{1}^{fin}(p;M_{Z},0)\biggr] \nonumber \\ & + & \frac{1}{2s_{W}^{2}} \biggl[ \frac{1}{2}\Delta_{\mu}-\frac{1}{2}+ B_{0}^{fin}(p;M_{W},m_{l}\rightarrow 0)+B_{1}^{fin}(p;M_{W},m_{l}\rightarrow 0) \biggr]\Biggr\}. \end{eqnarray} Here $s = p^{2} = 0 \;\; \ll \;\; M_{H}^{2},M_{Z}^{2},M_{W}^{2},M_{N}^{2}$, therefore Eqs. \ref{males} apply for $B_{0}^{fin}, B_{1}^{fin}$. The amplitude for the diagrams of Fig. 
\ref{selfmuon} in terms of $\Sigma_{L}^{\nu_{l}}$ can be shown to be equal to \begin{equation} {\cal M}_{self} = - {\cal M}_{tree}\frac{\Sigma_{L}^{\nu_{l}}}{2}, \end{equation} where the factor of one half arises because we deal with the external wave function rather than the neutrino propagator. \subsection{Renormalization of the neutrino self-energies} Let us now investigate the question of the renormalization of the diagrams of Fig.~\ref{selfmuon}. This is the only case in this work where the counterterms are modified from their SM form \footnote{So far we have used the SM form of the counterterms, see Appendix \ref{recons}. The actual value of the counterterms was, of course, different from the SM.}. The problem is how to renormalize a part of a theory where the interaction eigenstates are different from the mass eigenstates. Curiously, this also happens in the SM quark sector. The difference is that in the SM the problem is circumvented by arguing that the off-diagonal quark mixings are too small (with the required accuracy in mind) to have any effect in the loops, and the renormalization procedure is effectively simplified to one where the mass eigenstates are also the flavour eigenstates. In our model, we cannot neglect the off-diagonal mixings ($ll_{mix}$), since they (in combination with TeV NHL masses) lead to the dominant terms in the predicted signals \footnote{Just a few days before the submission of this thesis, an interesting paper appeared on the preprint bulletin board \cite{kniehl}, where the arguments of this paragraph are also made. The authors then continue to develop the first formal framework for the renormalization of theories with nonnegligible mixings between mass and interaction eigenstates. Their treatment is more general than ours below.}.
To derive the required counterterm, we start with the counterterm Lagrangian, which has the same form in both SM and our model: \begin{eqnarray} \label{ourcounter} i \;\delta Z_{L}^{e} \; \overline{\nu_{e}} {\not \partial} \nu_{e} + i\; \delta Z_{L}^{\mu} \overline{\nu_{\mu}} {\not \partial} \nu_{\mu} + i \;\delta Z_{L}^{\tau} \overline{\nu_{\tau}} \not {\partial} \nu_{\tau}, \end{eqnarray} where $\delta Z_{L}^{l}$ is the sum of $\delta Z_{V}^{l}$ and $\delta Z_{A}^{l}$ renormalization constants which are given in Eq. \ref{rconstants}. Weak eigenstates $\nu_{l}$ can be expressed in our model in terms of mass eigenstates $\nu_{i}, N_{a}$ (see Eq. \ref{alter}) \begin{eqnarray} \nu_{l} & = & \sum_{i}\big(K_{L}\big)_{li} \nu_{i} + \sum_{a} \big(K_{H}\big)_{la}N_{a} \; = \; \sum_{n_{\alpha}=\nu_{1},...,N_{6}} K_{ln_{\alpha}}n_{\alpha}, \nonumber \\ \overline{\nu_{l}} & = & \overline{n_{\alpha}}K^{\dagger}_{n_{\alpha} l}. \end{eqnarray} This gives us for the product $\overline{\nu_{l}}\nu_{l}$ \begin{eqnarray} \overline{\nu_{l}}\nu_{l} & = & \overline{n}K^{\dagger}K n \; = \; \sum_{k,i=1,2,3}\overline{\nu_{i}}\big(K_{L}^{\dagger}\big)_{il} \big(K_{L}\big)_{lk}\nu_{k} + ... (\overline{\nu_{i}}N, \overline{N}\nu_{k}, \overline{N}N), \end{eqnarray} and Eq. \ref{ourcounter} thus contributes the following terms as the massless neutrino counterterm Lagrangians: \begin{equation} \sum_{k,i=1,2,3}\Big\{\delta Z_{L}^{e}\big(K_{L}^{\dagger}\big)_{ie}\big(K_{L}\big)_{ek} + \delta Z_{L}^{\mu}\big(K_{L}^{\dagger}\big)_{i\mu}\big(K_{L}\big)_{\mu k} + \delta Z_{L}^{\tau}\big(K_{L}^{\dagger}\big)_{i\tau}\big(K_{L}\big)_{\tau k} \Big\}\overline{\nu_{i}} {\not \partial} \nu_{k}. \end{equation} In our case, however, we sum over internal $\nu_{k}$ but not over external $\nu_{i}$. The graphic representation of the relevant counterterm (embedded in muon decay) is in Fig. \ref{countl}. 
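The contraction $\sum_{k}\big(K_{L}\big)_{lk}\big(K_{L}^{\dagger}\big)_{k\mu} = \delta_{l\mu} - l\mu_{mix}$ used in the next step can be illustrated with a toy numerical check (not from the thesis). It assumes, as in the model of Chapter 3, that the $3\times 6$ matrix $K = (K_{L}, K_{H})$ consists of three rows of a unitary $6\times 6$ matrix and that $ll^{'}_{mix} = \big(K_{H}K_{H}^{\dagger}\big)_{ll^{'}}$; a real orthogonal matrix with arbitrary angles is used for simplicity.

```python
import math

# Toy check (illustration only) of the contraction
#   sum_k (K_L)_{lk} (K_L^dagger)_{k mu} = delta_{l mu} - l mu_mix,
# assuming K = (K_L, K_H) is made of three rows of a unitary 6x6 matrix
# and ll'_mix = (K_H K_H^dagger)_{ll'}.  The rotation angles are arbitrary.

def givens(n, i, j, theta):
    """n x n rotation in the (i, j) plane."""
    g = [[float(r == c) for c in range(n)] for r in range(n)]
    g[i][i] = g[j][j] = math.cos(theta)
    g[i][j] = math.sin(theta)
    g[j][i] = -math.sin(theta)
    return g

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

u = [[float(r == c) for c in range(6)] for r in range(6)]
for (i, j, th) in [(0, 3, 0.10), (1, 4, 0.20), (2, 5, 0.05),
                   (0, 4, 0.07), (1, 5, 0.12), (2, 3, 0.30)]:
    u = matmul(u, givens(6, i, j, th))

kl = [row[:3] for row in u[:3]]   # light (massless neutrino) block
kh = [row[3:] for row in u[:3]]   # heavy (NHL) block

def mix(l, m):                    # l m_mix = (K_H K_H^dagger)_{lm}
    return sum(kh[l][a] * kh[m][a] for a in range(3))

# Row orthonormality of the full matrix implies the contraction identity.
ok = all(abs(sum(kl[l][k] * kl[m][k] for k in range(3))
             - ((l == m) - mix(l, m))) < 1e-12
         for l in range(3) for m in range(3))
print(ok)
```

The identity holds for every flavour pair, which is what allows the inner sum over $\nu_{k}$ in ${\cal M}_{C}$ to be carried out in closed form.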
\begin{figure}[hbtp] \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,2) \put(1.4,+0.025){\mbox{\epsfxsize=2.8in\epsffile{ch64.eps}}} \end{picture} \end{center} \caption{Counterterm diagram for neutrino self-energy in muon decay} \label{countl} \end{figure} The amplitude for this diagram is \begin{equation} {\cal M}_{C} = - \frac{1}{2}{\cal M}_{tree}^{SM}\sum_{l=e,\mu,\tau} \sum_{k=1,2,3} \delta Z_{L}^{l} \big(K_{L}^{\dagger}\big)_{il}\big(K_{L}\big)_{lk} \big(K_{L}^{\dagger}\big)_{k\mu}. \end{equation} Again, the factor $\frac{1}{2}$ arises because we deal with the external wave function rather than the internal propagator, and the mixing factor $\big(K_{L}^{\dagger}\big)_{k\mu}$ originates at the $\mu W \nu_{k}$ vertex. The amplitude ${\cal M}_{C}$ can be further simplified, \begin{eqnarray} {\cal M}_{C} & = & - \frac{1}{2}{\cal M}_{tree}^{SM} \sum_{l=e,\mu,\tau} \delta Z_{L}^{l}\big(K_{L}^{\dagger}\big)_{il} \sum_{k=1,2,3}\big(K_{L}\big)_{lk}\big(K_{L}^{\dagger}\big)_{k\mu} \nonumber \\ & = & - \frac{1}{2} {\cal M}_{tree}^{SM} \sum_{l=e,\mu,\tau}\delta Z_{L}^{l}\big(K_{L}^{\dagger}\big)_{il} \big(\delta_{l\mu} - l\mu_{mix}\big) \; = \; - \frac{1}{2} {\cal M}_{tree}^{SM} \delta Z_{L}^{\mu} \big(1 - \mu \mu_{mix} \big) \big(K_{L}^{\dagger}\big)_{i\mu} \nonumber \\ & = & - \frac{1}{2} \delta Z_{L}^{\mu} \big(1 - \mu \mu_{mix} \big) {\cal M}_{tree}. \end{eqnarray} The factor $\big(K_{L}^{\dagger}\big)_{i\mu}$ was absorbed by ${\cal M}_{tree} = {\cal M}_{tree}^{SM} \big(K_{L}^{\dagger}\big)_{i\mu}$. Now we can write down the final expressions for the renormalized amplitude ${\hat {\cal M}}_{self}$ and the renormalized neutrino self-energy: \begin{eqnarray} \label{modren} {\hat {\cal M}}_{self} & = & {\cal M}_{self} + {\cal M}_{C} \; = \; - \frac{\Sigma_{L}^{\nu_{l}}}{2} {\cal M}_{tree} - \frac{\delta Z_{L}^{l}}{2} \big(1 - ll_{mix}\big) {\cal M}_{tree}, \\ \label{modren1} {\hat \Sigma_{L}}^{\nu_{l}} & = & \Sigma_{L}^{\nu_{l}} + \delta Z_{L}^{l} \big(1 - ll_{mix}\big).
\end{eqnarray} The constant $\delta Z_{L}^{l}$ is found from \begin{eqnarray} \delta Z_{L}^{l} & = & - \Sigma_{L}^{l} (m_{l}^{2}) - m_{l}^{2} \big[\Sigma_{L}^{l'}(m_{l}^{2}) + \Sigma_{R}^{l'}(m_{l}^{2}) + 2 \Sigma_{S}^{l'}(m_{l}^{2}) \big], \nonumber \\ \Sigma^{l'}_{L,R,S}(m_{l}^{2}) & = & \frac{\partial \Sigma_{L,R,S}^{l}}{\partial p^{2}}(m_{l}^{2}), \end{eqnarray} where $\Sigma_{L}, \Sigma_{R}$ and $\Sigma_{S}$ are, respectively, the left-handed, right-handed and scalar parts of the lepton self-energy given in Eqs. \ref{hasthe} - \ref{leptonself2} \footnote{Now we include the photonic loop $\Sigma_{\gamma l}^{l}$ in the total lepton self-energy, see Eq. \ref{leptonself2} and the following comments.}. The only graph with a significant contribution to the derivative terms is the photonic loop (Eq. \ref{leptonself2}). The other diagrams lead to derivatives of the $B_{0}^{fin}, B_{1}^{fin}$ functions which, when multiplied by $m_{l}^{2}$, are of the order $m_{l}^{2}/M_{W}^{2}$ and therefore negligible. This can be easily seen from Eqs. \ref{males}, \ref{bphoton}, where the derivatives of the $B_{0}^{fin}, B_{1}^{fin}$ functions corresponding to the photonic loop are also given. The final answer is \begin{eqnarray} \label{finans} \delta Z_{L}^{l} & = & - \Sigma_{L}^{l} (m_{l}^{2}) + \frac{\alpha}{2 \pi} \Big( 2 \ln \frac{m_{l}}{\lambda} - 1 \Big). \end{eqnarray} To prove the cancellation of the infinities, we note that the infinite part of $\delta Z_{L}^{l}$ is given by \begin{eqnarray} \delta Z_{L}^{l,\infty} & = & - \frac{\alpha}{4 \pi}\frac{1}{s_{W}^{2}} \Big\{\frac{1}{2} + \frac{1}{4c_{W}^{2}} + \frac{{\cal X}}{4}ll_{mix}\Big\} \Delta_{\mu}, \end{eqnarray} and the infinite part of the neutrino self-energy by \begin{eqnarray} \Sigma_{L}^{\nu_{l},\infty} & = & \frac{\alpha}{4 \pi}\frac{1}{s_{W}^{2}} \Big\{\frac{{\cal X}}{4}ll_{mix}(1-ll_{mix}) + \frac{1}{2}(1-ll_{mix}) + \frac{1}{4c_{W}^{2}}ll_{mix}(1-ll_{mix}) \Big. \nonumber \\ & + & \Big.
\frac{1}{4c_{W}^{2}}(1-ll_{mix})^{2} \Big\} \Delta_{\mu}. \end{eqnarray} From the formulae above it can easily be seen that the infinities cancel out in Eq. \ref{modren1}. \subsection{Limit $M_{N} \gg M_{W}, M_{Z}, M_{H}$} We will investigate the large $M_{N}$ behaviour of the renormalized neutrino self-energy ${\hat \Sigma}_{L}^{\nu_{l}}$. For large $M_{N}$ we get from Eq. \ref{males} \begin{eqnarray} B_{0}(p;M_{H,Z,W},M_{N}) & = & 1 - 2 \ln M_{N}, \nonumber \\ B_{1}(p;M_{H,Z,W},M_{N}) & = & - \frac{1}{4} + \ln M_{N}. \end{eqnarray} This implies quadratic nondecoupling for $\Sigma_{L}^{H}(p)$ and $\Sigma_{L}^{\chi}(p)$, see Eq. \ref{neuself}: \begin{eqnarray} \Sigma_{L}^{H}(p) + \Sigma_{L}^{\chi}(p) & = & \frac{\alpha}{2\pi} \frac{1}{4s_{W}^{2}} ll_{mix}(1-ll_{mix}) \frac{M_{N}^{2}}{M_{W}^{2}} \biggl[ \frac{1}{2}\Delta_{\mu} + \frac{3}{4} - \ln M_{N} \biggr], \;\;\; \end{eqnarray} and for $\Sigma_{L}^{\phi N}$, the left-handed part of $\Sigma_{\phi N}^{l}$ (see Eq. \ref{leptonself1}), which contributes to ${\hat \Sigma}_{L}^{\nu_{l}}$ via the counterterm $\delta Z_{L}^{l}$, see Eq. \ref{finans}: \begin{eqnarray} \Sigma_{L}^{\phi N} & = & + \frac{\alpha}{16\pi s_{W}^{2}}ll_{mix} \frac{M_{N}^{2}}{M_{W}^{2}} \biggl[ \Delta_{\mu} + \frac{3}{2} - 2 \ln M_{N} \biggr]. \end{eqnarray} From here we can see that $\Sigma_{L}^{\phi N}$ not only cancels the infinities in $\Sigma_{L}^{H}(p)$ and $\Sigma_{L}^{\chi}(p)$, but, in the limit investigated, also cancels the finite parts: since $\Delta_{\mu} + \frac{3}{2} - 2 \ln M_{N} = 2 \bigl[ \frac{1}{2}\Delta_{\mu} + \frac{3}{4} - \ln M_{N} \bigr]$, the term $- \Sigma_{L}^{\phi N} (1-ll_{mix})$ contained in $\delta Z_{L}^{l}(1-ll_{mix})$ exactly matches $\Sigma_{L}^{H}(p) + \Sigma_{L}^{\chi}(p)$. As a result, there is no quadratic nondecoupling in the renormalized neutrino self-energy. \section{Vertex diagrams} Diagrams modifying the $W\mu\nu_{i}$ vertex are depicted in Fig. \ref{vermuon}. Another set, one that modifies the $We\nu_{j}$ vertex, is not shown.
\begin{figure}[hbtp] \begin{center} \setlength{\unitlength}{1in} \begin{picture}(6,3) \put(0.93,+0.325){\mbox{\epsfxsize=4.0in\epsffile{ch62.eps}}} \end{picture} \end{center} \caption{Vertex diagrams for muon decay} \label{vermuon} \end{figure} The sum over the depicted set of diagrams gives the muon vertex amplitude ${\cal M}_{vertex}^{\mu}$: \begin{eqnarray} {\cal M}_{vertex}^{\mu} & = & {\cal M}_{\mu \nu Z} + {\cal M}_{\mu N Z} + {\cal M}_{Z W \mu} + {\cal M}_{\gamma W \mu} + {\cal M}_{WZ\nu}\nonumber \\ & + & {\cal M}_{WZN} + {\cal M}_{\phi ZN} + {\cal M}_{WHN} + {\cal M}_{\phi HN} + {\cal M}_{\phi \chi N}. \end{eqnarray} The computation of the diagrams yields \begin{eqnarray} {\cal M}_{vertex}^{\mu} & = & {\cal M}_{tree} \frac{\alpha}{4 \pi} \Biggl\{ \frac{2s_{W}^{2}-1}{4s_{W}^{2}c_{W}^{2}}\Big(\Delta_{M_{Z}}-\frac{1}{2}\Big) (1-{\mu \mu}_{mix}) \Biggr.\nonumber \\ & + & \frac{2s_{W}^{2}-1}{4s_{W}^{2}c_{W}^{2}}\Big(\Delta_{M_{Z}}-\frac{1}{2}- \frac{M_{N}^{2}}{M_{Z}^{2}-M_{N}^{2}}\ln \frac{M_{Z}^{2}}{M_{N}^{2}}\Big) {\mu \mu}_{mix}\nonumber \\ & + & \frac{\frac{1}{2}-s_{W}^{2}}{s_{W}^{2}}\Big(3 \Delta_{M_{W}} + \frac{5}{2} + \frac{3}{s_{W}^{2}}\ln c_{W}^{2}\Big) + 3\Big(\Delta_{M_{W}} + \frac{5}{6}\Big)\nonumber \\ & + & \frac{3}{2s_{W}^{2}}\Big(\Delta_{M_{W}} + \frac{5}{6} + \frac{1}{s_{W}^{2}} \ln c_{W}^{2}\Big)(1-{\mu \mu}_{mix})\nonumber \\ & + & \frac{3}{2s_{W}^{2}}\biggl[ \Delta_{M_{W}} + \frac{5}{6} + \frac{1}{s_{W}^{2}}\ln c_{W}^{2} + \frac{M_{N}^{2}}{M_{Z}^{2}-M_{W}^{2}} v(M_{Z})\biggr]{\mu \mu}_{mix} \nonumber \\ & + & \frac{1}{2c_{W}^{2}} \frac{-M_{N}^{2}}{M_{Z}^{2}-M_{W}^{2}}v(M_{Z}) {\mu \mu}_{mix}\nonumber \\ & + & \frac{1}{2s_{W}^{2}} \frac{-M_{N}^{2}}{M_{H}^{2}-M_{W}^{2}}v(M_{H}) {\mu \mu}_{mix}\nonumber \\ & + & \frac{1}{8s_{W}^{2}}\frac{M_{N}^{2}}{M_{W}^{2}}\biggl[ \Delta_{M_{W}} + \frac{3}{2} -\frac{M_{H}^{2}}{M_{W}^{2}-M_{H}^{2}}\ln \frac {M_{W}^{2}}{M_{H}^{2}} + \frac{M_{N}^{2}}{M_{H}^{2}-M_{W}^{2}}v(M_{H}) \biggr]{\mu \mu}_{mix}\nonumber 
\\ & + & \frac{1}{8s_{W}^{2}}\frac{M_{N}^{2}}{M_{W}^{2}}\biggl[ \Delta_{M_{W}} + \frac{3}{2} -\frac{M_{Z}^{2}}{M_{W}^{2}-M_{Z}^{2}}\ln \frac {M_{W}^{2}}{M_{Z}^{2}} + \frac{M_{N}^{2}}{M_{Z}^{2}-M_{W}^{2}}v(M_{Z}) \biggr] \nonumber \\ & \times & {\mu \mu}_{mix} \Biggr\}, \end{eqnarray} where \begin{eqnarray} v(m) & = & \ln \frac{M_{W}^{2}}{m^{2}} + \frac{M_{N}^{2}}{M_{W}^{2}-M_{N}^{2}}\ln \frac{M_{W}^{2}}{M_{N}^{2}} - \frac{M_{N}^{2}}{m^{2}-M_{N}^{2}}\ln \frac{m^{2}}{M_{N}^{2}}. \end{eqnarray} The vertices are renormalized as (see Eq. \ref{rvertexa}) \begin{eqnarray} {\hat \Lambda}^{\mu} & = & \Lambda^{\mu} + \delta Z_{1}^{W} - \delta Z_{2}^{W} + \delta Z_{L}^{\mu}, \end{eqnarray} where \begin{eqnarray} \Lambda^{\mu} & = & {\cal M}_{vertex}^{\mu} / {\cal M}_{tree}, \\ \delta Z_{1}^{W} - \delta Z_{2}^{W} & = & - \frac{\alpha}{2 \pi s_{W}^{2}} \Delta_{M_{W}}. \end{eqnarray} Looking for the dominant graphs in the limit $M_{N} \gg M_{W}, M_{Z}, M_{H}$, we note the leading contribution to the function $v(m)$ is \begin{eqnarray} v(m)^{appx} & = & \frac{1}{M_{N}^{2}}\Big(m^{2}\ln \frac{m^{2}}{M_{N}^{2}} - M_{W}^{2} \ln \frac{M_{W}^{2}}{M_{N}^{2}}\Big), \end{eqnarray} which implies the graphs of Fig. \ref{vermuon}f have quadratic nondecoupling \footnote{Another interesting point the authors of Ref. \cite{kniehl} make (see also footnote 2 on page 122) is that the diagrams for the muon decay ($\pi$ decay in Ref. \cite{kniehl}), Figs. \ref{boxmuon}e, \ref{selfmuon}c, \ref{vermuon}f can be singled out as dominant in an elegant way by using the Goldstone boson equivalence theorem.}. However, as in the case of the neutrino self-energy, both infinite and finite terms of these graphs are cancelled by the $\Sigma_{L}^{\phi N}$ term in the counterterm $\delta Z_{L}^{l}$. Therefore there are no $M_{N}^{2}$ dependent terms in the renormalized vertex diagrams either. 
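The quoted leading behaviour of $v(m)$ can be checked numerically; a short Python sketch (illustration only; masses in GeV, with the $M_{W}$ and $M_{Z}$ values used in the text and an arbitrary $M_{N} = 10$~TeV):

```python
import math

# Numerical check (illustration, not from the thesis) of the large-M_N
# limit of v(m) quoted above.  All masses squared, in GeV^2.
MW2, MZ2 = 80.41**2, 91.187**2

def v(m2, mn2):
    # Exact v(m) as given in the text.
    return (math.log(MW2 / m2)
            + mn2 / (MW2 - mn2) * math.log(MW2 / mn2)
            - mn2 / (m2 - mn2) * math.log(m2 / mn2))

def v_appx(m2, mn2):
    # Leading large-M_N behaviour quoted above.
    return (m2 * math.log(m2 / mn2) - MW2 * math.log(MW2 / mn2)) / mn2

mn2 = 10000.0**2  # M_N = 10 TeV
rel = abs(v(MZ2, mn2) / v_appx(MZ2, mn2) - 1.0)
print(rel)  # relative difference is tiny for M_N >> M_W, M_Z
```

The logarithms cancel between the three terms of the exact expression, leaving only the $1/M_{N}^{2}$-suppressed remainder, which is why the graphs containing $v(m)$ multiplied by $M_{N}^{2}$ exhibit quadratic nondecoupling.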
\section{Results} The muon decay loops modify the quantity $\Delta r$ in the implicit relation between $M_{W}$ and $G_{\mu}$, see Eq. \ref{MG}: \begin{eqnarray} \label{mwg} M_{W}^{2} s_{W}^{2} & = & \frac{\pi\alpha}{\sqrt{2} G_{\mu} (1-\Delta r)} \Big(1-\frac{1}{2}ee_{mix}-\frac{1}{2}\mu\mu_{mix}\Big), \end{eqnarray} and $\Delta r$ can be written as (see Eq. \ref{delrv}) \begin{eqnarray} \Delta r & = & \frac{{\cal R}e \:{\hat \Sigma}_{W}(0)}{M_{W}^{2}} + \delta_{V}. \end{eqnarray} Here the parameter $\delta_{V}$ is the sum of the loop diagrams calculated in this section: \begin{eqnarray} \delta_{V} & = & \frac{{\cal M}_{\gamma e W \mu}}{{\cal M}_{tree}} + {\hat \Lambda}^{\mu} + {\hat \Lambda}^{e} - \frac{1}{2} {\hat \Sigma}^{\nu_{e}} - \frac{1}{2} {\hat \Sigma}^{\nu_{\mu}} + \frac{{\cal M}_{box}}{{\cal M}_{tree}}. \end{eqnarray} Numerical results are shown in Table \ref{muonloops}. As input data we used masses from the standard set and mixings $ee_{mix} = 0.0071$, $\mu\mu_{mix} = 0.0014$ and $\tau\tau_{mix} = 0$. We set $\tau\tau_{mix} = 0$ because, at its maximal currently allowed value (0.033), it would make the corrections to ${\hat \Sigma}_{W}(0)/M_{W}^{2}$ much larger than those to $\delta_{V}$ (which depend only on $ee_{mix},\mu\mu_{mix}$), and we would like to show the case where the latter are also important. In the first three lines of the table we show how much the self-energy, vertex and box diagrams contribute to $\delta_{V}$ (line 4) for NHL masses $M_{N}$ of up to 30 TeV. Also shown (lines 5, 6) are ${\hat \Sigma}_{W}(0)/M_{W}^{2}$ and $\Delta r$. Ultimately we are interested in NHL effects in the observable $M_{W}$ (line 7).
\begin{table}[htb] \begin{center} \begin{tabular}{|l|r|r|r|r|r|r|} \hline & SM & 0.5 TeV & 5 TeV & 15 TeV & 30 TeV & \\ \hline ${\hat \Sigma}^{\nu_{e}}+ {\hat \Sigma}^{\nu_{\mu}}$ & - 4.995 & - 4.972 & - 4.982 & - 4.988 & - 4.992 & $\times 10^{-2}$ \\ ${\hat \Lambda}^{\mu}$ & - 1.441 & - 1.442 & - 1.444 & - 1.444 & - 1.445 & $\times 10^{-2}$ \\ ${\cal M}_{box}/{\cal M}_{tree}$ & 4.273 & 4.300 & 4.315 & 4.457 & 4.950 & $\times 10^{-3}$ \\ $\delta_{V}$ & 6.670 & 6.539 & 6.525 & 6.652 & 7.133 & $\times 10^{-3}$ \\ ${\hat \Sigma}_{W}(0)/M_{W}^{2}$ & 2.396 & 2.346 & 2.301 & 1.872 & 0.329 & $\times 10^{-2}$ \\ $\Delta r$ & 3.063 & 3.000 & 2.954 & 2.537 & 1.043 & $\times 10^{-2}$ \\ $M_{W}$ & 80.459 & 80.537 & 80.545 & 80.612 & 80.846 & GeV \\ \hline \end{tabular} \end{center} \caption{Contribution of the muon decay loops to $\delta_{V}$ and $\Delta r$} \label{muonloops} \end{table} The results confirm the expectations from the previous sections. There is no nondecoupling for the self-energies and vertices, and there is a quadratic dependence on $M_{N}$ for the boxes. The boxes become important at very high masses. Still, they are small compared to the change in ${\hat \Sigma}_{W}(0)/M_{W}^{2}$. This is due to the fact that the dominant boxes depend on the product $ee_{mix}\mu\mu_{mix}$ (see Eq. \ref{pnhn}), while the correction to the $W$ propagator is proportional to $k_{HH} = ee_{mix}^{2} + \mu\mu_{mix}^{2}$ (see Eq. \ref{khhmixi}), which is allowed to be larger given the current bounds on the mixings. The $W$ mass jumps from $M_{W}^{SM} = 80.459$ GeV to $M_{W} = 80.537$ GeV at $M_{N} = 0.5$ TeV mainly as a result of the tree-level correction $\Big(1-\frac{1}{2}ee_{mix}-\frac{1}{2}\mu\mu_{mix}\Big)$, see Eq.~\ref{mwg}. After that, it rises very slowly until the $M_{N}$ dependent amplitudes become dominant above $5$ TeV. In conclusion, the numerical analysis of Chapter 6 turns out to be basically valid even after the restriction $ee_{mix} = \mu\mu_{mix} = 0$ is relaxed.
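Eq. \ref{mwg} can be solved for $M_{W}$ directly, since the on-shell definition $s_{W}^{2} = 1 - M_{W}^{2}/M_{Z}^{2}$ turns it into a quadratic equation in $M_{W}^{2}$. The Python sketch below (an illustration; the values of $\alpha$, $G_{\mu}$ and $M_{Z}$ are standard inputs assumed here, and $\Delta r$ is read off the table) roughly reproduces the first two entries of line 7:

```python
import math

# Sketch of Eq. (mwg): M_W^2 s_W^2 = pi*alpha / (sqrt(2) G_mu (1 - Delta r))
#                                    * (1 - ee_mix/2 - mumu_mix/2),
# with s_W^2 = 1 - M_W^2/M_Z^2.  Input constants are standard values
# assumed for illustration; Delta r is taken from the table above.
ALPHA = 1.0 / 137.036
GMU = 1.16639e-5          # Fermi constant, GeV^-2
MZ2 = 91.187**2           # M_Z^2, GeV^2

def mw(delta_r, ee_mix=0.0, mumu_mix=0.0):
    a = (math.pi * ALPHA / (math.sqrt(2.0) * GMU * (1.0 - delta_r))
         * (1.0 - 0.5 * ee_mix - 0.5 * mumu_mix))
    # M_W^2 (1 - M_W^2/M_Z^2) = a  ->  quadratic in M_W^2;
    # the larger root is the physical, SM-like solution.
    mw2 = 0.5 * (MZ2 + math.sqrt(MZ2 * MZ2 - 4.0 * MZ2 * a))
    return math.sqrt(mw2)

print(round(mw(3.063e-2), 3))                  # compare with the SM column
print(round(mw(3.000e-2, 0.0071, 0.0014), 3))  # compare with the 0.5 TeV column
```

The second call includes the tree-level mixing factor, which is responsible for the jump in $M_{W}$ between the first two columns discussed above.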
It can be improved by the inclusion of the tree-level correction $\Big(1-\frac{1}{2}ee_{mix}-\frac{1}{2}\mu\mu_{mix}\Big)$, while the largest loop corrections, the box diagrams of Fig. \ref{boxmuon}e, are only marginally important. \chapter{Conclusions} Two theoretical schemes were discussed here as possible solutions of the problem of the small neutrino masses: the see-saw mechanism of Yanagida and Gell-Mann, Ramond and Slansky \cite{guts,seesaw} (Sec. \ref{see-saw32}); and the superstring-inspired low-energy model of neutrino masses suggested in Refs. \cite{vallemo,wolfe} (Chapter 3). Both these solutions introduce NHL's into the theory as a necessary ingredient. Our main consideration has been the phenomenology of NHL's in the superstring-inspired low-energy model. The qualitative features of our analysis are applicable also in the context of see-saw models with enhanced mixings. The superstring-inspired low-energy model (Chapter 3) is a simple extension of the SM. It enriches only the neutral fermion spectrum of the SM, leaving the gauge and Higgs sectors intact. We found that among the new parameters of the model, seven are especially important: the `flavour-conserving' mixing parameters $ee_{mix}, \mu\mu_{mix}, \tau\tau_{mix}$; the `flavour-violating' mixing parameters $e\mu_{mix}, e\tau_{mix}, \mu\tau_{mix}$; and the mass scale $M_{N}$ of NHL's (we assumed all three NHL's have mass $M_{N}$). The bounds on the six mixing parameters (Sec. \ref{review3}) are largely independent of the mass $M_{N}$. On the other hand, bounds on $M_{N}$ always depend on the mixings. The mass $M_{N}$, if larger than $M_{Z}$, can presently only be probed in radiative corrections (loops). A traditional approach was mostly limited to hypothetical lepton flavour-violating processes such as $\mu \rightarrow e \gamma;\: \mu, \tau \rightarrow e e^{+} e^{-}; \: Z \rightarrow e^{\pm}\mu^{\mp}$, etc.\ \cite{Ng1,Ng2,bernabeu1,ggjv,Ilakovac,Jarlskog,Valle2,Korner,pilaftsis1}.
We reviewed constraints from these processes in Chapter 5. Besides flavour-violating processes NHL's could also induce (via radiative corrections) deviations from the SM in currently observed processes. We calculated radiative corrections due to NHL's to three such observables: leptonic widths of the Z boson $\Gamma_{ll}$, lepton universality breaking parameter $U_{br}$ and the mass of the W boson $M_{W}$ (Chapters 6 and 7). We found that these observables form three complementary quantities as far as sensitivity to NHL masses and mixings is concerned. $\Gamma_{ll}$ depends on three kinds of radiative corrections: the vertex corrections (Sec. \ref{secver}), the Z oblique corrections and the W oblique corrections (Sec. \ref{seczprop}); the universality breaking parameter depends only on the vertex corrections; the W mass $M_{W}$ depends to a large degree only on the W oblique corrections. The effect of the NHL mass $M_{N}$ in radiative corrections is, on the one hand, suppressed by the small mixings; on the other hand it is enhanced due to the violation of the Appelquist-Carazzone theorem (Sec. \ref{appelc}). These competing tendencies are reflected by the typical behaviour of the dominant terms in radiative corrections due to NHL's (see Eqs. \ref{aprox1}, \ref{aprox4}), \begin{eqnarray} \label{domterm} \sim k_{HH} \frac{M_{N}^{2}}{M_{W}^{2}} & \sim & (\tau \tau_{mix})^{2} \frac{M_{N}^{2}}{M_{W}^{2}}. \end{eqnarray} To make up for the small mixings, only NHL's with masses in the TeV range can lead to significant deviations from the SM. Assuming $\tau \tau_{mix} = 0.033$, $ee_{mix} = \mu\mu_{mix} = 0$ and $m_{t} = 176$ GeV, we derived the following limit on the NHL mass at the $2\sigma$ level from the $Z$ decay width to $\tau$~leptons: \begin{equation} \label{upp1} M_{N} \leq 4.3 \; {\rm TeV}. 
\end{equation} The universality breaking ratio, $U_{br}$, leads to an even better upper limit, \begin{equation} \label{upp2} M_{N} \leq 3.8 \; {\rm TeV}, \end{equation} at the $2\sigma$ level. We can use Eq. \ref{domterm} to display the approximate dependence of the above limits on $\tau \tau_{mix}$: \begin{eqnarray} M_{N} & < & 4.3 \times \frac{0.033}{\tau \tau_{mix}} \;{\rm TeV} \end{eqnarray} from Eq. \ref{upp1} and \begin{eqnarray} M_{N} & < & 3.8 \times \frac{0.033}{\tau \tau_{mix}} \;{\rm TeV} \end{eqnarray} from Eq. \ref{upp2}. Note that the limits of Eqs. \ref{upp1}, \ref{upp2} are comparable to the limit \begin{eqnarray} M_{N} & < & 4 \;{\rm TeV}, \end{eqnarray} derived from the considerations of perturbation theory breakdown in Sec. \ref{breakdown}. We also found some sensitivity of the W mass $M_{W}$ to the NHL mass and mixings, which depends to a large degree on the top quark mass (Figs. \ref{nummw}a,b). In Chapter 7 we generalized our analysis of Chapter 6 by relaxing the restriction $ee_{mix}=\mu\mu_{mix}=0$. We found that while the numerical results of Chapter 6 remain basically valid, they can be improved by the inclusion of the tree-level correction to the muon decay, $\Big(1-\frac{1}{2}ee_{mix}-\frac{1}{2}\mu\mu_{mix}\Big)$. As already noted in Chapter 6, we feel there are at least two reasons which give the (flavour-conserving) processes studied in this thesis a distinct advantage over the flavour-violating ones. First, the limits on $M_{N}$ which we derived are only matched by those from $\mu \rightarrow e e e$. The flavour-violating decay rates for $\tau$ and $Z$ are below the current experimental sensitivity (see Sec. \ref{numeres} and \ref{fvple}). Moreover, the $\mu \rightarrow e e e$ decay depends only on $ee_{mix}, e\mu_{mix}$, two of the six mixing parameters (see Eq.~\ref{mnlimit4}), and may be unobservable if $ee_{mix}$ or $e\mu_{mix}$ are very small. Second, the inequality Eq.
\ref{ineq} can further suppress the flavour-violating processes against the flavour-conserving ones via the `conspiracy of the phases' in the sum of complex terms making up the flavour-violating parameters. For these two reasons, first signatures of neutral heavy leptons could come from flavour-conserving observables. At this time, LEP has stopped its runs at the Z-peak energy and is running at $130 - 140$ GeV. It will eventually be producing W pairs, which will allow the mass $M_{W}$ to be measured with a precision of $0.05$ GeV \cite{mw2} (currently $M_{W} = 80.410 \pm 0.180$ GeV \cite{mw1}). Combined with more precise measurements of the top quark mass, we might be in a position to place even more stringent limits on NHL masses and mixings from our prediction of $M_{W}$ (see Figs. \ref{nummw}a,b). The observation of neutral heavy leptons is essential for our understanding of the small neutrino masses. It would provide us with significant hints on grand unified theories and possibly superstring theories. \begin{document} \pagenumbering{roman} \thispagestyle{empty} \vspace*{\fill} \begin{center} {\Huge The Phenomenology of Neutral} \\[.2in] {\Huge Heavy Leptons} \\[.2in] by \\[.2in] Ivan Melo, RNDr. \\[.4in] A thesis submitted to\\ the Faculty of Graduate Studies and Research \\ in partial fulfilment of\\ the requirements for the degree of\\[.1in]Doctor of Philosophy\\[.5in] Department of Physics \\[.2in] Ottawa-Carleton Institute for Physics\\ Ottawa, Ontario\\ February, 1996\\ \copyright\ copyright 1996, Ivan Melo \\ \end{center} \vspace*{\fill} \thispagestyle{empty} \vspace*{\fill} \begin{center} The undersigned hereby recommend to\\ the Faculty of Graduate Studies and Research\\ acceptance of the thesis,\\[.5in] {\Large The Phenomenology of Neutral Heavy Leptons}\\[.5in] submitted by Ivan Melo, RNDr.
\\ in partial fulfilment of the requirements for \\ the degree of Doctor of Philosophy \\[.5in] \rule{3in}{.5pt}\\ Chair, Department of Physics\\[.5in] \rule{3in}{.5pt}\\ Thesis Supervisor\\[.5in] \rule{3in}{.5pt}\\ External Examiner\\[.5in] Carleton University\\[.5in] Date: \rule{1.5in}{.5pt} \end{center} \vspace*{\fill} \setcounter{page}{0} \newpage \vspace*{\fill} {\Huge Abstract} \vspace{.5in} Naturally small neutrino masses can arise in some grand unified models. The mechanism of neutrino mass generation in these models typically requires the existence of neutral heavy leptons. We study the low-energy phenomenology of these new fermions. Concentrating on loop corrections due to neutral heavy leptons, we examine how the flavour-conserving leptonic decays of the Z boson, universality breaking in these decays, and the W boson mass depend on the mass and mixings of the neutral heavy leptons. Working within the framework of a superstring-inspired $SU(2)_{L} \times U(1)_{Y}$ model, we show that these flavour-conserving processes have some virtues over the traditionally considered flavour-violating decays. \vspace*{\fill} \newpage \vspace*{\fill} {\Huge Acknowledgements} \vspace{.5in} Having come to Carleton from overseas, I was lucky to find here excellent conditions for my work, from the top research carried out at the department, to even more importantly, a very friendly, stimulating and inspiring atmosphere created by the Physics faculty, staff and students. I am indebted to everyone who helped me to feel at home. Especially one person is dear to my family. In the late Roselyn Tighe we found an exceptional friend who stood by us in both good and bad times. She became our Canadian mom. Thank you, Roz, for everything. I was privileged to work closely with two fine physicists who helped me to achieve my best.
My thanks go to Pat Kalyniak who helped my dream of doing theoretical physics to come true, for her guidance and support; and to Peter Watson for the many discussions his true physics spirit made so exciting. To my parents, for their support and prayers, I am grateful. To Jakub, Matej and Katka, for giving me the strength to pursue my goal, my love. \vspace*{\fill} \tableofcontents \newpage \listoftables \addcontentsline{toc}{chapter}{List of Tables} \newpage \addcontentsline{toc}{chapter}{List of Figures} \listoffigures \newpage \pagenumbering{arabic} \setcounter{page}{1} \markright{} \pagestyle{myheadings} \thispagestyle{plain} \include{ch1} \newpage \include{ch2} \newpage \include{ch3} \newpage \include{ch4} \newpage \include{ch5} \newpage \include{ch6} \newpage \include{ch7} \newpage \include{ch8} \newpage
\section{Introduction} The Euler equations for incompressible flow are a fundamental model in fluid dynamics that describe the motion of ideal fluids: \begin{equation}\label{Euler} \begin{split} \partial_t u &+ u\cdot\nabla u +\nabla p= 0 \\ &\nabla \cdot u=0. \\ \end{split} \end{equation} In this equation, $u$ is the velocity field and $p$ is the pressure of an ideal fluid flowing in $\mathbb{R}^2$. A key difficulty in understanding the dynamics of 2d Euler flows is the non-locality of the system due to the presence of the pressure term. Defining the vorticity $\omega:= \nabla^\perp\cdot u$, it is insightful to study the Euler equations in vorticity form: \begin{equation}\label{EulerVorticty} \begin{split} \partial_t \omega &+ u\cdot\nabla \omega= 0, \\ &\nabla \cdot u=0 \\ &u=\nabla^\perp \Delta^{-1} \omega.\\ \end{split} \end{equation} Because the $L^{\infty}$ norm of vorticity is conserved in the Euler equations in two dimensions, Yudovich \cite{Y} proved that there is a unique global-in-time solution to the Euler equation corresponding to every bounded and decaying initial vorticity. See also \cite{Wo,BKM,HO,Y,Ka,MP,MB}. This bound on the $L^{\infty}$ norm is unfortunately unstable even to very mild perturbations of the equation~\cite{CV,EM,EsharpLp}. To understand this phenomenon, we are interested in studying linear perturbations of the Euler equations in two dimensions as follows: \begin{equation}\label{EulerRieszM} \begin{split} \partial_t u &+ u\cdot\nabla u +\nabla p= \begin{pmatrix} 0\\ u_1 \end{pmatrix}\\ &\nabla \cdot u=0 \\ \end{split} \end{equation} Equation \eqref{EulerRieszM} is a model for many problems in fluid dynamics that have a coupling with the Euler equations. For instance, similar types of equations appear in viscoelastic fluids, see~\cite{CK,EF,LM,CM}, and in magnetohydrodynamics, see~\cite{BLW,H,CW,WZ}. Further, they also appear when studying the stochastic Euler equations, see~\cite{GV}.
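As a quick sanity check of the vorticity formulation \eqref{EulerVorticty}, the relations $u=\nabla^\perp\psi$ with $\psi=\Delta^{-1}\omega$, $\nabla\cdot u=0$, and $\omega=\Delta\psi$ can be verified symbolically. A minimal sympy sketch (the script and its names are ours, with the convention $\nabla^\perp=(-\partial_y,\partial_x)$ used later in the paper):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
psi = sp.Function("psi")(x, y)  # stream function, psi = Delta^{-1} omega

# u = grad^perp psi = (-d_y psi, d_x psi)
u1 = -sp.diff(psi, y)
u2 = sp.diff(psi, x)

div_u = sp.diff(u1, x) + sp.diff(u2, y)       # should vanish identically
curl_u = sp.diff(u2, x) - sp.diff(u1, y)      # the vorticity grad^perp . u
lap_psi = sp.diff(psi, x, 2) + sp.diff(psi, y, 2)

div_check = sp.simplify(div_u)                 # 0: u is automatically divergence-free
vort_check = sp.simplify(curl_u - lap_psi)     # 0: omega = Delta psi, i.e. u = grad^perp Delta^{-1} omega
```

This is only a consistency check on the definitions; it plays no role in the analysis itself.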
Writing \eqref{EulerRieszM} in vorticity form, we get \begin{equation}\label{EulerR} \begin{split} \partial_t \omega &+ u\cdot\nabla \omega= \partial_x u_1 \\ &\nabla \cdot u=0 \\ &u=\nabla^\perp \Delta^{-1} \omega.\\ \end{split} \end{equation} We observe that the challenge of studying these equations is that the right-hand side of \eqref{EulerR} can be written as the Riesz transform of the vorticity, $\partial_x u_1= R(\omega)$, which is unbounded on $L^{\infty}$. P. Constantin and V. Vicol considered these equations with weak dissipation in \cite{CV}, and they proved global well-posedness. However, without dissipation it is an open question whether these equations are globally well-posed. In this work, we are interested in the question of $L^{\infty}$ ill/well-posedness of the Euler equations with Riesz forcing and the local rate of $L^\infty$ growth. The first author and N. Masmoudi studied the Euler equations with Riesz forcing in~\cite{EM}, where they proved that it is mildly ill-posed. This means that there is a universal constant $c>0$ such that for all $\epsilon>0$, there is $\omega_0\in C^\infty$ for which the unique local solution to \eqref{EulerR} satisfies: \begin{equation}\label{EMresult} |\omega_0|_{L^\infty}\leq \epsilon, \,\, \text{but} \,\, \sup_{t\in [0,\epsilon]} |\omega(t)|_{L^\infty}\geq c. \end{equation} The authors in~\cite{EM} conjectured that the Euler equations with Riesz forcing are actually strongly ill-posed in $L^{\infty}$; namely, that the constant $c$ in \eqref{EMresult} can be taken arbitrarily large. The goal of our work here is to show that this is indeed possible. To show this, we use the first author's Biot-Savart law decomposition~\cite{E} to derive a leading order system for the Euler equations with Riesz forcing. We then show that the leading order system is strongly ill-posed in $L^{\infty}$.
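For concreteness, the forcing operator acts in Fourier as the multiplier $k_1k_2/|k|^2$ (the symbol of $\partial_{12}\Delta^{-1}$), which is easy to realize numerically in a periodic setting. A minimal numpy sketch, with the torus discretization, grid size, and function names our own (used only to illustrate the operator, not part of the paper's argument):

```python
import numpy as np

def riesz_12(omega):
    """Apply R_12 = d_x d_y Delta^{-1} on a periodic grid via FFT.

    Fourier multiplier: (i k1)(i k2)/(-|k|^2) = k1 k2 / |k|^2, set to 0 at k = 0.
    """
    n = omega.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0,1,...,-1
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    k_sq = k1**2 + k2**2
    mult = np.where(k_sq > 0, k1 * k2 / np.where(k_sq > 0, k_sq, 1.0), 0.0)
    return np.real(np.fft.ifft2(mult * np.fft.fft2(omega)))

# check on the single mode omega = sin(x) sin(y)
n = 64
x = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
omega = np.sin(X) * np.sin(Y)
expected = -0.5 * np.cos(X) * np.cos(Y)
err = np.abs(riesz_12(omega) - expected).max()
```

On $\omega=\sin x\,\sin y$ the multiplier acts by $\pm\frac12$ on the four frequencies $(\pm1,\pm1)$, giving $R_{12}\,\omega=-\frac12\cos x\,\cos y$, which the script reproduces to machine precision.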
Using this, we can show that the Euler equations with Riesz forcing are strongly ill-posed by estimating the error between the leading order system and the Euler with Riesz forcing system on a specific time interval. We should remark that the main application of the approach of the first author and N. Masmoudi in \cite{EM} was to prove ill-posedness of the Euler equation in the integer $C^k$ spaces, which was also proved independently by J. Bourgain and D. Li in \cite{BL}. Regarding the notion of mild ill-posedness in $L^{\infty}$ for models related to the Euler with Riesz forcing system, see the work of J. Wu and J. Zhao in \cite{WZ} on the $2D$ resistive MHD equations. \subsection{Statement of the main result} \begin{theorem} For any $\alpha,\delta>0$, there exist initial data $\omega_0^{\alpha,\delta} \in C_c^{\infty}(\mathbb{R}^2)$ and a time $T(\alpha)$ such that the corresponding unique global solution $\omega^{\alpha,\delta}$ to \eqref{EulerR} satisfies, at $t=0$, $$|\omega_0^{\alpha,\delta}|_{L^{\infty}}=\delta,$$ but for any $0<t\leq T(\alpha)$ we have $$ \quad|\omega^{\alpha,\delta}(t)|_{L^{\infty}} \geq |\omega_0|_{L^\infty}+c \log (1+\frac{c}{\alpha}t), $$ where $T(\alpha)= c \alpha |\log(\alpha)|$ and $c>0$ is a constant independent of $\alpha.$ \end{theorem} \begin{remark} Note that at time $t=T(\alpha),$ we have that \[|\omega^{\alpha,\delta}|_{L^\infty}\geq c\log(c|\log\alpha|),\] which can be made arbitrarily large as $\alpha\rightarrow 0.$ Fixing $\delta>0$ small and then taking $\alpha$ sufficiently small thus gives strong ill-posedness for \eqref{EulerR} in $L^\infty.$ \end{remark} \begin{remark} As we will discuss below, we in fact establish upper and lower bounds on the solutions we construct, so that on the same time interval we have: $$ \quad|\omega^{\alpha,\delta}(t)|_{L^{\infty}} \approx |\omega_0|_{L^\infty}+c \log (1+\frac{c}{\alpha}t).
$$ This should be contrasted with the linear problem, where the upper and lower bounds for the same data come without the $\log:$ \[|\omega^{\alpha,\delta}_{linear}(t)|_{L^\infty}\approx |\omega_0|_{L^\infty} + c(1+\frac{c}{\alpha}t).\] \end{remark} \begin{remark} Our ill-posedness result applies to the equation: \[\partial_t\omega+u\cdot\nabla\omega=R(\omega),\] where $R=R_{12}=\partial_{12}\Delta^{-1}.$ Note that a direct consequence of the result gives strong ill-posedness when $R=R_{11}$ or $R=R_{22}$, even though these are dissipative on $L^2.$ This can be seen just by noting that a linear change of coordinates can transform $R_{12}$ into a constant multiple of $R_{11}-R_{22}=2R_{11}-\mathrm{Id}$. The strong ill-posedness for the Euler equation with forcing by any second order Riesz transform (other than the identity) follows. We further remark that the same strategy can be used to study the case of general Riesz transforms, though we do not undertake this here since the case of forcing by second order Riesz transforms is the most relevant for the applications we are aware of (such as the 3d Euler equations, the Boussinesq system, visco-elastic models, MHD, etc.). \end{remark} \subsection{Comparison with the linear equation and the effect of transport} We now move to compare the result of this paper with the corresponding linear results and emphasize the regularizing effect of the non-linearity in this problem. The ill-posedness result of \cite{EM} relies on viewing \eqref{EulerR} as a perturbation of \begin{equation}\label{LinearEqn}\partial_t f = R(f).\end{equation} For this simple linear equation, it is easy to show that $L^\infty$ data can immediately develop a logarithmic singularity. Let us mention two ways to quantify this logarithmic singularity. One way is to study the growth of $L^p$ norms as $p\rightarrow\infty$.
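Before turning to the $L^p$ growth, let us record a symbolic check of the change-of-coordinates observation in the remark above: rotating the frequency plane by $45^\circ$ carries the symbol of $R_{12}$ into half the symbol of $R_{11}-R_{22}$. A small sympy verification (the variable names and the script are ours):

```python
import sympy as sp

p, q = sp.symbols("p q", real=True)
# frequencies after a 45-degree rotation: (k1, k2) = ((p - q)/sqrt(2), (p + q)/sqrt(2))
k1 = (p - q) / sp.sqrt(2)
k2 = (p + q) / sp.sqrt(2)

sym_R12 = k1 * k2 / (k1**2 + k2**2)                 # symbol of R_12 in the original frame
sym_R11_minus_R22 = (p**2 - q**2) / (p**2 + q**2)   # symbol of R_11 - R_22 in the rotated frame
diff = sp.simplify(sym_R12 - sp.Rational(1, 2) * sym_R11_minus_R22)  # 0
```

Since $k_1k_2=(p^2-q^2)/2$ and $|k|^2=p^2+q^2$, the two symbols agree up to the factor $\frac12$, which is the constant multiple referred to in the remark.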
For the linear equation \eqref{LinearEqn}, it is easy to show that the upper bound: \[|f(t)|_{L^p}\leq \exp(Ct) p |f_0|_{L^p}\] is sharp in the sense that we can find localized $L^\infty$ data for which the solution satisfies \[|f(t)|_{L^p}\geq c(t)\cdot p.\] This can be viewed as approximating $L^\infty$ ``from below.'' Similarly, the $C^\alpha$ bound for \eqref{LinearEqn}, \[|f(t)|_{C^\alpha}\leq \frac{\exp(Ct)}{\alpha} |f_0|_{C^\alpha}\] can also be shown to be sharp for short time in that we can find for each $\alpha>0$ smooth and localized data with $|f_0|_{C^\alpha}=1$ for which \[|f(t)|_{L^\infty}\geq \frac{c(t)}{\alpha}.\] The main result of \cite{EM} was that these upper and lower bounds remain unchanged in the presence of a transport term by a Lipschitz continuous velocity field. This is not directly applicable to our setting since the coupling between $\omega$ and $u$ is such that $u$ may not be Lipschitz even if $\omega$ is bounded. Interestingly, in \cite{EsharpLp}, it was shown that this growth could be significantly stronger in the presence of a merely bounded velocity field. All of the above discussion leads us to understand that the nature of the well/ill-posedness of \eqref{EulerR} will depend on the precise relationship between the velocity field and the linear forcing term in \eqref{EulerR}. In particular, for a natural class of data, we construct solutions to \eqref{EulerR} satisfying \[|\omega|_{L^\infty} \approx 1+ \log(1+\frac{t}{\alpha}),\] for short time, which is the best growth rate possible in this setting. This should be contrasted with the corresponding growth rate for the linear problem \[|\omega_{lin}|_{L^\infty}\approx 1+ \frac{t}{\alpha}.\] In particular, the nonlinear term in \eqref{EulerR} actually tries to \emph{prevent} $L^\infty$ growth. Let us finally remark that the weak growth rate we found is consistent with the vorticity trying to develop a $\log\log$ singularity.
It is curious that, in the Euler equation, vorticity data with a nearly $\log\log$ singularity are perfectly well-behaved and consistent with global regularity, but with a triple exponential upper bound on gradients. Though establishing the global regularity rigorously remains a major open problem, this appears to be a sign that perhaps smooth solutions to \eqref{EulerRieszM} are globally regular. \subsection{A short discussion of the proof} The first step of the proof is to use the Biot-Savart law decomposition by the first author \cite{E} to derive a leading order model: \[\partial_t \Omega + \frac{1}{2\alpha}(L_s(\Omega)\sin(2\theta)+L_c(\Omega)\cos(2\theta))\partial_\theta\Omega= \frac{1}{2\alpha}L_s(\Omega),\] where the operators $L_s$ and $L_c$ are bounded linear operators on $L^2$ defined by $$ L_s(f)(R)= \frac{1}{\pi}\int_{R}^{\infty} \int_0^{2\pi} \frac{f(s,\theta)}{s} \sin(2\theta) \, d\theta \, ds \quad \text{and} \quad L_c(f)(R)= \frac{1}{\pi}\int_{R}^{\infty} \int_0^{2\pi} \frac{f(s,\theta)}{s} \cos(2\theta) \, d\theta \, ds. $$ Essentially all we do here is replace the velocity field by its most singular part. Upon inspecting this model, we observe that the forcing term on the right-hand side is purely radial while the direction of transport is angular. Upon choosing a suitable unknown, we thus reduce the problem to solving a transport equation for some unknown $f$: \[\partial_t f + \frac{1}{2\alpha} L_s(f)\sin(2\theta)\partial_\theta f=0.\] Surprisingly, this reduced equation propagates the usual ``odd-odd'' symmetry even though the original system does not. The leading order model will then be strongly ill-posed if we can ensure that the solution of this transport equation is such that $\int_0^t L_s(f)$ can be made arbitrarily large. One subtlety is that the growth of $L_s(f)$ enhances the transport effect, which in turn depletes the growth of $L_s(f)$. In fact, were the transport term to be stronger even by a log, the problem would \emph{not} be strongly ill-posed.
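This competition can be previewed on a scalar model: the approximate operator introduced later in Lemma \ref{ApprOp} satisfies the Riccati equation $\partial_t \hat{L} = -\frac{1}{2\alpha}\hat{L}^2$, whose explicit solution $L_0/(1+\frac{t}{2\alpha}L_0)$ integrates in time to exactly the logarithmic growth $2\alpha\log(1+\frac{t}{2\alpha}L_0)$. A short numerical check of this closed form (the RK4 integrator and the sample values of $\alpha$, $L_0$, $T$ are ours):

```python
import math

def rk4_system(f, y0, t0, t1, n):
    """Integrate the system y' = f(t, y) (y a list) with n classical RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h * ki / 2 for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h * ki / 2 for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h * (a + 2 * b + 2 * c + d) / 6
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

alpha, L0, T = 0.1, 1.0, 2.0
# y = (L, I) with L' = -L^2/(2 alpha) and I' = L, so I(t) = int_0^t L
L_num, I_num = rk4_system(lambda t, y: [-y[0] ** 2 / (2 * alpha), y[0]],
                          [L0, 0.0], 0.0, T, 4000)

L_exact = L0 / (1 + T * L0 / (2 * alpha))            # explicit Riccati solution
I_exact = 2 * alpha * math.log(1 + T * L0 / (2 * alpha))  # its time integral
```

The numerically integrated $L$ and $\int_0^T L$ agree with the closed forms to high accuracy, illustrating why the accumulated forcing grows only logarithmically in $t/\alpha$.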
By a careful study of the characteristics of this equation, we obtain a closed non-linear integro-differential equation governing the evolution of $L_s(f)$ (see equation \eqref{Lf}). We study this non-linear integro-differential equation and establish upper and lower bounds on $L_s(f)$, proving strong ill-posedness for the leading order equation; see section \ref{LOME} for more details. Finally, we close the argument by estimating the error incurred by approximating the dynamics with the leading order model. An important idea here is to work on a time scale long enough to see the growth from the leading order model but short enough to suppress any potential stronger non-linear growth; see section \ref{ReminderEstS} for more details. \subsection{Organization} This paper is organized as follows: In section \ref{LOM}, we derive a leading order model for the Euler equations with Riesz forcing \eqref{EulerR} based on the first author's Biot-Savart law approximation~\cite{E}. Then, in section \ref{LOME}, we obtain a pointwise estimate on the leading order model which is the main ingredient in obtaining the strong ill-posedness result for the Euler with Riesz forcing system. In addition, in section \ref{LOME}, we also obtain some estimates on the leading order model in suitable norms which will then be used in estimating the remainder term in section \ref{ReminderEstS}. After that, in section \ref{ElgindiEllipticEst} we will recall the first author's Biot-Savart law decomposition obtained in~\cite{E}, and we will include a short sketch of the proof. In section \ref{EmbeddEst}, we will obtain some embedding estimates which will also be used in section \ref{ReminderEstS} for the remainder term estimates. Then, in section \ref{ReminderEstS}, we show that the remainder term remains small, which will then allow us to prove the main result in section \ref{Main}. \subsection{Notation} In this paper, we will be working in a form of polar coordinates introduced in~\cite{E}.
Let $r$ be the radial variable: $$ r=\sqrt{x^2+y^2} $$ and since we will be working with functions of the variable $r^{\alpha}$, where $0<\alpha<1$, we will use $R$ to denote it: $$ R=r^{\alpha}$$ We will use $\theta$ to denote the angle variable: $$ \theta=\arctan{\frac{y}{x}} $$ We will use $|f|_{L^{\infty}}$ and $|f|_{L^2}$ to denote the usual $L^{\infty}$ and $L^{2}$ norms, respectively. In addition, we will use subscripts, as in $f_t$ or $f_{\tau}$, to denote evaluation at time $t$ or $\tau$. Further, in this paper, following~\cite{E}, we will be working on $(R, \theta) \in [0,\infty) \times [0, \frac{\pi}{2}]$ where the $L^2$ norm will be with measure $dR \, d\theta$ and not $R\, dR \, d\theta$. We define the weighted ${\mathcal H}^{k}( [0,\infty) \times [0, \frac{\pi}{2}])$ norm as follows: $$ |f|_{\dot{{\mathcal H}}^m}= \sum_{i=0}^m |\dR^i \dth^{m-i} f|_{L^2}+ \sum_{i=1}^m |R^i \dR^i \dth^{m-i} f|_{L^2} $$ $$ |f|_{{\mathcal H}^k}= \sum_{m=0}^{k} |f|_{\dot{{\mathcal H}}^m} $$ We also define the ${\mathcal W}^{k,\infty}$ norm as follows: $$ |f|_{\dot{{\mathcal W}}^{m,\infty}}= \sum_{i=0}^m |\dR^i \dth^{m-i} f|_{L^{\infty}}+ \sum_{i=1}^m |R^i \dR^i \dth^{m-i} f|_{L^{\infty}} $$ $$ |f|_{{\mathcal W}^{k,\infty}}= \sum_{m=0}^{k} |f|_{\dot{{\mathcal W}}^{m,\infty}} $$ Throughout this paper, we will use the following notation for the operators: $$ L(f)(R)=\int_R^{\infty}\frac{f(s)}{s} \, ds $$ and by adding a subscript, as in $L_s$ or $L_c$, we denote the projection onto $\sin(2 \theta)$ and $\cos(2 \theta)$, respectively.
Namely, $$ L_s(f)(R)= \frac{1}{\pi}\int_{R}^{\infty} \int_0^{2\pi} \frac{f(s,\theta)}{s} \sin(2\theta) \, d\theta \, ds \quad \text{and} \quad L_c(f)(R)= \frac{1}{\pi} \int_{R}^{\infty} \int_0^{2\pi} \frac{f(s,\theta)}{s} \cos(2\theta) \, d\theta \, ds $$ \section{Leading Order Model}\label{LOM} In this section, we will derive a leading order model for the Euler equation with Riesz forcing: \begin{equation}\label{EulerRiesz} \begin{split} \partial_t \omega &+ u\cdot\nabla \omega= \partial_x u_1 \\ &\nabla \cdot u=0 \\ &u=\nabla^\perp \Delta^{-1} \omega\\ \end{split} \end{equation} To do this, we follow~\cite{E} and we write the equation in a form of polar coordinates. Namely, we set $r=\sqrt{x^2+y^2}$, $R=r^\alpha$, and $\theta= \arctan{\frac{y}{x}}$. We will then rewrite equation \eqref{EulerRiesz} in terms of the new functions $ \omega(x,y)=\Omega(R,\theta)$ and $ \psi(x,y)= r^2 \Psi(R,\theta)$ with $u=\nabla^\perp \psi $, where $ u_1=- \dy \psi $, and $u_2= \dx \psi $. \textbf{Equations of $u$ in terms of $\Psi$} $$ u_1=-r(2 \sin(\theta) \Psi+ \alpha \sin(\theta )R \, \dR \Psi +\cos(\theta) \dth \Psi) $$ $$ u_2=r(2 \cos(\theta) \Psi+ \alpha \cos(\theta )R \, \dR \Psi -\sin(\theta) \dth \Psi) $$ \textbf{Evolution Equation for $ \Omega$} \begin{equation*} \begin{split} \partial_t{\Omega} + \Big( -\alpha R \dth \Psi \Big)\dR \Omega + \Big( 2 \Psi + \alpha R \dR \Psi \Big) \dth \Omega &= \big(-2 \alpha R \sin(\theta) \cos(\theta) -\alpha^2 R \sin(\theta) \cos(\theta) \big) \dR \Psi \\ &+\big( -1+2 \sin^2(\theta) \big) \dth \Psi +\big(- \alpha R \cos^2(\theta) + \alpha R \sin^2(\theta)\big) \dRth\Psi \\ &- \big(\alpha^2R^2 \sin(\theta) \cos(\theta) \big)\dRR \Psi + \big( \sin(\theta) \cos(\theta) \big)\dthth\Psi \\ \end{split} \end{equation*} \textbf{The elliptic equation for $\Delta( r^2 \Psi(R,\theta)) =\Omega(R,\theta)$} $$ 4 \Psi + \alpha^2 R^2 \dRR \Psi + \dthth \Psi +(4 \alpha +\alpha^2) R \dR \Psi=\Omega(R,\theta) $$ Now using the first author's Biot-Savart
decomposition \cite{E} (see section \ref{ElgindiEllipticEst} for more details), by defining the operators $$ L_s(\Omega)(R)=\frac{1}{\pi} \int_{R}^{\infty} \int_0^{2\pi} \frac{\Omega(s,\theta)}{s} \sin(2\theta) \, d\theta \, ds \quad \text{and} \quad L_c(\Omega)(R)=\frac{1}{\pi} \int_{R}^{\infty} \int_0^{2\pi} \frac{\Omega(s,\theta)}{s} \cos(2\theta) \, d\theta \, ds $$ we have $$ \Psi(R,\theta)=-\frac{1}{4 \alpha} L_s(\Omega) \sin(2 \theta)- \frac{1}{4 \alpha} L_c(\Omega) \cos(2 \theta)+ \text{lower order terms} $$ Thus, if we ignore the $\alpha$ terms in the evolution equation, we obtain \begin{equation} \label{FundEq} \begin{split} \partial_t{\Omega} + \Big( 2 \Psi \Big) \dth \Omega &= \Big( -1+2 \sin^2(\theta) \Big) \dth \Psi +\Big( \sin(\theta) \cos(\theta) \Big)\dthth\Psi \\ \end{split} \end{equation} Now we consider $\Psi$ of the form $$ \Psi= -\frac{1}{4 \alpha} L_s(\Omega) \sin(2 \theta)- \frac{1}{4 \alpha} L_c(\Omega) \cos(2 \theta) $$ Plugging this into the evolution equation, we have \begin{equation*} \begin{split} \partial_t{\Omega} - \Big( \frac{1}{2 \alpha} L_s(\Omega)\sin(2\theta)+\frac{1}{2 \alpha} L_c(\Omega) \cos(2\theta) \Big) \dth \Omega = &-\big( \cos(2\theta)\big) \big(- \frac{1}{2 \alpha} L_s(\Omega) \cos(2\theta) +\frac{1}{2 \alpha} L_c(\Omega) \sin(2\theta) \big) \\ &+ \big( \frac{1}{2}\sin(2 \theta) \big) \big( \frac{1}{ \alpha} L_s(\Omega) \sin(2\theta)+\frac{1}{ \alpha} L_c(\Omega) \cos(2\theta) \big) \\ \end{split} \end{equation*} which simplifies to \[\partial_t{\Omega} - \Big( \frac{1}{2 \alpha} L_s(\Omega)\sin(2\theta)+\frac{1}{2 \alpha} L_c(\Omega) \cos(2\theta) \Big) \dth \Omega =\frac{1}{2 \alpha} L_s(\Omega) \] In order to work with positive solutions and have the angular trajectories moving to the right, we make the change $\Omega\rightarrow-\Omega$ and get the final model: \begin{equation}\label{OmegaModal} \begin{split} \partial_t{\Omega} + \Big( \frac{1}{2 \alpha} L_s(\Omega)\sin(2\theta)+\frac{1}{2 \alpha} L_c(\Omega)
\cos(2\theta) \Big) \dth \Omega &=\frac{1}{2 \alpha} L_s(\Omega).\\ \end{split} \end{equation} \noindent We now move to study the dynamics of solutions to \eqref{OmegaModal}. \begin{proposition} \label{OmegaModalPStat}Let $\Omega$ be a solution to the leading order model \begin{equation}\label{OmegaModalP} \begin{split} \partial_t{\Omega} + \Big( \frac{1}{2 \alpha} L_s(\Omega)\sin(2\theta)+\frac{1}{2 \alpha} L_c(\Omega) \cos(2\theta) \Big) \dth \Omega &=\frac{1}{2 \alpha} L_s(\Omega)\\ \end{split} \end{equation} with initial data of the form $\Omega|_{t=0}=f_0 (R) \sin(2 \theta)$. Then we can write $\Omega$ as follows: \begin{equation} \Omega= f + \frac{1}{2 \alpha} \int_0^t L_s(f_\tau) d \tau \end{equation} where $f$ satisfies the following transport equation: \begin{equation} \begin{split} \partial_t{f} + \frac{1}{2 \alpha} \sin(2\theta) L_s(f) \dth f &=0\\ \end{split} \end{equation} \end{proposition} \begin{proof} The right-hand side of \eqref{OmegaModalP} is radial, and hence if we take the inner product with $\sin(2\theta)$ it will be zero. Now if we write $\Omega$ as: $$ \Omega_t(R,\theta)= f_t (R,\theta)+ \frac{1}{2 \alpha} \int_0^t L_s(\Omega_\tau)(R) d \tau$$ and consider it to be a solution to $\eqref{OmegaModalP}$, we obtain that $f$ satisfies the following: \begin{equation} \label{fModel} \begin{split} \partial_t{f_t} + \Big( \frac{1}{2 \alpha} L_s(f_t)\sin(2\theta)+\frac{1}{2 \alpha} L_c(f_t) \cos(2\theta) \Big) \dth f_t &=0\\ \end{split} \end{equation} Here we used that $L_s(\Omega_\tau)(R)$ is a radial function. Notice that $\eqref{fModel}$ is a transport equation that preserves odd symmetry. Now if we set: $$f_t^s=\int_0^{2\pi} f_t(R,\theta) \sin(2\theta) d\theta \quad \text{and} \quad \Omega_t^s=\int_0^{2\pi} \Omega_t(R,\theta) \sin(2\theta) \, d \theta, $$ we notice that $f_t^s$ and $\Omega_t^s$ will satisfy the same equation.
Thus, if we start with the same initial conditions $f_0=\Omega_0$, then $$f_t^s=\Omega_t^s \quad \text{for all } t $$ It follows that $L_s(\Omega_t)=L_s(f_t)$, and hence $$ \Omega_t= f_t + \frac{1}{2 \alpha} \int_0^t L_s(f_\tau) d \tau $$ Now since the initial data which we are considering have odd symmetry, it suffices to consider the following transport equation: \begin{equation} \label{fModel2} \begin{split} \partial_t{f_t} + \frac{1}{2 \alpha} \sin(2\theta) L_s(f_t) \dth f_t &=0\\ \end{split} \end{equation} \end{proof} \section{Leading Order Model Estimate}\label{LOME} The purpose of this section is to obtain $L^{\infty}$ estimates for the leading order model, which is the main ingredient in obtaining the ill-posedness result for the Euler with Riesz forcing system. This will be done in subsection \ref{PLOME} in three steps: Lemma \ref{LeadinIneq}, Lemma \ref{ApprOp}, and Proposition \ref{LeadingEst}. Then in subsection \ref{ELOMnorms}, we will obtain some estimates for the leading order model which will be useful in the remainder estimates in section \ref{ReminderEstS}. \subsection{Pointwise Leading Order Model Estimate}\label{PLOME} \begin{lemma} \label{LeadinIneq} Let $f$ be a solution to the following transport equation: \begin{equation} \label{fM} \begin{split} \partial_t{f} + \frac{1}{2 \alpha} \sin(2\theta) L_s(f) \dth f &=0\\ \end{split} \end{equation} with initial data $f|_{t=0}=f_{0}(R) \sin(2 \theta)$. Then we have the following estimate on the operator $L_s(f)$: \begin{equation} \label{ULbound} c_1 \int_{R}^{\infty} \frac{f_0(s)}{s} \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau) \, d s \leq L_{s}(f_t)(R) \leq c_2 \int_{R}^{\infty} \frac{f_0(s)}{s} \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau) \, d s \end{equation} where $c_1$ and $c_2$ are independent of $\alpha$. \end{lemma} \begin{proof} To prove this, we consider the following variable change.
For $ \theta \in [0, \frac{\pi}{2})$, let $\gamma$ be defined as follows $$ \gamma:= \tan(\theta) \implies \frac{d \gamma}{d \theta}= \sec^2(\theta), \, \,\, \text{and} \, \, \sin(2\theta)= \frac{2\gamma}{1+\gamma^2} $$ Applying the chain rule, we rewrite $\eqref{fM}$ in the $(R,\gamma)$ variables: \begin{equation} \label{ModelGamma} \dt f_t + \frac{1}{\alpha}\gamma \, L_{s}(f_t)(R) \, \dg f=0 \end{equation} with initial data $$ f|_{t=0}=f_{0}(R) \sin(2\theta)=f_{0}(R) \frac{2\gamma}{1+\gamma^2} $$ Let $\phi_t(\gamma)$ be the flow map associated with $\eqref{ModelGamma}$, so we have $$ \frac{d \phi_t(\gamma)}{ dt}= \frac{1}{\alpha} \phi_t(\gamma) L_s(f_t) \implies \phi_t(\gamma)= \gamma \exp(\frac{1}{\alpha} \int_{0}^t L_s(f_\tau) \, d\tau) $$ Thus, $$ \phi^{-1}_t(\gamma)= \gamma \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau) \, d\tau) $$ Hence, we now write the solution to $\eqref{ModelGamma}$ as follows: $$ f_t(R,\gamma)=f_0(R, \phi^{-1}_t(\gamma))=f_{0}(R) \frac{2 \phi^{-1}_t(\gamma)}{1+ \phi^{-1}_t(\gamma)^2}=f_{0}(R) \, \frac{2 \, \gamma \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau) \, d\tau)}{1+ \gamma^2 \exp(-\frac{2}{\alpha} \int_{0}^t L_s(f_\tau) \, d\tau)} $$ Now we consider the operator $L_s$ in the $(R,\gamma) \in [0,\infty)\times [0,\infty)$ variables: $$ L_{s}(f_t)(R)=\frac{1}{\pi} \int_{R}^{\infty} \frac{1}{s} \int_{0}^{\infty} f_t(s,\gamma) \, \frac{ 2 \gamma}{(1+ \gamma^2)^2} \, d \gamma \, d s $$ Plugging in the expression for $f_t$, we have \begin{equation}\label{Lf} L_{s}(f_t)(R)=\frac{1}{\pi} \int_{R}^{\infty} \frac{1}{s} \int_{0}^{\infty} \ f_{0}(s)\, \frac{ \, \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau)}{1+ \gamma^2 \exp(-\frac{2}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau)} \, \frac{ 4 \gamma^2}{(1+ \gamma^2)^2} \, d \gamma \, ds \end{equation} Now since $0 \leq \exp(-\frac{2}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau) \leq 1$, we obtain upper and lower bounds on $L_{s}(f_t)(R)$ with constants $c_1,c_2$
independent of $\alpha$. In fact, these constants can be computed explicitly: bounding the factor $\frac{1}{1+ \gamma^2 \exp(-\frac{2}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau)}$ below by $\frac{1}{1+\gamma^2}$ and above by $1$, and using $\int_{0}^{\infty} \frac{4 \gamma^2}{(1+\gamma^2)^3} \, d\gamma=\frac{\pi}{4}$ and $\int_{0}^{\infty} \frac{4 \gamma^2}{(1+\gamma^2)^2} \, d\gamma=\pi$, one may take $c_1=\frac{1}{4}$ and $c_2=1$. Namely, \begin{equation*} c_1 \int_{R}^{\infty} \frac{ f_{0}(s)}{s} \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau) \, d s \leq L_{s}(f_t)(R) \leq c_2 \int_{R}^{\infty} \frac{ f_{0}(s)}{s} \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau) \, d s \end{equation*} Thus, we have our desired inequalities. \end{proof} \begin{lemma} \label{ApprOp} Define the operator \begin{equation} \label{ApxL} \hat{L}(f_t)(R):= \int_{R}^{\infty} \frac{f_{0}(s)}{s } \exp(-\frac{1}{\alpha}\int_{0}^t \hat{L}(f_\tau)(s) \, d\tau) \, ds \end{equation} Then we have $$ \int_{0}^t \hat{L}(f_\tau)(R) \, d \tau= 2 \alpha \log(1+ \frac{t}{2 \alpha } \, L(f_0)(R)) $$ where $L(f_0)(R)=\int_R^{\infty} \frac{f_0(s)}{s} \,ds $ \end{lemma} \begin{proof} We introduce $g_t(R):= \exp(-\frac{1}{\alpha}\int_{0}^t \hat{L}(f_\tau)(R) \, d\tau) \,$ and $K(R):=\frac{f_0(R)}{R}$, then the operator $\hat{L}$ can be rewritten as: \begin{equation}\label{Lkg} \hat{L}(f_t)(R)=\int_{R}^{\infty} K(s) g_t(s) \, ds \end{equation} Now taking the time derivative of $\eqref{Lkg}$, and using that $\dt g_t(R)= -\frac{1}{\alpha} g_t(R) \, \int_{R}^{\infty} K(s) g_t(s) \, ds$ together with $\partial_s \hat{L}(f_t)(s)=-K(s) g_t(s)$, we can obtain: $$ \dt \hat{L}(f_t)= -\frac{1}{2 \alpha} (\hat{L}(f_t))^2 $$ which can be solved explicitly: \begin{equation} \label{Lexplicit} \hat{L}(f_t)(R)= \frac{L(f_0)(R)}{1+\frac{t}{2 \alpha}\, L(f_0)(R)} \end{equation} and then it follows that $$ \int_{0}^t \hat{L}(f_\tau)(R)\, d \tau= 2 \alpha \log(1+\frac{t}{2 \alpha}\, L(f_0)(R)) $$ \end{proof} \begin{proposition}\label{LeadingEst} Let $f$ be a solution to the following transport equation: \begin{equation} \begin{split} \partial_t{f} + \frac{1}{2 \alpha} \sin(2\theta) L_s(f) \dth f &=0\\ \end{split} \end{equation} with initial data $f|_{t=0}=f_{0}(R) \sin(2 \theta)$. Then we have the following estimate on the operator $L_s(f)$: \begin{equation} \label{ULboundT} \frac{ 2\alpha}{c_1} \log(1+\frac{c_1}{ 2\alpha}t\, L(f_0)(R))
\geq \int_{0}^t L_{s}(f_\tau)(R) \, d\tau \geq \frac{ 2\alpha}{c_2} \log(1+\frac{c_2}{ 2\alpha}t\, L(f_0)(R)) \end{equation} where $c_1$ and $c_2$ are independent of $\alpha$. \end{proposition} \begin{proof} In this proof, we will use the bounds in $\eqref{ULbound}$, namely \begin{equation}\label{ULbound2} c_1 \int_{R}^{\infty} \frac{f_0(s)}{s} \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau) \, d s \leq L_{s}(f_t)(R) \leq c_2 \int_{R}^{\infty} \frac{f_0(s)}{s}\exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau) \, d s, \end{equation} to obtain upper and lower estimates on $\int_0^t L_{s}(f_\tau)$. As before we set: $$g_t(R)=\exp(-\frac{1}{\alpha}\int_{0}^t L_{s}(f_\tau)(R) \, d\tau)\quad \text{and} \quad K(R)=\frac{f_0(R)}{R}$$ Using $\eqref{ULbound2}$ we can obtain that \begin{equation}\label{ULbound5} -\frac{c_1}{2 \alpha} \bigg( \int_{R}^{\infty}g_t(s) K(s) \, ds \bigg)^2 \geq \dt \int_{R}^{\infty} g_t(s) K(s) \, ds \geq - \frac{c_2}{2 \alpha} \bigg( \int_{R}^{\infty}g_t(s) K(s) \, ds \bigg)^2 \end{equation} Similar to Lemma \ref{ApprOp}, we define $$ G_t(R):= \int_{R}^{\infty} g_t(s) K(s) \, ds $$ Now from $\eqref{ULbound5}$, we have $$ - \frac{c_1}{2 \alpha} (G_t(R))^2 \geq \dt G_t(R) \geq - \frac{c_2}{2 \alpha} (G_t(R))^2 $$ Thus, \begin{equation}\label{ULWbound} \frac{L(f_0)(R)}{1+ \frac{c_1}{2 \alpha} \, t\, L(f_0)(R)} \geq G_t(R) \geq \frac{L(f_0)(R)}{1+ \frac{c_2}{2 \alpha} \, t\, L(f_0)(R)} \end{equation} Since $\eqref{ULbound2}$ gives $c_1 G_t(R) \leq L_{s}(f_t)(R) \leq c_2 G_t(R)$, integrating $\eqref{ULWbound}$ in time yields, after adjusting the constants $c_1$ and $c_2$: $$ \frac{ 2\alpha}{c_1} \log(1+\frac{c_1}{ 2\alpha}t\, L(f_0)(R)) \geq \int_{0}^t L_{s}(f_\tau)(R) \, d\tau \geq \frac{ 2\alpha}{c_2} \log(1+\frac{c_2}{ 2\alpha}t\, L(f_0)(R)) $$ and this completes the proof. \end{proof} \subsection{Estimates for the leading order model in ${\mathcal W}^{k,\infty}$ and ${\mathcal H}^k$ norms} \label{ELOMnorms} The purpose of this subsection is to obtain some estimates on the leading order model in ${\mathcal W}^{k,\infty}$ and ${\mathcal H}^k$ norms.
These will be used to estimate the size of the remainder term in section \ref{ReminderEstS}. First we will obtain estimates on $\Psi_2$ in Lemma \ref{LPsi2Est}. Then in Lemma \ref{LOmega2Est}, we will obtain estimates on $\Omega_2$. \begin{lemma}\label{LPsi2Est} Let $\Omega_2$ be a solution to the leading order model: \begin{equation*} \begin{split} \partial_t{\Omega_2} + \Big( \frac{1}{2 \alpha} L_s(\Omega_2)\sin(2\theta)+\frac{1}{2 \alpha} L_c(\Omega_2) \cos(2\theta) \Big) \dth \Omega_2 &=\frac{1}{2 \alpha} L_s(\Omega_2)\\ \end{split} \end{equation*} with initial data $\Omega_2|_{t=0}=f_{0}(R) \, \sin(2\theta)$, where $f_{0}(R)$ is smooth and compactly supported. Consider $$ \Psi_2= \frac{1}{4 \alpha} L_s(\Omega_2) \sin(2 \theta)+ \frac{1}{4 \alpha} L_c(\Omega_2) \cos(2 \theta) $$ Then, we have the following estimates on $\Psi_2$: \begin{equation} \label{Psi2Est} |\Psi_2|_{{\mathcal W}^{k+1,\infty}}\leq \frac{c_{k}}{ \alpha}, \quad |\Psi_2|_{{\mathcal H}^{k+1}}\leq \frac{c_{k}}{ \alpha} \end{equation} where $c_{k}$ depends on the initial conditions and is independent of $\alpha$. \end{lemma} \begin{proof} Recall that from Proposition \ref{OmegaModalPStat}, we can write $\Omega_2$ as follows: $$ \Omega_2= f + \frac{1}{2 \alpha} \int_{0}^t L_s(f_{\tau}) \, d\tau , $$ and since the initial data is odd in $\theta$, we have $$\Psi_2=\frac{1}{4 \alpha}L_s(\Omega_2) \sin(2 \theta)=\frac{1}{4 \alpha}L_s(f_t) \sin(2 \theta)$$ To estimate the size of $\Psi_2$, from \eqref{Lf}, we have $$ L_{s}(f_t)(R)=\int_{R}^{\infty} \frac{1}{s} \int_{0}^{\infty} \ f_{0}(s)\, \frac{ \, \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau)}{1+ \gamma^2 \exp(-\frac{2}{\alpha} \int_{0}^t L_s(f_\tau)(s) \, d\tau)} \, \frac{ 4 \gamma^2}{(1+ \gamma^2)^2} \, d \gamma \, ds $$ Using \eqref{ULbound}, we have $$ | \Psi_2|_{L^{\infty}} \leq \frac{c}{\alpha} \int_{R}^{\infty} \frac{\ f_{0}(s)}{s} \, ds \leq \frac{c_0}{\alpha} $$ For $\dth \Psi_2 $, it is clear that we have $$ |\dth \Psi_2
|_{L^{\infty}} \leq \frac{c_0}{\alpha} $$ where, similarly, $c_0$ depends on the initial condition. Now for $\dR \Psi_2$, we have $$ \dR \Psi_2=\frac{1}{4 \alpha} \dR L_s(f_t) \sin(2 \theta) $$ where $$ \dR L_{s}(f_t)(R)= - \frac{1}{R} \int_{0}^{\infty} \ f_{0}(R)\, \frac{ \, \exp(-\frac{1}{\alpha} \int_{0}^t L_s(f_\tau)(R) \, d\tau)}{1+ \gamma^2 \exp(-\frac{2}{\alpha} \int_{0}^t L_s(f_\tau)(R) \, d\tau)} \, \frac{ 4 \gamma^2}{(1+ \gamma^2)^2} \, d \gamma \, $$ and similarly, we have $$| \, \dR \Psi_2 |_{L^{\infty}} \leq \frac{c}{ \alpha}$$ Now the estimate on $R \, \partial_{R } \Psi_2$ follows from the estimate on $\dR \Psi_2$ and the fact that the initial data have compact support. Thus, $$| R \, \dR \Psi_2 |_{L^{\infty}} \leq \frac{c}{ \alpha}$$ For higher order derivatives, we can obtain the estimates by following the same steps. Hence, we have \begin{equation*} |\Psi_2|_{{\mathcal W}^{k+1,\infty}}\leq \frac{c_{k}}{ \alpha} \end{equation*} The ${\mathcal H}^{k}$ estimates also follow using the same steps: \begin{equation*} |\Psi_2|_{{\mathcal H}^{k+1}}\leq \frac{c_{k}}{ \alpha} \end{equation*} \end{proof} In the following lemma, we will obtain the ${\mathcal H}^{k}$ estimates on $\Omega_2$. Here we will use Lemma \ref{LPsi2Est} and transport estimates. \begin{lemma}\label{LOmega2Est} Let $\Omega_2$ be a solution to the leading order model: \begin{equation*} \begin{split} \partial_t{\Omega_2} + \Big( \frac{1}{2 \alpha} L_s(\Omega_2)\sin(2\theta)+\frac{1}{2 \alpha} L_c(\Omega_2) \cos(2\theta) \Big) \dth \Omega_2 &=\frac{1}{2 \alpha} L_s(\Omega_2)\\ \end{split} \end{equation*} with initial data $\Omega_2|_{t=0}=f_{0}(R) \, \sin(2\theta)$, where $f_{0}(R)$ is smooth and compactly supported.
Then, we have the following estimates on $\Omega_2$: \begin{equation} \label{Omega2Est} |\Omega_2|_{{\mathcal H}^{k}} \leq c_k \, e^{ \frac{c_k}{ \alpha}t } \end{equation} where $c_{k}$ depends on the initial conditions and is independent of $\alpha$. \end{lemma} \begin{proof} Recall that from Proposition \ref{OmegaModalPStat} we can write $\Omega_2$ as follows: $$ \Omega_2= f + \frac{1}{2 \alpha} \int_{0}^t L_s(f_{\tau}) \, d\tau , $$ where $f$ satisfies the following transport equation: $$ \partial_t f_t + 2 \Psi_2 \dth f_t=0 $$ When we consider the derivatives of $\Omega_2$, the transport term $f$ dominates the radial term $\frac{1}{2 \alpha} \int_{0}^t L_s(f_\tau) \, d\tau $. Thus, it suffices to consider the ${\mathcal H}^{k}$ estimates on $f$, which will follow from the standard $L^2$ estimate for the transport equation. Since we have $$ \partial_t f_t + 2 \Psi_2 \dth f_t=0 \implies \partial_t \dth f_t + 2 \dth\Psi_2 \dth f_t + 2 \Psi_2 \dthth f_t =0 $$ we obtain $$ |\dth f_t|_{L^2} \leq |\dth f_0|_{L^2} \, e^{\int_0^t |\dth\Psi_2|_{L^{\infty}} \, d\tau} $$ From $\eqref{Psi2Est}$ we have $|\dth \Psi_2 |_{L^{\infty}} \leq \frac{c_0}{ \alpha}$. Thus, applying Gronwall's inequality, we have \begin{equation}\label{dthf} |\dth f_t|_{L^2} \leq |\dth f_0|_{L^2} e^{ \frac{c_0}{ \alpha} t} \end{equation} To obtain ${\mathcal H}^k$ estimates, we need to estimate terms of the form $R^k \dR^k$. We will show how to obtain the $R \dR$ estimate, and for general $k$, it will follow similarly.
Thus, similar to the $L^2$ estimate for the $\dth f$ case, since $$ \partial_t f_t + 2 \Psi_2 \dth f_t=0 $$ we have $$ \partial_t \dR f_t + 2 \dR\Psi_2 \dth f_t + 2 \Psi_2 {\partial}_{R \theta} f_t =0 $$ and thus, $$ \partial_t |R \dR f_t|_{L^2} \leq 2 |R \dR \Psi_2 |_{L^{\infty}} |\dth f_t|_{L^2} + |\dth\Psi_2|_{L^{\infty}} |R\dR f_t|_{L^2} $$ Now from $\eqref{Psi2Est}$, $\eqref{dthf}$, and applying Gronwall's inequality we have \begin{equation*} \begin{split} |R \dR f_t|_{L^2} & \leq \big( |R\dR f_0|_{L^2} + |\dth f_0|_{L^2} e^{ \frac{c_0}{ \alpha}t} \big) e^{ \frac{c_0}{\alpha} t} \end{split} \end{equation*} Hence, $$ |f(t)|_{{\mathcal H}^{1}} \leq |f_0|_{{\mathcal H}^{1}} \, e^{ \frac{c_1}{\alpha} t} $$ which implies that $$| \Omega_2(t)|_{{\mathcal H}^1} \leq \ | \Omega_2(0)|_{{\mathcal H}^1} e^{ \frac{c_1}{ \alpha}t } $$ Similarly, using $ \eqref{Psi2Est}$, the transport estimate, and following the same steps as above, we can obtain the general ${\mathcal H}^k$ estimates. Hence $$| \Omega_2(t)|_{{\mathcal H}^k} \leq \ | \Omega_2(0)|_{{\mathcal H}^k} e^{ \frac{c_k}{ \alpha}t } $$ \end{proof} \section{Elliptic Estimate}\label{ElgindiEllipticEst} The purpose of this section is to recall the first author's Biot-Savart law decomposition~\cite{E}, which is used here to derive the leading order model. In this section, we highlight the main ideas in the proof, and for more details, see~\cite{E} and~\cite{DE}. We remark that this is also related to the Key Lemma of A. Kiselev and V. \v{S}ver\'{a}k; see also the work of the first author \cite{Eremarks}, and the first author and I. Jeong \cite{EJ} for generalizations.
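For orientation, the upshot of this section (made precise in the theorem below) is that when the vorticity is concentrated on the second angular mode, $\Omega(R,\theta)=f(R)\sin(2\theta)$, the stream function is, to leading order in $\alpha$, $$ \Psi(R,\theta)\approx -\frac{1}{4 \alpha} \, L(f)(R) \sin(2 \theta), \qquad L(f)(R)=\int_{R}^{\infty} \frac{f(s)}{s} \, ds, $$ with a remainder that is bounded uniformly in $\alpha$. This factor of $\frac{1}{4\alpha}$ is the source of the $\frac{1}{\alpha}$ coefficients appearing in the leading order model.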
\begin{proposition}\label{EllipticEst}(\cite{E}) Given $\Omega \in H^k$ such that for every $R$ we have $$\int_{0}^{2\pi} \Omega(R,\theta) \sin(n \theta) \, d\theta=\int_{0}^{2\pi} \Omega(R,\theta) \cos(n \theta) \, d\theta= 0$$ for $n=0,1,2$, then the unique solution to $$4 \Psi + \dthth \Psi + \alpha^2 R^2 \dRR \Psi +(4 \alpha +\alpha^2) R \dR \Psi=\Omega(R,\theta)$$ satisfies \begin{equation} \label{EllipticEstMain} |\dthth \Psi|_{H^k} \, + \, \alpha | R \dRth \Psi|_{H^k} \, + \, \alpha^2|R^2 \dRR \Psi|_{H^k} \leq C_{k} |\Omega|_{H^k} \end{equation} where $C_k$ is \textbf{independent} of $\alpha$. In addition, we have the weighted estimate \begin{equation} \label{EllipticEstW} |\dthth D^k_R (\Psi)|_{L^2} \, + \, \alpha | R \dRth D^k_R (\Psi )|_{L^2} \, + \, \alpha^2|R^2 \dRR D^k_R (\Psi)|_{L^2} \leq C_{k} |D^k_R(\Omega)|_{L^2} \end{equation} where $C_k$ is \textbf{independent} of $\alpha$. Recall that $D_R=R\dR$. \end{proposition} \begin{proof} First, we will show how to obtain $\eqref{EllipticEstMain}$. Since $\Omega$ is orthogonal to $ \sin(n \theta)$ and $\cos(n \theta) $ for $n=0,1,2$, $\Psi$ must also be orthogonal to $ \sin(n \theta)$ and $\cos(n \theta) $ for $n=0,1,2$. We begin with the $L^2$ estimate for the elliptic equation:
$$4 \Psi + \dthth \Psi + \alpha^2 R^2 \dRR \Psi +(4 \alpha +\alpha^2) R \dR \Psi=\Omega(R,\theta)$$ Taking the inner product with $\dthth \Psi$ and integrating by parts, we obtain $$-4 |\dth\Psi|^2_{L^2} + |\dthth \Psi|^2_{L^2} - \alpha^2 |\dth \Psi|^2_{L^2} + \alpha^2|R\dRth \Psi|^2_{L^2} + \frac{(4 \alpha +\alpha^2)}{2}|\dth \Psi|^{2}_{L^2} \leq |\Omega|_{L^2} |\dthth \Psi|_{L^2}$$ Now by assumption, we have $$ \Psi(R,\theta) =\sum_{n \geq 3} \Psi_n(R)e^{in \theta} $$ and hence $$ |\dth\Psi|^2_{L^2} \leq \frac{1}{9} |\dthth \Psi|^2_{L^2} $$ Using the above inequality, we can show that $$\frac{5}{9} |\dthth \Psi|^2_{L^2} + \alpha^2|R\dRth \Psi|^2_{L^2} + \frac{(4 \alpha -\alpha^2)}{2}|\dth \Psi|^{2}_{L^2} \leq |\Omega|_{L^2} |\dthth \Psi|_{L^2}$$ and thus we have that $$ |\dthth \Psi|_{L^2} \leq C_0 |\Omega|_{L^2} $$ where $C_0$ is independent of $\alpha$. The estimate for the $R^2 \dRR \Psi$ term will follow similarly. We can also obtain the $H^k$ estimates by following the same strategy. To obtain the $\eqref{EllipticEstW}$ estimates, recall that $D_R=R \dR$ and notice that we can write the elliptic equation in the following form: $$4 \Psi + \dthth \Psi + \alpha^2 D^2_R (\Psi) +4 \alpha \, D_R( \Psi) =\Omega(R,\theta)$$ From this, we observe that the operator $D_R$ commutes with the elliptic operator, and hence the $\eqref{EllipticEstW}$ estimates follow by applying \eqref{EllipticEstMain} to $D^k_R(\Psi)$ and $D^k_R(\Omega)$.
\end{proof} \begin{theorem}(\cite{E}) Given $\Omega \in H^k$ where $\Omega$ has the form $$\Omega(R,\theta)= f(R) \sin(2 \theta) \, \, \, \,\, \Big( \Omega(R,\theta)= f(R) \cos(2 \theta) \Big)$$ then the unique solution to $$4 \Psi + \dthth \Psi + \alpha^2 R^2 \dRR \Psi +(4 \alpha +\alpha^2) R \dR \Psi=\Omega(R,\theta)$$ is $$\Psi=-\frac{1}{4 \alpha} L(f)(R) \sin(2 \theta)+ \mathcal{R}(f) \, \, \, \,\, \, \Big( \Psi=-\frac{1}{4 \alpha} L(f)(R) \cos(2 \theta) + \mathcal{R}(f) \Big)$$ where $$L(f)(R)=\int_{R}^{\infty} \frac{f(s)}{s} \, ds$$ and $$|\mathcal{R}(f)|_{H^k} \leq c |f|_{H^k} $$ where $c$ is independent of $\alpha$. \end{theorem} \begin{proof} Consider the case where $\Omega(R,\theta)= f(R) \sin(2 \theta)$; the case where $\Omega(R,\theta)= f(R) \cos(2 \theta)$ can be handled similarly. In this case $\Psi(R,\theta)$ will be of the form $\Psi(R,\theta)=\Psi_2(R) \sin(2 \theta)$, where $\Psi_2(R)$ satisfies the following ODE: $$ \alpha^2 R^2 \dRR \Psi_2+(4 \alpha+ \alpha^2) R \dR \Psi_2=f(R) $$ Solving the ODE, we obtain $$ \dR \Psi_2(R)=\frac{1}{\alpha^2} \frac{1}{R^{\frac{4}{\alpha}+1}} \int_0^R \frac{f(s)}{s^{1-\frac{4}{\alpha}}} \, ds $$ Now using that $\Psi_2(R) \rightarrow 0$ as $R \rightarrow \infty$, we obtain $$ \Psi_2(R)=-\frac{1}{\alpha^2} \int_{R}^{\infty} \frac{1}{\rho^{\frac{4}{\alpha}+1}} \int_0^{\rho} \frac{f(s)}{s^{1-\frac{4}{\alpha}}} \, ds \, d\rho $$ By integrating by parts, it follows that $$ \Psi_2(R)=-\frac{1}{4 \alpha} \int_R^{\infty}\frac{f(s)}{s} \, ds- \frac{1}{4 \alpha} \frac{1}{R^{\frac{4}{\alpha}}} \int_0^R \frac{f(s)}{s^{1-\frac{4}{\alpha}}} \, ds :=-\frac{1}{4 \alpha}L(f)(R)+ \mathcal{R}(f) $$ Using a Hardy-type inequality, one can show that $$|\mathcal{R}(f)|_{L^2} \leq c |f|_{L^2} $$ where $c$ is independent of $\alpha$. \end{proof} \section{Embedding estimates in terms of the ${\mathcal H}^k$ norm}\label{EmbeddEst} In this section we consider some embedding estimates in the ${\mathcal H}^k$ norm which will be used in section
\ref{ReminderEstS}. These estimates will be used several times as we estimate the remainder term. Recall that the ${\mathcal H}^k$ norm is defined as follows: $$ |f|_{\dot{{\mathcal H}}^m}= \sum_{i=0}^m |\dR^i \dth^{m-i} f|_{L^2}+ \sum_{i=1}^m |R^i \dR^i \dth^{m-i} f|_{L^2} $$ $$ |f|_{{\mathcal H}^k}= \sum_{m=0}^{k} |f|_{\dot{{\mathcal H}}^m} $$ \begin{lemma} \label{OmegaEmbed} Let $\Omega \in {\mathcal H}^N$, where $N \in \mathbb{N}$, then we have \begin{equation} \label{ROmegaLinfty} |R^k \dR^k \dth^m \Omega|_{L^{\infty}} \leq c_{k,m} |\Omega|_{{\mathcal H}^{k+m+2}} \quad \text{for any} \quad k+m+2 \leq N \end{equation} \end{lemma} \begin{proof} Using the Sobolev embedding, we have \begin{equation*} \begin{split} |R^k \dR^k \dth^m \Omega|_{L^{\infty}} &\leq c_{k,m} |R^k \dR^k \dth^m \Omega|_{H^{2}_{R,\theta}} \\ \end{split} \end{equation*} where $H^{2}_{R,\theta} $ is the standard $H^2$ norm in $R$ and $\theta$. When considering the second derivative terms of $R^k \dR^k \dth^m \Omega$, for the angular derivative term, we have $|R^k \dR^k \dth^{m+2} \Omega|_{L^2} \leq |\Omega|_{{\mathcal H}^{k+m+2}}$. Now for the radial derivatives, we have three cases. Considering the case when the two radial derivatives land on $\dR^k \dth^m \Omega$, we have $$|R^{k} \dR^{k+2} \dth^{m} \Omega|_{L^2} \leq |R^{k+2} \dR^{k+2} \dth^{m} \Omega|_{L^2}+| \dR^{k+2} \dth^{m} \Omega|_{L^2} \leq |\Omega|_{{\mathcal H}^{k+m+2}}$$ where the last inequality follows from the definition of the ${\mathcal H}^{N}$ norm. The other two cases follow in a similar way. \end{proof} We will also need some embedding estimates for the stream function $\Psi$ in terms of $\Omega$.
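As a sanity check on Lemma \ref{OmegaEmbed}, note that in the simplest case $k=m=0$ the estimate \eqref{ROmegaLinfty} is just the two-dimensional Sobolev embedding $$ |\Omega|_{L^{\infty}} \leq c \, |\Omega|_{H^{2}_{R,\theta}} \leq c \, |\Omega|_{{\mathcal H}^{2}}, $$ where the second inequality holds because every term of the $H^{2}_{R,\theta}$ norm appears, without weight, in the definition of the ${\mathcal H}^{2}$ norm.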
\begin{lemma}\label{PsiEmbed} Let $\Omega \in {\mathcal H}^N$, where $N \in \mathbb{N}$, satisfying the same conditions as in Proposition \ref{EllipticEst}, then for the solution $\Psi$ of $$4 \Psi + \dthth \Psi + \alpha^2 R^2 \dRR \Psi +(4 \alpha +\alpha^2) R \dR \Psi=\Omega(R,\theta),$$ we have \begin{equation} \label{PsiInfty} |\dR^k \dth^m \Psi|_{L^{\infty}} \leq c_{k,m} |\Omega|_{{\mathcal H}^{k+m+1}} \end{equation} for $k,m \in \mathbb{N}$ with $k+m+1 \leq N$ \end{lemma} \begin{proof} As in Lemma \ref{OmegaEmbed}, applying the Sobolev embedding, we have \begin{equation*} \begin{split} | \dR^k \dth^m \Psi|_{L^{\infty}} &\leq \, c_{k,m} | \dR^k \dth^m \Psi|_{H^{2}_{R,\theta}} \\ \end{split} \end{equation*} From the elliptic estimates in Proposition \ref{EllipticEst}, for any $i,n \in \mathbb{N}$, we have \begin{equation} \label{PsiL2Omega} \begin{split} | \dR^i \dth^{n} \Psi|_{L^2} \leq c_{i,n} \, |\Omega|_{{\mathcal H}^{i+n-1}} \end{split} \end{equation} Thus, to bound $ | \dR^k \dth^m \Psi|_{H^{2}_{R,\theta}}$, we need $\Omega \in {\mathcal H}^{k+m+1}$.
Hence, we have \begin{equation*} |\dR^k \dth^m \Psi|_{L^{\infty}} \leq c_{k,m} |\Omega|_{{\mathcal H}^{k+m+1}} \end{equation*} \end{proof} \begin{lemma}\label{RPsiEmbed} Let $\Omega \in {\mathcal H}^N$, where $N \in \mathbb{N}$, satisfying the same conditions as in Proposition \ref{EllipticEst}, then for the solution $\Psi$ of $$4 \Psi + \dthth \Psi + \alpha^2 R^2 \dRR \Psi +(4 \alpha +\alpha^2) R \dR \Psi=\Omega(R,\theta),$$ we have \begin{equation} \label{RPsiInfty} |R^k \dR^k \dth^m \Psi|_{L^{\infty}} \leq c_{k,m} |\Omega|_{{\mathcal H}^{k+m+1}} \end{equation} for $k,m \in \mathbb{N}$ with $k+m+1 \leq N$ \end{lemma} \begin{proof} As in Lemma \ref{OmegaEmbed}, applying the Sobolev embedding, we have \begin{equation*} \begin{split} |R^k \dR^k \dth^m \Psi|_{L^{\infty}} &\leq \, c_{k,m} |R^k \dR^k \dth^m \Psi|_{H^{2}_{R,\theta}} \\ \end{split} \end{equation*} From the elliptic estimates in Proposition \ref{EllipticEst}, for any $i,n \in \mathbb{N}$, we have \begin{equation} \label{PsiL2} \begin{split} | \dR^i \dth^{n} \Psi|_{L^2} \leq c_{i,n} \, |\dR^{i} \dth^{n-1} \Omega|_{L^2} \leq c_{i,n} \, |\Omega|_{{\mathcal H}^{i+n-1}} \end{split} \end{equation} and \begin{equation} \label{RPsiL2} \begin{split} |R^i \dR^i \dth^{n} \Psi|_{L^2} \leq c_{i,n} \, |\Omega|_{{\mathcal H}^{i+n-1}} \end{split} \end{equation} Thus, if we look at the second derivative terms of $R^k \dR^k \dth^m \Psi$, we can use the above inequalities to obtain the desired estimate. For the angular derivative term, we have $ |R^k \dR^k \dth^{m+2} \Psi|_{L^2} \leq c_{k,m} \, |\Omega|_{{\mathcal H}^{k+m+1}} $. When considering the radial derivative terms, we have three terms. For the $R^{k} \dR^{k+2} \dth^{m} \Psi$ term, applying \eqref{PsiL2} and \eqref{RPsiL2}, we have $$ |R^{k} \dR^{k+2} \dth^{m} \Psi|_{L^2} \leq |R^{k+2} \dR^{k+2} \dth^{m} \Psi|_{L^2}+ | \dR^{k+2} \dth^{m} \Psi|_{L^2} \leq c_{k,m}\, |\Omega|_{{\mathcal H}^{k+m+1}} $$ The other terms can be handled in a similar way.
Hence, we have our desired result. \end{proof} \section{Remainder estimate} \label{ReminderEstS} In this section, we obtain an error estimate on the remaining terms in the Euler with Riesz forcing system. Recall that $\Omega$ satisfies the following evolution equation: \begin{equation*} \begin{split} \partial_t{\Omega} + \Big( -\alpha R \dth \Psi \Big)\dR \Omega + \Big( 2 \Psi + \alpha R \dR \Psi \Big) \dth \Omega &= \big(2 \alpha R \sin(\theta) \cos(\theta) +\alpha^2 R \sin(\theta) \cos(\theta) \big) \dR \Psi \\ &+\big( 1-2 \sin^2(\theta) \big) \dth \Psi +\big( \alpha R \cos^2(\theta) - \alpha R \sin^2(\theta)\big) \dRth\Psi \\ &+ \big(\alpha^2R^2 \sin(\theta) \cos(\theta) \big)\dRR \Psi - \big( \sin(\theta) \cos(\theta) \big)\dthth\Psi \\ \end{split} \end{equation*} and the elliptic equation is the following: $$ 4 \Psi + \alpha^2 R^2 \dRR \Psi + \dthth \Psi +(4 \alpha +\alpha^2) R \dR \Psi=\Omega(R,\theta) $$ From section \ref{LOM}, the leading order model for the Euler with Riesz forcing equation satisfies the following: \begin{equation} \begin{split} \partial_t{\Omega_2} + \big( 2 \Psi_2 \big) \dth \Omega_2 &= \big( -1+2 \sin^2(\theta) \big) \dth \Psi_2 + \big( \sin(\theta) \cos(\theta) \big)\dthth\Psi_2 \\ \end{split} \end{equation} where $$ \Psi_2(R,\theta)= \frac{1}{4 \alpha} L_s(\Omega_2) \sin(2 \theta)+ \frac{1}{4 \alpha} L_c(\Omega_2) \cos(2 \theta) $$ Now set $ \Omega_r =\Omega- \Omega_2$ to be the remainder term for the vorticity, and similarly set $ \Psi_r =\Psi- \Psi_2$ to be the remainder term for the stream function.
Thus, the remainder $\Omega_r$ satisfies the following evolution equation: \begin{equation}\label{ReminderEq} \begin{split} \partial_t{\Omega_r} &+ \Big( -\alpha R (\dth \Psi_2 +\dth \Psi_r ) \Big)(\dR \Omega_2 + \dR \Omega_r)+ \Big( 2 \Psi_2 \dth \Omega_r + 2 \Psi_r \dth \Omega_2 + 2 \Psi_r \dth \Omega_r \Big) \\& + \Big(\alpha R (\dR \Psi_2+ \dR\Psi_r) \Big)( \dth \Omega_2 + \dth \Omega_r)= \big(2 \alpha R \sin(\theta) \cos(\theta) +\alpha^2 R \sin(\theta) \cos(\theta) \big) \big(\dR \Psi_2 + \dR \Psi_r \big) \\ &+ \big( 1-2 \sin^2(\theta) \big) \dth \Psi_r + \alpha\big( R \cos^2(\theta) - R \sin^2(\theta)\big)( \dRth\Psi_2+\dRth\Psi_r) \\ &+ \alpha^2\big(R^2 \sin(\theta) \cos(\theta) \big)(\dRR \Psi_2+ \dRR \Psi_r) - \big( \sin(\theta) \cos(\theta) \big)\dthth\Psi_r \\ \end{split} \end{equation} The goal of this section is to show that $\Omega_r$ remains small. Namely, using energy methods, for some time $T$, we show that $$\sup_{t \leq T}|\Omega_r(t)|_{L^{\infty}} \leq C \alpha^{\frac{1}{2}}$$ for some constant $C$ independent of $\alpha$. We define the following terms to shorten the notation: $$ I_1= -\alpha R(\dth \Psi_2 +\dth \Psi_r ) ( \dR \Omega_2 + \dR \Omega_r) \, , \, I_2= ( 2 \Psi_2 \dth \Omega_r + 2 \Psi_r \dth \Omega_2 + 2 \Psi_r \dth \Omega_r ) \, , \, I_3= \alpha R ( \dR \Psi_2+ \dR\Psi_r) ( \dth \Omega_2 + \dth \Omega_r) \ $$ $$ I_4= \alpha (2+\alpha) R \sin(\theta) \cos(\theta) (\dR \Psi_2 + \dR \Psi_r ) \, , \, I_5= ( 1-2 \sin^2(\theta) ) \dth \Psi_r\, , \, I_6= \alpha( R \cos^2(\theta) - R \sin^2(\theta))( \dRth\Psi_2+\dRth\Psi_r)\ $$ $$ I_7= \alpha^2(R^2 \sin(\theta) \cos(\theta)) (\dRR \Psi_2+ \dRR \Psi_r)\, , \, I_8= - ( \sin(\theta) \cos(\theta) ) \dthth\Psi_r\, $$ and now we have the error estimate proposition.
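With this notation, the evolution equation \eqref{ReminderEq} can be written compactly as $$ \partial_t \Omega_r + I_1 + I_2 + I_3 = I_4 + I_5 + I_6 + I_7 + I_8, $$ and the strategy is to bound the contribution of each $I_i$ to the ${\mathcal H}^N$ energy of $\Omega_r$ separately.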
\begin{proposition}\label{ReminderEst} Let $\Omega_r=\Omega- \Omega_2$ satisfy \eqref{ReminderEq} with $\Omega_r|_{t=0}=0 $. Then $$\sup_{0\leq t < T}|\Omega_r(t)|_{L^{\infty}} \leq c_N \alpha^{\frac{1}{2}}$$ where $T=c \, \alpha |\log \alpha|$ and $c$ is a small constant independent of $\alpha$. \end{proposition} \begin{proof} We will use $\partial^{N}$ to refer to any mixed derivative in $R$ and $\theta$ of order $N$ (not excluding pure $R$ and $\theta$ derivatives). From the definition of the ${\mathcal H}^{N}$ norm, to obtain the ${\mathcal H}^{N}$ estimate we will take the following inner products with each $I_{i}$ term: $$\quad \big\langle \partial^{N} I_i, \partial^{N} \Omega_r \big\rangle \quad \text{and} \quad \big\langle R^{k} \dR^{k} \dth^{N-k}I_i , R^{k} \dR^{k} \dth^{N-k} \Omega_r \big\rangle$$ for $0 \leq k \leq N$ and $1 \leq i \leq 8$. \textbf{Estimate on $I_1$ and $I_{3}$} Here we will estimate $I_1$ and $I_{3}$. The estimate of $I_{3}$ is very similar to that of $I_{1}$, and so we will just show how to obtain the estimate on $I_1$. \textbf{Estimate on $I_1$} We can write $I_1$ as \begin{equation*} \begin{split} I_1= -\alpha R(\dth \Psi_2 +\dth \Psi_r ) ( \dR \Omega_2 + \dR \Omega_r) \, &= - \alpha (\dth \Psi_2 ) R( \dR \Omega_2 ) \, - \alpha (\dth \Psi_2 ) R( \dR \Omega_r) \, \\& - \alpha (\dth \Psi_r ) R ( \dR \Omega_2 ) \, - \alpha (\dth \Psi_r ) R ( \dR \Omega_r) \, \\ &=I_{1,1}+ I_{1,2} +I_{1,3}+ I_{1,4} \\ \end{split} \end{equation*} and we will estimate each term separately.
\begin{itemize} \item $I_{1,1}= -\alpha \dth \Psi_2 \, R \dR \Omega_2 $ Here we have $$ \big\langle \partial^{N}(\alpha \dth \Psi_2 \, R \dR \Omega_2 ), \partial^{N} \Omega_r \big\rangle =\sum_{i=0}^N c_{i,N} \int \partial^{i} ( \alpha \dth \Psi_2 ) \partial^{N-i} ( R \dR \Omega_2) \, \partial^{N}\Omega_r $$ Now from Lemma \ref{LPsi2Est} and Lemma \ref{LOmega2Est}, we know that $$ | \Psi_2|_{{\mathcal W}^{k+1,\infty}} \leq \frac{c_{k}}{\alpha} \quad \text{and} \quad |\Omega_2|_{{\mathcal H}^{k}} \leq | \Omega_2(0)|_{{\mathcal H}^k} e^{ \frac{c_k}{ \alpha}t}$$ Thus, we have \begin{equation*} \begin{split} \sum_{i=0}^N\int \alpha \partial^{i} ( \dth \Psi_2 ) \partial^{N-i} ( R \dR \Omega_2) \, \partial^{N}\Omega_r &\leq c_N \sum_{i=0}^N \alpha |\partial^{i} \dth \Psi_2 |_{L^{\infty}} \, |\partial^{N-i} ( R \dR \Omega_2) |_{L^2} \, |\partial^{N}\Omega_r |_{L^2} \\ & \leq c_N \alpha | \Psi_2|_{{\mathcal W}^{N+1,\infty}} \, |\Omega_2|_{{\mathcal H}^{N+1}} \, |\Omega_r|_{{\mathcal H}^N} \leq \alpha \, \frac{c_{N}}{\alpha} e^{\frac{c_N}{\alpha} t} |\Omega_r|_{{\mathcal H}^N} \\ & \leq c_N e^{\frac{c_N}{\alpha} t} |\Omega_r|_{{\mathcal H}^N} \end{split} \end{equation*} and similarly we have \begin{equation*} \begin{split} \big\langle \dR^{k} \dth^{N-k}(\alpha \dth \Psi_2 \, & R \dR \Omega_2 ), R^{2k} \dR^{k} \dth^{N-k} \Omega_r \big\rangle =\\ & \sum_{i+m=0}^{N} c_{i,m,N} \int \dR^i \dth^{m} ( \, \alpha \dth \Psi_2) \, \, \dR^{k-i} \dth^{N-k-m} (R \dR \Omega_{2}) \, \, R^{2k} \dR^{k} \dth^{N-k} \Omega_{r} \end{split} \end{equation*} From the definition of the ${\mathcal W}^{N+1,\infty}$ norm, for $i+m \leq N$ we have $$ |R^i\dR^i \dth^{m+1} \Psi_2 |_{L^{\infty}} \leq | \Psi_2|_{{\mathcal W}^{N+1,\infty}} $$ Again, applying Lemma \ref{LPsi2Est} and Lemma \ref{LOmega2Est}, we obtain \begin{equation*} \begin{split} \sum_{i+m=0}^{N}& \int R^i\dR^i \dth^{m} ( \, \alpha \dth \Psi_2) \, R^{k-i} \, \dR^{k-i} \dth^{N-k-m} (R \dR \Omega_{2}) \, \, R^{k} \dR^{k} \dth^{N-k} \Omega_{r}\\
&\leq c_N \sum_{i+m=0}^{N} \alpha |R^i\dR^i \dth^{m+1} \Psi_2 |_{L^{\infty}} \, | R^{k-i} \, \dR^{k-i} \dth^{N-k-m} ( R \dR \Omega_2) |_{L^2} \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r} |_{L^2} \\ & \leq c_N \alpha | \Psi_2|_{{\mathcal W}^{N+1,\infty}} \, |\Omega_2|_{{\mathcal H}^{N+1}} \, |\Omega_r|_{{\mathcal H}^N} \leq \alpha \, \frac{c_{N}}{\alpha} e^{\frac{c_N}{\alpha} t} |\Omega_r|_{{\mathcal H}^N} \leq c_N e^{\frac{c_N}{\alpha} t} |\Omega_r|_{{\mathcal H}^N}\\ \end{split} \end{equation*} Thus, we have \begin{equation} \label{I11H} \Big\langle I_{1,1}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^N} \end{equation} \item $I_{1,2}= -\alpha \dth \Psi_2 \, R \dR \Omega_r $ Here we have $$ \big\langle \partial^{N}(\alpha \dth \Psi_2 \, R \dR \Omega_r ), \partial^{N} \Omega_r \big\rangle =\sum_{i=0}^N c_{i,N} \int \partial^{i} ( \alpha \dth \Psi_2 ) \partial^{N-i} ( R \dR \Omega_r) \, \partial^{N}\Omega_r $$ To obtain this estimate, we again apply Lemma \ref{LPsi2Est}. Namely, that $ | \Psi_2|_{{\mathcal W}^{k+1,\infty}} \leq \frac{c_{k}}{\alpha}$. 
When $i=0$, we integrate by parts and obtain $$ \int ( \alpha \dth \Psi_2 ) \partial^{N} ( R \dR \Omega_r) \, \partial^{N}\Omega_r \leq c | \Psi_2|_{{\mathcal W}^{2,\infty}} |\Omega_r|_{{\mathcal H}^N}^2 \leq \frac{c_N}{\alpha} |\Omega_r|_{{\mathcal H}^N}^2 $$ For $1 \leq i\leq N$ we have, \begin{equation*} \begin{split} \sum_{i=1}^N\int \alpha \partial^{i} ( \dth \Psi_2 ) \partial^{N-i} ( R \dR \Omega_r) \, \partial^{N}\Omega_r &\leq c_N \sum_{i=1}^N \alpha |\partial^{i} \dth \Psi_2 |_{L^{\infty}} \, |\partial^{N-i} ( R \dR \Omega_r) |_{L^2} \, |\partial^{N}\Omega_r |_{L^2} \\ & \leq c_N \alpha | \Psi_2|_{{\mathcal W}^{N+1,\infty}} \, |\Omega_r|_{{\mathcal H}^{N}} \, |\Omega_r|_{{\mathcal H}^N} \leq \alpha \, \frac{c_{N}}{\alpha} |\Omega_r|_{{\mathcal H}^N}^2 \leq c_N |\Omega_r|_{{\mathcal H}^N}^2\\ \end{split} \end{equation*} Similarly, for the $R^{k} \dR^{k} \dth^{N-k}$ terms we have \begin{equation*} \begin{split} \big\langle R^{k} \dR^{k} \dth^{N-k}(\alpha \dth \Psi_2 & \, R \dR \Omega_r ) , R^{k} \dR^{k} \dth^{N-k} \Omega_r \big\rangle = \\ & \sum_{i+m=0}^{N} c_{i,m,N} \int R^{i} \dR^i \dth^{m} ( \, \alpha \dth \Psi_2) \, \, R^{k-i} \dR^{k-i} \dth^{N-k-m} (R \dR \Omega_{r}) \, \, R^{k} \dR^{k} \dth^{N-k} \Omega_{r} \end{split} \end{equation*} We again use $ | \Psi_2|_{{\mathcal W}^{k+1,\infty}} \leq \frac{c_{k}}{\alpha}$.
Hence, we have \begin{equation*} \begin{split} \sum_{i+m=0}^{N}& \int R^i\dR^i \dth^{m} ( \, \alpha \dth \Psi_2) \, R^{k-i} \, \dR^{k-i} \dth^{N-k-m} (R \dR \Omega_{r}) \, \, R^{k} \dR^{k} \dth^{N-k} \Omega_{r}\\ &\leq c_N \sum_{i+m=0}^{N} \alpha |R^i\dR^i \dth^{m+1} \Psi_2 |_{L^{\infty}} \, | R^{k-i} \, \dR^{k-i} \dth^{N-k-m} ( R \dR \Omega_r) |_{L^2} \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r} |_{L^2} \\ & \leq c_N \alpha | \Psi_2|_{{\mathcal W}^{N+1,\infty}} \, |\Omega_r|_{{\mathcal H}^{N}} \, |\Omega_r|_{{\mathcal H}^N} \leq \alpha \, \frac{c_{N}}{\alpha} |\Omega_r|_{{\mathcal H}^N}^2 \leq c_N |\Omega_r|_{{\mathcal H}^N}^2\\ \end{split} \end{equation*} Thus, we have \begin{equation} \label{I12H} \Big\langle I_{1,2}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N |\Omega_r|_{{\mathcal H}^N}^2 \end{equation} \item $ I_{1,3}=- \alpha (\dth \Psi_r ) R \dR \Omega_2 $ To obtain the estimate on $I_{1,3}$, we will use Lemma \ref{LOmega2Est}, which will give us the estimate on $\Omega_2$. In addition, to bound the $\dth \Psi_r$ term, we will use the elliptic estimates from Proposition \ref{EllipticEst} and the embedding estimates from Lemma \ref{PsiEmbed}. Now we have $$ \big\langle \partial^{N}(\alpha \dth \Psi_r \, R \dR \Omega_2 ), \partial^{N} \Omega_r \big\rangle =\sum_{i=0}^N c_{i,N} \int \partial^{i} ( \alpha \dth \Psi_r ) \partial^{N-i} ( R \dR \Omega_2) \, \partial^{N}\Omega_r $$ When $ 0 \leq i \leq \frac{N}{2} $, we will use the embedding from Lemma \ref{PsiEmbed}.
Namely, that $$ |\dR^k \dth^m \Psi|_{L^{\infty}} \leq c_{k,m} |\Omega|_{{\mathcal H}^{k+m+1}} $$ Thus, we have \begin{equation*} \begin{split} & \sum_{i=0}^{\frac{N}{2}} \int \partial^{i} ( \alpha \dth \Psi_r ) \partial^{N-i} ( R \dR \Omega_2) \, \partial^{N}\Omega_r \leq \sum_{i=0}^{\frac{N}{2}} \alpha | \partial^{i} \dth \Psi_r |_{L^{\infty}} \, |\partial^{N-i} ( R \dR \Omega_2)|_{L^2} | \partial^{N} \Omega_r |_{L^2} \\ & \leq \sum_{i=0}^{\frac{N}{2}} \alpha |\Omega_r|_{{\mathcal H}^{i+2}} |\Omega_2|_{{\mathcal H}^{N+1}} |\Omega_r|_{{\mathcal H}^{N} } \leq \alpha |\Omega_r|_{{\mathcal H}^{\frac{N}{2}+3}} |\Omega_2|_{{\mathcal H}^{N+1}} |\Omega_r|_{{\mathcal H}^{N} } \leq c_N \alpha e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 \end{split} \end{equation*} Here we used Lemma \ref{LOmega2Est} for the $ |\Omega_2|_{{\mathcal H}^{N+1}}$ term. When $ \frac{N}{2} \leq i \leq N $, we will use the elliptic estimate from Proposition \ref{EllipticEst}. Namely, $$ | \partial^k \dth \Psi|_{L^2} \, \leq c_{k} |\Omega|_{{\mathcal H}^k}$$ Thus we have \begin{equation*} \begin{split} & \sum_{i=\frac{N}{2}}^{N} \int \partial^{i} ( \alpha \dth \Psi_r ) \partial^{N-i} ( R \dR \Omega_2) \, \partial^{N}\Omega_r \leq \sum_{i=\frac{N}{2}}^{N} \alpha | \partial^{i} \dth \Psi_r |_{L^{2}} \, | R \dR \Omega_2|_{{\mathcal W}^{N-i,\infty}} | \partial^{N} \Omega_r |_{L^2} \\ & \leq \sum_{i=\frac{N}{2}}^{N} \alpha |\Omega_r|_{{\mathcal H}^{i}} |\Omega_2|_{{\mathcal W}^{\frac{N}{2},\infty}} |\Omega_r|_{{\mathcal H}^{N} } \leq \alpha |\Omega_r|_{{\mathcal H}^{\frac{N}{2}+3}} |\Omega_2|_{{\mathcal H}^{N}} |\Omega_r|_{{\mathcal H}^{N} } \leq c_N \alpha e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 \end{split} \end{equation*} Similarly, to estimate the following inner product $$ \big\langle \dR^{k} \dth^{N-k}(\alpha (\dth \Psi_r ) R \dR \Omega_2 ), R^{2k} \dR^{k} \dth^{N-k} \Omega_r \big\rangle \leq c_N \alpha e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 $$ we will use
\eqref{EllipticEstW} in Proposition \ref{EllipticEst} and embedding estimates from Lemma \ref{RPsiEmbed}. Following the same steps as we did in the previous inner product, we obtain that \begin{equation} \label{I13H} \Big\langle I_{1,3}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N \alpha e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 \end{equation} \item Estimate on $ I_{1,4}=- \alpha (\dth \Psi_r ) R \dR \Omega_r $ To obtain the estimate on $ I_{1,4}$, we will use the elliptic estimates from Proposition \ref{EllipticEst}, namely \eqref{EllipticEstMain} and \eqref{EllipticEstW}, together with the embedding estimates from Lemma \ref{PsiEmbed} and Lemma \ref{RPsiEmbed}. We will only show how to obtain the estimate on the following term: \begin{equation*} \begin{split} \big\langle \dR^k \dth^{N-k} ( \, \alpha \dth \Psi_r\, & R \dR \Omega_{r}) , R^{2k} \dR^{k} \dth^{N-k}\Omega_{r} \big\rangle= \\ & \sum_{i+m=0}^{N} c_{i,m,N} \int \dR^i \dth^{m} ( \, \alpha \dth \Psi_r) \, \, \dR^{k-i} \dth^{N-k-m} (R \dR \Omega_{r}) \, \, R^{2k} \dR^{k} \dth^{N-k} \Omega_{r} \end{split} \end{equation*} For the other inner product, the idea is the same.
To start the estimate, we first consider the case $i=m=0$: integrating by parts and using the embedding estimates in Lemma \ref{PsiEmbed} and Lemma \ref{RPsiEmbed}, we have \begin{equation*} \begin{split} & \int \alpha \dth \Psi_r \, \Big( R^{k+1} \dR^{k+1} \dth^{N-k} \Omega_{r} + R^{k} \dR^{k} \dth^{N-k} \Omega_{r} \Big) \, \, R^{k} \dR^{k} \dth^{N-k} \Omega_{r} \\& \leq\alpha |R \dRth \Psi_r|_{L^{\infty}} \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r}|_{L^2}^2 + \alpha | \dth \Psi_r|_{L^{\infty}} \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r}|_{L^2}^2 \\ & \leq c_N ( |\Omega_r|_{{\mathcal H}^3} \, |\Omega_r|_{{\mathcal H}^N}^2 + |\Omega_r|_{{\mathcal H}^2} \, |\Omega_r|_{{\mathcal H}^N}^2) \leq c_N |\Omega_r|^3_{{\mathcal H}^N} \\ \end{split} \end{equation*} Now when $ 1 \leq i+m \leq \frac{N}{2}$, we use Lemma \ref{RPsiEmbed} and the definition of the ${\mathcal H}^k$ norm to obtain: \begin{equation*} \begin{split} & \sum_{i+m \geq 1}^{\frac{N}{2}}R^i \dR^i \dth^{m} ( \, \alpha \dth \Psi_r) \, \Big( R^{k+1-i} \dR^{k+1-i} \dth^{N-k-m} \Omega_{r} + R^{k-i} \dR^{k-i} \dth^{N-k-m} \Omega_{r} \Big) \, \, R^{k} \dR^{k} \dth^{N-k} \Omega_{r} \\ & \leq \sum_{i+m \geq 1}^{\frac{N}{2}} \alpha |R^i \, \dR^{i} \dth ^{m+1} \Psi_r|_{L^{\infty}} \, |R^{k+1-i} \dR^{k+1-i} \dth^{N-k-m} \Omega_{r}|_{L^2} \, \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r}|_{L^2} \, +\\ & \quad \, \sum_{i+m \geq 1}^{\frac{N}{2}} \alpha |R^i \, \dR^{i} \dth ^{m+1} \Psi_r|_{L^{\infty}} \, |R^{k-i} \dR^{k-i} \dth^{N-k-m} \Omega_{r}|_{L^2} \, \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r}|_{L^2} \\ & \leq c_N \sum_{i+m \geq 1}^{\frac{N}{2}} |\Omega_r|_{{\mathcal H}^{i+m+2}} \big( |\Omega_r|_{{\mathcal H}^N} + |\Omega_r|_{{\mathcal H}^{N-1}}\big) |\Omega_r|_{{\mathcal H}^{N} } \leq c_N |\Omega_{r}|_{{\mathcal H}^{\frac{N}{2}+2}} \big( |\Omega_r|_{{\mathcal H}^N} + |\Omega_r|_{{\mathcal H}^{N-1}}\big) | \Omega_{r}|_{{\mathcal H}^N } \\ & \leq c_N | \Omega_{r}|_{{\mathcal H}^N }^3 \end{split} \end{equation*} Now for the case when $ \frac{N}{2} \leq
i+m \leq N$, we will use Lemma \ref{OmegaEmbed} and the elliptic estimates from Proposition \ref{EllipticEst} to obtain \begin{equation*} \begin{split} & \sum_{i+m \geq \frac{N}{2}}^{N}R^i \dR^i \dth^{m} ( \, \alpha \dth \Psi_r) \, \Big( R^{k+1-i} \dR^{k+1-i} \dth^{N-k-m} \Omega_{r} + R^{k-i} \dR^{k-i} \dth^{N-k-m} \Omega_{r} \Big) \, \, R^{k} \dR^{k} \dth^{N-k} \Omega_{r} \\ & \leq \sum_{i+m \geq \frac{N}{2}}^{N} \alpha |R^i \, \dR^{i} \dth ^{m+1} \Psi_r|_{L^2} \, \Big( |R^{k+1-i} \dR^{k+1-i} \dth^{N-k-m} \Omega_{r}|_{L^{\infty}} \Big) \, \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r}|_{L^2} \, + \\ & \quad \sum_{i+m \geq \frac{N}{2}}^{N} \alpha |R^i \, \dR^{i} \dth ^{m+1} \Psi_r|_{L^2} \, \Big( |R^{k-i} \dR^{k-i} \dth^{N-k-m} \Omega_{r}|_{L^{\infty}} \Big) \, \, |R^{k} \dR^{k} \dth^{N-k} \Omega_{r}|_{L^2} \\ & \leq \sum_{i+m \geq \frac{N}{2}}^{N} |\Omega_{r}|_{{\mathcal H}^{i+m-1}} \, \big( |\Omega_r|_{{\mathcal H}^{N-(i+m)+3}} + |\Omega_r|_{{\mathcal H}^{N-(i+m)+2}} \big) \, | \Omega_{r}|_{{\mathcal H}^N } \leq c_N |\Omega_{r}|_{{\mathcal H}^{N-1}} |\Omega_r|_{{\mathcal H}^{\frac{N}{2}+3}} | \Omega_{r}|_{{\mathcal H}^N } \\ & \leq c_N | \Omega_{r}|_{{\mathcal H}^N }^3 \end{split} \end{equation*} and thus, we have the following: \begin{equation} \label{I14H} \Big\langle I_{1,4}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N |\Omega_r|_{{\mathcal H}^N}^3 \end{equation} \end{itemize} Thus, we have the following estimate on the $I_1$ term \begin{equation} \label{I1H} \Big\langle I_{1}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^N}+ c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 +c_N |\Omega_r|_{{\mathcal H}^N}^3 \end{equation} \textbf{Estimate on $I_3$} The estimate on $I_3$ follows similarly to $I_1$, so we skip the details for this case.
One can obtain the following: \begin{equation} \label{I3H} \Big\langle I_{3}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^N}+ c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 +c_N |\Omega_r|_{{\mathcal H}^N}^3 \end{equation} \textbf{Estimate on $I_2$} Here we have $$ I_2= ( 2 \Psi_2 \dth \Omega_r + 2 \Psi_r \dth \Omega_2 + 2 \Psi_r \dth \Omega_r ) = I_{2,1}+ I_{2,2}+I_{2,3} $$ \begin{itemize} \item $I_{2,1}= 2 \Psi_2 \dth \Omega_r$ To estimate $I_{2,1}$, we follow the same steps as in the $I_1$ term. Using Lemma \ref{LPsi2Est}, namely that $ | \Psi_2|_{{\mathcal W}^{N,\infty}} \leq \frac{c_{N}}{\alpha}$, we have: \begin{equation} \label{I21H} \Big\langle I_{2,1}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq \frac{c_N}{\alpha} |\Omega_r|_{{\mathcal H}^N}^2 \end{equation} \item $I_{2,2}= 2 \Psi_r \dth \Omega_2 $ Similarly, to estimate $I_{2,2}$, we follow the same steps as we did in $I_1$. Using Lemma \ref{LOmega2Est}, namely that $ |\Omega_2|_{{\mathcal H}^{k}} \leq | \Omega_2(0)|_{{\mathcal H}^k} e^{ \frac{c_k}{ \alpha}t}$, we obtain: \begin{equation} \label{I22H} \Big\langle I_{2,2}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 \end{equation} \item $I_{2,3}= 2 \Psi_r \dth \Omega_r$ The term $I_{2,3}$ can be estimated in a similar way as the $I_{1,4}$ term; using the embedding and elliptic estimates, we have \begin{equation} \label{I23H} \Big\langle I_{2,3}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq c_N |\Omega_r|_{{\mathcal H}^N}^3 \end{equation} \end{itemize} Hence, we obtain \begin{equation} \label{I2H} \Big\langle I_{2}, \Omega_r \Big\rangle_{{\mathcal H}^N} \leq \frac{c_N}{\alpha} |\Omega_r|_{{\mathcal H}^N}^2 + c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 +c_N |\Omega_r|_{{\mathcal H}^N}^3 \leq c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^{N} }^2 +c_N |\Omega_r|_{{\mathcal H}^N}^3 \end{equation} \textbf{Estimate
on $I_4$, $I_5$, $I_6$, $I_7$, and $I_8$} We can write $I_4$ as: \begin{equation*} \begin{split} I_4&= ( 2 \alpha R \sin(\theta) \cos(\theta) +\alpha^2 R \sin(\theta) \cos(\theta) ) (\dR \Psi_2 + \dR \Psi_r )\\ & = \alpha(2+\alpha) \,\sin(\theta) \cos(\theta) \, R \dR \Psi_2 + \alpha(2+\alpha) \,\sin(\theta) \cos(\theta) \, R \dR \Psi_r\\ &=I_{4,1}+ I_{4,2} \end{split} \end{equation*} Recall that $$I_{5}=(1-2\sin^2(\theta)) \dth \Psi_r.$$ We can also rewrite $I_{6}$ and $I_{7}$ as follows: \begin{equation*} \begin{split} I_{6}&= \alpha( \cos^2(\theta) - \sin^2(\theta)) R( \dRth\Psi_2+\dRth\Psi_r) \\ & = \alpha( \cos^2(\theta) - \sin^2(\theta)) R \dRth\Psi_2 + \alpha( \cos^2(\theta) - \sin^2(\theta)) R \dRth\Psi_r\\ &=I_{6,1}+I_{6,2} \end{split} \end{equation*} and \begin{equation*} \begin{split} I_7&= \alpha^2 (\sin(\theta) \cos(\theta))R^2 (\dRR \Psi_2+ \dRR \Psi_r) \\ & = \alpha^2 (\sin(\theta) \cos(\theta))R^2 \dRR \Psi_2 +\alpha^2 (\sin(\theta) \cos(\theta))R^2 \dRR \Psi_r \\ &=I_{7,1}+I_{7,2} \end{split} \end{equation*} Recall that $$I_{8}=-\sin(\theta) \cos(\theta) \dthth \Psi_r.$$ Now, for $i=4, 6$, and $7$, we use Lemma \ref{LPsi2Est}.
Namely, using that $|\Psi_2|_{{\mathcal H}^{k+1}} \leq \frac{c_{k}}{ \alpha}$, we have the following estimate: \begin{equation} \label{Ii1H} \big\langle I_{i,1} \, , \, \Omega_r \big\rangle_{{\mathcal H}^{N}} \leq c_{N} |\Omega_r|_{{\mathcal H}^{N}} \quad \text{for}\quad i=4,6,7 \end{equation} Using the elliptic estimates in Proposition \ref{EllipticEst}, we obtain that: \begin{equation} \label{Ii2H} \big\langle I_{i,2} \, , \, \Omega_r \big\rangle_{{\mathcal H}^{N}} \leq c_N |\Omega_r|_{{\mathcal H}^{N}}^2 \quad \text{for}\quad i=4,6,7 \end{equation} and \begin{equation} \label{I58H} \big\langle I_{i} \, , \, \Omega_r \big\rangle_{{\mathcal H}^{N}} \leq c_N |\Omega_r|_{{\mathcal H}^{N}}^2 \quad \text{for}\quad i=5,8 \end{equation} Hence, from \eqref{Ii1H}, \eqref{Ii2H}, \eqref{I58H}, we have that \begin{equation} \label{I45678H} \big\langle I_{i} \, , \, \Omega_r \big\rangle_{{\mathcal H}^{N}} \leq c_N |\Omega_r|_{{\mathcal H}^{N}} +c_N |\Omega_r|_{{\mathcal H}^{N}}^2 \quad \text{for}\quad i=4,5, \dots, 8 \end{equation} \textbf{Total remainder estimate:} Here we obtain the total error estimate.
From our previous work we have \begin{equation*} \begin{split} \frac{1}{2}\frac{d}{dt} |\Omega_r|^2_{{\mathcal H}^{N}} = \big\langle \partial_t \Omega_r , \, \Omega_r \big\rangle_{{\mathcal H}^{N}} &\leq \sum_{i=1}^8 | \, \big\langle I_{i} , \, \Omega_r \big\rangle_{{\mathcal H}^{N}} | \\ \end{split} \end{equation*} and thus from \eqref{I1H}, \eqref{I3H}, \eqref{I2H}, and \eqref{I45678H}, we have \begin{equation*} \begin{split} \frac{d}{dt} |\Omega_r|^2_{{\mathcal H}^{N}} &\leq c_N e^{\frac{c_N}{\alpha}t} |\Omega_r|_{{\mathcal H}^N}+\Big( \frac{c_N}{\alpha} +c_N e^{\frac{c_N}{\alpha}t} \Big) \, |\Omega_r|_{{\mathcal H}^{N} }^2 +c_N |\Omega_r|_{{\mathcal H}^N}^3 \\ \end{split} \end{equation*} and hence \begin{equation*} \begin{split} \frac{d}{dt} |\Omega_r|_{{\mathcal H}^{N}} &\leq c_N e^{\frac{c_N}{\alpha}t}+ \Big( \frac{c_N}{\alpha} +c_N e^{\frac{c_N}{\alpha}t} \Big) \, |\Omega_r|_{{\mathcal H}^{N} } +c_N |\Omega_r|_{{\mathcal H}^N}^2 \\ \end{split} \end{equation*} Since $\Omega_r|_{t=0}=0$, by a bootstrap argument it follows that \begin{equation*} \begin{split} |\Omega_r|_{{\mathcal H}^{N}} &\leq \Big( \int_0^t c_N e^{\frac{c_N}{\alpha}\tau} \, d\tau \Big) \exp\Big(\int_0^t \big( \frac{c_N}{\alpha} +c_N e^{\frac{c_N}{\alpha}\tau} \big) \, d\tau\Big) \leq \alpha c_N (e^{\frac{c_N}{\alpha} t} -1) \exp( \frac{c_N}{\alpha} t + \alpha c_N e^{\frac{c_N}{\alpha}t} \, ) \\ \end{split} \end{equation*} Thus, if we choose $t < T= c \, \alpha |\log \alpha|$ for $c$ small, say $c=\frac{1}{4 \, c_N}$, then we have \begin{equation*} \begin{split} |\Omega_r|_{{\mathcal H}^{N}} &\leq c_N \alpha^{\frac{1}{2}} \, \\ \end{split} \end{equation*} and this completes the proof of Proposition \ref{ReminderEst}. \end{proof} \section{Main result}\label{Main} We now recall and prove the main theorem of this work.
\begin{theorem} For any $\alpha,\delta>0$, there exist initial data $\omega_0^{\alpha,\delta} \in C_c^{\infty}(\mathbb{R}^2)$ and a time $T(\alpha)$ such that the corresponding unique global solution, $\omega^{\alpha,\delta}$, to \eqref{EulerR} is such that at $t=0$ we have $$|\omega_0^{\alpha,\delta}|_{L^{\infty}}=\delta,$$ but for any $0<t\leq T(\alpha)$ we have $$ \quad|\omega^{\alpha,\delta}(t)|_{L^{\infty}} \geq |\omega_0|_{L^\infty}+c \log (1+\frac{c}{\alpha}t), $$ where $T(\alpha)= c \alpha |\log(\alpha)|$ and $c>0$ is a constant independent of $\alpha.$ \end{theorem} \begin{proof} Consider the initial data of the form $$ \omega_0=\Omega|_{t=0}=f_{0}(R) \sin(2\theta) $$ where $f_{0}(R)$ is a non-negative, compactly supported smooth function which vanishes on $[0,1)$ and is positive outside. We know that we can write $\Omega=\Omega_2 +\Omega_r$, and from the form of the initial data, we have $\Omega_r|_{t=0}=0$; thus from Proposition \ref{ReminderEst}, we have $$ |\Omega_r (t)|_{L^{\infty}} \leq c_N \alpha^{\frac{1}{2}} \, $$ for $0\leq t \leq c \, \alpha |\log \alpha|$, where recall that $c$ is a small constant independent of $\alpha$. Recall also that we can write $\Omega_2$ as: $$ \Omega_2= f + \frac{1}{2 \alpha} \int_0^t L_s(f_\tau) d \tau $$ and thus from Proposition \ref{LeadingEst}, we obtain that $$ \Omega_2= f + \frac{1}{2 \alpha} \int_0^t L_s(f_\tau) d \tau \geq f+ c_0 \log(1+ \frac{c_0}{\alpha} \, t ) $$ for some $c_0$ independent of $ \alpha$, and thus we have our desired result. \end{proof} \section*{Acknowledgements} Both authors were partially supported by the NSF grants DMS 2043024 and DMS 2124748. T.M.E. was partially supported by an Alfred P. Sloan Fellowship.
\section{Introduction} As the coronavirus disease 2019 (COVID-19) has already had profound effects on public health and the economy worldwide, using statistical approaches to model and understand the spread of COVID-19, to inform and educate the public about the virus transmission, and to develop effective strategies for addressing this challenge has become crucial. In particular, the understanding of how potential environmental factors, such as weather and government interventions, affect the virus transmissibility is important yet unclear. Moreover, policymakers are in urgent need of an effective strategy to mitigate the outbreak based on weather conditions, yet little attention has been paid to the interaction effect between weather and government interventions. For instance, a natural question for policymakers is ``Should the government implement more restrictions to mitigate the pandemic as the weather gets colder?'' There have been many studies on the impact of weather and government interventions since the COVID-19 outbreak, but some challenges remain. To name a few, \cite{yu2020impact,xu2020modest,carson2020covid} find evidence that temperature is associated with the COVID-19 spread, while \cite{jamil2020no,gupta2020estimating} find no significant associations. \cite{cowling2020impact,haug2020ranking,haldar2020effect,flaxman2020estimating} investigate the impacts of government interventions on COVID-19 spread, and most of these studies show that government interventions are associated with reduced transmission of COVID-19. However, most of the existing work focuses on the individual effects of weather and government interventions, which may lead to misleading results due to potential collinearity issues \citep{wilson2020weather}, and these methods cannot be directly extended to multiple factors due to model complications.
Besides, the interaction effects between weather and government interventions are not available if their effects are estimated separately. In this paper, we employ a nonparametric regression method, incorporated into an epidemic model, not only to model the impacts of weather and government interventions \textit{jointly} and evaluate their effects on the virus transmissibility, but also to provide forecasts of future COVID-19 infections. Specifically, a Gaussian process prior \citep{williams2006gaussian} is imposed on the functional parameters in the \textit{susceptible-infectious-removed (SIR)} model \citep{kermack1927contribution}, and based on this model, the posterior distribution of the basic reproduction number, which is used to measure the transmission potential of a disease, will be derived. The main effects and interaction effects of these factors will be analyzed by sensitivity analysis \citep{sobol1993sensitivity}. It is worth noting that the process of estimating the parameters associated with epidemic models is often called \textit{calibration} in the computer experiment literature \citep{kennedy2001bayesian,santner2003design,tuo2015efficient}. Although there are numerous developments on calibration, most of the existing work is based on scalar parameters rather than functional parameters. Exceptions include the recent work by \cite{plumlee2016calibrating,brown2018nonparametric}, but their work is based on continuous outputs with a Gaussian assumption, which is not valid for the count data in the epidemic models in our application. The remainder of the paper is organized as follows. In Section \ref{sec:sir}, the SIR model and a modified SIR model with functional parameters are introduced. The statistical model incorporating the SIR model is explicitly described in Section \ref{sec:statistcalmodel}. Numerical studies are conducted in Section \ref{sec:simulation} to examine the performance.
In Section \ref{sec:application}, the statistical model is applied to the COVID-19 outbreak to assess the impacts of weather and government interventions. Concluding remarks are given in Section \ref{sec:conclusion}, and the details of sampling from the posterior distributions are given in the Appendix. The R \citep{R2018} code and data for reproducing the results in this paper are provided in the Supplementary Material. \section{Compartmental models in epidemiology}\label{sec:sir} \subsection{SIR model} Compartmental models, which simplify the mathematical modelling of infectious diseases, are widely used in epidemiology. One of the prominent models is the \textit{SIR model} \citep{kermack1927contribution,Diekmann2013}, which assigns the population to three compartments: susceptible (S), infectious (I), and recovered (R), where the three compartments represent the number of the susceptible individuals, the infected individuals, and the recovered or deceased individuals, respectively. The SIR model has been widely used for understanding how a disease spreads in outbreaks of measles, influenza, rubella, smallpox, Ebola, monkeypox, SARS, and the current COVID-19 pandemic. See, for example, \cite{osthus2017forecasting,chen2020time,cooper2020sir,roda2020difficult,d2020assessment}. Mathematically speaking, the transition of the three compartments can be expressed by the following ordinary differential equations, \begin{equation}\label{eq:sir} \frac{dS(t)}{dt}=-\frac{\beta I(t)S(t)}{N},\quad\frac{dI(t)}{dt}=\frac{\beta I(t)S(t)}{N}-\gamma I(t),\quad\frac{dR(t)}{dt}=\gamma I(t), \end{equation} where $S(t)$, $I(t)$ and $R(t)$ represent the numbers of cases in the corresponding compartment, $N=S(t)+I(t)+R(t)$ is the total population, $\beta$ is the contact rate that represents the average number of contacts per person per time in the susceptible compartment, and $\gamma$ is the recovery rate from the infectious compartment.
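As a concrete illustration of the dynamics in \eqref{eq:sir}, the short sketch below simulates the three compartments with a forward-Euler time step; the rates, population size, and initial condition are illustrative values of our choosing, not estimates from this paper.

```python
# Forward-Euler simulation of the SIR system; beta, gamma, N, and I0
# are illustrative values, not estimates from the paper.
def simulate_sir(beta, gamma, N, I0, days):
    S, I, R = N - I0, float(I0), 0.0
    path = [(S, I, R)]
    for _ in range(days):
        new_infections = beta * I * S / N   # dS/dt = -beta * I * S / N
        new_recoveries = gamma * I          # dR/dt = gamma * I
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        path.append((S, I, R))
    return path

path = simulate_sir(beta=0.3, gamma=0.1, N=1_000_000, I0=100, days=120)
```

Since $\beta>\gamma$ here, the infectious compartment initially grows, while $S(t)+I(t)+R(t)$ remains equal to $N$ at every step.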
The ratio of $\beta$ to $\gamma$ is called the \textit{basic reproduction number} in epidemiology, often denoted by $\mathcal{R}_0:=\beta/\gamma$, which indicates the average number of infected cases generated by a typical infectious individual when introduced into a fully susceptible population. This number is of great importance in public health and epidemiology, and is often used to measure the transmission potential of a disease or a virus \citep{dietz1993estimation,zhao2020preliminary,zhang2020evolving}. Essentially, when $\mathcal{R}_0$ is larger than 1, the infection will be able to start spreading in a population, and the larger $\mathcal{R}_0$ is, the harder it is to control the epidemic. \subsection{Modified SIR model} Although the SIR model has been widely used in epidemiology, it has been shown that the model cannot fully reflect reality due to its simplifications and assumptions. See, for example, \cite{ModelingInfectious,sung2020efficient}. In particular, the constant parameter assumption on $\beta$ and $\gamma$, which implies that the contact rate and the recovery rate are both fixed during the entire process, has been shown to be too strong to be satisfied in reality \citep{cowling2008effectiveness,cauchemez2016unraveling,hong2020estimation,yu2020impact,ambrosio2020coupled}. Therefore, in this article, we consider a modified, more flexible, SIR model by assuming that the parameters can vary based on some environmental factors.
First, similar to \cite{hong2020estimation}, we consider a discrete version of the SIR model by replacing the derivatives in \eqref{eq:sir} with finite differences, which results in \begin{align*} I(t+1)-I(t)&=\frac{\beta I(t)(N-I(t)-R(t))}{N}-\gamma I(t),\\ R(t+1)-R(t)&=\gamma I(t). \end{align*} Then, by assuming the functional parameters $\beta(\mathbf{x})$ and $\gamma(\mathbf{x})$, where $\mathbf{x}\in\Omega\subseteq\mathbb{R}^d$ is a $d$-dimensional factor, and expressing the equations in a recursive fashion, a modified SIR model can be expressed as \begin{align*} I(t+1)&=(1+\beta(\mathbf{x}) -\gamma(\mathbf{x}))I(t) - \beta(\mathbf{x})I(t)(I(t)+R(t))/N,\\ R(t+1)&=R(t)+\gamma(\mathbf{x}) I(t), \end{align*} and $S(t+1)=N-I(t+1)-R(t+1)$ for $t\in\mathbb{N}\cup\{0\}$. Thus, the number of daily infectious cases at day $t$ based on the modified SIR model is the difference in susceptibles from day $t-1$ to day $t$, which we denote by \begin{equation}\label{eq:modifiedSIR} f(t,\beta(\mathbf{x}),\gamma(\mathbf{x})):=S(t-1)-S(t). \end{equation} \section{A statistical model incorporating the SIR model}\label{sec:statistcalmodel} \subsection{Gaussian process priors for functional parameters} In this section, we introduce a statistical model incorporating the modified SIR model in \eqref{eq:modifiedSIR}. First, denote $y_t$ as the daily reported number of infectious cases at day $t$, and assume $y_t$ follows an independent Poisson distribution with mean function $f(t,\beta(\mathbf{x}), \gamma(\mathbf{x}))$, that is, \begin{equation}\label{eq:dataassumption} y_t\overset{\text{indep.}}{\sim}\text{Poi}(f(t,\beta(\mathbf{x}),\gamma(\mathbf{x}))).
\end{equation} The functional parameters in the SIR model are assumed to follow a joint Gaussian process (GP) prior: \begin{align} \text{logit}\left(\beta(\cdot)\right)&\sim\mathcal{GP}(\mu_1(\cdot), \tau K_{\boldsymbol{\phi}}(\cdot,\cdot)),\nonumber\\ \text{logit}\left(\gamma(\cdot)\right)&\sim\mathcal{GP}(\mu_2(\cdot), \tau K_{\boldsymbol{\phi}}(\cdot,\cdot)),\label{eq:GPassumption}\\ {\rm{Cov}}\left(\text{logit}\left(\beta(\mathbf{x})\right),\text{logit}\left(\gamma(\mathbf{x}')\right)\right)&=\rho\tau K_{\boldsymbol{\phi}}(\mathbf{x},\mathbf{x}')\quad\text{for any}\quad\mathbf{x},\mathbf{x}'\in\Omega\nonumber, \end{align} where $\text{logit}(x)=\log(x/(1-x))$ and ${\rm{Cov}}(x,y)$ is the covariance between $x$ and $y$. The logit transformation is used here because both $\beta$ and $\gamma$ are rates bounded between zero and one, whereas a Gaussian process places positive probability mass on negative values. Other transformations, such as the probit function, $\Phi^{-1}(x)$, the complementary log-log function, $\log(-\log(x))$, or the identity function, $x$, could also be used here. $\mu_j(\cdot)$ is the mean function, for which we assume a constant mean, i.e., $\mu_j(\mathbf{x})=\mu_j$. $\tau>0$ is the process variance, and $K_{\boldsymbol{\phi}}$ is the correlation function, for which a Gaussian correlation function is commonly used in the form of $K_{\boldsymbol{\phi}}(\mathbf{x},\mathbf{x}')=\exp(-\|\boldsymbol{\phi}^T(\mathbf{x}-\mathbf{x}')\|^2)$ for any $\mathbf{x},\mathbf{x}'\in\Omega$, where $\boldsymbol{\phi}\in\mathbb{R}^p$ is the unknown correlation parameter.
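For concreteness, a minimal sketch of the logit link and a Gaussian correlation function is given below; interpreting $\boldsymbol{\phi}$ as coordinatewise scale parameters is our reading of the form above, not a claim about the authors' implementation.

```python
import math

def logit(x):
    # logit(x) = log(x / (1 - x)), mapping (0, 1) to the real line
    return math.log(x / (1.0 - x))

def inv_logit(z):
    # inverse of the logit link, mapping the real line back to (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def gauss_corr(x, xp, phi):
    # Gaussian correlation exp(-sum_j (phi_j * (x_j - x'_j))^2),
    # reading phi as coordinatewise scale parameters (an assumption)
    return math.exp(-sum((p * (a - b)) ** 2 for p, a, b in zip(phi, x, xp)))
```

Note that $K_{\boldsymbol{\phi}}(\mathbf{x},\mathbf{x})=1$ for any $\mathbf{x}$, so the prior variance of each logit-transformed rate is exactly $\tau$.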
Note that the correlation function is usually reparameterized as \begin{equation}\label{eq:kernel} K_{\boldsymbol{\phi}}(\mathbf{x},\mathbf{x}')=\prod^d_{j=1}\phi_j^{4(x_{j}-x'_{j})^2} \quad\text{for any}\quad\mathbf{x},\mathbf{x}'\in\Omega, \end{equation} where $\boldsymbol{\phi}=(\phi_1,\ldots,\phi_d)\in(0,1)^d$, for the purpose of numerical stability, because the domain of $\phi_j\in(0,1)$ is now bounded. See, for example, \cite{brown2018nonparametric} and \cite{mak2017efficient}. As a result, the form of the correlation function \eqref{eq:kernel} is used throughout this article. In \eqref{eq:GPassumption}, we assume that $\text{logit}(\beta(\cdot))$ and $\text{logit}(\gamma(\cdot))$ are correlated with a \textit{cross-correlation}, $\rho$, which implies that for any given $\mathbf{x}\in\Omega$, the correlation between $\text{logit}(\beta(\mathbf{x}))$ and $\text{logit}(\gamma(\mathbf{x}))$ is $\rho$. This assumption is similar to the separable correlation function in \cite{stein1991universal,mardia1993spatial,brown1994multivariate,banerjee2002prediction,qian2008gaussian}. The dependence assumption on the two parameters is crucial and appealing from an epidemiological perspective. For compartmental models like SIR, it is well known in the modeling literature that the parameters are strongly \textit{coupled}. See, for example, the joint posterior distribution in \cite{roda2020difficult}, which shows a strong correlation between the two parameters in an SIR model. Thus, the independent GP assumption as in \cite{brown2018nonparametric} and \cite{plumlee2016calibrating} is not valid in this application. Numerical studies in Section \ref{sec:simulation} compare the two assumptions and show that the joint GP prior can outperform the independent one when there exist correlations between the parameters. Suppose that we observe the reported infectious cases in $n$ days, which are denoted by $\mathbf{y}_n=(y_1,\ldots,y_n)$.
Denote $\beta_t=\beta(\mathbf{x}_t)$, $\gamma_t=\gamma(\mathbf{x}_t)$, $\boldsymbol{\beta}=(\beta_1,\ldots,\beta_n)$, and $\boldsymbol{\gamma}=(\gamma_1,\ldots,\gamma_n)$. Furthermore, denote $\mathbf{1}_n=(1,\ldots,1)^T\in\mathbb{R}^{n\times 1}$ and $\mathbf{K}_{\boldsymbol{\phi}}=(K_{\boldsymbol{\phi}}(\mathbf{x}_i,\mathbf{x}_j))_{1\leq i,j\leq n}\in\mathbb{R}^{n\times n}$. Then, together with the model assumptions \eqref{eq:dataassumption}, \eqref{eq:GPassumption} and \eqref{eq:kernel}, we have the following hierarchical model, \begin{align} y_t|\boldsymbol{\beta},\boldsymbol{\gamma}&\overset{\text{indep.}}{\sim} \text{Poi}(f(t,\beta_t,\gamma_t))\quad\text{for}\quad t=1,\ldots,n,\nonumber\\ {\rm logit }\,\left( \begin{array}{c} \boldsymbol{\beta}\\ \boldsymbol{\gamma}\\ \end{array}\right)&\sim\mathcal{N}_{2n}\left(\left[ \begin{array}{c} \mu_1\mathbf{1}_n\\ \mu_2\mathbf{1}_n\\ \end{array}\right], \tau\left[ \begin{array}{cc} \mathbf{K}_{\boldsymbol{\phi}}&\rho\mathbf{K}_{\boldsymbol{\phi}}\\ \rho\mathbf{K}_{\boldsymbol{\phi}}&\mathbf{K}_{\boldsymbol{\phi}}\\ \end{array}\right]\right),\nonumber\\ \tau&\sim\text{InvGamma}(a,b),\label{eq:prior1}\\ \rho&\sim\text{Beta}(1,b_{\rho}),\label{eq:prior2}\\ \phi_j&\overset{\text{indep.}}{\sim}\text{Beta}(1,b_{\phi})\quad\text{for}\quad j=1,\ldots,d,\label{eq:prior3}\\ \mu_j&\overset{\text{indep.}}{\sim}\mathcal{N}(\alpha_j,\sigma^2_j)\label{eq:prior4}\quad\text{for}\quad j=1,2, \end{align} where \eqref{eq:prior1}, \eqref{eq:prior2}, \eqref{eq:prior3}, \eqref{eq:prior4} are the priors of the parameters $\tau,\rho,\phi_j$ and $\mu_j$, in which InvGamma$(a,b)$ is an inverse gamma distribution with shape parameter $a$ and rate parameter $b$, and Beta$(1,b)$ is a beta distribution with parameters 1 and $b$. 
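To make the joint prior concrete, the following sketch assembles the $2n\times 2n$ prior covariance of $({\rm logit}\,\boldsymbol{\beta}, {\rm logit}\,\boldsymbol{\gamma})$ as a Kronecker product, using the reparameterized correlation \eqref{eq:kernel}; the function and variable names are ours.

```python
import numpy as np

def corr_matrix(X, phi):
    # K(x, x') = prod_j phi_j^{4 (x_j - x'_j)^2}, with phi_j in (0, 1)
    n, d = X.shape
    K = np.ones((n, n))
    for j in range(d):
        D2 = (X[:, j][:, None] - X[:, j][None, :]) ** 2
        K *= phi[j] ** (4.0 * D2)
    return K

def joint_prior_cov(X, phi, rho, tau):
    # tau * [[K, rho K], [rho K, K]] = tau * ([[1, rho], [rho, 1]] kron K)
    K = corr_matrix(X, phi)
    return tau * np.kron(np.array([[1.0, rho], [rho, 1.0]]), K)
```

For $\rho\in(0,1)$ and distinct design points, both Kronecker factors are positive definite, so the joint covariance is as well, which is what makes the prior well defined.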
It is worth noting that the prior on the cross-correlation $\rho$ places positive mass only on the positive reals, which is reasonable because the parameters are commonly found to be positively correlated in the epidemiology literature \citep{roda2020difficult}. \subsection{Posterior Distributions} The goal of this study is to infer the functional parameters $\beta(\mathbf{x})$ and $\gamma(\mathbf{x})$ and subsequently investigate whether the $d$-dimensional factor $\mathbf{x}$ plays a role in varying the basic reproduction number, which is denoted by $\mathcal{R}_0(\mathbf{x}):=\beta(\mathbf{x})/\gamma(\mathbf{x})$. In addition, predicting the number of future infections based on forecast weather and government interventions, say $\mathbf{x}_{n+1}$, is also of great interest. Therefore, the joint posterior distributions of $\beta(\mathbf{x})$, $\gamma(\mathbf{x})$, and the number of future daily infected cases are derived as follows. We first derive the joint posterior distribution of $\beta(\mathbf{x})$ and $\gamma(\mathbf{x})$. Denote the parameters $\boldsymbol{\psi}=(\rho, \boldsymbol{\phi},\mu_1,\mu_2,\tau)$ and $\text{data}=(\mathbf{y}_n,\mathbf{X}_n)$. Then, the posterior distribution given observations can be obtained by \begin{align*} \pi(\beta(\mathbf{x}),\gamma(\mathbf{x}),&\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data})\\ \propto&\pi(\beta(\mathbf{x}),\gamma(\mathbf{x})|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi},\text{data})\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data}), \end{align*} where $\pi(x|y)$ denotes the posterior distribution of $x$ given $y$.
Thus, the joint posterior distribution of $\beta(\mathbf{x})$ and $\gamma(\mathbf{x})$ can be approximated by Markov chain Monte Carlo (MCMC) by drawing the samples from $\pi(\beta(\mathbf{x}),\gamma(\mathbf{x})|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi},\text{data})$ and $\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data})$ iteratively. The posterior $\pi(\beta(\mathbf{x}),\gamma(\mathbf{x})|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi},\text{data})$ can be drawn based on the property of conditional multivariate normal distributions, that is, \begin{align}\label{eq:multinormal} &{\rm logit }\,\left( \begin{array}{c} \beta(\mathbf{x})\\ \gamma(\mathbf{x})\\ \end{array}\right)|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi},{\rm{data}}\\ &\sim \mathcal{N}_2 \left(\left[\begin{array}{c} \mu_1+\mathbf{k}_{\boldsymbol{\phi}}(\mathbf{x})^T\mathbf{K}_{\boldsymbol{\phi}}^{-1}({\rm logit }\,\boldsymbol{\beta}-\mu_1\mathbf{1}_n) \\ \mu_2+\mathbf{k}_{\boldsymbol{\phi}}(\mathbf{x})^T\mathbf{K}_{\boldsymbol{\phi}}^{-1}({\rm logit }\,\boldsymbol{\gamma}-\mu_2\mathbf{1}_n) \end{array}\right], \tau(1-\mathbf{k}_{\boldsymbol{\phi}}(\mathbf{x})^T\mathbf{K}_{\boldsymbol{\phi}}^{-1}\mathbf{k}_{\boldsymbol{\phi}}(\mathbf{x}))\left[\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array}\right]\right),\nonumber \end{align} where $\mathbf{k}_{\boldsymbol{\phi}}(\mathbf{x})=(K_{\boldsymbol{\phi}}(\mathbf{x},\mathbf{x}_1),\ldots,K_{\boldsymbol{\phi}}(\mathbf{x},\mathbf{x}_n))^T$. The MCMC samples of $\beta(\mathbf{x})$ and $\gamma(\mathbf{x})$ can then be obtained by sampling from the multivariate normal distribution of \eqref{eq:multinormal} and taking the inverse of the logit function. 
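As a sketch, the conditional draw in \eqref{eq:multinormal} can be implemented as follows; the design points, current MCMC values of $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$, and the hyperparameters are all placeholders in this illustration.

```python
import numpy as np

def draw_beta_gamma(x, X, logit_beta, logit_gamma, mu1, mu2, tau, rho, phi, rng):
    # correlation vector k_phi(x) and matrix K_phi under the
    # reparameterized kernel K(x, x') = prod_j phi_j^{4 (x_j - x'_j)^2}
    def corr(a, b):
        return float(np.prod(phi ** (4.0 * (a - b) ** 2)))
    kx = np.array([corr(x, xi) for xi in X])
    K = np.array([[corr(xi, xj) for xj in X] for xi in X])
    w = np.linalg.solve(K, kx)                 # K_phi^{-1} k_phi(x)
    mean = np.array([mu1 + w @ (logit_beta - mu1),
                     mu2 + w @ (logit_gamma - mu2)])
    var = tau * max(1.0 - kx @ w, 0.0)         # clip tiny negative values
    cov = var * np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean, cov)
    return 1.0 / (1.0 + np.exp(-z))            # (beta(x), gamma(x)) in (0, 1)
```

The inverse logit at the end guarantees that the returned draws of $\beta(\mathbf{x})$ and $\gamma(\mathbf{x})$ lie in $(0,1)$.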
For the posterior $\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data})$, we have \begin{align}\label{eq:jointdis} &\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\rm{data})\propto\pi({\rm{data}}|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi})\pi(\boldsymbol{\beta},\boldsymbol{\gamma}|\boldsymbol{\psi})\pi(\boldsymbol{\psi})\\ &\propto\exp\left\{-\sum^n_{t=1}f(t,\beta_t,\gamma_t)\right\}\times\prod^n_{t=1}f(t,\beta_t,\gamma_t)^{y_t}\nonumber\\ &\times\exp\left\{-\frac{1}{2\tau}\left( \begin{array}{c} {\rm logit }(\boldsymbol{\beta})-\mu_1\mathbf{1}_n\\ {\rm logit }(\boldsymbol{\gamma})-\mu_2\mathbf{1}_n\\ \end{array}\right)^T\left[ \begin{array}{cc} \mathbf{K}_{\boldsymbol{\phi}}&\rho\mathbf{K}_{\boldsymbol{\phi}}\\ \rho\mathbf{K}_{\boldsymbol{\phi}}&\mathbf{K}_{\boldsymbol{\phi}}\\ \end{array}\right]^{-1}\left( \begin{array}{c} {\rm logit }(\boldsymbol{\beta})-\mu_1\mathbf{1}_n\\ {\rm logit }(\boldsymbol{\gamma})-\mu_2\mathbf{1}_n\\ \end{array}\right)\right\}\nonumber\\ &\times |\mathbf{K}_{\boldsymbol{\phi}}|^{-1}(1-\rho^2)^{-n/2}\tau^{-n-a-1}\exp\{-b/\tau\}(1-\rho)^{b_{\rho}-1}\prod^d_{j=1}(1-\phi_j)^{b_{\phi}-1}\nonumber\\ &\times\exp\left\{-\frac{1}{2}\sum^2_{j=1}\frac{\left(\mu_j-\alpha_j\right)^2}{\sigma_j^2} \right\}.\nonumber \end{align} The samples from this posterior distribution can be drawn by Gibbs sampling with the Metropolis-Hastings algorithm.
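The block covariance in \eqref{eq:jointdis} has the Kronecker form $\left[\begin{smallmatrix}1&\rho\\\rho&1\end{smallmatrix}\right]\otimes\mathbf{K}_{\boldsymbol{\phi}}$, so its inverse and determinant reduce to those of the $2\times 2$ correlation matrix and the $n\times n$ kernel matrix. A small numerical check of this structure (our own illustration; the kernel here is a generic positive-definite stand-in for $\mathbf{K}_{\boldsymbol{\phi}}$):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(6, 2))
# Generic positive-definite kernel matrix standing in for K_phi
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)) + 1e-8 * np.eye(6)
rho = 0.4
R = np.array([[1.0, rho], [rho, 1.0]])

Sigma = np.kron(R, K)                                    # [[K, rho K], [rho K, K]]
Sigma_inv = np.kron(np.linalg.inv(R), np.linalg.inv(K))  # (R^-1) kron (K^-1)
```

The identity $\det(\Sigma)=(1-\rho^2)^n|\mathbf{K}_{\boldsymbol{\phi}}|^2$ is what produces the $|\mathbf{K}_{\boldsymbol{\phi}}|^{-1}(1-\rho^2)^{-n/2}$ normalizing factor in \eqref{eq:jointdis}.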
In particular, similar to \cite{bernardo1998regression} and \cite{brown2018nonparametric}, we use a multivariate normal distribution as a proposal to sample $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$, which draws a proposal of $(\boldsymbol{\beta}',\boldsymbol{\gamma}')$ by \begin{equation}\label{eq:proposal} \left(\begin{array}{c} {\rm logit }\,\boldsymbol{\beta}'\\ {\rm logit }\,\boldsymbol{\gamma}'\\ \end{array}\right)=c\tau^{1/2}\left( \begin{array}{cc} \mathbf{K}_{\boldsymbol{\phi}}&\rho\mathbf{K}_{\boldsymbol{\phi}}\\ \rho\mathbf{K}_{\boldsymbol{\phi}}&\mathbf{K}_{\boldsymbol{\phi}}\\ \end{array}\right)^{1/2}\mathbf{Z}+ \left(\begin{array}{c} {\rm logit }\,\boldsymbol{\beta}\\ {\rm logit }\,\boldsymbol{\gamma}\\ \end{array}\right), \end{equation} where $\mathbf{Z}\sim\mathcal{N}_n(0,\mathbf{I}_n)$ and $\mathbf{I}_n$ is an identity matrix of size $n\times n$, and $c>0$ is a small constant which can be adaptively determined by monitoring the acceptance rate as in \cite{brown2018nonparametric}. For the parameters $\rho$ and $\phi_j$, we use a normal distribution as a proposal, which draws a proposal of $\rho'$ and $\phi'_j$ by $$\log(-\log(\rho'))=\log(-\log(\rho))+\mathcal{N}(0,c_{\rho})$$ and $$\log(-\log(\phi'_j))=\log(-\log(\phi_j))+\mathcal{N}(0,c_{\phi_j}),$$ respectively, where $c_{\rho}$ and $c_{\phi_j}$ are small constants which similarly can be adaptively determined. For the parameters $\mu_1,\mu_2$ and $\tau$, the samples can be directly drawn from their conditional distributions, which are multivariate normal and inverse gamma distributions, respectively. We leave the details of the Gibbs sampling for the posterior $\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data})$ to Appendix \ref{append:samplerdetail}. Now we move to the posterior distribution of the number of future infections. 
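The $\log(-\log)$ random walk above keeps proposals for parameters supported on $(0,1)$, such as $\rho$ and $\phi_j$, inside the unit interval by construction. A minimal sketch (our own; we treat the step-size constant as a variance):

```python
import numpy as np

def propose_unit_interval(p, c, rng):
    """Random-walk proposal on the log(-log) scale: symmetric on the
    transformed scale, and the proposal always stays in (0, 1)."""
    z = np.log(-np.log(p)) + rng.normal(0.0, np.sqrt(c))
    return np.exp(-np.exp(z))
```

With zero step size the transform round-trips exactly, and for any step size the output remains a valid correlation or smoothness parameter.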
Let $\mathbf{x}_{n+1}$ be the forecast weather and government interventions at time $n+1$, and denote $\beta_{n+1}=\beta(\mathbf{x}_{n+1}),\gamma_{n+1}=\gamma(\mathbf{x}_{n+1})$. Then the posterior distribution of the infected number, $y_{n+1}$, given observations satisfies \begin{align*} \pi(y_{n+1},&\beta_{n+1},\gamma_{n+1},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\mathbf{x}_{n+1},\text{data})\\ \propto&\pi(y_{n+1}|\beta_{n+1},\gamma_{n+1})\pi(\beta_{n+1},\gamma_{n+1}|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi},\mathbf{x}_{n+1})\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data}). \end{align*} Similarly, this posterior distribution can be drawn by MCMC sampling, where the samples for $\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data})$ can be drawn as introduced before, and the samples from $\pi(\beta_{n+1},\gamma_{n+1}|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi},\mathbf{x}_{n+1})$ can be similarly drawn from the multivariate normal distribution \eqref{eq:multinormal}. $\pi(y_{n+1}|\beta_{n+1},\gamma_{n+1})$ follows a Poisson distribution with the mean $f(n+1,\beta_{n+1},\gamma_{n+1})$. Thus, the MCMC samples can be drawn iteratively from $\pi(y_{n+1}|\beta_{n+1},\gamma_{n+1})$, $\pi(\beta_{n+1},\gamma_{n+1}|\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi},\mathbf{x}_{n+1})$, and $\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|\text{data})$. \section{Simulation Study}\label{sec:simulation} In this section, simulation studies are conducted to examine the performance of the proposed method. In the simulations, the hyperparameters in the priors \eqref{eq:prior1}, \eqref{eq:prior2}, \eqref{eq:prior3}, and \eqref{eq:prior4} are set as follows.
Similar to \cite{brown2018nonparametric}, the shape parameters $b_{\rho}$ and $b_{\boldsymbol{\phi}}$ are chosen to be 0.1, which places most of the probability mass near one to enforce smoothness of the functional parameters; $a=0.01$ and $b=0.01$ are chosen so that the prior is centered at one with standard deviation $\sqrt{0.01/0.01^2}=10$; for \eqref{eq:prior4} we set $\alpha_1=\alpha_2=0$ and $\sigma^2_1=\sigma^2_2=1$. For the MCMC sampling, 2,000 iterations are performed in a burn-in period, and after that an additional 2,000 MCMC samples are drawn, which are thinned to reduce autocorrelation. Suppose that the observation $y_t$ is simulated from a Poisson distribution with the mean function $f(t,\beta(x), \gamma(x))=5\beta(x)+\gamma(x)(t/10)^2$, where $x$ is a one-dimensional factor in $[0,1]$. Let $\beta(x)=\sin(3x)\exp(-x)+0.2$ and $\gamma(x)=\sin(3x)$. The left panel of Figure \ref{fig:simulation_setup} demonstrates the two functions $\beta(x)$ and $\gamma(x)$, and it can be seen that these two curves share some similarity overall, which suggests that modeling the dependence between the two functions is warranted. We generate $x_1,\ldots,x_{40}$ from a uniform distribution, and randomly generate $n=40$ observations, $y_1,\ldots,y_{40}$. The right panel of Figure \ref{fig:simulation_setup} shows the random samples as dots, where the solid line is the true mean function $f(t,\beta(x), \gamma(x))$. We use the first 30 samples, $y_1,\ldots,y_{30}$, as the training dataset and the remaining 10 samples as the test dataset. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{simulation_setup.pdf} \caption{Simulation setting. The left panel demonstrates $\beta(x)$ (solid line) and $\gamma(x)$ (dashed line), and the right panel demonstrates the true mean function $f(t,\beta(x), \gamma(x))$ (solid line) and the simulated data (dots).} \label{fig:simulation_setup} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=\textwidth]{simulation_correlation.pdf} \caption{Posteriors of $\beta(x)$ (left panel) and $\gamma(x)$ (right panel). The gray lines are the MCMC draws, the solid red lines are the posterior mean, and the dashed lines are the true functions. } \label{fig:simulation_correlation} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{simulation_prediction.pdf} \caption{Posteriors of test data $y_{31},\ldots,y_{40}$ based on a joint GP prior (left panel) and an independent GP prior (right panel). The gray lines are the MCMC draws, the solid red lines are the posterior mean, and the dashed lines are the true functions. } \label{fig:simulation_prediction} \end{figure} Figure \ref{fig:simulation_correlation} shows the posterior draws of $\beta(x)$ and $\gamma(x)$. It can be seen that the posterior means recover the true functions very well. The predictions on the test dataset are shown in the left panel of Figure \ref{fig:simulation_prediction}, which shows that the posterior mean is reasonably close to the true function. These results demonstrate that the proposed method performs well for models with functional parameters in terms of estimation and prediction. To further examine the performance, we compare with a model under the assumption of independent GPs, which is close to the assumption in \cite{plumlee2016calibrating} and \cite{brown2018nonparametric}. Figure \ref{fig:simulation_nocorrelation} illustrates the posterior draws of $\beta(x)$ and $\gamma(x)$, which appear less close to the true functions than those from our proposed model and have wider credible intervals. The right panel of Figure \ref{fig:simulation_prediction} illustrates the posterior draws of the predictions on the test data based on independent GPs, which also shows that the joint GP prior produces more accurate predictions than independent GPs. \begin{figure}[h!]
\centering \includegraphics[width=\textwidth]{simulation_nocorrelation.pdf} \caption{Posteriors of $\beta(x)$ (left panel) and $\gamma(x)$ (right panel) based on an independent GP prior. The gray lines are the MCMC draws, the solid red lines are the posterior mean, and the dashed lines are the true functions. } \label{fig:simulation_nocorrelation} \end{figure} \section{Application to COVID-19}\label{sec:application} In this section, we leverage the proposed model to analyze the COVID-19 virus spread among the eight largest metropolitan areas in the United States (US). In particular, the impacts of weather and government interventions on the virus transmissibility will be explored, and the forecast of daily infected cases based on these factors will be provided. The data source is briefly introduced here. The daily COVID-19 cases are obtained at the US county level from the data repository provided by New York Times \citep{smith2020coronavirus}. The population sizes are obtained from the census bureau website, which also can be found in \cite{yu2020impact}. The historical weather data and the weather forecast are collected from the Weather Underground \citep{wg2020}, which include the daily average temperature, humidity, wind speed, pressure, and precipitation. The information of government interventions is obtained from New York Times \citep{jasmine2020coronavirus} and local media, where we categorize the interventions into five levels: (1) no intervention; (2) all businesses are open with masks required and some capacity limitations; (3) all industries resume operations but some indoor services, such as bars and restaurants, remain closed; (4) industries resume operations with severe restrictions and capacity limitations; (5) all non-essential businesses are closed. Scatterplots for every pair of factors are demonstrated in Figure \ref{fig:pairsplot}, where no obvious relationship appears between any pair of factors. \begin{figure}[t!]
\centering \includegraphics[width=\textwidth]{corplot.pdf} \caption{Scatterplots of input factors, where the panels below the diagonal are the scatterplots for every pair of factors, the panels above the diagonal show the correlations, and the diagonal panels are the histograms.} \label{fig:pairsplot} \end{figure} Now we are ready to apply the proposed model to the data, where the setting of the MCMC sampling is similar to the one in Section \ref{sec:simulation}. Consider the confirmed cases from the day that the first case was reported to November 11 as the training data, and the cases from November 12 to 25 as the test data. Since the actual infectious period for COVID-19 is not available and it varies by individual and situation, as suggested by the Centers for Disease Control and Prevention and \cite{wilson2020weather}, we assume an infectious period of 11 days from the actual infection to the confirmation of the positive test result. In other words, we assume that the actual infection occurs 11 days prior to the confirmation date. Alternatives for estimating the infectious period are discussed in Section \ref{sec:conclusion}. The input factor is a 6-dimensional variable, i.e., $\mathbf{x}\in \mathbb{R}^6$, including 5 variables representing weather data and one variable representing government intervention levels. The MCMC samples of the basic reproduction number can be obtained from the MCMC samples of $\beta(\mathbf{x})$ and $\gamma(\mathbf{x})$ by computing $\mathcal{R}_0(\mathbf{x})=\beta(\mathbf{x})/\gamma(\mathbf{x})$. Since it is hard to visualize the function $\mathcal{R}_0(\mathbf{x})$ over a six-dimensional $\mathbf{x}$, similar to \cite{welch1992screening}, we use a functional ANOVA decomposition \citep{Hoeffding1948ACO,sobol1993sensitivity,santner2003design} for $\mathcal{R}_0(\mathbf{x})$ and plot its overall mean and main effects.
That is, suppose that $\mathbf{x}$ follows a distribution $F(\mathbf{x})$ where $F(\mathbf{x})=F_1(x_1)\times F_2(x_2)\times\cdots\times F_d(x_d)$, then the overall mean and the main effects of the function $\mathcal{R}_0(\mathbf{x})$ can be obtained by \begin{equation}\label{eq:fanova} m_0:=\int_{\Omega}\mathcal{R}_0(\mathbf{x}){\rm{d}}F(\mathbf{x})\quad\text{and}\quad m_j(x_j)=\int_{\Omega_{-j}}(\mathcal{R}_0(\mathbf{x})-m_0){\rm{d}}F_{-j}(\mathbf{x}_{-j}), \end{equation} respectively, where $\int_{\Omega_{-j}}\cdots{\rm{d}}F_{-j}(\mathbf{x}_{-j})$ indicates integration over all variables except $x_j$ and $F_{-j}(\mathbf{x}_{-j})=\prod^d_{i\neq j}F_i(x_i)$. Since the MCMC samples of $\mathcal{R}_0(\mathbf{x})$ are available for any $\mathbf{x}\in\Omega$, and the integration in \eqref{eq:fanova} can be approximated by Monte-Carlo integration \citep{caflisch1998monte}, the samples of the posterior distributions of $m_0$ and $m_j(x_j)$ can be naturally drawn via a Monte-Carlo sampling method. This is similar to \cite{le2014bayesian} for estimating the Sobol indices through a surrogate model that accounts for both the integration errors and the surrogate model uncertainty. The boxplots of the overall means of $\mathcal{R}_0(\mathbf{x})$ are shown in Figure \ref{fig:mean_effect}. It can be seen that, among these eight cities, Chicago has the highest basic reproduction number, which implies that each existing infection in Chicago causes more new infections than in the other cities, while an existing infection in New York causes fewer new infections than in the others. Before illustrating the main effects, the sensitivity analysis \citep{sobol1993sensitivity} is adopted to determine which input factors are responsible for the most variation in the basic reproduction number. The result is shown in Figure \ref{fig:main_effect_sa}.
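The Monte-Carlo approximation of \eqref{eq:fanova} can be sketched as follows. This is our own illustration, assuming independent uniform inputs; in practice it is applied to each MCMC draw of $\mathcal{R}_0$, which yields posterior samples of $m_0$ and $m_j(x_j)$.

```python
import numpy as np

def anova_effects(f, j, xj, d, n_mc=4000, rng=None):
    """Monte-Carlo estimates of the overall mean m0 and the main
    effect m_j(x_j), assuming x ~ U(0,1)^d with independent coordinates."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.uniform(size=(n_mc, d))
    m0 = f(X).mean()                 # m0 = int f(x) dF(x)
    Xj = X.copy()
    Xj[:, j] = xj                    # fix coordinate j, integrate out the rest
    return m0, f(Xj).mean() - m0     # m_j(x_j) = E[f | x_j] - m0
```

For instance, for $f(\mathbf{x})=x_1+2$ the overall mean is $2.5$ and the main effect of $x_1$ at $x_1=1$ is $0.5$, which the sketch recovers up to Monte-Carlo error.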
Although no unique factor dominates the others for all of the cities in terms of sensitivity index, it appears that government interventions have made stronger impacts than weather factors on the virus spread in most of the cities, especially in Baltimore and Chicago. On the other hand, some cities, such as Los Angeles, Houston, and Saint Louis, have shown evidence that temperature has played a crucial role in explaining the variation of the basic reproduction number. \begin{figure}[h!] \centering \includegraphics[width=0.6\textwidth]{mean_effect.pdf} \caption{Overall mean of basic reproduction number.} \label{fig:mean_effect} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{main_effect_sa.pdf} \caption{Main effect indices of basic reproduction number.} \label{fig:main_effect_sa} \end{figure} The main effects of $\mathcal{R}_0(\mathbf{x})$ are demonstrated in Figure \ref{fig:main_effect}. As shown in the sensitivity analysis, the intervention factor has larger variations in the main effects, ranging from -0.1 to 0.3, whereas the main effects of the weather factors range from -0.1 to 0.1. Among these six factors, temperature and government intervention both have negative effects on the virus spread for all of the cities, while the other factors have no common trend. In particular, except for Houston, it appears that a 10$^\circ$F decrease in temperature increases the basic reproduction number by roughly 0.025. This result is quite promising in the sense that most of the existing methods cannot directly quantify the effect of temperature on the basic reproduction number. The intervention factor shows that the basic reproduction number can be effectively reduced if governments implement more restrictions to combat the COVID-19 outbreak, especially for Baltimore and Chicago, for which the basic reproduction number decreases by approximately 0.4 from no intervention to the strictest restriction.
This finding is consistent with the results in some recent work on the effect of government intervention for COVID-19, such as \cite{flaxman2020estimating,haug2020ranking,haldar2020effect,wang2020mitigate}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{main_effect.pdf} \caption{Main effects of basic reproduction number.} \label{fig:main_effect} \end{figure} We further investigate the interaction effects of the basic reproduction number. Particularly, we focus on the interaction effects between the intervention factor, which is controllable by governments, and the five weather factors, which are uncontrollable. The sensitivity indices \citep{sobol1993sensitivity} of these five interaction effects are first computed to compare their relative importance. To save space, only the interaction effects for New York are demonstrated here, as other cities have no obvious interaction effects. The sensitivity indices and the interaction plot with the highest index are shown in the left and right panels of Figure \ref{fig:interaction_effect_plot}, respectively. It can be seen that the interaction effect between temperature and government interventions has the highest sensitivity index, and from the interaction plot of the two factors, it appears that when governments implement more restrictions, the effect of temperature on the virus spread tends to be milder. This result suggests that as the weather gets colder, policymakers may need to implement more restrictions to mitigate the pandemic. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{interaction_plot.pdf} \caption{Interaction effect indices (left) and interaction plot (right) between temperature and government intervention for New York.} \label{fig:interaction_effect_plot} \end{figure} Last but not least, we validate the proposed model by performing predictions on the test data from November 12 to 25.
The prediction results of the eight cities are shown in Figure \ref{fig:newcase_prediction}. The predictions are reasonably accurate over the 14-day period. Particularly, in New York, Los Angeles, San Francisco, and Saint Louis, the infected cases tend to increase over the 14-day period and our predictions successfully capture the trend. This shows strong empirical justification for our model specification. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{newcase_prediction.pdf} \caption{Prediction performance of the proposed model. The black dots are the true confirmed numbers, the green dashed lines are the fitted values from July 1 to November 11, the gray lines and the solid red lines are the MCMC draws and posterior means for the test data from November 12 to 25, respectively.} \label{fig:newcase_prediction} \end{figure} \section{Concluding Remarks}\label{sec:conclusion} How the weather and government interventions affect the spread of a disease has been an important question but remains unclear in the literature. A new statistical model that incorporates the prominent SIR model is employed to study the impacts of these factors on the COVID-19 virus transmissibility among eight US metropolitan areas. The Gaussian process modeling and the sensitivity analysis for the functional parameters enable us to investigate the main and interaction effects of the factors, which could lead to a new intervention strategy for policymakers. This study shows that, among six environmental factors, government interventions have the strongest impact on the COVID-19 virus spread in most of the cities. Temperature has been found to have a negative effect in all of the cities. Other weather factors, such as wind speed and pressure, do not show common effects among the eight cities.
New York City has shown a strong interaction effect between temperature and interventions, which suggests that more restrictions are necessary to mitigate the outbreak as the weather gets colder. Although we found some potential associations between weather and virus transmissibility, it is worth emphasizing that these associations may not directly imply causation, meaning that there might be some lurking/causal variables which are correlated with these factors and make the associations appear stronger. For instance, as recent studies have shown (e.g., \cite{wilson2020weather,soucy2020estimating}), individual mobility may have a direct impact on the COVID-19 spread, which could be strongly correlated with weather factors. Therefore, incorporating the information of individual mobility and estimating the causal effects of mobility and weather is worth investigating in future work. Furthermore, it is also worthwhile to study how to estimate the actual infection period more accurately. It is conceivable to consider a more sophisticated epidemic model, such as the SEIR model (Susceptible-Exposed-Infectious-Recovered), which accounts for the incubation period through the exposed compartment \citep{kermack1927contribution,Wulancet}. We leave this to future work.
\begin{appendix} \setcounter{section}{0} \setcounter{equation}{0} \def\theequation{\Alph{subsection}.\arabic{equation}} \def\thesubsection{\Alph{subsection}} \section{Sampler Details}\label{append:samplerdetail} We introduce the sampler for the distribution $\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|{\rm{data}})$, which can be drawn iteratively from $\pi(\boldsymbol{\beta}^{(k+1)},\boldsymbol{\gamma}^{(k+1)}|\boldsymbol{\psi}^{(k)},{\rm{data}})$ and $\pi(\boldsymbol{\psi}^{(k+1)}|\boldsymbol{\beta}^{(k)},\boldsymbol{\gamma}^{(k)},{\rm{data}})$ via Gibbs sampling, where the superscripts $k+1$ and $k$ indicate the $(k+1)$-th and $k$-th iterations. Denote the conditional distribution \eqref{eq:jointdis} of $\pi(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi}|{\rm{data}})$ by $g(\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\psi})$. The distribution $\pi(\boldsymbol{\beta}^{(k+1)},\boldsymbol{\gamma}^{(k+1)}|\boldsymbol{\psi}^{(k)},{\rm{data}})$ can be drawn via the Metropolis-Hastings algorithm with the proposal \eqref{eq:proposal}, that is, we accept $\boldsymbol{\beta}^{(k+1)}=\boldsymbol{\beta}'$ and $\boldsymbol{\gamma}^{(k+1)}=\boldsymbol{\gamma}'$ with the probability $$ \min\left\{1,\frac{g(\boldsymbol{\beta}',\boldsymbol{\gamma}',\boldsymbol{\psi}^{(k)})}{g(\boldsymbol{\beta}^{(k)},\boldsymbol{\gamma}^{(k)},\boldsymbol{\psi}^{(k)})}\right\}. $$ For the distribution $\pi(\boldsymbol{\psi}^{(k+1)}|\boldsymbol{\beta}^{(k)},\boldsymbol{\gamma}^{(k)},{\rm{data}})$, where $\boldsymbol{\psi}^{(k+1)}=(\rho^{(k+1)},\boldsymbol{\phi}^{(k+1)},\\\tau^{(k+1)},\mu_1^{(k+1)},\mu_2^{(k+1)})$, we discuss each of the parameters separately.
The parameters $\rho^{(k+1)},\boldsymbol{\phi}^{(k+1)}$ can also be drawn via the Metropolis-Hastings algorithm, where the proposals of $\rho'$ and $\phi'_j$ are drawn by $\log(-\log(\rho'))=\log(-\log(\rho))+\mathcal{N}(0,c_{\rho})$ and $\log(-\log(\phi'_j))=\log(-\log(\phi_j))+\mathcal{N}(0,c_{\phi_j})$ for each $j$. Thus, we accept $\rho^{(k+1)}=\rho'$ and $\phi_j^{(k+1)}=\phi'_j$ with the probabilities $$ \min\left\{1,\frac{g(\boldsymbol{\beta}^{(k)},\boldsymbol{\gamma}^{(k)},\rho',\boldsymbol{\phi}^{(k)},\tau^{(k)},\mu_1^{(k)},\mu_2^{(k)})}{g(\boldsymbol{\beta}^{(k)},\boldsymbol{\gamma}^{(k)},\rho^{(k)},\boldsymbol{\phi}^{(k)},\tau^{(k)},\mu_1^{(k)},\mu_2^{(k)})}\right\}, $$ and $$ \min\left\{1,\frac{g(\boldsymbol{\beta}^{(k)},\boldsymbol{\gamma}^{(k)},\rho^{(k)},\phi_1^{(k)},\ldots,\phi'_j,\ldots,\phi^{(k)}_d,\tau^{(k)},\mu_1^{(k)},\mu_2^{(k)})}{g(\boldsymbol{\beta}^{(k)},\boldsymbol{\gamma}^{(k)},\rho^{(k)},\boldsymbol{\phi}^{(k)},\tau^{(k)},\mu_1^{(k)},\mu_2^{(k)})}\right\}, $$ respectively.
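One full update of $\rho$ (proposal plus acceptance) can be sketched as follows. This is our own illustration; \texttt{log\_g} stands for the logarithm of $g$ as a function of $\rho$ with all other parameters held fixed, and the names are ours.

```python
import numpy as np

def mh_step_rho(rho, log_g, c_rho, rng):
    """One Metropolis-Hastings update of rho using the log(-log)
    random-walk proposal; returns the new state (accepted proposal
    or the current value)."""
    z = np.log(-np.log(rho)) + rng.normal(0.0, np.sqrt(c_rho))
    rho_new = np.exp(-np.exp(z))       # proposal, always in (0, 1)
    if np.log(rng.uniform()) < log_g(rho_new) - log_g(rho):
        return rho_new
    return rho
```

The same step applies to each $\phi_j$ with its own step-size constant $c_{\phi_j}$.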
The samples of $(\mu_1^{(k+1)},\mu_2^{(k+1)})$ can be drawn from their conditional posterior distribution, which is a 2-dimensional multivariate normal distribution with the mean $$\left(\frac{\mathbf{1}^T_n\mathbf{K}_{\boldsymbol{\phi}}^{-1}\mathbf{1}_n}{\tau}\left[\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array}\right]^{-1}+\left[\begin{array}{cc} 1/\sigma^2_1 & 0 \\ 0 & 1/\sigma^2_2 \end{array}\right]\right)^{-1}\left[\begin{array}{c} \alpha_1/\sigma^2_1+\mathbf{1}^T_n\mathbf{K}_{\boldsymbol{\phi}}^{-1}({\rm logit }\,\boldsymbol{\beta}-\rho\,{\rm logit }\,\boldsymbol{\gamma})/(\tau(1-\rho^2)) \\ \alpha_2/\sigma^2_2+ \mathbf{1}^T_n\mathbf{K}_{\boldsymbol{\phi}}^{-1}({\rm logit }\,\boldsymbol{\gamma}-\rho\,{\rm logit }\,\boldsymbol{\beta})/(\tau(1-\rho^2)) \end{array}\right] $$ and the covariance matrix $$ \left(\frac{\mathbf{1}^T_n\mathbf{K}_{\boldsymbol{\phi}}^{-1}\mathbf{1}_n}{\tau}\left[\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array}\right]^{-1}+\left[\begin{array}{cc} 1/\sigma^2_1 & 0 \\ 0 & 1/\sigma^2_2 \end{array}\right]\right)^{-1}, $$ where $\boldsymbol{\beta},\boldsymbol{\gamma},\rho,\boldsymbol{\phi},\tau$ are the MCMC samples at the previous iteration. The sample of $\tau^{(k+1)}$ can also be drawn from its conditional posterior distribution, which is an inverse gamma distribution with the shape parameter $a+n$ and the rate parameter $$ b+\frac{1}{2}\left[\begin{array}{c} {\rm logit }\,\boldsymbol{\beta} -\mu_1\mathbf{1}_n \\ {\rm logit }\,\boldsymbol{\gamma} -\mu_2\mathbf{1}_n \end{array}\right]^T\left(\left[\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array}\right]^{-1}\otimes\mathbf{K}_{\boldsymbol{\phi}}^{-1}\right)\left[\begin{array}{c} {\rm logit }\,\boldsymbol{\beta} -\mu_1\mathbf{1}_n \\ {\rm logit }\,\boldsymbol{\gamma} -\mu_2\mathbf{1}_n \end{array}\right], $$ where $\otimes$ is the Kronecker product, and $\boldsymbol{\beta},\boldsymbol{\gamma},\rho,\boldsymbol{\phi},\mu_1,\mu_2$ are the MCMC samples at the previous iteration. \end{appendix} \bibliographystyle{imsart-nameyear}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Tensor fields such as scalar and vector fields are ubiquitous in engineering and science. Consider for example an atmospheric climate model. There are scalars (temperature, pressure, humidity, density), vectors (radiation, wind), and higher order tensors (stress, strain rate tensors) at every point in 3D. We may want to learn the time derivative of all fields for prediction or the time averaged mapping between particular fields for analysis. Yet there currently is no canonical neural network architecture for learning transformations on tensor fields. One challenge is preserving rotational equivariance. If we rotate the coordinate frame, the tensor representations ought to rotate accordingly. Conventional approaches using convolutional neural networks (CNNs) fail here: we cannot predict what would happen to the output after rotating the input. This results in models that violate system symmetry, act unpredictably, generalize poorly, and lack data efficiency. There has been recent progress in well-behaved neural networks on tensor fields. We unify concepts from Euclidean neural networks (E3NN, previously known as ``tensor field networks'') \cite{thomas2018tensor}, tensor basis neural networks (TBNN), and Fourier neural operators (FNO) \cite{li2021fourier}. E3NN generalizes convolutions to tensor features by defining appropriate tensor products over spherical harmonic projections. However, E3NN is a graph convolutional network, acting on features over a discrete point cloud. Further, as with CNNs, long-range or domain-wide convolutions scale poorly as $O(N^2)$. In contrast, we extend E3NN-style convolutions to tensor fields defined over all of $\mathbb{R}^3$ represented on a regular grid. Furthermore, similar to FNO, we use the convolution theorem to achieve $O(N \log{N})$ scaling for long-range convolutions, since the Fast Fourier Transform (FFT) scales as such.
Akin to TBNN, we additionally implement an equivariant attention layer for pairwise coupling between tensor features at a point, but realize it via tensor products. \section{Architecture} Our goal is to predict the operation of an equivariant operator $F$ between sets of tensor fields $\{ \mathbf{u_i} \}$ and $\{ \mathbf{v_j} \}$ over $\mathbb{R}^3$. The input and output sets can in general be of different cardinalities and contain tensor fields of arbitrary and different ranks. \begin{equation} \begin{gathered} F(\{ \mathbf{u_i} \})= \{ \mathbf{v_j} \} \\ \text{s.t.} \ F(\{T \mathbf{u_i} \})= \{T \mathbf{v_j} \} \ \forall \ \text{translations and rotations $T$}\\ \end{gathered} \end{equation} Our neural network architecture represents $F$ as a composition of a tensor field convolution linear layer $L$, a local product bilinear layer $A$ (for pairwise coupling), and a local nonlinear layer $N$. Canonically $F=NAL$, but different layers may be composed together as needed. All layers use tensor products and norm operations which are equivariant, to be detailed below. \subsection{Linear layer $L$: tensor field convolutions} \subsubsection{Definition} From linear systems theory, the output $\mathbf{v}$ of any linear translation-invariant system can be written as the convolution of the input $\mathbf{u}$ and a characteristic impulse response $\mathbf{h}$. In linear PDE theory, the impulse response for an operator's inverse is known as the operator's Green's function. In general, the variables are not restricted to scalar fields (rank $l=0$) but can also be vector fields (rank $l=1$) and higher order tensor fields. If the system is also rotationally equivariant, the impulse response tensor field must be a product of a scalar radial function and a spherical harmonic tensor $\mathbf{Y}_l$ \cite{thomas2018tensor}.
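The low-order spherical harmonic tensors and their equivariance can be checked numerically; a minimal sketch (our own):

```python
import numpy as np

def Y(l, r):
    """Spherical harmonic tensors Y_l: scalar 1, unit vector r_hat, and
    the traceless symmetric matrix 3 r_hat r_hat^T - I for l = 0, 1, 2."""
    rhat = r / np.linalg.norm(r)
    if l == 0:
        return 1.0
    if l == 1:
        return rhat
    if l == 2:
        return 3.0 * np.outer(rhat, rhat) - np.eye(3)
    raise ValueError("only l = 0, 1, 2 implemented")
```

Under a rotation $R$, $\mathbf{Y}_1$ transforms as a vector ($\mathbf{Y}_1(R\mathbf{r}) = R\,\mathbf{Y}_1(\mathbf{r})$) and $\mathbf{Y}_2$ as a rank-2 tensor ($\mathbf{Y}_2(R\mathbf{r}) = R\,\mathbf{Y}_2(\mathbf{r})\,R^T$), which is what makes filters of this form equivariant.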
$\mathbf{Y}_l$ only has angular dependence and is defined in our context as scalar 1, vector $\mathbf{\hat{r}}$, and traceless symmetric matrix $\mathbf{3 \hat{r}} \mathbf{\hat{r}}' - \mathbb{I}$ for $l=0,1,2$, respectively. We introduce the tensor field convolution layer which learns as its convolution filter kernel the impulse response of a linear equivariant system. Concretely, a tensor field convolution relating $\mathbf{u_i}$ to $\mathbf{v_j}$ needs a filter tensor field of an appropriate rank $l$ and a tensor product suited to the ranks. Common tensor products are elementary linear algebra operations. They can be generalized for higher order tensor fields via Clebsch-Gordan products. An output may depend on multiple inputs whose convolutions are summed via weights $w_{ij}$. \begin{equation} \begin{split} L(\{ \mathbf{u_i} \})_j & =\sum\limits _i w_{ij} \mathbf{u_i} \ast \mathbf{h_{ij}}\\ & =\sum\limits _i w_{ij} \int \limits _{\mathbb{R}^3} \mathbf{u_i}(\mathbf{\tilde{r}}) \otimes _{(l_{\mathbf{u}_i}, l_{\mathbf{h}_{ij}}) \rightarrow l_{\mathbf{v}_j}} \mathbf{h_{ij}}(\mathbf{r}-\mathbf{\tilde{r}}) d\mathbf{\tilde{r}}\\ & =\sum\limits _i w_{ij} \int \limits _{\mathbb{R}^3} R(|\mathbf{r}-\mathbf{\tilde{r}}|) \ \mathbf{u_i}(\mathbf{\tilde{r}}) \otimes _{(l_{\mathbf{u}_i}, l_{\mathbf{h}_{ij}}) \rightarrow l_{\mathbf{v}_j}} \mathbf{Y_{l_{ij}}}(\mathbf{r}-\mathbf{\tilde{r}}) \ d\mathbf{\tilde{r}} \\ \end{split} \end{equation} \begin{equation} \mathbf{v}=\mathbf{u} \otimes _{(l_u,l_h)\rightarrow l_v} \mathbf{h}= \begin{cases} u \mathbf{h} & \text{scalar product for $l_u=0$ and $l_h=l_v$}\\ \mathbf{u} \cdot \mathbf{h} & \text{dot product for $l_v=0$ and $l_u=l_h$}\\ \mathbf{u} \times \mathbf{h} & \text{cross product for $(1,1)\rightarrow 1$}\\ \mathbf{u} \mathbf{h} & \text{traceless symmetric matrix vector product for $(2,1)\rightarrow 1$}\\ \end{cases} \end{equation} \begin{equation} \mathbf{Y_l(\mathbf{\hat{r}})}= \begin{cases} 1 & \text{for $l=0$}\\ \mathbf{\hat{r}} & \text{for
$l=1$}\\ \mathbf{3 \hat{r}} \mathbf{\hat{r}}' - \mathbb{I} & \text{for $l=2$}\\ \end{cases} \end{equation} \subsubsection{Examples} For example, consider the convolution mapping a charge distribution (scalar field, rank $l=0$) to the electric field (vector field $l=1$) via Gauss's Law. The filter is a vector field with radial function $1/r^2$ and rank 1 spherical harmonic $\mathbf{\hat{r}}$. The tensor product is scalar vector multiplication. Other examples: \begin{equation} \begin{split} L(\{ \mathbf{u} \}) & = \mathbf{u} \ast \mathbf{h}\\ & = \mathbf{u} \ast [R(r)\mathbf{Y_{l_h}}(\mathbf{\hat{r}})]\\ \end{split} \end{equation} \begin{center} \begin{tabular}{ |c|c|c|c| } \hline Operator $L$ & $R(r)$ & $\mathbf{Y_{l_h}}(\mathbf{\hat{r}})$ & $(l_u,l_h)\rightarrow l_v$ \\ \hline $I$ & $\delta(r)$ & 1 & $(l,0)\rightarrow l$ \\ $\bigtriangledown$ & $\delta '(r)$ & $\mathbf{\hat{r}}$ & $(0,1)\rightarrow 1$ \\ $\bigtriangledown \cdot$ & $\delta '(r)$ & $\mathbf{\hat{r}}$ & $(1,1)\rightarrow 0$ \\ $\bigtriangledown \times$ & $\delta '(r)$ & $\mathbf{\hat{r}}$ & $(1,1)\rightarrow 1$ \\ Diffusion & Gaussian & 1 & $(0,0)\rightarrow 0$ \\ $(\bigtriangledown^2)^{-1}$ & $1/r$ & 1 & $(0,0)\rightarrow 0$ \\ $(\bigtriangledown \cdot)^{-1}$ & $1/r^2$ & $\mathbf{\hat{r}}$ & $(0,1)\rightarrow 1$ \\ \hline \end{tabular} \end{center} \subsubsection{Efficient scaling of tensor field convolution: direct and Fourier methods} Tensor field convolutions are numerically implemented as separate scalar convolutions of the component scalar valued fields. Short ranged convolutions (eg differential operators) are done directly while long range convolutions are efficiently done in Fourier space via the convolution theorem. FFT allows $N\log{N}$ scaling in the latter. 
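As a concrete toy illustration of the Fourier route (a sketch assuming a periodic $8^3$ grid with randomly drawn component fields, not the paper's implementation), one can check that the FFT-based circular convolution of two scalar components agrees with the direct sum:

```python
import numpy as np

# Toy check of the convolution theorem on a periodic 8^3 grid: the
# FFT-based circular convolution of two scalar components must agree
# with the direct O(N^2) sum over all voxel shifts.
rng = np.random.default_rng(0)
n = 8
u = rng.standard_normal((n, n, n))  # one scalar component of a tensor field
h = rng.standard_normal((n, n, n))  # one scalar component of a filter

# Direct circular convolution: (u * h)[x] = sum_y u[y] h[x - y]
direct = np.zeros_like(u)
for y in np.ndindex(n, n, n):
    direct += u[y] * np.roll(h, shift=y, axis=(0, 1, 2))

# Fourier route: pointwise product of spectra, O(N log N) via the FFT
fourier = np.fft.ifftn(np.fft.fftn(u) * np.fft.fftn(h)).real

assert np.allclose(direct, fourier)
```

A full tensor field convolution repeats this scalar check once per Clebsch-Gordan component pair, as in the formula below.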
Let $\mathcal{F}$ denote the Fourier transform, $\mathbf{e_i}$ the $i$th basis of the result tensor field, $\mathbf{u}[i]$ the $i$th scalar field component of tensor field $\mathbf{u}$, and $C_{mnp}$ the Clebsch-Gordan coefficient of the appropriate tensor product. \begin{equation} \begin{split} \mathbf{u} \ast \mathbf{h} = & \sum \limits _p \sum \limits _{m,n} C_{mnp} \mathbf{u}[m] \ast \mathbf{h}[n] \mathbf{e_p}\\ = & \sum \limits _p \mathcal{F}^{-1} \left( \sum \limits _{m,n} C_{mnp} \mathcal{F}(\mathbf{u}[m]) \mathcal{F}(\mathbf{h}[n]) \right) \mathbf{e_p}\\ \end{split} \end{equation} \subsection{Pairwise attention layer $A$: tensor field local product} At each point, an output tensor field is a linear combination of the appropriate pointwise tensor products between pairs of input tensor fields. A pair may contain the same input field twice or an identity scalar field (to facilitate recovering the identity transformation). A pair is dropped for an output field if there is no appropriate tensor product matching the ranks. \begin{equation} \begin{split} A(\{ \mathbf{u_i} \})_j (\mathbf{r}) & =\sum\limits _{a,b} w_{jab} \mathbf{u_a}(\mathbf{r}) \otimes_{(l_{\mathbf{u}_a}, l_{\mathbf{u}_b}) \rightarrow l_{\mathbf{v}_j}} \mathbf{u_b}(\mathbf{r})\\ \end{split} \end{equation} \subsection{Nonlinear layer $N$: local norm nonlinearity} The nonlinear activation is local and pointwise: it rescales a tensor field as a function of the field's norm at each point. The rescaling function can be an affine map followed by a rectified linear unit (ReLU). Because norms are rotation invariant, overall equivariance is preserved. \begin{equation} N(\{ \mathbf{u_i} \})_j(\mathbf{r}) = \sigma(a_j |\mathbf{u_j}(\mathbf{r})| + b_j) \mathbf{\hat{u_j}}(\mathbf{r}) \end{equation} \subsection{Neural network parameters} We may learn $F$ by parameterizing it for pairs of training data $( \mathbf{u_i}, \mathbf{v_j} )$.
A few parameters are $a_j,b_j,w_{ij}$, while most parameters encode the radial functions of the tensor field convolutions. For the latter, one scheme is a sum of Gaussians, decaying power laws, and derivative operators. Alternatively, we may use a neural network with $r$ as input. \begin{equation} R_{ij}(r)= \sum \limits _n A_n e^{-{r^2}/{\sigma_n^2}} + \sum \limits _n B_n r^{-|n|} + \sum \limits _n C_n \partial^n_r \delta(r) \end{equation} \section{Experiments} \subsection{Results} We train on data generated by PDEs and other physical problems including quantum chemistry. We achieve almost perfect fits on linear PDEs, e.g., Poisson's equation, Gauss's law and Stokes flow. This is unsurprising, as the tensor field convolutions are tantamount to learning Green's functions, which completely characterize linear systems. We also fit almost perfectly some nonlinear PDE functions with pairwise coupling, such as the advection time derivative. Tensor convolutions can encompass differential operators, while local tensor products in equivariant attention represent coupling. We achieve good but not perfect performance for predicting the quantum electronic density from nuclear positions, a highly non-linear problem. The electronic density is a quantum observable and the central quantity in density functional theory (DFT), a widely used method for calculating physical and chemical properties. We expect performance to improve from stacking multiple linear and nonlinear layers.
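The near-exact fits on linear PDEs follow because the filter kernel can represent the Green's function directly. As a sketch of this (our own illustrative construction with an assumed grid size and kernel regularization, not code from the paper), convolving a unit point charge with the Gauss's-law filter $\hat{\mathbf{r}}/r^2$ recovers a radially outward $1/r^2$ field:

```python
import numpy as np

# Assumed toy setup: apply the Gauss's-law filter h(r) = r_hat / r^2
# (radial function 1/r^2, rank-1 harmonic r_hat) to a unit point charge.
n = 32
ax = np.fft.fftfreq(n) * n               # periodic coordinates 0, 1, ..., -1
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
r[0, 0, 0] = np.inf                      # regularize the singular voxel
h = np.stack([X, Y, Z]) / r**3           # components of r_hat / r^2

charge = np.zeros((n, n, n))
charge[0, 0, 0] = 1.0                    # unit point charge at the origin

# The (0,1)->1 tensor product reduces to one scalar convolution per
# vector component, each evaluated via the convolution theorem.
E = np.stack([
    np.fft.ifftn(np.fft.fftn(charge) * np.fft.fftn(h[i])).real
    for i in range(3)
])

# At (5,0,0) the field points along +x with magnitude 1/5^2 = 0.04
assert np.isclose(E[0, 5, 0, 0], 0.04, atol=1e-6)
```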
\begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline Problem & Type & Equation & Training error & Test error\\ \hline Poisson's equation & Linear PDE & $\bigtriangledown^2 v = u$ & <0.2\% & <1\% \\ Gauss's law & Linear PDE & $\bigtriangledown \cdot \mathbf{v} = u$ & <0.2\% & <1\% \\ Stokeslet creeping flow & Linear PDE & $\bigtriangledown^2 \mathbf{v_1} - \bigtriangledown v_2 = -\mathbf{u}$ & <0.2\% & <1\% \\ Diffusion-advection-reaction time derivative & Nonlinear PDE & & <0.2\% & <1\% \\ Prediction of DFT ground-state densities & Nonlinear PDE & & <2\% & <6\% \\ \hline \end{tabular} \end{center} \subsection{Experimental setups} \subsubsection{PDE solutions} To generate data, time-independent PDEs (Poisson's, Gauss's, Stokes flow) are solved using Green's functions on a finite difference grid. Time-dependent PDEs (diffusion-advection) have their time derivative directly evaluated as the output. In all cases we use a $16\times 16\times 16$ domain. Input scalar and vector fields are initialized randomly per voxel, while their values outside the domain are implicitly set to 0. \subsubsection{Quantum chemistry: DFT} Density-functional theory (DFT) is a widely used quantum-mechanical approach to model the electronic structure of matter from first principles~\cite{Martin2004}. Based on DFT approaches many chemical and physical properties can be accurately predicted, such that the suitability of a compound for applications can be simulated \emph{in silico} before costly manufacture. An established and highly successful application of DFT is the discovery of novel materials by systematic high-throughput screening through a design space of candidate compounds~\cite{Jain2016,Alberi2019,Luo2021}. In such methods a limiting factor is the cost of the required DFT simulations, with the self-consistent field~(SCF) procedure being the most time-consuming component.
In this procedure one solves a fixed-point problem for the electron density starting from an initial guess, commonly built from precomputed atomic data. In this experiment we investigate using our proposed neural-network model as a surrogate for the SCF, i.e., to predict the converged fixed-point density by feeding the NN a standard (cheaply computable) initial guess as well as the molecular structure. The initial guess densities as well as the converged DFT densities used for training, testing and validation were generated using the density-functional toolkit (DFTK)~\cite{DFTK} on a set of small aliphatic and aromatic molecules, using an energy cutoff of 15 Hartrees in a plane-wave basis and a single $k$-point. In the future we want to develop this idea further to (a) provide improved guess methods for DFT calculations, such that fewer costly SCF steps are needed, or (b) use it as an SCF surrogate, such that all costly SCF iterations could be avoided. We expect this to provide a novel route to enable ever larger design-space searches and to accelerate the computational discovery of novel materials. \printbibliography \end{document}
\section{Introduction} Recent years have witnessed steadily increasing attention to the coordinated motion of mobile agents across a broad range of disciplines. Applications can be found in many areas such as biology or ecology (e.g., aggregation behavior of animals in \cite{amr, warb,bre}), physics (e.g., collective motion of particles in \cite{vic95,vic00}), and engineering (e.g., formation control of robots in \cite{lin05,hong, olf04,olf06}). The studies of multiple autonomous agents focus on understanding the general mechanisms and interconnection rules of cooperative phenomena as well as their potential applications in various engineering problems. In a multi-agent system, agents are usually coupled and interconnected with some simple rules, including nearest neighbor rules \cite{vic95, olf04}. A computer graphics model to simulate collective behavior of multiple agents was presented in \cite{rey87}. With a proposed simple model and neighbor-based rules, flocking and schooling were successfully simulated and analyzed for self-propelled particles in \cite{vic95}. Self-organized aggregation behavior of particle groups with leaders has also attracted growing interest. The coordinated motion of a group of motile particles with a leader has been analyzed in \cite{mu}, while leader-follower networks have also been considered in \cite{wang}. Recently, to design distributed flocking algorithms, Olfati-Saber has introduced a theoretical framework including a virtual leader/follower architecture, which is different from the conventional leader/follower architecture (\cite{olf06}). Sometimes, the coupling delays between agents have to be taken into consideration in practical problems (\cite{ear, koz, olf04}). For example, \cite{ear} proposed a stability criterion for a network of specific oscillators with time-delayed coupling. In \cite{olf04}, the authors studied consensus problems of continuous-time agents with interconnection communication delays.
The dynamics of each agent is first order, and the graph describing the interconnection topology of these agents is undirected. In this paper, a leader-following consensus problem for multiple agents with coupling time delays is discussed. Here the considered dynamics of each agent is second order, the coupling time delay is time-varying, and the interconnection graph of the agents is directed. The convergence analysis of the consensus problem with directed graphs (or digraphs for short) is more challenging than that with undirected graphs due to the complexity of directed graphs. The analysis becomes harder still if time delay is involved. For time-delay systems, modeled by delayed differential equations, an effective way to deal with convergence and stability problems is Lyapunov-based; Lyapunov-Krasovskii functionals or Lyapunov-Razumikhin functions are often used in the analysis \cite{hale}. The paper is organized as follows. Section 2 presents the multi-agent model and some preliminaries. Then, two cases, fixed coupling topology and switched coupling topology, are considered. The leader-following convergence of the models in these two cases is analyzed in Section 3 and Section 4, respectively. Here, Lyapunov-Razumikhin functions are employed, along with the analysis of linear matrix inequalities. Finally, some concluding remarks are given in Section 5. By convention, $R$ and $Z^{+}$ represent the real number set and the positive integer set, respectively; $I_n$ is an $n\times n $ identity matrix; for any vector $x$, $x^{T}$ denotes its transpose; $||\cdot||$ denotes the Euclidean norm. \section{Model Description} We consider a group of $n+1$ identical agents, in which an agent indexed by 0 is assigned as the ``leader" and the other agents indexed by $1,...,n$ are referred to as ``follower-agents" (or ``agents" when no confusion arises). The motion of the leader is independent and the motion of each follower is influenced by the leader and the other followers.
A continuous-time model of the $n$ agents is described as follows: \begin{equation} \label{move1} \ddot {x}_i = u_i,\quad i=1,...,n, \end{equation} or equivalently, \begin{equation} \label{move2} \left\{ {\begin{array}{l} \dot {x}_i = v_i, \\ \dot {v}_i = u_i, \\ \end{array}} \right. \end{equation} where the state $x_i\in R^{m}$ can be the position vector of agent $i$, $v_i\in R^{m}$ its velocity vector and $u_i\in R^{m}$ its coupling input for $i=1,...,n$. Denote \begin{equation*} x=\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\x_n \end{pmatrix},\quad v=\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\v_n \end{pmatrix}, \quad u=\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\u_n \end{pmatrix} \in R^{mn}. \end{equation*} Without loss of generality, in the study of leader-following stability, we take $m=1$ for simplicity in the sequel. Then (\ref{move2}) can be rewritten as \begin{equation} \label{move3} \left\{ {\begin{array}{l} \dot {x} = v,\\ \dot {v} = u \in R^n.\\ \end{array}} \right. \end{equation} The dynamics of the leader is described as follows: \begin{equation} \label{moveleader} \dot x_0=v_0\in R, \end{equation} where $v_0$ is the desired constant velocity. If each agent is regarded as a node, then the coupling topology is conveniently described by a simple graph (basic concepts and notations of graph theory can be found in \cite{bang, god, olf04}). Let $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$ be a weighted digraph of order $n$ with the set of nodes $\mathcal{V}=\{1,2,...,n\}$, the set of arcs $\mathcal{E}\subseteq \mathcal{V} \times \mathcal{V}$, and a weighted adjacency matrix $A=[a_{ij}]\in R^{n \times n}$ with nonnegative elements. The node indices belong to a finite index set $\mathcal{I}=\{1,2,...,n\}$. An \emph{arc} of $\mathcal{G}$ is denoted by $(i,j)$, which starts at $i$ and ends at $j$. The element $a_{ij}$ associated with an arc of the digraph is positive, i.e. $a_{ij}>0 \Leftrightarrow (i,j) \in \mathcal{E}$.
Moreover, we assume $a_{ii}=0$ for all $i \in \mathcal{I}$. The set of neighbors of node $i$ is denoted by $\mathcal{N}_{i}=\{j\in \mathcal{V}:(i,j)\in \mathcal{E}\}$. A \emph{cluster} is any subset $\mathcal{J}\subset \mathcal{V}$ of the nodes of the digraph. The set of neighbors of a cluster $\mathcal{J}$ is defined by $\mathcal{N}_{\mathcal{J}}=\bigcup_{i\in \mathcal{J}}\mathcal{N}_{i}=\{j\in \mathcal{V}:i\in \mathcal{J},(i,j)\in \mathcal{E}\}$. A \emph{path} in a digraph is a sequence $i_0,i_1,\cdots,i_f$ of distinct nodes such that $(i_{j-1},i_j)$ is an arc for $j=1,2,\cdots,f,f\in Z^+$. If there exists a path from node $i$ to node $j$, we say that $j$ is reachable from $i$. A digraph $\mathcal{G}$ is strongly connected if there exists a path between any two distinct nodes. A \emph{strong component} of a digraph is an induced subgraph that is maximal, subject to being strongly connected. Moreover, if $\sum_{j \in \mathcal{N}_{i}}a_{ij}=\sum_{j \in \mathcal{N}_{i}}a_{ji}$ for all $i=1,...,n$, then the digraph $\mathcal{G}$ is called \emph{balanced}, a notion first introduced in \cite{olf04}. The diagonal matrix $D=diag\{ d_1,...,d_n\}\in R^{n\times n}$ is the degree matrix of $\mathcal{G}$, whose diagonal elements are $d_i=\sum_{j \in \mathcal{N}_{i}}a_{ij}$ for $i=1,...,n$. Then the Laplacian of the weighted digraph $\mathcal{G}$ is defined as \begin{equation} L=D-A\in R^{n\times n}. \end{equation} To study the leader-following problem, we also consider another graph $\bar{\mathcal{G}}$ associated with the system consisting of the $n$ agents and one leader (labelled $0$). Similarly, we define a diagonal matrix $B\in R^{n\times n}$ to be the leader adjacency matrix associated with $\bar{\mathcal{G}}$ with diagonal elements $b_i\;(i\in \mathcal{I})$, where $b_i=a_{i0}$ for some constant $a_{i0}>0$ if node $0$ (i.e., the leader) is a neighbor of node $i$ and $b_i=0$ otherwise.
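The constructions above can be sketched numerically for a small hypothetical weighted digraph (the weights below are arbitrary illustrative values):

```python
import numpy as np

# Hypothetical 3-node weighted digraph: a_ij > 0 iff (i,j) is an arc.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])
D = np.diag(A.sum(axis=1))      # degree matrix, d_i = sum_j a_ij
L = D - A                       # Laplacian of the digraph
B = np.diag([1.0, 0.0, 0.0])    # only agent 1 is a neighbor of the leader

# Every row of L sums to zero, so the all-ones vector lies in its kernel.
assert np.allclose(L @ np.ones(3), 0.0)
```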
For $\bar{ \mathcal{G}}$, if there is a path in $\bar {\mathcal{G}}$ from every node $i$ in $\mathcal{G}$ to node $0$, we say that node $0$ is globally reachable in $\bar{ \mathcal{G}}$, which is much weaker than strong connectedness. {\bf Example 1}. As shown in Figs. 1 and 2, neither $\bar{\mathcal{G}}_1$ nor $\bar{\mathcal{G}}_2$ is strongly connected, but both have a globally reachable node $0$. Suppose that the weight of each arc is $1$ in both cases. Obviously, $\mathcal{G}_2$ with $\mathcal{V}=\{1,2,3,4\}$ is balanced. The Laplacians of $\mathcal{G}_1$ and $\mathcal{G}_2$ as well as the leader adjacency matrices $B_1,\;B_2$ are easily obtained as follows: \begin{align*} L_1=\left(\begin{array}{cccc} 1&-1&0&0\\-1&1&0&0\\0&0&0&0\\0&-1&-1&2 \end{array}\right), L_2=\left(\begin{array}{cccc} 1&-1&0&0\\-1&1&0&0\\0&0&1&-1\\0&0&-1&1 \end{array}\right), B_1=B_2=\left(\begin{array}{cccc} 1&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&0 \end{array}\right). \end{align*} \begin{center}{{\includegraphics[scale=0.2]{fig1.eps} }\hskip 1.5cm {\includegraphics[scale=0.2]{fig2.eps} } {Fig.1 $\bar{\mathcal{G}}_1$ and $\mathcal{G}_1$\hskip 2.6cm Fig.2 $\bar{\mathcal{G}}_2$ and $\mathcal{G}_2$}} \end{center} The following lemma was obtained in \cite{lin05, more}. \begin{lemma} \label{lemgr1} A digraph $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$ has a globally reachable node if and only if every pair of nonempty, disjoint subsets $\mathcal{V}_1,\mathcal{V}_2\subset \mathcal{V}$ satisfies $\mathcal{N}_{\mathcal{V}_1}\bigcup\mathcal{N}_{\mathcal{V}_2}\neq \emptyset$. \end{lemma} \begin{remark} Let $S_1,S_2,...,S_p$ be the strong components of $\mathcal{G}= (\mathcal{V}, \mathcal{E})$ and $\mathcal{N}_{S_i}$ be the neighbor sets for $S_i,i=1,...,p, p> 1$. From Lemma \ref{lemgr1}, a digraph $\mathcal{G} $ has a globally reachable node if and only if every pair of $S_i,S_j$ satisfies $\mathcal{N}_{S_i}\bigcup\mathcal{N}_{S_j}\neq \emptyset$.
If the graph is strongly connected, then each node is globally reachable from every other node. \end{remark} The next lemma shows an important property of Laplacian $L$ (\cite{lin05}). \begin{lemma} \label{lemgr2} The digraph $\mathcal{G}$ has a globally reachable node if and only if Laplacian $L$ of $\mathcal{G}$ has a simple zero eigenvalue (with eigenvector $\textbf{1}=(1,...,1)^{T}\in R^{n}$). \end{lemma} Due to the coupling delays, each agent cannot instantly get the information from others or the leader. Thus, for agent $i\;(i=1,...,n)$, a neighbor-based coupling rule can be expressed as follows: \begin{equation} \label{prot} u_i(t)=\sum_{j \in \mathcal{N}_{i}(\sigma)}a_{ij}(x_j(t-r)-x_i(t-r)) +b_i(\sigma)(x_0(t-r)-x_i(t-r))+k(v_0-v_i(t)),\; k>0, \end{equation} where the time-varying delay $r(t)>0$ is a continuously differentiable function with \begin{equation} \label{const1} 0<r<\tau, \end{equation} $\sigma: [0,\infty) \to \mathcal{I}_{\Gamma}=\{1,...,N\}$ ($N$ denotes the total number of all possible digraphs) is a switching signal that determines the coupling topology. The set $\Gamma=\{\mathcal{G}_1,...,\mathcal{G}_N\}$ is a finite collection of graphs with a common node set $\mathcal{V}$. If $\sigma$ is a constant function, then the corresponding interconnection topology is fixed. In addition, $\mathcal{N}_{i}(\sigma)$ is the index set of neighbors of agent $i$ in the digraph $\mathcal{G}_{\sigma}$ while $a_{ij}\;(i,j=1,...,n)$ are elements of the adjacency matrix of $\mathcal{G}_{\sigma}$ and $b_i(\sigma)\;(i=1,...,n)$ are the diagonal elements of the leader adjacency matrix associated with $\bar{\mathcal{G}}_\sigma$. 
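The delayed rule above can be checked in simulation. The sketch below uses a forward-Euler discretization with the fixed topology of Example 1, a constant delay $r=0.03$, and gain $k=3$ (values consistent with the numerical example given later); the time step, horizon, and initial conditions are our own arbitrary choices:

```python
import numpy as np

# Forward-Euler simulation of the delayed protocol for the topology of
# Example 1; k = 3 and r = 0.03 match the later numerical example,
# while dt, the horizon, and the initial conditions are arbitrary.
L = np.array([[ 1., -1.,  0., 0.],
              [-1.,  1.,  0., 0.],
              [ 0.,  0.,  0., 0.],
              [ 0., -1., -1., 2.]])
B = np.diag([1.0, 0.0, 1.0, 0.0])
H = L + B
k, v0, dt = 3.0, 1.0, 1e-3
d = 30                       # delay r = d*dt = 0.03, below the bound tau
steps = 60000                # simulate 60 seconds

x = np.zeros((steps + 1, 4))
v = np.zeros((steps + 1, 4))
x[0] = [2.0, -1.0, 0.5, 3.0]
for t in range(steps):
    td = max(t - d, 0)
    # u = -(L+B)(x(t-r) - x0(t-r) 1) + k (v0 1 - v(t)),  with x0(t) = v0 t
    u = -H @ (x[td] - v0 * td * dt) + k * (v0 - v[t])
    v[t + 1] = v[t] + dt * u
    x[t + 1] = x[t] + dt * v[t]

final_err = np.abs(x[steps] - v0 * steps * dt).max()
assert final_err < 0.1       # followers track the moving leader
```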
With (\ref{prot}), (\ref{move2}) can be written in matrix form: \begin{equation} \label{model1} \begin{cases} \dot{x}=v,\\ \dot{v}=-(L_\sigma+B_\sigma)x(t-r)-k(v-v_0\mathbf{1})+B_\sigma\mathbf{1}x_0(t-r), \end{cases} \end{equation} where $L_\sigma$ is the Laplacian of $\mathcal{G}_\sigma$ and $B_\sigma$ is the leader adjacency matrix associated with $\bar{\mathcal{G}}_\sigma$. In the sequel, we will demonstrate the convergence of the dynamical system (\ref{model1}); that is, $x_i \to x_0,v_i \to v_0$ as $t \to \infty$. \section{Fixed Coupling Topology} In this section, we will focus on the convergence analysis of a group of dynamic agents with fixed interconnection topology. In this case, the subscript $\sigma$ can be dropped. Let $\bar{x}=x-x_0\mathbf{1},\bar{v}=v-v_0\mathbf{1}$. Because $-(L+B)x(t-r)+B\textbf{1}x_{0}(t-r) =-(L+B)\bar x(t-r)$ (invoking Lemma \ref{lemgr2}), we can rewrite system (\ref{model1}) as \begin{equation} \label{model2} \dot{\epsilon}=C\epsilon(t)+E\epsilon(t-r), \end{equation} where $$ \epsilon=\left(\begin{array}{c} \bar x\\ \bar v\end{array}\right),\; C=\begin{pmatrix} 0&I_n\\ 0&-kI_n \end{pmatrix},\; E=\begin{pmatrix} 0&0\\ -H&0 \end{pmatrix},\; H=L+B. $$ Before the discussion, we introduce some basic concepts and results for time-delay systems (\cite{hale}). Consider the following system: \begin{equation} \begin{cases} \label{delay}\dot x=f(x_t),\quad t > 0 ,\\ x(\theta)=\varphi(\theta),\;\theta \in [-\tau,0], \end{cases} \end{equation} where $x_t(\theta)=x(t+\theta),\forall \theta\in [-\tau,0]$ and $f(0)=0$. Let $C([-\tau,0],R^{n})$ be the Banach space of continuous functions defined on the interval $[-\tau, 0]$, taking values in $R^{n}$ with the topology of uniform convergence, and with the norm $||\varphi||_c = \max\limits_{\theta \in [-\tau, 0]}||\varphi(\theta)||$. The following result is for the stability of system (\ref{delay}) (the details can be found in \cite{hale}).
\begin{lemma} \label{lem1}(Lyapunov-Razumikhin Theorem) Let $\phi_1,\phi_2$, and $\phi_3$ be continuous, nonnegative, nondecreasing functions with $\phi_1(s)>0,\phi_2(s)>0,\phi_3(s)>0$ for $s >0$ and $\phi_1(0)=\phi_2(0)=0$. For system (\ref{delay}), suppose that the function $f: C([-\tau,0],R^{n}) \to R^{n}$ takes bounded sets of $C([-\tau,0],R^{n})$ into bounded sets of $R^n$. Suppose there is a continuous function $V(t,x)$ such that \begin{equation} \label{cond1}\phi_1(||x||) \leq V(t,x) \leq \phi_2(||x||),\;t\in R, \;x\in R^{n}. \end{equation} If, in addition, there exists a continuous nondecreasing function $\phi(s)$ with $\phi(s)>s,\; s>0$ such that \begin{equation} \label{cond2} \dot{V}(t,x)|_{(\ref{delay})} \leq -\phi_3(||x||),\quad \mbox{if}\;\; V(t+\theta,x(t+\theta))<\phi(V(t,x(t))),\;\theta \in [-\tau,0], \end{equation} then the solution $x=0$ is uniformly asymptotically stable. \end{lemma} Usually, $V(t,x)$ is called a Lyapunov-Razumikhin function if it satisfies (\ref{cond1}) and (\ref{cond2}) in Lemma \ref{lem1}. \begin{remark} \label{rem2} The Lyapunov-Razumikhin theorem indicates that it is unnecessary to require that $\dot V(t,x)$ be non-positive for all initial data in order to have stability of system (\ref{delay}). In fact, one only needs to consider the initial data if a trajectory of equation (\ref{delay}) starting from these initial data is ``diverging" (that is, $V(t+\theta, x(t+\theta))< \phi(V(t,x(t)))$ for all $\theta \in [-\tau,0]$ in (\ref{cond2})). \end{remark} A matrix $A$ is said to have \emph{property SC} (\cite{horn}) if, for every pair of distinct integers $\hbar,\ell$ with $1 \leq \hbar,\ell \leq n$, there is a sequence of distinct integers $\hbar=i_1,i_2,...,i_{j-1}, i_j=\ell,1 \leq j \leq n$ such that all of the matrix entries $a_{i_1i_2},a_{i_2i_3},...,a_{i_{j-1}i_j}$ are nonzero. In fact, it is obvious that, if $\mathcal{G}$ is strongly connected, then its adjacency matrix $A$ has \emph{property SC}.
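Property SC amounts to irreducibility of the matrix, which can be tested numerically: an $n\times n$ matrix $A$ is irreducible iff $(I+|A|)^{n-1}$ has no zero entry (a standard characterization). A small sketch with hypothetical example digraphs:

```python
import numpy as np

# Numerical test of property SC (equivalently, irreducibility): an
# n x n matrix A is irreducible iff (I + |A|)^(n-1) is entrywise
# positive. The example digraphs below are hypothetical.
def has_property_sc(A: np.ndarray) -> bool:
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + np.abs(A), n - 1)
    return bool(np.all(M > 0))

cycle = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])   # adjacency of a strongly connected 3-cycle
split = np.array([[0., 1., 0.],
                  [1., 0., 0.],
                  [0., 0., 0.]])   # node 3 is disconnected from nodes 1, 2

assert has_property_sc(cycle) and not has_property_sc(split)
```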
Moreover, a matrix is called a positive stable matrix if its eigenvalues have positive real parts. Note that $H=L+B$ plays a key role in the convergence analysis of system (\ref{model2}). The following lemma shows a relationship between $H$ and the connectedness of the graph $\bar {\mathcal{G}}$ (as defined in Section 2). \begin{lemma} \label{lemgr3} The matrix $H=L+B$ is positive stable if and only if node $0$ is globally reachable in $\bar {\mathcal{G}}$. \end{lemma} Proof: (Sufficiency) Based on the Ger\v{s}gorin disk theorem (\cite{horn}), all the eigenvalues of $H$ are located in the union of $n$ discs: \begin{equation*} Ger(H)=\bigcup_{i=1}^{n}\{z\in R^{2}:|z-d_{i}-b_i| \leq \sum_{j \neq i}a_{ij}\}. \end{equation*} However, for the graph $\mathcal{G}$, $d_{i}=\sum_{j\neq i}a_{ij}$. Thus, every disc with radius $d_i$ lies in the closed right half of the complex plane, and hence every eigenvalue of $H$ is either zero or has a positive real part. Since node $0$ is globally reachable, there exists at least one $b_i>0$. Therefore, at least one Ger\v{s}gorin circle does not pass through the origin. The following two cases are considered to prove the sufficient condition: \begin{description} \item Case (i) {\it $\mathcal{G}$ has a globally reachable node}: Let $S_1,...,S_p$ ($p\in Z^+$) be the strong components of $\mathcal{G}$. If $p=1$, $\mathcal{G}$ is strongly connected. Then its adjacency matrix $A$ has \emph{property SC}. Since $D+B$ is a diagonal matrix with nonnegative diagonal entries, $H$ still has \emph{property SC}. By the Taussky theorem (\cite{horn}), if zero is an eigenvalue of $H$, it can only be a boundary point of $Ger(H)$, and then every Ger\v{s}gorin circle must pass through zero, which leads to a contradiction. Hence, zero is not an eigenvalue of $H$. If $p>1$, then there is one strong component, say $S_1$, having no neighbor set by Lemma \ref{lemgr1}.
We rearrange the indices of the $n$ agents such that the Laplacian of $\mathcal{G}$ takes the form \begin{equation} \label{lap00} L=\begin{pmatrix}L_{11} & 0\\L_{21}& L_{22}\end{pmatrix}, \end{equation} where $L_{11}\in R^{\kappa\times \kappa}\;(\kappa <n)$ is the Laplacian of the component $S_1$. From Lemma \ref{lemgr2}, zero is a simple eigenvalue of $L_{11}$ and $L$, while $L_{22}$ is nonsingular. Since node $0$ is globally reachable, the block matrix $B_1\neq 0$ with $B=diag\{B_1,B_2\}$. Similar to the case when $p=1$, we conclude that zero is not an eigenvalue of $L_{11}+B_1$, and is also not an eigenvalue of $H$. \item Case (ii) {\it $\mathcal{G}$ has no globally reachable node}: Let $S_1,...,S_p$ be the strong components with $\mathcal{N}_{S_i}=\emptyset,i=1,...,p, p>1$ by Lemma \ref{lemgr1}. Since $\bigcup_{i=1}^{p}\mathcal{V}(S_i) \subset \mathcal{V}(\mathcal{G})$, the Laplacian associated with $\mathcal{G}$ can be transformed to the following form: \begin{equation} \label{lap02} L=\left(\begin{array}{cccc} L_{11}&&&\\ &\ddots&&\\ &&L_{pp}&\\ L_{p+1,1}&\cdots&L_{p+1,p}&L_{p+1,p+1} \end{array}\right), \end{equation} where $L_{ii}$ is the Laplacian associated with $S_i$ for $i=1,...,p$. One can easily verify that $L_{p+1,p+1}$ is nonsingular. Since node $0$ is globally reachable, $B_i\neq 0$ for $i=1,...,p$, where $B_i$, corresponding to $L_{ii}$, are diagonal blocks of $B$. Similar to the proof in Case (i), we can obtain that zero is not an eigenvalue of $H_i$ or $H$. \end{description} (Necessity) If node $0$ is not globally reachable in $\bar{\mathcal{G}}$, then we also have: \begin{description} \item Case (i) {\it $\mathcal{G}$ has a globally reachable node}: As discussed before, assume $S_1$ has no neighbor set, and then we have (\ref{lap00}), where $L_{11}\in R^{\kappa\times \kappa}\; (\kappa\in Z^+)$ is the Laplacian of $S_1$. Invoking Lemma \ref{lemgr2}, zero is a simple eigenvalue of $L_{11}$ and $L$, while $L_{22}$ is nonsingular.
By the assumption that node $0$ is not globally reachable in $\bar{\mathcal{G}}$, the block matrix $B_1=0$ with $B=diag\{B_1,B_2\}$. Therefore, zero is a simple eigenvalue of $L_{11}+B_1$, and is also a simple eigenvalue of $H$. This leads to a contradiction. \item Case (ii) {\it $\mathcal{G}$ has no globally reachable node}: As discussed before, we have (\ref{lap02}). By the assumption that node $0$ is not globally reachable in $\bar{\mathcal{G}}$, there exists at least one $B_i=0$ for $i=1,...,p$, where $B_i$, corresponding to $L_{ii}$, are diagonal blocks of $B$. Thus, $H_i$ and $H$ have more than one zero eigenvalue. This implies a contradiction. \end{description}\hfill\rule{4pt}{8pt} Therefore, if node $0$ is globally reachable in $\bar{\mathcal{G}}$, $H$ is positive stable, and from the Lyapunov theorem, there exists a positive definite matrix $\bar P\in R^{n\times n}$ such that \begin{equation} \label{lyaequ}\bar P H+H^{T}\bar P=I_n. \end{equation} Let $\bar\mu=\max\{\mbox{eigenvalues of} \;\bar P HH^T\bar P\}$ and let $\bar \lambda$ be the smallest eigenvalue of $\bar P$. Now we give the main result as follows. \begin{theorem} \label{thm1} For system (\ref{model2}), take \begin{equation} \label{kstar} k>k^*=\frac{\bar{\mu}}{2\bar\lambda}+1. \end{equation} Then, when $\tau$ is sufficiently small, \begin{equation} \label{conclu}\lim_{t \to \infty}\epsilon(t)=0, \end{equation} if and only if node $0$ is globally reachable in $\bar{\mathcal{G}}$. \end{theorem} Proof: (Sufficiency) Since node $0$ is globally reachable in $\bar{\mathcal{G}}$, $H$ is positive stable and $\bar P$ is a positive definite matrix satisfying (\ref{lyaequ}). Take a Lyapunov-Razumikhin function $V(\epsilon)=\epsilon^{T}P\epsilon$, where $$ P=\left(\begin{array}{cc}k\bar P&\bar P\\\bar P&\bar P \end{array}\right)\qquad(k>1) $$ is positive definite. Then we consider $\dot V(\epsilon)|_{(\ref{model2})}$.
By the \emph{Leibniz-Newton} formula, \begin{align*} \epsilon(t-r)&=\epsilon(t)-\int_{-r}^{0}\dot \epsilon(t+s)ds\\ &=\epsilon(t)-C\int_{-r}^{0} \epsilon(t+s)ds-E\int_{-2r}^{-r} \epsilon(t+s)ds. \end{align*} Thus, from $E^2=0$, the delayed differential equation (\ref{model2}) can be rewritten as \begin{equation*} \dot \epsilon=F\epsilon-EC\int_{-r}^{0}\epsilon(t+s)ds, \end{equation*} where $F=C+E$. Note that $2a^{T}b \leq a^{T}\Psi a+b^{T}\Psi^{-1}b$ holds for any appropriate positive definite matrix $\Psi$. Then, with $a=-C^TE^TP\epsilon,b= \epsilon(t+s)$ and $\Psi=P^{-1}$, we have $$ \dot{V}|_{(\ref{model2})}=\epsilon^{T}(F^{T}P+PF) \epsilon-2\epsilon^TPEC\int_{-r}^{0} \epsilon(t+s)ds $$ $$ \leq \epsilon^{T}(F^{T}P+PF) \epsilon+r\epsilon^TPECP^{-1}C^TE^TP\epsilon+\int_{-r}^{0} \epsilon^T(t+s)P\epsilon(t+s)ds. $$ Take $\phi(s)=qs$ for some constant $q>1$. In the case of \begin{equation} \label{cond3}V(\epsilon(t+\theta))<qV(\epsilon(t)),\;-\tau\leq \theta \leq 0, \end{equation} we have $$ \dot{V} \leq-\epsilon^{T}Q\epsilon+r\epsilon^T(PECP^{-1}C^TE^TP+qP)\epsilon, $$ where $$ Q=-(F^TP+PF)=\begin{pmatrix}I_n&H^T\bar P\\\bar PH&2(k-1)\bar P\end{pmatrix}. $$ $Q$ is positive definite if $k$ satisfies (\ref{kstar}), according to Lemma \ref{lemgr3} and the Schur complement theorem (\cite{horn}). Let $\lambda_{min}$ denote the minimum eigenvalue of $Q$. If we take \begin{equation} \label{bound1} r<\tau=\frac{\lambda_{min}}{||PECP^{-1}C^TE^TP||+q||P||}, \end{equation} then $\dot V(\epsilon)\leq -\eta \epsilon^T\epsilon$ for some $\eta>0$. Therefore, the conclusion follows by Lemma \ref{lem1}. (Necessity) Since system (\ref{model2}) is asymptotically stable, the eigenvalues of $F$ have negative real parts, which implies that $H$ is positive stable. By Lemma \ref{lemgr3}, node $0$ is globally reachable in $\bar{\mathcal{G}}$.
\hfill\rule{4pt}{8pt} \begin{remark} \label{rem3} In the proof of Theorem \ref{thm1}, we have obtained a finite bound on the considered time-varying delay, namely $\tau$ in (\ref{bound1}), even though only ``$\tau$ is sufficiently small" is mentioned in Theorem \ref{thm1}. \end{remark} \begin{remark} Obviously, (\ref{conclu}) still holds if the time delay is constant. Moreover, if the system (\ref{move2}) is free of time delay (that is, $r\equiv 0$), then the coupling rule (\ref{prot}) becomes $$ u_i(t)=\sum_{j \in \mathcal{N}_{i}(\sigma)}a_{ij}(x_j(t)-x_i(t))+b_i(\sigma)(x_0(t)-x_i(t))+k(v_0-v_i(t)), $$ which is consistent with the nearest neighbor rules in \cite{olf04}. \end{remark} For illustration, we give a numerical example with the interconnection graph given in Fig. 1. It is not hard to obtain $$ \bar \mu =0.3139,\;\bar \lambda =0.1835,\;k^*=2.7106, $$ $$ \lambda_{min}=0.3325,\;q=1.0500,\;\tau=0.0334, $$ $$ \bar P=\left(\begin{array}{cccc} 0.5379 & 0.5758 & 0.0439 & 0.0227\\ 0.5758 & 1.1667 & 0.1091 & 0.0909\\ 0.0439 & 0.1091 & 0.5833 & 0.0833\\ 0.0227 & 0.0909 & 0.0833 & 0.2500 \end{array}\right). $$ Take $k=3$ and the time-varying delay $r(t)=0.0300|\cos(t)|$ in the simulation. Fig. 3 shows the simulation results for both position errors and velocity errors, while Fig. 4 shows the trajectories of the four agents and that of the leader. \begin{center}{\includegraphics[scale=0.4]{fig3.eps}\includegraphics[scale=0.4]{fig4.eps}}\\ {\small Fig. 3. Leader-following errors of four agents with the coupling topology shown in Fig.1} \end{center} \begin{center}{\includegraphics[scale=0.4]{fig5.eps}\includegraphics[scale=0.4]{fig6.eps}}\\ {\small Fig. 4. Trajectories of four agents and the leader with the coupling topology shown in Fig.1} \end{center} \section{Switched Coupling Topology} Consider system (\ref{model1}) with switched coupling topology.
Still taking $\bar{x}=x-x_0\mathbf{1},\bar{v}=v-v_0\mathbf{1}$, we have \begin{equation} \label{model4} \dot{\epsilon}=C\epsilon(t)+E_\sigma\epsilon(t-r), \end{equation} where $\sigma$ is the switching signal as defined in Section 2, and $$ E_\sigma=\begin{pmatrix} 0&0\\ -H_\sigma&0 \end{pmatrix}, \quad H_\sigma=L_\sigma+B_\sigma. $$ We first study the matrix $H_\sigma=L_\sigma+B_\sigma$. \begin{lemma} \label{lemgr4} Suppose $\mathcal{G}_\sigma$ is balanced. Then $H_\sigma+H_{\sigma}^{T}$ is positive definite if and only if node $0$ is globally reachable in $\bar {\mathcal{G}}$. \end{lemma} Proof: (Necessity) The proof is quite trivial and omitted here. (Sufficiency) Because $\mathcal{G}_\sigma$ is balanced, it is strongly connected if it has a globally reachable node. Then from Theorem 7 in \cite{olf04}, $\frac{1}{2}(L_\sigma+L_\sigma^T)$ is a valid Laplacian matrix with a single zero eigenvalue. After some manipulations, it is not difficult to obtain that $\frac{1}{2}(L_\sigma+L_\sigma^T)+B_\sigma$ is positive definite (the details can be found in \cite{hong}) and so is $H_\sigma+H_\sigma^T$. If ${\cal G}_\sigma$ has no globally reachable node, then there are no arcs between any pair of distinct strong components, and we can renumber the nodes so that the Laplacian associated with $\mathcal{G}_\sigma$ has the form \begin{equation} L_\sigma=\left(\begin{array}{cccc} L_{11}(\sigma)&&&\\ &L_{22}(\sigma)&&\\ &&\ddots&\\ &&&L_{pp}(\sigma) \end{array}\right) \end{equation} where each $L_{ii}(\sigma)$ is the Laplacian associated with a strong component $S_i$ for $i=1,\dots,p$ with $p>1$. Since node $0$ is globally reachable in $\bar{\mathcal{G}}_\sigma$, each diagonal block matrix $B_i(\sigma)$, corresponding to $L_{ii}(\sigma)$, is nonzero. Then, it is easy to see that $\frac{1}{2}(L_{ii}(\sigma)+L_{ii}^{T}(\sigma))+B_i(\sigma)$ is positive definite and therefore, $H_\sigma+H_\sigma^T$ is positive definite. 
\hfill\rule{4pt}{8pt} Based on the balanced graph ${\cal G}_{\sigma}$ (with Lemma \ref{lemgr4}) and the fact that the set $\mathcal{I}_\Gamma$ is finite, both $ \tilde{\lambda}=\min\{\mbox{eigenvalues of}\; H_\sigma+H_\sigma^T\}>0 $ and $ \tilde{\mu}=\max\{\mbox{eigenvalues of}\; H_\sigma H_\sigma^T\}>0$ are well defined. \begin{theorem} \label{thm2} For system (\ref{model4}) with balanced graph ${\cal G}_{\sigma}$, take \begin{equation} \label{kstar1} k>k^*=\frac{\tilde{\mu}}{2\tilde\lambda}+1. \end{equation} If node $0$ is globally reachable in $\bar{\mathcal{G}}_{\sigma}$ and $\tau$ is sufficiently small, then $$ \lim_{t \to \infty}\epsilon(t)=0. $$ \end{theorem} Proof: Take a Lyapunov-Razumikhin function $V(\epsilon)=\epsilon^{T}\Phi\epsilon$, where $$ \Phi=\left(\begin{array}{cc}kI_n&I_n\\I_n&I_n \end{array}\right)\qquad ( k>1) $$ is positive definite. Similar to the analysis in the proof of Theorem \ref{thm1}, we can obtain $$ \dot{V}\leq \epsilon^{T}(F_\sigma^{T}\Phi+\Phi F_\sigma) \epsilon+r\epsilon^T\Phi E_\sigma C \Phi^{-1}C^TE_\sigma^T\Phi\epsilon+\int_{-r}^{0} \epsilon^T(t+s)\Phi\epsilon(t+s)ds. $$ Take $\phi(s)= q s$ for some constant $q>1$. In the case of \begin{equation} \label{cond31}V(\epsilon(t+\theta))<qV(\epsilon(t)),\quad-\tau\leq \theta \leq 0, \end{equation} we have $$ \dot{V} \leq -\epsilon^{T}Q_\sigma \epsilon+r\epsilon^T(\Phi E_\sigma C \Phi^{-1}C^TE_\sigma^T\Phi+q\Phi)\epsilon, $$ where $$ Q_\sigma=-(F_\sigma^T\Phi+\Phi F_\sigma)=\begin{pmatrix} H_\sigma^T+H_\sigma&H_\sigma^T\\ H_\sigma & 2(k-1)I_n\end{pmatrix}. $$ If $k$ satisfies (\ref{kstar1}), then $Q_\sigma$ is positive definite for every value of $\sigma$, and $\dot V(\epsilon)$ is negative definite if, in addition, \begin{equation} \label{bound2} r<\tau=\frac{\lambda_{min}}{\frac{2k}{k-1}\tilde \mu+\frac{1}{2}q(k+1+\sqrt{(k-1)^2+4})}, \end{equation} where $\lambda_{min}$ denotes the minimum eigenvalue of all possible $Q_\sigma$. 
Thus, the conclusion is obtained according to Lemma \ref{lem1}. \hfill\rule{4pt}{8pt} In the switching case, the assumption of a balanced graph ${\cal G}_{\sigma}$ is not necessary for the stability result in Theorem \ref{thm2}. The following numerical example shows that stability can still be obtained even if the coupling topology is not balanced at all times. Here we consider two coupling topologies, given in Figs. 1 and 2, that switch between each other in the order $\{\bar{\mathcal{G}}_1,\bar{\mathcal{G}}_2,\bar{\mathcal{G}}_1,\bar{\mathcal{G}}_2,...\}$. With simple calculations, we have $$ \tilde\lambda=0.5028,\;\tilde\mu=7.9257,\;k^*=7.8816, $$ $$ \lambda_{min}=0.4781,\;q=1.0500,\;\tau= 0.0174. $$ Take $k=9$ and the time-varying delay $r(t)=0.0150|\cos(t)|$. Then the simulation results are shown in Fig. 5. \begin{center}{\includegraphics[scale=0.4]{fig7.eps}\includegraphics[scale=0.4]{fig8.eps}}\\ {\small Fig. 5. Leader-following errors with two switching graphs given in Fig.1 and Fig.2} \end{center} \section{Conclusions} This paper addressed a coordination problem of a multi-agent system with a leader. The leader moves at a constant velocity, and the follower agents track it despite time-varying coupling delays. When the coupling topology was fixed and directed, a necessary and sufficient condition was given. When the coupling topology was switched and balanced, a sufficient condition was presented. Moreover, several numerical simulations were shown to verify the theoretical analysis. \section*{Acknowledgment} This work was supported by the NNSF of China under Grants 60425307, 50595411, and 60221301.
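The quantities $\tilde{\lambda}$, $\tilde{\mu}$, and $k^*$ in Theorem \ref{thm2} are straightforward to compute numerically. Below is an illustrative cross-check in Python; since the topologies of Figs. 1 and 2 are not reproduced here, it uses two hypothetical four-agent digraphs (a directed cycle and a directed chain, each with the leader attached to node 1) rather than the graphs of the example above:

```python
import numpy as np

# Hypothetical stand-ins for the two switching topologies (the actual
# Figs. 1-2 graphs are not reproduced here).
def H_from(edges, leader_nodes, n=4):
    """Build H = L + B for arcs (i, j) meaning agent i uses info from agent j."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[i, j] -= 1.0
    B = np.zeros((n, n))
    for i in leader_nodes:
        B[i, i] = 1.0
    return L + B

H1 = H_from([(0, 3), (1, 0), (2, 1), (3, 2)], [0])  # balanced directed 4-cycle
H2 = H_from([(1, 0), (2, 1), (3, 2)], [0])          # directed chain

# lambda~ and mu~ over the finite set of topologies, as defined before Theorem 2
lam = min(np.linalg.eigvalsh(H + H.T).min() for H in (H1, H2))
mu = max(np.linalg.eigvalsh(H @ H.T).max() for H in (H1, H2))
kstar = mu / (2 * lam) + 1                          # Equation (kstar1)

# For any k > k*, each Q_sigma = [[H+H^T, H^T], [H, 2(k-1)I]] is positive definite
k = kstar + 0.5
I4 = np.eye(4)
for H in (H1, H2):
    Q = np.block([[H + H.T, H.T], [H, 2 * (k - 1) * I4]])
    assert np.linalg.eigvalsh(Q).min() > 0
```

For these two graphs the leader (node $0$) is globally reachable, so $\tilde{\lambda}>0$ and every $Q_\sigma$ passes the positive-definiteness check, as the proof of Theorem \ref{thm2} requires.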
\section{Introduction} Uranus's 98\textdegree~obliquity, the angle between the planet's spin axis and the normal to its orbital plane, is perhaps the most unusual feature in our solar system. The most accepted explanation for its origin is a giant collision with an Earth-sized object that struck Uranus at polar latitudes during the late stages of planetary formation \citep{1989Metic..24R.251B,1990Icar...84..528K,1992Icar...99..167S,1997P&SS...45..181P,2012Icar..219..737M,2015A&A...582A..99I,2018ApJ...861...52K,2019MNRAS.487.5029K,2019MNRAS.tmp.2855R}. Collisions between massive objects are an expected part of solar system formation; indeed, our own Moon was likely formed as a result of a collision between Earth and a Mars-sized object \citep{2001Natur.412..708C}. There are problems with a collisional origin for Uranus's obliquity, though. An Earth-mass projectile grazing Uranus's pole is a low-probability event, and even more massive impactors are required for more centered impacts. These impacts could also significantly alter the planet's primordial spin rate, yet both Uranus and Neptune spin at similar periods ($T_{U}=17.2$ hr, $T_{N}=16.1$ hr). Just as with Jupiter and Saturn, the two ice giants likely acquired their fast and nearly identical spin rates while accreting their massive gaseous atmospheres \citep{2018AJ....155..178B, 2018NatAs...2..138B}. Additionally, \cite{2012Icar..219..737M} argue that for Uranus's regular satellites to orbit prograde around the planet, two or more collisions would be necessary. Tilting from 0\textdegree~to 98\textdegree~with a single impact would destabilize any existing satellite system via Kozai interactions, and would lead to a chaotic period of crossing orbits and collisions. The resulting proto-satellite disk would preserve its pre-impact angular momentum and hence would form retrograde satellites. 
Although the two impactors could be individually less massive than a single one, multiple large impacts are nevertheless still improbable. In this paper we will explore an alternative collisionless approach based on the resonant capture explanation for Saturn's 27\textdegree~obliquity. Since Saturn is composed of at least 90\% hydrogen and helium gas, we would expect gas accretion during planet formation to conserve angular momentum and force any primordial obliquity to $\epsilon\sim0$\textdegree. A collisional explanation would then require an impactor of $6-7.2\,M_{\oplus}$ \citep{parisi2002model}, which is even more unlikely than the putative Uranus strike. If $7\,M_{\oplus}$ objects were common in the early solar system, one would expect to find evidence for their existence (e.g. a higher tilt for Jupiter and perhaps even additional planets). Instead, Saturn's obliquity can best be explained by a secular spin-orbit resonance between the precession frequencies of Saturn's spin axis and Neptune's orbital pole \citep{2004AJ....128.2501W,2004AJ....128.2510H}. And even Jupiter's small tilt may have resulted from a resonance with either Uranus or Neptune \citep{2006ApJ...640L..91W,2015ApJ...806..143V}. A significant advantage of this model is that the gradual increase of Saturn's obliquity preserves both the planet's spin period and the orbits of its satellite system, which would eliminate all of the issues present in the giant impact hypothesis for Uranus \citep{1965AJ.....70....5G}. Uranus's current spin precession frequency is too slow to match any of the planets' orbital precession rates, but that may not have been the case in the past. \cite{2010ApJ...712L..44B} posit that a resonance is possible if Uranus harbored a moon large enough so that the planet's spin axis could precess sufficiently fast to resonate with its own orbit. 
This moon would, however, have to be more massive than any known moon (between the masses of Ganymede and Mars), would have to orbit far from Uranus ($\approx50$ Uranian radii), and would then have to disappear somehow, perhaps during planetary migration. A more promising solution is instead to place a circumplanetary disk of at least $4.5\times10^{-3}\,M_{\oplus}$ around Uranus during the last stage of its formation \citep{2020ApJ...888...60R}. Since Uranus must have harbored a massive circumplanetary disk to account for its gaseous atmosphere, and \cite{2018ApJ...868L..13S} calculate a circumplanetary disk of around $10^{-2}\,M_{\oplus}$, capture into a spin-orbit resonance linking Uranus's pole precession to its nodal precession seems plausible during formation. \cite{2020ApJ...888...60R} find that a 70\textdegree~kick is possible within the accretion timespan of 1 Myr, and that while a subsequent impactor is still necessary, it only needs to be 0.5 $M_{\oplus}$. The odds of such a collision generating Uranus's current spin state are significantly greater, but to attain 70\textdegree, Uranus's orbital inclination would need to be around 10\textdegree. An inclination this high is a little uncomfortable and hints that further improvements to the model may be necessary. For instance, \cite{2018CeMDA.130...11Q} demonstrated a similar set of resonance arguments that are not sensitive to a planet's orbital inclination, and that are capable of pushing a planet's obliquity beyond 90\textdegree. These arguments include mean motion terms which arise naturally if the planets are configured in a resonance chain \citep{2019NatAs...3..424M}. In this paper we investigate yet another possibility by placing Uranus closer to the Sun where tidal forces are stronger and precession timescales are shorter. This will require us to make some optimistic modifications to the planets' initial configurations in order to generate the desired resonance, as will be seen below. 
If our models yield fruitful results, then these assumptions will need to be carefully examined in the larger context of solar system formation. Furthermore, we will also revisit the multi-collision explanation as well as hybrid resonance and collision models. We will then critically compare all of these resonance and collisional models. \section{Capture into a Secular Spin Orbit Resonance} \label{rc} \subsection{Initial Conditions} Gravitational torques from the Sun on an oblate planet cause the planet's spin axis to precess backwards, or regress, about the normal to its orbital plane \citep{1966AJ.....71..891C}. Similarly, gravitational perturbations cause a planet's inclined orbit to regress around the Sun. A match between these two precession frequencies results in a secular spin-orbit resonance. In this case, the spin axis remains fixed relative to the planet's orbital pole, and the two vectors precess about the normal to the invariable plane. The longitudes of the two axial vectors, $\phi_{\alpha}$ and $\phi_{g}$, are measured from a reference polar direction to projections onto the invariable plane, and the resonance argument is given as \citep{2004AJ....128.2510H}: \begin{equation}\label{resarg} \Psi = \phi_{\alpha} - \phi_{g}. \end{equation} The precession rate of Uranus's spin axis can be derived from first principles by considering the torques of the Sun and the Uranian moons on the planet's equatorial bulge. Following \cite{1966AJ.....71..891C}, if $\hat{\sigma}$ is a unit vector that points in the direction of the total angular momentum of the Uranian system, then: \begin{equation}\label{diffeq} \frac{d\hat{\sigma}}{dt} = {\alpha}(\hat{\sigma}\times\hat{n})(\hat{\sigma}\cdot\hat{n}) \end{equation} where $\hat{n}$ is a unit vector pointing in the direction of Uranus's orbital angular momentum, and $t$ is time. 
Uranus's axial precession period is therefore: \begin{equation}\label{period} T_{\alpha}=\frac{2\pi}{{\alpha}\cos\epsilon}, \end{equation} where $\cos\epsilon = \hat{\sigma}\cdot\hat{n}$. The precession frequency near zero obliquity, $\alpha$, incorporates the torques from the Sun and the moons on the central body \citep{1991Icar...89...85T}: \begin{equation}\label{prec} {\alpha} = \frac{3{n}^{2}}{2} \frac{{J}_{2}(1-\frac{3}{2}\sin^{2}{\theta}_{p}) + q}{K\omega \cos\theta_{p} + l}. \end{equation} \noindent Here $n = (GM_{\odot}/r_{p}^{3})^{1/2}$ is the orbital angular speed of the planet, $G$ is the gravitational constant, $M_{\odot}$ is the Sun's mass, $r_{p}$ is the Sun-planet distance, $\omega$ is the planet's spin angular speed, $J_{2}$ is its quadrupole gravitational moment, and $K$ is its moment of inertia coefficient normalized by $M_{p}R_{p}^{2}$. For Uranus today, $M_{p}=14.5\,M_{\oplus}$, $R_{p}=2.56\times 10^{9}$ cm, $K=0.225$ and $J_{2}=0.00334343$\footnote{All physical values of the solar system can be found here courtesy of NASA Goddard Space Flight Center: http://nssdc.gsfc.nasa.gov/planetary/factsheet/}. The parameter $ q\equiv\frac{1}{2} {\sum}_{i} ({M_{i}}/{M_{p}})({a_{i}}/{R_{p}})^{2}(1-\frac{3}{2}\sin^{2}{\theta}_{i}) $ is the effective quadrupole coefficient of the satellite system, and $ l\equiv{R}_{p}^{-2} {\sum}_{i} ({M_{i}}/{M_{p}})(GM_{p}a_{i})^{\frac{1}{2}}\cos\theta_{i} $ is the angular momentum of the satellite system divided by $M_{p}R_{p}^{2}$. The masses and semi-major axes of the satellites are $M_{i}$ and $a_{i}$, $\cos\theta_{p} = \hat{s}\cdot\hat{\sigma}$ and $\cos\theta_{i} = \hat{l_{i}}\cdot\hat{\sigma}$, where $\hat{s}$ is the direction of the spin angular momentum of the central body and $\hat{l_{i}}$ is the normal to the satellite's orbit \citep{1991Icar...89...85T}. 
Note that $M_{i}\ll{M}_{p}$ where $M_{p}$ is the mass of the planet and, since the satellite orbits are nearly equatorial, we can take $\theta_{p} = \theta_{i} = 0$. Torques from the main Uranian satellites on the planet contribute significantly to its precessional motion, while those from other planets and satellites can be neglected. We therefore limit ourselves to Uranus's major moons---Oberon, Titania, Umbriel, Ariel, and Miranda. We find $q=0.01558$, which is about 4.7 times larger than Uranus's $J_{2}$, and $l=2.41\times10^{-7}$, which is smaller than $K\omega$ by about a factor of 100. So from Equation \ref{prec}, the effective quadrupole coefficient of the satellite system plays a much more significant role in the planet's precession period than the angular momentum of the satellite system. At its current obliquity, $\epsilon=98$\textdegree, Uranus's precession period is about 210 million years (or $\alpha=0.0062$ arcsec yr$^{-1}$), and reducing Uranus's obliquity to 0\textdegree~results in a precession period 7.2 times shorter: 29 million years (or $\alpha=0.045$ arcsec yr$^{-1}$). This pole precession period is much longer than the periods associated with any of the giant planets' fundamental frequencies \citep{1999ssd..book.....M}, but it can be shortened to $\approx\,2$ Myr by placing Uranus at around 7 AU. This is just fast enough for Uranus to resonate with a similar planet---Neptune---located beyond Saturn. Placing Uranus's orbit between those of Jupiter and Saturn is not entirely ad hoc. \cite{1999Natur.402..635T,2002AJ....123.2862T,2003Icar..161..431T} argue that at least the ice giants' cores might have formed between Jupiter and Saturn (4-10 au), as the timescales there for the accretion of planetesimals through an oligarchic growth model, in which the large bodies in the planetary disk dominate the accretion of surrounding planetesimals, are more favorable than farther away. 
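The precession rates and periods quoted above are easy to reproduce from Equations \ref{period} and \ref{prec}. A minimal sketch in Python, using the physical values quoted in the text with $\theta_p=\theta_i=0$ (the satellite term $l$ is interpreted here in s$^{-1}$, an inference from its quoted ratio to $K\omega$; it is negligible either way):

```python
import math

# Values quoted in the text (theta_p = theta_i = 0)
J2 = 0.00334343            # Uranus's quadrupole gravitational moment
q = 0.01558                # effective quadrupole coefficient of the satellites
K = 0.225                  # normalized moment of inertia coefficient
l = 2.41e-7 * 3.156e7      # satellite angular momentum term, s^-1 -> yr^-1

n = 2 * math.pi / 84.0                   # orbital angular speed [rad/yr]
omega = 2 * math.pi / (17.24 / 8766.0)   # spin angular speed [rad/yr]

# Equation (prec) with zero satellite and equator tilts
alpha = 1.5 * n**2 * (J2 + q) / (K * omega + l)   # [rad/yr]
alpha_arcsec = alpha * 206265                     # ~0.045 arcsec/yr

T0 = 2 * math.pi / alpha / 1e6                    # period at eps = 0, ~29 Myr
T98 = T0 / abs(math.cos(math.radians(98.0)))      # ~210 Myr, ~7.2x longer
```

The ratio $T_{98}/T_{0}=1/|\cos 98\textdegree|\approx7.2$ is where the factor quoted in the text comes from.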
The Nice model \citep{2005Natur.435..466G, 2005Natur.435..462M, 2005Natur.435..459T} places Uranus closer to the Sun but beyond Saturn for similar reasons; however, having the ice giants form between Jupiter and Saturn is not inconsistent with the Nice model. If Uranus and Neptune were indeed formed between Jupiter and Saturn and later ejected sequentially, then a secular spin-orbit resonance between Uranus and Neptune is possible. A related possibility that is also sufficient for our purposes is that Neptune initially formed beyond Saturn and Uranus between Jupiter and Saturn. Here, however, the similar masses of Uranus and Neptune become more difficult to explain. In the following, we assume that Uranus is fully formed with its satellites located near their current configurations to derive the spin axis precession rate. \subsection{Method} Calculating Uranus's obliquity evolution requires tracking the planets' orbits while also appropriately tuning Neptune's nodal precession rate. We use the HNBody Symplectic Integration package \citep{2002DDA....33.0802R} to track the motion of bodies orbiting a central massive object using symplectic integration techniques based on two-body Keplerian motion, and we move Neptune radially with an artificial drag force oriented along the velocity vector using the package HNDrag. These packages do not follow spins, so we have written an integrator that uses a fifth-order Runge-Kutta algorithm \citep{1992nrca.book.....P} and reads in HNBody data to calculate Uranus's axial orientation due to torques applied from the Sun (Equation \ref{diffeq}). For every time step, the integrator requires the distance between the Sun and Uranus. Since HNBody outputs the positions and velocities at a given time frequency different from the adaptive step that our precession integrator uses, calculating the precessional motion requires interpolation. 
To minimize interpolation errors, we use a torque averaged over an orbital period which is proportional to $\langle r_{p}^{-3}\rangle=a_{p}^{-3}(1-e_{p}^{2})^{-\frac{3}{2}}$, where $a_{p}$ is the planet's semi-major axis and $e_{p}$ is its eccentricity. This is an excellent approximation since Uranus's orbital period is $10^{5}-10^{6}$ times shorter than its precession period. We tested the code for a two-body system consisting of just the Sun and Uranus and recovered the analytic result for the precession of the spin axis (Figure \ref{plot}). \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{error3.pdf} \caption{The calculated relative error of three quantities describing Uranus's spin axis. Here $\omega$ is the unit vector pointing in the direction of Uranus's spin axis. $\epsilon$ is the planet's obliquity and $\phi$ is the planet's spin longitude of the ascending node. All quantities should be constant with time as the system only contains the Sun and Uranus. Numerical errors at the levels shown here are sufficiently low for our purposes. } \label{plot} \end{figure} For our simulations we place Jupiter and Saturn near their current locations (5 au and 9 au respectively), Uranus at 7 au, and Neptune well beyond Saturn at 17 au. Leaving Uranus in between the two gas giants for more than about ten million years is unstable \citep{1993AJ....105.1987H}, but eccentricity damping from remnant planetesimals can delay the instability. Scattering between Uranus and the planetesimals provides a dissipative force that temporarily prevents Uranus from being ejected, and we mimic this effect by applying an artificial force to damp Uranus's eccentricity. We apply the force in the orbital plane and perpendicular to the orbital velocity to damp the eccentricity while preventing changes to the semi-major axis \citep{1992fcm..book.....D}. With Uranus's orbit relatively stable, we then seek a secular resonance between its spin and Neptune's orbit. 
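Both numerical ingredients above can be verified in isolation. The sketch below (Python, in toy units; it is not the HNBody pipeline itself) first checks the orbit-averaged $\langle r_{p}^{-3}\rangle$ factor against a direct average over a Kepler orbit, then integrates Equation \ref{diffeq} with a fixed orbit normal to confirm that the obliquity stays constant while the spin axis precesses with period $2\pi/(\alpha\cos\epsilon)$, mirroring the two-body test described in the text:

```python
import numpy as np

# 1) Orbit average of r^-3: with r = a(1 - e cos E) and dt proportional to
#    (1 - e cos E) dE, the time average should equal a^-3 (1 - e^2)^(-3/2).
a, e = 19.2, 0.3                         # toy orbital elements
E = 2 * np.pi * (np.arange(200000) + 0.5) / 200000
w = 1.0 - e * np.cos(E)                  # dt/dE weight (up to a constant)
avg = np.mean((a * w) ** -3 * w)
assert abs(avg * a**3 * (1 - e**2) ** 1.5 - 1) < 1e-9

# 2) Two-body spin test: integrate Equation (diffeq) with RK4 and a fixed
#    orbit normal n = z-hat; sigma-hat should precess at constant obliquity
#    and return to its start after T = 2 pi / (alpha cos eps).
alpha = 1.0                              # toy units
eps = np.radians(60.0)
nhat = np.array([0.0, 0.0, 1.0])
f = lambda s: alpha * np.cross(s, nhat) * np.dot(s, nhat)
s0 = np.array([np.sin(eps), 0.0, np.cos(eps)])
s = s0.copy()
T = 2 * np.pi / (alpha * np.cos(eps))
N = 20000
dt = T / N
for _ in range(N):                       # classical fourth-order Runge-Kutta
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
obliquity_err = abs(s[2] - np.cos(eps))  # obliquity drift over one period
closure_err = np.linalg.norm(s - s0)     # distance from starting direction
```

Both errors come out far below any level that matters for the obliquity evolution, consistent with Figure \ref{plot}.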
\subsection{A Secular Resonance} Capture into a spin-orbit resonance also requires that the two angular momentum vectors---the planet's spin axis and its orbital pole---and the normal to the invariable plane be coplanar. Equilibria about which the resonance angle librates are called ``Cassini States'' \citep{1966AJ.....71..891C,1969AJ.....74..483P,1975AJ.....80...64W,2004AJ....128.2501W}, and there are multiple vector orientations that can yield a spin-orbit resonance. In this case, the resonance angle, $\Psi$, librates about Cassini State 2 because Uranus's spin axis and Neptune's orbital pole precess on opposite sides of the normal to the invariable plane. As Neptune migrates outwards away from the Sun, its nodal precession frequency slows until a resonance is reached with Uranus's spin precession rate. If the consequence of the resonance is that Uranus's obliquity increases \citep{1974JGR....79.3375W}, then its spin precession frequency slows as well (Equation \ref{prec}) and the resonance can persist. The time evolution of the resonance angle and obliquity are given by \citep{2004AJ....128.2510H}: \begin{equation}\label{resdt} \dot{\Psi}=-\alpha\cos\epsilon - g\cos{I} \end{equation} \begin{equation}\label{obldt} \dot{\epsilon}=g\sin{I}\sin{\Psi} \end{equation} where $g$ is the (negative) nodal precession rate, and $I$ is the amplitude of the inclination induced by Neptune's perturbation on Uranus's orbit. If Neptune migrates outward slowly enough, then $\dot{\Psi}$ is small and the two planets can remain in resonance nearly indefinitely. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{capture.pdf} \caption{A resonance capture. The top panel shows Uranus's obliquity evolution over time. The middle panel shows the evolution of the precession frequencies with the dashed line indicating the resonance location, and the bottom panel shows the resonance angle ($\Psi$). 
The solid vertical line at $t\approx 150$ Myr indicates when Neptune reaches its current location at 30 au. In this simulation resonance is established at $t=0.05$ Gyr when Neptune is at $\approx 24$ au, and it breaks at $t=0.85$ Gyr with Neptune at $\approx 120$ au. Stopping Neptune at 30 au, we find that this capture could account for perhaps half of Uranus's extreme tilt. Here, Uranus is located at $a_{U}=7$ au, with its current equatorial radius. Neptune's inclination is set to twice its current value at $i_{N}=4$\textdegree~which strengthens the resonance.} \label{capture} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{polar4_3.pdf} \caption{The corresponding polar plot to Figure \ref{capture} where Neptune is migrating well within the adiabatic limit. The short period oscillations here are at the pole precession rate while the longer oscillations are the librations about the equilibrium point which itself is moving to higher obliquities (to the right). The red dotted circles represent points of constant obliquity in increments of 15\textdegree.} \label{polar:capture} \end{figure} Figure \ref{capture} shows Uranus undergoing capture into a spin-orbit resonance when Neptune crosses $\sim$24 au en route to its current location at 30 au. Here we have set Neptune's migration rate at 0.045 au/Myr, which is within the adiabatic limit -- the fastest possible rate to generate a capture with ${\epsilon}_{i}\approx0$\textdegree. The adiabatic limit occurs when Neptune's migration takes it across the resonance width in about a libration time, which is just $2\pi/w_{lib}$ with $w_{lib} = \sqrt{-\alpha{g}\sin{\epsilon}\sin{I}}$ \citep{2004AJ....128.2510H}. Just as slow changes to the support of a swinging pendulum do not alter the pendulum's motion, gradual changes to Neptune's orbit do not change the behavior of the libration. However, if Neptune's migration speed exceeds the adiabatic limit, then the resonance cannot be established. 
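The role of the adiabatic limit can be illustrated directly with Equations \ref{resdt} and \ref{obldt}. The toy integration below (Python; dimensionless units with $\alpha=1$ and, unlike the full problem, $\alpha$ held fixed; the rates, angles, and initial conditions are arbitrary illustrative choices) contrasts a slow crossing, which captures and drags the obliquity up toward the moving Cassini equilibrium at $\cos\epsilon \approx |g|\cos I/\alpha$, with a fast crossing that leaves only a limited kick:

```python
import numpy as np

def evolve(g_start, g_end, total_t, dt, eps0_deg=3.0, inc_deg=4.0, psi0=0.5):
    """RK4 integration of Equations (resdt)-(obldt) with alpha = 1 while the
    (negative) nodal rate g shrinks linearly in magnitude from g_start to
    g_end, mimicking Neptune's outward migration."""
    inc = np.radians(inc_deg)
    y = np.array([np.radians(eps0_deg), psi0])       # state: (eps, Psi)
    steps = int(total_t / dt)

    def f(y, t):
        g = -(g_start + (g_end - g_start) * t / total_t)
        eps, psi = y
        return np.array([g * np.sin(inc) * np.sin(psi),        # d(eps)/dt
                         -np.cos(eps) - g * np.cos(inc)])      # d(Psi)/dt

    for i in range(steps):
        t = i * dt
        k1 = f(y, t)
        k2 = f(y + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(y + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(y + dt * k3, t + dt)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.degrees(y[0])

# Slow (adiabatic) migration: capture; eps climbs toward the equilibrium
# arccos(|g| cos I) ~ 78 deg at |g| = 0.2, modulo libration.
eps_slow = evolve(1.05, 0.2, total_t=4.0e4, dt=0.5)

# Fast migration: the resonance is crossed in under a libration period,
# so the obliquity only receives a limited kick.
eps_fast = evolve(1.05, 0.2, total_t=10.0, dt=0.01)
```

With the slow rate the final obliquity ends up far above its initial value, while the fast rate leaves it much closer to where it started, which is the capture/kick dichotomy discussed in the text.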
The top panel of Figure \ref{capture} shows Uranus tilting to 60\textdegree~in 150 Myrs when Neptune reaches its current location, and all the way to 90\textdegree~in 600 Myr if we allow Neptune to continue outwards. Planets migrate by scattering planetesimals, which can decrease inclinations; accordingly, we optimistically assumed an initial value for Neptune's inclination at twice its current value. Because we have increased Neptune's inclination and moved Neptune out as fast as possible and yet still allowed capture, one hundred fifty million years represents a rough lower limit to the time needed to tilt Uranus substantially. The bottom panel of Figure \ref{capture} and Figure \ref{polar:capture} both show the evolution of the resonance angle, and the angle oscillates with a libration period of about 30 Myr about the equilibrium point. The libration period increases as $\epsilon$ increases in accordance with Equation \ref{period}. The noticeable offset of the equilibrium below $\Psi=0$\textdegree~in Figures \ref{capture} and \ref{polar:capture} is due to the rapid migration of Neptune \citep{2004AJ....128.2510H}: \begin{equation} \label{eqangle} \Psi_{eq} = \frac{\dot{\alpha}\cos\epsilon\,+\,\dot{g}\cos I }{\alpha g\sin\epsilon\sin I}. \end{equation} Recall that $g$, the nodal precession frequency, is negative, $\alpha$ is positive, and as Neptune migrates away from the Sun $\dot{g}$ is positive. Since $\alpha$ is constant, $\dot{\alpha}=0$, and so $\Psi_{eq}$ is slightly negative in agreement with Figure \ref{capture}. We conclude that although a spin-orbit resonance with Neptune can tilt Uranus over, the model requires that Uranus be pinned between Jupiter and Saturn for an uncomfortably long few hundred million years. Is there any room for improvement? 
Both the \cite{1999Natur.402..635T,2002AJ....123.2862T,2003Icar..161..431T} model and the Nice model \citep{2005Natur.435..466G, 2005Natur.435..462M, 2005Natur.435..459T} require the planets' migration timescales to be on the order of $10^{6}-10^{7}$ years. This is incompatible with the resonance capture scenario considered here, which requires at least $10^{8}$ years. Speeding up the tilting timescale significantly would require a stronger resonance. The strength of this resonance is proportional to the migrating planet's inclination, and this strength sets the maximum speed at which a capture can occur \citep{1994Icar..109..221H}. Although Neptune's initial orbital inclination angle is unknown, a dramatic reduction in the tilting timescale is implausible. Another possibility is that the gas giants were once closer to the Sun where tidal forces are stronger. Some evidence for this comes from the fact that the giant planets probably formed closer to the snow line \citep{2006Icar..181..178C} where volatiles were cold enough to condense into solid particles. Shrinking the planets' semi-major axes by 10\% decreases the resonance location by about 3 au, and reduces the obliquity evolution timescale by about 15\%. Although this is an improvement, a timescale on the order of $10^{8}$ years seems to be the fundamental limit on the speed at which a significant obliquity can be reached \citep{2016DPS....4831809R,2018CeMDA.130...11Q}. Less critical than the timescale problem but still important is the inability of the obliquity to exceed 90\textdegree$\,$ (Figure \ref{capture}). The reason for this follows from Equation \ref{period}, which shows that Uranus's precession period approaches infinity as $\epsilon$ approaches 90\textdegree. Neptune's migration then outpaces the libration and the resonance ceases. This effect is more apparent in Figure \ref{polar:capture} which shows the libration period increasing with the obliquity. 
The resonance breaks when the resonance angle stops librating about an equilibrium point and instead circulates a full $2\pi$ radians. \cite{2018CeMDA.130...11Q} show that a related resonance that occurs when the planets are also close to a mean-motion resonance could tilt the planet past 90\textdegree, but this, like the resonance considered here, is probably too weak. Keeping Uranus between Jupiter and Saturn for $10^{8}$ years is as implausible as the planet having once had a massive distant moon \citep{2010ApJ...712L..44B}. \section{Obliquity Kick from a Secular Spin-Orbit Resonance} \label{rk} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{kick.pdf} \caption{A resonance kick with a particularly large 40\textdegree$\,$ amplitude. Here Neptune is migrating out rapidly at an average speed of 0.068 au/Myr, and Uranus's radius is at its current size. Jupiter, Saturn, and Uranus are located 10\% closer to the Sun than today, and Neptune has an inclination of 4\textdegree.} \label{kick} \end{figure} A resonance capture with Neptune may not be able to tilt Uranus effectively, but this resonance may still contribute significantly on a timescale more compatible with current planetary formation models. A \textit{resonance kick} occurs if Neptune's migration speed is too fast to permit captures (i.e. exceeds the adiabatic limit). If $\dot{g}$, the rate at which Neptune's nodal precession frequency changes as the planet migrates, is large enough, then from Equation \ref{resdt}, $g\cos I$ shrinks faster than Uranus's spin precession frequency $\alpha\cos\epsilon$. Thus $\dot{\Psi} < 0$, which drives $\Psi$ to -180\textdegree. For a capture, on the other hand, $\dot{g}$ is smaller so that the resonance lasts more than one libration cycle. A kick can also occur at slower migration speeds if the relative phase of the two precessing axes is misaligned. Figure \ref{kick} shows an example of a resonance kick with a concurrent change in obliquity lasting 50 Myr. 
Overall, the magnitude of the kick depends on Neptune's orbital inclination, Uranus's initial obliquity, the migration speed, and the relative orientation of Uranus's spin axis and Neptune's orbital pole at the time the resonance is encountered. We will explore the entirety of this phase space to examine how effective Neptune's resonant kicks are at tilting Uranus. For a range of seven migration speeds, we ran simulations for initial obliquities ranging from $\epsilon\approx0$\textdegree~to $\epsilon\approx90$\textdegree~in increments of 5\textdegree. While Uranus may have originated with zero obliquity due to gas accretion, this does not need to be the case in general. Impacts, for example, are a source of at least small obliquities, and the prior spin-orbit resonance discussed by \cite{2020ApJ...888...60R} likely induced significant obliquity. For each initial obliquity we sample a range of phase angles from 0 to $2\pi$. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{phase.pdf} \caption{This figure shows the change in obliquity as a function of Uranus's initial azimuthal angle where $\epsilon=$1\textdegree, $i_{N}=$8\textdegree$\,$ and the system is near the adiabatic limit. Here we sampled 10,000 initial azimuthal angles from 0\textdegree$\,$ to 360\textdegree$\,$ and raised the inclination even further to emphasize the transition region from kicks (phases near 0\textdegree) to captures (phases near 180\textdegree). 
The annotated points (A,B,C) are discussed further in Figure \ref{polar}.} \label{phase} \end{figure} \begin{figure}[h] \centering \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.5\textwidth]{polar1.pdf} \centering (a) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.5\textwidth]{polar2.pdf} \centering (b) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.5\textwidth]{polar3.pdf} \centering (c) \end{tabular} \caption{These are polar plots of one kick (a) and two captures (b, c) taken from Figure \ref{phase}. \textit{A:} The largest resonance kick at the transition region in Figure \ref{phase}. The resonance angle undergoes less than one libration cycle. It approaches 180\textdegree$\,$ and then leaves the resonance. \textit{B:} A very tenuous capture whose libration angle exceeds 180\textdegree~for a few cycles before escaping the resonance creating the large outer circle. \textit{C:} A resonance capture well within the capture region in Figure \ref{phase}. Here the system also breaks free from the resonance after a few libration cycles. Short period oscillations in these plots are due to the effects of pole precession.} \label{polar} \end{figure} Distinguishing kicks from captures is more difficult when Neptune is migrating near the adiabatic limit, especially at low inclinations, so to highlight this effect we raise Neptune's inclination to 8\textdegree~in Figure \ref{phase}. This figure shows how the phase angle determines whether the resonance would yield a kick or a capture. Note, however, that it is actually the phase angle on encountering the resonance that matters, not the initial phase angle plotted in Figure \ref{phase}. Also, the outlying oscillations in this figure are due to librational motion as the final obliquity is calculated only when Neptune reaches its current location at 30 au. 
In this case there is a clear division between captures and kicks near azimuthal angles 150\textdegree~and 250\textdegree. In other cases at lower inclinations, however, the boundaries between kicks and captures seem more ambiguous. Figure \ref{polar} shows the corresponding polar plots for a selection of points in Figure \ref{phase} contrasting the difference between kicks and captures. Near the adiabatic limit, the phase angle will not librate more than one or two cycles for captures before the resonance breaks. This is most apparent in Figure \ref{polar}b, where Uranus completes just over one libration period before the resonance breaks. For comparison, Figure \ref{polar:capture} shows a capture well within the adiabatic limit, and here the phase angle clearly librates multiple times until the planet's obliquity reaches $\epsilon\sim90$\textdegree. We therefore identify as kicks only those encounters in which the resonance remains active for less than one libration cycle. Resonance kicks near the adiabatic limit can also generate large final obliquities, so we will focus our attention on this region of phase space. As shown in Figures \ref{kick} and \ref{phase}, it is possible to generate kicks up to $\Delta\epsilon\sim40$\textdegree~for $i_{N}=4$\textdegree~and $\Delta\epsilon\sim55$\textdegree~for $i_{N}=8$\textdegree. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{contourPK_capture.pdf} \caption{This figure shows the percentage of resonances that produce captures for a range of initial obliquities and migration speeds. Captures occur most readily in the lower left corner of the figure for small obliquities and slow migration rates. Here $i_{N}=4$\textdegree.} \label{percentkick} \end{figure} In Figure \ref{percentkick} we map the fraction of resonances that produce captures for a range of migration speeds and initial obliquities. The transition from 100\% kicks to 100\% captures over migration speeds is sharpest at lower initial obliquities.
This can be understood by considering the circle that Uranus's spin axis traces as it precesses; for small obliquities significant misalignments between the two poles are rare, and the outcome of a resonance is determined primarily by Neptune's migration speed. With increasing initial obliquities, large misalignments become more common and the probability of generating a resonance kick increases \citep{2018CeMDA.130...11Q}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{plotkick4_color.pdf} \caption{This figure depicts the change in obliquity as a function of Uranus's initial obliquity. The blue circles depict resonance kicks, while the red crosses depict resonance captures. Neptune's migration speed is 0.068 au/Myr, which is near the adiabatic limit at small initial obliquities. We set $i_{N}=4$\textdegree. It should be noted that our sampling of 100 initial azimuthal angles for Uranus is too coarse to resolve any captures for initial obliquities greater than 55\textdegree. It is possible for captures to happen at larger initial obliquities, but the range of favorable phase angles is very small.} \label{kickdist} \end{figure} We expect and find that the strongest resonant kick occurs near the adiabatic limit because a slow migration speed gives ample time for the resonance to respond. Conversely, a rapid migration speed would quickly punch through the resonance, leaving little time for the resonance to influence Uranus. Figure \ref{kickdist} depicts the distributions of kicks and captures near the $\epsilon=0$\textdegree~adiabatic limit where Neptune's migration speed is roughly 0.068 au/Myr. Looking at the average resonance kicks, we see that they can reach maximum changes in obliquities of 40\textdegree~(Figure \ref{kick}) for $i_{N}$ near twice Neptune's current inclination and even greater changes in obliquity for higher assumed $i_{N}$ (Figure \ref{phase}).
This looks promising, but we need to understand the probability of these large kicks. In fact, Figure \ref{kickdist} shows that negative kicks are common at high obliquities. For low obliquities, kicks must be positive since $\epsilon$ itself cannot be negative. However, if Neptune is migrating quickly and $\epsilon$ is large enough, then the relative phase angle is random, resulting in a range of possible obliquity kicks; in particular, if $\sin(\Psi)$ is positive in Equation \ref{obldt}, then $\dot{\epsilon}$ is negative. Figure \ref{fig:kickdists}a shows the maximum possible kicks over all initial obliquities and migration speeds, and although large kicks are possible, they are rare. Apart from resonant kicks that occur near the adiabatic limit, which can be seen in this figure as the magenta feature extending linearly up and to the right, the maximum strength of resonant kicks is typically $\Delta\epsilon\approx10$\textdegree$\,-\,20$\textdegree. On top of that, resonance kicks can also decrease obliquities, which is depicted in Figure \ref{fig:kickdists}b. If Uranus's obliquity was initially large, then the percentage of positive kicks is around 50\%, tending toward primarily negative kicks as Neptune's migration speed decreases. Since about half of all possible resonance kicks at initial obliquities greater than 10\textdegree~are negative, the average kick should be low. Figure \ref{fig:kickdists}c depicts the corresponding mean changes in obliquity, and they tend to be weak with mean resonance kicks of only a few degrees. At low initial obliquities, though, kicks tend to increase the planet's obliquity by at least 10\textdegree. A large resonance kick is most likely if ${\epsilon}_{i}=0$\textdegree~and Neptune migrates no faster than 0.1 au/Myr. These figures show that, as a statistical process, resonances have only a weak effect, and that one needs favorable initial conditions for large kicks.
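The sign statistics quoted above follow directly from the phase dependence: with $\dot{\epsilon}\propto-\sin\Psi$ and an effectively random phase, roughly half of all kicks are negative and the mean kick is near zero. A minimal Monte Carlo sketch of this point (the kick amplitude $A$ is an arbitrary placeholder, not a value fitted to the simulations):

```python
import numpy as np

# Toy model: obliquity kick proportional to -sin(Psi), with the resonance
# phase Psi drawn uniformly, as for a fast-migrating Neptune. A is an
# arbitrary illustrative amplitude in degrees.
rng = np.random.default_rng(42)
A = 10.0
psi = rng.uniform(0.0, 2.0 * np.pi, 100_000)
kicks = -A * np.sin(psi)

frac_positive = np.mean(kicks > 0.0)   # close to 0.5
mean_kick = kicks.mean()               # close to 0: the average kick is weak
print(frac_positive, mean_kick)
```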
\begin{figure}[h] \centering \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.5\textwidth]{vmaxobl_color.png} \centering (a) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.5\textwidth]{upordown_color.png} \centering (b) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.5\textwidth]{vmeanobl_color.png} \centering (c) \end{tabular} \caption{(a) This shows the corresponding maximum change in obliquity for resonant kicks depicted in Figure \ref{percentkick}. Diagonal hatching in the four boxes to the lower left in all panels corresponds to captures. The scale ranges from 40\textdegree$\,$ kicks (magenta) to 0\textdegree$\,$ (cyan). (b) This shows the percentage of kicks that yield positive changes in obliquity. 100\% positive kicks are depicted in magenta. (c) This shows the mean changes in obliquity for resonant kicks. The scale measures the change in obliquity with magenta being the maximum.} \label{fig:kickdists} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{shrinkplot2.pdf} \caption{The change in obliquity as a function of Uranus's initial obliquity for a cooling and shrinking Uranus with $i_{N}=4$\textdegree. There are 1900 simulations depicted here.} \label{fig:kickdists:d} \end{figure} We could increase Uranus's obliquity further if it received multiple successive resonance kicks. This might be achieved either through a resonance between Uranus and another possible ice giant that may have existed in the \cite{1999Natur.402..635T} model, through a resonance with its own orbital pole after Uranus's spin precession rate was amplified by harboring a massive extended circumplanetary disk \citep{2020ApJ...888...60R}, or through a quickening of Uranus's precession frequency as the planet cooled and shrank. The latter process is interesting and merits further discussion.
Uranus was hotter and therefore larger in the past \citep{1986Icar...67..391B,1991uran.book..469P,1996Icar..124...62P,2009Icar..199..338L}, and conserving angular momentum requires that a larger Uranus must spin significantly slower. Both Uranus's spin angular frequency, $\omega$, and its quadrupole gravitational harmonic, $J_{2}$, appear in Equation \ref{prec} and change if the planet's radius changes. Since $\omega\propto R^{-2}$ and $J_{2}\propto{\omega}^{2}$ \citep{2009ApJ...698.1778R}, a larger early Uranus would have had a slower precession frequency. Here, for simplicity, we have ignored the contributions of the satellites as including them would soften the response somewhat. Although this is highly dependent on Uranus's cooling rate, \cite{1986Icar...67..391B} and \cite{1991uran.book..469P} show that Uranus shrank by a factor of 2 on a timescale of order 10 Myr. We simulated this scenario by having Uranus's radius decrease according to an exponential function with Neptune stationary at 25 au. Figure \ref{fig:kickdists:d} shows the resulting kicks as a function of Uranus's initial obliquity, and they never exceed 15\textdegree. Scenarios that include multiple crossings of the same resonance would likely still fall short of fully tilting Uranus \citep[e.g.][]{2004AJ....128.2501W,2004AJ....128.2510H,2004Natur.429..848C}. \section{Revisiting The Collision Model} \subsection{Conditions for Collisions} Recall that the leading hypothesis for Uranus's tilt is a single Earth-mass impactor striking the planet's polar region, but that \cite{2012Icar..219..737M} argue for two or more collisions. In this section we consider each of these scenarios and derive the resulting probability distributions for such impacts. To do this we designed a collisional code that builds up a planet by summing the angular momenta of impactors to determine the planet's final obliquity and spin rate under various circumstances, and we typically run this for a half million randomized instances.
Our assumptions are that the impactors originate within the protoplanetary disk, that they approach a random location on the planet on trajectories parallel to its orbital plane, and that all the mass is absorbed upon impact. Because nearly every object in the Solar System orbits in roughly the same direction, the impactors' relative speed would be at most several tens of percent of Uranus's orbital speed (6.8 km/s). Since we expect most impactors to follow orbits with lower eccentricities, we sample relative velocities between 0 and 0.4 times Uranus's circular speed. Considering that the impactor's relative velocity is small compared to the planet's escape velocity (21.4 km/s), we must also take into account gravitational focusing. For cases where gravitational focusing is strong, the impact cross section is large and the impactor is focused to a hyperbolic trajectory aimed more closely towards the planet's center. Since head-on collisions do not impart any angular momentum, we expect the planet's spin state to be more difficult to change when focusing is included. The impact parameter for this effect is given by $b$ with \begin{equation}\label{eq:GF} b^{2}=R_{P}^{2}(1+(V_{esc}/V_{rel})^{2}). \end{equation} Also, since we do not know how the density profile changes between impacts, we maintain the dimensionless moment of inertia at $K\equiv\frac{I}{MR^{2}}=0.225$, but vary the planet's radius as the cube root of the total mass. Although these assumptions are mildly inconsistent, we find that even large impacts incident on a mostly formed Uranus yield just small changes in radius, and that the final spin rate changes by only about 10\% for other mass-radius relations. Finally, \cite{2012ApJ...759L..32P} suggest a maximum impact boundary of around 0.95 $R_{P}$, since beyond this the impactor simply grazes the planet's atmosphere and departs almost unaffected. For simplicity, and in the spirit of approximation, we ignore this subtlety.
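For concreteness, Equation \ref{eq:GF} implies a large enhancement of the collision cross section at the slow encounter speeds sampled here. A quick numerical check (speeds taken from the text; the sampled fractions of the orbital speed are illustrative):

```python
import math

# Gravitational-focusing impact parameter, b^2 = R_P^2 (1 + (V_esc/V_rel)^2),
# evaluated for Uranus's escape speed and a few sampled encounter speeds.
V_ESC = 21.4   # km/s, Uranus's escape speed
V_ORB = 6.8    # km/s, Uranus's orbital speed

for frac in (0.1, 0.2, 0.4):           # fractions of the orbital speed
    v_rel = frac * V_ORB
    b_over_R = math.sqrt(1.0 + (V_ESC / v_rel) ** 2)
    # the cross section pi*b^2 exceeds the geometric value by b_over_R**2
    print(f"V_rel = {v_rel:4.2f} km/s:  b/R_P = {b_over_R:5.2f},  "
          f"cross-section boost = {b_over_R**2:6.1f}x")
```

Even for the fastest encounters considered (0.4 of the orbital speed), the effective cross section is tens of times the geometric one, and the focused trajectories are aimed closer to the planet's center, consistent with the expectation that focusing makes the spin state harder to change.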
\begin{figure}[h] \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \includegraphics[width=0.5\textwidth]{ang1_GF.pdf} \centering (a) \end{tabular} \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \includegraphics[width=0.5\textwidth]{obl1.pdf} \centering (b) \end{tabular} \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \includegraphics[width=0.5\textwidth]{ang100_GF.pdf} \centering (c) \end{tabular} \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \includegraphics[width=0.5\textwidth]{obl100.pdf} \centering (d) \end{tabular} \caption{(a) The spin distribution for $5\times10^{5}$ realizations of a single impact ($m_{i}=1\,M_{\oplus}$) on a non-spinning proto-Uranus with initial mass $13.5\,M_{\oplus}$ including the effects of gravitational focusing. $\omega_{U}$ is the current uranian spin angular frequency, and all of the following distributions are normalized so that the shaded areas equal 1 (with the obliquities in radians); therefore, the solid line that fits the distribution is the probability distribution function (PDF) $P=\omega_{U}/\omega_{max}$. (b) The corresponding obliquity distribution (depicted in degrees) with the solid line given by $P=1.0/\pi$. (c) The spin distribution for 100 impacts of equal mass ($m_{i}=0.01\,M_{\oplus}$). (d) The corresponding obliquity distribution for 100 impacts. The dashed lines tracing the distributions in both of these figures are the analytic results (Equation \ref{eq:angmompd:2}, \ref{eq:oblpd:1}), and a detailed analysis can be found in the Appendix.} \label{fig:onehit} \end{figure} \subsection{Accretion of Planetesimals and Protoplanets} In Figures \ref{fig:onehit}a and b, we assume that the planet's initial spin rate was low to highlight the angular momentum imparted by impacts. Since $V_{esc}^{2}=2GM_{P}/R_{P}$, the impact cross section $b^{2}\propto R_{P}$ for $V_{rel} \ll V_{esc}$. 
The corresponding probability density distribution of impact locations is $\frac{d(\pi b^{2})}{dR_{P}}$, which is constant; therefore, the spin distribution induced from a single collision is flat (Figure \ref{fig:onehit}). However, if the impactor's relative velocity is instead much greater than the planet's escape velocity, then the impactors will be traveling on nearly straight lines and gravitational focusing does not apply. In this case a single collision produces a spin distribution that increases linearly, as there is an equal chance of striking anywhere on the planet's surface. But since gravitational focusing only varies the radial concentration of impacts on a planet's surface, the obliquity distribution for a single impact onto an initially non-spinning planet with or without gravitational focusing is uniform. A Uranian core formed from the accretion of many small objects, by contrast, would likely have a very low spin rate \citep{1991Icar...94..126L, 1993Icar..103...67D, 1993Sci...259..350D, 1999Icar..142..219A}, since each successive strike likely cancels out at least some of the angular momentum imparted from the previous impact (Figures \ref{fig:onehit} c and d). The planet would also have a narrower range of likely obliquities because the phase space available for low tilts is small. The calculation for the planet's final spin state for many impacts behaves similarly to a random walk, so from the central limit theorem, each directional component of the imparted angular momentum can be described by a normal distribution. 
The theoretical curve of Figure \ref{fig:onehit}c is given by the probability distribution $f_{{L}}(l)$, which describes the probability that $L$, the magnitude of the planet's spin angular momentum $L = \sqrt{L_{X}^{2} + L_{Y}^{2} + L_{Z}^{2}}$, takes the value $l$: \begin{equation}\label{eq:angmompd:2} f_{{L}}(l) = \frac{2l^{2}e^{-l^{2}/2{\sigma}^{2}}}{\sqrt{2{\pi}}\,{\sigma}^{2}{\sigma}_{z}}\,\Phi(0.5;1.5;-\beta l^{2}) \end{equation} \citep[Eq. 109]{1993Icar..103...67D}. Here $\sigma$ is the standard deviation for the components of the planet's spin angular momentum that lie in the orbital plane, $\sigma_{z}$ is the standard deviation for the component perpendicular to the orbital plane, and $\beta=\frac{{\sigma}^{2} - {\sigma}_{z}^{2}}{2{\sigma}^{2}{\sigma}_{z}^{2}}$. The angular momentum imparted is always perpendicular to the impactor's trajectory. After multiple impacts, the standard deviations are related by $\sigma_{z} \approx \sqrt{2}\sigma$, so $\beta<0$. Finally, $\Phi(0.5;1.5;-\beta l^{2})$ is the confluent hypergeometric function of the first kind. The corresponding obliquity probability distribution is: \begin{equation}\label{eq:oblpd:1} f_{\epsilon}(\varepsilon)=\left|\frac{1}{4\sqrt{2}\,{\sigma}^{2}{\sigma}_{z}}\frac{\tan(\varepsilon)}{\cos^{2}(\varepsilon)}{{\left(\frac{\tan^{2}(\varepsilon)}{2{\sigma}^{2}} + \frac{1}{2{\sigma}_{z}^{2}}\right)}^{-3/2}}\right| \end{equation} \citep[Eq. 111]{1993Icar..103...67D}; we provide derivations of these two equations in the Appendix. Notice how well these calculations agree with the numerical result for many impacts (Figure \ref{fig:onehit}c and d). Consequently, decreasing the mass per impactor by increasing the number of impactors in Figure \ref{fig:onehit}c from 100 to 1000 would shift the peak to slower spin rates by a factor of $\sqrt{10}$.
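Equation \ref{eq:angmompd:2} is straightforward to verify numerically: drawing the three angular-momentum components from zero-mean Gaussians with $\sigma_{z}=\sqrt{2}\sigma$ reproduces the analytic curve. A sketch of that check, in arbitrary units with $\sigma=1$ (sample size and binning are illustrative choices):

```python
import numpy as np
from scipy.special import hyp1f1

# Monte Carlo check of the |L| distribution: L_X, L_Y ~ N(0, sigma^2) and
# L_Z ~ N(0, sigma_z^2) with sigma_z = sqrt(2)*sigma, compared against the
# analytic PDF of Dones & Tremaine (1993, Eq. 109).
rng = np.random.default_rng(1)
sigma, sigma_z = 1.0, np.sqrt(2.0)
n = 200_000
L = np.sqrt(rng.normal(0, sigma, n) ** 2
            + rng.normal(0, sigma, n) ** 2
            + rng.normal(0, sigma_z, n) ** 2)

beta = (sigma**2 - sigma_z**2) / (2.0 * sigma**2 * sigma_z**2)  # < 0 here

def f_L(l):
    """Analytic PDF of |L| for anisotropic Gaussian components."""
    return (2.0 * l**2 * np.exp(-l**2 / (2.0 * sigma**2))
            / (np.sqrt(2.0 * np.pi) * sigma**2 * sigma_z)
            * hyp1f1(0.5, 1.5, -beta * l**2))

# normalization check by midpoint rule
dx = 0.005
xs = np.arange(dx / 2.0, 10.0, dx)
norm = f_L(xs).sum() * dx                    # should be ~1

# histogram vs. analytic curve
hist, edges = np.histogram(L, bins=60, range=(0.0, 6.0), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
max_dev = np.max(np.abs(hist - f_L(mids)))   # should be small
print(norm, max_dev)
```

Setting $\sigma_{z}=\sigma$ collapses $\Phi$ to 1 and recovers the familiar Maxwell distribution, which is a useful limiting-case sanity check.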
Because Uranus spins quite rapidly, its spin state could not have simply been a byproduct of myriad small collisions. \begin{figure}[h] \centering \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{ang_equal_GF.pdf} \centering (a) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{obl_equal_GF.pdf} \centering (b) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{mesh_equal_GF_log.pdf} \centering (c) \end{tabular} \caption{(a) The spin distribution for two impacts of equal mass ($m_{i}=0.5\,M_{\oplus}$) onto an initially non-spinning Uranus. (b) The corresponding obliquity distribution for two equal impacts. The dashed line is the analytic result for the limit of an Earth mass distributed amongst a large number of particles. (c) A density plot of the spin frequency vs. obliquity where the value of each pixel is the number of iterations that yielded that result. Values within 10\% of Uranus's current obliquity and spin rate are contained within the red rectangle. The probability of falling within this rectangle compared to a similar space around the most likely value is 0.96, meaning that the current state is a likely outcome. } \label{fig:twoequal} \end{figure} \begin{figure}[h] \centering \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{ang_unequal_GF.pdf} \centering (a) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{obl_unequal_GF.pdf} \centering (b) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{mesh_unequal_GF_log.pdf} \centering (c) \end{tabular} \caption{(a) The spin distribution for two impacts of masses $0.8\,M_{\oplus}$ and $0.2\,M_{\oplus}$ onto a non-spinning planet.
(b) The corresponding obliquity distribution for these two unequal impacts. The dashed line is the analytic result for the limit of an Earth mass distributed amongst a large number of particles. (c) A density plot of the spin frequency vs. obliquity where each pixel is the number of iterations that yielded those values. Values within 10\% of Uranus's current obliquity and spin rate are contained within the red rectangle. The likelihood of falling within 10\% of the planet's current spin state is $l_{U}=0.0062$, 0.76 times that of falling within 10\% of the most likely value.} \label{fig:twounequal} \end{figure} Accordingly, we will now consider the intermediate cases with only a few impactors incident on a non-spinning planet. Figure \ref{fig:twoequal} shows the outcome of two equal-sized hits, and the resulting distributions already resemble the limit of multiple collisions. If the masses of the two impactors differ significantly, however, the corresponding spin and obliquity distributions are more similar to the single-impact case (Figure \ref{fig:twounequal}). Therefore, while the planet's obliquity distribution may be more or less flat, its spin rate strongly depends on both the number of strikes and the total mass in impactors.
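The dependence on the number of strikes can be isolated with a toy random walk: fixing the total impactor mass and splitting it among $N$ kicks with random directions yields a net spin angular momentum that falls off as $M_{T}/\sqrt{N}$. The sketch below uses isotropic kick directions for simplicity (the actual code's geometry, with trajectories parallel to the orbital plane, is anisotropic), so it illustrates only the scaling:

```python
import numpy as np

# Toy random walk: fixed total impactor mass split among N equal impacts,
# each contributing an angular-momentum kick in a random (isotropic)
# direction. Directions and units are illustrative, not drawn from the
# collisional code itself.
rng = np.random.default_rng(7)

def median_net_spin(n_impacts, n_trials=10_000, total_mass=1.0):
    m = total_mass / n_impacts
    v = rng.normal(size=(n_trials, n_impacts, 3))
    v /= np.linalg.norm(v, axis=2, keepdims=True)   # isotropic unit vectors
    L_net = m * v.sum(axis=1)                       # vector sum of the kicks
    return np.median(np.linalg.norm(L_net, axis=1))

l1, l4, l100 = (median_net_spin(n) for n in (1, 4, 100))
print(l1, l4, l100)   # net spin falls off roughly as 1/sqrt(N)
```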
\begin{table}[h] \begin{tabular}{l|lr|l|c} N & $M_{i}$ & $M_{T}$ & Probability ($l_{U}$) & Normalized Probability\\ \hline 1 & 1 & 1.0 & 5.0$\times 10^{-3}$ & 1.00\\ 2 & 0.5 & 1.0 & 1.1$\times 10^{-2}$ & 2.20\\ 3 & 0.333 & 1.0 & 7.1$\times 10^{-3}$ & 1.42\\ 4 & 0.25 & 1.0 & 4.5$\times 10^{-3}$ & 0.90\\ 7 & 0.142 & 1.0 & 6.4$\times 10^{-4}$ & 0.13\\ 100 & 0.01 & 1.0 & 0 & 0\\ \hline 2 & 0.8, 0.2 & 1.0 & 6.2$\times 10^{-3}$ & 1.24\\ \hline 1 & 0.41 & 0.41 & 5.2$\times 10^{-3}$ & 1.04\\ 2 & 0.205 & 0.41 & 4.4$\times 10^{-5}$ & 0.001\\ 3 & 0.137 & 0.41 & 2.0$\times 10^{-6}$ & $\sim$0\\ \hline 1 & 3.4 & 3.4 & 1.6$\times 10^{-3}$ & 0.32\\ 2 & 1.7 & 3.4 & 2.3$\times 10^{-3}$ & 0.46 \end{tabular} \caption{A Non-rotating Uranus}{This table shows the probability of a number of collisions (N), each with mass $M_{i}$ totaling $M_{T}$ (in Earth masses), simultaneously generating a spin rate between $0.9<\omega/\omega_{U}<1.1$ and an obliquity between $93^{\circ}<\epsilon<103^{\circ}$ out of $5\times10^{5}$ realizations. In this data set, Uranus is initially non-spinning with an obliquity of 0\textdegree, and in general, probabilities decrease with more impactors. The final column divides the probability by the odds of generating Uranus's current state from a single Earth-mass impactor (first entry).} \label{table:odds:1} \end{table} Table \ref{table:odds:1} shows a range of possible collisions onto a non-spinning planet. Here we show that the smallest amount of mass necessary to push Uranus toward its observed spin state is about $0.4\,M_{\oplus}$, regardless of the number of impacts. The odds of this happening decrease with each additional collision because each impact needs to hit at exactly the right location. We also provide statistics for impactors much greater than an Earth mass in the last section of Table \ref{table:odds:1}. Impactors this massive would likely violate our no mass-loss assumption, yet the odds of generating Uranus's current spin state are still low.
A more detailed analysis of these impacts is beyond the scope of this paper; however, see \cite{2018ApJ...861...52K,2019MNRAS.487.5029K} for a smoothed particle hydrodynamics analysis of the effects impacts have on Uranus's rotation rate and internal structure. We also explored cases with multiple unequal-sized impactors and discovered that the order of the impacts does not matter, as expected, and that the odds improve for more similar-sized impactors. An example of this can be seen in Figures \ref{fig:twoequal}a \& \ref{fig:twounequal}a where, for the same total mass, the spin distribution for two equal-sized impactors is concentrated near Uranus's current spin state, whereas the distribution is flatter for two unequal-sized impacts. We conclude that a small number of equal impacts totaling about $1\,M_{\oplus}$ is the most likely explanation for Uranus's spin state if the planet was initially non-spinning. \subsection{Adding the Effects of Gas Accretion} Gas accretion almost certainly provides a significant source of angular momentum, so much so that we might expect the giant planets to be spinning at near break-up velocities if they accreted gas from an inviscid thin circumplanetary disk \citep{1986Icar...67..391B, 2009Icar..199..338L, 2010AJ....140.1168W}. Instead, we observe the gas giants to be spinning several times slower, so there must have been some process for removing excess angular momentum. This mechanism may be a combination of multiple effects: magnetic braking caused by the coupling between a magnetized planet and an ionized disk \citep{2011AJ....141...51L, 2018AJ....155..178B}, vertical gas flow into the planet's polar regions and additional mid-plane outflows from a thick circumplanetary disk \citep{2012ApJ...747...47T}, and magnetically driven outflows \citep{1998ApJ...508..707Q,2003A&A...411..623F}.
Since both Uranus and Neptune spin at about the same rates, we suspect that gas accretion is responsible, though pebble accretion may also contribute a significant amount of prograde spin \citep{2020Icar..33513380V}. As such, the planets' initial obliquities should be near 0\textdegree~as the angular momentum imparted by gas is normal to the planet's orbital plane. \begin{table}[h] \begin{tabular}{l|lcr|l|c} N & $M_{i}$ & $M_{T}$ & $\epsilon_{i}$ & Probability ($l_{U}$) & Normalized Probability \\ \hline 1 & 1.0 & 1.0 & 0\textdegree & 4.5$\times 10^{-3}$ & 0.90\\ 2 & 0.25 & 0.5 & 0\textdegree & 5.4$\times 10^{-4}$ & 0.11\\ 2 & 0.5 & 1.0 & 0\textdegree & 1.0$\times 10^{-2}$ & 2.00\\ 2 & 1.0 & 2.0 & 0\textdegree & 4.7$\times 10^{-3}$ & 0.94\\ 2 & 1.5 & 3.0 & 0\textdegree & 2.5$\times 10^{-3}$ & 0.50\\ \hline 1 & 1.0 & 1.0 & 40\textdegree & 4.7$\times 10^{-3}$ & 0.94\\ 2 & 0.25 & 0.5 & 40\textdegree & 9.0$\times 10^{-4}$ & 0.18\\ 2 & 0.5 & 1.0 & 40\textdegree & 1.0$\times 10^{-2}$ & 2.00\\ 2 & 1.0 & 2.0 & 40\textdegree & 5.0$\times 10^{-3}$ & 1.00\\ 2 & 1.5 & 3.0 & 40\textdegree & 2.7$\times 10^{-3}$ & 0.54\\ \hline 1 & 1.0 & 1.0 & 70\textdegree & 4.8$\times 10^{-3}$ & 0.96\\ 2 & 0.25 & 0.5 & 70\textdegree & 1.7$\times 10^{-3}$ & 0.34\\ 2 & 0.5 & 1.0 & 70\textdegree & 1.0$\times 10^{-2}$ & 2.00\\ 2 & 1.0 & 2.0 & 70\textdegree & 5.0$\times 10^{-3}$ & 1.00\\ 2 & 1.5 & 3.0 & 70\textdegree & 2.7$\times 10^{-3}$ & 0.54\\ \end{tabular} \caption{An Initially Slow Rotating Uranus}{This table shows the same calculations as in Table \ref{table:odds:1}, but with the planet having an initial spin period of 68.8 hrs. $\epsilon_{i}$ is Uranus's initial obliquity.
The normalized probability column divides the Probability by 5$\times 10^{-3}$ as in Table \ref{table:odds:1}.} \label{table:odds:2} \end{table} \begin{figure}[h] \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \includegraphics[width=0.5\textwidth]{mesh_2-10_slow_GF_log.pdf} \end{tabular} \caption{Density plot showing two impacts of equal mass ($m_{i}=1.0\,M_{\oplus}$) incident on Uranus with $T_{i}=$68.8 hours and $\epsilon_{i}=$40\textdegree. The probability of Uranus's spin state falling within 10\% of the maximum value is 1.2 times that of the planet's current state ($l_{U}=0.005$).} \label{fig:den_slow_GF} \end{figure} First, we explore cases where the planet initially spins slowly. In Figure \ref{fig:den_slow_GF}, Uranus begins with a spin period four times longer than its current value and an obliquity of 40\textdegree, and is struck by two Earth-mass impactors. In this case, even if Uranus was tilted initially by another method, the odds of generating Uranus's current spin state are about the same as if the planet were untilted. This is shown in Table \ref{table:odds:2}, and the entries show similar likelihoods to the non-spinning case. However, both the non-spinning and slow-spinning cases are improbable for two reasons. First, the mechanism responsible for removing excess angular momentum during gas accretion needs to be extremely efficient. And second, the odds that both Uranus and Neptune were spun up similarly by impacts require significant fine-tuning.
\begin{table}[h] \begin{tabular}{l|lcr|l|c} N & $M_{i}$ & $M_{T}$ & $\epsilon_{i}$ & Probability ($l_{U}$) & Normalized Probability \\ \hline 1 & 1.0 & 1.0 & 0\textdegree & 3.4$\times 10^{-3}$ & 0.68\\ 2 & 0.25 & 0.5 & 0\textdegree & 0 & 0\\ 2 & 0.5 & 1.0 & 0\textdegree & 3.7$\times 10^{-3}$ & 0.74\\ 2 & 1.0 & 2.0 & 0\textdegree & 4.1$\times 10^{-3}$ & 0.82\\ 2 & 1.5 & 3.0 & 0\textdegree & 2.6$\times 10^{-3}$ & 0.52\\ 5 & 0.6 & 3.0 & 0\textdegree & 6.1$\times 10^{-3}$ & 1.22\\ 10 & 0.3 & 3.0 & 0\textdegree & 7.5$\times 10^{-3}$ & 1.50\\ 15 & 0.2 & 3.0 & 0\textdegree & 6.0$\times 10^{-3}$ & 1.20\\ \hline 1 & 1.0 & 1.0 & 40\textdegree & 4.5$\times 10^{-3}$ & 0.90\\ 2 & 0.25 & 0.5 & 40\textdegree & 1.3$\times 10^{-3}$ & 0.26\\ 2 & 0.5 & 1.0 & 40\textdegree & 7.4$\times 10^{-3}$ & 1.48\\ 2 & 1.0 & 2.0 & 40\textdegree & 4.7$\times 10^{-3}$ & 0.94\\ 2 & 1.5 & 3.0 & 40\textdegree & 2.6$\times 10^{-3}$ & 0.52\\ \hline 1 & 1.0 & 1.0 & 70\textdegree & 8.3$\times 10^{-3}$ & 1.66\\ 2 & 0.25 & 0.5 & 70\textdegree & 2.6$\times 10^{-2}$ & 5.20\\ 2 & 0.5 & 1.0 & 70\textdegree & 1.4$\times 10^{-2}$ & 2.80\\ 2 & 1.0 & 2.0 & 70\textdegree & 5.7$\times 10^{-3}$ & 1.14\\ 2 & 1.5 & 3.0 & 70\textdegree & 2.7$\times 10^{-3}$ & 0.54\\ \end{tabular} \caption{An Initially Fast Rotating Uranus}{This table shows the same calculations as in Table \ref{table:odds:1}, but with the planet having an initial spin period of 17.2 hrs. $\epsilon_{i}$ is Uranus's initial obliquity. The final column normalizes the probability column by 5$\times 10^{-3}$ as in Table \ref{table:odds:1}.} \label{table:odds:4} \end{table} \begin{figure}[h] \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \includegraphics[width=0.5\textwidth]{mesh_2-10_GF_log.pdf} \end{tabular} \caption{Density plot for collisions incident on Uranus with gravitational focusing. Two impacts of equal mass ($m_{i}=1.0\,M_{\oplus}$) incident on Uranus with $T_{i}=$17.2 hours and $\epsilon_{i}=$0\textdegree. 
The color bar shows the number of realizations for that value, and the contour lines contain the values within which a percentage of realizations are found. The red box contains the space within 10\% of Uranus's current obliquity and spin rate. Uranus having a spin of $2\,\omega_{U}$ and $\epsilon=30$\textdegree~is twice as likely as its current state ($l_{U}=0.0042$). } \label{fig:den_fast_GF} \end{figure} \begin{figure}[h] \centering \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{mesh_1-1_40_GF_log.pdf} \centering (a) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{mesh_2-05_40_GF_log.pdf} \centering (b) \end{tabular}\\ \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \centering \includegraphics[width=0.45\textwidth]{mesh_2-05_70_GF_log.pdf} \centering (c) \end{tabular} \caption{(a) Density plot showing one impact ($m_{i}=1.0\,M_{\oplus}$) incident on Uranus with $T_{i}=$17.2 hours and $\epsilon_{i}=$40\textdegree. It is 17.5 times more likely to fall within 10\% of the initial state than Uranus's current spin state ($l_{U}=0.0045$). Notice the sharp spike of over 2000 counts near the planet's initial spin state. (b) Two impacts ($m_{i}=0.5\,M_{\oplus}$) incident on Uranus with $T_{i}=$17.2 hours and $\epsilon_{i}=$40\textdegree. The probability of Uranus's spin state falling within 10\% of the maximum value is 3.5 times that of the planet's current state ($l_{U}=0.0075$). (c) Two impacts ($m_{i}=0.5\,M_{\oplus}$) incident on Uranus with $T_{i}=$17.2 hours and $\epsilon_{i}=$70\textdegree. The probability of Uranus's spin state falling within 10\% of the maximum value is 1.8 times that of the planet's current state ($l_{U}=0.014$).} \label{fig:den_fast_40_GF} \end{figure} Accordingly, we investigate the effects of gas accretion by considering impacts onto an untilted fast spinning Uranus. 
Note that since we are adding angular momentum vectors, the order does not matter; therefore, striking Uranus with a giant impactor before the planet accretes gas will yield the same probability distributions as the reverse case considered here. For an initial spin period near Uranus's current value, the minimum impactor mass increases by $\sqrt{2}$ from $\sim0.4\,M_{\oplus}$ to $0.55\,M_{\oplus}$ over the non-spinning case because the planet already has the correct $|\vec{L}|$, which must be rotated by $\sim90$\textdegree~by the impact. However, while the non-spinning case has a relatively flat obliquity distribution, a fast-spinning planet is more resistant to change. For example, striking this planet with a 1 $M_{\oplus}$ object will most likely yield little to no change to the planet's spin state (see Figure 7(a) in \cite{2020ApJ...888...60R}). Introducing more impactors does not change this conclusion appreciably; the planet still tends to remain with a low tilt and similar spin period. Figure \ref{fig:den_fast_GF} demonstrates this with the most favorable case of two 1 $M_{\oplus}$ strikes onto an untilted planet already spinning with a 17.2 hr period. Additional cases are reported in Table \ref{table:odds:4}. If Uranus was initially tilted by a 40\textdegree~resonance kick, its rapid rotation ensures that its spin state will tend to remain relatively unaffected by subsequent impacts. This can be seen in Figure \ref{fig:den_fast_40_GF}(a) with a 1 $M_{\oplus}$ strike, where the probability of tilting Uranus to 98\textdegree~is only 4.5$\times 10^{-3}$. The odds do improve if the number of impacts increases (Figure \ref{fig:den_fast_40_GF}(b)), but they are not better than the non-spinning case. However, if Uranus was initially tilted by 70\textdegree~via a spin-orbit resonance \citep{2020ApJ...888...60R}, then two 0.5 Earth-mass strikes generate a favorable result (Figure \ref{fig:den_fast_40_GF}(c)).
Also, only in this 70\textdegree~case will two 0.25 $M_{\oplus}$ strikes yield even better likelihoods (see Figure 8 in \cite{2020ApJ...888...60R}). Therefore, if Uranus's and Neptune's current spin rates were a byproduct of gas accretion, then a large resonance kick could significantly reduce the mass needed in later impacts. \begin{table}[h] \begin{tabular}{l|lcr|l|c} N & $M_{i}$ & $M_{T}$ & $\epsilon_{i}$ & Probability ($l_{U}$) & Normalized Probability \\ \hline 1 & 1.0 & 1.0 & 0\textdegree & 2.3$\times 10^{-3}$ & 0.46\\ 2 & 0.25 & 0.5 & 0\textdegree & 0 & 0.00\\ 2 & 0.5 & 1.0 & 0\textdegree & 2.6$\times 10^{-4}$ & 0.05\\ 2 & 1.0 & 2.0 & 0\textdegree & 2.7$\times 10^{-3}$ & 0.54\\ 2 & 1.5 & 3.0 & 0\textdegree & 2.0$\times 10^{-3}$ & 0.40\\ \hline 1 & 1.0 & 1.0 & 40\textdegree & 4.1$\times 10^{-3}$ & 0.82\\ 2 & 0.25 & 0.5 & 40\textdegree & 0 & 0.00\\ 2 & 0.5 & 1.0 & 40\textdegree & 2.0$\times 10^{-3}$ & 0.40\\ 2 & 1.0 & 2.0 & 40\textdegree & 4.1$\times 10^{-3}$ & 0.82\\ 2 & 1.5 & 3.0 & 40\textdegree & 2.5$\times 10^{-3}$ & 0.50\\ \hline 1 & 1.0 & 1.0 & 70\textdegree & 2.1$\times 10^{-3}$ & 0.42\\ 2 & 0.25 & 0.5 & 70\textdegree & 1.2$\times 10^{-4}$ & 0.02\\ 2 & 0.5 & 1.0 & 70\textdegree & 3.3$\times 10^{-3}$ & 0.66\\ 2 & 1.0 & 2.0 & 70\textdegree & 3.0$\times 10^{-3}$ & 0.60\\ 2 & 1.5 & 3.0 & 70\textdegree & 2.4$\times 10^{-3}$ & 0.48\\ \hline 5 & 0.8 & 4.0 & 0\textdegree & 3.4$\times 10^{-3}$ & 0.68\\ 10 & 0.4 & 4.0 & 0\textdegree & 5.0$\times 10^{-3}$ & 1.00\\ 15 & 0.2667 & 4.0 & 0\textdegree & 4.4$\times 10^{-3}$ & 0.88\\ \end{tabular} \caption{An Initially Very Fast Rotating Uranus}{This table shows the same calculations as in the previous tables, with masses in units of $M_{\oplus}$, but the planet is spinning with a period of 8.6 hrs.} \label{table:odds:3} \end{table} \begin{figure}[h] \begin{tabular}[b]{@{}p{0.45\textwidth}@{}} \includegraphics[width=0.5\textwidth]{mesh_10_GF_log.pdf} \end{tabular} \caption{Density plot showing ten impacts of equal mass ($m_{i}=0.4\,M_{\oplus}$) incident on Uranus with
$T_{i}=$8.6 hours and $\epsilon_{i}=$0\textdegree. The probability of Uranus's spin state falling within 10\% of the maximum value is 2.9 times higher than falling near the planet's current state ($l_{U}=0.005$), as shown in the red box.} \label{fig:10fast_GF} \end{figure} Finally, the mechanism that removes angular momentum during gas accretion could have been very weak, in which case Uranus would have been initially spinning very fast. Slowing down Uranus's spin rate and tilting the planet over would then require very massive impacts. As discussed in the previous subsection, changing the planet's spin state with many impactors requires more impacting mass to compensate for partial cancellations of impact effects. Table \ref{table:odds:3} shows that ten impacts totaling $4\,M_{\oplus}$ produce plausible outcomes. However, it is unclear how gas accretion would transport the optimal amount of angular momentum to the ice giants but not to the gas giants, nor is it expected that the massive impactors required in this scenario would spin both Uranus and Neptune down similarly. While their obliquity distributions peak at around 30\textdegree, which favors a Neptune formation scenario, the planets would still likely be spinning twice as fast as they are today (Figure \ref{fig:10fast_GF}). Additionally, ten independent strikes are less probable than two, as the solar system would need to have been populated with many massive rogue planetary cores. \section{Summary and Conclusion} We have searched exhaustively for ways to tilt Uranus to 98\textdegree. Since gas accretion provides the giant planets with a significant source of angular momentum \citep{1986Icar...67..391B, 2009Icar..199..338L, 2010AJ....140.1168W}, and the planet's core was likely to have formed from the accumulation of pebbles and planetesimals, any primordial spin states were likely to be erased, leaving near-zero initial obliquities and relatively fast spin rates.
As such, changing the planets' obliquities significantly without altering their spin periods requires either a specific configuration of large collisions or a secular spin-orbit resonance. If impacts were solely responsible for Uranus's large tilt, then there needed to have been multiple collisions in order to explain the prograde motion of the Uranian satellites \citep{2012Icar..219..737M}. Maximizing the probability of this outcome requires minimizing both the number of impacts and the mass of each impactor, as there must have been many more rogue Mars-sized cores than Earth-sized ones dispersed throughout the early solar system \citep{2015Natur.524..322L,2015PNAS..11214180L,2015A&A...582A..99I}. We have shown that, in general, two impacts totaling $1\,M_{\oplus}$ yield the most favorable outcome of all the possibilities we considered, but the odds generally do not change by more than a factor of a few for other scenarios. Also, the likelihood of generating Uranus's current spin state is still very low. An initially fast-spinning planet cannot be tilted easily because of its large initial angular momentum. We could improve the likelihood of generating Uranus's spin state by assuming a slower initial spin period (Figure \ref{fig:den_slow_GF}), but this would require an even more efficient method of removing angular momentum as the planet accretes its gaseous atmosphere; there seems to be little justification for this. The advantage of the collisionless secular spin-orbit resonance model is that it preserves both Uranus's spin rate and its moons' orbits by gently tipping the Uranian system over. Here we have investigated a resonance argument in which Uranus's spin precession is commensurate with Neptune. We have shown that Uranus being located between Jupiter and Saturn can augment the planet's spin precession rate enough to match that of Neptune located beyond Saturn. Capture into resonance can tilt the planet to near 90\textdegree, but only on unrealistic 100 Myr timescales.
Resonance kicks, on the other hand, require just $10^{7}$ years, but would produce at most a 40\textdegree~obliquity under ideal circumstances. This resonance can, however, easily excite Uranus's obliquity by about 10\textdegree~or 20\textdegree, which would eliminate one of the impacts required by \cite{2012Icar..219..737M}. As we have seen in Tables \ref{table:odds:2} and \ref{table:odds:4}, however, an initial obliquity of 40\textdegree~does not provide much mass reduction or probability improvement in the subsequent collisions needed to generate Uranus's current spin state. We would need to tilt the planet all the way up to $\sim70$\textdegree~to significantly reduce the mass of later impacts, which would most likely have occurred while Uranus still harbored a circumplanetary disk \citep{2020ApJ...888...60R}. Even in ideal circumstances these non-collisional models cannot drive the planet's obliquity beyond 90\textdegree, and so large collisions seem unavoidable. Tilting Uranus is a difficult problem, and each of the models that we have considered contains a major fault. Neptune's 30\textdegree~obliquity, by contrast, can be much more easily explained by any one of these scenarios. Regardless of the planet's initial spin rate, Figures \ref{fig:den_slow_GF} and \ref{fig:den_fast_GF} show a high probability of generating Neptune's current spin state. If Neptune's spin rate was a byproduct of gas accretion, then a small impact or an impact near the planet's center is sufficient to explain Neptune's low obliquity. \cite{2019MNRAS.tmp.2855R} reinforce this scenario since a head-on collision of a large impactor with Neptune may also explain its core's higher moment of inertia, in contrast to Uranus's more centrally dense interior. Furthermore, if Neptune was instead captured into a spin-orbit resonance, then we require a less massive disk and a smaller orbital inclination than for Uranus to tilt Neptune over \citep{2020ApJ...888...60R}.
Since the ice giants must have harbored large circumplanetary disks while accreting their massive atmospheres, we should expect at least minor obliquity excitations. Ultimately, a combination of the two models, a spin-orbit resonance followed by a giant impact, may be the most likely explanation for Uranus's unusual spin state. \section{Acknowledgement} This work was supported by NASA Headquarters under the NASA Earth Science and Space Fellowship grant NNX16AP08H. The authors also thank Dr. Leslie Sage for his helpful comments and suggestions on an earlier draft of this manuscript.
\section{Introduction} The low-energy dynamics of Quantum Chromodynamics (QCD) is characterized by two prominent properties, {\it i.e.} chiral symmetry breaking and confinement. In the QCD vacuum, chiral symmetry of quark fields is spontaneously broken, as probed by an order parameter $\langle\bar{q}q\rangle$, the vacuum expectation value of the scalar density operator $\bar{q}q$. Through the Banks-Casher relation \cite{Banks:1979yr}, this chiral condensate $\Sigma=-\langle\bar{q}q\rangle$ can be related to the spectral density of the low-lying Dirac eigenvalues. In the presence of valence quarks, or color sources, some modification of the {\it vacuum} is expected around them. Strictly speaking, it is no longer the vacuum, {\it i.e.} the lowest energy state of the system, but we use this terminology having in mind an application to the study of finite density QCD. Although the static color sources considered in this work are only a crude approximation to a finite density system, the study of the vacuum modification may shed light on the states of finite density QCD, which has been a subject of active research (see, for instance, \cite{Fukushima:2010bq}). The other interesting property of low-energy QCD is the confinement of quarks, which is characterized by the linearly rising potential between static color sources. When a pair of static quark and antiquark is put in the vacuum, a color flux-tube emerges between them, leading to a linear increase of the energy as a function of the separation. This flux-tube structure has been observed in lattice QCD calculations by monitoring the action density or chromo-electric (or chromo-magnetic) field \cite{Bali:1994de,Haymaker:1994fm,Cea:1995zt}. We expect that such a flux-tube structure is reflected in the low-lying fermion eigenmodes, because the Dirac eigenmodes carry the information of the background gauge field configuration.
Indeed, the QCD field-strength tensor can be reconstructed using the fermion eigenmodes \cite{Gattringer:2002gn}. In this paper we present a lattice study of the spatial distribution of the chiral condensate in the presence of static color charges. We consider quark-antiquark and three-quark systems represented by Wilson loops to mimic the mesonic and baryonic states, respectively. We use the lattice data of the Dirac eigenmodes calculated on the gauge configurations generated with 2+1-flavor dynamical overlap fermions \cite{Aoki:2012pma}. With the overlap fermion formulation \cite{Neuberger:1997fp,Neuberger:1998wv}, chiral symmetry is exactly realized on the lattice, which is important in the study of the low-lying Dirac eigenmodes, as they are very sensitive to any small violations of chiral symmetry. The lattice data used in this work have this nice property, and indeed were successfully applied to the extraction of the chiral condensate in the vacuum \cite{Fukaya:2007fb,Fukaya:2007yv,Fukaya:2009fh,Fukaya:2010na}. The organization of this paper is as follows. In Section~\ref{sec:2}, we describe the method to construct the \textit{local chiral condensate} $\bar{q}q(x)$ by using the overlap-Dirac eigenmodes, and show its distribution in the vacuum. In Sections~\ref{sec:3} and \ref{sec:4}, we investigate the spatial distribution of the local chiral condensate around the static color sources. Section~\ref{sec:summary} is devoted to a summary. Preliminary reports of this work are found in \cite{Iritani:2013rla,Iritani:2014jqa,Iritani:2014fga}. \section{Topological structure of the QCD vacuum} \label{sec:2} We investigate the topological structure of the non-perturbative QCD vacuum in terms of the eigenmodes of the overlap-Dirac operator. It preserves exact chiral symmetry, and its relation to the topological charge of the background gauge field configuration is manifest through the index theorem, at least for sufficiently smooth backgrounds \cite{Niedermayer:1998bi}.
In this paper, we use the 2+1-flavor dynamical overlap-fermion configurations generated by the JLQCD Collaboration \cite{Aoki:2012pma}. Their lattice volumes are $16^3\times 48$ and $24^3\times 48$ at a single inverse lattice spacing $a^{-1} = 1.759(10)$~GeV. The dynamical quark masses are $m_{ud} = 0.015a^{-1}$ and $m_s = 0.080a^{-1}$. The global topological charge is fixed at $Q=0$ to avoid the problem of divergent molecular dynamics force in the simulations \cite{Fukaya:2006vs}. It induces finite volume effects \cite{Aoki:2007ka}, which would not be significant for the relatively local observables considered in this study. Most of the results are obtained on the larger lattice ($24^3\times 48$) where the number of independent configurations is 50. In the following we describe the profile of the low-lying eigenmodes on these lattices. \subsection{Local chiral condensate $\bar{q}q(x)$} The massless overlap-Dirac operator is given by \cite{Neuberger:1997fp,Neuberger:1998wv} \begin{equation} D_{\rm ov}(0) = m_0 \left[ 1 + \gamma_5 \mathrm{sgn} \ H_W(-m_0) \right], \label{eq:overlap-Dirac-operator} \end{equation} with the hermitian Wilson-Dirac operator $H_W(-m_0) = \gamma_5 D_W(-m_0)$. Here, sgn denotes the matrix sign function. Introducing the quark mass $m_q$, the overlap-Dirac operator is modified as \begin{equation} D_{\rm ov}(m_q) = \left( 1 - \frac{m_q}{2m_0} \right) D_\mathrm{ov}(0) + m_q. \end{equation} This form cancels $\mathcal{O}(a)$ discretization effects, together with a proper rotation of the fermion fields in the observables. We define the eigenfunction $\psi_\lambda(x)$ associated with an eigenvalue $\lambda$ of the massless overlap-Dirac operator \begin{equation} \label{eq:eigeneq} D_{\rm ov}(0) \psi_\lambda(x)=\lambda\psi_\lambda(x), \end{equation} where the eigenfunction $\psi_\lambda(x)$ is normalized as $\sum_x \psi_\lambda^\dagger(x) \psi_\lambda(x) = 1$. 
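The exact lattice chiral symmetry of $D_{\rm ov}(0)$ is encoded in the Ginsparg--Wilson relation $\gamma_5 D + D\gamma_5 = D\gamma_5 D/m_0$, which follows solely from $[\mathrm{sgn}\,H_W]^2=1$. As a sketch, this can be checked numerically with a random Hermitian toy matrix standing in for $H_W(-m_0)$ (no actual gauge field or lattice structure is involved):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy dimension; gamma5 is modeled as a diagonal sign matrix

gamma5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2  # stand-in for the hermitian Wilson-Dirac operator H_W

# Matrix sign function via eigendecomposition: sgn(H) = U sgn(w) U^dagger.
w, U = np.linalg.eigh(H)
sgnH = U @ np.diag(np.sign(w)) @ U.conj().T

m0 = 1.0
D = m0 * (np.eye(n) + gamma5 @ sgnH)  # massless overlap operator

# Ginsparg-Wilson relation: {gamma5, D} = D gamma5 D / m0, exact up to rounding.
lhs = gamma5 @ D + D @ gamma5
rhs = D @ gamma5 @ D / m0
print(np.max(np.abs(lhs - rhs)))  # vanishes to machine precision
```

The identity holds for any Hermitian $H$, since $\mathrm{sgn}(H)$ squares to the identity; the physics enters only through the choice $H = H_W(-m_0)$.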
Using these eigenmodes, we may expand the ``local chiral condensate'' $\bar{q}q(x)$ as \begin{equation} \bar{q}q(x) = - \sum_\lambda \frac{ \psi_\lambda^\dagger(x)\psi_\lambda(x) }{ m_q + (1 - \frac{m_q}{2m_0})\lambda } \label{eq:localChiralCondensate} \end{equation} for a valence quark mass $m_q$. This relation represents a self-contracting fermion loop contribution from and to the scalar density operator. If the measured observables do not include other light quark fields to be contracted, then the substitution (\ref{eq:localChiralCondensate}) is justified. The correlation functions of $\bar{q}q(x)$ with the Wilson loop are in this class of observables. The chiral condensate $\langle \bar{q}q \rangle$ is given by an ensemble average of $\bar{q}q(x)$ without insertions of other operators. By averaging over space-time, this quantity is written in terms of only the eigenvalues, because of the normalization condition for $\psi_\lambda(x)$. Thus the relation between the chiral condensate and the spectral density $\rho(\lambda)$ of the Dirac eigenvalues is established. In the chiral limit it reads $\Sigma=\pi\rho(0)$, {\it i.e.} the Banks-Casher relation \cite{Banks:1979yr}. \subsection{Action and topological charge densities in terms of the Dirac eigenmode} Since the gauge field strength tensor $F_{\mu\nu}$ is defined through the covariant derivative $D_\mu$ as $F_{\mu\nu}=[D_\mu,D_\nu]$, it can also be related to the Dirac operator \cite{Gattringer:2002gn}. Here we briefly reproduce the derivation. The square of the Dirac operator $\Slash{D}\equiv\gamma_\mu D_\mu$ is decomposed as \begin{equation} [\Slash{D}(x)]^2 = \sum_\mu D_\mu^2(x) + \sum_{\mu < \nu} \gamma_\mu \gamma_\nu F_{\mu\nu}(x).
\label{eq:Dslash2} \end{equation} By multiplying $\gamma_\mu\gamma_\nu$ and taking a trace with respect to the Dirac indices, the field strength tensor is expressed as \begin{equation} F_{\mu\nu}(x) = - \frac{1}{4} \mathrm{tr} \left[ \gamma_\mu \gamma_\nu \Slash{D}^2(x) \right]. \label{eq:Fmunu} \end{equation} Therefore, by expanding the Dirac operator in terms of its eigenvectors $\psi_\lambda(x)$, an expansion of the field strength is obtained: \begin{equation} F_{\mu\nu}(x) = \sum_\lambda \lambda^2 f_{\mu\nu}(x)_\lambda, \;\; f_{\mu\nu}(x)_\lambda \equiv \frac{i}{2} \psi_\lambda^\dagger(x)\gamma_\mu\gamma_\nu\psi_\lambda(x). \label{eq:field_strength} \end{equation} Using this decomposition, the action and topological charge densities are expressed as \begin{eqnarray} \rho(x) = \mathrm{tr_c}[F_{\mu\nu} F_{\mu\nu}] &=& \mathrm{tr_c} \sum_{\lambda,\lambda'} \lambda^2 \lambda'^2 f_{\mu\nu}(x)_\lambda f_{\mu\nu}(x)_{\lambda'}, \label{eq:action_dens} \\ q_{\mathrm{top}}(x) = \mathrm{tr_c}[F_{\mu\nu} \tilde{F}_{\mu\nu}] &=& \mathrm{tr_c} \sum_{\lambda,\lambda'} \lambda^2 \lambda'^2 f_{\mu\nu}(x)_\lambda \tilde{f}_{\mu\nu}(x)_{\lambda'}, \label{eq:topolo_dens} \end{eqnarray} respectively. Here, $\mathrm{tr}_c$ denotes the trace with respect to the color indices and $\tilde{f}_{\mu\nu}(x)_\lambda = \frac{1}{2}\varepsilon_{\mu\nu\rho\sigma}f_{\rho\sigma}(x)_\lambda$. So far the expressions are exact, but in the numerical studies we introduce a truncation of the summation over the eigenmodes. This truncation acts as a filter to cut UV fluctuations above $\lambda_\mathrm{max}$. On the ensembles of $16^3\times 48$ and $24^3\times 48$ lattices, we calculated 160 and 240 pairs of eigenvalues and eigenvectors of $D_{\mathrm{ov}}$, respectively. Then, the eigenvalues after correcting the $O(a)$ effect, $\mathrm{Im}\ \lambda/(1-\mathrm{Re}\ \lambda/2m_0)$, cover the region between $\pm 300$~MeV, as shown in Figure~\ref{fig:eigval}. 
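The normalization of the trace formula for $F_{\mu\nu}$ can be verified with explicit Euclidean gamma matrices: for $\mu\neq\nu$ one has $\mathrm{tr}[\gamma_\mu\gamma_\nu\gamma_\mu\gamma_\nu]=-4$ while $\mathrm{tr}[\gamma_\mu\gamma_\nu]=0$, so the trace in (\ref{eq:Fmunu}) projects out exactly $F_{\mu\nu}$. A short numerical check (the field-strength values below are arbitrary test numbers, and the color structure is omitted):

```python
import numpy as np

# Euclidean gamma matrices in a chiral-like representation: {g_m, g_n} = 2 delta_mn.
s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
Z, I2 = np.zeros((2, 2)), np.eye(2)
gamma = [np.block([[Z, -1j * sk], [1j * sk, Z]]) for sk in s]
gamma.append(np.block([[Z, I2], [I2, Z]]))  # gamma_4

for m in range(4):  # verify the Clifford algebra first
    for n in range(4):
        anti = gamma[m] @ gamma[n] + gamma[n] @ gamma[m]
        assert np.allclose(anti, 2 * (m == n) * np.eye(4))

# Build Dslash^2 = sum_mu D_mu^2 + sum_{m<n} g_m g_n F_mn with scalar test values;
# the D^2 piece is proportional to the identity in Dirac space and drops out of the trace.
F = np.zeros((4, 4))
F[0, 1], F[0, 2], F[1, 3] = 0.7, -0.3, 1.2
Dslash2 = 2.5 * np.eye(4) + sum(gamma[m] @ gamma[n] * F[m, n]
                                for m in range(4) for n in range(m + 1, 4))

# Recover each component as F_mn = -1/4 tr[g_m g_n Dslash^2].
for m in range(4):
    for n in range(m + 1, 4):
        rec = -0.25 * np.trace(gamma[m] @ gamma[n] @ Dslash2)
        assert np.isclose(rec.real, F[m, n]) and np.isclose(rec.imag, 0.0)
```

The same projection, applied mode by mode, underlies the eigenmode expansion of $F_{\mu\nu}$ in (\ref{eq:field_strength}).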
In the measurements of the correlation between $\bar{q}q(x)$ and the Wilson loops, we monitor the dependence on the number $N$ of the eigenmodes included and confirm that the results saturate at least above 200~MeV. Some examples will be shown later. \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth,clip]{truncated_level.pdf} \caption{ \label{fig:eigval} Eigenvalue (in MeV) of the low-lying eigenmodes on the $24^3\times 48$ lattice. By including 240 eigenmodes, we can cover the range of $\lambda\lesssim$ 300~MeV. } \end{figure} Before presenting the results, we show some snapshots of the eigenmodes. The index theorem dictates that exact zero-modes are associated with topological excitations of the gauge field. This suggests that the near-zero modes are superpositions of such local topological objects. Using a truncation at $N=20$, we visualize the low-mode contributions to the local chiral condensate $\bar{q}q(x)$ (\ref{eq:localChiralCondensate}), the action density $\rho(x)$ (\ref{eq:action_dens}) and the topological charge density $q_{\mathrm{top}}(x)$ (\ref{eq:topolo_dens}) in the panels (a), (b) and (c) of Figure~\ref{fig:snapshot_local_densities}, respectively. They show tomographic images on a certain $T$-$X$ slice of the four-dimensional lattice extracted from a given gauge configuration of size $24^3\times 48$. \begin{figure*}[tb] \subfigure[\ local chiral condensate]{ \includegraphics[width=0.45\textwidth,clip]{local_dens_chiral_map.pdf} } \subfigure[\ action density]{ \includegraphics[width=0.45\textwidth,clip]{local_dens_action_map.pdf} } \subfigure[\ topological charge density]{ \includegraphics[width=0.45\textwidth,clip]{local_dens_topolo_map.pdf} } \caption{ \label{fig:snapshot_local_densities} Snapshots of (a) the local chiral condensate, (b) the action density and (c) the topological charge distributions observed with the sum of 20 lowest-lying eigenmodes. 
These pictures show the same $T$-$X$ slice of a $24^3 \times 48$ lattice on a representative gauge configuration. Local fluctuations of the chiral condensate are correlated with those of the topological charge density. } \end{figure*} As one can see in Fig.~\ref{fig:snapshot_local_densities} (a), the local condensate $\bar{q}q(x)$ forms a cluster structure. At the same space-time points of the cluster, the action density shows peaks (panel (b)). More importantly, the topological charge density has positive and negative islands stretching over several lattice spacings at the same space-time points. This observation is not new; indeed, there are lattice studies using the overlap-Dirac operator \cite{Ilgenfritz:2007xu,Ilgenfritz:2008ia} showing a similar profile of the low-lying eigenmodes. \section{Chiral condensate in Quark-Antiquark System} \label{sec:3} In the presence of color charges, there appears a flux-tube of chromo-electric fields, which has been observed on the lattice by measuring the spatial distribution of the field strength tensor \cite{Bali:1994de,Haymaker:1994fm,Cea:1995zt}. In this section, we investigate the spatial distribution of the local chiral condensate $\bar{q}q(x)$ (\ref{eq:localChiralCondensate}) around the static color sources. Previously, related analyses have been made, but with a single color source, {\it i.e.} a Polyakov line \cite{Feilmair:1988js,Sakuler:1992qx,Faber:1993sw}, or at finite temperature, where the flux-tube is expected to be suppressed \cite{Chagdaa:2006zz}. \subsection{Partial restoration of the chiral symmetry in the flux-tube} \begin{figure}[tb] \includegraphics[width=0.45\textwidth,clip]{flux-tube-chiral-measurement-xy.pdf} \caption{ \label{fig:flux-tube-measurement} Schematic picture of the flux-tube measurement. Static quark and antiquark are located at $(R/2,0)$ and $(-R/2,0)$ on the $XY$-plane.
} \end{figure} We investigate the spatial distribution of the local chiral condensate $\bar{q}q(x)$ around the static color sources by calculating a correlation \begin{equation} \langle\bar{q}q(\vec{x})\rangle_W \equiv \frac{\langle\bar{q}q(\vec{x}) W(R,T)\rangle}{\langle W(R,T)\rangle} - \langle \bar{q}q \rangle, \label{eq:diffLocalChiral} \end{equation} where $W(R,T)$ denotes a Wilson-loop of size $R \times T$. It represents a pair of static quark and anti-quark separated by a distance $R$. The origin of the coordinate is chosen at the center of the loop, which stretches along the $X$-axis, and the $Y$ and $Z$-axes correspond to the transverse directions. Figure~\ref{fig:flux-tube-measurement} shows a schematic picture of the measurement. As mentioned in the previous section, we truncate the sum over the eigenmodes in (\ref{eq:localChiralCondensate}) at the $N$-th eigenvalue and denote the corresponding local condensate as $\bar{q}q^{(N)}(x)$. In the final analysis, we chose $N=160$, after confirming that the result is unchanged once a sufficient number of low-lying modes are included. Reflecting the ultraviolet divergences of the scalar density operator, the expectation value of $\bar{q}q^{(N)}(x)$ contains quadratic and logarithmic divergences. The strong quadratic divergence is associated with a mixing with the identity operator and has the form $m_q/a^2$. Because of the exact chiral symmetry of the overlap-Dirac operator, the strongest divergence of $1/a^3$ is absent and the leading term is of $1/a^2$ and proportional to $m_q$. Since the truncation at a fixed mode number $N$ can be considered as a certain regularization scheme, the regularized operator $\bar{q}q^{(N)}(x)$ can be parametrized as \begin{equation} \bar{q}q^{(N)} = \bar{q}q^{\rm (subt)} + c_1^{(N)}m_q/a^2 + c_2^{(N)}m_q^3 \label{eq:subtractChiral} \end{equation} with $\bar{q}q^{(\rm subt)}$ the operator for which the power divergences are subtracted. 
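The determination of the mixing coefficients amounts to a linear least-squares fit of the vacuum expectation value in the basis $\{1,\,m_q,\,m_q^3\}$ of (\ref{eq:subtractChiral}). A minimal sketch with synthetic data (the numbers below are placeholders, not the measured JLQCD values):

```python
import numpy as np

# Synthetic <qbarq^(N)> at several valence quark masses (lattice units, a = 1).
m_q = np.array([0.005, 0.015, 0.030, 0.050, 0.080])
true = np.array([-0.0025, 0.40, 1.5])  # (<qbarq^(subt)>, c1/a^2, c2): placeholder values
vev = true[0] + true[1] * m_q + true[2] * m_q**3

# Least-squares fit in the basis {1, m_q, m_q^3} of the subtraction ansatz.
basis = np.column_stack([np.ones_like(m_q), m_q, m_q**3])
coef, *_ = np.linalg.lstsq(basis, vev, rcond=None)

# coef[0] is the power-divergence-subtracted condensate; coef[1] and coef[2]
# are the divergent mixing coefficients c1^(N)/a^2 and c2^(N).
print(coef)  # recovers the input values
```

In the actual analysis, the measured $\langle\bar{q}q^{(N)}\rangle$ at several valence masses plays the role of `vev` above.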
The second and third terms of (\ref{eq:subtractChiral}) represent a mixing with the identity operator; the mass dimension is compensated by $m_q/a^2$ and $m_q^3$, respectively. These coefficients $c_1^{(N)}$ and $c_2^{(N)}$ can be obtained by fitting the vacuum expectation value $\langle\bar{q}q^{(N)}\rangle$ as a function of the valence quark mass $m_q$ \cite{Noaki:2009xi}. When the correlation with the Wilson-loop is considered as in (\ref{eq:diffLocalChiral}), the contribution from the identity operator with the divergent coefficient $c_1^{(N)}m_q/a^2+c_2^{(N)}m_q^3$ cancels on the right hand side and the measurement is free from the power divergences. Figure \ref{fig:localChiral} shows the spatial distribution of $\langle\bar{q}q^{(N)}(\vec{x})\rangle_W$ on the $XY$-plane with a separation $R = 8$. The locations of the color sources are shown by the circles. In order to improve the signal of the Wilson loop, we apply APE smearing to the spatial link variables, and the temporal extent is fixed at $T=4$, for which the ground state becomes dominant. In this plot, $\bar{q}q(\vec{x})$ is set at $t = 0$, and the valence quark mass is $m_q = 0.015a^{-1}$. \begin{figure} \centering \includegraphics[width=0.49\textwidth,clip]{chiral_in_flux_000-080_R8_T4_map.pdf} \caption{ \label{fig:localChiral} Spatial profile of the local chiral condensate $\langle\bar{q}q^{(N)}(\vec{x})\rangle_W$ around a Wilson loop $W(R,T)$ with $R=8$ and $T=4$. The positions of the color sources, $(X,Y) = (4,0)$ and $(-4,0)$, are shown in the plot by white circles. } \end{figure} In order to improve the signal, the lattice data are averaged over space-time. Namely, assuming the translational invariance of the expectation value, we shift the whole system including the Wilson-loop and the local chiral condensate and take an average. This can be done, without the additional computational cost of solving quark propagators, by using the low-lying eigenmodes.
This is one of the advantages of the construction (\ref{eq:localChiralCondensate}). As Figure~\ref{fig:localChiral} demonstrates, there appears a tube-like structure between the color sources, where the change of the condensate becomes positive, {\it i.e.} $\langle \bar{q}q^{(N)}(\vec{x}) \rangle_W > 0$. This means that the magnitude of the chiral condensate is reduced between the color charges, since $\langle\bar{q}q\rangle$ is negative in the vacuum. Flux-tube measurements \cite{Bali:1994de,Haymaker:1994fm} show peak structures at the positions of the charges, due to the strong enhancement of the action/energy density there. In the low-mode-truncated local chiral condensate shown in Fig.~\ref{fig:localChiral}, however, no such characteristic structures around the color charges are observed. The absence of peaks will be discussed later. The remaining logarithmic divergence in $\bar{q}q^{(\rm subt)}$ can be canceled by taking a ratio \begin{equation} r(\vec{x}) \equiv \frac{ \langle\bar{q}q^{(\rm subt)}(\vec{x})\rangle_W}{ \langle\bar{q}q^{(\rm subt)}\rangle } = \frac{ \langle\bar{q}q^{\rm (subt)}(\vec{x})W(R,T)\rangle}{ \langle\bar{q}q^{\rm (subt)}\rangle \langle W(R,T)\rangle }, \label{eq:reductionRatio} \end{equation} where $\langle\bar{q}q^{\rm (subt)}\rangle$ is obtained by fitting the vacuum expectation value $\langle\bar{q}q\rangle$ to (\ref{eq:subtractChiral}) as a function of the valence quark mass $m_q$. As there are no remaining ultraviolet divergences, the ratio $r(\vec{x})$ has a proper continuum limit. Hereafter, we mainly use this quantity to quantitatively estimate the restoration of chiral symmetry. Figure~\ref{fig:chiral_ratio} shows the ratio $r(\vec{x})$ for the separation between the color sources fixed at $R=8$. The plots Fig.~\ref{fig:chiral_ratio}~(a) and Fig.~\ref{fig:chiral_ratio}~(b) correspond to the cross-sections of Figure~\ref{fig:localChiral} along the $X$-axis and the transverse $Y$-axis.
The locations of the color sources are shown by black dots in Fig.~\ref{fig:chiral_ratio}~(a). These plots provide a quantitative measure of the reduction of the chiral condensate. The region where the chiral condensate is reduced forms a structure that resembles the color flux-tube. In other words, chiral symmetry is partially restored inside the flux-tube. The restoration becomes stronger toward the center of the flux, reaching about 20\% for $R=8$. \begin{figure} \subfigure[\ cross section at $Y=0$]{ \includegraphics[width=0.48\textwidth,clip]{chiral_ratio_in_flux_X_000-080_R8_T4.pdf} } \subfigure[\ cross section at $X=0$]{ \includegraphics[width=0.48\textwidth,clip]{chiral_ratio_in_flux_Y_000-080_R8_T4.pdf} } \caption{ \label{fig:chiral_ratio} Ratio of the chiral condensate $r(x)$ for a separation $R = 8$. The color sources are separated along the $X$ axis and set at $(X,Y) = (4,0)$ and $(-4,0)$. The plots show the cross section (a) along the $X$-axis at $Y=0$ and (b) along the $Y$-axis at $X=0$. } \end{figure} A close relationship between the chiral condensate and the flux-tube is suggested in Fig.~\ref{fig:chiral_action}. We compare the cross-sections of the chiral condensate $\langle \bar{q}q^{(\mathrm{subt})}(\vec{x}) \rangle_W$ and the mode-truncated action density $\langle \rho^{(N)} (\vec{x})\rangle_W$, defined by (\ref{eq:action_dens}) with a cutoff on the mode number, around the Wilson loop using the 160 low-lying eigenmodes. The latter is calculated by inserting the action density $\rho(\vec{x})$ in place of $\bar{q}q(\vec{x})$ in Eq.~(\ref{eq:diffLocalChiral}), as used for the flux-tube measurement \cite{Bali:1994de,Haymaker:1994fm,Cea:1995zt}. In order to compare the profiles, both quantities are normalized to unity at the origin. Apart from the normalization, the spatial profile of the chiral condensate shows good agreement with that of the UV-truncated action density.
As mentioned above, the action density is strongly enhanced around the color charges, as reported in \cite{Bali:1994de,Haymaker:1994fm}. However, neither UV-filtered density shows such structures. We conclude that such peaks mainly come from the ultraviolet divergent part and thus cannot be seen in Fig.~\ref{fig:chiral_action}~(a) within our cutoff scale. \begin{figure} \centering \subfigure[\ cross section at $Y=0$]{ \includegraphics[width=0.48\textwidth,clip]{chiral_action_flux_slice_X_000-080_R8_T4.pdf}} \subfigure[\ cross section at $X=0$]{ \includegraphics[width=0.48\textwidth,clip]{chiral_action_flux_slice_Y_000-080_R8_T4.pdf}} \caption{ \label{fig:chiral_action} The spatial profiles of the local chiral condensate $\langle \bar{q}q^{\mathrm{(subt)}}(\vec{x}) \rangle_W$ and the UV-truncated action density $\langle \rho^{(N)}(\vec{x}) \rangle_W$ around the color sources with a separation $R = 8$, using the 160 low-lying eigenmodes. For comparison of their shapes, both quantities are normalized at the origin. } \end{figure} Figure~\ref{fig:chiral_ratio_cut} shows the same plot as Fig.~\ref{fig:chiral_ratio} but with different values of $N$, the number of eigenmodes included in the sum (\ref{eq:localChiralCondensate}). As expected from the construction that cancels the ultraviolet divergences, there is no significant difference between $N=120$ and 240. Our choice $N=160$ is therefore sufficiently conservative to estimate the local chiral condensate inside the tube. Up to the largest eigenmode in our calculation at $N = 240$, we have confirmed such saturation for the other quantities considered in this paper, except for the magnitudes of the action density $\rho^{(N)}(\vec{x})$ and the topological charge density $q_{\rm top}^{(N)}(\vec{x})$. The values of these quantities strongly depend on the cutoff scale $\lambda_{\rm max}$, as expected from the definitions in Eqs.~(\ref{eq:action_dens}) and (\ref{eq:topolo_dens}).
However, the spatial profiles of both $\langle \rho^{(N)}(\vec{x}) \rangle_W$ and $\langle \bar{q}q^{\mathrm{(subt)}}(\vec{x}) \rangle_W$ are rather stable, and there is no signature of peaks within our truncation, as in Fig.~\ref{fig:chiral_ratio_cut}~(a). \begin{figure}[tb] \subfigure[\ cross section at $Y=0$]{ \includegraphics[width=0.48\textwidth,clip]{cutoff_dep_cross_section.pdf}} \subfigure[\ cross section at $X=0$]{ \includegraphics[width=0.48\textwidth,clip]{cutoff_dep_cross_section_Y.pdf}} \caption{ \label{fig:chiral_ratio_cut} Same as Fig.~\ref{fig:chiral_ratio} but with different numbers of eigenmodes included: $N$ = 120, 160, 200, and 240. } \end{figure} The partial restoration of chiral symmetry is in accordance with the chiral bag model picture for the quark-antiquark system \cite{Hosaka:1996ee}. In the na\"ive bag model, chiral symmetry is completely restored inside the bag, while Fig.~\ref{fig:chiral_ratio} suggests a smooth boundary with a reduced but non-zero condensate inside the bag. \subsection{Chiral symmetry restoration as a function of the separation} Next we study the chiral symmetry restoration as a function of the separation of the color sources. Figure \ref{fig:chiral_ratio_R_dep} compares the cross sections of $r(x)$ along the $X$-axis for $R$ = 4, 8 and 10. By increasing the separation, we observe that the region of partial restoration stretches between the color sources, which are located at $X = R/2$ and $- R/2$. This supports the picture of the tube structure. The magnitude of the reduction increases with $R$. For instance, at the origin, the reduction of about 15\% at $R = 4$ grows to about 25\% at $R = 10$. Beyond $R=10$, the statistical signal becomes much worse, and the effect of the spatial boundary would become important as $R$ approaches $L/2$.
\begin{figure} \includegraphics[width=0.47\textwidth,clip]{chiral_ratio_in_flux_X_000-080_R_dep_T4.pdf} \caption{ \label{fig:chiral_ratio_R_dep} Chiral condensate ratio $r(x)$ along the $X$-axis. Results are shown for increasing separations $R$ = 4, 8 and 10. Color sources are located at $(R/2,0)$ and $(-R/2,0)$. } \end{figure} In Figure~\ref{fig:chiral_ratio_center}, we plot the value of the ratio at the center $r(0)$, where the magnitude becomes minimum, as a function of $R$. As the separation $R$ increases, the ratio of the chiral condensate decreases monotonically up to the maximum distance we could explore. At larger distances, the effect of string breaking should manifest itself in dynamical QCD, and the local chiral condensate would stop decreasing. As far as we can observe, the reduction of the chiral condensate inside the color flux-tube is about 20--25\% at a distance of 1~fm, assuming that string breaking does not occur at this scale \cite{Bali:2005fu}; it is difficult to observe the broken-string state using the Wilson loop as a color source. \begin{figure} \centering \includegraphics[width=0.48\textwidth,clip]{chiral_ratio_center_R_dep.pdf} \caption{ \label{fig:chiral_ratio_center} Ratio at the center of the flux $r(0)$ as a function of the separation $R$. } \end{figure} By increasing the separation between the color sources, the thickness of the flux is expected to grow logarithmically as a function of its length \cite{Hasenfratz:1980ue,Luscher:1980iy}. Such behavior has indeed been observed in quenched lattice QCD calculations \cite{Cardoso:2013lla} (see also \cite{Bakry:2010sp} for a study at finite temperature), for which one can increase the statistics more easily. In this work we probe the thickness through the chiral condensate ratio $r(x)$.
\begin{figure} \includegraphics[width=0.48\textwidth,clip]{chiral_ratio_Y_Xdep_comp_fit.pdf} \caption{ \label{fig:chiral_ratio_comp} Cross section of the ratio along the $Y$-axis with different separations $R$ and cut points $X$. } \end{figure} Figure~\ref{fig:chiral_ratio_comp} shows the cross-section of $r(x)$ along the $Y$-axis for some combinations of $R$ and $X$. It is clear that for a fixed $X$ the flux is thicker when the separation $R$ is larger. More interestingly, the curve for $R = 8$ at $X = 0$ almost coincides with that for $R = 10$ at $X = 2$. Similarly, the curve for $R = 4$ at $X = 0$ coincides with that for $R = 6$ at $X = 2$. Due to the reflection symmetry, these behaviors are also observed at $X = - 2$. This indicates that the thickness of the flux is highly correlated with the magnitude of the reduction. We also note that these corresponding cross-sections have the same distance from the color charge, which is given by $R/2 - |X|$ for $|X| \leq R/2$. In fact, such a coincidence is expected from an effective string model \cite{Cea:2012qw,Kharzeev:2014xta}. According to that model, the ratio $r(Y)$ is written as \begin{equation} r(Y) = 1 - \tilde{r} \frac{\mu^2}{\alpha} \frac{K_0((\mu^2 Y^2 + \alpha^2)^{1/2})}{K_1(\alpha)}, \label{eq:fit_func} \end{equation} where $K_0(x)$ and $K_1(x)$ are modified Bessel functions. The parameter $\mu$ has a physical interpretation as the inverse penetration length of the flux in the perpendicular direction, and $\alpha$ is the thickness of the core. The parameter $\tilde{r}$ represents the strength of the condensate reduction. The function (\ref{eq:fit_func}) reproduces the lattice data quite well, as shown by the curves in Fig.~\ref{fig:chiral_ratio_comp}. The fit results are summarized in Table~\ref{tab:fit_cross_section}. The penetration length is in the range $1/\mu\simeq$ 1.0--1.6 in lattice units, which corresponds to 0.11--0.18~fm in physical units.
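As an illustration, the profile (\ref{eq:fit_func}) can be evaluated numerically. The following Python sketch (an illustrative aid, not part of the analysis code) implements $K_0$ and $K_1$ through the integral representation $K_n(x)=\int_0^\infty e^{-x\cosh t}\cosh(nt)\,dt$, so that only the standard library is needed, and evaluates $r(Y)$ with the $R=8$, $X=0$ fit parameters ($\tilde{r}=1.11$, $\mu=0.71$, $\alpha=2.2$, in lattice units):

```python
import math

def bessel_k(n, x, t_max=10.0, steps=4000):
    """Modified Bessel function K_n(x) via the integral representation
    K_n(x) = int_0^inf exp(-x cosh t) cosh(n t) dt (trapezoidal rule)."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return h * total

def r_fit(y, r_tilde, mu, alpha):
    """Condensate-ratio profile r(Y) of the effective string model."""
    arg = math.sqrt(mu ** 2 * y ** 2 + alpha ** 2)
    return 1.0 - r_tilde * (mu ** 2 / alpha) * bessel_k(0, arg) / bessel_k(1, alpha)

# Fit parameters for R = 8, X = 0 (lattice units)
r0 = r_fit(0.0, r_tilde=1.11, mu=0.71, alpha=2.2)
```

At the center this gives $r(0)\approx 0.79$, close to the measured value listed in Table~\ref{tab:fit_cross_section}, and $r(Y)\to 1$ as $Y$ grows, i.e. the condensate recovers its vacuum value away from the tube.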
The core size $\alpha$ is 1.4--3.6 in lattice units, corresponding to the range 0.15--0.4~fm. We observe an increase of $\alpha$ as $R$ increases while $X$ is fixed at zero, but given the large statistical errors we cannot claim clear evidence of string fattening. \begin{table}[tbp] \begin{tabular}{ccc|cccc} \hline \hline $R$ & $X$ & $r(0)$ & $\tilde{r}$ & $\mu$ & $\alpha$ & $\chi^2/\mathrm{dof}$ \\ \hline
10 &0 &0.757(10) & 1.54(7) & 0.66(11) & 2.3(0.9) & 0.40 \\
10 &1 &0.762(10) & 1.46(7) & 0.72(14) & 2.8(1.2) & 0.46 \\
10 &2 &0.778(10) & 1.26(6) & 0.71(14) & 2.5(1.1) & 0.66 \\
10 &3 &0.805(9) & 1.00(6) & 0.75(18) & 2.5(1.3) & 1.00 \\ \hline
8 & 0 & 0.786(5) & 1.11(3) & 0.71(7) & 2.2(0.5) & 0.43 \\
8 & 1 & 0.792(5) & 1.08(4) & 0.72(8) & 2.3(0.6) & 0.25 \\
8 & 2 & 0.813(5) & 0.93(3) & 0.75(11) & 2.5(0.8) & 0.58 \\
8 & 3 & 0.855(5) & 0.69(3) & 0.83(17) & 3.0(1.3) & 1.42 \\ \hline
6 & 0 & 0.815(3) & 0.89(2) & 0.66(4) & 1.7(2) & 0.38 \\
6 & 1 & 0.827(3) & 0.81(2) & 0.65(4) & 1.6(2) & 1.01 \\
6 & 2 & 0.865(3) & 0.65(2) & 0.61(4) & 1.4(2) & 2.36 \\ \hline \hline \end{tabular} \caption{ \label{tab:fit_cross_section} Fit results of $r(Y)$ to (\ref{eq:fit_func}) for each $R$ and $X$, together with the $\chi^2$ per degree of freedom. Note that $\mu$ and $\alpha$ are in lattice units. The condensate ratio at the center $r(0)$ is listed as well. } \end{table} \section{Chiral Condensate in the Three-Quark System} \label{sec:4} \subsection{Partial Restoration of chiral symmetry in the 3Q-system} Next we consider a system consisting of three color charges that represents a baryon, which we call the 3Q-system in this paper.
Using the path-ordered product $U_k \equiv \prod_{\Gamma_k} e^{iagA_k}$ along a path $\Gamma_k$, the 3Q Wilson-loop is given by \begin{equation} W_\mathrm{3Q} \equiv \frac{1}{3!}\varepsilon_{abc}\varepsilon_{a'b'c'} U_1^{aa'} U_2^{bb'} U_3^{cc'}, \label{} \end{equation} which is made color-singlet by the totally anti-symmetric tensor $\varepsilon_{abc}$ of the color indices $a$, $b$, and $c$ \cite{Takahashi:2000te,Takahashi:2002bw}. Similar to the $\mathrm{\bar{Q}Q}$-system, the spatial distribution of the chiral condensate for the 3Q-system is measured as \begin{equation} \langle \bar{q}q(\vec{x}) \rangle_{{\rm 3Q}} \equiv \frac{ \langle \bar{q}q(\vec{x}) W_{\rm 3Q} \rangle }{ \langle W_{\rm 3Q} \rangle } - \langle \bar{q}q \rangle, \label{eq:chiral_around_3Q} \end{equation} with the 3Q Wilson loop $W_{\rm 3Q}$. The ratio of the chiral condensate in the 3Q-system, $r_\mathrm{3Q}(\vec{x})$, for which the ultraviolet divergences cancel, is then constructed as \begin{equation} r_\mathrm{3Q}({\vec{x}}) \equiv \frac{\langle \bar{q}q^{(\mathrm{subt})}(\vec{x})W_\mathrm{3Q} \rangle}{ \langle \bar{q}q^\mathrm{(subt)}\rangle\langle W_\mathrm{3Q} \rangle}. \label{} \end{equation} Figure~\ref{fig:chiral_3q_measure} shows a schematic picture of the construction of the 3Q Wilson loop $W_\mathrm{3Q}$ from the Wilson lines $U_k$. For simplicity, we use an isosceles right triangle configuration of the color charges on the $XY$-plane, and the coordinates are set as in Fig.~\ref{fig:chiral_3q_measure}. In this case, the junction point of the three flux tubes (the Fermat point) corresponds to the origin \cite{Takahashi:2000te,Takahashi:2002bw}. The measurement of the local chiral condensate $\bar{q}q(\vec{x})$ is done at a fixed time slice. The low-mode truncation number $N$, the temporal extension $T$ and the other measurement settings are the same as in the $\mathrm{Q\bar{Q}}$-system.
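The color structure of $W_\mathrm{3Q}$ can be checked with a small standalone script. The Python sketch below (illustrative only; with the $1/3!$ normalization, trivial links give unity) contracts three $3\times 3$ link matrices with two Levi-Civita tensors, and verifies that replacing all three lines by the same diagonal SU(3) matrix $V$ still gives unity, since $\varepsilon_{a'b'c'}V_{aa'}V_{bb'}V_{cc'} = \det(V)\,\varepsilon_{abc}$ and $\det V = 1$:

```python
import cmath

# Levi-Civita symbol eps_{abc} for a, b, c in {0, 1, 2} (zero entries omitted)
EPS = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}

def w_3q(u1, u2, u3):
    """Color-singlet contraction (1/3!) eps_{abc} eps_{a'b'c'} U1^{aa'} U2^{bb'} U3^{cc'}."""
    total = 0.0
    for (a, b, c), s1 in EPS.items():
        for (a2, b2, c2), s2 in EPS.items():
            total += s1 * s2 * u1[a][a2] * u2[b][b2] * u3[c][c2]
    return total / 6.0

identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

# A diagonal SU(3) matrix (unitary with unit determinant) as a toy gauge rotation
th = 0.7
v = [[cmath.exp(1j * th), 0, 0],
     [0, cmath.exp(-1j * th), 0],
     [0, 0, 1]]
```

Here `w_3q(identity, identity, identity)` and `w_3q(v, v, v)` both equal 1, illustrating that the $\varepsilon$--$\varepsilon$ contraction projects the three fundamental color indices onto the singlet.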
\begin{figure} \includegraphics[width=0.47\textwidth,clip]{flux-3q-construction.pdf} \caption{ \label{fig:chiral_3q_measure} A schematic picture of the construction for the three-quark system with an isosceles right triangle configuration. The three Wilson lines $U_k$ correspond to the static color sources. } \end{figure} Figure \ref{fig:qbarq_3Q_ratio} shows the ratio $r_\mathrm{3Q}(\vec{x})$ with color sources at $(X,Y) = (6,0)$, $(0,6)$ and $(0,0)$, denoted by circles in the plot. As shown in Fig.~\ref{fig:qbarq_3Q_ratio}, the magnitude of the chiral condensate is reduced among the color sources, which indicates the partial restoration of chiral symmetry inside the 3Q-system. Similar to the $\mathrm{Q\bar{Q}}$-system in Sec. III, no peaks appear at the color charges within our truncation scale. We note that the characteristic $Y$-type flux is not clearly seen in this plot, probably because the thickness of the flux is comparable to the color source separation. Because of the statistical noise, we are not able to repeat the calculation for larger quark separations. \begin{figure} \includegraphics[width=0.48\textwidth,clip]{chiral3_ratio_heatmap_000-080_R6_Z0_T4.pdf} \caption{ \label{fig:qbarq_3Q_ratio} Condensate ratio $r_\mathrm{3Q}(\vec{x})$ with the color sources at $Q_1 = (6,0)$, $Q_2 = (0,6)$, and $Q_3 = (0,0)$ on the $XY$-plane. } \end{figure} Like in the $\mathrm{\bar{Q}Q}$-system, the magnitude of the restoration depends on the separation of the sources. Figure~\ref{fig:qbarq_3Q_ratio_Rdep} shows the cross-section of the ratio $r_\mathrm{3Q}(\vec{x})$ along the line of $X = Y$ with the color sources at $(X,Y) = (R,0)$, $(0,R)$ and $(0,0)$. In this setup, the measurement goes through one color charge and the center of mass of the system. By comparing the data for $R = 3$ and 6, we find that the reduction is more substantial for $R=6$, which is similar to the $\mathrm{Q\bar{Q}}$-system (see Fig.~\ref{fig:chiral_ratio_R_dep}).
The reduction of the local chiral condensate becomes larger with the size of the loop, and takes its minimum value around the center of gravity. With $R=6$, the reduction is about 30\%, which is also similar to that of the $\mathrm{Q\bar{Q}}$-system. \begin{figure} \includegraphics[width=0.48\textwidth,clip]{flux_chiral_slice_C_000-080_Rdep_Z0_T4.pdf} \caption{ \label{fig:qbarq_3Q_ratio_Rdep} Chiral condensate ratio $r_{\rm 3Q}(\vec{x})$ along the line of $X = Y$ with the color sources at $(R,0), (0,R)$ and $(0,0)$ on the $XY$-plane with $R = 3$ and 6. } \end{figure} \subsection{Partial restoration at \textit{finite density}} Finally, using the observed modification of the local chiral condensate around the color sources, we estimate the size of the partial restoration of chiral symmetry in \textit{finite density} QCD. We consider a system with a fixed number of baryons in a finite volume, so that the baryon number density $\rho$ is $N_b/L^3$, where $N_b$ is the number of baryons and $L^3$ is the spatial volume. As a toy example we take $N_b=1$ and replace the baryon by the 3Q Wilson-loop. This gives only a crude approximation of the realistic system, but given the difficulty of simulating QCD at finite chemical potential it may provide a useful clue to the understanding of finite density QCD. The net change of the condensate in such a system is estimated by the spatial average of the condensate ratio $r_\mathrm{3Q}(\vec{x})$: \begin{eqnarray} \frac{\langle \bar{q}q \rangle_\rho}{\langle \bar{q}q \rangle_0} \equiv \frac{1}{L^3}\sum_{\vec{x}}^{L^3} r_{\mathrm{3Q}}(\vec{x}), \label{eq:spatial_average} \end{eqnarray} where $\langle\bar{q}q\rangle_\rho$ is the condensate at the finite baryon number density $\rho=1/L^3$. We use two lattice volumes, $L^3=16^3$ and $24^3$, which correspond to $(16a)^{-3} \simeq 0.18\ \mathrm{fm}^{-3}$ and $(24a)^{-3} \simeq 0.05\ \mathrm{fm}^{-3}$, respectively.
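The quoted densities follow from simple arithmetic, and the linearity of the average (\ref{eq:spatial_average}) in $1/L^3$ can be made explicit. In the Python sketch below (illustrative; the lattice spacing $a \approx 0.11$~fm is inferred here from the quoted densities, and the integrated condensate deficit is a hypothetical number, not a measured one):

```python
# Assumed lattice spacing in fm, inferred from the densities quoted in the text
a = 0.11

def baryon_density(L):
    """One static baryon in an L^3 box: rho = 1 / (L a)^3 in fm^-3."""
    return (L * a) ** -3

rho16 = baryon_density(16)   # roughly the normal nuclear density 0.18 fm^-3
rho24 = baryon_density(24)   # roughly 0.05 fm^-3

def average_ratio(L, deficit):
    """Spatial average of r_3Q over the box: if the condensate is depleted by
    an integrated amount `deficit' (lattice units) around the sources, the
    average is 1 - deficit / L^3, i.e. linear in 1/L^3 with slope -deficit."""
    return 1.0 - deficit / L ** 3
```

This makes transparent why the fits in Fig.~\ref{fig:chiral_volume_dep} are linear through unity at $1/L^3 = 0$, with a slope set by the size of the depleted region, and hence steeper for larger $R$.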
The $16^3$ lattice roughly corresponds to the normal nuclear density $\rho_0\simeq 0.18\ \mathrm{fm}^{-3}$. Figure~\ref{fig:chiral_volume_dep} shows $\langle \bar{q}q \rangle_\rho/\langle \bar{q}q \rangle_0$ as a function of $1/L^3$. The two symbols correspond to the different configurations of the color sources, {\it i.e.} $(0,0)$, $(R,0)$ and $(0,R)$ with $R = 3$ and 6 on the $XY$-plane. The solid lines are the results of a linear fit with the value fixed to 1 at $1/L^3 = 0$. The linear departure from unity at $\rho=0$ simply means that there is a finite region where the chiral condensate is reduced from its vacuum value. Since the region gets larger with increasing $R$, the slope for larger $R$ is steeper. In our setup, the reduction of the chiral condensate at the normal nuclear density is only $\sim$ 5\%, which is much smaller than the phenomenological model estimate of the order of 30\% \cite{Hayano:2008vn}. Our estimate, however, assumes a fixed spatial size of the {\it baryon} which is smaller than the realistic nucleon. For instance, the root-mean-square radius of our setup in Fig.~\ref{fig:chiral_volume_dep} is 0.44~fm when $R=6$, while the charge radius of the proton is 0.88~fm. As the restoration of the chiral condensate is stronger for larger separation, this suggests that $\langle\bar{q}q\rangle_\rho$ in realistic finite density QCD could be substantially lower than our estimate. \begin{figure} \includegraphics[width=0.47\textwidth,clip]{vol_dependence_qbarq_ratio_average.pdf} \caption{ \label{fig:chiral_volume_dep} Reduction of the chiral condensate at finite density measured by $\langle \bar{q}q \rangle_\rho/\langle \bar{q}q \rangle_0$. The estimates for a given configuration of color sources, {\it i.e.} at $(0,0), (R,0)$ and $(0,R)$ on the $XY$-plane, with $R=3$ and $6$ are shown. } \end{figure} \section{Summary} \label{sec:summary} The Dirac eigenmodes carry the full information of the background gauge field.
Indeed, having the complete set of eigenvalues and eigenvectors, one can reconstruct the field-strength tensor $F_{\mu\nu}(x)$ at any point $x$. They therefore offer an interesting way of filtering out the ultraviolet modes and investigating the low-energy dynamics of QCD by using only the low-lying eigenmodes upon reconstruction. This is a sound and well-defined regularization method of quantum field theory. We use this method to investigate the spatial profile of the chiral condensate in the presence of external sources. On the lattices generated with 2+1 flavors of dynamical overlap fermions, we calculate the low-lying eigenvalues and associated eigenvectors of the overlap-Dirac operator, and use them to reconstruct the chiral condensate locally. Then, it is straightforward to measure its correlation with the external color sources set up to model the $\mathrm{\bar{Q}Q}$ and 3Q systems. We find that the local chiral condensate shows a structure interpreted as a color flux-tube between the $\mathrm{\bar{Q}Q}$ color sources, in which the condensate decreases significantly. It indicates a partial restoration of chiral symmetry inside the flux-tube and suggests that it also happens inside hadrons. The spatial profile is consistent with a string model of the confinement potential, giving further support to the presence of the color flux-tube. We perform a similar measurement in the 3Q system, which is new as far as we know. It again shows the partial restoration of chiral symmetry among the color sources. The reduction of the condensate is about 30\% for a separation between the color sources of $\sim$ 1~fm. It can be used to estimate the chiral condensate in the finite density system. The method developed in this work may easily be applied to the study of finite temperature QCD, where Polyakov loops can be used as static color sources.
Since the eigenmodes can be used to define various charge densities, such as the axial charge density, the quark number density, and the topological charge density, they may provide an interesting alternative way to measure their spatial distributions. \section*{Acknowledgements} The lattice QCD calculations have been done on SR16000 at the High Energy Accelerator Research Organization (KEK) with the support of its Large Scale Simulation Program (No. 13/14-05). This work is supported in part by the Grant-in-Aid of the Japanese Ministry of Education (Nos. 25287046 and 26247043), and the SPIRE (Strategic Program for Innovative REsearch) Field 5 project.
\section{Introduction} Coherent effects, as fragile as they may seem, might be able to survive in complex systems even in the presence of strong noise induced by the coupling to an external environment. They are often related to biological function in complex chemical and biophysical systems~\cite{photo,photoT,coherence}. Understanding under which conditions robust coherent effects can be sustained even at room temperature is a central issue for designing efficient quantum devices. Molecular nanotubes are among the most interesting and most investigated structures. They are present in several natural photosynthetic complexes, for instance in the Green Sulphur Bacteria~\cite{Valleau,Huh,arvi1,arvi2,arvi3,K31,K32} or in Phycobilisome Antennas~\cite{nir1,nir2,nir3,nir4}. They are also present in other biomolecular systems, for instance in Microtubules, which are fundamental biological structures showing interesting similarities with photosynthetic Antenna complexes~\cite{craddock,Phil}. Artificial molecular nanotubes are also at the centre of research interest~\cite{caoNT,eisele,NT2,K1}. Nanotubular molecular aggregates are extremely efficient for light-harvesting and energy transport and they present a very ordered structure with a high degree of symmetry~\cite{arvi1,arvi2,arvi3,K2,K41,K42,K43}. The high degree of symmetry concerns both the molecule positions and the orientation of their transition dipoles. Despite all that, a clear understanding of how structural features in molecular aggregates can sustain coherent effects and explain their high efficiency is still missing. Some of the primary coherent effects which are thought to be responsible for the high efficiency of molecular nanotubes are induced by the delocalization of the excitation over many molecules. Since sunlight is very dilute, usually only one excitation is present in such complexes, so that single-excitation delocalized states are usually investigated.
Delocalized excitonic states can lead to cooperative effects, such as superradiance~\cite{K31,K32,K2,Jaggr,Moll,vangrondelle} and super-transfer~\cite{schulten,srlloyd}, and they can be useful in both natural and artificial light-harvesting complexes~\cite{eisele,vangrondelle,superabsorb,kaplan1,kaplan2,kaplan3,kaplan4,kaplan5,kaplan6,kaplan7,kaplan8,sr2,srfmo,srrc}. Specifically, coherently delocalized excitonic states can have a large dipole strength which strongly couples them to the electromagnetic field. Thus, these states are able to super-absorb light at a rate much larger than the single-molecule absorbing rate, since the absorption rate of delocalized excitonic states can increase with the number of molecules over which the excitation is delocalized~\cite{K31,K32}. States with a large dipole strength can also couple efficiently between themselves, inducing a super-transfer coupling between distant molecular aggregates~\cite{srlloyd} or different parts of the same aggregate, as we show here. Single excitonic states delocalized over a large number of molecules are called macroscopic coherent states and they are studied both for applications and for basic science~\cite{mc1,mc2,Cl,eisele2,eisele3,JYZ,JRC}. Molecular nanotubes are composed of a network of self-assembled photo-active molecules. Each molecule can be treated as a two-level system, characterized by both an excitation energy and a transition dipole moment which determines its coupling with the electromagnetic field and with the other molecules. The interaction between the molecules is often assumed to be dipole-dipole~\cite{K2,K41,K42,K43}, which decays with the distance as $1/r^3$, or, in some approximate schemes, as nearest-neighbour only~\cite{K1}. While the results thus obtained are certainly very interesting, care is needed when applying such simplifications to large molecular structures.
Indeed, the dipole-dipole interaction is valid when the distance between the molecules is sufficiently large and the overall system size $L$ is considerably smaller than the wavelength $\lambda_0$ connected with the excitation energy of the molecules (small volume limit). Since nanotubular aggregates can be large, here we consider a more accurate interaction Hamiltonian~\cite{mukamelspano} which takes into account the interaction between oscillating charges in each molecule. Such a description reduces to the usual dipole-dipole interaction in the small volume limit. Using such a radiative Hamiltonian, we have analyzed the existence of macroscopic coherent states at room temperature in different, natural and artificial, molecular nanotubes. Since the molecules in such structures are tightly packed, their interaction energy can be strong, of the order of several times $ k_B T \approx 200\mbox{ cm}^{-1}$ with $T=300$~K. Such a strong interaction is thought to be able to support excitonic delocalization even at room temperature. Nevertheless, here we show that the symmetric arrangement of the molecules is able to induce excitonic delocalization at room temperature well beyond what one could expect from the magnitude of the nearest-neighbour coupling between the molecules. Moreover, by comparing natural structures with a few mathematical models of self-aggregated molecular nanotubes, we show that the degree of macroscopic coherence cannot be explained even by the long-range nature of the coupling between the molecules. We connect such enhanced delocalization to the super-transfer coupling present inside such structures, which induces the emergence of a gapped superradiant state in the low-energy region of the spectrum. Thus our main result is that macroscopic coherence in natural molecular nanotubes is an emergent property produced by specific cooperative effects which cannot be reduced either to the range of the interaction or to the magnitude of the coupling between the molecules.
Specifically, in this paper we investigate the \textit{Chlorobium Tepidum} Antenna complexes of Green Sulfur bacteria. Green Sulfur bacteria are photosynthetic organisms which live in deep water where the sunlight flux is very low~\cite{Huh}, and they are among the most efficient photosynthetic systems~\cite{arvi1,arvi2,arvi3}. Similarly to other antenna complexes present in nature~\cite{nir1,nir2,nir3,nir4}, they present a high degree of symmetry, being arranged in nontrivial cylindrical structures with an ordered orientation of the molecule dipoles. We analyze both the wild type (WT) and the triple mutant type (MT), which have been recently investigated in~\cite{Ganapathy,Koh}. Understanding the connection between functionality and structure in such complexes will enhance our comprehension of natural photosynthesis and it could also inspire efficient bio-mimetic devices for energy transport and light-harvesting. In Sections~\ref{sec:mod} and~\ref{sec:ham} we present the cylindrical models studied. In Section~\ref{sub-41} the existence of a delocalized superradiant state close to the ground state for the natural models is shown. In Sections~\ref{sub-42} and~\ref{sub-43} the thermal coherence length is introduced and analyzed. Natural complexes are shown to support the largest thermal coherence length with respect to the other models considered. The evidence produced in these Sections allows us to conclude that the large thermal coherence length of natural aggregates cannot be explained by the magnitude of the coupling or by the range of the interaction between the molecules. In Section~\ref{sec:rel} we explain that the origin of such macroscopic coherent states found in natural complexes lies in their specific geometry, which induces a super-transfer coupling inside the complexes. Such a super-transfer coupling strongly affects the lowest part of the spectrum, thus enhancing the thermal coherence length.
In Section~\ref{sec:conc}, we analyze structures which are more complex than single cylindrical surfaces. Specifically, we consider tubular structures made of four concentric cylindrical surfaces, as they appear in natural antenna complexes of Green Sulfur bacteria~\cite{Arellano,Ganapathy,Koh,Chew}. We show that these structures display an enhanced delocalization of the excitation with respect to single cylindrical surfaces. Finally, in Section~\ref{conclu} we give our conclusions and perspectives. \section{The models} \label{sec:mod} The natural Antenna complexes present in Green Sulphur bacteria have lengths of $1000$ - $2000$ \AA, widths of $100$ - $600$ \AA \, and they can contain a number of molecules between $50,000$ and $250,000$, typically arranged into concentric cylindrical surfaces \cite{Huh,Linnanto}. It is important to remark that, depending on the environment and on the growing conditions \cite{Hohmann}, some samples could show an alternation between tubular aggregates and non-tubular curved lamellae \cite{Ikonen,Oostergetel}. Nevertheless, in spite of the heterogeneity of the structures experimentally observed, here we will consider only cylindrical surfaces with a radius of $6$ nm and length up to $L=250$~nm, composed of $1500$ molecules. Specifically, we analyse five different cylindrical models with fixed radius ($R=60 \mbox{ \AA}$) and total number of chromophores $N$. These models differ in the geometrical arrangement of the chromophores along the cylindrical surface. In detail, they are: \begin{itemize} \item \textit{Chlorobium Tepidum} bchQRU triple mutant (MT), \item \textit{Chlorobium Tepidum} wild type (WT), \item parallel dipoles cylinder (PD), \item tangent dipoles cylinder (TD), \item random dipoles cylinder (RD).
\end{itemize} While the first two are representative of natural systems, the others are mathematical models with suitable symmetric arrangements of chromophores (TD and PD), while the last one (RD) is characterized by a random orientation of the dipole moments. The molecule positions and dipole orientations for the natural models have been taken from the literature~\cite{Ganapathy,Koh,Chew} and they correspond to the values capable of reproducing experimental results. % \begin{figure}[t] \centering \includegraphics[scale=0.16]{pdf/mt4.pdf} \includegraphics[scale=0.16]{pdf/wt4.pdf} \includegraphics[scale=0.16]{pdf/pd4.pdf} \includegraphics[scale=0.16]{pdf/td4.pdf} \includegraphics[scale=0.16]{pdf/rd4.pdf} \caption{Sections of the different models. In all panels we show cylinders with the same radius $R=60 \mbox{ \AA}$. For the sake of clarity we show only $30$ dipoles per ring instead of the $60$ considered in this paper. Moreover, the distances along the $z-$axis are enhanced by a factor of 5 with respect to the distances on the $x-y$ axes. The same factor, and also a reduction of the number of dipoles, has been used for the WT model. In all models but the WT, where the dipoles are arranged in a helical structure, the dipoles are arranged into $N_1=5$ rings. } \label{fig:cyl-all} \end{figure} A schematic view of the arrangement of the dipoles on the cylindrical surfaces for all models is shown in Figure~\ref{fig:cyl-all}, while all other technical details can be found in \ref{app-a}. Notice that all the models but the WT share the same basic structure: the cylinder is made of a collection of $N_1$ rings composed of $N_2=60$ molecules equally spaced on each ring.
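For concreteness, the shared ring geometry can be generated in a few lines. In the following Python sketch (illustrative only; the ring spacing along $z$ is a placeholder value, the actual lattice constants being given in \ref{app-a}) the molecules are placed on $N_1=5$ rings of $N_2=60$ equally spaced sites at radius $R=60$~\AA, so that the nearest-neighbour distance within a ring is the chord $2R\sin(\pi/N_2) \approx 6.3$~\AA:

```python
import math

R = 60.0       # cylinder radius in Angstrom
N1, N2 = 5, 60
DZ = 8.3       # ring spacing in Angstrom -- placeholder, see Appendix A

def ring_positions(n1=N1, n2=N2, radius=R, spacing=DZ):
    """Molecule positions: n1 rings of n2 equally spaced sites on a cylinder."""
    pos = []
    for k in range(n1):
        for j in range(n2):
            phi = 2.0 * math.pi * j / n2
            pos.append((radius * math.cos(phi),
                        radius * math.sin(phi),
                        k * spacing))
    return pos

sites = ring_positions()
# nearest-neighbour distance within a ring: the chord 2 R sin(pi / N2)
chord = 2.0 * R * math.sin(math.pi / N2)
```

The PD, TD, MT and RD models then differ only in the unit dipole vector attached to each of these sites.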
The difference between them lies in the dipole orientation only: \begin{itemize} \item PD model: all dipoles are oriented parallel to the $z$ axis \item TD model: all dipoles are perpendicular to the $z$ direction and tangential to the cylindrical surface \item MT model: here the dipoles have a fixed $z$ component, but also a component perpendicular to the $z$ direction, see~\ref{app-a} for details. Note that the component perpendicular to the $z$ direction points inward and outward alternately with respect to the plane tangent to the cylindrical surface, with a small angle $\alpha$ (see black and red arrows in Figure~\ref{fig:cyl-all}(A)). \item RD model: the position of the dipoles is the same as in the other three models, but the orientation of the dipoles is fully random on the unit sphere. \end{itemize} On the other hand, the WT model, see Figure~\ref{fig:cyl-all}(B), is not composed of separate rings but is instead arranged in a complicated helical structure, see~\ref{app-a} for details. \section{The Hamiltonian and the dipole approximation} \label{sec:ham} Each molecule is represented as a two-level system with an excitation energy $e_0$ and a transition dipole moment $\vec{\mu}$. The parameters of the aggregates considered here have been taken from the literature~\cite{Holz,Steens} to be the ones characterizing the Antenna Complexes in Green Sulfur bacteria. Specifically, we set for the excitation energy of all the molecules $e_0 = 15390 \mbox{ cm}^{-1}$~\cite{Steens}, corresponding to $\lambda_0 \approx 650$ nm, so that \begin{itemize} \item[] $k_0=2 \pi e_0 \times 10^{-8}= 9.670 \times 10^{-4} \mbox{ \AA}^{-1}$. \item[] $\mu=\sqrt{30} \mbox{ D}$~\cite{Holz} so that $|\mu|^2= 151024 \mbox{ \AA}^3 \mbox{ cm}^{-1}$ (for the conversion, see~\cite{dipsquare}). \item[] $\gamma= 4 |\mu|^2 k_0^3/3= 1.821 \times 10^{-4} \mbox{ cm}^{-1}$, corresponding to the radiative lifetime $\tau_\gamma=29.15\text{ ns}$ (for the conversion, see~\cite{enertime}).
\end{itemize} Choosing the basis states in the single excitation manifold, where the state $|i\rangle$ refers to a state in which the $i^{th}$ molecule is excited while all the others are in the ground state, the nanotubes can be described through a Non-Hermitian Hamiltonian which takes into account the interaction between the molecules mediated by the electromagnetic field (EMF). The effective Non-Hermitian Hamiltonian (also called the radiative Hamiltonian) is commonly used to model the interaction with the EMF in different systems, such as natural light-harvesting complexes~\cite{mukamelspano,mukameldeph} and cold atomic clouds~\cite{kaiser}, and it reads: \begin{equation} \label{eq:ham} H=\sum_{i=1}^N e_0|i\rangle \langle i|+\sum_{i\neq j}\Delta_{ij}|i\rangle \langle j|-\frac{i}{2}\sum_{i,j=1}^{N}Q_{ij}|i\rangle \langle j|. \end{equation} The terms $\Delta_{ij}$ and $Q_{ij}$ derive from the interaction with the EMF. The real and imaginary diagonal parts of the intermolecular coupling are given, respectively, by \begin{equation} \Delta_{nn} = 0 \, , \\ Q_{nn} = \frac{4}{3} \mu^2 k_0^3 = \gamma \, , \label{eq:gamma} \end{equation} with $\mu=|\vec{\mu}|$ being the transition dipole, while the off-diagonal ones ($n \ne m$) by $$ \Delta_{nm} = \frac{3\gamma}{4} \left[ \left( -\frac{\cos (k_0 r_{nm})}{(k_0 r_{nm})} + \frac{\sin (k_0 r_{nm})}{(k_0 r_{nm})^2} + \frac{\cos (k_0 r_{nm})}{(k_0 r_{nm})^3} \right) \hat{\mu}_n \cdot \hat{\mu}_m +\right. \nonumber \\ $$ \begin{equation} -\left. \left( -\frac{\cos (k_0 r_{nm})}{(k_0 r_{nm})} + 3\frac{\sin (k_0 r_{nm})}{(k_0 r_{nm})^2} + 3\frac{\cos (k_0 r_{nm})}{(k_0 r_{nm})^3}\right) \left( \hat{\mu}_n \cdot \hat{r}_{nm} \right) \left( \hat{\mu}_m \cdot \hat{r}_{nm} \right) \right],\\ \label{eq:d1} \end{equation} $$ Q_{nm} = \frac{3\gamma}{2} \left[ \left( \frac{\sin (k_0 r_{nm})}{(k_0 r_{nm})} + \frac{\cos (k_0 r_{nm})}{(k_0 r_{nm})^2} - \frac{\sin (k_0 r_{nm})}{(k_0 r_{nm})^3} \right) \hat{\mu}_n \cdot \hat{\mu}_m +\right.
\nonumber \\ $$ \begin{equation} -\left. \left( \frac{\sin (k_0 r_{nm})}{(k_0 r_{nm})} + 3\frac{\cos (k_0 r_{nm})}{(k_0 r_{nm})^2} - 3\frac{\sin (k_0 r_{nm})}{(k_0 r_{nm})^3}\right) \left( \hat{\mu}_n \cdot \hat{r}_{nm} \right) \left( \hat{\mu}_m \cdot \hat{r}_{nm} \right) \right], \label{eq:g1} \end{equation} where $\hat{\mu}_n := \vec{\mu}_n / \mu$ is the unit dipole moment of the $n$-th site and $\hat{r}_{nm} := \vec{r}_{nm} / r_{nm}$ is the unit vector joining the $n$-th and the $m$-th sites. Diagonalizing the Hamiltonian (\ref{eq:ham}), we obtain the complex eigenvalues $ \varepsilon_{n}=\mbox{E}_n-\mbox{i}\frac{\Gamma_{n}}{2}$, where $\Gamma_{n}$ is the radiative decay rate of the $n^{th}$ eigenstate. In general it differs from the radiative decay rate $\gamma$ of the single molecule. In particular, when the ratio $\Gamma_{n}/\gamma \gg 1$ we speak of a ``superradiant state'' (SRS), while when $\Gamma_n/\gamma \ll 1$ the state is called ``subradiant''. In other words, a SRS can radiate much faster than a single molecule, while a subradiant one radiates at a rate much slower than the single-molecule radiative decay. \\ Within the range of parameters considered here, the imaginary part $Q_{ij}$ can be considered a small perturbation of the real part of the Hamiltonian (\ref{eq:ham}); moreover, the system size is small compared to the wavelength associated with the optical transition of the molecules (the maximum size considered here is $L/\lambda_0 \approx 0.4$). In such a case, the optical absorption of an eigenstate of the aggregate can be estimated in terms of its dipole strength, computed only from the real part of the Hamiltonian (\ref{eq:ham}). Denoting the $n^{th}$ eigenstate of the real part of the Hamiltonian (\ref{eq:ham}) with $|E_n\rangle$, we can expand it on the site basis, so that \begin{equation} \label{eq:expan} |E_{n}\rangle=\sum_{i=1}^{N} C_{ni} \, |i\rangle.
\end{equation} Note that the site basis refers to the molecules and is composed of the states $|i\rangle$, each of them carrying a dipole moment $\vec{\mu}_i$. If $N$ is the total number of molecules, then we can express the transition dipole moment $\vec{D}_n$ associated with the $n^{th}$ eigenstate as follows: \begin{equation} \label{eq:dipst} \vec{D}_n=\sum_{i=1}^{N} C_{ni} \, \hat{\mu}_i. \end{equation} The dipole strength of the $n^{th}$ eigenstate is defined by $|\vec{D}_n|^{2}$ (note that due to normalization $\sum_{n=1}^{N} |\vec{D}_n|^{2}=N$). Under the approximation that the imaginary part of the Hamiltonian (\ref{eq:ham}) can be treated as a perturbation and that $L/\lambda_0 \ll 1$, we have $|\vec{D}_n|^2 \approx \Gamma_n/\gamma$, which is valid for states with a large radiative decay rate (see \ref{app-b} for a comparison between dipole strengths and radiative decay widths for all models). Thus, in the following we will consider only the real part of the Hamiltonian (\ref{eq:ham}): \begin{equation} \label{eq:hreal} H_{r}=\sum_{i=1}^N e_0|i\rangle \langle i|+\sum_{i\neq j}\Delta_{ij}|i\rangle \langle j|, \end{equation} where $\Delta_{ij}$ is given in equation~(\ref{eq:d1}). Finally, we note that for small systems, when $k_{0}r_{ij}\ll 1$, the matrix elements of the Hamiltonian (\ref{eq:ham}) become \begin{equation} \begin{array}{lll} \label{real} Q_{ij}&\simeq\displaystyle \gamma \, \hat{\mu}_i \cdot \hat{\mu}_j,\\ &\\ \Delta_{ij} &\simeq\displaystyle \frac{\vec{\mu}_{i} \cdot \vec{\mu}_{j}-3(\vec{\mu}_{i} \cdot \hat{r}_{ij})(\vec{\mu}_{j} \cdot \hat{r}_{ij})}{r_{ij}^{3}}.\\ \end{array} \end{equation} In this limit, the real term $\Delta_{ij}$ represents the dipole-dipole interaction energy, while $\gamma=\frac{4}{3}\mu^{2}k_{0}^{3}$ is the single-molecule radiative decay of equation~(\ref{eq:gamma}). Nevertheless, when the dimension of the aggregate becomes comparable with the wavelength $\lambda_0$, the dipole-dipole approximation fails.
For the maximal sizes considered here ($L/\lambda_0 \approx 0.4$) the dipole approximation can be considered good, even if there are already non-negligible deviations in some quantities between the dipole-dipole interaction of equation~(\ref{real}) and the Hamiltonian in equation~(\ref{eq:hreal}), see~\ref{app-bc}. For this reason, in the following we will use the expression given in equation~(\ref{eq:hreal}). \begin{figure}[t] \includegraphics[scale=0.62]{pdf/sr1.pdf} \caption{(A)-(D) Squared dipole strength $|D_n|^2$ as a function of the energy $E_n-e_0$. Superradiance arises in all cylindrical models since they are characterized by a high degree of geometrical symmetry. However, in the engineered structures made up of parallel and tangent dipoles (panels (C,D)) the SRS neither coincides with the ground state nor lies close to it. On the other hand, in the MT model (A) the ground state is superradiant, while in the WT model (B) the SRS, even if it does not coincide with the ground state, lies very close to it. The insets in panels (A,B,C,D) show a magnification of the energy spectrum close to the SRS; the arrows indicate the position of the ground state. (E) Average squared dipole strength $\langle |D_n|^2 \rangle$ as a function of the eigenstate index. The average has been computed over 10 disorder realizations. (F) $|D_{max}|^2$ as a function of the cylindrical length $L$. A linear dependence, as given by the dashed line $|D_{max}|^2 \propto L$, emerges clearly for all structures except the RD model (brown). Different colours stand for different models: MT (red), WT (orange), PD (green), TD (blue) and RD (brown). In panels (A-E) we considered cylindrical structures made of $6000$ dipoles. In panel (F) we considered cylindrical structures with a number of dipoles varying from $60$ to $6000$.
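As an illustrative sanity check of the dipole-strength definition in equation~(\ref{eq:dipst}), the following Python sketch diagonalizes the small-system dipole-dipole Hamiltonian of equation~(\ref{real}) for a hypothetical toy ring of tangent, in-plane unit dipoles (a stand-in geometry, not the actual MT/WT lattice of the paper; arbitrary units with $\mu = 1$) and verifies the sum rule $\sum_{n} |\vec{D}_n|^{2}=N$:

```python
import numpy as np

def ring_hamiltonian(N, R):
    """Toy ring of N unit dipoles tangent to a circle of radius R,
    coupled by the small-system dipole-dipole interaction (eq. real)."""
    phi = 2 * np.pi * np.arange(N) / N
    pos = R * np.column_stack([np.cos(phi), np.sin(phi), np.zeros(N)])
    mu = np.column_stack([-np.sin(phi), np.cos(phi), np.zeros(N)])  # unit dipoles
    H = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            r = pos[j] - pos[i]
            d = np.linalg.norm(r)
            rh = r / d
            H[i, j] = H[j, i] = (mu[i] @ mu[j]
                                 - 3 * (mu[i] @ rh) * (mu[j] @ rh)) / d**3
    return H, mu

N = 12
H, mu = ring_hamiltonian(N, R=1.0)
E, C = np.linalg.eigh(H)      # columns C[:, n] are the eigenstates |E_n>
D = C.T @ mu                  # transition dipoles D_n, eq. (dipst)
d2 = (D**2).sum(axis=1)       # dipole strengths |D_n|^2
print(d2.sum())               # sum rule: equals N
```

The sum rule holds for any geometry, since it follows from the orthonormality of the eigenvectors; the concentration of $|D_n|^2$ in a few superradiant states, by contrast, depends on the symmetry of the arrangement.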
} \label{fig:secpar} \end{figure} \section{Single Cylindrical structures: Results} \label{sec:single} In this Section we first analyze the collective dipole strengths of the eigenstates of the different models, showing the emergence of a superradiant state close to the ground state in natural complexes, see subsection~\ref{sub-41}. The coherence length is defined in subsection~\ref{sub-42}, where a new model with only nearest-neighbour couplings is also introduced. Finally, in subsection~\ref{sub-43} the results of our analysis of the thermal coherence length for the different models are presented. \subsection{Collective Dipole Strength} \label{sub-41} As a first goal, let us analyze the dipole strengths associated with the eigenstates of the Hamiltonian models described in the previous Section. For the five models introduced previously we diagonalized the Hamiltonian in equation~(\ref{eq:hreal}) and analyzed in detail the dipole strengths $|D_n|^2$ of all eigenstates. In Figure~\ref{fig:secpar}(A-E) we plot $|D_n|^2$ as a function of the energy $E_n-e_0$ of the corresponding eigenstate. All models but the random one are characterized by the presence of SRSs at different positions of the energy spectrum. For instance, in the MT model the state having the largest dipole strength is the ground state, while in the WT model it lies very close to it. Note that the position of the superradiant state is below the excitation energy of a single molecule. Since the dipole strength of the eigenstates determines the absorption spectrum~\cite{K31,K32}, a superradiant ground state implies a red-shifted absorption spectrum, which is a typical behaviour of molecular J-aggregates~\cite{K31,K32,eisele,K2}. On the other hand, for both the TD and PD models the SRSs are in the middle of the energy spectrum (panels (C,D)). Contrary to this general trend, the random model (RD), which lacks any ordering, does not display SRSs.
Indeed, it is well known that in the small volume limit $L/\lambda_0 \ll 1$ symmetry is necessary to preserve super- and sub-radiance~\cite{haroche}. These results clearly indicate that natural structures tend to push the SRS towards the lowest energy region. Moreover, as the comparison with the other symmetric structures shows, this is not a trivial consequence of the symmetric arrangement: other symmetric arrangements, such as the TD and PD, are still characterized by SRSs, but ``living'' in an energy region far from the ground state. SRSs are typically characterized by a collective dipole strength which grows with the length of the cylindrical structure. This is clearly shown in Figure~\ref{fig:secpar}(F), where the maximal dipole strength $|D_{\rm max}|^2$ is shown as a function of the length $L$ of the cylinder. As one can see, the maximal dipole strength grows $\propto L$ for all models but the random one, for which it is independent of $L$. \\ \subsection{Delocalized excitonic states at room temperature} \label{sub-42} Given a quantum state specified by the density matrix $\hat{\rho}$, it is possible to define its coherence length in the single excitation manifold spanned by the basis states $\ket{i}$~\cite{Schulten,Kosztin}: \begin{equation} \label{eq:lrho} L_{\rho}=\frac{1}{N}\frac{\left(\sum_{ij}|\rho_{ij}|\right)^2}{\sum_{ij}|\rho_{ij}|^2}. \end{equation} The expression for $L_{\rho}$ in equation~(\ref{eq:lrho}) measures how much a single excitation is spread coherently over the molecules composing the aggregate. To give an idea of its physical meaning, let us consider three simple cases: \begin{itemize} \item A pure localized state, $\hat{\rho}=|i \rangle\langle i|$; in this case it is easy to see that the coherence length defined in equation~(\ref{eq:lrho}) is given by $L_{\rho}=1/N$. This is the minimal value that $L_\rho$ can attain.
\item A completely delocalized mixed state, characterized by the density matrix $\hat{\rho}=(1/N) \sum_{i=1}^{N} |i\rangle\langle i|$. In this case we have $L_{\rho}=1$. This state is maximally delocalized in the basis, but it is completely incoherent. \item Lastly, we consider the fully delocalized coherent state $\hat{\rho}=(1/N) \sum_{i,j=1}^{N} |i\rangle\langle j|$. In this case we have $L_{\rho}=N$. Note that any pure state with constant amplitude $1/\sqrt{N}$ over the sites and arbitrary phases would give the same result. \end{itemize} Generally speaking, we have $1/N \leq L_{\rho} \leq N$. The closer $L_{\rho}$ is to $N$, the higher the coherent delocalization that can be assigned to our state. In the same way, $L_\rho < 1$ indicates an incoherent localized state. States characterized by $L_\rho \sim 1$ are somewhat ambiguous (since both localization and coherence are measured on the same length scale). In what follows we will consider the previous models of cylindrical structures and compare them with an additional model in which the positions of the molecules are the same as in the MT model, but their interaction is restricted to nearest neighbours. In this way we will be able to address the relevance of the range of the interaction to the thermal coherence length. For this purpose, let us consider a variant of the MT model in which the Hamiltonian matrix elements are defined as follows: \begin{equation} \label{ham-nn} \left(H_{NN}\right)_{ij}=\left\{ \begin{array}{lr} e_0\,\delta_{ij}+\Delta_{ij}\left(1-\delta_{ij}\right) & \mbox{ if } \quad r_{ij}\leq \bar{d}, \\ \\ 0 & \mbox{ if } \quad r_{ij}>\bar{d}, \end{array} \right. \end{equation} where we have introduced the cut-off distance $\bar{d}=9 \mbox{ \AA}$ and $\Delta_{ij}$ is given in equation~(\ref{eq:d1}). In other words, each lattice point interacts only with its four nearest neighbours.
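The three limiting cases above can be checked directly from equation~(\ref{eq:lrho}); a minimal Python sketch (with a hypothetical aggregate of $N=8$ sites, for illustration only):

```python
import numpy as np

def coherence_length(rho):
    """Coherence length L_rho of eq. (lrho)."""
    N = rho.shape[0]
    a = np.abs(rho)
    return (a.sum() ** 2) / (N * (a ** 2).sum())

N = 8
# pure localized state |i><i|
loc = np.zeros((N, N)); loc[0, 0] = 1.0
# fully delocalized but incoherent mixture (1/N) sum_i |i><i|
mix = np.eye(N) / N
# fully delocalized coherent pure state (1/N) sum_ij |i><j|
coh = np.full((N, N), 1.0 / N)

print(coherence_length(loc))   # 1/N
print(coherence_length(mix))   # 1
print(coherence_length(coh))   # N
```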
For all the models above we have computed the thermal coherence length at room temperature ($T=300$ K), defined for a state at canonical equilibrium, whose matrix elements are given by: \begin{equation} \label{eq:expand} \rho_{ij}=\sum_{n} \frac{e^{-\beta E_n}}{\mbox{Tr}({e^{-\beta \hat{H}}})} \langle i|E_n\rangle \langle E_n|j\rangle, \end{equation} where $\beta=1/k_B T$. A very important question to be answered is to what extent the symmetric arrangements that give rise to SRSs are also able to produce a large thermal coherence length at room temperature. In this regard, we calculate the coherence length $L_{\rho}$ according to equation~(\ref{eq:lrho}), using the thermal density matrix of equation~(\ref{eq:expand}), as a function of the cylindrical length $L$ for each of the cylindrical models studied so far, including the NN model described by equation~(\ref{ham-nn}).\\ As a final remark for this Section, let us note that at zero temperature $L_\rho$ depends only on how much the ground state is delocalized, while at infinite temperature we have the fully mixed state $\hat{\rho}=(1/N) \sum_{i=1}^{N} |i\rangle\langle i|$, so that $L_{\rho}=1$, as explained above, even if all eigenstates are fully delocalized. At finite temperature, instead, the thermal coherence length is determined by how much the energy eigenstates are delocalized on the site basis, and also by how many eigenstates have an energy within approximately $k_B T$ above the ground state (i.e. by the density of states within an energy $k_B T$ from the ground state). For this reason, it is important to study the delocalization properties of the eigenstates of the nanostructures considered here. This analysis is presented in~\ref{app-pr}, where we show that all models but the RD one have fully delocalized eigenstates with a very similar degree of delocalization.
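The two temperature limits discussed above can be illustrated with a minimal numerical sketch of equations~(\ref{eq:lrho}) and~(\ref{eq:expand}), using a hypothetical nearest-neighbour chain as a stand-in aggregate (illustrative only; coupling and temperature are in arbitrary units with $k_B=1$): at low temperature $L_\rho$ reduces to the ground-state delocalization, while at high temperature $L_\rho \to 1$.

```python
import numpy as np

def thermal_rho(H, T):
    """Canonical density matrix in the site basis, eq. (expand); k_B = 1."""
    E, C = np.linalg.eigh(H)
    w = np.exp(-(E - E.min()) / T)   # shift by E.min() for numerical stability
    w /= w.sum()
    return (C * w) @ C.T             # sum_n w_n |E_n><E_n|

def coherence_length(rho):
    """Coherence length L_rho of eq. (lrho)."""
    N = rho.shape[0]
    a = np.abs(rho)
    return (a.sum() ** 2) / (N * (a ** 2).sum())

# hypothetical nearest-neighbour chain as a toy aggregate
N, J = 20, -1.0
H = J * (np.eye(N, k=1) + np.eye(N, k=-1))

L_cold = coherence_length(thermal_rho(H, T=1e-3))  # ~ ground-state delocalization
L_hot = coherence_length(thermal_rho(H, T=1e3))    # -> 1 (fully mixed state)
print(L_cold, L_hot)
```

In the cold limit only the (delocalized) ground state is populated, so $L_\rho \gg 1$; in the hot limit all eigenstates are equally populated and the coherences cancel, giving $L_\rho \simeq 1$ even though every eigenstate is delocalized.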
\subsection{Thermal Coherence Length} \label{sub-43} It is usually thought that natural photosynthetic structures can support delocalised states even at room temperature because the nearest-neighbour (NN) coupling between the molecules is larger than the room-temperature energy $k_B T \approx 200$ cm$^{-1}$. In Table~\ref{tab-02} we show the nearest-neighbour couplings for the different models considered here. As one can see, these couplings are larger than $k_B T$, and the maximal value between $\Omega_{1}$ and $\Omega_{2}$ is of the same order among the different models. \begin{table}[t] \centering \begin{tabular}{|p{5cm}||c|c|} \hline & $ \Omega_1 $ [cm$^{-1}$] & $\Omega_2$ [cm$^{-1}$] \\ \hline MT & 618 & 248 \\ \hline WT & 115 & 629 \\ \hline PD & 610 & 528 \\ \hline TD & 1218 & 264 \\ \hline \end{tabular} \caption{Nearest-neighbour (NN) couplings for the different models. $\Omega_1$: azimuthal coupling for NN sites in the same ring (or between two adjacent chains for the WT). $\Omega_2$: vertical coupling for NN sites between rings (or in the same chain for the WT).} \label{tab-02} \end{table} Let us now consider the thermal coherence length at room temperature of the structures analyzed here. Figure~\ref{fig:lrho}(A) shows the dependence of $L_{\rho}$ on the cylinder length $L$ (with a corresponding number of dipoles $N$ ranging from $120$ to $9600$). \\ In all models but the RD, the coherence length $L_{\rho}$ increases quite markedly for small $L$ until it reaches a plateau for larger $L$ values. Apart from the RD structure, which exhibits a coherence length $L_{\rho}\approx 1$, the other structures are characterized by $1 \ll L_{\rho} \le N$. This means that the thermal state at room temperature of these structures has a high degree of excitonic delocalization. Moreover, it emerges clearly that the natural complexes (MT and WT) show the highest values of the thermal coherence length when compared with the other, engineered structures.
It is interesting to note that the MT complex supports a coherent delocalisation of the excitation over hundreds of molecules even at room temperature, one order of magnitude larger than the delocalisation supported by the NN model, despite the fact that in the NN model the molecules have the same positions and the same nearest-neighbour couplings as in the MT model. This shows that the ability of such structures to support largely delocalised excitations even at room temperature goes beyond the strength of the NN coupling between their molecules. From Figure~\ref{fig:lrho}(A) we can also deduce that the large coherence length of the natural systems cannot be explained by the presence of long-range interactions. Indeed, long-range interactions are present also in the PD and TD models, but their thermal coherence length is one order of magnitude smaller. By comparing the different cylindrical structures, one may also observe that the further the SRS is from the ground state, the lower is $L_{\rho}$. One could argue that natural structures concentrate the most radiative states (the states with the largest dipole strength) close to the ground state in order to maximize their thermal coherence length. We will discuss the relationship between the presence of the SRS close to the ground state and a large coherence length in the next Section. The presence of a large thermal coherence length can be related to the structural properties of the energy spectrum. To this end we consider the mean energy density $\delta(k_B T)$, defined as the number of states contained in a unit of thermal energy $k_B T$, i.e. \begin{equation} \label{deltae-eq} \delta (k_B T) = \frac{1}{k_B T} \int_{E_1}^{E_1 + k_B T} \, N(E) \, dE, \end{equation} where $E_1$ is the ground-state energy and $N(E)$ is the density of states (number of states per unit energy). In particular, we would like to study the dependence of the average density of states, equation~(\ref{deltae-eq}), on the cylindrical length $L$.
Results are shown in Figure~\ref{fig:lrho}(B) and clearly indicate not only that, in general, the average density increases proportionally to $L$, but, more importantly, that natural structures are characterized by the smallest average densities (approximately one order of magnitude less than the other structures). Such a low density of states in the lower part of the spectrum induces an enhanced thermal coherence length, compare Figures~\ref{fig:lrho}(A) and (B). Indeed, if all the eigenstates have approximately the same degree of delocalization, as measured for instance by their PR, then for a smaller number of states within an energy $k_BT$ from the ground state the thermal coherence length is larger, as explained above. In order to explain the origin of the low density of states, let us observe that: ($i$) it cannot be due to the intensity of the NN coupling; indeed the NN model, which has the same NN coupling as the MT model, has a much higher density of states and a smaller thermal coherence length; ($ii$) it cannot be due to the range of the interaction, since the TD and PD models are characterized by the same interaction range, yet they display a higher density of states and, as a consequence, a smaller thermal coherence length. Below we propose an explanation of the connection between the presence of a SRS close to the ground state and a low density of states, implying a large thermal coherence length. \begin{figure}[t] \begin{center} \includegraphics[scale=0.65]{pdf/lrho-fb.pdf} \caption{(A) Coherence length $L_{\rho}$ as a function of the cylindrical length in the 6 cylindrical models at $T=300$ K. The total number of chromophores $N$ varies from $120$ to $9600$. (B) $\delta (k_B T)$, as given by equation~(\ref{deltae-eq}), as a function of the cylindrical length $L$ at a fixed temperature $T=300$ K. All models have a total number of dipoles ranging from $120$ to $9600$.
Note that since energy is measured in cm$^{-1}$, the mean energy density in the thermal energy width $k_B T$ is measured in cm, see equation~(\ref{deltae-eq}). } \label{fig:lrho} \end{center} \end{figure} \section{Relationship between structure and macroscopic coherence} \label{sec:rel} In this Section we propose an explanation of why such a low density of states is connected to the presence of a SRS close to the ground state of the system. As we will show below, the low energy part of the spectrum of both the MT and WT models arises from a super-transfer coupling between states with a large (giant) dipole belonging to sub-units of the whole cylinder. In the case of the MT we will show that the super-transfer coupling arises between giant-dipole eigenstates of the single rings, while in the case of the WT the super-transfer arises between eigenstates belonging to different sub-units of the whole cylinder. The presence of super-transfer induces a large coupling energy which decreases the density of states. As a clear signature of this, we show below that super-transfer is also able to induce the emergence of an energy gap between the ground state and the first excited state. Specifically, in subsection~\ref{sub-51} we analyze cylinders made of a sequence of rings and we show that the symmetry present in the system implies that each eigenstate of a ring couples only to a corresponding eigenstate of the other rings. We also show that the dipole strength of the eigenstates of each ring is concentrated in a few superradiant states. In subsection~\ref{sub-52} we show that the coupling between the superradiant states of each ring displays a super-transfer effect, while the coupling between the subradiant states is characterized by a sub-transfer effect. Finally, in subsection~\ref{sub-53} we show how in natural structures the super-transfer coupling produces a depressed density of states close to the ground state, thus enhancing the thermal coherence length.
\subsection{Structure of ring eigenstates coupling} \label{sub-51} In order to analyze the super-transfer effect, let us consider the properties of the eigenstates of the single rings composing three different nanotubes: MT, TD and PD. All the above-mentioned models are composed of a sequence of rings, each containing 60 molecules, as explained in Section~\ref{sec:mod}. The case of the WT model will be discussed later, since its structure is more complicated. In Figure~\ref{fig:sr} the dipole strengths of a few eigenstates (ordered from low to high energy) of a single ring, containing 60 dipoles, are shown for the different structures. Note that the sum of all the dipole strengths must be equal to the number of dipoles in the ring, $N_2=60$, as explained in the previous Sections. As one can see, in the MT case the whole dipole strength is concentrated in the lowest three eigenstates, each having a dipole strength approximately equal to $N_2/3$. Each giant dipole is oriented in a different spatial direction, with the ground state having its dipole along $z$, corresponding to the direction of the cylinder axis, and the other two states having dipoles perpendicular to it, in the ring plane, see the inset in Figure~\ref{fig:sr}(A). In the TD model, Figure~\ref{fig:sr}(B), the dipole strength is concentrated in the first and second excited states (which are degenerate, each having $|D_n|^2=N_2/2$), and their directions lie in the plane perpendicular to the cylinder axis. Finally, for the PD model, Figure~\ref{fig:sr}(C), the whole dipole strength is concentrated in the most excited state and is directed along the $z$ axis (cylinder axis). \begin{figure}[t] \begin{center} \includegraphics[scale=0.65]{pdf/singlering2.pdf} \caption{Dipole strengths of a few eigenstates (in the lowest or highest part of the energy spectrum) {\it vs.} the eigenstate index $n$, for a single ring composing three different nanotubular structures: MT, TD and PD.
Lateral panels indicate the spatial directions of the giant dipoles of the SRSs. Each ring of the three structures considered (A,B,C) is composed of $N_2=60$ dipoles.} \label{fig:sr} \end{center} \end{figure} A common feature of these structures is their invariance under a $2 \pi /N_2$ rotation around the cylinder axis. Strictly speaking, in the MT model such symmetry is slightly broken due to the presence of alternating $\alpha$ angles, see~\ref{app-a}. Nevertheless, since $\alpha$ is very small, the effect of this symmetry breaking is negligible. As a consequence, the Hamiltonian of each ring is a circulant matrix, i.e. each row can be obtained by a cyclic permutation of the previous one. Circulant matrices are diagonalized by the Fourier basis, so that the components of the eigenstates $|\varphi_q\rangle$ of each ring on the site basis $|j \rangle$ are given by \begin{equation} \langle j |\varphi_q \rangle = \frac{1}{\sqrt{N_2}} e^{i 2 \pi jq/N_2} \quad {\rm for} \quad q=1,...,N_2. \label{eq:ring} \end{equation} Due to the rotational invariance, the coupling matrix between two rings is also circulant. To make this point explicit, let us work out the specific example of two rings. The Hamiltonian reads: \begin{equation}\label{hblock} H_r= \left[ {\begin{array}{cc} D & V \\ V & D \\ \end{array} } \right] \end{equation} where $D$ refers to the Hamiltonian of a single ring (which is diagonal in the Fourier basis given in equation~(\ref{eq:ring})) and $V$ represents the interaction between the two rings. The total Hamiltonian matrix $H_r$ can be made block diagonal by the matrix \[ U_r= \left[ {\begin{array}{cc} U & 0 \\ 0 & U \\ \end{array} } \right] \] where the elements of $U$ are given by equation~(\ref{eq:ring}): $ U_{j,q} = \langle j |\varphi_q \rangle $. In other words, each ring eigenstate is coupled only to one corresponding eigenstate of any other ring.
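This block-diagonalization can be checked numerically. In the sketch below, two randomly generated symmetric circulant blocks stand in for $D$ and $V$ (an illustrative assumption, not the actual ring couplings of the paper); transforming the two-ring Hamiltonian of equation~(\ref{hblock}) with the Fourier basis of equation~(\ref{eq:ring}) leaves every block diagonal:

```python
import numpy as np

N2 = 6
jj, qq = np.meshgrid(np.arange(N2), np.arange(N2), indexing="ij")
U = np.exp(2j * np.pi * jj * qq / N2) / np.sqrt(N2)   # Fourier basis, eq. (ring)

def sym_circulant(c):
    """Symmetric circulant matrix: row k is a cyclic shift of row k-1."""
    c = (c + np.roll(c[::-1], 1)) / 2                 # enforce c_d = c_{-d}
    return np.array([np.roll(c, k) for k in range(len(c))])

rng = np.random.default_rng(1)
D = sym_circulant(rng.normal(size=N2))                # intra-ring block
V = sym_circulant(rng.normal(size=N2))                # inter-ring coupling
H_r = np.block([[D, V], [V, D]])                      # two rings, eq. (hblock)
Z = np.zeros((N2, N2))
U_r = np.block([[U, Z], [Z, U]])

Ht = U_r.conj().T @ H_r @ U_r                         # rotate to the Fourier basis
# every N2 x N2 block of Ht is diagonal: each ring eigenstate couples
# only to its counterpart in the other ring
for a in (0, 1):
    for b in (0, 1):
        blk = Ht[a*N2:(a+1)*N2, b*N2:(b+1)*N2]
        assert np.allclose(blk, np.diag(np.diag(blk)))
```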
This is clearly shown in Figure~\ref{fig:MT3}(B), where the matrix elements of the Hamiltonian of a small cylinder, composed of two rings of 6 sites each, are represented in the basis given by the tensor product of the Fourier bases of the two rings. As one can see, this results in a block structure where each block has only diagonal elements. \begin{figure}[t] \begin{center} \begin{tabular}{cc} {\bf \Large(A)} & {\bf \Large(B)} \\ \includegraphics[scale=0.39]{pdf/RRcoupling2.pdf} & \includegraphics[scale=0.39]{pdf/MT-6-2.pdf} \\ \end{tabular} \caption{(A) Graphical representation of the coupling between the sites of two rings, each formed by six molecules. Equal colours indicate equal couplings. The circulant coupling matrix $V$, see equation~(\ref{hblock}), generated by the symmetric coupling is represented below. (B) Modulus of the matrix elements of the Hamiltonian $H_r$~(\ref{hblock}) for the MT model, for the case sketched in (A), in the Fourier basis. Each ring eigenstate is mainly coupled to only one corresponding eigenstate in each of the other rings. } \label{fig:MT3} \end{center} \end{figure} As a consequence of the symmetric structure of the nanotubes considered above, all the eigenstates of the whole cylinder can be ``generated'' by the coupling between the eigenstates of the single rings, see also the discussion in \cite{K5}. Specifically, the SRS of the whole cylinder is generated by the coupling of the SRSs of the single rings. In order to prove this, we show in Figure~\ref{fig:fb}(A,B,C) the most superradiant state of each model projected onto the eigenstates of the single rings. In the figure we considered cylinders made of $N_1=160$ rings, with $N_2=60$ molecules per ring, for a total of $N=9600$ dipoles.
Let us analyze the single models individually: \begin{enumerate} \item For the MT model, one can see that the most SRS (having a dipole along the cylinder axis) has components only on the ground states of the single rings (indicated by arrows in the inset of Figure~\ref{fig:fb}(A)), which are also SRSs with a dipole strength along the $z$-axis, see Figure~\ref{fig:sr}(A). \item In the PD model, Figure~\ref{fig:fb}(B), the most SRS, $|E_{2814}\rangle$, projects only onto the most excited states of the single-ring spectra, which correspond to the only SRS of the PD ring, see Figure~\ref{fig:sr}(C). Note that $|E_{2814}\rangle$ indicates the $2813^{rd}$ excited state. \item In the TD model there are two degenerate most SRSs with different polarizations: one along the $x$ direction and one along the $y$ direction. In Figure~\ref{fig:fb}(C) we considered only the SRS with a polarization along the $y$ direction, which corresponds to the state $|E_{1083}\rangle$. Such a state has non-zero projections only onto the second excited states of the single rings, which have the same dipole direction as the SRS of the whole cylinder, see Figure~\ref{fig:sr}(B). Correspondingly, the other SRS, with a polarization along the $x$ direction, has non-zero projections only onto the single-ring SRSs with the same polarization. \end{enumerate} \begin{figure}[t] \begin{center} \includegraphics[scale=0.36]{pdf/srproj.pdf} \includegraphics[scale=0.45]{pdf/fact-basis-60.pdf} \caption{Left panels: projections of the most SRS of the whole cylinder $|E_{\rm SR}\rangle$ over the single-ring eigenstates $|\varphi_n\rangle$ as a function of the eigenstate index $n$. In each case we selected a total number of dipoles $N=9600$ (so that $n=1,...,9600$), which corresponds to $N_1=160$ rings with $N_2=60$ molecules in each ring. (A) MT model, (B) PD model, (C) TD model (in the insets the corresponding blow-up of the low energy part of the energy spectrum). Arrows refer to the SRSs of the single rings.
Right panels: energy spectra of the three different cylindrical structures: (D) MT model, (E) PD model, (F) TD model. Coloured symbols represent the exact numerical spectrum, white lines stand for the spectrum obtained from the approximate analytic eigenstates, see equation~(\ref{tenseig}). } \label{fig:fb} \end{center} \end{figure} These findings allow for a further approximation scheme for the eigenstates of the cylindrical structures considered above. Indeed, since each eigenstate of any single ring is coupled only to a corresponding eigenstate of the other rings, we can decompose the whole cylinder into independent chains, where each site of a chain corresponds to a single-ring eigenstate. For a chain having $N_s$ sites and nearest-neighbour interactions the eigenstates are independent of the coupling and given by: \begin{equation} \label{eq:nn} \langle k | \psi_r \rangle = \sqrt{\frac{2}{N_s+1}} \sin \left( \frac{\pi k r}{N_s+1} \right), \end{equation} where $k$ represents the site index and $r=1,\dots,N_s$. Clearly, when the interaction range is not nearest-neighbour, the above expression for the eigenstates is no longer exact. Nevertheless, for the natural structures considered in this paper the interaction is short-range, decaying as $1/r^3$ for the realistic cylinder lengths considered here, so that as a first approximation we can use the nearest-neighbour eigenstates. Note, however, that care should be taken in generalizing such an approximation, since the interaction between the rings is more complicated than a simple nearest-neighbour coupling. For instance, the coupling is also affected by the dipole strength of the ring eigenstates involved, as we will see below. Nevertheless, we can assume that each chain of ring eigenstates is diagonalized by the same eigenstates of a chain with nearest-neighbour coupling for the parameters and the realistic system sizes considered here.
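The sine states of equation~(\ref{eq:nn}) indeed form an orthonormal basis that exactly diagonalizes a nearest-neighbour chain, with eigenvalues $2\Omega\cos[\pi r/(N_s+1)]$; a short sketch (unit coupling $\Omega=1$ and a hypothetical chain of $N_s=10$ sites, chosen for illustration):

```python
import numpy as np

Ns = 10                                      # chain sites (one per ring eigenstate)
H = np.eye(Ns, k=1) + np.eye(Ns, k=-1)       # nearest-neighbour chain, unit coupling
k, r = np.meshgrid(np.arange(1, Ns + 1), np.arange(1, Ns + 1), indexing="ij")
psi = np.sqrt(2 / (Ns + 1)) * np.sin(np.pi * k * r / (Ns + 1))  # eq. (nn), columns

# the columns of psi diagonalize H with eigenvalues 2 cos(pi r / (Ns+1))
Ht = psi.T @ H @ psi
expected = np.diag(2 * np.cos(np.pi * np.arange(1, Ns + 1) / (Ns + 1)))
assert np.allclose(Ht, expected)
assert np.allclose(psi.T @ psi, np.eye(Ns))  # orthonormal basis
```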
Building the eigenstates as a tensor product between the Fourier basis for the ring~(\ref{eq:ring}) and the sine basis for the chain~(\ref{eq:nn}), \begin{equation}\label{tenseig} \braket{j,s}{\Psi_{q,r}} = \frac{1}{\sqrt{N_2}} e^{i 2 \pi jq/N_2} \sqrt{\frac{2}{N_1+1}} \sin \left( \frac{\pi s r}{N_1+1} \right) \end{equation} (with $j,q=1,\dots,N_2$ and $s,r=1,\dots,N_1$), we can diagonalize the Hamiltonian of the whole cylinder in this basis in order to obtain an approximation of the actual spectrum of the whole structure. The results are shown in Figure~\ref{fig:fb}(D,E,F), where the spectrum obtained from exact numerical diagonalization is compared with the spectrum obtained in the basis of equation~(\ref{tenseig}). As one can see, the proposed analytic basis gives an excellent approximation of the spectrum obtained by exact numerical diagonalization. \subsection{Super- and Sub-Transfer} \label{sub-52} In the previous subsection we have shown that each eigenstate of a single ring couples only to a corresponding eigenstate of the other rings (apart from a small symmetry-breaking factor present in the MT model). Here we will show that the coupling between the eigenstates with a large dipole strength is enhanced, with respect to the coupling between the single molecules within each ring, by a factor proportional to the number of molecules in each ring. Such an effect is known in the literature as super-transfer~\cite{srlloyd}. At the same time, we will show that the coupling between the eigenstates of the single rings with a small dipole strength is suppressed with respect to the coupling between the single molecules, giving rise to a collective sub-transfer effect, which has not been fully addressed in the literature. In order to prove the previous statements, let us compute the coupling strength between two eigenstates of two rings, say 1 and 2.
Let us indicate the corresponding $q^{th}$ eigenstates of the two rings as $$|\psi^{s,q} \rangle= \sum_k C_k^{s,q} |k\rangle,$$ where the states $|k \rangle$ represent the site basis of a ring and $s=1,2$. The coupling between two eigenstates belonging to two different rings can be written as: \begin{equation} \label{eq:st} V^q_{12} = \langle \psi^{1,q}| V| \psi^{2,q } \rangle= \sum_{k,k'} (C^{1,q}_k)^* C^{2,q}_{k'} V_{k,k'}. \end{equation} Using equation~(\ref{eq:d1}) we have that $V_{k,k'}=\Delta_{k,k'}= f(r_{k,k'}) \vec{\mu}_k \cdot \vec{\mu}_{k'} + g(r_{k,k'}) (\vec{\mu}_k \cdot \hat{r}_{k,k'}) (\vec{\mu}_{k'} \cdot \hat{r}_{k,k'})$. When the distance between the two rings is much larger than their diameter, we can approximate $r_{k,k'} \approx R_{12}$, where $R_{12}$ is the distance between the centres of the two rings. In this limit, equation~(\ref{eq:st}) becomes \begin{equation} V^q_{12} = \sum_{k,k'} (C^{1,q}_k)^* C^{2,q}_{k'} \left[ f(R_{12})\vec{\mu}_k \cdot\vec{\mu}_{k'} + g(R_{12}) (\vec{\mu}_k\cdot \hat{R}_{12}) (\vec{\mu}_{k'}\cdot \hat{R}_{12})\right] \, , \end{equation} which can be expressed in terms of the dipole strengths using equation~(\ref{eq:dipst}): \begin{equation} \label{eq:st2} V^q_{12} = \left[ f(R_{12}) |\vec{D}_q|^2 + g(R_{12}) (\vec{D}_q\cdot \hat{R}_{12}) (\vec{D}^*_{q}\cdot \hat{R}_{12})\right] \, . \end{equation} \begin{figure}[t] \begin{center} \includegraphics[scale=0.6]{pdf/st-fb.pdf} \caption{Coupling between ring eigenstates as a function of their distance $h$, normalized to the wavelength $\lambda_0 =650$ nm. Open circles represent the coupling $V^q_{12}$ (see equation~(\ref{eq:st})) between the ground states of two rings for the MT model. Red squares stand for the maximal coupling between individual molecules in the two rings. Blue triangles represent the coupling between the most excited eigenstates of the two rings.
The green curve represents the coupling between the giant dipoles of the ground states as given by equation~(\ref{eq:st2}). The three lines represent respectively the behaviours $1/r^2$ (dashed), $1/r^3$ (dot-dashed), $1/r^5$ (dotted). } \label{fig:sr2} \end{center} \end{figure} As a result, we obtain $ V^q_{12} \propto |D_q|^2 \propto N_2$: the eigenstates with a large dipole strength have a coupling enhanced by a factor proportional to the number of molecules $N_2$ in the ring. The above expression represents the interaction between the giant dipoles of the eigenstates of each ring. Therefore states with a large dipole strength will have a super-transfer coupling (proportional to the dipole strength of the eigenstates), increasing linearly with the number of molecules $N_2$ in each ring. At the same time, the coupling between two eigenstates with zero dipole strength will be suppressed, leaving only higher-order multipole terms to contribute to the coupling. This leads to a sub-transfer coupling. The super- and sub-transfer effects for the MT model are shown in Figure~\ref{fig:sr2}, where we compare: $(i)$ the coupling between the superradiant ground states (which have a large dipole strength) of two rings as a function of their rescaled distance (open circles); $(ii)$ the maximal coupling between single molecules of each ring as a function of the distance between the two rings (red squares); $(iii)$ the coupling between the most excited states (with a very small dipole strength) of each ring as a function of their distance (blue triangles). Let us comment on this figure in detail. First of all, we note that the coupling between the states with a large dipole is clearly larger (by a factor $\sim N_2 = 60$) than the maximal coupling between the single molecules, thus showing the super-transfer effect.
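The $N_2$ scaling of the super-transfer coupling can be illustrated with a toy calculation: for a fully symmetric (bright) eigenstate of parallel dipoles, the eigenstate-eigenstate coupling at large separation is $N$ times the single-molecule coupling. A minimal sketch, with a schematic dipole-dipole prefactor and geometry (not the MT parameters):

```python
import numpy as np

def eigenstate_coupling(N, R, mu=1.0):
    """Coupling between the fully symmetric states of two rings of N parallel
    dipoles, at separation R much larger than the ring diameter."""
    f = 1.0 / R**3                    # schematic dipole-dipole prefactor
    V = f * mu**2 * np.ones((N, N))   # all inter-ring pairs at distance ~ R
    c = np.ones(N) / np.sqrt(N)       # symmetric (bright) ring eigenstate
    return c @ V @ c

N, R = 60, 100.0
V_single = 1.0 / R**3                 # single-molecule coupling at distance R
# Super-transfer: the bright-state coupling is enhanced by a factor N
assert np.isclose(eigenstate_coupling(N, R) / V_single, N)
```

A dark state, e.g. $c_k \propto (-1)^k$, gives zero in the same approximation, so only the higher multipole corrections neglected here survive: this is the sub-transfer suppression.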
Moreover, the coupling between the eigenstates with a small dipole strength is much smaller than the maximal coupling between single molecules: this shows the sub-transfer effect. In the same figure, the continuous green curve shows the coupling between the ground states as given by equation~(\ref{eq:st2}). As one can see, at sufficiently large distances the coupling is well approximated by equation~(\ref{eq:st2}), thus confirming that it is enhanced by a factor proportional to the number of molecules $N_2$ in each ring. Another important observation concerns the dependence of such couplings on the distance $r=h/\lambda_0$ and how it is modified by the super- and sub-transfer effects. We can distinguish three different regimes: small distances, intermediate distances and distances comparable with the wavelength of the optical transition. At large distances, when $h \sim \lambda_0$, an oscillatory behaviour arises due to the presence of oscillatory terms in the Hamiltonian of the system, see equation~(\ref{eq:hreal}). At intermediate distances the super-transfer coupling decays as $1/r^{3}$, like the coupling between single molecules, consistently with the dipole-dipole nature of the interaction. On the other hand, the sub-transfer coupling decays as $1/r^5$, which is consistent with the higher-order multipole expansion of the coupling, since the dipole interaction is suppressed. At small distances the behaviour of the coupling with distance is less trivial: while the single-molecule coupling still behaves as $1/r^3$, the sub-transfer coupling decays much faster, before settling into the $1/r^5$ behaviour explained above. On the other hand, the super-transfer coupling decays as $1/r^2$, which is much slower than the dipole coupling.
Since all the couplings start from the same intensity at very small distances, and the super-transfer coupling has to rise above the single-molecule coupling, its decay must be slower than $1/r^3$; further analysis is nevertheless needed to understand the origin of such a slow decay of the interaction between giant dipoles. \subsection{Super-Transfer and density of states} \label{sub-53} \begin{figure}[t] \begin{center} \includegraphics[scale=0.6]{pdf/f-13.pdf} \caption{(A) The lowest part of the energy spectrum for a MT nanotube with 220 rings (open circles) compared with the spectrum generated by the super-transfer coupling between the three most superradiant states (SRSs) of each ring (crosses). Note the presence of a consistent energy gap between the ground state and the first excited state. (B) Energy gap (distance between the ground and the first excited state) for the MT model as a function of the nanotube length. As one can see, there is a region where the gap increases with the system size. The maximal gap occurs at $L = 1826 \, \mbox{\AA}$. The yellow vertical strip indicates the region where natural complexes operate. } \label{fig:gap} \end{center} \end{figure} From the discussion above we can conclude that all the SRSs belonging to each ring will couple among themselves through a super-transfer coupling. For instance, in the case of the MT model, the other two SRSs of the single rings, corresponding to the first and second excited states, also couple among themselves by super-transfer, see Figure~\ref{fig:sr}(A). While for the PD and the TD models the coupling between the SRSs of the rings gives rise to the SRS of the whole cylinder, which lies far away from the ground state, for the MT model the coupling between the SRSs of the single rings completely determines the lowest part of the spectrum. In order to prove the last statement we consider the $3 N_1$ eigenvalues generated by the super-transfer coupling of the three SRSs of each ring of the MT model.
The spectrum generated by the three SRSs is shown in Figure~\ref{fig:gap}(A) together with the exact spectrum of the MT model. As one can see, this simple approximation allows one to compute with high accuracy the lowest-energy part of the spectrum. The presence of super-transfer induces a large coupling energy in the lowest part of the spectrum, which in turn diminishes the density of states. This is also signaled in Figure~\ref{fig:gap}(A) by the change of slope seen in the lower part of the spectrum. Further evidence of such a decreased density of states, induced by the super-transfer coupling of the SRSs of each ring, is shown in Figure~\ref{fig:gap}(B). Here the energy gap between the ground state and the first excited state for the MT model is shown as a function of the length of the nanotube. Contrary to what can be expected for generic systems, the energy gap increases with the system size instead of decreasing, up to a critical system size, above which it decreases. The maximal energy gap occurs at a length of $\sim 182.6 $ nm, which is compatible with the typical length of such nanostructures found in nature, ranging between $100$ and $200$ nm. It would be interesting to understand what determines the critical system size at which the gap is maximal; we intend to study this problem in future work. The results obtained so far can be generalized to more complicated structures, such as the WT model, as the preliminary results in \ref{app-wt} show. Indeed, even for the WT model, where the disposition of dipoles is much more complicated than in the previous models, one can show that the superradiant state close to the ground state emerges from the super-transfer coupling between the superradiant states of cylindrical sub-units of the whole cylinder.
Summarizing, the analysis of both the MT and the WT models shows how a precise ordering of the dipoles in these systems can favour the emergence of super-transfer between the eigenstates of sub-units of the whole structure, producing an enhancement of the thermal coherence length. This represents a clear example of the interplay between structure and functionality. As a last remark, let us notice that even if the other models (TD, PD) have a super-transfer coupling between the ring eigenstates with the largest dipole strength, the resulting SRS lies in the middle of the spectrum and its effect on the thermal coherence length is less relevant (since the latter is sensitive to the density of states only in the lowest part of the energy spectrum). This argument strongly supports the relationship between the presence of a SRS close to the ground state and the thermal coherence length discussed above. \section{Natural concentric structures} \label{sec:conc} Natural antenna complexes in Green Sulphur Bacteria are not made of a single cylindrical surface. In order to take this into account, in this section we investigate a more complex configuration of dipoles on four concentric rolls, as found in the Green Sulphur bacterium \textit{Chlorobium Tepidum}. Such structures have been extensively considered in the literature (see for example \cite{Valleau,Koh,Saikin,Korppi}). Inspired by these studies, we consider here a model of the \textit{Chlorobium Tepidum} triple mutant (bchQRU) formed by four concentric cylindrical surfaces, as shown in Figure~\ref{fig-4cyl}(A). Our aim is to investigate whether concentric cylindrical aggregates can support delocalized excitonic states at room temperature more efficiently than single cylindrical structures.
\begin{figure}[t] \centering \begin{tabular}{cc} {\bf \Large (A)} & {\bf \Large (B)} \\ \includegraphics[width=7.5cm]{pdf/cilindri2.pdf} & \includegraphics[width=7.5cm]{pdf/1layer2.pdf} \\ \end{tabular} \caption{(A) Structure of an aggregate of Bchl molecules on four concentric rolls. The radius of the innermost roll is $ 30 $ \AA, while the distance between consecutive concentric rolls is equal to $ 21 $ \AA. (B) Single layer of the structure, formed by four concentric rings. The whole aggregate has been obtained by stacking 100 layers~\cite{Huh}.} \label{fig-4cyl} \end{figure} The distribution of the dipoles on each cylindrical surface is the same as in the MT model of the previous section. In Table~\ref{tab-01} we report all parameters for this model. \begin{table}[!ht] \centering \begin{tabular}{|c|c|} \hline Number of surfaces & $ 4 $ \\ \hline Radius of the innermost roll & $ 30 $ \AA \\ \hline Distance between concentric rolls & $21$ \AA \\ \hline Radii of the cylinders & $ 30-51-72-93$ \AA \\ \hline Number of dipoles on each ring & $ 30-51-72-93 $ \\ \hline Density (number of dipoles per unit ring radius, in \AA$^{-1}$) & $ 1$ (constant) \\ \hline \end{tabular} \caption{Main parameters used to engineer the structure with four concentric rolls.} \label{tab-01} \end{table} The coupling between the EMF and the dipoles of the aggregate has been taken into account as in the Hamiltonian (\ref{eq:hreal}). As in the previous sections, let us first analyze the dipole strengths associated with the eigenstates of the Hamiltonian (\ref{eq:hreal}). Results are shown in Figure~\ref{dipole}(A) for a complex made of 80 layers of 4 concentric rings. As one can see, the maximal dipole strength is concentrated in an energy region close to the ground state (the $43^{rd}$ eigenstate has the maximal dipole strength, see the inset in Figure~\ref{dipole}(A)). Such dipole strength is associated with eigenstates having a high degree of delocalization along the cylinders.
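For concreteness, the geometry of Table~\ref{tab-01} can be generated in a few lines. The sketch below builds the dipole positions of the four-ring stack; dipole orientations, which matter for the couplings in the MT-like arrangement, are omitted, and the axial spacing is inferred from the quoted total length $L=65.57$ nm for 80 layers:

```python
import numpy as np

radii = [30.0, 51.0, 72.0, 93.0]  # ring radii in angstrom (Table 1)
n_dip = [30, 51, 72, 93]          # dipoles per ring: density 1 per angstrom of radius
layers, L = 80, 655.7             # number of layers, total length in angstrom
dz = L / layers                   # inferred axial spacing, ~ 8.2 angstrom

# Cartesian positions of every dipole in the four concentric cylinders
pos = np.array([[r * np.cos(2 * np.pi * k / n), r * np.sin(2 * np.pi * k / n), m * dz]
                for m in range(layers)
                for r, n in zip(radii, n_dip)
                for k in range(n)])

assert pos.shape == (layers * sum(n_dip), 3)  # 80 layers x 246 dipoles = 19680 sites
assert np.isclose(np.hypot(pos[:, 0], pos[:, 1]).max(), 93.0)
```

Feeding these positions (together with the dipole orientations) into the Hamiltonian of equation~(\ref{eq:hreal}) reproduces the aggregate studied in this section.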
Further evidence is given in Figure~\ref{dipole}(B), where we show that the maximal dipole strength increases proportionally to the length $L$ of the cylinders. We also note that the maximal dipole strength for concentric cylinders is two to three times larger than the maximal dipole strength of a single cylindrical surface with the same geometry, see Figure~\ref{dipole}(B), where the same data of Figure~\ref{fig:secpar}(F) for the MT model have been reported for comparison. Note that the fact that concentric cylindrical surfaces can cooperate to create a larger SRS is not trivial, since the interaction between molecules in different cylinders is very weak, of the order of 16 cm$^{-1}$, which is one or two orders of magnitude smaller than the coupling between molecules inside each cylinder, see Table~\ref{tab-02}. \begin{figure}[t] \centering \includegraphics[scale=0.6]{pdf/dipole-fb2.pdf} \caption{(A) Dipole strength associated with each eigenstate of the system composed of 80 layers of 4 concentric rings, for a total length of $L=65.57$ nm, as a function of the eigenvalues. Inset: the low-energy part of the spectrum. Arrows indicate the Ground State (GS) and the state with maximal dipole strength (the $43^{rd}$ one). (B) Maximal dipole strength as a function of the rescaled length of the aggregate $L/\lambda_0$, where $\lambda_0 \approx 650$ nm. The dashed line represents the linear fit. The maximal length considered in this panel is $L=65.57$ nm, corresponding to 80 layers of 4 concentric rings. } \label{dipole} \end{figure} Finally, we have studied the effect of thermalization by putting the system in a thermal bath at room temperature, $ T=300 $ K. As before, we studied the thermal coherence length $ L_{\rho} $, see equation~(\ref{eq:lrho}). Results are shown in Figure~\ref{4cyl}(A) and compared with the same results obtained for the MT model.
A fitting with the function \begin{equation} \label{eq:fit} L_\rho = L_\infty \left( 1-e^{-N/N_c}\right), \end{equation} shown in the figure as dashed lines, gives for the asymptotic coherence length (measured in number of layers) $L_\infty = 532.9$ for the 4 cylinders and $L_\infty = 249.8$ for the MT model. Keeping in mind that the radius of the cylinder for the MT model is an average of the four radii of the structure composed of 4 concentric cylinders, it is remarkable that the asymptotic coherence length is more than twice that of the single cylindrical structure. \begin{figure}[t] \centering \includegraphics[scale=0.6]{pdf/lrh0a.pdf} \caption{ Thermal coherence length as a function of the number of layers in the cylinder for the system with four concentric cylinders (green circles) and the MT model with one cylinder only (red squares). Dashed lines are the fits with the expression~(\ref{eq:fit}), whose parameters are $L_{\infty}= 532.9$ and $N_c = 25.2$ for the dashed green curve and $L_{\infty}= 249.8$ and $N_c = 19.9$ for the dashed red curve. } \label{4cyl} \end{figure} This is highly non-trivial, since for the concentric cylinders we have many more states and the density of states is larger than that for the single cylinder having the same length. For a discussion on this point see~\ref{app-c}. The results in this Section show that packing symmetrical structures in concentric cylinders, as found in natural photosynthetic complexes, produces, at room temperature, a larger thermal coherence length than a single cylinder. \section{Conclusions and Perspectives} \label{conclu} We have analyzed realistic structures of self-aggregated molecular nanotubes of chlorophyll molecules as found in Antenna Complexes of Green Sulphur Bacteria. By taking into account positions and dipole orientations of the chlorophyll molecules in agreement with experimental data, we have shown that natural structures are able to support macroscopic coherent states even at room temperature.
Indeed, in natural complexes we have found delocalized thermal excitonic states with a coherence length extending over hundreds of molecules. We have shown that such a thermal coherence length is much larger than one could expect from the magnitude of the nearest-neighbour coupling, and that it cannot be explained even by the long-range nature of the interaction between the molecules. Instead, the ability of natural structures to support a large coherence length can be traced back to their specific geometric features. In order to explain how this is possible, we first considered cylindrical structures made of a sequence of rings, each containing a fixed number of molecules equally spaced on the ring itself. Since the disposition of the dipoles is highly symmetric, in each ring we have few superradiant eigenstates (to which we associate a giant dipole) where most of the dipole strength of the system is concentrated, and many subradiant states with zero dipole strength. Moreover, due to the discrete rotational symmetry of the whole cylinder around its axis, each eigenstate of the ring sub-unit is coupled only with the corresponding eigenstate in the other rings. The coupling between the superradiant eigenstates in each ring gives rise to the super-transfer effect, i.e. a coupling which is enhanced by a factor proportional to the number of molecules in the ring. At the same time, we have shown that the coupling between the subradiant states in each ring induces a sub-transfer effect, i.e. a coupling suppressed compared to the single-molecule coupling. Moreover, we have demonstrated that in natural complexes the super-transfer coupling between the superradiant states in each ring generates the lower part of the energy spectrum of the whole cylinder. Since the spectral energy width of a system is proportional to the intensity of the coupling between its parts, the enhanced super-transfer coupling is able to increase the spectral width close to the ground state.
This creates a depressed density of states in the lower part of the spectrum, allowing for a larger thermal coherence length. Indeed, the latter increases as the number of states in an interval $k_BT$ above the ground state decreases. We also gave evidence that similar mechanisms are responsible for the large thermal coherence length that we found in other natural structures (WT model), where the disposition of the dipoles is less simple than the one described above. From our results we can predict that symmetry in cylindrical molecular nanotubes is essential to obtain structures that are robust not only against thermal noise, as considered in this paper, but also against other sources of noise. The structural requirement is to create a super-transfer coupling between the superradiant eigenstates of cylindrical sub-units able to generate the lower part of the spectrum of the whole structure. Molecular nanotubes are fundamental structures in biological systems and they are among the most promising structures to be used in quantum devices. The most important message which can be extracted from our analysis is that specific geometric features, connected to symmetries, allow one to control the cooperative effects in molecular aggregates. Indeed, it is due to the presence of such cooperatively enhanced coupling (super-transfer) inside the molecular aggregates that macroscopic coherent states are allowed to survive at room temperature. This is an emergent property of such structures which cannot be reduced either to the intensity of the coupling between the molecules, or to their interaction range. The relevance of geometry in molecular aggregates, and the emergent properties arising from it, are fundamental to understanding even more complicated structures, for instance structures made of a few concentric cylinders as found in Green Sulphur bacteria.
Our preliminary study of such structures has shown that these aggregates have an enhanced thermal coherence length compared to single cylindrical surfaces. We would like to mention that recently, by some of the authors of this paper, excitonic states have been analysed also in microtubules~\cite{Phil}, which are molecular nanotubes thought to be involved in many cellular functions. That analysis has confirmed the role of symmetry and geometry in such structures too. In perspective, it would be interesting to investigate the relevance of macroscopic coherent states for light-harvesting and photo-excitation transport. Moreover, it would be important to understand the general structural requirements necessary to induce macroscopic coherent states in generic molecular networks. \ack GLC acknowledges the support of PRODEP (511-6/17-8017). Useful discussions with J. Knoester, N. Keren and G. G. Giusteri are also acknowledged. \newpage
\section{Introduction} The Peccei-Quinn (PQ) mechanism~\cite{Peccei:1977hh, Peccei:1977ur} is one of the most plausible solutions to the strong CP problem~\cite{Peccei:2006as}. It involves an anomalous $\text{U}(1)_{\rm PQ}$ symmetry that is spontaneously broken. The corresponding pseudo-Goldstone boson is the axion~\cite{Wilczek:1977pj, Weinberg:1977ma}. Astrophysical constraints~\cite{Raffelt:2006cw} on axions imply that the PQ breaking scale has to be very high ($f_a > 10^9$ GeV). This introduces a large hierarchy between the PQ and the electroweak scale, which calls for a theoretical explanation. One way to address the issue is in the context of Supersymmetry (SUSY), using the SUSY breaking effects to generate the PQ scale dynamically. This possibility was first considered by two different groups~\cite{Asaka:1998ns,ArkaniHamed:1998kj}. They proposed a supersymmetric hadronic axion model composed of three sectors: a hidden sector where SUSY is broken, a PQ sector defined by the superpotential \begin{equation} W_{\rm PQ} = \lambda_p S \tilde \Phi_p \Phi_p \, , \end{equation} and a messenger sector \begin{equation} W_M = \lambda_M X \tilde \Phi_M \Phi_M \, . \end{equation} The chiral superfields $S$, $\Phi_p$ and $\tilde\Phi_p$ are charged under the $\text{U}(1)_{\rm PQ}$, with $S$ the gauge-singlet axion multiplet, and $\Phi_p,\;\tilde\Phi_p$ in the $\irrep{3}$ and $\irrepbar{3}$ of the color gauge group $\text{SU}(3)_C$. The superfield $X$ has a SUSY breaking vacuum expectation value (VEV), $\langle X \rangle = M + \theta^2 F$, and $\Phi_M,\; \tilde\Phi_M$ are the messenger superfields of gauge mediation, also in the $\irrep{3}$ and $\irrepbar{3}$ of $\text{SU}(3)_C$. At two loops gauge mediation generates SUSY breaking masses for the PQ squarks $\Phi_p$ and $\tilde{\Phi}_p$. In turn, SUSY breaking effects are transmitted to the scalar $S$ field via one-loop diagrams of $\Phi_p$ and $\tilde{\Phi}_p$.
This generates an effective potential $V_{\rm eff}(S)$, effectively at three loops, such that $\frac{\partial V_{\rm eff}}{\partial |S|} < 0$. The effects of gravity mediation, parametrized as \begin{equation} V_{\rm gr} \sim m_{3/2}^2 |S|^2 \, , \end{equation} with $m_{3/2}$ the gravitino mass, then stabilize the potential for $S$ at large field values. This results in a large PQ breaking scale, $f_a = \langle S \rangle$. A similar setup was reexamined more recently in the context of gauge mediation. Some of the present authors considered a model in which the PQ messengers $\Phi_p, \tilde\Phi_p$ and the regular messengers $\Phi_M, \tilde\Phi_M$ mixed~\cite{Carpenter:2009sw} and concluded that the one-loop potential did not stabilize the PQ scale away from the origin of the field $S$. The same work~\cite{Carpenter:2009sw} considered other ways of generating the PQ scale dynamically, but they all involved models with additional gauge interactions that were somewhat complicated. In Ref.~\cite{Jeong:2011xu} the authors considered a slightly different model with messenger mixing and showed that it led to the stabilization of the PQ scale at two loops.\footnote{See also Refs.~\cite{Choi:2011rs,Nakayama:2012zc} for different mechanisms of stabilization of the PQ scale in gauge mediation.} In this work we revisit the dynamical generation of the PQ scale in the context of gauge mediation and ask whether it is possible to achieve it in a simpler way. To this end we consider models with an $R$-symmetry and a PQ symmetry, containing the fields $S$, $X$ and an arbitrary number of messengers. We classify these models systematically according to the charge assignments of the fields, and point out possible virtues and disadvantages of each class. We study in detail concrete examples in which the one-loop effective potential generates a VEV, $\langle S \rangle \sim M$, with $M$ the messenger scale.
Taking a high enough messenger scale, $M > 10^9$ GeV, we then have an acceptable PQ breaking scale, $f_a \sim M$. This is significantly simpler than the scenarios we described above. First, the analysis at one loop is sufficient. Second, the PQ scale is stabilized solely within the context of gauge mediation, without the intervention of gravity mediation. The paper is organized as follows. In section~\ref{sec:models} we introduce the models and categorize them into four distinct classes. In \cref{sec:completemodel} we study a couple of examples and comment on the vacuum stability and on cosmological constraints. In \cref{sec:quality} we discuss how one could address the issue of the axion quality in these models by invoking discrete symmetries. In section~\ref{sec:discussion} we summarize our main results and discuss possible implications for phenomenological model building. We include two appendices. In the first we report many details on models with two sets of messengers. In the second we study the anomalies related to the discrete gauge symmetries invoked to address the axion quality problem. \section{Classification of models} \label{sec:models} We consider models defined by the superpotential \begin{equation} \label{eq:superdef} W = \mathcal{M}_{ij}(X,S) \tilde\Phi_i \Phi_j + W_R(X) \, , \end{equation} where \begin{equation} \label{eq:bigM} \mathcal{M}_{ij}(X,S) = X \lambda_{ij} + m_{ij} + S \delta_{ij} \, , \qquad i,j = 1,\dots, N \, . \end{equation} There are $N$ messenger fields, $\Phi_i,\tilde \Phi_i$, transforming under the $\irrep{5}$ and $\irrepbar{5}$ of $\text{SU}(5)$ respectively, and two gauge-singlet fields, $X$ and $S$. The matrices $\lambda$, $m$ and $\delta$ are $N\times N$. We assume a global $R$-symmetry, $U(1)_R$, and a PQ symmetry, $\text{U}(1)_{\rm PQ}$, under which the fields are charged as shown in \cref{tab:U1charges}.
The choice of keeping $X$ neutral under $\text{U}(1)_{\rm PQ}$ is convenient, as it keeps the SUSY breaking and PQ breaking sectors separate. Note also that our models, like most models of gauge mediation, have an extra $U(1)_V$ global symmetry, under which the $\Phi$'s and the $\tilde \Phi$'s transform with opposite phases. The quantum number associated with this symmetry is the messenger number.\footnote{If the messenger number were conserved, the lightest component of the messengers would be stable and could overclose the universe. We assume that $U(1)_V$ is broken in another sector of the theory. A coupling of one messenger to two matter fields in the superpotential of the visible sector, for instance, would suffice~\cite{Evans:2013kxa}.} The term $W_R(X)$ in \cref{eq:superdef} stands for a hidden sector superpotential, necessary to generate a SUSY breaking VEV for $X$. A minimal choice is $W_R(X)= F X$, but for the time being we leave $W_R(X)$ unspecified. Consistent with our choice that $X$ is uncharged under the PQ symmetry, we assume that all the fields that appear in $W_R(X)$ are singlets under $\text{U}(1)_{\rm PQ}$. Our notation, as well as the reasoning we outline in the rest of this section, is inspired by the models of (extra)ordinary gauge mediation~\cite{Cheung:2007es}, where the superpotential has the form of~\cref{eq:superdef} with $S$ absent.
\begin{table} \begin{center} \begin{tabular}{l l l l l} \toprule {} & $X$ & $S$ & $\Phi_i$ & $\tilde\Phi_i$ \\\midrule $R$ & $2$ & $r_S$ & $r_i$ & $\tilde r_i$ \\ ${\rm PQ}$ & $0$ & $p_S$ & $p_i$ & $\tilde p_i$ \\ \bottomrule \end{tabular} \end{center} \caption{$R$ and PQ charges of the fields in the model.} \label{tab:U1charges} \end{table} The superpotential in \cref{eq:superdef} must have $R$ charge 2 and PQ charge zero, resulting in the following selection rules \begin{align} & \lambda_{ij} \neq 0 \quad & {\rm only} \ {\rm if} \quad &\tilde r_i + r_j = 0 \quad & {\rm and} & \quad \tilde p_i + p_j = 0 \, , \label{eq:rulelambda} \\ & m_{ij} \neq 0 \quad & {\rm only} \ {\rm if} \quad & \tilde r_i + r_j = 2 \quad & {\rm and} & \quad \tilde p_i + p_j = 0 \, , \label{eq:rulem} \\ & \delta_{ij} \neq 0 \quad & {\rm only} \ {\rm if} \quad & \tilde r_i + r_j = 2-r_S \quad & {\rm and} & \quad \tilde p_i + p_j = -p_S \, . \label{eq:ruledelta} \end{align} As a consequence the determinant of the matrix $\mathcal{M}$ is a monomial in $X$ and $S$: \begin{equation} \label{eq:monomials} \det \mathcal{M} = X^n S^q G(\lambda, m, \delta) \, , \end{equation} where $G(\lambda, m, \delta)$ is some function of the couplings and \begin{align} n & = \sum_{i=1}^N \left(1 - \frac{1}{2}( \tilde r_i +r_i) + \frac{r_S}{2} \frac{(\tilde p_i + p_i)}{p_S} \right) \, , \\ q & = -\frac{1}{p_S} \sum_{i=1}^N ( \tilde p_i + p_i ) \, . \end{align} Here $n$ and $q$ are integers satisfying $0 \leq n \leq N$ and $0 \leq q \leq N$. The proof of the identity in \cref{eq:monomials} can be done in analogy with that given in Ref.~\cite{Cheung:2007es}. Our aim is to look for models that generate a large VEV for the field $S$, so that the PQ symmetry is spontaneously broken. Note that this implies that the PQ charge of $S$ must be strictly nonzero, $p_S \neq 0$. 
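The selection rules and the monomial structure of $\det\mathcal{M}$ can be checked mechanically. A sketch for a hypothetical $N=2$ assignment (the charges below are an illustrative choice with $r_S=2$, $p_S=2$, not taken from the text): only $\lambda_{11}$, $\delta_{12}$ and $\delta_{21}$ survive the selection rules, and $\det \mathcal{M} \propto S^2$, i.e. $n=0$ and $q=N=2$:

```python
import numpy as np

# Hypothetical U(1)_R x U(1)_PQ charge assignment for N = 2 (illustrative only)
r_S, p_S = 2, 2
r  = {1: -1, 2: -1}; rt = {1: 1, 2: 1}    # R charges of Phi_i and Phi~_i
p  = {1: -1, 2: -3}; pt = {1: 1, 2: -1}   # PQ charges of Phi_i and Phi~_i

def allowed(rule_r, rule_p):
    # A coupling (i, j) is allowed only if the charges add up to the rule
    return {(i, j): rt[i] + r[j] == rule_r and pt[i] + p[j] == rule_p
            for i in (1, 2) for j in (1, 2)}

lam   = allowed(0, 0)            # lambda_ij: R charge 0,       PQ charge 0
m     = allowed(2, 0)            # m_ij:      R charge 2,       PQ charge 0
delta = allowed(2 - r_S, -p_S)   # delta_ij:  R charge 2 - r_S, PQ charge -p_S

assert [ij for ij, ok in lam.items() if ok] == [(1, 1)]
assert not any(m.values())
assert sorted(ij for ij, ok in delta.items() if ok) == [(1, 2), (2, 1)]

# Resulting mass matrix: det M = -lambda_s^2 S^2, a pure monomial (n=0, q=2)
lam_x, lam_s = 0.3, 0.7
for X, S in [(1.0, 0.5), (2.0, 1.5)]:
    M = np.array([[lam_x * X, lam_s * S], [lam_s * S, 0.0]])
    assert np.isclose(np.linalg.det(M), -lam_s**2 * S**2)
```

Since $\det\lambda = \det m = 0$ and $\det\delta \neq 0$, this assignment falls into the Type C class introduced below.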
As $S$ is a flat direction of the classical potential we have to compute the one-loop effective potential and check if it stabilizes $S$ away from the origin. The identity of \cref{eq:monomials} leads to a classification scheme with four qualitatively distinct types of models. \begin{itemize} \item {\em Type A:} $\det m \neq 0$, $\det \lambda = \det \delta = 0$. Here we can go to a basis where $m$ is diagonal, which implies $\tilde r_i +r_i = 2$ and $\tilde p_i +p_i = 0$. It follows that \begin{equation} n = q = 0 \, . \end{equation} The messengers are stable around $X=0$ and $S=0$, but some can become tachyonic at large $X$ and $S$. In these models the gaugino masses vanish to leading order in $F$. We will not study any models of this kind in the remainder of the paper. \item {\em Type B:} $\det \lambda \neq 0$, $\det m = \det \delta = 0$. Here we can go to a basis where $\lambda$ is diagonal, $ \tilde r_i +r_i = 0$ and $ \tilde p_i +p_i = 0$. Thus we have \begin{equation} n=N \quad {\rm and} \quad q=0 \, . \end{equation} At large $X$ all messengers have masses of order $\lambda X$. As $X$ approaches the origin, $\det m = 0$ implies that some messengers have $\mathcal{O}(m)$ masses, while others are light with masses going to zero as some power of $X$. Eventually the latter become tachyonic as $X$ gets closer to zero. There can however be local minima of the potential for finite $X$ where all the messengers are massive. We study a simple model of this form in \cref{sec:twomess}. \item {\em Type C:} $\det \delta \neq 0$, $ \det \lambda =\det m = 0$. Here we can go to a basis where $\delta$ is diagonal, $ \tilde r_i +r_i = 2 - r_S$ and $ \tilde p_i +p_i = -p_S$. It follows that \begin{equation} n=0 \quad {\rm and} \quad q=N \, . \end{equation} With a reasoning analogous to the one for Type B models, we deduce that in Type C models some messengers become tachyonic as $S$ goes to zero. 
As $n=0$, the gauginos are massless at leading order in the SUSY breaking parameter $F$. In the next section we study in detail a model belonging to this category for which $S$ is stabilized away from the origin. \item {\em Type D:} $ \det \lambda =\det m = \det \delta= 0$. In this category we have \begin{equation} 0 < n < N \quad {\rm and} \quad 0 < q < N \, . \end{equation} The messenger sector combines features of the previous categories and there are no tachyons in a region $X_{\rm min} < |X| < X_{\rm max}$, $S_{\rm min} < |S| < S_{\rm max}$. We will see that models of this kind can develop a minimum with $\langle X\rangle\neq 0$ and $\langle S\rangle\neq 0$ at one loop. \end{itemize} \section{Concrete examples} \label{sec:completemodel} \subsection{A model with two sets of messengers} In this section we consider a model of Type C with two sets of messengers ($N=2$) that generates a VEV for the field $S$ of order the messenger scale $M$. Taking $M>10^9$ GeV we then have an acceptable PQ breaking scale. Many details of the analysis are found in \cref{sec:twomess} where we look in general at models with $N=2$ and find other options (of Type B and D) with less appealing features. For the sake of definiteness we specify $W_R(X)$, which we take to be of the simple form proposed in Ref.~\cite{Shih:2007av}. The superpotential of the model is then given by: \begin{align} W & = W^C_{\rm PQ} + W_R \, ; \label{eq:fullmod} \\ W^C_{\rm PQ} & = X \lambda_x \tilde\Phi_1 \Phi_1 + S \lambda_s \left( \tilde \Phi_1 \Phi_2 + \tilde\Phi_2 \Phi_1 \right) \, , \label{eq:WC} \\ W_R & = X(\lambda \varphi_1 \varphi_{-1} + F) + m_1 \varphi_{-1} \varphi_3 +\frac{1}{2} m_2 \varphi_1^2 \, . \end{align} Here the $\varphi$ fields are gauge- and PQ-singlets, with the subscript denoting their $R$ charge. The PQ and $R$ symmetries forbid additional renormalizable terms in~\cref{eq:fullmod}. We have taken the two couplings of $S$ to the messengers to be equal. 
With this choice the model has a messenger parity symmetry~\cite{Dimopoulos:1996ig} that has the virtue of forbidding dangerous D-terms~\cite{Dine:1981gu}, which would otherwise lead to tachyonic sfermions. At tree level there is a pseudo moduli space of vacua on which $\varphi_i=0,~\Phi_i=0,~\tilde \Phi_i=0$ with $X$ and $S$ arbitrary. As described in the previous section, some of the messengers are tachyonic for small $S$. Indeed for small $S$ the potential rolls down to a moduli space of supersymmetric vacua with $\varphi_i=0,\;S=0,\;X=0$ and on which the gauge invariant combinations $\tilde\Phi_i \Phi_j$ are subject to the constraints: \begin{equation} \tilde \Phi_1 \Phi_1=-{F\over \lambda_x}~,\qquad \tilde\Phi_1 \Phi_2+\tilde\Phi_2 \Phi_1=0~. \end{equation} Moreover, on the pseudo moduli space some of the $\varphi_i$ become tachyonic at large $X$. In this case the potential rolls down along a runaway direction on which $ \Phi_i=\tilde \Phi_i=0$. This runaway is parametrized by $\varphi_3\rightarrow \infty$ and~\cite{Shih:2007av} \begin{equation} X = \left({m_1^2 m_2 \varphi_3^2\over \lambda^2 F}\right)^{1\over 3},\qquad \varphi_1=\left({F m_1\varphi_3\over \lambda m_2}\right)^{1\over 3},\qquad \varphi_{-1}=\left({F^2 m_2\over \lambda^2 m_1 \varphi_3}\right)^{1\over 3} ~. \end{equation} We are interested in establishing whether the one-loop effective potential on the pseudo moduli space has a local minimum in $X$ and $S$ such that no field is tachyonic. The Coleman-Weinberg formula for the potential at one loop is: \begin{equation} \label{eq:CWfull} V^{(1)} = \frac{1}{64 \pi^2} \STr \left[ \hat M^4 \left(\log \frac{\hat M^2}{\Lambda^2} -\frac{1}{2} \right) \right] \, , \end{equation} where $\STr$ stands for supertrace, $\hat M^2$ is shorthand for the squared mass matrices of the scalar and fermion components of the superfields and $\Lambda$ is the cutoff scale.
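To illustrate how the Coleman-Weinberg formula above is evaluated in practice, the following sketch computes the supertrace for a single toy messenger pair with superpotential $\lambda X \varphi \tilde\varphi + FX$. This toy spectrum (scalar mass eigenvalues $|\lambda X|^2 \pm |\lambda F|$, fermion mass squared $|\lambda X|^2$) is a standard simplification chosen for illustration, not the messenger content of the model in the text:

```python
import math

def V_CW(bosons_m2, fermions_m2, Lambda):
    # (1/64 pi^2) STr[ M^4 (log(M^2/Lambda^2) - 1/2) ]:
    # scalar eigenvalues enter with +, fermionic ones with -, each fermion
    # counted with weight 2 to balance its two complex scalar partners
    s = sum(m2**2 * (math.log(m2 / Lambda**2) - 0.5) for m2 in bosons_m2)
    s -= sum(2 * m2**2 * (math.log(m2 / Lambda**2) - 0.5) for m2 in fermions_m2)
    return s / (64 * math.pi**2)

def V1_toy(X, lam, F, Lambda):
    # toy messenger pair with W = lam*X*phi*phitilde + F*X (an illustrative
    # assumption): scalars have m^2 = (lam*X)^2 +/- lam*F, the fermion
    # has m^2 = (lam*X)^2; valid for (lam*X)^2 > lam*F (no tachyons)
    m2 = (lam * X)**2
    return V_CW([m2 + lam * F, m2 - lam * F], [m2], Lambda)

# SUSY limit: for F = 0 the supertrace cancels and the potential vanishes
print(V1_toy(1.0, 1.0, 0.0, 10.0))  # -> 0.0
```

Since $\STr \hat M^4$ is $X$-independent in this toy example, differences $V^{(1)}(X_1)-V^{(1)}(X_2)$ are cutoff independent, which is what makes the minimization of the one-loop potential well defined.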
We can compute the one-loop effective potential $V^{(1)}(X,S)$, as a function of both $X$ and $S$, at all orders in the SUSY breaking parameter $F$. Given the form of $W$ in \cref{eq:fullmod} the squared mass matrices of scalars and fermions are block diagonal and the resulting one-loop potential can be written as the sum of two terms \begin{equation} \label{eq:V1factors} V^{(1)}(X,S) = V^{(1)}_{\rm PQ}(X,S) + V^{(1)}_R(X) \, , \end{equation} where $V^{(1)}_R(X)$ was computed in Ref.~\cite{Shih:2007av}. For some range of the parameters one obtains a minimum at $\langle X \rangle =M$ for which all the $\varphi_i$'s are non-tachyonic. The $V^{(1)}_{\rm PQ}(X,S)$ contribution to the one loop potential will not destabilize this minimum provided that \begin{equation} \label{eq:Xmincond} \frac{\partial V^{(1)}_{\rm PQ}(X,S)}{\partial X} \ll \langle X \rangle \frac{\partial^2 V^{(1)}_R(X)}{\partial X^2} \, . \end{equation} This condition can be satisfied by taking $\lambda_x$ sufficiently small, $\lambda_x < 0.1 \ \lambda$. In \cref{sec:twomess} we present a detailed analysis of $V^{(1)}_{\rm PQ}(X,S)$. In summary, there is a local, PQ breaking minimum at \begin{equation} \label{eq:modmin} S_{\rm min} \simeq \frac{\lambda_x}{\lambda_s} \ e^{-3/2} M \, . \end{equation} For $S < S_{\rm tac}$, with $S_{\rm tac} = \frac{\sqrt{\lambda_x F}}{\lambda_s}$, some messengers become tachyonic and the system rolls down classically to the SUSY vacuum. For $S_{\rm min}$ to lie in the tachyon-free region we need to satisfy \begin{equation} F < e^{-3} \lambda_x M^2 \, . \end{equation} We want to make sure that our metastable vacuum is long lived, {\it i.e.} that the tunneling rate from $S_{\rm min}$ to $S = 0$ is low\footnote{The tunneling rate to the runaway direction where $\varphi_3\rightarrow \infty$ can be easily made very small~\cite{Shih:2007av}.}. A detailed study of this rate is beyond the scope of this work.
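A quick numerical sanity check confirms that the bound on $F$ above is exactly the requirement $S_{\rm min} > S_{\rm tac}$. The parameter values below are illustrative assumptions only, not values taken from the analysis:

```python
import math

def S_min(lam_x, lam_s, M):
    # local PQ-breaking minimum, S_min ~ (lam_x/lam_s) e^{-3/2} M
    return (lam_x / lam_s) * math.exp(-1.5) * M

def S_tac(lam_x, lam_s, F):
    # below this value of S some messengers become tachyonic
    return math.sqrt(lam_x * F) / lam_s

# illustrative parameter values (assumptions, not from the text), GeV units
lam_x, lam_s, M = 1e-2, 1e-3, 1e12
F_crit = math.exp(-3) * lam_x * M**2   # the bound F < e^{-3} lam_x M^2

for F in (0.5 * F_crit, 2.0 * F_crit):
    # the two booleans agree: F below the bound <=> S_min is tachyon-free
    print(F < F_crit, S_min(lam_x, lam_s, M) > S_tac(lam_x, lam_s, F))
```

Note that the comparison is independent of $\lambda_s$, which drops out of the ratio $S_{\rm min}/S_{\rm tac}$.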
It is sufficient to note that the potential barrier height does not vary strongly with $\lambda_s$ while its width, $S_{\rm min} - S_{\rm tac}$, is proportional to $\lambda_s^{-1}$. Hence the tunneling rate can be made small by lowering $\lambda_s$. At the metastable minimum of $V^{(1)}(X,S)$, with $\langle S \rangle \sim \langle X \rangle = M$, we have the following spectrum: \begin{enumerate} \item The axion and the $R$-axion are massless.\footnote{The axion then acquires a small mass due to the $\text{U}(1)_{\rm PQ}$ anomaly, while the $R$-axion can acquire a mass from explicit $R$-symmetry breaking in supergravity~\cite{Bagger:1994hh}. } \item The saxion has a mass \begin{equation} \label{eq:saxmass} m_s \sim \sqrt{\frac{\lambda_s^2}{16 \pi^2}} \frac{F}{M} \, . \end{equation} In terms of loop counting this is larger than the soft masses of the MSSM particles in gauge mediation. However vacuum stability requires small $\lambda_s$, and as a result the saxion is likely lighter than most of the MSSM particles. \item The axino, the fermionic component of the superfield $S$, has a mass \begin{equation} \label{eq:axinomass} m_{\tilde a} \sim \frac{\lambda_s^2}{16 \pi^2} \frac{F}{M} \, , \end{equation} so it is lighter than the saxion. \item The $R$-saxion has a mass \begin{equation} \label{eq:Rsaxmass} m_{Rs} \sim \sqrt{\frac{\lambda^2}{16 \pi^2}} \frac{F}{M} \, , \end{equation} and is typically heavier than the saxion, as the coupling $\lambda$ that appears in $W_R$ does not need to be as small as $\lambda_s$. \item The gravitino is light, as usual in gauge mediation \begin{equation} m_{3/2} \sim \frac{F}{M_{\rm Pl}} \, , \end{equation} where $M_{\rm Pl}$ is the reduced Planck mass. \end{enumerate} The saxion and $R$-saxion are the two pseudomoduli in the model and we need to make sure that they do not pose cosmological issues~\cite{Banks:2002sd}. Typically such issues are more severe the lighter the pseudomodulus, so we concentrate on the saxion here, as $m_s < m_{Rs}$.
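For orientation, the hierarchy among these parametric mass estimates can be tabulated numerically; the parameter values below are illustrative assumptions only, and $\mathcal{O}(1)$ factors are dropped:

```python
import math

M_PL = 2.4e18  # reduced Planck mass in GeV

def spectrum(lam_s, lam, F, M):
    # parametric estimates of the masses listed above, in GeV
    loop_s = lam_s / (4 * math.pi)        # sqrt(lam_s^2 / 16 pi^2)
    return {
        "saxion":    loop_s * F / M,
        "axino":     loop_s**2 * F / M,   # lam_s^2 / 16 pi^2
        "R-saxion":  (lam / (4 * math.pi)) * F / M,
        "gravitino": F / M_PL,
    }

# illustrative values: lam_s small for vacuum longevity, lam of order one
masses = spectrum(lam_s=1e-3, lam=0.5, F=1e15, M=1e12)
print(masses)
```

With $\lambda_s \ll \lambda < 1$ one indeed finds $m_{\tilde a} < m_s < m_{Rs}$, as stated in the text.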
The main decay modes of the saxion are (i) into two axions, (ii) into an axino and a gravitino, and (iii) into two gravitinos.\footnote{The saxion can also decay into pairs of MSSM particles or pairs of gauge bosons, but these are further suppressed and subdominant compared to (i), (ii), and (iii) in the text.} The decays (ii) and (iii) can be understood from the effective operator \begin{equation} \label{eq:highK} \frac{1}{M^2} \int d^4 \theta \ (X^\dagger X) (S^\dagger S) \, . \end{equation} The decay rate in each case is given approximately by \begin{equation} \label{eq:decay} \Gamma_s \sim \frac{1}{16 \pi} \frac{m_s^3}{M^2} \simeq 10^{-25} \ {\rm GeV} \ \left( \frac{m_s}{1 \ {\rm GeV}} \right)^3 \left( \frac{10^{12} \ {\rm GeV}}{M} \right)^2 \, . \end{equation} The saxion starts oscillating about its minimum when the Hubble rate, $H \sim T^2/M_{\rm Pl}$, is comparable to its mass, $H \sim m_s$. At that time it has an energy density of order $m_s^2 M^2$, and constitutes a fraction $M^2/M_{\rm Pl}^2$ of the total energy density. From then on it behaves like matter, so its fraction of the energy density grows with the scale factor. It decays when $H \sim \Gamma_s$, that is, at a temperature \begin{equation} \label{eq:Tdecay} T_s^{\rm dec} \sim \frac{m_s^{3/2} M_{\rm Pl}^{1/2}}{10 \ M} \, . \end{equation} Requiring that it decays safely before Big Bang Nucleosynthesis (BBN), $T_s^{\rm dec} > 0.1$ GeV, amounts to a lower bound on $\lambda_s F$: \begin{equation} \label{eq:BBNconst} \lambda_s F > 10^{15} \ {\rm GeV}^2 \left( \frac{M}{10^{12} \ {\rm GeV}} \right)^{5/3} \, , \end{equation} where we used \cref{eq:saxmass}. As long as $M<10^{12}$ GeV, the bound of \cref{eq:BBNconst} also guarantees that the saxion decays before it comes to dominate the energy density. This is helpful in relieving possible constraints from extra radiation at BBN~\cite{Kawasaki:2007mk}, as the decay products are relativistic.
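These estimates are easy to check numerically. The sketch below (with the reduced Planck mass taken as $2.4\times 10^{18}$ GeV, an assumption on our part) reproduces the quoted decay rate to order of magnitude and the parametric form of the BBN bound on $\lambda_s F$:

```python
import math

M_PL = 2.4e18  # reduced Planck mass, GeV

def gamma_s(m_s, M):
    # saxion decay rate, Gamma_s ~ m_s^3 / (16 pi M^2), in GeV
    return m_s**3 / (16 * math.pi * M**2)

def T_decay(m_s, M):
    # temperature at which H ~ Gamma_s
    return m_s**1.5 * math.sqrt(M_PL) / (10 * M)

def lam_sF_bound(M, T_bbn=0.1):
    # demanding T_decay > T_bbn with m_s = lam_s F / (4 pi M)
    m_s_min = (10 * T_bbn * M / math.sqrt(M_PL))**(2 / 3)
    return 4 * math.pi * M * m_s_min

print(gamma_s(1.0, 1e12))   # ~ 2e-26 GeV, the order of the quoted estimate
print(lam_sF_bound(1e12))   # ~ 1e15 GeV^2, as in the BBN constraint
```

The function also reproduces the quoted $M^{5/3}$ scaling of the bound.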
The axino in this model is much heavier than the gravitino and its main decay mode is into a gravitino and an axion, which are both dark matter candidates. Avoiding over-closure of the universe with too much gravitino dark matter then results in a bound on the reheating temperature, which should be lower than about $10^5$ GeV~\cite{Cheung:2011mg}. \subsection{Models with more messengers} The model defined by \cref{eq:fullmod} is of Type C and has the deficiency that gaugino masses are suppressed, as they are not generated at one loop. One way to fix this is to introduce more messengers. For example we can replace $W_R$ in \cref{eq:fullmod} with \begin{equation} \label{eq:newbreak} W_R=X \lambda (\tilde\Phi_3 \Phi_3+\tilde\Phi_4 \Phi_4 )+ X F + m \tilde \Phi_3 \Phi_4 \, . \end{equation} Under the $R$-symmetry the new messengers have charges $\tilde r_4=r_3=1$ and $\tilde r_3=r_4=-1$. The model specified by this $W_R$ plus $W^C_{\rm PQ}$ [which we take of the same form as in \cref{eq:WC}] is of Type D. The superpotential (\ref{eq:newbreak}), which was considered in Ref.~\cite{Cheung:2007es}, breaks both SUSY and the $R$-symmetry spontaneously for a large range of parameters. This continues to be true when added to the PQ sector as long as the couplings in $W^C_{\rm PQ}$ are not too large.\footnote{All possible couplings between the sets of messengers appearing in $W_{\rm PQ}^C$ and the messengers in $W_R$ must also be small.} This construction is rather ad hoc but serves as an example of a messenger sector that breaks SUSY, the $R$-symmetry and the PQ symmetry at one loop. \section{The axion quality} \label{sec:quality} The PQ mechanism provides a solution to the strong CP problem as long as the $\text{U}(1)_{\rm PQ}$ is violated almost exclusively by the QCD anomaly. One must ensure that higher-dimension operators which violate the symmetry explicitly~\cite{Kamionkowski:1992mf, Holman:1992us} are suppressed.
In our framework such operators can appear in the superpotential, taking the form \begin{align} \label{eq:superhigh} \delta W \supset X M_{P}^2 \left(\frac{X}{M_{P}}\right)^a \left(\frac{S}{M_{P}} \right)^b \, , \qquad \text{where} \quad a \in \mathbb{N}^0\, , \, b \in \mathbb{N}^+\, , \end{align} and $M_P$ is the Planck mass. These terms result in a contribution to the scalar potential that has to be small: \begin{align} \label{eq:CPconstraint} |F| M_{P}^2 \left( \frac{f_a}{M_{P}}\right)^{a+b} < \SI{E-14}{\GeV^4}\, . \end{align} Here we have taken $\langle X \rangle \simeq \langle S \rangle = f_a \simeq M$. The above inequality must be satisfied if the axion is to provide the solution to the strong CP problem \cite{Carpenter:2009zs}. If $f_a \simeq \SI{E10}{\GeV}$ and $|F| \simeq \SI{E15}{\GeV^2}$, then this corresponds to the requirement that operators with $a+b < 8$ must be forbidden in the superpotential. Additionally one should worry about non-renormalizable operators in the K\"ahler potential. These are of the form \begin{align} \label{eq:Khigh} \delta K \supset \int \diff^2 \theta \diff^2 \bar{\theta} X X^\dag \left(\frac{X}{M_{P}}\right)^a \left(\frac{S}{M_{P}} \right)^b\, , \qquad \text{where} \quad a \in \mathbb{N}^0\, , \, b \in \mathbb{N}^+\, , \end{align} which lead to contributions to the scalar potential that must satisfy \begin{align} F F^\dag \left( \frac{f_a}{M_{P}}\right)^{a+b} < \SI{E-14}{\GeV^4}\, . \end{align} Taking once again $f_a \simeq \SI{E10}{\GeV}$ and $|F| \simeq \SI{E15}{\GeV^2}$, we must forbid operators with $a+b < 5$. This is a less stringent constraint on $a$ and $b$ compared to contributions from the non-renormalizable superpotential operators of \cref{eq:superhigh}. One possibility to address this problem is to impose discrete gauge symmetries~\cite{Banks:1989ag,Krauss:1988zc,Ibanez:1991hv}.
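The two counting exercises above are easy to automate. The sketch below (taking the Planck mass as $1.22\times 10^{19}$ GeV, an assumption consistent with the quoted numbers) finds the smallest allowed $a+b$ for both the superpotential and K\"ahler operators:

```python
M_P = 1.22e19   # Planck mass in GeV (assumed value)
BOUND = 1e-14   # maximal allowed contribution to the potential, GeV^4

def min_allowed_ab(F, f_a, kahler=False):
    # smallest a+b whose contribution
    #   |F| M_P^2 (f_a/M_P)^(a+b)   (superpotential case), or
    #   |F|^2     (f_a/M_P)^(a+b)   (Kahler case)
    # falls below BOUND; operators with smaller a+b must be forbidden
    prefactor = F**2 if kahler else F * M_P**2
    n = 1
    while prefactor * (f_a / M_P)**n >= BOUND:
        n += 1
    return n

print(min_allowed_ab(1e15, 1e10))               # -> 8: forbid a+b < 8
print(min_allowed_ab(1e15, 1e10, kahler=True))  # -> 5: forbid a+b < 5
```

Both outputs match the requirements quoted in the text for $f_a \simeq 10^{10}$ GeV and $|F| \simeq 10^{15}$ GeV$^2$.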
We consider imposing a discrete PQ symmetry, $\mathbb{Z}_N^{\rm PQ}$, and a discrete $R$-symmetry, $\mathbb{Z}_M^R$, such that the global $\text{U}(1)_R$ and $\text{U}(1)_{\rm PQ}$ described in the previous sections arise as accidental symmetries of the Lagrangian. The dangerous operators of \cref{eq:superhigh} and \cref{eq:Khigh} then have to respect $\mathbb{Z}_N^{\rm PQ} \times \mathbb{Z}_M^R$ and we have to find how large $N$ and $M$ need to be in order to satisfy \cref{eq:CPconstraint}. The constraints arising from the requirement that these discrete symmetries are anomaly free are discussed in \cref{sec:anom}. The analysis is model dependent as it involves details of the visible sector (or other sectors) of the theory. As an example, for the model defined by \cref{eq:fullmod}, some of the possibilities are: \begin{enumerate} \item We can neglect anomaly constraints in our hidden sector model [\cref{eq:fullmod}] under the assumption that extra matter charged under $\text{SU}(5)$, $\mathbb{Z}_N^{\rm PQ}$ and $\mathbb{Z}_M^R$ is introduced to cancel anomalies~\cite{Harigaya:2013vja}. The minimal suitable discrete symmetry is then $\mathbb{Z}_3^{\rm PQ}\times \mathbb{Z}_5^R$, with the discrete $R$ charge of $S$ chosen as $\rcharge{S}=1$. \item If there is no additional matter charged under $\mathbb{Z}_N^{\rm PQ}$ and $\text{SU}(5)$ we can still neglect anomaly constraints on the $\mathbb{Z}_M^R$ symmetry by assuming that the $R$ charges in the visible sector are chosen to cancel any resulting anomaly. The minimal suitable discrete symmetry is then $\mathbb{Z}_2^{\rm PQ}\times \mathbb{Z}_9^R$, with $\rcharge{S}=2$. \item We can assume that the visible sector is anomaly free by itself. The minimal compatible discrete symmetry is then $\mathbb{Z}_2^{\rm PQ}\times \mathbb{Z}_{11}^R$, with $\rcharge{S}=5$. 
\end{enumerate} \section{Summary and discussion} \label{sec:discussion} The feasibility of the axion solution to the strong CP problem requires a PQ breaking scale, $f_a$, much above the electroweak scale. It would be desirable to explain this hierarchy with a mechanism that generates $f_a$ dynamically. The main result of this paper is the construction of simple models that tie $f_a$ to the messenger scale $M$ of gauge mediation. This is achieved with a one-loop analysis where it is sufficient to compute the Coleman-Weinberg potential, induced by SUSY breaking effects, for the scalar component of the axion superfield $S$. We find some examples in which such a potential leads to $\langle S \rangle \sim M$. This implies that the messenger scale, $M \sim f_a$, has to be larger than $10^9$ GeV, which puts it on the higher end of the range typically considered in gauge mediation. Our models possess an $R$-symmetry and a PQ symmetry, and can be classified in a way similar to Ref.~\cite{Cheung:2007es}. We find four distinct classes but study in detail only a few examples (some of which are in \cref{sec:twomess}). There is a lot left to explore. For instance, we have not studied any model of Type A. They suffer from the issue of suppressed gaugino masses, but perhaps one can address that problem in another sector of the theory. Models of Type D do not have such an issue and are possibly the most interesting to further investigate. We have mentioned only one example of a successful Type D model with four sets of messengers in \cref{sec:completemodel}. One can be more clever and find additional examples where the same sector spontaneously breaks the $R$-symmetry (thus breaking SUSY) and the PQ symmetry. This would specify the entire ``hidden'' sector, which could then be connected to the visible sector in the gauge mediation framework. The result would be a full calculable model for which one could study the phenomenological implications. 
We have also addressed the problem of the axion quality in \cref{sec:quality}. We do not provide any new insight into this issue, but show that relatively large, and thus somewhat unattractive, discrete symmetries are needed to ensure the high quality of the PQ symmetry. \acknowledgments We thank Michael Dine for very helpful conversations and insightful comments. G.F. is supported by a Mobilex grant from the Danish Council for Independent Research and FP7 Marie Curie Actions COFUND (grant id: DFF 1325-00061). L.U. is supported by the I-CORE Program of the Planning Budgeting Committee and the Israel Science Foundation (grant NO 1937/12).
\section{Introduction} Five-dimensional Einstein-Chern-Simons gravity (EChS) is a gauge theory whose Lagrangian density is given by a 5-dimensional Chern-Simons form for the so called $\mathfrak{B}$ algebra \cite{salg1}. This algebra can be obtained from the AdS algebra and a particular semigroup $S$ by means of the $S$-expansion procedure introduced in Refs. \cite{salg2,salg3}. The field content induced by the $\mathfrak{B}$ algebra includes the vielbein $e^{a}$, the spin connection $\omega ^{ab}$, and two extra bosonic fields $h^{a}$ and $k^{ab}.$ The EChS gravity has the interesting property that the five dimensional Chern-Simons Lagrangian for the $\mathfrak{B}$ algebra, given by \cite{salg1}: \begin{equation} L_{\mathrm{EChS}}=\alpha _{1}l^{2}\varepsilon _{abcde}R^{ab}R^{cd}e^{e}+\alpha _{3}\varepsilon _{abcde}\left( \frac{2}{3}% R^{ab}e^{c}e^{d}e^{e}+2l^{2}k^{ab}R^{cd}T^{\text{ }e}+l^{2}R^{ab}R^{cd}h^{e}% \right) , \label{1} \end{equation}% where $R^{ab}=\mathrm{d}\omega ^{ab}+\omega _{\text{ }c}^{a}\omega ^{cb}$ and $T^{a}=\mathrm{d}e^{a}+\omega _{\text{ }c}^{a}e^{c}$, leads to the standard General Relativity without cosmological constant in the limit where the coupling constant $l$ tends to zero while keeping the effective Newton's constant fixed \cite{salg1}. In Ref. \cite{salg4}, a spherically symmetric solution of the Einstein-Chern-Simons field equations was found, and it was shown that the standard five dimensional solution of the Einstein-Cartan field equations can be obtained, in a certain limit, from the spherically symmetric solution of the EChS field equations. The conditions under which these equations admit black hole type solutions were also found. The purpose of this work is to find static solutions with symmetries more general than the spherical one. These solutions are foliated by three-dimensional maximally symmetric spaces: open, flat and closed.
The functional derivative of the matter Lagrangian with respect to the field $h^{a}$ is considered as another source of the gravitational field, so that it can be interpreted as a second energy-momentum tensor: the energy-momentum tensor for the field $h^{a}$. This tensor is modeled as an anisotropic fluid, whose energy density, radial pressure and tangential pressures are characterized. The results lead us to identify the field $h^{a}$ with the presence of a cosmological constant. The spherically symmetric solutions of Ref. \cite{salg4} can be recovered from the general static solutions. The article is organized as follows: In section \ref{sectionII} we briefly review the Einstein-Chern-Simons field equations together with their spherically symmetric solution, which leads, in a certain limit, to the standard five-dimensional solution of the Einstein-Cartan field equations. In section \ref{sectionIII} we obtain general static solutions of the Einstein-Chern-Simons field equations. The energy-momentum tensor for the field $h^{a}$, together with the conditions that must be satisfied by the energy density and the radial and tangential pressures, will also be considered in section \ref{sectionIII}. In section \ref{sectionIV} we recover the spherically symmetric black hole solution found in Ref. \cite{salg4} from the general static solutions and study the energy density and the radial and tangential pressures for naked singularity and black hole solutions. Finally, concluding remarks are presented in section \ref{sectionV}. \section{Spherically symmetric solution of \textbf{EChS field equations } \label{sectionII}} In this section we briefly review the Einstein-Chern-Simons field equations together with their spherically symmetric solution.
We consider the field equations for the Lagrangian \begin{equation} L=L_{\mathrm{EChS}}+L_{\mathrm{M}}, \label{accion01} \end{equation}% where $L_{\mathrm{EChS}}$ is the Einstein-Chern-Simons gravity Lagrangian given in (\ref{1}) and $L_{\mathrm{M}}$ is the corresponding matter Lagrangian. In the presence of matter described by the Lagrangian $L_{\mathrm{M}}=L_{% \mathrm{M}}(e^{a},h^{a},\omega ^{ab}),$ the field equations obtained from the action (\ref{accion01}) when $T^{a}=0$ and $k^{ab}=0$ are given by \cite{salg4}: \begin{align} & \mathrm{d}e^{a}+\omega _{\text{ }b}^{a}e^{b}=0, \label{3.1} \\ & \varepsilon _{abcde}R^{cd}\mathrm{D}_{\omega }h^{e}=0, \label{3.2} \\ & \alpha _{3}l^{2}\star \left( \varepsilon _{abcde}R^{bc}R^{de}\right) =-\star \left( \frac{\delta L_{\mathrm{M}}}{\delta h^{a}}\right) , \label{3.3} \\ & \star \left( \varepsilon _{abcde}R^{bc}e^{d}e^{e}\right) +\frac{1}{2\alpha }l^{2}\star \left( \varepsilon _{abcde}R^{bc}R^{de}\right) =\kappa _{\mathrm{% EH}}\hat{T}_{a}, \label{3.4} \end{align}% where $\mathrm{D}_{\omega }$ denotes the exterior covariant derivative respect to the spin connection $\omega $, \textquotedblleft $\star $% \textquotedblright\ is the Hodge star operator, $\alpha :=\alpha _{3}/\alpha _{1}$, $\kappa _{\mathrm{EH}}$ is the coupling constant in five-dimensional Einstein-Hilbert gravity, \begin{equation*} \hat{T}_{a}=\hat{T}_{ab}e^{b}=-\star \left( \frac{\delta L_{\mathrm{M}}}{% \delta e^{a}}\right) \end{equation*}% is the energy-momentum 1-form, with $\hat{T}_{ab}$ the usual energy-momentum tensor of matter fields, and where we have considered, for simplicity, $% \delta L_{M}/\delta \omega ^{ab}=0.$ Since equation (\ref{3.4}) is the generalization of the Einstein field equations, it is useful to rewrite it in the form% \begin{equation} \star \left( \varepsilon _{abcde}R^{bc}e^{d}e^{e}\right) =\kappa _{\mathrm{EH% }}\hat{T}_{a}+\frac{1}{2\alpha \alpha _{3}}\star \left( \frac{\delta L_{% \mathrm{M}}}{\delta h^{a}}\right) 
\end{equation}% where we have used the equation (\ref{3.3}). This result leads to the definition of the 1-form energy-momentum associated with the field $h^{a}$% \begin{equation} \hat{T}_{a}^{(h)}=\hat{T}_{ab}^{(h)}e^{b}=\frac{1}{2\alpha \alpha _{3}}\star \left( \frac{\delta L_{\mathrm{M}}}{\delta h^{a}}\right) . \end{equation}% This allows us to rewrite the field equations (\ref{3.3}) and (\ref{3.4}) as \begin{align} &-\mathrm{sgn}(\alpha )\frac{1}{2}l^{2}\star \left( \varepsilon _{abcde}R^{bc}R^{de}\right) =\kappa _\mathrm{EH}\hat{T}_{a}^{(h)}, \label{7.3} \\ &\star \left( \varepsilon _{abcde}R^{bc}e^{d}e^{e}\right) +\mathrm{sgn}% (\alpha )\frac{1}{2}l^{2}\star \left( \varepsilon _{abcde}R^{bc}R^{de}\right) =\kappa _\mathrm{EH}\hat{T}_{a}, \label{7.4} \end{align}% where the absolute value of the constant $\alpha $ has been absorbed by redefining the parameter $l$% \begin{equation*} l\rightarrow l^{\prime }=\frac{l}{\sqrt{\left\vert \alpha \right\vert }}=l% \sqrt{\left\vert \frac{\alpha _{1}}{\alpha _{3}}\right\vert } . \end{equation*} \subsection{\textbf{Static and spherically symmetric solution}} In this subsection we briefly review the spherically symmetric solution of the EChS field equations, which leads, in a certain limit, to the standard five-dimensional solution of the Einstein-Cartan field equations. In five dimensions the static and spherically symmetric metric is given by \begin{equation*} \mathrm{d}s^{2}=-f^{2}(r)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{g^{2}(r)}% +r^{2}\mathrm{d}\Omega _{3}^{2}=\eta _{ab}e^{a}e^{b}, \end{equation*}% where $\mathrm{d}\Omega _{3}^{2}=\mathrm{d}\theta _{1}^{2}+\sin ^{2}\theta _{1}\mathrm{d}\theta _{2}^{2}+\sin ^{2}\theta _{1}\sin ^{2}\theta _{2}% \mathrm{d}\theta _{3}^{2}$ is the line element of the 3-sphere $S^{3}$.
Introducing an orthonormal basis \begin{align} e^{T}=f(r)\,\mathrm{d}t,\ e^{R}=\frac{\mathrm{d}r}{g(r)},\ e^{1}=r\,% \mathrm{d}\theta _{1}, \notag \\ e^{2}=r\sin \theta _{1}\,\mathrm{d}\theta _{2},\ e^{3}=r\sin \theta _{1}\sin \theta _{2}\, \mathrm{d}\theta _{3} \label{10} \end{align}% and replacing into equation (\ref{7.4}) in vacuum ($\hat{T}_{TT}=\hat{T}% _{RR}=\hat{T}_{ii}=0$), we obtain the EChS field equations for a spherically symmetric metric, equivalent to eqs. (26)-(28) of Ref. \cite{salg4}. \subsubsection{Exterior solution} Following the usual procedure, we find the following solution \cite{salg4}: \begin{equation} f^2(r)=g^2(r)=1+\text{sgn}(\alpha )\left(\frac{r^{2}}{l^{2}}-\beta \sqrt{% \frac{r^{4}}{l^{4}}+\text{sgn}(\alpha )\frac{\kappa _\mathrm{EH}}{6\pi ^{2}l^{2}}M}\right), \label{14} \end{equation}% where $M$ is a constant of integration and $\beta =\pm 1$ shows the degeneration due to the quadratic character of the field equations. From (% \ref{14}) it is straightforward to see that when $l\rightarrow 0$, it is necessary to consider $\beta =1$ to obtain the standard solution of the Einstein-Cartan field equations, which allows us to identify the constant $M$ with the mass of the distribution. \section{General Static Solutions with General Symmetries\label{sectionIII}} Static exterior solutions with spherical symmetry for the Einstein-Chern-Simons field equations in vacuum were studied in Ref. \cite{salg4}. There, the conditions under which the field equations admit black-hole-type solutions were found, and the maximal extension and conformal compactification of such solutions were studied. In this section we will show that the Einstein-Chern-Simons field equations admit solutions more general than those found in the case of spherical symmetry.
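Before moving on, the $l\rightarrow 0$, $\beta =1$ limit of the spherically symmetric solution (\ref{14}) can be checked numerically. Expanding the square root gives $f^2 \rightarrow 1 - \kappa_{\rm EH} M/(12\pi^2 r^2)$, an $l$-independent function; the expansion is ours, while the identification with the standard Einstein-Cartan solution is as stated in the text. The sketch below, in illustrative units with $\kappa_{\rm EH}=1$, verifies the convergence:

```python
import math

def f2(r, l, M, kappa=1.0, sgn=1, beta=1):
    # metric function of eq. (14), in units where kappa_EH = kappa
    return 1 + sgn * (r**2 / l**2
                      - beta * math.sqrt(r**4 / l**4
                                         + sgn * kappa * M / (6 * math.pi**2 * l**2)))

def f2_limit(r, M, kappa=1.0):
    # l -> 0 limit obtained by expanding the square root (beta = +1)
    return 1 - kappa * M / (12 * math.pi**2 * r**2)

r, M = 2.0, 10.0
for l in (1.0, 0.1, 1e-3):
    print(l, f2(r, l, M), f2_limit(r, M))
```

As $l$ decreases, $f^2$ approaches the $l$-independent limiting value, consistent with the claim that $\beta = 1$ reproduces the standard solution.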
The spherical symmetry condition will be relaxed so as to allow studying solutions in the case that the space-time is foliated by maximally symmetric spaces more general than the 3-sphere. It will also be shown that, for certain values of the free parameters, these solutions lead to the solutions found in Ref. \cite{salg4}. \subsection{Solutions to the EChS field equations} Following Refs. \cite{Oliv01,Oliv02}, we consider a static metric of the form \begin{equation} \mathrm{d}s^{2}=-f^{2}(r)\,\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{g^{2}(r)}% +r^{2}\mathrm{d}\Sigma _{3}^{2}, \label{18} \end{equation}% where $\mathrm{d}\Sigma _{3}^{2}$ is the line element of a three-dimensional Einstein manifold $\Sigma _{3}$, which is known as the \emph{base manifold} \cite{7}. Introducing an orthonormal basis, we have \begin{equation*} e^{T}=f(r)\ \mathrm{d}t,\quad e^{R}=\frac{\mathrm{d}r}{g(r)},\quad e^{m}=r% \tilde{e}^{m}, \end{equation*}% where $\tilde{e}^{m}$, with $m=\{1,2,3\}$, is the \textit{dreibein} of the base manifold $\Sigma _{3}$. From eq. (\ref{3.1}), it is possible to obtain the spin connection in terms of the vielbein. From Cartan's second structural equation $R^{ab}=\mathrm{d}% \omega ^{ab}+\omega _{\text{ }c}^{a}\omega ^{cb}$ we can calculate the curvature matrix. The nonzero components are \begin{align} R^{TR}& =-\left( \frac{f^{\prime \prime }}{f}g^{2}+\frac{f^{\prime }}{f}% g^{\prime }g\right) e^{T}e^{R},\ R^{Tm}=-\frac{f^{\prime }}{f}g^{2}e^{T}% \tilde{e}^{m}, \notag \\ R^{Rm}& =-g^{\prime }g\,e^{R}\tilde{e}^{m},\ R^{mn}=\tilde{R}^{mn}-g^{2}% \tilde{e}^{m}\tilde{e}^{n}, \label{22} \end{align}% where $\tilde{R}^{mn}=\mathrm{d}\tilde{\omega}^{mn}+\tilde{\omega}_{\text{ \ }p}^{m}\tilde{\omega}^{pn}$ are the components of the curvature of the base manifold. To define the curvature of the base manifold, it is necessary to specify the spin connection $\tilde{\omega}^{mn}$ of the base manifold.
This connection can be determined in terms of the dreibein $\tilde{e}^{m}$ using the property that the total covariant derivative of the vielbein vanishes identically, and the condition of zero torsion $\tilde{T}^{m}=0$. Replacing the components of the curvature (\ref{22}) in the field equations (% \ref{7.4}), for the case where $\hat{T}_{a}=0$ (vacuum), we obtain three equations \begin{equation} B_{u}(r)\tilde{R}(\tilde{x})+6A_{u}(r)=0,\quad u=\{0,1,2\}, \label{23.4} \end{equation}% where $\tilde{R}(\tilde{x})$ is the Ricci scalar of the base manifold and the functions $A_{u}(r)$ and $B_{u}(r)$ are given by \begin{align} A_{0}(r)& =-2r\left( g^{2}r^{2}\right) ^{\prime }+\text{sgn}(\alpha )\,l^{2}r\left( g^{4}\right) ^{\prime }, \label{24.1} \\ B_{0}(r)& =2r\left( 2r-\text{sgn}(\alpha )\,l^{2}\left( g^{2}\right) ^{\prime }\right) , \label{24.2} \\ A_{1}(r)& =2r\left( -2rg^{2}-3\,\text{sgn}(\alpha )\,l^{2}r^{2}g^{2}\frac{% f^{\prime }}{f}+2\,\text{sgn}(\alpha )\,l^{2}g^{4}\frac{f^{\prime }}{f}% \right) , \label{24.3} \\ B_{1}(r)& =2r\left( 2r-2\,\text{sgn}(\alpha )\,l^{2}g^{2}\frac{f^{\prime }}{f% }\right) , \label{24.4} \\ A_{2}(r)& =-2r^{2}\left( 2\left( g^{2}r^{2}\right) ^{\prime }+4rg^{2}\frac{% f^{\prime }}{f}+r^{2}\left( g^{2}\right) ^{\prime }\frac{f^{\prime }}{f}% +2r^{2}g^{2}\frac{f^{\prime \prime }}{f}\right) \notag \\ & \qquad +\text{sgn}(\alpha )\,l^{2}r^{2}\left( 3\left( g^{4}\right) ^{\prime }\frac{f^{\prime }}{f}+4g^{4}\frac{f^{\prime \prime }}{f}\right) \left( g^{4}\right) ^{\prime }, \label{24.5} \\ B_{2}(r)& =2r\left\{ 2-\text{sgn}(\alpha )\,l^{2}\left( \left( g^{2}\right) ^{\prime }\frac{f^{\prime }}{f}+2g^{2}\frac{f^{\prime \prime }}{f}\right) \right\} . \label{24.6} \end{align} The equation (\ref{23.4}) with $u=0$ can be rewritten as \begin{equation*} -\frac{A_{0}(r)}{B_{0}(r)}=\frac{\tilde{R}(\tilde{x})}{6}. 
\end{equation*} Since the left side depends only on $r$ and the right side depends only on $% \tilde{x}$, we have that both sides must be equal to a constant $\gamma $, so that \begin{equation} \tilde{R}(\tilde{x})=6\gamma . \label{25} \end{equation} An Einstein manifold $\Sigma _{n}$ is a Riemannian or pseudo-Riemannian manifold whose Ricci tensor is proportional to the metric \begin{equation} \tilde{R}_{\mu \nu }=kg_{\mu \nu }. \label{25a} \end{equation}% The contraction of eq. (\ref{25a}) with the inverse metric $g^{\mu \nu }$ reveals that the constant of proportionality $k$ is related to the scalar curvature $\tilde{R}$ by \begin{equation} \tilde{R}=nk, \label{25b} \end{equation}% where $n$ is the dimension of $\Sigma _{n}$. Introducing (\ref{25a}) and (\ref{25b}) into the so called contracted Bianchi identities, \begin{equation*} \tilde{\partial}^{\beta }\left( \tilde{R}_{\alpha \beta }-\frac{1}{2}% g_{\alpha \beta }\tilde{R}\right) =0, \label{25c} \end{equation*}% we find \begin{equation*} \left( n-2\right) \tilde{\partial}_{\beta }k=0. \label{25d} \end{equation*}% This means that if $\Sigma _{n}$ is a Riemannian manifold of dimension $n>2$ with metric $g_{\alpha \beta }$, then $k$ must be a constant. On the other hand, in a $n$-dimensional space, the Riemann tensor can be decomposed into its irreducible components \begin{eqnarray} \tilde{R}_{\mu \nu \rho \sigma } &=&\tilde{C}_{\mu \nu \rho \sigma }+\frac{1% }{n-2}\left( g_{\mu \rho }\tilde{R}_{\nu \sigma }-g_{\mu \sigma }\tilde{R}% _{\nu \rho }-g_{\nu \rho }\tilde{R}_{\mu \sigma }+g_{\nu \sigma }\tilde{R}% _{\mu \rho }\right) \notag \\ &&+\frac{1}{\left( n-1\right) \left( n-2\right) }\left( g_{\mu \sigma }g_{\nu \rho }-g_{\mu \rho }g_{\nu \sigma }\right) \tilde{R}, \label{25e} \end{eqnarray}% where $\tilde{C}_{\mu \nu \rho \sigma }$ is the Weyl conformal tensor, $% \tilde{R}_{\alpha \beta }$ is the Ricci tensor and $\tilde{R}$ is the Ricci scalar curvature.
Introducing (\ref{25a}), (\ref{25b}) into (\ref{25e}) we have \begin{equation} \tilde{R}_{\mu \nu \rho \sigma }=\tilde{C}_{\mu \nu \rho \sigma }+\kappa \left( g_{\mu \rho }g_{\nu \sigma }-g_{\mu \sigma }g_{\nu \rho }\right) , \label{25f} \end{equation}% where $\kappa =k/(n-1)$. From (\ref{25f}) we can see that when $\tilde{C}_{\mu \nu \rho \sigma }=0$, the Einstein manifold $\Sigma _{n}$ is a Riemannian manifold with constant curvature $\kappa $. Since the Weyl tensor vanishes identically when $n=3$, there is no distinction between Einstein manifolds and constant curvature manifolds in three dimensions. However, for $n>3$, constant curvature manifolds are special cases of Einstein manifolds. This means that our $\Sigma _{3}(\tilde{x})$ manifold is a Riemannian manifold of constant curvature $\kappa =\gamma $, since eqs. (\ref{25}) and (\ref{25b}) give $k=2\gamma $. The solution of $A_{0}(r)/B_{0}(r)=-\gamma $ leads to \begin{equation*} g^{2}(r)=\gamma +\text{sgn}(\alpha )\left( \frac{r^{2}}{l^{2}}-\beta \sqrt{% \frac{r^{4}}{l^{4}}+\text{sgn}(\alpha )\frac{\mu }{l^{4}}}\right) , \end{equation*}% where $\mu $ is a constant of integration and $\beta =\pm 1$. The equations (% \ref{23.4}) with $u=0$ and $u=1$ lead to $f^{2}(r)=g^{2}(r)$, while $u=2$ tells us that \begin{equation*} \tilde{R}=6\lambda , \end{equation*}% where the constant of integration $\lambda $ must be equal to $\gamma $, so that it is consistent with eq. (\ref{25}).
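The solution just quoted can be verified symbolically. The sympy sketch below, written for the $\text{sgn}(\alpha )=+1$ branch (the other branch goes through in the same way), checks that $g^{2}(r)$ satisfies $A_{0}(r)+\gamma B_{0}(r)=0$, which is the $u=0$ equation of (\ref{23.4}) with $\tilde{R}=6\gamma $:

```python
import sympy as sp

r, l, mu, gam, beta = sp.symbols('r l mu gamma beta', positive=True)
s = 1  # sgn(alpha); the s = -1 branch works the same way

# candidate solution for g^2(r)
g2 = gam + s * (r**2 / l**2 - beta * sp.sqrt(r**4 / l**4 + s * mu / l**4))

# A_0 and B_0 as defined in eqs. (24.1) and (24.2)
A0 = -2 * r * sp.diff(g2 * r**2, r) + s * l**2 * r * sp.diff(g2**2, r)
B0 = 2 * r * (2 * r - s * l**2 * sp.diff(g2, r))

# with Rtilde = 6*gamma, the u = 0 equation B0*Rtilde + 6*A0 = 0
# reduces to A0 + gamma*B0 = 0
residual = sp.simplify(A0 + gam * B0)
print(residual)
```

The residual vanishes for either sign of $\beta $ (this can also be spot-checked by substituting sample parameter values), confirming that the quoted $g^{2}(r)$ solves the $u=0$ field equation.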
In short, if the line element is given by (\ref{18}), then the functions $f(r)$ and $g(r)$ are given by \begin{equation} f^{2}(r)=g^{2}(r)=\gamma +\text{sgn}(\alpha )\frac{r^{2}}{l^{2}}-\text{sgn}(\alpha )\,\beta \sqrt{\frac{r^{4}}{l^{4}}+\text{sgn}(\alpha )\frac{\mu }{l^{4}}} \label{29} \end{equation} where $\beta =\pm 1$ reflects the degeneracy due to the quadratic character of the field equations, $\mu $ is a constant of integration related to the mass of the system and $\gamma $ is another integration constant related to the scalar curvature of the base manifold ($\tilde R=6\gamma$): $\gamma =0$ if it is flat, $\gamma =-1$ if it is hyperbolic (negative curvature) and $\gamma =1$ if it is spherical (positive curvature). \subsection{A solution for equation (\protect\ref{3.2})} Since the explicit form of the $h^{a}$ field is important in an eventual construction of the matter Lagrangian $L_{M}$, we are interested in solving the field equation (\ref{3.2}) for the $h^{a}$ field, \begin{equation} \varepsilon _{abcde}R^{cd}\mathrm{D}_{\omega }h^{e}=0. \label{eccampoh01} \end{equation} Expanding the $h^{a}=h_{\text{ }\mu }^{a}dx^{\mu }$ field in its holonomic index, we have \begin{equation*} h_{a}=h_{\mu \nu }e_{a}^{\mu }\,\mathrm{d}x^{\nu }. \end{equation*} For a space-time with a maximally symmetric three-dimensional manifold $\Sigma _{3}$, we will assume that the field $h_{\mu \nu }$ satisfies the Killing equation $\mathcal{L}_{\xi }h_{\mu \nu }=0$ for $\xi _{0}=\partial _{t}$ (stationarity) and for the six generators of $\Sigma _{3}$, i.e., we assume that the field $h_{\mu \nu }$ has the same symmetries as the metric tensor $g_{\mu \nu }$.
\subsubsection{\textbf{Killing vectors of $\Sigma_3$ and shape of the field $h^a$}} When the curvature of $\Sigma _{3}$ is $\gamma =1$ (spherical type), it can be shown that its \textit{dreibein} is given by \begin{equation*} \tilde{e}^{1}=\text{d}x_{1}\ ,\ \tilde{e}^{2}=\sin (x_{1})\,\text{d}x_{2}\ ,\ \tilde{e}^{3}=\sin (x_{1})\sin (x_{2})\,\text{d}x_{3}, \label{dri01} \end{equation*} whose Killing vectors are \cite{salg4,salg6} \begin{align} \xi _{1}& =\partial _{x_{3}}, \notag \\ \xi _{2}& =\sin x_{3}\,\partial _{x_{2}}+\cot x_{2}\cos x_{3}\,\partial _{x_{3}}, \notag \\ \xi _{3}& =\sin x_{2}\sin x_{3}\,\partial _{x_{1}}+\cot x_{1}\cos x_{2}\sin x_{3}\,\partial _{x_{2}}+\cot x_{1}\csc x_{2}\cos x_{3}\,\partial _{x_{3}}, \label{kv01} \notag\\ \xi _{4}& =\cos x_{3}\,\partial _{x_{2}}-\cot x_{2}\sin x_{3}\,\partial _{x_{3}}, \notag \\ \xi _{5}& =\sin x_{2}\cos x_{3}\,\partial _{x_{1}}+\cot x_{1}\cos x_{2}\cos x_{3}\,\partial _{x_{2}}-\cot x_{1}\csc x_{2}\sin x_{3}\,\partial _{x_{3}}, \notag \\ \xi _{6}& =\cos x_{2}\,\partial _{x_{1}}-\cot x_{1}\sin x_{2}\,\partial _{x_{2}}. \notag \end{align} On the other hand, when the curvature of $\Sigma_3$ is $\gamma=-1$ (hyperbolic type), its \textit{dreibein} and Killing vectors are the same as in the spherical case, with the trigonometric functions of $x_1$ replaced by hyperbolic ones. For example, in this case $\tilde e^3=\sinh (x_1)\sin(x_2)\,\text{d}x_3$. The third case, $\gamma =0$, is the simplest. The \emph{dreibein} is given by \begin{equation*} \tilde{e}^{1}=\text{d}x_{1}\ ,\ \tilde{e}^{2}=\text{d}x_{2}\ ,\ \tilde{e}^{3}=\text{d}x_{3} \label{dri02} \end{equation*} and its Killing vectors are given by \begin{align*} \xi _{1}& =\partial _{x_{1}}\ ,\ \xi _{2}=\partial _{x_{2}}\ ,\ \xi _{3}=\partial _{x_{3}}, \notag \\ \xi _{4}& =-x_{3}\,\partial _{x_{2}}+x_{2}\,\partial _{x_{3}}, \\ \xi _{5}& =x_{3}\,\partial _{x_{1}}-x_{1}\,\partial _{x_{3}}, \notag \\ \xi _{6}& =-x_{2}\,\partial _{x_{1}}+x_{1}\,\partial _{x_{2}}.
\notag \end{align*} Then, we have \begin{align} h^{T}& =h_{t}(r)\,e^{T}+h_{tr}(r)\,e^{R}, \notag \\ h^{R}& =h_{rt}(r)\,e^{T}+h_{r}(r)\,e^{R}, \label{hfield01} \\ h^{m}& =rh(r)\,\tilde{e}^{m}. \notag \end{align} \subsubsection{\textbf{Dynamics of the field $h^a$}} In order to obtain the dynamics of the field $h^{a}$ found in (\ref{hfield01}), we must insert it, together with the $2$-form curvature (\ref{22}), into the field equation (\ref{eccampoh01}). Depending on the curvature of $\Sigma _{3}$, two cases are possible. First, if $\gamma =0$, the equation (\ref{eccampoh01}) is satisfied identically. This means that the nonzero components of the $h^{a}$ field given in equation (\ref{hfield01}) are not determined by the field equations. Second, if $\gamma =\pm 1$, the equation (\ref{eccampoh01}) leads to the following conditions \begin{align} h_{tr}& =h_{rt}=0, \label{C.9} \\ h_{r}& =(rh)^{\prime }, \label{C.10} \\ (fh_{t})^{\prime }& =f^{\prime }h_{r}. \label{C.11} \end{align} From eq. (\ref{C.11}), integrating by parts, we obtain \begin{equation*} h_{t}(r)=h_{r}(r)-\frac{1}{f(r)}\int h_{r}^{\prime }(r)\,f(r)\,dr+\frac{A}{f(r)}, \end{equation*} where $A$ is a constant to be determined. Then, we can solve equation (\ref{C.10}), \begin{equation*} h(r)=\frac{1}{r}\int h_{r}(r)\,dr+\frac{B}{r}, \end{equation*} where $B$ is another integration constant and $f(r)$ is the vielbein component $e_{t}^{T}$. Again, we see that not all the nonzero components of the $h^{a}$ field are determined by the field equations. The simplest case occurs when $h_r$ is constant, namely $h_r(r)=h_0$. The other components of the $h^a$ field are then \begin{equation*} h_t(r)=h_0+\frac{A}{f(r)}\,,\quad h(r)=h_0+\frac{B}{r}, \end{equation*} whose asymptotic behavior is given by \begin{equation*} h_r(r\rightarrow\infty)=h_0\,,\quad h_t(r\rightarrow\infty)=h_0+\gamma A\,,\quad h(r\rightarrow\infty)=h_0.
\end{equation*} \subsection{Energy-momentum tensor for the field $h^{a}$} From the vielbein found in the previous section we can find the energy-momentum tensor associated with the field $h^{a}$, i.e., we can solve equation (\ref{7.3}). Let us suppose that the energy-momentum tensor associated with the field $h^{a}$ can be modeled as an anisotropic fluid. In this case, the components of the energy-momentum tensor can be written in terms of the matter density and the radial and tangential pressures. In the comoving reference frame, we obtain \begin{equation} \hat{T}_{TT}^{(h)}=\rho ^{(h)}(r),\quad \hat{T}_{RR}^{(h)}=p_{R}^{(h)}(r),\quad \hat{T}_{ii}^{(h)}=p_{i}^{(h)}(r). \label{45tmunu01} \end{equation} Inserting these definitions, together with the solution found in (\ref{29}), into the field equations (\ref{7.3}), we obtain \begin{align} \rho ^{(h)}(r)& =-p_{R}^{(h)}(r)=-\frac{12}{l^{2}\kappa _{\text{EH}}}\left\{ 2-\beta \frac{2+\text{sgn}(\alpha )\frac{\mu }{r^{4}}}{\sqrt{1+\text{sgn}(\alpha )\frac{\mu }{r^{4}}}}\right\} , \label{36.1} \\ p_{i}^{(h)}(r)& =\frac{4}{l^{2}\kappa _{\text{EH}}}\left\{ 6-\beta \,\frac{6+9\,\text{sgn}(\alpha )\frac{\mu }{r^{4}}+\frac{\mu ^{2}}{r^{8}}}{\left( 1+\text{sgn}(\alpha )\frac{\mu }{r^{4}}\right) ^{\frac{3}{2}}}\right\} . \label{36.3} \end{align} Note that equations (\ref{36.1}) and (\ref{36.3}) show that the energy density and the pressures do not depend on the $\gamma $ constant (see appendix \ref{ape}). \subsection{Energy density and radial pressure \label{edrp01}} Now consider the conditions that must be satisfied by the energy density $\rho ^{(h)}(r)$ and the radial pressure $p_{R}^{(h)}(r)$. From eq. (\ref{36.1}) we can see that the energy density is zero for all $r$ only if $\beta =1$ and $\mu =0$; this is the only case in which $\rho ^{(h)}(r)$ vanishes. Otherwise the energy density is always greater than zero or always less than zero.
In order to simplify the analysis, the energy density can be rewritten as \begin{equation} \rho ^{(h)}(r) =-\frac{12}{l^{2}\kappa _{\text{EH}}}\left\{ \frac{2\sqrt{1+\text{sgn}(\alpha )\frac{\mu }{r^{4}}}-\beta \left( 2+\text{sgn}(\alpha )\frac{\mu }{r^{4}}\right) }{\sqrt{1+\text{sgn}(\alpha )\frac{\mu }{r^{4}}}}\right\} . \label{38} \end{equation} Since the solution found in (\ref{29}) has to be real, the condition $1+\text{sgn}(\alpha )\frac{\mu }{r^{4}}>0$ must hold. This implies that the terms appearing in the numerator of eq. (\ref{38}) satisfy the following constraint \begin{equation*} 0<2\sqrt{1+\text{sgn}(\alpha )\frac{\mu }{r^{4}}}<\left( 2+\text{sgn}(\alpha )\frac{\mu }{r^{4}}\right) . \end{equation*} This constraint is obtained by noting that $\left( \text{sgn}(\alpha )\frac{\mu }{r^{4}}\right) ^{2}>0$, adding $4\left( 1+\text{sgn}(\alpha )\frac{\mu }{r^{4}}\right) $ to both sides and then taking the square root. So, if $\beta =-1$, we can ensure that the energy density is less than zero. If $\beta =1$, the energy density is greater than zero, unless $\mu =0$, in which case the energy density is zero. The radial pressure behaves in exactly the opposite way, as found in eq. (\ref{36.1}). We can also see that if $\mu =0$ the energy density remains constant. Otherwise, the energy density is a monotonically increasing ($\beta =-1$) or decreasing ($\beta =1$) function of the radial coordinate. Note that if $\beta =-1$, then when $r\rightarrow \infty $ the energy density and the radial pressure tend to a nonzero value, \begin{equation*} \rho ^{(h)}(r\rightarrow \infty )=-p_{R}^{(h)}(r\rightarrow \infty )=-\frac{48}{l^{2}\kappa _{\text{EH}}}, \end{equation*} as if there were a negative cosmological constant. Otherwise, for $\beta =+1$, the energy density and the radial pressure are asymptotically zero, as in the case of a null cosmological constant.
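These statements, together with the pressure extrema quoted later in subsection \ref{tp01}, can be spot-checked numerically. The following sketch is our own; it works in units $l^{2}\kappa _{\text{EH}}=1$ and writes $s=\text{sgn}(\alpha )\mu /r^{4}$:

```python
from math import sqrt

# eqs. (36.1) and (36.3) in units l^2 kappa_EH = 1, with s = sgn(alpha) mu / r^4
rho = lambda s, beta: -12.0 * (2.0 - beta * (2.0 + s) / sqrt(1.0 + s))
p_i = lambda s, beta: 4.0 * (6.0 - beta * (6.0 + 9.0 * s + s**2) / (1.0 + s)**1.5)

assert rho(0.0, 1) == 0.0                              # beta = +1, mu = 0: vanishing density
assert abs(rho(1e-12, -1) + 48.0) < 1e-6               # beta = -1, r -> infinity: rho -> -48
assert all(rho(s, 1) > 0 for s in (0.3, 2.0, 50.0))    # beta = +1, mu != 0: positive density
assert all(rho(s, -1) < 0 for s in (-0.5, 0.3, 2.0))   # beta = -1: negative density
# pressure extremum at s = 5, i.e. r = (sgn(alpha) mu / 5)^(1/4):
assert abs(p_i(5.0, 1) - (4.0 / 9.0) * (54.0 - 19.0 * sqrt(6.0))) < 1e-9
assert abs(p_i(5.0, -1) - (4.0 / 9.0) * (54.0 + 19.0 * sqrt(6.0))) < 1e-9
```

The last two assertions reproduce the closed-form values of $p_{i}^{(h)\,\text{max}}$ and $p_{i}^{(h)\,\text{min}}$ quoted in subsection \ref{tp01}.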
In summary, \begin{itemize} \item If $\mu =0$, the energy density is constant throughout space: zero if $\beta=1$ and $-\frac{48}{l^{2}\kappa _{\text{EH}}}$ if $\beta=-1$. \item If $\beta =1$ and $\mu\neq 0$, the energy density is positive and decreases to zero at infinity (see figure \ref{temfig01a}). \item If $\beta =-1$ and $\mu\neq 0$, the energy density is negative and increases monotonically to $-\frac{48}{l^{2}\kappa _{\text{EH}}}$ (see figure \ref{temfig01b}). \end{itemize} As we have already shown, the radial pressure is the negative of the energy density. \begin{figure}[h] \includegraphics[width=0.7\columnwidth]{fig01.eps} \centering \caption{The energy density associated with the field $h^a$ ($\protect\beta=1 $).} \label{temfig01a} \end{figure} \begin{figure}[h] \includegraphics[width=0.7\columnwidth]{fig02.eps} \centering \caption{The energy density associated with the field $h^a$ ($\protect\beta=-1$).} \label{temfig01b} \end{figure} \subsection{Tangential pressures} \label{tp01} We can see that the tangential pressures given in eq. (\ref{36.3}) vanish if \begin{equation*} \text{sgn}(\alpha )\frac{\mu }{r^{4}}=9+4\beta \sqrt{6}. \end{equation*} Thus we have \begin{itemize} \item If $\beta =1$, the tangential pressure vanishes only if $\text{sgn}(\alpha )\mu $ is greater than zero (see figure \ref{temfig03a}). \item If $\beta =-1$, the tangential pressure vanishes only if $\text{sgn}(\alpha )\mu $ is less than zero (see figure \ref{temfig03b}). \item In all other cases, the tangential pressure does not change sign. \end{itemize} Furthermore, it is straightforward to show that there is a single critical point, located at $r=\sqrt[4]{\frac{\text{sgn}(\alpha )\mu }{5}}$, and only if $\mathrm{sgn}(\alpha )\mu >0$. \subsubsection{\textbf{Case $\protect\beta =1$}} If $\beta =1$, three cases are distinguished, depending on the quantity $\mathrm{sgn}(\alpha )\mu $ \begin{enumerate}[($a$)] \item For $\mu =0$, we have the simplest case.
The tangential pressure is zero for all $r$. \item If $\mathrm{sgn}(\alpha )\mu >0$, the tangential pressure diverges at $r=0$: it tends to $-\infty $ there, vanishes at \begin{equation} r_{1}=\sqrt[4]{\text{sgn}(\alpha )\mu \,\frac{4\sqrt{6}-9}{15}}\approx 0.48\sqrt[4]{|\mu |}, \label{r101} \end{equation} takes its maximum value \begin{equation*} p_{i}^{(h)\,\text{max}}=\frac{4}{9\,l^{2}\kappa _{\text{EH}}}\left( 54-19\sqrt{6}\right) \approx \frac{3.3}{l^{2}\kappa _{\text{EH}}} \end{equation*} at \begin{equation} r_{2}=\sqrt[4]{\frac{\text{sgn}(\alpha )\mu }{5}}\approx 0.67\sqrt[4]{|\mu |} \label{r201} \end{equation} and then decreases to zero as $r$ tends to infinity. \item If $\mathrm{sgn}(\alpha )\mu <0$, then the tangential pressure tends to $+\infty $ at \begin{equation*} r_{m}=\sqrt[4]{-\text{sgn}(\alpha )\mu }=\sqrt[4]{|\mu |}. \end{equation*} Of course, the manifold is not defined for $r<r_{m}$ (see the metric coefficients in eq. (\ref{29})). The tangential pressure is a decreasing function of $r$ which vanishes at infinity, remaining always greater than zero. \end{enumerate} \begin{figure}[h] \includegraphics[width=0.7\columnwidth]{fig03.eps} \centering \caption{The tangential pressures associated with the field $h^a$ ($\protect\beta=1$).} \label{temfig03a} \end{figure} \subsubsection{\textbf{Case $\protect\beta =-1$}} If $\beta =-1$, three situations are also distinguished \begin{enumerate}[($a$)] \item For $\mu =0$, we have the simplest case. The tangential pressure is constant and greater than zero for all $r$, \begin{equation*} p_{i}^{(h)}(r)=\frac{48}{l^{2}\kappa _{\text{EH}}}.
\end{equation*} \item If $\mathrm{sgn}(\alpha )\mu >0$, the tangential pressure diverges to positive infinity at $r=0$, is a decreasing function of $r$, reaches a minimum value \begin{equation*} p_{i}^{(h)\,\text{min}}=\frac{4}{9\,l^{2}\kappa _{\text{EH}}}\left( 54+19\sqrt{6}\right) \approx \frac{45}{l^{2}\kappa _{\text{EH}}} \end{equation*} at \begin{equation*} r=\sqrt[4]{\frac{\text{sgn}(\alpha )\mu }{5}}\approx 0.67\sqrt[4]{|\mu |}, \end{equation*} and then increases to a finite asymptotic value \begin{equation*} p_{i}^{(h)}(r\rightarrow \infty )=\frac{48}{l^{2}\kappa _{\text{EH}}}. \end{equation*} The tangential pressure is always greater than zero. \item If $\mathrm{sgn}(\alpha )\mu <0$, the tangential pressure diverges to negative infinity at (remember that the manifold is not defined for $r<r_{m}$) \begin{equation*} r_{m}=\sqrt[4]{-\text{sgn}(\alpha )\mu }=\sqrt[4]{|\mu |}. \end{equation*} The tangential pressure is an increasing function of $r$ which tends to a positive constant value as $r$ goes to infinity, \begin{equation*} p_{i}^{(h)}(r\rightarrow \infty )=\frac{48}{l^{2}\kappa _{\text{EH}}}. \end{equation*} Furthermore, the tangential pressure becomes zero at \begin{equation*} \quad r=\sqrt[4]{-\text{sgn}(\alpha )\mu \,\frac{9+4\sqrt{6}}{15}}\approx 1.06\sqrt[4]{|\mu |}. \end{equation*} \end{enumerate} \begin{figure}[h] \includegraphics[width=0.7\columnwidth]{fig04.eps} \centering \caption{The tangential pressures associated with the field $h^a$ ($\protect\beta=-1$).} \label{temfig03b} \end{figure} \section{Spherically Symmetric Solution from General Solution \label{sectionIV}} Now consider the case of the spherically symmetric solutions studied in Ref. \cite{salg4} and reviewed in section \ref{sectionII}. These solutions are described by the vielbein defined in eq. (\ref{10}) with the functions $f(r)$ and $g(r)$ given in eq. (\ref{14}).
This solution corresponds to the general static solution found in (\ref{29}) where \begin{inparaenum}[(i)] \item the curvature of the so-called three-dimensional base manifold is taken to be positive, $\gamma =1$ (sphere $S^{3}$), \item the constant $\mu $, written in terms of the mass $M$ of the distribution, is given by \begin{equation*} \mu =\frac{\kappa _{EH}}{6\pi ^{2}}Ml^{2}>0, \end{equation*} \item and $\beta =1$, so that in the limit $l\rightarrow 0$ this solution reduces to the 5D Schwarzschild black hole obtained from Einstein-Hilbert gravity. \end{inparaenum} From Ref. \cite{salg4}, we know that the relative values of the mass $M$ and the distance $l$ determine whether this solution describes a black hole or a naked singularity. \begin{enumerate}[($a$)] \item If $\alpha >0$, the manifold has only one singularity, located at $r=0$. Otherwise, if $\alpha <0$, the manifold has only one singularity, located at \begin{equation}\label{rm01} r_{m}=\sqrt[4]{\mu }=\sqrt[4]{\frac{\kappa _{\mathrm{EH}}}{6\pi ^{2}}Ml^{2}}. \end{equation} \item There is a black hole solution with event horizon defined by \begin{equation} r_{0}=\sqrt{\frac{\mu -\text{sgn}(\alpha )\,l^{4}}{2l^{2}}}=\sqrt{\frac{\kappa _{\text{EH}}}{12\pi ^{2}}M-\text{sgn}(\alpha )\frac{l^{2}}{2}}, \label{bh01} \end{equation} if $\mu >l^{4}$, or equivalently \begin{equation} \frac{\kappa _{\mathrm{EH}}}{6\pi ^{2}}M>l^{2}. \label{bh02} \end{equation} Otherwise, there is a naked singularity. \end{enumerate} \subsection{Case $\protect\alpha >0$} In this case the energy density is decreasing and vanishes at infinity, while the radial pressure behaves in the opposite way (see subsection \ref{edrp01} with $\beta=1$ and $\mathrm{sgn}(\alpha)\,\mu>0$). Much more interesting is the behavior of the tangential pressure.
In fact, as we already studied in subsection \ref{tp01}, the tangential pressure is less than zero for $r<r_{1}$ (\ref{r101}), vanishes at $r_1$, becomes greater than zero until reaching a maximum at $r_{2}$ (\ref{r201}), and then decreases until it vanishes at infinity. \subsubsection{\textbf{Comparison between $r_{0}$, $r_{1}$ and $r_{2}$ for the black hole solution}} When the solution found is a black hole, it must satisfy condition (\ref{bh02}) and has an event horizon at $r_{0}$, given in (\ref{bh01}). It is of interest to study the cases in which $r_{0}$ is larger or smaller than $r_{1}$ and $r_{2}$. First consider $r_{0}$ for fixed $l$, i.e., we study the behavior of the function $r_{0}=r_{0}(\mu )$. For $\mu \geq l^{4}$ (black hole solution), $r_{0}=r_{0}(\mu )$ is a well-defined, continuous and strictly increasing function of $\mu$ which has an absolute minimum at $\mu =l^{4}$, where it vanishes, i.e., $r_{0}(\mu =l^{4})=0$. Furthermore, when $\mu \gg l^{4}$ the function $r_0(\mu)$ behaves like $\sqrt{\mu }$. On the other hand, the study of the functions $r_{1}(\mu )$ and $r_{2}(\mu )$ shows that they are well-defined, continuous and strictly increasing functions of $\mu\geq 0 $ which vanish at $\mu =0$. As $\mu$ increases, $r_{1}$ and $r_{2}$ grow proportionally to $\sqrt[4]{\mu }$. From the definitions of $r_1$ and $r_2$ given in eqs. (\ref{r101}) and (\ref{r201}), and the preceding analysis, it follows that $r_{2}>r_{1}>r_{0}$ if $\mu =l^{4}$, and $r_{0}>r_{2}>r_{1}$ if $\mu \rightarrow \infty $. This means that there should exist a unique value of the constant $\mu $, denoted $\mu _{1}$, such that $r_{0}(\mu _{1})=r_{1}(\mu _{1})$, and a single $\mu _{2}$ such that $r_{0}(\mu _{2})=r_{2}(\mu _{2})$.
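The crossing values $\mu _{1}$ and $\mu _{2}$ can also be found numerically by bisection. The sketch below is our own; it works in units $l=1$ (so $\mu $ is measured in units of $l^{4}$), takes $r_{0}$, $r_{1}$ and $r_{2}$ from eqs. (\ref{bh01}), (\ref{r101}) and (\ref{r201}), and assumes $\mathrm{sgn}(\alpha )=+1$:

```python
from math import sqrt

r0 = lambda mu: sqrt((mu - 1.0) / 2.0)                 # horizon, eq. (bh01), l = 1
r1 = lambda mu: (mu * (4.0 * sqrt(6.0) - 9.0) / 15.0) ** 0.25  # zero of p_i, eq. (r101)
r2 = lambda mu: (mu / 5.0) ** 0.25                     # maximum of p_i, eq. (r201)

def bisect(F, a, b, it=200):
    """Simple bisection root-finder on [a, b], assuming a sign change."""
    for _ in range(it):
        m = 0.5 * (a + b)
        if F(a) * F(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

mu1 = bisect(lambda mu: r0(mu) - r1(mu), 1.0, 10.0)
mu2 = bisect(lambda mu: r0(mu) - r2(mu), 1.0, 10.0)
assert abs(mu1 - 1.58) < 0.01   # matches the closed form for mu_1
assert abs(mu2 - 2.38) < 0.01   # matches the closed form for mu_2
```

The results agree with the closed-form expressions for $\mu _{1}$ and $\mu _{2}$ obtained analytically.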
After some calculation one obtains \begin{equation*} \mu _{1}=\frac{l^{4}}{15}\left( 8\sqrt{6}-3+2\sqrt{6\left( 7-2\sqrt{6}\right) }\right) \approx 1.58\ l^{4} \end{equation*} and \begin{equation*} \mu _{2}=\frac{l^{4}}{5}\left( 7+2\sqrt{6}\right) \approx 2.38\ l^{4}. \end{equation*} From the above analysis it is concluded that, depending on the value of the constant $\mu $, proportional to the mass, we can have the following cases \begin{itemize} \item If $l^{4}\leq \mu <\mu _{1}$ then $r_{0}<r_{1}$. Outside the black hole horizon there is a region $r_{0}<r<r_{1}$ where the tangential pressure is negative. \item If $\mu >\mu _{1}$ then $r_{0}>r_{1}$, and the zone in which the tangential pressure is negative is enclosed within the black hole horizon. \end{itemize} A completely analogous analysis can be done to study the relationship between $r_{0}$ and $r_{2}$: the maximum value of the tangential pressure lies outside the event horizon if $\mu<\mu_2$, or inside it if $\mu>\mu_2$. \subsubsection{\textbf{Radial and tangential pressures}} In summary, for $\alpha>0$ we can see that the energy density is always greater than zero while the radial pressure is less than zero, and both vanish when $r$ goes to infinity (see figure \ref{amc01}). On the other hand, the lateral pressures are less than zero for $r<r_1$, become positive for $r>r_1$, reach a maximum at $r_2$ and then decrease until they vanish when $r$ goes to infinity (see figure \ref{amc01}). The solution may be a naked singularity $(\mu <l^{4})$ or a black hole $(\mu >l^{4})$. In the case of a black hole there is an event horizon at $r=r_{0}$, which can hide the zone of negative tangential pressures ($\mu >\mu _{1}$) or leave it exposed ($\mu <\mu _{1}$). \begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{fig05.eps} \caption{The components of the energy-momentum tensor associated with the field $h^a$ ($\protect\alpha >0$).
Note the interesting behavior of the tangential pressure: it is negative for $r<r_1$, vanishes at $r_1$, reaches a maximum at $r_2$ and then decreases to zero at infinity. The energy density and pressures tend rapidly to zero, as $1/r^{8}$.} \label{amc01} \end{figure} \subsection{Case $\protect\alpha <0$} Now consider the coupling constant $\alpha <0$. In this case the space-time has a minimum radius $r_m$, defined in (\ref{rm01}), where the singularity is located. From the analysis done in subsection \ref{edrp01} (with $\beta=1$ and $\mathrm{sgn}(\alpha)\,\mu<0$) we find that the energy density decreases progressively and vanishes at infinity. On the other hand, the radial pressure is just the negative of the energy density (see figure \ref{fig06}). Furthermore, the positive tangential pressure tends to infinity at $r=r_{m}=\sqrt[4]{\mu }$ and decreases to zero at infinity (see subsection \ref{tp01} with $\beta=1$ and $\mathrm{sgn}(\alpha)\,\mu<0$). \begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{fig06.eps} \caption{The components of the energy-momentum tensor associated with the field $h^a$ ($\protect\alpha <0$). The space-time singularity is located at $r_m=\sqrt[4]{\protect\mu}$. The region $r <r_m$ does not belong to the manifold. As in the case $\protect\alpha>0$ (see figure \ref{amc01}), the energy density and pressures tend to zero as $1/r^8$.\label{fig06}} \end{figure} \section{Concluding Remarks\label{sectionV}} An interesting result of this work is that when the field $h^{a}$, which appears in the Lagrangian (\ref{1}), is modeled as an anisotropic fluid (see eqs. (\ref{36.1}) and (\ref{36.3})), the solutions of the field equations predict the existence of a negative tangential pressure zone around low-mass distributions ($\mu <\mu _{1}$) when the coupling constant $\alpha $ is greater than zero.
Moreover, also for $\alpha >0$, this model predicts the existence of a maximum in the tangential pressure, which can be observed in the outer region of a field distribution satisfying $\mu<\mu_{2}$. It is also important to note that this model contains, in its solution space, solutions that behave like those obtained from models with a negative cosmological constant ($\beta =-1$). In such a situation, the field $h^{a}$ plays the role of a cosmological constant \cite{GMS,salg5}. In this article we have assumed that the matter Lagrangian $L_{M}$ satisfies the property $\delta L_{M}/\delta e^{a}=0$ and that the energy-momentum tensor associated with the $h^{a}$ field can be modeled as an anisotropic fluid. A possible explicit example of a matter Lagrangian which satisfies these conditions could be constructed from the Lagrangian of the electromagnetic field in matter \cite{11,12,13}. In fact, a candidate for a matter Lagrangian which satisfies the above conditions would be \begin{equation*} L_{M}=-\frac{1}{2}\,\frac{1}{4!}\,\epsilon _{cdmnl}F_{ab}H^{cd}h^{a}h^{b}h^{m}h^{n}h^{l}, \end{equation*} where $F_{ab}$ is the electromagnetic field and $H^{cd}$ is the so-called electromagnetic excitation, which is given (in tensorial notation) by \begin{equation*} H^{\mu \nu }=\frac{1}{2}\chi ^{\mu \nu \rho \sigma }F_{\rho \sigma }, \end{equation*} where the tensor density $\chi ^{\mu \nu \rho \sigma }$ describes the electric and magnetic properties of matter. \begin{acknowledgments} This work was supported in part by FONDECYT Grant 1130653. Two of the authors (F.G., C.Q.) were supported by grants from the Comisi\'{o}n Nacional de Investiga\-ci\'{o}n Cien\-t\'{\i}\-fica y Tecnol\'{o}gica CONICYT and from the Universidad de Concepci\'{o}n, Chile. One of the authors (P.M.) was supported by FONDECYT Grant 3130444. \end{acknowledgments}
\section{Introduction} The standard model of cosmology (or CDM) assumes that the dark matter (DM) is cold and effectively collisionless. It assumes a hierarchical framework for galaxy formation, in which galaxies throughout their evolution are subject to frequent collisions and interactions with nearby galaxies that leave surrounding material in the form of tidal debris; studying tidal features can therefore yield important clues on galaxy evolution. Tidal features can appear as complicated structures \citep{bar92}, but it is common to observe structures like tidal tails around the bulges of massive galaxies \citep{for86,ben06}. Some early-type galaxies present shells \citep{mal80,mal83,tur99,van05,sch92,mic04}, which can be seen as smooth arcs of ellipses centered on the host galaxy whose surface brightness decreases at larger radii, and there are also debris in the form of stellar streams \citep{mar08,mar09,mar10}. Several analyses of the spheroidal components of tidally disturbed early-type galaxies (ETGs) \citep{tae12} suggest that structural properties such as the effective radius (the radius within which half of the total light of the system is emitted) and the surface brightness are quite similar to those of other ellipticals or of the spheroids of galaxies with few or no signs of tidal interaction. Other results have shown evidence of kinematically decoupled components in early-type galaxies \citep{hau06,kop00}. Given the standard paradigm of CDM, it is expected that even these ``passive'' galaxies also formed by galaxy mergers. The similarity of the bulge regions of the ETGs could then be due to minor mergers (mergers of unequal mass, with mass ratios of $1/10 - 1/100$) that affect the bulge only slightly but are massive enough to still induce tidal debris, or to major mergers (mergers of comparable-mass galaxies) that happened long enough ago that the bulge has since relaxed.
Cosmological N-body simulations in CDM have shown \citep{ana11,beh13} that merger events are frequent while galaxies are growing; however, anticipating the final structure around a galaxy is rather difficult, given the dependence of the formation of each particular galaxy on its environment and merger history \citep{phi14}. Observationally, it is not uncommon to run into difficulties identifying and studying tidal structures around galaxies; sometimes these structures are so faint compared to the central light distribution that a detailed analysis proves impossible. Nevertheless, there are observations where faint structures such as shells are clearly seen around galaxies at particular radii from the center of their host \citep{tur99,mal83,but10,lau10,for86,sch88}. In some cases it has been possible to identify younger stellar populations in the outskirts compared to those in the inner bulge \citep{kan09}, probably a remnant of late gas accretion. It is worth noticing that in the CDM model the spatial location of these remnants of past interaction is not predicted and has to be tracked with numerical simulations, as it depends on the history of each galaxy. Metallicity is another powerful tool to constrain galaxy evolution across cosmic epochs, as it is known to be correlated with gas infall, outflows due to supernovae, and the star formation rate within the galaxy. Recent studies of star formation have focused on Local Group dwarf galaxies \citep{wei14a,wei14b}, revealing radial gradients in metallicity \citep{kir13}; most of these galaxies seem to have old stellar populations formed $\sim$ 10 Gyr ago \citep{pie14}, followed by more quiescent star formation, likely due to the loss of gas mass at the reionization epoch.
There are high-redshift galaxies ($z\sim$~3) that also show metallicity gradients: their regions of high star formation rate (SFR) are also of lower metallicity \citep{man09,man10,cre10} due to a large gas infall, and they are surrounded by a more enriched disc. In \citet{man10} the authors concluded that these observations are consistent with the accretion of metal-poor gas in massive galaxies, where a high SFR can be sustained without requiring frequent mergers; this picture was also suggested by studies of gas-rich discs at $z\sim$ 1-2 \citep{for09,tac10}. In order to describe the debris around galaxies and their stellar populations it is necessary to include the gas dynamics in simulations. Even though there is uncertainty due to our lack of understanding of the baryonic processes, their addition is also required to partially reduce some discrepancies of the CDM model, for instance the cusp-core problem (see \citet{blo10} for a review), the overabundance of satellites \citep{kly99,moo99,sim07,bel10,mac12,gar14}, or the Too-Big-to-Fail issue \citep{boy11,gar14b}. However, some of these discrepancies might also be solved by assuming different properties for the dark matter, leading to, for instance, scalar field dark matter (SFDM) \citep{sin94,lee96,guz00,mat01}, strongly self-interacting DM \citep{spe00,vog12,col02,roc13}, or warm dark matter \citep{bod01,mac12}. The SFDM alternative is of particular interest to us; here the mass of the field is assumed to be very small ($\sim 10^{-22}$eV/$c^2$), such that its de Broglie wavelength is of order $\sim$~kpc, relevant for galactic scales.
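As a rough order-of-magnitude sketch of this last statement (our own estimate; the velocity $v\sim 100$ km s$^{-1}$ is an assumed typical halo velocity scale, not a value fixed by the model):

```python
h_planck = 6.626e-34   # Planck constant [J s]
eV_c2    = 1.783e-36   # 1 eV/c^2 in kg
kpc      = 3.086e19    # 1 kpc in m

m = 1e-22 * eV_c2      # boson mass ~ 1e-22 eV/c^2
v = 1e5                # assumed halo velocity ~ 100 km/s (illustrative)
lam = h_planck / (m * v)       # de Broglie wavelength
assert 0.5 < lam / kpc < 5.0   # indeed of order one kpc
```

With these numbers $\lambda \approx 1.2$ kpc, confirming that the quantum scale of the field is galactic.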
The quantum behavior of the field has created much interest in the model due to its success in accounting for some of the discrepancies mentioned above through dark matter properties alone. For example, the small mass keeps the central density from increasing indefinitely due to the uncertainty principle, in contrast to CDM simulations, where supernova feedback is required \citep{gov10,gov12,pon12,sca13}; note that the amount of feedback required to produce a constant central density may be in conflict with other observations \citep{kuz11,pen12,boy14}. An interesting property that has been noted for scalar field haloes in excited states, or in a superposition of them, is the appearance of ``ripples'' in their mass density profiles \citep{sei90,bal98,guz04,ure10,ber10,mat07,rob13}. This property motivated us to investigate whether features like rings or shells in non-interacting galaxies hosted by multistate scalar field haloes could arise independently of the galaxy merger mechanism. In the next section we give a brief overview of previous work on the scalar field dark matter model and describe a model for galaxy formation in SFDM haloes. We modify the hydrodynamics code ZEUS to implement an evolving SFDM halo in which a gaseous component is present; we provide the details of our simulations in Section 3. In Section 4 we discuss our results, and Section 5 is devoted to our conclusions.
\section{SFDM} \subsection{Overview} The main idea of the scalar field dark matter model \citep{sin94,ji94,lee96,guz00,mat01,hu00} is to consider a self-interacting scalar field with a very small mass, typically $\sim 10^{-22}$eV/$c^2$, such that the quantum mechanical uncertainty principle and the interactions prevent gravitational collapse in self-gravitating structures; thus the haloes are characterized by homogeneous densities (usually referred to as cores) in their centers. In general, the core sizes depend on the values of the mass and self-interaction parameters \citep{col86} (for a review see \cite{sua13,rin14}). There have been several studies of the cosmological evolution that results for a scalar field mass $m\sim$10$^{-22}$eV/$c^2$: with this mass the cosmological density evolution of CDM can be reproduced \citep{mat01,cha11,sua11,mag12,sch14}, there is consistency with the acoustic peaks of the cosmic microwave background radiation \citep{rod10}, and a sharp cut-off is implied in the mass power spectrum for halo masses below $10^{8}$M$_{\odot}$, suppressing the formation of low-mass dark matter haloes \citep{mar14,boz14,hu00,mat01}. Moreover, there is particular interest in finding equilibrium configurations of the system of equations that describes the field (the Einstein-Klein-Gordon system) and of its weak field approximation (the Schr\"{o}dinger-Poisson (SP) system); different authors have obtained solutions interpreted as boson stars or, later, as dark matter haloes, showing agreement with rotation curves in galaxies and velocity dispersion profiles in dwarf spheroidal galaxies \citep{sei91,lee96,boh07,rob12,rob13,lor12,lor14,med15,die14,guz14}.
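For reference, the weak-field SP system mentioned above can be written (in a common convention in which $|\psi |^{2}$ is the halo mass density; normalizations vary between the cited works) as \begin{align*} i\hbar \,\frac{\partial \psi }{\partial t}&=-\frac{\hbar ^{2}}{2m}\nabla ^{2}\psi +m\Phi \,\psi , \\ \nabla ^{2}\Phi &=4\pi G\,|\psi |^{2}, \end{align*} where $\Phi $ is the Newtonian gravitational potential sourced by the field itself.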
So far both large- and small-scale observations are well described with this small mass, which has therefore been taken as a preferred value, but the precise values of the mass and self-interaction parameters are still uncertain; we believe tighter constraints can come from numerical simulations \citep{sch14} and from comparisons with large galaxy samples, which we plan to pursue in the future. Recently the idea of the scalar field has gained interest. Given the uncertainty in the parameters, the model has adopted different names in the literature depending on the regime under discussion. For instance, if the interactions are absent and the mass is $\sim 10^{-22}$eV/$c^2$, this limit was called fuzzy dark matter \citep{hu00} or, more recently, wave dark matter \citep{sch14}; another limit is when the SF self-interactions are described by a quartic term in the scalar field potential and dominate over the (quadratic) mass term, a case studied in \citet{goo00,sle12} and called repulsive dark matter or fluid dark matter by \citet{pee00}. Notice that for a scalar field mass of $\sim 10^{-22}$eV/$c^2$ the critical temperature of condensation of the field, T$_\mathrm{crit}\sim m^{-5/3}\sim$TeV, is very high; if the temperature of the field is below this critical temperature it can form a cosmological Bose-Einstein condensate, in which case it is called Bose-Einstein condensed (BEC) dark matter \citep{mat01,guz00,ber10,rob13,har11,cha11,li14}. \citet{sik09} mentioned that axions could also form Bose-Einstein condensates even though their mass is larger than the previously preferred value; notice that this result was contested in \citet{dav13}, which suggests that the condensation process should be studied in more detail to confirm that axions can remain as BEC dark matter. 
In \cite{ure09}, it was found that a complex scalar field with $m$<$10^{-14}$eV/$c^2$ that decoupled while still relativistic will always form a cosmological Bose-Einstein condensate described by the ground state wavefunction; this does not preclude the existence of bosons with higher energy, particularly in dark matter haloes. We see that the smallness of the boson mass is its characteristic property and that cosmological condensation is a likely consequence. The preferred mass of the scalar field dark matter particle lies close to $\sim 10^{-22}$eV/$c^2$, satisfying the above constraint, although there are still uncertainties in the mass parameter. In order to avoid confusion with the known axion and to help with identification in future works, we find it useful and appropriate to name the scalar field dark matter candidate: given the above characteristics, we define it as a particle with mass $m$<$10^{-14}$eV/$c^2$ that is commonly described by its wavefunction, and we choose to name this DM candidate the \textit{psyon}. It is worth emphasizing that, despite the variety of names given to the model, the main idea mentioned above remains the same: it is the quantum properties arising from the small mass of the boson that characterize and distinguish this paradigm. In analogy with the standard cosmological model represented by the CDM paradigm, whose preferred dark matter candidates are the WIMPs (weakly interacting massive particles), one being the neutralino, we see that all the above regimes (SFDM, repulsive DM, axion DM, or any other model assuming an ultra-light bosonic particle) comprise a single class of paradigm, which we call the \textit{Quantum Dark Matter} (QDM) paradigm. 
As pointed out before, in the QDM paradigm the small mass of the dark matter boson leads to the possibility of forming cosmological condensates, even for axions, which are non-thermally produced and have masses in the range $10^{-3}-10^{-6}$eV/$c^2$ \citep{sik09}. From here we can identify a characteristic property that distinguishes these dark matter candidates from WIMPs or neutrinos, namely, the existence of \textit{bosons in the condensed state}, or simply \textit{BICS}; from the above discussion both the axion and the psyon are included in the BICS. \subsection{Dark matter haloes of scalar field} There has been considerable work on finding numerical solutions of the non-interacting SFDM in the non-relativistic regime to model spherically symmetric haloes \citep{guz04,ure10,ber10,bra12,ruf69,kau68,sei91,lee10}, and also of the self-interacting SFDM \citep{boh07,rob12,col86,rin12,bal98,goo00}. It is worth noting that, as mentioned in \cite{guz04}, in the weak-field limit of the system that determines the evolution of a spherically symmetric scalar field, that is, the Einstein and Klein-Gordon equations, for both a complex and a real scalar field the system reduces to the Schr\"{o}dinger-Poisson equations \citep{arb03}. From current bounds reported in \cite{li14}, obtained by imposing that the SF behaves cosmologically as pressureless matter (dust), we find that the self-interaction parameter would be extremely small for the typical mass of $\sim$10$^{-22}$eV/$c^2$; therefore we expect solutions of the SP system with no interactions to behave qualitatively like those with self-interactions included, an assumption supported by the similarity of the solutions found for a small self-coupling in other works \citep{bal98,col86,bri11}. In this work we study the non-interacting solutions. 
One characteristic feature of stationary solutions of the form $\psi(\mathbf{x},t)= e^{-iE_n t}\phi(r)$ of the SP system is the appearance of nodes in the spatial function $\phi(r)$; these nodes are associated with different energy states of the SF: the zero-node solution corresponds to the ground state, one node to the first excited state, and so on. These excited-state solutions fit rotation curves (RCs) of large galaxies out to the outermost measured data and can even reproduce the wiggles seen at large radii in high-resolution observations \citep{sin94,col86,rob13}. However, haloes that are purely in a single excited state seem to be unstable when the number of particles is not conserved (finite perturbations) and decay to the ground state with different decay rates \citep{guz04,bal98}, though they are stable when the number of particles is conserved (infinitesimal perturbations). The ground state solution is stable under both finite and infinitesimal perturbations \citep{ber10,sei90}, but it has difficulty fitting the rotation curves of large galaxies because its associated RC shows a rapid Keplerian fall-off shortly after reaching its maximum value and is unable to remain flat enough at large radii. One way to overcome this problem is to consider that the bosons are not in one state but instead coexist in different states within the halo; given our intention to describe dark matter haloes, we will refer to such configurations as multistate haloes (MSHs) \citep{ure10,mat07,rob13,rob13b}. The size of an MSH is determined by the most excited state that accurately fits the RC at large radii, since excited states are distributed to larger radii than the ground state; moreover, in contrast to single-state haloes, there are MSHs that are stable under finite perturbations, provided the ground state in the final halo configuration has enough mass to stabilize the coexisting state \citep{ure10,ber10}. 
In \cite{ber10} it was shown that MSHs can be studied in the classical approach as a collection of classical scalar fields coupled through gravity, one field $\psi_i$ for each state; this modifies the SP system so that the source of the energy density is the sum of the densities in each state \citep{ure10}, where each state satisfies its respective Schr\"{o}dinger equation and the states are coupled through the Newtonian gravitational potential $U$. In spherical symmetry (in units $\hbar$=1, c=1) the SF solutions $\Phi_n$ satisfying the Einstein-Klein-Gordon system are related to the SP wavefunctions through \begin{equation} \sqrt{8\pi G}\, \Phi_n(\mathbf{x},t) = e^{-imt} \psi_n (\mathbf{x},t) \end{equation} with $m$ the mass of the SF. For stationary solutions we can assume wavefunctions of the form $\psi_n(\mathbf{x},t)= e^{-iE_n t}\phi_n(r)$; the SP system then reads \begin{equation} \label{sp} \nabla^2 \phi_n = 2(U- E_n) \phi_n \nonumber \end{equation} \begin{equation} \nabla^2 U = \sum_n |\phi_n|^2 \end{equation} with $\nabla^2$ the Laplacian operator in spherical coordinates and $E_n$ the energy eigenvalues. We consider the case of an MSH with only the ground and first excited states, given that for this MSH \cite{ure10} reported the critical value for its stability under small perturbations. If the ground state has $N^{(1)}$ particles and there are $N^{(2)}$ in the excited state, the authors found that the MSH is stable under small perturbations provided the fraction \begin{equation} \label{eta} \eta:= \frac{N^{(2)}}{N^{(1)}} \leq 1.2=\eta_{max}. 
\end{equation} In all the configurations studied in which $\eta$ exceeds the maximum for stability, $\eta_{max}$, the induced instability causes the configuration to lose a few particles and a remarkable effect takes place: the excited state makes a fast transition to the ground state and vice versa, that is, the populations of the states are inverted such that the final $\eta$ complies with the stability condition (\ref{eta}). After the inversion the MSH approaches a stable configuration in which the particle number remains constant for each state; the particles that were initially in the excited state are now in the ground state, where they have redistributed and become more compact than in their original distribution. This population inversion (PI) is a characteristic feature of MSHs and, as we will show, it can have consequences for the gas distribution and for the accretion rate during the period of galaxy formation. In fact, given the cut-off in the mass power spectrum due to the small mass of the psyon, we expect a delay of the first structures to collapse compared to those found in CDM \citep{mat01,mar14,boz14}, as confirmed with cosmological simulations for a mass of $8\times 10^{-23}$eV \citep{sch14}; therefore, this intrinsic change of states within MSHs at high redshift might have observable consequences for the gas and stellar properties of the galaxies that form in such potential wells. \subsection{Galaxy formation scenario in SFDM haloes} Initial fluctuations that grow with the cosmological expansion of the universe eventually separate from it and start collapsing under their own gravity; at this time (known as turnaround) the halo contains a number of psyons that can be in different states, whose values are determined before the collapse of the configuration and have some dependence on its local environment. 
Depending on the number density of bosons populating the excited states we can have different fates for the haloes, as mentioned in subsection (2.2). From these results of the literature, and based on the successful fits to different galaxies, we propose the following model of galaxy formation in SFDM haloes. The smallest and least dense systems, such as dwarf galaxies, would reside in SFDM haloes with most bosons in the ground state, except possibly for a few excited particles \citep{med15}; this is because the potential wells of these haloes, being the least dense systems, would be just massive enough to collapse and allow the existence of the state of minimum energy that forms a bound configuration. Larger configurations that initially had more bosons in excited states than in the ground state can undergo a population inversion and reach a stable state, collapse to a dense ground state, or become a black hole, depending on how large the fraction of particles in excited states is after turnaround; given the different possible outcomes for this case, we expect that most galaxies are formed this way. For instance, ellipticals can form when haloes transition to a stable configuration and form a ground state that quickly deepens the gravitational potential, resulting in a dense and compact structure where the bulge can form. High surface brightness galaxies, which usually have RCs that fall slowly after their maximum, would likely correspond to SFDM haloes whose final stable configuration has a comparable number of bosons in the excited and ground states: both states would contribute evenly in the central region, increasing the potential well where more star formation can take place, while for distances larger than a given radius (likely that of the first maximum in the RC) the ground state would contribute little to the density, so that in the outer region the baryonic RC would remain below its maximum. 
Another possible outcome for extended haloes is when the fraction of excited particles is below the stability threshold. Here gravity will slowly redistribute the DM bosons in the stable MSH while mostly keeping the particle number constant. Considering that psyons in the excited states are subdominant but more widely distributed than those in the ground state of an MSH, the DM gravitational potential in the center would be less dense than in the high surface brightness case, so that gas accretion would also proceed more slowly, resulting in rotation curves for the baryonic matter that increase slowly out to large radii or have almost flat profiles; this case is quite similar to what is found in low surface brightness galaxies \citep{kuz10,kuz11,kuz11b}. This galaxy formation scenario broadly describes the different galaxy types, showing that the dark matter properties alone, in the context of SFDM, can account for the general features of several kinds of galaxies. However, we do not expect this scenario to represent all galaxies; in fact, a better description of individual galaxies requires taking into account other fundamental parameters that affect their evolution, such as the environment, angular momentum, galaxy mergers, etc. We leave a more in-depth exploration of the scenario, including the full astrophysical processes, for future work. For now, we take the given description of elliptical galaxies to explore whether some of the substructure found in these galaxies can be a consequence of the properties of the background DM halo in which such ellipticals form. We do not discard the possibility that shells or rings result from mergers; instead, we explore a new mechanism by which they can arise, offering an alternative explanation for the tidal debris in certain non-interacting ellipticals. We study this issue in a multistate halo. 
\begin{figure} \label{fig1} \begin{tabular}{ll} \resizebox{110pt}{98pt}{\includegraphics{./Fig_1a_rc1gyr.eps}} & \resizebox{110pt}{98pt}{\includegraphics{./Fig_1a_rho1gyr.eps}} \\ \resizebox{110pt}{98pt}{\includegraphics{./Fig_1b_rc3gyr.eps}} & \resizebox{110pt}{98pt}{\includegraphics{./Fig_1b_rho3gyr.eps}} \\ \end{tabular} \caption{Rotation curves (left column) and density profiles (right column) of dark matter and gas for simulations $A^1_1$ (upper panels) and $A^3_1$ (lower panels). Shown are the initial (cyan lines) and final (yellow lines) DM profiles of the MSH composed of the ground and first excited states of the scalar field. Also shown are the initial condition for the disc of gas (gray lines) and its density and rotation curve just before (blue dashed line) and shortly after (red dashed line) the population inversion by which the SFDM halo reaches its stable configuration; the black dashed line shows the profiles at the end of the simulations, when the multistate halo is stable.} \end{figure} \begin{figure} \label{fig2} \begin{tabular}{ll} \resizebox{110pt}{97pt}{\includegraphics{./Fig_2a_rhogas1gyr.eps}} & \resizebox{110pt}{97pt}{\includegraphics{./Fig_2a_rhogas3gyr.eps}} \\ \multicolumn{2}{c}{\resizebox{165pt}{100pt}{\includegraphics{./Fig_2b_res3gyr.eps}}} \\ \end{tabular} \caption{Zoom into the gas density profiles of Fig. 1 for simulations $A^1_1$ (upper-left panel) and $A^3_1$ (upper-right panel), showing the conspicuous peak that appears after the transition of the halo. The bottom panel shows the gas density of $A^3_1$ at three different grid resolutions; the spike at the location of the node of the MSH density profile persists, indicating that it is not a resolution effect.} \end{figure} \section{Simulations} We consider a multistate halo in which only the ground and first excited states coexist, because the stability threshold is known for this configuration. In \cite{ber10} it was seen that the larger $\eta$ is above the critical value, the faster the halo settles into its final stable configuration; to be sure that the MSH is initially in the unstable regime we pick an intermediate value $\eta$=$1.6$. We follow the evolution of a disc of gas with a Miyamoto-Nagai profile \citep{miy75} of gas mass $M_g= 3.9\times 10^9M_{\odot}$, horizontal scale-length $a=6.5$ kpc, and vertical scale-length $b=0.5$ kpc (we also simulated some cases with the same gas mass but an initial spherical Plummer profile and found no change in the main results, only a slight increase in the computation time; we therefore report only the cases with an initial disc of gas). The code we use to evolve the gaseous component embedded in the MSH is the hydrodynamics code ZEUS-MP \citep{hay06}, a fixed-grid, time-explicit Eulerian code. We solve the standard hydrodynamics equations for the baryonic component as in \cite{med14}, but here we modify the code to allow for the evolution of a multistate dark halo that undergoes a transition as described below. In order to implement the population inversion in ZEUS we use a semi-analytical model that lets us control the rate of the transition to the stable state, which helps with the interpretation of the results. 
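For reference, the Miyamoto-Nagai disc used as the initial gas condition can be written in closed form. The sketch below (ours, for illustration only, not part of the ZEUS-MP setup; the value of $G$ in galactic units is an assumption of the example) evaluates the potential and the standard density pair of Miyamoto \& Nagai (1975) for the parameters quoted above.

```python
import numpy as np

G = 4.30091e-6  # assumed value: Newton's constant in kpc (km/s)^2 / Msun

def mn_potential(R, z, M=3.9e9, a=6.5, b=0.5):
    """Miyamoto-Nagai potential (R, z, a, b in kpc; M in Msun)."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def mn_density(R, z, M=3.9e9, a=6.5, b=0.5):
    """Matching Miyamoto-Nagai density (Msun / kpc^3)."""
    zb = np.sqrt(z**2 + b**2)
    num = a * R**2 + (a + 3.0 * zb) * (a + zb)**2
    den = (R**2 + (a + zb)**2)**2.5 * zb**3
    return (b**2 * M / (4.0 * np.pi)) * num / den

# basic sanity checks: positive density, bound (negative) potential,
# and a point-mass limit -G M / R far from the disc
assert mn_density(0.0, 0.0) > 0.0
assert mn_potential(0.0, 0.0) < mn_potential(100.0, 0.0) < 0.0
```

At large radii the potential approaches $-GM_g/R$, which is a quick way to verify the normalization of the profile.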
To achieve this we first use an analytical approximation to the numerical solutions obtained in the Newtonian limit for a confined distribution in which the gravitational potential is weak \citep{ure10,guz04,rob13}; the scalar field density profile due to the wavefunction in the state $j$ is \begin{equation} \label{rho} \rho_j(r)=\rho_{c,j} \frac{\sin^2(j \pi r/R_j)}{(j \pi r/R_j)^2}, \end{equation} where $j$=$1$+$n$ labels the state of the scalar field and $n$=$0,1,2,...$ is the number of nodes in the profile; thus $j=1$ denotes the ground state with zero nodes and $j=2$ corresponds to the first excited state, and the total density of the MSH is described by $\rho(r)$=$\rho_1(r)$+$\rho_2(r)$. In equation (\ref{rho}) $\rho_{c,j}$ represents the central density of state $j$ and $R_j$ is taken as a truncation radius of the corresponding state, that is, for radii greater than $R_j$ the number of particles in the state $j$ is neglected; thus we take $\rho_j(r)$=$0$ for all $r>R_j$, whereas for $r \leq R_j$ the density profile of the state $j$ is given by (\ref{rho}). The mass profile and rotation curve for state $j$ are \citep{rob13,med14} \begin{eqnarray} M_j(r) &=& \frac{4 \pi \rho_{c,j}}{k_j^2} \frac{r}{2} \biggl(1-\frac{\sin(2 k_j r)}{2 k_j r} \biggr), \label{mass}\\ V^2_j(r) &=& \frac{4 \pi G \rho_{c,j}}{2 k_j^2} \biggl(1-\frac{\sin(2 k_j r)}{2 k_j r} \biggr) \label{vel}, \end{eqnarray} where we have defined $k_j:=j \pi/R_j$. The total mass enclosed within $R_j$ is \begin{equation} \label{tmass} M_j(R_j)= \bigg(\frac{2}{\pi}\bigg) \frac{\rho_{c,j} R^3_j}{j^2}. \end{equation} During the population inversion in the numerical solution both states lose a small percentage of their initial masses (<10\% \citep{ure10,ber10}), but this loss would not substantially affect the final halo profile; for this reason we neglect the mass loss in our analytical study of the MSH. 
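As a cross-check of the expressions above, the following sketch (illustrative only, with $G=1$ and arbitrary central densities; not the simulation code) evaluates eqs. (\ref{rho})-(\ref{vel}) for a two-state halo and verifies that the enclosed mass at the truncation radius reduces to eq. (\ref{tmass}), since $\sin(2k_jR_j)=\sin(2j\pi)=0$, and that $V^2_j=GM_j/r$.

```python
import numpy as np

def rho_j(r, rho_c, R, j):
    """Density of state j, eq. (rho): rho_c sin^2(x)/x^2 with x = j*pi*r/R,
    truncated to zero for r > R."""
    x = j * np.pi * r / R
    return np.where(r > R, 0.0, rho_c * np.sin(x)**2 / x**2)

def mass_j(r, rho_c, R, j):
    """Enclosed mass of state j, eq. (mass), with k_j = j*pi/R."""
    k = j * np.pi / R
    return (4 * np.pi * rho_c / k**2) * (r / 2) * (1 - np.sin(2*k*r) / (2*k*r))

def v2_j(r, rho_c, R, j, G=1.0):
    """Squared circular velocity of state j, eq. (vel)."""
    k = j * np.pi / R
    return (4 * np.pi * G * rho_c / (2 * k**2)) * (1 - np.sin(2*k*r) / (2*k*r))

# illustrative two-state halo: ground state (j=1) plus first excited state (j=2)
rho_c1, R1 = 1.0, 10.0
rho_c2, R2 = 0.1, 20.0
r = np.linspace(0.01, 20.0, 2000)
rho_total = rho_j(r, rho_c1, R1, 1) + rho_j(r, rho_c2, R2, 2)

# mass inside the truncation radius agrees with eq. (tmass)
assert np.isclose(mass_j(R1, rho_c1, R1, 1), (2/np.pi) * rho_c1 * R1**3 / 1**2)
assert np.isclose(mass_j(R2, rho_c2, R2, 2), (2/np.pi) * rho_c2 * R2**3 / 2**2)

# V^2 = G M / r, i.e. eqs. (mass) and (vel) are mutually consistent
assert np.allclose(v2_j(r, rho_c2, R2, 2), mass_j(r, rho_c2, R2, 2) / r)
```

The one node of the $j=2$ term in `rho_total` at $r=R_2/2$ is the ripple-like feature referred to in the text.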
In order to construct the initial configuration in an unstable state we work with a mass ratio $\eta=1.6$; when the population inversion occurs the initial mass ratio $M^i_2(R^i_2)/M^i_1(R^i_1)$ is also inverted and the final MSH is stable. After the inversion the total initial mass of the ground state $M^\textit{i}_1(R^\textit{i}_1)$ becomes the final mass of the excited state $M^\textit{f}_2(R^\textit{f}_2)$ and vice versa, and the number of nodes in the wavefunctions also changes, that is, \begin{equation} \label{invm} M^\textit{i}_1(R^\textit{i}_1) \longrightarrow M^\textit{f}_2(R^\textit{f}_2), \hspace{2mm} M^\textit{i}_2(R^\textit{i}_2) \longrightarrow M^\textit{f}_1(R^\textit{f}_1) \end{equation} \begin{equation} \label{invj} j :1,2 \longrightarrow 2,1 = j '. \end{equation} We still need to specify a function to model the time evolution of the transition to the stable configuration. As we mentioned, for unstable haloes the transition happens very fast; during this period the central density varies until it reaches the new stable value, and the inversion also modifies the radius of each state. 
In our semi-analytical approach the radius and density are related to the mass through eq. (\ref{tmass}), so we can choose to vary the radius to induce the transition. Bearing in mind that the transition is relatively fast, we found that a smooth function that captures the PI of the numerical evolution and allows an interpolation between an initial and a final $R_j$ is \begin{equation}\label{rad} R^\textit{f}_{j'}(t)= R^\textit{i}_j + \frac{\epsilon_{j',j}}{2}\bigg(1+ \tanh[\alpha(t-t_{inv})] \bigg), \end{equation} where $\alpha$ determines the transfer rate from the initially unstable to the final stable MSH, $\epsilon_{j',j}=R^\textit{f}_{j'}-R^\textit{i}_j$ relates the initial radius to the one after the inversion, and $t_{inv}$ is the time at which the PI is halfway to its final stage with $j'$ confined within $R^\textit{f}_{j'}$; we apply a corresponding function to invert $j$ into $j'$. Using this semi-analytical inversion model we can identify and study the dependence of the resulting properties of the gaseous component on the transition time, in particular the effect on the production of tidal features. We modify the code ZEUS to implement the model and run simulations for a Miyamoto-Nagai disc of gas; the gas has no self-gravity and is used as a tracer of the gravitational potential, and the initial gas velocities are set directly from the background dark matter halo, therefore matching the initial halo velocities (see Fig. 1). As noted by other authors \citep{guz04,ure02,ure12}, stationary solutions of the SP system (for a specified number of nodes) are related to each other by a scaling transformation, so the properties found here for our simulated parameters are expected to appear in similar configurations under a suitable scaling. 
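A minimal sketch of this inversion model (ours, for illustration; not the actual ZEUS modification) shows how eq. (\ref{rad}) smoothly exchanges the two truncation radii around $t_{inv}$ for the masses and radii of our MSH (Table \ref{tab1}), and that the mass ratio passes from the unstable ($\eta>\eta_{max}$) to the stable regime:

```python
import numpy as np

def radius(t, R_i, R_f, alpha, t_inv):
    """Eq. (rad): smooth interpolation of a state's truncation radius
    during the population inversion; eps = R_f - R_i."""
    eps = R_f - R_i
    return R_i + 0.5 * eps * (1.0 + np.tanh(alpha * (t - t_inv)))

# values of Table 1 (masses in 1e10 Msun, radii in kpc)
M_ground_i, R_ground_i = 1.844, 10.0    # initial ground state (j = 1)
M_excited_i, R_excited_i = 2.953, 20.0  # initial first excited state (j = 2)

# the initial mass ratio exceeds the stability threshold eta_max = 1.2 ...
eta_i = M_excited_i / M_ground_i
assert eta_i > 1.2          # unstable: a population inversion must occur

# ... and after the inversion the ratio is inverted and the halo is stable
eta_f = 1.0 / eta_i
assert eta_f < 1.2

# the packet that starts as the ground state (R = 10 kpc) ends as the
# excited state (R = 20 kpc), and vice versa (fast case alpha = 1)
t = np.linspace(0.0, 6.0, 601)  # Gyr
R_1to2 = radius(t, R_ground_i, R_excited_i, alpha=1.0, t_inv=3.0)
R_2to1 = radius(t, R_excited_i, R_ground_i, alpha=1.0, t_inv=3.0)

assert abs(R_1to2[0] - 10.0) < 0.1 and abs(R_1to2[-1] - 20.0) < 0.1
assert abs(R_2to1[0] - 20.0) < 0.1 and abs(R_2to1[-1] - 10.0) < 0.1
# at t = t_inv the interpolation is exactly halfway between the two radii
assert np.isclose(radius(3.0, 10.0, 20.0, 1.0, 3.0), 15.0)
```

Lowering $\alpha$ (e.g. $\alpha=1/4$ for the slow runs) only widens the time window over which the radii are exchanged.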
Table \ref{tab1} shows the masses and radii of our MSH before and after the transition; these are the values that define the background MSH in all our runs. We explore an early and a late transition, specified by $t_{inv}$=$1$ and $3$~Gyr respectively, and we also evolve these cases with $\alpha=1,1/4$, corresponding to a fast and a slow inversion; we denote by $A^{t_{inv}}_{\alpha}$ a realization that had a PI with parameters $t_{inv}$ and $\alpha$. \begin{table} \begin{minipage}{100mm} \caption{Initial and final parameters defining the MSH.}\label{tab1} \begin{tabular}{@{}ccccc@{}} \hline \multicolumn{3}{c}{Initial} & \multicolumn{2}{c}{Final}\\ \hline \multicolumn{3}{c}{$j$} & \multicolumn{2}{c}{$j'$} \\ \hline & 1& 2 & 1& 2 \\ $M$\footnote{Mass in units of $10^{10} M_{\odot}$.} & 1.844 & 2.953 & 2.953 & 1.844 \\ $R$\footnote{Radius in kpc.} & 10 & 20 & 10 & 20 \\ \hline \end{tabular} \end{minipage} \end{table} \section{Discussion} In Figure 1 we show the density profiles and the rotation curves of the MSH and the disc of gas, comparing the cases of a rapid early and a rapid late transition ($A^1_1$ and $A^3_1$). We notice the decline in the gas RC in both cases; in fact, the behavior is quite similar in both simulations: the gas moves according to the potential of the MSH, and when the PI takes place it induces a rapid gas infall towards the center (see the red dashed line in Fig. 1), seen as a bulge-like region of high gas density. In Figure 2 we zoom into the gas densities of Figure 1, where we observe a conspicuous peak that appears as a result of the halo transition: soon after the transition a portion of the outer gas is pulled to the center and is redistributed in the now stable MSH configuration, but some of the gas remains trapped within the region of the first node of the MSH density profile, producing the notable spike in the gas profiles. 
This density increase appears for both the early and the late fast transitions, showing that this feature is expected independently of when the transition to the stable multistate halo happened. We verified that this is not a resolution effect by simulating $A^3_1$ at three different grid resolutions, giving the density profiles shown in the bottom panel of Fig. 2; the inner profile does change with higher resolution, which captures more details of the gas, while the spike at $\sim 9$ kpc is only slightly modified and remains at the location of the node of the MSH, suggesting that its origin lies in the ripple-like feature of the background MSH. \begin{figure*} \label{fig3} \begin{minipage}{380pt} \centering \begin{tabular}{lll} \hspace{-1.1cm} \resizebox{155pt}{175pt}{\includegraphics{./Fig_3l_rhogas_ini.ps}} & \hspace{-1.1cm} \resizebox{155pt}{175pt}{\includegraphics{./Fig_3m_rhogas1_t1.ps}} & \hspace{-1.1cm} \resizebox{155pt}{175pt}{\includegraphics{./Fig_3r_rhogas1_t3.ps}} \\ \end{tabular} \end{minipage} \caption{Comparison of the initial gas distribution (left) and its final stage. The central panel corresponds to a halo that had an early transition to its stable configuration at $t_{inv}$=1 Gyr ($A^1_1$), and in the right panel the transition occurred at a later epoch, $t_{inv}$=3 Gyr ($A^3_1$). A ring appears in both simulations at the corresponding density peak of Fig. 2.} \end{figure*} This agglomeration of gas at $9$ kpc is nevertheless small compared to the mean central density ($\approx$ 0.45 M$_{\odot}$ pc$^{-3}$), being $\sim 20\%$ of it at the peak; the fact that it remains well after the transition is important for observations, as this can be the origin of some structures around galaxies. In fact, in Fig. 
3 we plot the spatial distribution of the gas at the beginning and at the end of the simulations with early and late halo transitions, that is, $A^1_1$ and $A^3_1$, and we readily observe the presence of a surrounding ring of gas orbiting the denser central concentration, reminiscent of those found in certain elliptical galaxies. The ring keeps its location until the end of the simulations in our isolated galaxy because its origin lies in the shape of the background SFDM halo. Moreover, given the stability of the halo, small perturbations such as the accretion of small galaxies would only mildly change the inner DM potential, but they may alter the ring structure in a way similar to how the population inversion modified the initial gas profile; however, once the merging process has concluded the gas will start to redistribute and, according to our results, if enough gas remained on the outskirts a ring-like structure would appear again. The final amplitude of the ring's density will depend more strongly on the details of the merging process, but its location should be similar to that in the unperturbed halo provided the DM halo potential was not severely modified, as expected for minor mergers in the past or relatively recent ones; thus, tidal features are not exclusive to galaxy mergers when the dark matter is an ultra-light scalar field. Comparing the early and late transitions of the MSH we obtain similar structures, so we may focus on one of them; we choose the case of a late population inversion ($t_{inv}$=$3$ Gyr) to explore the effect on the ring's shape of a fast ($\alpha$=1) and a slow ($\alpha$=1/4) transition. Fig. 
4 depicts the comparison at the end of the simulation. We observe that both $A^3_1$ and $A^3_{1/4}$ form the ring, as expected, but the slow transition spreads the gas around the ring, creating a broader substructure. This is consistent with our galaxy formation scenario: as the gas accretion proceeds more slowly in $A^3_{1/4}$, the ring forms slowly from the concentration of consecutively infalling gas, whereas in $A^3_1$ the rapid transition enhances the gas infall and accelerates the gathering of gas at the ring's location, quickly depleting the rest of the gas in low density regions and creating a thin and more confined ring. Moreover, a fast gas concentration is likely to induce a period of star formation; massive accretion in the early universe would generate a starburst producing many stars and winds that could drive gas away from the center, part of which would be recycled while some can be trapped at the location where the outer ring forms, where the gas can trigger star formation too or simply accumulate as dust. The central star formation affects the metallicity in that region, and if the gas infall stops early due to effective gas blowouts, a consequence of a high star formation rate, the galaxy would be left with a dominant fraction of old stars. For a massive galaxy, part of the ejected gas cannot escape and will remain bound, scattering the inner photons and producing a luminous bulge; this is consistent with the formation of some ellipticals, and in particular this picture could describe the puzzling origin of the dust rings in the Sombrero galaxy (M104) \citep{baj84,ems95}, although a more careful study of this issue remains to be done. 
The gas fraction and the nature of the dark matter halo are fundamental in determining the fate of a galaxy. For instance, dwarf galaxies reside in SFDM haloes in the ground state and have shallow potentials; if a starburst occurs early it can blow out a large portion of gas, and if that gas is accreted much later the next generations of stars will differ from the old population, observed as two different stellar populations with the younger one located at larger radii due to its late accretion. Analyses of stellar age and metallicity at different radii can thus be a way to distinguish between the predictions of CDM and SFDM; future hydrodynamics simulations of SFDM will be able to test these predictions. The existence of the ring is a generic feature in our simulations of isolated galaxies; notice that for an initially stable halo ($\eta<\eta_{max}$) the ring will also appear, since our final MSHs fall precisely in this regime. The SFDM model provides a new mechanism to form rings without major changes to the inner region of the galaxy, mainly because that is where the ground state is mostly confined; in fact, the latter can exhibit a solitonic behaviour, keeping its profile even after several mergers, as shown in SFDM cosmological simulations \citep{sch14b}. Although perfect rings are created in our simulations, we note that this structure can be deformed by the passage of an accreted satellite or by tidal forces from nearby galaxies, culminating in incomplete or displaced rings; such tidal debris would manifest as the more common shells \citep{mal80,mal83,tur99}. 
Galaxies that undergo major mergers form under violent conditions, so the shape of their tidal structures can vary widely, especially when the interaction is still ongoing; for these galaxies, far from equilibrium, the emergence of an exotic tidal feature might be more accurately explained by modeling the merging galaxy and evolving it in a simulation that includes baryonic processes and a live halo. For more isolated systems like the one considered here, we have found that there is another mechanism that leads to outer structures, with the difference that they possess a degree of symmetry associated with the background DM halo and not with a particular merger event. It is interesting that the quantum properties of the psyons manifest on a macroscopic scale and have implications for observable quantities such as gas and stars. Remarkably, a similar situation in which quantum effects manifest on larger scales has already been observed in laboratory experiments: water droplets can bounce indefinitely on the surface of a vibrating fluid bath and propel themselves across the surface by virtue of their pilot-wave dynamics, exhibiting quantum features like single-particle diffraction, tunneling, quantized orbits, and orbital level splitting \citep{wal78,cou05,cou06,pro06,edd09,edd12,for10,bus10,oza13}. In the SFDM context the analogue of the fluid would be the dark matter, and the baryons would be the droplets guided by the DM; whether this analogy is valid deserves further consideration and will be treated in other work. 
\begin{figure} \label{fig4} \begin{tabular}{ll} \hspace{-.3cm} \vspace{-1cm} \resizebox{140pt}{170pt}{\includegraphics{./Fig_4_rhogas1_t3.ps}} & \vspace{-1cm} \hspace{-1.1cm} \resizebox{140pt}{170pt}{\includegraphics{./Fig_4_rhogas1_4_t3.ps}} \\ \hspace{-.3cm} \resizebox{140pt}{160pt}{\includegraphics{./Fig_4_rhogas1_t3c.ps}} & \hspace{-1.1cm} \resizebox{140pt}{160pt}{\includegraphics{./Fig_4_rhogas1_4_t3c.ps}} \\ \end{tabular} \caption{Gas distribution after 5 Gyrs for simulations $A^3_1$ (upper-left, with its colored scale in the bottom-left panel) and $A^3_{1/4}$ (upper-right, with its colored version in the bottom-right panel). The ring in the left panels is more defined because the MSH undergoes a rapid transition to stability, while for a slow transition (right panels) more particles are spread around the ring.} \end{figure} \section{Conclusions} In this work we have seen that if the dark matter is an ultra-light scalar field, the internal structure of galaxies can preserve fossil records of their formation; in particular, we studied the formation of tidal structures similar to rings and shells in non-interacting galaxies. We discussed the nature of these structures in the context of SFDM and conclude that, independently of galaxy mergers, they can arise as a consequence of the quantum nature of the scalar field dark matter bosons (hereafter \textit{psyons}) that form the halo in which galaxies are embedded, mainly due to the intrinsic ripples in the dark matter halo density profile. The possibility of describing tidal substructure in scalar field dark matter haloes was also suggested in \cite{bra12}, but under different halo configurations and without including the evolution of a gaseous component; here we make use of spherically symmetric haloes in multistates because their stability has been studied and confirmed in previous works \citep{ure10,ber10}. In addition, they possess desirable properties for describing a variety of galaxies. 
Inspired by the numerical solutions in \cite{ure10}, which reported the maximum fraction of psyons in excited states (eq. \ref{eta}) that can form a stable multistate halo composed of the ground and first excited states, we propose a semi-analytical model to describe a similar halo configuration and study the distribution of a gaseous disc contained within such a scalar field DM halo. The halo is set to be initially unstable by choosing a fraction of excited bosons above the stability limit; this induces a transition to a final stable configuration that affects the initial gas distribution. After the transition the population of the states is inverted, and the halo stabilizes once the ground state, which gravity later confines to the center of the halo, dominates the halo mass. At the end of the simulations ($5$ Gyrs) part of the gas keeps orbiting in the form of a ring outside the main bulge-like region. The structure of the ring is not significantly different when the halo transition happens early ($t_{inv}=1$ Gyrs) or late ($t_{inv}=3$ Gyrs), but the ring broadens if the transition is slow and becomes more defined for fast transitions. Due to the lack of a stability parameter for MSHs in higher energy states, we explore only the first non-trivial superposition of multistates. Given the positive results, it seems that tidal structures would also be found at larger radii for SFDM haloes that have bosons in higher excited states, although they would become fainter with distance; this would agree with shells, which appear to be distributed at larger radii. 
The tidal features in this work have a distinct origin from the usual tidal stripping of galaxies: they emerge from the quantum properties of the psyons that compose the dark halo. The presence of the symmetric ring is due to the spherical symmetry hypothesis; deviations from this symmetry could result in incomplete rings and shells, although a certain degree of symmetry should remain. This distinguishes them from remnants of recent galaxy mergers, which depend on the paths of the accreted satellites and do not necessarily leave tidal debris of any particular symmetry. Our results suggest that the outer regions of dark haloes may not be totally smooth. Exploring in more detail the regions where shells and rings form could also help to put constraints on their origin and formation, and serve to test galaxy and halo formation scenarios such as the quantum dark matter (QDM) paradigm. Additionally, we have provided a classification of the dark matter models that assume the dark matter is a scalar field of very small mass; this classification proves useful for future reference and helps to establish the main features of the particle candidates of this paradigm. We formulated a galaxy formation scenario, based on known results of halo formation in the SFDM context, that can qualitatively explain the general properties of the different galaxy types in the universe, and we applied it to interpret our findings. Given the success of the SFDM model in accounting for the apparent discrepancies of the standard CDM model using only dark matter properties, it would be desirable to include astrophysical processes in simulations to obtain a fair comparison with state-of-the-art CDM simulations. The QDM paradigm is indeed an interesting possibility that deserves further study. \section*{Acknowledgments} This work was partially supported by CONACyT M\'exico under grants CB-2009-01, no. 132400, CB-2011, no. 
166212, and I0101/131/07 C-234/07 of the Instituto Avanzado de Cosmologia (IAC) collaboration (http://www.iac.edu.mx/). Victor H. Robles and L. Medina are supported by a CONACYT scholarship.
\section{Experimental Results} \label{sec:experiments} In this section, we present the experimental results for both the face representation model learning and the multi-class classifier training. \subsection{Face Representation Learning} \label{sec:experiments-feature} A good face representation model is the foundation of our task. In order to evaluate the discrimination and generalization capability of our face representation model, we leverage the LFW \cite{LFWTech,LFWTechUpdate} verification task, which is to verify whether a given face pair (6000 pairs in total) belongs to the same person or not. We trained our face representation model using the images in our base set (already published to facilitate research in the area, excluding people in LFW by design) with ResNet-34 \cite{Resnet}. The verification accuracies of the different models are listed in Table \ref{table:lfwResults}. For the loss function, we investigated the standard cross entropy, cross entropy plus our CCS-loss term in Eq. \ref{eq:cca}, the center loss in \cite{centerECCV}, and the sphere face loss in \cite{sphereface}. For the CCS-loss, we set $\lambda$ in Eq. \ref{eq:cca} equal to $0.1$. For the center loss, we tried different sets of parameters and found the best performance was achieved when the balancing coefficient was $0.005$, as reported in the table. For the sphere face \cite{sphereface}, we noticed this paper only recently and tried a limited number of parameter sets (there are four parameters to be adjusted jointly). The parameters reported in the paper could not make the network converge on our dataset, and the only parameter set we found that made the network converge led to worse results than the standard cross-entropy loss. As shown in Table \ref{table:lfwResults}, with the help of our CCS-loss term in Eq. \ref{eq:cca} we obtain a face representation model with cutting-edge performance. 
We follow the no-external-data protocol and use the CCS model to investigate the one-shot learning phase. For comparison, we also list the results of other methods, quoting the numbers reported in the corresponding published papers. These methods use different datasets and different network structures. Please note that applying our CCS face with the publicly available MS-Celeb-1M dataset leads to better performance on the LFW verification task than the other methods with public/private datasets, which demonstrates the effectiveness of our CCS method. \begin{table} \begin{center} \begin{tabular}{|l||c|c|} \hline Methods &Network&Accuracy \\ \hline\hline Cross entropy only & 1 & $98.88\%$ \\ \hline CCS face in \ref{eq:cca} (ours) & 1 & $\mathbf{99.28}\%$ \\ \hline Center face \cite{centerECCV} & 1 & $99.06\%$ \\ \hline Sphere face \cite{sphereface} & 1 & $-.--\%$ \\ \hline \end{tabular} \end{center} \caption{ LFW verification results obtained with models trained on the same dataset (base set). All the models use ResNet-34 \cite{Resnet} as the feature extractor to highlight the effectiveness of the loss function design. For the sphere face, we did not find a set of parameters that works for this dataset within a limited number of trials. 
} \label{table:lfwResults} \end{table} \begin{table} \begin{center} \begin{tabular}{|l||c|c|c|} \hline Methods &Dataset&Network&Accuracy \\ \hline\hline JB \cite{JB}& Public & -- & $96.33\%$ \\ \hline Human & -- & -- & $97.53\%$ \\ \hline DeepFace\cite{vgg_face} & Public & 1 & $97.27\%$ \\ \hline DeepID2,3 \cite{Xiaoou_Deep2,Xiaoou_Deep3}& Public & 200 & $99.53\%$ \\ \hline FaceNet \cite{Google_Face} & Private & 1 & \textcolor{red}{$99.63\%$} \\ \hline Center face \cite{centerECCV} & Private & 1 & $99.28\%$ \\ \hline Center face \cite{sphereface} & Public & 1 & $99.05\%$ \\ \hline Sphere face \cite{sphereface} & Public & 1 & $99.42\%$ \\ \hline CCS face (ours) & Public & 1 & \textcolor{blue}{$\mathbf{99.71}\%$} \\ \hline \end{tabular} \end{center} \caption{ For reference, LFW verification results reported in peer-reviewed publications (partial list). Different datasets and network structures were used. For CCS-face, we used ResNet-34 and a cleaned version of the full MS-Celeb-1M data. The \textit{closest runner-up} to our CCS-face is FaceNet in \cite{Google_Face}, which is trained with $>100$M \textbf{private} images for $8$M persons, while our model is reproducible.} \label{table:lfwResults-r} \end{table} We also tried different values of $\lambda$ in Eq. \ref{eq:cca} and found that our method is not sensitive to the choice of $\lambda$, as shown in Table \ref{table:lambda}. A larger $\lambda$ means a stronger regularizer is applied. Note that $\lambda = 0$ corresponds to no CCS-loss being applied. \begin{table}[h] \begin{center} \begin{tabular}{|l||c|c|c|c|c|} \hline $\lambda$ & $0$ &$0.01$&$0.1$ & $1$ &$10$ \\ \hline\hline LFW & $98.88\%$ & $90.05\%$ & $99.28\%$ &$99.20\%$ & $99.20\%$ \\ \hline \end{tabular} \end{center} \caption{ LFW verification results obtained with different $\lambda$ in Eq. \ref{eq:cca}. A larger $\lambda$ means a stronger regularizer is applied. 
} \label{table:lambda} \end{table} \subsection{One-shot Face Recognition} In phase two, we train a $21$K-class classifier to recognize the persons in both the base set and the low-shot set. Since there is only one image per person for training in the low-shot set, we repeat each sample in the low-shot set $100$ times throughout the experiments in this section. In order to test the performance, we apply this classifier to $120,000$ test images consisting of images from the base and low-shot sets. We focus on the recognition performance on the novel set while monitoring the recognition performance on the base set, to ensure that the performance improvement on the novel set does not harm the performance on the base set. Recognizing the test images for the persons in the novel set is a challenging task. The one training image per person was randomly preselected, and the selected image set includes images of low resolution, profile faces, and faces with occlusions. We provide more examples in the supplementary materials due to space constraints. The training images in the novel set show a large range of variations in gender, race, ethnicity, age, camera quality (or even drawings), lighting, focus, pose, expressions, and many other parameters. Moreover, we applied de-duplication algorithms to ensure that the training image is visually different from the test images, and that the test images cover many different looks for a given person. 
\begin{table} \begin{center} \begin{tabular}{|l|c|c|} \hline Method & C@$99\%$ & C@$99.9\%$ \\ \hline\hline Fixed Feature & $25.65\%$ & $0.89\%$\\ \hline SGM \cite{Ross2016lowshot} & $27.23\%$ &$4.24\%$ \\ \hline Update Feature & $26.09\%$ & $0.97\%$\\ \hline Direct Train & $15.25\%$ & $0.84\%$\\ \hline Shrink Norm (Eq.\ref{eq:snorm}) & $32.58\%$ &$2.11\%$\\ \hline Equal Norm (Eq.\ref{eq:fnorm}) & $32.56\%$ &$5.18\%$\\ \hline UP Only (Eq.\ref{eq:enorm}) &${77.48}{\%}$ & ${47.53}{\%}$ \\ \hline CCS Only (Eq.\ref{eq:cca}) &${62.55}{\%}$ & ${11.13}{\%}$ \\ \hline \textbf{Our:} CCS (\ref{eq:cca}) plus UP (\ref{eq:enorm}) &$\mathbf{94.89}\mathbold{\%}$ & $\mathbf{83.60}\mathbold{\%}$ \\ \hline \hline Hybrid \cite{ICCV-W-Yue} & $92.64\%$ & N/A \\ \hline Doppelganger \cite{ICCV-W-Doppelganger} & $73.86\%$ & N/A \\ \hline Generation-based \cite{ICCV-W-Generation} & $61.21\%$ & N/A \\ \hline \end{tabular} \end{center} \caption{Coverage at precisions $99\%$ and $99.9\%$ on the \textbf{low-shot set}. Please refer to subsection 5.2 or the corresponding citations for detailed descriptions of all the methods. As shown in the table, our CCS+UP method significantly improves the coverage at precisions $99\%$ and $99.9\%$ and achieves the \textbf{best} performance among all the methods. Unless specified with ``CCS'', or for numbers reported by other papers (Hybrid, Doppelganger, and Generation-based) \cite{ICCV-W-Yue,ICCV-W-Doppelganger,ICCV-W-Generation}, the face feature extractor was trained with the cross entropy loss. } \label{tab:core} \end{table} The methods we experimented with and the corresponding results are listed in Table \ref{tab:core}. We also list the performance of the \textit{top-3} methods presented at the MS-Celeb-1M challenge in the ICCV 2017 workshop for this task. We use the coverage rate at precisions $99\%$ and $99.9\%$ as our evaluation metric, since high precision is the major requirement for a real recognizer. The methods in the table are described as follows. 
The ``Fixed Feature'' entry in Table \ref{tab:core} means that, in phase two, we do not update the feature extractor and only train the classifier in Eq. \ref{eq:crossentropy} with the feature extractor from phase one. SGM, the squared gradient magnitude loss, is obtained by updating the feature extractor during phase one using the feature shrinking method described in \cite{Ross2016lowshot}. Compared with ``Fixed Feature'', the SGM method introduces about a $2\%$ gain in coverage when the precision requirement is $99\%$, and a $4\%$ gain when the precision requirement is $99.9\%$. The improvement for face recognition by feature shrinking in \cite{Ross2016lowshot} is not as significant as that for general images. The reason might be that the face feature is already a good representation for faces, so representation learning is not the main bottleneck. Note that we did not apply the feature hallucinating method proposed in \cite{Ross2016lowshot}, for fair comparison and to highlight the contribution of model learning rather than data augmentation. Coupling in the feature hallucinating method (which may need to be modified for faces) is a good direction for the next step. The ``Update Feature'' method in Table \ref{tab:core} means that we fine-tune the feature extractor simultaneously while training the classifier in Eq. \ref{eq:crossentropy} in phase two. The feature updating does not change the recognizer's performance much. The remaining three methods (shrink norm, equal norm, and the UP method) in Table \ref{tab:core} are obtained by using the cost functions defined in Eq. \ref{eq:snorm}, Eq. \ref{eq:fnorm}, and Eq. \ref{eq:enorm} as supervision signals for the deep convolutional neural network in phase two, with the face feature updating option. As shown in the table, our UP term improves the coverage@precision=$99\%$ and coverage@precision=$99.9\%$ significantly. 
The coverage at precision $99\%$ on the base set obtained by any of the classifier-based methods in Table \ref{tab:core} is $100\%$. The top-1 accuracy on the base set obtained by any of these classifier-based methods is $99.80\pm0.02\%$. Thus we do not report them separately in the table. \section{Introduction} \label{sec:intro} The great progress of face recognition in recent years has made large-scale face identification possible for many practical applications. In this paper, we study the problem of training a large-scale face identification model using \textit{imbalanced} training images for a large number of persons, and then using this model to identify other face images of the persons in \textit{the same group}. This setup is widely used when the images of the persons to be recognized are available beforehand and an accurate recognizer is needed for a large and relatively fixed group of persons: for example, large-scale celebrity recognition for search engines, public-figure recognition for the media industry, and movie-character annotation for video streaming companies. Building a large-scale face recognizer is not a trivial effort. One of the major challenges is the highly imbalanced training data: when there are many persons to be recognized, it naturally happens that for some of them there might be only a very limited number of training samples, or even a single sample each. Besides this unique challenge, there are also challenges introduced by the fact that different persons may have very similar faces, and the fact that faces of the same person may look very different due to lighting, pose, and age variations. To study this problem, we design a benchmark task and propose a strong baseline solution for it. Our benchmark task is to train a face recognizer to identify $21,000$ persons. 
For $20,000$ of these persons, we provide about $50$-$100$ training images per person and call this group the \textit{base set}, following the terminology defined in \cite{Ross2016lowshot}. For the other $1,000$ persons, we offer only \textbf{one} training image per person and call this group the \textit{low-shot set}. The task is to study, with these training images only, how to develop an algorithm to recognize the persons in \textbf{both} sets. In particular, we mainly focus on the recognition accuracy for persons in the low-shot set, as it shows the one-shot learning capability of a vision system, while we also check the recognition accuracy for those in the base set to ensure their performance is not hurt. We have published this data set to facilitate research in this direction. Our solution for this benchmark task is to train a good face representation model and build a classifier on top of it. The objective of feature learning is to train a face representation model with good discrimination ability not only on the base set, but also on the low-shot set. In other words, since there is only one training image per person in the low-shot set, we need to build a face feature extractor with good generalization capability. There has been a lot of effort in this direction, yet our method differs in the following two perspectives. The first is data. We train our face feature extractor with the base set, which includes about one million images with high annotation accuracy. This is one of the \textit{largest publicly available} datasets \cite{LFWTech,LFWTechUpdate,Youtube,CASIA_WebFace,Yandong:Celeb,Xiaoou_Deep1}, which makes our model reproducible and meaningful. The second is the cost function design. 
In addition to the standard cross entropy loss used together with Softmax for multinomial logistic regression (MLR) learning, we propose to add another loss term, which encourages the features of each class to have directions similar to the corresponding weight vector in the logistic regression. Since the weight vector is trained to have a direction close to that of the features from its own class and far from the directions of the features from other classes, our proposed term effectively minimizes the intra-class variance and maximizes the inter-class variance simultaneously. We compare our face representation model with its most similar alternatives and demonstrate the advantages of our method in subsections \ref{sec:related-feature}, \ref{sec:method-feature}, and \ref{sec:experiments-feature}. The second stage of our solution is to learn a classifier on top of the face feature extractor learned in the first stage. Though K-nearest neighbors (KNN) or other template-based methods might be the most straightforward solution, the standard KNN method is not suitable for our setup due to its limitations in terms of accuracy and scalability \cite{ACMMMMSCeleb1M-2,ICCV-W-Yue,Xu_2017_ICCV}. More discussion is presented in section \ref{sec:relatedKNN}. In our solution, we choose MLR for its proven performance on various visual recognition problems. The major challenge of using MLR as the classifier is the highly imbalanced training data. In our experiments, we have observed almost perfect performance of MLR in recognizing persons in the base set, yet very poor performance for the low-shot set, even though its training images are oversampled. A further analysis in Section \ref{sec:methodology} shows that a low-shot class with only one training sample can claim only a small partition of the feature space. 
Moreover, we reveal that there is a close connection between the volume of a class partition in the feature space and the norm of the weight vector of that class in the MLR model. Based on this finding, we propose to add a new loss term to the original cross-entropy loss for MLR, serving as a prior on the weight vectors in multinomial logistic regression. This new loss term is based on our empirical assumption and observation that, on average, each person in the low-shot set should occupy a space of similar volume in the feature space as the persons in the base set. We call this term the Underrepresented-classes Promotion (UP) loss. For comparison, we also explore other options for the priors on the weight vectors. To quantitatively evaluate the performance, we adopt the closed-domain face identification setup and apply the classifiers to test images \textit{mixed} from both the base set ($100,000$ images, $5$ images/person) and the low-shot set ($20,000$ images, $20$ images/person). Our experimental results clearly demonstrate the effectiveness of the proposed method. With our feature extraction model and the UP term, we can recognize $94.89\%$ of the test images in the low-shot set with a high precision of $99\%$ while keeping the top-1 accuracy of $99.8\%$ for the base classes; without our method, only $25.65\%$ of the test images from the low-shot set can be recognized at the same precision. In summary, our contributions can be highlighted as follows. \begin{itemize} \item We set up a benchmark task for one-shot face recognition and provide the associated dataset composed of the base set and the low-shot set. \item We propose a new cost function to effectively learn a discriminative feature extractor with good generalization capability on the low-shot set. 
\item We reveal that the deficiency of multinomial logistic regression (MLR) in one-shot learning is related to the norms of the weight vectors in MLR, and propose a novel loss term, called underrepresented-classes promotion (UP), which effectively addresses the data imbalance problem in one-shot learning. \item Our solution recognizes $94.89\%$ of the test images at the precision of $99\%$ for the low-shot classes. To the best of our knowledge, this is the \textbf{best} performance among all the published methods using this benchmark task with the same setup. \end{itemize} \section{Methodology} \label{sec:methodology} Our solution includes the following two phases. The first phase is \textit{representation learning}. In this phase, we build a face representation model using all the training images from the \textit{base set}. The second phase is \textit{one-shot learning}. In this phase, we train a multi-class classifier, with the help of our UP term, to recognize the persons in both the \textit{base set} and the \textit{low-shot set}, based on the representation model learned in phase one. \subsection{Representation learning} \label{sec:method-feature} We train our face representation model within a supervised learning framework, treating person IDs as class labels. The cost function we use is \begin{equation} \label{eq:costfunction} \mathcal{L} = \mathcal{L}_s + \lambda \mathcal{L}_a \, , \end{equation} where $\mathcal{L}_s$ is the standard cross entropy loss used for the Softmax layer, while $\mathcal{L}_a$ is our proposed loss used to improve the feature discrimination and generalization capability, with the balancing coefficient $\lambda$. 
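The two-term objective amounts to adding a weighted auxiliary penalty to the bias-free softmax cross entropy. A minimal NumPy sketch, with our own function and variable names (not the paper's implementation), averaging over the batch where the paper writes a sum:

```python
import numpy as np

def softmax_cross_entropy(W, phi, labels):
    """L_s with bias-free logits W @ phi (no b_k term).
    W: (K, d) class weight vectors, phi: (N, d) features, labels: (N,) ids."""
    logits = phi @ W.T                           # no bias term
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)            # p_k(x_n)
    # batch mean of -log p of the true class (the paper sums over n)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def total_loss(W, phi, labels, aux_loss, lam=0.1):
    """L = L_s + lambda * L_a for any auxiliary term L_a."""
    return softmax_cross_entropy(W, phi, labels) + lam * aux_loss(W, phi, labels)
```

For well-separated features aligned with their class weight vectors, `softmax_cross_entropy` is close to zero, and with a zero auxiliary term `total_loss` reduces to `softmax_cross_entropy`.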
More specifically, we recap the first term, the cross entropy $\mathcal{L}_s$, as \begin{align} \label{eq:crossentropy} \mathcal{L}_s &= - \sum_n \sum_k t_{k,n} \log p_{k}(x_n) \,, \end{align} where $t_{k,n}\in\{0,1\}$ is the ground truth label indicating whether the $n^{th}$ image belongs to the $k^{th}$ class, and the term $p_{k}(x_n)$ is the estimated probability that the image $x_n$ belongs to the $k^{th}$ class, defined as \begin{equation} \label{eq:sigmoid} p_{k}(x_n) = \frac{\exp (\mathbf{{w}}_k^T {\boldsymbol{\phi}(x_n)} )}{\sum_i \exp (\mathbf{w}_i^T {\boldsymbol{\phi}(x_n)})} \, , \end{equation} where $\mathbf{w}_k$ is the weight vector for the $k^{th}$ class and $\boldsymbol{\phi}(\cdot)$ denotes the feature extractor for image $x_n$. We choose the standard residual network with 34 layers (ResNet-34) \cite{Resnet} as our feature extractor $\boldsymbol{\phi}(\cdot)$, using the last pooling layer as the face representation. ResNet-34 is used due to its good trade-off between prediction accuracy and model complexity, yet our method is general enough to be extended to deeper network structures for even better performance. Note that in all of our experiments, we always set the bias term $b_k=0$. We have conducted comprehensive experiments and found that removing the bias term from the standard Softmax layer does not affect the performance, yet leads to a much better understanding of the geometric properties of the classification space. The second term $\mathcal{L}_a$ in the cost function \ref{eq:costfunction} is calculated as \begin{align} \label{eq:cca} \mathbf{w}'_k &\leftarrow \mathbf{w}_k \\ \mathcal{L}_a &= - \sum_k\sum_{i \in C_k} \frac{\mathbf{w}_k^{'T} \boldsymbol{\phi}(x_i)}{\|\mathbf{w}'_k\|_2 \|\boldsymbol{\phi}(x_i)\|_2} \, . \end{align} We set the parameter vector $\mathbf{w}'_k$ to be equal to the weight vector $\mathbf{w}_k$. 
This loss term encourages the face features belonging to the same class to have a direction similar to that of their associated classification weight vector $\mathbf{w}_k$. We call this term the Classification vector-centered Cosine Similarity (CCS) loss. Calculating the derivative with respect to $\boldsymbol{\phi}(x_i)$, we have \begin{equation} \frac{\partial \mathcal{L}_a}{\partial \boldsymbol{\phi}(x_i)}=-\frac{1}{\|\boldsymbol{\phi}(x_i) \|_2} \left( \frac{\mathbf{w}_k^{'T}}{\|\mathbf{w}'_k\|_2} -\frac{\boldsymbol{\phi}(x_i)^T \cos \theta_{i,k}}{\|\boldsymbol{\phi}(x_i)\|_2} \right) \,, \end{equation} where $\theta_{i,k}$ is the angle between $\mathbf{w}'_k$ and $\boldsymbol{\phi}(x_i)$. Note that $\mathbf{w}'_k$ in this term is a parameter copied from $\mathbf{w}_k$, so no derivative is propagated to $\mathbf{w}'_k$. For ablation purposes, we also tried back-propagating the derivative with respect to $\mathbf{w}_k$, but did not observe better results. \subsubsection{Discussion} There has been a lot of effort in adding extra terms to the cross entropy loss to improve the feature generalization capability. Perhaps the most similar approach is the center loss in \cite{centerECCV}, also known as the dense loss in \cite{Latha:dense}, published around the same time. In center loss, the extra term is defined as \begin{equation} \label{eq:center} \mathcal{L}_c = \sum_k\sum_{i \in C_k} ||\mathbf{c}_k - \boldsymbol{\phi}(x_i) ||_2^2\, , \end{equation} where $\mathbf{c}_k$ is the \textit{class} center (which might be dynamically updated as an approximation of the true class center due to implementation cost). Our method differs from center loss in two respects. First, minimizing the cost function \ref{eq:center} may lead to two consequences. While it helps reduce the distance between $\boldsymbol{\phi}(x_i)$ and its associated center $\mathbf{c}_k$, it also reduces the norms of $\boldsymbol{\phi}(x_i)$ and $\mathbf{c}_k$. 
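To make the stop-gradient behaviour of $\mathbf{w}'_k$ concrete, here is a small NumPy sketch of the CCS term for a single feature, with a finite-difference check of its gradient with respect to $\boldsymbol{\phi}(x_i)$. Function and variable names are ours, not the paper's implementation:

```python
import numpy as np

def ccs_loss_and_grad(phi, w):
    """CCS term of Eq. (cca) for one feature phi and the (copied, constant)
    class weight w' -- no gradient flows back into w'."""
    nphi, nw = np.linalg.norm(phi), np.linalg.norm(w)
    cos = (phi @ w) / (nphi * nw)
    loss = -cos                                   # minimizing -cos aligns phi with w'
    grad = -(w / nw - phi * cos / nphi) / nphi    # d(-cos)/d(phi); w' gets none
    return loss, grad

rng = np.random.default_rng(0)
phi, w = rng.normal(size=64), rng.normal(size=64)
loss, grad = ccs_loss_and_grad(phi, w)

# central finite-difference check of the analytic gradient
eps = 1e-6
num = np.array([(ccs_loss_and_grad(phi + eps * e, w)[0]
                 - ccs_loss_and_grad(phi - eps * e, w)[0]) / (2 * eps)
                for e in np.eye(phi.size)])
assert np.allclose(grad, num, atol=1e-6)
```

Because the gradient involves only the direction of `w` and the cosine term, scaling `phi` changes the magnitude but not the sign of the update, consistent with the claim that the CCS term does not penalize the feature norm itself.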
This second consequence is usually undesirable, as it may hurt the classification performance. We did observe in our experiments that over-training with center loss leads to features with too-small norms and worse performance than not using center loss at all (also reported in \cite{centerECCV}). On the contrary, our loss term only considers the angle between $\boldsymbol{\phi}(x_i)$ and $\mathbf{w}'_k$ and does not affect the norm of the feature. In our experiment section, we demonstrate that our method is not sensitive to the parameter tuning. Second, please note that we use the weight vector in Softmax, $\mathbf{w}_k$, to represent the \textbf{classification} center, while in \ref{eq:center} the variable $\mathbf{c}_k$ is the \textbf{class} center. The major difference is that $\mathbf{w}_k$ is updated (naturally, during the minimization of $\mathcal{L}_s$) using not only the information from the $k^{th}$ class, but also the information from the other classes. In contrast, $\mathbf{c}_k$ is updated only using the information from the $k^{th}$ class (calculated separately). More specifically, according to the derivative of the cross entropy loss in \ref{eq:crossentropy}, \begin{equation} \label{eq:gcd} \frac{\partial \mathcal{L}_s}{\partial \mathbf{w}_k}=\sum_n (p_{k}(x_n)-t_{k,n}){\boldsymbol{\phi}(x_n)} \,, \end{equation} the direction of $\mathbf{w}_k$ is pulled close to the direction of the face features from the $k^{th}$ class and pushed far away from the directions of the face features \textit{not} from the $k^{th}$ class. \subsection{One-shot Learning} In this subsection, we build a classifier using multinomial logistic regression on top of the feature representation model obtained in the previous subsection. \subsubsection{Challenges of One-shot} As discussed previously, the standard MLR does not perform well for the persons in the low-shot set. 
In section \ref{sec:experiments}, we report that with the standard MLR, the coverage at the precision of $99\%$ is only $25.65\%$ for the low-shot set, while for the base set the coverage is $100\%$ at the same precision. Note that the training images in the low-shot set had been oversampled $100$ times, and the feature extractor trained with the standard softmax was used. The low coverage for the low-shot classes is related to the fact that the only sample of each low-shot class occupies a much smaller partition of the feature space, compared with the samples in each base class. This is because a class with one sample usually has a much smaller (even zero) intra-class variance than a class with many samples, which can span a larger area in the feature space. To further understand this property, without loss of generality, we discuss the decision hyperplane between any two adjacent classes. We apply Eq. \ref{eq:sigmoid} to both the $k^{th}$ class and the $j^{th}$ class to determine the decision hyperplane between them (note that we do not have bias terms throughout our paper): \begin{equation} \label{eq:decision} \frac{p_j(x)}{p_k(x)}=\frac{\exp (\mathbf{{w}}_j^T {\boldsymbol{\phi}(x)} )}{\exp (\mathbf{{w}}_k^T {\boldsymbol{\phi}(x)} )}=\exp [(\mathbf{{w}}_j - \mathbf{{w}}_k)^T \boldsymbol{\phi}(x)] \end{equation} \begin{figure} \centering \vspace*{-0.36in} \subfloat[$\|\mathbf{w}_k\|_2 =\|\mathbf{w}_j\|_2 $]{\includegraphics[width=0.45\linewidth]{figs/va}} \hspace{-1cm} \subfloat[$\|\mathbf{w}_k\|_2 <\|\mathbf{w}_j\|_2 $]{\includegraphics[width=0.45\linewidth]{figs/vb}} \caption{Relationship between the norm of $\mathbf{w}_k$ and the volume of the partition for the $k^{th}$ class. The dashed line represents the hyperplane (perpendicular to $\mathbf{w}_j-\mathbf{w}_k$) which separates the two adjacent classes. As shown, when the norm of $\mathbf{w}_k$ decreases, the $k^{th}$ class tends to occupy a smaller volume in the feature space. 
} \label{fig:weight} \end{figure} As shown in Figure \ref{fig:weight}, the hyperplane separating two adjacent classes $k$ and $j$ is perpendicular to the vector $\mathbf{w}_j - \mathbf{w}_k$. When the norm of $\mathbf{w}_k$ decreases, this hyperplane is pushed towards the $k^{th}$ class, and the volume for the $k^{th}$ class shrinks. As this property holds for any two classes, we can clearly see the connection between the norm of a weight vector and the volume of its corresponding partition in the feature space. In our experiments with the standard MLR, we found that the norms of the weight vectors for the low-shot classes are much smaller than the norms of the weight vectors for the base classes, with an example shown in Figure \ref{fig:wnorm1}. \begin{figure} \centering {\includegraphics[width=0.8\linewidth]{figs/wnorm-noalignb-iter100000.pdf}} \\[-0.05cm] \caption{Norm of the weight vector $\mathbf{w}$ with standard MLR. The x-axis is the class index. The rightmost $1000$ classes on the x-axis correspond to the persons in the low-shot set. As shown in the figure, without the UP term, $\|\mathbf{w}_k\|_2$ for the low-shot set is much smaller than that of the base set. } \label{fig:wnorm1} \end{figure} \subsubsection{Underrepresented Classes Promotion} In this subsection, we propose a method to promote the underrepresented classes, i.e., the classes with a limited number of (or only one) samples. Our method is based on a prior which we design to increase the volumes of the partitions corresponding to the low-shot classes in the feature space. Based on the previous analysis, we introduce a new term to the loss function with the assumption that, on average, the persons in the low-shot set and the persons in the base set should have similar partition volumes in the feature space.
\begin{equation} \label{eq:enorm} \mathcal{L}_{up} = \mathcal{L}_s + \left\| \frac{1}{|C_{l}|} \sum_{k \in C_{l}}\|\mathbf{w}_k\|_2^2 - \alpha \right\|_2^2 \, , \end{equation} where $\alpha$ is the parameter learned from the average of the squared norms of the weight vectors for the base classes, \begin{equation} \alpha \leftarrow \frac{1}{|C_{b}|} \sum_{k \in C_{b}} \| \mathbf{w}_k \|_2 ^2 . \end{equation} We use $C_{b}$ and $C_{l}$ to denote the sets of the class indices for the \textit{base set} and the \textit{low-shot set}, respectively. As shown in Eq. \ref{eq:enorm}, the average of the squared norms of the weight vectors in the \textit{low-shot set} is promoted to the average of the squared norms of the weight vectors for the \textit{base set}. We call this term the underrepresented-classes promotion (\textbf{UP}) term. For every mini-batch, we jointly optimize the cross entropy term and the UP loss term. The derivative we send back for back-propagation is the sum of the derivative of the cross entropy and the derivative of the UP term. We keep the rest of the optimization the same as for a regular deep convolutional neural network. \subsubsection{Alternative Methods} Adding extra terms of $\mathbf{w}_k$ to the cost function essentially injects prior knowledge into the system. Different assumptions yield different prior terms on the weight vectors. Here we discuss several alternatives to the UP-prior. One typical method to handle the insufficient-data problem in regression and classification is to shrink $\mathbf{w}_k$ \cite{Ti96,YandongLasso}. Here we choose the $L2$-norm option for optimization efficiency. \begin{equation} \label{eq:snorm} \mathcal{L}_{l2} = \mathcal{L}_s +\sum_{k} \|\mathbf{w}_k\|_2^2 \,. \end{equation} Another option is to encourage all the weight vectors to have similar or even the same norms. A similar idea has been proposed in \cite{EWeightNorm} for the purpose of accelerating the training speed.
We adopt a soft constraint on the squared norm of $\mathbf{w}$ here. \begin{equation} \label{eq:fnorm} \mathcal{L}_{eq} = \mathcal{L}_s + \sum_{k \in \{ C_{l} \cup C_{b} \} } \left\| \|\mathbf{w}_k\|_2^2 - \beta \right\|_2^2 \, , \end{equation} where \begin{equation} \beta \leftarrow \frac{1}{|\{ C_{l}\cup C_{b} \}|} \sum_{k \in \{ C_{l}\cup C_{b} \}} \| \mathbf{w}_k \|_2 ^2 . \end{equation} Note that the major difference between this cost function and the one in Eq. \ref{eq:enorm} is that, in Eq. \ref{eq:fnorm}, the norms of all $\mathbf{w}_k$ get affected and pushed to the same value, while in Eq. \ref{eq:enorm}, only the norms of $\mathbf{w}_k$ for the \textit{low-shot set} classes get promoted. The performance of all these options is presented in Section \ref{sec:experiments}. \section{Conclusion and Future Work} In this paper, we have studied the problem of one-shot face recognition. We build a solution for this task from two perspectives. First, we introduce a method called Classification vector-centered Cosine Similarity (CCS) to train a better face feature extractor, which has better discrimination capability for persons with a limited number of training images, compared with the extractor trained without CCS. Second, we reveal that the deficiency of multinomial logistic regression in one-shot learning is related to the norms of its weight vectors, and propose a novel loss term called underrepresented-classes promotion to effectively address the data imbalance problem in one-shot learning. The evaluation results on the benchmark dataset show that the two new loss terms together bring a significant gain, improving the recognition coverage rate from $25.65\%$ to $94.89\%$ at the precision of $99\%$ for one-shot classes, while still keeping an overall accuracy of $99.8\%$ for normal classes.
In the future, we are interested in applying the UP prior and CCS, and exploring more options to improve low-shot learning in general visual recognition problems. {\small \bibliographystyle{ieee} \section{Related Work} \label{sec:related} \subsection{Benchmark Task} Nowadays, the major focus in face recognition has been to learn a good face feature extractor. In this setup, typically, a face feature extractor is trained with images for a group of persons, and then tested with images for a \textit{different} group of persons in the verification or identification task. For example, the verification task with the LFW dataset \cite{LFWTech} is the de facto standard test to evaluate face features, though the performance on this dataset is getting saturated. Moreover, many face identification tasks, e.g., MegaFace \cite{UW_MegaFace} or LFW \cite{LFWTech} with the identification setup, essentially evaluate face features, since the identification is achieved by comparing face features between query and gallery images. The major advantage of the above setup is that the generalization capability of the face representation model can be clearly evaluated, since the persons in the training phase are usually \textit{different} from the persons in the testing phase. This is very important when the images of the target persons are not accessible during the training phase. Unfortunately, we observe that the best performance for the above setup is typically obtained by using very large, \textit{private} dataset(s), which makes it impossible to reproduce these works, e.g., \cite{Google_Face}. Moreover, though a good feature extractor is essential and critical for face identification, it is not yet the final solution to identification. Our benchmark task has a different setup: we train the face identification model with imbalanced training images for the persons to be recognized.
This setup is very useful when the images for the target persons are available beforehand, because training with images of the target persons generally leads to better performance than training with images of other persons (assuming a similar total amount of images). As discussed in the introduction section, there are also many real scenarios using this setup. Moreover, since our task includes the low-shot classes (persons with only one training sample), the generalization capability can also be evaluated. Last but not least, we provide both the training and testing datasets, so people can conveniently reproduce and compare their algorithms in this direction. \subsubsection{Low-shot learning for general visual recognition} In the general image recognition domain, the recent low-shot learning work \cite{Ross2016lowshot} has also attracted a lot of attention. Their benchmark task is very similar to ours but in the general image recognition domain: the authors split the ImageNet data \cite{ILSVRC15} into base and low-shot (called novel in \cite{Ross2016lowshot}) classes, and the target is to recognize images from both the base and low-shot classes. Their solution is quite different from ours since the domain is quite different. We will not review their solution here due to the space constraint, but list results from their solution as one of the comparisons in Section \ref{sec:experiments}. \subsection{Discriminative Feature learning} \label{sec:related-feature} Cross entropy with Softmax has demonstrated good performance in supervising the training of face feature extraction models. In order to further improve the performance of representation learning, many methods have been proposed to add extra loss terms or slightly modify the cross entropy loss (used together with softmax for multinomial logistic regression learning) to regularize the representation learning, in order to improve the feature discrimination and generalization capability.
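The "cross entropy plus extra loss term" pattern described above can be sketched in a few lines. This is a minimal numpy illustration; the auxiliary regularizer below is a generic placeholder, not any specific published loss.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Numerically stable cross entropy for one sample."""
    z = logits - logits.max()
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -log_probs[label]

def total_loss(feature, label, W, lam=0.01):
    """Cross entropy (softmax/MLR) plus a weighted auxiliary regularizer.

    The aux term here is a placeholder standing in for, e.g., a
    center-loss-like term; lam controls the regularization strength.
    """
    ce = softmax_cross_entropy(W @ feature, label)
    aux = lam * float(np.sum(feature ** 2))
    return ce + aux
```

During training, the gradient of such a composite loss is simply the sum of the cross-entropy gradient and the regularizer gradient, which is how the extra terms discussed in this subsection are typically plugged into standard back-propagation.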
Among all these works, we consider the center loss \cite{centerECCV} as one of the representative methods (a similar idea was published in \cite{Latha:dense} around the same time). In \cite{centerECCV}, face features from the same class are encouraged to be close to their corresponding class center (actually an approximation of the class center, usually dynamically updated). By adding this loss term to the standard Softmax, the authors obtain a better face feature representation model \cite{centerECCV}. There are many other alternative methods, including the range loss in \cite{RangeLoss}, fisher face in \cite{Fisher}, center invariant loss in \cite{centerinvariant}, marginal loss in \cite{MarginalLoss}, sphere face in \cite{sphereface}, etc. Each of these methods has its own uniqueness and advantages under certain setups. We design a different kind of loss term, added to the cross entropy loss of the Softmax, to improve the feature extraction performance. In Sections \ref{sec:methodology} and \ref{sec:experiments}, we demonstrate, through both theoretical discussion and experimental verification, that our method has better performance in our setup than the center loss in \cite{centerECCV} or sphere face in \cite{sphereface} (the two most similar methods). Our method has only one parameter and is very easy to use. We have not reproduced all of these cost function design methods \cite{RangeLoss,Fisher,centerinvariant,MarginalLoss} with our training dataset, for practical reasons: these methods were implemented with different network structures and trained on different datasets, and parameter adjustment is often critically required when the training data is switched. We will work on evaluating more methods in the future. \subsection{KNN vs. Softmax} \label{sec:relatedKNN} After a good face feature extractor is obtained, a template-based method, e.g., K-nearest neighbors (KNN), is widely used for face identification these days.
The advantages of KNN are clear: no classifier training is needed, KNN does not suffer much from imbalanced data, etc. However, experiments in \cite{ICCV-W-Yue,ACMMMMSCeleb1M-1,ACMMMMSCeleb1M-2,Xu_2017_ICCV} and Section \ref{sec:experiments} in our paper demonstrate that the accuracy of KNN in the large-scale face identification setup is usually lower than that of MLR, when the same feature extractor is used. Moreover, if we use all the face images for every person in the gallery, the complexity is usually too high for large-scale recognition, and the gallery dataset needs to be very clean to ensure high precision. If we do not keep all the images per person, how to construct a representer for each class is still an open problem. As described above, MLR demonstrates overall higher accuracy compared with KNN in many previous publications. This is mainly because in MLR, the weight vector for each class is estimated using discriminant information from all the classes, while in the KNN setup, the query image only needs to be close enough to one local class to be recognized. Moreover, after feature extraction, with MLR, the computational complexity of estimating a person's identity is linear in the number of persons, not the number of images in the gallery. However, the standard MLR classifier suffers from the imbalanced training data and has poor performance on the low-shot classes even when these classes are oversampled during training, though the overall accuracy is higher than KNN. Recently, some works have developed hybrid solutions by combining MLR and KNN \cite{ICCV-W-Yue,Xu_2017_ICCV} and achieved promising results. In these works, when MLR does not have high confidence (threshold tuning is needed), KNN is used. We solve this problem from a different perspective. Different from the hybrid solutions, our solution has only one MLR as the classifier, so that no threshold is needed to switch between classifiers.
We boost the performance of MLR by regularizing the norms of the weight vectors in MLR. We have not seen a lot of effort in this direction, especially in the deep learning scenario. \section{Benchmark Datasets} We prepared and published the associated datasets for our task earlier this year, and they have attracted more than one hundred downloads. Here we clarify the key points of our dataset for everyone's convenience. \noindent \textbf{Training} There are $21$K persons in total. For $20$K of these persons, there are $50$-$100$ training images per person (base set). For the remaining $1000$ persons, there is one training image per person (low-shot set). \noindent \textbf{Testing} We test the face identification with the same $21$K persons. There are $120$K images to be recognized ($100$K from the base set and $20$K from the low-shot set). The model to be tested will not know whether a test image is from the base set or the low-shot set, which is close to the real scenario, yet the performance is evaluated on the base and low-shot sets separately to better understand the system. \noindent \textbf{Comparison} Though the base set in this paper is considerably larger than most public face datasets, it is smaller than MS-Celeb-1M \cite{guo2016msceleb} (it is actually a subset of MS-Celeb-1M). The base set has a different focus from MS-Celeb-1M. MS-Celeb-1M targets recognizing as many celebrities as possible in the one-million celebrity list, so celebrity coverage is important, and noisy labels for the less popular celebrities are inevitable \cite{ACMMMMSCeleb1M-1,ACMMMMSCeleb1M-2,MSCelebNoiseLabel}. Therefore, MS-Celeb-1M inspires work including data cleaning, training with noisy labels, etc. On the other hand, the base set published in this paper is mainly for training a robust and generalizable face feature extractor, as it is nearly noise-free.
Moreover, for the convenience of feature evaluation, we do not include the celebrities in LFW \cite{LFWTech} (the de facto standard) in our base set ($20$K persons). Thus researchers can directly leverage this dataset and evaluate performance on the LFW verification task. Our low-shot set can also be used to evaluate feature extractors (though indirectly), since only one training image is provided per person in the low-shot set. We provide $20$ images per person for testing. For comparison, the LFW dataset \cite{LFWTech} has fewer than $100$ persons with more than $20$ images. The benchmark task in MegaFace \cite{UW_MegaFace} focuses on $80$ identities for the query set to be recognized, though millions of images are provided as distractors. \section{Appendix} We summarize some potential questions and responses as follows. Most of the information is included in the main paper, yet we re-organize it and add more details using a QA structure for the convenience of the readers. \subsection{KNN vs. Softmax} Leveraging face verification (pairwise comparison) to solve the face identification problem is a straightforward and popular solution \cite{UW_MegaFace}. It has also been suggested that we use k-nearest neighbors (KNN). Here we discuss why we do not choose KNN. \begin{figure}[h] \centering {\includegraphics[width=0.99\linewidth]{figs/KNN-MLR.png}} \caption{Detailed comparison of K-nearest neighbors (KNN) and multinomial logistic regression (MLR, a.k.a. Softmax) applied on top of the feature extraction for large-scale face identification. The figure shows the precision and coverage on \textbf{all the test images from both the base and low-shot sets}. The same feature extractor was used for both methods. As shown, though both methods lead to similar Top-1 accuracy, MLR has much larger coverage at high precision compared with KNN (MLR: $85.31\%$ @ Precision = $99.9\%$ while KNN: $52.57\%$ @ Precision = $99.9\%$).
The major reason is that MLR updates the classification weight vectors using information from the samples of both the corresponding class and the other classes. } \label{fig:MLR-KNN} \end{figure} We acknowledge the advantages of KNN (or other similar template-based methods): no classifier training is needed after the feature representation model is trained, and KNN does not suffer much from imbalanced data. If the feature extraction were perfect (the distance between samples from the same class always smaller than the distance between samples from different classes), KNN would be good enough for any face identification problem. However, there is no perfect face feature, though a lot of progress has been made in this direction. Given a reasonably good, yet not perfect, face feature extractor, our experimental results demonstrate that in the large-scale face identification setup, multinomial logistic regression (MLR, a.k.a. Softmax) has better performance than KNN. As shown in Fig. \ref{fig:MLR-KNN}, both KNN and MLR (with the same feature extractor for a fair comparison) were tested with all the test images from both the base and low-shot sets. Though they lead to similar Top-1 accuracy, MLR has much larger coverage at high precision compared with KNN (MLR: $85.31\%$ @ Precision = $99.9\%$ while KNN: $52.57\%$ @ Precision = $99.9\%$). In previous publications \cite{ICCV-W-Yue,ACMMMMSCeleb1M-1,ACMMMMSCeleb1M-2,Xu_2017_ICCV} on large-scale face recognition, the authors have made similar statements. We believe the major reason is that in MLR, the weight vector for each class is estimated using discriminant information from all the classes, while in the KNN setup, the query image only needs to be close enough to one local class to be recognized.
Moreover, for the KNN method, if we use all the face images for every person in the gallery, the complexity is usually too high for large-scale recognition, and the gallery dataset needs to be very clean to ensure high precision. If we do not keep all the images per person, how to construct a representer for each class is still an open problem. On the contrary, with MLR, after the feature extraction, the computational complexity of estimating a person's identity is linear in the number of persons, not the number of images in the gallery. \subsubsection{Challenge of Imbalanced Training data} Though MLR has better overall performance than KNN, as shown in Fig. \ref{fig:MLR-KNN}, the standard MLR classifier suffers from the imbalanced training data and has poor performance on the low-shot classes even when these classes are oversampled during training. We evaluate the precision and recall of MLR on the base and low-shot sets separately and present the results in Fig. \ref{fig:MLR}. \begin{figure}[h] \centering {\includegraphics[width=0.99\linewidth]{figs/MLR.png}} \caption{Precision and coverage of MLR (a.k.a. Softmax) evaluated on the base and low-shot sets separately. As shown, though MLR has better overall performance than KNN (Fig. \ref{fig:MLR-KNN}), the standard MLR classifier suffers from the imbalanced training data and has poor performance on the low-shot classes even when these classes are oversampled during training. } \label{fig:MLR} \end{figure} Recently, some works have developed hybrid solutions by combining MLR and KNN \cite{ICCV-W-Yue,Xu_2017_ICCV} and achieved promising results. In these works, when MLR does not have high confidence (threshold tuning is needed), KNN is used.
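The complexity contrast discussed in this subsection can be made concrete with a toy sketch (illustrative names and sizes, not the paper's code): MLR scores every person with a single matrix-vector product, so inference cost is linear in the number of persons, while gallery-based 1-NN must compare against every stored gallery image.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_persons, imgs_per_person = 64, 100, 30

# One MLR weight vector per person vs. a gallery holding every image's feature.
W = rng.standard_normal((n_persons, d))
gallery = rng.standard_normal((n_persons * imgs_per_person, d))
gallery_label = np.repeat(np.arange(n_persons), imgs_per_person)

feat = rng.standard_normal(d)  # extracted feature of the query face

# MLR: one matrix-vector product, O(n_persons * d) per query
mlr_pred = int(np.argmax(W @ feat))

# 1-NN over the whole gallery, O(n_persons * imgs_per_person * d) per query
knn_pred = int(gallery_label[np.argmin(np.linalg.norm(gallery - feat, axis=1))])
```

The gap grows with the number of images kept per person, which is why keeping the full gallery becomes expensive at scale, as noted above.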
We solve the training data imbalance challenge from a different perspective. Different from the hybrid solutions, our solution has only one MLR as the classifier, so that no threshold is needed to switch between classifiers. We boost the performance of MLR by regularizing the norms of the weight vectors in MLR. We have not seen a lot of effort in this direction, especially in the deep learning scenario. \subsection{Feature Learning} There have been many efforts to improve face feature learning by adding regularizers to the Softmax loss term. Perhaps the earliest pioneers in this direction are the center loss in \cite{centerECCV} and the similar version called dense loss in \cite{Latha:dense}. We find that a better face feature model always helps the final results, no matter whether KNN, MLR, or MLR with the underrepresented-classes promotion method is used. Therefore, we also investigate how to learn an even better face feature extractor. Our proposed classification vector-centered cosine similarity (CCS) term is different from its cousins center face \cite{centerECCV} and sphere face \cite{sphereface}. We try to minimize the angle between the feature vectors and the corresponding weights, while center loss minimizes the distance between the feature vectors and the corresponding class centers. SphereFace emphasizes maximizing the margin (angle) between the features and the corresponding decision boundary. Experimental results show that our method has better performance for our task when the same training data is used, as listed in the main paper. SphereFace has four task-specific hyper-parameters to control the regularization strength. Given the limited time (we noticed this work a short time before submission), we failed to find good hyper-parameters for our task (we also tried the authors' parameters).
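The excerpt above states the idea behind CCS (align each feature with its class weight vector) but not its exact formula, so the following is only a plausible cosine-alignment sketch of that idea, not the published CCS loss; the function name is ours.

```python
import numpy as np

def cosine_alignment_term(features, labels, W):
    """Mean (1 - cos angle) between each feature and its own class weight.

    features: (batch, d) array; labels: (batch,) int array; W: (classes, d).
    The term is 0 when every feature points exactly along its class weight,
    and it depends only on angles, not on vector lengths.
    """
    w = W[labels]  # (batch, d): weight vector of each sample's own class
    cos = np.sum(features * w, axis=1) / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(w, axis=1))
    return float(np.mean(1.0 - cos))
```

Because the penalty is scale-invariant, it matches the "minimize the angle" description above, in contrast to the center loss, which penalizes the Euclidean distance to (approximate) class centers.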
Please also note that the improvement of the face representation model is one of the three contributions of our paper (the other two are the benchmark task design and the UP term for data imbalance). \subsection{Practical Applications} In this paper, we study the problem of training a large-scale face identification model using \textit{imbalanced} training images for a large quantity of persons, and then using this model to identify other face images of the persons in \textit{the same group}. Though this setup may not be the case for video surveillance, where the person to be recognized is typically not in the training data, it is still widely used when the images for the persons to be recognized are available beforehand and an accurate recognizer is needed for a large and relatively fixed group of persons. Examples include large-scale celebrity recognition for search engines, public figure recognition for the media industry, and movie character annotation for video streaming companies. This one-shot face recognition challenge was selected as one of the ICCV 2017 workshops, and attracted more than $40$ registered teams last year and hundreds of data downloads. \subsection{More discussion on UP term} For reading convenience, we include here the figure demonstrating the impact of the underrepresented-classes promotion (UP) loss term on the norm of $\mathbf{w}$. \begin{figure}[h!] \centering \subfloat[Without UP] {\includegraphics[width=0.8\linewidth]{figs/wnorm-noalignb-iter100000.pdf}} \\ \subfloat[With UP] {\includegraphics[width=0.8\linewidth]{figs/wnorm-alignb-newa-iter20000.pdf}} \\[-0.05cm] \caption{Norm of the weight vector $\mathbf{w}$ with standard MLR with/without UP. The x-axis is the class index. The rightmost $1000$ classes on the x-axis correspond to the persons in the low-shot set.
As shown in sub-figure [a], without the UP term, $\|\mathbf{w}_k\|_2$ for the low-shot set is much smaller than that of the base set, while with the UP term, on average, $\|\mathbf{w}_k\|_2$ for the low-shot set tends to have values similar to those of the base set. } \label{fig:wnorm2} \end{figure} As shown in sub-figure [a], without the UP term, $\|\mathbf{w}_k\|_2$ for the low-shot set is much smaller than that of the base set, while with the UP term, on average, $\|\mathbf{w}_k\|_2$ for the low-shot set tends to have values similar to those of the base set. Note that the variance of the norm of $\mathbf{w}$ across the low-shot classes is also reduced. This is a byproduct of the UP term: since the $\|\mathbf{w}_k\|_2$ for the low-shot classes are all encouraged to be closer to one scalar value $\alpha$, they naturally take similar (though not identical) values, which leads to a smaller variance of $\|\mathbf{w}_k\|_2$ among the low-shot classes. The UP term does not change $\mathbf{w}_k$ for the base classes. How to more actively control the variance is still an open problem, which we will study in the future.
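Both effects discussed in this paper — the UP term driving the average squared norm of the low-shot weights toward $\alpha$, and a larger weight norm claiming a larger partition of the feature space — can be checked numerically. This is a toy numpy sketch under assumed shapes and values, not the training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- UP penalty: promote the average squared low-shot norm to alpha.
W_base = rng.standard_normal((50, 16))       # stand-in for base-class weights
W_low = 0.1 * rng.standard_normal((5, 16))   # low-shot weights: small norms

alpha = np.mean(np.sum(W_base ** 2, axis=1))

def up_penalty(W_low):
    # Scalar form of the UP term: (mean squared low-shot norm - alpha)^2
    return (np.mean(np.sum(W_low ** 2, axis=1)) - alpha) ** 2

before = up_penalty(W_low)
scale = np.sqrt(alpha / np.mean(np.sum(W_low ** 2, axis=1)))
after = up_penalty(scale * W_low)            # norms promoted -> penalty ~ 0

# --- Volume effect: with three class weights pointing at 0, 90 and 180
# degrees, the fraction of isotropic features arg-maxed to the middle class
# grows with the norm of its weight vector.
phi = rng.standard_normal((200_000, 2))

def frac_k(norm_k):
    W = np.array([[1.0, 0.0], [0.0, norm_k], [-1.0, 0.0]])
    return np.mean(np.argmax(phi @ W.T, axis=1) == 1)

small, promoted = frac_k(0.2), frac_k(1.0)   # promoted class claims more volume
```

With equal norms the middle class owns the quarter-plane between its neighbors (fraction about 0.25); shrinking its norm to 0.2 squeezes the region to an angular width of $2\arctan(0.2)$, roughly 6% of the plane, which is the geometric picture behind promoting the low-shot norms.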
\section{Introduction} The disruption of satellite galaxies through tidal interactions is an important source for the hierarchical build-up of galactic halos on small and large scales (e.g., Searle \& Zinn 1978; Boylan-Kolchin et al. 2010). Direct evidence comes, e.g., from tidal streams around the Milky Way and Andromeda (e.g., Ibata et al. 1994, 2001), but also lies in the progressive discovery of tidal features such as low surface brightness structures around nearby galaxies in the Local Volume (e.g., Mart{\'{\i}}nez-Delgado et al. 2010), and out to higher redshifts (Forbes et al. 2003; Koch et al. 2015). A few of the dwarf galaxies in the Local Group and nearby galaxy clusters have a remarkable spatial extent and/or exhibit S-shaped morphologies indicative of ongoing tidal disruption by their massive hosts. In this context, models predict that tidal effects cause significant variations of the galaxies' half-light radii, $r_h$ (e.g., Pe\~narrubia et al. 2009). Prominent representatives of such extended dwarf galaxies (as indicated in Fig.~1) are Sagittarius (Majewski et al. 2003), And~XIX (Brasseur et al. 2011), NGC 4449B (Rich et al. 2012), and HCC-087 in the Hydra~I cluster (Koch et al. 2012), which is probably one of the most extended dwarf galaxies in the Local Volume. A few Virgo Cluster galaxies have also been reported to show indications of strong tidal interactions (Paudel et al. 2013), and their morphologies are of great interest (e.g., McDonald et al. 2011; S{\'a}nchez-Janssen et al. 2016). All these examples can provide a deeper insight into the interactions of (dwarf) satellites with their environments and their parent clusters (e.g., Tal et al. 2009; cf. Penny et al. 2009). { Conversely, some recently discovered ultra-diffuse galaxies in the Coma, Virgo and Fornax clusters occupy a parameter space in the size-luminosity plane that lies in between NGC 4449B and the ACSVCS measurements of VCC 1661.
In fact, many of those diffuse objects seem to have smooth shapes without tidal features (van Dokkum et al. 2015; Koda et al. 2015; Mu\~noz et al. 2015; Beasley et al. 2016). } Curiously, HCC-087 had previously been classified as a regular early-type dwarf, despite its extraordinary size (Mieske et al. 2008), while further investigation revealed the presence of significant tidal tails, emphasizing the need for a case-by-case examination of such extended systems. \begin{figure}[htb] \begin{center} \includegraphics[angle=0,width=1\hsize]{f1.eps} \end{center} \caption{Magnitude-radius plot for stellar systems using data from { van Dokkum et al. (2015), Mu\~noz et al. (2015), and} Misgeld \& Hilker (2011), who, in turn, used F06's values for the VCC galaxies. We also show particularly extended galaxies that are strongly inflated by tidal disruption: Sgr (Majewski et al. 2003); And XIX (Brasseur et al. 2011); HCC-087 (Koch et al. 2012); and NGC 4449B (Rich et al. 2012). Our own measurement for VCC 1661 is shown as a black star symbol (smaller $r_h$), connected to F06's measurement (larger value). The gray box indicates the full range of radii found in the literature.} \end{figure} Here, we investigate the dwarf galaxy VCC 1661 in the Virgo Cluster (Binggeli et al. 1985), the faintest galaxy covered by the Advanced Camera for Surveys Virgo Cluster Survey (ACSVCS) -- a photometric study using the ACS onboard the Hubble Space Telescope (HST; C\^ot\'e et al. 2004). With its integrated magnitude of $g$$\sim$$14.5$ mag ($B_T$$\sim$$15.97$ mag, respectively), it is not especially faint by the standards of the Virgo Cluster Catalogue (VCC; Binggeli et al. 1985), but it is characterized by a very low surface brightness of $\mu_{r_e}(g) = 26.5$ mag\,arcsec$^{-2}$ (Ferrarese et al. 2006; hereafter F06).
It resides in a relatively uncrowded region, with its nearest neighbour (VCC~1679; $B_T$$\sim$$18.7$ mag) at a projected distance of 4.2$\arcmin$ or 20 kpc\footnote{In the following we adopt the Cepheid-based distance to Virgo of $16.52\pm0.22\pm1.14$ Mpc (Tonry et al. 2001), consistent with the distance scale used by F06.}. Several studies have obtained surface brightness profiles of Virgo dwarf galaxies in various filters and, for VCC~1661, a range of radii is reported in the literature: the first study of this galaxy by F06 lists a remarkably large S\'ersic radius of 58$\arcsec$ (4.7 kpc) in the $g$-band (based on HST imaging) and 39$\arcsec$ (3.2 kpc) in $z$, while noting that its isophotes are very smooth. Later, the same team obtained a smaller value from wide-field, multi-colour Sloan Digital Sky Survey (SDSS) images using a model-independent analysis (Chen et al. 2010; hereafter C10). Likewise, the compilation of Janz \& Lisker (2008) does not contain any unusually extended objects, but no actual values for radii were given. Finally, based on a variety of optical and near-infrared imagery, McDonald et al. (2011) did not note anything unusual about this galaxy in the optical bands, while listing an overall range of 13.7$\arcsec$ (in $z$) to 47.9$\arcsec$ (in $i$). While most measurements indicate a median value around $\sim$25$\arcsec$, the entire literature covers a factor of more than four in radius, irrespective of the filters used to obtain the respective images. All studies, however, agree in that the isophotes appear to be very smooth. Table~1 presents an overview of the radii (and other S\'ersic-profile parameters; see Sect.~4) of VCC~1661 derived in the literature and the present work.
\begin{table*}[htb]
\caption{Measurements of the radius and S\'ersic index $n$ of VCC~1661}
\centering
\begin{tabular}{lcccc}
\hline\hline
Reference & Band & $n$ & r$_e$ [$\arcsec$] & r$_e$ [kpc]$^a$ \\
\hline
 & $g$ & 2.34 & 58.07 & 4.65$\pm$0.33 \\
\raisebox{1.5ex}[-1.5ex]{Ferrarese et al. (2006)} & $z$ & 1.93 & 39.27 & 3.15$\pm$0.22 \\
\hline
Janz \& Lisker (2008) & $r$ & 1.20 & 19.35 & 1.55$\pm$0.11 \\
\hline
 & $g$ & 1.51 & 23.40 & 1.87$\pm$0.18 \\
\raisebox{1.5ex}[-1.5ex]{Chen et al. (2010)} & $z$ & $\dots$ & 16.90\rlap{$^{b}$} & 1.35$\pm$0.18 \\
\hline
 & $g$ & 2.60\rlap{$^{c}$} & 19.78 & 1.58$\pm$0.11 \\
 & $r$ & 1.40\rlap{$^{c}$} & 19.58 & 1.57$\pm$0.11 \\
McDonald et al. (2011) & $i$ & 0.60\rlap{$^{c}$} & 47.90 & 3.84$\pm$0.27 \\
 & $z$ & $\dots$ & 13.74 & 1.10$\pm$0.08 \\
 & H & 0.60\rlap{$^{c}$} & 40.37 & 3.23$\pm$0.23 \\
\hline
This work & $r$ & 0.98$\pm$0.40 & 24.1$\pm$7.7 & 1.93$\pm$0.63 \\
\hline
\end{tabular}
\\$^a$Adopting a distance modulus of 31.09 mag for VCC~1661 (Tonry et al. 2001).
\\Uncertainties on literature values can only account for the distance errors.
\\$^b$S\'ersic-corrected effective radius from a curve-of-growth analysis
\\$^c$S\'ersic index of the bulge component after bulge-disk decomposition
\end{table*}
We note that none of the above studies quoted any uncertainties on individual measurements, but rather state global values such as $\sigma\log\,r_e$=0.025--0.03 (C10) and $\sigma r_e$(r)=3--15\% (McDonald et al. 2011), so that it is difficult to assess the significance and origin of the discrepancies. The significance of settling this galaxy's extent becomes clear in the magnitude-radius plot for a broad range of stellar systems of, e.g., Misgeld \& Hilker (2011), who employed the data set of F06.
Relying on the largest of the radii in the literature (as plotted by Misgeld \& Hilker 2011; their Fig.~1) would render VCC~1661 a clear outlier in this parameter space, lying 4.2$\sigma$ above the mean relation defined by a broad range of stellar systems in the Local Volume (Fig.~1; see also Fig.~2 in Koch et al. 2012; Br\"uns \& Kroupa 2012). Similar to the aforementioned tidally disturbed satellites, this would imply that VCC~1661, too, has properties consistent with having undergone severe tidal interactions, even though any tidally stripped material has yet to be detected. The lower range of these radii, however, would leave it an ordinary Virgo dwarf. Thus we obtained new images of VCC~1661 to measure its spatial extent and to look for potential low surface brightness features, so as to obtain a clear-cut characterization of this dwarf galaxy. \section{Data} Our data were taken on March 20, 2012 with the 28-inch Centurion telescope at the Polaris Observatory Association in Lockwood Valley, California (Brosch et al. 2008, 2015; Rich et al. 2012, 2016). We employed an SBIG STL11000 camera run at $-25^{\circ}$C at the f/3.1 prime focus behind a corrector group. The pixel scale of the detector is 0.82$\arcsec$\,pixel$^{-1}$ (that is, $65.7\pm0.9\pm4.5$ pc\,pixel$^{-1}$ at the distance of Virgo), with a field of view of $39.6\arcmin\times59.4\arcmin$. The field around VCC~1661 was imaged for 26$\times$300 s using an Astrodon Luminance filter, a round broad-band filter with a band pass of 4000--7000 \AA~that acts effectively as a wide Sloan $r$-filter. The data were dark-subtracted and flatfielded using dome-flats, following standard procedures in the MAXIM DL library; IRAF's {\em imsurfit} was used to correct for low-level variations. After average-combining the individual images with a sigma-clipping algorithm, we again used {\em imsurfit} to model and perform a final sky subtraction.
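The combination step can be sketched as follows (a minimal illustration only, not our actual reduction pipeline; the clipping threshold, iteration count, and in-memory frame format are assumptions):

```python
import numpy as np

def sigma_clipped_combine(frames, sigma=3.0, iters=3):
    """Average-combine a stack of frames, iteratively rejecting pixels
    that deviate by more than `sigma` standard deviations from the
    per-pixel stack mean (e.g. cosmic rays or satellite trails)."""
    stack = np.asarray(frames, dtype=float)
    mask = np.zeros(stack.shape, dtype=bool)
    for _ in range(iters):
        clipped = np.ma.masked_array(stack, mask)
        mean = clipped.mean(axis=0)
        std = clipped.std(axis=0)
        # Flag outliers relative to the current clipped statistics.
        mask = np.ma.filled(np.abs(stack - mean) > sigma * std, False)
    return np.ma.masked_array(stack, mask).mean(axis=0).filled(np.nan)
```

A single strongly deviant pixel in one frame is rejected in the first pass, so the combined image converges to the mean of the unaffected frames.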
The median sky level on our image is 21.44$\pm$0.01 mag\,arcsec$^{-2}$, as measured in a 40 px $\times$ 40 px region in a preferentially feature-free area $\sim$4$\arcmin$ away from VCC~1661. We caution that this ignores the effects of bright stars at the edge of the CCD, PSF modeling on degree-scales, and other known background variations towards Virgo such as Galactic cirrus and intracluster light (e.g., Mihos et al. 2005). Since we are focusing on one single object in the following, none of these large-scale variations poses a concern. Furthermore, the small radius of the galaxy is favorable for reaching low surface brightness levels and for suppressing the effect of any large-scale flatfield variations. Moreover, scattered light is suppressed by a baffled optical element in front of the camera (Brosch et al. 2015). Note that Rich et al. (2012) reach 29 mag\,arcsec$^{-2}$ using the same set-up, under similar conditions and a comparable sky level. The seeing, as determined from the point spread function of near-by stars on the final image, was sub-optimal, at $\sim6\arcsec$, and we will address the ensuing limitations of our analysis in Sect.~4.1. We reach a signal-to-noise ratio of 3 pixel$^{-1}$ at a surface brightness level of 25.2 mag\,arcsec$^{-2}$. The resulting image is shown in Figures~2 and A1 in the appendix, with a focus on VCC~1661 in Fig.~2, while Fig.~A1 covers the entire field of view.
%
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=0,width=0.495\hsize]{f2a.ps}
\includegraphics[angle=0,width=0.495\hsize]{f2b.ps}
\includegraphics[angle=0,width=0.495\hsize]{f2c.ps}
\includegraphics[angle=0,width=0.495\hsize]{f2d.eps}
\end{center}
\caption{Image of VCC~1661 in the luminance filter, where North is up and East is left. Different image stretches were used to emphasize the inner regions versus the galaxy's full extent.
A scale bar of 60$\arcsec$ is indicated; each image covers $3.4\arcmin\times 3.4\arcmin$.} \end{figure} \section{Radial profiles} The different contrasts in Fig.~2 clearly highlight the bright center and extended structure of the galaxy, but also emphasize additional light sources in the halo of VCC~1661. Such objects were handled with IRAF's {\em imedit} task by blending their point spread function into the local (within $\le$20 pixels) mean background. Likewise, another 15 stars and brighter, extended globular clusters (Jord\'an et al. 2009) were removed from the immediate surroundings of VCC~1661. Other, faint globular cluster members within the galaxy halo do not stand out against the background of our images and add no significant contribution to the surface brightness profiles we derive in the following. To derive the isophotal parameters and radial surface brightness profile of VCC~1661 we used IRAF's {\em ellipse} task, following closely the procedures laid out in F06. Since the Virgo Cluster is in the footprint of the SDSS, we were able to use a number of stars in the galaxy's vicinity to calibrate the photometry from {\em ellipse}, measured in the luminance filter, to Sloan-$r$ magnitudes catalogued in the SDSS. Figure~3 shows the azimuthally averaged radial profile we obtained from {\em ellipse} before and after subtraction of the contaminating sources. Note that the apparent flat trend towards the center is caused by the inability of our fitting routine to deal with the dense inner regions given our large seeing. Radii within the seeing limit of 6$\arcsec$ will be ignored in all further discussion of our data. Here, we also indicate the sky-level of our imaging.
%
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=0,width=0.9\hsize]{f3.eps}
\end{center}
\caption{Radial profiles with and without subtraction of globular cluster and star contaminants.
The solid line indicates our approximate seeing, below which we ignore the measurements.} \end{figure} \section{On the size of VCC~1661}
%
The mean ellipticity, $e$, of our isophotes is 0.05$\pm$0.03, in accordance with the values of F06. For consistency, we also adopt the elliptical radius as our major axis coordinate, i.e., $r_{ell} = a\,(1-e^2)^{1/2}$, where $a$ denotes the major axis distance. Figure~4 shows our final surface brightness profile, provided in the $r$-band, in comparison with profiles for this galaxy (partly using different filters) from the literature (F06; Janz \& Lisker 2008; C10).
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=0,width=1\hsize]{f4.eps}
\end{center}
\caption{Radial profiles from the literature and this work. The top panel shows the $g$-band data of F06, while the middle panel indicates their $z$-band photometry. Either curve is shown out to the radii where F06's counts drop below 10\% of their sky level (their Fig.~100; blue solid points) and by their best-fit profile extending beyond (dashed blue line). The bottom panel contains C10's data in all five SDSS bands. We also show in black the SDSS profile ($r$) derived by Janz \& Lisker (2008). The seeing of our observations and the S\'ersic radii derived by F06 and C10 are labeled by vertical lines.}
\end{figure}
\subsection{Centurion imaging} F06 fitted both S\'ersic (1968) and cored-S\'ersic profiles and found the former to be the best representation for the majority of the VCC dwarfs, while the latter provided a better fit to the brightest cluster galaxies. In particular, C\^ot\'e et al. (2006) noted the presence of a prominent nucleus within VCC~1661, and as such F06 included an additional, central King component in the overall profile fit to characterize the cores of their sample galaxies.
%
At our seeing limit, the central regions on our images are not sensitive to the presence of the nucleus and we restrict our analysis to radii larger than 6$\arcsec$.
On the other hand, sampling the galaxy halo outside of this radius alone prohibits convergence of a single S\'ersic profile. Fortunately, the $g$-band profile obtained by F06 and our measurements agree very well within the overlapping region (again excluding our central 6$\arcsec$). Due to the difference in the filter curves and ensuing zero points, there is an offset between both profiles of 0.46 mag, with a 1$\sigma$-scatter of a mere 0.04 mag. We thus opted to merge our radial profile with that of F06, offset by the above amount, in order to reach maximal spatial coverage while retaining a homogeneous filter system (Fig.~5).
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=0,width=1\hsize]{f5.eps}
\end{center}
\caption{The profile from this work, combined with the $g$-band data of F06 for the inner region (r$_{\rm ell}<6\arcsec$), shifted by 0.46 mag. The best-fit King+S\'ersic profile is indicated.}
\end{figure}
The resulting combined data were fitted in an error-weighted least-squares sense with a S\'ersic profile plus a central King nucleus. From this, we obtained King core and tidal radii of 0.13$\arcsec$ and 88.2$\arcsec$, respectively (corresponding to a concentration of $c=2.8$), with a fit quality governed by F06's better-resolved inner component. The best-fit S\'ersic index of VCC~1661 is $n=0.98$, so that this dwarf galaxy can be characterized by an exponential profile. Most importantly, our data indicate a S\'ersic radius of 24.1$\arcsec$, which is less than half of the largest value found in the literature (Table~1) and rather compatible with the smaller measurements of Janz \& Lisker (2008), C10, and the optical data of McDonald et al. (2011). We will return to a detailed comparison with the literature in Sects.~4.2 and 4.3. \subsubsection{Seeing limitations} The rather poor seeing conditions of $\sim$$6\arcsec$ on our images can affect our analysis in two ways.
Firstly, fainter background sources can be smeared out to such an extent that they become undetectable, which poses a potential source of uncertainty when estimating the background. We probed this effect by adding artificial, extended sources to a sky background with the variance achieved in our images. The object density and magnitudes of these random sources were estimated from the SDSS field near VCC~1661. As a result, the sky background changes by a mere 0.4\%, with only a slight increase in the variance. Our estimate of the sky level and the sky subtraction are thus not likely to be affected by the large seeing. Secondly, it is feasible that some of the light from the bright nucleus is scattered beyond the central $6\arcsec$ that were ignored in our subsequent analyses, thereby altering the radial profile even beyond that radius. To this end, we smoothed the fiducial profile, including our measurements combined with the resolved nucleus from F06, with a Gaussian kernel of $6\arcsec$ and re-fit this new profile in an identical manner as before. The resulting best-fit radius decreases to 16.4$\arcsec\pm7\arcsec$. While this suggests that the PSF has some influence on the profile outside of the 6$\arcsec$ range, this result is still consistent within the errors with the non-smeared value derived above. This argues that seeing alone cannot be the cause of an increased radius measurement using the present set-up. \subsection{Sloan images} Janz \& Lisker (2008) and C10 employed the fifth data release (DR5) of the SDSS (Adelman-McCarthy et al. 2007) to derive the sizes of a large sample of VCC galaxies. Both studies used the same SDSS data and a similar treatment of the background, in terms of a specialized sky subtraction (Lisker et al. 2007) and source masking followed by a tilted-plane fit to the sky in C10, respectively.
The final images have a smaller pixel scale of 0.396$\arcsec$\,px$^{-1}$ (0.792$\arcsec$\,px$^{-1}$ for the fainter galaxies) and better seeing conditions of $\sim$1.6$\arcsec$ compared to the present work. Restricting the analysis to distances $>2\arcsec$ due to the SDSS seeing, a Petrosian radius was determined (Petrosian 1976) before summing the entire flux within two times this radius. The resulting {\em half-light} radius was then corrected for the missing flux in the Petrosian aperture (Graham et al. 2005) and for the isophotal axial ratio. C10 also excluded the central 2$\arcsec$, thus avoiding the nucleus. Their S\'ersic radii were based on profile fits giving equal weight to all data points, but a non-parametric estimate was also obtained for the $g$-band, similar to the techniques of Janz \& Lisker (2008). C10 list two values: a smaller one based on their curve-of-growth analysis, and the result from a parametric S\'ersic fit, which is the value quoted in our Table~1. Considering the different filters used in these SDSS studies, the resulting radii for VCC~1661 are in very good agreement. Moreover, all these studies are consistent with the relatively small extent found in our present work. A comparison of our profile with that from the SDSS in Fig.~4 indicates that our magnitude calibration from the luminance filter to Sloan-$r$ is well justified: the median difference in the fiducial overlapping regions is 0.23 mag with a 1-$\sigma$ scatter of 0.07 mag. In particular, there is no systematic trend in these zero point shifts with radius within the galaxy. This confirms not only that our Centurion imaging data are reliably calibrated to a standard magnitude system, but also lends weight to the notion that VCC~1661's radius is not exceptionally large, in contrast to some earlier claims. \subsection{Literature} Table~1 summarizes the various measurements of VCC~1661's size. With its S\'ersic index, $n$, of 0.98, VCC~1661 is practically characterized by a simple exponential profile.
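For reference, the standard S\'ersic law underlying all radii in Table~1 can be written in surface brightness as
\begin{equation}
\mu(r) = \mu_e + \frac{2.5\,b_n}{\ln 10}\left[\left(\frac{r}{r_e}\right)^{1/n}-1\right],
\end{equation}
where $b_n$ is defined through $\Gamma(2n)=2\,\gamma(2n,b_n)$ and is well approximated by $b_n\approx1.9992\,n-0.3271$ for $0.5\lesssim n\lesssim10$. For $n=1$ this reduces to an exponential profile with scale length $h=r_e/b_1\approx r_e/1.678$.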
It is noteworthy that our S\'ersic index is the smallest of all values in Table~1; the indices listed in the referenced studies are about twice as large. However, since only global error estimates on the radii and S\'ersic indices are given in the literature, lacking errors for individual objects, we cannot assess the significance of this discrepancy. The high resolution of the ACS (at 0.049$\arcsec$\,pixel$^{-1}$) allowed F06 to resolve and measure VCC~1661's nucleus. Our fit of the combined (nucleus plus halo) galaxy profile, upon merging our data with F06's, yields a King core radius for the nucleus that is 38\% larger than the value found by F06. Consequently, we find a large concentration parameter of our King model of $c=2.8$. While we describe our profile by the same functional form as F06, the most striking difference is the considerably larger radius found in the ACSVCS, particularly in $g$. The radius measured by F06 in the $z$-band is smaller by one third. However, the radial profiles in the $g$ vs. $z$-band of F06 do not indicate any significant radial color gradient, and thus population gradient, in VCC~1661, nor are the integrated SDSS colors of C10 unusual amongst the ACSVCS galaxy sample, with the exception of a $u-g$ color that is slightly bluer than average. The values by McDonald et al. (2011) also differ from each other by a factor of up to three, despite the careful background handling of this and all other studies. \section{Discussion: the nature of VCC~1661} The dwarf elliptical VCC~1661 has been included in several surveys of the Virgo Cluster, and all of them agreed on its regular appearance. Nonetheless, the values for its radius vary widely throughout the literature, including within the ACSVCS, which is clearly superior in terms of depth and resolution. The reason for this discrepancy remains uncertain, since there is no obvious correlation between the filter used and the resulting radius determination, nor with survey depth.
Furthermore, the field of view of the ACS is sufficiently large with respect to the extent of smaller galaxies so that limitations in the sense of poor sky subtraction are unlikely to be an issue.
%
We obtained new wide-field imaging that allowed us to re-measure this galaxy's extent. Thereby, we could confirm that the characteristic radius (here parameterized by a S\'ersic profile) is fully consistent with typical sizes of other Virgo cluster dwarf galaxies of comparable luminosity (see Fig.~1), arguing against any inflation through significant past or present tidal perturbations. Moreover, a remarkable smoothness in the isophotes of VCC~1661 has been noted in all studies to date, including our own results and the data at high angular resolution like the SDSS and ACS. Janz et al. (2012, 2014) assert that only 27\% of dwarf galaxies can be fit with single S\'ersic profiles and show that subtraction of the smooth background frequently reveals substructure like bars, lenses, or spiral patterns. Apart from the bright central core region, our images do not show any obvious substructures -- VCC~1661 appears to be in equilibrium. Conversely, the recent discovery of tidal tails around HCC-087 (Koch et al. 2012), which had previously been classified as an average ``early-type dwarf'', as opposed to the present case of a null-detection in a supposedly unusual contender, clearly emphasizes the need to carefully inspect satellite galaxies that appear outstanding in any regard. Here, deep surveys like the Next Generation Virgo Cluster Survey (Ferrarese et al. 2012) are invaluable for unearthing unambiguous structures around cluster galaxies (e.g., Paudel et al. 2013). The field of view of our image contains at least one other, yet uncatalogued, dwarf galaxy with clear tidal tails at a surface brightness comparable to the faintest contours of VCC~1661. This supports our conclusion, as also stated in earlier literature, that this galaxy is remarkably undisturbed.
If VCC~1661 had significant tidal features, our image would have revealed them.
%
\acknowledgements AK and CSB acknowledge the Deutsche Forschungsgemeinschaft for funding from Emmy Noether grant Ko 4161/1. We are grateful to the anonymous referee for a constructive report and we thank L. Ferrarese and T. Lisker for comments on an early version of the manuscript. The observations were carried out as part of the Halos and Environments of Nearby Galaxies (HERON) survey.
\subsection{Magnification} \label{sub:magnification} The calculations described thus far concern the amplification in flux of a distant source with a terrascope, but not the magnification in angular size of a source. Variations in the Earth's atmosphere already present a major limiting factor in the resolving power of ground-based telescopes, and thus one should expect them to be an even greater one for the terrascope. Even a Hill sphere observatory, exploiting stratospheric lensing, will observe rays that have traversed ${\sim}$20 airmasses (see Figure~\ref{fig:interp}) and thus seeing will be of order tens of arcseconds. Consider a distant object whose light arrives at the Earth such that the object subtends an angle $\phi$. This light is refracted and arrives at the terrascope detector subtending an angle $\theta = \Delta + \phi$, where $\Delta$ is the deflection angle. The magnification is $\theta/\phi = (1 - \Delta/\theta)^{-1}$. Since $\theta \simeq b/L$ and $\Delta = \Delta_0 e^{-(b-R)/H_{\Delta}}$, the magnification is approximately $[1 - (\Delta_0 L/b)\, e^{-(b-R)/H_{\Delta}}]^{-1}$. Magnification thus theoretically tends to infinity, representing a caustic, as $b$ approaches the value for maximum amplification. This caustic behavior unsurprisingly echoes the situation of gravitational lensing, although practically realizing this magnification would not be possible given the atmospheric disturbance. \subsection{Separation of nearby sources} \label{sub:contamination} The fact that an off-axis source still produces significant lensing at offsets as large as \SI{1.4}{\degree} is useful, because lensing events then occur over a prolonged timescale, but also potentially problematic. This is because it indicates that nearby sources, within a degree, will have some fraction of their light also lensed onto the detector, and thus one might be concerned about blending. Consider the on-axis lensing scenario but with an additional source offset by an angle $\theta$ on the sky.
The line connecting the contaminating source and the detector does not pass through the Earth's center (as with on-axis lensing) but rather is offset by a distance $Q$, meaning that $\theta = Q/L$. The on-axis lensed source travels through the Earth's atmosphere at an impact parameter of $b_{\mathrm{mid}}$, which is equivalent to $\lim_{Q\to0} b_{\mathrm{mid}}(Q)$, whereas the offset source has $b_{\mathrm{mid}}(Q)$. These two lensed images appear separated in the atmosphere, as seen from the detector, by an angle $\alpha$ given by \begin{align} \alpha &= \frac{ (\lim_{Q\to0} b_{\mathrm{mid}}(Q)) - b_{\mathrm{mid}}(Q) }{L}. \end{align} Accordingly, the apparent angular separation decreases from $\theta$ to $\alpha$ by the ratio \begin{align} \frac{\alpha}{\theta} &= \frac{ (\lim_{Q\to0} b_{\mathrm{mid}}(Q)) - b_{\mathrm{mid}}(Q) }{Q}. \end{align} Using the numerical results from Section~\ref{sub:offaxisresults}, this ratio can be computed for any given wavelength and at any given phase angle around the Earth. For $Q>$\SI{20000}{\kilo\meter}, all phase angles converge to an $\tfrac{\alpha}{\theta}$ ratio of one over a few thousand. Since the detector has a diffraction-limited angular resolution of $1.22\tfrac{\lambda}{W}$, the source separation ability of the terrascope will be $\sim \tfrac{\lambda H_{\Delta}}{W R}$. This is $\simeq 0.25$\,milliarcseconds for a 1\,metre detector at \SI{1}{\micro\metre} and thus is a factor of $1.22 H_{\Delta}/W$ improved. Aside from resolving a contaminant through angular separation, it may be possible to separate sources (to some degree) based on the distinct temporal lensing light curves that emerge due to the differing geometries.
By using a shade adapted for the Earth, it may be possible to remove flux from the Earth's disk, which greatly outshines the sky brightness. A simple shade would need to be offset from the detector by a distance of $L [b/(W+b)]$ in order to occult the Earth's disk, and have a radius of $R [W/(W+b)]$. For all detectors with $W<$\SI{0.209}{\metre} (corresponding to an effective telescope diameter of \SI{96.9}{\metre}), the effective collecting area of the terrascope exceeds the size of the Earth-shade. It may still be economical to go beyond \SI{0.2}{\metre}, since the cost of a shade is expected to be cheaper than that of a mirror. Scattering from the upper atmosphere will be ever present and represents a source of background (rather than necessarily a source of noise). This background will be strongly dependent upon the relative position of the Sun during the observations. Let us denote the angle subtended from the Earth to the terrascope detector to the Sun as $\Theta$. If \SI{0}{\degree}$<\Theta<$\SI{90}{\degree} or \SI{270}{\degree}$<\Theta<$\SI{360}{\degree}, then the Sun will appear in direct view of the detector, excluding observations during this time. If \SI{90}{\degree}+\SI{18}{\degree}$<\Theta<$\SI{270}{\degree}-\SI{18}{\degree}, then one side of the Earth will be in astronomical twilight, where scattered sunlight cannot interfere (except at the instant of $\Theta=$\SI{180}{\degree}). If the observatory is exactly in the ecliptic plane, then at any one time during this range in $\Theta$ exactly one half of the Earth's circumference will be in astronomical twilight. Accordingly, it is estimated here that the actual amplification from a terrascope will be one half of that depicted in the various figures throughout. This assumes that any part of the Earth which is illuminated will have a background component that is simply not removable.
However, more detailed calculations than possible here may be able to demonstrate that at least some fraction of this lost capability can be recovered through background suppression strategies, such as leveraging polarization, wavelength information, and temporal light curve variations. These are undoubtedly technical challenges for a realized terrascope, but effort should be encouraged to explore overcoming them given the very large gains potentially offered by such a system. \subsection{Atmospheric stability} The refractivity of air at a specific altitude will vary as a function of position and time in a realistic atmosphere. It is argued here that, so long as the terrascope detector is a significant distance away from the inner focal point, these variations will not affect the amplification factor in a meaningful way. Consider a particular location where there is an increase in pressure at altitude $z$ compared to the typical pressure at that altitude. This causes the refractivity to increase, and thus light traveling at that location will now refract too much and miss the detector at distance $L$. However, there must be an altitude $z'>z$ where the pressure decreases back down to the typical pressure, thereby refracting light back onto the detector. In this way, the perfect circular ring image is distorted into an irregular ring -- but the thickness of the ring is the same and thus the amplification is unchanged. \subsection{Pointing} Since an off-axis source still causes significant lensing at \SI{1.4}{\degree} for a Hill sphere terrascope, this denotes the approximate angular band on the sky suitable for observation. This represents just under one percent of the sky. The orbital plane of the detector is a free parameter, but ecliptic observing minimizes the effect of high altitude clouds and Solar scattering, as well as providing the densest field of targets.
Pointing is naturally limited to whatever happens to be behind the Earth at any given time, although fleets of terrascope detectors could increase the coverage as needed. \subsection{Radio terrascope} The calculations of extinction in this work strictly assume optical/infrared light. Moving further out into the radio offers two major advantages, though. First, extinction due to clouds can be largely ignored, allowing for much closer detectors, including on the lunar surface. Second, Solar scattering is far less problematic in the radio, and indeed it is typical for radio telescopes to operate during daylight phases. The simple refraction model of this work was extended to the radio, and indeed the amplification was estimated to be largely achromatic beyond a micron. Nevertheless, the model did not correctly account for the radio refractivity as a function of humidity, nor the impact of the ionosphere on lensed rays. Accordingly, a radio terrascope may be an excellent topic for further investigation. It should be noted, though, that a disadvantage of a giant radio receiver in space is that humanity already regularly builds large receivers on Earth at much lower expense than their optical counterparts. Thus, the benefit of going into space for radio observations may not prove ultimately economical. \section{Introduction} \label{sec:intro} \input{introduction.tex} \section{Modeling Atmospheric Refraction} \label{sec:refraction} \input{refraction.tex} \section{Ray Tracing Simulations} \label{sec:raytracing} \input{raytracing.tex} \section{Calculation Results} \label{sec:results} \input{results.tex} \section{Discussion} \label{sec:discussion} \input{discussion.tex} \acknowledgments DMK is supported by the Alfred P. Sloan Foundation. Thanks to members of the Cool Worlds Lab and the NASA Goddard Institute for Space Science group for useful discussions in preparing this manuscript.
Thanks to Jules Halpern, Duncan Forgan, Caleb Scharf and Claudio Maccone for reviewing early drafts and discussions of this work. Special thanks to Tiffany Jansen for her assistance with coding questions. Finally, thank-you to the anonymous reviewer for their constructive feedback. \vspace{5mm} \software{{\tt lowtran7}\ \citep{lowtran:1988}} \subsection{Generating a training set} \label{sub:trainingset} Two physical principles are critical in consideration of the terrascope: refraction and extinction. The issue of atmospheric extinction will be tackled later in Section~\ref{sub:onaxisextinction}, and thus we first deal with refraction, using the expressions found earlier in Section~\ref{sec:refraction} to ray trace through the Earth's atmosphere. In what follows, rays are only traced from space to the point of closest approach to the Earth's surface. Since the assumed model atmosphere is static and one-dimensional, the egress path will be identical to the ingress path, and one may exploit this symmetry to save computational effort. Before a ray can be traced, it is first necessary to choose how many shells ($N$) will be used for the ray tracing experiment. In general, the greater the number of shells, the more accurate the integration, but at greater computational expense. Further, it is necessary to choose up to what geopotential altitude the atmosphere terminates, $Z$ (technically the atmosphere extends to infinity, but this of course is not computationally reasonable, nor are the standard atmospheres well-defined above \SI{86}{\kilo\metre}). In preliminary ray tracing experiments, it was noted that the deflection angles follow a generally smooth trend with respect to impact parameter until the impact parameter approaches $R$ + 86\,km. At the 86\,km boundary, the refractivity is discontinuous, jumping from a monotonically decreasing smooth function down to zero.
This appears to introduce peculiar behavior for extreme impact parameters, and thus $Z=$\SI{80}{\kilo\metre} was adopted in an effort to avoid this regime. A high but computationally efficient resolution was chosen such that each shell has a thickness of $h=$\SI{10}{\centi\metre}, corresponding to 800,000 steps. Let us choose a particular wavelength, $\lambda$, of light to work with. Using this wavelength, one still needs to choose an impact parameter, $b$, before ray tracing can be executed. To generate a suite of examples, let us define a grid of impact parameters varying from $R$ to $R+Z$, uniformly spaced in geopotential altitude. Since the numerical resolution of the atmosphere is itself 800,000, the resolution used for $b$ cannot be higher than this, and a reasonable choice is to adopt an order of magnitude lower, ensuring the densest possible scan while retaining reliable results. Accordingly, a resolution in $b$ of 80,000 was adopted (i.e. \SI{1}{\metre} step sizes). These experiments essentially generate a training set from which one can learn the relationship between $b$ and the deflection angle, $\Delta$ (as well as other properties). However, the set is conditioned upon a specific choice of $\lambda$. To build a complete training set, it is necessary to also vary $\lambda$. This is done by creating a grid from $\lambda=$\SI{0.2}{\micro\metre} to \SI{30}{\micro\metre} with 219 examples spaced log-uniformly. This wavelength range corresponds to the range of support for the {\tt lowtran7}\ extinction model that will be used later (see Section~\ref{sub:onaxisextinction}). In each run, the total deflection angle, $\Delta$, is recorded, as well as the minimum geopotential height achieved by the ray\footnote{ In this way, $b$ and $D$ are closely related; $b$ is the minimum separation between the Earth's center and the ray in the absence of refraction, whereas $D$ is the same but with refraction turned on. } (``depth''), $D$, and the airmass traversed through, $X$.
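For concreteness, the $(b,\lambda)$ grid described above can be constructed as follows (a schematic sketch; the variable names and the quoted Earth radius are our own illustrative assumptions, not values from the simulation code):

```python
import numpy as np

R = 6_371_000.0   # assumed Earth radius in metres (illustrative value)
Z = 80_000.0      # adopted top of the model atmosphere, 80 km

# 80,000 impact parameters from R upwards, uniformly spaced in
# geopotential altitude at 1 m steps (one tenth of the shell resolution).
b_grid = R + np.linspace(0.0, Z, 80_000, endpoint=False)

# 219 wavelengths spaced log-uniformly over 0.2--30 microns, matching
# the support of the lowtran7 extinction model.
lam_grid = np.logspace(np.log10(0.2e-6), np.log10(30.0e-6), 219)
```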
It was found that impact parameters close to the \SI{80}{\kilo\metre} boundary exhibited slight trend differences from the bulk, suggestive of a numerical error. To alleviate this, samples with $b-R>$\SI{77}{\kilo\metre} were excluded, as well as samples for which the ray strikes the Earth, leaving a total of 10,606,382 ray tracing experiments that were saved. \subsection{Interpolation scheme} \label{sub:interpolation} In order to generalize the numerical results to arbitrary values of $b$ and $\lambda$, one can apply interpolation to the training data. \subsubsection{Critical impact parameter, $b_{\mathrm{crit}}$} For each ray tracing experiment, only $X$, $D$ and $\Delta$ are saved, and so these are the three terms that require interpolation. However, a useful product of these is $b_{\mathrm{crit}}$, the impact parameter at which $D=R$. Since the simulations iterate through in \SI{1}{\metre} steps in $b$, one may simply cycle through the list until $D<R$ and save the previous example as $b_{\mathrm{crit}}$, which will have a maximum associated error of \SI{1}{\metre}. In total, there are 220 training examples of $b_{\mathrm{crit}}$ versus $\lambda$. When cast as $b_{\mathrm{crit}}$ against $\lambda^{-2}$, the relationship appears quasi-linear, and thus it is in this parameterization that the interpolation is performed. Since the number of samples is relatively small, it is feasible to perform Gaussian process (GP) regression \citep{gp1,gp2}, in this case using a squared-exponential kernel. Using exhaustive leave-one-out validation, this process is repeated to evaluate the error in the final estimates. The final interpolative function, shown in Figure~\ref{fig:bcrit}, has a maximal error of one metre, which is the numerical error of the training set to begin with. It therefore represents an excellent predictor and is adopted in what follows.
\begin{figure} \begin{center} \includegraphics[width=1.05\columnwidth,angle=0,clip=true]{bcrit.pdf} \caption{ The critical impact parameter as a function of wavelength. Impact parameters below this will refract so much they strike the Earth. The different lines show the effect of varying the climate model. } \label{fig:bcrit} \end{center} \end{figure} \subsubsection{Airmass, $X$, and depth, $D$} For airmass and depth, a Gaussian process regression is impractical due to the much larger, two-dimensional training set of over 10 million samples. Instead, this large sample is suitable for a dense interpolative net. As with $b_{\mathrm{crit}}$, it was found that $\lambda^{-2}$ provides a more linear basis for training for both $X$ and $D$. The 2D bilinear interpolation therefore maps $\{\lambda^{-2},b\} \to X$ and $D$. Examples of the interpolative function are shown in Figure~\ref{fig:interp}. To provide further intuition and context, we also show the ``effective'' refractivity of the Earth's atmosphere in Figure~\ref{fig:interp}. This is computed by taking the computed deflection angles and solving for the equivalent refractivity needed for that deflection using a single-layer atmosphere. \begin{figure} \begin{center} \includegraphics[width=1.05\columnwidth,angle=0,clip=true]{interpolation.pdf} \caption{ Simulation results are shown in dark gray, interpolative functions in dashed colors. Upper: Airmass traversed for a ray traveling through the Earth's atmosphere from space to its closest approach to the Earth as a function of impact parameter. Middle-upper: Depth of the ray at closest approach. Middle-lower: Deflection angle of the ray by the time it reaches closest approach (rays interior to the critical impact parameter are omitted). Lower: ``Effective'' refractivity of the atmosphere, calculated as described in the main text. } \label{fig:interp} \end{center} \end{figure} Using leave-one-out validation, one may evaluate the error of these interpolations.
This is done by leaving a random example out, re-training the interpolation, and then evaluating the residual between the omitted sample and the interpolated prediction of said point. Since the training is relatively expensive computationally, the aforementioned Monte Carlo experiment is limited to 1000 realizations. This process revealed that the bilinear scheme is able to perform excellent interpolation across the grid. The relative airmass residuals show a non-Gaussian distribution with a standard deviation of 0.68\%, a mean absolute error of 0.13\%, and 99.9\% of samples exhibit an absolute error of less than 0.11\%. For depth, the standard deviation of the absolute residuals is 0.030\,m, the mean absolute error is 0.012\,m, and 99.9\% of samples have an absolute error less than 0.20\,m. \subsubsection{Deflection angle, $\Delta$} For the deflection angle, it was found that simple bilinear interpolation did not provide particularly stable results, too closely tracing out the small numerical errors found in the simulations rather than smoothing over them. Another problem with this scheme was that as the simulations approach $b \to b_{\mathrm{crit}}$, there is no training data for the deflection angle, since it cannot be defined below this point. This led to unstable extrapolations in the final shell. Instead of bilinear interpolation, a Gaussian process with a rational quadratic kernel is trained on each wavelength slice independently. The training data are also thinned by a factor of 100 to expedite the training (leaving approximately 750 samples per slice). Rather than use $\Delta$ as the target function, we use $\log \Delta$, which behaves quasi-linearly with respect to $(b-R)$, the independent variable for the training. The GP is used because it smooths over the numerical noise introduced by the finite machine precision of the calculations. Calling the GP predictor is computationally slow, and so a library of predictions is generated for later use.
This library samples the original simulations at a thinning rate of ten along the $b$-axis, giving 7506 samples. The library is then interpolated using splines whenever one needs to evaluate the deflection angle at intermediate choices of $b$. Comparing to the original samples, the agreement of the final interpolations is better than 1.9\% for all samples, with a standard deviation of 0.83\%. The interpolated function is shown in Figure~\ref{fig:interp}. \subsection{Computing on-axis amplification} \label{sub:onaxis} Amplification is defined here as the intensity received by a detector using the terrascope relative to the intensity the same detector would receive in the absence of the Earth. The latter of these two terms is simply the incident flux multiplied by the collecting area of the detector, $\pi (W/2)^2$, where $W$ is the diameter/aperture of the detector. In the same way, the intensity received by the terrascope can be computed by simply considering the effective light-collecting area. In what follows, it is assumed that the source, the lens and the detector are all perfectly aligned, which is referred to as ``on-axis''. \begin{figure*} \begin{center} \includegraphics[width=17.0cm,angle=0,clip=true]{onaxis.pdf} \caption{ Illustration of a detector of diameter $W$ utilizing the terrascope. Two rays of different impact parameters, but the same wavelength, lens through the atmosphere and strike the detector. The ring formed by those two rays enables a calculation of the amplification. In this setup, the detector is precisely on-axis. } \label{fig:onaxis} \end{center} \end{figure*} The setup is illustrated in Figure~\ref{fig:onaxis}. A ray of wavelength $\lambda$ and impact parameter $b_{-}$ is refracted by an angle $\Delta_{-}$ such that it strikes the lower tip of the detector located at a distance of $L$.
Another ray with the same wavelength but a higher impact parameter, $b_{+}$, is refracted by an angle $\Delta_{+}<\Delta_{-}$ and eventually strikes the upper tip of the detector. It follows that all rays of wavelength $\lambda$ and impact parameter $b_{-} \leq b \leq b_{+}$ will strike the detector. In the on-axis case considered here, along with the assumption of a 1D atmosphere, the problem is symmetric about the $x$-axis and thus the lensing region is a circular ring of area $\pi (b_{+}^2 - b_{-}^2)$, meaning that the amplification, $\mathcal{A}$, is given by \begin{align} \mathcal{A} &= \epsilon \frac{b_{+}^2 - b_{-}^2}{(W/2)^2}, \label{eqn:ampdefinition} \end{align} where $\epsilon$ is a loss parameter describing the degree of extinction. Rather than forming a single focus point, light focuses along a line, much like the case of gravitational lensing. The maximum distance of the focal line is infinity, but the inner distance is well-defined and is labelled as $F$ in what follows. This distance corresponds to a ray striking the Earth at the critical impact parameter, $b_{\mathrm{crit}}$ (for a given wavelength). The focal distance is given by simple trigonometry: \begin{align} F &= b_{\mathrm{crit}} \cot \Delta_{\mathrm{crit}}, \end{align} where \begin{align} \Delta_{\mathrm{crit}} &\equiv \lim_{b \to b_{\mathrm{crit}}} \Delta(b). \end{align} In the wavelength range of \SI{0.2}{\micro\metre} to \SI{30}{\micro\metre}, the inner focus point varies from ${\simeq}$\SI{200000}{\kilo\metre} to ${\simeq}$\SI{350000}{\kilo\metre}, depending on the wavelength and climate model (see Figure~\ref{fig:focus}). This indicates that it would be possible to focus light at the lunar distance itself since the focal line extends to infinity past this inner point. Accordingly, observatories at or beyond the lunar distance could be feasible locations for the terrascope detector.
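A quick numerical reading of this relation is shown below; the $b_{\mathrm{crit}}$ and $\Delta_{\mathrm{crit}}$ values are illustrative round numbers (a critical impact parameter just above the Earth's radius and a critical deflection of roughly $0.023$\,rad), not outputs of the interpolators.

```python
import math

# F = b_crit * cot(Delta_crit): the innermost point of the focal line.
# Inputs below are illustrative assumptions, not interpolator outputs.
def inner_focus_km(b_crit_km, delta_crit_rad):
    return b_crit_km / math.tan(delta_crit_rad)

F = inner_focus_km(6383.0, 0.0228)   # of order the lunar distance, in km
```

Milder deflections (longer wavelengths, higher rays) focus further out, which is why the focal line extends from this inner point to infinity.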
\begin{figure} \begin{center} \includegraphics[width=1.05\columnwidth,angle=0,clip=true]{focus.pdf} \caption{ Location of the inner focal point of the terrascope as a function of wavelength. Rays cannot focus interior to this point because they would strike the Earth's surface. Results shown for six different model temperature-pressure profiles. } \label{fig:focus} \end{center} \end{figure} The impact parameters $b_{+}$ and $b_{-}$ dictating the amplification can be derived by geometrical arguments. Consider a ray of impact parameter $b_{+}$ which deflects by angle $\Delta[b_{+}]$ such that it strikes the upper tip of the detector located at a distance $L$. Since the offset of the upper detector tip from the $x$-axis is $W/2$, one can write that \begin{align} b_{+} - L \tan\Delta[b_{+}] = W/2. \label{eqn:bplus} \end{align} Similarly, consider a ray which passes a little deeper through the planetary atmosphere at impact parameter $b_{-}$, such that it deflects enough to strike the lower tip of the detector, which satisfies \begin{align} b_{-} - L \tan \Delta[b_{-}] = -W/2. \label{eqn:bminus} \end{align} In practice, solutions for these two impact parameters are found through a numerical Nelder-Mead optimizer, because of the subtle dependency of $\Delta$ upon $b$ (see Section~\ref{sub:interpolation}). This optimization is repeated for various choices of $L$, $W$ and $\lambda$. For $\lambda$, the original grid of 220 wavelengths was adopted. For $L$, it was found that the inner focus of the shortest wavelength with the US Standard Atmosphere 1976 model was \SI{281700}{\kilo\metre} and thus a uniform grid was adopted from this distance out to \SI{1500000}{\kilo\metre} (the Hill sphere of the Earth) with 101 steps. Finally, for $W$, five fiducial diameters are adopted: $10^{-2}$, $10^{-1}$, $10^{0}$, $10^1$ and $10^2$ metres.
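In the paper this inversion runs on the interpolated $\Delta(b)$ with a Nelder-Mead optimizer; as an illustrative stand-in, the sketch below instead solves Equations~(\ref{eqn:bplus}) and (\ref{eqn:bminus}) by bisection, using the exponential $\Delta(b)$ approximation introduced in Section~\ref{sub:analytic} and round illustrative numbers for $R$, $H_{\Delta}$, $\Delta_0$ and $L$. Bisection works here because $b - L\tan\Delta(b)$ is monotonically increasing in $b$.

```python
import math

# Solve Eqs. (bplus)/(bminus) for the ring edges b+ and b-.
# All numbers below are illustrative assumptions (km units).
R, H = 6371.0, 6.911      # Earth radius and lensing scale height
DELTA0 = 0.0228           # rad: assumed grazing deflection at b = R
L = 356000.0              # ~lunar distance
W = 1e-3                  # 1-metre detector, in km

def deflection(b):
    """Exponential approximation Delta(b) = Delta0 * exp(-(b-R)/H)."""
    return DELTA0 * math.exp(-(b - R) / H)

def solve_edge(target):
    """Root of b - L*tan(Delta(b)) - target = 0, target = +/- W/2.
    The bracket [R, R+100] satisfies f(lo) < 0 < f(hi)."""
    lo, hi = R, R + 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - L * math.tan(deflection(mid)) - target < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b_plus = solve_edge(+W / 2)
b_minus = solve_edge(-W / 2)
ring = b_plus - b_minus                       # millimetre-scale ring
amp = (b_plus**2 - b_minus**2) / (W / 2)**2   # Eq. (ampdefinition), eps = 1
```

With these assumed numbers the ring is of order a millimetre thick and the extinction-free amplification is of order $10^4$--$10^5$, consistent with the analytic estimates of Section~\ref{sub:analytic}.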
\subsection{Analytic estimates} \label{sub:analytic} Although the numerical experiments provide the most precise view, it is instructive to consider the approximate scaling relations expected. One can note that the $\Delta$ function is approximately log-linear and thus can be approximated as $\Delta \simeq \Delta_0 e^{-(b-R)/H_{\Delta}}$, where $H_{\Delta}$ is an effective scale height for the lensing, equal to \SI{6.911}{\kilo\metre} to within three decimal places for all rays between $\lambda=$\SI{0.2}{\micro\metre} and $\lambda=$\SI{30}{\micro\metre}. Using this approximate formalism, Equations~(\ref{eqn:bplus}) \& (\ref{eqn:bminus}) can be combined to give \begin{align} W =& \Delta b + \nonumber\\ \qquad& L \Bigg( \tan\Big(\Delta_0 e^{-(b_{-}-R)/H_{\Delta}}\Big) - \tan\Big(\Delta_0 e^{-(b_{+}-R)/H_{\Delta}}\Big) \Bigg), \end{align} where $\Delta b = (b_{+}-b_{-})$. Taking a small-angle approximation, replacing $b_{-}=b_{\mathrm{mid}} - \Delta b/2$ and $b_{+}=b_{\mathrm{mid}} + \Delta b/2$, and then Taylor expanding for small $\Delta b$ (thin ring approximation) gives \begin{align} \Delta b \Bigg( 1 + \frac{L}{H_{\Delta}} \Delta_0 e^{-\frac{b_{\mathrm{mid}}-R}{H_{\Delta}}} \Bigg) = W. \end{align} In order to reach the detector, one may write that $\Delta \simeq b/L$, or simply $\Delta \sim R/L$ by noting that $R \gg (b-R)$. This allows us to write that \begin{align} \Delta b \sim& \frac{W}{1 + (L/H_{\Delta}) (R/L)},\nonumber\\ \qquad \sim& \frac{W}{(R/H_{\Delta})}, \label{eqn:approxthickness} \end{align} where the second line has used the fact that $R \gg H_{\Delta}$. Whilst one might naively intuit that $\Delta b \sim W$, the gradient in the refractive index means the rays need to be closer together than this, since even a slight angular difference is magnified over the large distance $L$. The denominator is of order $10^3$ and thus implies that for a one-metre diameter detector, the lensing ring is about a millimetre thick.
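As a back-of-envelope check of Equation~(\ref{eqn:approxthickness}), with the values quoted in the text:

```python
# Thin-ring thickness, Eq. (approxthickness): Delta_b ~ W / (R / H_Delta).
# R and H_Delta are the values quoted in the text (km); W is one metre.
R_KM, H_KM = 6371.0, 6.911
W_M = 1.0
ring_m = W_M / (R_KM / H_KM)   # about a millimetre
```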
The above allows one to approximately estimate the amplification, $\mathcal{A}$. If one writes that $b_{+} = b_{-} + \Delta b$, then Equation~(\ref{eqn:ampdefinition}) becomes $\mathcal{A} = \epsilon b_{-}^2 ((1 + \tfrac{\Delta b}{b_{-}})^2-1)/(W/2)^2$. Since $\Delta b \ll W$ and $b_{-} \sim R$, then provided $W \ll R$ (which practically speaking will always be true), one may write that $\tfrac{\Delta b}{b_{-}} \ll 1$. This permits a Taylor expansion of $\mathcal{A}$ such that \begin{align} \mathcal{A} \simeq 2 \epsilon b_{-} \Delta b/(W/2)^2, \end{align} which can be further refined by adopting $b_{-} \sim R$ and using Equation~(\ref{eqn:approxthickness}) to write \begin{align} \mathcal{A} \sim& 2 \epsilon R \frac{W}{(R/H_{\Delta})} \Big(\frac{4}{W^2}\Big),\nonumber\\ \qquad \sim& 8 \epsilon H_{\Delta}/W. \label{eqn:approxamp} \end{align} Since $H_{\Delta} = $\SI{6.911}{\kilo\metre}, then $\mathcal{A}/\epsilon \sim 55000/W$, which gives a first estimate for the approximate degree of amplification expected. The effective aperture size is given by $\sqrt{\mathcal{A}}$ and thus equals $W_{\mathrm{eff}} \sim 235 \epsilon^{1/2} \sqrt{W/(\mathrm{metres})}$\,metres. \subsection{Computing off-axis amplification} \label{sub:offaxis} Perfect alignment of the source, lens and detector is only ever instantaneous. Whilst useful for estimating the limiting amplification, practically speaking the source spends an infinitesimal amount of time at this position and so the useful lensing time is defined by the off-axis positions. Amplification still occurs off-axis, but now the rays which reach the detector must be deflected by different angles, depending upon whether the rays travel above or below the mid-plane. \begin{figure*} \begin{center} \includegraphics[width=17.0cm,angle=0,clip=true]{offaxis.pdf} \caption{ Same as Figure~\ref{fig:onaxis} except for off-axis lensing. Only the extrema rays in the $z=0$ plane are shown.
} \label{fig:offaxis} \end{center} \end{figure*} Consider that the Earth is offset from the line connecting the source and the detector's mid-point by a distance $Q$, as shown in Figure~\ref{fig:offaxis}. Although in reality the Earth is three-dimensional and rays can take paths other than the four lines shown in this diagram, the four rays represent the most extreme affected paths as a result of the translation shift. All four rays can reach the detector provided the following four conditions hold true \begin{align} &L \tan \Delta[b_{-,d}] - b_{-,d} = +W/2 + Q ,\nonumber\\ &L \tan \Delta[b_{+,d}] - b_{+,d} = -W/2 + Q ,\nonumber\\ &L \tan \Delta[b_{-,u}] - b_{-,u} = +W/2 - Q ,\nonumber\\ &L \tan \Delta[b_{+,u}] - b_{+,u} = -W/2 - Q . \end{align} Or more generally, received rays satisfy \begin{align} \big| L \tan \Delta[b_d] - b_d - Q \big| \leq W/2,\nonumber\\ \big| L \tan \Delta[b_u] - b_u + Q \big| \leq W/2. \label{eqn:updown2D} \end{align} Naturally, if $Q\gtrsim(R+Z)$, then no deflection is required and rays will arrive at the detector unlensed above the detector axis. Consider a ray which now lives out of the plane, with a $\hat{z}$-axis offset of $\beta_{d,z}$. For a ray below the $\hat{x}$-axis, the incident ray has a detection axis offset in the $\hat{y}$-direction of $\beta_{d,y} + Q$, which when combined with the $z$ term gives a Euclidean offset from the detector axis of $\beta_d=\sqrt{(\beta_{d,y} + Q)^2 + \beta_{d,z}^2}$ but an impact parameter from the planet of $b_d=\sqrt{\beta_{d,y}^2 + \beta_{d,z}^2}$. One may write that $\beta_{d,y} = b_d \cos \phi$ and $\beta_{d,z} = b_d \sin \phi$, where $\phi$ is the azimuthal angle about the $\hat{x}$-axis of the incident ray. Now the total offset from the detector axis is given by $\beta_d=\sqrt{b_d^2+Q^2+2 b_d Q \cos\phi}$. Similarly, if the incident ray were above the detector axis, then $\beta_u=\sqrt{b_u^2+Q^2-2 b_u Q \cos\phi}$.
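The two offset expressions just derived can be written directly as functions; a small sketch:

```python
import math

# Euclidean offset of an incident ray from the detector axis, for rays
# below (+Q cross term) and above (-Q cross term) the mid-plane.
def beta_down(b, Q, phi):
    """beta_d = sqrt(b^2 + Q^2 + 2*b*Q*cos(phi))."""
    return math.sqrt(b * b + Q * Q + 2.0 * b * Q * math.cos(phi))

def beta_up(b, Q, phi):
    """beta_u = sqrt(b^2 + Q^2 - 2*b*Q*cos(phi))."""
    return math.sqrt(b * b + Q * Q - 2.0 * b * Q * math.cos(phi))
```

For $Q=0$ both reduce to $b$ (the on-axis case), and at $\phi=0$ they give $b+Q$ and $|b-Q|$ respectively, consistent with the in-plane geometry.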
For a circular detector of radius $W/2$, rays will strike the detector if \begin{align} \big| L \tan \Delta[b_d] - \sqrt{b_d^2+Q^2+2 b_d Q \cos\phi} \big| \leq W/2,\nonumber\\ \big| L \tan \Delta[b_u] - \sqrt{b_u^2+Q^2-2 b_u Q \cos\phi} \big| \leq W/2. \end{align} If $Q=0$ in the above (i.e. on-axis), then these expressions are identical to the previous equations in Section~\ref{sub:onaxis}. Also, setting $\phi=0$ recovers Equation~(\ref{eqn:updown2D}). Accordingly, one sweeps $\phi$ in the range $-\pi/2<\phi<\pi/2$ for both expressions and expects them to meet at the extrema (which is indeed true). Moreover, one can generalize the above pair into a single expression where $-\pi/2<\phi<3\pi/2$: \begin{align} \big| L \tan \Delta[b] - \sqrt{b^2+Q^2+2 b Q \cos\phi} \big| \leq W/2. \end{align} Unlike the on-axis case, the ring of lensed light no longer forms a circle and more closely resembles an egg-shape, which is illustrated in Figure~\ref{fig:shapes}. This occurs because each lensed ray now requires a different deflection angle to reach the detector, as a result of the offset between the source, lens and detector. This, in turn, means that the lensing depth, which strongly controls the deflection angle (see Figure~\ref{fig:interp}), is different for each lensed ray. For $Q \ll R$, the shape is essentially circular; for $Q \sim R$, the shape is highly oval; and for $Q \gg R$, up to a maximum critical point, the shape disappears behind the planet. \begin{figure*} \begin{center} \includegraphics[width=17.0cm,angle=0,clip=true]{shapes.pdf} \caption{ Numerically computed shapes of the lensing strata for three different offsets (red lines). These are the altitudes of the rays above the Earth in order for them to come to a focus point at distance $L$. Black lines show the critical impact parameter inside which rays strike the planet. They represent a kind of refractive surface, below which rays will eventually intercept the physical surface, which is located at the origin.
Calculations use the US Standard Atmosphere 1976, $\lambda=$\SI{0.2}{\micro\metre} and $L = R_{\mathrm{Hill}}$. Shapes are exaggerated by virtue of the subtraction of $R$ off both axes. } \label{fig:shapes} \end{center} \end{figure*} \subsection{Lensing through a 1D static atmosphere} \label{sub:refraction} Consider a luminous source located at a large distance from the Earth such that it can be approximated as a point source (we leave consideration of diffuse sources of emission to future work). Light from the source arrives at the Earth as an approximately plane-parallel wave with a wavelength $\lambda$ and can be represented by a sum of parallel rays, each with an impact parameter $b$ (see Figure~\ref{fig:triplelayer}). The Earth's atmosphere is assumed to be described by a one-dimensional temperature-pressure profile and to be unchanging in time. The Earth's atmosphere is then split up into a series of $N$ shells, within which the temperature, pressure and refractivity are assumed to be constant. \begin{figure*} \begin{center} \includegraphics[width=17.0cm,angle=0,clip=true]{triplelayer.pdf} \caption{ Schematic of an $N=3$ shell atmosphere where (exaggerated) refraction is calculated by considering the interactions at each shell boundary. } \label{fig:triplelayer} \end{center} \end{figure*} The total geopotential altitude of the atmosphere is defined as $Z$, such that each shell has a thickness of $h=Z/N$ and a refractive index of $n_j$. Shell indices are assigned in ascending order such that the deepest layer has the highest index. The outer shell, shell $j=0$, effectively extends out to infinity and has a refractive index of $n_0=1$.
When the ray of light crosses the boundary between shell $j-1$ and shell $j$, the change in atmospheric density leads to a change in the light's speed and thus refraction occurs as described by Snell's law \begin{align} n_{j-1} \sin \theta_{i,j} &= n_j \sin \theta_{r,j}, \end{align} where the indices $i$ and $r$ refer to ``incidence'' and ``refraction''. The deflection angle of the ray is therefore \begin{align} \alpha_j = \theta_{i,j} - \theta_{r,j}. \label{eqn:deflection} \end{align} At the boundary of the $j=1$ shell, the angle of incidence can be written in terms of the impact parameter, $b$, by the trigonometric relation \begin{align} \sin \theta_{i,1} &= \frac{b}{1} \frac{1}{R + N h} \end{align} and thus via Snell's law we also have \begin{align} \sin \theta_{r,1} &= \frac{b}{n_1} \frac{1}{R + N h}. \end{align} The angle of incidence at the boundary of the $j=2$ shell can be deduced from this result, by applying the sine rule inside the triangle subtended from the $j=1$ boundary intersection to the $j=2$ boundary intersection to the Earth's centre: \begin{align} \sin \theta_{i,2} &= \frac{R+ N h}{R + (N-1) h} \sin \theta_{r,1},\nonumber\\ \qquad&= \frac{b}{n_1} \frac{1}{R + (N-1) h}. \end{align} And again using Snell's law this gives \begin{align} \sin \theta_{r,2} &= \frac{b}{n_2} \frac{1}{R + (N-1) h}. \end{align} Continuing this process, it is easy to show that \begin{align} \sin \theta_{i,j} &= \frac{b}{n_{j-1}} \frac{1}{R + (N-j+1) h} \label{eqn:incidence} \end{align} and \begin{align} \sin \theta_{r,j} &= \frac{b}{n_j} \frac{1}{R + (N-j+1) h}. \label{eqn:refraction} \end{align} Using Equations~(\ref{eqn:incidence}) \& (\ref{eqn:refraction}) with Equation~(\ref{eqn:deflection}) allows one to calculate the deflection angle at each shell boundary.
In practice, this is done by using the sine addition rule \begin{align} \sin \alpha_j &= \sin(\theta_{i,j} - \theta_{r,j}),\nonumber\\ \qquad &= \sin \theta_{i,j} \cos\theta_{r,j} - \cos\theta_{i,j}\sin\theta_{r,j},\nonumber\\ \qquad &= \sin \theta_{i,j} \sqrt{1-\sin^2\theta_{r,j}} - \sin\theta_{r,j}\sqrt{1-\sin^2\theta_{i,j}}, \end{align} such that \begin{align} \alpha_j =& \sin^{-1}\Bigg[ \nonumber\\ &\frac{b}{n_{j-1}} \frac{1}{R + (N-j+1) h} \sqrt{1 - \Big(\frac{b}{n_j} \frac{1}{R + (N-j+1) h}\Big)^2 }- \nonumber\\ \qquad& \frac{b}{n_j} \frac{1}{R + (N-j+1) h} \sqrt{1 - \Big(\frac{b}{n_{j-1}} \frac{1}{R + (N-j+1) h}\Big)^2} \Bigg]. \end{align} \subsection{Critical refraction limits} \label{sub:critrefraction} If the impact parameter is low enough, the ray will eventually strike the planetary surface and thus end its journey. Consider the critical case, in which the ray just grazes the planetary surface tangentially, initiated from an impact parameter $b_{\mathrm{crit}}$. If this happens, then it follows that the ray must also reach the deepest atmospheric shell. In that shell, one can draw a right-angled triangle from the planet's center to the two intersection points and write that \begin{align} \sin \theta_{r,N} &= \frac{R}{R+h}. \end{align} Now substituting in Equation~(\ref{eqn:refraction}), we have \begin{align} \frac{b_{\mathrm{crit}}}{n_N} \frac{1}{R + h} &= \frac{R}{R+h}, \end{align} which may be solved to give \begin{align} b_{\mathrm{crit}} \equiv R n_N. \end{align} If the impact parameter is in the range $b_{\mathrm{crit}} < b < R + N h$, then the ray will penetrate the Earth's outer atmospheric shell and continue down to some depth before making its way back out of the atmosphere again. Let us write that the deepest shell reached is given by the index $J_{\mathrm{limit}}$. In order to calculate the total deflection angle, $\Delta$, of a ray, we must calculate $J_{\mathrm{limit}}$, since the per-shell deflections $\alpha_j$ are summed from $j=1$ up to $J_{\mathrm{limit}}$ (Equation~\ref{eqn:totaldeflection}).
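A compact sketch of this shell-by-shell computation, including the $J_{\mathrm{limit}}$ cut-off, is given below. The exponential refractivity profile is a toy stand-in (not one of the model atmospheres), and all numbers are illustrative.

```python
import math

def total_deflection(b, n, R, h):
    """Space-to-space deflection angle for impact parameter b through
    concentric shells of refractive index n[1..N], with n[0] = 1.
    Built from Eqs. (incidence), (refraction) and (deflection)."""
    N = len(n) - 1
    delta = 0.0
    for j in range(1, N + 1):
        r = R + (N - j + 1) * h              # radius of boundary j
        s_i = b / (n[j - 1] * r)             # sin(theta_i,j)
        if s_i > 1.0:                        # shell j is never reached:
            break                            # J_limit = j - 1
        s_r = b / (n[j] * r)                 # sin(theta_r,j)
        # alpha_j = theta_i - theta_r, via sin(A-B) = sinA cosB - cosA sinB
        delta += math.asin(s_i * math.sqrt(1.0 - s_r * s_r)
                           - s_r * math.sqrt(1.0 - s_i * s_i))
    return 2.0 * delta                       # symmetric down-and-out path

# Toy atmosphere: refractivity 2.7e-4 at sea level, 8 km scale height.
R_KM, Z_KM, N = 6371.0, 80.0, 800
h = Z_KM / N
n_prof = [1.0] + [1.0 + 2.7e-4 * math.exp(-(N - j) * h / 8.0)
                  for j in range(1, N + 1)]
d_20 = total_deflection(R_KM + 20.0, n_prof, R_KM, h)   # deeper ray
d_30 = total_deflection(R_KM + 30.0, n_prof, R_KM, h)   # shallower ray
```

As expected, the deeper ray is bent more than the shallower one, since it samples a larger refractivity gradient.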
This may be calculated by considering that the $\cos\theta_{i,j}$ term in the $\Delta$ calculation must be real. The term becomes imaginary if $\sin\theta_{i,j}>1$. Using Equation~(\ref{eqn:incidence}), one sees that this corresponds to \begin{align} \frac{1}{n_{j-1}} \frac{b}{R+(N-j+1) h} > 1. \end{align} Therefore, one may find $J_{\mathrm{limit}}$ by sequentially evaluating the above inequality from $j=1$ upwards until the condition holds true. At this point, the previous shell is assigned as $J_{\mathrm{limit}}$. The total space-to-space deflection angle is then computed as \begin{align} \Delta = 2\sum_{j=1}^{J_{\mathrm{limit}}} \alpha_j, \label{eqn:totaldeflection} \end{align} where the factor of two accounts for the symmetric outbound path (the one-way deflection, from space down to the lowest altitude, is simply half as much). \subsection{Airmass} \label{sub:airmass} Aside from computing the deflection angle and deepest shell layer of an incoming ray, one can also compute the airmass traversed, $X$. The path length can be found by using the sine rule inside the triangles formed between the planet's center and the shell intersection points: \begin{align} d_j &= \Big(\frac{ \sin(\theta_{i,j+1}-\theta_{r,j})}{\sin\theta_{i,j+1}}\Big) \big(R + (N-j+1)h\big). \label{eqn:path} \end{align} Substituting Equations~(\ref{eqn:incidence}) \& (\ref{eqn:refraction}) into Equation~(\ref{eqn:path}), and after simplification, yields \begin{align} d_j =& \sqrt{ (R + (N-j+1)h)^2 - (b/n_j)^2 } \nonumber\\ \qquad& - \sqrt{ (R + (N-j)h)^2 - (b/n_j)^2 }. \label{eqn:pathlength} \end{align} The airmass passed through by a ray is proportional to the path length multiplied by the density. Using the ideal gas law, the density is proportional to pressure over temperature ($P/T$). Accordingly, the airmass is given by \begin{align} X &= \frac{\sum_{j=1}^{J_{\mathrm{limit}}} d_j P_j/T_j}{X_0}, \end{align} where $X_0$ is a constant of proportionality defined such that $X=1$ for a ray which travels from sea level to space along the zenith.
This equates to \begin{align} X_0 \equiv \sum_{j=1}^N h P_j/T_j. \end{align} \subsection{Model atmosphere} \label{sub:modelatmosphere} As stated earlier, this work assumes that within a given shell, the pressure, temperature and refractivity are constant; in other words, a one-dimensional static atmosphere. The purpose of this work is simply to demonstrate the concept of the terrascope. If worthwhile, future work could be undertaken to use more sophisticated atmospheric models accounting for weather, turbulence, and regional differences. For now, the goal is merely to compute the approximate feasibility and properties of the terrascope, by making reasonable but ultimately simplifying assumptions. To accomplish this, the US Standard Atmosphere 1976 \citep{US:1976} was adopted as a fiducial temperature-pressure (TP) profile. This atmosphere can be considered to be an average over the global climate but can be a poor representation of particular local climates. To investigate the impact of differing conditions, five other standard TP profiles were utilized, in particular the same atmospheres as used by {\tt lowtran7}\ \citep{lowtran:1988}. These are the ``tropical'', ``mid-latitude summer'', ``mid-latitude winter'', ``sub-arctic summer'' and ``sub-arctic winter'' models (these are shown in Figure~\ref{fig:TP}). \begin{figure} \begin{center} \includegraphics[width=8.4cm,angle=0,clip=true]{TP.pdf} \caption{ Temperature-pressure profiles for the six standard atmospheres used in this work. } \label{fig:TP} \end{center} \end{figure} The models define a continuous temperature-pressure profile from 0 to 85\,km geopotential altitude ($z$) but with functional changes occurring at six key boundaries distributed in altitude.
Within each of the layers (defined by sharp boundaries), the lapse rate, $\mathbb{L}$, is varied and the pressure computed assuming an ideal gas and a vertical pressure variation of $\mathrm{d}P/\mathrm{d}z = -\rho g$ (where $\rho$ is the density of air and $g$ is the acceleration due to gravity). Temperature, as a function of geopotential altitude, within the $k^{\mathrm{th}}$ layer is defined by \begin{equation} T_k[z] = T_{k-1} + \mathbb{L}_k (z-z_k), \end{equation} where $T_0$ is the temperature at sea level. The pressure is then given by \begin{equation} P_k[z] = \begin{cases} P_{k-1} \exp\Big( -\frac{g_0 M (z-z_k)}{R T_{k-1}} \Big) & \text{if } \mathbb{L}_k=0,\\ P_{k-1} \Big(\frac{T_{k-1}}{T_k[z]}\Big)^{ \frac{g_0 M}{R \mathbb{L}_k} } & \text{otherwise},\\ \end{cases} \end{equation} where $P_0$ is the pressure at sea level and $g_0 M/R = 34.1645$\,K/km. \subsection{Calculating refractivity} \label{sub:refractivity} The refractivity of air, $\eta$, equals the refractive index, $n$, minus unity. Given a shell's pressure and temperature defined by the US Standard Atmosphere 1976, the refractivity may be computed using a semi-empirical formula. In this work, the expression of \citet{birch:1994} is adopted since it is found to be in better agreement with recent measurements than the \citet{edlen:1966} formula, largely due to the increase in ambient carbon dioxide levels \citep{birch:1994}. The refractivity of dry air is thus given by \begin{align} \eta = 10^{-8} P &\Big( C_1 + \frac{C_2}{C_3-\sigma^2} + \frac{C_4}{C_5-\sigma^2} \Big) \nonumber\\ \qquad& \Big( \frac{ 1 + 10^{-10}P (C_6-C_7 T')}{ C_8 (1+C_9 T') } \Big), \end{align} where $T'$ is the temperature in Celsius, $\sigma$ is the reciprocal of the wavelength of light in a vacuum expressed in inverse micrometres, and $C_1$ to $C_9$ are constants.
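A sketch of this evaluation is shown below, using the updated Edl\'{e}n coefficients reported by \citet{birch:1994} for standard dry air ($\sigma$ in inverse micrometres, $P$ in Pa, $T'$ in Celsius). The constants are quoted from that reference and should be checked against it before any quantitative use.

```python
# Dry-air refractivity following the updated Edlen equations of
# Birch & Downs (1994). Constants quoted from that reference.
def refractivity_dry_air(lam_um, P_pa, t_c):
    sigma2 = (1.0 / lam_um) ** 2
    # Dispersion of standard air, (n - 1)_s * 1e8:
    ns = 8342.54 + 2406147.0 / (130.0 - sigma2) + 15998.0 / (38.9 - sigma2)
    # Pressure/temperature (density) correction:
    factor = (P_pa * (1.0 + 1e-8 * (0.601 - 0.00972 * t_c) * P_pa)
              / (96095.43 * (1.0 + 0.003661 * t_c)))
    return ns * 1e-8 * factor

eta = refractivity_dry_air(0.633, 101325.0, 15.0)   # sea-level-like value
```

At sea-level-like conditions this gives a refractivity of a few times $10^{-4}$, consistent with the familiar $n \approx 1.0003$ for visible light in air.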
The calculations described throughout the rest of the paper were also repeated using moist air refractivity, instead of dry air, using the appropriate correction \citep{birch:1994}. However, this was found to produce very minor changes to the results and thus the dry air formula is used in what follows. It is instructive to consider the approximate relationship as well. Refractivity is proportional to the gas density. For an isothermal atmosphere, one expects $\rho \propto e^{-z/H}$, where $H$ is the scale height and $z$ is the altitude. Accordingly, one expects $\eta \propto e^{-z/H}$ too. \subsection{Aperture scaling} \label{sub:onaxisaperture} For a one-metre diameter telescope, typical non-extincted amplifications are found to be in the range of 50,000 to 80,000, using the numerical methods described in Section~\ref{sec:raytracing}. For a one-metre aperture, the lensing ring is just over a millimetre in thickness. For other aperture sizes, the thickness is found to scale linearly with the aperture diameter (see Figure~\ref{fig:ampplot}), i.e. $\Delta b = (b_{+} - b_{-}) \propto W$. These numerical results agree with the approximate analytic estimates deduced earlier in Section~\ref{sub:analytic}. Changing the telescope aperture has a dramatic impact on the amplification. A clear pattern is that the amplification scales as $1/W$. For example, the amplification of a 10-metre detector is ten times less, i.e. 5,000 to 8,000. This result was also found in our earlier approximate analytic estimates in Section~\ref{sub:analytic}. This scaling indicates that one may simply consider the results for a fixed fiducial detector and scale appropriately. In what follows, $W=$\SI{1}{\metre} is adopted and thus the amplification of such an aperture is denoted as $\mathcal{A}_0$. Taking the amplification and the aperture size used, one can estimate what the effective aperture of a conventional telescope would have to be to match the terrascope.
Using the scaling law just described, this allows the effective aperture to be compactly expressed as \begin{align} \Big(\frac{W_{\mathrm{eff}}}{\mathrm{metres}}\Big) &= \sqrt{ \mathcal{A}_0 \epsilon \Big(\frac{W}{\mathrm{metres}}\Big) }. \end{align} This reveals that the effective aperture of the terrascope equals the actual aperture when $W = \mathcal{A}_0$\,metres, i.e. ${\sim}$\SI{80}{\kilo\metre}, setting an upper limit for the useful size of a terrascope observatory. \subsection{Distance dependency} \label{sub:onaxisdistance} Figure~\ref{fig:ampplot} shows the amplification as a function of $L$, illustrating how there is an overall drop-off in amplification away from the inner focus. Although the overall maximum occurs at the inner focus, a curious second maximum occurs at around $L=$\SI{500000}{\kilo\metre} but appears highly chromatic. These maxima all correspond to rays with a depth of ${\simeq}H_{\Delta}$, revealing their commonality. \begin{figure} \begin{center} \includegraphics[width=1.05\columnwidth,angle=0,clip=true]{distance_plot.pdf} \caption{ Amplification (lower panel) of a 1\,metre detector using the terrascope as a function of separation from the Earth, $L$, for four different wavelengths of light. The upper panel shows the corresponding depth of the ray, and the middle panel shows the lensing ring width. } \label{fig:ampplot} \end{center} \end{figure} Consider fixing $L$ to several plausible options as depicted in Figure~\ref{fig:lunar}, which shows the wavelength-dependent amplification for a one-metre aperture. The greatest telluric depth of the rays received by the detector is shown in the second panel of that figure, illustrating how redder light needs to travel deeper to reach the observatory. The airmass traversed is shown in the top panel.
\begin{figure} \begin{center} \includegraphics[width=1.05\columnwidth,angle=0,clip=true]{lunar.pdf} \caption{ The airmass traversed, telluric depth and amplification as a function of wavelength for a 1\,metre telescope at five possible locations. } \label{fig:lunar} \end{center} \end{figure} In all cases, the rays travel through a substantial amount of airmass and thus one might question whether atmospheric extinction would overwhelm any gains made by the terrascope setup. Two forms of extinction are considered here: clear-sky scattering and interception with clouds. These are dealt with separately in what follows. \subsection{Clear-sky extinction} \label{sub:onaxisextinction} To estimate extinction, the {\tt lowtran7}\ transmittance and radiance package is used \citep{lowtran:1988}. Practically speaking, the code used is a {\tt python}\ wrapper implementation of {\tt lowtran7}\ (available at \href{https://github.com/scivision/lowtran}{this URL}), where the {\tt TransmittanceGround2Space.py} script is run setting the zenith angle to $90^{\circ}$. {\tt lowtran7}\ computes transmittance from the UV/optical out to \SI{30}{\micro\metre}, and thus this defines the wavelength range considered in what follows. The code is run for 41 choices of observer height, from \SI{0.01}{\kilo\metre} to \SI{100}{\kilo\metre} in log-uniform steps. The \SI{100}{\kilo\metre} run is so close to 100\% transmittance that it is defined as such in what follows, to provide a crisp boundary condition for interpolation. Intermediate observer heights are then interpolated as desired using splines. The amplification after extinction for a given observatory may now be computed. This is done by evaluating the {\tt lowtran7}\ spectral interpolator at a depth equal to the depth travelled by the lensed rays, which is itself a function of wavelength.
Since {\tt lowtran7}\ assumes ground-to-space (although ``ground'' here is really just a user-chosen altitude), the space-to-ground-to-space transmission will simply be the self-product. Finally, this function is then multiplied by the chromatic amplification function for the lunar observatory. \begin{figure} \begin{center} \includegraphics[width=1.05\columnwidth,angle=0,clip=true]{extinction.pdf} \caption{ Amplification after extinction expected for a 1\,metre diameter telescope at the Earth's Hill radius (top), half the Hill radius (middle) and the Moon's separation (bottom). Six atmosphere models are shown (same color coding as Figure~\ref{fig:focus}), which control temperature-pressure profiles (and thus refractivity profile) as well as the extinction computed using {\tt lowtran7}. All models assume no clouds. Standard photometric filters highlighted in gray, except for L, M and N which are slightly offset to encompass the optimal regions. } \label{fig:extinction} \end{center} \end{figure} As a test of the {\tt lowtran7}\ model, the transmission was converted to an equivalent atmospheric extinction coefficient for some common optical filters. The extinction coefficient for B-band was found to be 0.45, V-band 0.28, R-band 0.19, I-band 0.086 and H-band 0.080. These all line up with typical coefficients for a good observing site\footnote{ See \href{http://spiff.rit.edu/classes/phys445/lectures/atmos/atmos.html}{this URL}\ for a pedagogical description of extinction coefficients and some typical values. }. Figure~\ref{fig:extinction} shows the amplification for a 1\,metre terrascope observatory after accounting for the {\tt lowtran7}\ extinction. Despite the extinction, amplification up to 70,000 remains feasible. One can see from Figure~\ref{fig:extinction} that extinction is severe for detectors at the Moon's orbital radius, since lensed rays need to travel deep through the Earth's atmosphere, down to an altitude of just a couple of km (see Figure~\ref{fig:ampplot}).
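The conversion between transmission and extinction coefficient, together with the self-product rule for the round trip, can be sketched with the standard relation $k = -2.5\log_{10} T$ magnitudes per airmass. This is a check of internal consistency only; the V-band transmission below is back-computed from the quoted $k_V = 0.28$, not taken from {\tt lowtran7} itself.

```python
import math

def extinction_coefficient(T):
    """Extinction in magnitudes per airmass for one-way transmission T."""
    return -2.5 * math.log10(T)

def round_trip_transmission(T_one_way):
    """Space-to-ground-to-space transmission: the self-product of the
    one-way (ground-to-space) transmission, as described in the text."""
    return T_one_way ** 2

# A one-way transmission of ~77.3% reproduces the quoted V-band
# coefficient of 0.28 mag/airmass:
T_V = 10 ** (-0.28 / 2.5)
assert abs(extinction_coefficient(T_V) - 0.28) < 1e-9
```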
As we move out in orbital radius, sufficient lensing is obtained at higher altitudes, thereby reducing the effect of atmospheric extinction, with clear benefits to such detectors. \subsection{Interception by clouds} \label{sub:onaxisclouds} The grazing nature of the terrascope lensed rays means that interception by clouds has the potential to dramatically attenuate the overall transmission through the atmosphere. Even wispy high-altitude cirrus clouds, with an optical depth of ${\sim}0.1$ and \SI{1}{\kilo\metre} thickness scale, can appear completely opaque to terrascope rays since the path length can be up to ${\sim}$\SI{100}{\kilo\metre}. This simplifies the analysis, since one can simply assume that encountering any kind of cloud leads to zero transmission. The real question is then the frequency with which rays intercept a cloud. It is important to recall that for rays lensed onto a Hill sphere terrascope, the deepest altitude penetrated by the ray is \SI{13.7}{\kilo\metre} (see Figure~\ref{fig:lunar}), and at this altitude there are almost no clouds. In contrast, if $L \sim F$, then $(D-R)\sim0$ and lensed rays will have to traverse not only a large airmass but also most likely intercept opaque clouds during their journey. On the other hand, observatories away from $F$ require less deflection and thus need not travel so deep through the Earth's atmosphere, largely avoiding clouds. The relationship between $L$ and $(D-R)$ is well-constrained from our simulations. The first thing to highlight is that redder than about a micron, the refraction is almost achromatic and thus the lensing depth is approximately constant for a given $L$ (this is apparent from Figure~\ref{fig:lunar}). Thus, one can simply take $\lim_{\lambda \to \infty} (D-R)$ as an excellent approximation for wavelengths redder than a micron.
The second thing to highlight is that if one varies $L$ from $F$ out to $R_{\mathrm{Hill}}$ in 100 uniform steps, the relationship is tight and monotonic, empirically found to be described by \begin{align} \lim_{\lambda \to \infty} (D-R) \simeq a_0 (1 - a_1 e^{-L/a_2}), \label{eqn:empiricaldepth} \end{align} where for the US Standard Atmosphere 1976 model one obtains $a_0=$\SI{15.54}{\kilo\metre}, $a_1 = 1.829$ and $a_2 = $\SI{551,100}{\kilo\metre} (to four significant figures). To estimate the effect of clouds, this work uses data from the High-resolution Infrared Radiation Sounder (HIRS) satellite instrument. Statistical properties of clouds have been catalogued with multi-year observations taken from polar orbit and have been described in the literature \citep{wylie:1994,wylie:1998}. This work uses the data made available at \href{ftp.ssec.wisc.edu}{this FTP}. Within a field of view of approximately \SI{20}{\kilo\metre} by \SI{20}{\kilo\metre}, HIRS determines the effective cloud fraction, $N \epsilon$, where $N$ is the frequency of clouds and $\epsilon$ is the emissivity, which approximately equals one minus the transmission, $T$. Averaging over all longitudes, latitudes and months, the average effective cloud fraction for all clouds below a pressure level of \SI{950}{\milli\bar} is 76.6\%. At or below a pressure level of \SI{200}{\milli\bar}, the effective cloud fraction has dropped to 5.4\%. The global averages are shown in Figure~\ref{fig:cloudmap}, where pressure levels have been converted to altitudes. It is found that the nine available data points, for any given location, are well described by a broken power-law, with a break at around one scale height (as shown by the smooth function overplotted in the left panel of Figure~\ref{fig:cloudmap}).
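The empirical depth relation of Equation~\ref{eqn:empiricaldepth} is easy to evaluate and reproduces the depths quoted elsewhere in the text. A short sketch (the Hill radius value of 1.5 million km is an approximation introduced here):

```python
import math

# Fit coefficients quoted for the US Standard Atmosphere 1976 model
# (red-limit lensing depth, four significant figures):
A0, A1, A2 = 15.54, 1.829, 551100.0   # km, dimensionless, km

def lensing_depth(L_km):
    """Red-limit telluric depth (D - R), in km, for detector distance L."""
    return A0 * (1.0 - A1 * math.exp(-L_km / A2))

R_HILL = 1.5e6  # km, approximate Earth Hill radius (assumption)

# Reproduces the depths quoted in the text:
assert abs(lensing_depth(R_HILL) - 13.7) < 0.05      # one Hill radius
assert abs(lensing_depth(R_HILL / 2) - 8.23) < 0.05  # half a Hill radius
```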
\begin{figure*} \begin{center} \includegraphics[width=17.0cm,angle=0,clip=true]{cloudmaps.pdf} \caption{ Left: Effective cloud fraction, $N \epsilon$, averaged over all months and locations as measured over 11-years by HIRS, as a function of altitude \citep{wylie:1994,wylie:1998}. Right: Two example cloud maps from the data plotted on the left. Note how high altitude clouds are much more common around the equatorial regions. } \label{fig:cloudmap} \end{center} \end{figure*} To generalize the HIRS data to arbitrary locations and altitudes, the data set is first interpolated to a regular grid, and each location is then fitted using the broken power-law. The interpolation is necessary because no data is available north of \SI{84}{\degree} latitude or south of \SI{-84}{\degree}. To interpolate, longitudinal great circles are drawn around the Earth in one-degree intervals and the data is then wrapped around to ensure a continuous periodic function. A Gaussian Process with a Mat\'{e}rn-3/2 kernel is trained at each pressure level across all latitudes and used to fill in the missing latitudes. The broken power-law is then fitted to each one-square-degree location independently, where the free parameters are two slopes, one offset and one transition point. To simplify the analysis, only $L>R_{\mathrm{Hill}}/2$ is considered in what follows, meaning that $\lim_{\lambda \to \infty} (D-R)>$\SI{8.2}{\kilo\metre}. At these altitudes, only high-altitude cirrus clouds are present. With these points established, it is now possible to estimate the impact of clouds on terrascope rays. It is stressed that the following is an approximate estimate, and more detailed cloud modeling is encouraged in future work to refine the estimate made here. The purpose of this section is merely to gauge the approximate feasibility of a terrascope when including clouds.
If one assumes a terrascope detector orbiting in the Earth's equatorial plane, then lensed rays will be described by a great circle of constant longitude (or really one constant longitude plus another offset by \SI{180}{\degree}). Since the HIRS public data used here has a resolution of \SI{1}{\degree}, one can draw 180 such great circles, representing different rotational phases of the Earth. Working in the equatorial plane is not only a simplifying assumption but also minimizes the impact of high altitude clouds, which are more frequent in equatorial regions (see Figure~\ref{fig:cloudmap}). For each great circle, there are 360 different locations (spread across latitude) sampled in the (interpolated) HIRS data. Since a terrascope detector located at $L=R_{\mathrm{Hill}}/2$ has focused red rays which traverse a depth of $(D-R) =$\SI{8.229}{\kilo\metre}, at each location one can evaluate the cumulative effective cloud fraction above this altitude using the broken power-law described earlier. Effective cloud fraction is not equal to cloud frequency. Fortunately, for high altitude clouds ($>$\SI{6}{\kilo\metre}), the approximate relationship $N \simeq 2 N \epsilon$ may be used \citep{wylie:1998}. Thus, at each of the 360 points along the great circle, the cloud frequency for all clouds above \SI{8.2}{\kilo\metre} altitude can be estimated. Since the cumulative fraction is defined as all altitudes above altitude $z$, and a terrascope ray indeed is forced to pass through all altitudes above $z$, the complement of the cloud frequency, $1-N$, may be interpreted as a time-averaged transmission fraction for the depth $(D-R)$. The total transmission can now be estimated by simply averaging over all such values along the great circle. \begin{figure} \begin{center} \includegraphics[width=1.05\columnwidth,angle=0,clip=true]{cloudtransmission.pdf} \caption{ Estimated transmission through the Earth's atmosphere due to clouds for a terrascope detector at a distance $L$.
At one Hill radius (\SI{1500000}{\kilo\metre}), lensed rays travel no deeper than \SI{13.7}{\kilo\metre} and thus largely avoid clouds, thereby losing less than 10\% of the lensed light. } \label{fig:cloudextinction} \end{center} \end{figure} The results of this calculation are shown in Figure~\ref{fig:cloudextinction}. A clear exponential trend is apparent in the results, highlighting as expected how more distant terrascope observatories are less affected by clouds. For the lowest $(D-R)$ allowed by our model, of \SI{6}{\kilo\metre}, $L=$\SI{600,000}{\kilo\metre} and the average cloud transmission is 41.4\%. Moving out to $R_{\mathrm{Hill}}/2$, the situation is decidedly better with an average transmission of 64.9\%, and by the time $L=R_{\mathrm{Hill}}$, 91.9\% of the lensed rays make it through the atmosphere unimpeded by clouds. In conjunction with the earlier extinction calculations, these results strongly suggest that a terrascope detector as close to $L=R_{\mathrm{Hill}}$ as possible would optimize the setup. \subsection{Off-axis lensing} \label{sub:offaxisresults} Off-axis lensing was calculated using the method described in Section~\ref{sub:offaxis}. The terrascope detector is fixed to a distance of $L=R_{\mathrm{Hill}}$ and to $W=$\SI{1}{\metre} in what follows. The shape of the lensed source around the Earth was computed for the US Standard Atmosphere 1976 model at various off-axis distances ranging from $Q=$\SI{0}{\kilo\metre} to $Q=$\SI{40000}{\kilo\metre} in \SI{1000}{\kilo\metre} steps. As with the images shown in Figure~\ref{fig:shapes}, the rings are often egg-shaped (see Figure~\ref{fig:offaxisresults}) and thus the area was calculated through numerical integration along 2000 uniformly spaced choices of $\phi$, yielding an amplification value. The calculation was then repeated across the same grid of wavelengths used earlier in Section~\ref{sub:trainingset}. The amplification computed above describes the idealized case with no extinction.
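Stepping back to the cloud calculation of the previous subsection, the great-circle averaging step can be sketched as follows. The cloud-frequency values in the example are invented for illustration; in the paper they are derived from the HIRS effective cloud fractions via the broken power-law fits.

```python
def great_circle_transmission(cloud_frequencies):
    """Time-averaged cloud transmission along one great circle.

    Because any intercepted cloud is treated as opaque, a location with
    cloud frequency N transmits a fraction (1 - N) of the time; the
    great-circle value is the mean over all sampled locations.
    """
    survivals = [1.0 - N for N in cloud_frequencies]
    return sum(survivals) / len(survivals)

# Illustrative only: four locations with made-up high-cloud frequencies.
assert abs(great_circle_transmission([0.1, 0.0, 0.2, 0.1]) - 0.9) < 1e-9
```

Averaging this quantity over all 180 great circles (rotational phases) then gives the overall transmission estimates quoted in the text.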
Since the shape is saved in each simulation, this information can be used to estimate the fraction of lost light due to clear-sky extinction and also that of clouds, using the same methods described earlier in Sections~\ref{sub:onaxisextinction} \& \ref{sub:onaxisclouds}. Since the depth varies as a function of $\phi$ along the ring, the overall transmission is given by the amplification multiplied by the mean of the extinction over all 2000 phase points (where extinction here includes both the clear-sky and cloud components). The resulting amplifications from this process are shown in Figure~\ref{fig:offaxisresults}. \begin{figure*} \begin{center} \includegraphics[width=17.0cm,angle=0,clip=true]{offaxis_results.pdf} \caption{ Off-axis lensing through the terrascope. Panel [A] shows the amplification after extinction for $\lambda=$\SI{1.74}{\micro\metre}. Panel [B] shows the simulated lensed images for 41 evenly spaced offset distances from $Q=$\SI{0}{\kilo\metre} to $Q=$\SI{40000}{\kilo\metre}. Panel [C] shows the spectral amplification at six different offset distances. } \label{fig:offaxisresults} \end{center} \end{figure*} For offsets of $Q>$\SI{18900}{\kilo\metre}, the amplification has dropped to less than half of the on-axis value. This may be converted into a timescale by noting that at one Hill radius, a satellite would have a tangential velocity of ${\simeq}$\SI{0.5}{\kilo\metre\per\second}. Accordingly, the lensing timescale would be ${\sim}$\SI{20}{\hour} including both sides of the off-axis lensing, or roughly a day. During this time, the target has moved by approximately \SI{1.4}{\degree} on the sky.
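The quoted lensing timescale follows from simple kinematics; a back-of-envelope check using the two numbers given in the text:

```python
Q_HALF = 18900.0    # km, offset at which the amplification halves
V_TAN = 0.5         # km/s, tangential speed of a satellite at one Hill radius

# Both sides of the off-axis lensing contribute, so the window is 2 * Q_HALF:
t_hours = 2.0 * Q_HALF / V_TAN / 3600.0
assert 20.0 < t_hours < 22.0   # ~21 h, i.e. roughly a day, as quoted
```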
\section{Introduction} \label{Sec:Introduction} Machine learning (ML) has been extensively studied in recent years. Currently, the data-driven ML procedure dominates the artificial intelligence field, and countless applications have emerged in many application domains, including the ones that the Internet of Things (IoT) targets. With the fast-growing interest in, and technological support for, IoT systems, the number of IoT devices deployed in our everyday environments is increasing significantly. GSMA Intelligence~\footnote{https://www.gsmaintelligence.com/} forecasts that the number of IoT devices will increase to 25 billion by 2025. IoT devices with various sensors will appear ubiquitously, capturing data for application scenarios ranging from healthcare and surveillance to environmental monitoring. Data generated from these devices provide opportunities for applying data-driven ML methods to address their application goals. With the hope of processing information locally on the device itself, on-device ML integrates ML algorithms on the IoT devices themselves, and is becoming an active area of research. Largely, on-device ML consists of on-device inference and training operations. On-device inference refers to deploying pre-trained ML models on devices and locally performing inference operations such as classification or regression. Researchers have presented extensive works~\cite{Comprehensive_Benchmark,nemo,DeepCache,appqueryseroniotcamera,DeepWear,alookatdnnonsamrtphone,asymo,spinn,ulayer,nestDNN} using such an ``inference-embedded'' system architecture. On-device training, on the other hand, takes a step further and aims to \textit{train} the models locally. Note that until now, due to computational power limitations, cloud-based (or server-based) training was the mainstream approach, where local data is shared with the server and the training operations execute on remote platforms.
Later, the model is re-distributed to local IoT devices so that they can perform their inference operations. However, since the capability to locally train an ML model can preserve precious network bandwidth and limited battery budgets, and at the same time keep the raw data local to preserve privacy, researchers have recently started to propose schemes to train ML models on-device, despite the computational resource limitations~\cite{patil2022poet, Mandheling, minilearn, Melon, Sage, ren2021tinyol, cai2020tinytl, lin2022device}. Still, these works are in their early stages, and we believe that a comprehensive view of what has been done and what is left can catalyze further research ahead. Unlike previous work that analyzes the research space from an algorithmic perspective~\cite{dhar2021survey}, we take a systems view and present a survey of systems and frameworks for supporting on-device learning. \subsection{Motivation} Advances in Machine Learning (ML) - including Deep Learning (DL) - have resulted in tremendous improvements in solving various types of problems in vision, natural language processing, machine translation, etc., and across different application domains such as biology and healthcare, the automotive industry, smart cities and many more. Some of these advancements, however, have remained merely potential improvements and have not yet been realized, because of the existing challenges in real-world scenarios. For example, many advanced ML models require large cloud servers with high-throughput accelerators such as GPUs~\cite{patil2022poet}. Recently, efforts have been made to deploy such ML models on less resource-rich edge devices such as smartphones and embedded platforms. However, such models are typically restricted to inference, while the training is still mostly performed in a less limited environment.
We would like to push this even further, by deploying ML models on resource-constrained devices such as sensors and actuators with limited battery and connectivity. This comes with several advantages. \begin{itemize} \item From the device perspective, the main advantage is that learning can take place even in the absence of an Internet connection. Moreover, there is no need to upload data to the cloud and/or download an updated model back, which in itself saves bandwidth and reduces latency and energy consumption. The latter is important for low-power IoT devices, where communication is typically far more expensive than computing. It is also important when dealing with time-critical applications that should react quickly, without a need to communicate with a server or over a network. \item From a data perspective, training on local devices is privacy-preserving by design. Privacy is a critical issue in many real-world use cases and has slowed down the deployment of AI models in such cases. Enabling on-device training removes this obstacle and enables the utilization of ML models in practice. \item From a modeling perspective, devices can become smarter as they can cope with model drift problems~\cite{modeldrift} and retrain the deployed pre-trained models in order to adapt to the environment and even the end-user. For example, a medical device can over time learn to provide personalized predictions or services that fit the specific conditions of a certain patient. \end{itemize} \subsection{Challenges} On-device learning entails multiple core challenges. Firstly, the available resources on a device are extremely limited. The limitations include memory, battery, computation and communication resources. This poses a major challenge for deploying ML models that are both effective and efficient. Secondly, existing cloud-based training optimization techniques are not feasible for on-device training.
The reason is that these techniques are designed for specific hardware with toolkits, such as NVIDIA and Arm GPUs. These GPUs have far more resources and inherently support parallel computing. For example, the GeForce RTX 4090~\footnote{https://www.nvidia.com/sv-se/geforce/graphics-cards/40-series/rtx-4090/} features 24~GB of memory and a 2.52~GHz boost clock. Thirdly, IoT devices are heterogeneous, which hinders the design of a general method for all devices. \subsection{Scope of this Survey} This survey summarizes current state-of-the-art systems research on on-device training. These systems aim to enable neural network training on resource-constrained devices, including modern smartphones and microcontrollers. In addition, they focus on single-device training. Other similar works, including Gao et al.~\cite{jang2020knowledge}, Xun et al.~\cite{xun2021drone} and Zeng et al.~\cite{Mercury}, use less resource-constrained devices, e.g., NVIDIA Jetson\footnote{https://developer.nvidia.com/embedded-computing}, to perform on-device training. We exclude them since this survey focuses on on-device training for resource-constrained devices. The articles we survey are published in top-tier conferences on relevant topics, including mobile computing, wireless and mobile networking, and machine learning. Specifically, we searched the publications of the past four years (2019--2022) from IPSN, MobiCom, MobiSys, SenSys, and EWSN, as well as AAAI, ICLR, ICML, NeurIPS, and IJCAI. Apart from these conferences, we also used Google Scholar and Microsoft Research to find papers. The keywords employed to obtain these papers are on-device/edge/device + learning/training/adaptation/update. Overall, we choose and analyze eight existing systems. \subsection{Outline} The rest of the paper is organized as follows. Section~\ref{Sec:systems} provides an overview of current on-device training systems. Section~\ref{Sec:devices} describes the target devices of current works.
Section~\ref{Sec:ml} presents analysis from the machine learning perspective. Section~\ref{Sec:optimaztiontechs} presents optimization techniques. Before presenting the conclusions, Section~\ref{Sec:result} discusses the results of current state-of-the-art systems. \section{Overview of the Existing Systems} \label{Sec:systems} We survey the following systems for on-device training: POET~\cite{patil2022poet}, Mandheling~\cite{Mandheling}, MiniLearn~\cite{minilearn}, Melon~\cite{Melon}, SAGE~\cite{Sage}, TinyOL~\cite{ren2021tinyol}, TinyTL~\cite{cai2020tinytl} and TTE~\cite{lin2022device}. We observe that these systems' designs follow two different paradigms: the one-stage paradigm and the two-stage paradigm. One-stage systems apply various optimization techniques, such as pruning and quantization, to perform model training. Differing from the one-stage paradigm, the two-stage paradigm has an additional preparation stage, where systems prepare the computing graph and generate the training plan before performing model training. The plan is used to schedule when and how to use specific techniques during training. \subsection{One-stage Systems} This section summarizes the one-stage systems MiniLearn~\cite{minilearn}, TinyOL~\cite{ren2021tinyol}, and TinyTL~\cite{cai2020tinytl}. \subsubsection{MiniLearn} MiniLearn aims to enable the re-training of Convolutional Neural Networks (CNNs) on IoT devices, to empower on-device ML applications to adapt dynamically to real deployment environments and overcome the gap between pre-collected training datasets and real-world data. The targeted device is a microcontroller with an nRF-52840 SoC that features a 32-bit ARM Cortex-M4 with FPU at 64 MHz, 256 KB of RAM, and 1 MB of flash. MiniLearn applies quantization and dequantization techniques to enable model re-training on resource-constrained devices.
The key idea is to store the weights and intermediate outputs in integer precision and dequantize them to floating-point precision during training. MiniLearn consists of the following three steps: (1) dequantizing and pruning of filters, (2) training the filters, and (3) fine-tuning the fully connected layers. \fakepar{Dequantizing and pruning of filters} First, MiniLearn performs dequantization. It keeps the first layer in integer-precision format to retain the input shape (the input is a tensor with a specific shape) of the NN and dequantizes the other layers to floating-point format. Then, MiniLearn prunes less significant filters during training. The authors use the L1-norm as a static method for selecting filters: filters with small L1 norms are less important to the network. Finally, MiniLearn adjusts the output layer based on the fine-tuning demand, i.e., it adjusts the number of neurons in the output layer according to the number of target output classes. \fakepar{Training the filters} Firstly, MiniLearn pre-processes the training set and filters. It collects the output of the first layer in compressed quantized (INT8) format as the training set, which is converted to floating-point format during training. On the other hand, it initializes the model from the quantized pre-trained model and converts it to floating-point format. Then, MiniLearn performs the training. \fakepar{Fine-tuning fully connected layers} MiniLearn quantizes the pruned filters back to integer-precision format. It freezes the filters and performs a few extra epochs of fine-tuning on the fully-connected layers to compensate for the potential loss of information caused by pruning and quantization. \subsubsection{TinyOL} TinyOL mainly aims to cope with model drift~\cite{modeldrift}, which refers to the decay of models' performance caused by changes in real-world environments. TinyOL applies online learning to perform on-device training.
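MiniLearn's store-as-integer, train-as-float idea described above can be illustrated with a standard affine INT8 quantization scheme. This is a sketch: the scale and zero-point values are illustrative, and the real system operates per filter on the device.

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """INT8 -> float32, as when stored weights are loaded for training."""
    return scale * (q.astype(np.float32) - zero_point)

def quantize(x, scale, zero_point):
    """float32 -> INT8, as when pruned filters are frozen again after training."""
    q = np.round(x / scale + zero_point)
    return np.clip(q, -128, 127).astype(np.int8)

# Round trip: values representable on the INT8 grid survive unchanged.
scale, zp = 0.05, 0
w = np.array([-0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q = quantize(w, scale, zp)
assert np.allclose(dequantize(q, scale, zp), w)
```

The memory saving is the point: weights live in flash/RAM at one byte each and are only expanded to floats while gradients are being computed.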
Online learning~\cite{hoi2021online} refers to training during inference, i.e., learning from a sequence of data instances one by one, at each inference step. In contrast, traditional training algorithms are called offline learning: they first prepare a training dataset and then train the model. The authors re-train a pre-trained autoencoder on an Arduino Nano 33 BLE board that features a Cortex-M4 CPU running at 64 MHz with 256 KB of SRAM. TinyOL attaches one additional TinyOL layer to the NN or replaces one layer of the NN with the TinyOL layer. TinyOL keeps tuning the additional layer during the inference process through online learning. Specifically, the learning process consists of three steps and follows an online learning workflow. Firstly, the NN receives the input data and the data sample is fed to the TinyOL layer. Secondly, TinyOL calculates the loss and gradient. Thirdly, it performs gradient descent on the additional layer. In addition, TinyOL modifies the output layer according to the task. For instance, a classification model needs to adjust the output layer when required to classify a new object that is not contained in the original training set. \subsubsection{TinyTL} The goal of TinyTL is to train deep CNNs on memory-constrained devices. Specifically, the authors train ProxylessNAS-Mobile~\cite{cai2018proxylessnas} on a Raspberry Pi~1, which features 256 MB of RAM. The authors argue that techniques which reduce the number of trainable parameters, such as pruning, do not fully solve the memory footprint problem in on-device training. Therefore, TinyTL enables on-device training by tuning only the biases of the CNN and freezing the parameters of the filters, which reduces the memory footprint significantly. In a CNN, the number of bias parameters is far smaller than the number of filter parameters. However, tuning only the biases leads to limited generalization ability.
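TinyOL's three-step loop described above can be sketched as a single trainable output layer updated one sample at a time. This is a minimal sketch: the squared-error loss, class name, and shapes are illustrative assumptions, not TinyOL's actual implementation.

```python
import numpy as np

class TinyOLLayer:
    """A single trainable layer appended to a frozen network (sketch).

    Per incoming sample: (1) run inference, (2) compute the loss
    gradient, (3) take one gradient-descent step on this layer only.
    """
    def __init__(self, n_in, n_out, lr=0.01):
        self.W = np.zeros((n_out, n_in))
        self.b = np.zeros(n_out)
        self.lr = lr

    def update(self, h, y):
        """h: frozen-network features for one sample; y: target vector."""
        pred = self.W @ h + self.b             # (1) inference
        err = pred - y                         # (2) gradient of 0.5*||err||^2
        self.W -= self.lr * np.outer(err, h)   # (3) one SGD step on W ...
        self.b -= self.lr * err                #     ... and on the bias
        return pred
```

Because only this layer's parameters and one sample are in memory at a time, the scheme fits the microcontroller's SRAM budget.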
To compensate for this, the authors propose the lite residual module, a module that employs group convolution~\cite{NIPS2012_c399862d} to reduce the number of channels and a $2\times 2$ average pooling to downsample the input feature map. The lite residual module refines the intermediate feature maps, and its workflow is similar to that of the bias module. TinyTL adds the lite residual module to the CNN to reduce the memory footprint of training while keeping the performance of the model high. \subsection{Two-stage Systems} The two-stage paradigm is popular within current systems research. POET, Mandheling, Melon, Sage, and TTE follow this paradigm. As mentioned before, two-stage systems have an extra preparation stage compared to one-stage systems. During the preparation stage, these systems pre-process models, generate the computing graph (DAG), and/or find an optimal or suboptimal execution plan for model training. Notably, the execution plan schedules the training process, for example, by applying suitable techniques according to the characteristics of operations. However, this does not mean that the training will strictly follow one chosen schedule during runtime. At runtime, the resource budgets are dynamic, and hence the optimal schedule also changes dynamically; the schedule should therefore be adapted correspondingly. For example, Melon generates several execution plans during the first stage. During runtime, Melon switches plans according to the dynamic runtime budgets. Melon's evaluation shows that this achieves better results than statically following one schedule. \subsubsection{POET} POET (Private Optimal Energy Training) aims to enable training state-of-the-art NNs on memory-scarce and battery-operated edge devices. The authors devise POET to enable on-device training of DNNs such as ResNet-18~\cite{he2016deep} and BERT~\cite{devlin2018bert}.
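Returning to TinyTL's bias-only tuning described above, the memory argument is that the bias gradient depends only on the upstream gradient, while the weight gradient additionally needs the stored layer input. A minimal sketch for a dense layer (function names are illustrative):

```python
import numpy as np

def backward_bias_only(grad_out):
    """d(loss)/d(bias) of a dense layer: just the upstream gradient.

    This is the saving TinyTL exploits: with weights frozen, the layer
    input x never needs to be kept in memory for the backward pass.
    """
    return grad_out

def backward_full(grad_out, x_saved):
    """Full backward pass: the weight gradient outer(grad_out, x) forces
    the input activation x_saved to be stored at forward time."""
    return np.outer(grad_out, x_saved), grad_out

g = np.array([1.0, -2.0])   # upstream gradient
x = np.array([3.0, 4.0, 5.0])  # input activation, needed only for dW
dW, db = backward_full(g, x)
assert dW.shape == (2, 3) and np.allclose(db, backward_bias_only(g))
```

Since activations dominate training memory in deep CNNs, dropping the need to store them outweighs the modest reduction in trainable parameters.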
The target devices include the ARM Cortex-M0 class MKR1000, the ARM Cortex-M4F class nRF52840, the A72 class Raspberry Pi 4B+, and the NVIDIA Jetson TX2. \fakepar{First stage} Firstly, given an NN model, POET profiles the memory and computation costs of its operators. Operators denote the computing operations of the ML model, such as non-linearity functions and convolutions. Based on the resource costs of the operators, POET chooses a suitable technique to optimize each operator during training. Secondly, given the hardware constraints, POET generates a Mixed Integer Linear Program (MILP) to search for the optimal schedule of paging~\cite{peng2020capuchin} and rematerialization~\cite{chen2016training}. At last, POET ships the schedule to the target edge device to perform memory-efficient ML training. The objective function of the MILP is: \begin{equation} \min \sum_{T}\left[R \, \Phi_{compute} + M_{in} \Phi_{pagein} + M_{out} \Phi_{pageout}\right]_T, \end{equation} where $\Phi_{compute}$, $\Phi_{pagein}$ and $\Phi_{pageout}$ denote the energy consumption of each computing node, page-in and page-out operation, respectively. Besides, $R$ refers to recomputing operations, $M_{in}$ represents paging a tensor from secondary storage into RAM, and $M_{out}$ refers to the reverse. \fakepar{Second stage} POET follows the schedule to perform model training by applying paging and rematerialization. \subsubsection{Mandheling} Mandheling is designed to train DNNs on modern smartphones. The authors' starting point is that Digital Signal Processors (DSPs) are particularly suitable for integer operations, and DSPs are ubiquitously available on modern smartphones. Therefore, the authors leverage CPU-DSP co-scheduling with mixed-precision training algorithms to enable on-device training. \fakepar{First stage} Mandheling calls this the preparation stage. Mandheling starts with model transformation. The target models can either be pre-trained or randomly initialized, based on different frameworks such as TensorFlow and PyTorch.
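The flavour of POET's MILP objective above can be conveyed with a toy per-tensor choice between recomputation and paging. All energy costs below are made up for illustration; the real system profiles them per operator and solves a joint MILP under memory and deadline constraints rather than this greedy comparison.

```python
# Hypothetical per-operation energy costs (arbitrary units):
PHI_COMPUTE, PHI_PAGEIN, PHI_PAGEOUT = 3.0, 1.0, 1.0

def energy(recompute):
    """Energy of handling one evicted tensor under a boolean choice."""
    if recompute:
        return PHI_COMPUTE                # R * Phi_compute
    return PHI_PAGEOUT + PHI_PAGEIN       # M_out + M_in round trip

# Pick the cheaper option for this (made-up) cost profile:
best = min((energy(c), c) for c in (True, False))
assert best == (2.0, False)   # paging wins here; a slow bus would flip it
```

The point is that neither technique dominates universally, which is exactly why POET searches over the combined schedule instead of hard-coding one.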
Given a model, Mandheling translates it into the FlatBuffer format, a quantization-supporting format used in TensorFlow Lite. In the execution stage, Mandheling generates the CPU and DSP compute-subgraphs and then executes them on the Android device. \fakepar{Second stage} Mandheling calls this the execution stage. Mandheling fully leverages DSPs for efficient model training by using four key techniques: CPU-DSP co-scheduling, self-adaptive rescaling, batch splitting, and DSP compute-subgraph reuse. CPU-DSP co-scheduling reduces the overhead of CPU-DSP context switching by avoiding offloading DSP-unfriendly operators, such as normalization, dynamic rescaling, and compute-graph preparation, to the DSP. Normalization, for example, is DSP-unfriendly because of its irregular memory access. Self-adaptive rescaling refers to performing rescaling periodically instead of every batch. Rescaling denotes tuning the scale factor, which is itself a trainable parameter used to scale the output of each layer. Dynamic rescaling (adjusting the scale factor every batch) guarantees convergence accuracy but at least doubles the latency compared to static rescaling (using a fixed scale factor). In their experiments, the authors found that the scale factor merely jumps between 10 and 11 and changes infrequently, about once every 10 to 60 batches. Based on this observation, they propose to rescale periodically instead of every batch. Batch splitting splits batches based on their workload. Specifically, Mandheling first identifies the abnormal batches with a noticeably higher workload than the others, then splits them into multiple micro-batches and executes these individually. DSP compute-subgraph reuse is the fourth technique. The DSP compute-subgraph comprises operators with their inputs, outputs, and parameters. Current training methods prepare a new compute-subgraph for each training batch.
The authors find that models are rarely modified during training. Therefore, reusing the compute-subgraph eliminates the preparation overhead. In addition, Mandheling releases the most recently used (MRU) memory when memory runs out. The reason is that the allocation/deallocation of memory for subgraph reuse always follows the execution order of the DNN, which means the MRU memory region has the longest reuse distance. \subsubsection{Melon} Melon is based on the observation that constrained memory hinders training performance. Concretely, a sufficiently large batch size and batch normalization layers are two key factors for guaranteeing the convergence accuracy of large NNs. Resource-constrained devices cannot support large batch sizes during training since a large batch size leads to high peak memory usage. However, when training a model with batch normalization layers on small batches, the mean and deviation of the small-batch samples are not representative of the distribution of the whole dataset. This slows down the training process and reduces the generalization ability of the model. Melon aims to solve these two problems. On the hardware side, the authors conduct experiments on four Android phones with different SoCs and memory capacities. Melon integrates micro-batching~\cite{huang2019gpipe} and recomputation~\cite{chen2016training} to reduce memory consumption. The main idea is to minimize the recomputation overhead by scheduling when and which tensors (intermediate outputs of hidden layers) should be discarded or recomputed. Melon employs a tensor-lifetime-aware algorithm for memory layout optimization, since the performance of recomputation highly depends on memory management. Melon divides tensors into two categories and manages them separately. Firstly, for tensors with long lifetimes, such as a tensor produced during the forward pass and released during the backward pass, Melon follows a First-Produced-Last-Released order.
For other tensors, it follows a greedy way to produce and release them. Overall, the key idea is to place the long-lifetime tensors beneath the short-lifetime ones to consolidate the overall memory layout. Based on the tensor-lifetime-aware memory pool, the authors present a memory-calibrated progressive recomputation method. The method takes the whole operator DAG as input. To estimate the benefit of recomputing each tensor, the authors define a metric, TPS, as shown in Equation~\ref{eq:tps}. \begin{equation} TPS = \frac{TensorSize \times FreedLifetime}{RecomputationTime} \label{eq:tps} \end{equation} where $TensorSize$ refers to the size of the activation, $FreedLifetime$ denotes the lifetime span between discarding and recomputing it, and $RecomputationTime$ indicates the time needed to recompute the discarded activation. When performing recomputation, Melon continuously discards the tensor with the maximal TPS and calibrates the memory pool until the pool size is smaller than the budget. \fakepar{First stage} Melon calls this the decision stage. At this stage, it generates the optimal execution plan under the given budgets. Specifically, it obtains the runtime information via an execution profiler that records the NN operators and intermediate tensors. The tensor information consists of the data-flow dependencies, the computation time of each operator, and the size and lifetime of each tensor. Based on this profile, Melon generates the execution plan. The plan determines: (1) where each tensor is placed in a large memory pool, (2) which operators need to be recomputed, and (3) how to split the batch. Melon generates multiple plans for different budgets, but only once. \fakepar{Second stage} Melon calls this the execution stage. At this stage, it performs the training based on a suitable plan. Notably, Melon can switch to a more suitable plan when the memory budget changes.
The switching strategy is a ``lazy'' one, i.e., Melon switches to a new plan when the current training batch ends. \subsubsection{Sage} Sage aims to enable state-of-the-art DNN training on smartphones. Concretely, the authors implement Sage to train ResNet-50~\cite{he2016deep}, DenseNet-121~\cite{huang2017densely}, MobileNetV2~\cite{sandler2018mobilenetv2}, and BERT-small~\cite{devlin2018bert} on smartphones with different SoCs, including the Samsung Galaxy S10 and Note 20. The authors also demonstrate that the scarce and dynamic memory of mobile devices is the major bottleneck of on-device training. They find that the memory usage of training is 5 to 100 times higher than that of inference; concretely, training consumes multiple gigabytes of memory. Therefore, Sage employs dynamic gradient checkpointing~\cite{kirisame2020dynamic} and dynamic gradient accumulation to adjust the memory usage to the available system memory budget. Dynamic gradient checkpointing shares the same idea as rematerialization: dropping intermediate results and recomputing them when needed. For dynamic gradient accumulation, the authors use a traditional micro-batch technique~\cite{stein2021latency} that dynamically adjusts the micro-batch size according to the runtime memory availability to reduce peak memory usage. \fakepar{First stage} In this stage, Sage constructs a computation graph (DAG) with automatic differentiation (AD). Concretely, each DNN node is expressed through differentiable operation (DO) and computable operation (CO) abstractions. In this way, the DAG decouples the evaluation and differentiation of the DNN computation graph. Then, Sage performs both graph- and operator-level optimizations on the DAG. \fakepar{Second stage} This stage is also called the graph execution stage. Sage employs a hybrid strategy that adaptively combines gradient checkpointing and gradient accumulation at runtime to execute the DAG.
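The gradient-accumulation half of this strategy can be sketched in a few lines. The following is a minimal pure-Python illustration on a toy linear model of our own choosing, not Sage's implementation: summing per-micro-batch gradients and dividing by the batch size reproduces the full-batch mean gradient exactly, while only one micro-batch of activations needs to be live at a time.

```python
# Sketch of gradient accumulation on a toy linear model y = w * x with a
# squared-error loss. The full-batch mean gradient is reproduced by
# accumulating per-micro-batch gradient sums, so peak activation memory
# scales with the micro-batch size rather than the full batch size.

def grad_sum(w, xs, ys):
    """Sum of d/dw (w*x - y)^2 over one micro-batch."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys))

def accumulated_grad(w, xs, ys, micro_batch):
    total = 0.0
    for i in range(0, len(xs), micro_batch):   # one micro-batch at a time
        total += grad_sum(w, xs[i:i + micro_batch], ys[i:i + micro_batch])
    return total / len(xs)                      # mean gradient over the batch

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 1.5
full = grad_sum(w, xs, ys) / len(xs)            # full-batch gradient
micro = accumulated_grad(w, xs, ys, micro_batch=2)
print(full, micro)                              # identical up to float rounding
```

The "dynamic" part of Sage's technique then amounts to choosing `micro_batch` at runtime according to the currently available memory.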
\subsubsection{TTE} The authors present the Tiny Training Engine (TTE) to cope with the two main challenges of on-device training: (1) optimizing quantized models is difficult; the deployed model is quantized, which leads to lower convergence accuracy than the unquantized model because of the low precision and the lack of batch normalization layers; and (2) the hardware resources of tiny devices are limited; the memory usage of full back-propagation easily exceeds the memory of microcontrollers. The authors use three popular TinyML models in their experiments: MobileNetV2, ProxylessNAS, and MCUNet~\cite{lin2020mcunet}. The experiments are performed on the STM32F746 microcontroller, which features 320KB of SRAM and 1MB of Flash, using a batch size of one. The authors verify that the quantization process distorts the gradient update, which causes the lower convergence accuracy of quantized models. To address this problem, they propose quantization-aware scaling (QAS). The key idea of QAS is to scale the gradients by the square of the scaling factors corresponding to the precision, compensating for the distortion caused by quantization. TTE then combines QAS with a sparse update technique to overcome the second challenge. \fakepar{First stage} In the first stage, the TTE engine works in three steps. First, TTE moves the auto-differentiation from the runtime to compile time and generates a static backward computing graph. Second, TTE prunes away gradient nodes with the sparse layer update method. Third, TTE reorders the operators so that the gradient update is applied to a specific tensor immediately, before back-propagating to earlier layers, and the gradient can be released early. \fakepar{Second stage} TTE executes the computing graph.
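The role of the squared scaling factor can be illustrated with a back-of-the-envelope sketch. This is our reading of the idea, with assumed toy values, not TTE's actual code: with a per-tensor scale $s$ and quantized weight $\bar{w} = w/s$, the chain rule gives $\partial L/\partial \bar{w} = s\,\partial L/\partial w$, so a plain SGD step in the quantized domain moves the effective weight $w = s\bar{w}$ by $s^2$ times the intended FP32 step; rescaling the gradient by the squared scaling factor restores an FP32-sized update.

```python
# Sketch (our reading of quantization-aware scaling, not TTE's exact code).
# A naive SGD step on the quantized-domain weight distorts the effective
# update by s**2; compensating with the squared scaling factor restores it.

s = 0.05            # per-tensor scaling factor (assumed toy value)
w = 0.8             # FP32 weight
g_w = 2.0           # gradient dL/dw from backprop
lr = 0.1

w_q = w / s                      # quantized-domain weight (rounding omitted)
g_wq = s * g_w                   # chain rule: gradient in the quantized domain

naive = s * (w_q - lr * g_wq)            # effective weight after a naive step
scaled = s * (w_q - lr * g_wq / s**2)    # step with the s**2 compensation
fp32 = w - lr * g_w                      # reference FP32 update

print(naive, scaled, fp32)   # naive is off by a factor ~s**2; scaled matches
```

Whether the compensation is written as a multiplication or a division by $s^2$ depends on which domain the gradient is expressed in; the sketch keeps everything in the quantized domain.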
\section{Device Categories} \label{Sec:devices} We categorize devices into three categories based on their hardware resources, mainly memory, since this is typically the major constraint for on-device training~\cite{patil2022poet}: high-constraint, middle-constraint, and low-constraint devices. High-constraint devices have hundreds of kilobytes of memory, middle-constraint devices hundreds of megabytes, and low-constraint devices less than ten gigabytes. Table~\ref{tab:targetdevice} shows the systems research works and their target devices. Table~\ref{tab:device} lists the hardware features of the devices that existing works have used. \begin{table} \caption{Overview of the targeted devices of current systems research.} \label{tab:targetdevice} \begin{tabular}{P{0.11\textwidth}P{0.16\textwidth}P{0.16\textwidth}} \Xhline{2\arrayrulewidth} Research & Device category & Device \\ \Xhline{2\arrayrulewidth} \multirow{3}{*}{POET~\cite{patil2022poet}} & Low-constraint & Jetson TX2\\ &Middle-constraint & Raspberry Pi 4B+\\ &High-constraint & MKR1000 and nRF-52840 \\ \hline SAGE~\cite{Sage} & Low-constraint & Smartphones \\ \hline Mandheling~\cite{Mandheling} & Low-constraint & Smartphones \\ \hline Melon~\cite{Melon} & Low-constraint & Smartphones \\ \hline TinyTL~\cite{cai2020tinytl} & Middle-constraint & Raspberry Pi 1\\ \hline MiniLearn~\cite{minilearn} & High-constraint & nRF-52840 \\ \hline TinyOL~\cite{ren2021tinyol} & High-constraint & Nano 33 BLE \\ \hline TTE~\cite{lin2022device} & High-constraint & STM32F746 \\ \hline \end{tabular} \end{table} \begin{table} \caption{Hardware features of the targeted devices.} \label{tab:device} \begin{tabular}{P{0.15\textwidth}P{0.10\textwidth}P{0.18\textwidth}} \Xhline{2\arrayrulewidth} Devices & Memory & Clock Speed \\ \Xhline{2\arrayrulewidth} Jetson TX2 & 8GB & 2GHz \\ \hline Smartphones & 4-8GB & 900MHz-2.84GHz \\ \hline Raspberry Pi 4B+ & 2GB & 1.5GHz \\ \hline Raspberry Pi 1 &
256MB & 250 MHz \\ \hline Nano 33 BLE & 1MB & 64MHz \\ \hline STM32F746 & 320KB & 216 MHz \\ \hline nRF-52840 & 256KB & 64MHz \\ \hline MKR1000 & 256 KB & 48 MHz\\ \hline \end{tabular} \end{table} \section{Summary from the Machine Learning Perspective} \label{Sec:ml} This section analyses the current systems from the ML perspective. The analysis focuses on two aspects: the categories of NNs that are evaluated in the selected papers and the training strategy. \subsection{Evaluated Neural Networks} As shown in Table~\ref{tab:targetNN}, NNs for computer vision tasks are the most frequent targets among current on-device training works. POET~\cite{patil2022poet}, Mandheling~\cite{Mandheling}, SAGE~\cite{Sage}, Melon~\cite{Melon}, and TTE~\cite{lin2022device} are all evaluated by training various computer vision models, and MiniLearn~\cite{minilearn} is designed for training Convolutional Neural Networks on devices. Apart from computer vision models, BERT~\cite{devlin2018bert}, which is widely used and highly successful in natural language processing tasks, is another popular model in on-device training works such as POET and SAGE. Furthermore, the authors of TinyTL~\cite{cai2020tinytl} and TTE also evaluate their systems on ProxylessNAS, an important Neural Architecture Search (NAS)~\cite{elsken2019neural} algorithm. NAS is widely used for the automatic design of NNs and outperforms hand-designed architectures in many cases~\cite{zoph2016neural, zoph2018learning}. TinyOL~\cite{ren2021tinyol} is evaluated by training an autoencoder~\cite{hinton1993autoencoders}, an unsupervised learning technique for representation learning that is also widely used in knowledge distillation~\cite{gou2021knowledge}.
\begin{table} \caption{Overview of the Neural Networks evaluated by current systems research.} \label{tab:targetNN} \begin{tabular}{P{0.11\textwidth}P{0.32\textwidth}} \Xhline{2\arrayrulewidth} Research & Neural Networks \\ \Xhline{2\arrayrulewidth} POET & VGG16~\cite{simonyan2014very}, ResNet-18~\cite{he2016deep} and BERT~\cite{devlin2018bert}\\ \hline Mandheling & VGG-11/16/19~\cite{simonyan2014very}, ResNet-18/34~\cite{he2016deep} and InceptionV3~\cite{szegedy2016rethinking} \\ \hline SAGE & ResNet-50~\cite{he2016deep}, DenseNet-121~\cite{huang2017densely}, MobileNetV2~\cite{sandler2018mobilenetv2} and BERT-small~\cite{devlin2018bert} \\ \hline Melon & MobileNetV1~\cite{howard2017mobilenets}, MobileNetV2, SqueezeNet~\cite{iandola2016squeezenet} and ResNet-50\\ \hline TinyTL & ProxylessNAS-Mobile~\cite{cai2018proxylessnas} \\ \hline MiniLearn & CNNs with three to four convolutional layers and two to three fully connected layers \\ \hline TinyOL & Autoencoder~\cite{hinton1993autoencoders} \\ \hline TTE & MobileNetV2, ProxylessNAS~\cite{cai2018proxylessnas} and MCUNet~\cite{lin2020mcunet} \\ \hline \end{tabular} \end{table} \subsection{Transfer Learning} Regarding the training strategies, we find that all systems except Mandheling apply a transfer-learning setting. The authors of Mandheling perform both transfer learning and training from scratch, i.e., initializing the model with random weights. The authors suggest that transfer learning is more suitable for resource-constrained devices since it converges in fewer iterations, which leads to lower resource usage. Transfer learning has the following workflow: first, pre-train the model on the cloud or a server, then deploy the model, and finally perform on-device training to update the model.
The reason is that training a model from scratch is too heavy for devices, from both an energy and a resource consumption perspective, even with state-of-the-art optimization methods~\cite{Mandheling}. On the other hand, it is unnecessary to train models from scratch: the ML community has produced plenty of NNs, datasets, and pre-trained models. Depending on their research questions, system researchers can pick a suitable NN, train a model in the cloud, deploy the model to the device, and perform transfer learning there. This workflow is more efficient and feasible than deploying a randomly initialized model to the device and training it from scratch. Figure~\ref{fig:workflow} shows a generic workflow of on-device training. \begin{figure}[!h] \centering \includegraphics[width=0.8\linewidth]{figures/worflow.png} \caption{A generic workflow of on-device training} \label{fig:workflow} \end{figure} \section{Training Optimization Strategies} \label{Sec:optimaztiontechs} To reduce memory consumption, the surveyed works apply or combine different optimization strategies, which fall into three categories: memory optimization, model optimization, and extra-hardware assistance. \subsection{Memory Optimization} All the above works except TinyTL and TinyOL enable on-device training by applying various memory optimization techniques. The peak memory usage of model training is far larger than that of inference, from four up to 103 times larger~\cite{qu2022p, Sage, lin2022device}, due to the large number of activations, such as the outputs of the hidden layers. Therefore, on-device training needs memory optimization techniques to avoid out-of-memory issues and to improve training throughput. The main techniques current works use are recomputation, paging, and micro-batching. \fakepar{Recomputation} Recomputation refers to deleting intermediate activations, such as the results of non-linear functions and the outputs of hidden layers, to free up memory, and recomputing them when they are needed.
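The memory/compute trade behind recomputation can be illustrated with a toy example. The following pure-Python sketch (our own construction, under the assumption of a chain of squaring layers $y = x^2$, whose backward pass needs each layer's input since $d(x^2)/dx = 2x$) keeps only every $K$-th activation as a checkpoint and recomputes the discarded ones segment by segment during the backward pass:

```python
# Toy sketch of recomputation (gradient checkpointing) on a chain of N
# squaring layers. Only every K-th activation is kept; the rest are
# recomputed from the nearest checkpoint when the backward pass needs them,
# trading extra forward compute for lower peak activation memory.

N, K = 4, 2                        # 4 layers, keep every 2nd activation

def forward(x):
    ckpts = {0: x}                 # store checkpoints only, discard the rest
    for i in range(N):
        x = x * x
        if (i + 1) % K == 0 and i + 1 < N:
            ckpts[i + 1] = x
    return x, ckpts

def backward(ckpts, grad_out):
    for start in sorted(ckpts, reverse=True):   # walk segments back to front
        x = ckpts[start]
        inputs = []
        for _ in range(start, min(start + K, N)):
            inputs.append(x)       # recompute the discarded layer inputs
            x = x * x
        for inp in reversed(inputs):
            grad_out *= 2 * inp    # chain rule d(x**2)/dx = 2x per layer
    return grad_out

y, ckpts = forward(2.0)
grad = backward(ckpts, 1.0)
print(y, len(ckpts), grad)         # stores 2 tensors instead of N
```

Here the chain computes $y = x^{16}$, so the recovered gradient must equal $16x^{15}$; storing all activations would keep $N$ tensors, while checkpointing keeps roughly $N/K$ plus one segment recomputed at a time.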
POET, Melon, and SAGE apply recomputation to reduce their peak memory usage. \fakepar{Paging} Paging is a widely used memory management technique, employed, for instance, in various operating systems. POET applies paging to reduce peak memory usage during training. \fakepar{Micro-batch} Micro-batching is a training optimization technique that uses smaller batch sizes during training. A smaller batch size reduces peak memory usage but may negatively affect the performance of models with batch normalization layers; hence, this is a trade-off between feasibility and accuracy. Mandheling, Sage, and Melon use this technique in their systems. \subsection{Model Optimization} Besides the memory optimization techniques described above, on-device training systems also leverage model-side optimization techniques, including quantization, dequantization, mixed precision, sparse updates, and adjusting the architecture of NNs, e.g., replacing layers or inserting new layers into the NN. \fakepar{Quantization} Quantization for deep learning refers to using low-precision formats, such as INT8 and INT16, to represent the parameters of NN models and to store intermediate activations, which normally use a floating-point format. Quantization reduces the memory usage of both inference and training, but at the cost of reduced model accuracy. Researchers widely use this technique to design efficient ML algorithms; Mandheling, MiniLearn, TTE, and TinyTL apply it. \fakepar{Dequantization} Dequantization is the converse process of quantization: converting the quantized parameters and intermediate activations back to a high-precision format. MiniLearn combines quantization and dequantization to enable on-device training.
\fakepar{Mixed-precision algorithms} Researchers have proposed many mixed-precision algorithms~\cite{lin2017towards, lin2015neural, jacob2018quantization, courbariaux2015binaryconnect} for training, in which the weights and intermediate activations are represented not only in FP32 but also in lower-precision formats such as INT8 and INT16. Mixed-precision training is the key technique used in Mandheling. \fakepar{Pruning} Pruning can roughly be divided into network pruning and activation pruning~\cite{liu2018dynamic}. Network pruning removes the less important units of the NN. Activation pruning~\cite{liu2018dynamic} builds a dynamic sparse computation graph to prune activations during training. MiniLearn uses network pruning as one of its main techniques. \fakepar{Sparse update} A sparse update refers to re-training only part of the NN and freezing the other parameters during training. For example, for a CNN model, TinyTL only re-trains the biases and freezes the parameters of the filters. MiniLearn, TinyTL, and TinyOL apply this strategy. \fakepar{Adjusting the architecture of NNs} TinyTL and TinyOL enable on-device learning by modifying the architecture of NNs. They design an extra layer and insert it into the NN, or use it to replace an original layer of the NN. During training, TinyOL only tunes the extra layer to achieve highly efficient model training on resource-constrained devices. \subsection{Extra-hardware Assistance} Mandheling leverages a Digital Signal Processor (DSP) to enable training models on smartphones, since a DSP is particularly suitable for integer operations and ubiquitously available on modern SoCs. \section{Performance Gains} \label{Sec:result} This section describes the baselines and the key results current systems achieve. The results include memory usage reduction, accuracy improvement, training speed improvement, and energy savings. \subsection{Baselines} The baselines that current works use for evaluation mainly consist of four types.
First, baselines based on TensorFlow Lite~\cite{li2020tensorflow} and MNN~\cite{jiang2020mnn}, two efficient and lightweight deep learning frameworks. Second, existing model training optimization techniques. Third, baselines that use only a single optimization technique. Fourth, training models in a powerful computing environment with ample memory and computing resources, such as the cloud or powerful servers. The first three types of baselines show that a system outperforms existing frameworks and demonstrate the advantage of integrating multiple optimization techniques; the fourth quantifies the resource savings the systems achieve. Table~\ref{tab:baselines} presents the baselines used in each of the surveyed works. Apart from these four types, some works also compare their systems to training without any optimizations. \begin{table} \caption{Baselines of the surveyed works.} \label{tab:baselines} \begin{tabular}{P{0.11\textwidth}P{0.32\textwidth}} \Xhline{2\arrayrulewidth} Research & Baselines \\ \Xhline{2\arrayrulewidth} POET & Capuchin~\cite{peng2020capuchin}, POFO~\cite{beaumont2021efficient}, rematerialization (recomputation)-only, and paging-only baselines \\ \hline Mandheling & TensorFlow Lite and MNN \\ \hline SAGE & Ideal baseline, dynamic gradient checkpointing (DGC)-only, and dynamic gradient accumulation (DGA)-only baselines \\ \hline Melon & MNN and ideal baseline \\ \hline TinyTL & Bias-only and fine-tuning baselines, including fine-tuning the last layer, the full NN, or the last layer and normalization layers \\ \hline MiniLearn & Ideal baseline and training without any optimizations\\ \hline TinyOL & Training without any optimizations \\ \hline TTE & Fine-tuning baselines, including only fine-tuning biases, as well as fine-tuning weights and biases of the last few layers.
\\ \hline \end{tabular} \end{table} \subsection{Memory Usage Reduction} Memory consumption is the main concern for on-device learning. This section quantifies the memory reductions the different systems achieve. \fakepar{Mandheling} Mandheling implements baselines based on MNN and TensorFlow Lite. Mandheling consistently and remarkably outperforms these baselines, reducing memory consumption by up to 12.5 times. \fakepar{MiniLearn} The authors separately evaluate the memory consumption of training and of inference with the re-trained model. For training, when using 100 samples to re-train, MiniLearn reduces flash consumption by up to 58\%, 77\%, and 94\% on the Google Keyword Spotting (KWS)~\cite{DBLP:journals/corr/abs-1804-03209}, CIFAR, and WISDM-HAR datasets, respectively, compared to training on the cloud. However, when using 600 samples, the flash consumption reduction becomes 8\% on KWS, 17\% on CIFAR, and 80\% on WISDM-HAR. For inference, after re-training with 75\% pruning, memory consumption is reduced by almost 50\% and inference time by 48\% for all use cases. \fakepar{Sage} Sage reduces memory usage by up to 50\% compared to the maximum reduction of the DGA-only baseline and by up to 76\% compared to the DGC-only baseline when training ResNet-50. \fakepar{TinyTL} Compared to fine-tuning full models, TinyTL achieves six times memory savings with the same accuracy. Meanwhile, TinyTL reaches up to 7.3\% higher accuracy while reducing the memory footprint 5.2 times compared to fine-tuning the normalization layers and the last layer of the models. \fakepar{TTE} The authors compare the peak memory usage in three settings: fine-tuning the full model, sparse update only, and sparse update with TTE graph reordering. The sparse update effectively reduces peak memory by 7-9 times compared to fine-tuning the full model while achieving the same or higher transfer learning accuracy.
Sparse update with TTE graph reordering leads to 20-21 times total memory savings. \subsection{Accuracy Improvement} \fakepar{Mandheling} Mandheling trains models efficiently while guaranteeing convergence accuracy in end-to-end training experiments. In single-device scenarios, the convergence accuracy is only 1.9 to 2.7\% lower than training with FP32. In federated scenarios, the convergence accuracy is at most 2.73\% lower than the baseline, while providing up to 10.75 times speedup. \fakepar{Melon} The authors evaluate Melon in federated and centralized settings. In both settings, Melon outperforms the MNN-based baselines thanks to its larger batch sizes. Concretely, in the federated setting, Melon achieves higher convergence accuracy than the original MNN: 3.94\% and 3.20\% higher with MobileNet-V2 and SqueezeNet, respectively. In the centralized setting, the convergence accuracy is 1.98\% and 2.04\% higher, respectively. \fakepar{TinyTL} TinyTL achieves clearly better accuracy than the Tiny-B baseline, which only re-trains biases, with little extra memory overhead, and slightly better accuracy than the TinyTL-L baseline, which only re-trains the residual modules, with the same memory footprint. \subsection{Training Speed Improvement} Training speed is another important evaluation metric for on-device training, since the training task is normally expected to complete in finite time. \fakepar{Mandheling} Mandheling achieves up to 8.25 times faster per-batch training time than the baseline.
Overall, Mandheling spends 5.06 to 6.27 times less time converging than the baselines. \fakepar{TTE} The authors contrast three methods: fine-tuning the full model with TFLite Micro, sparse update with TFLite Micro kernels, and sparse update with TTE kernels. Sparse update with TTE kernels significantly increases the training speed, by 23-25 times, compared to sparse update with TFLite Micro kernels, while fine-tuning the full model with TFLite Micro runs out of memory. \subsection{Energy Savings} \fakepar{POET} Under a time budget of 0.5-0.9 milliseconds for one training epoch, POET consumes up to 40\% less energy compared to the paging-only or rematerialization-only methods. \fakepar{Mandheling} Mandheling consumes 5.96-9.62 times less energy than the baselines in single-device scenarios. In federated scenarios, it reduces energy consumption by 13.2 times. \fakepar{MiniLearn} Compared to the cloud-based re-training baseline, MiniLearn avoids the extra communication energy costs. The authors evaluate the energy consumed by BLE communication: for the CIFAR subset, the device needs to send 1848~KB of newly collected data to the cloud and download a model update of 96KB, a procedure that takes 22 seconds and consumes 462 mJ. \section{Conclusions} We have surveyed the existing systems for on-device learning. On-device training is still a new research area, and only a few systems have been presented. According to our survey, smartphones and microcontrollers are the main types of target devices. Smartphones play an important role in people's daily life, and their hardware, with several gigabytes of memory, strong CPUs, and accelerators such as GPUs and DSPs, is normally sufficient to apply various optimization techniques that empower model training.
Therefore, on-device training will facilitate the development of intelligent applications for smartphones and make smartphones smarter. Microcontrollers, on the other hand, are also widely used in both academia and industry. However, performing model training on microcontrollers is more challenging than on smartphones, since microcontrollers are far more resource-constrained. Hence, on-device training systems for microcontrollers require both model-side and resource-consumption optimization. Overall, on-device learning enables various devices to fully leverage the power of ML, and we expect it to become part of many applications of importance for industry and society. \bibliographystyle{ACM-Reference-Format}
This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands.
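For instance, a numbered sectioning hierarchy might be marked up as follows (the section titles are illustrative):

\begin{verbatim}
\section{Evaluation}
\subsection{Experimental Setup}
\subsubsection{Hardware}
\paragraph{Microcontrollers}
\end{verbatim}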
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. 
\begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use \verb|\midrule| to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}.
First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption – their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. 
For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{sample-base} \end{verbatim}
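As an illustration of a complete reference (the entry below is invented for demonstration; the citation key, author, and DOI are placeholders), a \BibTeX\ record with a full first name and the salient identifying fields might read:

\begin{verbatim}
@article{ExampleKey,
  author  = {Grace B. Sample},
  title   = {An Illustrative Article Title},
  journal = {Journal of Examples},
  year    = {2020},
  volume  = {1},
  number  = {2},
  pages   = {3--14},
  doi     = {10.1000/example}
}
\end{verbatim}

Such an entry is then cited in the text with, e.g., \verb|\cite{ExampleKey}|.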
If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. 
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. 
\section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,[email protected]} \email{[email protected]} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. 
Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. 
CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. 
\begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. 
You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. 
Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption – their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. 
Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. 
\subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
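A minimal author block consistent with these rules might look as follows; the name, affiliation, and e-mail below are placeholder values in the style of the class documentation's samples, not a prescription: \begin{verbatim} \author{Ben Trovato} \authornote{Both authors contributed equally.} \email{[email protected]} \affiliation{% \institution{Institute for Clarity in Documentation} \city{Dublin} \country{Ireland}} \end{verbatim} Each author gets their own \verb|\author| command, immediately followed by that author's \verb|\email| and \verb|\affiliation| commands.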
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,[email protected]} \email{[email protected]} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. 
This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. 
\section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption – their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. 
If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{sample-base} \end{verbatim}
\section{Introduction} Traffic congestion is a major problem in modern cities. While traffic signals mitigate congestion to a certain extent, most of them are controlled by timers. Timer-based systems are simple, but their performance may deteriorate at intersections with inconsistent traffic volumes. Thus, an adaptive traffic signal control method, especially one that controls multiple intersections simultaneously, is required. Existing conventional methods, such as SOTL~\cite{Cools2013}, use observations of intersections to form better strategies, but they do not consider the long-term effects of different signals and lack proper coordination among intersections. Recently, reinforcement learning (RL) methods have shown promising performance in controlling traffic signals. Some of them achieve good performance in controlling traffic signals at a single intersection~\cite{zheng2019learning,DBLP:conf/nips/OroojlooyNHS20}; others focus on the collaboration of multiple intersections~\cite{wei2019colight,chen2020toward}. Multi-intersection traffic signal control is a typical multi-agent reinforcement learning problem. The main challenges include stability, nonstationarity, and the curse of dimensionality. Independent Q-learning splits the state-action value function into independent tasks performed by individual agents to address the curse of dimensionality. However, in a dynamic environment, which is common since agents may change their policies simultaneously, the learning process can be unstable and divergent. To enable the sharing of information among agents, a proper communication mechanism is required. This is critical as it determines the content/amount of the information each agent can observe and learn from its neighboring agents, which directly impacts the amount of uncertainty that can be reduced. 
Common approaches include enabling the neighboring agents i) to exchange their information with each other and use the partial observations directly during learning~\cite{el2013multiagent}, or ii) to share hidden states as the information~\cite{wei2019colight,yu2020macar}. While enabling communication is important to stabilize the training process, existing methods have not yet examined the impact of the content/amount of the shared information. For example, when each agent shares more information with other agents, the network needs to manage a larger number of parameters and hence converges at a slower speed, which actually reduces stability. As reported in \cite{zheng2019diagnosing}, additional information does not always lead to better results. Consequently, it is very important to select the right information for sharing. Motivated by this deficiency in the communication mechanisms adopted by existing methods, we propose a universal communication form \emph{UniComm}. To facilitate the understanding of UniComm, consider two neighboring intersections $I_i$ and $I_j$ connected by a unidirectional road $R_{i,j}$ from $I_i$ to $I_j$. If vehicles at $I_i$ impact $I_j$, they have to first pass through $I_i$ and then follow $R_{i,j}$ to reach $I_j$. This observation inspires us to propose UniComm, which picks relevant observations from the agent $A_i$ that manages $I_i$, predicts their impacts on road $R_{i,j}$, and only shares the prediction with the neighboring agent $A_j$ that manages $I_j$. We conduct a theoretical analysis to confirm that UniComm does pass the most important information to each intersection. While UniComm addresses the inefficiency of the current communication mechanism, its strength might not be fully achieved by existing methods, whose network structures are designed independently of UniComm. 
We therefore design \emph{UniLight}, a concise network structure based on the observations made by an intersection and the information shared by UniComm. It predicts the Q-value function of every action based on the importance of different traffic movements. In brief, we make three main contributions in this paper. Firstly, we propose a universal communication form \emph{UniComm} for the multi-intersection traffic signal control problem, with its effectiveness supported by a thorough theoretical analysis. Secondly, we propose a traffic movement importance based network \emph{UniLight} to make full use of observations and UniComm. Thirdly, we conduct experiments to demonstrate that UniComm is universal for all existing methods, and that UniLight can achieve superior performance in not only simple but also complicated traffic situations. \section{Related Works} Based on the number of intersections considered, the traffic signal control problem (TSC) can be classified into i) single intersection traffic signal control (S-TSC) and ii) multi-intersection traffic signal control (M-TSC). \noindent\textbf{S-TSC.} S-TSC is a sub-problem of M-TSC, as decentralized multi-agent TSC is widely used. Conventional methods like SOTL choose the next phase based on current vehicle volumes, with limited flexibility. As TSC can be modelled as a Markov Decision Process (MDP), many recent methods adopt reinforcement learning (RL)~\cite{DBLP:conf/aaai/HasseltGS16,DBLP:conf/icml/MnihBMGLHSK16,DBLP:conf/icml/HaarnojaZAL18,ault2020learning}. In RL, agents interact with the environment and receive rewards from it, and different algorithms are proposed to learn a policy that maximizes the expected cumulative reward received from the environment. Although many algorithms~\cite{zheng2019learning,zang2020metalight,DBLP:conf/nips/OroojlooyNHS20} perform well on S-TSC, their performance on M-TSC is not stable, as they suffer from poor generalizability and their models are hard to train. 
\noindent\textbf{M-TSC.} Conventional methods for M-TSC mainly coordinate different traffic signals by changing their offsets, which only works for a few pre-defined directions and has low efficiency. When adapting RL based methods from a single intersection to multiple intersections, we can treat every intersection as an independent agent. However, due to the unstable and dynamic nature of the environment, the learning process is hard to converge \cite{bishop2006pattern,nowe2012game}. Many methods have been proposed to speed up the convergence, including parameter sharing and approaches that design different rewards to contain neighboring information~\cite{chen2020toward}. Agents can also communicate with their neighboring agents via either direct or indirect communication. The former is simple but results in a very large observation space~\cite{el2013multiagent,arel2010reinforcement}. The latter relies on learned hidden states, and many different methods~\cite{nishi2018traffic,wei2019colight,chen2020toward} have been proposed to facilitate a better generation of hidden states and more cooperative communication among agents. While many methods show good performance in experiments, their communication is mainly based on hidden states extracted from neighboring intersections. They neither examine the content/importance of the information, nor consider what key information has to be passed from an agent to a neighboring agent. This makes the learning process of hidden states more difficult, and models may fail to learn a reasonable result when the environment is complicated. 
Our objective is to develop a communication mechanism that has a solid theoretical foundation and meanwhile is able to achieve a good performance. \section{Problem Definition} We consider M-TSC as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP)~\cite{oliehoek2016concise}, which can be described as a tuple $\mathcal{G} = \langle \mathcal{S}, \mathcal{A}, P, r, \mathcal{Z}, O, N, \gamma \rangle$. Let $\boldsymbol{s}\in\mathcal{S}$ indicate the current true state of the environment. Each agent $i\in\mathcal{N}:=\{1,\cdots,N\}$ chooses an action $a_i\in \mathcal{A}$, with $\boldsymbol{a}:=[a_i]_{i=1}^N\in\mathcal{A}^N$ referring to the joint action vector formed. The joint action then transits the current state $\boldsymbol{s}$ to another state $\boldsymbol{s}'$, according to the state transition function $P(\boldsymbol{s}'|\boldsymbol{s}, \boldsymbol{a}):\mathcal{S}\times\mathcal{A}^N\times\mathcal{S}\to[0,1]$. The environment returns the joint reward according to the reward function $r(\boldsymbol{s}, \boldsymbol{a}):\mathcal{S}\times\mathcal{A}^N\to\mathbb{R}$. Each agent $i$ can only obtain a partial observation $z\in\mathcal{Z}$ according to the observation function $O(\boldsymbol{s}, i):\mathcal{S}\times i\to\mathcal{Z}$. The objective of all agents is to maximize the cumulative joint reward $\sum_{t=0}^{\infty} \gamma^t r(\boldsymbol{s}_t, \boldsymbol{a}_t)$, where $\gamma\in[0,1]$ is the discount factor. Following CoLight~\cite{wei2019colight} and MPLight~\cite{chen2020toward}, we define M-TSC in Problem 1. We plot the schematic of two adjacent 4-arm intersections in Figure~\ref{fig:definition} to facilitate the understanding of the following definitions. \begin{definition} An \emph{intersection} $I_i\in\mathcal{I}$ refers to the start or the end of a road. If an intersection has more than two approaching roads, it is a real intersection $I^R_i\in\mathcal{I}^R$ as it has a traffic signal. 
We assume that no intersection has exactly two approaching roads, as in that case both approaching roads have only one outgoing direction and the intersection could be removed by connecting the two roads into one. If the intersection has exactly one approaching road, it is a virtual intersection $I^V_i\in\mathcal{I}^V$, which usually refers to the border intersections of the environment, such as $I_2$ to $I_7$ in Figure \ref{fig:definition}. The neighboring intersections $\mathcal{I}^N_i$ of $I_i$ are defined as $\mathcal{I}^N_i = \{I_j|R_{i,j}\in\mathcal{R}\}\cup \{I_j|R_{j,i}\in\mathcal{R}\}$, where roads $R_{i,j}$ and $\mathcal{R}$ are defined in Definition \ref{def:road}. \end{definition} \begin{definition} \label{def:road} A \emph{road} $R_{i,j}\in \mathcal{R}$ is a unidirectional edge from intersection $I_i$ to another intersection $I_j$. $\mathcal{R}$ is the set of all valid roads. We assume each road has multiple lanes, and each lane belongs to exactly one traffic movement, which is defined in Definition \ref{def:movement}. \end{definition} \begin{definition} \label{def:movement} A \emph{traffic movement} $T_{x, i, y}$ is defined as the traffic movement travelling across $I_i$ from entering lanes on road $R_{x,i}$ to exiting lanes on road $R_{i,y}$. For a 4-arm intersection, there are 12 traffic movements. We define the set of traffic movements passing $I_i$ as $\mathcal{T}_i = \{T_{x,i,y}|x, y \in \mathcal{I}^N_i, R_{x,i}, R_{i,y}\in\mathcal{R} \}$. $T_{3,0,4}$ and $T_{7,1,6}$, represented by orange dashed lines in Figure \ref{fig:definition}, are two example traffic movements. \end{definition} \begin{figure} \centering \includegraphics[width=0.483\textwidth]{figures/definition-detail.pdf} \caption{Visualization of two adjacent 4-arm intersections and their corresponding definitions, and 8 phases. Phase \#6 is activated in $I_1$. We omit the turn-right traffic movements in all phases as they are always permitted in countries following right-handed driving. 
} \label{fig:definition} \end{figure} \begin{definition} A \emph{vehicle route} is defined as a sequence of roads $V$ with a start time $e\in\mathbb{R}$ that refers to the time when the vehicle enters the environment. In a road sequence $V=\langle R^1, R^2, \cdots, R^n\rangle$, there is a traffic movement $T$ from $R^i$ to $R^{i+1}$ for every $i\in[1,n-1]$. We assume that all vehicle routes start from and end in virtual intersections. $\mathcal{V}$ is the set of all valid vehicle routes. $V_0$ in Figure \ref{fig:definition} is one example. \end{definition} \begin{definition} A \emph{traffic signal phase} $P_i$ is defined as a set of permissible traffic movements at $I_i$. The bottom of Figure \ref{fig:definition} shows eight phases. $\mathcal{A}_i$ denotes the complete set of phases at $I_i$, i.e., the action space for the agent of $I_i$. \end{definition} \begin{problem} In multi-intersection traffic signal control (\emph{M-TSC}), the environment consists of intersections $\mathcal{I}$, roads $\mathcal{R}$, and vehicle routes $\mathcal{V}$. Each real intersection $I^R_i\in \mathcal{I}^R$ is controlled by an agent $A_i$. Agents perform actions at every time interval $\Delta t$ based on their policies $\pi_i$. At time step $t$, $A_i$ views part of the environment $z_i$ as its observation, and tries to take an optimal action $a_i\in \mathcal{A}_i$ (i.e., a phase to set next) that can maximize the cumulative joint reward $r$. \end{problem} As we define the M-TSC problem as a Dec-POMDP problem, we have the following RL environment settings. \noindent\textbf{True state $\boldsymbol{s}$ and partial observation $\boldsymbol{z}$.} At time step $t \in \mathbb{N}$, agent $A_i$ has the partial observation $z_i^t \subseteq \boldsymbol{s}^t$, which contains the average number of vehicles $n_{x,i,y}$ following traffic movement $T_{x,i,y}\in\mathcal{T}_i$ and the current phase $P_i^t$. 
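As a minimal illustration of the definitions above, the neighbor set $\mathcal{I}^N_i$ and the movement set $\mathcal{T}_i$ can be derived from the road set alone. The following sketch uses a hypothetical road set (not data from the paper) and excludes U-turns, matching the count of 12 movements for a 4-arm intersection:

```python
# Sketch: derive neighboring intersections and traffic movements
# from a set of directed roads (i, j). Road data is hypothetical.
roads = {(2, 0), (0, 2), (3, 0), (0, 3), (0, 1), (1, 0),
         (4, 0), (0, 4), (5, 1), (1, 5)}

def neighbors(i, roads):
    """I^N_i: intersections connected to i by a road in either direction."""
    return ({j for (a, j) in roads if a == i} |
            {j for (j, a) in roads if a == i})

def movements(i, roads):
    """T_i: movements T_{x,i,y} crossing i, excluding U-turns (x == y)."""
    return {(x, i, y)
            for (x, a) in roads if a == i
            for (b, y) in roads if b == i and y != x}
```

For the 4-arm intersection $I_0$ in this toy road set, `neighbors(0, roads)` yields four neighbors and `movements(0, roads)` yields the expected 12 movements.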
\noindent\textbf{Action $\boldsymbol{a}$.} After receiving the partial observation $z_i^t$, agent $A_i$ chooses an action $a_i^t$ from its candidate action set $\mathcal{A}_i$, corresponding to the phase for the next $\Delta t$ time. If the activated phase $P_i^{t+1}$ is different from the current phase $P_i^t$, a short all-red phase will be added to avoid collision. \noindent\textbf{Joint reward $\boldsymbol{r}$.} In M-TSC, we want to minimize the average travel time, which is hard to optimize directly, so some alternative metrics are chosen as immediate rewards. In this paper, we set the joint reward $r^t=\sum_{I_i\in \mathcal{I}^R} r_i^t$, where $r_i^t = -\overline{n^{t+1}_{x,i,y}}, s.t.\ x,y\in\mathcal{I}^N_i \land T_{x, i, y}\in\mathcal{T}_i$ indicates the reward received by $I_i$, with $n_{x, i, y}$ the average vehicle number on the approaching lanes of traffic movement $T_{x, i, y}$. \section{Methods} In this section, we first propose UniComm, a new communication method, to improve the communication efficiency among agents; we then construct UniLight, a new control algorithm, to control signals with the help of UniComm. We use Multi-Agent Deep Q Learning \cite{mnih2015human} with double Q learning and dueling network as the basic reinforcement learning structure. Figure~\ref{fig:model} illustrates the newly proposed model. \begin{figure*}[t] \centering \includegraphics[width=175mm]{figures/model-ijcai.pdf} \caption{UniComm and UniLight structure} \label{fig:model} \end{figure*} \subsection{UniComm}\label{subsec:unicomm} The ultimate goal of enabling communication between agents is to mitigate the nonstationarity caused by decentralized multi-agent learning. Sharing more observations could help improve stationarity, but it meanwhile suffers from the curse of dimensionality. In existing methods, neighboring agents exchange hidden states or observations via communication. 
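The joint reward defined above can be sketched directly from its definition; the vehicle counts below are hypothetical, and the sign convention follows $r_i^t = -\overline{n^{t+1}_{x,i,y}}$:

```python
# Sketch of the joint reward: r^t = sum_i r_i^t, where r_i^t is the
# negative mean vehicle count over the movements at I_i.
# The counts below are hypothetical.
def intersection_reward(counts):
    """r_i^t = -mean(n_{x,i,y}) over traffic movements at I_i."""
    return -sum(counts) / len(counts)

def joint_reward(counts_per_intersection):
    """r^t: sum of per-intersection rewards over all real intersections."""
    return sum(intersection_reward(c) for c in counts_per_intersection)
```

Fewer queued vehicles yield a reward closer to zero, so maximizing the cumulative joint reward pushes agents to keep queues short.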
However, as mentioned above, there is no theoretical analysis on how much information is sufficient and which information is important. With deep learning and back propagation, one might expect the network to recognize the importance of information via well-designed structures such as attention. Unfortunately, as the training process of reinforcement learning is not as stable as that of supervised learning, less useful or even useless information might affect the convergence speed significantly. Consequently, how to enable agents to communicate effectively with their neighboring agents becomes critical. Consider intersection $I_0$ in Figure \ref{fig:definition}. Traffic movements such as $T_{4, 0, 2}$ and $T_{3, 0, 4}$ will not pass $R_{0, 1}$, and hence their influence on $I_1$ is much smaller than that of $T_{2, 0, 1}$. Accordingly, we expect the information related to $T_{4, 0, 2}$ and $T_{3, 0, 4}$ to be less relevant (or even irrelevant) to $I_1$, as compared with information related to $T_{2, 0, 1}$. In other words, when $I_0$ communicates with $I_1$ and other neighboring intersections, ideally $I_0$ is able to pass different information to different neighbors. We propose to share only important information with neighboring agents. To be more specific, we focus on agent $A_1$ at intersection $I_1$, which learns to maximize the cumulative reward $r_1$ via reinforcement learning, and evaluate the importance of certain information based on its impact on $r_1$. We make two assumptions in this work. First, spillback, i.e., the situation in which a lane is fully occupied by vehicles so that no further vehicles can drive in, never happens. This simplifies our study, as spillback rarely happens and will disappear shortly even when it happens. Second, within the action time interval $\Delta t$, no vehicle can pass through an entire road. This can be easily fulfilled, because $\Delta t$ in M-TSC is usually very short (e.g. $10s$ in our settings). 
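The selection principle behind this argument can be sketched mechanically: of all movements at $I_0$, only those whose exit road is $R_{0,1}$ feed vehicles toward $I_1$. The movement set below is hypothetical, not data from the paper:

```python
# Sketch of UniComm's selection principle: when A_0 communicates with
# the agent managing I_1, only movements T_{x,0,y} whose exit road is
# R_{0,1} (i.e. y == 1) are relevant; all others are withheld.
def relevant_movements(movements, i, j):
    """Movements T_{x,i,y} at I_i that exit onto road R_{i,j}."""
    return {(x, a, y) for (x, a, y) in movements if a == i and y == j}

# Hypothetical movements at I_0.
movements_at_I0 = {(2, 0, 1), (3, 0, 1), (4, 0, 2), (3, 0, 4), (2, 0, 4)}
shared_with_I1 = relevant_movements(movements_at_I0, 0, 1)
```

Only $T_{2,0,1}$ and $T_{3,0,1}$ survive the filter; movements such as $T_{4,0,2}$ and $T_{3,0,4}$ are excluded, mirroring the discussion above.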
Under these assumptions, we decompose the reward $r^t_1$ as follows. Recall that the reward $r^t_1$ is defined as the negative average of $n_{x,1,y}$ with $T_{x,1,y}\in\mathcal{T}_1$ being a traffic movement; for convenience we drop the negative sign in the derivation. As the number of traffic movements $|\mathcal{T}_1|$ w.r.t. intersection $I_1$ is a constant, we analyze the sum of $n_{x,1,y}$, i.e. $|\mathcal{T}_1|r^t_1$, and $r^t_1$ can be derived by dividing the sum by $|\mathcal{T}_1|$. \begin{align} |\mathcal{T}_1|r^t_1 &= \sum{n^{t+1}_{x,1,y}} &x, y \in \mathcal{I}^N_1 \nonumber \\ &= \sum{\left((n^t_{x,1,y} - m^t_{x,1,y})+ l^t_{x,1,y}\right)} \label{eq:dec} \\ &= f(z^t_1) + g(\boldsymbol{s}^t) \label{eq:fandg} \end{align} In Eq.~(\ref{eq:dec}), we decompose $n^{t+1}_{x, 1, y}$ into three parts, i.e. the current vehicle number $n^t_{x, 1, y}$, the approaching vehicle number $l^t_{x, 1, y}$, and the leaving vehicle number $m^t_{x, 1, y}$. Here, $n^t_{x, 1, y} \in z^t_1$ can be observed directly. All leaving vehicles counted in $m^t_{x,1,y}$ are on $T_{x,1,y}$ now and can drive into the next road without the consideration of spillback (based on the first assumption). $P^t_1$ is derived by $\pi_1$, which uses the partial observation $z^t_1$ as input. Accordingly, $m^t_{x,1,y}$ is only subject to $n^t_{x,1,y}$ and $P^t_1$, both of which are captured by the partial observation $z^t_1$. Therefore, the approaching vehicle number $l^t_{x,1,y}$ is the \textbf{only} variable affected by observations $o\notin z^t_1$. We define $f(z^t_1) = \sum{(n^t_{x,1,y} - m^t_{x,1,y})}$ and $g(\boldsymbol{s}^t) = \sum{l^t_{x,1,y}}$, as shown in Eq.~(\ref{eq:fandg}). To help and guide an agent to perform an action, agent $A_1$ will receive communications from other agents. Let $c^t_1$ denote the communication information received by $A_1$ at time step $t$. In the following, we analyze the impact of $c^t_1$ on $A_1$. 
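A quick numeric check of the decomposition, with hypothetical per-movement counts, confirms that the summed count splits cleanly into a locally observable part and a part requiring communication:

```python
# Numeric check of the decomposition above (hypothetical counts):
# per movement, n^{t+1} = (n^t - m^t) + l^t, so the summed count
# splits into f(z^t_1) = sum(n^t - m^t), which is locally observable,
# and g(s^t) = sum(l^t), which requires communication.
n_t = [5, 3, 2]   # current vehicles per movement
m_t = [2, 1, 0]   # vehicles leaving within Delta t
l_t = [1, 4, 2]   # vehicles approaching within Delta t

f = sum(n - m for n, m in zip(n_t, m_t))
g = sum(l_t)
n_next = [n - m + l for n, m, l in zip(n_t, m_t, l_t)]
assert sum(n_next) == f + g   # equals |T_1| * r^t_1 up to sign
```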
For the observation at the next time step, $z^{t+1}_1 = \{n^{t+1}_{x,1,y}, P^{t+1}_1\}$, we have i) $n^{t+1}_{x,1,y} = n^t_{x,1,y} + l^t_{x,1,y} - m^t_{x,1,y}$; and ii) $P^{t+1}_1 = \pi_1(z^t_1, c^t_1)$. As $n^t_{x,1,y}$ and $m^t_{x,1,y}$ are part of $z^t_1$, there is a function $Z_1$ such that $z^{t+1}_1 = Z_1(z^t_1, c^t_1, l^t_{x,1,y})$. Consequently, to calculate $z^{t+1}_1$, the knowledge of $l^t_{x,1,y}$, which is not in $z^t_1$, becomes necessary. Without loss of generality, we assume $l^t_{x,1,y}\subseteq c^t_1$, so $z^{t+1}_1 = Z_1\left(z^t_1, c^t_1\right)$. In addition, $z^{t+j}_1$ can be represented as $Z_j(z^t_1, c^t_1, c^{t+1}_1, \cdots, c^{t+j-1}_1)$. Specifically, we define $Z_0(\cdots)=z^t_1$, regardless of the input. We define the cumulative reward of $A_1$ as $R_1$, which can be calculated as follows: \begin{align} R^t_1 &= \sum\nolimits_{j=0}^\infty{\gamma^j r^{t+j}_1} = \sum\nolimits_{j=0}^\infty{\gamma^j\left(f\left(z^{t+j}_1\right)+g\left(\boldsymbol{s}^{t+j}\right)\right)} \nonumber \\ &= \sum\nolimits_{j=0}^\infty{\gamma^j\left(f\left(Z_j\left(z^t_1, c^t_1, \cdots, c^{t+j-1}_1\right)\right)+g\left(\boldsymbol{s}^{t+j}\right)\right)} \nonumber \end{align} From the above equation, we set $c_1^t$ as the future values of $l$: \begin{align} \label{eq:unicomm} c_1^t &= \left\{g\left(\boldsymbol{s}^t\right),g\left(\boldsymbol{s}^{t+1}\right), \cdots \right\} = \left\{\sum{l_{x,1,y}^t}, \sum{l_{x,1,y}^{t+1}}, \cdots \right\} \nonumber \end{align} All other variables needed to calculate $R_1^t$ can be derived from $c_1^t$: \begin{align} &c_1^{t+j} = c_1^t\backslash\{g\left(\boldsymbol{s}^t\right), \cdots, g\left(\boldsymbol{s}^{t+j-1}\right)\} & j&\in\mathbb{N}_+ \nonumber \\ &g\left(\boldsymbol{s}^{t+k}\right) \in c_1^t & k&\in\mathbb{N}\nonumber \end{align} Hence, it is possible to derive the cumulative reward $R_1^t$ based on the future approaching vehicle numbers $l^{t}_{x, 1, y}$ contained in $c_1^t$, even if other observations remain unknown. 
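The discounted sum that defines $R^t_1$ can be sketched over a finite horizon; the per-step values below are hypothetical, with `g_vals` standing for the communicated content $c^t_1$:

```python
# Sketch of R^t_1 = sum_j gamma^j (f(z^{t+j}_1) + g(s^{t+j})),
# truncated to a finite horizon. f_vals are the locally observable
# terms f(z); g_vals are the communicated terms g(s), i.e. c^t_1.
# All values are hypothetical.
def cumulative_reward(f_vals, g_vals, gamma):
    return sum((gamma ** j) * (f + g)
               for j, (f, g) in enumerate(zip(f_vals, g_vals)))
```

With the local terms $f$ computable from $z^t_1$, the sequence of $g$ terms is the only missing input, which is exactly what UniComm communicates.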
As the conclusion is \emph{universal} to all existing methods on the same problem definition regardless of the network structure, we name it \emph{UniComm}. The problem of finding the cumulative reward $R^t_1$ is thus converted into calculating the approaching vehicle numbers $l^t_{x, 1, y}$ from the current full observation $\boldsymbol{s}^t$. As the exact future values are unknown, we use existing observations in $\boldsymbol{s}^t$ to predict $l^{t+1}_{x, 1, y}$. We first calculate the approaching vehicle numbers in the next interval $\Delta t$. Take intersections $I_0$, $I_1$ and road $R_{0,1}$ as an example: for traffic movement $T_{0, 1, x}$, which passes intersections $I_0$, $I_1$, and $I_x$, the approaching vehicle number $l_{0, 1, x}$ depends on the traffic phase $P_0^{t+1}$ to be chosen by $A_0$. Recall the second assumption: all approaching vehicles must be on $\boldsymbol{T}_{0,1}=\{T_{y,0,1}|T_{y,0,1}\in\mathcal{T}_0, y\in\mathcal{I}^N_0\}$, which belongs to $z^t_0$. As a result, $\boldsymbol{T}_{0, 1}$ and the phase $P^{t+1}_0$ affect $l_{0, 1, x}$ the most, even though there might be other factors. We convert the observations in $\mathcal{T}_0$ into hidden states $\boldsymbol{h}_0$ for traffic movements via a fully-connected layer with ReLU activation. As $P^{t+1}_0$ can be decomposed into the permission for every $T\in\mathcal{T}_0$, we apply a self-attention mechanism~\cite{DBLP:conf/nips/VaswaniSPUJGKP17} over $\boldsymbol{h}_0$, followed by a Sigmoid activation, to predict the permissions $\boldsymbol{g}_0$ of traffic movements in the next phase, which are then multiplied with $\boldsymbol{h}_0$. This is because for a traffic movement $T_{x,0,y} \in \mathcal{T}_0$, if $T_{x,0,y}$ is permitted, $h_{x,0,y}$ will affect the corresponding approaching vehicle numbers $l_{0,y,z}$; otherwise, it becomes an all-zero vector and has no impact on $l_{0,y,z}$.
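The attention-and-gating step can be sketched as follows. This is a minimal numpy sketch under our own assumptions (random weights, single head, scaled dot-product attention), not the authors' exact network.

```python
import numpy as np

# Sketch: movement embeddings h_0 attend to each other; a scalar logit per
# movement is squashed by a Sigmoid into a soft permission g_0, which then
# gates (multiplies) the embedding, zeroing out non-permitted movements.

rng = np.random.default_rng(0)
num_movements, dim = 12, 32
h0 = rng.standard_normal((num_movements, dim))            # embeddings h_0

Wq, Wk, Wv = (rng.standard_normal((dim, dim)) * 0.1 for _ in range(3))
w_out = rng.standard_normal(dim) * 0.1                    # logit head (hypothetical)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# single-head scaled dot-product self-attention over the movements
q, k, v = h0 @ Wq, h0 @ Wk, h0 @ Wv
attn = softmax(q @ k.T / np.sqrt(dim)) @ v                # (12, 32)

g0 = 1.0 / (1.0 + np.exp(-(attn @ w_out)))                # permissions in (0, 1)
h_gated = g0[:, None] * h0                                # gated embeddings
```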
Note that $R_{x,0}$ and $R_{0,y}$ might have different numbers of lanes, so we scale the weights by the lane numbers to eliminate this effect. Finally, we add all the corresponding weighted hidden states together and use a fully-connected layer to predict $l_{0, y, z}$. To learn the phase prediction $P_0^{t+1}$, a natural choice of target is the action $\boldsymbol{a}_0$ finally taken by the current network. However, as the network keeps changing, the resulting phase action $\boldsymbol{a}_0$ for a given $\boldsymbol{s}$ is not fixed, which makes the phase prediction hard to converge. To obtain a more stable target, we instead use the action $a^r_0$ stored in the replay buffer of DQN, which makes the phase prediction more stable and accurate. With the stored action $a^r_0$ as the target, we decompose the corresponding phase $P^r_0$ into the permissions $\boldsymbol{g}^r_0$ of traffic movements, and calculate the loss between the recorded real permissions $\boldsymbol{g}^r_0$ and the predicted permissions $\boldsymbol{g}^p_0$ as $L_p = \text{BinaryCrossEntropy}(\boldsymbol{g}^r_0, \boldsymbol{g}^p_0)$. To learn the prediction of approaching vehicle numbers $l_{0, y, z}$, we retrieve the vehicle numbers of every traffic movement from the replay buffer of DQN, and supervise $l_{0, y, z}$ with the recorded results $l^r_{0, y, z}$. As the actions saved in the replay buffer may differ from the current actions, when calculating the volume prediction loss, unlike when generating UniComm, we use $\boldsymbol{g}^r_0$ instead of $\boldsymbol{g}_0$, i.e. $L_v=\text{MeanSquaredError}(\boldsymbol{g}^r_0 \cdot \boldsymbol{h}_0, l^r_{0, y, z})$. Based on how $c_1^t$ is derived, we also need to predict the approaching vehicle numbers for the next several intervals, which is rather challenging. Firstly, with off-policy learning, we cannot actually apply the selected action to the environment; instead, we only learn from samples.
Regarding the phase prediction, while we can supervise the first phase taken by $A_0$, we do not know, without interacting with the environment, the real next state $s'\sim P(s, a)$, as the recorded $a^r_0$ may differ from $a_0$. The same problem applies to the prediction of $l_{0, y, z}$. Secondly, after multiple $\Delta t$ time intervals, the second assumption might become invalid, and it is hard to predict $l_{0, y, z}$ correctly with only $z_0$. As a result, since it is of little use to pass unsupervised and potentially incorrect predictions, we only communicate the one-step prediction. We list the pseudo code of UniComm in Algorithm~\ref{alg:unicomm}. As the process is the same for all intersections, for simplicity, we only present the code corresponding to one intersection. The input contains the current intersection observation $z$, the recorded next observation $z^r$, the set of traffic movements $T$ of the intersection, and the turning directions $d$ of traffic movements. The algorithm takes the observations $o^l$ belonging to every traffic movement and generates embeddings $\boldsymbol{h}$ based on $o^l$ (lines 4-5). The permission prediction applies self-attention, a linear transformation, and the Sigmoid function to $\boldsymbol{h}$, where the linear transformation maps an embedding to a scalar (line 6). It next calculates the phase prediction loss against the recorded phase permissions using the BinaryCrossEntropy loss (line 7). Then, for every traffic movement $T_{a, x, b}\in T$, it accumulates the embeddings into the corresponding outgoing-lane embeddings $\boldsymbol{h}^p_{x,b}$ and $\boldsymbol{h}^r_{x,b}$, weighted by the predicted and recorded permissions respectively (lines 9-12). Finally, it uses a linear transformation to predict the vehicle number $l$ of outgoing lanes, and computes the volume prediction loss $L_v$ with the MeanSquaredError loss (lines 13-14).
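The two supervision signals can be sketched as follows; the toy targets and predictions are illustrative assumptions, not recorded data.

```python
import numpy as np

# Sketch of the two losses: binary cross-entropy between recorded
# permissions g^r and predicted permissions g^p, and mean squared error
# between predicted and recorded approaching vehicle numbers.

g_r = np.array([1.0, 0.0, 1.0, 0.0])     # recorded permissions g^r_0
g_p = np.array([0.9, 0.2, 0.8, 0.1])     # predicted permissions g^p_0
L_p = -np.mean(g_r * np.log(g_p) + (1 - g_r) * np.log(1 - g_p))

l_r = np.array([3.0, 0.0, 5.0])          # recorded approaching numbers l^r
l_hat = np.array([2.5, 0.5, 4.0])        # predictions from g^r-gated embeddings
L_v = np.mean((l_hat - l_r) ** 2)
```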
The algorithm outputs the estimated vehicle number $l$ for every outgoing lane of the intersection, and two prediction losses, including phase prediction loss $L_p$ and volume prediction loss $L_v$. Note that $z^r$ is only used to calculate $L_p$ and $L_v$. \begin{algorithm}[tb] \caption{UniComm Algorithm for an Intersection} \label{alg:unicomm} \textbf{Input}: intersection index $x$, observation $z$, recorded next observation $z^r$, traffic movements $T$, turning direction $d$ \\ \textbf{Parameter}: Traffic movement permission $\boldsymbol{g}$, traffic movement observation $o^l$, traffic movement embedding $\boldsymbol{h}$, out-lane prediction embedding $\boldsymbol{h}^p$, out-lane record embedding $\boldsymbol{h}^r$\\ \textbf{Output}: Estimated vehicle number $l$, phase loss $L_p$, volume loss $L_v$ \begin{algorithmic}[1] \STATE $(\text{Vehicle number}\ n, \text{current phase}\ P) \gets z$ \STATE $(\text{Recorded approaching number}\ l^r, \text{phase}\ P^r) \gets z^r$ \STATE $\boldsymbol{g}^r \gets P^r$ \STATE $o^l_{a, x, b} \gets \{n_{a, x, b}, \boldsymbol{g}_{a, x, b}, d_{a, x, b}\}$ \STATE $\boldsymbol{h}_{a, x, b} \gets \text{Embedding}(o^l_{a, x, b})$ \STATE $\boldsymbol{g}^p_{a, x, b} \gets \text{Sigmoid}\left(\text{Self-Attention}(\boldsymbol{h}_{a, x, b})\right)$ \STATE Let $L_p \gets \text{BinaryCrossEntropy}(\boldsymbol{g}^r, \boldsymbol{g}^p)$ \STATE $\boldsymbol{h}^p \gets \boldsymbol{0}, \boldsymbol{h}^r \gets \boldsymbol{0}$ \FOR{$T_{a, x, b}$ \textbf{in} $T$} \STATE $\boldsymbol{h}^p_{x, b} \gets \boldsymbol{h}^p_{x, b} + \boldsymbol{g}^p_{a, x, b}\cdot\boldsymbol{h}_{a, x, b}$ \STATE $\boldsymbol{h}^r_{x, b} \gets \boldsymbol{h}^r_{x, b} + \boldsymbol{g}^r_{a, x, b}\cdot\boldsymbol{h}_{a, x, b}$ \ENDFOR \STATE $l \gets \text{Linear}(\boldsymbol{h}^p)$ \STATE $L_v \gets \text{MeanSquaredError}\left(\text{Linear}(\boldsymbol{h}^r), l^r\right)$ \STATE \textbf{return} $l$, $L_p$, $L_v$ \end{algorithmic} \end{algorithm} \subsection{UniLight} Although 
UniComm is universal, its strength might not be fully exploited by existing methods, because they do not consider the importance of the exchanged information. To make better use of the predicted approaching vehicle numbers and other observations, we propose \emph{UniLight} to predict the Q-values. Take the prediction for intersection $I_1$ as an example. As we predict $l_{x, 1, y}$ based on traffic movements, UniLight splits the average numbers of vehicles $n_{x,1,y}$, the traffic movement permissions $\boldsymbol{g}_1$, and the predictions $l_{x,1,y}$ by traffic movement $T_{x,1,y}$, and uses a fully-connected layer with ReLU to generate the hidden states $\boldsymbol{h}_1$. Next, considering a traffic phase $P$ that permits traffic movements $T_P$ among the traffic movements $\mathcal{T}_1$ in $I_1$, we split the hidden states $\boldsymbol{h}_1$ into two groups $G_1=\{h_p | T_p \in T_P\}$ and $G_2=\{h_x|h_x\in\boldsymbol{h}_1,T_x\notin T_P \}$, based on whether the corresponding movement is permitted by $P$ or not. As traffic movements in the same group share the same permission in phase $P$, they can be treated equally, and hence we use the average of their hidden states to represent the group. Obviously, traffic movements in $G_1$ are more important than those in $G_2$, so we multiply the hidden state of $G_1$ by a greater weight to capture this importance. Finally, we concatenate the two group hidden states and an indicator of whether the current phase is $P$, and feed them through a fully-connected layer to predict the final Q-value. The pseudo code of UniLight is listed in Algorithm~\ref{alg:unilight}. As with UniComm, we only present the code corresponding to one intersection. The input contains the observation $z$, all possible phases, i.e. actions $\mathcal{A}$, the traffic movements $T$ of the intersection, the turning directions $d$ of traffic movements, and the estimated vehicle number $l$ for every traffic movement.
It is worth mentioning that UniComm estimates $l$ for every outgoing lane, while UniLight takes $l$ of incoming lanes as input. Consequently, we need to arrange the estimations properly such that the output of UniComm w.r.t. an intersection becomes the input of UniLight w.r.t. an adjacent intersection. UniLight outputs the Q-value estimation $Q(z, \mathcal{A})$ of the current observation and all possible actions. It takes the combination of observations and incoming vehicle estimations $o^l$ for every traffic movement, and generates their embeddings $\boldsymbol{h}$ (lines 3-4). For every action $a\in \mathcal{A}$, we split the traffic movement embeddings into two groups $G_1$ and $G_2$ based on whether the corresponding traffic movement is permitted by $a$ or not, i.e., $G_1$ consists of the embeddings of all the movements in $T$ that are permitted by $a$, and $G_2$ consists of the embeddings of the rest of the movements in $T$ that are stopped by $a$ (lines 6-9). We use the average of the embeddings to represent a group (i.e., a group embedding), and multiply the group embeddings by different weights. Obviously, the traffic movements in $G_1$ are more important than those in $G_2$ for action $a$, so $w_1$ is much larger than $w_2$. Finally, we concatenate the group embeddings with the current phase $P$, and use a linear transformation to predict $Q(z, a)$ (line 10).
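The Q-value head for one phase can be sketched as follows. This numpy sketch uses our own assumptions on dimensions and random weights; the group weights $w_1=5$, $w_2=1$ follow the experimental settings.

```python
import numpy as np

# Sketch of UniLight's per-phase Q-value head: split movement embeddings by
# whether phase a permits them, average each group, weight the permitted
# group more heavily, concatenate with a current-phase flag, and apply a
# linear head.

rng = np.random.default_rng(1)
dim, w1, w2 = 32, 5.0, 1.0
h = rng.standard_normal((8, dim))          # embeddings of 8 traffic movements
permitted = np.array([1, 0, 1, 0, 0, 0, 1, 1], dtype=bool)  # phase a

G1 = h[permitted].mean(axis=0)             # permitted-group embedding
G2 = h[~permitted].mean(axis=0)            # stopped-group embedding
is_current = 1.0                           # 1 if the current phase is a

feat = np.concatenate([w1 * G1, w2 * G2, [is_current]])     # (2*dim + 1,)
W = rng.standard_normal(feat.shape[0]) * 0.1                # linear head
q_value = float(W @ feat)                  # Q(z, a) for this phase
```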
\begin{algorithm}[tb] \caption{UniLight Algorithm for an Intersection} \label{alg:unilight} \textbf{Input}: Intersection index $x$, observation $z$, possible phases $\mathcal{A}$, traffic movements $T$, turning direction $d$, estimated vehicle number $l$\\ \textbf{Parameter}: Traffic movement permission $\boldsymbol{g}$, traffic movement observation $o_l$, traffic movement embedding $\boldsymbol{h}$, action phase $a$ \\ \textbf{Output}: Q-values $Q(z, \mathcal{A})$ \begin{algorithmic}[1] \STATE $(\text{Vehicle number}\ n, \text{current phase}\ P) \gets z$ \STATE $\boldsymbol{g} \gets P$ \STATE $o^l_{a, x, b} \gets \{n_{a, x, b}, \boldsymbol{g}_{a, x, b}, d_{a, x, b}, l_{a, x, b}\}$ \STATE $\boldsymbol{h}_{a, x, b} \gets \text{Embedding}(o^l_{a, x, b})$ \FOR{$a$ \textbf{in} $\mathcal{A}$} \STATE $g^a \gets a$ \STATE $T_a \gets \{T_i \in T | i: \boldsymbol{g}^a_i \text{~is permitted} \}$ \STATE $G_1 \gets \{\boldsymbol{h}_i \in \boldsymbol{h} | i: T_i \in T_a\}$ \STATE $G_2 \gets \boldsymbol{h}\backslash G_1$ \STATE $Q(z, a) \gets \text{Linear}\left(w_1\overline{G_1} \oplus w_2\overline{G_2} \oplus P\right)$ \ENDFOR \STATE \textbf{return} $Q(z,\mathcal{A})$ \end{algorithmic} \end{algorithm} \section{Experiments} In this section, we first explain the detailed experimental setup, including implementation details, the datasets used, the competitors evaluated, and the performance metrics employed; we then report the experimental results. \subsection{Experimental Setup} \subsubsection{Implementation details} We conduct our experiments on the microscopic traffic simulator CityFlow~\cite{zhang2019cityflow}. We have made some modifications to CityFlow to support the collection of structural data from the intersections and the roads. In the experiments, the action time interval is $\Delta t=10s$, and the all-red phase is $5s$. We run the experiments on a cloud platform with 2 virtual Intel 8269CY CPU cores and 4GB memory.
We train all the methods, including the newly proposed ones and the existing ones selected for comparison, with $240,000$ frames. We run all the experiments 4 times, and select the best run, which is then tested 10 times, with its average performance and standard deviation reported in our paper. Most experiments are run on Ubuntu 20.04 with PyTorch 1.7.1 using CPU only; a small part of the experiments are run on GPU machines. We list the parameter settings below; please refer to the source code for more detailed implementations. For Double-Dueling DQN, we use a discount factor $\gamma=0.8$ and 5-step learning. The replay buffer size is 8000. Once the replay buffer is full, the model is trained after every step with batch size 30, and the target network is updated every 5 steps. For the $\epsilon$-greedy strategy, the initial $\epsilon=0.9$, the minimum $\epsilon=0.02$, and $\epsilon$ decreases linearly over the first 30\% of the training steps. As roads have different numbers of lanes and different traffic movements span various numbers of lanes, the average number of vehicles $n$ of one traffic movement in the observation $z$ is divided by its lane number. In UniComm, the dimension of the lane embedding $\boldsymbol{h}[i]$ is 32, the turning direction $d[i]$ is represented by an embedding of dimension 2, and the Self-Attention used in the phase prediction has one head. In UniLight, the dimension of the lane embedding $h[i]$ is 32, and the weights for the two groups are $w_1=5$ and $w_2=1$ respectively. \subsubsection{Datasets} We use real-world datasets from four different cities: Hangzhou (HZ), Jinan (JN), and Shanghai (SH) from China, and New York (NY) from the USA. The HZ, JN, and NY datasets are publicly available~\cite{wei2019colight} and widely used in many related studies. However, the simulation time of these three datasets is relatively short, and hence it is hard to test the performance of a signal control method, especially during rush hour.
We also notice that the roads in these three datasets contain exactly three lanes, i.e., one lane for every turning direction. In those public datasets, the length of roads sharing the same direction is a constant (e.g., all the vertical roads share the same road length), which differs from reality. Meanwhile, although the traffic data of the public datasets is based on real traffic flow, they adopt a fixed probability distribution, as it is not possible to track all the vehicle trajectories. For example, they assume that 10\% of the traffic flow turns left, 60\% goes straight, and 30\% turns right to simulate vehicle trajectories, which is very unrealistic. Motivated by the limitations of the public datasets, we develop new datasets that simulate urban traffic in a more realistic manner. As taxis play an important role in urban traffic, we utilize substantial taxi trajectories to reproduce the traffic flow of an area closer to reality. The Shanghai taxi dataset contains $11,861,593$ trajectories generated by $13,767$ taxis in the whole month of April, 2015. Directly using the taxi trajectories of a single day would face a data sparsity problem. To overcome it, we squash all trajectories into one day to decrease the variance, and meanwhile balance the number of vehicle routes based on the traffic volume from official data to truthfully recover the traffic of one area\footnote{In 2015, there were 2.5 million registered vehicles in Shanghai. There are close to 12 million trajectories in our Shanghai taxi dataset, corresponding to on average 4 to 5 trips per vehicle in a day.}. We choose two transportation hubs, denoted as SH$_1$ and SH$_2$ respectively, which contain roads of different levels, i.e., roads with different numbers of lanes. In our datasets, two road segments belonging to the same road but having opposite directions have the same number of lanes. Figure~\ref{fig:shanghaimap} visualizes these two hubs.
The first hub SH$_1$ contains six intersections, which are marked by green icons. The lane numbers of the three horizontal roads are 3, 7, and 3 respectively, and the lane numbers of the two vertical roads are 3 and 4 respectively. The second hub SH$_2$ contains eight intersections; its three horizontal roads have 3, 5, and 3 lanes respectively, while all its vertical roads have 3 lanes. As stated previously, all the roads in these two hubs share the same number of lanes in both directions. For example, the very wide horizontal road in SH$_2$ corresponds to a bidirectional road in reality, and it is represented by two roads in our simulation; these two roads have opposite directions, one from the east to the west and the other from the west to the east, but they have the same number of lanes. In reality, some small roads contain only one or two lanes, and some turning directions share one lane, e.g., in a two-lane road, the left lane can be used to turn left, and the right lane can be used to go straight or turn right. However, CityFlow requires each lane to have exactly one turning direction, so lanes cannot be shared by multiple turning directions. Consequently, we have to add lanes to those roads such that all the turning directions allowed in reality are also supported in our experiments. This explains why, in the second hub, the vertical road located in the middle looks wider than the other two vertical roads although it shares the same number of lanes with them in reality. We show the statistics of all five datasets in Table~\ref{tab:datastatistic}. The average number of vehicles in 5 minutes is counted per intersection in the dataset. Compared with the existing datasets, the roads in our newly generated datasets SH$_1$ and SH$_2$ are more realistic: they have different numbers of lanes and different lengths, a much wider time span, and more realistic and dynamic vehicle trajectories.
\begin{figure} \centering \includegraphics[width=24mm]{figures/gen2.png} \includegraphics[width=47mm]{figures/764-8.png} \caption{Real road networks of two Shanghai datasets with 6 and 8 intersections respectively. Roads with more lanes are colored with wider lines. } \label{fig:shanghaimap} \end{figure} \linespread{1.2} \begin{table*}[htb] \centering \begin{tabular}{l|ccc|cc} \hline \cline{2-6} & HZ & JN & NY & SH$_1$ & SH$_2$ \\ \hline \# of Intersections & 16 & 12 & 48 & 6 & 8 \\ Road Length (metre) & 600/800 & 400/800 & 100/350 & 62$\sim$612 & 102$\sim$706 \\ \# of Lanes & 3 & 3 & 3 & 3$\sim$7 & 3$\sim$5 \\ Time Span (second) & 3600 & 3600 & 3600 & 86400 & 86400 \\ \# of Vehicles & 2983 & 6295 & 2824 & 22313 & 15741 \\ Average \# of vehicles in 5 minutes & 13$\sim$21 & 21$\sim$57 & 5$\sim$6 & 3$\sim$76 & 2$\sim$52 \\ \hline \end{tabular} \caption{Datasets Statistics} \label{tab:datastatistic} \end{table*} \subsubsection{Competitors} To evaluate the effectiveness of the newly proposed \textbf{UniComm} and \textbf{UniLight}, we implement five conventional TSC methods and six representative reinforcement learning M-TSC methods as competitors, which are listed below. (1) \textbf{SOTL}~\cite{Cools2013}, a S-TSC method based on the current vehicle numbers on every traffic movement. (2) \textbf{MaxPressure}~\cite{varaiya2013max}, a M-TSC method that balances the vehicle numbers between two neighboring intersections. (3) \textbf{MaxBand}~\cite{little1981maxband}, which maximizes the green wave time for both directions of a road. (4) \textbf{TFP-TCC}~\cite{DBLP:conf/icaci/JiangHC21}, which predicts the traffic flow and applies traffic congestion control methods based on the future traffic condition.
(5) \textbf{MARLIN}~\cite{el2013multiagent}, which uses Q-learning to learn joint actions for agents. (6) \textbf{MA-A2C}~\cite{DBLP:conf/icml/MnihBMGLHSK16}, a general reinforcement learning method with an actor-critic structure and multiple agents. (7) \textbf{MA-PPO}~\cite{DBLP:journals/corr/SchulmanWDRK17}, a popular policy based method, which improves MA-A2C with proximal policy optimization to stabilize the training process. (8) \textbf{PressLight}~\cite{wei2019presslight}, a reinforcement learning based method motivated by MaxPressure, which modifies the RL design to use pressure as the metric. (9) \textbf{CoLight}~\cite{wei2019colight}, which uses a graph attentional network to communicate between neighboring intersections with different weights. (10) \textbf{MPLight}~\cite{chen2020toward}, a state-of-the-art M-TSC method that combines FRAP~\cite{zheng2019learning} and PressLight. (11) \textbf{AttendLight}~\cite{DBLP:conf/nips/OroojlooyNHS20}, which uses attention and LSTM to select the best actions. \subsubsection{Performance Metrics} To validate the effectiveness of our proposed methods, we have performed a comprehensive experimental study to evaluate the performance of our methods and all the competitors. Following existing studies~\cite{DBLP:conf/nips/OroojlooyNHS20,chen2020toward}, we consider in total four different performance metrics: average travel time, average delay, average wait time, and throughput. The meanings of these metrics are listed below. \begin{itemize} \item \textbf{Travel time.} The travel time of a vehicle is the time it takes between entering and leaving the environment. \item \textbf{Delay.} The delay of a vehicle is its travel time minus the expected travel time, i.e., the travel time when there are no other vehicles and the traffic signals are always green.
\item \textbf{Wait time.} The wait time is defined as the time a vehicle is waiting, i.e., its speed is less than a threshold. In our experiments, the threshold is set to 0.1m/s. \item \textbf{Throughput.} The throughput is the number of vehicles that have completed their trajectories before the simulation stops. \end{itemize} \subsection{Evaluation Results} \linespread{1.2} \begin{table}[htb] \resizebox{0.48\textwidth}{!}{% \begin{tabular}{cc|c|c|c|c|c} \multicolumn{2}{c|}{Datasets} & JN & HZ & NY & SH$_1$ & SH$_2$ \\ \hline \multicolumn{1}{c|}{\multirow{3}{*}{\rotatebox[origin=c]{90}{UniLight}}} & No Com. & 335.85 & 324.24 & 186.85 & 2326.29 & 209.89 \\ \multicolumn{1}{c|}{} & Hidden State & 330.99 & 323.88 & 180.99 & 1874.11 & 224.55 \\ \multicolumn{1}{c|}{} & UniComm & \textbf{325.47} & \textbf{323.01} & \textbf{180.72} & \textbf{159.88} & \textbf{208.06} \end{tabular}% } \caption{Compare UniComm with hidden state in average travel time.} \label{tab:unicommvshs} \end{table} \begin{figure*}[tb] \centering \includegraphics[width=56mm]{figures/loss/JN.png} \includegraphics[width=56mm]{figures/loss/HZ.png} \includegraphics[width=56mm]{figures/loss/NY.png} \includegraphics[width=56mm]{figures/loss/SH1.png} \includegraphics[width=56mm]{figures/loss/SH2.png} \caption{Phase prediction loss of different phase prediction target.} \label{fig:phaselossall} \end{figure*} \begin{table*}[htb] \resizebox{\textwidth}{!}{% \begin{tabular}{ccc||c|c|c|c|c} \multicolumn{3}{c||}{Datasets} & {JN} & {HZ} & {NY} & {SH$_1$} & {SH$_2$} \\ \hline \multirow{5}{*}{\rotatebox[origin=c]{90}{Traditional}} & \multicolumn{1}{c|}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & SOTL & 420.70 $\pm$ 0.00 &451.62 $\pm$ 0.00 &1137.29 $\pm$ 0.00 &2362.73 $\pm$ 216.82 &2084.93 $\pm$ 82.83 \\ & \multicolumn{1}{c|}{} & MaxPressure & 434.75 $\pm$ 0.00 &408.47 $\pm$ 0.00 &243.59 $\pm$ 0.00 &4843.07 $\pm$ 93.82 &788.55 $\pm$ 50.06 \\ & \multicolumn{1}{c|}{} & MaxBand & 378.45 $\pm$ 2.06 &458.00 
$\pm$ 1.41 &1038.67 $\pm$ 12.81 &5675.98 $\pm$ 122.45 &4501.39 $\pm$ 85.04 \\ & \multicolumn{1}{c|}{} & TFP-TCC & 362.70 $\pm$ 5.30 &400.24 $\pm$ 1.42 &1216.77 $\pm$ 21.27 &2403.66 $\pm$ 878.66 &1380.05 $\pm$ 146.18 \\ & \multicolumn{1}{c|}{} & MARLIN & 409.27 $\pm$ 0.00 &388.00 $\pm$ 0.03 &1274.23 $\pm$ 0.00 &6623.83 $\pm$ 507.52 &5373.53 $\pm$ 101.74 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{DRL}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & MA-A2C & 355.29 $\pm$ 1.03 &353.28 $\pm$ 0.82 &906.71 $\pm$ 28.96 &4787.83 $\pm$ 362.12 &431.53 $\pm$ 41.01 \\ & \multicolumn{1}{c|}{} & MA-PPO & 460.18 $\pm$ 20.71 &352.64 $\pm$ 1.18 &923.82 $\pm$ 22.77 &3650.83 $\pm$ 171.07 &2026.71 $\pm$ 597.83 \\ & \multicolumn{1}{c|}{} & PressLight & 335.93 $\pm$ 0.00 &338.41 $\pm$ 0.00 &1363.31 $\pm$ 0.00 &5114.78 $\pm$ 647.92 &322.48 $\pm$ 3.84 \\ & \multicolumn{1}{c|}{} & CoLight & 329.67 $\pm$ 0.00 &340.36 $\pm$ 0.00 &244.57 $\pm$ 0.00 &7861.59 $\pm$ 69.86 &4438.90 $\pm$ 260.66 \\ & \multicolumn{1}{c|}{} & MPLight & 383.95 $\pm$ 0.00 &334.04 $\pm$ 0.00 &646.94 $\pm$ 0.00 &7091.79 $\pm$ 22.97 &433.92 $\pm$ 23.26 \\ & \multicolumn{1}{c|}{} & AttendLight & 361.94 $\pm$ 2.85 &339.45 $\pm$ 0.82 &1376.81 $\pm$ 16.41 &4700.22 $\pm$ 87.50 &2763.66 $\pm$ 425.19 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 335.85 $\pm$ 0.00 &324.24 $\pm$ 0.00 &186.85 $\pm$ 0.00 &2326.29 $\pm$ 242.90 &209.89 $\pm$ 21.70 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{With}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{UniComm}}} & MA-A2C & 332.80 $\pm$ 1.71 &349.93 $\pm$ 1.09 &834.65 $\pm$ 38.09 &4018.67 $\pm$ 319.11 &303.69 $\pm$ 6.13 \\ & \multicolumn{1}{c|}{} & MA-PPO & 331.96 $\pm$ 1.34 &349.82 $\pm$ 1.21 &847.49 $\pm$ 30.88 &3806.77 $\pm$ 194.88 &290.99 $\pm$ 4.23 \\ & \multicolumn{1}{c|}{} & PressLight & \textbf{317.72 $\pm$ 0.00} &330.28 $\pm$ 0.00 &1152.76 $\pm$ 0.00 &6200.91 $\pm$ 529.39 &549.56 $\pm$ 51.20 \\ & 
\multicolumn{1}{c|}{} & CoLight & 318.93 $\pm$ 0.00 &336.66 $\pm$ 0.00 &291.40 $\pm$ 0.00 &7612.02 $\pm$ 271.91 &1422.99 $\pm$ 633.09 \\ & \multicolumn{1}{c|}{} & MPLight & 336.29 $\pm$ 0.00 &329.57 $\pm$ 0.00 &193.21 $\pm$ 0.00 &5095.34 $\pm$ 224.42 &542.82 $\pm$ 119.56 \\ & \multicolumn{1}{c|}{} & AttendLight & 363.41 $\pm$ 3.79 &330.38 $\pm$ 1.08 &608.12 $\pm$ 38.59 &4825.83 $\pm$ 249.90 &2915.35 $\pm$ 757.95 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 325.47 $\pm$ 0.00 &\textbf{323.01 $\pm$ 0.00} &\textbf{180.72 $\pm$ 0.00} &\textbf{159.88 $\pm$ 1.87} &\textbf{208.06 $\pm$ 0.88 } \end{tabular}% } \caption{Evaluation result of average travel time (seconds).} \label{tab:travel} \end{table*} \iffalse \begin{table*}[htb] \resizebox{\textwidth}{!}{% } \caption{Evaluation result of average travel time from all 4 runs.} \label{tab:travelall} \end{table*} \fi \begin{table*}[htb] \resizebox{\textwidth}{!}{% \begin{tabular}{ccc||c|c|c|c|c} \multicolumn{3}{c||}{Datasets} & {JN} & {HZ} & {NY} & {SH$_1$} & {SH$_2$} \\ \hline \multirow{5}{*}{\rotatebox[origin=c]{90}{Traditional}} & \multicolumn{1}{c|}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & SOTL & 260.69 $\pm$ 0.17 &232.33 $\pm$ 0.04 &1079.57 $\pm$ 0.00 &2340.80 $\pm$ 217.95 &2047.33 $\pm$ 84.18 \\ & \multicolumn{1}{c|}{} & MaxPressure & 269.30 $\pm$ 0.04 &173.03 $\pm$ 0.00 &120.52 $\pm$ 0.00 &4827.94 $\pm$ 94.16 &738.10 $\pm$ 50.68 \\ & \multicolumn{1}{c|}{} & MaxBand & 216.83 $\pm$ 2.40 &237.20 $\pm$ 1.50 &973.48 $\pm$ 13.43 &5663.45 $\pm$ 122.96 &4479.86 $\pm$ 85.46 \\ & \multicolumn{1}{c|}{} & TFP-TCC & 199.80 $\pm$ 5.11 &182.46 $\pm$ 1.65 &1161.35 $\pm$ 22.28 &2380.09 $\pm$ 884.42 &1334.86 $\pm$ 147.56 \\ & \multicolumn{1}{c|}{} & MARLIN & 247.59 $\pm$ 0.04 &158.49 $\pm$ 0.08 &1220.23 $\pm$ 0.00 &6614.58 $\pm$ 508.83 &5354.42 $\pm$ 101.87 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{DRL}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & MA-A2C & 190.84 $\pm$ 
1.29 &121.30 $\pm$ 1.00 &834.03 $\pm$ 30.93 &4773.34 $\pm$ 363.34 &378.63 $\pm$ 41.98 \\ & \multicolumn{1}{c|}{} & MA-PPO & 305.78 $\pm$ 22.30 &120.53 $\pm$ 1.51 &852.49 $\pm$ 24.34 &3631.40 $\pm$ 171.77 &1984.16 $\pm$ 603.71 \\ & \multicolumn{1}{c|}{} & PressLight & 169.30 $\pm$ 0.06 &100.17 $\pm$ 0.00 &1314.57 $\pm$ 0.00 &5100.96 $\pm$ 649.87 &267.35 $\pm$ 3.72 \\ & \multicolumn{1}{c|}{} & CoLight & 162.71 $\pm$ 0.07 &98.84 $\pm$ 0.04 &125.32 $\pm$ 0.00 &7854.45 $\pm$ 70.14 &4411.89 $\pm$ 263.23 \\ & \multicolumn{1}{c|}{} & MPLight & 218.41 $\pm$ 0.03 &88.68 $\pm$ 0.00 &554.87 $\pm$ 0.00 &7083.68 $\pm$ 23.03 &379.99 $\pm$ 23.65 \\ & \multicolumn{1}{c|}{} & AttendLight & 197.32 $\pm$ 2.62 &103.16 $\pm$ 1.58 &1329.94 $\pm$ 17.37 &4684.92 $\pm$ 87.76 &2731.29 $\pm$ 428.06 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 166.49 $\pm$ 0.01 &74.93 $\pm$ 0.00 &55.85 $\pm$ 0.00 &2302.12 $\pm$ 244.06 &153.11 $\pm$ 22.65 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{With}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{UniComm}}} & MA-A2C & 166.91 $\pm$ 1.73 &117.20 $\pm$ 1.63 &757.19 $\pm$ 40.58 &4000.45 $\pm$ 320.40 &249.05 $\pm$ 6.21 \\ & \multicolumn{1}{c|}{} & MA-PPO & 165.65 $\pm$ 1.38 &116.75 $\pm$ 2.20 &771.26 $\pm$ 32.83 &3788.19 $\pm$ 195.69 &236.43 $\pm$ 4.39 \\ & \multicolumn{1}{c|}{} & PressLight & \textbf{149.92 $\pm$ 0.05} &86.35 $\pm$ 0.00 &1096.51 $\pm$ 0.00 &6189.67 $\pm$ 530.99 &497.70 $\pm$ 52.01 \\ & \multicolumn{1}{c|}{} & CoLight & 151.00 $\pm$ 0.12 &94.24 $\pm$ 0.00 &176.94 $\pm$ 0.00 &7605.18 $\pm$ 272.52 &1379.88 $\pm$ 637.67 \\ & \multicolumn{1}{c|}{} & MPLight & 166.26 $\pm$ 0.07 &81.76 $\pm$ 0.00 &63.52 $\pm$ 0.00 &5081.75 $\pm$ 225.04 &491.35 $\pm$ 121.36 \\ & \multicolumn{1}{c|}{} & AttendLight & 197.60 $\pm$ 4.23 &86.03 $\pm$ 1.76 &507.25 $\pm$ 39.91 &4811.16 $\pm$ 250.93 &2884.94 $\pm$ 763.61 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 156.33 $\pm$ 0.04 &\textbf{72.63 $\pm$ 0.00} &\textbf{48.47 $\pm$ 0.00} 
&\textbf{119.99 $\pm$ 1.84} &\textbf{151.63 $\pm$ 0.93} \end{tabular}% } \caption{Evaluation result of average delay (seconds).} \label{tab:delay} \end{table*} \begin{table*}[htb] \resizebox{\textwidth}{!}{% \begin{tabular}{ccc||c|c|c|c|c} \multicolumn{3}{c||}{Datasets} & {JN} & {HZ} & {NY} & {SH$_1$} & {SH$_2$} \\ \hline \multirow{5}{*}{\rotatebox[origin=c]{90}{Traditional}} & \multicolumn{1}{c|}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & SOTL & 181.40 $\pm$ 0.00 &149.48 $\pm$ 0.00 &1057.95 $\pm$ 0.00 &2309.50 $\pm$ 220.12 &1988.22 $\pm$ 87.12 \\ & \multicolumn{1}{c|}{} & MaxPressure & 197.85 $\pm$ 0.00 &117.95 $\pm$ 0.00 &66.44 $\pm$ 0.00 &4813.49 $\pm$ 94.59 &679.05 $\pm$ 53.07 \\ & \multicolumn{1}{c|}{} & MaxBand & 138.97 $\pm$ 1.95 &158.44 $\pm$ 1.42 &946.18 $\pm$ 14.03 &5646.98 $\pm$ 124.50 &4448.32 $\pm$ 86.18 \\ & \multicolumn{1}{c|}{} & TFP-TCC & 124.67 $\pm$ 4.92 &105.31 $\pm$ 1.48 &1142.30 $\pm$ 22.99 &2346.91 $\pm$ 894.53 &1268.55 $\pm$ 148.46 \\ & \multicolumn{1}{c|}{} & MARLIN & 175.82 $\pm$ 0.00 &92.56 $\pm$ 0.03 &1202.31 $\pm$ 0.00 &6602.23 $\pm$ 510.35 &5329.60 $\pm$ 104.08 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{DRL}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & MA-A2C & 110.93 $\pm$ 1.06 &54.30 $\pm$ 0.67 &798.16 $\pm$ 32.82 &4756.17 $\pm$ 365.55 &298.59 $\pm$ 41.97 \\ & \multicolumn{1}{c|}{} & MA-PPO & 234.25 $\pm$ 25.70 &53.51 $\pm$ 1.12 &817.35 $\pm$ 25.36 &3607.33 $\pm$ 172.69 &1939.55 $\pm$ 609.96 \\ & \multicolumn{1}{c|}{} & PressLight & 94.89 $\pm$ 0.00 &41.62 $\pm$ 0.00 &1292.71 $\pm$ 0.00 &5083.04 $\pm$ 652.50 &197.60 $\pm$ 3.86 \\ & \multicolumn{1}{c|}{} & CoLight & 90.75 $\pm$ 0.00 &43.16 $\pm$ 0.00 &79.26 $\pm$ 0.00 &7848.86 $\pm$ 70.36 &4390.50 $\pm$ 263.65 \\ & \multicolumn{1}{c|}{} & MPLight & 141.80 $\pm$ 0.00 &36.38 $\pm$ 0.00 &520.81 $\pm$ 0.00 &7075.08 $\pm$ 23.24 &306.22 $\pm$ 25.09 \\ & \multicolumn{1}{c|}{} & AttendLight & 118.12 $\pm$ 2.78 &42.20 $\pm$ 0.86 &1313.50 
$\pm$ 18.15 &4660.36 $\pm$ 88.32 &2669.69 $\pm$ 433.18 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 96.65 $\pm$ 0.00 &27.18 $\pm$ 0.00 &22.03 $\pm$ 0.00 &2273.12 $\pm$ 246.13 &87.10 $\pm$ 24.52 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{With}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{UniComm}}} & MA-A2C & 90.16 $\pm$ 1.55 &51.30 $\pm$ 0.96 &717.81 $\pm$ 42.64 &3972.36 $\pm$ 323.13 &170.02 $\pm$ 6.00 \\ & \multicolumn{1}{c|}{} & MA-PPO & 89.48 $\pm$ 1.26 &51.49 $\pm$ 1.09 &733.61 $\pm$ 34.53 &3761.27 $\pm$ 197.65 &160.16 $\pm$ 4.23 \\ & \multicolumn{1}{c|}{} & PressLight & \textbf{77.83 $\pm$ 0.00} &32.86 $\pm$ 0.00 &1065.61 $\pm$ 0.00 &6173.58 $\pm$ 533.88 &424.60 $\pm$ 53.66 \\ & \multicolumn{1}{c|}{} & CoLight & 79.62 $\pm$ 0.00 &40.15 $\pm$ 0.00 &124.21 $\pm$ 0.00 &7596.31 $\pm$ 273.81 &1315.25 $\pm$ 649.06 \\ & \multicolumn{1}{c|}{} & MPLight & 96.48 $\pm$ 0.00 &29.62 $\pm$ 0.00 &26.73 $\pm$ 0.00 &5065.68 $\pm$ 226.48 &438.78 $\pm$ 125.50 \\ & \multicolumn{1}{c|}{} & AttendLight & 121.75 $\pm$ 3.65 &32.72 $\pm$ 1.02 &435.25 $\pm$ 39.52 &4787.83 $\pm$ 253.85 &2833.39 $\pm$ 775.06 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 84.83 $\pm$ 0.00 &\textbf{25.53 $\pm$ 0.00} &\textbf{19.16 $\pm$ 0.00} &\textbf{73.46 $\pm$ 1.83} &\textbf{84.89 $\pm$ 1.32} \end{tabular}% } \caption{Evaluation result of average wait time (seconds).} \label{tab:wait} \end{table*} \begin{table*}[htb] \centering \begin{tabular}{ccc||c|c|c|c|c} \multicolumn{3}{c||}{Datasets} & {JN} & {HZ} & {NY} & {SH$_1$} & {SH$_2$} \\ \hline \multirow{5}{*}{\rotatebox[origin=c]{90}{Traditional}} & \multicolumn{1}{c|}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & SOTL & 5369 $\pm$ 0 &2648 $\pm$ 0 &879 $\pm$ 0 &11459 $\pm$ 1193 &8769 $\pm$ 1042 \\ & \multicolumn{1}{c|}{} & MaxPressure & 5330 $\pm$ 0 &2656 $\pm$ 0 &2642 $\pm$ 0 &6477 $\pm$ 216 &12245 $\pm$ 383 \\ & \multicolumn{1}{c|}{} & MaxBand & 5422 $\pm$ 6 &2574 $\pm$ 3 &1010 $\pm$ 21 &5316 $\pm$ 436 
&4705 $\pm$ 99 \\ & \multicolumn{1}{c|}{} & TFP-TCC & 5607 $\pm$ 17 &2668 $\pm$ 5 &790 $\pm$ 32 &10267 $\pm$ 1458 &10341 $\pm$ 24 \\ & \multicolumn{1}{c|}{} & MARLIN & 5028 $\pm$ 0 &2489 $\pm$ 0 &691 $\pm$ 0 &2895 $\pm$ 389 &3191 $\pm$ 909 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{DRL}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{Algorithms}}} & MA-A2C & 5537 $\pm$ 15 &2708 $\pm$ 6 &1115 $\pm$ 41 &4344 $\pm$ 1701 &240 $\pm$ 27 \\ & \multicolumn{1}{c|}{} & MA-PPO & 5059 $\pm$ 123 &2712 $\pm$ 4 &1168 $\pm$ 48 &4931 $\pm$ 2131 &5690 $\pm$ 743 \\ & \multicolumn{1}{c|}{} & PressLight & 5610 $\pm$ 0 &2720 $\pm$ 0 &492 $\pm$ 0 &5066 $\pm$ 589 &14162 $\pm$ 175 \\ & \multicolumn{1}{c|}{} & CoLight & 5641 $\pm$ 0 &2714 $\pm$ 0 &425 $\pm$ 0 &1737 $\pm$ 154 &4169 $\pm$ 414 \\ & \multicolumn{1}{c|}{} & MPLight & 4653 $\pm$ 0 &2530 $\pm$ 0 &617 $\pm$ 0 &2114 $\pm$ 431 &4287 $\pm$ 71 \\ & \multicolumn{1}{c|}{} & AttendLight & 5380 $\pm$ 33 &2488 $\pm$ 22 &411 $\pm$ 26 &4470 $\pm$ 835 &3972 $\pm$ 174 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 5626 $\pm$ 0 &2730 $\pm$ 0 &2686 $\pm$ 0 &8991 $\pm$ 695 &14746 $\pm$ 595 \\ \hline \multirow{7}{*}{\rotatebox[origin=c]{90}{With}} & \multicolumn{1}{c|}{\multirow{7}{*}{\rotatebox[origin=c]{90}{UniComm}}} & MA-A2C & 5662 $\pm$ 12 &2711 $\pm$ 5 &1192 $\pm$ 50 &8091 $\pm$ 839 &14433 $\pm$ 46 \\ & \multicolumn{1}{c|}{} & MA-PPO & 5662 $\pm$ 8 &2711 $\pm$ 4 &1249 $\pm$ 37 &7440 $\pm$ 691 &14453 $\pm$ 115 \\ & \multicolumn{1}{c|}{} & PressLight & 5662 $\pm$ 0 &2726 $\pm$ 0 &561 $\pm$ 0 &2663 $\pm$ 381 &7323 $\pm$ 2052 \\ & \multicolumn{1}{c|}{} & CoLight & \textbf{5675 $\pm$ 0} &2715 $\pm$ 0 &478 $\pm$ 0 &2124 $\pm$ 401 &5798 $\pm$ 413 \\ & \multicolumn{1}{c|}{} & MPLight & 5321 $\pm$ 0 &2725 $\pm$ 0 &721 $\pm$ 0 &1285 $\pm$ 486 &3854 $\pm$ 459 \\ & \multicolumn{1}{c|}{} & AttendLight & 5371 $\pm$ 10 &2730 $\pm$ 3 &671 $\pm$ 38 &6754 $\pm$ 657 &5254 $\pm$ 188 \\ \cline{3-8} & \multicolumn{1}{c|}{} & UniLight & 
5654 $\pm$ 0 &\textbf{2739 $\pm$ 0} &\textbf{2688 $\pm$ 0} &\textbf{16928 $\pm$ 3963} &\textbf{14538 $\pm$ 981} \end{tabular}% \caption{Evaluation result of average throughput (number of vehicles).} \label{tab:throughput} \end{table*} We report the results for all four metrics, including the average values and their standard deviations, in Tables~\ref{tab:travel} to~\ref{tab:throughput}. Note that the standard deviation of some experiments is zero. This is because in these datasets the environment is deterministic, and the DQN is also deterministic during testing; consequently, the results remain unchanged across all 10 tests. \subsubsection{Performance comparison without UniComm} We first focus on the results without UniComm, i.e., all competitors communicate in their original way, and UniLight runs without any communication. The numbers in bold indicate the best performance. We observe that RL based methods achieve better results than traditional methods on the public datasets. However, in a complicated environment like SH$_1$, agents may fail to learn a valid policy, and accordingly RL based methods may perform much worse than the traditional methods. UniLight performs the best on almost all datasets, and it demonstrates significant advantages in complicated environments. It improves the average performance by 8.2\% on the three public datasets and by 35.6\% in more complicated environments like SH$_1$/SH$_2$. We believe the improvement brought by UniLight is mainly attributable to the following two features. Firstly, UniLight divides the input state into traffic movements and uses the same model layer to generate the corresponding hidden state for each traffic movement. As the layer is shared by all traffic movements regardless of the traffic phase selected, the model parameters can be trained more efficiently. For example, the intersection shown in Figure 1 has 12 traffic movements, so for every input state the shared layer is effectively trained 12 times.
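The parameter-sharing idea can be illustrated with a minimal sketch (plain Python, with hypothetical feature and hidden-state sizes; this is not the authors' implementation): a single linear layer encodes every movement's sub-state, so one observation yields 12 training signals for the same weights.

```python
# Minimal sketch of a shared per-movement encoder (hypothetical sizes):
# one linear layer is reused for every traffic movement, so a single
# intersection observation updates the same parameters 12 times.

def linear(x, w, b):
    """Apply a dense layer y = xW + b to a feature vector x."""
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(w, b)]

def encode_movements(state, w, b):
    """state: list of 12 per-movement feature vectors (e.g. queue
    length, approaching vehicles). The SAME (w, b) encodes each one."""
    return [linear(m, w, b) for m in state]

# 12 movements, 3 features each, 4-dim hidden state (all hypothetical).
w = [[0.1] * 3 for _ in range(4)]   # 4 output units, 3 inputs each
b = [0.0] * 4
state = [[1.0, 2.0, 0.0] for _ in range(12)]
hidden = encode_movements(state, w, b)
```

Because the shared layer receives a gradient signal from every movement of every observation, its parameters are trained far more often than those of a monolithic per-intersection network of the same size.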
Secondly, to predict the Q-value of traffic phase $P$, we split the hidden states of traffic movements into two groups based on their permissions and aggregate the hidden states within each group, so the model layer that predicts the Q-value can be reused by different phases. This again implies that the layer is trained more often, so it is able to learn better model weights. The JN dataset is the only exception, where CoLight and PressLight perform slightly better than UniLight. This is because many roads in the JN dataset share the same number of approaching vehicles, which makes control easier, and all methods perform similarly. \subsubsection{The Impact of UniComm} As UniComm is universal for existing methods, we apply UniComm to the six representative RL based methods and re-evaluate their performance, with results listed in the bottom portion of Tables~\ref{tab:travel} to~\ref{tab:throughput}. We observe that UniLight again achieves the best performance consistently. In addition, almost all RL based methods (including UniLight) achieve better performance with UniComm. This is because UniComm predicts the number of approaching vehicles on $R_{i,j}$ mainly from the hidden states of the traffic movements covering $R_{i,j}$. Consequently, $A_i$ is able to produce predictions for a neighboring $A_j$ based on the most important and relevant information. As a result, neighbors receive customized results, which allows them to utilize the information more effectively. In addition, agents only share predictions with their neighbors, so the communication payload and the observation dimension remain small. This allows existing RL based methods to outperform their original versions, whose communications are mainly based on hidden states. Some original methods perform worse with UniComm in certain experiments, e.g., PressLight on SH$_1$. This is because these methods exhibit unstable performance with very large variance on some datasets.
Despite a small number of outliers, UniComm provides a consistent boost to all existing methods. \subsubsection{Phase Prediction Target Evaluation} As mentioned previously, instead of directly using the current phase action to calculate the phase prediction loss $L_p$, we use the actions stored in the replay buffer. To evaluate its effectiveness, we plot the curve of $L_p$ in Figure~\ref{fig:phaselossall}. \emph{Current} and \emph{Replay} refer to the use of the action taken by the current network and the action stored in the replay buffer, respectively, when calculating $L_p$. The curves show the phase prediction loss, reflecting the prediction accuracy during the training process. We observe that when using stored actions as the target, the loss becomes smaller, i.e., the model learns better phase predictions. The convergence of the phase prediction loss with stored actions is slower at the beginning of training. This is because, to perform more exploration, most actions are random at the beginning and therefore hard to predict. \subsubsection{UniComm vs. Hidden States} To further evaluate the effectiveness of UniComm from a different angle, we introduce another version of UniLight that uses a 32-dimensional hidden state for communication, the same as \cite{wei2019colight}. In total, three different versions of UniLight are evaluated, i.e., \emph{No Com.}, \emph{Hidden State}, and \emph{UniComm}. As the names suggest, \emph{No Com.} refers to the version without sharing any information, as there is no communication between agents; \emph{Hidden State} refers to the version sharing a 32-dimensional hidden state; and \emph{UniComm} refers to the version that implements UniComm. The average travel time of all vehicles under these three variants of UniLight is listed in Table \ref{tab:unicommvshs}. Note that \emph{Hidden State} shares the largest amount of information and is able to improve the performance compared with \emph{No Com.}.
Unfortunately, the amount of information shared is \emph{not} proportional to the performance improvement it can achieve. For example, UniLight with UniComm performs better than the version with Hidden State, although it shares less information. This further verifies our argument that the content and relevance of the shared information matter much more than its amount. \section{Conclusion} In this paper, we propose UniComm, a novel communication form for the decentralized multi-agent learning based M-TSC problem. It enables each agent to share predictions of approaching vehicles with its neighbors via communication. We also design UniLight to predict Q-values based on UniComm. Experimental results demonstrate that UniComm is universal for existing M-TSC methods, and that UniLight outperforms existing methods in both simple and complex environments. \bibliographystyle{named}
\section{INTRODUCTION} The field of robotics has improved drastically in the last decade. While early robots were meant to reduce manual labour or automate menial tasks, current robots take on social roles such as teachers~\cite{Han_EducationRobots_HAI_2015} and companions for the elderly~\cite{Josephine_HealthRobots_SR_2015}. The appearance of robots has also evolved over this period. In recent years, robots that are more human-like and autonomous when interacting with humans have been preferred over conventional robots. Due to this, the field of social robotics has gained momentum. In contrast to early task-based robots, the design of humanoid social robots involves developing cognition that considers context, the work environment and social interaction cues. With the development of Artificial Intelligence and robotics, a universal question arises: ``Can a humanoid social robot be a part of a company's workforce?''. Does it have all the skills and etiquette to function successfully in an open work environment with different tasks? Human-robot interaction studies are usually conducted in controlled settings or with pre-defined tasks. Such interactions do not allow us to understand whether a robot can adapt to and perform an arbitrary role in an organization. To answer these questions, in this paper we conduct an initial study with Nadine~\cite{NadineSR_CGI_2019}, a humanoid social robot, as a customer service agent in an insurance company. The setup was entirely open to the public, and customers could ask any question. As a service agent, Nadine is required to answer customer queries and maintain a courteous, professional behaviour during her interactions. Our objective was to study whether a robot can handle real social situations, whether customers are willing to interact with human-like robots, and the effects of humanoid robots in a work environment. Nadine's success as an insurance agent is based on the quality of customer-agent interaction.
To evaluate the human-robot interaction, we have used survey questionnaires and customer feedback. The survey questionnaire was prepared in such a way that the customer can rate the interaction with the robot agent in terms of functionality, behaviour, usefulness, etc. The customer can also provide free-form feedback about his/her interaction. Even though surveys can be quantified easily, the same is not true for feedback: customer feedback usually has to be read manually to understand the limitations of the robot. In this paper, we have borrowed sentic computing~\cite{cambria2015sentic} concepts, specifically aspect-based sentiment analysis, and applied them to the collected customer feedback. In contrast to sentiment analysis, which detects the sentiment of the overall text, aspect-based analysis identifies the various aspects mentioned in each text and determines the corresponding sentiment for each one. Using the proposed framework, we can examine each aspect customers talk about. This also helps us quantify the customer feedback data and the aspects that customers notice in a work environment, and to identify limitations of and future extensions to humanoid robots in the work environment. The rest of the paper is organized as follows: In section II, we provide related work on robots employed in the work environment and their interaction. We also look into aspect-based sentiment analysis methods for gauging customer satisfaction. In section III, we explain our experimental setup of Nadine at the insurance company. In section IV, we describe the details of our data collection methods. In section V, we provide details of our framework to analyze user comments based on aspect-based sentiment analysis. In section VI, we present experimental results of the analysis of the survey and user comments, and discuss potential limitations and possible future work. We provide conclusions in section VII.
\section{Literature Review} \subsection{Robots at workplaces} Robots have become an integral part of society. In recent times, several researchers and organizations have considered making robots a part of their workforce. Initial studies of robots at workplaces were restricted to simple tasks such as greeting at an information booth \cite{Actroid}, performing the predefined skill of archery \cite{iCub:2011}, bartending with communication skills \cite{Giuliani:2013}, and guiding in a museum \cite{museum_guide:2005}. Robovie \cite{Tutors_robot:2004} was used as a language tutor for a small group of people in an elementary school. In all these tasks, the complexity of human-robot interaction was low, and mistakes made by a robot in such scenarios are inconsequential. Robots have also been considered for more open work environments with serious consequences, for instance health care \cite{Ljungblad:2012}, restaurant waiters \cite{Robotics_Waiter}, space research \cite{Robonaut} and rescue functionalities \cite{MOIRA}. Ljungblad et al. \cite{Ljungblad:2012} introduced a simple utility robot in a hospital environment for transporting goods between departments for 13 days and studied its effects using interviews, questionnaires and observations. Fasola et al. \cite{Robot_Motivate_Exercise_Older} used a socially assistive robot to motivate physical exercise for older adults. The results of the survey questions regarding participant perceptions and feelings toward the robot were very encouraging; the participants rated the robot highly in terms of intelligence and helpfulness, attributed a moderately high level of importance to the exercise sessions, and reported their mood throughout the sessions to be normal to moderately pleased. In a restaurant setting, \cite{robot-waiter} employed robots that could move, take orders and even talk to customers in a limited fashion.
However, these robots were limited in nature, as they could not carry heavier items like soup, pour water for customers or properly communicate. The success of robots in each of these applications is measured differently due to the variation of the tasks involved. In most of the applications considered for robots at workplaces, the tasks involved are simple and social interaction with humans is limited. The appearance of these robots is not always human-like, depending on the task they are involved in. In contrast, we choose a realistic-looking humanoid social robot, Nadine, for our experiments, and set her up as an insurance agent to interact with customers in open social scenarios and perform the tasks defined for an agent in the organization. \subsection{Customer satisfaction analysis using aspect-based sentiment analysis} Hu et al. \cite{Mining_summarizing_customer_reviews} analyzed customer reviews using an aspect extraction method. The authors restricted themselves to explicit aspects and a set of rules based on statistical observations. Scaffidi et al. \cite{Product_feature_Scoring_from_Reviews} presented a method that uses a language model to identify product features, assuming that product features are more frequent in product reviews than in general natural language text. However, their method seems to have low precision, since the retrieved aspects are affected by noise. Zhang et al. \cite{Weakness_Finder} introduced an expert system, Weakness Finder, which can help manufacturers find their product weaknesses from Chinese reviews by using aspect-based sentiment analysis. For explicit features, they incorporated a morpheme-based method and a HowNet-based similarity measure for grouping them, while a collocation selection method for each aspect was employed for grouping implicit features. For each of the extracted aspects, they utilized a sentence-based sentiment analysis method to determine the polarity.
All these methods were applied to product reviews to quantify product features and usage. In contrast, we apply sentic computing to understand customer demands and expectations of a humanoid robot agent in a workplace. For any new technology, gauging customer satisfaction is very important, as it can provide essential insights on usefulness and customer demand. For these reasons, customers are usually asked to rate their experience via simple questionnaires and provide feedback. However, customers tend to skip these surveys, as they are usually voluntary. The analysis of such feedback comments is also tedious, as it requires someone to read all reviews and manually highlight the primary customer demands. Such manual analysis can be time-consuming and subject to human bias. In contrast, in this paper, we propose an NLP-based framework relying on aspect-based sentiment analysis to analyze customer feedback and to get insights on Nadine's performance as an agent, the overall customer experience with her, and areas for improvement. \section{Experimental Setup} For our experiments, we have used Nadine, a realistic humanoid social robot with natural skin, hair and appearance. Figure \ref{fig:framework} shows Nadine's architecture, which consists of three layers, namely perception, processing and interaction. Nadine \cite{NadineSR_CGI_2019} receives audio and visual stimuli from a microphone, 3D cameras and web cameras to perceive user characteristics and her environment, which are then sent to the processing layer. \begin{figure}[h] \centering \includegraphics[width = 0.4\textwidth]{RobotArchitecture.jpg} \caption{Nadine Robot's Architecture} \label{fig:framework} \end{figure} The processing layer is the core module of Nadine that receives all results from the perception layer about the environment and user, and acts upon them.
This layer includes various sub-modules such as dialog processing (chatbot), an affective system (emotions, personality, mood), and Nadine's memory of previous encounters with users. Responses are sent to the interaction layer so that they can be executed and visibly shown by Nadine, such as head movements to maintain eye gaze, gestures and facial expressions, and dialog and tone (to show different emotions and personality). \begin{figure}[h] \centering \includegraphics[width =0.5\textwidth]{Nadine_AIA_black} \caption{Nadine setup in insurance company} \label{fig:Nadine_setup} \end{figure} Nadine was set up in an insurance company to work as a customer service agent alongside other human employees. She was required to handle customer queries and behave like a courteous service agent. For customer queries, the company had provided several FAQs from their customer interactions. A separate chatbot \cite{Chatterbot_2018} was trained on these FAQs and integrated into Nadine, which allowed her to handle customer queries. The main objective of the study was to see whether customers were willing to interact with the robot service agent and whether Nadine is able to handle such workplace scenarios. Customers voluntarily filled in a survey form to rate their experience with Nadine and provided feedback comments. \section{Data Collection for survey} To analyze Nadine's performance as a customer service agent, we needed to collect feedback from customers. The collected data was used for analyzing the customer-agent (robot) interaction and the effectiveness of the robot in the workplace. For this purpose, we employed two modes of data collection, namely a survey questionnaire and customer feedback. In this section, we outline the details of both these modalities. \subsection{Questionnaire} We created a questionnaire on Survey Monkey with seven questions. Throughout the questionnaire, Nadine was addressed as staff rather than as a robot.
The survey was voluntary for the customers to fill in and was set up on a tablet. We collected $14$ customer survey responses on Nadine's performance as a customer service agent. The questions are tabulated in Table \ref{table:cust_survey}. \begin{table}[] \begin{tabular}{|p{8cm}|} \hline What is your gender? \\ \hline How old are you? \\ \hline Does the staff possess the required skills and knowledge about the company's products and services? \\ \hline Was the staff friendly and behaving in a courteous manner when dealing with you? \\ \hline Is the staff professional, with a pleasing and presentable appearance? \\ \hline Was the staff willing to listen and respond to your needs on time? \\ \hline How would you rate the ease of access and the usefulness of our online e-care with the help of the staff?\\ \hline \end{tabular} \caption{Customer Survey Questions} \label{table:cust_survey} \end{table} \subsection{Customer feedback} We also asked all customers to give unrestricted feedback on Nadine's performance as a customer service agent at the insurance company, so that they could express their opinion on Nadine outside the survey questionnaire. A total of $75$ users gave their valuable feedback on Nadine. These comments were analysed using the sentic computing framework explained in Section \ref{Sec:NLP_Framework}, and the results of the analysis are discussed in Section \ref{Sec:Results}. \section{Proposed NLP Framework for analysis of customer feedback} \label{Sec:NLP_Framework} This section discusses the proposed framework for aspect extraction and sentiment analysis on customer comments. We use sentic computing to analyze user comments. Sentic computing aims to bridge the gap between statistical NLP and the many other disciplines that are necessary for understanding human language, such as linguistics, commonsense reasoning, and affective computing.
The sentic computing framework is designed to receive as input a natural language concept represented according to an M-dimensional space, and to predict the corresponding sentic levels for the four affective dimensions. We first analyze the comments based on aspects and then parse each sentence through SenticNet for polarity detection (Figure \ref{fig:sentic}). The framework depicts the sentiment analysis of each aspect customers talk about. As customers talk about a single aspect in their feedback, the sentence polarity is the same as the aspect polarity. We explain how aspect extraction and SenticNet work in subsections \ref{subsec:aspect} and \ref{subsec:senticnet}, respectively. \begin{figure}[h] \centering \includegraphics[width =0.5\textwidth]{sentic} \caption{Flowchart of the sentence-level polarity detection framework. Text is first decomposed into concepts. If these are found in SenticNet, sentic patterns are applied. If none of the concepts is available in SenticNet, the ELM classifier is employed.~\cite{cambria2015sentic}} \label{fig:sentic} \end{figure} \subsection{Aspect Extraction} \label{subsec:aspect} Aspect-based opinion mining~\cite{hukdd} focuses on the relations between aspects and document polarity. An aspect, also known as an opinion target, is a concept on which an opinion is expressed in the given document. The framework~\cite{poria2016aspect} incorporates a 7-layer deep convolutional neural network (CNN) which tags each word in opinionated sentences as either an aspect or a non-aspect word. The model also includes a set of linguistic patterns for the same purpose and combines them with the CNN. This ensemble model is used to extract the aspects from the customers' comments.
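To make the per-word tagging scheme concrete, the following toy sketch illustrates window-based aspect tagging. A hypothetical keyword heuristic stands in for the trained CNN + CRF tagger, and the aspect list is illustrative only; it is not the model used in the paper.

```python
# Toy sketch of window-based aspect tagging: each word is classified
# together with a fixed-size context window, mimicking the structure of
# the CNN + CRF tagger. The scoring rule is a hypothetical keyword
# heuristic, NOT the trained model.

ASPECT_HINTS = {"appearance", "functionality", "performance", "voice"}

def windows(tokens, size=2):
    """Yield (word, context) pairs with up to `size` words on each side;
    the context is what the CNN would convolve over."""
    for i, tok in enumerate(tokens):
        lo, hi = max(0, i - size), min(len(tokens), i + size + 1)
        yield tok, tokens[lo:i] + tokens[i + 1:hi]

def tag_aspects(sentence):
    """Return words tagged as aspect ('ASP') or non-aspect ('O')."""
    tokens = sentence.lower().split()
    return [(tok, "ASP" if tok in ASPECT_HINTS else "O")
            for tok, _ctx in windows(tokens)]

tags = tag_aspects("Her appearance is great but the performance lags")
```

In the real pipeline the per-window scores are logits fed to a CRF for sequence tagging, so neighbouring labels influence each other instead of being decided independently as here.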
The procedure of the aspect model is as follows: \begin{enumerate} \item form a window around the word to tag \item apply the CNN on that window \item apply max-pooling on that window \item obtain logits \item apply a CRF for sequence tagging \end{enumerate} The trained model can be found here\footnote{\url{https://github.com/SenticNet/aspect-extraction}}. \subsection{SenticNet} \label{subsec:senticnet} SenticNet is the knowledge base which the sentic computing~\cite{cambria2015sentic} framework leverages for concept-level sentiment analysis. SenticNet is a publicly available semantic resource for concept-level sentiment analysis that exploits an ensemble of graph mining and multi-dimensional scaling to bridge the conceptual and affective gap between word-level natural language data and the concept-level opinions and sentiments conveyed by them~\cite{cambria2018senticnet}. SenticNet provides the semantics and sentics associated with 100,000 commonsense concepts, instantiated by either single words or multi-word expressions. Figure \ref{fig:sentic} shows how a sentence is processed. The input text is first decomposed into concepts. If these are found in SenticNet~\cite{vilares2018babelsenticnet}, sentic patterns are applied. If none of the concepts is available in SenticNet, the ELM classifier is employed. \section{Experimental Results and Discussions} \label{Sec:Results} In this section, we explain the results of our proposed survey questionnaire and of the aspect-based sentiment analysis on customer feedback. We also discuss the limitations and possible future directions based on the analysis of the customer-agent interaction data. \subsection{Analysis of Questionnaire} The first two questions of the survey were meant to capture the customer demographics at the insurance company. This helped us understand the group of customers that robots like Nadine would attract in a work environment.
From our questionnaire, we observed that females were more interested in talking to Nadine, and people in the age group of $36 - 45$ interacted with her the most. In general, we observe that the younger generation was more comfortable and willing to interact with the robot. The results of both questions can be seen in figures \ref{fig:q1} and \ref{fig:q2}. \begin{figure}[ht] \centering \includegraphics[width =0.5\textwidth]{Q1} \caption{Customer response on question 1} \label{fig:q1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width =0.5\textwidth]{Q2} \caption{Customer response on question 2} \label{fig:q2} \end{figure} It was observed that $39$\% of the customers recorded that Nadine had the required skills and knowledge about the insurance company's products and services. In this initial study, Nadine was required only to answer customer queries and behave with the etiquette appropriate for a customer service agent; the skill set and questions provided to Nadine were limited. Due to security and privacy concerns, Nadine was not able to access sensitive and personal data such as customer policy information. Due to the open nature of the dialog, however, customers believed that she could access this type of data and help them with all possible queries, which Nadine was not trained for. This could be the reason why only $32$\% of the customers believed that Nadine was professional: people whose queries she could not handle thought she was not professional. Also, Nadine has a very realistic, human-like appearance, which raised customers' expectations of the robot's professional ability and insurance-related functionality. This shows that there needs to be a trade-off between the tasks a robot is trained for and its appearance. Nadine had an additional capability to help customers use the online platform of the insurance company.
The motive was to familiarize customers with the new online platform, which they can use at home to get all routine policy-related information, avoiding unnecessary travel to the service centre. Nadine could guide them step by step to register their account and change their address on the online platform. Customers mostly rated Nadine's help with the online platform as moderate. The results of questions 3, 5 and 7 are shown in figure \ref{fig:q3}. \begin{figure}[h] \centering \includegraphics[width =0.5\textwidth]{chart} \caption{Customer response on questions 3, 5, 7} \label{fig:q3} \end{figure} For the question on Nadine's courteous nature and friendliness, customers mostly agreed or were neutral. For a customer service agent, it is necessary to be pleasant, courteous, always smiling and to show emotions. As a social robot, Nadine has been programmed to simulate a range of emotions and was set up according to the needs of an agent in the insurance company. Similarly, customers mostly agreed or were neutral when asked about Nadine's willingness to listen and respond on time. As a robot, Nadine is always welcoming and willing to listen, but responses can be delayed. For instance, when a customer's question is not in her database, she searches online for the most appropriate answer, which delays her response. Sometimes she may also not reply because the customer did not speak into the microphone correctly, so she received no input. The results of questions 4 and 6 can be seen in figures \ref{fig:q4} and \ref{fig:q6} respectively.
\begin{figure}[h] \centering \includegraphics[width =0.5\textwidth]{Q4} \caption{Customer response on question 4} \label{fig:q4} \end{figure} \begin{figure}[h] \centering \includegraphics[width =0.5\textwidth]{Q6} \caption{Customer response on question 6} \label{fig:q6} \end{figure} \subsection{Sentic Computing Framework results of Customer Feedback} The results show the aspects of the user comments and the sentiments associated with them. The results discussed here will help future generations of humanoid robots to enhance human-robot interaction. The aspects in Figure \ref{fig:aspect} show the main characteristics that the customers looked for in the robot; the size of each word reflects the number of customer comments about that aspect. It can be observed that functionality, appearance and performance were the three main aspects that customers commented on. Each sentiment is either Positive, Negative or Neutral. The sentiments observed in customer feedback can be seen in Figure \ref{fig:polarity}: 50\% of the customers were Positive towards Nadine as an employee, 37.1\% were Negative and 10\% Neutral. SenticNet could not find sentiments in the remaining 2.9\% of the feedback since it was written in informal language (microtext)~\cite{satapathy2017phonetic}, which our current version is not able to handle. \begin{figure}[h] \centering \includegraphics[width =0.5\textwidth]{polarity} \caption{Polarity detection} \label{fig:polarity} \end{figure} Due to the non-availability of sensitive customer data, Nadine cannot perform all tasks or functionalities that a customer agent can perform. We also need to add a security layer to be able to handle sensitive data. In an open social interaction, there are no restrictions on the questions a customer can ask, and it is challenging to train a robot for open-ended questions. Thus, if Nadine is not trained for a question, she has to go online and depends on her network speed for her answers.
This affects her response time and performance. Future work therefore includes retraining the model to answer more effectively and quickly. The appearance of the robot plays a vital role in the way customers perceive it. Due to the uncanny valley, the expectation of realism from a human-like robot is high. Customers would believe Nadine could handle all types of situations and interactions like other human agents. Due to this, the results were mostly positive, but with negative comments about her manicure and other minute details. A few comments related to the hardware used (such as the speaker and microphone), the language of communication and appearance (requires manicure), which can be easily changed to give a better customer interaction experience. We observe that the majority of sentiments are positive. The positive sentiments are mainly about the appearance, while the negative sentiments centre on functionality, performance and response time. The negative sentiments are the result of the robot being very human-like, which raises the expectations of customers. We are also working on adding microtext normalization~\cite{satapathy2019phonsenticnet} to handle informal texts, and common-sense reasoning for an effective dialogue system. Future work revolves mostly around the negative aspects customers gave feedback about. To summarize, our results show that the customers had an overall positive experience with Nadine as their service agent. Both the user survey and the aspect-based sentiment analysis of customer feedback show that Nadine's social behaviour was acceptable and pleasing. The functionality and performance of Nadine were limited due to some of the reasons mentioned above, but can be improved as part of our future work. \begin{figure}[h] \centering \includegraphics[width =0.5\textwidth]{aspect-cloud} \caption{Aspect Cloud from Customer's comments} \label{fig:aspect} \end{figure} \section{Conclusion} We have conducted user studies on a social robot at the workplace.
Based on the customer feedback and survey questionnaire, we have identified customer expectations and demands of such a robot employee. We analyzed customer feedback using aspect-based sentiment analysis to identify aspects and the sentiments associated with them. From our experiments, we observed that the general customer sentiment towards Nadine was positive. Functionality, appearance and performance were the three main aspects of customer feedback. From our analysis, we also observed that Nadine performs well in a work environment and is capable of maintaining proper social etiquette that is pleasing to the customers. \section*{ACKNOWLEDGMENT} This research is supported by the BeingTogether Centre, a collaboration between Nanyang Technological University (NTU) Singapore and University of North Carolina (UNC) at Chapel Hill. The BeingTogether Centre is supported by the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centres in Singapore Funding Initiative. We would also like to thank our colleagues Yiep Soon and Ajay Vishwanath for their support in setting up Nadine for the experiment. \bibliographystyle{IEEEtran}
\section{Introduction} Wolf-Rayet (WR) stars -- the progeny of massive O-type stars -- are excellent tracers of young stellar populations in galaxies owing to their unique spectroscopic signatures of strong, broad emission lines \citep{crowther07}. However, whilst WR surveys of nearby galaxies are nearing completeness \citep{massey14}, the Wolf-Rayet content of the Milky Way remains woefully incomplete \citep[e.g.][]{shara09} due to high dust obscuration at visual wavelengths. The detailed evolution of massive stars remains unclear, with inaccuracies in earlier evolutionary phases magnified in the WR phase. In addition, it is becoming clear that the conventional view of $\geq$20--25 $M_{\odot}$ O stars advancing through the Luminous Blue Variable (LBV) stage en route to the nitrogen- (WN) and carbon- (WC) sequence Wolf-Rayet phase and ultimately a stripped envelope core-collapse supernova (ccSN) is incomplete if not incorrect. First, a high fraction of massive stars are now known to be multiple \citep{sana12}, so the major effects of close binary evolution need to be considered. Second, it has been proposed that LBVs are lower mass binary products, from an inspection of their spatial location in the Milky Way and Large Magellanic Cloud (LMC) with respect to Wolf-Rayet and O stars \citep{smith15}. Third, \citet{sander12} argue from a spectroscopic analysis of Milky Way WC stars that the most massive stars do not pass through this phase. Finally, it is not clear whether the most massive stars will undergo a bright SN explosion after core-collapse, since they may collapse directly to a black hole or produce a faint SN and fall back to a black hole \citep{langer12}. Still, our Galaxy contains the largest \emph{spatially resolved} population of WR stars, predicted to number ${\sim}1200$ \citep[][hereafter RC15]{rosslowe15a,rosslowe15b}.
The confirmed population has doubled over the previous decade, and currently stands at ${\sim}640$\footnote{\url{http://pacrowther.staff.shef.ac.uk/WRcat/}}. The Galactic disk therefore presents a rich hunting ground for further discoveries. Due to the large foreground interstellar dust extinction towards stars in the Galactic disk, near- and mid-infrared surveys are required. The dense, ionized stellar winds of WR stars facilitate two approaches to infrared surveys. First, their strong, broad emission lines are amenable to near-IR narrow-band imaging \citep{shara09, shara12, kanarek15}. Second, their dense winds exhibit a free-free excess leading to unusual infrared colours, which have been exploited in the near-IR \citep{homeier03, hadfield07} and mid-IR \citep{mauerhan11, messineo12, faherty14}. To date, the majority of spectroscopic follow-up has been carried out to an approximate depth of $K_{\rm S} \lesssim$11\,mag. However, the (coarse) model of the Galactic Wolf-Rayet distribution developed by RC15 suggests follow-up spectroscopy is needed to $K_{\rm S} \sim$13\,mag in order to sample the majority of Wolf-Rayet stars. Here we exploit prior photometric approaches to spectroscopically survey a region of the Galactic disk to fainter limits ($K_{\rm S}\sim$13\,mag) than hitherto. This has two interrelated goals: 1) the refinement and development of techniques that can be used to classify Wolf-Rayet stars using only infrared spectroscopy; 2) comprehensive searches for Wolf-Rayet stars in the Milky Way to allow more robust comparisons between their spatial locations and other massive stars. Longer term goals involve the second data release (DR2) of {\it Gaia}, which will provide parallaxes for hundreds of Wolf-Rayet stars, permitting their use as tracers of Galactic structure, and will be combined with upcoming large fibre-fed spectroscopic surveys, including WHT/WEAVE \citep{weave} and VISTA/4MOST \citep{4most}. This paper is structured as follows.
In Section~\ref{sec:cand}, we describe our photometric selection criteria and survey region, namely the Scutum-Crux spiral arm, the tangent to which lies at approximately $l\,{\simeq}\,310^\circ$ \citep{georgelin76}. Spectroscopic observations of Wolf-Rayet candidates, plus some previously known Galactic WR templates, are presented in Section~\ref{sec:data}, including a brief description of non-WR stars. Refined near-IR classification criteria for Wolf-Rayet stars are presented in Section~\ref{sec:class}. Results for newly identified WR stars are presented in Section~\ref{sec:new}, including distance estimates. We consider the spatial location of Wolf-Rayet and other massive stars in Section~\ref{sec:disc}, including discussion of prior inferences about the nature of Luminous Blue Variables. Finally, in Section~\ref{sec:conc} we reflect on the low success rate of the methodology employed, and share some motivating points for future IR surveys targeting WR stars. \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{candidates_c-c.eps} \caption{ Colour-colour diagrams from 2MASS and GLIMPSE-I, showing 191 candidates in our survey area (circles); we obtained NTT/SOFI HK spectra for 127 stars (black), leaving 64 unobserved (grey). Newly discovered WR stars are indicated: triangles for WN stars and squares for WC stars. The left panel shows K$_{\rm s}$--[8.0] versus J--K$_{\rm s}$, the central panel indicates [3.6]--[8.0] versus [3.6]--[4.5], while the right panel presents the reddening-free parameters Q1 versus Q2 from \citet{messineo12}. } \label{fig:c-c} \end{center} \end{figure*} \section{Candidate Selection}\label{sec:cand} Here we discuss our selection of sight lines towards the Scutum-Crux arm, plus our photometric criteria for the selection of candidate Wolf-Rayet stars.
Specifically, we focus on $l\,{=}\,\mbox{298--340}^\circ$, which \citet{Russeil05} have previously highlighted in determining Galactic structure, since this intersects three proposed spiral arm features -- Sagittarius-Carina, Scutum-Crux and Norma-Cygnus \citep[][their Fig.~5]{russeil03}. The majority of known Wolf-Rayet stars in this region lie at distances ${<}6$\,kpc, consistent with the nearby Sagittarius-Carina arm. However, assuming typical $M_{K_S}\,{=}\,{-}\,5$ and $A_{K_S}\,{=}\,2$ for Galactic WR stars, it is possible to probe heliocentric distances of $\mbox{6--15}\,$kpc by identifying WR stars in the magnitude range $K_S\,{=}\,\mbox{11--13}$\,mag. Therefore WR stars may provide a comparable and complementary tracer of Galactic structure to commonly used non-stellar objects, i.e., H\,{\sc ii} regions and atomic H gas. We confined our search to latitudes $|b|{<}0.5^\circ$, ensuring that at the furthest expected distances, candidate WR stars remain within a few scale heights of the Galactic plane (FWHM${=}80\,$pc for WRs, RC15).
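These limits follow from the extinction-corrected distance modulus; as a rough sketch (both $M_{K_S}$ and $A_{K_S}$ in fact vary from star to star),
\begin{gather*}
d = 10^{\,0.2\,(K_S - M_{K_S} - A_{K_S}) + 1}\;\mathrm{pc}, \\
K_S = 11 \Rightarrow d \simeq 10^{3.8}\,\mathrm{pc} \approx 6.3\,\mathrm{kpc}, \qquad
K_S = 13 \Rightarrow d \simeq 10^{4.2}\,\mathrm{pc} \approx 15.8\,\mathrm{kpc}.
\end{gather*}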
\begin{table*} \begin{center} \caption{Catalogue of newly identified Wolf-Rayet stars, including WR75-30 (1083--1765) from \citet{kanarek15}} \label{tab:obs} \begin{tabular}{ l@{\hspace{2mm}}l@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}} c@{\hspace{2mm}}r@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}} c@{\hspace{2mm}}c@{\hspace{2mm}}r@{\hspace{2mm}}l} \hline ID & WR & RA & Dec & $l$ & $b$ & K$_S$ & J--K$_S$ & H--K$_S$ & K$_S$--[3.6] & [3.6]--[4.5] & K$_S$--[8.0] & SOFI & Spectral\\ & Number & \multicolumn{2}{c}{-- J2000 --} & & & mag & mag & mag & mag & mag & mag & Grisms & Type \\ \hline E\#3 & WR46-18 & 12:08:52.49 &--62:50:54.9 & 298.0981 &--0.3769 & 10.47 & 2.87 & 1.20 & 0.80 & 0.46 & 1.98 & GB,GR &WC6-7\\ B\#13 & WR47-5 & 12:50:48.98 &--62:24:39.8 & 302.8599 & +0.4606 & 11.09 & 2.23 & 0.83 & 0.70 & 0.44 & 1.69 & GB,GR & WN6(h)\\ B\#37 & WR56-1 & 13:41:50.01 &--62:20:25.8 & 308.7434 &--0.0387 & 12.18 & 2.76 & 1.03 & 0.87 & 0.48 & 1.69 & GB,GR & WN5o\\ B\#51 & WR60-7 & 14:02:33.44 &--61:20:27.2 & 311.3533 & +0.3626 & 10.28 & 3.00 & 1.29 & 1.09 & 0.43 & 2.10 & GB,GR & WC7-8\\ B\#56 & WR60-8 & 14:12:15.19 &--61:42:45.2 & 312.3517 &--0.3277 & 11.76 & 3.14 & 1.22 & 1.15 & 0.51 & 2.33 & GB,GR & WN6o \\ B\#85 & WR64-2 & 15:01:14.05 &--58:49:07.4 & 319.0607 &--0.0724 & 11.89 & 2.57 & 0.99 & 1.07 & 0.51 & 2.13 & GB,GR & WN6o \\ B\#87 & WR64-3 & 15:02:46.14 &--58:27:06.5 & 319.4120 & +0.1535 & 10.18 & 2.21 & 0.86 & 0.88 & 0.47 & 1.84 & GB,GR & WN6o \\ B\#88 & WR64-4 & 15:04:11.15 &--58:27:21.5 & 319.5721 & +0.0601 & 9.10 & 2.25 & 0.91 & 0.76 & 0.46 & 1.69 & GB,GR & WN6o+OB\\ B\#91 & WR64-5 & 15:07:31.84 &--58:15:09.6 & 320.0540 & +0.0209 & 10.80 & 2.31 & 0.86 & 0.89 & 0.44 & 1.83 & GB,GR & WN6o\\ B\#93 & WR64-6 & 15:10:57.65 &--57:57:28.5 & 320.5939 & +0.0474 & 11.11 & 2.73 & 1.08 & 0.99 & 0.54 & 2.14 & GB,GR & WN6b \\ B\#105 & WR70-13 & 15:37:46.51 &--56:08:45.2 & 324.6325 &--0.4487 & 9.96 & 3.16 & 1.42 & 1.52 & 0.50 & 2.58 & GB,GR & 
WC8d\\ B\#107 & WR70-14 & 15:39:17.02 &--55:49:18.9 & 324.9945 &--0.3129 & 11.50 & 2.62 & 1.08 & 1.15 & 0.59 & 2.28 & GB,GR & WN4b \\ B\#123 & WR70-15 & 15:58:57.97 &--52:46:05.4 & 329.1414 & +0.2865 & 12.40 & 3.21 & 1.19 & 1.20 & 0.58 & 2.28 & GB,GR & WN5o \\ B\#132 & WR72-5 & 16:07:01.45 &--51:58:18.3 & 330.5909 & +0.0725 & 10.27 & 2.29 & 0.87 & 0.77 & 0.52 & 1.81 & GB,GR & WN6o \\ A\#11 & WR75-31 & 16:25:13.60 &--48:58:22.3 & 334.7557 & +0.2255 & 12.21 & 3.07 & 1.13 & 1.09 & 0.46 & 2.11 & GB,GR & WN7o \\ A\#13 & WR75-30 & 16:32:25.70 &--47:50:45.8 & 336.3959 & +0.1395 & 11.57 & 3.58 & 1.36 & 1.22 & 0.60 & 2.22 & GR & WN7o \\ B\#154 & WR76-11 & 16:40:12.92 &--46:08:54.0 & 338.5451 & +0.2996 & 11.98 & 3.07 & 1.17 & 1.10 & 0.45 & 2.01 & GR & WN7o \\ \hline \end{tabular} \end{center} \end{table*} We selected candidate Wolf-Rayet stars for which $298^{\circ} \leq l \leq 340^{\circ}$ and $|b| \leq 0.5^{\circ}$ from their near-IR \citep[2MASS;][]{skrutskie06} and mid-IR \citep[GLIMPSE-I;][]{benjamin03} photometry. We limited our survey to GLIMPSE-I point sources with a corresponding 2MASS detection\footnote{via the IPAC/NASA Infrared Science Archive: \url{http://irsa.ipac.caltech.edu}}, requiring a minimum 2MASS quality flag of `C' in the $K_{\rm S}$ filter, and rejected sources with one or more Source Quality Flags ${>}\,10^5$ in the GLIMPSE-I catalogue. We then used the TOPCAT\footnote{Available at: \url{http://www.starlink.ac.uk/topcat/}} tool to apply various cuts in colour and magnitude. \citet{mauerhan11} identified several regions of colour space (their grey shaded region in Fig.~1) favoured by Wolf-Rayet stars, which we have adapted as follows: \begin{gather*} 1.25(\mathrm{K}_\mathrm{S}-[8.0]) \leq (J-K_S) + 0.5 < 2.5(\mathrm{K}_\mathrm{S}-[8.0]), \\ \mathrm{K}_\mathrm{S}-[8.0] \geq 1.3, \\ 0.8 \leq ([3.6]-[8.0]) \leq 1.6, \\ 0.3 \leq ([3.6]-[4.5]) \leq 0.75.
\end{gather*} In addition, \citet{messineo12} introduced additional reddening-free parameters $Q1 = (J-H) - 1.8 (H-K_{\rm S})$ and $Q2 = (J-K_{\rm S}) - 2.69 (K_{\rm S} - [8.0])$, which we also utilise: \begin{gather*} (11.25\mathrm{Q}1-2.38)<\mathrm{Q}2<-1.0. \end{gather*} It is necessary to emphasise that not all WR stars occupy this parameter space, although there is no bias towards either WN or WC subtypes. Still, some dusty WC stars are offset from the majority of WR stars in the J--K$_S$ vs $\mathrm{K}_\mathrm{S}-[8.0]$ colour-colour diagram \citep[][their Fig.~1]{mauerhan11}, so our survey criteria are potentially biased against such stars. Approximately 250 sources satisfied these criteria. We subsequently cross-checked the co-ordinates of these with the SIMBAD\footnote{\url{http://simbad.u-strasbg.fr/simbad/}} database, to find any with previous identifications. Encouragingly, 14\% of these were known WR stars (23\% of known WRs in the survey area), 4\% had non-WR classifications (mostly Be stars or young stellar objects), leaving $\sim$200 candidates with no previous spectral classification. Colour-colour diagrams for 191 candidates involving $\mathrm{K}_\mathrm{S}-[8.0]$ vs $(\mathrm{J}-\mathrm{K}_\mathrm{S}$) and $[3.6]-[8.0]$ vs $[3.6]-[4.5]$ are presented in Figure~\ref{fig:c-c} together with reddening-free parameters Q1 and Q2 from \citet{messineo12}.
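Taken together, the adapted \citet{mauerhan11} colour cuts and the \citet{messineo12} reddening-free cuts amount to a boolean filter on six photometric bands. A minimal sketch follows (illustrative only: the function name and plain-float interface are our own, and in practice the cuts were applied within TOPCAT):

```python
def is_wr_candidate(j, h, ks, m36, m45, m80):
    """Apply the adapted Mauerhan et al. (2011) colour cuts and the
    Messineo et al. (2012) reddening-free cuts, given 2MASS J, H, Ks
    and GLIMPSE [3.6], [4.5], [8.0] magnitudes."""
    k8 = ks - m80
    # Adapted Mauerhan et al. (2011) colour-space cuts
    if not (1.25 * k8 <= (j - ks) + 0.5 < 2.5 * k8):
        return False
    if not k8 >= 1.3:
        return False
    if not 0.8 <= (m36 - m80) <= 1.6:
        return False
    if not 0.3 <= (m36 - m45) <= 0.75:
        return False
    # Messineo et al. (2012) reddening-free parameters
    q1 = (j - h) - 1.8 * (h - ks)
    q2 = (j - ks) - 2.69 * k8
    return (11.25 * q1 - 2.38) < q2 < -1.0

# Example: magnitudes reconstructed from the colours of WR46-18 in Table 1
print(is_wr_candidate(13.34, 11.67, 10.47, 9.67, 9.21, 8.49))  # True
```

Any source failing a single cut is rejected, so the selection is the intersection of all five colour regions.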
\begin{table} \begin{center} \caption{Spectroscopic datasets exploited for the updated near-IR classification scheme} \label{tab:log} \begin{tabular}{ l@{\hspace{-2mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}} l@{\hspace{1mm}}l@{\hspace{2mm}}l@{\hspace{2mm}}l} \hline Tel/Inst & R & \multicolumn{4}{l}{Spect} & Epoch & Ref & Dataset/\\ & & \multicolumn{4}{l}{Range} & & & Programme\\ \hline CTIO 4m/IRS & 3000 & Y & & & K & Mar 1996 & a & C1 \\ ESO 1.5m/B\&C& & Y & & & & Feb 1982 & b & E1 \\ INT/IDS & 3000 & Y & & & & Aug 1990 & c & I1 \\ Lick 3m/UCLA & 525 & & & & K & 1994-1995 & d & L1 \\ MSO/CGS &600--850 & Y & J & H & K & & e & M1 \\ NTT/SOFI & 1000 & Y& J & H & K & Sep 1999 & f & N1/63.H-0683 \\ NTT/SOFI & 600--1300& Y& J & & K & May 2002 & g & N2/69.B-0030 \\ NTT/SOFI &600--1000 & Y& J &H & K & Nov 2003 & & N3/71.D-0272 \\ NTT/SOFI & 600 & Y& J & & & Nov 2004 & & N4/74.D-0696 \\ NTT/SOFI & 600 & Y& J & H & K & Jun 2005 & h & N5/75.D-0469 \\ NTT/SOFI & 600 & Y& J & H & K & Mar 2015 & i & N6/94.D-0839 \\ OHP/CARELEC & & Y& & & & Sep 1989 & j & O1 \\ UKIRT/CGS2 &400--600 &Y & J & H & K & & k & U1 \\ UKIRT/CGS4 &1000--1400& & & H& K & Jan 1994 & l & U2 \\ UKIRT/CGS4 &600--850 & Y& J & H & K & Aug 1994 & m & U3 \\ UKIRT/CGS4 & 1500 & Y& J & & K & Sep 1995 & & U4 \\ UKIRT/CGS4 & 3000 & & & & K & May 1996 & a & U5 \\ UKIRT/CGS4 & 1500 & Y& J & & K & Apr 1997 & & U6 \\ VLT/XSHOOTER & 5000 & Y&J& H & K & 2013-2014 & n & V1 \\ \hline \multicolumn{9}{l}{ \begin{minipage}{\columnwidth} a: \citep{bohannan99}, b: \citep{vreux83}; c: \citep{howarth92}, d: \citep{figer97}, e: \citep{hillier83}, f: \citep{crowther00}, g: \citep{homeier03}; h: \citep{crowther06}, i: This study; j: \citep{vreux90}, k: \citep{eenens91}, l: \cite{crowther95c}, m: \citep{crowther96}: n: \citep{tramper15} \end{minipage} } \end{tabular} \end{center} \end{table} \section{NTT/SOFI spectroscopy of Wolf-Rayet candidates}\label{sec:data} We obtained near-IR spectroscopy of 127 WR 
candidates between 29--31 March 2015 (program ID 094.D-0839) using the Son-of-Isaac (SOFI) spectrograph at the New Technology Telescope (NTT). These represent 66\% of the IR selected candidates presented in Fig.~\ref{fig:c-c}. Candidates were observed with the red grism (GR) covering the 1.53--2.52$\mu$m spectral region, a dispersion of 10.2\AA/pix, and a slit width of 1 arcsec, providing a spectral resolution of $R\sim$600. All sources were observed using a standard ABBA sequence, including a small random offset in the A and B positions between exposures. Before extracting 1D spectra, we subtracted a median dark frame from each individual frame, then subtracted adjacent AB pairs from one another. This yielded four dark-corrected spectra for each source, free from sky lines. We extracted these four spectra for each object using IRAF. Wavelength calibration was performed using strong and isolated sky lines at known wavelengths \citep{rousselot00} present in each raw frame, after which all spectra for each object were co-added. Throughout each night, we periodically observed bright Vega-type telluric standard stars, at similar airmasses to the WR candidates. The removal of telluric spectral features was achieved using \texttt{telluric} in IRAF. We also used these telluric standards, together with Kurucz models of the same spectral types, to perform relative flux calibration, which was subsequently adjusted to match 2MASS photometry. Of the 127 candidates, 17 stars were identified as WR stars. Of these, one candidate was subsequently matched to the recently discovered WN6 star 1083-1765 (= WR75-30) from \citet{kanarek15}, such that 16 stars are newly identified as WR stars in this study. Previous surveys of Wolf-Rayet stars from IR photometric criteria have achieved similar efficiencies \citep{mauerhan11, faherty14}. We briefly discuss the nature of the non-WR stars in Sect.~\ref{sec:reject} and discuss the newly identified WR stars in Sect.~\ref{sec:class}.
New WR stars are indicated in Fig.~\ref{fig:c-c}, with a subtype dependence apparent in the reddening-free Q1 vs Q2 diagram. Table~\ref{tab:obs} provides basic observational properties for the new Wolf-Rayet stars, for which we obtained NTT/SOFI spectroscopy, while Table~B1 (available online) provides a list of all candidates for which we obtained spectroscopy, together with a brief note describing the nature of each source. In addition to the candidate WR stars, we have also obtained NTT/SOFI spectroscopy of 14 Wolf-Rayet stars for which optical classifications have been undertaken, in order to refine near-IR based classification criteria (see Sect.~\ref{sec:class}). Finally, we also obtained blue grism (GB) spectroscopy with SOFI for the majority of newly identified WR stars. The GB observations cover the spectral region 0.95--1.64$\mu$m, a dispersion of 7.0\AA/pix, and an identical slit width of 1 arcsec, again providing $R\sim$600. Data reduction was undertaken in an identical manner to the GR datasets. \subsection{Non Wolf-Rayet stars}\label{sec:reject} A significant subset of the 110 candidates that were not confirmed to be WR stars exhibited a hydrogen emission line spectrum, with Br$\gamma$ observed in 60 (55\%) cases, plus often higher Brackett series (Br10, 11), and He\,{\sc i} 2.058$\mu$m emission present in a quarter of instances. These sources are likely to be massive young stellar objects (mYSOs) or Herbig AeBe stars \citep[see e.g.][]{porter98, cooper13}. Br$\gamma$ emission equivalent widths are typically 10--30\AA, with He\,{\sc i}/Br$\gamma$ ratios of 0.3--1. The majority of Br$\gamma$ emission lines are unresolved (FWHM$\sim$30\AA) although several stars (e.g. B\#127, 147, 149) possess broad emission (FWHM$\sim$50-60\AA). Unusually, B\#66 exhibits strong He\,{\sc i} 2.058$\mu$m emission, without significant Br$\gamma$ emission, warranting follow up observations. 
Mid-IR imaging has revealed circumstellar ring nebulae around many evolved stars (e.g., \citealt{wachter10,toala15}). Such nebulae appear prominently in Spitzer 8.0\micron\ images, owing to thermal emission from dust swept up by stellar winds. Mindful of this, we inspected $5^\prime\times5^\prime$ Spitzer 8.0\micron\ images centred on all candidates. One of the Br$\gamma$ emission line sources, A\#9 (SSTGLMC G330.7900-00.4539), is the central star of a striking oval mid-IR ring nebula, S65, which was identified by \citet{churchwell06} and studied by \citet{simpson12}. Strong absorption lines in the Brackett series are observed in one candidate, B\#3, indicating an A- or late-B type star, with Br$\gamma$ absorption observed in another object, B\#54, albeit without other prominent features suggesting an early type star in this instance. Four candidates -- B\#21, B\#100, B\#122 and B\#153 -- exhibit prominent CO 2.3$\mu$m bandhead absorption features, although none of these involve Br$\gamma$ emission line sources, indicating a late-type star origin. The remaining 44 candidates (40\% of the non WR stars) either have no prominent absorption or emission features, or the S/N achieved was insufficient to identify their nature. \section{Near-IR classification of Wolf-Rayet stars}\label{sec:class} The switch from optical to near-IR spectroscopy for the overwhelming majority of new Galactic Wolf-Rayet stars requires a reassessment of spectral classification criteria. \citet{vreux90} provided a classification scheme based upon Y-band observations of northern WR stars, while \citet{eenens91} devised a near-IR scheme for WC stars from 1--5$\mu$m spectroscopy. More recently \citet[][hereafter C06]{crowther06} provided near-IR classification diagnostics for WN and WC stars, based on equivalent width ratios. Qualitatively, early-type WC4--6 stars possess broader emission lines than later WC7--9 subtypes, although exceptions do exist \citep{eenens94}.
An updated quantitative near-IR classification of Wolf-Rayet stars is made feasible by access to a greatly expanded sample of Galactic and Magellanic Cloud WR stars for which optical classifications have been made, primarily \citet{smith96} for WN stars and \citet{smith90} for WC stars. The datasets utilised were drawn from various sources, primarily NTT/SOFI and UKIRT/CGS4, as summarised in Table~\ref{tab:log}. We have also inspected intermediate-resolution IRTF/SpeX spectroscopy of northern Galactic WR stars, provided by W.D.~Vacca (priv. comm.), although we focus our criteria on moderate resolution (R = 600 -- 1000), modest signal-to-noise spectroscopy in the 0.95 -- 2.5$\mu$m near infrared. To date, no criteria for the identification of WN/C or WO stars from near-IR spectroscopy have been considered, nor has an attempt been made to distinguish between H-rich and H-deficient WN stars, although C06 did separate broad-lined WN stars (FWHM He\,{\sc ii} 1.01\micron\ $\geq$65\AA) from narrow-lined counterparts. In our revised near-IR classification scheme we attempt to utilise pairs of lines from adjacent ionization stages of helium for WN stars, and adjacent ionization stages of carbon for WC stars. In some instances nitrogen lines are required for WN stars, in common with C06, plus ratios of carbon to helium lines are utilised for WN/C and WC stars, which will also depend upon their relative abundances. We omit from our discussion the near-IR classification of transition Of/WN stars, which has been considered by \citet{bohannan99} and \citet{crowther11}. WN stars with intrinsic absorption features (WNh+abs, WNha) also offer specific challenges which will need to be considered separately. A detailed description of the updated classification scheme is provided in Appendix A (available online), while we present a summary of Y, J, H and K-band classification diagnostics for WN, WN/C, WC and WO subtypes in Table~\ref{tab:class}.
In Figs~A1--A2 (available online) we present YJ-band and HK-band spectroscopy of template (optically classified) WN, WN/C, WC and WO stars, with line measurements provided in Tables~A1--A3, also online. Overall, the use of solely IR diagnostics provides satisfactory classifications, although confidence in the resulting spectral types requires multi-band spectral coverage, a minimum spectral resolution of $R\sim$500 and moderate signal-to-noise. In many instances, observations in a single band prevent a refined classification. For example, WC4--7 stars may not be distinguished using solely K-band spectroscopy, while it is not possible to differentiate between broad-lined WN4--7 stars on the basis of low S/N spectroscopy in the Y-band. Still, reliable Wolf-Rayet subtypes can be obtained from complete 1--2.5$\mu$m spectroscopy, with the exception of broad-lined WN4--6 stars and WC5--6 stars. In addition, WO stars have a very distinctive near-IR spectrum, and WN/C stars possess characteristics in each of the Y, J, H and K-bands which distinguish them from normal WN stars. Furthermore, the presence of hydrogen in WN stars can be identified in most subtypes, although very late subtypes are challenging since a low He\,{\sc ii} 1.163/P$\beta$ or He\,{\sc ii} 2.189/Br$\gamma$ ratio may indicate either a high hydrogen content or a low ionization atmosphere. \subsection{Robustness of near-IR classification} In order to assess the robustness of the new scheme, we reclassify several WN and WC stars which have been discovered and classified from red optical spectroscopy. We utilise NTT/SOFI spectroscopy of four WN stars, WR62a, WR68a, WR93a from \citet{homeier03} (dataset N2 in Table~\ref{tab:log}), plus WR75-30 from our own observations (dataset N5 in Table~\ref{tab:log}), together with three WC stars, WR107a from \citet{homeier03} plus WR75aa and WR75c from our own observations. Near-IR spectra are presented in Figs~\ref{fig:new_wr_blue}--\ref{fig:new_wr_red}.
Individual line measurements are provided in Table~\ref{newbies_wn} and \ref{newbies_wc} for WN and WC stars, respectively, while line ratios are presented in Table~\ref{tab:new_wrstars}. Measurements have employed Gaussian fits, using the \texttt{elf} suite of commands within the Starlink \texttt{DIPSO} package\footnote{Available at: \url{http://starlink.eao.hawaii.edu/starlink}}. \subsubsection{WN stars} WR62a was classified as WN5o by \citet[][their source \#11]{shara99}, and we support its classification as a narrow-lined WN star. Consequently, the primary diagnostics are the He\,{\sc i} 1.08/He\,{\sc ii} 1.01$\mu$m ratio and K-band morphology. The former indicates a WN6 subtype, while He\,{\sc i} + N\,{\sc iii} 2.11 $>$ N\,{\sc v} 2.10, favours a WN5--6 subtype. The P$\beta$/He\,{\sc ii} 1.16$\mu$m ratio indicates WR62a is hydrogen-free, while the Br$\gamma$/He\,{\sc ii} 2.19$\mu$m suggests a borderline o/(h) classification, so overall we favour WN6o for WR62a. The same arguments and ratios apply to WR68a for which \citet[][their source \#13]{shara99} assigned WN6o. We support this classification owing to its morphological similarity to WR62a. WR93a (Th~3--28), was originally classified as WN2.5--3 by \citet{acker90} and revised to WN6 by \citet{miszalski13} from optical spectroscopy. This is also a narrow-lined WN star so again we focus on its He\,{\sc i} 1.08/He\,{\sc ii} 1.01$\mu$m ratio and K-band morphology. Both favour a WN6 subtype, with a significant hydrogen content from our multiple diagnostics (darker shaded regions in Fig.~A4, available online), so we adopt WN6h for WR93a. \citet[][their source 1083-1765]{kanarek15} originally classified WR75-30 as a WN6 star from near-IR spectroscopy. 
The He\,{\sc i} 1.70/He\,{\sc ii} 1.69$\mu$m ratio favours a WN7 subtype, as does (He\,{\sc i} + N\,{\sc iii} 2.11)/He\,{\sc i} 2.19$\mu$m $\sim$ 1, while the Br$\gamma$/He\,{\sc ii} 2.19$\mu$m ratio lies in the hydrogen-free region of Fig.~A4, so we favour WN7o for this star. \subsubsection{WC stars} WR75aa and WR75c were identified as WC9 stars by \citet{hopewell05} from red optical spectroscopy. All our primary near-IR diagnostics support this assessment, as do the secondary criteria involving helium for WR75c. WR75aa has a borderline WC8--9 classification from the He\,{\sc i-ii} 1.7/C\,{\sc iv} 1.74$\mu$m ratio, but overall both stars are unambiguous WC9 stars. Finally, WR107a \citep[\#18 from][]{shara99} was originally classified as a WC6 star from red optical spectroscopy. Our primary criteria indicate the following for WR107a: WC6$\pm$1 from both C\,{\sc iii} 1.20/C\,{\sc iv} 1.19 and C\,{\sc iii} 2.11/C\,{\sc iv} 2.07, and WC5--8 from C\,{\sc ii} 0.99/C\,{\sc iii} 0.97. Our secondary criteria indicate WC5 from He\,{\sc i} 1.08/He\,{\sc ii} 1.01, and WC7 from C\,{\sc iii} 0.97/He\,{\sc ii} 1.01$\mu$m (H-band spectra are unavailable), so although WC6 is plausible we provide a more cautious WC5--7 classification. Indeed, the primary optical diagnostic ratio (C\,{\sc iii} 5696/C\,{\sc iv} 5808) also favoured WC6--7 according to \citet{shara99}. In general, the K-band is preferred to shorter wavelengths for the classification of highly reddened WR stars, but the K-band spectral features of dusty WC stars are often masked by hot dust emission. Extremely high S/N is required to identify K-band spectral features of the Quintuplet Q stars. By way of example, \citet{liermann09} assign a WC8/9d+OB subtype to Q3 (WR102ha) from K-band spectroscopy, whereas WC features are relatively prominent in deep H- and J-band spectroscopy.
We confirm a WC9d subtype for Q3 on the basis of high S/N Gemini spectroscopy presented by \citet{najarro15}, owing to C\,{\sc iii} 1.20/C\,{\sc iv} 1.19$\gg$1, C\,{\sc ii} 1.78/C\,{\sc iv} 1.74$>$1 and C\,{\sc iii} 2.11/C\,{\sc iv} 2.07$\gg$1. \begin{landscape} \begin{table} \begin{center} \caption{Classification of Wolf-Rayet stars based on emission equivalent width ratios of diagnostics in the Y,J,H,K spectral regions, updated from C06, primary diagnostics in bold font. The majority of diagnostics are blended with other lines in broad-lined Wolf-Rayet stars, including O\,{\sc vi} 1.075 with C\,{\sc iv} 1.054 and He\,{\sc ii} 1.093, He\,{\sc i} 1.083 with P$\gamma$, C\,{\sc iv} 1.191 with C\,{\sc iii} 1.198, He\,{\sc ii} 1.692 with He\,{\sc i} 1.700, C\,{\sc ii} 1.785 with C\,{\sc iv} 1.801, C\,{\sc iv} 2.070--2.080 with C\,{\sc iii} 2.115, and He\,{\sc ii} 2.189 with Br$\gamma$. For near-IR classification of transition Of/WN stars, see \citet{bohannan99} and/or \citet{crowther11}.} \label{tab:class} \begin{tabular}{r@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}l@{\hspace{2mm}}l} \hline Subtype & FWHM & Diagnostic & Diagnostic & Diagnostic & Diagnostic & \multicolumn{2}{l}{Notes} & Templates \\ & km\,s$^{-1}$ & Y-band & J-band & H-band & K-band & & & \\ \hline \multicolumn{9}{c}{\bf Narrow-lined WN stars (FWHM(He\,{\sc ii} 1.01$\mu$m) $\lesssim$ 1900 km\,s$^{-1}$ and $\log W_{\lambda}$(He\,{\sc ii} 1.01$\mu$m/\AA) $\lesssim$ 2.5)} \\ WN & He\,{\sc ii} & log $W_{\lambda}$(He\,{\sc i} 1.08/ & log $W_{\lambda}$(P$\beta$/ & $\log W_{\lambda}$(He\,{\sc i} 1.70/ & $\log W_{\lambda}$(Br$\gamma$/ & \multicolumn{2}{l}{Notes} & Templates\\ Subtype & 1.012$\mu$m & He\,{\sc ii} 1.01) & He\,{\sc ii} 1.16) & He\,{\sc ii} 1.69) & He\,{\sc ii} 2.19) \\ 9 & 300 & {\bf $\geq$1.4} & $\sim$1.5 (h) & {\bf $\geq$1.4} & {\bf $\sim$1.5 (h)} & \multicolumn{2}{l}{\bf He\,{\sc i}+N\,{\sc iii} 2.11 $\gg$ He\,{\sc ii} 2.19 } & 
WR105, BAT76\\ 8 & 700 & {\bf 1$\pm$0.4} & $\leq$0.3 (o); $\geq$0.1 (h) & {\bf 0.9$\pm$0.5} & $\leq$0.5 (o); $\geq$0.4 (h) & \multicolumn{2}{l}{\bf He\,{\sc i}+N\,{\sc iii} 2.11 $\gg$ He\,{\sc ii} 2.19} & WR40, WR123 \\ 7 & 800 & {\bf 0.4$\pm$0.2} & $\leq$0 (o); $\geq$0 (h) & {\bf 0.2$\pm$0.2} & $\leq$0.1 (o); $\geq$0.1 (h) & \multicolumn{2}{l}{\bf He\,{\sc i}+N\,{\sc iii} 2.11 $\sim$ He\,{\sc ii} 2.19} & WR78, WR120 \\ 6 & 1200 & {\bf 0.0$\pm$0.2} & $\leq$--0.1 (o); $\geq$--0.1 (h) & $\sim$ --0.1 & $\leq$--0.1 (o); $\geq$--0.1 (h) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $>$ He\,{\sc i}+N\,{\sc iii} 2.11 $\gg$ N\,{\sc v} 2.10} & WR115 \\ 5 & 1400 & {\bf --0.3$\pm$0.1} & $\leq$--0.2 (o); $\geq$--0.2 (h) & $\sim$ --0.3 & $\leq$--0.2 (o); $\geq$--0.2 (h) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $\gg$ He\,{\sc i}+N\,{\sc iii} 2.11 $\geq$ N\,{\sc v} 2.10} & BAT122 \\ 4 & 1500 & {\bf --0.7$\pm$0.3} & $\leq$--0.3 (o); $\geq$--0.3 (h) & $\sim$ --0.6 & $\leq$--0.3 (o); $\geq$--0.3 (h) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $\gg$ N\,{\sc v} 2.10 $>$ He\,{\sc i}+N\,{\sc iii} 2.11} & WR128, BAT75 \\ 3 & 1600 & {\bf $\leq$ --1.0} & $\leq$--0.4 (o); $\geq$--0.4 (h) & $\leq$--0.8 & $\leq$--0.4 (o); $\geq$--0.4 (h) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $\gg$ N\,{\sc v} 2.10 $\gg$ He\,{\sc i}+N\,{\sc iii} 2.11} & WR46, WR152 \\ \hline \multicolumn{9}{c}{\bf Broad-lined WNb stars (FWHM(He\,{\sc ii} 1.01$\mu$m) $\gtrsim$ 1900 km\,s$^{-1}$ and $\log W_{\lambda}$(He\,{\sc ii} 1.01$\mu$m/\AA) $\gtrsim$ 2.5)} \\ WN & He\,{\sc ii} & log $W_{\lambda}$(He\,{\sc i} 1.08/ & log $W_{\lambda}$(P$\beta$/ & $\log W_{\lambda}$(He\,{\sc i} 1.70/ & $\log W_{\lambda}$(Br$\gamma$/ & \multicolumn{2}{l}{Notes} & Templates\\ Subtype & 1.012$\mu$m & He\,{\sc ii} 1.01) & He\,{\sc ii} 1.16) & He\,{\sc ii} 1.69) & He\,{\sc ii} 2.19) \\ 7 & 3300 & {\bf $\geq$0.2} & --0.3 (o) & +0.4: & --0.2 (o) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $>$ He\,{\sc i}+N\,{\sc iii} 2.11} & WR77sc \\ 6 & 2600 &
{\bf +0$\pm$0.2} & $\leq$--0.3 (o); $\geq$--0.3 (h) & --0.3: & $\leq$--0.2 (o); $\geq$--0.2 (h) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $\gg$ He\,{\sc i}+N\,{\sc iii} 2.11 $>$ N\,{\sc v} 2.10} & WR75, WR134 \\ 4 & 2400 & {\bf --0.5$\pm$0.5} & $\leq$--0.3 (o) & --0.5: & $\leq$--0.2 (o); $\geq$--0.2 (h) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $\gg$ He\,{\sc i}+N\,{\sc iii} 2.11 $\sim$ N\,{\sc v} 2.10} & WR6, WR18 \\ 2--3 & 2550 & {\bf $\leq$--1} & $\leq$--0.5 (o); $\geq$--0.5 (h) & $\leq$--1.0 & $\leq$--0.4 (o); $\geq$--0.4 (h) & \multicolumn{2}{l}{\bf He\,{\sc ii} 2.19 $\gg$ N\,{\sc v} 2.10 $\gg$ He\,{\sc i}+N\,{\sc iii} 2.11} & WR2, BAT51 \\ \hline \multicolumn{9}{c}{\bf WN/C stars} \\ WN/C & & $\log W_{\lambda}$(C\,{\sc iii} 0.97/ & $\log W_{\lambda}$(C\,{\sc iv} 1.19/ & $\log W_{\lambda}$(C\,{\sc iv} 1.74/ & $\log W_{\lambda}$(C\,{\sc iv} 2.07/ & $\log W_{\lambda}$(C\,{\sc iv} 2.43/ & Notes & Templates\\ Subtype& & He\,{\sc ii} 1.01) & He\,{\sc ii} 1.16) & He\,{\sc i-ii} 1.7) & He\,{\sc i}+C\,{\sc iii} 2.11) & He\,{\sc ii} 2.34) \\ All & & --0.5 & {\bf $\geq$--0.7} & {\bf $\geq$--0.7} & {\bf $\geq$--0.3} & $\geq$0 & & WR8, WR26\\ \hline \multicolumn{9}{c}{\bf WC stars} \\ WC & He\,{\sc ii} & $\log W_{\lambda}$(He\,{\sc i} 1.08/ & $\log W_{\lambda}$(C\,{\sc iii} 1.20/ & $\log W_{\lambda}$(He\,{\sc i-ii} 1.7/ & $\log W_{\lambda}$(C\,{\sc iii} 2.11/ & $\log W_{\lambda}$(C\,{\sc ii} 0.99/ & Notes & Templates \\ Subtype & 1.190$\mu$m &He\,{\sc ii} 1.01)& C\,{\sc iv} 1.19) & C\,{\sc iv} 1.74) & C\,{\sc iv} 2.07) &C\,{\sc iii} 0.97) & \\ 9 & 850 & +1.1 & {\bf +0.6$^{+0.2}_{-0.5}$} & 0.3$^{+0.1}_{-0.5}$ & {\bf +0.1$\pm$0.3} & {\bf --1.1$\pm$0.2} & {\bf C\,{\sc ii} 1.78 $>$ C\,{\sc iv} 1.74} & WR92, WR103\\ 8 & 1800 & +0.4$\pm$0.2 & {\bf --0.2$^{+0.3}_{-0.2}$} & --0.4$\pm$0.2 & {\bf --0.35$\pm$0.1} & {\bf --1.3$\pm$0.1} & & WR135\\ 7 & 1900 & +0.2$\pm$0.2 & {\bf --0.5$\pm$0.1} & --0.7$\pm$0.1 & {\bf --0.6$^{+0.2}_{-0.15}$ } & {\bf --1.35$\pm$0.15} & & WR90\\ 6 & 
2900 & +0.2 & {\bf --0.7$\pm$0.2 } & --0.8$\pm$0.1& {\bf --0.7$\pm$0.1 } & {\bf --1.6$\pm$0.1} & & WR15, WR23\\ 5 & 2300 & +0.0 & {\bf --0.6$\pm$0.1} & --1.0$\pm$0.2 & {\bf --0.7$\pm$0.1}& {\bf --1.5$\pm$0.1} & & WR111\\ 4 & 3300 & --0.1 & {\bf --0.6$\pm$0.1} & --1.2$\pm$0.2 & {\bf $<$ --0.7} & {\bf $<$--1.5 } & {\bf C\,{\sc ii} 0.99 absent} & WR143,BAT11 \\ \hline \multicolumn{9}{c}{\bf WO stars} \\ WO & C\,{\sc iv} & $\log W_{\lambda}$(O\,{\sc vi} 1.07/ & $\log W_{\lambda}$(He\,{\sc ii} 1.16/&$\log W_{\lambda}$(O\,{\sc vi} 1.46+He\,{\sc ii} 1.47/ & $\log W_{\lambda}$(C\,{\sc iv-iii} 2.07-2.11/ & $\log W_{\lambda}$(C\,{\sc iii} 0.97/ & Notes & Templates\\ Subtype & 1.74$\mu$m & C\,{\sc iv} 1.19) & C\,{\sc iv} 1.19)& C\,{\sc iv} 1.74) & C\,{\sc iv} 2.43 + O\,{\sc vi} 2.46) & He\,{\sc ii} 1.01) & \\ 4 & 3600 & --0.8 & --0.7 & --0.3 & 0.2 & --0.7 & {\bf O\,{\sc vi} 1.075, 1.46, 2.46} & LH41-1042 \\ 3 & 4200 & --0.8 & --0.7 & --0.5 &$\leq$0.0 & C\,{\sc iii} weak & {\bf O\,{\sc vi} 1.075, 1.46, 2.46} & WR93b \\ 2 & 6300 & --0.8 & $\leq$--1 & --0.5 & --1.0 & {\bf C\,{\sc iii} absent} & {\bf O\,{\sc vi} 1.075, 1.46, 2.46} & WR102 \\ \hline \end{tabular} \end{center} \end{table} \end{landscape} \begin{landscape} \begin{table} \begin{center} \begin{footnotesize} \caption{Near-IR equivalent width and FWHM measurements for newly identified Galactic WN stars plus previously discovered WN stars lacking optical spectroscopy. Equivalent widths (in \AA) are generally robust to $\pm$0.05 dex, except for weak lines $\pm$0.1 dex, while measured FWHM (in km\,s$^{-1}$) are generally reliable to $\pm$50 km\,s$^{-1}$ (approximate values are indicated with colons).
The key to the spectroscopic datasets utilised is provided in Table~\ref{tab:log}.}\label{newbies_wn} \begin{tabular}{l@{\hspace{1mm}}l@{\hspace{1mm}}c@{\hspace{1mm}} c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}} c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}} c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}l} \hline WR & WN & \multicolumn{2}{c}{He\,{\sc ii} 1.01} & He\,{\sc i} 1.08 & P$\gamma$ & N\,{\sc v} 1.11 & He\,{\sc ii} 1.16 & P$\beta$ & He\,{\sc ii} 1.48 & N\,{\sc v} 1.55 & He\,{\sc ii} 1.69 & He\,{\sc i} 1.70 & He\,{\sc i} 2.06 & N\,{\sc iii-v} 2.11 & Br$\gamma$ & \multicolumn{2}{c}{He\,{\sc ii} 2.19} & Note & Data \\ & SpType & FWHM & $\log W_{\lambda}$ & $\log W_{\lambda}$& $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$& $\log W_{\lambda}$ & $\log W_{\lambda}$ & FWHM & $\log W_{\lambda}$ \\ \hline WR47-5&WN6(h)& 1000 & 1.82 &1.90 &1.1 & & 1.76 & 1.59 & 1.54 & & 0.95 &$<$0.4& & 1.3: &1.58 &1630 & 1.59& 2.11$>$2.10 & N6 \\ WR56-1&WN5o& 1220 & 2.09 & 1.78 & & & 1.73 & 1.32 & 1.50 & & 1.18 & 0.9: & & 1.45 &1.30 & 1200& 1.56& 2.11$>$2.10 & N6 \\ WR60-8&WN6o& 1510 &2.28 & 2.27 &1.31&0.7: & 2.10 & 1.72 & 1.82 & & 1.53 & 1.26 & & 1.46 & 1.63& 1670& 1.83& 2.11$>$2.10 & N6 \\ WR62a&WN6o & 1670 & 1.82 & 1.73 &0.7:& & 1.55 & 1.09 & 1.30 & & & & & 1.18 & 1.28& 1650& 1.48& 2.11$>$2.10 & N2 \\ WR64-2&WN6o& 1480 & 2.29 & 2.26 &1.33& & 2.08 & 1.67 & 1.83 & & 1.50 & 1.18 & & 1.54 & 1.64&1640 & 1.97&2.11$>$2.10 & N6 \\ WR64-3&WN6o & 1340 & 2.18 & 2.20 &1.40& & 2.02 & 1.80 & 1.75 & & 1.40 & 1.08 & & 1.45 & 1.60&1710 &1.79 &2.11$>$2.10 & N6 \\ WR64-4&WN6o+& 1930 & 1.87 & 2.00 &1.04& & 1.71 & 1.43 & 1.48 & 0.5: & 1.20 & 0.94 & & 1.08 &1.20 & 2220&1.56 &2.11$\gg$2.10 & N6 \\ WR64-5&WN6o& 1680 & 2.09 & 2.25 &1.18&0.8: & 1.91 & 
1.62 & 1.69 & 0.6: & 1.30 & 1.20 & & 1.48 & 1.54&1670 & 1.79&2.11$\gg$2.10 & N6 \\ WR64-6&WN6b & 2130 & 2.39 & 2.54 &1.27& & 2.17 & 1.84 & 1.91 & 0.7 & 1.56 &1.40 & & 1.55 & 1.61&2300 & 1.99&2.11$\gg$2.10 & N6 \\ WR68a&WN6o& 1680 & 1.92 & 1.81 &0.8 &0.5: & 1.68 & 1.31 & 1.46 & 0.6 & & & & 1.30 & 1.41&1570 & 1.54&2.11$>$2.10 & N2 \\ WR70-14&WN4b& 2760 & 2.49 & 2.33 & & & 2.38 & 1.95 & 2.11 & & 1.65 &1.3: & & 1.69 & 1.89&2800 & 2.23&2.11$\sim$2.10 & N6 \\ WR70-15&WN5o& 1450:& 2.3: & 1.85 &1.2:& & 2.17 & 1.67 & 1.90 & 0.6 & 1.58 &$<$0.5& & 1.38 & 1.43&1740 &1.98 &2.11$>$2.10 & N6 \\ WR72-5&WN6o& 1370 & 2.23 & 2.25 &1.2 & & 2.07 & 1.69 & 1.82 & 0.5 & 1.52 & 1.26 & 0.6:& 1.63 &1.66 &1500 &1.92 &2.11$\gg$2.10 & N6 \\ WR75-31&WN7o & 1250:& 1.7: & 2.3: & & & 1.95 & 1.78 & 1.67 & & 1.23 &1.62 &1.1: & 1.72 &1.71 &1370 & 1.72&2.11$\sim$2.19 & N6 \\ WR75-30&WN7o & & & & & & & & & & 1.34& 1.59 & 0.8:& 1.84 & 1.79& 1880& 1.82&2.11$\sim$2.19 & N6 \\ WR76-11&WN7o& & & & & & & & & & 1.15 & 1.57 & 1.1 & 1.64& 1.67& 1260& 1.59& 2.11$\sim$2.19 & N6 \\ WR93a&WN6h& 1800 & 1.91 &1.85 &1.71 & &1.71 & 1.98 &1.38 & & & & & 1.38 &1.88 &1770:&1.59 &2.11$>$2.10 & N2 \\ \hline \multicolumn{20}{l}{ \begin{minipage}{\columnwidth}~\\ Note: 2.10 = N\,{\sc v} 2.100; 2.11 = He\,{\sc i} 2.112 + N\,{\sc iii} 2.116; 2.19 = He\,{\sc ii} 2.189 \\ \end{minipage} } \end{tabular} \end{footnotesize} \end{center} \end{table} \begin{table} \begin{center} \begin{footnotesize} \caption{Near-IR equivalent width and FWHM measurements for newly identified Galactic WC stars plus previously discovered WC stars lacking optical spectroscopy. Equivalent widths (in \AA) are generally robust to $\pm$0.05 dex, except for weak lines $\pm$0.1 dex, while measured FWHM (in km\,s$^{-1}$) are generally reliable to $\pm$50 km\,s$^{-1}$. 
The key to the spectroscopic datasets utilised is provided in Table~\ref{tab:log}.}\label{newbies_wc} \begin{tabular}{l@{\hspace{1mm}}l@{\hspace{1mm}}c@{\hspace{1mm}} c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}} c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}} c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}l} \hline WR & WC & C\,{\sc iii} 0.97 & C\,{\sc ii} 0.99 & \multicolumn{2}{c}{He\,{\sc ii} 1.01} & He\,{\sc i} 1.08 & He\,{\sc ii} 1.16 & C\,{\sc iv} 1.19 & C\,{\sc iii} 1.20 & C\,{\sc iv} 1.43 & He\,{\sc i-ii} 1.70 & \multicolumn{2}{c}{C\,{\sc iv} 1.74} & C\,{\sc ii} 1.78 & He\,{\sc i} 2.06 & C\,{\sc iv} 2.07 & C\,{\sc iii} 2.11 & Data \\ & SpType & $\log W_{\lambda}$ & $\log W_{\lambda}$& FWHM & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$ & FWHM & $\log W_{\lambda}$ & $\log W_{\lambda}$ & $\log W_{\lambda}$& $\log W_{\lambda}$ & $\log W_{\lambda}$ \\ \hline WR46-18 & WC6--7 & 2.72 &$<$1.1& 3290 & 2.24 & 2.38 & 2.09 & 2.26 & 1.81 & 1.88 & 1.62 & 3040 & 2.33 & & & 3.02 & 2.36 & N6\\ WR60-7 & WC7--8 & 2.99 & 1.63 & 1640 & 2.08 & 2.36 & 2.05 & 2.25 & 1.89 & 2.25 & 1.56 & 1770 & 2.26 & & & 2.89 & 2.34 & N6\\ WR70-13 & WC8d & 2.89 & 1.68 & 1350 & 1.72 & 2.34 & 1.85 & 1.94 & 2.01 & 2.01 & 1.54 & 1300 & 1.90 & 1.84 & & 2.31 & 2.03 & N6\\ WR75aa & WC9d & 2.72 & 1.67 & 1060 & 1.41 & 2.26 & 1.76 & 1.75 & 1.98 & 1.75 & 1.11 & 1330 & 1.30 & 1.54 & & 1.64 & 1.57 & N5\\ WR75c & WC9 & 2.58 & 1.70 & 990 & 1.30 & 2.63 & 1.69 & 1.60 & 2.12 & 1.82 & 1.81 & 1190 & 1.58 & 2.14 & 2.43 & 2.00 & 2.16 & N5\\ WR107a & WC5--7& 3.11 & 1.72 & 2170 & 2.24 & 2.25 & 2.12 & 2.47 & 1.83 & 2.34 & & & & & & 2.94 & 2.25 & N2\\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \end{landscape} \begin{figure*} \begin{center} 
\includegraphics[width=0.85\textwidth]{new_wr_blue_v2.eps} \caption{YJ-band spectra of new WR stars, plus unpublished NTT/SOFI spectroscopy of previously identified WR stars (WR62a, WR68a, WR75aa, WR75c, WR93a and WR107a).} \label{fig:new_wr_blue} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.85\textwidth]{new_wr_red_v2.eps} \caption{HK-band spectra of new WR stars, plus unpublished NTT/SOFI spectroscopy of previously identified WR stars (WR62a, WR68a, WR75aa, WR75c, WR93a and WR107a) plus our NTT/SOFI spectroscopy of WR75--30.} \label{fig:new_wr_red} \end{center} \end{figure*} \begin{table*} \begin{center} \caption{Ratios of near-IR diagnostic lines for newly identified Wolf-Rayet stars from NTT/SOFI spectroscopy plus revised near-IR classifications for previously known stars. Line strengths/widths are provided for He\,{\sc ii} 1.012$\mu$m (2.189$\mu$m in parenthesis) or C\,{\sc iv} 1.736$\mu$m (1.190$\mu$m in parenthesis) for WN and WC stars, respectively.} \label{tab:new_wrstars} \begin{tabular}{ l@{\hspace{2mm}}l@{\hspace{2mm}}c@{\hspace{2mm}} r@{\hspace{2mm}}l@{\hspace{1mm}} r@{\hspace{2mm}}l@{\hspace{1mm}} r@{\hspace{2mm}}l@{\hspace{1mm}} r@{\hspace{2mm}}l@{\hspace{1mm}} r@{\hspace{2mm}}l@{\hspace{1mm}} l@{\hspace{2mm}}l@{\hspace{1mm}}l} \hline ID & WR & 1.01$\mu$m & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & Old & Ref & New \\ & & FWHM & \multicolumn{2}{c}{(He\,{\sc i} 1.08/ } & \multicolumn{2}{c}{(P$\beta$/ } & \multicolumn{2}{c}{(He\,{\sc i} 1.70/ } & \multicolumn{2}{c}{(Br$\gamma$/ } & \multicolumn{2}{c}{(He\,{\sc i}+N\,{\sc iii} 2.11/ } & SpT & & SpT\\ & & km\,s$^{-1}$ & \multicolumn{2}{c}{He\,{\sc ii} 1.01)} & \multicolumn{2}{c}{He\,{\sc ii} 1.16)} & \multicolumn{2}{c}{He\,{\sc ii} 1.69)} & \multicolumn{2}{c}{He\,{\sc ii} 2.19)} & \multicolumn{2}{c}{N\,{\sc 
v} 2.10)} & & & \\ \hline B\#13 & WR47-5 &1000 & +0.08 & WN6 & --0.17 & (h) & --0.55 & WN4 & --0.01 & (h) & $>$0 & WN5--6 & -- & --& WN6(h) \\ B\#37 & WR56-1 &1220 &--0.31 & WN5 & --0.41 & o & --0.28 & WN5 & --0.26 & o & $>$0 & WN5--6 & -- & --& WN5o \\ B\#56 & WR60-8 &1510 & --0.01 & WN6 & --0.38 & o & --0.27 & WN5 & --0.20 & o & $>$0 & WN5--6 & -- & --& WN6o \\ & WR62a &1670& --0.09 & WN6 & --0.46 & o & \multicolumn{2}{c}{---} & --0.20 & o/(h) & $>$0 & WN5--6 &WN5o&S99& WN6o\\ B\#85 & WR64-2 &1480& --0.03 & WN6 & --0.41 & o & --0.32 & WN5 & --0.33 & o & $>$0 & WN5--6 & -- & --& WN6o \\ B\#87 & WR64-3 &1340& +0.02 & WN6 & --0.22 & o/(h) & --0.32 & WN5 & --0.19 & o & $>$0 & WN5--6 & -- & --& WN6o\\ B\#88 & WR64-4 &1930& +0.13 & WN6 & --0.28 & o & --0.26 & WN5 & --0.36 & o & $\gg$0 & WN6 & -- & --& WN6o+OB\\ B\#91 & WR64-5 &1680& +0.16 & WN6 & --0.29 & o & --0.10 & WN6 & --0.25 & o & $\gg$0 & WN6 & -- & --& WN6o\\ B\#93 & WR64-6 &2130& +0.15 & WN6b & --0.33 & o & --0.16 & WN6b & --0.38 & o & $>$0 & WN6b & -- & --& WN6b\\ & WR68a &1680& --0.11& WN6 & --0.37 & o & \multicolumn{2}{c}{--- } & --0.13 & o/(h) & $>$0 & WN5--6 &WN6o&S99& WN6o\\ B\#107& WR70-14 &2760& --0.16& WN4--6b & --0.43 & o & --0.35 & WN6b & --0.34 & o & $\simeq$0 & WN4b& -- & --& WN4b\\ B\#123& WR70-15 &1450& --0.45 & WN4--5 & --0.50 & o & --1.08 & WN2--4 & --0.55 & o & $>$0 & WN5--6 & -- & --& WN5o\\ B\#132& WR72-5 &1370 & +0.02 & WN6 & --0.38 & o & --0.26 & WN5: & --0.26 & o & $\gg$0 & WN6 & -- & --& WN6o\\ A\#11 & WR75-31 &1250& +0.60: & WN7--8 & --0.17 & o & +0.39 & WN7--8 & --0.01 & o & 0.00$\ddag$ & WN7 & -- & --& WN7o\\ A\#13 & WR75-30 &(1880)& \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---} & +0.25 & WN7 & --0.03 & o & 0.02$\ddag$ & WN7 & WN6 &K15& WN7o\\ B\#154& WR76-11 &(1260) & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---} & +0.42 & WN7--8 & --0.08 & o & 0.05$\ddag$ & WN7 & -- & --& WN7o\\ & WR93a &1800 & --0.06 & WN6 & +0.27 & h & \multicolumn{2}{c}{---} & +0.29 & h & $\gg$0 & WN6 
&WN6 &M13& WN6h\\ \hline ID & WR & 1.74$\mu$m & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & \multicolumn{2}{c}{$\log W_{\lambda}$} & Old & Ref & New\\ & & FWHM & \multicolumn{2}{c}{(He\,{\sc i} 1.08/ } & \multicolumn{2}{c}{(C\,{\sc iii} 1.20/ } & \multicolumn{2}{c}{(He\,{\sc i-ii} 1.70/ } & \multicolumn{2}{c}{(C\,{\sc iii} 2.11/ } & \multicolumn{2}{c}{(C\,{\sc ii} 0.99/ } & SpT & & SpT\\ & & km\,s$^{-1}$ & \multicolumn{2}{c}{He\,{\sc ii} 1.01)}& \multicolumn{2}{c}{C\,{\sc iv} 1.19)} & \multicolumn{2}{c}{C\,{\sc iv} 1.74)} & \multicolumn{2}{c}{C\,{\sc iv} 2.07)} & \multicolumn{2}{c}{C\,{\sc iii} 0.97)} & & & \\ \hline E\#3 & WR46-18 & 3100 & +0.14 & WC6--7 & --0.45 & WC7 & --0.71 & WC6--7 & --0.66 & WC4--7 & --1.62 & WC4--6 &--&--& WC6--7\\ B\#51& WR60-7 & 1800 & +0.28 & WC7--8 & --0.36 & WC8 & --0.70 & WC6--7 & --0.55 & WC7--8 & --1.36 & WC7--8 &--&--& WC7--8\\ B\#105& WR70-13 & 1300 & +0.62 & WC8--9 & +0.07 & WC8--9& --0.36 & WC8 & --0.28 & WC8 & --1.21 & WC8--9 & -- & --& WC8d\\ & WR75aa & 1400 & +0.85 & WC9 & +0.23 & WC9 & --0.19 & WC8--9 & --0.07 & WC9 & --1.05 & WC9 &WC9d&H05& WC9\\ & WR75c & 1350 & +1.33 & WC9 & +0.52 & WC9 & +0.23 & WC9 & +0.19 & WC9 & --0.88 & WC9 &WC9 &H05 & WC9\\ & WR107a &(2400)& +0.01 & WC5 & --0.64 & WC5--7 & \multicolumn{2}{c}{---} & --0.69 & WC5--7 & --1.39 & WC5--8 &WC6&S99& WC5--7\\ \hline \multicolumn{15}{l}{ \begin{minipage}{2\columnwidth}~\\ S99 \citep{shara99}, H05 \citep{hopewell05}, M13 \citep{miszalski13}, K15 \citep{kanarek15} \\ $ \ddag: \log W_{\lambda}$ (He\,{\sc i} 2.112 + N\,{\sc iii} 2.116/He\,{\sc ii} 2.189)\\ \end{minipage} } \end{tabular} \end{center} \end{table*} \section{New Galactic Wolf-Rayet stars}\label{sec:new} We have identified 16 new Wolf-Rayet stars, which we have assigned Galactic WR numbers, in accordance with the current IAU convention (see Appendix of RC15). 
Here we discuss their spectral types, spatial locations and their potential association with the Scutum-Crux or other spiral arms. Near-IR spectra of the new WR stars are presented in Figures~\ref{fig:new_wr_blue} (YJ) and \ref{fig:new_wr_red} (HK), together with our NTT/SOFI observations of WR75aa, WR75c, the recently discovered WN star WR75-30 \citep{kanarek15}, plus previously unpublished NTT/SOFI spectroscopy of WR62a, WR68a, WR93a, WR107a, as discussed above. Line measurements are provided in Tables~\ref{newbies_wn} and \ref{newbies_wc} for WN and WC stars, respectively, with diagnostic line ratios presented in Table~\ref{tab:new_wrstars}. \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{FWHM.eps} \caption{{\bf (a):} FWHM (in km\,s$^{-1}$) vs equivalent width (in \AA) for He\,{\sc ii}\,1.012\micron\ in apparently single Galactic broad-lined WN stars (open squares), weak-lined WN stars (open triangles) and WNha/+abs stars (crosses) together with the newly identified WN stars (filled circles). The grey region indicates the parameter space covered by broad-lined WN stars. {\bf (b)}: FWHM (in km\,s$^{-1}$) vs equivalent width (in \AA) for C\,{\sc iv}\,1.74\micron\ in apparently single Galactic WC4--7 stars (open squares), WC8--9 stars (open triangles) plus WCd/WC+O systems (crosses) together with the newly identified WC stars (filled circles).} \label{fig:wn-lines} \end{center} \end{figure*} \subsection{Classification of the new WR stars} \subsubsection{Broad-lined WN stars} Only two of the new WN stars, WR64--6 and WR70--14, are identified as broad-lined WN stars, owing to their He\,{\sc ii} 1.01$\mu$m line widths (FWHM $>$ 1900 km\,s$^{-1}$) and strengths ($\log W_{\lambda}$/\AA\ $\geq$ 2.4), albeit WR64-6 only narrowly complies with the second criterion.
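The broad-/narrow-lined division used throughout this section is a simple two-parameter cut on the He\,{\sc ii} 1.012$\mu$m line. A minimal sketch follows (the thresholds are those quoted above; the function name and the strict handling of borderline objects such as WR64-6 are our own illustrative choices):

```python
def wn_line_width_class(fwhm_kms, log_w):
    """Crude broad- vs narrow-lined WN discriminator for He II 1.012 micron.

    Broad-lined (WNb) stars satisfy FWHM >~ 1900 km/s AND
    log(W_lambda / Angstrom) >~ 2.4; everything else is treated here as
    narrow-lined.  Borderline objects (e.g. WR64-6, log W = 2.39) still
    require individual judgement, so this is only a first pass.
    """
    if fwhm_kms >= 1900.0 and log_w >= 2.4:
        return "broad-lined (WNb)"
    return "narrow-lined (WN)"
```

For example, WR70-14 (FWHM = 2760 km\,s$^{-1}$, $\log W_{\lambda}$ = 2.49) lands in the broad-lined region, whereas WR56-1 (1220 km\,s$^{-1}$, 2.09) is narrow-lined.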
A WN6b subtype is favoured for WR64--6 from its He\,{\sc i} 1.08/He\,{\sc ii} 1.01$\mu$m ratio, which is supported by (N\,{\sc iii} + He\,{\sc i} 2.11) $>$ N\,{\sc v} 2.10, while both hydrogen criteria (involving P$\beta$ and Br$\gamma$) indicate no hydrogen, so we adopt WN6b for this star. For WR70--14, the He\,{\sc i} 1.08/He\,{\sc ii} 1.01$\mu$m ratio is somewhat ambiguous, consistent with WN4--6b, but (N\,{\sc iii} + He\,{\sc i} 2.11) $\sim$ N\,{\sc v} 2.10 favours WN4b. This is supported by weak N\,{\sc v} 1.11$\mu$m in the J-band (Fig.~\ref{fig:new_wr_blue}). Again, there is no evidence for atmospheric hydrogen from our criteria (Fig.~A4, available online), so WN4b is assigned to WR70--14. \subsubsection{Narrow-lined WN stars} The remaining 11 WN stars are a relatively homogeneous group, almost all classified as either WN5o, WN6o or WN7o stars, with only WR47--5 showing evidence of hydrogen, so we consider these according to their subtype. The two highest-ionization narrow-lined stars are WR56-1 and WR70-15, according to their He\,{\sc i} 1.08/He\,{\sc ii} 1.01 ratios (Fig.~A3, available online), which indicate WN5 for both stars. This is supported by the He\,{\sc i} 1.70/He\,{\sc ii} 1.69 ratio for WR56-1, although an earlier WN3--4 subtype is favoured by this ratio for WR70-15. We also consider their K-band morphologies, for which (N\,{\sc iii} + He\,{\sc i} 2.11) $>$ N\,{\sc v} 2.10 in both cases, indicating WN5--6. Neither star shows any evidence for atmospheric hydrogen from Fig.~A4, so we adopt WN5o for both stars. The majority of our narrow-lined WN stars are WN6 stars according to their He\,{\sc i} 1.08/He\,{\sc ii} 1.01 ratios (Fig.~A3), with He\,{\sc i} 1.70/He\,{\sc ii} 1.69 suggesting WN4, 5 or 6.
As with the WN5o stars considered above, we also consider the K-band morphology, for which either (N\,{\sc iii} + He\,{\sc i} 2.11) $>$ N\,{\sc v} 2.10, implying WN5--6, or (N\,{\sc iii} + He\,{\sc i} 2.11) $\gg$ N\,{\sc v} 2.10, implying WN6. Only WR47--5 indicates the presence of (modest) hydrogen from Fig.~A4, such that we classify it as WN6(h), but favour WN6o for WR60-8, WR64-2, -3, -4, -5 and WR72-5. Of the remaining stars, only WR75-31 was observed in the YJ-band with SOFI, for which He\,{\sc i} 1.08/He\,{\sc ii} 1.01 indicates a WN7--8 subtype, with He\,{\sc i} 1.70/He\,{\sc ii} 1.69 also providing an ambiguous WN7--8 classification. Its K-band morphology strongly favours WN7 since (N\,{\sc iii} + He\,{\sc i} 2.11) $\sim$ He\,{\sc ii} 2.19, while there is no evidence for atmospheric hydrogen in WR75-31 from Fig.~A4, such that we assign WN7o to this star. WR76-11 was observed solely in the H and K bands, but closely resembles WR75-31 such that we classify it as WN7o, as with WR75-30. None of the new WN stars qualify as WN/C stars, since C\,{\sc iii} 0.971$\mu$m and C\,{\sc iv} 1.19$\mu$m, 1.74$\mu$m, 2.07$\mu$m are weak/absent. \subsubsection{WC stars} Three of the new WR stars are carbon sequence WC stars. Considering the primary diagnostics for WR46--18, WC7 is favoured from the C\,{\sc iv} 1.19/C\,{\sc iii} 1.20 ratio, WC4--6 from the C\,{\sc iii} 0.97/C\,{\sc ii} 0.99 ratio, and WC5--7 from the C\,{\sc iv} 2.07/C\,{\sc iii} 2.11 ratio. Secondary indicators suggest WC5--7 from He\,{\sc i} 1.08/He\,{\sc ii} 1.01, WC5--6 from C\,{\sc iii} 0.97/He\,{\sc ii} 1.01 and WC6--7 from He\,{\sc i-ii} 1.7/C\,{\sc iv} 1.74. Overall, we adopt WC6--7, reflecting the tension in primary indicators for WR46--18 (Fig.~A7, available online). WR60--7 is classified as WC8, WC7, WC7--8 from primary diagnostics C\,{\sc iv} 1.19/C\,{\sc iii} 1.20, C\,{\sc iv} 2.07/C\,{\sc iii} 2.11 and C\,{\sc iii} 0.97/C\,{\sc ii} 0.99, respectively.
Secondary criteria C\,{\sc iii} 0.97/He\,{\sc ii} 1.01 and He\,{\sc i-ii} 1.7/C\,{\sc iv} 1.74 indicate WC6--7, while He\,{\sc i} 1.08/He\,{\sc ii} 1.01 favours WC7--8. Overall, WC7--8 is selected for WR60--7, reflecting the lack of a consensus amongst primary criteria (Fig.~A7). Finally, primary diagnostics C\,{\sc iv} 1.19/C\,{\sc iii} 1.20, C\,{\sc iv} 2.07/C\,{\sc iii} 2.11 and C\,{\sc iii} 0.97/C\,{\sc ii} 0.99 imply WC8--9, WC8 and WC8--9, respectively, for WR70--13, while C\,{\sc iv} 1.74 $\geq$ C\,{\sc ii} 1.78 indicates WC8. Consequently we adopt WC8 for WR70--13, which is supported by our secondary indicator He\,{\sc i-ii} 1.7/C\,{\sc iv} 1.74, with He\,{\sc i} 1.08/He\,{\sc ii} 1.01 and C\,{\sc iii} 0.97/He\,{\sc ii} 1.01 consistent with either WC8 or WC9 (Fig.~A7). \subsection{Binarity}\label{binary} Approximately 40\% of the Galactic WR population are observed in multiple systems \citep{vdh01}. This is a lower limit on the true binary fraction, since no systematic survey has been carried out. It is therefore highly likely that some of the newly discovered WR stars are in fact multiple systems. Direct detection of companion stars, usually main-sequence OB stars, is not possible with the current dataset, since their absorption lines are generally weak with respect to the strong WR emission lines. It is, however, possible to infer the presence of a companion star by considering the equivalent widths of near-IR emission lines, which will be diluted by the continuum of a companion star and/or by hot dust in the case of some WC+OB systems, since dust formation is itself an indicator of binarity in WC stars. Since a companion star and/or thermal dust emission will not reduce line widths, a weak line compared to single stars at a specific FWHM is suggestive of binarity.
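The dilution argument can be made quantitative: a companion contributes continuum flux but (to first order) no line flux, so equivalent widths shrink while line widths are preserved. A minimal sketch, with the continuum flux ratio as a free illustrative parameter:

```python
def diluted_ew(ew_single, f_companion_over_fwr):
    """Equivalent width of a WR emission line after continuum dilution.

    A companion (or hot dust) adds continuum but essentially no line
    flux, so W_diluted = W_single / (1 + f_c / f_WR).  The FWHM is
    unaffected, which is why a weak-but-broad line at a given FWHM
    flags a candidate binary in Fig. "fig:wn-lines".
    """
    return ew_single / (1.0 + f_companion_over_fwr)
```

An equal-brightness companion therefore halves the measured equivalent width, shifting a star downwards off the single-star locus without changing its position along the FWHM axis.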
In Figure~\ref{fig:wn-lines} we compare the FWHM (km\,s$^{-1}$) and equivalent widths (in \AA) of strong, isolated lines in apparently single Galactic WN stars (He\,{\sc ii}\,1.012\micron) and WC stars (C\,{\sc iv} 1.736\micron) with those of the newly discovered WR stars. We also include weak-lined WN stars with intrinsic absorption lines (WR24, WR87, WR108), which could be mistaken for WN+OB stars, plus dusty WC stars (WR121, WR75aa), whose near-IR emission lines are diluted by hot dust. The majority of the newly identified WR stars possess emission line strengths which are characteristic of single stars. From Fig.~\ref{fig:wn-lines}(a), two exceptions are WR75-31 (WN7o) and WR64-4 (WN6o), which possess weak emission for their He\,{\sc ii} 1.012$\mu$m FWHM. Both are potential binaries, although WR64-4 is the strongest candidate, such that we revise its spectral type to WN6o+OB. In contrast, WR75-31 has an overall relatively strong emission line spectrum, albeit with an anomalously weak (and low S/N) He\,{\sc ii} 1.0124$\mu$m line. Of the WC stars, none possess unusually weak emission lines based on their C\,{\sc iv} 1.736$\mu$m FWHM (Fig.~\ref{fig:wn-lines}(b)). However, the increased dilution of WC emission lines from 1$\mu$m to 2.5$\mu$m arising from hot dust in WCd systems also severely modifies equivalent width ratios of C\,{\sc iii-iv} lines. For example, $W_{\lambda}$(C\,{\sc iii} 2.11)/$W_{\lambda}$(C\,{\sc iii} 0.97) = 0.3 for WR88 (WC9), but hot dust in WR121 (WC9d) reduces this ratio to 0.05. A similar reduction in line strength is observed for prominent He\,{\sc ii} lines, with $W_{\lambda}$(He\,{\sc ii} 2.19)/$W_{\lambda}$(He\,{\sc ii} 1.28) = 0.5 for WR88 and 0.17 for WR121. WR135 is a prototypical non-dusty WC8 star with $W_{\lambda}$(C\,{\sc iii} 2.11)/$W_{\lambda}$(C\,{\sc iii} 0.97) = 0.2; this ratio is also 0.2 for WR60-7 but only 0.1 for WR70-13, suggestive of dust dilution in the latter.
Indeed, WR60-7 (WC7--8) and WR70-13 (WC8) possess similar J-H colours, yet the latter has a 0.5 mag higher K$_S$--[8] colour (Table~\ref{tab:obs}), so we amend its spectral type to WC8d. Moreover, WR70-13 is offset from the other WC stars in the reddening-free Q1 vs Q2 comparison in Fig.~\ref{fig:c-c}. Turning to WR46-18, $W_{\lambda}$(C\,{\sc iii} 2.11)/$W_{\lambda}$(C\,{\sc iii} 0.97) $\sim$0.3 for non-dusty WC6--7 stars, with a ratio of 0.4 for WR46-18, arguing against dust emission in this instance. Discovery of a hard X-ray source associated with any of the WR stars would be highly indicative of stellar wind collision in a massive binary. Indeed, several WR stars towards the Galactic Centre coincide with hard X-ray sources (e.g., \citealt{mauerhan10}, \citealt{nebot15}). From a search of the XMM-Newton science archive and the Chandra source catalogue (1.0), fields including WR60-7 (WC7--8), WR60-8 (WN6o), and WR64-4 (WN6o) had been imaged by Chandra ACIS, although none revealed a source at the location of the WR star. \subsection{Spatial location of the new WR stars} \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{gal_xydist.eps} \caption{ A top-down view of the Galactic disk showing the 4th quadrant. The x-axis is parallel to the line-of-sight in the direction $l\,{=}\,270^\circ$. Shaded grey is the swathe of Galactic longitude constituting the survey area ($298^\circ\,{<}\,l\,{<}\,340^\circ$). New WN stars are shown as filled blue circles (WR75-30 is an open symbol), new WC stars are shown as filled red squares, while 355 previously identified WR stars are small grey circles \citep{rosslowe15a}. Dashed lines show the measured locations of prominent spiral arms in this Galactic quadrant, represented as logarithmic spirals. } \label{fig:spiral} \end{center} \end{figure} We have estimated distances to the new WR stars by adopting an absolute K$_S$-band magnitude based on the assigned spectral subtype.
To do this, we followed the approach of RC15, which we briefly summarise here. To calculate the foreground dust extinction to each new WR star, we used subtype-specific intrinsic J$-$H and H$-$K$_S$ colours to measure a colour excess. Using the near-IR extinction law of \citet{stead09}, we thereby obtained two measures of extinction in the K$_S$-band, of which an average was taken. Distances were then calculated using the standard relation between absolute and apparent magnitude. We used the uncertainties on calibrated absolute magnitudes, given by RC15, to calculate upper and lower bounds on the distances. In Table~\ref{tab:dist} we provide interstellar extinctions and distance moduli/distances, with uncertainties, for every new WR star. Typical extinctions are $A_{\rm K} = 1.2\pm$0.2 mag, with characteristic distances of 9.7$\pm$3.8 kpc. This method inherently assumes the WR star is the sole (or dominant) contributor of near-IR flux to each source. Recalling Sect.~\ref{binary}, this assumption is justified for the new WN stars with the exception of WR64-4, the emission line strengths of which suggest a significant contribution from a companion source, implying a larger distance. In addition, we provide a second distance estimate for WR70-13 in Table~\ref{tab:dist} since there is evidence for a contribution by circumstellar dust to the K-band. Adopting the absolute magnitude of a WC8 star which is dominated by hot dust would significantly increase the distance to WR70-13 from 5.3 to 10.7 kpc, though in reality the dust contribution is likely to be modest such that an intermediate distance is more realistic. \begin{table} \begin{center} \caption{Estimated interstellar extinctions and distances to the newly discovered WR stars, following the methodology of RC15. Two entries are provided for WR70-13, the second appropriate for a dusty WC8 star.
We also provide a distance estimate to the recently identified star WR75-30 \citep{kanarek15} resulting from our new spectroscopic classification.} \begin{footnotesize} \begin{tabular}{l@{\hspace{1mm}}l@{\hspace{0.5mm}}c@{\hspace{1mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}r} \hline WR & Spec. & M$_{\mathrm{K}_\mathrm{S}}$ & A$_{\mathrm{K}_\mathrm{S}}$ & DM & \multicolumn{1}{c}{d} \\ & Type & mag & mag & mag & \multicolumn{1}{c}{kpc} \\ \hline WR46-18 & WC6--7 & $-4.75 \pm 0.77$ & 0.97$\pm$0.10 & 14.25$\pm$0.78 & $7.1^{+3.0}_{-2.1}$ \\ WR47-5 & WN6(h) & $-4.94 \pm 0.46$ & 0.96$\pm$0.02 & 15.07$\pm$0.46 & $10.3^{+2.4}_{-2.0}$ \\ WR56-1 & WN5o & $-3.86 \pm 0.34$ & 1.23$\pm$0.00 & 14.81$\pm$0.34 & $9.2^{+1.6}_{-1.3}$ \\ WR60-7 & WC7--8 & $-4.94 \pm 0.55$ & 1.16$\pm$0.02 & 14.06$\pm$0.55 & $6.5^{+1.9}_{-1.5}$ \\ WR60-8 & WN6o & $-4.94 \pm 0.46$ & 1.45$\pm$0.05 & 15.25$\pm$0.46 & $11.2^{+2.6}_{-2.1}$ \\ WR64-2 & WN6o & $-4.94 \pm 0.46$ & 1.15$\pm$0.02 & 15.68$\pm$0.46 & $13.7^{+3.2}_{-2.6}$ \\ WR64-3 & WN6o & $-4.94 \pm 0.46$ & 0.98$\pm$0.01 & 14.14$\pm$0.46 & $6.7^{+1.6}_{-1.3}$ \\ WR64-4 & WN6o+ & $-4.94 \pm 0.46$ & 1.02$\pm$0.04 & 13.02$\pm$0.46 & $4.0^{+0.9}_{-0.8}$ \\ WR64-5 & WN6o & $-4.94 \pm 0.46$ & 1.00$\pm$0.01 & 14.74$\pm$0.46 & $8.9^{+2.1}_{-1.7}$ \\ WR64-6 & WN6b & $-5.16 \pm 0.37$ & 1.13$\pm$0.01 & 15.14$\pm$0.37 & $10.7^{+2.0}_{-1.7}$ \\ WR70-13 & WC8d & $-5.04 \pm 0.41$ & 1.38$\pm$0.08 & 13.62$\pm$0.42 & $5.3^{+1.1}_{-0.9}$ \\ & & $-6.57 \pm 0.41$ & 1.38$\pm$0.08 & 15.15$\pm$0.42 & $10.7^{+2.3}_{-1.9}$ \\ WR70-14 & WN4b & $-4.85 \pm 0.38$ & 1.11$\pm$0.04 & 15.24$\pm$0.38 & $11.2^{+2.1}_{-1.8}$ \\ WR70-15 & WN5o & $-3.86 \pm 0.34$ & 1.45$\pm$0.01 & 14.81$\pm$0.34 & $9.2^{+1.6}_{-1.3}$ \\ WR72-5 & WN6o & $-4.94 \pm 0.46$ & 1.00$\pm$0.00 & 14.21$\pm$0.46 & $6.9^{+1.6}_{-1.3}$ \\ WR75-31 & WN7o & $-5.49 \pm 0.42$ & 1.42$\pm$0.02 & 16.28$\pm$0.42 & $18.0^{+3.9}_{-3.2}$ \\ WR75-30 & WN7o & $-5.49 \pm 0.42$ & 1.70$\pm$0.06 & 15.36$\pm$0.42 & $11.8^{+2.5}_{-2.1}$ \\ 
WR76-11 & WN7o & $-5.49 \pm 0.42$ & 1.45$\pm$0.05 & 16.02$\pm$0.42 & $16.0^{+3.4}_{-2.8}$ \\ \hline \end{tabular} \end{footnotesize} \label{tab:dist} \end{center} \end{table} \subsection{Association of WR stars with the Scutum-Crux and other spiral arms} In Figure~\ref{fig:spiral} we present the locations of the new WR stars on a top-down view of the Galactic disk, together with WR stars mapped by RC15 (including approximately half of the currently known population). Over-plotted are the locations of the three main spiral features detected in the 4th Galactic quadrant ($270^\circ\,{<}\,l\,{<}\,360^\circ$), assuming $R_G\,{=}\,8.0$\,kpc. We assume each arm is a logarithmic spiral, parameterised as: \begin{equation} x\,{=}\,r\cos(\theta),~y\,{=}\,r\sin(\theta),~r\,{=}\,r_t\exp[(\theta-\theta_t)\tan(p)], \end{equation} in which $r_t$ is the Galactocentric radius of the observed tangent, $\theta$ is the angle measured anti-clockwise about the origin from the positive x-axis, and $\theta_t$ is the angle at which the observed tangent is located. The parameter $p$ is the pitch angle of the arm. The calculation of $r_t$ requires a measurement of the longitude of the tangent to each arm ($l_t$), and the Galactocentric radius of the Sun ($R_G$): \begin{equation} r_t=R_G\sin(360^\circ-l_t)/\sin(90^\circ-p). \end{equation} \citet{vallee14} catalogued the observed tangents to spiral arms in the Galaxy, and calculated averages of measurements using different tracers. Subsequently, \citet{vallee15} used these observations to measure pitch angles for individual spiral arms. From these studies we adopt $l_t\,{=}\,284^\circ\,\&\,p\,{=}\,14^\circ$ for the Sagittarius-Carina arm, $310^\circ\,\&\,13.3^\circ$ for the Scutum-Crux arm, and $328^\circ\,\&\,9.9^\circ$ for the Norma (3\,kpc) arm.
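The two relations above are straightforward to evaluate numerically. The following minimal sketch (function names are ours, not from the paper) computes the tangent radius for the adopted Scutum-Crux parameters and traces the corresponding logarithmic spiral:

```python
import numpy as np

R_G = 8.0  # adopted Galactocentric radius of the Sun, kpc


def tangent_radius(l_t_deg, p_deg):
    """Galactocentric radius r_t of a spiral-arm tangent observed at
    longitude l_t for an arm of pitch angle p (second relation)."""
    l_t, p = np.radians(l_t_deg), np.radians(p_deg)
    return R_G * np.sin(2.0 * np.pi - l_t) / np.sin(np.pi / 2.0 - p)


def log_spiral_xy(theta_deg, theta_t_deg, r_t, p_deg):
    """(x, y) locus of the logarithmic spiral (first relation)."""
    theta = np.radians(theta_deg)
    r = r_t * np.exp((theta - np.radians(theta_t_deg))
                     * np.tan(np.radians(p_deg)))
    return r * np.cos(theta), r * np.sin(theta)


# Scutum-Crux arm: tangent at l_t = 310 deg, pitch angle p = 13.3 deg
r_t = tangent_radius(310.0, 13.3)  # ~6.3 kpc
```

At the tangent angle itself ($\theta = \theta_t$) the spiral radius reduces to $r_t$, which provides a quick consistency check of any implementation.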
The new WR stars appear to be evenly distributed throughout the section of Galactic disk observed; neither they, nor the previously mapped WR stars, show any obvious association with the spiral features in the region. Indeed, no pattern resembling a spiral arm can be seen in the distribution of WR stars, as would be expected if they were tightly confined to spiral arms, even allowing for a systematic offset in the distance measurements. The upcoming second data release (DR2) from {\it Gaia} should address this general question, although the majority of the newly discovered WR stars are too faint for reliable parallaxes with {\it Gaia}. Indeed, only 7 of the 17 stars are included in the {\it Gaia} first data release \citep[DR1;][]{GAIA-DR1}, with G = 17.4--20.1 mag. \begin{table*} \begin{center} \caption{Galactic WR stars hosted by a star cluster in the range 298$^{\circ} \leq l \leq 340^{\circ}$, updated from \citet{lundstrom84}. We consider WR stars to be associated with a cluster if $r \leq 4 R$; such stars represent 27\% of the known WR content of this region of the Milky Way.} \label{tab:clusters} \begin{tabular}{ l@{\hspace{2mm}}l@{\hspace{2mm}} c@{\hspace{2mm}}c@{\hspace{2mm}} r@{\hspace{2mm}}r@{\hspace{2mm}} l@{\hspace{2mm}} l@{\hspace{2mm}}l@{\hspace{2mm}} l} \hline Cluster & Alias & $l$ & $b$ & $d$ & $R$ &Ref& WN ($r$, arcmin) & WC ($r$, arcmin) & Ref \\ & & deg & deg & kpc & arcmin& & & & \\ \hline & VVV CL 011 & 298.506 & --0.170 & 5.0: & 0.1 & d, l & WR46-17 (0.0) & & d \\ & Mercer 30 & 298.756 & --0.408 & 7.2 & 0.3 & a, j & WR46-3 (0.2), WR46-4 (0.1) & & a \\ & & & & & & & WR46-5 (0.1), WR46-6 (0.2) & & a \\ C 1240-628 & Hogg 15 & 302.047 & --0.242 & 3.0 & 3.5 & k & WR47 (1.6) & & b \\ C 1309-624 & Danks 1 & 305.339 & +0.080 & 4.2 & 0.75 & c, k & WR48-8 (0.6), WR48-9 (0.6) & WR48a (1.3), WR48-3 (1.9) & c \\ & & & & & & & WR48-10 (0.6), WR48-6 (2.7)& WR48-4 (2.4) & c \\ & & & & & & & WR48-7 (2.5) & & c \\ C 1310-624 & Danks 2 & 305.393 & +0.088 & 4.2 & 0.75 & c, k
& & WR48-2 (0.6) & c \\ & VVV CL 036 & 312.124 & +0.212 & 2.0:& 0.8 & d, l & WR60-6 (0.1) & & d \\ & VVV CL 041 & 317.109& +0.281 & 4.2 & 0.5 & e, l & WR62-2 (0.2) & & e \\ & & & & & & & & & \\ & Pismis 20 & 320.516 &--1.200 & 3.6 & 2.0 & k & WR67 (2.1) & & i \\ & Mercer 70 & 329.697 & +0.584 & 7.0 & 0.4 & f, j & & WR70-12 (0.4) & f\\ & VVV CL 073 & 335.894 & +0.133 & 4.0:& 0.3 & d, l & WR75-25 (0.1), WR75-26 (0.1)& & d \\ & VVV CL 074 & 336.373 & +0.194 & 6.0:& 0.55 & d, l & WR75-28 (0.1), WR75-29 (0.0)& WR75-27 (0.3) & d \\ & Mercer 81 & 338.384 & +0.111 & 11.0 & 0.6 & g, j & WR76-2 (0.6), WR76-3 (0.7) & & g \\ & & & & & & & WR76-4 (0.9), WR76-5 (0.8) & & g \\ & & & & & & & WR76-6 (0.9), WR76-7 (0.9) & & g \\ & & & & & & & WR76-8 (0.9), WR76-9 (1.0) & & g \\ C 1644-457 & Westerlund 1 & 339.555 & --0.399 & 3.8 & 1.2 & m, n, k & WR77a (1.7), WR77c (0.8) & WR77aa (4.1), WR77b (4.8) & h \\ & & & & & & & WR77d (1.1), WR77e (0.4) & WR77g (0.1), WR77i (0.9) & h \\ & & & & & & & WR77f (0.4), WR77h (0.1) & WR77l (0.6), WR77m (0.4) & h \\ & & & & & & & WR77j (0.7), WR77k (0.4) & WR77n (1.7), WR77p (0.8) & h \\ & & & & & & & WR77o (0.4), WR77q (0.5) & & h \\ & & & & & & & WR77r (0.8), WR77s (0.5) & & h \\ & & & & & & & WR77sa (0.4), WR77sb (2.0) & & h \\ & & & & & & & WR77sc (0.8), WR77sd (2.8)& & h \\ \hline \multicolumn{9}{l}{ \begin{minipage}{2\columnwidth}~\\ a: \citet{kurtev07}; b: \citet{sagar01}; c: \citet{davies12a}; d: \citet{chene13}; e: \citet{chene15}; f: \citet{delaFuente15}; g: \citet{davies12b}; h: \citet{crowther06} i: \citet{lundstrom84} j: \citet{mercer05} k: \citet{dias02} l: \citet{borissova11} m: \citet{kothes07} n: \citet{koumpia12} \end{minipage} } \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{Galactic WR stars coincident with star-forming regions in the range 298$^{\circ} \leq l \leq 340^{\circ}$. 
We consider a WR star to be associated with a star-forming region \citep[WISE v1.4]{anderson14} if $r \leq 1.5 R_{\rm HII}$, although some WR stars will doubtless be foreground sources, such that 53/206 (= 26\%) WR stars associated with star-forming regions represents a strict upper limit.} \label{tab:SFRegions} \begin{tabular}{ l@{\hspace{2mm}} r@{\hspace{1mm}}r@{\hspace{2mm}} l@{\hspace{2mm}} l@{\hspace{1mm}} l@{\hspace{1mm}}l@{\hspace{2mm}} l} \hline WISE & $d$ & $R_{\rm HII}$ &Ref& Cluster? & WN ($r$, arcmin) & WC ($r$, arcmin) & Ref \\ H\,{\sc ii} & kpc & arcmin& & & & & \\ \hline G298.224--0.334 & 11.1 & 5.0 & a & & & WR46-7 (5), WR46-18 (8) & b, p \\ G298.529--0.251 & 10.5 & 16.2 & a & VVV CL 011 & WR46-8 (15.6), WR46-9 (3.5), WR46-17 (4.9) & WR46-10 (7.6) & b, c, d, e \\ & & & & & WR46-15 (16.9), WR46-2 (22.4) & & g \\ G298.862--0.432 & 10.7 & 4.8 & a & Mercer 30 & WR46-3 (6.6), WR46-4 (6.6), WR46-5 (6.6), & & f \\ & & & & & WR46-6 (6.6) & & f \\ G302.503--0.762 & 12.1 & 4.2 & a & & WR47b (4.0) & & h \\% G302.504-0.749 G302.631+0.030 & 4.6 & 14.3 & a & & WR47-1 (18.9) & & e \\ G303.445-0.745 & 12.5 & 7.2 & a & & & WR47-2 (2.6) & d \\ G305.233+0.110 & 4.9 & 11.3 & a & Danks 1/2 & WR48-6 (5.5), WR48-10 (6.5), WR48-7 (7.7) & WR48-1 (8.6), WR48-3 (6.0) & e, g, h, i \\ & & & & & WR48-9 (6.8), WR48-8 (6.8) & WR48-4 (8.2), WR48a (8.4) & h, i \\ & & & & & & WR48-2 (10.3) & b \\ G305.322--0.255 & 4.9 & 13.4 & a & & WR48-5 (4.7) & & h \\ G311.991+0.219 & & 6.0 & a & VVV CL 036 & WR60-6 (7.8) & & c \\ G317.030+0.028$^{\ast}$ & & 24.4 & a & VVV CL 041 & WR62-2 (15.7), WR62-1 (19.4), WR63 (34) & & h, j \\ G321.015--0.527& 4.1 & 4.0 & a & & WR67-3 (6.1) & & k \\ G321.115--0.546& 3.8 & 4.2 & a & & WR67-1 (1.8) & & l \\ G326.474--0.292& & 20.9 & a & & & WR70-5 (17) & m \\ G327.824+0.117 & 7.2 & 3.9 & a & & WR70-6 (5.2) & & g \\ G331.020--0.143& & 3.8 & a & & & WR72-4 (3.5) & n \\ G335.794+0.153 & & 13.7 & a & VVV CL 073 & WR75-6 (19.7), WR75-25 (6.1), WR75-26 (6.1) & WR75b 
(16.1), WR75-15 (12.4) & h \\ & & & & & & WR75-16 (4.2) & n \\ G336.446--0.198& & 10.4 & a & & & WR75-9 (11.7), WR75-21 (10.3) & n \\ & & & & & & WR75-5 (14.3) & d \\ G338.350+0.221 & & 6.9 & a & & & WR76-10 (4.7) & n \\ G338.430+0.048 & & 5.8 & a & Mercer 81 & WR76-2 (3.5), WR76-3 (3.5), WR76-4 (3.5) & & o \\ & & & & & WR76-5 (3.5), WR76-6 (3.5), WR76-7 (3.5) & & o \\ & & & & & WR76-8 (3.5), WR76-9 (3.5) & & o \\ G338.911+0.615 & 4.4 & 3.8 & a & & & WR76 (1.9) & h \\ \hline \multicolumn{8}{l}{ \begin{minipage}{2\columnwidth}~\\ a: \citet{anderson14}; b: \citet{mauerhan09}; c: \citet{chene13}; d: \citet{shara09}; e: \citet{hadfield07}; f: \citet{kurtev07}; g: \citet{mauerhan11}; h: \citet{vdh01}; i: \citet{davies12a}; j: \citet{chene15}; k: \citet{marston13}; l: \citet{roman11}; m: \citet{wachter10}; n: \citet{shara12}; o: \citet{davies12b}; p: This work \\ $\ast$: Alternatively WR62-2 and WR62-1 may be associated with WISE H\,{\sc ii} region G317.236+0.516 \end{minipage} } \end{tabular} \end{center} \end{table*} \section{Spatial distribution of Wolf-Rayet stars and other massive stars in the Milky Way}\label{sec:disc} \subsection{Association of WR stars with star clusters?} If the majority of WR progenitors are born in relatively massive star-forming regions, one might expect them to be in close proximity to their natal star cluster. We have compared the spatial location of the new WR stars with young star clusters from \citet{dutra03}, \citet{mercer05} and \citet{borissova11}. None of the new WR stars are located within known clusters. At face value, this might be considered surprising, but \citet{lundstrom84}, in the most extensive study of the environment of Galactic WR stars to date, found that only 10--30\% of WR stars are located in star clusters.
Since then, the known WR population has quadrupled, so we provide revised statistics for the 298$^{\circ} \leq l \leq 340^{\circ}$ survey region as a whole, involving 190 + 16 = 206 Galactic Wolf-Rayet stars, comprising 119 WN, 1 WN/C and 86 WC stars. \citet{lundstrom84} considered a WR star to be associated with a star cluster if its projected distance was within two cluster radii. Here, we soften this requirement, extending the potential association to 4 cluster radii, utilising published cluster centres and radii from \citet{dias02}, \citet{dutra03}, \citet{mercer05} and \citet{borissova11}. Overall, 55 WR stars (27\%) are associated with a total of 12 star clusters, as shown in Table~\ref{tab:clusters}, although it is notable that 44\% of all the WR cluster members in our survey range are located within a single open cluster, Westerlund~1 (C06). Consequently, 73\% of WR stars in the survey region are {\it not} associated with a star cluster. If the majority of WR progenitors are born in relatively massive star-forming regions, why are so few currently associated with clusters? The lower WR mass limit for single rotating stars at Solar metallicity is $\sim$22 $M_{\odot}$ \citep{meynet03}. Empirically, such stars are observed in clusters with masses $\geq 500 M_{\odot}$ \citep{larson03, weidner10}. Stochasticity in the sampling of the initial mass function will result in some massive stars originating in low mass (10$^{2} M_{\odot}$) clusters/star-forming regions, as demonstrated theoretically by \citet{parker07}. Therefore, some WR stars could be associated with low mass clusters, with other members of the star-forming region inconspicuous. Indeed, $\gamma$ Vel (WC8+O), the closest Galactic WR star, is located in a very low mass star-forming region \citep{jeffries14}. Other members of such a star-forming region would be very difficult to identify at the typical distance of Galactic WR stars.
There is evidence that some dense, young massive clusters rapidly achieve virial equilibrium \citep[e.g.][]{henault12}, such that they would retain the bulk of their stellar content over the representative WR ages of 5--10 Myr \citep{meynet03}. Indeed, the bulk of the WR stars associated with Westerlund 1 lie within the 1.2 arcmin radius -- corresponding to 1.4 pc at a distance of 3.8 kpc -- such that our 4 cluster radius limit is equivalent to $\sim$5.5 pc. Of course, not all open clusters are dense or bound. For example, Hogg 15 is a sparsely populated cluster with a much larger radius of 3 pc, at its distance of 3 kpc, so it will be in the process of dispersing. Consequently the general lack of an association with star clusters is not wholly surprising. Indeed, inspection of the Galactic O Star Catalogue \citep[GOSC v3.0;][]{gosc13} reveals 50 (optically visible) O stars in our survey region. Of these, only 18 (36\%) are associated with a star cluster, a fraction not significantly larger than that for WR stars, since the bulk of these are late-type O stars with comparable ages to many WR stars. Of course, not all massive stars originate from star clusters. \citet{bressert10} demonstrated that only a small fraction of star formation in the Solar Neighbourhood originates from dense environments. It is probable that the majority of OB stars originate from intermediate density regions, i.e. OB associations and/or extended star-forming regions \citep[see][]{parker17}. Indeed, \citet{wright16} have unambiguously demonstrated that the distributed massive stars in Cyg OB2 did not originate from a dense star cluster. \citet{lundstrom84} found that $\geq$50\% of optically visible WR stars were located in OB associations. It is not possible to calculate the fraction of IR-selected WR stars that lie within OB associations since the latter are limited to nearby optical surveys. Instead, it is necessary to compare the location of Wolf-Rayet stars with infrared catalogues of star-forming regions.
We have compared the locations of all 206 WR stars in our survey region with confirmed H\,{\sc ii} regions from the all-sky WISE catalogue of Galactic H\,{\sc ii} regions from \citet{anderson14}. In total, 53 stars are located within $\approx$1.5 $R_{\rm HII}$ of an H\,{\sc ii} region, representing 26\% of the total WR population, as shown in Table~\ref{tab:SFRegions}. Of course, a subset of these WR stars will be foreground sources, so the quoted statistics represent strict upper limits. The majority of these WR stars are associated with complexes at G298 (Dragonfish nebula), G305 and G338. By way of example, as many as twelve WR stars are associated with the G298 `Dragonfish' complex \citep{rahman11}. The VVV CL011 and Mercer 30 clusters are coincident with this region, although radio distances of $\sim$10.5 kpc significantly exceed spectrophotometric cluster distances. Similar issues affect the association of Mercer 81 with G338.430+0.048, and VVV CL 041 with G317.030+0.028. Finally, stellar winds and/or supernovae associated with Westerlund~1 have sculpted an IR cavity, such that there is no longer an associated H\,{\sc ii} region. Overall, it is apparent that WR stars are rarely located within known H\,{\sc ii} regions, albeit with some notable exceptions which include the Carina Nebula in the Milky Way. Typical radii of star-forming regions are 9 arcmin, so for representative radio-derived distances of 7.5 kpc to star-forming regions, we include stars within a projected distance of 30 parsec from the centre of the H\,{\sc ii} region. Again, it is possible that stars have migrated away from where they formed. Typical velocity dispersions of stars in such regions are several km/s, corresponding to several pc/Myr, such that the WR progenitor could have travelled several tens of parsecs.
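The projected-distance figures quoted above follow from the small-angle conversion between angular and physical size; a quick numerical check (the function name is our own):

```python
import math


def projected_pc(radius_arcmin, distance_kpc):
    """Physical size (pc) subtended by an angular radius (arcmin)
    at a given distance (kpc), in the small-angle approximation."""
    return math.radians(radius_arcmin / 60.0) * distance_kpc * 1.0e3


# A typical H II region radius of 9 arcmin at a radio distance of 7.5 kpc:
r_hii = projected_pc(9.0, 7.5)   # ~20 pc
search_radius = 1.5 * r_hii      # ~30 pc, the adopted 1.5 R_HII criterion
```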
It is important to stress that the parent star-forming region of a WR star would not necessarily display significant radio free-free emission after 5--10 Myr \citep{meynet03} since most O stars will have died if there had been a single burst of star formation. Indeed, only 23 (46\%) of the 50 GOSC optically visible O stars in our survey region lie in an OB association (Cen OB1, Nor OB1, Ara OB1). Regardless of whether WR progenitors form in clusters or lower density environments, there are other explanations for their relative isolation. To have travelled more than a few tens of parsecs from their birth site, the WR progenitor may have been ejected via dynamical effects or following a supernova kick from a close companion. The former, involving 3-body dynamical interactions, is favoured in dense stellar environments, in which the fraction of massive stars ejected is inversely proportional to the stellar mass of the cluster \citep{fujii11}. Therefore, the population of WR stars dynamically ejected in this way will be dominated by those from low to intermediate mass clusters, explaining why the 10$^{5} M_{\odot}$ cluster Westerlund~1 has successfully retained the majority of its WR population. Historically, a supernova origin for runaway WR stars has been considered to be a major factor affecting their spatial distribution \citep{eldridge11}. This requires a close binary companion, and assumes that all massive stars whose initial masses exceed $\sim20 M_{\odot}$ undergo a core-collapse SN. Of course, such stars may collapse directly to a black hole, or form a black hole via fallback, so their companion would not necessarily receive a significant supernova kick \citep[e.g.][]{oConnor11}. Therefore, only a small fraction of isolated WR stars likely originate in this way.
Overall, the most promising scenario for the low observed frequency of WR stars associated with star clusters is via dynamical ejection, but this alone does not explain the very high fraction of WR stars in the field. Instead, it is likely that a majority of Galactic WR progenitors do not form within dense, high mass star clusters. The observed high fraction of WR stars located in the Galactic field can therefore be attributed to a combination of dynamical ejection from star clusters, and their origins in modest star-forming regions which subsequently dissolve into the field. The latter will not necessarily be recognisable as an OB association during the WR stage since the majority of O stars will have ended their lives after 5--10 Myr, with similar arguments applying to isolated H\,{\sc ii} regions. Some distant WR stars will not be recognised as being associated with a star forming region if other stars in the region possess low masses, as is the case for $\gamma$ Vel \citep{jeffries14}. \subsection{Relative isolation of WR stars and Luminous Blue Variables} The general lack of association between Wolf-Rayet stars and O stars -- except for the minority located in young, dense star clusters (e.g. NGC~3603, Westerlund 1) -- is relevant to the ongoing debate about the nature of Luminous Blue Variables \citep{humphreys16, smith16}. Historically LBVs were considered to be massive stars in transition towards the WR stage. In part, this association arose from the spectroscopic evolution of LBVs to the WN phase \citep[e.g. AG Car;][]{smith94} and LBV outbursts from WN stars \citep[e.g. R127, HDE\,269582;][]{walborn17}. \citet{smith15} have challenged this view, finding that LBVs in the Milky Way and LMC are more isolated (from O stars) than Wolf-Rayet stars. They argued that LBVs possess lower masses, with their high luminosities arising from being the mass gainers (former secondaries) within close binary systems.
Their conclusions were largely based upon the spatial distribution of O stars, WR stars and LBVs in the LMC, owing to visual studies of Galactic massive stars being severely restricted by dust extinction. Still, the reliance on SIMBAD for catalogues of O stars hinders their conclusions owing to severe incompleteness for both galaxies, while \citet{humphreys16} argued for a mass separation between high luminosity (classical) and low luminosity LBVs \citep[though see][]{smith16}. \citet{kniazev15, kniazev16} provide the current census of 17 confirmed LBVs in the Milky Way, which is restricted to confirmed spectroscopic variables. Their criteria therefore exclude candidate LBVs possessing ejecta nebulae, which \citet{bohannan97} had previously argued should be an additional discriminator. Only three spectroscopically variable LBVs lie within our Galactic survey region -- WS1 \citep{kniazev15}, Wray 16-137 \citep{gvaramadze15} and Wd1-W243 \citep{clark05} -- so we need to consider the global Milky Way population of LBVs and WR stars. Of the known WR content in the Milky Way, 27\% are members of star clusters \citep{crowther15}. Meanwhile, 4 of the 17 spectroscopically variable LBVs are located in star clusters -- W243 in Westerlund 1, $\eta$ Car in Trumpler 16, GCIRS 34W in the Galactic Centre cluster and qF 362 in the Quintuplet cluster \citep{geballe00} -- comprising 24\% of the total, so their global statistics are comparable. Indeed, a number of widely accepted LBVs known to be associated with star clusters are omitted from the compilation of \citet{kniazev15}. These include the Pistol star (qF 134), another member of the Quintuplet cluster \citep{figer98}, and the LBV in the 1806-20 cluster \citep{eikenberry04, figer04}.
Five of the 17 spectroscopically variable LBVs -- namely $\eta$ Car, Wray 16-137, HD~168607, AFGL 2298 and G24.73+0.69 -- are potentially associated with star-forming regions \citep{anderson14} from a similar exercise to that discussed above for WR stars, comprising 29\% of the total. Consequently, the overwhelming majority of WR stars and LBVs are located in the Galactic field, away from star clusters or star-forming regions. Overall, there is no significant difference in the spatial distribution of WR and LBVs in our Galaxy, with a quarter of both populations associated with young massive star clusters. \citet{smith15} proposed that LBVs generally arise from significantly lower mass progenitors than WR stars, challenging the hitherto-accepted scenario that LBVs are transition objects between the O and WR phases. However, those star clusters which host LBVs are relatively young \citep[4--6 Myr;][]{clark05, bibby08, liermann12, schneider14}, putting aside $\eta$ Car\footnote{$\eta$ Car is unusual since it is associated with an even younger star cluster, Trumpler~16, which also hosts the massive main-sequence O2.5\,If/WN6 star WR25 \citep{massey93, crowther11}.}. Not only are the statistics of WR and LBV association with star clusters comparable, but crucially 4 young Milky Way clusters hosting LBVs -- Westerlund~1, CL 1806-20, the Galactic Centre and the Quintuplet -- also contain (classical) WR stars. \citet{smith15} argued that LBVs arise from mass gainers in close binary systems \citep{langer12, demink14}. Mass gainers will be rejuvenated, yielding an apparently younger star than the rest of the population \citep{schneider14}. The presence of spectroscopically variable LBVs (Wd1-W243, GCIRS 34W, qF 362) plus leading LBV candidates (qF 134, LBV 1806-20) in young clusters with progenitor masses in the range 30--40 $M_{\odot}$ and coexisting with O stars and WR stars, should permit their scenario to be tested.
Of course, LBVs in such systems might be the mass-gaining secondaries whose primaries have already undergone core collapse. However, LBV 1806-20 is an SB2 system, whose companion is not a WR star since it exhibits He\,{\sc i} absorption lines. This appears to rule out the \citet{smith15} scenario in this instance. If LBVs are rejuvenated secondaries in close binaries spanning a wide range of masses, they should also be present in older, massive star clusters such as those hosting large red supergiant (RSG) populations. However, \citet{smith15} report that LBVs and RSGs do not share a common parent population, and LBVs are not known within RSG-rich clusters at the base of the Scutum-Crux arm \citep[RSGC 1--3,][]{figer06, davies07, clark09}. The presence of $\sim$50 RSGs in these clusters and absence of LBVs either argues against the \citet{smith15} scenario, or requires a short ($\sim 2 \times 10^{4}$ yr) lifetime for such LBVs, in view of the $\sim$1 Myr RSG lifetime. The only potential LBV within these RSG-rich clusters identified to date is IRAS 18367$-$0556 in RSGC 2, which possesses a mid-IR ejecta nebula \citep{davies07}. Overall, it is apparent that LBVs span a wide range of physical properties \citep{smith04}, though the same is true for WR stars, some of which are located in significantly older star clusters (recall Table~9 from C06), albeit only Westerlund~1 hosts both RSGs and WR stars. \section{Conclusions}\label{sec:conc} We have undertaken a near-IR spectroscopic search for Wolf-Rayet stars along the Scutum-Crux spiral arm, based upon 2MASS and GLIMPSE photometric criteria \citep{mauerhan11, faherty14}. Observations of 127 candidate stars ($K_{\rm S} \sim 10-13$ mag) resulted in the confirmation of 17 WR stars (14 WN, 3 WC), representing a success rate of $\sim$13\%, comparable to previous IR-selected studies based on similar criteria \citep{hadfield07, mauerhan11}.
More sophisticated techniques are clearly required for optimising future spectroscopic searches. As we have found, large numbers of other stellar types (young stellar objects, Be stars) lie in the same location as WR stars within individual colour-colour diagrams, but may not do so in a multi-dimensional parameter space. Therefore, future progress might entail a Bayesian approach to optimising searches for candidate WR stars. We have extended our earlier near-IR classification scheme (C06) to cover all YJHK bands and all subtypes, including WN/C and WO subtypes. This has been tested on several recently discovered WR stars for which optical classifications have previously been obtained. In general, the near-IR criteria are successful in obtaining reliable subtypes, including the presence of atmospheric hydrogen in WN stars, and identifying transition WN/C stars. However, inferences are weaker if limited wavebands are available, the spectral resolution is modest, and/or the signal-to-noise ratio obtained is low. The majority of newly discovered WR stars are weak-lined WN5--7 stars, with two broad-lined WN4--6 stars and three WC6--8 stars. Therefore, despite the low success rate, our goal of extending the spectroscopic confirmation of WR stars to the far side of the Milky Way has been successful. Based on the absolute magnitude calibration of RC15, inferred distances ($\sim$10 kpc) extend beyond previous spectroscopic surveys, with $A_{\rm K_{s}}{\sim}1.2$ mag. Spectroscopic searches beyond the Galactic Centre ($A_{\rm K_{s}}{\sim}3$ mag) will be significantly more challenging. Only a quarter of WR stars in the selected Galactic longitude range are associated with star clusters and/or H\,{\sc ii} regions. We suggest that this arises from a combination of dynamical ejection from (modest stellar mass) clusters and formation within lower density environments (OB associations/extended H\,{\sc ii} regions).
We also revisit the recent controversy about the association between LBVs and WR stars, or lack thereof. Considering the whole Milky Way, 27\% of WR stars are hosted by clusters, comparable to that of spectroscopically variable LBVs (4 of 17 stars). More significantly, several young clusters with main sequence turn-off masses close to 30--40\,$M_{\odot}$ host classical WR stars {\it and} LBVs, permitting the suggestion of \citet{smith15} that some LBVs are rejuvenated mass gainers in close binaries to be tested. Specifically, the non-WR companion to LBV 1806-20 and absence of LBVs in RSG-rich star clusters argue against this scenario. Returning to the main focus of this study, until more robust distances to WR stars -- and in turn absolute magnitudes -- can be established by {\it Gaia} there are legitimate concerns about the completeness of surveys for different subtypes, especially the challenge of identifying faint, weak emission line WN stars with respect to WC stars \citep{Massey15b}. Near-IR narrow-band photometric searches suffer from dilution of emission lines by companion stars and hot dust emission from WC binaries, while our broad-band near- to mid-IR photometric approach is limited by the low spatial resolution of Spitzer. \citet{Massey15a} have undertaken a deep optical narrow-band survey of the LMC, revealing a population of faint, weak-lined WN3 stars (which they characterise as WN3/O3 stars). Stars with these characteristics -- which would usually be classified as WN3ha according to the \citet{smith96} scheme -- comprise a negligible fraction of Galactic WR stars (e.g. WR3), a minority of the moderately metal-poor LMC WR population, and a majority of the more significantly metal deficient SMC WR population \citep{hainich15}.
It is anticipated that ongoing infrared searches using a combination of continuum methods and emission line characteristics will significantly improve the completeness of WR surveys in a sizeable volume of the Galaxy within the near future. \section*{Acknowledgments} We wish to thank Nicole Homeier, Bill Vacca and Frank Tramper for providing us with near-IR spectroscopic datasets for several WR stars, from NTT/SOFI, IRTF/SpeX and VLT/XSHOOTER, respectively. We appreciate useful discussions with Simon Goodwin and Richard Parker, and useful comments from the referee which helped improve the focus of the paper. Financial support for CKR was provided by the UK Science and Technology Facilities Council. PAC would like to thank ESO for arranging emergency dental care in La Serena immediately after the 2015 observing run. \bibliographystyle{mnras}
\section{Introduction} \label{sec:intro} The distribution grid is the part of the power grid network that extends from the distribution substation to the loads and end-users. The distribution grid is often structured as a radial tree, with the substation node as the root and the load buses powered by the substation at the remaining nodes. This radial topology is constructed by switching on and off breakers from a set of candidate lines. The optimal operation of smart grids depends on accurate real-time estimates of the operational topology as well as of the statistics of disturbances/variations in consumption at different grid nodes. However, such estimation problems are not straightforward due to the low deployment of real-time meters in the distribution grid \cite{hoffman}. In recent years, there has been a growing adoption of certain `nodal' meters on the distribution side. Examples include distribution PMUs, micro-PMUs \cite{micropmu}, and FNETs \cite{fnet}. Additionally, some smart devices at end-user nodes are capable of measuring nodal quantities like voltages for their primary operation. In this paper, we study the problem of structure and statistical estimation in distribution grids using such nodal measurements available only at a subset of the grid nodes, the remaining nodes being unobserved or `missing'. Moving forward into the regime of higher meter placement, incomplete observability may still be an issue for third parties due to privacy concerns and encrypted measurements. As the number of possible radial networks that can be constructed from a set of candidate edges can scale exponentially with the network size, brute-force methods for topology identification and subsequent injection estimation are avoided. Instead, we focus on designing \emph{computationally efficient} theoretical learning algorithms for exact recovery despite the presence of \emph{missing nodes} in the grid.
\subsection{Prior Work} Learning and estimation in power grids, and in radial distribution grids in particular, have attracted significant attention in recent years. The prior work can be distinguished by the methodology used, the assumptions made and the measurements involved. For available line measurements, \cite{ramstanford} uses maximum likelihood tests over cycle bases to estimate the operational topology. For available nodal voltage measurements, \cite{usc, bolognani2013identification, dekapscc, dekairep, ram_loopy} use properties of the graphical model of voltage measurements to identify the operational topology. Similarly, properties of graphical models of dynamical systems that represent swing dynamics in power grids have been used for grid identification in \cite{sauravacc,sauravacm}. \cite{dekatcns,dekasmartgridcomm} use properties of second moments of voltage magnitude measurements to identify the radial topology through iterative algorithms that build the tree from the leaves to the root. When both voltage and injection measurements are available, \cite{cavraro2018graph,sejunpscc} design algorithms for topology and parameter (line impedance) identification that consider missing nodes. In agnostic data-driven efforts, topology and parameter identification using machine learning has been discussed in \cite{berkeley,arya}. An important feature of the majority of the cited work based on voltage measurement samples is that exact learning algorithms are provided only for cases with full nodal observability (i.e., without missing nodes). In prior work that considers missing nodes \cite{dekatcns, dekaecc}, topology learning algorithms are designed but require historical knowledge of injection statistics at all nodes, including the missing ones. Such estimates may be unreliable or unavailable in practice. Further, the hidden nodes are assumed to be separated by three or more hops in the operational grid. We relax both these drawbacks in this paper. 
In a different setting, \cite{cavraro2018graph,sejunpscc} require both injection and voltage samples at the observed nodes. Availability of real-time injection samples may have stronger consequences for end-user privacy \cite{privacy}. In this work, we consider a setting where samples of nodal voltages and statistics of injections (not samples) are available only at the observed nodes, while missing nodes are two or more hops away from each other (i.e., non-adjacent). Our algorithms are able to learn the exact grid topology and estimate the injection statistics at the missing nodes. \subsection{Technical Contribution} We consider estimation in partially observed radial grids using time-stamped voltage samples and injection statistics collected from a subset of nodes. Operational edges are selected from among a larger set of permissible edges with known impedances. Under the assumption that missing nodes are non-adjacent and have degree greater than two, we present learning algorithms that estimate the operational grid topology and the injection statistics at the missing nodes. Based on a linearized AC power flow model \cite{dekatcns,89BWa,89BWb,bolognani2016existence}, we determine relations (equalities and inequalities) between second moments of voltages at groups of two and three nodes that enable guaranteed estimation. We first consider the case with no missing nodes and provide the theoretical sample and computational complexity of the spanning-tree-based Algorithm \ref{alg:1}, originally presented in \cite{dekaecc}, which uses only voltage magnitude samples at all nodes to learn the operational topology. We demonstrate through simulations that this algorithm improves on prior work \cite{dekatcns} at low sample sizes. Further, we discuss theoretical limitations of Algorithm \ref{alg:1} when missing nodes exist in the system. 
Next we consider the case where missing nodes are three or more hops apart and present Algorithm \ref{alg:2}, which incorporates additional checks to identify the missing nodes and estimate their injection statistics. Finally we present Algorithm \ref{alg:3}, which is able to learn the topology and statistics when hidden nodes are two or more hops apart. Going from three- to two-hop separation uses clustering based on novel monotonic properties of voltages at three nodes. We show the polynomial computational complexity of the designed algorithms, and validate them on a test distribution system with non-linear AC power flow samples simulated through Matpower \cite{matpower}. This work is the journal version of a preliminary conference paper \cite{dekasmartgridcomm18} which described Algorithm \ref{alg:3}. It includes extended proofs of the theorems, the new Algorithm \ref{alg:2}, as well as theoretical results on sample and computational complexity. Further, unlike \cite{dekasmartgridcomm18}, we present realistic simulation results that demonstrate the practicality of our proposed algorithms. The rest of the paper is organized as follows. Section \ref{sec:structure} introduces the structural and power flow variables and models used in the remaining sections. Section \ref{sec:trends} describes relations (equalities and inequalities) between second moments of nodal voltage magnitudes that are used in algorithm design; the first learning algorithm, for grids with no missing nodes, is also presented in Section \ref{sec:trends}, along with an analysis of its sample and computational complexity. The second and third learning algorithms, for grids with missing nodes, are given in Section \ref{sec:learningmissing1} along with the derivation of the voltage properties that enable their design. Numerical simulations of our learning algorithms on test radial networks are presented in Section \ref{sec:experiments}. Finally, Section \ref{sec:conclusions} contains conclusions and a discussion of future work. 
\section{Distribution Grid: Structure and Power Flows} \label{sec:structure} \begin{figure}[!bt] \centering \includegraphics[width=0.21\textwidth]{foresttree.eps} \vspace{-1.5 mm}\caption{Distribution grid with $1$ substation (large red node). The operational lines are solid, while non-operational lines are dashed grey. Nodes $b$ and $a$ form a parent-child pair, while $b$ and $d$ form a grandparent-grandchild pair. Nodes $c$ and $a$ are siblings. Nodes $c,d$ are leaves. The green edges represent path $\mathcal{P}^c$. The descendant set of node $a$ is $\mathcal{D}^a = \{a,d\}$.}\label{fig:picHinv} \end{figure} \textbf{Structure}: We represent the radial distribution grid by the graph ${\mathcal T}=({\mathcal V},{\mathcal E})$, where ${\mathcal V}$ is the set of buses/nodes and ${\mathcal E}$ is the set of edges. We denote nodes by letters ($a$, $b$, ...) and the edge connecting nodes $a$ and $b$ by $(ab)$. The root node of the tree represents a substation and is assumed to have degree one. This is done for ease of notation, as each sub-network emerging from the substation can be identified separately, as discussed later. The edge set $\mathcal{E}$ consists of the operational (closed) lines in a set of candidate permissible edges $\mathcal{E}_{full}$. We seek to identify the set of operational edges $\mathcal{E}$ given the set of candidate edges $\mathcal{E}_{full}$. In the radial grid $\mathcal{T}$, we denote the unique set of edges connecting a node $a$ to the root node by the \emph{path} $\mathcal{P}^a$. We call node $a$ a \textit{descendant} of another node $b$ if $\mathcal{P}^b\subset \mathcal{P}^a$ (i.e., the path from $a$ to the root passes through $b$). $\mathcal{D}^a$ is used to denote the set of descendants of $a$; we include node $a$ in $\mathcal{D}^a$ by definition. If $a \in \mathcal{D}^b$ and $(ab) \in \mathcal{E}$, then $b$ and $a$ are termed \emph{parent} and \emph{child} nodes respectively. A parent of a parent is termed \emph{grandparent}. 
Two nodes that share the same parent are termed \emph{siblings}. Finally, terminal nodes that do not have a child are termed \emph{leaves}. An illustrative example of a radial grid with operational edges selected from a candidate set is shown in Fig.~\ref{fig:picHinv}, along with the graph-theoretic notation defined above. Next we describe the power flow model used in this paper. \textbf{Power Flow (PF) Model}: Each line $(ab)$ (either operational or open) is associated with a complex impedance $z_{ab}=r_{ab}+\hat{i}x_{ab}$ ($\hat{i}^2=-1$), where $r_{ab}>0$ and $x_{ab}>0$ denote the line resistance and reactance respectively. Let the real and reactive injections at node $a$ be denoted by $p_a$ and $q_a$ respectively. Kirchhoff's laws relate the complex AC injection at $a$ to the nodal voltages by the following power flow equation, termed AC-PF (Alternating Current Power Flow): \begin{align} p_a+\hat{i} q_a = \underset{b:(ab)\in{\mathcal E}}{\sum}\frac{v_a^2-v_a v_b\exp(\hat{i}\theta_a-\hat{i}\theta_b)}{z_{ab}^*}.\label{P-complex1} \end{align} Here the real-valued scalars $v_a$ and $\theta_a$ are the voltage magnitude and phase respectively at node $a$. Under normal operating conditions, small deviations in voltage magnitude from the nominal value ($1 p.u.$) at each node and small phase differences between neighboring nodes can be assumed, and the following linearized power flow model is derived by ignoring second order terms \cite{dekatcns,bolognani2016existence}: \begin{align} p_a&=\underset{b:(ab)\in{\mathcal E}}{\sum}\left(r_{ab}(v_a-v_b)+ x_{ab}(\theta_a-\theta_b)\right)/\left({x_{ab}^2+r_{ab}^2}\right), \label{LC-PF_p}\\ q_a&=\underset{b:(ab)\in{\mathcal E}}{\sum}\left(x_{ab}(v_a-v_b) -r_{ab}(\theta_a-\theta_b)\right)/\left({x_{ab}^2+r_{ab}^2}\right) \label{LC-PF_q} \end{align} We term Eqs.~(\ref{LC-PF_p},\ref{LC-PF_q}) the LC-PF (Linear Coupled Power Flow) model. Note that the active and reactive injections in LC-PF are linear functions of differences in nodal voltage magnitudes and phases. 
Thus the equations are satisfied if the voltages and phases at all buses are measured relative to some reference bus. Here we take the substation root node as the reference bus, with magnitude $1 p.u.$ and phase $0$. Further, each equation sums to zero over all nodes; LC-PF is therefore lossless. Without loss of generality, we can thus restrict the LC-PF analysis to a reduced system without the reference node; this parallels the treatment of other lossless models such as LinDistFlow \cite{89BWa} or the DC power flow. The reduced system is in fact invertible and enables us to express voltages in terms of injections as noted below: \begin{align} v = H^{-1}_{1/r}p + H^{-1}_{1/x}q,~~ \theta = H^{-1}_{1/x}p - H^{-1}_{1/r}q \label{LC_PF} \end{align} Abusing notation, we use $v,\theta, p, q$ to denote the vectors of voltage magnitudes, phases, active and reactive injections respectively at the non-reference buses in the reduced system. The derivation uses basic matrix inversion. $H_{1/r}$ and $H_{1/x}$ denote the full-rank \emph{reduced weighted Laplacian matrices} of tree $\mathcal T$, where the reciprocals of the resistances ($1/r$) and reactances ($1/x$) respectively are used as edge weights. The reduction is achieved by removing the row and column corresponding to the reference bus from the original weighted Laplacian matrix. Simulation results on the similarity of LC-PF voltages with those generated by the non-linear AC power flow are described in Section \ref{sec:experiments} for test cases. For a random vector $X$, we use $\mu_{X} = \mathbb{E}[X]$ to denote its mean. For two random vectors $X$ and $Y$, the centered covariance matrix is denoted by $\Omega_{XY} = \mathbb{E}[(X-\mu_{X})(Y-\mu_{Y})^T]$. If $X=Y$, we denote the covariance matrix by $\Omega_X$. As LC-PF is linear, we relate the means and covariances of the voltage magnitudes and phases with those of the active and reactive injections. 
\begin{subequations}\label{moments} \footnotesize \begin{align} \mu_v &= H^{-1}_{1/r}\mu_p + H^{-1}_{1/x}\mu_q,~~ \mu_{\theta} = H^{-1}_{1/x}\mu_p - H^{-1}_{1/r}\mu_q \label{means}\\ \Omega_{v} &= H^{-1}_{1/r}\Omega_{p}H^{-1}_{1/r} + H^{-1}_{1/x}\Omega_q H^{-1}_{1/x}+H^{-1}_{1/r}\Omega_{pq}H^{-1}_{1/x} +H^{-1}_{1/x}\Omega_{qp}H^{-1}_{1/r}\label{volcovar1}\\ \Omega_{\theta} &= H^{-1}_{1/x}\Omega_{p}H^{-1}_{1/x} + H^{-1}_{1/r}\Omega_q H^{-1}_{1/r}-H^{-1}_{1/x}\Omega_{pq}H^{-1}_{1/r} -H^{-1}_{1/r}\Omega_{qp}H^{-1}_{1/x}\label{phasecovar1}\\ \Omega_{v\theta} &= H^{-1}_{1/r}\Omega_{p}H^{-1}_{1/x}-H^{-1}_{1/x}\Omega_{q}H^{-1}_{1/r} - H^{-1}_{1/r}\Omega_{pq} H^{-1}_{1/r}+H^{-1}_{1/x}\Omega_{qp}H^{-1}_{1/x} \label{volphasecovar1} \end{align} \end{subequations} We look at functions of these covariance matrices in the next section and prove equality and inequality results that enable topology and statistical estimation. \section{Properties of Voltage Second Moments} \label{sec:trends} At the outset, we make the following assumption regarding the statistics of nodal injections. \textbf{Assumption $1$:} Active and reactive injections at different nodes are uncorrelated, while at the same node they are non-negatively correlated. Mathematically, for all non-substation nodes $a, b$ with $a \neq b$, \begin{align} \Omega_{qp}(a,a) \geq 0,~\Omega_p(a,b) = \Omega_q(a,b)= \Omega_{qp}(a,b) = 0 \nonumber \end{align} This assumption, similar to ones made in \cite{bolognani2013identification,dekapscc,ram_loopy,dekatcns}, is motivated by the fact that at short time-scales, injection fluctuations are the result of load changes that are independent across nodes. Note that fluctuations at the same node may be aligned. Under Assumption $1$, we analyze the second moments of voltages in the radial grid $\mathcal{T}$. First we state two structural results for inverse reduced weighted Laplacian matrices of radial networks, from \cite{dekatcns}. 
\begin{enumerate}[leftmargin=*] \item For nodes $a$ and $b$ in tree $\mathcal T$, \begin{align} H_{1/r}^{-1}(a,b)= \sum_{(cd) \in {\mathcal P}^a\bigcap {\mathcal P}^b} r_{cd} \label{Hrxinv} \end{align} \item For parent node $b$ and its child $a$, \begin{align} H_{1/r}^{-1}(a,c)-H_{1/r}^{-1}(b,c) =\begin{cases}r_{ab} & \quad\text{if node $c \in \mathcal{D}^a$}\\ 0 & \quad\text{otherwise,} \end{cases} \label{Hdiff} \end{align} \end{enumerate} Note that ${\mathcal P}^a\bigcap {\mathcal P}^b$ denotes the common edges on the paths from nodes $a$ and $b$ to the root. The first result follows from the structure of inverse reduced incidence matrices of trees. The second result follows from the first, as for a parent-child pair $b,a$ and node $c \notin \mathcal{D}^a$, the sets ${\mathcal P}^a\bigcap {\mathcal P}^c$ and ${\mathcal P}^b\bigcap {\mathcal P}^c$ are identical. We now consider a specific non-negative function of two nodes, $\phi_{ab} =\mathbb{E}[(v_a - \mu_{v_a})-(v_b-\mu_{v_b})]^2 $, which is the variance of the difference in voltage magnitudes at nodes $a$ and $b$. Using Eq.~(\ref{volcovar1}), $\phi_{ab}$ can be expanded in terms of the covariances of nodal injections as {\footnotesize \begin{align} &\phi_{ab} = \Omega_{v}(a,a) + \Omega_{v}(b,b)- 2\Omega_{v}(a,b) \nonumber\\ &= \smashoperator[lr]{\sum_{d \in {\mathcal T}}}(H^{-1}_{1/r}(a,d)- H^{-1}_{1/r}(b,d))^2\Omega_p(d,d) +(H^{-1}_{1/x}(a,d)- H^{-1}_{1/x}(b,d))^2 \Omega_q(d,d) \nonumber\\ &~~+2\left(H^{-1}_{1/r}(a,d)- H^{-1}_{1/r}(b,d)\right)\left(H^{-1}_{1/x}(a,d)- H^{-1}_{1/x}(b,d)\right)\Omega_{pq}(d,d) \label{usediff_1} \end{align}} \begin{figure}[bt] \centering \hspace*{\fill} \includegraphics[width=0.3\textwidth]{item1.eps} \vspace{-1.5 mm} \hspace*{\fill} \caption{Distribution grid tree for the illustration of Theorem \ref{Theoremcases}. 
Here $\phi_{ac} = \phi_{ab}+\phi_{bc}$ and $\phi_{ac_1} = \phi_{ab}+\phi_{bc_1}$, while $\phi_{ac_2} >\phi_{ab_2}+\phi_{b_2c_2}$ and $\phi_{ac_3}>\phi_{ab}+\phi_{bc_3}$.} \label{fig:item1} \end{figure} The following result shows increasing trends in $\phi$ along paths in the radial grid. \begin{theorem} \label{Theoremcases} For three distinct nodes $a, b, c$ in tree $\mathcal{T}$, let the path from $a$ to $c$ pass through node $b$. Then \begin{enumerate}[leftmargin =*] \item $\phi_{ab} + \phi_{bc} = \phi_{ac} \text{~~if~~} \mathcal{P}^c\bigcap \mathcal{P}^a = \mathcal{P}^b$ \item $\phi_{ab} + \phi_{bc} < \phi_{ac} \text{~~if~~} \mathcal{P}^c\bigcap \mathcal{P}^a \subset \mathcal{P}^b$ \end{enumerate} \end{theorem} The proof, originally presented in the conference paper \cite{dekaecc}, is provided in Appendix \ref{sec:proof1} for completeness and for use in subsequent theorems. Theorem \ref{Theoremcases} states that $\phi$ computed across any path in $\mathcal{T}$ is at least as large as the sum of $\phi$ computed across its non-overlapping sub-paths, as shown in Fig.~\ref{fig:item1}. The following theorem from \cite{dekaecc} uses this result to estimate the operational tree from the set of permissible edges $\mathcal{E}_{full}$. \begin{theorem}\label{main} Let each permissible edge $(ab)$ in ${\mathcal E}_{full}$ be given weight $\phi_{ab} = \mathbb{E}[(v_a-\mu_{v_a}) -(v_b-\mu_{v_b})]^2$. The operational edge set ${\mathcal E}$ is then given by the minimum weight spanning tree of ${\mathcal E}_{full}$. \end{theorem} Theorem \ref{main} states that the exact topology of the grid can be computed using just the voltage magnitude measurements at all grid nodes; no additional information on injection statistics is needed. If voltage phase angles are also available, the injection statistics at all nodes can be computed by inverting Eqs.~(\ref{moments}), or iteratively from the leaves to the root using Eq.~(\ref{first}) described later. 
The steps of the topology and injection statistics estimation are listed in Algorithm \ref{alg:1}, originally presented in \cite{dekaecc}. \begin{algorithm} \caption{Learning without missing nodes}\label{alg:1} \textbf{Input:} Voltage observations $v$, $\theta$ at all nodes, set of permissible edges ${\mathcal E}_{full}$ with line impedances.\\ \textbf{Output:} Operational edges $\mathcal{E}$, injection covariances $\Omega_p, \Omega_q, \Omega_{pq}$ at all nodes \begin{algorithmic}[1] \State $\forall (ab) \in {\mathcal E}_{full}$, compute $\phi_{ab}=\mathbb{E}[(v_a - \mu_{v_a})-(v_b-\mu_{v_b})]^2$ \State Find the min. weight spanning tree of ${\mathcal E}_{full}$ with $\phi_{ab}$ as edge weights. \State ${\mathcal E} \gets $ {edges in spanning tree} \State Compute $\Omega_p, \Omega_q, \Omega_{pq}$ using Eqs.~(\ref{moments}). \end{algorithmic} \end{algorithm} \textbf{Computational Complexity:} For set $\mathcal{E}_{full}$, the minimum spanning tree can be found using Kruskal's Algorithm \cite{kruskal1956shortest,Cormen2001} in $O(|{\mathcal E}_{full}|\log|{\mathcal E}_{full}|)$ operations. In the worst case, where all node pairs are permissible edges, the complexity scales as $O(|{\mathcal V}|^2\log|{\mathcal V}|)$. The next result gives the number of voltage samples sufficient for accurate recovery using empirical estimates of $\phi$. \begin{theorem} \label{theorem:sample_complexity} For a radial grid $\mathcal{T}$ with node set ${\mathcal V}$ and depth $d$, assume line impedances are bounded away from zero and nodal injections are zero-mean Gaussians with bounded variance. For $0 < \eta<1$, if the number of nodal voltage magnitude samples $n$ is greater than $Cd^4|\mathcal V|^2\log(|\mathcal V|/\eta)$ for some constant $C$, then Algorithm \ref{alg:1} recovers the true topology with probability $1-\eta$. \end{theorem} The proof is given in Appendix \ref{sec:proofsample}. In a realistic grid, all nodes may not be observed. 
Naive application of Algorithm \ref{alg:1} can then lead to errors in topology estimation, as noted in the following result. \begin{figure}[!bt] \centering \hspace*{\fill} \subfigure[]{\includegraphics[width=0.1\textwidth]{miss01.eps}\label{fig:missing_a}}\hfill \subfigure[]{\includegraphics[width=0.1\textwidth]{miss02.eps}\label{fig:missing_b}}\hfill \hspace*{\fill} \vspace{-1.5 mm} \caption{(a) Distribution grid tree ${\mathcal T}$ with unobserved nodes of degree less than $3$. (b) Output of applying Algorithm \ref{alg:1}.\label{fig:missing0}} \end{figure} \begin{theorem}\label{thm:hiddendegree1} Consider missing nodes of degree at most $2$ in grid tree $\mathcal T$. Algorithm \ref{alg:1} using the observed node voltages creates a tree ${\mathcal T}_{\mathcal M}$ in which observed nodes of $\mathcal T$ separated by missing nodes are connected by spurious edges, while the rest of the true edges are identified. \end{theorem} \begin{proof} Using Theorem \ref{Theoremcases}, it is clear that observed neighbors in $\mathcal{T}$ will be neighbors in ${\mathcal T}_{\mathcal M}$. As the missing nodes have maximum degree $2$, they form at most a line sub-graph of connected hidden nodes with observed nodes at either end (see Fig.~\ref{fig:missing0}). These observed nodes have the lowest $\phi$ among nodes separated by the hidden nodes, hence edges between them appear in the spanning tree ${\mathcal T}_{\mathcal M}$. \end{proof} The following result follows immediately from Theorem \ref{thm:hiddendegree1}. \begin{corollary} If grid $\mathcal{T}$ of $|\mathcal V|$ nodes has $k$ non-adjacent missing nodes, each of degree $2$, then Algorithm \ref{alg:1} produces a tree ${\mathcal T}_{\mathcal M}$ of $|\mathcal V|-k$ nodes that contains $k$ spurious edges not present in $\mathcal T$ and omits the $2k$ edges of $\mathcal T$ incident on the missing nodes. \end{corollary} The next section presents additional results on nodal voltages and discusses tractable learning in the presence of missing nodes. 
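The spanning-tree construction of Theorem \ref{main} can be sketched numerically. The standalone Python snippet below (all bus labels, impedances and covariance values are hypothetical, not taken from the paper) builds the inverse reduced weighted Laplacians of a small feeder, forms the exact voltage covariance $\Omega_v$ of Eq.~(\ref{volcovar1}) under Assumption $1$, and runs Kruskal's algorithm with $\phi_{ab}$ as edge weights over all candidate node pairs:

```python
import numpy as np

# Hypothetical 6-bus feeder (illustrative values). Node 0 is the
# substation/reference bus; operational edges carry impedances (r, x).
true_edges = {(0, 1): (0.04, 0.02), (1, 2): (0.03, 0.015),
              (2, 3): (0.05, 0.025), (2, 4): (0.02, 0.01),
              (2, 5): (0.06, 0.03)}
n = 6

def reduced_inv(edges, idx):
    """Inverse reduced weighted Laplacian, weights 1/r (idx=0) or 1/x (idx=1)."""
    L = np.zeros((n, n))
    for (a, b), z in edges.items():
        w = 1.0 / z[idx]
        L[a, a] += w; L[b, b] += w; L[a, b] -= w; L[b, a] -= w
    return np.linalg.inv(L[1:, 1:])   # drop the reference row/column

Hr, Hx = reduced_inv(true_edges, 0), reduced_inv(true_edges, 1)

# Assumption 1: injections uncorrelated across nodes (diagonal covariances).
Op, Oq, Opq = np.eye(n - 1), 0.5 * np.eye(n - 1), 0.2 * np.eye(n - 1)

# Exact voltage covariance under LC-PF (the Omega_v expression in the text).
Ov = Hr @ Op @ Hr + Hx @ Oq @ Hx + Hr @ Opq @ Hx + Hx @ Opq @ Hr

def phi(a, b):
    """phi_ab = Var(v_a - v_b); the reference bus has zero voltage variance."""
    s = lambda u, w: Ov[u - 1, w - 1] if (u > 0 and w > 0) else 0.0
    return s(a, a) + s(b, b) - 2 * s(a, b)

# Kruskal's MST over all candidate pairs with phi as edge weight.
parent = list(range(n))
def find(u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

mst = []
for a, b in sorted(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda e: phi(*e)):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
        mst.append((a, b))

assert set(mst) == set(true_edges)  # exact topology recovery
```

Recovery here is exact because, by Theorem \ref{Theoremcases}, every non-tree candidate edge carries a strictly larger $\phi$ than each edge on the tree path it shortcuts, so the cycle property of minimum spanning trees excludes it.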
\section{Learning with missing nodes} \label{sec:learningmissing1} We now consider voltage measurements and knowledge of injection statistics at the observed nodes only, while the missing nodes are unobserved. First we consider the setting where missing nodes are separated by more than two hops. \subsection{Missing nodes separated by three or more hops} \label{sec:hidden_3} Let $\mathcal O$ be the set of observed nodes, i.e., the nodes where voltage measurements and injection covariances are known. In this section we consider arbitrary placement of the unobserved node set $\mathcal M$, with no measurements or historical data, under the following restriction. \textbf{Assumption $2$:} All missing nodes have degree greater than $2$ and are separated by more than two hops in the grid tree $\mathcal T$. The degree assumption ensures uniqueness of topology reconstruction. In particular, if hidden nodes of degree $2$ are adjacent, one can combine them into a single hidden node by Kron reduction (similar to Theorem \ref{thm:hiddendegree1}) while maintaining consistency with the available measurements. This prevents unique reconstruction. Note that under Assumption $2$, no hidden node is a leaf. Consider a tree $\mathcal T$ whose missing node set $\mathcal M$ satisfies Assumption $2$, and let the minimum spanning tree ${\mathcal T}_{\mathcal M}$ over the observed nodes $\mathcal{O}$ be constructed using Algorithm \ref{alg:1} with the $\phi$'s as edge weights. Consider the case shown in Fig.~\ref{fig:missing1} with missing node $b$. By Assumption $2$, all nodes within two hops of $b$ are observed. Hence its parent $a$ and children set ${\mathcal C}_b = \{c_1, c_2, c_3, c_4\}$ are observed. Also, all neighbors of $a$ and of ${\mathcal C}_b$ other than $b$ are observed. 
By Theorem \ref{Theoremcases}, all edges between $a$ and non-descendants of $b$ in $\mathcal{T}_{\mathcal M}$ are true edges, while observed descendants of $b$ are connected to the rest of ${\mathcal T}_{\mathcal M}$ through false\footnote{non-existent edges} edges between ${\mathcal C}_b$ and $a$. The following theorem gives the possible configurations between ${\mathcal C}_b$ and $a$ in ${\mathcal T}_{\mathcal M}$. \begin{figure}[!bt] \centering \hspace*{\fill} \subfigure[]{\includegraphics[width=0.15\textwidth]{miss1.eps}\label{fig:missing1}}\hfill \subfigure[]{\includegraphics[width=0.15\textwidth]{miss2.eps}\label{fig:missing2}}\hfill \hspace*{\fill} \vspace{-1.5 mm} \caption{(a) Distribution grid tree ${\mathcal T}$ with unobserved node $b$. Node $a$ is $b$'s parent while nodes $c_1,c_2,c_3,c_4$ are its children. (b) Possible configuration of the spanning tree ${\mathcal T}_{\mathcal M}$ of observed nodes as per Theorem \ref{permissiblecases}.\label{fig:missing}} \end{figure} \begin{theorem}\label{permissiblecases} For missing node $b$ in $\mathcal T$ with observed parent $a$ and observed children set ${\mathcal C}_b$, let $c^* = \arg\min\limits_{c_i \in {\mathcal C}_b} \phi_{bc_i}$. Then \begin{itemize}[leftmargin=*] \item No edge $(c_ic_j)$ between children $c_i, c_j \neq c^*$ exists in ${\mathcal T}_{\mathcal M}$. \item Nodes in the set ${\mathcal C}_b^1= \{c_i\in {\mathcal C}_b: \phi_{ac_i} < \phi_{c^*c_i}\}$ are connected to node $a$, while those in $\mathcal{C}_b-{\mathcal C}_b^1-\{c^*\}$ are connected to $c^*$. \end{itemize} \end{theorem} \begin{proof} Consider a node pair $c_i,c_j\neq c^*$ in $\mathcal{C}_b$. Using Eq.~(\ref{second}), $\phi_{c_ic_j} = \phi_{bc_i}+ \phi_{bc_j} \geq \phi_{bc_i}+ \phi_{bc^*} = \phi_{c_ic^*}$ by the minimality of $c^*$. Thus, any possible edge between nodes in $\mathcal{C}_b$ includes node $c^*$. The edges for the nodes in the sets ${\mathcal C}_b^1$ and $\mathcal{C}_b -{\mathcal C}_b^1$ follow from the definition of the min-weight spanning tree. 
\end{proof} Note that one of the sets ${\mathcal C}_b^1$ or $\mathcal{C}_b-{\mathcal C}_b^1-\{c^*\}$ may be empty. It is worth mentioning that node $c^*$ can be connected to some node $c^\dagger \in {\mathcal C}_b^1$ instead of directly to $a$ if $\phi_{ac^\dagger} < \phi_{c^*c^\dagger} <\phi_{ac^*}$ holds. Theorem \ref{permissiblecases} thus shows that the tree $\mathcal{T}_{\mathcal{M}}$ of observed nodes output by Algorithm \ref{alg:1} may include false edges from an observed node to either its siblings (for a missing parent) or its grandchildren (for a single missing child). This is depicted in Fig.~\ref{fig:missing2}. In particular, two sibling nodes with a missing parent in $\mathcal T$ may be as far as four hops apart in ${\mathcal T}_{\mathcal M}$. Note that unlike the case of missing nodes of degree $2$ (see Theorem \ref{thm:hiddendegree1}), here multiple configurations may be possible. To estimate the operational edges, locate the missing nodes and estimate their injection statistics, we require additional properties of $\phi$ that make learning tractable. First we prove equality relations for $\phi$ computed for parent-child and grandparent-grandchild node pairs. \begin{theorem} \label{Theoremcases2} In $\mathcal T$, the following statements hold:\\\\ $1$. If node $b$ is the parent of nodes $a$ and $c$ (see Fig.~\ref{fig:item1}) \begin{subequations} \footnotesize \begin{align} \phi_{ab} &= \smashoperator[lr]{\sum_{d \in \mathcal{D}^a}}r_{ab}^2\Omega_p(d,d)+x_{ab}^2 \Omega_q(d,d)+2r_{ab}x_{ab}\Omega_{pq}(d,d)\label{first}\\ \phi_{ac} &= \smashoperator[lr]{\sum_{d \in \mathcal{D}^a}}r_{ab}^2\Omega_p(d,d)+x_{ab}^2 \Omega_q(d,d)+2r_{ab}x_{ab}\Omega_{pq}(d,d)\nonumber\\ &+ \smashoperator[lr]{\sum_{d \in \mathcal{D}^c}}r_{bc}^2\Omega_p(d,d)+x_{bc}^2 \Omega_q(d,d)+2r_{bc}x_{bc}\Omega_{pq}(d,d)\label{second} \end{align} \end{subequations} $2$. 
If node $g$ is the parent of node $b$ and grandparent of nodes $a$ and $c$ (see Fig.~\ref{fig:item1}), \begin{subequations} \footnotesize \begin{flalign} &\phi_{ag}-\phi_{cg} = \smashoperator[lr]{\sum_{d \in \mathcal{D}^a}}\Omega_p(d,d)(r_{ab}^2+ 2r_{ab}r_{bg}) + \Omega_q(d,d)(x_{ab}^2+ 2x_{ab}x_{bg})\nonumber\\ &+2\Omega_{pq}(d,d)(r_{ab}x_{ab}+r_{bg}x_{ab}+r_{ab}x_{bg})- \smashoperator[lr]{\sum_{d \in \mathcal{D}^c}}\Omega_p(d,d)(r_{cb}^2+ 2r_{cb}r_{bg}) \nonumber\\ &-\Omega_q(d,d)(x_{cb}^2+ 2x_{cb}x_{bg}) -2\Omega_{pq}(d,d)(r_{cb}x_{cb}+r_{bg}x_{cb}+r_{cb}x_{bg})\text{~and,}\label{third}\end{flalign} \begin{flalign} &\phi_{ag} = \smashoperator[lr]{\sum_{d \in \mathcal{D}^a}}\Omega_p(d,d)(r_{ab}+ r_{bg})^2 +2\Omega_{pq}(d,d)(r_{ab}+r_{bg})(x_{ab}+x_{bg})\label{fourth}\\ + &\Omega_q(d,d)(x_{ab}+ x_{bg})^2+ \smashoperator[lr]{\sum_{d \in \mathcal{D}^b-\mathcal{D}^a}}\Omega_p(d,d)r_{bg}^2+\Omega_q(d,d)x_{bg}^2+2\Omega_{pq}(d,d)r_{bg}x_{bg}\nonumber\\ &\phi^{\theta}_{ag} = \smashoperator[lr]{\sum_{d \in \mathcal{D}^a}}\Omega_p(d,d)(x_{ab}+ x_{bg})^2 -2\Omega_{pq}(d,d)(r_{ab}+r_{bg})(x_{ab}+x_{bg})\label{fifth}\\ +& \Omega_q(d,d)(r_{ab}+ r_{bg})^2+ \smashoperator[lr]{\sum_{d \in \mathcal{D}^b-\mathcal{D}^a}}\Omega_p(d,d)x_{bg}^2+\Omega_q(d,d)r_{bg}^2-2\Omega_{pq}(d,d)r_{bg}x_{bg}\nonumber\\ &\phi^{v\theta}_{ag}= \smashoperator[lr]{\sum_{d \in \mathcal{D}^a}}(\Omega_p(d,d)-\Omega_q(d,d))(r_{ab}+r_{bg})(x_{ab}+x_{bg})+\Omega_{pq}(d,d)(x_{ab}+x_{bg})^2\label{sixth}\\ -&\Omega_{pq}(d,d)(r_{ab}+r_{bg})^2+\smashoperator[lr]{\sum_{d \in \mathcal{D}^b-\mathcal{D}^a}}(\Omega_p(d,d)-\Omega_q(d,d))r_{bg}x_{bg}+\Omega_{pq}(d,d)(x^2_{bg}-r^2_{bg})\nonumber \end{flalign} \end{subequations} where $\phi^{v\theta}_{ab} =\mathbb{E}[(v_a -\mu_{v_a}-v_b+\mu_{v_b})(\theta_a - \mu_{\theta_a}-\theta_b+\mu_{\theta_b})]$ and $\phi^{\theta}_{ab} =\mathbb{E}[(\theta_a - \mu_{\theta_a})-(\theta_b-\mu_{\theta_b})]^2$. 
\end{theorem} The derivation of statement $1$ in Theorem \ref{Theoremcases2} follows that of the first statement of Theorem \ref{Theoremcases} for parent-child pairs. The second statement is proven by expanding $\phi$, $\phi^{\theta}$ and $\phi^{v\theta}$ for grandchild-grandparent pairs using Eq.~(\ref{usediff_1}) and Eq.~(\ref{Hdiff}). We mention key takeaways from Theorem \ref{Theoremcases2} that enable verification of relative nodal positions in tree $\mathcal T$ and estimation of injection statistics. \begin{enumerate}[leftmargin =*] \item If all descendants of node $a$ are known, then Eq.~(\ref{first}) can be used to verify its parent. \item If $a$ and $c$ are known siblings and their descendants are known, then Eq.~(\ref{second}) can be used to search for their parent $b$ among the possible edges in $\mathcal{E}_{full}$. \item If $a$ and $c$ are siblings with known grandparent $g$ and descendant sets $\mathcal{D}^a,\mathcal{D}^c$, Eq.~(\ref{third}) can be used to search for $a$ and $c$'s parent. \item If the injection statistics at all descendants of node $b$ are known and its parent is verified as $g$, Eqs.~(\ref{fourth}-\ref{sixth}) can be used to determine its injection statistics. \end{enumerate} Note that the identification of parents listed above (takeaways $2,3$) involves a linear search over the set of permissible edges and hence is not computationally intensive. In the final takeaway, the estimation of $b$'s injection statistics ($\Omega_p(b,b),\Omega_q(b,b),\Omega_{pq}(b,b)$) reduces to solving three linear equations in three unknowns once all its descendants are known. These results are used next to jointly estimate the topology and injection statistics in the presence of missing nodes. The overall steps of the learning procedure are listed in Algorithm \ref{alg:2}. 
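As a sanity check of the final takeaway, the sketch below (illustrative impedance and covariance values, not from the paper) sets up the $3\times 3$ linear system that Eqs.~(\ref{fourth}-\ref{sixth}) reduce to once the known-descendant terms have been subtracted from the measured $\phi_{ag}$, $\phi^{\theta}_{ag}$ and $\phi^{v\theta}_{ag}$, and solves it for $\Omega_p(b,b)$, $\Omega_q(b,b)$ and $\Omega_{pq}(b,b)$:

```python
import numpy as np

# The only remaining unknowns are Omega_p(b,b), Omega_q(b,b), Omega_pq(b,b),
# each appearing linearly with coefficients set by the impedance of edge (bg).
r, x = 0.05, 0.025  # hypothetical resistance/reactance of edge (b,g)

# Rows correspond to phi_ag, phi^theta_ag, phi^{v,theta}_ag after the
# contributions of b's known descendants have been subtracted.
A = np.array([[r * r,  x * x,  2 * r * x],
              [x * x,  r * r, -2 * r * x],
              [r * x, -r * x,  x * x - r * r]])

truth = np.array([1.2, 0.6, 0.3])   # Omega_p(b,b), Omega_q(b,b), Omega_pq(b,b)
rhs = A @ truth                     # simulated residual second moments

est = np.linalg.solve(A, rhs)       # solve the 3x3 system
assert np.allclose(est, truth)
```

The coefficient matrix has determinant $-(r_{bg}^2+x_{bg}^2)^3$, so the system is nonsingular for any line with non-zero impedance.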
\begin{algorithm} \caption{Learning with Hidden Nodes separated by more than $2$ hops}\label{alg:2} \textbf{Input:} Voltage observations $v$, $\theta$, and injection covariances $\Omega_p, \Omega_q, \Omega_{pq}$ at available node set $\mathcal O$, hidden node set ${\mathcal M}$, set of permissible edges ${\mathcal E}_{full}$ with line impedances, thresholds $\tau_1,\tau_2$.\\ \textbf{Output:} Operational edges ${\mathcal E}$, $\Omega_p, \Omega_q, \Omega_{pq}$ at set $\mathcal{M}$ \begin{algorithmic}[1] \State $\forall$ nodes $a,b \in {\mathcal O}$, compute $\phi_{ab}$ and find minimum weight spanning tree ${\mathcal T}_{\mathcal{M}}$ with $\phi_{ab}$ as edge weights. \label{span2} \State Sort nodes in ${\mathcal T}_{\mathcal{M}}$ in decreasing order of their depths and mark them as unexplored. \While {$|{\mathcal M}| >0$ AND some node is unexplored} \State Select the unexplored node $a$ at greatest depth, with parent $p$, observed children set $\mathcal{C}_a$ and undetermined grandchildren set $G_{a}$ in ${\mathcal T}_{\mathcal{M}}$. \ForAll{$b \in \mathcal{C}_a$}\label{step_parent21} \If {$\phi_{ab}$ satisfies Eq.~(\ref{first}) within threshold $\tau_1$} \State ${\mathcal E} \gets {\mathcal E} \cup \{(ab)\}$, $\mathcal{C}_a\gets \mathcal{C}_a-\{b\}$ \EndIf \EndFor\label{step_parent22} \State $\mathcal{C}_a \gets \mathcal{C}_a \bigcup G_a$ \For{$b \in \mathcal{M}$,$|\mathcal{C}_a|\geq 2$} \label{step_grandparent21} \If{$a$, child $b$, and grandchildren in $\mathcal{C}_a$ satisfy Eq.~(\ref{third}) within threshold $\tau_2$} \State ${\mathcal E}\gets {\mathcal E} \bigcup \{(ba)\}\bigcup\{(bc)\forall c\in \mathcal{C}_a\}$ \State Solve for $\Omega_p(b,b), \Omega_q(b,b), \Omega_{pq}(b,b)$ from Eqs.~(\ref{fourth}-\ref{sixth}). \State $\mathcal{C}_a\gets \{\}$, $\mathcal M\gets \mathcal M-\{b\}$. \EndIf \EndFor\label{step_grandparent22} \If{$|\mathcal{C}_a| > 0$}\label{step_sibling21} \State Disconnect $(ap)$ from $a$'s parent $p$ in $\mathcal{T}_{\mathcal{M}}$. 
Expand undetermined grandchildren set of $p$, $G_{p} \gets G_{p}\bigcup\mathcal{C}_a\bigcup\{a\}$ \EndIf\label{step_sibling22} \State Mark $a$ as explored \EndWhile \end{algorithmic} \end{algorithm} \textbf{Algorithm \ref{alg:2} working:} We first construct the spanning tree $\mathcal{T}_{\mathcal{M}}$ of observed nodes using $\phi$ as edge weights of permissible edges in set $\mathcal{E}_{full}$ (Step \ref{span2}). To determine missing nodes and their injection statistics, we iteratively verify edges starting from the leaves to the root in $\mathcal{T}_{\mathcal M}$, because the checks at a node depend on injections at its descendants, which may be missing. We consider observed non-leaf nodes at the greatest depth in ${\mathcal T}_{\mathcal{M}}$ to iteratively search for hidden nodes, with the first iteration involving parents of leaf nodes. We first use Eq.~(\ref{first}) to verify whether each edge is true (Steps \ref{step_parent21}-\ref{step_parent22}). If edges to some set $\mathcal{C}_a$ are not verified, we check if $a$ is their grandparent with some missing parent $b$ using Eq.~(\ref{third}) (Steps \ref{step_grandparent21}-\ref{step_grandparent22}). From Assumption $2$, nodes in $\mathcal{C}_a$ can have at most one missing parent. If a missing parent is identified, its injections are estimated using Eqs.~(\ref{fourth}-\ref{sixth}). If not confirmed, we list $a$ and $\mathcal{C}_a$ as siblings with unknown parent under $a$'s previous parent $p$ (Steps \ref{step_sibling21}-\ref{step_sibling22}). $a$ is marked as explored and the algorithm looks at the next unexplored node. \textbf{Computational Complexity:} As before, we can compute the spanning tree for observed nodes in $O((|{\mathcal V}|-|{\mathcal M}|)^2\log(|{\mathcal V}|-|{\mathcal M}|))$ in the worst case, when all edges between observed nodes are permissible. Next we sort the observed nodes in topological order in linear time $O(|{\mathcal V}|-|{\mathcal M}|)$ \cite{Cormen2001}.
Checking the parent-child and grandparent-grandchildren relations has complexity $O((|{\mathcal V}|- |{\mathcal M}|)|{\mathcal V}|)$ due to iterations over $O(|{\mathcal V}|- |{\mathcal M}|)$ observed nodes in $\mathcal{T}_{\mathcal{M}}$ with a possible search over each child and each missing node. The overall complexity is thus $O(|{\mathcal V}|^2\log |{\mathcal V}|)$ in the worst case. In the next section, we extend Algorithm \ref{alg:2} to consider cases where missing nodes can be two hops away instead of three. \subsection{Missing nodes separated by two or more hops} \label{sec:hidden_2} Here we consider missing nodes' placement under the following assumption. \textbf{Assumption $3$:} All missing nodes have a degree greater than $2$ and are not adjacent in the grid tree $\mathcal T$. Under Assumption $3$, both the parent and multiple children of an observed node $a$ may be missing (see Fig. \ref{fig:twohop}). This is unlike Assumption $2$, where only the parent or one child of $a$ may be missing. Let $\mathcal{T}_{\mathcal{M}}$ be the spanning tree of observed nodes given by Algorithm \ref{alg:1}. In $\mathcal{T}_{\mathcal{M}}$ under Assumption $3$, $a$ may thus be connected as parent to its siblings (due to its missing parent), as well as to its grandchildren (due to multiple missing children), as depicted in Fig.~\ref{fig:missing_1}. Thus, observed nodes that are four hops away in $\mathcal T$ may be two hops away in $\mathcal{T}_{\mathcal M}$.
To distinguish true siblings and true grandchildren in $\mathcal T$ among false children in the spanning tree of observed nodes, we use additional voltage inequalities at node triplets (groups of three), described next.\vspace{-1.5 mm}\squeezeup\vspace{-1.5 mm} \begin{figure}[hbt] \centering \hspace*{\fill} \subfigure[]{\includegraphics[width=0.15\textwidth]{miss3.eps}\label{fig:twohop}}\hfill \subfigure[]{\includegraphics[width=0.18\textwidth]{matrix.eps}\label{fig:matrix}} \vspace{-1.5 mm} \hspace*{\fill} \caption{(a) Node $a$ with parent $p$ and children $b_1,b_2$. Node $a$ has siblings $C_p$, grandchildren $C_{b_1},C_{b_2}$. (b) $[\phi_{k_1a} +\phi_{k_2a}-\phi_{k_1k_2}]$ for $k_1,k_2 \in C_{b_1},C_{b_2},C_p$} \end{figure} \begin{theorem} \label{Theoreminequality} Consider node $a$ in $\mathcal T$ with parent $p$ and children nodes $b_1,b_2$. Let $\mathcal{C}_p$ be the set of sibling nodes of $a$ with parent $p$ (see Fig.~\ref{fig:twohop}). Let $\mathcal{C}_{b_1}$ be the children nodes of $b_1$ and $\mathcal{C}_{b_2}$ the children of $b_2$. Then the following inequalities hold: \begin{enumerate}[leftmargin = *] \item $\phi_{k_1a} +\phi_{k_2a}- \phi_{k_1k_2} > 0 \text{~if~} k_1,k_2 \text{~are siblings in~} \mathcal{C}_{b_1}, \mathcal{C}_{b_2},\text{~or~}\mathcal{C}_p$ \noindent\item $\phi_{k_1a} +\phi_{k_2a}- \phi_{k_1k_2} = 0 \text{~if~} k_1 \in \mathcal{C}_{b_1},k_2 \in \mathcal{C}_{b_2}$ \noindent\item $\phi_{k_1a} +\phi_{k_2a}- \phi_{k_1k_2} < 0 \text{~if~} k_1 \in \mathcal{C}_{b_1} \text{~or~} \mathcal{C}_{b_2}$ and $k_2 \in \mathcal{C}_{p}$ \end{enumerate} \end{theorem} \begin{proof} To simplify notation, we consider sets $\mathcal{C}_p = \{c_1,c_2\}, \mathcal{C}_{b_1} = \{c_3,c_4\}, \mathcal{C}_{b_2} = \{c_5,c_6\}$ as shown in Fig.~\ref{fig:twohop}.
Using the first result in Theorem \ref{Theoremcases}, we have \begin{align*} &\phi_{c_1a} =\phi_{ap} + \phi_{c_1p},~ \phi_{c_1c_2} = \phi_{c_1p} + \phi_{c_2p},~ \phi_{c_2a} = \phi_{ap} + \phi_{c_2p}\nonumber\\ \Rightarrow~&\phi_{c_1a} +\phi_{c_2a}- \phi_{c_1c_2} = 2\phi_{ap} > 0. \end{align*} Now consider grandchildren of node $a$ and children of $b_1$. From the second result in Theorem \ref{Theoremcases}, we have \begin{align*} &\phi_{c_3a} > \phi_{c_3b_1},~ \phi_{c_4a} >\phi_{c_4b_1}\nonumber\\ \Rightarrow~&\phi_{c_3a} +\phi_{c_4a}- \phi_{c_3c_4}>0 ~(\text{as}~ \phi_{c_3c_4} = \phi_{c_3b_1} + \phi_{c_4b_1})\nonumber \end{align*} By symmetry, the same holds for $c_5,c_6\in \mathcal{C}_{b_2}$. This proves the first statement. Statement $2$ follows immediately from the first result in Theorem \ref{Theoremcases}. For Statement $3$, consider the case $k_1 = c_3 \in \mathcal{C}_{b_1}, k_2 = c_1\in \mathcal{C}_p$. We have \begin{align*} \phi_{c_3c_1} =\phi_{c_3p}+\phi_{c_1p}> \phi_{c_3a}+\phi_{ap}+\phi_{c_1p} = \phi_{c_3a}+\phi_{c_1a} \end{align*} where the inequality follows from the second result in Theorem \ref{Theoremcases} and the last equality from its first result; hence $\phi_{c_3a}+\phi_{c_1a}-\phi_{c_3c_1}<0$. \end{proof} The key result of Theorem \ref{Theoreminequality} is effectively depicted in Fig.~\ref{fig:matrix} through the matrix $[\phi_{k_1a} +\phi_{k_2a}- \phi_{k_1k_2}]$ constructed using siblings or grandchildren of node $a$. Note that the positive values in the matrix correspond to siblings of a common parent. Hence it can be used to separate erroneous children of a node into its siblings and grandchildren in our learning algorithm. The true parent of each grandchildren group can be identified and its injection statistics estimated using Eq.~(\ref{third}) and Eqs.~(\ref{fourth}-\ref{sixth}) in Theorem \ref{Theoremcases2}. Next, we design Algorithm \ref{alg:3} to learn the topology and injection statistics with non-adjacent missing nodes.
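The sign pattern of this matrix can be turned into a simple clustering rule. Below is a minimal Python sketch (function and variable names are illustrative; `phi` is assumed to be a dict of empirically computed $\phi$ values keyed by node pairs, and the greedy grouping assumes the positive entries form clean sibling blocks, as the theorem predicts in the noiseless case):

```python
import numpy as np

def split_children(nodes, phi, a, tol=1e-6):
    """Cluster the false children `nodes` of node `a` using the sign of
    M[i, j] = phi[k1, a] + phi[k2, a] - phi[k1, k2]:
    entries > 0 indicate siblings under one common (possibly hidden)
    parent, entries <= 0 indicate different groups."""
    n = len(nodes)
    M = np.zeros((n, n))
    for i, k1 in enumerate(nodes):
        for j, k2 in enumerate(nodes):
            if i != j:
                M[i, j] = phi[k1, a] + phi[k2, a] - phi[k1, k2]
    # Greedily collect each node together with all nodes whose matrix
    # entry against it is positive (i.e., its siblings).
    groups, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        grp = {i} | {j for j in range(n) if j != i and M[i, j] > tol}
        seen |= grp
        groups.append([nodes[j] for j in sorted(grp)])
    return groups
```

With sampled data, the comparison against zero is replaced by the threshold `tol` ($\tau_3$ in Algorithm \ref{alg:3}).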
\begin{algorithm} \caption{Learning with Hidden Nodes separated by more than $1$ hop}\label{alg:3} \textbf{Input:} Voltage observations $v$, $\theta$, and injection covariances $\Omega_p, \Omega_q, \Omega_{pq}$ at available node set $\mathcal O$, hidden node set ${\mathcal M}$, set of permissible edges ${\mathcal E}_{full}$ with line impedances, thresholds $\tau_1, \tau_2,\tau_3$\\ \textbf{Output:} Operational edges ${\mathcal E}$, $\Omega_p, \Omega_q, \Omega_{pq}$ at set $\mathcal{M}$ \begin{algorithmic}[1] \State $\forall$ nodes $a,b \in {\mathcal O}$, compute $\phi_{ab}$ and find minimum weight spanning tree ${\mathcal T}_{\mathcal{M}}$ with $\phi_{ab}$ as edge weights. \label{span3} \State Sort nodes in ${\mathcal T}_{\mathcal{M}}$ in decreasing order of their depths and mark them as unexplored. \While {$|{\mathcal M}| >0$ AND some node unexplored} \State Select unexplored node $a$ in ${\mathcal T}_{\mathcal{M}}$ with parent $p$ at greatest depth with observed children set $\mathcal{C}_a$ and undetermined grandchildren sets $G^i_{a}, i = 1,2,\dots$. \ForAll{$b \in \mathcal{C}_a$}\label{step_parent31} \If {$\phi_{ab}$ satisfies Eq.~(\ref{first}) with threshold $\tau_1$} \State ${\mathcal E} \gets {\mathcal E} \cup \{(ab)\}$, $\mathcal{C}_a\gets \mathcal{C}_a-\{b\}$ \EndIf \EndFor\label{step_parent_32} \State Take one grandchild $g_i$ per $G^i_a$ and the nodes in $\mathcal{C}_a$ and separate them into grandchildren sets $G^i_a$ and sibling set $\mathcal{S}_a$ by clustering $\phi$ using Theorem \ref{Theoreminequality} with threshold $\tau_3$. Add the siblings of each $g_i$ to its separated set. \label{step_grandparent31} \State Find the missing parent of each separated grandchildren set $G^i_a$ using Eq.~(\ref{third}) with threshold $\tau_2$, determine its injection statistics using Eqs.~(\ref{fourth}-\ref{sixth}) and remove it from $\mathcal M$.
Add discovered edges to $\mathcal{E}$.\label{step_grandparent32} \If{$|\mathcal{S}_a|\geq 1$}\label{step_sibling31} \State Disconnect $(ap)$ from $a$'s parent $p$ in $\mathcal{T}_{\mathcal{M}}$. Form undetermined grandchildren group $G^i_{p}$ with $\mathcal{S}_a$ and $a$ \EndIf\label{step_sibling32} \State Mark $a$ as explored \EndWhile \end{algorithmic} \end{algorithm} \textbf{Algorithm \ref{alg:3} working:} The basic working of Algorithm \ref{alg:3} follows a similar logic to Algorithm \ref{alg:2}. The differences lie in Steps (\ref{step_grandparent31}-\ref{step_grandparent32}), where Theorem \ref{Theoreminequality} is used to separate the siblings of a current node $a$ from its grandchildren, and then to identify its missing children and estimate their injection statistics. For better elucidation, the steps in Algorithm \ref{alg:3} for estimating the grid in Fig.~\ref{fig:twohop} are depicted in Fig.~\ref{fig:missing_example}. Note that the hidden nodes are $p,b_1,b_2$. \begin{figure}[!ht] \centering \hspace*{\fill} \subfigure[]{\includegraphics[width=0.11\textwidth]{miss4.eps}\label{fig:missing_1}} \subfigure[]{\includegraphics[width=0.11\textwidth]{miss5.eps}\label{fig:missing_1a}} \subfigure[]{\includegraphics[width=0.11\textwidth]{miss6.eps}\label{fig:missing_2}} \subfigure[]{\includegraphics[width=0.11\textwidth]{miss7.eps}\label{fig:missing_3}} \hspace*{\fill} \vspace{-1.5 mm} \caption{Steps in learning the distribution grid in Fig.~\ref{fig:twohop} with hidden nodes $p,b_1,b_2$ (a) Spanning tree ${\mathcal T}_{\mathcal M}$ for observed nodes (b) Separation of children of node $a$ in $\hat{\mathcal T}$ into grandchildren and sibling sets with unknown parent nodes (c) Identifying parent node of $a$'s grandchildren, $a$'s parent unidentified (d) Identifying missing parent $p$ of node $a$.} \label{fig:missing_example}\end{figure} \textbf{Computational Complexity:} The complexity of Algorithm \ref{alg:3} can be computed similarly to that of Algorithm \ref{alg:2} as the
logic is similar. The primary difference in complexity arises due to the separation between siblings and grandchildren of a node in Steps (\ref{step_grandparent31}-\ref{step_grandparent32}) and identifying its missing children. This has complexity $O(|{\mathcal V}|^2)$. Iterating over all nodes, the complexity becomes $O(|{\mathcal V}|^3)$ in the worst case. This also dominates the overall complexity, which is $O(|{\mathcal V}|^3)$. It is worth mentioning that the computational complexity results in the paper do not assume knowledge of the tree depth, maximum node degree, or cardinality of the permissible edge set. If they are known, the complexity can be reduced further. \textbf{Extension to Multiple Trees:} Algorithms \ref{alg:1}, \ref{alg:2} and \ref{alg:3} can be extended to grids with multiple trees powered by different sub-stations. There we separate node groups for each tree before running the learning algorithms. This is possible as voltage magnitudes are measured relative to the root node, and hence voltages at two nodes $a$ and $b$ in distinct trees will be uncorrelated. {\textbf{Correlated Injections:} Algorithms \ref{alg:1}, \ref{alg:2} and \ref{alg:3} and the theorems guaranteeing their correctness rely on the injection fluctuations being uncorrelated. Note that under correlated injections, $\Sigma_p, \Sigma_q, \Sigma_{pq}$ are not diagonal. Hence, the number of unknown variables (injection cross-correlations) increases in the case with missing nodes, and the current algorithms will not be able to estimate them. Exact injection estimation under correlated injections will be analyzed in future work. On the other hand, the correctness of the estimates of topology and injection variances (same node) by our algorithms under small correlated injections can be analyzed using perturbation theory.
In particular, the correlated covariance matrix $\Sigma_p$ (similarly for $q$ and $pq$) can be expressed as $\Sigma^{uc}_p+\Delta_p$, where $\Sigma^{uc}_p$ is the diagonal matrix of injection variances and $\Delta_p$ is the matrix of cross-covariances with zeros on the diagonal. Consequently, voltage covariances can be expressed as the covariances under $\Sigma^{uc}_p$ plus an error term. Thus the voltage trends and equalities used in the learning algorithms can be satisfied up to a threshold for small injection correlations (small $\Delta_p$), and the correct topology can be learnt. We plan to study bounds on the maximum injection correlation under which our algorithms are provably correct in future work.} \textbf{Finite sample effect:} Empirically computed values of $\phi$ may differ from their true values, and hence the equalities and inequalities used in Algorithms \ref{alg:2} and \ref{alg:3} may only be satisfied approximately. As such, we use user-defined tolerances ($\tau_1$ in Eq.~(\ref{first}), $\tau_2$ in Eq.~(\ref{third}), and $\tau_3$ in Theorem \ref{Theoreminequality}) to establish if the desired equalities/inequalities are true. To reduce the effect of varying injection covariances, we use thresholds based on the relative values (expressed as ratios) in the equality relations to determine their correctness. In the next section, we discuss the performance of our learning algorithms in test distribution networks, notably on voltage samples generated by non-linear AC power flows.
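As one illustration of such a ratio-based test (the exact form used in the experiments may differ), an equality $x = y$ estimated from samples can be accepted when the relative mismatch is below the threshold:

```python
def equality_holds(lhs, rhs, tau):
    """Ratio-based tolerance test used in place of an exact equality when
    phi is estimated from finitely many samples: accept lhs == rhs if the
    relative mismatch is within the user-defined threshold tau."""
    scale = max(abs(lhs), abs(rhs), 1e-12)  # guard against division by zero
    return abs(lhs - rhs) / scale <= tau
```

Normalizing by the larger magnitude makes the test insensitive to the overall scale of the injection covariances, which is the motivation stated above.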
\section{Experiments} \label{sec:experiments} \begin{figure}[bt] \centering \subfigure[]{\includegraphics[width=0.14\textwidth]{case33_algo1.eps}\label{fig:case33_algo1}}\hfill \subfigure[]{\includegraphics[width=0.14\textwidth]{case33_algo2.eps}\label{fig:case33_algo2}}\hfill \subfigure[]{\includegraphics[width=0.14\textwidth]{case33_algo3.eps}\label{fig:case33_algo3}} \vspace{-1.5 mm} \hspace*{\fill} \caption{Modified $33$-bus test case \cite{matpower} with observed nodes (solid blue), missing nodes (uncolored circles) for Algorithms \ref{alg:1} (a), \ref{alg:2} (b), and \ref{alg:3} (c)}\label{fig:case33} \end{figure} \subsection{Comparison of LC-PF and AC-PF}\label{sec:compare} We demonstrate the accuracy of the LC-PF Eqs.~(\ref{LC_PF}) for the modified $33$-bus test case \cite{matpower} in Fig.~\ref{fig:case33_algo1}. The modification is done to ensure that the hidden nodes in subsequent simulations have degree greater than two. Fig.~\ref{fig:approx} compares voltage magnitudes at non-substation buses computed by LC-PF with those from the AC-PF solver in Matpower \cite{matpower} for two different variances in nodal injections relative to the mean injection (of order $10^{-2}$ and $10^{-3}$). The voltages are measured relative to the per unit (p.u.) value at the reference bus. Note that the values are close. Hence theoretical algorithms proven for linearized power flow are able to perform estimation tasks with Matpower-generated voltage measurements, as presented next. \begin{figure}[!hbt] \centering \includegraphics[width=0.38\textwidth]{LC_ACcompare33bus_new.eps} \vspace{-.10cm} \caption{Bus Voltage magnitudes (p.u.)
by AC-PF (Matpower) and LC-PF (\ref{LC_PF}) for two different ranges of injection variances}\label{fig:approx} \end{figure} \vspace{-1.5 mm}\squeezeup\subsection{Algorithms' performance} \begin{figure*}[!ht] \centering\hfill \subfigure[]{\includegraphics[width=.33\textwidth]{33bus_adj_algo1compare.eps}\label{fig:adj_algo1_tcns}}\hfill\subfigure[]{\includegraphics[width=.33\textwidth]{33bus_adj_algo1noise.eps}\label{fig:adj_algo1}}\hfill \subfigure[]{\includegraphics[width=.33\textwidth]{33bus_inj_algo1_new.eps}\label{fig:inj_algo1}}\hfill \caption{(a) Comparison of topology estimation in Algorithm \ref{alg:1} with \cite{dekatcns} for the grid in Fig.~\ref{fig:case33_algo1} with injection covariance of order $10^{-3}$ (b) Average relative errors in topology estimation for different noise levels, and (c) injection covariance estimation, v/s number of samples in Algorithm \ref{alg:1} for the grid in Fig.~\ref{fig:case33_algo1}. Errors for two injection covariances are simulated. }\label{fig:alg1} \end{figure*} We discuss the performance of our learning Algorithms $1,2,3$ in the test networks listed in Fig.~\ref{fig:case33}. To the $32$ operational edges, we add $50$ additional edges (at random) with similar impedances to create the input permissible edge set $\mathcal{E}_{full}$. To create the input samples, we consider Gaussian active and reactive load fluctuations with random covariances selected relative to base loads. We consider two settings where the approximate orders of the covariances are $10^{-3}$ and $10^{-2}$. The injections are uncorrelated across nodes and used to generate injection samples. These injections are then used to generate voltage samples with the non-linear AC-PF solver Matpower \cite{matpower}.
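A sketch of this sample-generation step for a hypothetical 5-node feeder follows (base loads and scale factors are illustrative; the experiments use the modified 33-bus case and Matpower for the subsequent AC power flow):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base loads (p.u.) for a 5-node toy feeder; illustrative only.
base_p = np.array([0.10, 0.06, 0.09, 0.04, 0.12])

# Random injection variances of order 1e-3 relative to the base loads,
# uncorrelated across nodes, i.e. a diagonal covariance matrix Sigma_p.
var_p = 1e-3 * base_p**2 * rng.uniform(0.5, 1.5, size=base_p.size)
Sigma_p = np.diag(var_p)

# Draw Gaussian active-injection fluctuation samples around the base load;
# these would then be fed to an AC-PF solver to produce voltage samples.
samples = rng.multivariate_normal(mean=base_p, cov=Sigma_p, size=2000)

# Empirical covariance approaches the diagonal Sigma_p as samples grow.
emp_cov = np.cov(samples, rowvar=False)
```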
{We add independent Gaussian noise of fixed variance to the available voltage measurements to simulate noisy observations.} The set $\mathcal{E}_{full}$ along with the voltage samples and the injection statistics at the observed nodes are available as input to each algorithm. The observed nodes and hidden nodes are selected respecting Assumptions $2$ and $3$ as mentioned later. {Each plot presented in this section depicts average results over $1000$ independent realizations.} {We first consider Algorithm \ref{alg:1}, where voltages at all nodes are observed. We consider an increasing number of noiseless LC-PF and AC-PF samples and present the relative errors in topology estimation in Fig.~\ref{fig:adj_algo1_tcns}. The relative errors are computed as the differences between the estimated and true edge sets measured relative to the total number of edges ($32$). Compared to the learning algorithm in \cite{dekatcns}, Algorithm \ref{alg:1} is not iterative and has better accuracy. Crucially, the errors under LC-PF and AC-PF are similar for Algorithm \ref{alg:1} due to the sufficient accuracy of LC-PF samples, as discussed in Section~\ref{sec:compare}. For the remaining simulations in the paper, we focus on AC-PF samples only.} {In Fig.~\ref{fig:adj_algo1}, we present relative errors in topology estimation for different sample sizes and varying noise variances. We consider three noise variance settings: (a) noiseless, (b) $1\%$, and (c) $5\%$, relative to the measurement variance. Observe that the errors are insignificant beyond $60$ samples for both injection covariance settings considered, when the noise variance is $1\%$ or less. For the $5\%$ noise case, the decay in error is slower and it takes around $120$ samples to reach the same level of accuracy.} The accuracy of estimated active and reactive injection statistics from noiseless AC-PF voltage samples is presented in Fig.~\ref{fig:inj_algo1}.
The errors in injection statistics are measured relative to their true values and averaged over all nodes. Note that the estimates improve at higher sample counts as the empirical moments become more accurate. Next, we consider Algorithm \ref{alg:2}, where missing nodes are separated by more than two hops. We consider the setting in Fig.~\ref{fig:case33_algo2} with $4$ missing nodes and present results of topology and injection statistics estimation in Figs.~\ref{fig:adj_algo2} and \ref{fig:inj_algo2} respectively. Observe that the number of topology errors under both cases of nodal injection statistics reduces as the number of samples increases. However, compared to Fig.~\ref{fig:adj_algo1} for no missing nodes, the number of samples needed is much higher. {Moreover, the errors for a noise variance of $5\%$ are much higher than for the noiseless and $1\%$ noise settings. This is due to the fact that Algorithm \ref{alg:2} uses equality constraints (\ref{first},\ref{third}) to confirm true edges. At lower samples and higher noise, these constraints may not be satisfied up to the thresholds (pre-selected for the noiseless case), and hence the errors are higher.} \begin{figure}[!bh] \centering \subfigure[]{\includegraphics[width=.35\textwidth]{33bus_adj_algo2noise.eps}\label{fig:adj_algo2}} \subfigure[]{\includegraphics[width=.37\textwidth]{33bus_inj_algo2_new.eps}\label{fig:inj_algo2}}\vspace{-1.5 mm} \vspace{-1.5 mm}\squeezeup\caption{(a) Average relative errors in topology estimation for different noise levels, and (b) injection covariance estimation, v/s number of samples in Algorithm \ref{alg:2} for the grid in Fig.~\ref{fig:case33_algo2}}\label{fig:alg2} \end{figure} Finally, we consider Algorithm \ref{alg:3}, which operates when hidden nodes are non-adjacent. We consider the setting in Fig.~\ref{fig:case33_algo3} with $8$ missing nodes ($4$ more than for Algorithm \ref{alg:2}).
The performance of topology and injection statistics estimation is presented in Figs.~\ref{fig:adj_algo3} and \ref{fig:inj_algo3} respectively, for increasing voltage sample sizes. As before, the estimation errors decay with an increase in sample size for both injection covariance ranges selected. As expected, the performance of topology estimation worsens on increasing the noise level and decreasing the number of samples considered. Further note that the decay of errors in estimated injection statistics with increasing number of samples in each of the three algorithms is slower than that for topology estimation. This is not surprising, as differences between estimated and true topologies are integer valued and depend on the satisfaction of equality and inequality constraints within some threshold. On the other hand, errors in injection statistics are induced by real-valued differences from the true statistics, which depend on the empirical estimates and not just on the estimate of the true topology. \begin{figure}[!bt] \centering \subfigure[]{\includegraphics[width=.36\textwidth]{33bus_adj_algo3noise.eps}\label{fig:adj_algo3}} \subfigure[]{\includegraphics[width=.36\textwidth]{33bus_inj_algo3_new.eps}\label{fig:inj_algo3}} \caption{(a) Average relative errors in topology estimation for different noise levels, and (b) injection covariance estimation, v/s number of samples in Algorithm \ref{alg:3} for the grid in Fig.~\ref{fig:case33_algo3}}\label{fig:alg3} \end{figure} {\textbf{Effect of Threshold:} Note that unlike Algorithm \ref{alg:1}, Algorithms \ref{alg:2} and \ref{alg:3} use thresholds $\tau_1$, $\tau_2$, and Algorithm \ref{alg:3} additionally uses $\tau_3$. The thresholds are picked to ensure correctness of the output at large sample sizes ($4\times 10^4$ samples).
To understand the impact of the selected thresholds, we consider $5000$ noiseless voltage samples in Algorithms \ref{alg:2}, \ref{alg:3} for both injection covariances and vary each threshold relative to its pre-selected value, while fixing the others. It is clear from Figs.~\ref{fig:thres_algo2} and \ref{fig:thres_algo3} that Algorithms \ref{alg:2} and \ref{alg:3} are both more sensitive to $\tau_1$ (used for Eq.~(\ref{first})) than to $\tau_2$ and $\tau_3$ respectively. This can be explained by the fact that $\tau_1$ enables the preliminary determination of true parent-child edges, which affects edge identification in follow-up steps. We postpone a theoretical study of correct threshold selection based on historical data to future work.} \begin{figure}[!bt] \centering \subfigure[]{\includegraphics[width=.36\textwidth]{threshold_algo2.eps}\label{fig:thres_algo2}} \subfigure[]{\includegraphics[width=.36\textwidth]{threshold_algo3.eps}\label{fig:thres_algo3}} \caption{Errors in topology estimation with relative change in thresholds (a) ($\tau_1, \tau_2$) in Algorithm \ref{alg:2}, and (b) ($\tau_1, \tau_3$) in Algorithm \ref{alg:3} for $5000$ voltage samples.}\label{fig:thres} \end{figure} \section{Conclusions} \label{sec:conclusions} This paper discusses algorithms for radial distribution grids to estimate the operational topology and the injection statistics of missing nodes using voltage measurements and injection statistics at a subset of the grid nodes. We show that the learning algorithms provably learn the exact topology when all missing nodes are non-adjacent and have degree greater than two. Compared to previous work, the learning algorithms in this paper are able to handle a greater fraction of hidden nodes and require less information regarding them. Simulation results on test cases demonstrate the performance of the algorithms on realistic voltage samples generated by non-linear AC power flows.
In the future, we propose to extend the algorithms here to linearized multi-phase distribution networks \cite{dekathreephase,lowlinear}. {A formal understanding of the selection of thresholds and the extension of the algorithms to cases with correlated injections are directions of future work.} The novel properties of voltage moments used in the algorithm design may have applications in general network flow problems such as gas networks \cite{dekacdc}. We propose to analyze their relation to general graphical models.
\section{ACKNOWLEDGEMENTS} Hang sincerely thanks Fan Wang~\shortcite{WangH19a}, who selflessly shared with him many valuable suggestions and experiences on irregular object packing. Thanks also go to Yin Yang, Qijin She, Juzhan Xu, Lintao Zheng, and Jun Li for their helpful discussions. \section*{Appendix} \setcounter{section}{0} In this appendix, we provide more details of our method as well as more quantitative and qualitative results. An animation illustrating our packing problem settings and exhibiting dynamic packing processes is also submitted with the supplemental material. \begin{itemize} \item \prettyref{sec:proof} proves that the candidate placements we find are local optima that enable tight object packing. \item \prettyref{sec:appendixA} outlines a complete pipeline to process general 3D shapes into a format compatible with packing tasks. An algorithm used to remove redundant object poses is also provided here. \item \prettyref{sec:appendixB} reports more statistical results, including ablations on packing experimental parameters and policy training methods, new results on shapes from the ABC dataset, and policy generalization results between different datasets. \item \prettyref{sec:moreVisual} visualizes more packing results tested on various datasets and packing problem settings. \end{itemize} \section{\label{sec:proof}RATIONALE OF CANDIDATE GENERATION METHOD} We highlight some intuitive but important properties that motivate our candidate generation design. Since the robot executes placements in a top-down manner, we can reduce 3D object placements to a set of distinct 2D planes, each leading to similar object altitudes. Given a 3D object with known orientation, its projection on the horizontal plane is represented as a 2D polygon $G$. Similarly, we denote $O\subset\mathbb{R}^2$ as the horizontal projection of the packed objects.
We can get feasible action regions $E$ which satisfy $E\oplus G \subset C$ and $(E \oplus G) \cap O = \emptyset$, where $\oplus$ is the Minkowski sum~\cite{mark2008computational}. The packing of $G$ can be mapped to selecting a reference point $p = (l_x, l_y)$ from $E$, and $\{p\} \oplus G$ is the corresponding placement as illustrated in~\prettyref{fig:minkov}ab. \begin{figure}[h] \centering \centerline{\includegraphics[width=0.46\textwidth]{images/minkov.pdf}} \caption{\label{fig:minkov} Select a reference point $p$ from the feasible action region $E$ (a); its corresponding placement is $\{p\} \oplus G$ (b). (c): In direction $d$, the projection $e_p = e(G,p,d)$ of $\{p\} \oplus G$ is higher than $e_q$ of $\{q\} \oplus G$. Thus $\{p\} \oplus G$ is a more extreme placement in $d$. Besides, $\{p\} \oplus G$ is tightly against obstacle $O$ and can no longer move in $d$. Therefore, $e_p$ is a local optimum. } \vspace{-10pt} \end{figure} We prefer points $p$ which place $G$ tightly against obstacles, for which there is no straightforward metric. \citet{mark2008computational} define an extremeness concept on polygons: one polygon is more extreme in a direction $d$ than another if its extreme points lie further in that direction. Inspired by this, we adopt $e(G,p,d)=\max_{g\in\{p\}\oplus G}d^Tg$, the extreme value of $G$'s projection, for the packing task. Placing $G$ at $p$ is more extreme than a placement $q$ in direction $d$ if $e(G,p,d)$ is higher than $e(G,q,d)$. When $G$ is tightly packed against obstacles at $p$ along $d$, $e(G,p,d)$ also reaches a local maximum, as illustrated in~\prettyref{fig:minkov}c. Therefore, we get an intuitive indicator of tight packing, i.e.: \begin{align} \label{eq:extreme} p=\text{argmax}_{q\in \mathcal{N}(p)}\;e(G,q,d) \end{align} for some open neighborhood $\mathcal{N}(p)$ of $p$ and some direction $d\in\mathbb{R}^2$.
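Since the Minkowski translate $\{p\}\oplus G$ only shifts $G$, the extreme value decomposes as $e(G,p,d) = d^Tp + \max_{g\in G} d^Tg$, which is why maximizing $e(G,q,d)$ over $q$ is equivalent to maximizing $d^Tq$. A minimal numpy sketch (polygon, names, and values are illustrative):

```python
import numpy as np

def extreme_value(G, p, d):
    """e(G, p, d) = max over g in {p} (+) G of d^T g, the extreme value of
    polygon G translated by reference point p in direction d.  G is an
    (n, 2) array of vertices; because translation only shifts G, the
    maximum equals d^T p plus the support function of G in direction d."""
    G = np.asarray(G, dtype=float)
    d = np.asarray(d, dtype=float)
    return float(np.dot(d, p) + np.max(G @ d))
```

For a fixed direction $d$ and shape $G$, the second term is constant, so comparing $e(G,p,d)$ across reference points reduces to comparing $d^Tp$.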
The following property is an immediate consequence of the Minkowski sum, which establishes a direct connection between packing tightness and reference points in $E$: \begin{proposition} If \prettyref{eq:extreme} holds for some open neighborhood $\mathcal{N}(p)$, then: \begin{align} p=\text{argmax}_{q\in \mathcal{N}(p)} d^Tq \label{eq:target} \end{align} \end{proposition} In other words, if $d^Tp$ is a local optimum, i.e. $d^Tp \ge d^Tq$ for all $ q \in \mathcal{N}(p)$, the point $p$ corresponds to the most extreme placement of $G$ in its neighborhood. With this tool at hand, we can directly evaluate and compare the potential packing tightness of a point $p$ by looking at the range of directions $d$ that make $p$ satisfy \prettyref{eq:target}. Such a range of $d$ corresponds exactly to the spanning angle of a 2D normal cone~\cite{boyd2004convex}. The normal cone of a set $C$ at a boundary point $x_0$ is the set of all vectors $y$ such that $y^T (x - x_0) \le 0$ for all $x \in C$. We summarize its property below: \begin{proposition} For a convex polygonal set $E$ and $p\in\partial E$, the spanning angle of the normal cone at $p$ is defined as $\tau(p)\triangleq \pi-\theta$, where $\theta$ is the interior angle of $E$ at $p$. \end{proposition} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=0.48 \textwidth]{images/proof.pdf}} \caption{ (a): For a boundary point $p$ of a convex polygon $E$, any direction $d$ in its normal cone satisfies $d^Tp \ge d^Tq$ for all $q\in E$. (b): If $p$ lies on a concave part of the polygon, then for $\forall d \in \mathbb{R}^2$ there exists $q\in E$ such that the angle $\langle\vec{pq},d\rangle$ is less than $\pi/2$, i.e. $d^Tp < d^Tq$. } \vspace{-10pt} \label{fig:proof} \end{center} \end{figure} A demonstration of a normal cone is provided in~\prettyref{fig:proof}a. For a concave polygon, points located on its concave parts have no normal cone, and we define $\tau(p)=0$ in this case, as shown in~\prettyref{fig:proof}b.
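With $\tau(p)=\pi-\theta$, candidate generation amounts to scanning polygon vertices for convex turns, since $\tau$ equals the exterior angle at a convex vertex and zero elsewhere. A minimal Python sketch (assuming a simple polygon with vertices in counter-clockwise order; names are illustrative):

```python
import math

def convex_vertex_candidates(E):
    """Return (vertex, tau) for each convex vertex of a simple polygon E
    given in counter-clockwise order.  tau = pi - interior angle is the
    spanning angle of the normal cone; straight and concave vertices have
    tau = 0 and are skipped."""
    out = []
    n = len(E)
    for i in range(n):
        prev, cur, nxt = E[i - 1], E[i], E[(i + 1) % n]
        ux, uy = cur[0] - prev[0], cur[1] - prev[1]   # incoming edge
        vx, vy = nxt[0] - cur[0], nxt[1] - cur[1]     # outgoing edge
        cross = ux * vy - uy * vx                     # > 0: convex turn (CCW)
        if cross > 1e-12:
            # Signed turn angle between the edges = exterior angle
            # = pi - interior angle = tau.
            tau = math.atan2(cross, ux * vx + uy * vy)
            out.append((cur, tau))
    return out
```

On an axis-aligned square every vertex has interior angle $\pi/2$ and hence $\tau = \pi/2$, while the reflex vertex of an L-shaped region is dropped.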
Using $\tau(p)$ as a tightness metric, we compare different choices of $p$. The following property is obvious: \begin{theorem} For a polygonal set $E$ and a convex $p\in\partial E$, we have $p = \text{argmax}_{q\in \mathcal{N}(p)}\tau(q)$ for some open neighborhood $\mathcal{N}(p)$ of $p$. \end{theorem} \begin{proof} Normal cones only exist at boundary points of the polygon and not at internal ones. Therefore, we can consider a neighborhood $\mathcal{N}(p)$ containing $p$ and $p$'s two neighboring edges, without any other convex vertices included. Within $\mathcal{N}(p)$, $p$ is convex so $\tau(p)>0$. All other points $q$ lie on a straight edge or are concave vertices, with $\tau(q)=0$. \end{proof} \begin{figure}[h] \centering \includegraphics[width=0.92\linewidth]{images/compact.pdf} \caption{ (a): The pink parts are the neighborhoods $\mathcal{N}(p_1)$ and $\mathcal{N}(p_2)$ of convex vertices $p_1$ and $p_2$. Both $\tau(p_1)$ and $\tau(p_2)$ are local optima, and they correspond to object placements, (b) and (c), tightly against obstacles. } \label{fig:neighborhood} \end{figure} We demonstrate some neighborhoods in~\prettyref{fig:neighborhood}. Each convex vertex $p$ corresponds to a local optimum of $\tau(p)$ and a tight object placement against obstacles. Based on the above observations, our candidate set consists of all convex vertices of $E$. \section{\label{sec:appendixA}Pipeline for Data Preparation} We aim at learning physically realizable packing (PRP) skills for general 3D shapes, which can be collected either by synthetic modeling or real-world scanning. For this, we first need to process task shapes into a format compatible with efficient and accurate simulation in the Bullet simulator~\cite{coumans2016pybullet}. We also need planar-stable object poses that satisfy the robot manipulation stipulations.
We outline our data preparation pipeline in~\prettyref{alg:data}, where a polygonal mesh is transformed into several watertight convex decompositions and its planar-stable poses are computed. Considering that some stable poses are rotation-symmetric, we propose~\prettyref{alg:poses} to remove redundant poses and avoid an unbalanced shape distribution. \input{algs/dataProcess} \input{algs/removeRepeatPoses} \section{\label{sec:appendixB}More Experimental Results} \subsection{\label{sec:resolutionAbaltion}Effects of Experimental Parameters} Our packing experiment setup involves a set of parameters for finding candidates and describing packing observations. Here we vary these parameters in an ablation study of their effect on the final packing performance. We conduct this experiment on our main dataset \textit{General}. We double the number of points sampled from object surfaces to $2048$ and the candidate number $N$ to $1000$. For the intervals $\Delta_h, \Delta_\theta, \Delta_z, $ and $\Delta_g$ used to find candidates, we halve them to investigate whether finer actions lead to better performance. \input{tables/ablation.tex} We summarize the test results in~\prettyref{table:ablation}. Halving the pixel interval $\Delta_g$ used to sample grid points improves the packing performance, but it also substantially increases the time for decision-making. Refining the other parameters has no clear impact and may increase the computational overhead. We choose a set of efficient and effective parameters for our main experiments. \subsection{\label{sec:trainingAbaltion}Ablations on RL Training} Here we conduct ablation studies to demonstrate the efficacy of our asynchronous RL training. We provide policies trained with the vanilla Rainbow as a baseline. We remove the distributed learner and the non-blocking actor equipped with batched simulation to illustrate their respective effects.
We also report the performance without transforming the point cloud observation to a canonical configuration. All these policies are trained within $48$ hours for fairness. From the test results summarized in~\prettyref{table:architecture}, we can see that our asynchronous training achieves the best performance. Removing the canonical transform harms data efficiency and lowers the packing utility. \input{tables/architecture.tex} \subsection{Results on Shapes from ABC} We test our method on shapes collected from the ABC dataset~\cite{KochMJWABAZP19}. These shapes are mostly complex mechanical parts with distinct characteristics, as shown in~\prettyref{fig:abc}. In total, 136 industrial shapes with 440 planar-stable poses are collected. We train an online packing policy and compare it with the existing heuristic competitors to demonstrate the superiority of our method. We also train object-ordering and placement policy pairs for solving buffered packing problems. We train these policy pairs with buffer size $K = 10$ and generalize them to buffered packing scenarios with $K=3$ and $K=5$. We report these results in~\prettyref{table:abcresult}. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=0.46 \textwidth]{images/abcInUse.pdf}} \caption{ Mechanical shapes selected from the ABC dataset. These shapes are scaled so that their maximal AABB size equals 1, for clarity. } \label{fig:abc} \end{center} \vspace{-20pt} \end{figure} \input{tables/abc.tex} \subsection{Generalization between Datasets\label{sec:betweenDatasets}} We test generalization between datasets, that is, we cross-test trained policies on the other datasets. Since some shapes are shared between $General$ and $Kitchen$, we conduct this experiment among $General$, shapes from the ABC dataset, and $BlockOut$, with 483, 136, and 8 shapes respectively. These results are summarized in~\prettyref{table:generation}.
When transferred to a new dataset, the trained policies still exhibit decision-making ability more competitive than the heuristics. \input{tables/generation.tex} \begin{figure}[t!] \begin{center} \centerline{\includegraphics[width=0.48 \textwidth]{images/dataVariety.pdf}} \vspace{-6pt} \caption{ Training policies with more shapes from $General$ benefits the performance when generalizing to the other two datasets. } \label{fig:dataVariety} \end{center} \vspace{-20pt} \end{figure} We note that policies trained on datasets with a greater variety of shapes tend to perform better on out-of-distribution datasets. To confirm this, we train policies with different numbers of shapes from $General$ and test them on ABC and $BlockOut$. Results are visualized in~\prettyref{fig:dataVariety}. We can see that rich training shapes help policies transfer to the other two datasets. We recommend that users increase shape variety to enable better performance on out-of-distribution shapes. \section{\label{sec:moreVisual}More Visualized Results} We provide more visualization results tested on all datasets mentioned above, i.e. the $General$ dataset, the $BlockOut$ dataset, the $Kitchen$ dataset, as well as the industrial shapes from the ABC dataset. We show galleries of online PRP results on each dataset in~\prettyref{fig:galleryOnline}. We provide qualitative results of buffered packing policies with $K=10$ in~\prettyref{fig:galleryBuffer1}. The generalized results on buffered packing scenarios with $K=3$ and $K=5$ are visualized in~\prettyref{fig:galleryBuffer2} and~\prettyref{fig:galleryBuffer3}. \begin{figure*}[ht] \begin{center} \centerline{\includegraphics[width=0.92\textwidth]{images/galleryOnline.pdf}} \vspace{-10pt} \caption{\label{fig:galleryOnline} Results generated by our online packing policies. Their utility and number of packed objects are labeled.
} \end{center} \vskip -0.2in \end{figure*} \begin{figure*}[ht] \begin{center} \centerline{\includegraphics[width=0.92\textwidth]{images/galleryBuffer.pdf}} \vspace{-10pt} \caption{Results generated by our buffered packing policies. These policies are trained and tested with a buffer size $K=10$.} \label{fig:galleryBuffer1} \end{center} \vskip -0.2in \end{figure*} \begin{figure*}[ht] \begin{center} \centerline{\includegraphics[width=0.92\textwidth]{images/galleryBuffer3.pdf}} \vspace{-10pt} \caption{Results generated by our buffered packing policies. These policies are trained with a buffer size $K=10$ and tested with $K=3$.} \label{fig:galleryBuffer2} \end{center} \vskip -0.2in \end{figure*} \begin{figure*}[ht] \begin{center} \centerline{\includegraphics[width=0.92\textwidth]{images/galleryBuffer5.pdf}} \vspace{-10pt} \caption{Results generated by our buffered packing policies. These policies are trained with a buffer size $K=10$ and tested with $K=5$.} \label{fig:galleryBuffer3} \end{center} \vskip -0.2in \end{figure*} \section{\label{sec:conclusion}Conclusion and Future Work} We investigate problem setups and solution techniques for learning online packing skills for irregular 3D shapes. We propose a learning-based method that successfully packs objects with complex 3D shapes at real-time rates, while taking the physics dynamics and constraints of each placement into account. Our theoretically-provable candidate generation algorithm prunes sub-optimal actions and forms a set of placements for a learnable policy, leading to high-quality packing plans. Equipped with asynchronous RL acceleration techniques and a data preparation process for simulation-ready training sequences, a mature packing policy can be trained within 48 hours in a physically realistic environment. Through evaluations on a variety of real-life object datasets, our method beats state-of-the-art baselines in terms of both packing utility and the number of packed objects.
Our method can also be naturally extended to solve buffered packing problems by merely introducing an additional object-ordering policy. Our results shed light on many other packing-related problems in the graphics community, including UV generation and 3D printing. A limitation of this work is that we model general irregular shapes as rigid bodies and neglect their material properties. For future research, we are interested in introducing the stress metric~\citep{9196938} for better placement of fragile objects. Exploiting deformation during planning to achieve tighter packing~\cite{YinVK21} is also an interesting direction. \section{\label{sec:intro}Introduction} The problem of packing, i.e., finding an efficient placement of as many objects as possible within a designated volume, has garnered multidisciplinary research interest from combinatorial optimization~\citep{MartelloPV00, Seiden02}, computational geometry~\citep{MaCHW18,HuXCG0020}, and machine learning~\cite{zhao2022learning, hu2017solving}. More than four centuries ago, the famous Kepler conjecture already considered the continuous packing of spheres in infinite space, and it was proved only recently~\cite{hales2017formal}. Even the discrete bin packing problem has been proven NP-hard~\cite{hartmanis1982computers}. For 3D shape packing, most existing works consider simple object shapes such as cuboids~\cite{MartelloPV00}, tetrahedra~\cite{conway2006packing}, or ellipsoids~\cite{Kallrath17}. The more general problem of \emph{irregular shape packing}, on the other hand, has received much less study, although it is practically useful in many real application scenarios. In robotics, product packing robots~\cite{WangH19a,8793966,wang2020robot,yang2021packerbot} for logistics automation are an active research area.
In computer graphics, irregular shape packing has been widely explored in UV atlas generation~\cite{LimperVS18,Liu_AAAtlas_2019,ZHANG2020101854}, artistic puzzle design~\cite{wang2021mocca,Chen-2022-HighLevelPuzzle}, 2D panel fabrication~\cite{SaakesCMI13}, and 3D printing~\cite{ChenZLHLHBCC15}, each with various constraints. We study the problem of irregular shape packing through the lens of practically feasible robotic packing, namely \emph{Physically Realizable Packing (PRP)}. In particular, PRP combines two well-known robotic tasks, pick-and-place~\cite{han2019toward} and irregular shape packing~\cite{WangH19a}, \zhn{while further requiring that packed objects be governed by physics dynamics.} As illustrated in~\prettyref{fig:teaser}, our virtual problem setup involves a robot arm equipped with a sucker-type gripper. Irregularly shaped objects are transported by a conveyor belt in a planar-stable pose and move at a constant speed. The upper surface of each object is captured by a top-view camera, and the robot can move the object above an up-looking camera to capture its bottom surface. We pursue an online setting~\citep{Seiden02} where the robot observes only the next incoming object instead of the full sequence. After the robot releases each object at its planned configuration inside the target container, we use a full-fledged physics simulator~\cite{coumans2016pybullet}, compatible with the standard RL learning platform~\cite{BrockmanCPSSTZ16}, to determine the ultimate quasi-static pose of all objects and enforce physical realizability constraints. Our physics constraint accounts for both quasi-static and dynamic interactions between objects, generalizing the prior pile stability constraint~\cite{WangH19a}, which only considers quasi-static interactions. We propose a novel Reinforcement Learning (RL) pipeline to train effective packing policies.
The sequential nature of packing has stimulated several recent RL-based approaches~\cite{HuXCG0020,ZhaoS0Y021,zhao2022learning}. Compared with manually designed heuristics~\cite{KarabulutI04,ramos2016container, ha2017online}, RL is capable of learning complex application-side constraints from guided exploration. RL also bears the potential to outperform humans on both continuous and discrete decision-making problems~\cite{MnihKSRVBGRFOPB15,duan2016benchmarking}. However, prior RL-based approaches opt to factor out the continuous aspect of the problem and only learn a discrete packing policy, by assuming cuboid objects and omitting physics constraints. The continuous nature of the irregular shape packing problem calls for exploiting the full potential of RL by accounting for physics constraints. We contribute a practical algorithm that learns packing policies for irregular 3D shapes by overcoming a series of technical challenges. First of all, learning effective packing policies is inherently challenging. The complex irregular geometry and imperfect object placement \zhn{due to physics dynamics} together lead to a huge solution space. Direct policy training through trial and error in such a space is prohibitively data-intensive. We instead propose a candidate action generation method to reduce the action space of RL \zhn{as well as the learning burden}. The candidates are the convex vertices of the (polygonal) connected regions within which the current object can be placed. We prove that these candidates are local optima that make the object tightly packed against the obstacles (the placed objects and the container). Our learned policy is then used to understand the geometry of the current object and choose the best placement from its corresponding candidates. This method significantly reduces the search space of RL and enables reliable learning of packing policies.
Both the geometry understanding and the candidate selection need a vast number of experiences collected through simulation. However, interaction with the simulation world is CPU-bound and time-consuming, which limits how much a policy can be trained per unit of wall-clock time. Abnormally slow instances can also stall a uniform training schedule. To this end, we propose to accelerate the training via an asynchronous sampling strategy. In particular, we decouple the training into a parallelized experience sampling process for non-blocking treatment and a separate learning process for continuously updating parameters. Our method allows a robust packing policy to be trained within 48 hours on a desktop machine. In addition, RL algorithms should be trained with sufficient data variety to ensure the robustness of the learned policies. \zhf{As compared with cubical shape packing, however, the variety of object shapes and poses for irregular shape packing is far greater.} \kx{We propose a data preparation process for generating sequences of simulation-ready objects, each with a planar-stable pose.} Working with a combination of several real-world and synthetic datasets, we create a packing problem generator emitting randomized, versatile, and faithful problem instances. We have conducted extensive evaluations of our method on well-established datasets with various geometric characteristics. By comparing with a range of baseline algorithms, \zhn{we demonstrate that our method significantly outperforms the best-performing baseline on all datasets by at least $12.8\%$ in terms of packing utility.} Furthermore, we extend our method to the scenario of buffered packing, where the robot maintains \zhf{a buffer} for re-ordering objects, and show that higher packing quality can be achieved.
Our contributions include: \begin{itemize} \item An effective and theoretically-provable placement candidate generation method for pruning the action space of RL, along with a learnable packing policy for candidate selection. \item An efficient, asynchronous, off-policy sampling strategy for accelerating packing policy training. \item A constructive PRP environment modeling realistic packing with a large object dataset and RL-compatible interfaces. \end{itemize} \section{\label{sec:method}Method} We introduce our online packing problem setup in~\prettyref{sec:definition} and formulate it as a Markov Decision Process (MDP) in~\prettyref{sec:environment}. To effectively solve this problem, we design a novel packing pattern based on candidate actions generated by a theoretically-provable method in~\prettyref{sec:policy}. In~\prettyref{sec:RL}, we describe our asynchronous RL algorithm for accelerating packing policy training in the physics simulation world. Extensions to buffered packing scenarios are discussed in~\prettyref{sec:buffer}. The pipeline of our learning-based algorithm is outlined in~\prettyref{fig:architecture}. \subsection{\label{sec:definition}Problem Statement} The irregular shape packing problem considers a finite set of $N$ geometric objects $G_1,\cdots,G_N$. Each $G_i\subset\mathbb{R}^3$ (in its local frame of reference) is of an irregular and possibly non-convex shape. Following the conventional Bin Packing Problem~\cite{MartelloPV00} (BPP), the target container $C\subset\mathbb{R}^3$ takes up the space $[0,S_x]\times[0,S_y]\times[0,S_z]\subset\mathbb{R}^3$. The goal is to move as many objects as possible into $C$ in a collision-free and physically realizable manner while maximizing the packing utility: \begin{align} \label{eq:object} \sum_{G_{i} \subset C} |G_{i}|/|C|, \end{align} where $|G_{i}|$ and $|C|$ are the volumes of object $G_{i}$ and the container, respectively. \begin{figure}[t!]
\centering \includegraphics[width=0.48\textwidth]{images/manipulation.pdf} \caption{\label{fig:manipulation} Snapshots of robot manipulations. The robot picks up the incoming object (a) from the conveyor belt and moves this object above an up-looking camera to observe the bottom surface (b). Then, the robot adjusts the vertical orientation of the object and places it in the container (c). The robot only picks and places an object from a top-down view. } \end{figure} \subsubsection{Environmental Setup} To mimic the real-world scenario of pick-and-place tasks, we assume that objects are stably laid on the conveyor belt moving at a constant speed, leaving the robot with a limited time window to pick up and place each $G_i$ into $C$. Clearly, the time window size is a complex function of conveyor belt speed, robot motion speed, and various system delays. However, these factors can be tuned for different hardware platforms and are beyond the scope of this work. We assume a fixed time window size, which allows us to model object packing as a sequential-decision problem. We postulate that the robot arm is equipped with a sucker-type gripper that can only pick and place an object from the top, as illustrated in~\prettyref{fig:manipulation}, and that any object and any placement location in $C$ can be reached by the robot. The robot can also apply a 1D rotation of the gripper around the Z-axis (vertical); the same assumption is adopted in~\cite{zhao2022learning}. Therefore, the robot's decision space is $\mathbb{R}^3\times SO(1)$, consisting of a 3D position and an in-plane rotation. To make packing decisions, the robot is equipped with three RGB-D cameras to fully observe the packing environment, as demonstrated in~\prettyref{fig:teaser}a. The on-conveyor camera is used for capturing the top surface of the incoming object, while the on-container camera observes the continuous packing configurations inside the container.
After picking up the incoming object, the robot moves it over a third, up-looking camera to capture its bottom surface, as shown in~\prettyref{fig:manipulation}b, thus fully observing the geometry of the object to be packed. Different from works that only optimize the final packing result~\cite{MaCHW18}, PRP also concerns the packing process, i.e., moving the objects into the container one by one. Since the robot can only observe one object at a time, our packing problem follows the \textit{online} setting~\cite{Seiden02}, where each object is placed without knowledge of the next ones. No further adjustment, such as unloading or repositioning, is allowed. The robot must make an immediate decision that accommodates the incoming object while optimizing the overall compactness of the in-container layout. Packing alone is already a difficult combinatorial optimization problem; the arbitrarily complex object geometry and the imperfect placement make this problem even more challenging, with a huge solution space. We resort to RL to learn this packing skill automatically. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{images/dynamics.pdf} \caption{\label{fig:stable} (a): The object $G_{i(t)}$ is transported by the conveyor with its planar-stable pose $T_{i(t)}[G_{i(t)}]$, at time step $t$. (b): The robot moves this object into the container and releases it with transform $T_tT_{i(t)}$. (c): Governed by rigid body dynamics, the ultimate continuous configuration for $G_{i(t)}$ is $\bar{T}_{i(t)}^t$.} \vspace{-10pt} \end{figure} \subsubsection{Stable Object Poses} Compared with BPP, which considers only the 6 axis-aligned orientations of a cuboid, an irregular 3D object can have an arbitrary local-to-world orientation spanning the entire $SO(3)$. However, we can adopt a mild assumption that objects are stably lying on the conveyor belt in force equilibrium.
This assumption holds when the conveyor belt has sufficient frictional force and moves reasonably slowly, which is generally the case. We denote the incoming object id at the $t$th timestep as $i(t)$. Mathematically, $G_{i(t)}$ is subject to a world transform $T_{i(t)}$ such that $T_{i(t)}[G_{i(t)}]\subset\mathbb{R}^3$ is a physically stable pose on the conveyor belt, as in~\prettyref{fig:stable}a. The robot can then apply a vertical rigid transformation $T_t\triangleq T(\theta,l_x,l_y,l_z)$ such that $(T_tT_{i(t)})[G_{i(t)}]\subset C$ and $(T_tT_{i(t)})[G_{i(t)}]$ is collision-free with the boundary of $C$ and any other objects already in $C$, as shown in~\prettyref{fig:stable}b. Here $\theta$ is the vertical \zhf{rotation angle} and $(l_x,l_y,l_z)$ \zhf{the} 3D target position, which takes the Front-Left-Bottom (FLB) corner of the Axis-Aligned Bounding Box (AABB) of $G_{i(t)}$ as the reference point. Finally, the robot releases $G_{i(t)}$ with $T_tT_{i(t)}$, and the ultimate continuous configurations of all placed objects $G_{i(1)},\cdots,G_{i(t)}$, denoted as $\bar{T}_{i(1)}^t,\cdots,\bar{T}_{i(t)}^t$, are governed by rigid body dynamics, as in~\prettyref{fig:stable}c. \subsection{\label{sec:environment}PRP Learning Environment} In this section, we discuss our PRP environment, which is compatible with all statements of \prettyref{sec:definition}. This involves a \zh{random packing problem emission procedure and an environment model} that casts online packing as a Markov Decision Process for RL training. \paragraph{Problem Emission} We aim at learning packing skills for 3D shapes of a specific data distribution, so the training and testing sequences are generated with objects from the same object dataset. The sequences are randomly generated and are not shared between training and testing. Although this is a typical assumption in most policy learning works, we will demonstrate the generalization of our method to out-of-distribution objects in~\prettyref{sec:generalization}.
Given a shape set, we emit a random packing problem (a sequence of posed objects) by first picking a random shape from the dataset and then selecting a stable pose for it, both with uniform probability and bootstrap sampling. The vertical in-plane orientation of each object is also uniformly randomized. The sampling is repeated until the objects of the sequence suffice to fill the container ($\sum_i{|G_i|}>|C|$). \paragraph{Markov Decision Process} Our online packing problem can be formulated as a Markov Decision Process, which is a tuple $<\mathcal{S},\mathcal{A},\mathcal{P},R,\gamma>$. At each decision step $t$, an agent observes the (partial) state $s_t\in\mathcal{S}$ of the current environment and makes a decision $a_t\in\mathcal{A}$, where $\mathcal{S}$ and $\mathcal{A}$ are the state space and the action space, respectively. The environment then responds by bringing $s_t$ to $s_{t+1}$ via a stochastic transition function $s_{t+1}\sim{\mathcal{P}}(s_t,a_t)$ and granting a reward $R(s_t,a_t)$. The agent is modeled as a policy function $a_t\sim\pi(o(s_t),\omega)$, where $o$ is the observation function and $\omega$ the parameter of $\pi$. Under this setting, solving the online packing problem amounts to the following stochastic optimization: \begin{align} \label{eq:MDP} \text{argmax}_{\omega}\quad\mathbb{E}_{s_0\sim I,\tau \sim \pi,\mathcal{P}} \left[\sum_{t=0}^{\infty}\gamma^tR(s_t,a_t)\right], \end{align} where $I$ is the stochastic problem emitter, $\gamma$ is a constant discount factor, and $\tau = (s_0, a_0, s_1, ...)$ is a sampled trajectory. Below we specify each component of our MDP, which together form our packing environment.
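To make the problem-emission procedure above concrete, here is a minimal sketch under stated assumptions: the dataset entries, volumes, and pose labels below are placeholders for the real meshes and their precomputed planar-stable poses, and `emit_problem` is a hypothetical name, not the paper's implementation.

```python
import math
import random

# Placeholder dataset: (object volume |G|, list of planar-stable poses) per shape.
DATASET = [(0.8, ["pose_a", "pose_b"]), (1.5, ["pose_c"]), (0.4, ["pose_d", "pose_e"])]
CONTAINER_VOLUME = 10.0  # |C|


def emit_problem(rng):
    """Sample (volume, stable pose, in-plane rotation) triples uniformly with
    replacement until the total object volume exceeds the container volume."""
    sequence, total = [], 0.0
    while total <= CONTAINER_VOLUME:
        volume, poses = rng.choice(DATASET)        # uniform shape choice
        pose = rng.choice(poses)                   # uniform stable pose
        theta = rng.uniform(0.0, 2.0 * math.pi)    # random vertical rotation
        sequence.append((volume, pose, theta))
        total += volume
    return sequence


problem = emit_problem(random.Random(0))
total_volume = sum(v for v, _, _ in problem)
```

Each emitted sequence thus overfills the container by construction, so every episode can terminate by running out of feasible placements rather than out of objects.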
\paragraph{State Space $\mathcal{S}$ and Transition Function $\mathcal{P}$} At timestep $t$, the true state of the current environment involves the ultimate object configurations inside the target container and the state of the incoming object, i.e.: \begin{align*} s_t\triangleq<\bar{T}_{i(1)}^{t-1}[G_{i(1)}],\cdots,\bar{T}_{i(t-1)}^{t-1}[G_{i(t-1)}],T_{i(t)}[G_{i(t)}]>, \end{align*} essentially mimicking the online packing setting. The robot applies a temporary, initial transform $T(\theta,l_x,l_y,l_z)T_{i(t)}$ for the $i(t)$th object. After that, our transition function, i.e., the rigid body simulator~\cite{coumans2016pybullet}, integrates the poses of all $t$ objects to reach force equilibrium. We decide that all objects have reached force equilibrium when every object's velocity magnitude falls below some threshold. At the force equilibrium state, we check for any objects that fall outside $C$ or protrude above $C$, in which case we terminate the episode. \paragraph{Observation Function \textit{o}} We provide enough RGB-D cameras for capturing the continuous object configurations inside container $C$ and the top/bottom surfaces of the incoming object $T_{i(t)}[G_{i(t)}]$. We assume that the captured RGB-D images have been segmented into foreground objects and background. We discard all color details and only retain the depth information. We further extract the heightmap of the container, $H_c$, with resolution $\lceil{S_x/\Delta_h}\rceil\times\lceil{S_y/\Delta_h}\rceil$, where $\Delta_h$ is a regular interval, and we obtain a surface point cloud $P$ belonging to $T_{i(t)}[G_{i(t)}]$. We thus define our \zhf{observation function} as $o(s_t)=(H_c,P)$, which is illustrated in~\prettyref{fig:architecture}ab. \paragraph{Action Space $\mathcal{A}$} As mentioned in~\prettyref{sec:definition}, the space for robot decisions spans the entire $\mathbb{R}^3\times SO(1)$, involving the desired packing position and vertical orientation of the $i(t)$th object.
We can naturally omit the $z$-dimension decision because of the top-down placement manner. \zhf{Typically, a robotic hardware platform such as~\cite{9560782} is equipped with force sensors and can determine the opportune time for releasing an object once collisions are detected.} Therefore, no height measurement is needed for the object. Similarly, once the bottom object surface and the container heightmap are captured, we can compute the object's landing altitude $l_z$ when it is placed at $(l_x,l_y)$ coordinates~\cite{WangH19a}, and we denote $l_z$ as a function $l_z(l_x,l_y)$. Our packing policy only needs to determine the horizontal position $(l_x,l_y)$ for a given vertical rotation $\theta$, essentially reducing the action space to $SE(2)\times SO(1)$. We define our action as $a_t\triangleq(\theta,l_x,l_y, l_z)$, where $l_z$ is optional. However, the $SE(2)\times SO(1)$ space is still intractable given the sequential-decision nature of packing. To enable efficient and effective policy learning, we propose to prune this enormous action space to a limited set of placement candidates via a \zhf{geometry-inspired method} and use a parameterized policy to further select the best one. Our motivation and implementation of this candidate-based packing pattern are deferred to~\prettyref{sec:policy}. \paragraph{Reward Signal \textit{R}} Since we aim to maximize the packing utility in~\prettyref{eq:object}, we directly grant a reward $R(s_t,a_t)=w|G_{i(t)}|$ proportional to the volume of $G_{i(t)}$ once $G_{i(t)}$ is successfully placed inside the container. Here $w$ is a constant weight. Otherwise, the reward is zero and the trajectory is terminated. To avoid premature trajectory termination and accumulate more step-wise rewards, the agent should learn to optimize the packing process to accommodate more possible future objects.
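The landing altitude $l_z(l_x,l_y)$ discussed above can be evaluated directly from the two heightmaps. Below is a minimal sketch under our own conventions (not the paper's code): both maps share one grid, and `bottom` stores, for each footprint cell, the clearance between the object's lowest surface point there and its AABB bottom plane.

```python
import numpy as np


def landing_altitude(H_c, bottom, lx, ly):
    """l_z(lx, ly): altitude of the object's AABB bottom when dropped top-down
    with its FLB corner at grid cell (lx, ly). The object comes to rest where
    some bottom-surface point first touches the tallest obstacle beneath it."""
    h, w = bottom.shape
    window = H_c[lx:lx + h, ly:ly + w]     # container heights under the footprint
    return float(np.max(window - bottom))


H_c = np.array([[0.0, 0.0, 2.0],
                [0.0, 1.0, 2.0],
                [0.0, 0.0, 0.0]])
bottom = np.zeros((2, 2))                  # flat-bottomed object, 2x2 footprint
lz = landing_altitude(H_c, bottom, 0, 0)   # rests on the height-1 column
```

A non-flat `bottom` lets the object sink lower over obstacles that fit under its concavities, which is exactly why the clearance map, rather than a single bottom plane, enters the maximum.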
\subsection{\label{sec:policy} \zh{Candidate-Based Packing Pattern}} In this section, we introduce our packing policy representation, including a theoretically-provable candidate action generation method and a learnable policy for further candidate selection. \subsubsection{Candidate Action Generation} An intuitive attempt at \zhf{acting} in the $SE(2)\times SO(1)$ space is to discretize it at a regular interval, as done in~\cite{GoyalD20}. However, this leads to a large action space, which grows rapidly with higher resolutions and larger container sizes. Moreover, clusters of actions at close distances result in wasteful RL exploration. Generating an effective action subset with a controllable size is necessary for efficient packing policy learning, as also verified in~\cite{ZhaoZXHX22} for cuboid packing. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{images/convexAction.pdf} \caption{\label{fig:convexAction} We illustrate the procedure for generating candidate actions. Given an in-plane rotation of the object, $\theta$, we first extract feasible and connected action regions $E$ for the incoming object. One found region is exemplified in (a). Assigning the incoming object to grid points inside \ding{172} or on the edge \ding{173} of this region would leave small gaps with little potential for accommodating future objects. We approximate the contour of this region by a polygon and detect convex polygon vertices (b). Our candidate actions correspond to having the FLB corner of the object's AABB at a convex vertex (c), which places the object tightly against the obstacles. This procedure is repeated once for each discretized rotation $\theta$. } \vspace{-10pt} \end{figure} Given an object to be packed, we hope to find a finite set of candidate actions leading to a tight packing of the object.
Since we adopt a top-down placement manner, we can simplify the action space of object placement into a few connected 2D regions $\mathcal{E}=\{E_i\}$, each of which has a similar $l_z$ value. Our general goal is to pack the object in one of the connected regions in a compact manner so that as much empty room as possible is left for future object packing. See an illustration in~\prettyref{fig:convexAction}a. To ensure compact packing, we conjecture that the object should be placed at a (locally) convex vertex of a connected region (\prettyref{fig:convexAction}b), which yields a set of candidate placements of the object as the action space. To verify this theoretically, we prove the following theorem: \begin{theorem} For a (polygonal) connected region $E$ and a point on its boundary $p\in\partial E$, if $p$ is a convex vertex, then $p = \argmax_{p'\in \mathcal{N}(p)}\tau(p')$ for an open neighborhood $\mathcal{N}(p)$ of $p$ which does not contain any other convex vertices and a tightness measure $\tau(\cdot)$. \end{theorem} We define the tightness measure as the range of directions making $\{p\} \oplus G_{\theta(t)}$ touch the obstacles (the container boundary or already placed objects) and permit no further movement, where $\oplus$ is the Minkowski sum~\cite{mark2008computational} and $G_{\theta(t)}$ is $T_{i(t)}[G_{i(t)}]$ vertically rotated by $\theta$. See \prettyref{fig:tight} for an explanation of this definition. We provide a proof of this theorem in Appendix~\ref{sec:proof}. Essentially, this theorem claims that a convex vertex of $E$ is a \emph{local optimum} that leads to a tight object packing (see \prettyref{fig:convexAction}c) and can be chosen as a candidate action. This greatly reduces the action space and benefits RL-based policy learning.
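The extreme-projection quantity behind this tightness measure is straightforward to evaluate numerically. Below is a toy sketch of our own (not the paper's code): an axis-aligned unit-square object placed at a right-angle corner of a free region attains the extreme projection, relative to nearby boundary points, along directions spanning a quarter-plane, consistent with $\tau = \pi/2$ at such a convex vertex.

```python
import numpy as np


def extreme(G, p, d):
    """e(G, p, d): extreme value of the projection of {p} (+) G along d,
    where G is given as a vertex list and (+) denotes the Minkowski sum."""
    return float(np.max((np.asarray(G, float) + np.asarray(p, float)) @ np.asarray(d, float)))


# Unit-square object (vertex list), with its FLB corner as the reference point.
G = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
corner = (0.0, 0.0)                  # convex vertex of the free region
nudged = [(0.1, 0.0), (0.0, 0.1)]    # nearby boundary points of the region

# Pick one direction inside the quadrant spanned by (-1, 0) and (0, -1);
# the corner's extreme value is never beaten by its neighbors along it.
d = (-0.6, -0.8)
corner_is_extreme = all(extreme(G, corner, d) >= extreme(G, q, d) for q in nudged)
```

Sweeping $d$ over the whole quadrant instead of one sample would recover the full pink sector of \prettyref{fig:tight} and hence the value $\tau = \pi/2$.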
\begin{figure}[h] \centering \includegraphics[width=0.46\textwidth]{images/tight.pdf} \caption{\label{fig:tight} Given a point $p$ in the empty space represented by a polygon $E$, we define a tightness measure $\tau(p,G)$ of packing $G$ at $p$ as the range of directions $d$ along which the extreme value of $G$'s projection, i.e., $e(G,p,d)=\max_{g\in\{p\}\oplus G}(d^Tg)$, reaches a local maximum over a neighborhood of $p$: $p=\argmax_{q\in \mathcal{N}(p)}e(G,q,d)$. This means that $G$ is tightly packed against obstacles at $p$ along $d$. In (a), the range for point $p_2$ is depicted as the pink sector. For $p_1$, which is a concave vertex, $e_1=e(G,p_1,d)$ is not locally maximal since there are nearby points like $p_2$ with a larger extreme value $e_2>e_1$. In fact, there exists no direction along which $e_1$ can attain a local maximum, so $\tau(p_1) = 0$. For the convex vertex $p_2$, however, $\tau(p_2)= \pi / 2$, corresponding to the pink sector in (a). } \end{figure} Given a fixed in-plane rotation $\theta$, we construct the candidate action set for $G_{\theta(t)}$ in three steps. We first consider placement positions $F$ which satisfy $F \oplus G_{\theta(t)} \subset C$ and $(F \oplus G_{\theta(t)}) \cap \bar{T}_{i(t')}^{t-1} G_{i(t')} = \emptyset$, for $t' < t$. We discretize the container $C$ into regular grids and sample grid points $(l_x,l_y, l_z)$ lying in $F$. Next, we detect and cluster feasible, connected 2D regions $E$ from the discretized positions which result in $G_{\theta(t)}$ being placed at similar altitudes, see also \prettyref{fig:convexAction}a. We regard two neighboring grid points $(l_x,l_y)$ and $(l_x',l_y')$ as connected if: \begin{align} \label{eq:neighbor} |l_z(l_x,l_y)-l_z(l_x',l_y')|\leq\Delta_z, \end{align} with $\Delta_z$ being a constant parameter.
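The clustering step amounts to a flood fill over the feasible grid points, merging 4-connected neighbors whose altitudes differ by at most $\Delta_z$. A minimal sketch, with illustrative names and a 4-connectivity assumption rather than our exact implementation:

```python
import numpy as np
from collections import deque

def connected_regions(lz, feasible, dz=0.01):
    """Cluster feasible grid points into connected regions E.

    Two 4-neighboring grid points are connected if their placement
    altitudes l_z differ by at most dz (the neighbor condition).
    """
    h, w = lz.shape
    label = -np.ones((h, w), dtype=int)
    regions = []
    for sx in range(h):
        for sy in range(w):
            if not feasible[sx, sy] or label[sx, sy] >= 0:
                continue
            # Breadth-first flood fill from an unlabeled seed point.
            idx = len(regions)
            queue, members = deque([(sx, sy)]), []
            label[sx, sy] = idx
            while queue:
                x, y = queue.popleft()
                members.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < h and 0 <= ny < w and feasible[nx, ny]
                            and label[nx, ny] < 0
                            and abs(lz[x, y] - lz[nx, ny]) <= dz):
                        label[nx, ny] = idx
                        queue.append((nx, ny))
            regions.append(members)
    return regions
```

On a heightmap with two plateaus separated by a step larger than $\Delta_z$, this yields exactly two regions.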
As our third step, for each connected region $E \in \mathcal{E}$, we draw the region contour $\partial E$~\cite{SuzukiA85}, where $\partial E \oplus G_{\theta(t)}$ touches the container boundary or packed objects from the top-down view. Since contours $\partial E$ are pixelized, we approximate $\partial E$ by polygons with the Ramer-Douglas-Peucker algorithm \cite{Ramer72} and detect convex polygon vertices as candidate FLBs (\prettyref{fig:convexAction}b). We execute this procedure for all possible in-plane rotations discretized at a regular interval of $\Delta_\theta$. Since the number of candidate FLB positions can still be large, we sort them by $l_z(l_x,l_y)$ in ascending order and retain only the first $N$. We outline the candidate generation details in \prettyref{alg:candidate}. Our candidate generation procedure imposes no requirement on object geometry and meets the needs of general shape packing. \begin{algorithm}[t] \caption{\label{alg:candidate}Candidate Action Generation} \begin{algorithmic}[1] \STATE Candidate action set $A\gets\emptyset$ \STATE Sample $\theta$ at a regular interval $\Delta_\theta$ \STATE{Sample points $(l_x,l_y)$ from $H_c$ per $\Delta_g$ grids} \FOR{each sampled $\theta$} \FOR{each grid point $(l_x,l_y)$} \STATE Compute $l_z(l_x,l_y)$ \STATE Rewrite infeasible $l_z(l_x,l_y) = \infty$ \ENDFOR \FOR{each pair of grid points $(l_x,l_y)$ and $(l_x',l_y')$ of $H_c$} \STATE Connect neighbors if \prettyref{eq:neighbor} holds \ENDFOR \STATE Detect connected regions \FOR{each connected region} \STATE \zhf{Draw the region contour and approximate it as a polygon } \STATE \zhf{Detect convex polygon vertices and insert them into $A$} \ENDFOR \ENDFOR \STATE Sort $A$ by $l_z(l_x,l_y)$ and retain the first $N$ \end{algorithmic} \end{algorithm} \subsubsection{Policy Parameterization} Given the packing observation \zhf{tuple} $(H_c,P)$ and the generated candidate actions, our packing policy $\pi(o(s_t),\omega)$ is used to understand 3D task geometry and rank
candidates for selection. Our policy first encodes the container heightmap $H_c$ with a Convolutional Neural Network (CNN) to extract a feature $f_h$. Similarly, we use a PointNet architecture~\cite{QiSMG17} to project the point cloud $P$ to a feature $f_p$. Both feature extractors are designed to be lightweight for faster training. We rotate the point cloud about the vertical axis until its AABB has the smallest volume, and translate it so that the FLB of the AABB lies at the origin. This essentially follows a similar idea to~\cite{zeng2020transporter}, \zhf{which} transforms the point cloud to a canonical pose to improve data efficiency. The $i$th candidate action is passed through an element-wise Multi-Layer Perceptron (MLP) to derive a candidate descriptor $f_a^i$. Our candidate selector then takes the same form as a standard dueling Q-network~\cite{WangSHHLF16}, which is also illustrated in~\prettyref{fig:architecture}. This architecture represents the state-action value function $Q(s,a)$ as: \begin{align*} Q(s_t,a_i)=V(s_t) + A(s_t,a_i) - \frac{1}{N}\sum_{j=1}^{N}A(s_t,a_j), \end{align*} where $N$ is the number of candidate actions. $V(s_t)$ is called the state value function and $A(s_t,a_i)$ is the advantage function. We concatenate the problem features and parameterize $V$ and $A$ with two MLPs, with learnable parameters $\alpha$ and $\beta$, respectively: \begin{align*} V(s_t)=&\text{MLP}(f_h,f_p,\alpha),\\ A(s_t,a_i)=&\text{MLP}(f_h,f_p,f_a^i,\beta). \end{align*} The final action is defined as: \begin{align*} a_t\triangleq\text{argmax}_{a_i}Q(s_t,a_i). \end{align*} The number of candidate actions accepted by our policy is fixed to $N$. If the actual number of candidates is less than $N$, we pad the candidate array with dummy actions, i.e., all-zero tuples. The Q-value predictions for these redundant candidates are replaced with $-\infty$ during action selection.
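The candidate ranking with dummy-action masking can be sketched as follows; the mean-advantage subtraction is the standard dueling-network aggregation trick, and all names here are illustrative rather than our exact implementation:

```python
import numpy as np

def select_action(v, adv, valid):
    """Dueling aggregation over N candidate slots with a padding mask.

    v     -- scalar state value V(s_t)
    adv   -- advantage A(s_t, a_i) for each of the N candidate slots
    valid -- boolean mask; False marks dummy (all-zero) padded slots
    """
    adv = np.asarray(adv, dtype=float)
    valid = np.asarray(valid, dtype=bool)
    # Subtract the mean advantage of the valid candidates so that V and
    # A are identifiable (the standard dueling-network aggregation).
    q = v + adv - adv[valid].mean()
    # Dummy slots must never win the argmax.
    q[~valid] = -np.inf
    return int(np.argmax(q)), q
```

Masking with $-\infty$ guarantees a padded slot is never selected, regardless of the predicted values.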
\subsection{\label{sec:RL}Asynchronous Policy Training in Simulation World} Not only do the complex irregular geometry and imperfect object placement enlarge the combinatorial solution space of packing, but the heavy cost of exploring this enormous space, i.e., physics simulation, also makes it harder to learn an effective packing policy. In this section, we discuss several practical approaches to improve the efficiency of exploration and policy training in the simulation world. We choose a data-efficient off-policy RL algorithm and further use an asynchronous pipeline to accelerate training, where the simulation-intensive trajectory sampling and the policy optimization are performed asynchronously; a partial simulation further lowers the sampling cost. Existing reinforcement learning algorithms can be largely divided into on-policy approaches~\cite{SchulmanWDRK17,WuMGLB17} and off-policy methods~\cite{Barth-MaronHBDH18,WangSHHLF16}. The former maximize the expectation in~\prettyref{eq:MDP} by sampling trajectories using the current policy, while the latter maintain an experience memory $D$ that stores trajectories sampled using out-of-sync policies. Off-policy methods minimize the Bellman loss using samples from $D$: \begin{align*} \text{argmin}_\omega\quad\mathbb{E}_{(s,a,r,s^\prime)\sim D}\left[(r+\gamma\max_{a^\prime}Q(s^\prime,a^\prime)-Q(s,a))^2\right]. \end{align*} The ability to reuse samples makes off-policy algorithms more data-efficient, which is critical to our problem. As mentioned in~\prettyref{sec:policy}, we use dueling networks~\cite{WangSHHLF16} to allow more frequent updates of $V(s)$ and share the learning across multiple candidate actions. We adopt the discrete-action-space DRL algorithm Rainbow~\cite{HesselMHSODHPAS18} to train the dueling networks.
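For concreteness, the Bellman loss above evaluated on a replay batch reads as follows in a NumPy sketch (the vanilla DQN form with terminal-state handling; Rainbow actually replaces this with a distributional variant, and all array names here are illustrative):

```python
import numpy as np

def bellman_loss(q, q_next, batch, gamma=0.99):
    """Mean squared Bellman error over a replay batch.

    q, q_next -- arrays of shape (B, A): Q(s, .) and Q(s', .)
    batch     -- (actions, rewards, dones) arrays of length B
    """
    actions, rewards, dones = batch
    b = np.arange(len(actions))
    # TD target r + gamma * max_a' Q(s', a'); bootstrap only when the
    # successor state s' is non-terminal.
    target = rewards + gamma * (1.0 - dones) * q_next.max(axis=1)
    td = target - q[b, actions]
    return float(np.mean(td ** 2))
```

In practice the target values come from a separate, periodically synchronized target network rather than the online one.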
Besides the dueling architecture, the Rainbow algorithm also fruitfully combines the well-known DQN method~\cite{MnihKSRVBGRFOPB15} with five other independent improvements, including prioritized replay~\cite{SchaulQAS15}, noisy nets~\cite{FortunatoAPMHOG18}, and distributional Q-learning~\cite{BellemareDM17}. \begin{figure}[t!] \centering \includegraphics[width=0.46\textwidth]{images/dataSample.pdf} \caption{\label{fig:DataCollect} The vanilla Rainbow flowchart (a) and that of our asynchronous version (b). The vanilla Rainbow (a) runs the CPU-bound physics simulations and the GPU-bound policy optimization in a sequential manner, which leads to idle computing resources and under-trained policies. It also causes the uniform training schedule to be blocked when an abnormal instance (gray) is sampled. Instead, our method (b) runs an experience-sampling actor and a policy learner in asynchronous processes. The experience sampling is performed on a batch of CPU threads. The learner keeps learning on the GPU and shares the updated policy parameter $\omega$ with the actor. The batch of threads is synchronized when policy inference is needed. We further incorporate an abnormality detection mechanism, so that an abnormal thread is not synchronized with the normal ones. } \vspace{-10pt} \end{figure} \subsubsection{Asynchronous Rainbow} In the vanilla Rainbow algorithm, the experience collection step and the policy learning step run alternately, as demonstrated in~\prettyref{fig:DataCollect}a. For our problem, however, experience collection via physics simulation is CPU-bound and time-consuming, which leaves the policy under-trained. Given that the policy learning is merely GPU-bound, we propose to parallelize training via an asynchronous scheme similar to~\cite{HorganQBBHHS18}. Specifically, we create an actor and a learner residing in two different processes, as illustrated in~\prettyref{fig:DataCollect}b.
The actor interacts with the packing environment and collects experience. The collected data is saved to a memory $D$ which is shared between processes. The learner keeps learning from $D$ and continually updates the actor with the latest parameters. Our asynchronous implementation not only saves wall-clock training time but also accesses higher-quality experiences by letting the actor refresh its parameters more frequently before each decision. To further accelerate the experience collection step, we run multiple simulation threads along with a non-blocking treatment in the actor process. The benefits of doing so are two-fold. First, more sampling threads enrich the experiences for policy learning. Moreover, the batched simulation avoids the uniform training schedule being blocked by abnormally slow instances, as exemplified in~\prettyref{fig:DataCollect}a. Such abnormality can happen when the physical simulator is unstable and experiences a sudden gain in kinetic energy, requiring many more timesteps to converge to a new equilibrium configuration. We suspend a simulation thread whenever its reaction time exceeds a threshold, as demonstrated in~\prettyref{fig:DataCollect}b. The suspended thread is then treated as a standalone instance and rejoins the others after its task is finished. This implementation guarantees sufficient concurrency in trajectory sampling. \subsubsection{Partial Simulation} Even with asynchronous Rainbow, the policy training for PRP is still bound by the massive simulation cost. We can further accelerate training by fixing the configurations of already packed objects. In practice, we find that old objects are not affected much by new arrivals, and this strategy saves a large portion of the simulation cost. It also saves the cost of computing the heightmap $H_c$, since we only need to update the slice of $H_c$ covered by the newly arrived object $\bar{T}_{i(t)}^t[G_{i(t)}]$ rather than scanning the entire container.
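The heightmap shortcut can be sketched as follows (an illustrative simplification assuming the object footprint lies inside the container grid; names are not from our implementation):

```python
import numpy as np

def update_heightmap_slice(h_c, obj_height, x0, y0):
    """Update only the slice of the container heightmap H_c covered by
    the newly placed object, instead of rescanning the whole container.

    obj_height -- top-surface heightmap of the placed object in the
                  container grid frame, np.nan outside its footprint
    (x0, y0)   -- grid position of the object's FLB corner, assumed to
                  keep the footprint within bounds
    """
    hx, hy = obj_height.shape
    patch = h_c[x0:x0 + hx, y0:y0 + hy]  # view into H_c
    covered = ~np.isnan(obj_height)
    # The new top surface is at least as high as what was there before.
    patch[covered] = np.maximum(patch[covered], obj_height[covered])
    return h_c
```

Since `patch` is a view, the masked assignment writes directly into $H_c$; cells outside the footprint are untouched.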
This simple strategy boosts the simulation frequency by more than three times. Note that the partial simulation is only used during training; at test time we always simulate all objects and scan the entire container for practicality. \subsection{\label{sec:buffer}Extension to Buffered Packing Scenario} So far we have discussed the strictly online packing problem. In some real-world scenarios, a buffered area $B$ can be used to temporarily store objects before the final placement, as shown in~\prettyref{fig:selector}. By introducing the buffer, the robot can reorder objects locally before packing, potentially improving the packing performance thanks to a much larger search space. We suppose the buffered area has a fixed size $K$. When a new object comes, the robot stores it in $B$. If there are already $K$ objects inside $B$, the robot passes one of the objects from $B$ on to $C$. Solving this problem involves not only reasoning about the geometry of the picked object, but also about the permutation of objects inside the buffer and possible future ones. To this end, we use an additional policy $\pi_s$ to select objects from $B$, which is followed by our packing policy $\pi$ in~\prettyref{sec:environment}. The area of $B$ is sufficiently large such that objects can be placed horizontally, and the point cloud feature of each object in $B$ is stored and available to $\pi_s$. Finally, we assume the robot can reach any position in the buffered area from a top-down view. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{images/buffer.pdf} \vspace{-20pt} \caption{\label{fig:selector}(a): Our object-ordering policy, consisting of a Graph Attention Network (GAT) and a dueling Q-network ranker. (b): The robot picks the object with the highest Q-value from the buffer and packs it into the container.} \vspace{-16pt} \end{figure} Our architecture of $\pi_s$ is illustrated in~\prettyref{fig:selector}a.
We assume there are $K$ objects in the buffer, denoted as $G_1,\cdots,G_K$, with their point clouds being $P_1,\cdots,P_K$. Similar to~\prettyref{fig:architecture}, our policy $\pi_s$ first maps each $P_k$ to a feature $f_p^k$ in an element-wise manner and maps the container heightmap $H_c$ to a feature $f_h$. We adopt a Graph Attention Network (GAT)~\cite{VelickovicCCRLB18} to project the tuple $(f_p^k, f_h)$ to a high-level element feature $f_g^k$. A dueling Q-network block is then used to select the best object with $\text{argmax}_{k=1,\cdots,K}Q(s_t,G_k)$, where $s_t$ is the average of all $f_g$ features. We train the object-ordering policy $\pi_s$ and the placement policy $\pi$ jointly in an end-to-end manner. That is, $\pi_s$ first chooses one shape $G_k$, and then it is $\pi$'s turn to cooperate and choose a candidate action for placing $G_k$. \clearpage \section{\label{sec:proof}Properties of Candidate Generation Method} We highlight some intuitive but important properties that motivate the design of our candidate generation method. Given a 3D object with known orientation, its projected area is represented as a 2D polygon $S$. We denote $Q\subset\mathbb{R}^2$ as the maximum area in container $C$ that can be covered by $S$. We can get a configuration space $E = Q\ominus S$, and the packing of $S$ can be mapped to selecting a reference point $p \in E$, where $S \oplus \{p\} \subseteq Q$ is the corresponding placement. Here $\oplus$ and $\ominus$ are the Minkowski sum and difference~\cite{mark2008computational}, respectively, as illustrated in~\prettyref{fig:minkov}. \begin{figure}[h] \centering \centerline{\includegraphics[width=0.46\textwidth]{images/minkov.pdf}} \vspace{-6pt} \caption{\label{fig:minkov}Through $Q \ominus S$ (a), we get a configuration space $E$ (b). Selecting a reference point $p$ from $E$ yields the corresponding placement $S \oplus \{p\}$ (c).
} \vspace{-10pt} \end{figure} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=0.17\textwidth]{images/extreme.pdf}} \vspace{-6pt} \caption{\label{fig:extreme}In direction $d$, the extreme point $e$ of polygon $S$ lies further than the extreme point $e^\prime$ of polygon $S^\prime$; thus $S$ is more extreme than $S^\prime$ in $d$. The polygon $S$ is packed compactly against an obstacle and can no longer move in $d$. Therefore, $d^Te$ is a local optimum. } \vspace{-10pt} \end{center} \end{figure} We prefer a point $p$ that leads to tight packing, i.e., placing $S$ compactly against obstacles, for which there is no straightforward metric. \zhf{\citet{mark2008computational} define a notion of extremeness on polygons: one polygon is more extreme in a direction $\vec{d}$ than another if its extreme points lie further in that direction. Therefore, an intuitive indicator of tight packing is that $S\oplus\{p\}$ reaches a local extreme along some direction $d\in\mathbb{R}^2$, i.e.: \begin{align} \label{eq:extreme} p=\text{argmax}_{p'\in B(p)\cap E}\;\text{max}_{e\in S}\; d^T(p'+e), \end{align} } for some open neighborhood $B(p)$ of $p$, as illustrated in~\prettyref{fig:extreme}. The following property is an immediate consequence of the Minkowski sum, which establishes a direct connection between packing tightness and reference points in $E$: \textbf{Proposition.} If \prettyref{eq:extreme} holds for some open neighborhood $B(p)$, then: \begin{align} p=\text{argmax}_{p'\in B(p)} d^Tp'\quad\text{s.t.}\;S\oplus\{p'\}\subseteq Q. \label{eq:target} \end{align} In other words, an extreme placement of $S$ in $Q$ corresponds to a point $p$ in $E$ that satisfies $d^Tp \ge d^Tp'$ for all $p'$ in $B(p)$. With this tool at hand, we can directly evaluate and compare the potential packing tightness of a point $p$ by looking at the range of directions $d$ that make $p$ satisfy \prettyref{eq:target}. Such a range of $d$ corresponds exactly to a 2D normal cone~\cite{boyd2004convex}.
\zhf{The normal cone of a set $C$ at a boundary point $x_0$ is the set of all vectors $y$ such that $y^T (x - x_0) \le 0$ for all $x \in C$.} We summarize its property below: \textbf{Proposition.} For a \zhf{convex} polygonal set $E$ and $p\in\partial E$, the spanning angle of the normal cone at $p$ is given by \zhf{$\mathcal{N}(p)\triangleq \pi-\theta$}, where $\theta$ is the interior angle of $E$ at $p$. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=0.2 \textwidth]{images/proof.pdf}} \caption{ For a boundary point $p$ of a convex polygon $E$, any direction $d$ in its normal cone satisfies $d^Tp \ge d^Tp'$ for all $p'\in E$. } \vspace{-10pt} \label{fig:proof} \end{center} \end{figure} A demonstration of a normal cone is provided in~\prettyref{fig:proof}. A local optimum of $\mathcal{N}(p)$ indicates that $S \oplus \{p\}$ is arranged compactly against obstacles over a wider range of directions, which is exactly what we need. Using $\mathcal{N}(p)$ as a tightness metric, we compare different choices of $p$. The following property is obvious: \textbf{Theorem.} For a polygonal set $E$ and a strictly convex vertex $p\in\partial E$, \prettyref{eq:extreme} holds for some open neighborhood $B(p)$ of $p$. \begin{proof} We consider a neighborhood $B(p)$ containing only $p$ and parts of $p$'s two neighboring edges. Within $B(p)$, $p$ is strictly convex so $\mathcal{N}(p)>0$, and all other points $p'$ lie on a straight line with $\mathcal{N}(p')=0$. \end{proof} \begin{figure}[h] \centering \includegraphics[width=0.3\linewidth]{images/neiborhood.pdf} \caption{ A neighborhood $B(p)$ for a convex vertex $p$. } \label{fig:neighborhood} \end{figure} We show such a neighborhood, colored red, in~\prettyref{fig:neighborhood}. Based on the above observation, our candidate set consists of all the strictly convex vertices of $E$. \section{\label{sec:related}Related Work} Our work builds on prior work on packing policy search.
Our problem is also closely related to other packing-related tasks in computer graphics and robotics. \subsection{Packing Policies} Since packing is NP-hard, various heuristic policies have been proposed over time for 2D~\cite{lodi2002two} and 3D~\cite{ali2022line} cubical object packing, without a single best performer. The shift of focus to irregular shapes is quite recent. \citewithauthor{LiuLCY15} pack irregular 3D shapes using a Minimum-Total-Potential-Energy (MTPE) heuristic and prioritize shapes with the lowest center of gravity. \citewithauthor{WangH19a} propose the Heightmap Minimization (HM) heuristic to minimize the volume increase of packed non-convex shapes as observed from the loading direction. \citewithauthor{GoyalD20} make their packing decision via the Bottom-Left-Back-Fill (BLBF) heuristic~\cite{tiwari2010fast}, which places an object in the bottom-most, left-most, and back-most corner. All these placement rules are manually designed based on specific observations, which limits their applicability. For example, the HM heuristic performs quite well for non-convex shape packing, since it aims to minimize space occupancy vertically. However, it cannot differentiate horizontal placements on a flat container, leading to the sub-optimal cases demonstrated in~\prettyref{fig:Insight}a. The latest efforts~\cite{hu2017solving,DuanHQGZWX19,HuXCG0020,ZhaoS0Y021,zhao2022learning} resort to machine learning, specifically RL, to automatically synthesize policies that work well on an arbitrary \zhf{cubical shape} packing task without human intervention. Early works~\cite{hu2017solving,DuanHQGZWX19,HuXCG0020}, however, only learn the order of \zhf{cuboids} to be packed, while using separate heuristics or algorithms to compute the placement. More recent work~\cite{ZhaoS0Y021} learns the placement policy by predicting logits of discretized placement locations.
The latest work~\cite{zhao2022learning} combines the merits of heuristic and learning-based policies using a candidate selection policy parameterization, having the policy rank a pre-defined set of candidate placements. Our policy inherits these ideas with necessary modifications toward \zhf{irregular shapes}. \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{images/LBlock.pdf} \vspace{-10pt} \caption{\label{fig:Insight} When packing simple polyominos in a flat container, the result generated using the HM heuristic (a) exhibits a non-trivial sub-optimality gap as compared with that of our learned policy (b).} \end{figure} \subsection{Packing in Computer Graphics} There are a handful of packing-related problems in computer graphics, most of which involve irregular shapes and continuous decision variables. UV atlas generation, for example, minimizes the memory footprint by packing many texture charts into a single texture image. To generate high-quality textures, an algorithm needs to jointly optimize the irregular chart shapes and their continuous packing placements. Early works~\cite{levy2002least,10.2312:egs.20031064} nail down the two classes of decision variables in separate sub-stages. More recent works~\cite{LimperVS18,Liu_AAAtlas_2019,ZHANG2020101854} couple the two sub-stages by decomposing the charts for higher packing efficacy. \citewithauthor{LimperVS18} propose a heuristic strategy to iteratively cut charts into smaller ones and repack them using \zhf{heuristics}~\cite{noll2011efficient}. However, the shape of the chart boundaries is pre-determined and cannot be modified during cutting. \citewithauthor{Liu_AAAtlas_2019} and \citewithauthor{ZHANG2020101854} propose to deform charts into rectangular patches and then adopt rectangular packing heuristics~\cite{schertler2018generalized}. However, their methods disregard the sequential nature of the packing problem, deforming chart shapes in a separate sub-stage.
A similar requirement to UV atlas generation arises in 3D printing, where the limited volume of commodity 3D printers requires large objects to be partitioned and/or repacked for printing efficacy. The packing plan can be further constrained for manufacturability and structural robustness. The first work~\cite{luo2012chopper} in this direction cuts large objects using a BSP-tree, such that each leaf node fits into the printing volume. They choose cutting planes from the object surface patches, essentially discretizing the continuous solutions. \citewithauthor{luo2012chopper} select BSP-tree structures guided by a score function incorporating various constraints. However, their optimization strategy is myopic, i.e., using a horizon equal to one. Their follow-up work~\cite{yao2015level} further allows irregular part shapes to be locally optimized for structural robustness and collision resolution, and then packs the parts into the printing volume using local numerical optimization. As a result, \citewithauthor{yao2015level} allow continuous optimization of both irregular object shapes and packing placements, but their optimizer is still myopic. A similar approach has been proposed by~\citet{MaCHW18}, which optimizes only the final object placement and neglects the packing process. Our problem is closely related to these works in that we optimize online placements for general 3D shapes. Modeling packing as a sequential decision-making problem, our method has the potential to close the sub-optimality gap, as illustrated in~\prettyref{fig:Insight}b. A related work to ours is TAP-Net~\cite{HuXCG0020}, which considers the sequential object selection and packing problem in a discrete state-action space, assuming cubical objects and perfect placement without physics dynamics. Our method deviates from~\cite{HuXCG0020} in two important ways. First, we assume continuous object placements and physics realizability constraints.
This is a much more realistic setting mimicking real-world packing problems. Our physics simulator allows uncertainty to be modeled, lifting the assumption of perfect policy execution. Second, our policy parameterization enables the RL algorithm to directly train the entire policy end-to-end, instead of only the object selection policy as done in~\cite{HuXCG0020}, which is \zhf{one} reason for our superior performance over all existing baselines. \begin{figure*}[ht] \includegraphics[width=0.98\textwidth]{images/network.pdf} \caption{\label{fig:architecture} Our policy learning architecture. The input to our method is the surface point cloud (a) of the incoming object, $P$, and the heightmap (b) of continuous object configurations in the target container, $H_c$. Our neural network policy uses PointNet (d) and a CNN (e) to extract features of $P$ and $H_c$, respectively, for 3D geometry understanding. Our geometry-inspired candidate generation method then provides a set of placements (c), each encoded as a feature $f_a^i$ using an MLP (f). Finally, our policy, a dueling network, ranks the placement candidates via the state-action value function $Q$, and the best candidate is selected for execution. The continuous object configurations inside the target container are governed by a physics simulator (g). The packing process continues with receiving the next observation until the container is full. Our RL algorithm trains the ranking policy by asynchronously sampling trajectories and updating policy parameters with the received reward signals. } \end{figure*} \subsection{Packing in Robotics} Roboticists consider packing as a component in the pipeline of perception, planning, prediction, and control, rather than a standalone algorithmic problem. However, progress in robotic packing tasks is relatively slow. This is because the robust execution of high-quality packing plans is extremely difficult due to the tiny spaces between objects.
We are only aware of two prior works~\cite{WangH19a,8793966} presenting full-featured packing systems. \citewithauthor{WangH19a} use a robot arm with a sucker-type gripper to grab objects from top-down views. Their follow-up work~\cite{WangH22} further introduces a fallback strategy to shake the target container and create new open spaces for packing. \citewithauthor{8793966} restart the sensing, motion planning, and control loop whenever failures are detected in the downstream stages. Both methods use simple heuristics to solve the underlying packing problem, with \citewithauthor{WangH19a} relying on the heightmap minimization heuristic and \citewithauthor{8793966} assuming known cubical object shapes and compatible target container sizes. In contrast, bin-picking~\cite{mahler2016dex,mahler2017learning} can be robustly executed on robot hardware, because the target container is assumed to be much larger than the objects, rendering packing unimportant. The bin-picking solutions~\cite{mahler2016dex,mahler2017learning}, though quite different from ours, share a commonality with our policy design. Both methods assume the availability of a set of candidate actions, which is then ranked by a learning-based policy. Most recently, \citewithauthor{zeng2020transporter} and \citewithauthor{huang2022equivariant} bring the accuracy and generality of learning-based object-transfer policies to another level by introducing equivariance. By factoring out the 2D rigid rotations from the neural network input-output space, learning becomes much more sample-efficient. We adopt a similar approach for \zhf{factoring out rigid rotations} during the forward calculation of packing policies. \section{\label{sec:result}Results and Evaluation} In this section, we first explain our packing experiment setup. Then, we describe our carefully prepared datasets emulating realistic packing tasks in~\prettyref{sec:dataPreparation}.
We illustrate the superiority of our method for online packing by comparing it with a range of existing baselines in~\prettyref{sec:comparisons} and demonstrate the benefits of our candidate-based packing pattern in~\prettyref{sec:actionDesign}. We report the generalization results of our method in~\prettyref{sec:generalization}, where the problem emitter shifts to noisy point cloud inputs and unseen shapes. Finally, we show performance on buffered packing scenarios in~\prettyref{sec:buffeResult}. We establish our packing environment using the Bullet simulator~\cite{coumans2016pybullet}, with a deep container of size $S_x=32$cm, $S_y=32$cm, and $S_z=30$cm following~\citet{WangH19a}. We assume all objects have the same uniform density for estimating the center of mass. The coefficients of friction among objects and against the container are both set to $0.7$. We set the grid size of the heightmap $H_c$ to be $\Delta_h=1$cm. We then sample $H_c$ at an interval of $\Delta_g=2$ grids to form grid points for the candidate generation procedure. For clustering connected regions with similar $l_z$ values, we use $\Delta_z=1$cm. Unless otherwise stated, we use $\Delta_\theta=\pi/4$ to discretize object rotations. We use a maximum of $N=500$ candidate actions for policy ranking. Our PointNet takes a fixed-size set of $1024$ points, so we resample $1024$ points from the surface point cloud $P$ to fit this size. We adopt $16$ simulation threads in the actor process. Our learning-based policies are trained within 48 hours on a desktop computer equipped with a Xeon E5-2620 CPU and a TITAN Xp GPU. Ablations on these experimental parameters and training methods are reported in~Appendix~\ref{sec:appendixB}. \begin{figure}[t!] \centering \includegraphics[width=0.44\textwidth]{images/stablePoses.pdf} \caption{\label{fig:poses} (a): Different planar-stable poses of a bucket shape.
Some of these poses are identical in appearance up to a vertical rotation, and we remove such redundant poses (b) to avoid an imbalanced shape distribution.} \vspace{-16pt} \end{figure} \subsection{\label{sec:dataPreparation}Object Data Preparation} To train reliable policies for packing irregular 3D shapes, we need to prepare object datasets that contain abundant shapes as well as their planar-stable poses, while being compatible with the simulator. Our object test suite combines objects from widely used, open-source polygonal mesh datasets~\cite{0033116}, collected either by synthetic modeling or real-world scanning. To perform robust collision detection and ray-casting tests in the simulation world, we stipulate that objects are represented as closed manifold mesh surfaces. For each object, we first use the method in~\cite{StutzG20} to reconstruct the mesh into a watertight one. We then extract all the planar-stable poses using the detection algorithm of~\cite{GoldbergMZCCC99}, as explained in~\prettyref{fig:poses}a. Some planar-stable poses are rotation-symmetric, as shown in \prettyref{fig:poses}b, and we propose~\prettyref{alg:poses} in Appendix~\ref{sec:appendixA} to remove redundant poses and retain only one representative from each rotation-symmetric group. This step avoids imbalanced shape distributions and improves the robustness of trained policies. \begin{wrapfigure}{r}{0.46\linewidth} \centering \vspace{-6pt} \includegraphics[width=1.\linewidth]{images/vhacd.pdf} \vspace{-20pt} \caption{ Convex decomposition. } \vspace{-10pt} \label{fig:vhacd} \end{wrapfigure} Finally, several rigid body simulators only accept convex shape primitives, e.g., approximating a non-convex power drill shape with its convex hull in~\prettyref{fig:vhacd}a (red). We thus apply the convex decomposition algorithm~\cite{mamou2016volumetric} for non-convex shapes, as illustrated in~\prettyref{fig:vhacd}b.
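A minimal sketch of the redundancy test behind this step, under the simplifying assumptions that a pose is summarized by a top-down occupancy grid and that only the four $\pi/2$ vertical rotations are checked (function names are ours, not those of~\prettyref{alg:poses}):

```python
import numpy as np

def vertical_rotations(grid):
    """The four top-down appearances of a pose under 90-degree vertical rotations."""
    return [np.rot90(grid, k) for k in range(4)]

def remove_redundant_poses(grids):
    """Keep one representative per group of rotation-symmetric poses."""
    kept = []
    for g in grids:
        redundant = any(
            any(np.array_equal(r, k) for r in vertical_rotations(g)) for k in kept
        )
        if not redundant:
            kept.append(g)
    return kept
```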
The complete data preparation pipeline is outlined in~\prettyref{alg:data} of Appendix~\ref{sec:appendixA}. The packing result, i.e., the final packing utility, is highly related to the geometric properties of the chosen object dataset. To provide convincing evaluations, we apply our pipeline to construct three datasets with \zhf{distinct} features. \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{images/shapeInUse.pdf} \vspace{-10pt} \caption{\label{fig:dataset} Gallery of our datasets. (a): Part of the shapes from the \textit{General} dataset. (b): All polyominos from the \textit{BlockOut} dataset, where each polyomino is presented with a selected pose. (c): The bowl shapes from the \textit{Kitchen} dataset. The shapes in (a) and (c) are scaled so that their maximal AABB size equals 1.} \end{figure*} \textbf{\textit{General}} is a large dataset made up of daily life objects, as illustrated in~\prettyref{fig:dataset}a. It combines shapes from the KIT dataset~\cite{KasperXD12}, the BigBIRD dataset~\cite{SinghSNAA14}, the GD dataset~\cite{KapplerBS15}, the APC dataset~\cite{RennieSBS16}, and the YCB dataset~\cite{CalliSBWKSAD17}, totaling 483 shapes and 1094 planar-stable poses. All these shapes are collected through real-world scanning. \textit{General} is our main dataset for profiling the performance of different methods. \textbf{\textit{BlockOut}} is a synthetic 3D puzzle dataset with polyominos that comes from~\citet{LoFL09}. This dataset includes 8 polyominos with 23 planar-stable poses, as illustrated in~\prettyref{fig:dataset}b. Each polyomino is composed of basic cubes of the same size. This dataset involves relatively regular shapes that are nevertheless more complex to handle than the cubical objects in BPP problems. \textit{BlockOut} demonstrates the algorithm's ability to understand shapes and combine them, as done in~\prettyref{fig:Insight}. \textbf{\textit{Kitchen}} consists of three categories of shapes, namely bowl, board, and fruit.
The bowl shapes are concave CAD models collected from the ShapeNet dataset~\cite{ChangFGHHLSSSSX15}, as illustrated in~\prettyref{fig:dataset}c. The board shapes are manually created by us, and the fruit shapes are small items coming from the \textit{General} dataset. This dataset is created to verify a commonsense logic: an effective packing method would place fruits in bowls before covering them with boards. We expect the learned policies to pick up such logical rules without human intervention, thus achieving better packing performance. We compose the polyominos in the \textit{BlockOut} dataset from basic cubes with a side length of $6$cm. \zhf{Therefore, the upper packing utility bound for the chosen container is $87.9\%$.} The rotation interval $\Delta_\theta$ for \textit{BlockOut} is set to $\Delta_\theta=\pi/2$ since the polyomino shapes are axis-aligned. To construct the \textit{Kitchen} dataset, we collect 54 concave bowl shapes from the ShapeNet dataset, 106 fruits from the \textit{General} dataset, and 20 planar boards generated with random sizes. The problem emitter for the \textit{Kitchen} dataset is slightly different from the others. Since we focus on verifying the packing logic between shape categories, we first sample an object category and then sample a random object with its most stable pose from this category. We also provide experimental results on industrial shapes collected from the ABC dataset~\cite{KochMJWABAZP19} in Appendix~\ref{sec:appendixB}. \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{images/resultsVisual.pdf} \caption{\label{fig:visual}Visualization of various packing methods on three datasets. Each test instance is labeled with its utility/number of packed objects. \zhf{Our learned policy consistently exhibits tight packing and achieves the best performance.
} } \end{figure*} \subsection{\label{sec:comparisons}Comparisons with Heuristic Baselines} We compare our learned policies with several representative heuristic methods that can pack irregular 3D shapes. The MTPE heuristic~\citep{LiuLCY15} searches for the pose with the lowest center-of-mass height. The HM heuristic~\citep{WangH19a} minimizes the volume increase as observed from the top-down direction. The BLBF heuristic~\citep{GoyalD20} chooses the bottom-most, left-most, and back-most placement. We also test a classic First-Fit (FF)~\citep{Falkenauer96} packing logic, which places each item at the first feasible placement found. We run all methods in the same environment setup with the same test sequences. A total of $2000$ object sequences randomly generated from each dataset are tested. These baselines are compared along five different metrics. First, we have the packing utility in~\prettyref{eq:object}, which is the volume of the packed objects divided by the volume of $C$. Except for the \textit{BlockOut} dataset, which is composed of regular cubes, optimal packing ground truths are hard to obtain for the other datasets. Therefore, we provide the utility gap against the best baseline to highlight the performance discrepancy. We also list the utility variance and the average number of packed objects. Finally, we show the average \zhf{time} cost of each packing decision. \input{tables/baselines.tex} The quantitative and qualitative comparisons are summarized in~\prettyref{table:baselines} and~\prettyref{fig:visual}, respectively. The best-performing heuristic for each dataset varies due to their distinct geometric properties. For example, MTPE has an advantage on the \textit{General} dataset, while HM is the best heuristic for \textit{Kitchen}, which contains more non-convex shapes. Compared to these manually designed packing rules, our learned policies adapt well to each dataset and consistently dominate the heuristic rules.
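For concreteness, the first two metrics can be computed as follows (a minimal sketch with illustrative names; \prettyref{eq:object} is the authoritative definition):

```python
def packing_utility(packed_volumes, container_volume):
    """Packing utility: total volume of packed objects over the container volume."""
    return sum(packed_volumes) / container_volume

def utility_gap(utility, best_baseline_utility):
    """Relative gap of a method's utility against the best baseline's utility."""
    return (utility - best_baseline_utility) / best_baseline_utility
```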
The gap metric demonstrates that our method outperforms even the best-performing baseline on each dataset by at least 12.8\% in terms of packing utility. All these methods meet real-time packing requirements, making at least $25$ decisions per second. Our method also achieves the smallest variance on each dataset. More packing visualizations are provided in Appendix~\ref{sec:appendixB}. We also provide a dynamic packing video in the supplemental material. \subsection{\label{sec:actionDesign}Benefits of the Candidate-based Packing Pattern} One of our main contributions lies in the action space design, i.e., \zhf{the candidate-based packing pattern for pruning sub-optimal actions.} We highlight the effectiveness of our design in this section. We first compare with the resolution-complete action space, which discretizes the action space using a regular interval as done in~\cite{GoyalD20}. This action space can also be treated as a full geometry set including convex vertices, concave vertices, edges, and internal points of connected action regions, which serves to demonstrate the functionality of our generated candidates. We also compare with other dimension-reduction techniques below. \input{tables/actionDesign.tex} \paragraph{Act in Line} The resolution-complete action space for packing is typically large and sensitive to the container size $S$ and the intervals $\Delta$ used to discretize the space $SE(2) \times SO(1)$. The size of this discretized space is $|\mathcal{A}|=\mathcal{O}(S^2\Delta^{-3})$. To sidestep such an enormous space, which can grow explosively, we only require the robot to provide $l_x$ and $\theta$. The position $l_y$ can then be determined via $\text{argmin}_{l_y}l_z(l_x,l_y)$ for the given $\theta$, reducing the action space to $\mathbb{R} \times SO(1)$ with $|\mathcal{A}|=\mathcal{O}(S\Delta^{-2})$.
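The Act-in-Line reduction can be sketched as follows, assuming $l_z(l_x,l_y)$ is read off a container heightmap for an axis-aligned footprint of the rotated object (a simplified, discrete version; names are ours):

```python
import numpy as np

def l_z(container_hm, footprint, lx, ly):
    """Resting height of an axis-aligned (h, w) footprint placed at (lx, ly)."""
    h, w = footprint
    return float(container_hm[lx:lx + h, ly:ly + w].max())

def act_in_line(container_hm, footprint, lx):
    """Given lx (and a fixed rotation theta), choose ly = argmin_{ly} l_z(lx, ly)."""
    h, w = footprint
    return min(
        range(container_hm.shape[1] - w + 1),
        key=lambda ly: l_z(container_hm, footprint, lx, ly),
    )
```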
\paragraph{Act on Orientation} Along the same line of thinking, we can further reduce the action space by only asking the robot to output $\theta$ and determining $l_x,l_y$ via $\text{argmin}_{l_x,l_y}l_z(l_x,l_y)$. This reduces the action space to $SO(1)$ with $|\mathcal{A}|=\mathcal{O}(\Delta^{-1})$. \paragraph{Act on Heuristics} Since heuristic methods have different packing preferences, a straightforward packing strategy is to select one of the heuristics for the object to be packed. We collect all the heuristics mentioned in~\prettyref{sec:comparisons} as a candidate action set $\textbf{m}$. The action space size, in this case, is $|\mathcal{A}|=|\textbf{m}|$. For all the alternatives mentioned above, we replace the Q-values of invalid actions with $-\infty$ during decision-making, and the performances are summarized in~\prettyref{table:actionSpace}. The resolution-complete design, which is also a full geometry set of connected action regions, performs poorly on each dataset. The reason is that a large action space must be trained with a huge amount of RL exploration, which is beyond regular computational capabilities. \zhf{Benefiting from dimension reduction}, Act-in-Line achieves consistently better performance \zhf{than resolution-complete actions}. However, if we further reduce the action space to $SO(1)$, the decision flexibility decreases, and the performance of Act-on-Orientation degrades. Unsurprisingly, Act-on-Heuristics exceeds the performance of every single heuristic reported in~\prettyref{table:baselines}. However, the packing behavior of the learned policy is still restricted by its basic heuristic components, and the improvements are limited.
\zhf{In comparison, our method stands out as the consistently best performer by reducing the action space to candidate actions of a fixed size $N$, which both eases the learning burden and enables tight object packing.} \subsection{\label{sec:generalization}Generalization Ability} The generalization of learning-based methods, i.e., testing the trained policies with a problem emitter different from the training one, has always been a concern. Here we demonstrate our generalization ability with two experiments from a practical perspective. \paragraph{Generalization on Noisy Point Clouds} The point cloud used to represent the incoming object is re-sampled from $P$ randomly. To demonstrate our generalizability, we apply Gaussian noise $d \cdot N(0,\sigma^2)$ to the re-sampled $P$. Here $\sigma$ is the standard deviation and $d$ is the diagonal length of $P$'s AABB. We then generalize the policies trained on $\sigma = 0$ to $\sigma = 1\%$--$10\%$. \paragraph{Generalization on Unseen Shapes} To test our method on unseen shapes, we randomly remove $20\%$ of the shapes from each dataset for training and test the trained policies on the full datasets. For the \textit{Kitchen} dataset, this shape removal is performed per category. \input{tables/noise.tex} \input{tables/unseen.tex} The generalization results on noisy point clouds and unseen shapes are summarized in~\prettyref{table:noise} and~\prettyref{table:unseen}, respectively. Our method maintains its performance under various amplitudes of point cloud noise and behaves well even with a strong noise disturbance of $\sigma = 10\%$. The policies trained with only part of each dataset still adapt well to the full dataset, with a marginal drop in performance.
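The noise model used here can be sketched as follows (a minimal version; the function name is ours):

```python
import numpy as np

def perturb_point_cloud(P, sigma, rng=None):
    """Apply Gaussian noise d * N(0, sigma^2) to a point cloud P of shape (n, 3),
    where d is the diagonal length of P's axis-aligned bounding box (AABB)."""
    rng = np.random.default_rng() if rng is None else rng
    d = float(np.linalg.norm(P.max(axis=0) - P.min(axis=0)))
    return P + d * rng.normal(0.0, sigma, size=P.shape)
```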
Note that even the worst generalization performance in~\prettyref{table:noise} and~\prettyref{table:unseen} still clearly outperforms the best heuristic performance reported in~\prettyref{table:baselines}, which demonstrates the robustness of our learned policies. We also conduct cross-dataset tests of the trained policies; please see \prettyref{sec:betweenDatasets}. \subsection{\label{sec:buffeResult}Performance on the Buffered Packing Scenario} \input{tables/offline.tex} Here we show that our method can naturally solve the buffered packing problem by merely introducing an object-ordering policy $\pi_s$ cooperating with the placement policy $\pi$. To verify that both $\pi_s$ and $\pi$ are necessary components, we compare them with the corresponding heuristic baselines proposed by~\citet{GoyalD20}. Besides the placement heuristic BLBF mentioned above, \citet{GoyalD20} also propose the LFSS method for ordering objects with a largest-volume-first preference. We combine LFSS with $\pi$ and combine $\pi_s$ with BLBF as our baselines. All learning-based policies are trained with buffer size $K=10$. We further test the trained policies with various $K$ to demonstrate their generalization ability, as visualized in~\prettyref{fig:visualBuffer} and summarized in~\prettyref{table:resultBuffer}. \begin{figure}[t!] \begin{center} \centerline{\includegraphics[width=0.48 \textwidth]{images/resultsBuffer.pdf}} \caption{ Visualization results of our method on buffered packing scenarios with various $K$. A larger buffer provides more flexibility for the object-ordering policy $\pi_s$ and results in denser packing. } \label{fig:visualBuffer} \end{center} \vspace{-30pt} \end{figure} Our comparison shows that introducing an object-ordering policy $\pi_s$ for buffered packing scenarios with $K=10$ significantly improves the packing performance compared with the strictly online case, where $\pi_s$ does not exist and $K=1$.
The jointly trained policies $\pi_s$ and $\pi$ outperform the other two combination alternatives by a large margin on each dataset. This advantage is well maintained when we generalize the trained policies to buffered packing scenarios with $K=3$ and $K=5$. More packing results of our method on buffered packing are visualized in Appendix~\ref{sec:appendixB}. To figure out what the two policies have learned, we calculate the average volume of objects chosen by $\pi_s$ during the packing process and visualize it in~\prettyref{fig:sizeRatio}a. We can see that $\pi_s$ automatically learns a strategy that selects objects from large to small, like LFSS. We visualize a metric $\sum |G_{i(t)}| / V_f$ in~\prettyref{fig:sizeRatio}b to reflect space occupancy, where $V_f$ is the volume below the upper surface of the packed objects. We can see that $\pi_s$ selects more suitable items to keep a higher occupancy than LFSS, so that the occupied space $V_f$ is better utilized. Comparing the occupancy of $\pi_s + \text{BLBF}$ and $\pi_s + \pi$ further shows that the learnable $\pi$ also contributes to better packing. \begin{figure}[t!] \begin{center} \centerline{\includegraphics[width=0.48 \textwidth]{images/sizeRatio.pdf}} \vspace{-12pt} \caption{ Packing behavior analysis on the \textit{General} dataset with $K = 10$. The policies $\pi_s$ and $\pi$ both contribute to better utilization of occupied spaces. } \label{fig:sizeRatio} \end{center} \vspace{-20pt} \end{figure}
\section*{Introduction} In this paper we investigate the quantization of the gauged $(G/G)$ WZW model in the generalized momentum representation. The consideration is inspired by the study of (two-dimensional) Yang-Mills and BF-theories in the momentum representation \cite{AER}. The problem of quantization of gauge theories in the momentum representation has been attracting attention for a long time \cite{JG}.\footnote{The authors are grateful to Prof.\ R.\ Jackiw for drawing their attention to this paper and for informing them about the scientific content of his letter to Prof.\ D.\ Amati.} This question arises naturally in the Hamiltonian version of the functional integral formalism \cite{FRS}. While in the connection representation the idea of gauge invariance may be implemented in a simple way, \begin{equation} \Psi(A^g)=\Psi(A) \ , \eel PsiA we get a nontrivial behavior of the quantum wave functions under gauge transformations in the momentum representation. Indeed, one can apply the following simple argument. The wave functional in the momentum representation may be thought of as a functional Fourier transform of the wave functional in the connection representation (\ref{PsiA}): \begin{equation} \Psi(E)=\int {\cal D}A \, \exp \left(i \int tr E_i A_i \right) \, \Psi(A) \ . \eel A-E Taking into account the behavior of $A$ and $E$ under gauge transformations, \begin{eqnarray} \label{trans} A_i^g&=&g^{-1}A_i g+g^{-1}\partial_i g \, , \nonumber \\ E_{i}^g&=&g^{-1}E_i g \, , \end{eqnarray} we derive \begin{equation} \Psi(E^g)= e^{i\int tr E_i \partial_igg^{-1}} \Psi(E) \, . \eel PsiE We conclude that the wave functional in the momentum representation is not invariant with respect to gauge transformations. Instead, it gains a simple phase factor $\phi(E, g)$, which is of the form \begin{equation} \phi(E, g)=\int tr E_i \partial_igg^{-1} .
\eel phi The infinitesimal version of the same phase factor, \begin{equation} \phi(E, \epsilon)=\int tr E_i \partial_i \epsilon \, , \eel ephi corresponds to the action of the gauge algebra. It is easy to verify that $\phi$ satisfies the following equation: \begin{equation} \phi(E, gh) = \phi(E, g) + \phi(E^g, h) \, . \eel cocycle This property ensures that the composition of two gauge transformations \rz PsiE with gauge parameters $g$ and $h$ is the same as a gauge transformation with the parameter $gh$. Eq.\ \rz cocycle is usually referred to as a cocycle condition. It expresses the fact that $\phi$ is a one-cocycle of the (infinite-dimensional) gauge group. A one-cocycle is said to be trivial if there exists some $\tilde\phi$ such that \begin{equation} \phi(E, g) =\tilde\phi (E^g)-\tilde\phi(E) \, . \eel cobcon In this case the gauge invariance of the wave function may be restored by the redefinition \begin{equation} \tilde{\Psi}(E)=e^{-i\tilde{\phi}(E)}\Psi(E). \eel tilPsi An infinitesimal cocycle $\phi(E, \epsilon)$ is trivial if it can be represented as a function of the commutator $[\epsilon, E]$: \begin{equation} \phi(E, \epsilon)=\tilde\phi ([\epsilon, E]). \eel ecob It is easy to see that the cocycle \rz ephi is nontrivial. Indeed, let us choose both $E$ and $\epsilon$ to have only one nonzero component, $E^a$ and $\epsilon^a$, in the Lie algebra. Then the commutator in \rz ecob is always equal to zero, whereas the expression \rz ephi is still nonvanishing. As a consequence, the gauge group cocycle \rz phi is also nontrivial. On the other hand, on some {\em restricted} space of values for the field $E$ the cocycle may become trivial (generically if we admit nonlocal expressions for $\tilde{\phi}$).
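As a direct check of the cocycle property, note that the Leibniz rule $\partial_i (gh)\,(gh)^{-1}=\partial_i g\, g^{-1}+g\,\partial_i h\, h^{-1} g^{-1}$, together with the cyclicity of the trace and the transformation law (\ref{trans}), yields \begin{equation} \phi(E, gh)=\int tr E_i \partial_i g\, g^{-1} +\int tr\, (g^{-1}E_i g)\, \partial_i h\, h^{-1} =\phi(E, g)+\phi(E^g, h)\, , \end{equation} so that the phase (\ref{phi}) indeed satisfies (\ref{cocycle}).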
This is important to mention as one may rewrite the (integrated) Gauss law (\ref{PsiE}) as a triviality condition on the cocycle: Let us parameterize $\Psi$ as $\Psi(E) :=\exp i\tilde\phi(E)$, which is possible whenever $\Psi \neq 0$, and insert this expression into (\ref{PsiE}). The result is precisely (\ref{cobcon}) with $\phi \equiv \int tr E_i \partial_igg^{-1}$.\footnote{More accurately, one obtains (\ref{cobcon}) only mod $2\pi$. But in any case this modification of (\ref{cobcon}) is quite natural in view of the origin of the cocycle within \ry PsiE . Alternatively one might regard also a multiplicative cocycle $\Phi=\exp (i\phi)$ right from the outset, cf.\ Appendix A.} In fact, e.g.\ in two dimensions the wave functions of the momentum representation are supported on conjugacy classes $E(x)=g(x)\,E_0\,g^{-1}(x)$ with specific values of $E_0$. But away from these specific conjugacy classes, and in particular in the original, unrestricted space of values for $E$, the general argument that the cocycle \rz cobcon is nontrivial applies. More details on this issue may be found in Appendix A. It is worth mentioning that in the Chern-Simons theory a cocycle appears in the connection representation as well: \begin{equation} \Psi(A^g)=e^{i\phi(A, g)}\Psi(A). \eel anomaly The cocycle $\phi(A, g)$ is usually called the Wess-Zumino action \cite{Wess}. It is intimately related to the theory of anomalies \cite{FSh}. Recently, a cocycle of the type \rz phi has been observed in two-dimensional BF- and YM-theories. In this paper we consider the somewhat more complicated example of the gauged WZW (GWZW) model for a semi-simple Lie group. Like the BF theory, it is a two-dimensional topological field theory \cite{Sp} (for a detailed account see \cite{Blau}). It has a connection one-form (gauge field) as one of its dynamical variables and possesses the usual gauge symmetry. However, there is a complication which makes the analysis different from the pattern (\ref{PsiE}).
In the GWZW model the variable which is conjugate to the gauge field, and which shall be denoted by $g$ in the following, takes values in a Lie group $G$ instead of a linear space. So, we get a sort of curved momentum space. We calculate the cocycle $\phi_{GWZW}$ which governs the gauge dependence of wave functions in the $g$-representation and find that it differs from the standard expression (\ref{phi}). We argue that while the cocycle \rz phi corresponds to a Lie group $G$, our cocycle is related to its quantum deformation $G_q$. In the course of the analysis we find that the GWZW model belongs to the class of Poisson $\sigma$-models recently discovered in \cite{Th}. This theory provides a technical tool for the evaluation of the cocycle $\phi_{GWZW}$. Let us briefly characterize the content of each section. In Section 1 we develop the Hamiltonian formulation of the GWZW model, find canonically conjugate variables, and write down the gauge invariance equation for the wave functional in the $g$-representation. Section 2 is devoted to the description of Poisson $\sigma$-models. A two-dimensional topological $\sigma$-model of this class is defined by fixing a Poisson bracket on the target space. Using the Hamiltonian formulation (the topology of the space-time being a torus or cylinder), we prove that the GWZW model is equivalent to a certain Poisson $\sigma$-model coupled to a `topological' term $S_\delta$ that has support of measure zero on the target space of the field theory. The target space of the (coupled) Poisson $\sigma$-model is the Lie group $G$. We start from the GWZW action, evaluate the Poisson structure on $G$, and discover its relation to the theory of Quantum Groups. The origin of the term $S_\delta$ is considered in detail in Appendix B. In Section 3 we solve the gauge invariance equation and find the gauge dependence of the GWZW wave functional in the $g$-representation. This provides a new cocycle $\phi_{GWZW}$.
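To fix ideas, we recall the general form of such models: a Poisson $\sigma$-model with target space $N$ and Poisson tensor $P^{ij}$ is governed (up to conventions of sign and normalization) by an action of the type \begin{equation} S[X,A]=\int_{\cal M} \left( A_i\wedge dX^i+{1\over 2}\, P^{ij}(X)\, A_i\wedge A_j \right) , \end{equation} where $X \colon {\cal M}\to N$ is the map from the world sheet ${\cal M}$ and the $A_i$ are one-form valued fields on ${\cal M}$. The particular Poisson structure relevant for the GWZW model is evaluated in Section 2.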
Calculations are performed for the Poisson $\sigma$-model without the singular term. In Appendix C we reconsider the problem in the presence of the topological term. It is shown that in the case of $G=SU(2)$ at most one quantum state is affected. We compare the results with other approaches \cite{ADW}, \cite{Blau}. In some final remarks we conjecture that the Poisson $\sigma$-model coupled to $S_\delta$ gives an alternative formulation of the GWZW model valid for a Riemann surface of arbitrary genus. We comment on the new relation between the WZW model and Quantum Groups which emerges as a by-product of our consideration. \section{Hamiltonian Formulation of the GWZW Model} The WZW theory is defined by the action \begin{equation} WZW(g)={ k\over 8\pi} \int tr \, \partial_\mu gg^{-1} \partial^\mu gg^{-1}\, d^2x + {k \over 12\pi} \int tr \, d^{-1}(dgg^{-1})^3 \, , \eel acwzw where the fields $g$ take values in some semi-simple Lie group $G$ and indices $\mu$ are raised with the standard Minkowski metric. The case of a Euclidean metric may be treated in the same fashion. Some remarks concerning the second term in \rz acwzw may be found in Appendix B. The simplest way to gauge the global symmetry transformations $g\to lgl^{-1}$ is to introduce a gauge field $h$ taking its values in the gauge group; the action \begin{equation} GWZW(h,g)=WZW(hgh^{-1}) \end{equation} is then invariant under the local transformations $g\to lgl^{-1}, h \to hl^{-1}$. With the celebrated Polyakov-Wiegmann formula and $a_\pm :=h^{-1} \partial_\pm h$, where $\partial_\pm =\partial_0 \pm\partial_1$, GWZW can be brought into the standard form \begin{equation} \begin{array}{r@{}l} GWZW&(g,a_+,a_-)= WZW(g) \, + \\[2pt] &+ \, {k\over 4\pi} \int tr [a_+\partial_-gg^{-1}-a_-g^{-1} \partial_+g -a_+ga_-g^{-1} +a_+a_- ] \, d^2x \, .
\end{array} \eel agwzw In the course of our construction of GWZW $\;$ $a \equiv a_+ dx^+ + a_- dx^-$ has been subject to the zero curvature condition $da + a^2 \equiv 0$. This condition results also from the equations of motion arising from (\ref{agwzw}). So, further on we treat $a_\pm$ as unconstrained fields (taking their values in the Lie algebra of the chosen gauge group). In order to find a Hamiltonian formulation of the GWZW model, we first bring \rz agwzw into first order form. To this end we introduce an auxiliary field $p(x)$ into the action by the replacement ($\dot g \equiv \6_0 g$) \begin{equation} {k\over 8\pi} (\dot g g^{-1} + a_+ - g a_- g^{-1} )^2 \to \, p (\dot g g^{-1} + a_+ - g a_- g^{-1} ) - {2\pi\over k} p^2 \, . \eel curre As $p$ enters the action quadratically, it may always be eliminated by means of its equations of motion so as to reproduce the original action (\ref{agwzw}). In the functional integral approach this corresponds to performing the Gaussian integral over $p$. Now the action (\ref{agwzw}) may be seen to take the form (with $\partial g \equiv \partial_1 g$) \begin{eqnarray} \label{fofac} &GWZW\!\!\!\!& (g, p, a_\pm)= {k\over 12\pi} \int\, tr d^{-1}(dgg^{-1})^3 \, + \int d^2x\, tr \Biggl\{ p \dot gg^{-1} - \nonumber \\ &&- a_-\left[g^{-1} pg-p+ {k\over 4\pi}(g^{-1} \6 g+\6 gg^{-1})\right]-\\&& - p \6 gg^{-1} - {k\over 8\pi} (a_+ - a_- - {4\pi\over k} p + \6 g g^{-1} )^2 \Biggr\} \, .
\nonumber \end{eqnarray} This is already linear in time derivatives. After the simple shift of variables \begin{equation} a_+ \to \ti a_+ \equiv a_+ - a_- - {4\pi\over k} p + \6 g g^{-1} \end{equation} the last term is seen to completely decouple from the rest of the action. Therefore one can exclude it from the action without loss of information. We can again employ the argument about integration over $\ti a_+$ (or also $a_+$ in (\ref{fofac})). So we have introduced one extra variable $p$ and now one variable is found to drop out from the formalism. After $\ti a_+$ is excluded, the rest of formula (\ref{fofac}) provides the Hamiltonian formulation of the model. The first two terms play the role of a symplectic potential, giving rise to the symplectic form\footnote{Cf.\ also Appendix C.} \begin{equation} \O^{field}=tr \oint \left[ dpd g g^{-1} + \left(p + {k\over 4\pi}\6 gg^{-1}\right) \left(d g g^{-1}\right)^2 \right]dx^1 \; . \eel psss Here $d$ is interpreted as an exterior derivative on the phase space. It is interesting to note that the nonlocal term in (\ref{fofac}) gives a local contribution to the symplectic form on the phase space. The third term, which includes $a_-$, represents a constraint: \begin{equation} g^{-1} p g- p+{k\over 4\pi} \left(g^{-1} \6 g+\6 gg^{-1}\right) \approx 0. \eel const The variable $a_-$ is a Lagrange multiplier and the constraint is nothing but the Gauss law of the GWZW model. It is a nice exercise to check with the help of (\ref{psss}) that the constraints \rz const are first class and that they generate the gauge transformations.
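The elimination of the auxiliary field $p$ via its equation of motion can be checked in a scalar toy version of the replacement (\ref{curre}); the sketch below (names are ours) collapses the matrix combination $\dot g g^{-1}+a_+-ga_-g^{-1}$ into a single number $v$:

```python
import math

# Scalar toy version of the replacement (curre): the matrix combination
# (g' g^{-1} + a_+ - g a_- g^{-1}) is collapsed into a single number v.
def first_order(p, v, k):
    """Integrand p*v - (2*pi/k)*p**2 of the first-order form."""
    return p * v - (2 * math.pi / k) * p**2

def eliminate_p(v, k):
    """Eliminate p by its equation of motion: stationary point p* = k*v/(4*pi)."""
    p_star = k * v / (4 * math.pi)
    return first_order(p_star, v, k)

# The Gaussian stationary value reproduces the original (k/(8*pi)) v**2 term:
v, k = 0.7, 5.0
print(abs(eliminate_p(v, k) - k * v**2 / (8 * math.pi)) < 1e-12)
```

In the functional integral the same statement is the exactness of the Gaussian integral over $p$ up to a field-independent normalization.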
Equation \rz const implies $tr \left(g^{-1} p g + {k\over 4\pi}g^{-1} \6 g \right)^2 \approx tr (p- {k\over 4\pi}\6 g g^{-1})^2$ and hence \begin{equation} tr[p\6 gg^{-1} ] \approx 0. \eel lemm5 This permits us to eliminate the Hamiltonian in \rz fofac in agreement with the fact that the model \rz agwzw is topological. Being a Hamiltonian formulation of the GWZW model, the form \rz fofac is not quite satisfactory if one wants to solve the Gauss law equation (\ref{const}). We therefore apply here a trick usually referred to as bosonization \cite{GK,AS}. The main idea is to substitute the Gauss decomposition for the matrix $g$ into the GWZW action: \begin{equation} g=g_\downarrow^{-1} \gu \ , \eel decom where $\gd$ is lower triangular, $\gu$ is upper triangular, and both of them are elements of the complexification of $G$. (Note, however, that we do not complexify the target space $G$ here, but only use complex coordinates $\gu$, $\gd$ on it.) If the diagonal parts of $\gd$ and $\gu$ are taken to be inverse to each other, this splitting is unique up to sign ambiguities in the evaluation of square roots. Analogously any element of the Lie algebra $\cal G$ corresponding to $G$ may be split into upper and lower triangular parts according to \begin{equation} Y=Y_\downarrow +Y_\uparrow \, ,\quad \left(Y_\downarrow\right)_d =\left(Y_\uparrow\right)_d = \frac{1}{2} Y_d \ {}, \eel algde where a subscript $d$ is used to denote the diagonal parts of the corresponding matrices.
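For $2\times 2$ matrices the decomposition \rz decom can be made completely explicit. The following numerical sketch (helper names are ours) constructs it, checks the sign ambiguity claim implicitly through the square roots, and exhibits the breakdown when the upper-left entry vanishes:

```python
import cmath

def gauss_decompose(g):
    """Decompose a 2x2 matrix g with g[0][0] != 0 as g = gd^{-1} * gu,
    with gd^{-1} lower triangular, gu upper triangular, and the diagonal
    parts of gd and gu inverse to each other; unique up to signs of the
    square roots."""
    a, b = g[0]
    c, d = g[1]
    if a == 0:
        raise ValueError("Gauss decomposition breaks down here")
    p = cmath.sqrt(a)
    r = cmath.sqrt(d - b * c / a)        # for det g = 1 this is 1/sqrt(a)
    return [[p, 0], [c / p, r]], [[p, b / p], [0, r]]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g = [[0.6, 0.8j], [0.8j, 0.6]]           # an SU(2) matrix, det g = 1
gd_inv, gu = gauss_decompose(g)
rec = matmul(gd_inv, gu)                 # reconstructs g up to rounding
print(all(abs(rec[i][j] - g[i][j]) < 1e-12 for i in range(2) for j in range(2)))

try:                                     # antidiagonal matrices fail, as in the text
    gauss_decompose([[0, 1], [-1, 0]])
except ValueError:
    print("no Gauss decomposition for antidiagonal g")
```

The failing case is precisely the lower dimensional subset of $SU(2)$ that reappears below in connection with the term $S_\d$.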
Observe that the three-form $tr (dgg^{-1})^3$ may be rewritten in terms of $\gu$ and $\gd$ as follows: \begin{equation} \omega= {k\over 12\pi} tr[(dgg^{-1} )^3] =d \left[{k\over 4\pi} tr(d \gd g_\downarrow^{-1}\wedge d\gu \gui) \right]+ \varpi. \eel wzter Here $\varpi$ is a three-form on $G$ supported at the lower dimensional subset of $G$ which does not admit the Gauss decomposition. Now we can rewrite the topological Wess-Zumino term as \begin{equation} WZ(g)={k\over 12\pi}tr \int d^{-1}(dgg^{-1})^3= {k\over 4\pi}tr \int d \gd g_\downarrow^{-1}\wedge d\gu \gui + {k\over 12\pi}\int d^{-1}\varpi . \eel WZvar In this way we removed the symbol $d^{-1}$ in the first term of the right hand side. The topological term \begin{equation} S_\d={k\over 12\pi}\int d^{-1} \varpi \eel var is considered in detail in Appendix B. In contrast with the conventional WZ term the new topological term \rz var influences the equations of motion only on some lower dimensional subset of the target space. Let us return to the action of the GWZW model. We make the substitution \rz decom and introduce a new momentum variable \begin{equation} \Pi = \Pi_\uparrow + \Pi_\downarrow = \gd p g_\downarrow^{-1} - {k\over 4\pi} \left(\6 \gu \gui + \6 \gd g_\downarrow^{-1} \right) \, .
\eel defpi Rescaling $a_-$ according to $\lambda := {k\over 2\pi} a_-$, we now may rewrite the GWZW action in the form \begin{equation} \label{finac} GWZW(g, \Pi, \l)=S_{{\cal P}}(g, \Pi, \l)+ S_\d(g), \end{equation} where $S_\d$ has been introduced in (\ref{var}) and $S_{{\cal P}}$ is given by \begin{eqnarray} \begin{array}{r@{}l} S_{{\cal P}}(g, \Pi, \l) &=\int d^2x\, tr\left\{\Pi\left(\6_0\gu\gui - \6_0\gd g_\downarrow^{-1} \right) + \right. \\[2pt] & \left. +\lambda\left[\gui \6_1\gu - g_\downarrow^{-1}\6_1\gd + {2\pi\over k} \left(\gui \Pi \gu - g_\downarrow^{-1} \Pi \gd \right)\right]\right\} \, . \label{finac2} \end{array} \end{eqnarray} In what follows we systematically disregard the topological term $S_\d$. In Appendix C we prove that if we take \rz var into account, the results change only for wave functions having support on those adjoint orbits in $G$ (one in the case of $G=SU(2)$) on which the Gauss decomposition breaks down. For the formulation of a quantum theory in the $g$-representation, the momentum $\Pi$ should be replaced by some derivative operator on the group. The first term in \rz finac2 represents the symplectic potential on the phase space and suggests the ansatz \begin{equation} g\to g \quad ,\qquad \Pi \to - i \left(\gu {\d\over\d\gu} - \gd {\d\over\d\gd}\right) \, \, . \eel qop At this point some remark on the notational convention is in order: On $GL(N)$ coordinates are given by the entries $g_{ij}$ of the matrix representing an element $g \in GL(N)$. The corresponding basis in the tangent space may be arranged into matrix form via \begin{equation} \left(\frac{\delta}{\delta g} \right)_{ij} \equiv \frac{\delta}{\delta (g_{ji})} \,\; .
\end{equation} With this convention the entries of $g{\6\over\6 g}$ are seen to be the right translation invariant vector fields on $GL(N)$. Given a subgroup $G$ of $GL(N)$, the trace can be used to project the translation invariant derivatives from $GL(N)$ to $G$. In more explicit terms, given an element $Y$ of the Lie algebra of $G$, a right translation invariant derivative on $G$ is defined by $tr\,Yg{\6 \over\6 g}$. The matrix valued derivatives in this paper are to be understood in this sense. In particular, \rz qop means that the quantum operator associated to $tr\,Y\Pi$ is given by \begin{equation} tr\,Y{\Pi} \to -i \; tr\; \left(Y_\uparrow\gu {\d\over\d\gu} - Y_\downarrow\gd {\d\over\d\gd}\right) \; . \end{equation} With this interpretation it is straightforward to prove that commutators of the quantum operators defined in \rz qop reproduce the Poisson algebra of the corresponding classical observables, as defined by the symplectic potential term in \ry finac2 . Let us look for the wave functionals of the GWZW model in the $g$-representation. This means that we must solve the equation \begin{eqnarray} \label{gl} (\gui \6_1\gu - g_\downarrow^{-1}\6_1\gd)\Psi(\gu, \gd)= \qquad \qquad \qquad \qquad \qquad \nonumber \\ \frac{2\pi i}{k}\left(g_\downarrow^{-1}\left(\gd\frac{\delta}{\delta \gd} - \gu\frac{\delta}{\delta \gu}\right)\gd- \gui \left(\gd\frac{\delta}{\delta \gd} -\gu\frac{\delta}{\delta \gu}\right)\gu \right) \Psi(\gu, \gd) \end{eqnarray} for $\Psi$ being a wave functional; the functional derivatives are understood to act on $\Psi$ only (but not on everything to their right). The problem is clearly formulated, but at first sight it is not evident how to solve equation (\ref{gl}). To simplify it we introduce another parameterization of the matrix $g$: \begin{equation} g= h^{-1}g_0 h.
\eel diag Here $g_0$ is diagonal and $h$ is defined up to an arbitrary diagonal matrix which may be multiplied from the left. The part of the operator \rz gl which includes functional derivatives simplifies dramatically in terms of $h$. One can rewrite equation \rz gl as \begin{equation} \left(\gui \6_1\gu - g_\downarrow^{-1}\6_1\gd + \frac{2\pi i}{ k } \frac{\delta}{\delta h} h\right) \Psi[g_0, h] =0, \eel hgl where $\gu, \gd$ are determined implicitly as functions of $h$ and $g_0$ via \begin{equation} g_\downarrow^{-1} \gu = h^{-1}g_0 h \, . \label{g} \end{equation} We discuss the interpretation of equations \rz gl and \rz hgl in Section 2 and solve them efficiently in Section 3. \section{Gauged WZW as a Poisson $\sigma$-Model} The Gauss law equations of the previous section arise naturally in the theory of Poisson $\sigma$-models. We start with a short description of this type of topological $\sigma$-model. The name Poisson $\sigma$-model originates from the fact that its target space $N$ is a Poisson manifold, i.e.\ $N$ carries a Poisson structure ${\cal P}$. We denote coordinates on the two-dimensional world-sheet $M$ by $x^{\mu}, \mu=1,2$ and coordinates on the target space $N$ by $X^i, i=1,\dots ,n$. A Poisson bracket $\{ \, \cdot \, , \, \cdot \, \}$ on $N$ is defined by specifying its value for some coordinate functions: $\{X^i,X^j\} = {\cal P}^{ij}(X)$. Equivalently the Poisson structure may be represented by a bivector \begin{equation} {\cal P}={1\over2} {\cal P}^{ij}(X) \frac{\partial}{\partial X^i}\wedge \frac{\partial}{\partial X^j} \, . \eel Poiss In terms of this tensor the Jacobi identity for $\{ \, \cdot \, , \, \cdot \, \}$ becomes \begin{equation} {\cal P}^{li}\frac{\partial {\cal P}^{jk}}{\partial X^l} + {\cal P}^{lk}\frac{\partial {\cal P}^{ij}}{\partial X^l} + {\cal P}^{lj}\frac{\partial {\cal P}^{ki}}{\partial X^l} =0 \ .
\eel Jid For nondegenerate ${\cal P}$ the notion of a Poisson manifold coincides with that of a symplectic manifold. In general, however, ${\cal P}$ need not be nondegenerate. In the world-sheet picture our dynamical variables are the $X^i$'s and a field $A$ which is a one-form in both world-sheet and target space. In local coordinates $A$ may be represented as \begin{equation} A=A_{i \mu} dX^i \wedge dx^{\mu}. \eel defA The topological action of the Poisson $\sigma$-model consists of two terms, which we write in coordinates: \begin{equation} S_{{\cal P}}(X, A)=\int_{M}\left( A_{i \nu} \frac{\partial X^i}{\partial x^\mu} + \frac{1}{2} {\cal P}^{ij} A_{i \mu} A_{j \nu}\right) dx^{\mu} \wedge dx^{\nu} \, . \eel defS Here $A$ and $X$ are understood as functions on the world-sheet. Both terms in \rz defS are two-forms with respect to the world-sheet. Thus, they may be integrated over $M$. The action \rz defS is obviously invariant with respect to diffeomorphisms of the world-sheet. It is also invariant under diffeomorphisms of the target space which preserve the Poisson tensor. Equations of motion for the fields $X$ and $A$ are \begin{eqnarray} \partial_{\mu} X^i+{\cal P}^{ij}A_{j\mu}=0 \, , \nonumber \\ \partial_{\mu} A_{i\nu}-\partial_{\nu} A_{i\mu} - \frac{\partial {\cal P}^{jk}}{\partial X^i}A_{j\mu} A_{k\nu}=0 \, . \end{eqnarray} Here $\partial_{\mu}$ is the derivative with respect to $x^{\mu}$ on the world-sheet. Let us remark that the two-dimensional BF theory may be interpreted as a Poisson $\sigma$-model. Indeed, if one chooses a Lie algebra with structure constants $f^{ij}{}_k$ as the target space $N$ and uses the natural Poisson bracket \begin{equation} \{ X^i , X^j\}=f^{ij}{}_k X^k , \eel Liealg one reproduces the action of the BF theory \begin{equation} BF(X, A)=\int_M tr X(dA+A^2). \eel BF In the traditional notation $X$ is replaced by $B$ and the curvature $dA+A^2$ of the gauge field is denoted by $F$. 
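The Jacobi identity \rz Jid can be made concrete in the BF example: for a linear bracket with structure constants, $\partial{\cal P}^{jk}/\partial X^l = f^{jk}{}_l$, so \rz Jid reduces to the Lie algebra Jacobi identity. A minimal numerical sketch (function names are ours) for $su(2)$, where $f^{ij}{}_k=\epsilon_{ijk}$:

```python
import itertools, random

# Linear bracket {X^i, X^j} = f^{ij}_k X^k of the BF example, for su(2)
# with structure constants f^{ij}_k = epsilon_{ijk}. For a linear bracket
# dP^{jk}/dX^l = f^{jk}_l, so the Jacobi identity (Jid) becomes a
# polynomial identity in X which we check pointwise.

def eps(i, j, k):
    """Totally antisymmetric epsilon symbol for indices in {0, 1, 2}."""
    return (j - i) * (k - i) * (k - j) // 2

def P(i, j, X):
    """The Poisson tensor P^{ij}(X) = eps_{ijk} X^k."""
    return sum(eps(i, j, k) * X[k] for k in range(3))

def jacobiator(i, j, k, X):
    """Left hand side of (Jid) for the linear su(2) bracket."""
    return sum(P(l, i, X) * eps(j, k, l) +
               P(l, k, X) * eps(i, j, l) +
               P(l, j, X) * eps(k, i, l) for l in range(3))

random.seed(1)
X = [random.uniform(-1, 1) for _ in range(3)]
print(all(jacobiator(i, j, k, X) == 0
          for i, j, k in itertools.product(range(3), repeat=3)))
```

The same check applied to a nonlinear ${\cal P}^{ij}(X)$ would require the derivative terms of \rz Jid explicitly; for the linear case they are the constant $\epsilon$ symbols above.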
The class of Poisson $\sigma$-models includes also nontrivial examples of two-dimensional theories of gravity (for details see \cite{Th2,Th}). We argue that the gauged WZW model is equivalent to a Poisson $\sigma$-model coupled to the term (\ref{var}). The target space is the group $G$, parameterized by $\gu$ and $\gd$. The $(1, 1)$-form $A$ is identified readily from (\ref{finac2}): \begin{equation} A = \Pi \left( d\gu \gui - d\gd g_\downarrow^{-1} \right)\wedge dx^1 -\l \left( \gui d\gu - g_\downarrow^{-1} d\gd \right)\wedge dx^0 \, . \label{A} \end{equation} Here we have interpreted the terms linear in $\Pi$ and $\l$. Then the part of the action quadratic in $\Pi$ and $\l$ directly determines the Poisson structure. In our formulation of the general Poisson $\sigma$-model \rz defS the indices $i,\m$ of $A$ correspond to a coordinate basis $dX^i$ in $T^*N$ and $dx^\m$ in $T^* M$. In such a formulation we simply have to replace $A_{i\mu}$ by ${\6\over\6X^i}$ in the quadratic part of the action to obtain the Poisson bivector \rz Poiss as the `coefficient' of the volume-form $dx^\m \wedge dx^\n$. Each of the matrix-valued one-forms $ d\gu \gui - d\gd g_\downarrow^{-1} $ and $\gui d\gu - g_\downarrow^{-1} d\gd $ in the present expression \rz A for $A$, however, represents a non-holonomic basis in the cotangent bundle of the target space $G$. In such a case the corresponding components of $A$, i.e.\ $\Pi$ and $\l$ in our notation, have to be replaced by the respective dual derivative matrices.
Applying this simple recipe to the quadratic part of \ry finac2 , we find the Poisson bivector on $G$: \begin{equation} \begin{array}{c} \Pi \to \left(\gu {\6\over\6\gu} - \gd {\6\over\6\gd}\right)\, ; \quad \l \to \left({\6\over\6\gu} \gu - {\6\over\6\gd} \gd\right) \\[5pt] \Rightarrow \quad {\cal P} = {4\pi\over k }\, tr \, \left( {\6\over\6\gu} \gu - {\6\over\6\gd} \gd \right) \wedge \\[2pt] \left(\gui \left(\gd \frac{\partial}{\partial \gd} -\gu \frac{\partial}{\partial \gu} \right)\gu - g_\downarrow^{-1} \left(\gd \frac{\partial}{\partial \gd} -\gu \frac{\partial}{\partial \gu}\right)\gd \right) \ . \end{array} \eel WZWPois Using the parameterization (\ref{diag}, \ref{g}), this expression can be formally simplified to \begin{equation} {\cal P} = {4\pi \over k }\, tr \, \left( {\6\over\6\gu} \gu - {\6\over\6\gd} \gd \right) \wedge {\6\over\6 h} h \ . \eel WZWPois2 For completeness we should now check that this bivector fulfills the Jacobi identity \ry Jid . In our context the simplest way to do so is to recall that the constraints of the GWZW model are first class; this suffices, because one can show that the constraints of any action of the form \rz defS are first class precisely if ${\cal P}^{ij}$ obeys \ry Jid . Certainly one can verify the Jacobi identity also by some direct calculation, and in fact this is done implicitly when establishing \rz clYB and \rz G* below. The above Poisson bracket on $G$ requires further comment. For this purpose it is useful to introduce some new object. We always assume that the group $G$ is realized as a subgroup in the group of $n$ by $n$ matrices. Then the following matrix $r$ acting in $C^n\otimes C^n$ is important for us: \begin{equation} r=\frac{1}{2}\sum_i h^i\otimes h_i + \sum_\alpha t_{-\alpha}\otimes t^\alpha. \eel rmatrix Here $h^i$ and $h_i$ are generators of dual bases in the Cartan subalgebra, $t^\alpha$ and $t_{-\alpha}$ are positive and negative roots, respectively. The matrix $r$ is usually called the classical $r$-matrix.
It satisfies the classical Yang-Baxter equation in the triple tensor product, which reads \begin{equation} [r_{12} , r_{13}] + [r_{12} , r_{23}] + [r_{13} , r_{23}]=0. \eel clYB Here $r_{12}=r \otimes 1$ is embedded in the product of the first two spaces, and so on. An important property of the $r$-matrix is the following \begin{equation} tr_{1,2} r A^1 B^2= tr \, A_\uparrow B_\downarrow + \frac{1}{2} tr \, A_d B_d \, , \eel Traces where the trace in the left hand side is evaluated in the tensor product of two spaces and \begin{equation} A^1 \equiv A\otimes 1 \ , \ B^2 \equiv 1\otimes B \ . \eel tens12 Now we are ready to represent the bracket \rz WZWPois in a more manageable way. As the most natural coordinates on the group are matrix elements, we are interested in Poisson brackets of the entries of $\gu$ and $\gd$. Using the shorthand notations \rz tens12 and the definition of the $r$-matrix, we arrive at the following elegant result \begin{eqnarray} \label{G*} \{ \gu^1 , \gu^2\}=\frac{4\pi}{k}[r, \gu^1 \gu^2] \ , \nonumber \\ \{ \gd^1 , \gd^2\}=\frac{4\pi}{k}[r, \gd^1 \gd^2] \ , \\ \{ \gd^1 , \gu^2\}=\frac{4\pi}{k}[r, \gd^1 \gu^2] \ . \nonumber \end{eqnarray} We omit the calculation which leads to \ry G* , as it is rather lengthy but straightforward. Each equation in \rz G* provides a Poisson bracket between any matrix element of the matrix with superscript $1$ and any matrix element of the matrix with superscript $2$. In order to clarify this statement, we rewrite the first equation in components: \begin{equation} \{ \gu^{ij} , \gu^{kl}\}={4\pi \over k } (r^{i\tilde{i}}_{k\tilde{k}}\gu^{\tilde{i}j} \gu^{\tilde{k}l}-\gu^{i\tilde{j}} \gu^{k\tilde{l}}r^{\tilde{j}j}_{\tilde{l}l}). \eel r-comp Here summation over the indices with a tilde in the right hand side is understood. The remaining equations of \rz G* can be rewritten in the same fashion. The formulae \rz G* define the Poisson bracket only on the subset of the group $G$ which admits the Gauss decomposition \ry decom .
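The classical Yang-Baxter equation \rz clYB can be verified numerically for $sl(2)$ in the fundamental representation. The normalization below is our reading of \rz rmatrix with the trace form: dual Cartan bases $h^1=h$, $h_1=h/2$, and mutually dual root vectors $t^\alpha=e$, $t_{-\alpha}=f$, giving $r=\frac14 h\otimes h + f\otimes e$:

```python
import numpy as np

# sl(2) in the fundamental representation.
h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
I = np.eye(2)
terms = [(h, 0.25 * h), (f, e)]          # r = sum_k a_k (x) b_k

def embed(a, b, slot_a, slot_b):
    """Place a (x) b into two of the three factors of C^2 (x) C^2 (x) C^2."""
    mats = [I, I, I]
    mats[slot_a], mats[slot_b] = a, b
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

r12 = sum(embed(a, b, 0, 1) for a, b in terms)
r13 = sum(embed(a, b, 0, 2) for a, b in terms)
r23 = sum(embed(a, b, 1, 2) for a, b in terms)

comm = lambda x, y: x @ y - y @ x
cybe = comm(r12, r13) + comm(r12, r23) + comm(r13, r23)
print(np.allclose(cybe, 0))             # classical Yang-Baxter equation (clYB)
```

The same embedding routine gives the $r_{12}$, $r_{13}$, $r_{23}$ of \rz clYB for any matrix realization of the $r$-matrix.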
One can easily recover the Poisson brackets of matrix elements of the original matrix $g$. We leave this as an exercise to the reader. The result may be presented in tensor notation: \begin{equation} \{ g^1, g^2\}=\frac{4\pi}{ k }[g^1 r g^2 + g^2 r' g^1 - r' g^1 g^2 - g^1 g^2 r]. \eel 4r Here $r'$ is obtained from $r$ by exchanging the two copies of the Lie algebra: \begin{equation} r'= \frac{1}{2}\sum_i h^i\otimes h_i + \sum_\alpha t_\alpha \otimes t^{-\alpha}. \eel r' The Poisson bracket \rz 4r is quadratic in the matrix elements of $g$ and obviously smooth. This means that the bracket \rz G* , which has been defined only on the part of the group $G$ where the Gauss decomposition is applicable, may now be continued smoothly to the whole group. E.g.\ for the case of $G=SU(2)$ it is straightforward to establish that the right-hand side of \ry 4r , and thus also the smoothly continued Poisson tensor ${\cal P}$, vanishes at antidiagonal matrices $g \in SU(2)$. The latter represent precisely the one-dimensional submanifold of $SU(2)$ where a decomposition (\ref{decom}) for $g$ does not exist. It is worth mentioning that the bracket \rz 4r appeared first in \cite{Sem} within the framework of the theory of Poisson-Lie groups. The group $G$ equipped with the Poisson bracket \rz 4r may be used as a target space of the Poisson $\sigma$-model. We have just proved that in the Hamiltonian formulation (the world-sheet being a torus or cylinder) this Poisson $\sigma$-model coupled to the topological term \rz var coincides with the gauged WZW model. \section{Solving the Gauss Law Equation} This section is devoted to the quantization of Poisson $\sigma$-models. More exactly, we are interested in the Hilbert space of such a model in the Hamiltonian picture. This implies that we need a distinguished time direction on the world-sheet and thus we are dealing with a cylinder.
The remarkable property of Poisson $\sigma$-models is that the problem of finding the Hilbert space in this two-dimensional field theory may actually be reduced to a quantum mechanical problem. This has been realized in \cite{Th} and here we give only a short account of the argument\footnote{But cf.\ also Appendix C.}. It follows from \rz defS that in the Hamiltonian formulation the variables $X^i$ and $A_{i1}$ are canonically conjugate to each other (this changes slightly when the Poisson $\sigma$-model is coupled to the term $S_\d$, see Appendix C). In the $X$-representation of the quantum theory the $X^i$ act as multiplicative operators and the $A_{i1}$ act as functional derivatives \begin{equation} A_{i1}=i\frac{\delta}{\delta X^i} \ . \eel A=dX The components $A_{i0}$ enter the action linearly. They are naturally interpreted as Lagrange multipliers. The corresponding constraints read \begin{equation} G^i \equiv \partial_1 X^i + {\cal P}^{ij}(X) A_{j1} \, \approx \, 0 \; . \eel consXA Combining \rz A=dX and \ry consXA , one obtains an equation for the wave functional in the $X$-representation \begin{equation} \left(\partial_1 X^i + i {\cal P}^{ij}(X)\frac{\d}{\d X^j}\right)\Psi[X]=0 \ . \eel PsiX Equations \rz gl and \rz hgl are particular cases of this equation. In order to solve \ry PsiX , we first turn to a family of finite dimensional systems on the target space $N$ defined by the Poisson structure ${\cal P}$. As the target space of a Poisson $\sigma$-model carries a Poisson bracket, it may be considered as the starting point of a quantization problem. Namely, one can consider the target space as the phase space of a finite dimensional Hamiltonian system, which one may try to quantize. The main obstruction on this way is that the Poisson bracket ${\cal P}$ may be degenerate.
This means that if we select some point in the target space and then move it by means of all possible Hamiltonians, we still do not cover the whole target space with trajectories but rather stay on some surface ${\cal S} \subset N$. The simplest example of such a situation is a three-dimensional space $N=\mbox{{\sl I \hspace{-0.8em} R}}^3$ with the Poisson bracket \begin{equation} \{ X^i , X^j \}=\epsilon_{ijk} X^k \ . \eel su2 This Poisson bracket describes a three-dimensional angular momentum, and it is well-known that the square of the length \begin{equation} R^2:=\sum_i (X^i)^2 \end{equation} commutes with each of the $X^i$. So $R^2$ cannot be changed by means of Hamiltonian flows and the surfaces ${\cal S}$ are two-dimensional spheres. If the Poisson bracket ${\cal P}$ is degenerate, we cannot use $N$ as a phase space. However, if we restrict to some surface ${\cal S}$ (these surfaces are also called symplectic leaves), the Poisson bracket becomes nondegenerate and one can try to carry out the quantization program. In the functional integral approach we are interested in the exponential $\exp(i{\cal A})$ of the classical action ${\cal A}$, which is the main ingredient of the quantization scheme. In order to construct the classical action ${\cal A}$, we invert the matrix of Poisson brackets (restricted to some particular surface ${\cal S}$) and obtain a symplectic two-form \begin{eqnarray} \label{sympl} \O=\frac{1}{2} \O_{ij} dX^i\wedge dX^j \ , \nonumber \\ \sum_k \O_{ik} {\cal P}^{jk}=\delta_i^j \ . \end{eqnarray} As a consequence of the Jacobi identity the form $\O$ is closed \begin{equation} d \O=0 \ \eel closed and we can look for a one-form $\a$ which solves the equation \begin{equation} d\a=\O \ . \eel pdq If $\O$ belongs to some nontrivial cohomology class, $\a \sim pdq$ does not exist globally. Still the expression \begin{equation} \Psi[X]:=\exp\left(i \int d^{-1}\O \right) \eel ansatz makes sense, if the cohomology class of $\O$ is integral, i.e.
if \begin{equation} \oint_\sigma \O = 2\pi n \; , \quad n \in Z \eel 63 for all two-cycles $\sigma \subset {\cal S}$; in this case ${\cal A}=\int d^{-1}\O$ is defined $\mbox{mod} \; 2\pi$ and \rz ansatz is one-valued (cf.\ also \rz defA(X) below). As an alternative to the functional integral approach we might use the machinery of geometric quantization \cite{Wood} to obtain condition \ry 63 : Within the approach of geometric quantization it is a well-known fact that a Hamiltonian system $({\cal S}, \O)$ may be quantized consistently only if the symplectic form $\O$ belongs to an integral cohomology class of ${\cal S}$. In the example of two-dimensional spheres in the three-dimensional target space considered above, the requirement that the symplectic leaf be quantizable, obtained in either of the two approaches, implies that the radius $R$ of the sphere is either integer or half-integer (for more details confer \cite{NR,Wood}). This is a manifestation of the elementary fact that a three-dimensional spin has to be either integer or half-integer. After this excursion into Hamiltonian mechanics we return to equation (\ref{PsiX}). It is possible to show that formula \rz ansatz provides a solution of equation \ry PsiX . Moreover, any solution of \rz PsiX can be represented as a linear combination of expressions \rz ansatz corresponding to different integral symplectic leaves \cite{Th}. Let us explain this in more detail. The wave functional $\Psi[X]$ of the field theory depends on $n$ functions $X^i$ on the circle. They define a parameterized closed trajectory (loop) in the target space $N$. Now it is a more or less immediate consequence of \rz PsiX that the quantum constraints of the field theory restrict the support of $\Psi$ to trajectories (loops) $X(x^1)$ which lie completely within a symplectic leaf ${\cal S}$ (just use coordinates $X^i$ in the target space adapted to the foliation of $N$ into symplectic leaves).
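The sphere example can be checked numerically. We assume the induced bracket on the leaf of radius $R$ is $\{\theta,\phi\}=1/(R\sin\theta)$, equivalently $\O=R\sin\theta\, d\theta\wedge d\phi$ (obtained by inverting ${\cal P}$ as in (\ref{sympl})); the sketch below verifies this against $\{X^i,X^j\}=\epsilon_{ijk}X^k$ and recovers the half-integrality of $R$ from the condition \rz 63 :

```python
import math, random

def X(R, th, ph):
    """Point on the leaf (sphere of radius R) in spherical coordinates."""
    return (R*math.sin(th)*math.cos(ph), R*math.sin(th)*math.sin(ph), R*math.cos(th))

def eps(i, j, k):
    return (j - i) * (k - i) * (k - j) // 2

def bracket_via_leaf(R, th, ph, i, j, d=1e-6):
    """{X^i, X^j} computed through {theta, phi} = 1/(R sin th) and the
    chain rule, with central-difference derivatives."""
    dth = [(X(R, th + d, ph)[m] - X(R, th - d, ph)[m]) / (2*d) for m in range(3)]
    dph = [(X(R, th, ph + d)[m] - X(R, th, ph - d)[m]) / (2*d) for m in range(3)]
    return (dth[i]*dph[j] - dph[i]*dth[j]) / (R*math.sin(th))

random.seed(7)
R, th, ph = 1.5, random.uniform(0.1, 3.0), random.uniform(0, 6.28)
x = X(R, th, ph)
ok = all(abs(bracket_via_leaf(R, th, ph, i, j)
             - sum(eps(i, j, k)*x[k] for k in range(3))) < 1e-6
         for i in range(3) for j in range(3))
print(ok)

# Total symplectic area of the leaf: integral of R sin(th) dth dph = 4*pi*R,
# so (63) demands 4*pi*R = 2*pi*n, i.e. R must be integer or half-integer.
area = 4*math.pi*R
print(area / (2*math.pi))               # = 2R
```

This is the "spin is integer or half-integer" statement of the text in coordinates.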
A further analysis, recapitulated in part in Appendix C within the more general framework of a Poisson $\s$-model coupled to a topological term, shows that these leaves have to be quantizable and that admissible quantum states are indeed all of the form \rz ansatz or a superposition of such functionals. In the case that ${\cal S}$ is simply connected, \rz ansatz may be rewritten more explicitly as: \begin{equation} \Psi[X] \propto \exp (i{\cal A}(X)) \; \; , \;\; \quad {\cal A}(X) = \int_\Sigma \O \quad (\mbox{mod} \; 2 \pi) \; , \eel defA(X) where the two-dimensional surface $\S$ is bounded by the closed path $X(x^1)$ lying in some quantizable leaf ${\cal S}$.\footnote{In the language of Appendix C the definition \rz defA(X) corresponds to the choice of a constant (point-like) `loop of reference' for $\Psi_0$.} As $\O$ belongs to an integral cohomology class (by the choice of ${\cal S}$), \rz defA(X) is a globally well-defined functional of $X(x^1)$. As stated before, any such functional solves the quantum constraints (\ref{PsiX}) and, vice versa, any solution to the latter has to be a superposition of states \ry defA(X) . On the other hand, \rz ansatz or \rz defA(X) may also be reinterpreted as an exponentiated point-particle action. $x^1$ is then the `time-parameter' of the trajectory $X(x^1)$, which is required to be periodic in time. So we obtain the following picture for the relation between the Poisson $\sigma$-model and finite dimensional quantum mechanics: In order to obtain the Hilbert space of the $\s$-model on the cylinder, one may regard the target space as a phase space of a dynamical system. This space splits into a set of surfaces on which the Poisson bracket is nondegenerate, giving rise to a family of finite dimensional systems. Some of these systems are quantizable in the sense that the cohomology class of the symplectic form is integral.
To each quantum system generated in this way there corresponds a linearly independent vector in the Hilbert space ${\cal H}$ of the $\s$-model. In the case that the respective (quantizable) symplectic leaf ${\cal S}$ is not simply connected, however, there is a linearly independent vector in ${\cal H}$ for {\em any} element of $\pi_1({\cal S})$. This idea may be successfully checked for BF theories in two dimensions (for more details confer \cite{Th}). Now we apply the machinery of this section to the GWZW model. First, we should look at the surfaces ${\cal S}$ in the group $G$ on which the restriction of the Poisson bracket \rz G* is nondegenerate. For generic leaves this problem has been solved in \cite{Sem}. In order to make ${\cal P}$ nondegenerate, one should restrict to some conjugacy class in the group \begin{equation} g=h^{-1}g_0h \ . \eel conju Each conjugacy class may be used as the phase space of a Hamiltonian system. However, in the case of $G=SU(2)$ we found that the Poisson bracket vanishes on the subset of antidiagonal matrices. Hence, any antidiagonal matrix represents a zero-dimensional symplectic leaf in $G=SU(2)$. So, some exceptional conjugacy classes may further split into families of symplectic leaves. This occurs precisely where the Gauss decomposition does not hold. The form $\O$ on a generic orbit characterized by $g_0$ has been recently evaluated in \cite{AM} (a presentation more adapted to the physical audience can be found in \cite{AT}) and has the form \begin{equation} \Omega = { k \over 4\pi} tr\left[h^{-1}dh\wedge (\gui d\gu -g_\downarrow^{-1} d\gd)\right] \, , \eel omega where $\gu$, $\gd$, and $h$ are related through \ry g . The corresponding point particle action or phase factor of the quantum states, respectively, is \begin{equation} {\cal A}_{GWZW}(g)={ k \over 4\pi} \int d^{-1} tr\left[h^{-1}dh\wedge (\gui d\gu - g_\downarrow^{-1} d\gd)\right] \ .
\eel geom As outlined above, quantum states are assigned only to integral symplectic leaves. In Appendix C the corresponding integrality condition \rz 63 is evaluated explicitly for the example of $G= SU(2)$. The exceptional conjugacy classes require some special attention. From the point of view of the {\em pure} Poisson $\sigma$-model, a quantum state corresponds to any integral symplectic leaf which the respective conjugacy class may contain. For the case of $SU(2)$, e.g., there is one exceptional (two-dimensional) conjugacy class \rz conju characterized by \mbox{$tr \, g=0$}. It contains the one-dimensional submanifold ${\cal C}$ of antidiagonal matrices in $SU(2)$. Any point of ${\cal C}$ is a zero-dimensional symplectic leaf and, because zero-dimensional leaves are always quantizable, one would be left with a whole family of states corresponding to this exceptional conjugacy class. However, we know that in order to describe the GWZW model in full generality, we need to add the topological term $S_\d$ to the pure Poisson $\sigma$-part of the action. Also, appropriate boundary conditions on $A$ have to be taken into account at the part of $G$ where the Gauss decomposition breaks down. Whereas $S_\d$ and these boundary conditions may be seen to be irrelevant for the quantum states corresponding to generic conjugacy classes, they decisively change the picture at the exceptional ones. E.g.\ for $G=SU(2)$ the net result is that only one quantum state, or even none, corresponds to the exceptional conjugacy class, depending on whether $k$ is even or odd, respectively. Further details on this may be found in Appendix C. From \rz geom it is straightforward to evaluate the cocycle $\phi_{GWZW}$ which controls the behavior of the wave functional with respect to gauge transformations.
E.g., for the case of infinitesimal transformations \begin{equation} \delta g= - [\epsilon, g] \ , \ \delta h= h\epsilon \eel infgauge the new gauge cocycle reads \begin{equation} \phi_{GWZW}(g, \epsilon)={ k \over 4\pi} \int tr \, \epsilon \: (\gui d\gu -g_\downarrow^{-1} d\gd)\ . \eel WLu An integrand of this type has been studied in the framework of Poisson-Lie group theory \cite{WLu}. However, the gauge algebra interpretation is new. In order to check that the cocycle $\phi_{GWZW}$ is nontrivial, it is convenient to use the same trick as in the Introduction. Indeed, choose both $g$ and $\epsilon$ to be diagonal. Then any trivial cocycle vanishes, but \rz WLu is not equal to zero for generic diagonal $g$ and~$\epsilon$. \section*{Discussion} Let us briefly recollect and discuss the results of the paper. Using the Hamiltonian formulation we have proved that the GWZW model is equivalent to a Poisson $\sigma$-model coupled to the topological term $S_\d$: \begin{equation} GWZW(g, A)=S_{\cal P}(g, A)+S_\d(g). \eel S+S2 It is natural to conjecture that this equivalence holds true for a surface of arbitrary genus. Let us mention that originally the GWZW model is formulated as a Witten type topological field theory. This means that the action includes the kinetic term and explicitly depends on the world-sheet metric. Then one can use some supersymmetry to prove that in fact the terms including the world-sheet metric do not influence physical correlators. The Poisson $\sigma$-model provides a Schwarz type formulation of the same theory. The right hand side of \rz S+S2 is expressed in terms of differential forms exclusively and does not include any metric from the very beginning. By now the GWZW model has been solved in many ways, whereas the general Poisson $\sigma$-model has not been investigated much. Applying various methods which work for the GWZW model to Poisson $\sigma$-models, one can hope to achieve two goals.
First, one can select the methods which work in a more general framework and, hence, which are more reliable. This is especially important when one deals with functional integrals. The other ambitious program is to solve an arbitrary Poisson $\sigma$-model coupled to a topological term explicitly. The solution should include an evaluation of the partition function and topological correlators in terms of the data of the target space. In this respect experience with the GWZW model may be very useful. Another issue which deserves some comment is the relation between Quantum Groups and WZW models. This issue has been much studied in the literature \cite{BB}. The picture of the quantum symmetry in WZW models may be described in short as follows. Separating left-moving and right-moving sectors of the model, we add a finite number of degrees of freedom to the system. The Quantum Group symmetry is a gauge symmetry acting on the left- and right-movers. The physical fields are invariants of the Quantum Group action. Usually one can choose some special boundary conditions when separating the sectors in order to make the Quantum Group symmetry transparent. Let us compare this picture to the considerations of the present paper. The gauged WZW model appears to be equivalent to some Poisson $\sigma$-model with the gauge group $G$ as target space. We derive the Poisson bracket \rz 4r directly from the GWZW action. This bracket is quite remarkable. Quantizing the bracket \ry 4r , one gets the generating relations of the Quantum Group \cite{FRT}. We have found that the gauge dependence of the wave functionals of the GWZW model is described by the classical action defined on the symplectic leaves. This type of action for the bracket \rz 4r has been considered in \cite{AT}. It is proved there that such an action possesses a symmetry with respect to the Quantum Group. So, confirming our expectations, the Quantum Group governs the non-physical gauge degrees of freedom of the GWZW model.
The new element of the picture is that we do not have to introduce any new variables or choose specific boundary conditions in order to discover the Quantum Group structure. Let us remark that the treatment may look somewhat more natural for GWZW than for the original WZW model. The reason is that GWZW may be viewed as a chiral theory from the very beginning\footnote{We are grateful to K.Gawedzki for this remark.}. The only choice which we make is how we bosonize the WZW action. We conclude that the Quantum Group degrees of freedom are introduced by bosonization. It would be interesting to explore this idea from a more mathematical point of view. \newpage \section*{Appendices} \begin{appendix} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \section{Gauge Cocycles and Integral Coadjoint Orbits} Here we study in detail the one-cocycle \begin{equation} \phi(E, g)=\oint tr \, E \partial gg^{-1} dx \eel Aphi of the loop group $LG$, which plays the role of the gauge group on the circle. Along with the additive cocycle $\phi$ we consider a multiplicative cocycle \begin{equation} \Phi(E, g)=\exp (i \phi(E, g)) \, . \end{equation} The counterparts of the cocycle and coboundary conditions in the multiplicative setting are \begin{eqnarray} \Phi(E,g_1g_2)&=&\Phi(E^{g_1}, g_2) \Phi(E,g_1) \, , \\ \Phi(E, g)&=&\tilde{\Phi}(E^g)\tilde{\Phi}(E)^{-1} \, . \label{GL} \end{eqnarray} Let us observe that one can consistently restrict the region of definition of $E$ from the loop algebra $l{\cal G}$ to any subspace invariant with respect to the action of the gauge group by conjugations. Let us choose such a subset in the form \begin{equation} E=h(x)^{-1} E_0 h(x) \eel coorb for $E_0$ being a constant diagonal matrix. For fixed $x$ equation \rz coorb defines a conjugacy class in the algebra ${\cal G}$ (coadjoint orbit).
The diagonal matrix $E_0$ may be decomposed using a basis of fundamental weights $w_i$ in the Cartan subalgebra: \begin{equation} E_0= \sum_i E_0^i w_i. \eel int In the case of compact groups the cocycle $\Phi$ is trivial if and only if all coefficients $E_0^i$ are integer. To demonstrate this, let us present the explicit solution for $\tilde{\Phi}$. It is given by \begin{equation} \tilde{\Phi}=\exp \left(i \oint tr \, E_0 \partial hh^{-1} dx \right) \,. \eel Atil It is easy to check that \rz Atil provides a solution of the coboundary problem. It is less evident that \rz Atil is well-defined. The group element $h(x)$ is defined by $E(x)$ only up to an arbitrary diagonal left multiplier. When the coefficients in \rz int are integral, this multiplier does not influence \ry Atil . For non-compact groups, though, \rz Atil may turn out to be well-defined even for continuously varying choices of $E_0$. To establish contact with the presentation in the main text, one may observe that the additive coboundary (generically not well-defined) \begin{equation} \tilde{\phi}=\oint tr \, E_0 \partial hh^{-1} dx \eel plops may be reformulated in terms of the (well-defined) Kirillov form on the coadjoint orbit, \begin{equation} \O = tr \, E_0 (dhh^{-1})^2 = \2 \, tr \, dE \wedge h^{-1}dh \, , \eel Kir as \begin{equation} \ti \phi = \int_\S\O \; , \quad \6 \S = E(x) \, . \eel formel The ambiguity in the choice of $\Sigma$ does not influence the multiplicative cocycle $\tilde\Phi$ iff the Kirillov form is integral, i.e.\ iff $\O$ satisfies \rz 63 . For compact groups the integrality condition \rz 63 on $\O$ coincides with the aforementioned condition on the $E_0^i$. If \rz 63 is fulfilled with $n = 0$, not only the multiplicative but also the additive cocycle $\phi$ becomes trivial. This occurs, e.g., in the non-compact case ${\cal G}=sl(2,\mbox{{\sl I \hspace{-0.8em} R}})$.
It is worth mentioning that \rz plops is the action for a quantum mechanical system with the phase space being a coadjoint orbit. We consider a similar system in Section 3. There the quantum mechanical phase space is a conjugacy class in the group and the analogue of the Kirillov form \rz Kir is \ry omega , the Kirillov form of the Quantum Group. We conclude that for certain restricted subspaces of the loop algebra the cocycle $\Phi(E, g)$ may become trivial. In two dimensions the wave functionals in the momentum representation are supported on these special subspaces. The corresponding coboundary $\tilde{\Phi}$ governs the gauge dependence of the wave functionals: \begin{equation} \Psi= \tilde{\Phi} \, \Psi_0 \eel tPhi for $\Psi_0$ being a gauge independent distribution with support on loops in integral coadjoint orbits. Let us stress again that the triviality condition \rz GL is actually an integrated form of the Gauss law (as shown in the introduction). Then \rz tPhi provides a universal solution of the Gauss law. In the example which we considered in this Appendix we observe a new phenomenon in the theory of gauge cocycles. A nontrivial cocycle may shrink its support in order to become trivial and produce a physical wave functional. This may lead (as in the example of 2D YM theory with compact gauge group) to a discrete spectrum in the momentum representation. \section{Topological Term for $G=SU(2)$} \setcounter{equation}{0} The topological Wess-Zumino term in the WZW model is usually represented in the form \begin{equation} WZ(g)=\frac{ k }{12\pi} tr \int_{\Sigma} d^{-1}(dgg^{-1})^3. \eel WZ The integration is formally performed over the two-dimensional surface $\Sigma$. (Here $\Sigma$ is the image of the world sheet $M$ under the map $g(x): M \ra G$). The symbol $d^{-1}$ has been introduced by Novikov \cite{Nov} and applied to construct multivalued action functionals in \cite{Nov}, \cite{JW}. It is understood in the following way.
One chooses a three-dimensional submanifold $B$ in the group $G$ so that $\partial B=\Sigma$. The integration over $\Sigma$ is replaced by an integration over $B$: \begin{equation} WZ(g)=\frac{ k }{12\pi}tr \int_B (dgg^{-1})^3. \eel WZB The definition \rz WZB is ambiguous as $B$ may be chosen in many ways. The possible ambiguity in the definition of $WZ(g)$ is an integral over the union of two possible $B$'s: \begin{eqnarray} \label{DWZ} \Delta WZ(g)&=&\frac{ k }{12\pi} tr (\int_{B'} (dgg^{-1})^3 - \int_{B''} (dgg^{-1})^3) \; = \nonumber \\ &=& \frac{ k }{12\pi} tr \int_{B'\cup \bar{B''}} (dgg^{-1})^3 \; . \end{eqnarray} Here we denote by $\bar{B''}$ the manifold $B''$ with opposite orientation. Let us restrict our consideration to the case of $G=SU(2)$. The only nontrivial three-dimensional cycle in $SU(2)$ is the group itself. It implies that the integral (\ref{DWZ}) is always proportional, with an integer coefficient, to the normalization integral \begin{equation} I=\frac{ k }{12\pi}tr \int_G (dgg^{-1})^3= 2\pi k \; . \eel WZnorm Here we used the fact that the volume of the group $SU(2)$ with respect to the form $tr(dgg^{-1})^3$ is equal to $24\pi^2$. This calculation explains why one should choose integer values of the coupling constant $ k $. In this case the Wess-Zumino term $WZ(g)$ is defined modulo $2\pi$ and its exponential $\exp(iWZ(g))$ is well-defined. Usually $WZ(g)$ is referred to as a topological term because the defining three-form $tr (dgg^{-1})^3$ on the group $G$ is closed and belongs to a nontrivial cohomology class. This implies that the integral (\ref{WZ}) does not change when we fix $\Sigma$ and vary $B$ in a smooth way. Choosing the proper coefficient $k/12\pi$, $k \in N$, we get a three-form which belongs to an integer cohomology class. As we have seen, this ensures that $\exp(iWZ(g))$ is preserved even by a topologically nontrivial change of $B$.
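The value $24\pi^2$ for the volume of $SU(2)$ with respect to $tr(dgg^{-1})^3$, and hence $I=2\pi k$, can be cross-checked by computer algebra. A minimal sympy sketch (the Hopf-like coordinates on $SU(2)\simeq S^3$ and all variable names are our own illustrative choices, not taken from the text):

```python
import sympy as sp

# Hopf-like coordinates on SU(2) ~ S^3 (an illustrative choice of chart)
al, be, ga = sp.symbols('alpha beta gamma', real=True)
a = sp.cos(al)*sp.exp(sp.I*be)        # |a|^2 + |b|^2 = 1
b = sp.sin(al)*sp.exp(sp.I*ga)
g = sp.Matrix([[a, b], [-sp.conjugate(b), sp.conjugate(a)]])
M = [sp.simplify(g.diff(x)*g.H) for x in (al, be, ga)]   # dg g^{-1}; g^{-1} = g^dagger

# coefficient of tr(dg g^{-1})^3 in d(alpha)^d(beta)^d(gamma):
# the antisymmetrized sum implements the wedge product
eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (1, 0, 2): -1, (2, 1, 0): -1}
coeff = sp.simplify(sum(s*(M[i]*M[j]*M[k]).trace() for (i, j, k), s in eps.items()))

vol = sp.integrate(coeff, (al, 0, sp.pi/2), (be, 0, 2*sp.pi), (ga, 0, 2*sp.pi))
print(sp.simplify(vol))   # +-24*pi^2 (sign = orientation), so I = (k/12 pi) vol = +-2 pi k
```

Up to the orientation of the chosen chart, the integral reproduces $24\pi^2$ and thus the normalization \rz WZnorm .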
So the fact that the three-form $$ \omega=\frac{k}{12\pi} tr (dgg^{-1})^3 $$ is closed and belongs to integer cohomology of $G$ makes the action $WZ(g)$ well-defined. However, it is not true that $WZ(g)$ is already determined by the cohomology class of $\omega$. If we choose some other representative in the same class (as, e.g., $\varpi$ in Eq.\ \rz gov below), we get a new topological term, which is well-defined for the same reason as $WZ(g)$. In fact, the new action will differ from $WZ(g)$. The reason is that the integral \rz WZB is defined over a manifold with boundary and, hence, it is not determined by the cohomology class of the integrand. It depends on the representative as well. Now we are prepared to introduce a new topological term for the WZW model. As explained in Section 1, we use the Gauss decomposition for the group element $g$: \begin{equation} g=g_\downarrow^{-1} \gu \ . \eel decomA Observe that the Gauss decomposition is not applicable for some elements in $SU(2)$. The Gauss components $\gd, \gu$ do not exist on the subset of antidiagonal unitary matrices. In the parameterization \begin{equation} g=\left(\begin{array}{cc} z &\sqrt{1-z\bar z}\,e^{i\phi} \\[6pt] -\sqrt{1-z\bar z}\,e^{-i\phi} & \bar z \end{array}\right)\quad , \qquad z\in C,\; |z| \le 1,\; \phi \in [0,2\pi ) \eel parm these elements are given by $z=0$. They form a circle $\cal C$ parameterized by $\phi$. We can apply the Gauss decomposition on the rest of the group in order to remove the symbol $d^{-1}$ from the topological term $\o$. Indeed, consider a two-form \begin{equation} \gamma={k \over 4\pi} \, tr\, (d \gd g_\downarrow^{-1} \wedge d\gu \gui ) \eel gamma on the complement of $\cal C$. It is easy to verify the relation \begin{equation} d\gamma= \frac{1}{3} \omega.
\eel gamom Here we have used the fact that \begin{equation} tr (d \gd g_\downarrow^{-1})^3= tr (d\gu \gui)^3=0, \eel vanish3 which holds since the diagonal parts of $(dM M^{-1})^m$ vanish for any triangular matrix $M$ if $m \ge 2$. We established equation \rz gamom on the part of the group $G$ which admits the Gauss decomposition. It is easy to see that this equation cannot hold true on all of $G$. Indeed, the left hand side is represented by the exact form $d\gamma$ whereas the right hand side belongs to a nontrivial cohomology class. In order to improve \ry gamom , we introduce a correction to it: \begin{equation} d\gamma=\frac{1}{3}(\omega- \varpi). \eel gov This equation is to be understood in a distributional sense: The three-form $\varpi$ is supported on ${\cal C}$. Moreover it is closed and belongs to the same cohomology class as $\omega$. To determine $\varpi$ for $G=SU(2)$, we return to the parameterization \ry parm . In these coordinates \rz gamma takes the form \begin{equation} \g = i \left(\bar z dz - z d \bar z -2 {dz \over z} \right) d \phi \; . \end{equation} Multiplying $\g$ by test one-forms, the resulting three-forms are integrable on $G$. So $\g$ is a regular distribution and therefore the derivative $d\g$ is also well-defined. Using $d ( dz/z)= \pi \d(\mbox{Re}(z))\d(\mbox{Im}(z)) dzd \bar z =: -2\pi i \d({\cal C})$, where $\d({\cal C})$ has been introduced to denote the delta-two-form supported on the critical circle ${\cal C}$, we obtain \begin{equation} \varpi=12\pi \delta({\cal C}) d\phi.
\eel varpi Let us conclude that the topological Wess-Zumino term may be replaced by the sum of a local term and a topological term supported on the set ${\cal C}$ of antidiagonal matrices: \begin{eqnarray} \label{Stop} WZ(g)=\frac{ k }{4\pi} tr \int_{\Sigma}(d \gd g_\downarrow^{-1}\wedge d\gu \gui )+ S_\d(g), \nonumber \\ S_\d(g)=\frac{ k }{12\pi}\int_{B}\varpi= k \int_{B} \delta({\cal C}) d\phi. \end{eqnarray} The new topological term $S_\d(g)$ depends exclusively on the positions of the points where $\Sigma$ intersects ${\cal C}$. In particular, it vanishes if $\Sigma$ belongs to the part of the group which admits the Gauss decomposition. In Section 2 we showed that the local term of (\ref{Stop}) fits nicely into the formalism of Poisson $\sigma$-models. Coupling of such a model to the topological term $S_\d$ is the subject of Appendix C. \section[]{Poisson $\sigma$-Model Coupled to a Topological Term and \\ Quantum States for $SU(2)$-GWZW} \setcounter{equation}{0} Within this last Appendix we pursue the following three goals: First we investigate the change of a Poisson $\sigma$-model \begin{equation} S_{{\cal P}}(X, A)=\int_{M}\left( A_{i \nu} \frac{\partial X^i}{\partial x^\mu} + \frac{1}{2} {\cal P}^{ij} A_{i \mu} A_{j \nu}\right) dx^{\mu} \wedge dx^{\nu} \, \eel defSP under the addition of a topological term: \begin{equation} S(X, A)=S_{{\cal P}}(X, A)+S_{top}(X) \, . \eel S+S Here $S_{top}(X)$ is supposed to be given by some closed three-form $\omega_{top}$, \begin{equation} S_{top}(X)=\int_B \omega_{top} \; , \quad \6 B = \mbox{Image $M$} \; , \eel SX of (generically) nontrivial cohomology on the target space $N$ of the model (cf.\ also Appendix B). In order not to spoil the symmetries of \ry defSP , we further require $\o_{top}$ to be invariant under any transformation generated by vector fields of the form ${\cal P}^{ij}\6_j$. We will focus especially on the change in the Hamiltonian structure that is induced by \ry SX .
Next we will specify the considerations to the GWZW model. In the main text and the previous Appendix we have shown that the (Hamiltonian) GWZW action \rz fofac may be rewritten {\em identically} in the form \rz S+S with $\omega_{top}=\varpi$. However, an additional complication arises due to the fact that the matrix-valued one-form \begin{equation} \b \equiv \b_i \, dX^i := \gui d\gu - g_\downarrow^{-1} d\gd \, , \eel alph which we used in the identification \rz A for $A$, becomes singular at the part of $G$ where the Gauss decomposition breaks down. The singular behavior of $A$ has to be taken into account in the variation for the field equations, if we want to describe the GWZW model by means of \rz S+S globally. We will show that the bulk of the quantum states obtained in the main text remains unchanged by these modifications. The considerations change only for states that have support on loops lying on exceptional conjugacy classes in $G$. Finally we will make the considerations more explicit for $G=SU(2)$ and compare the resulting picture to the literature. In the classical Hamiltonian formulation the term \rz SX contributes only to a change of the symplectic structure of the field theory. With \begin{equation} \o_{top}=\frac{1}{6} \, \o^{(top)}_{ijk} \, dX^i\wedge dX^j\wedge dX^k \eel varcor the symplectic structure takes the form \begin{equation} \Omega^{field}(X, A)= \oint_{S^1} dA_{i1}(x^1) \wedge dX^i(x^1) dx^1 + \O^{field}_{top} \label{OmXA} \end{equation} with the extra piece \begin{equation} \O^{field}_{top}= \frac{1}{2} \, \oint_{S^1} \o^{(top)}_{ijk}(X(x^1)) \, \partial_1 X^i (x^1) \, dX^j(x^1) \wedge dX^k(x^1) \, dx^1 . \label{Omtop} \end{equation} Note that as (resp.\ if) $\o_{top}$ is non-trivial in cohomology on the target space, $\Omega^{field}$ becomes non-trivial as well, i.e.\ globally there will not exist any symplectic potential $\Theta^{field}$ such that $\Omega^{field} = d \Theta^{field}$.
In the case $N=G$ and $\o_{top}:=\varpi$ the symplectic forms \rz OmXA and \rz psss in the main text coincide. Actually $A_{i1}(x^1)$ and $X^i(x^1)$ are Darboux coordinates of the symplectic form $\O^{field}$ of the GWZW model. As $\Omega^{field}$ has non-trivial cohomology such Darboux coordinates cannot exist globally. The situation may be compared to the one of a sphere with standard symplectic form $\O=\sin \vartheta d\vartheta \wedge d\varphi$; trying to extend the local Darboux coordinates $\cos \vartheta$ and $\varphi$ as far as possible, one finds (again in a distributional sense) $\O= d\left(\cos \vartheta d\varphi \right) + 2\pi\d^2(\mbox{'south pole'})- 2\pi\d^2(\mbox{'north pole'})$. Here we used $d(d\varphi)=\sum_{poles}2\pi\d^2(pole)$, resulting from the breakdown of $d\varphi$ as a coordinate differential at the poles while it still remains a regular one-form in a distributional sense. By the way, one may infer eq.\ \rz psss also from (\ref{OmXA},\ref{Omtop}): Just replace the coordinate basis $dX^i$ by the left-invariant basis $dg g^{-1}$ and note that $d(pdg g^{-1})=dpdg g^{-1} + p(dg g^{-1})^2$ has to be substituted for $d(A_{i1} dX^i) =dA_{i1} dX^i$. The classical Gauss law \ry consXA , on the other hand, remains unaltered by the addition of a term \rz SX to the action. Indeed the constraints $G^i \approx 0$ emerge as the coefficient of $A_{i0}$ within the action $S=S_{{\cal P}}+S_\d$ and $S_\d$ does not depend on $A$. Now let us turn to the quantum theory of the coupled model \ry S+S . Again we go into an $X$-representation. In general 'wave functions' may be regarded as sections of a line bundle, the curvature of which is the symplectic form \cite{Wood}. In the case that this line bundle is trivial, i.e.\ when the symplectic form $\O^{field}$ allows for a global symplectic potential, one may choose a global non-vanishing section in the bundle.
The relative coefficient of any other section with respect to the chosen one is then a function, the wave function $\Psi[X]$. This procedure is called trivialization of the line bundle. In the case of prominent interest for us in which $\o_{top}$ and (thus) $\O^{field}$ belong to some non-trivial cohomology class the quantum line bundle over the loop space will be non-trivial \cite{GCarg}. Sections may be represented by functions $\Psi[X]$ then only within some local charts. The $X^i$ may still be represented as multiplicative operators. However, the change in the symplectic structure implies that one cannot represent $A_{i1}$ as the derivative operators \rz A=dX any more. Indeed the modification $\O^{field}_{top}$ preserves commutativity of the $X^i$ as well as the commutation relations between the $A_{i1}$ and the $X^i$; however, the $A_{i1}$ do not commute among each other any longer. The net result of the change in the symplectic structure is that we have to add some $X$-dependent piece to the operator representation of $A_{i1}$: \begin{equation} A_{i1}=i \frac{\delta}{\delta X^i}+ \vartheta^{field}_i(X). \eel Aopneu The new quantity $\vartheta^{field}_i$ is a symplectic potential to the non-trivial part $\Omega^{field}_{top}$ of the symplectic form, i.e. \begin{equation} \vartheta^{field}_{top}=\oint \vartheta^{field}_i dX^i(x^1) dx^1 \eel F is a solution to the equation \begin{equation} \Omega^{field}_{top}=d\vartheta^{field}_{top} \quad \mbox{(locally)} \, . \eel OdF $\vartheta^{field}_{top}$ is not unique and may be chosen in many ways. If $\o_{top}$ belongs to a trivial cohomology class, \rz OdF may be solved globally. Any choice for $\vartheta^{field}_i$ then corresponds to the choice of a trivialization of this line bundle. If, on the other hand, $\o_{top}$ belongs to some nontrivial cohomology class, we can speak about a solution to \rz OdF only locally. 
Still any choice of a local potential $\vartheta^{field}_{top}$ corresponds to a local trivialization of the quantum line bundle within some chart. Within the latter, quantum states may be represented again as ordinary functions $\Psi[X]$ on the loop space and \rz Aopneu gives the corresponding operator representation of $A_{i1}$. Let us finally write down the new quantum Gauss law. Within a local chart on the loop space it takes the form: \begin{equation} i \left(\6 X^i + {\cal P}^{ij} \vartheta^{field}_j\right)\Psi = {\cal P}^{ij} {\delta \over \delta X^j} \Psi \, . \eel conneu1 For non-singular forms $\vartheta^{field}_{top}$ these constraints yield a restriction to functionals with support on loops lying entirely within some symplectic leaf again. (This holds true also for a singular $\vartheta^{field}_j$, as long as its contraction with the Poisson tensor ${\cal P}^{ij}$ vanishes). To see this, just use the first $k$ coordinates $X^i$ to parameterize the symplectic leaves in any considered region of $N$. Then \rz conneu1 yields $\6 X^i \, \Psi =0$ for $i=1,...,k$. So, strictly speaking, the physical wave functionals will be distributions that restrict the loops to lie entirely within symplectic leaves. The remaining $n-k$ equations \rz conneu1 then determine the form of $\Psi$ on each leaf. Let us show this for trivial cohomology of the defining three-form in \ry SX , i.e.\ for the special case that \begin{equation} \o_{top}=d\vartheta_{top} \eel triv globally on $N$. Then $\vartheta^{field}_j = \vartheta^{(top)}_{jk}\left(X(x^1)\right) \6_1 X^k(x^1)$ globally on the phase space. To find the form of $\Psi$ on a given symplectic leaf $\cal S$, we multiply \rz conneu1 for $i= k+1,\, ...\, , n$ by $\O_{li}$ from the left (cf.\ Eq.\ \ry sympl ).
The resulting equation can be integrated easily to yield: \begin{equation} \Psi = \Psi_0 \exp \left( i \int d^{-1} (\Omega + \vartheta_{top}) \right) \eel loesung where $\Psi_0$ is an integration constant, which, however, may depend on the chosen symplectic leaf (and, if $\cal S$ is not simply connected, also on the homotopy class of the argument loop of $\Psi$). $\Psi_0$ may be regarded as the evaluation of $\Psi$ on some reference loop on $\cal S$ and the phase is determined by the integration of the two-form $\O+\vartheta_{top}$ over a two-surface that is enclosed between the reference loop and the argument loop of $\Psi$. Independence of the chosen two-surface requires, e.g.\ for a simply connected $\cal S$: \begin{equation} \oint_\s \, \O+\vartheta_{top} = 2\pi n \;\; , \quad n \in Z \; ,\eel neu for all two-cycles $\s \in \cal S$. This generalizes the integrality condition \ry 63 , which corresponds to $\vartheta_{top} \equiv 0$. \rz neu is a well-formulated condition, as the invariance requirement for \rz SX under the symmetries of \rz defSP may be seen to imply that the {\em restriction} of $\vartheta_{top}$ onto any symplectic leaf must be a closed two-form (while, certainly, $\vartheta_{top}$ will not be closed in general on all of $N$). For a truly {\em topological} term \rz SX equation \rz triv holds only locally. Still \rz loesung provides the local solution to the quantum constraints \rz conneu1 in the space of loops on $\cal S$. However, as the form $\vartheta_{top}$ is not defined globally on $\cal S$ in general, the global integrability for \rz conneu1 does not have the simple form \ry neu . Instead the use of various charts in the line bundle over the loop space will be unavoidable to determine integrability of \rz conneu1 on a leaf and thus the existence of a quantum state located on that leaf. We will not study this problem in full generality here. Rather we will restrict our attention to the GWZW model in the following.
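An elementary illustration of the integrality condition \rz neu (with $\vartheta_{top}\equiv 0$, our example rather than one from the text) is the sphere with symplectic form $\O = \frac{s}{2}\sin \vartheta \, d\vartheta \wedge d\varphi$, for which the condition selects integer $s$, i.e.\ the geometric quantization of spin:

```python
import sympy as sp

th, ph, s = sp.symbols('vartheta varphi s', positive=True)
# coefficient of Omega = (s/2) sin(vartheta) d(vartheta)^d(varphi) on the sphere
Omega = s/2*sp.sin(th)
total = sp.integrate(Omega, (th, 0, sp.pi), (ph, 0, 2*sp.pi))
print(total)   # 2*pi*s: the condition (neu) demands 2*pi*s in 2*pi*Z, i.e. s integer
```

Here the sphere is the only two-cycle, so \rz neu reduces to the single requirement $\oint \O = 2\pi s \in 2\pi Z$.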
Everything that has been written above applies to the GWZW model, too, except for one small change: Actually, the correct Gauss law for GWZW is not $G^i \approx 0$, but \begin{equation} \b_i G^i \approx 0 \, , \eel conneu2 where the matrix-valued coefficients $\b_i$ have been defined in \ry alph . To see this, we recall that the constraints of the GWZW model, given first in Eq.\ \ry const , result from a variation for $\l \propto a_-$ within the action. According to \rz A $A_{i0}$ differs from (the components of) $\l$ by $A_{i0}= tr \, \l \b_i$. So the correct GWZW Gauss law \rz const may be rewritten as \ry conneu2 . For loops inside the Gauss-decomposable region of $G$ this is equivalent to the old form of the constraints $G^i =0$, since on that part of $G$ the difference corresponds merely to a change of basis in $T^{\ast}G$. However, as $\b$ becomes singular at that lower dimensional part of $G$ where the Gauss decomposition breaks down, the constraints \rz conneu2 have somewhat different implications than $G^i =0$ in that region. This consideration applies also to the quantum constraints; we have to multiply \rz conneu1 by $\b_i$ from the left. The result is \begin{equation} \left(\b_i \6 X^i + \b_i {\cal P}^{ij} \vartheta^{field}_j\right)\Psi + i\b_i {\cal P}^{ij} {\delta \over \delta X^j} \Psi =0 \, , \eel conneu3 or, equivalently, \begin{equation} \left(\gui \6_1\gu - g_\downarrow^{-1}\6_1\gd + \b_i {\cal P}^{ij} \vartheta^{field}_j + \frac{2\pi i}{ k } \frac{\delta}{\delta h} h\right) \Psi[g_0, h] =0 \, . \eel hgl2 The part $\b_i {\cal P}^{ij} \vartheta^{field}_j$, which may be rewritten also as the insertion of the vector field $(2\pi/k) (\d / \d h)h$ into the one-form $\vartheta^{field}_{top}=\vartheta^{field}_\d$, is the new contribution from $S_\d$ that has been dropped in the derivation of \ry hgl .
It is not difficult to see that for loops that lie at least partially outside exceptional conjugacy classes ('critical region') in $G$ one may solve \rz PsiX instead of \rz conneu3 or \ry hgl2 . Indeed close to any part of the loop outside the critical region we may use \rz conneu1 as the quantum constraint, because \rz alph is invertible in that part of $G$. But as argued above this restricts the loop to lie {\em entirely} within a symplectic leaf outside the critical region in $G$. For {\em such} loops now we may always choose \begin{equation} \vartheta^{field}_i \equiv 0 \, , \eel thetanull as $\O^{field}_\d$ vanishes on that part of the phase space. This justifies that in the main text we dropped the contributions from $S_\d$ (as well as the multiplicative factor $\b_i$) and restricted our attention to the solution of \ry PsiX . Nor did we have to invoke a non-trivial quantum line bundle in this way. The main part of the states could be obtained within one local trivialization of the line bundle, given by \ry thetanull . What remains to be considered separately are possible states that have support on loops lying {\em entirely} within the critical region of $G$. In this case the full quantum constraints \rz conneu3 have to be taken into account. It is also in this region of $G$, furthermore, where the notions of symplectic leaf and conjugacy class do not coincide. From \rz hgl2 we learn that it is precisely the modifications of \rz PsiX that restore the adjoint transformations as symmetries on the quantum level. $\b$ diverges precisely where $\cal P$ vanishes so as to give rise to the finite contribution $(\d/\d h) \, h$ in \ry hgl2 . As a result there will correspond at most one quantum state to an exceptional conjugacy class, even if the respective orbit splits into several (possibly in part integrable) symplectic leaves. Let us now specify our considerations to $G=SU(2)$. In particular we want to determine all quantum states within our approach.
For this purpose let us first consider the splitting of $SU(2)\sim S^3$ into conjugacy classes. Parameterizing conjugacy classes by (cf.\ also \ry parm ) \begin{equation} \2 tr\, g = \mbox{Re}(z) = : \cos \theta = const \: , \quad \theta \in [0,\pi]\ , \eel pz we find that, topologically speaking, these orbits are two-spheres for $\theta \in \; (0,\pi)$ and points for $\theta = 0, \pi \lra z = \pm 1$. Only one of the conjugacy classes is 'exceptional'; it corresponds to $\theta = \pi/2 \lra tr g = 0$. Parameterizing this critical $S^2$ by polar coordinates $\phi$ and $\vartheta := \arccos \mbox{Im}(z)$, the part $\cal C$ of $SU(2)$ on which the Gauss decomposition is not applicable is identified with the equator $\vartheta = \pi/2$ of this two-sphere. So the picture we obtain is that $N = S^3$ is foliated into two-spheres except for its 'poles' $z=\pm 1$. The 'equator' of the three-sphere, itself an $S^2$, is what we called an exceptional conjugacy class. The equator ${\cal C} \sim S^1$ of this $S^2$ is precisely the subset of $N=G$ where the Gauss decomposition breaks down and, correspondingly, where the support of $\o_{top}=\varpi$ lies. The exceptional conjugacy class splits into several symplectic leaves: the Northern part of the $S^2$, its Southern part, and the points of the equator ${\cal C}$, where ${\cal P}$ vanishes. According to our general considerations above, this splitting is, however, irrelevant; there will correspond at most one quantum state to the exceptional conjugacy class. On the other hand there corresponds precisely one quantum state to any integral (non-exceptional) conjugacy class, as all of these orbits are simply connected. So let us evaluate the integrality condition \rz 63 for the non-exceptional conjugacy classes in $SU(2)$. From \rz omega we find that in the coordinates \rz parm \begin{equation} \Omega = {i k \over 2\pi}{dz\over z} \wedge d\phi \, . 
\eel omonsu2 In the parameterization \rz pz for the adjoint orbits this yields for the integral of $\Omega$ over the respective two-spheres \begin{equation} \int_{S^2}\Omega = \left\{ \begin{array}{r@{\quad , \quad}l} 2 k \theta & \theta \in \, [0,\pi/2) \\ 2 k (\pi -\theta) & \theta \in \;\; (\pi/2,\pi] \end{array} \right. \eel volume Here we have taken into account that the imaginary part Im$(z)$ of $z$ runs only from $-\sin \theta$ to $+\sin \theta$ since $|z| \le 1$. For the critical orbit at $\theta = \pi/2$ the symplectic volume \rz volume becomes ill-defined. This comes as no surprise. Here obviously the choice \rz thetanull does not apply for all loops on the critical conjugacy class. Still the correct integrability condition may be guessed from a simple limiting procedure: From \rz volume we obtain \begin{equation} \lim_{\theta \to \pi/2}\; \, \int_{S^2}\Omega = k\pi . \eel aa It is plausible to assume that the critical orbit will carry a quantum state, iff again \rz aa is an integer multiple of $2\pi$ (cf.\ Eq.\ \ry 63 ). In fact, one can prove that this is indeed correct. To do so one might use two charts in the quantum line bundle. First \ry thetanull , which works for all loops that do not intersect the equator ${\cal C}$ of the critical conjugacy class. And second, \begin{equation} \vartheta_\d := {k \over 2 \pi} \left({1 \over z} +i\right) dz \wedge d\phi \quad \longrightarrow \quad \vartheta^{field}_i = {k \over 2 \pi} \left({1 \over z} +i\right) (dz \, \6_1 \phi - \6_1 z \, d\phi) \, . \eel chart2 This second chart is applicable to all loops on the critical conjugacy class that do not touch its 'pole' $z=-i$. 
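As an independent sanity check (not part of the derivation above), the orbit integrals can also be evaluated numerically in the parameterization used here. A short Python sketch, dropping the overall sign, which is an orientation choice:

```python
import numpy as np

def orbit_volume(k, theta, n=200001):
    # Conjugacy class Re(z) = cos(theta): parameterize z = cos(theta) + i*y with
    # y in (-sin(theta), sin(theta)), so dz = i*dy and
    # Omega = (i*k/2pi) dz/z ^ dphi = (-k/2pi) dy/(cos(theta)+i*y) ^ dphi.
    # The phi-integral gives 2*pi; the odd imaginary part of the y-integrand
    # cancels, leaving a real integrand. Plain trapezoidal quadrature:
    y = np.linspace(-np.sin(theta), np.sin(theta), n)
    f = np.cos(theta) / (np.cos(theta)**2 + y**2)
    h = y[1] - y[0]
    integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))
    return abs(k * integral)   # symplectic volume, up to orientation

k = 3.0
# theta < pi/2: volume 2*k*theta; theta > pi/2: volume 2*k*(pi - theta)
assert np.isclose(orbit_volume(k, 1.0), 2*k*1.0, rtol=1e-6)
assert np.isclose(orbit_volume(k, 2.0), 2*k*(np.pi - 2.0), rtol=1e-6)
# approaching the critical orbit theta -> pi/2, the volume tends to k*pi
assert np.isclose(orbit_volume(k, np.pi/2 - 1e-3), k*np.pi, rtol=1e-2)
```

In particular the limiting value $k\pi$ is an integer multiple of $2\pi$ precisely for even $k$, in line with the discussion of the critical orbit.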
The solution to the full quantum constraints \rz conneu3 has again the form \rz loesung within the respective domain of definition of the two charts. Now one might compare the values of the wave functional in {\em both} charts for two small loops close to the pole $z=-i$, one with winding number one around this pole, the other with winding number zero. In the first chart continuity implies that the wave functional will have basically the same value for both loops. In the second chart the two loops are separated from each other by a two-surface that encloses basically all of the critical $S^2$ (since in this chart the first loop may not be transformed into the second one through the pole $z=-i$, but instead one has to move through the other pole $z=i$); this gives a relative phase factor of the wave functionals in this chart that may be determined by means of \ry loesung . The corresponding phase need, however, {\em not} be a multiple of $2\pi$. Instead, the result of chart two has to coincide with the result of chart one only {\em after} taking into account the transition functions between the two charts. (Note that both loops lie in both charts.) In fact, for the first loop one picks up a nontrivial contribution to the integrality condition from there. Further details are left to the reader. In any case the result coincides with the one obtained from the limit above. So one finds that there exists a quantum state with support on the critical orbit $\theta = \pi/2$ for even values of $k$ and no such state for odd values of $k$. Let us remark here that in the latter case {\em all} $\,$ 'physical' quantum states, i.e.\ all states in the kernel of the quantum constraints, may be described within just one chart of the quantum line bundle (as e.g.\ by \ry thetanull ). So, the restriction to physical states may render the originally non-trivial quantum line bundle of a coupled model \rz S+S effectively trivial. 
Summing up the results for $G=SU(2)$, we conclude that the integral orbits (i.e.\ the orbits allowing for nontrivial quantum states of the $SU(2)$-GWZW model) are given by $\theta = n\pi/ k $, $n=0,1,\dots, k$. Now we want to compare this result with the current literature. According to \cite{ADW}, there are two different pictures for the space of states of the GWZW model. (In \cite{ADW} they consider partition functions of the WZW model. However, these two issues may be related using results of \cite{GK}.) The first picture eventually coincides with our answer. The second one suggests the finite renormalization $k\rightarrow k+2$. In this case the integral orbits are characterized by $\theta= n\pi/(k+2), \ n=0,\dots , k+2$. However, in {\em this} picture the singular orbits with $n=0$ and $n=k+2$, corresponding to the central elements $\pm I\in SU(2)$, should be excluded. In \cite{ADW} it is proved that the two pictures are equivalent. However, it would be interesting to establish this equivalence in the language of Poisson $\sigma$-models. One motivation is to compare the results with the similar formalism of \cite{Blau}. Also, it seems to be easier to handle the spectrum of the model in the second picture. \end{appendix} \section*{Acknowledgements} We are grateful to M.~Blau, K.~Gawedzki and R.~Jackiw for their interest in this work and for useful comments. A.A. thanks the Erwin Schr\"{o}dinger Institute (Vienna, Austria) for hospitality during the period when this work was initiated.
\section{Introduction} We study the local H\"older regularity for viscosity solutions of possibly degenerate and singular non-local equations of the form \begin{equation*} \operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))K(x,y)\, dy = f(x), \end{equation*} where $f$ is bounded and $K(x,y)$ essentially behaves like $|y|^{-n-sp}$. Here $\operatorname{PV}$ stands for the \emph{principal value}. This type of equation is one possible non-local counterpart of equations of $p$-Laplace type and arises for instance as the Euler-Lagrange equation of functionals in fractional Sobolev spaces. Solutions can also be constructed directly via Perron's method (cf. \cite{IN10}). In the case $K(y)=|y|^{-n-sp}$, when properly rescaled, solutions converge to solutions of the $p$-Laplace equation $$ \Delta_p u=\div (|\nabla u|^{p-2}\nabla u)=0 $$ as the parameter $s$ tends to 1; see \cite{IN10}. Our first and main result is that bounded viscosity solutions (see Section \ref{sec:visc}) of the homogeneous equation are locally H\"older continuous; see the theorem below. Throughout the paper we denote by $B_r$ the ball of radius $r$ centered at the origin. \begin{thm}\label{thm:main} Assume $K$ satisfies $K(x,y)=K(x,-y)$ and there exist $\Lambda\geq \lambda>0$, $M>0$ and $\gamma>0$ such that \begin{align*} \frac{\lambda}{|y|^{n+sp}}\leq &K(x,y)\leq \frac{\Lambda}{|y|^{n+sp}}, \text{ for } y\in B_2,x\in B_2, \\ 0\leq &K(x,y) \leq \frac{M}{|y|^{n+\gamma}}, \text{ for } y\in \mathbb{R}^n\setminus B_\frac14, x\in B_2, \end{align*} where $s\in (0,1)$ and $p\in (1,\infty)$. In the case $p<2$ we require additionally $p>1/(1-s)$. Let $u\in L^\infty(\mathbb{R}^n)$ be a viscosity solution of $$ Lu:=\operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))K(x,y)\, dy =0\text{ in }B_2. 
$$ Then $u$ is H\"older continuous in $B_1$ and in particular there exist $\alpha$ and $C$ depending on $\lambda,\Lambda,M,p,s$ and $\gamma$ such that $$ \|u\|_{C^\alpha(B_1)}\leq C\|u\|_{L^\infty(\mathbb{R}^n)}. $$ \end{thm} In particular, Theorem \ref{thm:main} applies for the fractional $p$-Laplace equation $$ \operatorname{PV} \int_{\mathbb{R}^n}\frac{|u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))}{|y|^{n+sp}}\, d y=0. $$ We are also able to prove H\"older estimates for inhomogeneous equations with variable exponents; see below: \begin{thm}\label{thm:main2} Assume $K$ satisfies $K(x,y)=K(x,-y)$ and there exist $\Lambda\geq \lambda>0$, $M>0$ and $\gamma>0$ such that \begin{align*} \frac{\lambda}{|y|^{n+s(x)p(x)}}\leq &K(x,y)\leq \frac{\Lambda}{|y|^{n+s(x)p(x)}}, \text{ for } y\in B_2,x\in B_2,\\ 0\leq &K(x,y) \leq \frac{M}{|y|^{n+\gamma}}, \text{ for } y\in \mathbb{R}^n\setminus B_\frac14, x\in B_2, \end{align*} where $0<s_0<s(x)<s_1<1$ and $1<p_0<p(x)<p_1<\infty$. In the case $p(x)<2$ we require additionally that there is $\tau>0$ such that $$ p(x)(1-s(x))-1>\tau.$$ Let $f\in C(B_2)\cap L^\infty(B_2)$ and let $u\in L^\infty(\mathbb{R}^n)$ be a viscosity solution of $$ Lu:=\operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(x+y)|^{p(x)-2}(u(x)-u(x+y))K(x,y)\, dy = f(x)\text{ in } B_2.$$ Then $u$ is H\"older continuous in $B_1$ and in particular there exist $\alpha$ and $C$ depending on $\lambda,\Lambda,M,p_0,p_1,s_0,s_1,\gamma $ and $\tau$ such that $$ \|u\|_{C^\alpha(B_1)}\leq C\left(\|u\|_{L^\infty(\mathbb{R}^n)}+\max\left(\|f\|_{L^\infty(B_2)}^\frac{1}{p_0-1},\|f\|_{L^\infty(B_2)}^\frac{1}{p_1-1}\right)\right). $$ \end{thm} \begin{rem} It might seem odd that the two conditions on $K$ in our main theorems are supposed to be satisfied in overlapping regions, $B_2$ and $\mathbb{R}^n\setminus B_\frac14$. This is only for notational convenience. 
It would be sufficient to have the first condition satisfied in $B_\rho$ for some $\rho>0$ and the second one satisfied outside $B_R$ for some large $R$ as long as we ask $K$ to be bounded in $B_R\setminus B_\rho$. \end{rem} \begin{comment}We can also formulate our result for the so called maximal and minimal operators $$ M^+ u :=\int_{\mathbb{R}^n}\left(\Lambda \delta^+(u,x,y)-\lambda \delta^-(u,x,y)\right)\,\frac{d y}{|y|^{n+sp}} $$ and $$ M^- u :=\int_{\mathbb{R}^n}\left(\lambda \delta^+(u,x,y)-\Lambda \delta^-(u,x,y)\right)\,\frac{d y}{|y|^{n+sp}}, $$ where $$ \delta(u,x,y)=\frac{1}{2} \left(|u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))+|u(x)-u(x-y)|^{p-2}(u(x)-u(x-y))\right), $$ $$ \delta^\pm(u,x,y) = \max(\pm \delta(u,x,y),0). $$ We have the following result: \begin{thm}\label{thm:main2} Assume $p\in (1,\infty)$ and $p(1-s)>1$ if $p<2$. If $u\in L^\infty(\mathbb{R}^n)$ and $u$ satisfies $$ M^+ u\geq -A,\quad M^-u\leq A \text{ in $B_2$}, $$ then there are constants $\alpha$ and $C$ depending only on $s$, $p$ such that $$ \|u\|_{C^\alpha(B_1)}\leq C(\|u\|_{L^\infty(\mathbb{R}^n)}+A). $$ \end{thm} \end{comment} \subsection{Known results} Equations similar to the ones in Theorem \ref{thm:main} were, to the author's knowledge, introduced in \cite{IN10}, where existence and uniqueness is established. It is also shown that the solutions converge to solutions of the $p$-Laplace equation, as $s\to 1$. Similar equations were also studied in \cite{CLM12}, where the focus lies in the asymptotic behaviour as $p\to \infty$. Related equations have also been suggested to be used in image processing and machine learning, see \cite{EDL12} and \cite{EDLL14}. Recently, in \cite{CKP13} and \cite{CKP14}, H\"older estimates and a Harnack inequality were obtained for weak solutions of a very general class of equations of this type. 
The difference between these results and the ones in the present paper can be seen as the difference between equations in divergence form and those in non-divergence form in the non-local setting. In other words, their results are more in the flavour of De Giorgi-Nash-Moser (cf. \cite{DG57}, \cite{Nas58} and \cite{Mos61}) while the results in this paper are more in the flavour of Krylov-Safonov (cf. \cite{KS79}). In the case $p=2$, corresponding to equations of the form \begin{equation}\label{eq:fraclap} \operatorname{PV} \int_{\mathbb{R}^n} \frac{u(x)-u(x+y)}{|y|^{n+2s}}\, dy =f(x), \end{equation} a similar development has already taken place. In \cite{Sil06}, a surprisingly simple proof of H\"older estimates for viscosity solutions was given for a very general class of equations corresponding to equations of non-divergence form. An adaptation of that method is used in the present paper. In \cite{Kas09}, H\"older estimates were obtained for weak solutions for a class of equations corresponding to equations of divergence form, including equations of the form \eqref{eq:fraclap}. Also related are \cite{BCF12a} and \cite{BCF12b}, where another type of degenerate (or singular) non-local equation is studied. H\"older estimates and some higher regularity theory are established. It is also proved that these equations approach the $p$-Laplace equation in the local limit. \subsection{Comments on the equation} Let us very briefly point out the difference between the class of equations considered in \cite{CKP13} and \cite{CKP14}, and the class of equations considered here (see also \cite{Sil06} for a similar discussion). There, weak solutions are considered, in the sense that \begin{equation}\label{eq:weak} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}|u(x)-u(y)|^{p-2}(u(x)-u(y))(\phi(x)-\phi(y))G(x,y)\,dx dy=0 \end{equation} for any $\phi\in C_0^\infty(B_2)$, where $G(x,y)$ behaves like $|x-y|^{-n-sp}$. 
These solutions arise for instance as minimizers of functionals of the form $$ \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}|u(x)-u(y)|^{p}G(x,y)\,dx dy. $$ In the most favorable of situations, we are allowed to change the order of integration and write \eqref{eq:weak} as $$ \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}|u(x)-u(y)|^{p-2}(u(x)-u(y))(G(x,y)+G(y,x))\phi(x)\,dx dy=0, $$ and conclude $$ \operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(y)|^{p-2}(u(x)-u(y))(G(x,y)+G(y,x))\,dy=0. $$ The change of variables $y=z+x$ yields $$ \operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(z+x)|^{p-2}(u(x)-u(z+x))(G(x,z+x)+G(z+x,x))\,dz=0, $$ or $$ \operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(z+x)|^{p-2}(u(x)-u(z+x))K(x,z)\,dz=0, $$ where $K(x,z)=G(x,z+x)+G(z+x,x)$. Then necessarily $K(x,z-x)=K(z,x-z)$. However, we are not always allowed to perform the transformations above. Hence, the two types of equations overlap but neither is contained in the other. In other words, the results in \cite{CKP13} and \cite{CKP14} do not always apply to the equations considered in this paper, and vice versa, the results in this paper do not always apply to the equations studied therein. Another important remark is that the estimates obtained in this paper are not uniform as $s\to 1$, i.e., in the limit in which the equation becomes local. This is also the case in \cite{Sil06}. For fully nonlinear equations of fractional Laplace type, uniform estimates as $s\to 1$ have been obtained (see for instance \cite{CS09}), but they are more involved, and they follow the same strategy as the estimates for fully nonlinear (local) equations. In our case, if $\phi \in C^2_0$ and $p>2$, then $$ (1-s)\operatorname{PV} \int_{\mathbb{R}^n}\frac{|\phi(x)-\phi(x+y)|^{p-2}(\phi(x)-\phi(x+y))}{|y|^{n+sp}}\,d y \to -C_{p,n}\Delta_p \phi, $$ as $s\to 1$. 
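The endpoint symmetry $K(x,z-x)=K(z,x-z)$ of $K(x,z)=G(x,z+x)+G(z+x,x)$ noted above can be confirmed mechanically, since both sides reduce to $G(x,z)+G(z,x)$. A minimal Python sketch, where the concrete $G$ is an arbitrary, deliberately asymmetric placeholder:

```python
import numpy as np

def G(x, y):
    # placeholder kernel; any function of the two points will do,
    # chosen asymmetric in (x, y) on purpose
    return np.exp(-np.sum((x - 2.0*y)**2))

def K(x, z):
    # K(x, z) = G(x, z + x) + G(z + x, x), as in the change of variables above
    return G(x, z + x) + G(z + x, x)

rng = np.random.default_rng(1)
for _ in range(100):
    x, z = rng.normal(size=3), rng.normal(size=3)
    # K(x, z - x) = G(x, z) + G(z, x) = K(z, x - z)
    assert np.isclose(K(x, z - x), K(z, x - z))
```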
If we instead have a kernel of the form $$G\left(\frac{y}{|y|}\right)\frac{1}{|y|^{n+sp}},$$ then \begin{align*} &(1-s)\operatorname{PV} \int_{\mathbb{R}^n}\frac{|\phi(x)-\phi(x+y)|^{p-2}(\phi(x)-\phi(x+y))G(\frac{y}{|y|})}{|y|^{n+sp}}\,d y \\ &\to -C_{p,n}|\nabla \phi|^{p-2}a_{ij}(\nabla \phi)D_{ij}^2 \phi, \end{align*} as $s\to 1$, where the matrix $(a_{ij})(\nabla \phi)$ is positive definite and can be given explicitly as integrals over the sphere in terms of $G$. This type of degenerate (or singular) equation of non-divergence form remained fairly unstudied until quite recently. Starting with \cite{BID04}, these equations have attracted an increasing amount of attention. See also \cite{Imb11} and \cite{IS13} where $C^\alpha$ and $C^{1,\alpha}$-estimates are established, respectively. \section{Viscosity solutions}\label{sec:visc} In this section, we introduce the notion of viscosity solutions (as in \cite{CS09}) and prove that viscosity solutions can be treated almost as classical solutions. \begin{de} Let $D$ be an open set and let $L$ be as defined in Theorem \ref{thm:main} or Theorem \ref{thm:main2}. A function $u\in L^\infty(\mathbb{R}^n)$ which is upper semicontinuous in ${D}$ is a subsolution of $$ L u\,\leq C \text{ in $D$} $$ if the following holds: whenever $x_0\in D$ and $\phi\in C^2({B_r(x_0)})$ for some $r>0$ are such that $$ \phi(x_0)=u(x_0), \quad \phi(x)\geq u(x) \text{ for $x\in B_r(x_0)\subset D$} $$ then we have $$ L\phi_r\, (x_0)\leq C, $$ where $$ \phi_r =\left\{\begin{array}{lr}\phi \text{ in }B_r(x_0),\\ u\text{ in }\mathbb{R}^n\setminus B_r(x_0). \end{array}\right. $$ A supersolution is defined similarly and a solution is a function which is both a sub- and a supersolution. \end{de} The following result verifies that whenever we can touch a subsolution from above with a $C^2$ function, we can treat the subsolution as a classical subsolution. The proof is almost identical to the one of Theorem 2.2 in \cite{CS09}. 
\begin{prop}\label{prop:pw} Assume the hypotheses of Theorem \ref{thm:main} or Theorem \ref{thm:main2}. Suppose $Lu\leq C$ in $B_1$ in the viscosity sense and that $x_0\in B_1$ and $\phi\in C^2({B}_r(x_0))$ is such that $$ \phi(x_0)=u(x_0),\quad \phi(x)\geq u(x)\text{ in }B_r(x_0)\subset B_1, $$ for some $r>0$. Then $Lu$ is defined pointwise at $x_0$ and $Lu\, (x_0)\leq C$. \end{prop} \begin{proof} Since the result is only concerned with the behavior at one fixed point $x_0$, we see that there is no difference between assuming the hypotheses of Theorem \ref{thm:main} or Theorem \ref{thm:main2}. Hence, we give the proof under the hypotheses of Theorem \ref{thm:main}. For $0<s\leq r$, let $$ \phi_s=\left\{\begin{array}{lr} \phi \text{ in }B_s(x_0),\\ u\text{ in }\mathbb{R}^n\setminus B_s(x_0). \end{array}\right. $$ Since $u$ is a viscosity subsolution, $L \phi_s\,(x_0)\leq C$. Now introduce the notation \begin{align*} \delta(\phi_s, x,y)=&\frac{1}{2}|\phi_s(x)-\phi_s(x+y)|^{p-2}(\phi_s(x)-\phi_s(x+y))\\&+\frac{1}{2}|\phi_s(x)-\phi_s(x-y)|^{p-2}(\phi_s(x)-\phi_s(x-y)), \end{align*} $$ \delta^\pm(\phi_s,x,y) = \max(\pm \delta(\phi_s,x,y),0). $$ By interchanging $y\to-y$ and using $K(x_0,y)=K(x_0,-y)$ we have \begin{equation}\label{eq:deltasubsol} \int_{\mathbb{R}^n}\delta(\phi_s,x_0,y)K(x_0,y)\, dy\leq C, \end{equation} where the integral is well defined since $\phi_s$ is $C^2$ near $x_0$. Moreover, $$\delta(\phi_{s_2},x_0,y)\leq \delta(\phi_{s_1},x_0,y)\leq \delta(u,x_0,y)\text{ for }s_1< s_2< r, $$ so that $$ \delta^-(u,x_0,y)\leq |\delta(\phi_r,x_0,y)|. $$ Since $|\delta(\phi_r,x_0,y)K(x_0,y)|$ is integrable, so is $\delta^-(u,x_0,y)K(x_0,y)$. In addition, by \eqref{eq:deltasubsol} $$ \int_{\mathbb{R}^n}\delta^+(\phi_s,x_0,y)K(x_0,y)\, dy\leq \int_{\mathbb{R}^n}\delta^-(\phi_s,x_0,y)K(x_0,y)\, dy+C. 
$$ Thus, for $s_1<s_2$ \begin{align}\label{eq:srineq} \int_{\mathbb{R}^n}\delta^+(\phi_{s_1},x_0,y)K(x_0,y)\, dy&\leq\int_{\mathbb{R}^n}\delta^-(\phi_{s_1},x_0,y)K(x_0,y)\, dy+C\\ &\leq\int_{\mathbb{R}^n}\delta^-(\phi_{s_2},x_0,y)K(x_0,y)\, dy+C<\infty.\nonumber \end{align} Since $\delta^+(\phi_s,x_0,y)\nearrow \delta^+(u,x_0,y)$, the monotone convergence theorem implies $$ \int_{\mathbb{R}^n}\delta^+(\phi_s,x_0,y)K(x_0,y)\, dy \to \int_{\mathbb{R}^n}\delta^+(u,x_0,y)K(x_0,y)\, dy, $$ and by \eqref{eq:srineq} \begin{equation}\label{eq:deltaplus} \int_{\mathbb{R}^n}\delta^+(u,x_0,y)K(x_0,y)\, dy\leq \int_{\mathbb{R}^n}\delta^-(\phi_s,x_0,y)K(x_0,y)\, dy+C<\infty, \end{equation} for any $0<s<r$. We conclude that $\delta^+(u,x_0,y)K(x_0,y)$ is integrable. By \eqref{eq:srineq} and the bounded convergence theorem, we can pass to the limit in the right hand side of \eqref{eq:deltaplus} and obtain $$ \int_{\mathbb{R}^n}\delta (u,x_0,y)K(x_0,y)\, dy=\lim_{s\to 0}\int_{\mathbb{R}^n}\delta (\phi_s,x_0,y)K(x_0,y)\, dy\leq C. $$ This implies that $Lu\,(x_0)$ exists in the pointwise sense and $Lu\, (x_0)\leq C$. \end{proof} \section{H\"older regularity for constant exponents} In this section we give the proof of our main theorem for the case of constant $s$ and $p$. This is based on Lemma \ref{lem:key}, sometimes referred to as the oscillation lemma. Throughout this section, $L$ denotes an operator of the form in Theorem \ref{thm:main}, i.e., $$ Lu\,(x):=\operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))K(x,y)\, dy. $$ Let us also, by abuse of notation, introduce the function $$ \beta(x)=\beta(|x|)=\left((1-|x|^2)^+\right)^2. $$ The exact form of $\beta$ is not important, we could have chosen any radial function which is $C^2$ and zero outside $B_1$ and non-increasing along rays from the origin. We start with a couple of auxiliary inequalities. Here $a,b\in \mathbb{R}$. \begin{lem}\label{lem:pineq1} Let $p\geq 2$. 
Then $$ \big||a+b|^{p-2}(a+b)-|a|^{p-2}a\big|\leq (p-1)|b|(|a|+|b|)^{p-2}. $$ \end{lem} \begin{proof} We have \begin{align*} \big||a+b|^{p-2}(a+b)-|a|^{p-2}a\big|&\leq \int_0^{|b|}\Big| \frac{d}{ds} \big(|a+s|^{p-2}(a+s)\big)\Big|\,ds\\ &= \int_0^{|b|}(p-1)|a+s|^{p-2}\, ds\\ &\leq (p-1)|b|(|a|+|b|)^{p-2}. \end{align*} \end{proof} \begin{lem}\label{lem:pineq2} Let $p\in (1,2)$. Then $$ \big||a+b|^{p-2}(a+b)-|a|^{p-2}a\big|\leq (3^{p-1}+2^{p-1})|b|^{p-1}. $$ \end{lem} \begin{proof} We split the proof into two cases. \noindent {\bf Case 1: $|a|\leq 2|b|$.} Then $$ \big| |a+b|^{p-2}(a+b)-|a|^{p-2}a\big|\leq |a+b|^{p-1}+|a|^{p-1}\leq (3^{p-1}+2^{p-1})|b|^{p-1}. $$ \noindent {\bf Case 2: $|a|> 2|b|$.} Then for $|s|\leq |b|$ $$ |a+s|\geq |a|-|s|>2|b|-|b|=|b|, $$ so that $$ \big| |a+b|^{p-2}(a+b)-|a|^{p-2}a\big|\leq \int_0^{|b|}(p-1)|a+s|^{p-2} \, ds \leq (p-1)|b|^{p-1}. $$ Since $p-1\leq 3^{p-1}+2^{p-1}$, this concludes the proof. \end{proof} \begin{lem}\label{lem:pest} Let $p\geq 2$ and assume $a+b\geq 0$. Then $$ |a+b|^{p-2}(a+b)\leq 2^{p-2}(|a|^{p-2}a+|b|^{p-2}b). $$ \end{lem} \begin{proof} The inequality is trivial for $p=2$ so we assume $p>2$. Since $a+b\geq 0$, $|a|^{p-2}a+|b|^{p-2}b\geq 0$. Without loss of generality we can assume $a>0$ and define $t=b/a$. The statement of the lemma is then equivalent to $$ |1+t|^{p-2}(1+t)\leq 2^{p-2}(1+|t|^{p-2}t), \text{ for }t\geq -1. $$ This is trivially true for $t=-1$. Hence we are led to study the function $$ f(t):=\frac{|1+t|^{p-2}(1+t)}{1+|t|^{p-2}t}, \text{ for }t> -1. $$ We find that $f$ has critical points at $t=1$ and $t=0$. In addition, $$ f(1)=2^{p-2}, \lim_{t\searrow -1} f(t)=0, f(0)=1, \lim_{|t|\to \infty} f(t)=1. $$ We conclude that $f(t)\leq 2^{p-2}$ for all $t\geq -1$, and the result follows. \end{proof} Below we prove that a kernel $K$ behaving like $|y|^{-n-sp}$ satisfies certain inequalities that might look strange at first glance, but they are exactly the ones that will appear in the proof of our key lemma later. 
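The three elementary inequalities above can also be stress-tested numerically; a throwaway Python check on random samples (the small additive tolerance only absorbs floating-point error):

```python
import numpy as np

def phi(t, p):
    # the monotone power function t -> |t|^{p-2} t
    return np.abs(t)**(p - 2) * t

rng = np.random.default_rng(2)
for _ in range(1000):
    a, b = 10.0 * rng.normal(size=2)
    # first lemma (p >= 2): |phi(a+b) - phi(a)| <= (p-1)|b|(|a|+|b|)^{p-2}
    for p in (2.0, 2.7, 4.0):
        assert abs(phi(a + b, p) - phi(a, p)) <= (p - 1)*abs(b)*(abs(a) + abs(b))**(p - 2) + 1e-6
    # second lemma (1 < p < 2): |phi(a+b) - phi(a)| <= (3^{p-1} + 2^{p-1})|b|^{p-1}
    for p in (1.2, 1.8):
        assert abs(phi(a + b, p) - phi(a, p)) <= (3**(p-1) + 2**(p-1))*abs(b)**(p - 1) + 1e-6
    # third lemma (p >= 2, a + b >= 0): phi(a+b) <= 2^{p-2}(phi(a) + phi(b))
    if a + b >= 0:
        for p in (2.0, 2.7, 4.0):
            assert phi(a + b, p) <= 2**(p - 2)*(phi(a, p) + phi(b, p)) + 1e-6
```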
\begin{prop}\label{prop:fixp} Assume $K$ satisfies $K(x,y)=K(x,-y)$ and there exist $\Lambda\geq \lambda>0$, $M>0$ and $\gamma>0$ such that \begin{align*} \frac{\lambda}{|y|^{n+sp}}\leq & K(x,y)\leq \frac{\Lambda}{|y|^{n+sp}}, \text{ for } y\in B_2,x\in B_2,\\ 0\leq & K(x,y) \leq \frac{M}{|y|^{n+\gamma}}, \text{ for } y\in \mathbb{R}^n\setminus B_\frac14,x\in B_2, \end{align*} where $s\in (0,1)$ and $p\in (1,\infty)$. In the case $p<2$ we require additionally $p>1/(1-s)$. Then for any $\delta>0$ there are $1/2\geq k>0$ and $\eta>0$ such that for $p\in (2,\infty)$ \begin{align}\label{eq:kassp2} &2^{p-2}k^{p-1}\operatorname{PV} \int_{x+y\in B_1}|\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(y+x))K(x,y)\, dy \nonumber \\ &+2^{p-2}\int_{y\in \mathbb{R}^n\setminus B_\frac14}|k\beta(x)+2(|8y|^ \eta-1)|^{p-1} K(x,y)\, dy\\ \nonumber &+2^{p-1}\int_{y\in \mathbb{R}^n\setminus B_\frac14}(|8y|^ \eta-1)^{p-1} K(x,y)\, dy<2^{1-p}\inf_{A\subset B_2,|A|>\delta}\int_A K(x,y)\,d y \end{align} and for $p\in (1/(1-s),2)$ \begin{align}\label{eq:kassp1} (3^{p-1}+2^{p-1})k^{p-1}\int_{\mathbb{R}^n}|\beta(x)-\beta(x+y)|^{p-1}K(x,y)\, dy\\ \nonumber +2^{p-1}\int_{\mathbb{R}^n\setminus B_\frac14}(|8y|^\eta-1)^{p-1}K(x,y)\,dy<2^{1-p}\inf_{A\subset B_2,|A|>\delta}\int_A K(x,y)\,d y, \end{align} for any $x\in B_{3/4}$. Here $k$ and $\eta$ depend on $\lambda,\Lambda,M,p,s,\gamma$ and $\delta$. \end{prop} \begin{proof} The proof is split into two different cases.\\ \noindent {\bf Case 1: $p>2$}\\ The first term in the left hand side of \eqref{eq:kassp2} reads \begin{align*} &2^{p-2}k^{p-1}\operatorname{PV} \int_{x+y\in B_1}|\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(y+x))K(x,y)\, dy\\ =&2^{p-2}k^{p-1}\operatorname{PV} \int_{x+y\in B_1,y\not\in B_\frac14}|\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(y+x))K(x,y)\, dy\\ &+2^{p-2}k^{p-1}\operatorname{PV} \int_{y\in B_\frac14}|\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(y+x))K(x,y)\, dy\\ &=I_1+I_2. 
\end{align*} Since $\beta$ is uniformly bounded by a constant $C$, we can, using the upper bound on $K$ outside $B_{1/4}$, obtain \begin{equation}\label{eq:case1est1a} |I_1|\leq |2kC|^{p-1}\int_{\mathbb{R}^n\setminus B_\frac14} K(x,y)\,dy\leq |2kC|^{p-1} M\int_{\mathbb{R}^n\setminus B_\frac14} \frac{dy}{|y|^{n+\gamma}}, \end{equation} which is finite and converges to zero as $k\to 0$. For $I_2$ we proceed as follows \begin{align*} I_2&=2^{p-2}k^{p-1}\operatorname{PV} \int_{y\in B_\frac14}|\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(y+x))K(x,y)\, dy\\ &=2^{p-3}k^{p-1}\operatorname{PV} \int_{y\in B_\frac14}|\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(y+x))K(x,y)\,dy \\ &+2^{p-3}k^{p-1}\operatorname{PV} \int_{y\in B_\frac14}|\beta(x)-\beta(-y+x)|^{p-2}(\beta(x)-\beta(-y+x))K(x,y)\,dy. \end{align*} Introducing the notation $$ F=-(\beta(x)-\beta(x-y)),\quad G=(\beta(x)-\beta(x-y))+(\beta(x)-\beta(x+y)), $$ $I_2$ can be written as \begin{align*} &2^{p-3}k^{p-1}\int_{y\in B_\frac14}\left(|F+G|^{p-2}(F+G)-|F|^{p-2}F\right) K(x,y) \,dy\\ &\leq 2^{p-3}k^{p-1}(p-1)\int_{y\in B_\frac14}|G|(|F|+|G|)^{p-2}K(x,y)\,dy, \end{align*} by Lemma \ref{lem:pineq1}. Since $\beta$ is $C^2$, $|F|\leq C|y|$ and $|G|\leq C|y|^2$. Invoking the upper bound on $K$ in $B_2$ yields the estimate \begin{equation}\label{eq:case1est1} I_2\leq C^{p-1}2^{p-3}k^{p-1}(p-1)\Lambda\int_{y\in B_\frac14}|y|^{p-n-sp}\, dy\leq \frac{C^{p-1}2^{p-3}k^{p-1}(p-1)\Lambda \left(\frac14\right)^{p(1-s)}}{p(1-s)}, \end{equation} where $C$ only depends on the $C^2$-norm of $\beta$, which is fixed. Clearly the left hand side of \eqref{eq:case1est1} goes to zero as $k\to 0$. 
For the rest of the terms in the left hand side we observe first that if $\eta<\gamma/(p-1)$ then from the upper bound on $K$ outside $B_{1/4}$ \begin{equation}\label{eq:case1est2} \int_{\mathbb{R}^n\setminus B_\frac14}\left(|8y|^\eta-1\right)^{p-1} K(x,y)\, dy\leq M\int_{\mathbb{R}^n\setminus B_\frac14}\left(|8y|^\eta-1\right)^{p-1} \frac{dy }{|y|^{n+\gamma}}, \end{equation} which is uniformly bounded and tends to zero as $\eta \to 0$, by the dominated convergence theorem. In addition, since $\beta$ is uniformly bounded by some constant $C>0$ we have \begin{equation}\label{eq:case1est3} \int_{\mathbb{R}^n\setminus B_\frac14}|k\beta(x)|^{p-1} K(x,y)\,dy\leq k^{p-1}C^{p-1}M\int_{\mathbb{R}^n\setminus B_\frac14} \frac{dy}{|y|^{n+\gamma}}, \end{equation} which is finite and converges to zero as $k\to 0$, where we again have used the upper bound on $K$ outside $B_{1/4}$. Thus, if we choose $\eta$ and $k$ small enough (depending on $\Lambda$, $M$, $p$, $s$ and $\gamma$) we can make all the terms in the left hand side as small as desired. Now we turn our attention to the right hand side. We have, due to the lower bound on $K$ in $B_2$ $$ 2^{1-p}\inf_{A\subset B_2,|A|>\delta}\int_{A}K(x,y)\,dy \geq \frac{2^{1-p}\lambda \delta}{2^{n+sp}}. $$ Then it is clear that we can choose $\eta$ and $k$, depending only on $\lambda$, $\Lambda$, $M$, $p$, $s$, $\gamma$ and $\delta$, so that the left hand side is smaller than the right hand side. \noindent {\bf Case 2: $1/(1-s)<p<2$}\\ The only difference from the case $p>2$ is the first term in the left hand side. We need to show that for $k$ small enough, the term $$(3^{p-1}+2^{p-1})k^{p-1}\int_{\mathbb{R}^n}|\beta(x)-\beta(x+y)|^{p-1}K(x,y)\, dy,$$ is small. We split the integral into two parts, one in $B_1$ and one in $\mathbb{R}^n\setminus B_1$. We have $|\beta(x)-\beta(x+y)|\leq C|y|$ for $y\in B_1$ and $|\beta(z)|\leq C$ for all $z\in \mathbb{R}^n$. 
Hence, \begin{align} &(3^{p-1}+2^{p-1})k^{p-1}\int_{B_1}|\beta(x)-\beta(x+y)|^{p-1}K(x,y)\, dy\nonumber \\\label{eq:case2est1} &\leq \Lambda C^{p-1}(3^{p-1}+2^{p-1})k^{p-1}\int_{B_1}|y|^{p-1-n-sp}\,dy\\\nonumber &\leq \Lambda C^{p-1}(3^{p-1}+2^{p-1})k^{p-1}\frac{1}{p(1-s)-1}, \end{align} where we have used the upper bound on $K$ in $B_2$. For the part outside $B_1$ we have \begin{align} &(3^{p-1}+2^{p-1})k^{p-1}\int_{\mathbb{R}^n\setminus B_1}|\beta(x)-\beta(x+y)|^{p-1}K(x,y)\, dy\nonumber \\\label{eq:case2est2} &\leq C^{p-1}M(3^{p-1}+2^{p-1})k^{p-1}\int_{\mathbb{R}^n\setminus B_1}\frac{dy}{|y|^{n+\gamma}}\\\nonumber &\leq C^{p-1}M(3^{p-1}+2^{p-1})k^{p-1}\gamma^{-1}, \end{align} from the upper bound on $K$ outside $B_{1/4}$. By choosing $k$ small (depending on $\Lambda$, $M$, $p$, $s$, $\gamma$) we can make both of these terms as small as desired. Hence, the result follows as in the case $p>2$. \end{proof} \begin{rem} We remark that in the proof above, nothing would change if the exponents depended on $x$, since $x$ is a fixed point. This is important later when we redo the proof for the case of variable exponents. \end{rem} The lemma below is the core of this paper. The proof is an adaptation of the proof of Lemma 4.1 in \cite{Sil06}. \begin{lem}\label{lem:key} Assume the hypotheses of Proposition \ref{prop:fixp}. Suppose \begin{align*} Lu\leq 0\text{ in }B_1,\\ u\leq 1\text{ in }B_1,\\ u(x)\leq 2|2x|^\eta-1\text{ in }\mathbb{R}^n\setminus B_1,\\ |B_1\cap \{u\leq 0\}|>\delta, \end{align*} where $\eta$ is as in Proposition \ref{prop:fixp}. Then $u\leq 1-\theta$ in $B_{1/2}$, where $\theta =\theta(\lambda,\Lambda,M,p,s,\gamma,\delta)>0$. \end{lem} \begin{proof} We argue by contradiction. Let $$ \theta=k\left(\beta(1/2)-\beta(3/4)\right), $$ where $k$ is as in Proposition \ref{prop:fixp}. If there is $x_0\in B_{1/2}$ such that $u(x_0)>1-\theta$, then $$ u(x_0)+k\beta(1/2)>1+k\beta(3/4). 
$$ Moreover, for any $y\in B_1\setminus B_{3/4}$ there holds $$ u(x_0)+k\beta(x_0)>u(x_0)+k\beta(1/2)>1+k\beta(3/4)\geq u(y)+k\beta(y). $$ Hence, the maximum of $u+k\beta$ in $B_1$ is attained inside $B_{3/4}$ and it is strictly larger than 1. Suppose that the maximum is attained at the point $x$. The rest of the proof is devoted to estimating $L(u+k\beta)\,(x)$ from above and from below in order to obtain a contradiction with Proposition \ref{prop:fixp}. At this point, we remark that $-k\beta+(u+k\beta)(x)$ touches $u$ from above at $x$. Hence, by Proposition \ref{prop:pw}, $Lu\,(x)\leq 0$ in the pointwise sense. We first estimate $L(u+k\beta)\, (x)$ from below. We split the integrals into two parts and write \begin{align*} L(u+k\beta)\, (x)&=\operatorname{PV} \int_{x+y\in B_1}+\int_{x+y\not\in B_1}\\ &=\lim_{r\to 0}\int_{x+y\in B_1,y\not\in B_r}+\int_{x+y\not\in B_1}=\lim_{r\to 0} I_r+I_2, \end{align*} where there is no need for the principal value in the second integral, since $x\in B_{3/4}$. Using that $u(x)+k\beta(x)>1$ is the maximum of $u+k\beta$ in $B_1$ we see that the integrand in $I_r$ is non-negative and we have the estimate \begin{align*} I_r\geq \int_{A_0}(1-k\beta(x+y))^{p-1}K(x,y)\, dy, \end{align*} where $$A_0=\{x+y\in B_1,\quad u(x+y)\leq 0\}.$$ Since $\beta \leq 1$ and $k\leq 1/2$ we conclude $$ I_r\geq \frac{1}{2^{p-1}}\inf_{A_0\subset B_2,|A_0|>\delta}\int_{A_0}K(x,y)\,d y. $$ Now we estimate $I_2$ from below. Using that $u(x)+k\beta(x)>1$ and $u(z)\leq 2|2z|^\eta-1$ for $z\in \mathbb{R}^n\setminus B_1$ and $\beta=0$ in $\mathbb{R}^n\setminus B_1$, we have \begin{align*} I_2&\geq \int_{x+y\not\in B_1} 2^{p-1}\Big|1-|2(x+y)|^\eta\Big|^{p-2}(1-|2(x+y)|^\eta) K(x,y)\, dy\\ &\geq 2^{p-1}\int_{y\not\in B_\frac14}\Big|1-\Big|2\left(|y|+\frac34\right)\Big|^\eta\Big|^{p-2}\left(1-\Big|2\left(|y|+\frac34\right)\Big|^\eta\right)K(x,y)\, dy\\ &\geq -2^{p-1}\int_{y\not\in B_\frac14}(|8y|^\eta-1)^{p-1} K(x,y)\, dy. 
\end{align*} Adding the two estimates together we can summarize \begin{align}\label{eq:Lfrombelow} &L(u+k\beta)\,(x)\geq \\ \nonumber & \frac{1}{2^{p-1}}\inf_{A_0\subset B_2,|A_0|>\delta}\int_{A_0}K(x,y)\,d y-2^{p-1}\int_{y\not\in B_\frac14}(|8y|^\eta-1)^{p-1} K(x,y)\, dy. \end{align} The next step is to estimate $L(u+k\beta)\, (x)$ from above. This part of the proof is split into two cases: $p\geq 2$ and $p<2$. \noindent {\bf Case 1: $p\geq 2$}\\ Again we split the integral defining $L(u+k\beta)\, (x)$ into two parts $$ L(u+k\beta)\, (x)=\operatorname{PV} \int_{x+y\in B_1}+\int_{x+y\not\in B_1}:=I_1+I_2, $$ where again, there is no need for the principal value in the second integral. We first treat $I_1$ by noting that when $x+y\in B_1$, we know $$ u(x)+k\beta(x)-u(x+y)-k\beta(x+y)\geq 0, $$ recalling that $u+k\beta$ attains its maximum (in $B_1$) at $x$. From Lemma \ref{lem:pest} \begin{align*} |u(x)&-u(x+y)+ k\beta(x)-k\beta(x+y)|^{p-2}(u(x)-u(x+y)+k\beta(x)-k\beta(x+y))\leq \\ &2^{p-2}|u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))\\ +&2^{p-2}|k\beta(x)-k\beta(x+y)|^{p-2}(k\beta(x)-k\beta(x+y)). \end{align*} Hence, \begin{align*} I_1&\leq 2^{p-2}\operatorname{PV}\int_{x+y\in B_1} |u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))K(x,y)\,dy \\ &+ 2^{p-2}k^{p-1}\operatorname{PV}\int_{x+y\in B_1} |\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(x+y)) K(x,y)\,dy. \end{align*} Now we turn our attention to $I_2$. We note that when $x+y\not\in B_1$, we cannot apply Lemma \ref{lem:pest} directly, but we still have from the hypothesis $$ u(x)+k\beta(x)>1,\quad u(x+y)+k\beta(x+y)\leq 2|2(x+y)|^\eta-1. $$ In other words, $$ u(x)-u(x+y)+k\beta(x)-k\beta(x+y)>2(1-|2(x+y)|^\eta). $$ By adding the term $2(|2(x+y)|^\eta-1)>0$ to the expression, we increase the integrand, and we also make the integrand non-negative so that we can, once more, apply Lemma \ref{lem:pest}. 
It follows that \begin{align*} I_2&\leq \int_{x+y\not\in B_1}|u(x)-u(x+y)+k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1)|^{p-2}\times \\ &(u(x)-u(x+y)+k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1))K(x,y)\, dy\\ &\leq 2^{p-2}\int_{x+y\not\in B_1} |u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))K(x,y)\,dy\\ &+2^{p-2}\int_{x+y\not\in B_1}|k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1)|^{p-2}\times \\ &(k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1))K(x,y)\, dy. \end{align*} Adding the estimates for $I_1$ and $I_2$ together we arrive at \begin{align}\nonumber &L(u+k\beta)\,(x)\leq 2^{p-2}Lu\, (x)\\\nonumber &+2^{p-2}k^{p-1}\operatorname{PV}\int_{x+y\in B_1} |\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(x+y)) K(x,y)\,dy\\\nonumber &+2^{p-2}\int_{x+y\not\in B_1}|k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1)|^{p-2}\times\\ \label{eq:Lfromabove1} &(k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1))K(x,y)\, dy\\\nonumber &\leq 2^{p-2}k^{p-1}\operatorname{PV}\int_{x+y\in B_1} |\beta(x)-\beta(x+y)|^{p-2}(\beta(x)-\beta(x+y)) K(x,y)\,dy\\\nonumber &+2^{p-2}\int_{x+y\not\in B_1}|k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1)|^{p-1}K(x,y)\, dy, \end{align} since $Lu\,(x)\leq 0$. \noindent {\bf Case 2: $\frac{1}{1-s}<p<2$}\\ From Lemma \ref{lem:pineq2} \begin{align*} |u(x)-u(x+y)+k\beta(x)-k\beta(x+y)|^{p-2}(u(x)-u(x+y)+k\beta(x)-k\beta(x+y))\leq \\ |u(x)-u(x+y)|^{p-2}(u(x)-u(x+y))+(3^{p-1}+2^{p-1})k^{p-1}|\beta(x)-\beta(x+y)|^{p-1} \end{align*} from which it follows that \begin{align} L(u+k\beta)\,(x)&\leq Lu\,(x)+k^{p-1}(3^{p-1}+2^{p-1})\int_{\mathbb{R}^n}|\beta(x)-\beta(x+y)|^{p-1} K(x,y)\,d y\label{eq:Lfromabove2}\\\nonumber &\leq k^{p-1}(3^{p-1}+2^{p-1})\int_{\mathbb{R}^n}|\beta(x)-\beta(x+y)|^{p-1} K(x,y)\,d y. \end{align} Finally, we observe that \eqref{eq:Lfrombelow} combined with either \eqref{eq:Lfromabove1} or \eqref{eq:Lfromabove2} contradicts \eqref{eq:kassp2} or \eqref{eq:kassp1} in Proposition \ref{prop:fixp}. 
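For the reader's convenience, the elementary estimate behind the repeated applications of Lemma \ref{lem:pest} above can be verified directly. The following display is a sketch we add for completeness, assuming Lemma \ref{lem:pest} encodes the standard convexity bound for $p\geq 2$:

```latex
% Sketch: for p >= 2 and real a, b, convexity of t -> t^{p-1} on [0,infty)
% applied at the midpoint (|a|+|b|)/2 gives
%   ((|a|+|b|)/2)^{p-1} <= (|a|^{p-1} + |b|^{p-1})/2,
% and together with the triangle inequality |a+b| <= |a|+|b| this yields
\[
|a+b|^{p-1}\leq \left(|a|+|b|\right)^{p-1}
\leq 2^{p-2}\left(|a|^{p-1}+|b|^{p-1}\right),
\]
% which is the source of the factor 2^{p-2} in the estimates for I_1 and I_2.
```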
\end{proof} Once the lemma above is established, the proof of the H\"older regularity is standard. We follow the lines of the proof of Theorem 5.1 in \cite{Sil06}. \begin{proof}[~Proof of Theorem \ref{thm:main}]We first rescale $u$ by the factor $$ \frac{1}{2\|u\|_{L^\infty(\mathbb{R}^n)}}. $$ Then the new $u$ satisfies $$ L u =0\text{ in }B_1,\quad \operatorname{osc}_{\mathbb{R}^n}u \leq 1. $$ We will now show that for $j=0,1,\ldots$ $$ \operatorname{osc}_{B_{2^{-j}}(x_0)} u\leq 2^{-j\alpha}, \text{ for any }x_0\in B_1, $$ where $\alpha$ is chosen so that $$ \frac{2-\theta}{2}\leq 2^{-\alpha}\text{ and } \alpha\leq \eta, $$ where $\theta$ is from Lemma \ref{lem:key} and $\eta$ is from Proposition \ref{prop:fixp}, with $\delta=|B_1|/2$. This will imply the desired result with $C=2^{\alpha}$. In what follows we will find constants $a_j$ and $b_j$ so that \begin{equation}\label{eq:akbk} b_j\leq u\leq a_j\text{ in }B_{2^{-j}}(x_0),\quad |a_j-b_j|\leq 2^{-j\alpha}. \end{equation} We construct these by induction. For $j\leq 0$, \eqref{eq:akbk} holds true with $b_j=\inf_{\mathbb{R}^n} u $ and $a_j=b_j+1$. Assume \eqref{eq:akbk} holds for all $j\leq k$. We need to construct $a_{k+1}$ and $b_{k+1}$. Put $m=(a_k+b_k)/2$. Then $$ |u-m|\leq 2^{-k\alpha-1}\text{ in $B_{2^{-k}}(x_0)$.} $$ Let $$ v(x)=2^{\alpha k+1}(u(2^{-k}x+x_0)-m). $$ Then $$ \operatorname{PV} \int_{\mathbb{R}^n}|v(x)-v(x+y)|^{p-2}(v(x)-v(x+y))K_{x_0,2^{-k}}(x,y)\, dy= 0 \text{ in }B_1 $$ and $$ |v|\leq 1 \text{ in }B_1, $$ where $$ K_{x_0,2^{-k}}(x,y)=2^{-k(n+sp)}K(2^{-k}x+x_0,2^{-k}y),$$ which satisfies the same assumptions as $K$ itself. 
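The claim that $K_{x_0,2^{-k}}$ satisfies the same assumptions as $K$ follows from a direct change of variables. The following computation is a sketch we include for convenience; we only spell out the upper ellipticity bound for $y\in B_2$ (and for those $x$ with $2^{-k}x+x_0\in B_2$), the lower bound and the tail bound being checked the same way:

```latex
% For y in B_2 we have 2^{-k} y in B_2, so the upper bound on K gives
\[
K_{x_0,2^{-k}}(x,y)=2^{-k(n+sp)}K(2^{-k}x+x_0,2^{-k}y)
\leq 2^{-k(n+sp)}\,\frac{\Lambda}{|2^{-k}y|^{n+sp}}
=\frac{\Lambda}{|y|^{n+sp}},
\]
% since |2^{-k} y|^{n+sp} = 2^{-k(n+sp)} |y|^{n+sp}. The lower bound
%   K_{x_0,2^{-k}}(x,y) >= lambda / |y|^{n+sp}
% follows from the same cancellation of the factor 2^{-k(n+sp)}.
```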
We also remark that for $|y|>1$ such that $2^\ell\leq |y|\leq 2^{\ell+1}$ we have \begin{align*} v(y)= 2^{\alpha k+1}(u(2^{-k}y+x_0)-m)&\leq 2^{\alpha k+1}(a_{k-\ell-1}-m)\\ &\leq 2^{\alpha k+1}(a_{k-\ell-1}-b_{k-\ell-1}+b_{k}-m)\\ &\leq 2^{\alpha k+1}(2^{-\alpha(k-\ell-1)}-\frac12 2^{-k\alpha})\\ &\leq 2^{1+\alpha(\ell+1)}-1\leq 2|2y|^\alpha-1\\ &\leq 2|2y|^\eta-1, \end{align*} where we have used that \eqref{eq:akbk} holds for $j\leq k$. Suppose now that \mbox{$|\{v\leq 0\}\cap B_1|\geq |B_1|/2$} (otherwise we apply the same procedure to $-v$). Then $v$ satisfies all the assumptions of Lemma \ref{lem:key} with $\delta =|B_1|/2$, and we obtain $$ v(x)\leq 1-\theta\text{ in }B_\frac12, $$ where $\theta=\theta(\lambda,\Lambda,M,p,s,\gamma)$, since $\delta$ is fixed. Scaling back to $u$ this yields \begin{align*} u(x)&\leq 2^{-1-\alpha k}(1-\theta)+m\leq 2^{-1-k\alpha}(1-\theta)+\frac{a_k+b_k}{2}\\ &\leq b_k+2^{-1-\alpha k}(1-\theta)+2^{-1-\alpha k}\\ &\leq b_k+2^{-\alpha (k+1)} \end{align*} by our choice of $\alpha$. Hence, if we let $b_{k+1}=b_k$ and $a_{k+1}=b_k+2^{-\alpha(k+1)}$ we obtain \eqref{eq:akbk} for the step $j=k+1$ and the induction is complete. \end{proof} \section{Variable exponents} In this section we show that our results also apply to the case when both $p$ and $s$ vary with $x$. In particular we prove Theorem \ref{thm:main2}. Throughout this section $L$ denotes the operator $$ Lu\,(x):=\operatorname{PV} \int_{\mathbb{R}^n}|u(x)-u(x+y)|^{p(x)-2}(u(x)-u(x+y))K(x,y)\, dy. $$ We follow the same strategy as in the case of constant exponents and prove slightly modified versions of Proposition \ref{prop:fixp} and Lemma \ref{lem:key}. The proof of H\"older continuity is then similar. 
\begin{prop}\label{prop:varp} Assume $K$ satisfies $K(x,y)=K(x,-y)$ and there exist $\Lambda\geq \lambda>0$, $M>0$ and $\gamma>0$ such that \begin{align*} \frac{\lambda}{|y|^{n+s(x)p(x)}}\leq & K(x,y)\leq \frac{\Lambda}{|y|^{n+s(x)p(x)}}, \text{ for } y\in B_2,x\in B_2,\\ 0\leq & K(x,y) \leq \frac{M}{|y|^{n+\gamma}}, \text{ for } y\in \mathbb{R}^n\setminus B_\frac14,x\in B_2, \end{align*} where $0<s_0<s(x)<s_1<1$ and $1<p_0<p(x)<p_1<\infty$. In the case $p(x)<2$ we require additionally that there is $\tau>0$ such that $$ p(x)(1-s(x))-1>\tau .$$ Then for any $\delta>0$ there are $1/2 \geq k>0$ and $\eta>0$ such that for $p(x)\in [2,\infty)$ \begin{align}\label{eq:varkassp2} &2^{p(x)-2}k^{p(x)-1}\operatorname{PV} \int_{x+y\in B_1}|\beta(x)-\beta(x+y)|^{p(x)-2}(\beta(x)-\beta(x+y))K(x,y)\, dy \nonumber \\ &+2^{p(x)-2}\int_{y\in \mathbb{R}^n\setminus B_\frac14}|k\beta(x)+2(|8y|^\eta-1)|^{p(x)-1} K(x,y)\, dy\\ \nonumber &+2^{p(x)}\int_{y\in \mathbb{R}^n\setminus B_\frac14}(|8y|^\eta-1)^{p(x)-1} K(x,y)\, dy<2^{1-p(x)}\inf_{A\subset B_2,|A|>\delta}\int_A K(x,y)\,d y \end{align} and for $p(x)\in (1/(1-s(x)),2)$ \begin{align}\label{eq:varkassp1} (3^{p(x)-1}+2^{p(x)-1})k^{p(x)-1}\int_{\mathbb{R}^n}|\beta(x)-\beta(x+y)|^{p(x)-1}K(x,y)\, dy\\ \nonumber +2^{p(x)}\int_{\mathbb{R}^n\setminus B_\frac14}(|8y|^\eta-1)^{p(x)-1}K(x,y)\,dy<2^{1-p(x)}\inf_{A\subset B_2,|A|>\delta}\int_A K(x,y)\,d y, \end{align} for any $x\in B_{3/4}$. Here $k$ and $\eta$ depend on $\lambda,\Lambda,M,p_0,p_1,s_0,s_1,\gamma,\tau$ and $\delta$. \end{prop} \begin{proof} We point out the differences to the proof of Proposition \ref{prop:fixp} and briefly explain how they can be dealt with. 
\noindent{\bf Case 1: $p(x)\geq 2$}\\ By the exact same computation as in \eqref{eq:case1est1a}, \eqref{eq:case1est1}, \eqref{eq:case1est2} and \eqref{eq:case1est3} in the proof of Proposition \ref{prop:fixp} (since the computation is made for a fixed $x$), we can conclude that the left hand side is bounded by \begin{align*} |2kC|^{p(x)-1} M\int_{\mathbb{R}^n\setminus B_\frac14} \frac{dy}{|y|^{n+\gamma}}+\frac{C^{p(x)-1}2^{p(x)-3}k^{p(x)-1}(p(x)-1)\Lambda \left(\frac14\right)^{p(x)(1-s(x))}}{p(x)(1-s(x))} \end{align*} plus terms involving the quantities \begin{align*} M\int_{\mathbb{R}^n\setminus B_\frac14}\left(|8y|^\eta-1\right)^{p(x)-1} \frac{dy }{|y|^{n+\gamma}} \end{align*} and \begin{align*} k^{p(x)-1}C^{p(x)-1}2^{p(x)-1}M\int_{\mathbb{R}^n\setminus B_\frac14} \frac{dy}{|y|^{n+\gamma}}. \end{align*} Due to the assumptions on $p$ and $s$, these terms are all uniformly bounded. Thus, if we choose $\eta$ and $k$ small enough (depending on $\Lambda$, $M$, $p_0$, $p_1$, $s_0$, $s_1$ and $\gamma$) we can make all the terms in the left hand side as small as desired. For the right hand side, we again have $$ 2^{1-p(x)}\inf_{A\subset B_2,|A|>\delta}\int_{A}K(x,y)\,dy \geq \frac{2^{1-p(x)}\lambda \delta}{2^{n+s(x)p(x)}}\geq \frac{2^{1-p_1}\lambda \delta}{2^{n+s_1p_1}}. $$ Then it is clear that we can choose $\eta$ and $k$, depending only on $\lambda$, $\Lambda$, $M$, $p_0$, $p_1$, $s_0$, $s_1$, $\gamma$ and $\delta$, so that the left hand side is smaller than the right hand side. \noindent{\bf Case 2: $1/(1-s(x))<p(x)<2$}\\ We can again estimate the left hand side by \begin{align*} &\Lambda C^{p(x)-1}(3^{p(x)-1}+2^{p(x)-1})k^{p(x)-1}\frac{1}{p(x)(1-s(x))-1}\\&+C^{p(x)-1}M(3^{p(x)-1}+2^{p(x)-1})k^{p(x)-1}\gamma^{-1}, \end{align*} as in \eqref{eq:case2est1} and \eqref{eq:case2est2}. By choosing $k$ small (depending on $\Lambda$, $M$, $p_0$, $p_1$, $\tau$, $\gamma$) we can make both these terms as small as desired. The result follows also in this case. 
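The uniform boundedness of the tail terms above reduces to an elementary integral in polar coordinates, which we record here for convenience (here $|\mathbb{S}^{n-1}|$ denotes the surface measure of the unit sphere):

```latex
\[
\int_{\mathbb{R}^n\setminus B_{1/4}}\frac{dy}{|y|^{n+\gamma}}
=|\mathbb{S}^{n-1}|\int_{1/4}^{\infty}r^{n-1}\,r^{-(n+\gamma)}\,dr
=|\mathbb{S}^{n-1}|\int_{1/4}^{\infty}r^{-1-\gamma}\,dr
=\frac{4^{\gamma}\,|\mathbb{S}^{n-1}|}{\gamma},
\]
% which is finite for every gamma > 0 and depends only on n and gamma.
```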
\end{proof} \begin{lem}\label{lem:key2} Assume the hypotheses of Proposition \ref{prop:varp}. Suppose \begin{align*} Lu\leq \varepsilon\text{ in }B_1,\\ u\leq 1\text{ in }B_1,\\ u(x)\leq 2|2x|^\eta-1\text{ in }\mathbb{R}^n\setminus B_1,\\ |B_1\cap \{u\leq 0\}|>\delta, \end{align*} where $\eta$ is as in Proposition \ref{prop:varp} and \begin{align*} \varepsilon=\min(2,2^{p(x)-1})\int_{y\not\in B_\frac14}(|8y|^\eta-1)^{p(x)-1} K(x,y)\, dy. \end{align*} Then $u\leq 1-\theta$ in $B_{1/2}$, where $\theta = \theta(\lambda,\Lambda,M,p_0,p_1,s_0,s_1,\gamma,\tau,\delta)>0$. \end{lem} \begin{proof} The first part of the proof is exactly the same as that of Lemma \ref{lem:key}. We then estimate $L(u+k\beta)\, (x)$ from below. Since $x$ is a fixed point throughout all the calculations, we obtain as in \eqref{eq:Lfrombelow} \begin{align}\label{eq:Lfrombelowvar} &L(u+k\beta)\,(x)\geq \\ \nonumber & \frac{1}{2^{p(x)-1}}\inf_{A_0\subset B_2,|A_0|>\delta}\int_{A_0}K(x,y)\,d y-2^{p(x)-1}\int_{y\not\in B_\frac14}(|8y|^\eta-1)^{p(x)-1} K(x,y)\, dy. \end{align} The next step is to estimate $L(u+k\beta)\, (x)$ from above. We obtain almost the same estimates as in \eqref{eq:Lfromabove1} and \eqref{eq:Lfromabove2}. The difference is that, instead of $Lu\,(x)\leq 0$, we use $Lu\,(x)\leq \varepsilon$ and obtain the extra term $$ \min(2,2^{p(x)-1})\int_{y\not\in B_\frac14}(|8y|^\eta-1)^{p(x)-1} K(x,y)\, dy. 
$$ Hence the estimate reads, in the two different cases:\\ \noindent{\bf Case 1: $p(x)\geq 2$} \begin{align}\nonumber &L(u+k\beta)\,(x) \\&\label{eq:Lfromabove1var} \leq 2^{p(x)-2}k^{p(x)-1}\operatorname{PV}\int_{x+y\in B_1} |\beta(x)-\beta(x+y)|^{p(x)-2}(\beta(x)-\beta(x+y)) K(x,y)\,dy\\ &+2^{p(x)-2}\int_{x+y\not\in B_1}|k\beta(x)-k\beta(x+y)+2(|2(x+y)|^\eta-1)|^{p(x)-1}K(x,y)\, dy \nonumber \\ &+2^{p(x)-1}\int_{y\not\in B_\frac14}(|8y|^\eta-1)^{p(x)-1} K(x,y)\, dy .\nonumber \end{align} \noindent{\bf Case 2: $1/(1-s(x))<p(x)<2$} \begin{align} L(u+k\beta)\,(x)&\leq k^{p(x)-1}(3^{p(x)-1}+2^{p(x)-1})\int_{\mathbb{R}^n}|\beta(x)-\beta(x+y)|^{p(x)-1} K(x,y)\,d y \label{eq:Lfromabove2var}\\ &\nonumber +2^{p(x)-1}\int_{y\not\in B_\frac14}(|8y|^\eta-1)^{p(x)-1} K(x,y)\, dy. \end{align} The combination of \eqref{eq:Lfrombelowvar} with either \eqref{eq:Lfromabove1var} or \eqref{eq:Lfromabove2var} contradicts \eqref{eq:varkassp2} or \eqref{eq:varkassp1}, respectively. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main2}] The proof is very similar to the proof of Theorem \ref{thm:main}. We first rescale $u$ by the factor $$ \left(2\|u\|_{L^\infty(\mathbb{R}^n)}+2^{\frac{p_1-1}{p_0-1}}\max\Big\{\left(\frac{\|f\|_{L^\infty(B_2)}}{\varepsilon}\right)^\frac{1}{p_0-1},\left(\frac{\|f\|_{L^\infty(B_2)}}{\varepsilon}\right)^\frac{1}{p_1-1}\Big\}\right)^{-1}, $$ where $\varepsilon$ is chosen as in Lemma \ref{lem:key2} with $\delta=|B_1|/2$. Then one readily verifies that $$ L u =\tilde f\text{ in }B_2,\quad \|\tilde f\|_{L^\infty(B_2)}\leq \frac{\varepsilon}{2^{p_1-1}},\quad \operatorname{osc}_{\mathbb{R}^n}u \leq 1. 
$$ Next we proceed as before: we find $a_j$ and $b_j$ such that \begin{equation}\label{eq:akbkvar} b_j\leq u\leq a_j\text{ in }B_{2^{-j}}(x_0),\quad |a_j-b_j|\leq 2^{-j\alpha}, \end{equation} where we require from $\alpha$ that $$ \frac{2-\theta}{2}\leq 2^{-\alpha}, \alpha\leq \eta \text{ and } \alpha\leq \frac{s_0p_0}{p_1-1}, $$ where $\theta$ is from Lemma \ref{lem:key2} and $\eta$ from Proposition \ref{prop:varp}, with $\delta =|B_1|/2$. As before, \eqref{eq:akbkvar} is satisfied for $j\leq 0$ with the choice $b_j=\inf_{\mathbb{R}^n} u$ and $a_j=b_j+1$. Now, given that \eqref{eq:akbkvar} holds for $j\leq k$ we construct $a_{k+1}$ and $b_{k+1}$. Define $$ v(x)=2^{\alpha k+1}(u(2^{-k}x+x_0)-m),\quad \text{ with } m=\frac{a_k+b_k}{2}. $$ Then \begin{align*} &\operatorname{PV} \int_{\mathbb{R}^n}|v(x)-v(x+y)|^{p(x)-2}(v(x)-v(x+y))K_{x_0,2^{-k}}(x,y)\, dy \\ &=2^{(\alpha k+1)(p(2^{-k}x+x_0)-1)-k(s(2^{-k}x+x_0)p(2^{-k}x+x_0))}\tilde f\text{ in }B_1, \end{align*} and $$ |v|\leq 1 \text{ in }B_1. $$ As before, $$ K_{x_0,2^{-k}}(x,y)=2^{-k(n+s(2^{-k}x+x_0)p(2^{-k}x+x_0))}K(2^{-k}x+x_0,2^{-k}y) $$ satisfies the same assumptions as $K$. From our choice of $\alpha$ it also follows that $$ \Big|2^{(\alpha k+1)(p(2^{-k}x+x_0)-1)-k(s(2^{-k}x+x_0)p(2^{-k}x+x_0))}\tilde f\Big|\leq \varepsilon \text{ in $B_1$}. $$ Supposing that $|\{v\leq 0\}\cap B_1|\geq |B_1|/2$ and observing that, as before, $$ v(y)\leq 2|2y|^\eta-1, \quad \text{ for $|y|>1$}, $$ we see that $v$ satisfies all the assumptions of Lemma \ref{lem:key2}. The choice $\delta = |B_1|/2$ yields $$ v(x)\leq 1-\theta \text{ in }B_\frac12,$$ which again implies $$ u(x)\leq b_k+2^{-\alpha(k+1)}. $$ Thus the choice $b_{k+1}=b_k$ and $a_{k+1}=b_k+2^{-\alpha(k+1)}$ settles \eqref{eq:akbkvar} for the step $j=k+1$. Hence, we arrive at the estimate $$ \operatorname{osc}_{B_r(x_0)} u\leq 2^\alpha r^\alpha . 
$$ Recalling our rescaling factor in the beginning and rescaling back to our original $u$ yields \begin{align*} &\operatorname{osc}_{B_r(x_0)} u\\ &\leq 2^\alpha\left(2\|u\|_{L^\infty(\mathbb{R}^n)}+2^{\frac{p_1-1}{p_0-1}}\max\Big\{\left(\frac{\|f\|_{L^\infty(B_2)}}{\varepsilon}\right)^\frac{1}{p_0-1},\left(\frac{\|f\|_{L^\infty(B_2)}}{\varepsilon}\right)^\frac{1}{p_1-1}\Big\}\right) r^\alpha \\ &\leq C\left(\|u\|_{L^\infty(\mathbb{R}^n)}+\max\left(\|f\|_{L^\infty(B_2)}^\frac{1}{p_0-1},\|f\|_{L^\infty(B_2)}^\frac{1}{p_1-1}\right)\right) r^\alpha, \end{align*} which is the desired result. \end{proof}
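The passage from the oscillation decay to the Hölder estimate of Theorem \ref{thm:main2} is standard; the following one-line argument is a sketch added for completeness, assuming the oscillation estimate holds for all centers $x_0\in B_1$ and all sufficiently small radii:

```latex
% For x, y in B_1 with 0 < |x - y| small, apply the oscillation bound
% on the ball of radius 2|x - y| centered at x, which contains y:
\[
|u(x)-u(y)|\leq \operatorname{osc}_{B_{2|x-y|}(x)}u
\leq C\left(\|u\|_{L^\infty(\mathbb{R}^n)}
+\max\left(\|f\|_{L^\infty(B_2)}^{\frac{1}{p_0-1}},
\|f\|_{L^\infty(B_2)}^{\frac{1}{p_1-1}}\right)\right)
\left(2|x-y|\right)^{\alpha},
\]
% and the factor 2^alpha is absorbed into the constant C.
```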
\section{INTRODUCTION} Let $f:I\subset \mathbb{R} \rightarrow \mathbb{R} $ be a convex function on the interval $I$ of real numbers and $a,b\in I$ with $a<b.$ The following double inequality \begin{equation} f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\dint\nolimits_{a}^{b}f(x)dx \leq \frac{f(a)+f(b)}{2} \label{1.1} \end{equation} is well-known in the literature as Hadamard's inequality. We recall some definitions. In \cite{PEC}, Pecaric et al. defined quasi-convex functions as follows. \begin{definition} A function $f:\left[ a,b\right] \rightarrow \mathbb{R} $ is said to be quasi-convex on $\left[ a,b\right] $ if \begin{equation*} f\left( \lambda x+(1-\lambda )y\right) \leq \max \left\{ f(x),f(y)\right\} \text{ \ \ \ \ }\left( QC\right) \end{equation*} holds for all $x,y\in \left[ a,b\right] $ and $\lambda \in \lbrack 0,1].$ \end{definition} Clearly, any convex function is a quasi-convex function. Furthermore, there exist quasi-convex functions which are not convex. \begin{definition} (See \cite{SS1}, \cite{WR}) We say that $f:I\rightarrow \mathbb{R} $ is a Wright-convex function, or that $f$ belongs to the class $W(I),$ if for all $x,$ $y+\delta \in I$ with $x<y$ and $\delta >0,$ we have \begin{equation*} f(x+\delta )+f(y)\leq f(y+\delta )+f(x). \end{equation*} \end{definition} \begin{definition} (See \cite{SS1}) For $I\subseteq \mathbb{R} ,$ the mapping $f:I\rightarrow \mathbb{R} $ is a Wright-quasi-convex function if, for all $x,y\in I$ and $t\in \left[ 0,1\right] ,$ one has the inequality \begin{equation*} \frac{1}{2}\left[ f\left( tx+\left( 1-t\right) y\right) +f\left( \left( 1-t\right) x+ty\right) \right] \leq \max \left\{ f\left( x\right) ,f\left( y\right) \right\} ,\text{ \ \ \ \ }\left( WQC\right) \end{equation*} or equivalently \begin{equation*} \frac{1}{2}\left[ f\left( y\right) +f\left( x+\delta \right) \right] \leq \max \left\{ f\left( x\right) ,f\left( y+\delta \right) \right\} \end{equation*} for every $x,$ $y+\delta \in I,$ $x<y$ and $\delta >0.$ \end{definition} 
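A concrete instance may help fix ideas; the following illustration is standard and added for the reader's convenience (it is not taken from the sources cited above):

```latex
% Every monotone function satisfies (QC): if f is nondecreasing and
% x <= y, then x <= tx + (1-t)y <= y for every t in [0,1], so
\[
f\left( tx+(1-t)y\right) \leq f(y)=\max \left\{ f(x),f(y)\right\} .
\]
% In particular, f(x) = x^3 is quasi-convex on R but not convex,
% since f''(x) = 6x < 0 for x < 0.
```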
\begin{definition} (See \cite{SS1}) The mapping $f:I\rightarrow \mathbb{R} $ is Jensen- or J-quasi-convex if \begin{equation*} f\left( \frac{x+y}{2}\right) \leq \max \left\{ f(x),f(y)\right\} ,\text{ \ \ \ \ }\left( JQC\right) \end{equation*} for all $x,y\in I.$ \end{definition} Note that the class $JQC(I)$ of J-quasi-convex functions on $I$ contains the class $J(I)$ of J-convex functions on $I,$ that is, functions satisfying the condition \begin{equation*} f\left( \frac{x+y}{2}\right) \leq \frac{f(x)+f(y)}{2},\text{ \ \ \ }\left( J\right) \end{equation*} for all $x,y\in I.$ In \cite{SS1}, Dragomir and Pearce proved the following theorems concerning J-quasi-convex and Wright-quasi-convex functions. \begin{theorem} Suppose $a,b\in I\subseteq \mathbb{R} $ and $a<b.$ If $f\in JQC\left( I\right) \cap L_{1}\left[ a,b\right] ,$ then \begin{equation} f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\dint\nolimits_{a}^{b}f(x)dx+I\left( a,b\right) \label{1.2} \end{equation} where \begin{equation*} I\left( a,b\right) =\frac{1}{2}\int_{0}^{1}\left\vert f\left( ta+\left( 1-t\right) b\right) -f\left( \left( 1-t\right) a+tb\right) \right\vert dt. \end{equation*} \end{theorem} \begin{theorem} Let $f:I\rightarrow \mathbb{R} $ be a Wright-quasi-convex map on $I$ and suppose $a,b\in I\subseteq \mathbb{R} $ with $a<b$ and $f\in L_{1}\left[ a,b\right] .$ Then one has the inequality \begin{equation} \frac{1}{b-a}\dint\nolimits_{a}^{b}f(x)dx\leq \max \left\{ f(a),f(b)\right\} . \label{1.3} \end{equation} \end{theorem} In \cite{SS1}, Dragomir and Pearce also gave the following theorems involving some inclusions. \begin{theorem} Let $WQC\left( I\right) $ denote the class of Wright-quasi-convex functions on $I\subseteq \mathbb{R} .$ Then \begin{equation} QC\left( I\right) \subset WQC\left( I\right) \subset JQC\left( I\right) . \label{1.4} \end{equation} Both inclusions are proper. 
\end{theorem} \begin{theorem} We have the inclusions \begin{equation} W\left( I\right) \subset WQC\left( I\right) ,\text{ \ \ }C(I)\subset QC(I),\text{ \ \ }J(I)\subset JQC(I). \label{1.5} \end{equation} Each inclusion is proper. \end{theorem} For recent results related to quasi-convex functions see the papers \cite{AL1}-\cite{AH} and the books \cite{SS2}, \cite{GRE}. In \cite{SS}, Dragomir defined co-ordinated convex functions and proved the following inequalities. Let us consider the bidimensional interval $\Delta =\left[ a,b\right] \times \left[ c,d\right] $ in $\mathbb{R}^{2}$ with $a<b$ and $c<d.$ A function $f:\Delta \rightarrow \mathbb{R} $ will be called convex on the co-ordinates if the partial mappings \begin{equation*} f_{y}:\left[ a,b\right] \rightarrow \mathbb{R} ,\text{ \ \ }f_{y}\left( u\right) =f\left( u,y\right) \end{equation*} and \begin{equation*} f_{x}:\left[ c,d\right] \rightarrow \mathbb{R} ,\text{ \ \ }f_{x}\left( v\right) =f\left( x,v\right) \end{equation*} are convex where defined for all $y\in \left[ c,d\right] $ and $x\in \left[ a,b\right] .$ Recall that the mapping $f:\Delta \rightarrow \mathbb{R} $ is convex on $\Delta $ if the following inequality \begin{equation} f\left( \lambda x+\left( 1-\lambda \right) z,\lambda y+\left( 1-\lambda \right) w\right) \leq \lambda f\left( x,y\right) +\left( 1-\lambda \right) f\left( z,w\right) \label{a} \end{equation} holds for all $\left( x,y\right) ,$ $\left( z,w\right) \in \Delta $ and $\lambda \in \left[ 0,1\right] .$ \begin{theorem} (see \cite{SS}, Theorem 1) Suppose that $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is convex on the co-ordinates on $\Delta .$ Then one has the inequalities \begin{eqnarray} f\left( \frac{a+b}{2},\frac{c+d}{2}\right) &\leq &\frac{1}{2}\left[ \frac{1}{b-a}\dint\nolimits_{a}^{b}f\left( x,\frac{c+d}{2}\right) dx+\frac{1}{d-c}\dint\nolimits_{c}^{d}f\left( \frac{a+b}{2},y\right) dy\right] \notag \\ &\leq &\frac{1}{\left( b-a\right) \left( d-c\right) 
}\int_{a}^{b}\int_{c}^{d}f\left( x,y\right) dydx \label{1.6} \\ &\leq &\frac{1}{4}\left[ \frac{1}{b-a}\dint\nolimits_{a}^{b}f\left( x,c\right) dx+\frac{1}{b-a}\dint\nolimits_{a}^{b}f\left( x,d\right) dx\right. \notag \\ &&\left. +\frac{1}{d-c}\dint\nolimits_{c}^{d}f\left( a,y\right) dy+\frac{1}{d-c}\dint\nolimits_{c}^{d}f\left( b,y\right) dy\right] \notag \\ &\leq &\frac{f\left( a,c\right) +f\left( b,c\right) +f\left( a,d\right) +f\left( b,d\right) }{4} \notag \end{eqnarray} The above inequalities are sharp. \end{theorem} Similar results can be found in \cite{AL}-\cite{BAK}. This paper is arranged as follows. Firstly, we give some definitions of quasi-convex functions and prove lemmas related to these definitions. Secondly, we prove several inequalities involving co-ordinated quasi-convex functions. We also discuss the inclusions between some different classes of co-ordinated convex functions. \section{DEFINITIONS AND MAIN\ RESULTS} We start with the following definitions and lemmas. \begin{definition} A function $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is said to be quasi-convex on $\Delta $ if the following inequality \begin{equation*} f\left( \lambda x+\left( 1-\lambda \right) z,\lambda y+\left( 1-\lambda \right) w\right) \leq \max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\} \end{equation*} holds for all $\left( x,y\right) ,$ $\left( z,w\right) \in \Delta $ and $\lambda \in \left[ 0,1\right] .$ \end{definition} $f:\Delta \rightarrow \mathbb{R} $ will be called quasi-convex on the co-ordinates if the partial mappings \begin{equation*} f_{y}:\left[ a,b\right] \rightarrow \mathbb{R} ,\text{ \ \ }f_{y}\left( u\right) =f\left( u,y\right) \end{equation*} and \begin{equation*} f_{x}:\left[ c,d\right] \rightarrow \mathbb{R} ,\text{ \ \ }f_{x}\left( v\right) =f\left( x,v\right) \end{equation*} are quasi-convex where defined for all $y\in \left[ c,d\right] $ and $x\in \left[ a,b\right] 
.$ We denote by $QC(\Delta )$ the class of quasi-convex functions on the co-ordinates on $\Delta .$ The following lemma holds. \begin{lemma} Every quasi-convex mapping $f:\Delta \rightarrow \mathbb{R} $ is quasi-convex on the co-ordinates. \end{lemma} \begin{proof} Suppose that $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is quasi-convex on $\Delta .$ Consider the partial mappings \begin{equation*} f_{y}:\left[ a,b\right] \rightarrow \mathbb{R} ,\text{ \ \ }f_{y}\left( u\right) =f\left( u,y\right) ,\text{ \ \ }y\in \left[ c,d\right] \end{equation*} and \begin{equation*} f_{x}:\left[ c,d\right] \rightarrow \mathbb{R} ,\text{ \ \ }f_{x}\left( v\right) =f\left( x,v\right) ,\text{ \ \ }x\in \left[ a,b\right] . \end{equation*} For $\lambda \in \left[ 0,1\right] $ and $v_{1},v_{2}\in \left[ c,d\right] ,$ one has \begin{eqnarray*} f_{x}\left( \lambda v_{1}+\left( 1-\lambda \right) v_{2}\right) &=&f\left( x,\lambda v_{1}+\left( 1-\lambda \right) v_{2}\right) \\ &=&f\left( \lambda x+\left( 1-\lambda \right) x,\lambda v_{1}+\left( 1-\lambda \right) v_{2}\right) \\ &\leq &\max \left\{ f\left( x,v_{1}\right) ,f\left( x,v_{2}\right) \right\} \\ &=&\max \left\{ f_{x}\left( v_{1}\right) ,f_{x}\left( v_{2}\right) \right\} , \end{eqnarray*} which completes the proof of quasi-convexity of $f_{x}$ on $\left[ c,d\right] .$ The proof that $f_{y}:\left[ a,b\right] \rightarrow \mathbb{R} ,$ \ \ $f_{y}\left( u\right) =f\left( u,y\right) $ is quasi-convex on $\left[ a,b\right] $ for all $y\in \left[ c,d\right] $ goes likewise, and we omit the details. 
\end{proof} \begin{definition} A function $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is said to be J-convex on the co-ordinates on $\Delta $ if the following inequality \begin{equation*} f\left( \frac{x+z}{2},\frac{y+w}{2}\right) \leq \frac{f\left( x,y\right) +f\left( z,w\right) }{2} \end{equation*} holds for all $\left( x,y\right) ,$ $\left( z,w\right) \in \Delta .$ We denote by $J(\Delta )$ the class of J-convex functions on the co-ordinates on $\Delta .$ \end{definition} \begin{lemma} Every J-convex mapping $f:\Delta \rightarrow \mathbb{R} $ is J-convex on the co-ordinates. \end{lemma} \begin{proof} Using the partial mappings, we can write, for $v_{1},v_{2}\in \left[ c,d\right] ,$ \begin{eqnarray*} f_{x}\left( \frac{v_{1}+v_{2}}{2}\right) &=&f\left( x,\frac{v_{1}+v_{2}}{2}\right) \\ &=&f\left( \frac{x+x}{2},\frac{v_{1}+v_{2}}{2}\right) \\ &\leq &\frac{f\left( x,v_{1}\right) +f\left( x,v_{2}\right) }{2} \\ &=&\frac{f_{x}\left( v_{1}\right) +f_{x}\left( v_{2}\right) }{2}, \end{eqnarray*} which completes the proof of J-convexity of $f_{x}$ on $\left[ c,d\right] .$ Similarly, we can prove J-convexity of $f_{y}$ on $\left[ a,b\right] .$ \end{proof} \begin{definition} A function $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is said to be J-quasi-convex on the co-ordinates on $\Delta $ if the following inequality \begin{equation*} f\left( \frac{x+z}{2},\frac{y+w}{2}\right) \leq \max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\} \end{equation*} holds for all $\left( x,y\right) ,$ $\left( z,w\right) \in \Delta .$ We denote by $JQC(\Delta )$ the class of J-quasi-convex functions on the co-ordinates on $\Delta .$ \end{definition} \begin{lemma} Every J-quasi-convex mapping $f:\Delta \rightarrow \mathbb{R} $ is J-quasi-convex on the co-ordinates. 
\end{lemma} \begin{proof} In a similar way to the proof of Lemma 1, we can write, for $v_{1},v_{2}\in \left[ c,d\right] ,$ \begin{eqnarray*} f_{x}\left( \frac{v_{1}+v_{2}}{2}\right) &=&f\left( x,\frac{v_{1}+v_{2}}{2}\right) \\ &=&f\left( \frac{x+x}{2},\frac{v_{1}+v_{2}}{2}\right) \\ &\leq &\max \left\{ f\left( x,v_{1}\right) ,f\left( x,v_{2}\right) \right\} \\ &=&\max \left\{ f_{x}\left( v_{1}\right) ,f_{x}\left( v_{2}\right) \right\} , \end{eqnarray*} which completes the proof of J-quasi-convexity of $f_{x}$ on $\left[ c,d\right] .$ We can similarly prove the J-quasi-convexity of $f_{y}$ on $\left[ a,b\right] .$ \end{proof} \begin{definition} A function $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is said to be Wright-convex on the co-ordinates on $\Delta $ if the following inequality \begin{equation*} f\left( \left( 1-t\right) a+tb,\left( 1-s\right) c+sd\right) +f\left( ta+\left( 1-t\right) b,sc+\left( 1-s\right) d\right) \leq f\left( a,c\right) +f\left( b,d\right) \end{equation*} holds for all $\left( a,c\right) ,$ $\left( b,d\right) \in \Delta $ and $t,s\in \left[ 0,1\right] .$ We denote by $W(\Delta )$ the class of Wright-convex functions on the co-ordinates on $\Delta .$ \end{definition} \begin{lemma} Every Wright-convex mapping $f:\Delta \rightarrow \mathbb{R} $ is Wright-convex on the co-ordinates. \end{lemma} \begin{proof} Suppose that $f:\Delta \rightarrow \mathbb{R} $ is Wright-convex on $\Delta $. 
Then, using the partial mappings, for $v_{1},v_{2}\in \left[ c,d\right] $ and $x\in \left[ a,b\right] ,$ \begin{eqnarray*} &&f_{x}\left( \left( 1-t\right) v_{1}+tv_{2}\right) +f_{x}\left( tv_{1}+\left( 1-t\right) v_{2}\right) \\ &=&f\left( x,\left( 1-t\right) v_{1}+tv_{2}\right) +f\left( x,tv_{1}+\left( 1-t\right) v_{2}\right) \\ &=&f\left( \left( 1-t\right) x+tx,\left( 1-t\right) v_{1}+tv_{2}\right) +f\left( tx+\left( 1-t\right) x,tv_{1}+\left( 1-t\right) v_{2}\right) \\ &\leq &f\left( x,v_{1}\right) +f\left( x,v_{2}\right) \\ &=&f_{x}\left( v_{1}\right) +f_{x}\left( v_{2}\right) , \end{eqnarray*} which shows that $f_{x}$ is Wright-convex on $\left[ c,d\right] .$ Similarly one can see that $f_{y}$ is Wright-convex on $\left[ a,b\right] .$ \end{proof} \begin{definition} A function $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is said to be Wright-quasi-convex on the co-ordinates on $\Delta $ if the following inequality \begin{equation*} \frac{1}{2}\left[ f\left( tx+\left( 1-t\right) z,ty+\left( 1-t\right) w\right) +f\left( \left( 1-t\right) x+tz,\left( 1-t\right) y+tw\right) \right] \leq \max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\} \end{equation*} holds for all $\left( x,y\right) ,$ $\left( z,w\right) \in \Delta $ and $t\in \left[ 0,1\right] .$ We denote by $WQC(\Delta )$ the class of Wright-quasi-convex functions on the co-ordinates on $\Delta .$ \end{definition} \begin{lemma} Every Wright-quasi-convex mapping $f:\Delta \rightarrow \mathbb{R} $ is Wright-quasi-convex on the co-ordinates. \end{lemma} \begin{proof} Suppose that $f:\Delta \rightarrow \mathbb{R} $ is Wright-quasi-convex on $\Delta $. 
Then, using the partial mappings, for $v_{1},v_{2}\in \left[ c,d\right] ,$ \begin{eqnarray*} &&\frac{1}{2}\left[ f_{x}\left( tv_{1}+\left( 1-t\right) v_{2}\right) +f_{x}\left( \left( 1-t\right) v_{1}+tv_{2}\right) \right] \\ &=&\frac{1}{2}\left[ f\left( x,tv_{1}+\left( 1-t\right) v_{2}\right) +f\left( x,\left( 1-t\right) v_{1}+tv_{2}\right) \right] \\ &=&\frac{1}{2}\left[ f\left( tx+\left( 1-t\right) x,tv_{1}+\left( 1-t\right) v_{2}\right) +f\left( \left( 1-t\right) x+tx,\left( 1-t\right) v_{1}+tv_{2}\right) \right] \\ &\leq &\max \left\{ f\left( x,v_{1}\right) ,f\left( x,v_{2}\right) \right\} \\ &=&\max \left\{ f_{x}\left( v_{1}\right) ,f_{x}\left( v_{2}\right) \right\} , \end{eqnarray*} which shows that $f_{x}$ is Wright-quasi-convex on $\left[ c,d\right] .$ Similarly one can see that $f_{y}$ is Wright-quasi-convex on $\left[ a,b\right] .$ \end{proof} \begin{theorem} Suppose that $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R} $ is J-quasi-convex on the co-ordinates on $\Delta .$ If $f_{x}\in L_{1}\left[ c,d\right] $ and $f_{y}\in L_{1}\left[ a,b\right] ,$ then we have the inequality \begin{eqnarray} &&\frac{1}{2}\left[ \frac{1}{b-a}\dint\nolimits_{a}^{b}f\left( x,\frac{c+d}{2}\right) dx+\frac{1}{d-c}\int_{c}^{d}f\left( \frac{a+b}{2},y\right) dy\right] \label{2.1} \\ &\leq &\frac{1}{\left( b-a\right) \left( d-c\right) }\int_{c}^{d}\dint\nolimits_{a}^{b}f(x,y)dxdy+H(x,y) \notag \end{eqnarray} where \begin{eqnarray*} H\left( x,y\right) &=&\frac{1}{4\left( d-c\right) }\int_{c}^{d}\int_{0}^{1}\left\vert f\left( ta+\left( 1-t\right) b,y\right) -f\left( \left( 1-t\right) a+tb,y\right) \right\vert dtdy \\ &&+\frac{1}{4\left( b-a\right) }\dint\nolimits_{a}^{b}\int_{0}^{1}\left\vert f\left( x,tc+\left( 1-t\right) d\right) -f\left( x,\left( 1-t\right) c+td\right) \right\vert dtdx. 
\end{eqnarray*}
\end{theorem}

\begin{proof}
Since $f:\Delta \rightarrow \mathbb{R}$ is J-quasi-convex on the co-ordinates on $\Delta ,$ the partial mappings
\begin{equation*}
f_{y}:\left[ a,b\right] \rightarrow \mathbb{R},\text{ \ \ }f_{y}\left( u\right) =f\left( u,y\right) ,\text{ \ \ }y\in \left[ c,d\right]
\end{equation*}
and
\begin{equation*}
f_{x}:\left[ c,d\right] \rightarrow \mathbb{R},\text{ \ \ }f_{x}\left( v\right) =f\left( x,v\right) ,\text{ \ \ }x\in \left[ a,b\right]
\end{equation*}
are J-quasi-convex. Then, by the inequality (\ref{1.2}), we have
\begin{equation*}
f_{y}\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\dint\nolimits_{a}^{b}f_{y}(x)dx+\frac{1}{2}\int_{0}^{1}\left\vert f_{y}\left( ta+\left( 1-t\right) b\right) -f_{y}\left( \left( 1-t\right) a+tb\right) \right\vert dt.
\end{equation*}
That is,
\begin{equation*}
f\left( \frac{a+b}{2},y\right) \leq \frac{1}{b-a}\dint\nolimits_{a}^{b}f(x,y)dx+\frac{1}{2}\int_{0}^{1}\left\vert f\left( ta+\left( 1-t\right) b,y\right) -f\left( \left( 1-t\right) a+tb,y\right) \right\vert dt.
\end{equation*}
Integrating the resulting inequality with respect to $y$ over $\left[ c,d\right] $ and dividing both sides by $d-c,$ we get
\begin{eqnarray}
&&\frac{1}{d-c}\int_{c}^{d}f\left( \frac{a+b}{2},y\right) dy  \label{2.2} \\
&\leq &\frac{1}{\left( b-a\right) \left( d-c\right) }\int_{c}^{d}\dint\nolimits_{a}^{b}f(x,y)dxdy  \notag \\
&&+\frac{1}{2\left( d-c\right) }\int_{c}^{d}\int_{0}^{1}\left\vert f\left( ta+\left( 1-t\right) b,y\right) -f\left( \left( 1-t\right) a+tb,y\right) \right\vert dtdy.
\notag
\end{eqnarray}
By a similar argument, we have
\begin{eqnarray}
&&\frac{1}{b-a}\dint\nolimits_{a}^{b}f\left( x,\frac{c+d}{2}\right) dx  \label{2.3} \\
&\leq &\frac{1}{\left( b-a\right) \left( d-c\right) }\dint\nolimits_{a}^{b}\dint\nolimits_{c}^{d}f(x,y)dydx  \notag \\
&&+\frac{1}{2\left( b-a\right) }\dint\nolimits_{a}^{b}\int_{0}^{1}\left\vert f\left( x,tc+\left( 1-t\right) d\right) -f\left( x,\left( 1-t\right) c+td\right) \right\vert dtdx.  \notag
\end{eqnarray}
Summing (\ref{2.2}) and (\ref{2.3}), we get the required result.
\end{proof}

\begin{theorem}
Suppose that $f:\Delta =\left[ a,b\right] \times \left[ c,d\right] \rightarrow \mathbb{R}$ is Wright-quasi-convex on the co-ordinates on $\Delta .$ If $f_{x}\in L_{1}\left[ c,d\right] $ and $f_{y}\in L_{1}\left[ a,b\right] ,$ then we have the inequality
\begin{eqnarray}
&&\frac{1}{\left( b-a\right) \left( d-c\right) }\dint\nolimits_{c}^{d}\dint\nolimits_{a}^{b}f(x,y)dxdy  \label{2.4} \\
&\leq &\frac{1}{2}\left[ \max \left\{ \frac{1}{\left( b-a\right) }\dint\nolimits_{a}^{b}f(x,c)dx,\frac{1}{\left( b-a\right) }\dint\nolimits_{a}^{b}f(x,d)dx\right\} \right.  \notag \\
&&\left. +\max \left\{ \frac{1}{\left( d-c\right) }\dint\nolimits_{c}^{d}f(a,y)dy,\frac{1}{\left( d-c\right) }\dint\nolimits_{c}^{d}f(b,y)dy\right\} \right] .
\notag
\end{eqnarray}
\end{theorem}

\begin{proof}
Since $f:\Delta \rightarrow \mathbb{R}$ is Wright-quasi-convex on the co-ordinates on $\Delta ,$ the partial mappings
\begin{equation*}
f_{y}:\left[ a,b\right] \rightarrow \mathbb{R},\text{ \ \ }f_{y}\left( u\right) =f\left( u,y\right) ,\text{ \ \ }y\in \left[ c,d\right]
\end{equation*}
and
\begin{equation*}
f_{x}:\left[ c,d\right] \rightarrow \mathbb{R},\text{ \ \ }f_{x}\left( v\right) =f\left( x,v\right) ,\text{ \ \ }x\in \left[ a,b\right]
\end{equation*}
are Wright-quasi-convex. Then, by the inequality (\ref{1.3}), we have
\begin{equation*}
\frac{1}{b-a}\dint\nolimits_{a}^{b}f_{y}(x)dx\leq \max \left\{ f_{y}(a),f_{y}(b)\right\} .
\end{equation*}
That is,
\begin{equation*}
\frac{1}{b-a}\dint\nolimits_{a}^{b}f(x,y)dx\leq \max \left\{ f(a,y),f(b,y)\right\} .
\end{equation*}
Dividing both sides by $d-c$ and integrating with respect to $y$ over $\left[ c,d\right] ,$ we get
\begin{equation}
\frac{1}{\left( b-a\right) \left( d-c\right) }\dint\nolimits_{c}^{d}\dint\nolimits_{a}^{b}f(x,y)dxdy\leq \max \left\{ \frac{1}{\left( d-c\right) }\dint\nolimits_{c}^{d}f(a,y)dy,\frac{1}{\left( d-c\right) }\dint\nolimits_{c}^{d}f(b,y)dy\right\} .  \label{2.5}
\end{equation}
By a similar argument, we can write
\begin{equation}
\frac{1}{\left( b-a\right) \left( d-c\right) }\dint\nolimits_{c}^{d}\dint\nolimits_{a}^{b}f(x,y)dxdy\leq \max \left\{ \frac{1}{\left( b-a\right) }\dint\nolimits_{a}^{b}f(x,c)dx,\frac{1}{\left( b-a\right) }\dint\nolimits_{a}^{b}f(x,d)dx\right\} .  \label{2.6}
\end{equation}
Adding (\ref{2.5}) and (\ref{2.6}), we have
\begin{eqnarray*}
&&\frac{1}{\left( b-a\right) \left( d-c\right) }\dint\nolimits_{c}^{d}\dint\nolimits_{a}^{b}f(x,y)dxdy \\
&\leq &\frac{1}{2}\left[ \max \left\{ \frac{1}{\left( b-a\right) }\dint\nolimits_{a}^{b}f(x,c)dx,\frac{1}{\left( b-a\right) }\dint\nolimits_{a}^{b}f(x,d)dx\right\} \right. \\
&&\left.
+\max \left\{ \frac{1}{\left( d-c\right) }\dint\nolimits_{c}^{d}f(a,y)dy,\frac{1}{\left( d-c\right) }\dint\nolimits_{c}^{d}f(b,y)dy\right\} \right]
\end{eqnarray*}
which completes the proof.
\end{proof}

\begin{theorem}
Let $C\left( \Delta \right) ,$ $J\left( \Delta \right) ,$ $W\left( \Delta \right) ,$ $QC\left( \Delta \right) ,$ $JQC\left( \Delta \right) ,$ $WQC\left( \Delta \right) $ denote the classes of co-ordinated convex, co-ordinated J-convex, co-ordinated W-convex, co-ordinated quasi-convex, co-ordinated J-quasi-convex, and co-ordinated W-quasi-convex functions on $\Delta =\left[ a,b\right] \times \left[ c,d\right] $, respectively. Then we have the following inclusions:
\begin{equation}
QC\left( \Delta \right) \subset WQC\left( \Delta \right) \subset JQC\left( \Delta \right)  \label{2.7}
\end{equation}
\begin{equation}
W\left( \Delta \right) \subset WQC\left( \Delta \right) ,\text{ \ \ }C\left( \Delta \right) \subset J\left( \Delta \right) ,\text{ \ \ }J\left( \Delta \right) \subset JQC\left( \Delta \right) .  \label{2.8}
\end{equation}
\end{theorem}

\begin{proof}
Let $f\in QC\left( \Delta \right) .$ Then for all $\left( x,y\right) ,\left( z,w\right) \in \Delta $ and $\lambda \in \left[ 0,1\right] ,$ we have
\begin{equation*}
f\left( \lambda x+\left( 1-\lambda \right) z,\lambda y+\left( 1-\lambda \right) w\right) \leq \max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\}
\end{equation*}
and
\begin{equation*}
f\left( \left( 1-\lambda \right) x+\lambda z,\left( 1-\lambda \right) y+\lambda w\right) \leq \max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\} .
\end{equation*}
Adding these, we obtain
\begin{eqnarray}
&&\frac{1}{2}\left[ f\left( \lambda x+\left( 1-\lambda \right) z,\lambda y+\left( 1-\lambda \right) w\right) +f\left( \left( 1-\lambda \right) x+\lambda z,\left( 1-\lambda \right) y+\lambda w\right) \right]  \label{2.9} \\
&\leq &\max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\}  \notag
\end{eqnarray}
that is, $f\in WQC\left( \Delta \right) .$ In (\ref{2.9}), if we choose $\lambda =\frac{1}{2},$ we obtain $WQC\left( \Delta \right) \subset JQC\left( \Delta \right) ,$ which completes the proof of (\ref{2.7}).

In order to prove (\ref{2.8}), taking $f\in W\left( \Delta \right) $ and using the definition, we get
\begin{equation*}
\frac{1}{2}\left[ f\left( \left( 1-t\right) a+tb,\left( 1-s\right) c+sd\right) +f\left( ta+\left( 1-t\right) b,sc+\left( 1-s\right) d\right) \right] \leq \frac{f\left( a,c\right) +f\left( b,d\right) }{2}
\end{equation*}
for all $\left( a,c\right) ,\left( b,d\right) \in \Delta $ and $t,s\in \left[ 0,1\right] .$ Using the fact that
\begin{equation*}
\frac{f\left( a,c\right) +f\left( b,d\right) +\left\vert f\left( a,c\right) -f\left( b,d\right) \right\vert }{2}=\max \left\{ f(a,c),f(b,d)\right\} ,
\end{equation*}
we can write
\begin{equation*}
\frac{f\left( a,c\right) +f\left( b,d\right) }{2}\leq \max \left\{ f(a,c),f(b,d)\right\}
\end{equation*}
for all $\left( a,c\right) ,\left( b,d\right) \in \Delta ,$ and we obtain $W\left( \Delta \right) \subset WQC\left( \Delta \right) .$ Taking $f\in C\left( \Delta \right) $ and choosing $t=\frac{1}{2}$ in (\ref{a}), we obtain
\begin{equation*}
f\left( \frac{x+z}{2},\frac{y+w}{2}\right) \leq \frac{f\left( x,y\right) +f\left( z,w\right) }{2}
\end{equation*}
for all $\left( x,y\right) ,\left( z,w\right) \in \Delta ,$ so that $C\left( \Delta \right) \subset J\left( \Delta \right) .$ Taking $f\in J\left( \Delta \right) ,$ we can write
\begin{equation*}
f\left( \frac{x+z}{2},\frac{y+w}{2}\right) \leq \frac{f\left( x,y\right) +f\left( z,w\right) }{2}
\end{equation*}
for all $\left( x,y\right) ,\left( z,w\right) \in \Delta .$ Using the fact that
\begin{equation*}
\frac{f\left( x,y\right) +f\left( z,w\right) +\left\vert f\left( x,y\right) -f\left( z,w\right) \right\vert }{2}=\max \left\{ f(x,y),f(z,w)\right\} ,
\end{equation*}
we can write
\begin{equation*}
\frac{f\left( x,y\right) +f\left( z,w\right) }{2}\leq \max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\} .
\end{equation*}
Then, obviously, we obtain
\begin{equation*}
f\left( \frac{x+z}{2},\frac{y+w}{2}\right) \leq \max \left\{ f\left( x,y\right) ,f\left( z,w\right) \right\} ,
\end{equation*}
which shows that $f\in JQC\left( \Delta \right) .$
\end{proof}
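As a numerical illustration of the bound (\ref{2.4}), the following sketch checks it for the sample function $f(x,y)=x^{2}+y^{2}$ on $\Delta =[0,1]\times \lbrack 0,1]$, which is convex and hence quasi-convex and Wright-quasi-convex on the co-ordinates. The choice of $f$ and the midpoint-rule grid are ours, for illustration only.

```python
import numpy as np

# Midpoint-rule check of inequality (2.4) for an illustrative sample
# function; f(x,y) = x^2 + y^2 is convex, hence co-ordinated W-quasi-convex.
a, b, c, d = 0.0, 1.0, 0.0, 1.0
f = lambda x, y: x**2 + y**2

n = 400
x = a + (np.arange(n) + 0.5) * (b - a) / n     # midpoints in [a, b]
y = c + (np.arange(n) + 0.5) * (d - c) / n     # midpoints in [c, d]
X, Y = np.meshgrid(x, y)

lhs = f(X, Y).mean()                  # mean of f over the rectangle, ~ 2/3
rhs = 0.5 * (max(f(x, c).mean(), f(x, d).mean())
             + max(f(a, y).mean(), f(b, y).mean()))   # ~ 4/3

assert lhs <= rhs                     # inequality (2.4) holds
```

For this $f$ the left-hand side is $2/3$ and the right-hand side is $4/3$, so the inequality holds with room to spare.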
\section{Introduction} In recent years it has become clear that the current computational methods for scientific and engineering phenomena are inadequate for challenging problems. These include problems with propagating waves, turbulent fluid flow, nonlinear interactions, and multiple scales. This has resulted in a significant interest in so-called high-order accurate methods, which have the potential to produce fundamentally more reliable solutions. A number of numerical methods have been proposed, including multi-block finite difference methods \cite{lele93compact,visbal02ho,nordstrom09multiblock}, high-order finite volume methods \cite{Barth_kexact,gooch08hofvm}, stabilized finite element methods \cite{hughes08stabilized}, discontinuous Galerkin (DG) methods \cite{Reed_Hill,cockburn01rkdg,hesthaven08dgbook}, DG spectral element methods (DGSEM) \cite{kopriva96}, spectral volume/difference methods \cite{zj02spectralvolume,zj06spectraldifference,huynh07fluxreconstruction,vincent11fluxreconstruction}, and hybridized DG methods \cite{cockburn08hybrid,peraire09hybrid}. All methods have advantages in particular situations, but for various reasons most general-purpose commercial-grade simulation tools still use traditional low-order methods.

Much of the current research is devoted to the discontinuous Galerkin method. This is partly because of its many attractive properties, including the use of fully unstructured simplex meshes, the natural stabilization mechanism based on approximate Riemann solvers, and the rigorous theoretical foundations. It can certainly be debated why the DG method is not used routinely for real-world simulations, but one of the main reasons is clearly its high computational cost, which is still at least an order of magnitude higher than that of low-order methods or high-order finite difference methods on similar grids.
For some problems, explicit time-stepping or matrix-free implicit methods can be employed, but for many real-world problems and meshes full Jacobian matrices are required for the solvers to be efficient. Here, nodal-based Galerkin methods have a fundamental disadvantage in that they connect all unknowns inside an element, as well as all neighboring face nodes, even for first-order derivatives. This leads to a stencil size that scales like $p^D$ for polynomial degrees $p$ in $D$ spatial dimensions. As a contrast, a standard finite difference method only connects neighboring nodes along the $D$ coordinate lines through the node. This gives a stencil size proportional to $Dp$, which in three dimensions can be orders of magnitude smaller even for moderate values of $p$.

Several high-order schemes for unstructured meshes have been proposed with a similar stencil-size reduction. In particular, the DG spectral element method \cite{kopriva96,gassner10dgsem} is a collocation-based method on a staggered grid which only uses information along each coordinate line for the discretized equations. Other closely related schemes have the same property, such as the spectral difference method \cite{zj06spectraldifference}, the flux reconstruction method \cite{huynh07fluxreconstruction,vincent11fluxreconstruction}, and the DGM-FD method \cite{hu11dgmfd}. For the special case of a linear one-dimensional problem, many of these methods can be shown to be identical to the standard DG method \cite{zj10unifying}, but in general they define different schemes with varying properties.

In an attempt to further reduce the size of the Jacobians, and to ensure that the scheme is identical to the standard DG method along each line of nodes, we propose a new line-based DG scheme. Like the DGSEM, our Line-DG scheme is derived by considering only the 1-D problems that arise along each coordinate direction.
We apply standard 1-D DG formulations for each of these sub-problems, and all integrals are computed fully consistently (with sufficient accuracy), which means in particular that the definition of the scheme makes no statement about flux points. We note that this can be done without introducing additional connectivities, since all nodes in the local 1-D problem are already connected by the shape functions. In addition, our scheme uses solution points along each element face, which further reduces the number of connectivities with the neighboring elements. For the second-order terms in the Navier-Stokes equations, we use an LDG-type approach \cite{cockburn98ldg} with upwind/downwind fluxes based on consistent switches along all globally connected lines of elements. Special care is required to preserve the sparsity of the resulting matrices, and we propose a simple but efficient Newton-Krylov solver which splits the matrix product in order to avoid introducing additional matrix entries. Many options for preconditioning are possible, and in this work we use a block-Jacobi method with sparse blocks. We first describe the method for first-order systems in Section 2 and mention some practical implementation issues, including a study of the structure of the Jacobian matrices. In Section 3 we extend the scheme to second-order systems using the LDG-type scheme, and in Section 4 we discuss the implicit temporal discretization, some approaches for maintaining the high sparsity of the discretization, and the Newton-Krylov solver. Finally, in Section 5 we show numerical results and convergence for Poisson's equation, an inviscid Euler vortex, and flow over a cylinder. We also compare the method to the standard nodal DG method, and we conclude that the differences are overall very small. 
For the Navier-Stokes equations, we show convergence of drag and lift forces for steady-state laminar flow around an airfoil, and we demonstrate our implicit time-integrators on a transient LES-type flow problem. \section{Line-based discontinuous Galerkin discretization} \subsection{First-order equations} Consider a system of $m$ first-order conservation laws with source terms, \begin{align} \frac{\partial \bm{u}}{\partial t} + \nabla \cdot \bm{F}(\bm{u}) = \bm{S}(\bm{u}), \label{conslaw} \end{align} in a three-dimensional domain $\Omega$, with solution $\bm{u}$, flux function $\bm{F}(\bm{u})$, source function $\bm{S}(\bm{u})$, and appropriate boundary conditions on $\partial \Omega$. We will use a discretization of $\Omega$ into non-overlapping, conforming, curved hexahedral elements. Within each element we introduce a Cartesian grid of $(p+1)^3$ node points, where $p\ge 1$, by defining a smooth one-to-one mapping given by a diffeomorphism $\bm{x}=\bm{x}(\bm{X})$ between the reference unit cube $V=[0,1]^3$ and the element $v$, and setting $\bm{x}_{ijk}=\bm{x}(\bm{X}_{ijk})$, where $\bm{X}_{ijk}=(s_i,s_j,s_k)$ for $0\le i,j,k\le p$, and $\{s_i\}$ is an increasing sequence of $p+1$ node positions $s_i\in[0,1]$ with $s_0=0$ and $s_p=1$ (see figure~\ref{plt}). To obtain our numerical scheme for approximating (\ref{conslaw}), we consider a single element $v$ and its mapping $\bm{x}=\bm{x}(\bm{X})$, and follow standard procedure to change independent variables from $\bm{x}$ to $\bm{X}$. This transforms (\ref{conslaw}) into \begin{align} J\frac{\partial \bm{u}}{\partial t} + \nabla_{\bm{X}} \cdot \widetilde{\bm{F}}(\bm{u}) = J \bm{S}(\bm{u}), \label{conslawref} \end{align} in the reference domain $V$. Here we have defined the mapping Jacobian $J=\det(\bm{G})$ and the contravariant fluxes $\widetilde{\bm{F}} = (\widetilde{\bm{f}}_1,\widetilde{\bm{f}}_2,\widetilde{\bm{f}}_3 ) = J \bm{G}^{-1} \bm{F}$, with the mapping deformation gradient $\bm{G}=\nabla_{\bm{X}} \bm{x}$. 
A standard nodal discontinuous Galerkin method would now consider the multivariate polynomial $\bm{u}(\bm{X})$ that interpolates the grid function, $\bm{u}_{ijk} = \bm{u}(\bm{X}_{ijk})$, and define a numerical scheme for the spatial derivatives of (\ref{conslaw}) by a Galerkin procedure in $V$. Our approach differs in that it considers each of the three spatial derivatives in (\ref{conslawref}) separately and approximates them numerically using one-dimensional discontinuous Galerkin formulations along each of the $3(p+1)^2$ curves defined by straight lines in the reference domain $V$, through the three sets of nodes along each space dimension. More specifically, the $(p+1)^2$ curves along the first space dimension are $\bm{x}_{jk}(\xi)=\bm{x}(\xi,X_j,X_k)$ for $0\le j,k \le p$. On these we define the polynomial $\bm{u}_{jk}(\xi)\in \mathcal{P}_p([0,1])^m$ that interpolates $\bm{u}_{ijk}$, $i=0,\ldots,p$, and we define a numerical approximation $\bm{r}_{jk}(X_1)$ to $\partial \widetilde{\bm{f}_1}/\partial X_1$ by a one-dimensional Galerkin procedure: Find $\bm{r}_{jk}(\xi)\in\mathcal{P}_p([0,1])^m$ such that \begin{align} &\int_0^1 \bm{r}_{jk}(\xi) \cdot \bm{v}(\xi)\, d\xi = \int_0^1 \frac{d\widetilde{\bm{f}_1}(\bm{u}_{jk}(\xi))}{d\xi}\cdot \bm{v}(\xi)\,d\xi \nonumber \\ &\qquad = \widehat{\widetilde{\bm{f}_1}}(\bm{u}_{jk}^+(1), \bm{u}_{jk}(1)) \cdot \bm{v}(1) - \widehat{\widetilde{\bm{f}_1}}(\bm{u}_{jk}(0), \bm{u}_{jk}^-(0)) \cdot \bm{v}(0) - \int_0^1 \widetilde{\bm{f}_1}(\bm{u}_{jk}(\xi)) \cdot \frac{d\bm{v}}{d\xi}\,d\xi, \label{galerkinscheme} \end{align} for all test functions $\bm{v}(\xi)\in\mathcal{P}_p([0,1])^m$. Here, $\bm{u}_{jk}^+(1)$ is the numerical solution at $\bm{x}_{jk}(1^+)=\bm{x}(1^+,X_j,X_k)$, and similarly $\bm{u}_{jk}^-(0)$ at $\bm{x}_{jk}(0^-)=\bm{x}(0^-,X_j,X_k)$. These will be given either by nodes in the neighboring elements or implicitly through the boundary conditions. 
Furthermore, $\widehat{\widetilde{\bm{f}_1}}(\bm{u}_R,\bm{u}_L)$ is a numerical flux function for $\widetilde{\bm{f}_1}$, but we note that with the reference normal direction $\bm{N}_1^+=(1,0,0)$, the contravariant flux can be written \begin{align} \widetilde{\bm{f}}_1 = \widetilde{\bm{F}} \cdot \bm{N}_1^+ = (J\bm{G}^{-1}\bm{F})\cdot \bm{N}_1^+ = \bm{F}\cdot (J\bm{G}^{-T}\bm{N}_1^+) = \bm{F}\cdot\bm{n}_1^+ \label{contraflux} \end{align} with the (non-normalized) normal vector $\bm{n}_1^+=J\bm{G}^{-T}\bm{N}_1^+$ at the boundary point $\bm{x}_{jk}(1)$. Our numerical flux then becomes \begin{align} \widehat{\widetilde{\bm{f}_1}}(\bm{u}_R,\bm{u}_L) = \widehat{\widetilde{\bm{F}}\cdot\bm{N}_1^+}(\bm{u}_R,\bm{u}_L) = \widehat{\bm{F}\cdot\bm{n}_1^+} (\bm{u}_R,\bm{u}_L), \label{numflux1} \end{align} where $\widehat{\bm{F}\cdot\bm{n}}(\bm{u}^+,\bm{u}^-)$ is a standard numerical flux function used in finite volume and discontinuous Galerkin schemes, with normal direction $\bm{n}$ and traces $\bm{u}^\pm$ in the positive/negative normal direction. This allows us to use existing flux functions and approximate Riemann solvers without modification. Similarly, for the second numerical flux we move the negative sign to the normal direction and define $\bm{N}_1^-=(-1,0,0)$ and $\bm{n}_1^-=J\bm{G}^{-T}\bm{N}_1^-$, which is again an outward normal vector. We can then write: \begin{align} -\widehat{\widetilde{\bm{f}_1}}(\bm{u}_R,\bm{u}_L) = \widehat{\widetilde{\bm{F}}\cdot\bm{N}_1^-}(\bm{u}_L,\bm{u}_R) = \widehat{\bm{F}\cdot\bm{n}_1^-} (\bm{u}_L,\bm{u}_R), \label{numflux2} \end{align} where we also have swapped the order of the arguments to the flux function $\widehat{\bm{F}\cdot\bm{n}}$ to be consistent with the negative normal direction. 
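The identity $\bm{n}=J\bm{G}^{-T}\bm{N}$ can be checked directly: the resulting vector must be orthogonal to the tangent of the mapped face. The sketch below does this for a hypothetical 2-D mapping of our own choosing, used purely as an illustration.

```python
import numpy as np

# Hypothetical 2-D mapping x(X) = (X1 + 0.2*X2^2, X2), illustrative only.
def G(X):
    # Deformation gradient dx_i/dX_j of the mapping above.
    return np.array([[1.0, 0.4 * X[1]],
                     [0.0, 1.0]])

X0 = np.array([1.0, 0.3])          # a point on the reference face X1 = 1
N = np.array([1.0, 0.0])           # reference outward normal N_1^+
g = G(X0)
n = np.linalg.det(g) * np.linalg.solve(g.T, N)   # n = J G^{-T} N

# The mapped face {x(1, X2)} has tangent dx/dX2 = G @ (0, 1);
# the transformed normal must be orthogonal to it.
t = g @ np.array([0.0, 1.0])
assert abs(n @ t) < 1e-12
```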
Our Galerkin scheme (\ref{galerkinscheme}) then takes its final form: Find $\bm{r}_{jk}(\xi)\in\mathcal{P}_p([0,1])^m$ such that \begin{align} &\int_0^1 \bm{r}_{jk}(\xi) \cdot \bm{v}(\xi)\, d\xi = \widehat{\bm{F}\cdot\bm{n}_1^+} (\bm{u}_{jk}^+(1), \bm{u}_{jk}(1)) \cdot \bm{v}(1) + \widehat{\bm{F}\cdot\bm{n}_1^-} (\bm{u}_{jk}^-(0), \bm{u}_{jk}(0)) \cdot \bm{v}(0) - \int_0^1 \widetilde{\bm{f}_1}(\bm{u}_{jk}(\xi)) \cdot \frac{d\bm{v}}{d\xi}\,d\xi, \label{galerkinscheme2} \end{align} for all $\bm{v}(\xi)\in\mathcal{P}_p([0,1])^m$. \begin{figure} \includegraphics[width=\textwidth]{mappingplt.pdf} \caption{A two-dimensional illustration of the mapping from a reference element $V$ to the actual curved element $v$, for the case $p=4$.} \label{plt} \end{figure} We use a standard finite element procedure to solve (\ref{galerkinscheme2}) for $\bm{r}_{jk}(\xi)$. Introduce the nodal Lagrange basis functions $\phi_i\in\mathcal{P}_p([0,1])$ such that $\phi_i(s_j)=\delta_{ij}$, for $i,j=0,\ldots,p$, and set \begin{align} \bm{u}_{jk}(\xi) &= \sum_{i=0}^p \bm{u}_{ijk} \phi_i(\xi), \\ \bm{r}_{jk}(\xi) &= \sum_{i=0}^p \bm{r}_{ijk} \phi_i(\xi). \label{reqn} \end{align} To find the $m(p+1)$ coefficients along the curve $\bm{x}_{jk}(\xi)$, we set $\bm{v}(\xi)=\bm{e}_\ell \phi_i(\xi)$, for each $i=0,\ldots,p$ and $\ell=1,\ldots,m$, where $(\bm{e}_\ell)_n = \delta_{\ell n}$ for $n=1,\ldots,m$. Our Galerkin scheme (\ref{galerkinscheme2}) then takes the discrete form $\bm{M} \bm{r}_{jk} = \bm{b}$, and we find the coefficients $\bm{r}_{jk}$ by solving $m$ linear systems with the $(p+1)$-by-$(p+1)$ mass matrix $\bm{M}$. Repeating the procedure for each $j,k=0,\ldots,p$ we obtain all coefficients $\bm{r}_{ijk}=\bm{r}^{(1)}_{ijk}$, which is the grid function for our numerical approximation of $\partial \widetilde{\bm{F}}_1/\partial X_1$ at each grid point $\bm{x}_{ijk}$.
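The one-dimensional solve can be sketched in a few lines. The following minimal single-element example (our own simplification: identity mapping, flux $f(u)=u$, and exact trace values in place of the numerical flux, so the boundary terms are exact) builds the mass matrix $\bm{M}$ and the right-hand side $\bm{b}$ by quadrature; since the weak derivative is then the $L^2$ projection of $du/d\xi$, it reproduces the derivative exactly at the nodes for any $u\in\mathcal{P}_p$.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial.legendre import leggauss

p = 4
s = np.linspace(0.0, 1.0, p + 1)            # node positions s_i in [0, 1]

# Coefficients (ascending powers) of the Lagrange basis, phi_i(s_j) = delta_ij.
C = np.zeros((p + 1, p + 1))
for i in range(p + 1):
    c = np.array([1.0])
    for j in range(p + 1):
        if j != i:
            c = np.convolve(c, np.array([-s[j], 1.0]) / (s[i] - s[j]))
    C[i] = c

# Gauss-Legendre rule on [0, 1] with precision 3p, as used in the text.
nq = (3 * p + 1 + 1) // 2
xg, wg = leggauss(nq)
xg = 0.5 * (xg + 1.0); wg = 0.5 * wg

Phi  = np.array([P.polyval(xg, C[i]) for i in range(p + 1)])            # phi_i
dPhi = np.array([P.polyval(xg, P.polyder(C[i])) for i in range(p + 1)]) # phi_i'

M = (Phi * wg) @ Phi.T                      # 1-D mass matrix, same for all lines

# Flux f(u) = u with u(xi) = xi^3; exact end states replace the numerical flux.
u = lambda x: x**3
b = (np.array([u(1.0) * P.polyval(1.0, C[i]) - u(0.0) * P.polyval(0.0, C[i])
               for i in range(p + 1)])
     - (dPhi * wg) @ u(xg))
r = np.linalg.solve(M, b)                   # weak derivative coefficients r_i

assert np.allclose(r, 3.0 * s**2)           # recovers du/dxi at the nodes
```

In the actual scheme the end states would come from the neighboring elements through the numerical flux, and the same prefactorized $\bm{M}$ is reused for every component, line, and element.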
In an analogous way, we calculate coefficients $\bm{r}_{ijk}^{(2)}$ and $\bm{r}_{ijk}^{(3)}$ that approximate $\partial \widetilde{\bm{F}}_2/\partial X_2$ and $\partial \widetilde{\bm{F}}_3/\partial X_3$, respectively, at the grid points. The curves considered are now $\bm{x}_{ik}=\bm{x}(X_i,\xi,X_k)$ and $\bm{x}_{ij}=\bm{x}(X_i,X_j,\xi)$, and with the reference normals $\bm{N}_2^{\pm}=(0,\pm 1,0)$ and $\bm{N}_3^{\pm}=(0,0,\pm 1)$ the contravariant fluxes $\widetilde{\bm{f}}_2$ and $\widetilde{\bm{f}}_3$ can again be written as $\bm{F}\cdot\bm{n}$ where $\bm{n}=J\bm{G}^{-T}\bm{N}$ is a non-normalized normal vector to the element at the boundary points. The solution procedure involves the same mass matrix $\bm{M}$ and is identical to before. Using the calculated numerical approximations to each partial derivative in (\ref{conslawref}), we obtain our final semi-discrete formulation: \begin{align} \frac{d\bm{u}_{ijk}}{dt} + \frac{1}{J_{ijk}}\sum_{n=1}^3 \bm{r}_{ijk}^{(n)} = \bm{S}(\bm{u}_{ijk}), \label{semidisc} \end{align} where $J_{ijk}=J(\bm{x}_{ijk})$. \subsection{Implementation details} For the mapping $\bm{x}(\bm{X})$ it is natural to use an iso-parametric approach. The node positions $\bm{x}_{ijk}$ are given by some curved mesh generation procedure \cite{persson09curved}, and we define \begin{align} \bm{x}(\bm{X})=\sum_{i,j,k=0}^p \bm{x}_{ijk} \phi_i(X_1)\phi_j(X_2)\phi_k(X_3), \end{align} which clearly satisfies our interpolation requirement \begin{align} \bm{x}(\bm{X}_{ijk}) &= \sum_{i',j',k'=0}^p \bm{x}_{i'j'k'} \phi_{i'}(s_i)\phi_{j'}(s_j)\phi_{k'}(s_k) =\sum_{i',j',k'=0}^p \bm{x}_{i'j'k'} \delta_{ii'}\delta_{jj'}\delta_{kk'} = \bm{x}_{ijk}. \end{align} This allows us to easily compute $\bm{G}(\bm{X})$ at any point $\bm{X}$, which will involve the derivatives $\phi_i'(\xi)$ of the shape functions. To evaluate the one-dimensional integrals in (\ref{galerkinscheme2}), we use Gauss-Legendre integration of sufficiently high degree. 
For all our problems, a precision of $3p$ appears to be enough, so we use integration rules with $\lceil (3p+1)/2 \rceil$ integration points. The computation of the discretization (\ref{galerkinscheme2}) is remarkably simple compared to a nodal DG scheme, primarily because (a) The integrals are only one-dimensional, and (b) The numerical fluxes are only evaluated point-wise. We note that the left-hand side of (\ref{galerkinscheme2}), which contributes to the mass matrix $\bm{M}$, is constant regardless of solution component, line, and element (even if the actual mapped element is curved). Therefore it can be pre-computed and pre-factorized using a standard Cholesky method. Furthermore, many lines and components can be processed simultaneously, which might further increase the performance through the use of BLAS3-type cache-optimized linear algebra libraries. For the integral in the right-hand side of (\ref{galerkinscheme2}), the term $d\bm{v}/d\xi$ is again constant for all components, lines, and elements, so its discretization at the Gauss integration points can be pre-computed and combined with the inverted mass matrix and the Gauss integration weights $\bm{w}$. For non-linear problems, the only part that requires re-evaluation at each Gauss integration point is $\widetilde{\bm{f}}_1(\bm{u}_{jk}(\xi))$, although the deformation gradient $\bm{G}$ can be pre-computed if necessary. For the numerical fluxes, we pointed out above that (\ref{numflux1}) and (\ref{numflux2}) have exactly the same form as standard numerical flux functions. We pre-compute the outward normals $\bm{n}_i^+$ and $\bm{n}_i^-$, for $i=1,2,3$, at all boundary nodes. Note that our scheme only computes point-wise numerical fluxes, unlike the nodal DG method which involves integrals of the numerical fluxes. 
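The rule size $\lceil (3p+1)/2 \rceil$ follows from the standard fact that an $n$-point Gauss-Legendre rule integrates polynomials of degree $2n-1$ exactly, so this choice achieves precision $3p$. A quick sanity check:

```python
import math
import numpy as np
from numpy.polynomial.legendre import leggauss

# n-point Gauss-Legendre is exact for degree 2n-1, so n = ceil((3p+1)/2)
# integrates degree 3p exactly; verify on the monomial x^(3p) over [0, 1].
for p in range(1, 11):
    n = math.ceil((3 * p + 1) / 2)
    x, w = leggauss(n)
    x = 0.5 * (x + 1.0); w = 0.5 * w        # map the rule to [0, 1]
    d = 3 * p
    assert abs(w @ x**d - 1.0 / (d + 1)) < 1e-12   # int_0^1 x^d dx = 1/(d+1)
```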
Some existing numerical flux functions require the normal vector to be of unit length; in this case we normalize $\bar{\bm{n}}=\bm{n}/|\bm{n}|$ and use the fact that \begin{align} \widehat{\bm{F}\cdot\bm{n}} = |\bm{n}| \widehat{\bm{F}\cdot\bar{\bm{n}}}. \end{align} Finally, the multipliers $J_{ijk}$ are defined at the node points (not the Gauss integration points), and can also be pre-computed. \subsection{Stencil size and sparsity pattern} To illustrate the drastic reduction of the number of entries in the Jacobian matrices for the Line-DG method, consider the $(p+1)^3$ nodes in an (interior) element and its six neighboring elements. For a first-order operator, we note that a standard nodal DG formulation will in general produce full block matrices, that is, each degree of freedom will depend on all the other ones within the element. In addition, the face integrals will connect all nodes on an element face to all neighboring element face nodes. This gives $6(p+1)^2(p+1)^2=6(p+1)^4$ additional connections per element, or in average $6(p+1)^4/(p+1)^3=6(p+1)$ connections per degree of freedom. In total, the average number of connections is $(p+1)^3+6(p+1)$, which illustrates why matrix-based DG methods are considered memory intensive and expensive even at modest values of $p$. As a contrast, in our line-based method each node will only connect to other nodes within the same lines, and to only one node in each neighboring element, for a total of $(3p+1)+6=3p+7$ connectivities. This is similar to that of the DGSEM/SD methods, although with Gauss-Legendre solution points these schemes also connect entire lines of nodes in the neighboring element, giving a total of $(3p+1)+6(p+1)=9p+7$ connectivities. These numbers are tabulated for a range of degrees $p$ in three dimensions in table~\ref{costtab}. The sparsity patterns are illustrated in figure~\ref{fig1} for two-dimensional quadrilateral elements, for all three methods.
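The 3-D counts above can be reproduced directly from the closed-form expressions; this small sketch regenerates the three-dimensional rows of table~\ref{costtab}:

```python
# Per-node connectivities in 3-D for a first-order operator: the three
# coordinate lines through a node, plus one node (Line-DG) or a full line
# of p+1 nodes (DGSEM/SD) per neighbor face; nodal DG couples the whole
# element plus all face-neighbor nodes.
def line_dg(p):  return (3 * p + 1) + 6            # = 3p + 7
def dgsem_sd(p): return (3 * p + 1) + 6 * (p + 1)  # = 9p + 7
def nodal_dg(p): return (p + 1)**3 + 6 * (p + 1)

assert [line_dg(p) for p in (1, 3, 10)] == [10, 16, 37]
assert [dgsem_sd(p) for p in (1, 3, 10)] == [16, 34, 97]
assert [nodal_dg(p) for p in (1, 3, 10)] == [20, 88, 1397]
assert round(nodal_dg(3) / line_dg(3), 1) == 5.5   # 5.5x sparser at p = 3
```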
The connectivities are shown both by a nodal plot, with bold nodes corresponding to the dependencies of the single red node, and by sparsity plots of the Jacobian matrices. We note that in three dimensions, already for $p=3$ the Line-DG method is 5.5 times sparser than nodal DG, and for $p=10$ it is almost 40 times sparser. This reduction in stencil size translates into lower assembly times, but more importantly, for matrix-based solvers it means drastically lower storage requirements and faster matrix-vector products for iterative implicit solvers. \begin{table} \begin{center} \begin{tabular}{ll|rrrrrrrrrr} \hline & Polynomial order $p$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\ \hline \multirow{3}{*}{\rot{2-D}} & Line-DG connectivities & $7$ & $9$ & $11$ & $13$ & $15$ & $17$ & $19$ & $21$ & $23$ & $25$ \\ & DGSEM/SD connectivities & $11$ & $17$ & $23$ & $29$ & $35$ & $41$ & $47$ & $53$ & $59$ & $65$ \\ & Nodal DG connectivities & $8$ & $13$ & $20$ & $29$ & $40$ & $53$ & $68$ & $85$ & $104$ & $125$ \\ \hline \multirow{3}{*}{\rot{3-D}} & Line-DG connectivities & $10$ & $13$ & $16$ & $19$ & $22$ & $25$ & $28$ & $31$ & $34$ & $37$ \\ & DGSEM/SD connectivities & $16$ & $25$ & $34$ & $43$ & $52$ & $61$ & $70$ & $79$ & $88$ & $97$ \\ & Nodal DG connectivities & $20$ & $45$ & $88$ & $155$ & $252$ & $385$ & $560$ & $783$ & $1060$ & $1397$ \\ \hline \end{tabular} \end{center} \caption{The number of connectivities per node for a first-order operator and 2-D quadrilateral / 3-D hexahedral elements with the Line-DG, the DGSEM/SD, and the nodal DG methods.} \label{costtab} \end{table} \begin{figure} \begin{center} \includegraphics[width=.75\textwidth]{plt3schemesparsity.pdf} \end{center} \caption{The connectivities (blue circles) to a single node (red circle) for the Line-DG method, the DGSEM/SD method, and the nodal DG method (2-D quadrilateral elements, a first-order operator).} \label{fig1} \end{figure} \section{Second-order equations} We now consider 
the discretization of equations with second-order derivatives, in the form of a system of conservation laws \begin{align} \frac{\partial \bm{u}}{\partial t} + \nabla \cdot \bm{F}(\bm{u}, \nabla{\bm{u}}) = \bm{S}(\bm{u},\nabla\bm{u}). \label{system1} \end{align} We first use a standard technique in many finite difference and discontinuous Galerkin methods, and introduce the auxiliary variables $\bm{q}$ and rewrite as a split system \begin{align} \frac{\partial \bm{u}}{\partial t} + \nabla \cdot \bm{F}(\bm{u}, \bm{q}) &= \bm{S}(\bm{u},\bm{q}), \label{spliteq1} \\ \nabla \bm{u} &= \bm{q} . \label{spliteq2} \end{align} This essentially has the form of our first-order system (\ref{conslaw}), and we can apply Line-DG to each solution component as described above. More specifically, the change of variables from $\bm{x}$ to $\bm{X}$ transforms (\ref{spliteq1}), (\ref{spliteq2}) into \begin{align} J \frac{\partial\bm{u}}{\partial t} + \nabla_{\bm{X}} \cdot \widetilde{\bm{F}}(\bm{u},\bm{q}) &= J \bm{S}(\bm{u},\bm{q}), \label{splitdisc1} \\ \nabla_{\bm{X}} \cdot \widetilde{\bm{u}}(\bm{u}) &= J \bm{q}, \label{splitdisc2} \end{align} where $\widetilde{\bm{u}} = (\widetilde{\bm{u}}_1,\widetilde{\bm{u}}_2,\widetilde{\bm{u}}_3) = \bm{u} \otimes J \bm{G}^{-1}$. We discretize (\ref{splitdisc1}) as described before for the first-order case, treating $\bm{q}$ as additional solution components. For (\ref{splitdisc2}), we use a completely analogous procedure. We introduce the grid function $\bm{q}_{ijk}=\bm{q}(\bm{X}_{ijk})$, and along each curve $\bm{X}_{jk}(\xi)$ we define the polynomial $\bm{q}_{jk}(\xi)\in\mathcal{P}_p([0,1])^{m\times 3}$ that interpolates $\bm{q}_{ijk}$, $i=0,\ldots,p$. 
We find the numerical approximation $\bm{d}_{jk}(X_1)$ to $\partial \widetilde{\bm{u}}_1/\partial X_1$ by the Galerkin formulation: Find $\bm{d}_{jk}(\xi)\in\mathcal{P}_p([0,1])^{m\times 3}$ such that \begin{align} &\int_0^1 \bm{d}_{jk}(\xi) : \bm{\tau}(\xi)\,d\xi = \int_0^1 \frac{d \widetilde{\bm{u}}_1}{d\xi} : \bm{\tau}(\xi)\,d\xi \nonumber \\ &\qquad\qquad=\widehat{\widetilde{\bm{u}}}_1(\bm{u}_{jk}^+(1),\bm{q}_{jk}^+(1),\bm{u}_{jk}(1),\bm{q}_{jk}(1)) : \bm{\tau}(1) - \widehat{\widetilde{\bm{u}}}_1(\bm{u}_{jk}(0),\bm{q}_{jk}(0),\bm{u}_{jk}^-(0),\bm{q}_{jk}^-(0)) : \bm{\tau}(0) \nonumber \\ & \qquad \qquad\ \ \ \ - \int_0^1 \widetilde{\bm{u}}_1(\bm{u}_{jk}(\xi)) : \frac{d\bm{\tau}}{d\xi}\,d\xi \end{align} for all test functions $\bm{\tau}(\xi)\in\mathcal{P}_p([0,1])^{m\times 3}$. Note that we allow for the numerical flux $\widehat{\widetilde{\bm{u}}}_1$ to depend on both $\bm{u}$ and $\bm{q}$ on each side of the face, even though the actual flux $\widetilde{\bm{u}}_1$ is only a function of $\bm{u}$. Again, the numerical contravariant fluxes can be written in terms of the actual fluxes and the actual normal vector: \begin{align} \widehat{\widetilde{\bm{u}_1}} = \widehat{\widetilde{\bm{u}}\cdot\bm{N}_1^+} = \widehat{\bm{u}\otimes \bm{n}_1^+} = \widehat{\bm{u}}\otimes\bm{n}_1^+, \label{numfluxu1} \end{align} and similarly in the negative direction and along the other coordinate directions. It remains only to define the numerical fluxes $\widehat{\bm{F}\cdot\bm{n}}=\widehat{\bm{F}}\cdot\bm{n}$ and $\widehat{\bm{u}}$. We could in principle consider any scheme that can be written in this form \cite{arnold02unified}, such as the interior penalty method, the BR2 method, the LDG method \cite{cockburn98ldg}, and the CDG method \cite{peraire08cdg}. Here we use a scheme based on the LDG method, because it has a simple upwind/downwind character, it does not evaluate derivatives of grid functions at the boundaries, and it appears well-suited for our Line-DG discretization.
Furthermore, since our implicit solvers avoid the elimination of $\bm{q}$, the scheme has compact connectivity (it only connects neighboring elements). First, we separate the fluxes $\bm{F}$ into an inviscid and a viscous part: \begin{align} \bm{F}(\bm{u},\nabla \bm{u}) = \bm{F}^\mathrm{inv}(\bm{u}) + \bm{F}^\mathrm{vis}(\bm{u},\nabla \bm{u}). \end{align} This decomposition is clearly not unique, but it is understood that for many problems there is a natural separation into a convection-dominated inviscid component and a diffusion-dominated viscous component. This allows us to use standard approximate Riemann solvers for $\widehat{\bm{F}}^\mathrm{inv}$ as before, and we will now consider only the treatment of the viscous fluxes $\widehat{\bm{F}}^\mathrm{vis}$. For brevity, we assume below that $\bm{F}=\bm{F}^\mathrm{vis}$ and that $\bm{n}$ is a unit vector. We will define the fluxes in terms of a so-called switch function, which simply assigns a sign to each internal element face. For one-dimensional problems, the natural switch function is to set all these signs equal (either positive or negative), and we will mimic this for our Line-DG method by identifying globally connected lines in our hexahedral meshes. In our notation, instead of assigning switches to each face, we introduce $S^{\pm}_i \in \{-1,1\}$ for the switches at local coordinates $\xi=1$ and $\xi=0$ along direction $i=1,2,3$. There is some redundancy here, since we require that $S^+_i = -S^-_i$, and also that the switches assigned by two neighboring elements to their shared face have opposite signs. See figure~\ref{switchplt} for an example quadrilateral mesh and switch function. This was generated by a straightforward algorithm, where an arbitrary element face is chosen and assigned an arbitrary sign, which then defines the alternating pattern along a sequence of elements in both directions. This procedure is repeated until all faces have been processed.
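A minimal sketch of this line-identification and sign-assignment step, assuming the connectivity is supplied as a map from each element to its neighbor across its positive face in the given direction (a hypothetical input format; boundary faces map to `None`, and periodic lines are not handled):

```python
def build_lines(right_neighbor):
    """Chain elements into globally connected one-dimensional lines by
    following neighbors across the positive face in one direction.

    `right_neighbor` maps element -> neighbor across the positive face,
    or None at a domain boundary (assumed input format).
    """
    has_left = set(right_neighbor.values()) - {None}
    lines = []
    for start in right_neighbor:
        if start in has_left:
            continue                     # not the first element of its line
        line, e = [], start
        while e is not None:
            line.append(e)
            e = right_neighbor[e]
        lines.append(line)
    return lines

def assign_switches(lines, sign=+1):
    """Assign (S^-_i, S^+_i) per element so that S^+_i = -S^-_i and the
    switch direction is consistent along each global line."""
    return {e: (-sign, sign) for line in lines for e in line}
```

The arbitrary choice of `sign` per line corresponds to the arbitrary sign of the first face in the algorithm described above.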
With the switch function defined, we can formulate the LDG numerical fluxes for the second-order terms: \begin{align} \widehat{\bm{F}}(\bm{u},\bm{q},\bm{n}) &= \{\!\{ \bm{F}(\bm{u},\bm{q}) \}\!\} + C_{11} [\![ \bm{u} \otimes \bm{n} ]\!] + \bm{C}_{12} \otimes [\![ \bm{F}(\bm{u},\bm{q})\cdot\bm{n} ]\!], \label{ldgflux1} \\ \widehat{\bm{u}}(\bm{u},\bm{q},\bm{n}) &= \{\!\{ \bm{u} \}\!\} -\bm{C}_{12}\cdot [\![ \bm{u}\otimes\bm{n} ]\!] + C_{22}[\![\bm{F}(\bm{u},\bm{q})\cdot\bm{n}]\!], \label{ldgflux2} \end{align} for a solution $\bm{u},\bm{q}$ and a face normal vector $\bm{n}$. Here, $\{\!\{ \cdot \}\!\}$ denotes the mean value and $[\![ \cdot ]\!]$ denotes the jump over a face: \begin{align} \{\!\{ \bm{v} \}\!\} \equiv \frac12 (\bm{v}^++\bm{v}^-),\qquad [\![ \bm{v} \odot \bm{n} ]\!] \equiv \bm{v}^+\odot\bm{n}-\bm{v}^-\odot\bm{n} \end{align} where $\bm{v}^+$ is the quantity $\bm{v}$ on the positive side of the face (according to the normal $\bm{n}$), $\bm{v}^-$ is $\bm{v}$ on the negative side, and $\odot$ is any multiplication operator. The coefficients $C_{11},\bm{C}_{12},C_{22}$ give the scheme different properties, and we note in particular that: \begin{itemize} \item If $C_{22}=0$, the fluxes (\ref{ldgflux2}) do not depend on $\bm{q}$, which means the discretized equation (\ref{splitdisc2}) immediately gives $\bm{q}$ within each element (no coupling to equation (\ref{splitdisc1})). \item For the particular choice $\bm{C}_{12} = \bm{n} S^\pm_i /2$, where $S^\pm_i$ is the switch for the considered face and direction, the fluxes can be written in the form: \begin{align} \widehat{\bm{F}}(\bm{u}_R,\bm{q}_R,\bm{u}_L,\bm{q}_L,\bm{n}) &= C_{11} [\![ \bm{u} \otimes \bm{n} ]\!] + \begin{cases} \bm{F}(\bm{u}_R,\bm{q}_R) & \text{if } S^\pm_i=+1 \\ \bm{F}(\bm{u}_L,\bm{q}_L) & \text{if } S^\pm_i=-1 \\ \end{cases} \\ \widehat{\bm{u}}(\bm{u}_R,\bm{q}_R,\bm{u}_L,\bm{q}_L,\bm{n}) &= C_{22}[\![\bm{F}(\bm{u},\bm{q})\cdot\bm{n}]\!]
+ \begin{cases} \bm{u}_L & \text{if } S^\pm_i=+1 \\ \bm{u}_R & \text{if } S^\pm_i=-1 \\ \end{cases} \end{align} where it is clear how the method is upwinding/downwinding the two numerical fluxes, depending on the switch $S^\pm_i$. The constants $C_{11}$ and $C_{22}$ are additional stabilization parameters, which can be seen as penalties on the jumps in the solution and in the normal fluxes, respectively. In many of our problems we set both of these coefficients to zero (the so-called minimal dissipation LDG method \cite{cockburn07mdldg}). This makes the scheme particularly simple, and also further reduces the number of connectivities in the Jacobian matrices. \end{itemize} \begin{figure} \begin{center} \includegraphics[width=.5\textwidth]{switchplt.pdf} \end{center} \caption{A sample quadrilateral mesh and switch function $S^\pm_i$ for $i=1,2$ in each element. Note that the switches have consistent directions along each one-dimensional global curve (dashed blue lines).} \label{switchplt} \end{figure} At the boundaries we impose conditions by appropriate choices of numerical fluxes. For example, at a Dirichlet-type boundary with a prescribed solution $\bm{u}=\bm{g}_D$, we set: \begin{align} \widehat{\bm{F}}(\bm{u},\bm{q},\bm{n}) &= \bm{F}(\bm{u},\bm{q}) + C_{11} ( \bm{u} - \bm{g}_D ) \otimes \bm{n} \label{dirichlet1} \\ \widehat{\bm{u}}(\bm{u},\bm{q},\bm{n}) &= \bm{g}_D, \label{dirichlet2} \end{align} where $C_{11}$ in (\ref{dirichlet1}) must be positive, even though we often choose $C_{11}=0$ for the interior fluxes. At a Neumann-type boundary with prescribed normal fluxes $\bm{F}\cdot\bm{n}=\bm{g}_N$, we set: \begin{align} \widehat{\bm{F}}(\bm{u},\bm{q},\bm{n}) &= \bm{g}_N \otimes \bm{n} \label{neumann1} \\ \widehat{\bm{u}}(\bm{u},\bm{q},\bm{n}) &= \bm{u} -C_{22} (\bm{F}(\bm{u},\bm{q})\cdot\bm{n}-\bm{g}_N). \label{neumann2} \end{align} For mixed conditions we apply combinations of these fluxes for the different components of $\bm{u}$ and $\bm{q}$. 
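In one space dimension (unit normal $n=+1$ pointing from the left state to the right state), the interior-face fluxes above, in their upwind/downwind form with $\bm{C}_{12}=nS/2$, reduce to a few lines. A sketch, with a user-supplied viscous flux function `F(u, q)`:

```python
def ldg_fluxes(uL, qL, uR, qR, S, F, C11=0.0, C22=0.0):
    """LDG numerical fluxes (F_hat, u_hat) at a 1-D interior face.

    S in {-1, +1} is the switch for this face; the two fluxes are taken
    from opposite sides. C11 and C22 are the jump penalties, both zero
    in the minimal-dissipation variant.
    """
    jump_u = uR - uL                     # [[u n]] with n = +1
    jump_F = F(uR, qR) - F(uL, qL)       # [[F(u, q) n]]
    F_hat = C11 * jump_u + (F(uR, qR) if S == +1 else F(uL, qL))
    u_hat = C22 * jump_F + (uL if S == +1 else uR)
    return F_hat, u_hat
```

Note the opposite-sided choices: when `F_hat` is taken from the right, `u_hat` is taken from the left, and vice versa.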
With the fluxes defined, we can calculate $\bm{d}_{ijk}=\bm{d}^{(1)}_{ijk}$ for all $j,k=0,\ldots,p$, and similarly for $\bm{d}^{(2)}_{ijk}$ and $\bm{d}^{(3)}_{ijk}$ along the other two coordinate directions. These are essentially numerical approximations to the gradient $\bm{q}=\nabla\bm{u}$, but again we point out that they might depend implicitly on $\bm{q}$ through the numerical fluxes (if $C_{22}\ne 0$). Our final semi-discrete formulation for (\ref{splitdisc1}), (\ref{splitdisc2}) takes the form \begin{align} \frac{d\bm{u}_{ijk}}{dt} + \frac{1}{J_{ijk}}\sum_{n=1}^3 \bm{r}_{ijk}^{(n)} &= \bm{S}(\bm{u}_{ijk},\bm{q}_{ijk}), \label{semidisc2-1} \\ \frac{1}{J_{ijk}}\sum_{n=1}^3 \bm{d}_{ijk}^{(n)} &= \bm{q}_{ijk}. \label{semidisc2-2} \end{align} \section{Temporal discretization and nonlinear solvers} \subsection{Method of lines and time integration} We use various techniques to solve the semi-discrete system of equations (\ref{semidisc2-1}), (\ref{semidisc2-2}), either by integrating in time or by solving for steady-state solutions. First, we define the vectors $\bm{U},\bm{Q}$ containing all solution components $\bm{u}_{ijk},\bm{q}_{ijk}$, respectively, and write the system as \begin{align} \frac{d\bm{U}}{dt} &= \bm{R}(\bm{U},\bm{Q}), \label{semidiscvector1} \\ \bm{Q} &= \bm{D}(\bm{U},\bm{Q}). \label{semidiscvector2} \end{align} This split form can be useful for implicit time-stepping or steady-state solutions, in particular if the coefficient $C_{22}\ne 0$. With a standard Newton's method, this requires the solution of linear systems involving the matrix \begin{align} \bm{K} = \begin{bmatrix} \frac{\partial \bm{R}}{\partial \bm{U}} & \frac{\partial \bm{R}}{\partial \bm{Q}} \\ \frac{\partial \bm{D}}{\partial \bm{U}} & \frac{\partial \bm{D}}{\partial \bm{Q}} \end{bmatrix} \equiv \begin{bmatrix} \bm{K}_{11} & \bm{K}_{12} \\ \bm{K}_{21} & \bm{K}_{22} \end{bmatrix}.
\end{align} This system solves for both $\bm{U}$ and $\bm{Q}$ but it retains the high level of sparsity of the method. Also, it allows for non-zero $\bm{K}_{22}$ which can be used to give the scheme several attractive properties \cite{cockburn08hybrid}. In our examples, we solve these equations using a standard sparse direct solver \cite{davis06sparse}. However, in most of our problems we set $C_{22}=0$ to allow for elimination of the discrete derivatives $\bm{Q}$. Then $\bm{D}(\bm{U},\bm{Q})=\bm{D}(\bm{U})$, and substituting (\ref{semidiscvector2}) into (\ref{semidiscvector1}) leads to a reduced system \begin{align} \frac{d\bm{U}}{dt} &= \bm{R}(\bm{U},\bm{D}(\bm{U})) \equiv \bm{F}(\bm{U}). \label{semidiscprimal} \end{align} This is clearly the preferred choice for explicit time-stepping, since it is a regular system of ODEs. In our examples we use a standard fourth-order explicit Runge-Kutta method. We also use this form for implicit time-stepping using Diagonally Implicit Runge-Kutta (DIRK) schemes \cite{alexander77dirk}. In particular, we use the following L-stable, three-stage, third-order accurate method \cite{alexander77dirk}: \begin{align} \bm{K}_i &= \bm{F}\bigg(\bm{U}_n+\Delta t\sum_{j=1}^s a_{ij} \bm{K}_j\bigg),\quad i=1,\ldots,s \label{dirk1} \\ \bm{U}_{n+1} &= \bm{U}_n + \Delta t\sum_{j=1}^s b_j \bm{K}_j, \label{dirk2} \end{align} with $s=3$ and the coefficients given by the Runge-Kutta tableaux below. 
\begin{center} \begin{minipage}{.12\textwidth} \vspace{11mm} \begin{tabular}{c|c} $c$ & $A$ \\ \hline & $b^T$ \end{tabular} $=$ \end{minipage} \begin{minipage}{.22\textwidth} \vspace{0mm} \renewcommand{\arraystretch}{1.1} \begin{tabular}{c|ccc} $\alpha$ & $\alpha$ & 0 & \ \ 0 \\ $\tau_2$ & $\tau_2-\alpha$ & $\alpha$ & \ \ 0 \\ \rule[-5pt]{0pt}{0pt} $1$ & $b_1$ & $b_2$ & \ \ $\alpha$ \\ \hline \rule[12pt]{0pt}{0pt} & $b_1$ & $b_2$ & $\alpha$ \end{tabular} \end{minipage} \hspace{10mm} \begin{minipage}{.25\textwidth} \vspace{-4mm} \begin{align*} \alpha &= 0.435866521508459\\ \tau_2 &= (1+\alpha)/2\\ b_1 &= -(6\alpha^2-16\alpha+1)/4 \\ b_2 &= (6\alpha^2-20\alpha+5)/4 \end{align*} \end{minipage} \end{center} We also use implicit time-stepping for computing steady-state solutions, by a sequence of increasing timesteps $\Delta t$ and a final step without the time derivatives. Since this does not require time-accuracy, we use a standard backward Euler scheme. We solve the nonlinear systems (\ref{dirk1}) using Newton's method with preconditioned iterative solvers, as described below. \subsection{Newton-Krylov solvers} \label{sec:newtonkrylov} When Newton's method is applied to the reduced problem (\ref{semidiscprimal}), it requires the solution of systems of equations of the form \begin{align} (\bm{I}-\alpha \Delta t \bm{A}) \Delta \bm{U}^{(i)} = \Delta t \bm{R} (\bm{U}^{(i)},\bm{D}(\bm{U}^{(i)})) \label{dirkeqn} \end{align} where $\Delta t$ is the timestep and \begin{align} \bm{A} = \frac{d \bm{R}}{d\bm{U}} = \frac{\partial \bm{R}}{\partial \bm{U}} + \frac{\partial \bm{R}}{\partial \bm{Q}} \frac{\partial \bm{D}}{\partial \bm{U}} = \bm{K}_{11} + \bm{K}_{12}\bm{K}_{21}. \label{fulldiscprimal} \end{align} Forming this matrix $\bm{A}$ has the drawback that for second-order systems, the product $\bm{K}_{12}\bm{K}_{21}$ is in general much less sparse than the individual matrices $\bm{K}_{11},\bm{K}_{12},\bm{K}_{21}$. 
This is expected due to the repeated differentiation along two different directions, but it requires special solvers to avoid explicitly forming the denser matrix $\bm{A}$. This phenomenon is not unique to our method; in fact, many other numerical schemes, including finite difference methods and nodal DG methods, suffer from a similar loss of sparsity for second-order systems. In this work, we use a simple approach to solve the system (\ref{dirkeqn}) without forming the full Jacobian matrix. In a preconditioned Krylov subspace method, we need to perform two operations: multiplication of a vector $\bm{p}$ by the matrix $(\bm{I}-\alpha \Delta t \bm{A})$, and approximate solution of $(\bm{I}-\alpha \Delta t \bm{A})\bm{x}=\bm{b}$ for the preconditioning. The matrix-vector product can be computed by keeping the individual matrices in separate form and nesting the products: \begin{align} (\bm{I}-\alpha \Delta t \bm{A})\bm{p} = \bm{p}-\alpha\Delta t \left(\bm{K}_{11}\bm{p} + \bm{K}_{12} (\bm{K}_{21} \bm{p}) \right). \label{matvecsplit} \end{align} This avoids explicitly forming the matrix $\bm{A}$, and the cost per matrix-vector product is proportional to the number of entries in the matrices $\bm{K}_{11},\bm{K}_{12},\bm{K}_{21}$. For preconditioning, we use a sparse block-Jacobi approach which forms an approximate matrix $\widetilde{\bm{A}}$ that ignores all fill from the product $\bm{K}_{12}\bm{K}_{21}$ and any inter-element connectivities. In other words, $\widetilde{\bm{A}}$ is equal to $\bm{A}$ only at the block-diagonal Line-DG sparsity pattern, and zero everywhere else. This simple preconditioner requires very little storage (even less than a first-order discretization or the matrix $\bm{K}_{11}$) and it performs well for time-accurate simulations with small timesteps. It can certainly be improved upon, for example by allowing higher levels of fill or by using block-ILU and $h/p$-multigrid schemes \cite{persson08newtongmres}.
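The nested product (\ref{matvecsplit}) maps directly onto a matrix-free Krylov solver. A minimal SciPy sketch, with random sparse matrices standing in for $\bm{K}_{11},\bm{K}_{12},\bm{K}_{21}$ (the actual matrices are problem data and are not constructed here):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# random sparse stand-ins for K11, K12, K21 (assumed data, not Line-DG output)
n, alpha, dt = 200, 0.435866521508459, 1e-3
K11 = sp.random(n, n, density=0.05, format="csr", random_state=0)
K12 = sp.random(n, n, density=0.05, format="csr", random_state=1)
K21 = sp.random(n, n, density=0.05, format="csr", random_state=2)

def matvec(p):
    # (I - alpha*dt*A) p  without ever forming A = K11 + K12 @ K21
    return p - alpha * dt * (K11 @ p + K12 @ (K21 @ p))

A_op = spla.LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.random.default_rng(3).standard_normal(n)
x, info = spla.gmres(A_op, b)   # info == 0 on convergence
```

The denser product $\bm{K}_{12}\bm{K}_{21}$ never appears; each Krylov iteration costs three sparse matrix-vector products.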
When solving the linear systems involving the preconditioning matrix $(\bm{I}-\alpha \Delta t \widetilde{\bm{A}})$, we use a sparse direct LU-factorization with fill-reducing ordering for each block \cite{davis06sparse}. This results in some additional fill, but in our 2-D examples we find that even for $p$ as high as $7$ the number of entries in $\widetilde{\bm{A}}$ is about the same as in the original sparse matrices $\bm{K}_{11}$, $\bm{K}_{12}$, and $\bm{K}_{21}$. For 3-D problems, it is more critical to retain the line-based sparsity in the preconditioner, and a number of alternatives should be applicable such as low-order approximations \cite{orszag80spectral}, ADI-iterations \cite{canuto90adi}, and subiterations \cite{rumsey95subiterations}. In our implementation, we store all matrices in a general purpose compressed column storage format \cite{davis06sparse}. We also point out that the matrix $\bm{K}_{21}$ can be handled very efficiently, since it is a discrete gradient operator and therefore (a) linear, (b) constant in time, and (c) equal for all solution components (except possibly at the boundaries for certain boundary conditions). \subsection{Re-using Jacobian matrices} \label{sec:reuse} In many problems it is more computationally expensive to form the matrix $\bm{I}-\alpha\Delta t \bm{A}$ than to solve the linear system (\ref{dirkeqn}). This is especially true for time-accurate integration where the timesteps $\Delta t$ are relatively small (but still large enough to motivate the use of implicit solvers). We therefore use a standard technique for Newton's method that attempts to re-use old Jacobian matrices for many iterations, until the convergence is too slow as determined by the number of iterations exceeding a threshold number. In our numerical experiments this is sufficient to allow for a reuse of the Jacobian for a large number of Newton steps, DIRK stages, and timesteps at a time. 
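The re-use strategy amounts to a chord-type Newton iteration that refreshes the Jacobian only when convergence stalls. A simplified stand-alone sketch, with hypothetical `residual` and `jacobian` callbacks (the actual solver re-uses the factorized preconditioner rather than a dense matrix):

```python
import numpy as np

def newton_reuse(residual, jacobian, u, tol=1e-10, max_iters=15, max_total=200):
    """Newton's method that keeps an old Jacobian until convergence is
    too slow (iteration count exceeds max_iters), then recomputes it."""
    J = jacobian(u)                    # computed (and factorized) rarely
    iters = 0
    for _ in range(max_total):         # safety cap for this sketch
        r = residual(u)
        if np.linalg.norm(r) <= tol:
            return u
        if iters >= max_iters:         # too slow: refresh the Jacobian
            J, iters = jacobian(u), 0
        u = u - np.linalg.solve(J, r)
        iters += 1
    return u
```

With a frozen Jacobian the convergence is only linear, but each step is cheap; the refresh restores fast convergence when needed.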
The linear systems are solved using a preconditioned GMRES method \cite{saad86gmres}, with the low-cost sparse block-Jacobi preconditioner $\widetilde{\bm{A}}$ described above. These equations can be solved with relatively low accuracy using a small number of GMRES iterations, since we are using old Jacobian matrices which already limit the potential improvement from each Newton step. The tolerance in the Newton solver is set well below the estimated truncation error of the time integrator. \section{Results} \subsection{Poisson's equation} Our first test is Poisson's equation \begin{align} -\nabla \cdot (\nabla u) = f(x,y) \label{poisson} \end{align} on the unit square domain $\Omega = [0,1]^2$. Dirichlet conditions are imposed at all the boundaries ($\partial \Omega_D = \partial \Omega$) and we choose the analytical solution \begin{align} u(x,y) = \exp \left[ \alpha \sin (ax+by) + \beta \cos (cx+dy) \right] \label{solution} \end{align} with numerical parameters $\alpha=0.1, \beta=0.3, a=5.1, b=-6.2, c=4.3, d=3.4$. We then solve (\ref{poisson}) with Dirichlet boundary conditions $g_D(x,y)=u(x,y)|_{\partial \Omega_D}$. The source term, $f(x,y)$, is obtained by analytical differentiation of (\ref{solution}). We discretize $\Omega$ using an unstructured mesh of quadrilateral elements, see figure~\ref{poitest} (left). We solve the split system (\ref{semidiscvector1}), (\ref{semidiscvector2}) using a direct sparse solver, for polynomial degrees $p=1,\ldots,7$. We consider two sets of parameters: the minimal dissipation scheme with $C_{11}=C_{22}=0$, which has the benefit that it allows for elimination of the gradients $\bm{q}$, and the slightly over-stabilized scheme $C_{11}=C_{22}=1/20$, which may provide a higher order of convergence for $\bm{q}$ \cite{cockburn08hybrid}.
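The source term can be generated symbolically rather than by hand; a sketch using SymPy (any computer algebra system would do):

```python
import sympy as sym

x, y = sym.symbols("x y")
alpha, beta, a, b, c, d = 0.1, 0.3, 5.1, -6.2, 4.3, 3.4

# exact solution and the manufactured source f = -div(grad u)
u = sym.exp(alpha * sym.sin(a * x + b * y) + beta * sym.cos(c * x + d * y))
f = -(sym.diff(u, x, 2) + sym.diff(u, y, 2))

# fast numerical callbacks for the solver and the error computation
u_func = sym.lambdify((x, y), u, "numpy")
f_func = sym.lambdify((x, y), f, "numpy")
```

The Dirichlet data $g_D$ is simply `u_func` evaluated on the boundary.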
\begin{figure}[t] \begin{center} \begin{minipage}{.3\textwidth} \begin{center} \includegraphics[width=.8\textwidth]{poiconv_msh1.pdf} \\ Coarsest mesh, $p=5$ \end{center} \end{minipage} \begin{minipage}{.3\textwidth} \begin{center} \includegraphics[width=.8\textwidth]{poiconv_msh2.pdf} \\ One refinement, $p=5$ \end{center} \end{minipage} \begin{minipage}{.3\textwidth} \begin{center} \includegraphics[width=.8\textwidth]{poiconv_sol.png} \\ Solution $u(x,y)$ \end{center} \end{minipage} \end{center} \caption{The Poisson test problem (\ref{poisson}). The left figure shows the coarse unstructured quadrilateral mesh, which is uniformly refined repeatedly, and the solution nodes for $p=5$. The right figure shows the solution as colored contours.} \label{poitest} \end{figure} \begin{table}[h] \begin{footnotesize} \begin{center} \textbf{Error in solution $|u_h-u|_\infty$, $C_{11}=C_{22}=0$, $\bm{C}_{12}=\bm{n}S_i^\pm/2$} \\ \begin{tabular}{r|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r} & \multicolumn{2}{c}{$p=1$} & \multicolumn{2}{c}{$p=2$} & \multicolumn{2}{c}{$p=3$} & \multicolumn{2}{c}{$p=4$} & \multicolumn{2}{c}{$p=5$} & \multicolumn{2}{c}{$p=6$} & \multicolumn{2}{c}{$p=7$} \\ $n$ & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate \\ \hline 1 & $4.6 \cdot 10^{-2}$ & & $3.7 \cdot 10^{-3}$ & & $4.1 \cdot 10^{-4}$ & & $1.0 \cdot 10^{-4}$ & & $8.9 \cdot 10^{-6}$ & & $2.1 \cdot 10^{-6}$ & & $3.7 \cdot 10^{-7}$ & \\ 2 & $1.1 \cdot 10^{-2}$ & 2.0 & $4.3 \cdot 10^{-4}$ & 3.1 & $2.8 \cdot 10^{-5}$ & 3.9 & $2.5 \cdot 10^{-6}$ & 5.4 & $1.3 \cdot 10^{-7}$ & 6.0 & $1.5 \cdot 10^{-8}$ & 7.1 & $9.4 \cdot 10^{-10}$ & 8.6 \\ 4 & $2.7 \cdot 10^{-3}$ & 2.1 & $5.1 \cdot 10^{-5}$ & 3.1 & $1.7 \cdot 10^{-6}$ & 4.0 & $6.0 \cdot 10^{-8}$ & 5.4 & $2.1 \cdot 10^{-9}$ & 6.0 & $8.1 \cdot 10^{-11}$ & 7.5 & $2.7 \cdot 10^{-12}$ & 8.4 \\ 8 & $6.6 \cdot 10^{-4}$ & 2.0 & $6.2 \cdot 10^{-6}$ & 3.0 
& $1.0 \cdot 10^{-7}$ & 4.0 & $1.8 \cdot 10^{-9}$ & 5.1 & $3.0 \cdot 10^{-11}$ & 6.1 & $5.4 \cdot 10^{-13}$ & 7.2 & $9.1 \cdot 10^{-14}$ & * \\ \end{tabular} \\ \ \\ \ \\ \textbf{Error in gradient $|\bm{q}_h-\nabla u|_\infty$, $C_{11}=C_{22}=0$, $\bm{C}_{12}=\bm{n}S_i^\pm/2$} \\ \begin{tabular}{r|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r} & \multicolumn{2}{c}{$p=1$} & \multicolumn{2}{c}{$p=2$} & \multicolumn{2}{c}{$p=3$} & \multicolumn{2}{c}{$p=4$} & \multicolumn{2}{c}{$p=5$} & \multicolumn{2}{c}{$p=6$} & \multicolumn{2}{c}{$p=7$} \\ $n$ & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate \\ \hline 1 & $4.6 \cdot 10^{-2}$ & & $3.7 \cdot 10^{-3}$ & & $4.1 \cdot 10^{-4}$ & & $1.0 \cdot 10^{-4}$ & & $8.9 \cdot 10^{-6}$ & & $2.1 \cdot 10^{-6}$ & & $3.7 \cdot 10^{-7}$ & \\ 2 & $1.1 \cdot 10^{-2}$ & 1.1 & $4.3 \cdot 10^{-4}$ & 2.2 & $2.8 \cdot 10^{-5}$ & 3.2 & $2.5 \cdot 10^{-6}$ & 4.4 & $1.3 \cdot 10^{-7}$ & 5.0 & $1.5 \cdot 10^{-8}$ & 6.4 & $9.4 \cdot 10^{-10}$ & 7.3 \\ 4 & $2.7 \cdot 10^{-3}$ & 1.0 & $5.1 \cdot 10^{-5}$ & 2.1 & $1.7 \cdot 10^{-6}$ & 3.0 & $6.0 \cdot 10^{-8}$ & 4.2 & $2.1 \cdot 10^{-9}$ & 5.0 & $8.1 \cdot 10^{-11}$ & 6.4 & $2.7 \cdot 10^{-12}$ & 6.9 \\ 8 & $6.6 \cdot 10^{-4}$ & 1.0 & $6.2 \cdot 10^{-6}$ & 2.0 & $1.0 \cdot 10^{-7}$ & 2.9 & $1.8 \cdot 10^{-9}$ & 4.0 & $3.0 \cdot 10^{-11}$ & 5.0 & $5.4 \cdot 10^{-13}$ & * & $9.1 \cdot 10^{-14}$ & * \\ \end{tabular} \\ \ \\ \ \\ \textbf{Error in solution $|u_h-u|_\infty$, $C_{11}=C_{22}=1/20$, $\bm{C}_{12}=\bm{n}S_i^\pm/2$} \\ \begin{tabular}{r|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r} & \multicolumn{2}{c}{$p=1$} & \multicolumn{2}{c}{$p=2$} & \multicolumn{2}{c}{$p=3$} & \multicolumn{2}{c}{$p=4$} & \multicolumn{2}{c}{$p=5$} & \multicolumn{2}{c}{$p=6$} & \multicolumn{2}{c}{$p=7$} \\ $n$ & Error & Rate & Error & Rate & Error & Rate & Error & Rate 
& Error & Rate & Error & Rate & Error & Rate \\ \hline 1 & $4.5 \cdot 10^{-2}$ & & $3.9 \cdot 10^{-3}$ & & $4.9 \cdot 10^{-4}$ & & $1.2 \cdot 10^{-4}$ & & $9.3 \cdot 10^{-6}$ & & $2.5 \cdot 10^{-6}$ & & $3.8 \cdot 10^{-7}$ & \\ 2 & $1.0 \cdot 10^{-2}$ & 2.1 & $4.9 \cdot 10^{-4}$ & 3.0 & $3.7 \cdot 10^{-5}$ & 3.7 & $3.2 \cdot 10^{-6}$ & 5.2 & $1.8 \cdot 10^{-7}$ & 5.7 & $2.2 \cdot 10^{-8}$ & 6.9 & $1.1 \cdot 10^{-9}$ & 8.4 \\ 4 & $2.4 \cdot 10^{-3}$ & 2.1 & $5.8 \cdot 10^{-5}$ & 3.1 & $2.3 \cdot 10^{-6}$ & 4.0 & $7.7 \cdot 10^{-8}$ & 5.4 & $3.4 \cdot 10^{-9}$ & 5.7 & $1.2 \cdot 10^{-10}$ & 7.4 & $4.5 \cdot 10^{-12}$ & 8.0 \\ 8 & $5.9 \cdot 10^{-4}$ & 2.0 & $7.2 \cdot 10^{-6}$ & 3.0 & $1.3 \cdot 10^{-7}$ & 4.1 & $2.2 \cdot 10^{-9}$ & 5.1 & $4.5 \cdot 10^{-11}$ & 6.2 & $8.2 \cdot 10^{-13}$ & 7.2 & $1.4 \cdot 10^{-13}$ & * \\ \end{tabular} \\ \ \\ \ \\ \textbf{Error in gradient $|\bm{q}_h-\nabla u|_\infty$, $C_{11}=C_{22}=1/20$, $\bm{C}_{12}=\bm{n}S_i^\pm/2$} \\ \begin{tabular}{r|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r@{\ \ }|l@{\ }r} & \multicolumn{2}{c}{$p=1$} & \multicolumn{2}{c}{$p=2$} & \multicolumn{2}{c}{$p=3$} & \multicolumn{2}{c}{$p=4$} & \multicolumn{2}{c}{$p=5$} & \multicolumn{2}{c}{$p=6$} & \multicolumn{2}{c}{$p=7$} \\ $n$ & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate & Error & Rate \\ \hline 1 & $4.5 \cdot 10^{-2}$ & & $3.9 \cdot 10^{-3}$ & & $4.9 \cdot 10^{-4}$ & & $1.2 \cdot 10^{-4}$ & & $9.3 \cdot 10^{-6}$ & & $2.5 \cdot 10^{-6}$ & & $3.8 \cdot 10^{-7}$ & \\ 2 & $1.0 \cdot 10^{-2}$ & 1.8 & $4.9 \cdot 10^{-4}$ & 2.5 & $3.7 \cdot 10^{-5}$ & 3.7 & $3.2 \cdot 10^{-6}$ & 4.6 & $1.8 \cdot 10^{-7}$ & 5.6 & $2.2 \cdot 10^{-8}$ & 6.6 & $1.1 \cdot 10^{-9}$ & 7.7 \\ 4 & $2.4 \cdot 10^{-3}$ & 1.7 & $5.8 \cdot 10^{-5}$ & 2.5 & $2.3 \cdot 10^{-6}$ & 3.7 & $7.7 \cdot 10^{-8}$ & 4.6 & $3.4 \cdot 10^{-9}$ & 5.6 & $1.2 \cdot 10^{-10}$ & 6.6 & $4.5 \cdot 10^{-12}$ & 7.6 \\ 8 & $5.9 \cdot 
10^{-4}$ & 1.8 & $7.2 \cdot 10^{-6}$ & 2.6 & $1.3 \cdot 10^{-7}$ & 3.7 & $2.2 \cdot 10^{-9}$ & 4.5 & $4.5 \cdot 10^{-11}$ & 5.7 & $8.2 \cdot 10^{-13}$ & * & $1.4 \cdot 10^{-13}$ & * \\ \end{tabular} \end{center} \end{footnotesize} \caption{Convergence of $u$ and $\bm{q}$ for the Poisson problem, with $C_{12} = \bm{n}S^\pm_i /2$ and $C_{11}=C_{22}=0$ (top two tables), $C_{11}=C_{22}=1/20$ (bottom two tables). For the first case we observe approximate rates of $p+1$ for $u$ and $p$ for $\bm{q}$, while the nonzero $C_{11},C_{22}$ case appears to give a significantly higher rate for $\bm{q}$. A star symbol (*) indicates that the error is dominated by floating point rounding errors rather than the truncation error of the scheme.} \label{poitab1} \end{table} The resulting infinity norm errors and rates of convergence are shown in table~\ref{poitab1}, for both the solution $u$ and the gradients $\bm{q}$ and the two parameter cases. For the minimal dissipation scheme (top two tables), we observe the expected orders of convergence $p+1$ and $p$ for $u$ and $\bm{q}$, respectively. For the stabilized scheme (bottom two tables), we obtain a somewhat higher order for the $\bm{q}$ variables, which could be used as part of a postprocessing step to further increase the order of convergence for the solution $u$ \cite{peraire09hybrid}. 
\subsection{Euler vortex} Next we consider the compressible Euler and Navier-Stokes equations, which we write in the form: \begin{align} \frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_i} (\rho u_i) &= 0, \label{ns1} \\ \frac{\partial}{\partial t} (\rho u_i) + \frac{\partial}{\partial x_j} (\rho u_i u_j + p\,\delta_{ij}) &= +\frac{\partial \tau_{ij}}{\partial x_j} \quad\text{for }i=1,2,3, \label{ns2} \\ \frac{\partial}{\partial t} (\rho E) + \frac{\partial}{\partial x_j} \left(u_j(\rho E+p)\right) &= -\frac{\partial q_j}{\partial x_j} +\frac{\partial}{\partial x_j}(u_i\tau_{ij}), \label{ns3} \end{align} where $\rho$ is the fluid density, $u_1,u_2,u_3$ are the velocity components, and $E$ is the total energy. The viscous stress tensor and heat flux are given by \begin{align} \tau_{ij} = \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} -\frac23 \frac{\partial u_k}{\partial x_k} \delta_{ij} \right) \qquad \text{ and } \qquad q_j = -\frac{\mu}{\mathrm{Pr}} \frac{\partial}{\partial x_j} \left( E+\frac{p}{\rho} -\frac12 u_k u_k \right). \end{align} Here, $\mu$ is the viscosity coefficient and $\mathrm{Pr}=0.72$ is the Prandtl number, which we assume to be constant. For an ideal gas, the pressure $p$ has the form \begin{align} p=(\gamma-1)\rho \left( E - \frac12 u_k u_k\right), \end{align} where $\gamma$ is the adiabatic gas constant. Our first model problem is the inviscid flow of a compressible vortex in a rectangular domain \cite{erlebacher97vortex}. The vortex is initially centered at $(x_0,y_0)$ and is moving with the free-stream at an angle $\theta$ with respect to the $x$-axis.
The analytic solution at $(x,y,t)$ is given by \begin{align} u &= u_\infty \left(\cos\theta - \frac{\epsilon ((y-y_0)-\bar{v}t)}{2\pi r_c} \exp(f/2) \right), & \rho &= \rho_\infty \left(1 - \frac{\epsilon^2(\gamma-1)M_\infty^2}{8\pi^2} \exp (f) \right)^\frac{1}{\gamma-1}, \\ v &= u_\infty \left(\sin\theta + \frac{\epsilon ((x-x_0)-\bar{u}t)}{2\pi r_c} \exp (f/2) \right), & p &= p_\infty \left(1 - \frac{\epsilon^2(\gamma-1)M_\infty^2}{8\pi^2} \exp (f) \right)^\frac{\gamma}{\gamma-1}, \end{align} where $f(x,y,t) = (1-((x-x_0)-\bar{u}t)^2-((y-y_0)-\bar{v}t)^2)/r_c^2$, $M_\infty$ is the Mach number, $\gamma=c_p/c_v=1.4$, and $u_\infty$, $p_\infty$, $\rho_\infty$ are the free-stream velocity, pressure, and density. The Cartesian components of the free-stream velocity are $\bar{u}=u_\infty\cos\theta$ and $\bar{v}=u_\infty\sin\theta$. The parameter $\epsilon$ measures the strength of the vortex and $r_c$ is its size. We use a domain of size 20-by-15, with the vortex initially centered at $(x_0,y_0)=(5,5)$ with respect to the lower-left corner. The Mach number is $M_\infty=0.5$, the angle $\theta=\arctan(1/2)$, and the vortex has the parameters $\epsilon=0.3$ and $r_c=1.5$. We use characteristic boundary conditions and integrate until time $t_0=\sqrt{10^2+5^2}/10$, when the vortex has moved a relative distance of $(1,1/2)$. We write the Euler equations as a first-order system of conservation laws (\ref{conslaw}), in the conserved variables $(\rho,\rho u,\rho v,\rho E)$. The scheme (\ref{semidisc}) is implemented in a straightforward way, and we use Roe's method for the numerical fluxes (\ref{numflux1}) \cite{roe}. The time-integration is done explicitly with the form (\ref{semidiscprimal}) using the RK4 solver and a timestep $\Delta t$ small enough that all truncation errors are dominated by the spatial discretization.
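For reference, the exact vortex solution translates directly into code. A sketch with the free-stream values normalized to one (the parameter defaults match this test case, but the function itself is ours, not from any released code):

```python
import numpy as np

def euler_vortex(x, y, t, *, eps=0.3, rc=1.5, M=0.5, theta=np.arctan2(1, 2),
                 x0=5.0, y0=5.0, u_inf=1.0, p_inf=1.0, rho_inf=1.0, gam=1.4):
    """Analytic Euler-vortex solution (rho, u, v, p) at point (x, y), time t.

    Free-stream quantities are normalized to 1 here; this is an
    illustrative sketch of the formulas above, not the paper's code.
    """
    ub, vb = u_inf * np.cos(theta), u_inf * np.sin(theta)
    f = (1 - ((x - x0) - ub * t)**2 - ((y - y0) - vb * t)**2) / rc**2
    u = u_inf * (np.cos(theta)
                 - eps * ((y - y0) - vb * t) / (2 * np.pi * rc) * np.exp(f / 2))
    v = u_inf * (np.sin(theta)
                 + eps * ((x - x0) - ub * t) / (2 * np.pi * rc) * np.exp(f / 2))
    common = 1 - eps**2 * (gam - 1) * M**2 / (8 * np.pi**2) * np.exp(f)
    rho = rho_inf * common**(1 / (gam - 1))
    p = p_inf * common**(gam / (gam - 1))
    return rho, u, v, p
```

Evaluating this at the solution nodes at $t_0$ gives the reference values for the convergence study.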
We start from a coarse unstructured quadrilateral mesh (figure~\ref{conv}, top left), which we refine uniformly a number of times, and we use polynomial degrees $p$ ranging between 1 and 8. The top right plot also shows the density field for a sample solution. In the bottom plot of figure~\ref{conv}, we graph the maximum errors (discretely at the solution nodes) for all simulation cases, both for the Line-DG method and the standard nodal DG method. The results clearly show the optimal order of convergence $\mathcal{O}(h^{p+1})$ for element size $h$ for both methods, and that the Line-DG errors are in all cases very close to those of the nodal DG method. \begin{figure}[t] \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.8\textwidth]{convmsh.png} \\ Coarsest mesh, with degree $p=7$ \end{center} \end{minipage} \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.8\textwidth]{convsol.png} \\ Solution (density) \end{center} \end{minipage} \\ \begin{center} \includegraphics[width=.7\textwidth]{convplt.pdf} \end{center} \caption{Convergence test for an Euler vortex test problem using the Line-DG method and the nodal DG method. The results show optimal order of convergence $\mathcal{O}(h^{p+1})$ with very small differences between the two methods.} \label{conv} \end{figure} \subsection{Inviscid flow over a cylinder} Next we study a problem with a steady-state solution and curved boundaries, and solve the Euler equations for the inviscid flow over a half-cylinder with radius 1 at a Mach number of 0.3. Structured quadrilateral meshes are used, with strong element size grading to better resolve the region close to the cylinder (see figure~\ref{pltcylspy}, top left). The outer domain boundary is a half-cylinder with radius 10, where characteristic boundary conditions are imposed. Standard slip wall/symmetry conditions are used at the cylinder and at the symmetry plane. 
The steady-state solutions are found using a fully consistent Newton method, applied directly to the equations (\ref{semidisc}), with the linear systems solved using a direct sparse solver \cite{davis06sparse}. Starting the iterations from an approximate analytical solution, derived from a potential flow approximation, the solver converges to machine precision in 4 to 6 iterations. The solution is shown in the bottom left of figure~\ref{pltcylspy}, and the figures to the right show portions of the Jacobian matrices for both the Line-DG and the nodal DG method. This illustrates again the improved sparsity of the Line-DG scheme, with about a factor of 4 fewer entries than nodal DG already in two space dimensions. To evaluate the accuracy and convergence of the scheme, in figure~\ref{eulercylconv} we plot the errors in the lift coefficient $C_L$ (left) and the maximum errors in the entropy (right). These plots again confirm the convergence of the schemes as well as the minor differences in error between the Line-DG and the nodal DG schemes. \begin{figure}[t] \begin{center} \begin{minipage}{.34\textwidth} \begin{center} \includegraphics[width=.9\textwidth]{plteulercylmsh.pdf} \\ Coarsest mesh, $p=7$ \\ \ \\ \includegraphics[width=.9\textwidth]{plteulercylsol.png} \\ \ \\ \includegraphics[width=.8\textwidth]{plteulercylsolzoom.png} \\ \ \\ \includegraphics[width=.9\textwidth]{colbar_horiz065.png} \\ Solution, Mach number \end{center} \end{minipage} \ \ \begin{minipage}{.6\textwidth} \begin{center} \includegraphics[width=\textwidth]{pltcylspy.png} \\ \end{center} \end{minipage} \end{center} \caption{Inviscid flow over a cylinder.
The plots show the coarsest grid used in the convergence study and the nodes for polynomial degree $p=7$ (top left), the corresponding solution as Mach number color plot (bottom left, with zoom-in), and a sparsity plot of the Jacobian matrices for both Line-DG and nodal DG (right).} \label{pltcylspy} \end{figure} \begin{figure}[h] \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.99\textwidth]{eulercylconvCL.pdf} \\ \end{center} \end{minipage} \hfill \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.99\textwidth]{eulercylconvDS.pdf} \\ \end{center} \end{minipage} \\ \caption{The convergence of the lift coefficient $C_L$ (left) and the entropy difference (right) for the inviscid flow over cylinder problem. The plots show a series of results for varying polynomial degrees and number of refinements, for the two methods Line-DG and nodal DG.} \label{eulercylconv} \end{figure} \subsection{Laminar flow around airfoil} An example of a steady-state viscous computation is shown in figure~\ref{sdfoil}. The compressible Navier-Stokes equations are solved at Mach 0.2 and Reynolds number 5000, for a flow around an SD7003 airfoil. The quadrilateral mesh is fully unstructured except for a structured graded boundary layer region, with a total of 461 elements for the coarse mesh, and 1844 and 7376 elements for the once and twice refined meshes, respectively. With approximating polynomials of degree $p=6$, this gives a total number of high-order nodes of 22,589 for the coarse mesh and 90,356 for the first refinement. We find the steady-state solution with 10 digits of accuracy in the residual using a consistent Newton's method, with pseudo-timestepping for regularization. A solution is shown in figure~\ref{sdfoil} (top right), for the coarse mesh with $p=7$. In the bottom plots, we show the convergence of the drag and the lift coefficients, for a range of polynomial degrees $p$. 
While it is hard to assess the exact order of convergence from these numbers, it is clear that our scheme provides a high order of convergence even for these difficult derivative-based quantities. \begin{figure}[t!] \begin{center} \begin{minipage}{.47\textwidth} \includegraphics[width=\textwidth]{sdmsh.png} \end{minipage} \ \ % \begin{minipage}{.47\textwidth} \includegraphics[width=\textwidth]{sdsol.png} \end{minipage} \\ \ \\ \begin{minipage}{.42\textwidth} \includegraphics[width=\textwidth]{sdconverrCD.pdf} \end{minipage} \ \ \ \ \ \ \ % \begin{minipage}{.42\textwidth} \includegraphics[width=\textwidth]{sdconverrCL.pdf} \end{minipage} \end{center} \caption{Stationary flow around an SD7003 airfoil (top left: mesh, top right: Mach number), computed with the Line-DG method with $p=7$, at free-stream Mach 0.2, zero angle of attack, and Reynolds number 5,000. The bottom plots show the convergence of $C_D$ and $C_L$, for a range of polynomial degrees and with 0, 1, or 2 uniform mesh refinements.} \label{sdfoil} \end{figure} \subsection{Transient flow around airfoil at Re = 20,000} In our last example, we demonstrate time-accurate implicit solution of transient flow around an SD7003 airfoil at Re = 20,000. The mesh is highly resolved in the boundary layer; however, for this Reynolds number it is coarser than the flow features in much of the domain, and the computations should therefore be considered an under-resolved ILES-type model \cite{uranga11iles}. The Mach number is 0.1 and the angle of attack is 30 degrees to force flow separation at the leading edge. We use the three-stage DIRK scheme (\ref{dirk1}), (\ref{dirk2}), solved with Newton's method as described in sections~\ref{sec:newtonkrylov} and~\ref{sec:reuse}. At each Newton step we perform 5 GMRES iterations, and if the number of Newton iterations exceeds 15 we recompute the Jacobian matrices. Our computational mesh has 1122 quadrilateral elements with polynomial degree $p=7$.
The mesh and a solution at the normalized time of $t=1.76$ are shown in the top plots of figure~\ref{sd1t}. We use the timestep $\Delta t=2\cdot 10^{-4}$, which is about 250 times larger than the largest stable explicit RK4 timestep, yet small enough to accurately capture most of the complex flow features. It is difficult to estimate the accuracy in the simulation, due to the under-resolved nature of LES and the high sensitivity of transitional flows. However, we have run the same problem using a nodal DG code which has been tested against other simulations as well as experiments \cite{uranga11iles}. The bottom left plot of figure~\ref{sd1t} shows that the lift and drag forces on the airfoil agree well between the two schemes, until small perturbations have grown enough to cause large differences between the flows. We also plot the performance of the Newton solver, in the bottom right of figure~\ref{sd1t}. It shows how the number of Newton iterations per solve remains fairly constant, and about once every 100th timestep it reaches 16 which forces a recomputation of the Jacobian matrix. The average number of iterations per Newton solve for the entire simulation is 14. Because of the sparsity of the matrices and the splitting (\ref{matvecsplit}), these iterations are relatively inexpensive compared to residual evaluation. Our implementation is not optimized for performance, but the relative times for the four operations (1) GMRES iteration, (2) residual evaluation, (3) Jacobian evaluation, and (4) preconditioner factorization are roughly 1:4:40:16. Therefore, since we only perform 5 GMRES iterations per solve, we spend about the same time in residual evaluation as in the linear solver. In this sense, the solver is similar to an explicit scheme in that it spends a large portion of its computational time in residual evaluations, which gives benefits e.g. in the parallelization on new parallel multicore computer architectures with limited memory bandwidth. 
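The pay-off of re-using the Jacobian can be illustrated with a small sketch. The following toy Python code is not the paper's implementation: the function name, the refresh threshold, and the $2\times 2$ test system are invented for illustration, and a direct dense solve stands in for the 5 preconditioned GMRES iterations of the actual solver.

```python
import numpy as np

def newton_with_jacobian_reuse(F, J, x0, tol=1e-10, max_iter=50, recompute_after=15):
    """Toy sketch of the Jacobian re-use strategy described in the text:
    the Jacobian is assembled (and would be factorized) once and then
    frozen; it is only recomputed when more than `recompute_after` steps
    have been taken since the last assembly."""
    x = np.asarray(x0, dtype=float).copy()
    Jf = J(x)                        # assemble the Jacobian once
    since_assembly = 0
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x
        if since_assembly >= recompute_after:
            Jf = J(x)                # convergence too slow: refresh Jacobian
            since_assembly = 0
        x -= np.linalg.solve(Jf, r)  # frozen-Jacobian (chord) Newton step
        since_assembly += 1
    raise RuntimeError("Newton did not converge")

# Invented toy nonlinear system: x^2 + y^2 = 4,  x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
x = newton_with_jacobian_reuse(F, J, [2.0, 0.5])
```

Freezing the factorized Jacobian turns each subsequent step into a cheap triangular solve, which is the same trade-off between residual evaluations and linear-solver work discussed above.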
The time for re-assembly and factorization of the preconditioner is negligible, since they are only performed once in about every 100th timestep, which corresponds to 1400 residual evaluations or 7000 GMRES iterations. We also point out that without preconditioning about 20 times more GMRES iterations are required for the same tolerance, showing that even our simple preconditioner makes a drastic difference on the convergence. \begin{figure}[t] \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.9\textwidth]{sd1tsolmsh1.png} \\ \end{center} \end{minipage} \ % \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.9\textwidth]{sd1tsolmsh2.png} \\ \end{center} \end{minipage} \\ \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.95\textwidth]{ns1tFs.pdf} \\ \end{center} \end{minipage} \hfill \begin{minipage}{.49\textwidth} \begin{center} \includegraphics[width=.95\textwidth]{sd1tconv2.pdf} \\ \end{center} \end{minipage} \caption{Implicit transient simulation of flow around an SD7003 airfoil, at 30 degrees angle of attack, Reynolds number 20,000, and Mach number 0.1. Line-DG with polynomial degrees of $p=7$ is used for the spatial discretization, and a three stage, third order accurate DIRK scheme is used for time integration. The nonlinear systems are solved using a Newton-Krylov solver with re-used Jacobian matrices. The CFL number compared to the explicit RK4 method is about 250. The top figures show the computational mesh and color contours of the Mach number, from 0 to 0.3. The bottom left plot compares the lift and drag coefficient with a nodal DG scheme, indicating good agreement until the small differences between the schemes have grown enough to make the solutions hard to compare. 
The bottom right plot shows the performance of the nonlinear Newton solver, and in particular how it only recomputes the Jacobian matrices once in about every 100th solve.} \label{sd1t} \end{figure} \section{Conclusions} We have presented a new line-based DG method for first and second-order systems of equations. The scheme has a simple structure, with only one-dimensional integrals and standard Riemann solvers applied point-wise. Compared to the standard nodal DG method, this gives a simpler assembly process and a fundamentally different sparsity structure, which we used to develop efficient matrix-based implicit solvers. Compared to collocation based methods such as the DG spectral element method, it uses fully consistent integration along each coordinate-direction, and it slightly reduces the connectivities to neighboring elements by the choice of solution nodes. We showed that the accuracy of the discretizations is very similar to that of the standard DG method, and we demonstrated a stiff LES-type flow simulation with high-order DIRK time-integration using Newton-Krylov solvers and re-used Jacobians. A number of further developments are needed to make the scheme competitive for real-world problems. We have not addressed the issue of nonlinear stability for under-resolved features, including shock capturing, where approaches such as artificial viscosity and limiting could be adopted. For the solvers, our simple block-Jacobi preconditioner can be much improved upon, using e.g. multigrid and ILU techniques. Finally, for large problems the implementation needs to be parallelized, in particular for the new generation of multicore and GPU chips where memory bandwidth is limited. Here the high sparsity of the Line-DG scheme might have additional benefits over the standard nodal DG scheme.
\section{Acknowledgments} We would like to acknowledge all the valuable discussions about this work with Jaime Peraire, Luming Wang, and Bradley Froehle, the suggestions from the reviewers, as well as the generous support from the AFOSR Computational Mathematics program under grant FA9550-10-1-0229, the Alfred P. Sloan foundation, and the Director, Office of Science, Computational and Technology Research, U.S. Department of Energy under Contract No. DE-AC02-05CH11231. \bibliographystyle{model1-num-names}
\section{Automata, Modal Formulae, and the Inverse Calculus} We first describe how to decide satisfiability in \ensuremath{\mathsf{K}}{} using the automata approach and the inverse method, respectively. Then we show that both approaches are closely connected. \subsection{Automata and Modal Formulae} Given a \ensuremath{\mathsf{K}}-formula $G$, we define an automaton $\mathfrak{A}_G$ such that $L(\mathfrak{A}_G) = \emptyset$ iff $G$ is not satisfiable. In contrast to the ``standard'' automata approach, the states of our automaton $\mathfrak{A}_G$ will be subsets of $\Pi_G$ rather than sets of subformulae of $G$. Using paths instead of subformulae is mostly a matter of notation. We also require the states to satisfy additional properties (i.e., we do not allow for arbitrary subsets of $\Pi_G$). This makes the proof of correctness of the automata approach only slightly more complicated, and it allows us to treat some important optimisations of the inverse calculus within our framework. The next definition introduces these properties. \begin{definition}[Propositionally expanded, clash] Let $G$ be a $\ensuremath{\mathsf{K}}$-formula in NNF, $\Pi_G$ the set of $G$-paths, and $\Phi \subseteq \Pi_G$. An $\wedge$-path $\pi \in \Phi$ is \emph{propositionally expanded in} $\Phi$ iff $\{ \pi {\wedge_l}, \pi {\wedge_r} \} \subseteq \Phi$. An $\vee$-path $\pi \in \Phi$ is \emph{propositionally expanded in} $\Phi$ iff $\{ \pi {\vee_l} , \pi {\vee_r} \} \cap \Phi \neq \emptyset$. The set $\Phi$ is \emph{propositionally expanded} iff every \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-path $\pi \in \Phi$ is propositionally expanded in $\Phi$. We use ``p.e.'' as an abbreviation for ``propositionally expanded''. The set $\Phi'$ is an \emph{expansion} of the set $\Phi$ if $\Phi \subseteq \Phi'$, $\Phi'$ is p.e.\ and $\Phi'$ is minimal w.r.t.\ set inclusion with these properties. 
For a set $\Phi$, we define the set of its expansions as $\exp \Phi := \{ \Phi' \mid \Phi' \text{ is an expansion of $\Phi$} \}$. $\Phi$ contains a \emph{clash} iff there are two paths $\pi_1,\pi_2 \in \Phi$ such that $G|_{\pi_1} = p$ and $G|_{\pi_2} = \neg p$ for a propositional variable $p$. Otherwise, $\Phi$ is called \emph{clash-free}. \end{definition} For a set of paths $\Psi$, the set $\exp \Psi$ can effectively be constructed by successively adding paths required by the definition of p.e. A formal construction of the closure can be found in the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure}. Note that $\emptyset$ is p.e., clash-free, and $\exp \emptyset = \{ \emptyset \}$. \begin{definition}[Formula Automaton]\label{def:automaton} For a $\ensuremath{\mathsf{K}}$-formula $G$ in NNF, we fix an arbitrary enumeration $\{ \pi_1, \dots, \pi_n \}$ of the $\Diamond$-paths in $\Pi_G$. The $n$-ary looping automaton $\mathfrak{A}_G$ is defined by $\mathfrak{A}_G := (Q_G, \Sigma_G, \exp{\{ \epsilon \}} , \Delta_G)$, where $Q_G := \Sigma_G := \{ \Phi \subseteq \Pi_G \mid \Phi \text{ is p.e.} \}$ and the transition relation $\Delta_G$ is defined as follows: \begin{itemize} \item $\Delta_G$ contains only tuples of the form $(\Phi,\Phi,\dots)$. \item If $\Phi$ is clash-free, then we define $\Delta_G(\Phi,\Phi) := \exp {\Psi_1} \times \dots \times \exp{\Psi_n}$, where \[ \Psi_i = \begin{cases} \{ \pi_i \Diamond \} \cup \{ \pi \Box \mid \pi \in \Phi \text{ is a $\Box$-path } \} & \text{if $\pi_i \in \Phi$ }\\ \emptyset & \text{else} \end{cases} \] \item If $\Phi$ contains a clash, then $\Delta_G(\Phi,\Phi) = \emptyset$, i.e., there is no transition from $\Phi$. \end{itemize} \end{definition} Note that this definition implies $\Delta_G(\emptyset,\emptyset) = \{ (\emptyset,\dots,\emptyset) \}$ and that only states with a clash have no successor states.
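The effective construction of the expansion sets $\exp\Psi$, which serve as the states of $\mathfrak{A}_G$, can be sketched as follows. This is an illustrative Python encoding (paths as strings, with the children of a path $p$ written as $p\mathtt{l}$ and $p\mathtt{r}$; the \texttt{kind} map is part of the invented encoding), and it ignores the minimality bookkeeping of the formal definition.

```python
def expansions(phi, kind):
    """All propositional expansions of a set of paths: a non-expanded
    AND-path is expanded deterministically by adding both children, while
    a non-expanded OR-path causes a branch into two expansions.  `kind(p)`
    returns 'and', 'or', or None for a leaf path."""
    phi = frozenset(phi)
    for p in sorted(phi):
        k = kind(p)
        if k == 'and' and not {p + 'l', p + 'r'} <= phi:
            return expansions(phi | {p + 'l', p + 'r'}, kind)
        if k == 'or' and not ({p + 'l', p + 'r'} & phi):
            return (expansions(phi | {p + 'l'}, kind)
                    | expansions(phi | {p + 'r'}, kind))
    return {phi}  # phi is propositionally expanded

# Toy formula G = (p OR q) AND r:  '' is an AND-path, 'l' an OR-path.
kind = {'': 'and', 'l': 'or'}.get
exp_eps = expansions({''}, kind)   # the two expansions of {epsilon}
```

For this toy formula the call yields two expansions, one per disjunct, matching the branching in the definition of $\exp$.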
\begin{theorem}\label{lem:emptiness-and-non-satisfiability} For a $\ensuremath{\mathsf{K}}$-formula $G$, $G$ is satisfiable iff $L(\mathfrak{A}_G) \neq \emptyset$. \end{theorem} This theorem can be proved by showing that i) every tree accepted by $\mathfrak{A}_G$ induces a model of $G$; and ii) every model $\mathcal{M}$ of $G$ can be turned into a tree accepted by $\mathfrak{A}_G$ by a) unraveling $\mathcal{M}$ into a tree model $\mathcal{T}$ for $G$; b) labeling every world of $\mathcal{T}$ with a suitable p.e. set depending on the formulae that hold in this world; and c) padding ``holes'' in $\mathcal{T}$ with $\emptyset$. \begin{proof} Let $\{\pi_1, \dots, \pi_n\}$ be an enumeration of the $\Diamond$-paths in $\Pi_G$. For the \emph{if}-direction, let $L(\mathfrak{A}_G) \neq \emptyset$, and let $t,r : [n]^* \rightarrow \{ \Phi \subseteq \Pi_G \mid \Phi \text{ is p.e.} \}$ be a tree that is accepted by $\mathfrak{A}_G$ and a corresponding run of $\mathfrak{A}_G$. By construction of $\mathfrak{A}_G$, $t(w) = r(w)$ for every $w \in [n]^*$. We construct a Kripke model $\mathcal{M} = (W,R,V)$ from $t$ by setting \begin{align*} W & = \{ w \in [n]^* \mid t(w) \neq \emptyset \}\\ R & = \{ (w,wi) \in W \times W \mid i \in [n] \}\\ V & = \lambda P . \{ w \in W \mid \exists \pi \in t(w) . G|_\pi = P \} \quad \text{ for all propositional atoms $P$ } \end{align*} \begin{claim} For all $w \in W$, if $\pi \in t(w)$ then $\mathcal{M},w \models G|_\pi$. \end{claim} \noindent\textit{Proof of the claim.} The claim is proved by induction on the structure of \ensuremath{\mathsf{K}}-formulae. Let $w \in W$ be a world and $\pi \in \Pi_G$ be a path such that $\pi \in t(w)$. \begin{itemize} \item if $G|_\pi = P$ is a propositional atom, then $w \in V(P)$ and hence $\mathcal{M},w \models G|_\pi$. \item if $G|_\pi = \neg P$ is a negated propositional atom, then, since $t(w)$ is clash-free, there is no $\pi' \in t(w)$ such that $G|_{\pi'} = P$.
Thus, $w \not \in V(P)$ and hence $\mathcal{M},w \models \neg P$. \item if $G|_\pi = F_1 \wedge F_2$ then $\pi$ is an $\wedge$-path, and since $t(w)$ is p.e., $\{ \pi {\wedge_l}, \pi {\wedge_r}\} \subseteq t(w)$. By induction, $\mathcal{M}, w \models G|_{\pi{\wedge_*}}$ and hence $\mathcal{M},w \models G|_\pi$. \item if $G|_\pi = F_1 \vee F_2$ then $\pi$ is an $\vee$-path, and since $t(w)$ is p.e., $\{ \pi {\vee_l}, \pi {\vee_r}\} \cap t(w) \neq \emptyset$. By induction, $\mathcal{M}, w \models G|_{\pi{\vee_l}}$ or $\mathcal{M}, w \models G|_{\pi{\vee_r}}$ and hence $\mathcal{M},w \models G|_\pi$. \item if $G|_\pi = \Diamond F$ then $\pi$ is a $\Diamond$-path and, w.l.o.g., we assume $\pi = \pi_i$. Since $\pi_i \in r(w)$, $\pi_i \Diamond \in r(wi) = t(wi)$ holds and hence $wi \in W$ and $(w,wi) \in R$. By induction, we have that $\mathcal{M}, wi \models G|_{\pi_i \Diamond}$ and hence $\mathcal{M}, w \models G|_{\pi_i}$. \item if $G|_\pi = \Box F$ and $(w,w') \in R$ then $w' = wi$ for some $i \in [n]$ and $t(wi) \neq \emptyset$ holds, and by construction of $\mathfrak{A}_G$, this implies $\pi \Box \in r(wi) = t(wi)$. By induction, this implies $\mathcal{M}, wi \models G|_{\pi \Box}$, and since $wi = w'$ and $w'$ has been chosen arbitrarily, $\mathcal{M}, w \models G|_\pi$. \end{itemize} This finishes the proof of the claim. Since $t(\epsilon) = r(\epsilon) \in \exp{ \{ \epsilon \} }$ and hence $\epsilon \in t(\epsilon)$, we have $\mathcal{M}, \epsilon \models G|_\epsilon$, and $G = G|_\epsilon$ is satisfiable. For the \emph{only if}-direction, we first show an auxiliary claim: for a set $\Psi \subseteq \Pi_G$ we define $\mathcal{M}, w \models \Psi$ iff $\mathcal{M}, w \models G|_\pi$ for every $\pi \in \Psi$. \begin{claim} If $\Psi \subseteq \Pi_G$ and $w \in W$ such that $\mathcal{M}, w \models \Psi$, then there is a $\Phi \in \exp \Psi$ such that $\mathcal{M}, w \models \Phi$.
\end{claim} \noindent \textit{Proof of the claim.} Let $\Psi \subseteq \Pi_G$ and $w \in W$ such that $\mathcal{M}, w \models \Psi$. We will show how to construct an expansion of $\Psi$ with the desired property. If $\Psi$ is already p.e., then $\Psi \in \exp \Psi$ and we are done. If $\Psi$ is not p.e.\ then let $\pi \in \Psi$ be a \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-path that is not p.e.\ in $\Psi$. \begin{itemize} \item If $\pi$ is a $\wedge$-path then $G|_\pi = F_1 \wedge F_2$ and since $\mathcal{M}, w \models G|_\pi$, also $\mathcal{M}, w \models F_1 = G|_{\pi \wedge_l}$ and $\mathcal{M}, w \models F_2 = G|_{\pi \wedge_r}$. Hence $\mathcal{M}, w \models \Psi \cup \{ \pi{\wedge_l}, \pi{\wedge_r} \}$ and $\Psi' = \Psi \cup \{ \pi{\wedge_l}, \pi{\wedge_r}\}$ is a set with $\mathcal{M}, w \models \Psi'$ that is ``one step closer'' to being p.e.\ than $\Psi$. \item If $\pi$ is a $\vee$-path then $G|_\pi = F_1 \vee F_2$ and since $\mathcal{M}, w \models G|_\pi$, also $\mathcal{M}, w \models F_1 = G|_{\pi \vee_l}$ or $\mathcal{M}, w \models F_2 = G|_{\pi \vee_r}$. Hence $\mathcal{M}, w \models \Psi \cup \{ \pi{\vee_l}\}$ or $\mathcal{M}, w \models \Psi \cup \{ \pi{\vee_r}\}$, and hence we can obtain a set $\Psi'$ with $\mathcal{M}, w \models \Psi'$ that is again ``one step closer'' to being p.e.\ than $\Psi$. \end{itemize} Restarting this process with $\Psi = \Psi'$ eventually yields an expansion $\Phi$ of the initial set $\Psi$ with $\mathcal{M},w \models \Phi$, which proves the claim. Let $\mathcal{M} = (W,R,V)$ be a model for $G$ with $w \in W$ such that $\mathcal{M}, w \models G$. From $\mathcal{M}$, using the claim, we inductively define a tree $t$ accepted by $\mathfrak{A}_G$. For this purpose, we also inductively define a function $f : [n]^* \rightarrow W$ such that $\mathcal{M}, f(p) \models t(p)$ for all $p$. We start by setting $f(\epsilon) = w$ for a $w \in W$ with $\mathcal{M}, w \models G$.
We also set $t(\epsilon) = \Phi$ for a $\Phi \in \exp { \{ \epsilon \} }$ such that $\mathcal{M}, w \models \Phi$. From the claim we have that such a set $\Phi$ exists because $\mathcal{M}, w \models G = G|_\epsilon$. If $f(p)$ and $t(p)$ are already defined, then, for $i \in [n]$, we define $f(pi)$ and $t(pi)$ as follows: \begin{itemize} \item if $\pi_i \in t(p)$ then $\mathcal{M}, f(p) \models G|_{\pi_i}$ and hence there is a $w' \in W$ such that $(f(p),w') \in R$ and $\mathcal{M}, w' \models G|_{\pi_i \Diamond}$. If $\pi \in t(p)$ is a $\Box$-path, then also $\mathcal{M}, w' \models G|_{\pi\Box}$ holds. Hence $\mathcal{M}, w' \models \{ \pi_i \Diamond \} \cup \{ \pi \Box \mid \pi \in t(p) \text{ is a $\Box$-path } \}$. We set $f(pi) = w'$ and $t(pi) = \Phi$ for a $\Phi \in \exp {\{ \pi_i \Diamond \} \cup \{ \pi \Box \mid \pi \in t(p) \text{ is a $\Box$-path } \}}$ with $\mathcal{M}, w' \models \Phi$, which exists by the claim. \item if $\pi_i \not \in t(p)$, then we set $f(pi) = w$ for an arbitrary $w \in W$ and $t(pi) = \emptyset$. \end{itemize} In both cases, we have defined $f(pi)$ and $t(pi)$ such that $\mathcal{M}, f(pi) \models t(pi)$. It is easy to see that $t$ is accepted by $\mathfrak{A}_G$ with the run $r = t$. Hence $L(\mathfrak{A}_G) \neq \emptyset$, which is what we needed to show. \qed \end{proof} Together with the emptiness test for looping tree automata, Theorem~\ref{lem:emptiness-and-non-satisfiability} yields a decision procedure for $\ensuremath{\mathsf{K}}$-satisfiability. To test a \ensuremath{\mathsf{K}}-formula $G$ for unsatisfiability, construct $\mathfrak{A}_G$ and test whether $L(\mathfrak{A}_G)=\emptyset$ holds using the emptiness test for looping tree automata: $L(\mathfrak{A}_G)=\emptyset$ iff $\exp{ \{ \epsilon \}}\subseteq Q_0^\vartriangleright$, where $Q_0 \subseteq Q_G$ is the set of states containing a clash.
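The bottom-up emptiness test used here, which starts from states without successors and propagates inactiveness along the transition relation, can be sketched in a few lines. This is an illustrative Python encoding with an invented toy automaton, not the paper's notation.

```python
def inactive_states(states, delta):
    """Emptiness test for a looping tree automaton: a state with no
    successor tuples is inactive, and a state becomes inactive as soon as
    every one of its transition tuples contains an inactive state.
    `delta[q]` is the list of successor tuples of state q."""
    inactive = set()
    changed = True
    while changed:
        changed = False
        for q in states:
            if q in inactive:
                continue
            # all(...) is vacuously true for states with no successor tuples
            if all(any(s in inactive for s in tup) for tup in delta.get(q, [])):
                inactive.add(q)
                changed = True
    return inactive

# Toy automaton: q2 has no successors, q0 needs q2, q1 loops on itself.
delta = {'q0': [('q1', 'q2')], 'q1': [('q1',)], 'q2': []}
dead = inactive_states(['q0', 'q1', 'q2'], delta)
```

The accepted language is empty exactly when every initial state ends up in the inactive set, mirroring the condition $\exp{\{\epsilon\}} \subseteq Q_0^\vartriangleright$ above.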
The following is a derivation of a superset of $\exp {\{ \epsilon \}}$ from $Q_0$ for the example formula from Figure~\ref{fig:g-paths}: $$ Q_0 = \{ \underbrace{\{ \nu_5,\nu_6,\nu_7, \nu_8 \},\ \{ \nu_5,\nu_6,\nu_7, \nu_9 \}}_{=\ \exp {\{ \nu_5, \nu_6, \nu_7 \}}}, \dots \} \vartriangleright Q_0 \cup \underbrace{ \{ \{ \nu_0, \nu_1, \nu_2, \nu_3, \nu_4 \}\}}_{=\ \exp { \{\epsilon \} }} $$ \section{Future Work} There are several interesting directions in which to continue this work. First, satisfiability in \ensuremath{\mathsf{K}}{} (without global axioms) is \textsc{PSpace}-complete whereas the inverse method yields only an \textsc{ExpTime}-algorithm. Can suitable optimizations turn this into a \textsc{PSpace}-procedure? Second, can the optimizations considered in Section~\ref{sec:opti} be extended to the inverse calculus with global axioms? Third, Voronkov considers additional optimizations. Can they also be handled within our framework? Finally, can the correspondence between the automata approach and the inverse method be used to obtain inverse calculi and correctness proofs for other modal or description logics? \section{Introduction} Decision procedures for (propositional) modal logics and description logics play an important r\^{o}le in knowledge representation and verification. When developing such procedures, one is both interested in their worst-case complexity and in their behavior in practical applications. From the theoretical point of view, it is desirable to obtain an algorithm whose worst-case complexity matches the complexity of the problem. From the practical point of view, it is more important to have an algorithm that is easy to implement and amenable to optimizations, such that it behaves well on practical instances of the decision problem.
The most popular approaches for constructing decision procedures for modal logics are i) semantic tableaux and related methods \cite{Gore-Tableau-Handbook-1998,BaaderSattler-StudiaLogica-2000}; ii) translations into classical first-order logics \cite{Schmidt98f,arec:tree00}; and iii) reductions to the emptiness problem for certain (tree) automata~\cite{VaWo86,LutzSattlerAIML00}. Whereas highly optimized tableaux and translation approaches behave quite well in practice \cite{Horrocks-Tableaux-2000,HustadtSchmidt-Tableaux2000}, it is sometimes hard to obtain exact worst-case complexity results using these approaches. For example, satisfiability in the basic modal logic \ensuremath{\mathsf{K}}{} w.r.t.\ global axioms is known to be \textsc{ExpTime}-complete \cite{Spaan93a}. However, the ``natural'' tableaux algorithm for this problem is a \textsc{NExpTime}-algorithm \cite{BaaderSattler-StudiaLogica-2000}, and it is rather hard to construct a tableaux algorithm that runs in deterministic exponential time \cite{donini00:_exptim_alc}. In contrast, it is folklore that the automata approach yields a very simple proof that satisfiability in \ensuremath{\mathsf{K}}{} w.r.t.\ global axioms is in \textsc{ExpTime}{}. However, the algorithm obtained this way is not only worst-case, but also best-case exponential: it first constructs an automaton that is always exponential in the size of the input formulae (its set of states is the powerset of the set of subformulae of the input formulae), and then applies the (polynomial) emptiness test to this large automaton. To overcome this problem, one must try to construct the automaton ``on-the-fly'' while performing the emptiness test. Whereas this idea has successfully been used for automata that perform model checking \cite{GPVW95,lncs531*233}, to the best of our knowledge it has not yet been applied to satisfiability checking. 
The original motivation of this work was to compare the automata and the tableaux approaches, with the ultimate goal of obtaining an approach that combines the advantages of both, without possessing any of the disadvantages. As a starting point, we wanted to see whether the tableaux approach could be viewed as an on-the-fly realization of the emptiness test done by the automata approach. At first sight, this idea was persuasive since a run of the automaton constructed by the automata approach (which is a so-called looping automaton working on infinite trees) looks very much like a run of the tableaux procedure, and the tableaux procedure does generate sets of formulae on-the-fly. However, the polynomial emptiness test for looping automata does \emph{not} try to construct a run starting with the root of the tree, as done by the tableaux approach. Instead, it computes inactive states, i.e., states that can never occur on a successful run of the automaton, and tests whether all initial states are inactive. This computation starts ``from the bottom'' by locating obviously inactive states (i.e., states without successor states), and then ``propagates'' inactiveness along the transition relation. Thus, the emptiness test works in the opposite direction of the tableaux procedure. This observation suggested considering an approach that inverts the tableaux approach: this is just the so-called inverse method. Recently, Voronkov \cite{Voronkov-ToCL-2001} has applied this method to obtain a bottom-up decision procedure for satisfiability in \ensuremath{\mathsf{K}}, and has optimized and implemented this procedure. In this paper we will show that the inverse method for \ensuremath{\mathsf{K}}{} can indeed be seen as an on-the-fly realization of the emptiness test done by the automata approach for \ensuremath{\mathsf{K}}{}. The benefits of this result are two-fold.
First, it shows that Voronkov's implementation, which behaves quite well in practice, is an optimized on-the-fly implementation of the automata-based satisfiability procedure for \ensuremath{\mathsf{K}}. Second, it can be used to give a simpler proof of the fact that Voronkov's optimizations do not destroy completeness of the procedure. We will also show how the inverse method can be extended to handle global axioms, and that the correspondence to the automata approach still holds in this setting. In particular, the inverse method yields an \textsc{ExpTime}-algorithm for satisfiability in \ensuremath{\mathsf{K}}{} w.r.t.\ global axioms. \subsection{The Inverse Calculus} In the following, we introduce the inverse calculus for \ensuremath{\mathsf{K}}. We stay close to the notation and terminology used in \cite{Voronkov-ToCL-2001}. A \emph{sequent} is a subset of $\Pi_G$. Sequents will be denoted by capital Greek letters. The union of two sequents $\Gamma$ and $\Lambda$ is denoted by $ \Gamma, \Lambda$. If $\Gamma$ is a sequent and $\pi \in \Pi_G$ then we denote $\Gamma \cup \{ \pi \}$ by $\Gamma, \pi$. If $\Gamma$ is a sequent that contains only $\Box$-paths then we write $\Gamma \Box$ to denote the sequent $\{ \pi \Box \mid \pi \in \Gamma \}$. Since states of $\mathfrak{A}_G$ are also subsets of $\Pi_G$ and hence sequents, we will later on use the same notational conventions for states as for sequents. \begin{definition}[The inverse path calculus] Let $G$ be a formula in NNF and $\Pi_G$ the set of paths of $G$. \emph{Axioms} of the inverse calculus are all sequents $\{ \pi_1, \pi_2 \}$ such that $G|_{\pi_1} = p$ and $G|_{\pi_2} = \neg p$ for some propositional variable $p$. The \emph{rules} of the inverse calculus are given in Figure \ref{fig:inv-calc}, where all paths occurring in a sequent are $G$-paths and, for every $\Diamond^+$ inference, $\pi$ is a $\Diamond$-path.
We refer to this calculus by \ensuremath{\textsf{IC}_G}\xspace.\footnote{$G$ appears in the subscript because the calculus is highly dependent on the input formula $G$: only $G$-paths can be generated by \ensuremath{\textsf{IC}_G}\xspace.} We define $\mathcal{S}_0 := \{ \Gamma \mid \Gamma \text{ is an axiom } \}$. A \emph{derivation} of \ensuremath{\textsf{IC}_G}\xspace is a sequence of sets of sequents $\mathcal{S}_0 \vdash \dots \vdash \mathcal{S}_m$ where $\mathcal{S}_i \vdash \mathcal{S}_{i+1}$ iff $\mathcal{S}_{i+1} = \mathcal{S}_i \cup \{ \Gamma \}$ such that there exist sequents $\Gamma_1, \dots, \Gamma_k \in \mathcal{S}_i$ and $\inference{\Gamma_1 & \dots & \Gamma_k}{\Gamma}$ is an inference. \end{definition} We write $\mathcal{S}_0 \mathbin{\vdash\!\!^*} \mathcal{S}$ iff there is a derivation $\mathcal{S}_0 \vdash \dots \vdash \mathcal{S}_m$ with $\mathcal{S} = \mathcal{S}_m$. The \emph{closure} $\mathcal{S}_0^\vdash$ of $\mathcal{S}_0$ under $\vdash$ is defined by $\mathcal{S}_0^\vdash = \bigcup \{ \mathcal{S} \mid \mathcal{S}_0 \mathbin{\vdash\!\!^*} \mathcal{S} \}$. Again, the closure can effectively be computed by starting with $\mathcal{S}_0$ and then adding sequents that can be obtained by an inference until no more new sequents can be added.
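This saturation procedure can be sketched generically. The following Python fragment is an illustrative encoding, not Voronkov's optimized procedure: sequents are frozensets of path strings (with $\pi\wedge_l$, $\pi\wedge_r$ written as the suffixes \texttt{l}, \texttt{r}), only the $(\wedge_l)$/$(\wedge_r)$ rules are instantiated, and the toy formula $p \wedge \neg p$ is assumed, whose clash $\{\wedge_l, \wedge_r\}$ is the single axiom.

```python
def closure(axioms, inference_step):
    """Generic saturation loop computing the closure S0 under |-: start
    from the axioms and apply inferences until no new sequent appears."""
    derived = set(axioms)
    while True:
        new = inference_step(derived) - derived
        if not new:
            return derived
        derived |= new

def and_inferences(sequents):
    """The (AND_l)/(AND_r) rules only, for illustration: from a sequent
    containing a path ending in 'l' or 'r' (here assumed to be an
    AND-child), derive the sequent with that path replaced by its prefix."""
    out = set()
    for seq in sequents:
        for p in seq:
            if p.endswith(('l', 'r')):
                out.add((seq - {p}) | {p[:-1]})
    return out

# Toy formula G = p AND (NOT p): the paths 'l' (-> p) and 'r' (-> NOT p)
# clash, so {'l', 'r'} is the only axiom.
axioms = {frozenset({'l', 'r'})}
sat = closure(axioms, and_inferences)
```

For this toy case the saturation derives the sequent $\{\epsilon\}$ (the empty path string), recovering the fact that $G$ is unsatisfiable iff $\{\epsilon\}$ is in the closure.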
\begin{figure}[t] \begin{center} \begin{tabular}{c@{\ \ \ }c@{\ \ \ }c} \inference[$(\vee)$] {\Gamma_l, \pi \vee_l & \Gamma_r,\pi \vee_r}{\Gamma_l, \Gamma_r, \pi} & \inference[$(\wedge_l)$]{ \Gamma, \pi \wedge_l}{\Gamma, \pi} & \inference[$(\wedge_r)$]{ \Gamma, \pi \wedge_r}{\Gamma, \pi} \\[3ex] \inference[$(\Diamond)$]{\Gamma \Box, \pi \Diamond}{\Gamma, \pi} & \inference[$(\Diamond^+)$]{\Gamma \Box}{\Gamma, \pi} \end{tabular} \end{center} \vspace{-1ex} \caption{Inference rules of \ensuremath{\textsf{IC}_G}\xspace} \vspace{-2ex} \label{fig:inv-calc} \end{figure} As shown in \cite{Voronkov-ToCL-2001}, the computation of the closure yields a decision procedure for \ensuremath{\mathsf{K}}-satisfiability: \begin{fact}\label{fact:inverse-calculus-decides-k-satisfiability} $G$ is unsatisfiable iff $\{ \epsilon \} \in \mathcal{S}_0^\vdash$. \end{fact} Figure~\ref{fig:example-inference} shows the inferences of \ensuremath{\textsf{IC}_G}\xspace that lead to $\nu_0 = \epsilon$ for the example formula from Figure~\ref{fig:g-paths}. \subsection{Connecting the Two Approaches} The results shown in this subsection imply that \ensuremath{\textsf{IC}_G}\xspace can be viewed as an on-the-fly implementation of the emptiness test for $\mathfrak{A}_G$. In addition to generating states on-the-fly, states are also represented in a compact manner: one sequent generated by \ensuremath{\textsf{IC}_G}\xspace represents several states of $\mathfrak{A}_G$. \begin{definition} For the formula automaton $\mathfrak{A}_G$ with states $Q_G$ and a sequent $\Gamma \subseteq \Pi_G$ we define $\comp \Gamma := \{ \Phi \in Q_G \mid \Gamma \subseteq \Phi \}$, and for a set $\mathcal{S}$ of sequents we define $ \comp \mathcal{S} := \bigcup_{\Gamma \in \mathcal{S}} \comp \Gamma$. \end{definition} The following theorem, which is one of the main contributions of this paper, establishes the correspondence between the emptiness test and \ensuremath{\textsf{IC}_G}\xspace.
\begin{theorem}[\ensuremath{\textsf{IC}_G}\xspace and the emptiness test mutually simulate each other]% \label{theo:inv-calc-to-emptiness-test}\label{theo:emptiness-test-to-inv-calc}% Let $Q_0$, $\mathcal{S}_0$, $\vartriangleright$, and $\vdash$ be defined as above. \begin{enumerate} \item Let $Q$ be a set of states such that $Q_0 \mathbin{\vtr\!\!\!^*} Q$. Then there exists a set of sequents $\mathcal{S}$ with $\mathcal{S}_0 \mathbin{\vdash\!\!^*} \mathcal{S} \text{ and } Q \subseteq \comp \mathcal{S}$. \item Let $\mathcal{S}$ be a set of sequents such that $\mathcal{S}_0 \mathbin{\vdash\!\!^*} \mathcal{S}$. Then there exists a set of states $Q \subseteq Q_G$ with $Q_0 \mathbin{\vtr\!\!\!^*} Q \text{ and } \comp \mathcal{S} \subseteq Q$. \end{enumerate} \end{theorem} The first part of the theorem shows that \ensuremath{\textsf{IC}_G}\xspace can simulate each computation of the emptiness test for $\mathfrak{A}_G$. The set of states represented by the set of sequents computed by \ensuremath{\textsf{IC}_G}\xspace may be larger than the one computed by a particular derivation of the emptiness test. However, the second part of the theorem implies that all these states are in fact inactive since a possibly larger set of states can also be computed by a derivation of the emptiness test. In particular, the theorem implies that \ensuremath{\textsf{IC}_G}\xspace can be used to calculate a compact representation of $Q_0^\vartriangleright$. This is an on-the-fly computation since $\mathfrak{A}_G$ is never constructed explicitly. \begin{corollary}\label{cor:closures-equal} $Q_0^\vartriangleright = \comp {\mathcal{S}_0^\vdash}$. 
\end{corollary} \begin{figure}[t] \begin{center} $ \inference[$(\vee)$]{{\wedge_l}\Diamond,\ {\wedge_r}{\wedge_r}\Box{\vee_r} & | & {\wedge_r}{\wedge_l}\Box,\ {\wedge_r}{\wedge_r}\Box{\vee_l}} {\inference[$(\Diamond)$]{{\wedge_l}\Diamond,\ {\wedge_r}{\wedge_l}\Box,\ {\wedge_r}{\wedge_r}\Box} {\inference[$(\wedge_r)$]{{\wedge_l},\ {\wedge_r}{\wedge_l},\ {\wedge_r}{\wedge_r}} {\inference[$(\wedge_l)$]{{\wedge_l}, \ {\wedge_r},\ {\wedge_r}{\wedge_l}} {\inference[$(\wedge_r)$]{{\wedge_l},\ {\wedge_r}} {\inference[$(\wedge_l)$]{\epsilon,\ {\wedge_l}} {\epsilon}}}}}} $ \end{center} \vspace{-1ex} \caption{An example of inferences in \ensuremath{\textsf{IC}_G}\xspace} \vspace{-1ex} \label{fig:example-inference} \end{figure} \begin{proof} If $\Phi \in Q_0^\vartriangleright$ then there exists a set of states $Q$ such that $Q_0 \mathbin{\vtr\!\!\!^*} Q$ and $\Phi \in Q$. By Theorem~\ref{theo:emptiness-test-to-inv-calc}.1, there exists a set of sequents $\mathcal{S}$ with $\mathcal{S}_0 \mathbin{\vdash\!\!^*}\mathcal{S}$ and $Q \subseteq \comp \mathcal{S}$. Hence $\Phi \in \comp {\mathcal{S}_0^\vdash}$. For the converse direction, if $\Phi \in \comp {\mathcal{S}_0^\vdash}$ then there exists a set of sequents $\mathcal{S}$ with $\mathcal{S}_0 \mathbin{\vdash\!\!^*} \mathcal{S}$ and $\Phi \in \comp \mathcal{S}$. By Theorem~\ref{theo:inv-calc-to-emptiness-test}.2, there exists a set of states $Q$ with $Q_0 \mathbin{\vtr\!\!\!^*} Q$ and $\comp \mathcal{S} \subseteq Q$ and hence $\Phi \in Q_0^\vartriangleright$. \qed \end{proof} The proof of the second part of Theorem~\ref{theo:inv-calc-to-emptiness-test} is the easier one. It is a consequence of the next three lemmata. First, observe that the two calculi have the same starting points. \begin{lemma}\label{lem:dead-states-equal-axioms} If $\mathcal{S}_0$ is the set of axioms of \ensuremath{\textsf{IC}_G}\xspace, and $ Q_0$ is the set of states of $\mathfrak{A}_G$ that have no successor states, then $\comp {\mathcal{S}_0}= Q_0$.
\end{lemma} \begin{proof} The set $\mathcal{S}_0$ is the set of all axioms, i.e., the set of all clashes. Hence $\comp{ \mathcal{S}_0} = \{ \Phi \mid \Phi \text{ contains a clash} \} = Q_0$. \qed \end{proof} Second, since states are assumed to be p.e., propositional inferences of \ensuremath{\textsf{IC}_G}\xspace do not change the set of states represented by the sequents. \begin{lemma}\label{lem:propositional-inferences-for-free} Let $\mathcal{S} \vdash \mathcal{T}$ be a derivation of \ensuremath{\textsf{IC}_G}\xspace that employs a $\wedge_l$-, $\wedge_r$-, or a $\vee$-inference. Then $\comp \mathcal{S} = \comp \mathcal{T}$. \end{lemma} \begin{proof} Since $\mathcal{S} \subseteq \mathcal{T}$, $\comp \mathcal{S} \subseteq \comp \mathcal{T}$ holds immediately. To show $\comp \mathcal{T} \subseteq \comp \mathcal{S}$, we distinguish the different inferences used to obtain $\mathcal{T}$ from $\mathcal{S}$: \begin{itemize} \item If the employed inference is $\inference[$(\wedge_*)$]{\Gamma, \pi \wedge_*}{\Gamma,\pi}$ and $\mathcal{T} = \mathcal{S} \cup \{ \Gamma, \pi \}$ with $\Gamma, \pi \wedge_* \in \mathcal{S}$, then $\comp \mathcal{T} = \comp \mathcal{S} \cup \comp {\Gamma,\pi}$. Let $\Phi \in \comp {\Gamma,\pi}$. $\Phi$ is p.e.\ and hence $\pi \in \Phi$ implies $\pi\wedge_* \in \Phi$. Thus, $\Gamma,\pi\wedge_* \subseteq \Phi$ and $\Phi \in \comp{\Gamma,\pi\wedge_*} \subseteq \comp \mathcal{S}$. \item Assume that the employed inference is $\inference[$(\vee)$]{\Gamma_l, \pi \vee_l & \Gamma_r, \pi \vee_r}{\Gamma_l, \Gamma_r,\pi}$ and $\mathcal{T} = \mathcal{S} \cup \{ \Gamma_l, \Gamma_r, \pi \}$ with $\Gamma_l, \pi \vee_l \in \mathcal{S}$, $\Gamma_r, \pi \vee_r \in \mathcal{S}$. Then $\comp \mathcal{T} = \comp \mathcal{S} \cup \comp {\Gamma_l,\Gamma_r,\pi}$. Let $\Phi \in \comp {\Gamma_l,\Gamma_r,\pi}$. $\Phi$ is p.e.\ and hence, w.l.o.g., $\pi \vee_l \in \Phi$.
Thus, $\Gamma_l, \pi \vee_l \subseteq \Phi$ and $\Phi \in \comp { \Gamma_l, \pi \vee_l } \subseteq \comp \mathcal{S}$. \qed \end{itemize} \end{proof} Third, modal inferences of \ensuremath{\textsf{IC}_G}\xspace can be simulated by derivations of the emptiness test. \begin{lemma}\label{lem:modal-inference-simulated-by-emptiness-test} Let $\mathcal{S} \vdash \mathcal{T}$ be a derivation of \ensuremath{\textsf{IC}_G}\xspace that employs a $\Diamond$- or $\Diamond^+$-inference. If $Q$ is a set of states with $\comp \mathcal{S} \cup Q_0 \subseteq Q$ then there exists a set of states $P$ with $Q \mathbin{\vtr\!\!\!^*} P$ and $\comp \mathcal{T} \subseteq P$. \end{lemma} \begin{proof} We only consider the $\Diamond$-inference; the case of a $\Diamond^+$-inference is analogous. If $\mathcal{S} \vdash \mathcal{T}$ by an application of a $\Diamond$-inference, then $\mathcal{T} = \mathcal{S} \cup \{ \Gamma,\pi \}$ where $\Gamma$ consists only of $\Box$-paths, $\pi$ is a $\Diamond$-path (w.l.o.g., we assume $\pi = \pi_i$, the $i$-th path in the enumeration of $\Diamond$-paths in $\Pi_G$), $\Gamma \Box, \pi_i \Diamond \in \mathcal{S}$ and $\inference[$(\Diamond)$]{\Gamma \Box, \pi_i \Diamond}{\Gamma, \pi_i}$. Also, $\comp \mathcal{T} = \comp \mathcal{S} \cup \comp { \Gamma, \pi_i }$ holds. \begin{claim} Let $\Phi \in \comp { \Gamma,\pi_i }$ and $R$ a set of states with $\comp { \Gamma \Box, \pi_i \Diamond} \cup Q_0 \subseteq R$. Then there exists a derivation $ R \mathbin{\vtr\!\!\!^*} R'$ with $\Phi \in R'$ and $\comp { \Gamma \Box, \pi_i \Diamond} \cup Q_0 \subseteq R'$. \end{claim} \noindent \textit{Proof of the Claim.} If $\Phi$ contains a clash then $\Phi \in Q_0 \subseteq R$ and nothing has to be done.
If $\Phi$ does not contain a clash, then $\Delta_G(\Phi,\Phi) = \exp {\Psi_1 } \times \dots \times \exp {\Psi_n}$ where the $\Psi_i$ are defined as in Definition \ref{def:automaton} and, in particular, since $\pi_i \in \Phi$, \[ \exp{\Psi_i} = \exp{ \underbrace{\{ \pi_i \Diamond \} \cup \{ \pi \Box \mid \pi \in \Phi \text{ is a $\Box$-path } \}}_{\supseteq \Gamma \Box, \pi_i \Diamond}} \subseteq \comp {\Gamma \Box, \pi_i \Diamond} \subseteq R \] Since all states in $\exp {\Psi_i}$ have been marked inactive, the emptiness test can also mark $\Phi$ inactive and derive $ R \vartriangleright R \cup \{ \Phi \} = R'$, which proves the claim. Using this claim, we prove the lemma as follows. Let $\Phi_1, \dots, \Phi_k$ be an enumeration of $\comp {\Gamma, \pi_i}$. The set $P_0 = Q$ satisfies the requirements of the claim for $ R$. Thus, we repeatedly use the claim and chain the derivations to obtain a derivation $ Q = P_0 \vartriangleright P_1 \vartriangleright \dots \vartriangleright P_k = P$ such that $\Phi_i \in P_i$. Since the sets grow monotonically, in the end $\comp {\Gamma,\pi_i} \subseteq P$ holds, which implies $\comp \mathcal{T} \subseteq P$. \qed \end{proof} Given these lemmata, proving Theorem~\ref{theo:inv-calc-to-emptiness-test}.2 is quite simple. \paragraph{\textit{Proof of Theorem~\ref{theo:inv-calc-to-emptiness-test}.2.}} The proof is by induction on the length $m$ of the derivation $\mathcal{S}_0 \vdash \mathcal{S}_1 \vdash \dots \vdash \mathcal{S}_m = \mathcal{S}$ of \ensuremath{\textsf{IC}_G}\xspace. The base case $m=0$ is Lemma~\ref{lem:dead-states-equal-axioms}. For the induction step, $\mathcal{S}_{i+1}$ is either inferred from $\mathcal{S}_i$ using a propositional inference, which is dealt with by Lemma~\ref{lem:propositional-inferences-for-free}, or by a modal inference, which is dealt with by Lemma~\ref{lem:modal-inference-simulated-by-emptiness-test}.
Lemma~\ref{lem:modal-inference-simulated-by-emptiness-test} is applicable since, for every set of states $Q$ with $Q_0 \mathbin{\vtr\!\!\!^*} Q$, $Q_0 \subseteq Q$. \qed \medskip Proving the first part of Theorem~\ref{theo:inv-calc-to-emptiness-test} is more involved because of the calculation of the propositional expansions implicit in the definition of $\mathfrak{A}_G$. \begin{lemma}\label{lem:inverse-calculus-for-propositional-closure} Let $\Phi \subseteq \Pi_G$ be a set of paths and $\mathcal{S}$ a set of sequents such that $\exp \Phi \subseteq \comp \mathcal{S}$. Then there exists a set of sequents $\mathcal{T}$ with $\mathcal{S} \mathbin{\vdash\!\!^*} \mathcal{T}$ such that there exists a sequent $\Lambda \in \mathcal{T}$ with $\Lambda \subseteq \Phi$. \end{lemma} \begin{proof}If $\Phi$ is p.e., then this is immediate, as in this case $\exp {\Phi} = \{ \Phi \} \subseteq \comp \mathcal{S}$. If $\Phi$ is not p.e., then let $\mathsf{select}$ be an arbitrary \emph{selection function}, i.e., a function that maps every set $\Psi$ that is not p.e.\ to a \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-path $\pi\in \Psi$ that is not p.e.\ in $\Psi$. Let $\mathsf{T}_\Phi$ be the following, inductively defined tree: \begin{itemize} \item The root of $\mathsf{T}_\Phi$ is $\Phi$. \item If a node $\Psi$ of $\mathsf{T}_\Phi$ is not p.e., then \begin{itemize} \item if $\mathsf{select}(\Psi) = \pi$ is an $\wedge$-path, then $\Psi$ has the successor node $\Psi, \pi \wedge_l, \pi\wedge_r$ and $\Psi$ is called an $\wedge$-node. \item if $\mathsf{select}(\Psi) = \pi$ is an $\vee$-path, then $\Psi$ has the successor nodes $\Psi, \pi\vee_l$ and $\Psi, \pi\vee_r$ and $\Psi$ is called an $\vee$-node. \end{itemize} \item If a node $\Psi$ of $\mathsf{T}_\Phi$ is p.e., then it is a leaf of the tree. \end{itemize} Obviously, the construction is such that the set of leaves of $\mathsf{T}_\Phi$ is $\exp \Phi$.
Let $\Upsilon_1, \dots, \Upsilon_\ell$ be a post-order traversal of this tree, i.e., the sons of a node occur before the node itself, and $\Upsilon_\ell = \Phi$. Along this traversal we will construct a derivation $\mathcal{S} = \mathcal{T}_0 \mathbin{\vdash\!\!^*} \dots \mathbin{\vdash\!\!^*} \mathcal{T}_\ell = \mathcal{T}$ such that, for every $1\leq i \leq j \leq \ell$, $\mathcal{T}_j$ contains a sequent $\Lambda_i$ with $\Lambda_i \subseteq \Upsilon_i$. Since the sets $\mathcal{T}_j$ grow monotonically, it suffices to show that, for every $1 \leq i \leq \ell$, $\mathcal{T}_i$ contains a sequent $\Lambda_i$ with $\Lambda_i \subseteq \Upsilon_i$. Whenever $\Upsilon_i$ is a leaf of $\mathsf{T}_\Phi$, then $\Upsilon_i \in \exp{\Phi} \subseteq \comp \mathcal{S}$. Hence there is already a sequent $\Lambda_i \in \mathcal{T}_0$ with $\Lambda_i \subseteq \Upsilon_i$ and no derivation step is necessary. In particular, in a post-order traversal, $\Upsilon_1$ is a leaf. We now assume that the derivation has been constructed up to $\mathcal{T}_i$. \begin{itemize} \item If $\Upsilon_{i+1}$ is a leaf of $\mathsf{T}_\Phi$, then nothing has to be done as there exists a $\Lambda_{i+1} \in \mathcal{T}_0 \subseteq \mathcal{T}_i$ with $\Lambda_{i+1} \subseteq \Upsilon_{i+1}$. \item If $\Upsilon_{i+1}$ is an $\wedge$-node with selected $\wedge$-path $\pi \in \Upsilon_{i+1}$, then the successor of $\Upsilon_{i+1}$ in $\mathsf{T}_\Phi$ is $\Upsilon_{i+1}, \pi\wedge_l, \pi\wedge_r$ and appears before $\Upsilon_{i+1}$ in the traversal. By construction there exists a sequent $\Lambda \in \mathcal{T}_i$ with $\Lambda \subseteq \Upsilon_{i+1},\pi \wedge_l, \pi \wedge_r$. If $\Lambda \cap \{ \pi \wedge_l, \pi \wedge_r \} = \emptyset$ then we are done because then also $\Lambda \subseteq \Upsilon_{i+1}$.
If one or both of $\pi \wedge_l, \pi \wedge_r$ occur in $\Lambda$, then \begin{itemize} \item if $\Lambda = \Gamma, \pi \wedge_l$ for some $\Gamma$ with $\pi \wedge_r \not \in \Gamma$ then this implies that the inference \begin{equation}\label{eq:and-inference} \inference[$(\wedge_l)$]{\Gamma, \pi \wedge_l}{\Gamma, \pi} \end{equation} can be used to derive $\mathcal{T}_i \vdash \mathcal{T}_i \cup \{ \Gamma, \pi \} = \mathcal{T}_{i+1}$ and $\Gamma, \pi \subseteq \Upsilon_{i+1}$ holds. \item the case $\Lambda = \Gamma, \pi \wedge_r$ for some $\Gamma$ with $\pi \wedge_l \not \in \Gamma$ is analogous. \item if $\Lambda = \Gamma, \pi {\wedge_l}, \pi {\wedge_r}$ for some $\Gamma$ with $\{\pi {\wedge_l}, \pi {\wedge_r}\} \cap \Gamma = \emptyset$ then the inferences \begin{equation}\label{eq:and-inference-2} \inference[$(\wedge_l)$]{ \Gamma, \pi \wedge_l, \pi \wedge_r}{ \inference[$(\wedge_r)$]{\Gamma, \pi, \pi \wedge_r}{\Gamma, \pi}} \end{equation} can be used in the derivation $\mathcal{T}_i \vdash \mathcal{T}_i \cup \{ \Gamma, \pi, \pi \wedge_r \} \vdash \mathcal{T}_i \cup \{\Gamma, \pi, \pi \wedge_r \} \cup \{ \Gamma, \pi \} = \mathcal{T}_{i+1} $ and by construction $\Gamma, \pi \subseteq \Upsilon_{i+1}$ holds. \end{itemize} \item If $\Upsilon_{i+1}$ is an $\vee$-node with selected $\vee$-path $\pi \in \Upsilon_{i+1}$, then the successors of $\Upsilon_{i+1}$ in $\mathsf{T}_\Phi$ are $\Upsilon_{i+1}, \pi\vee_l$ and $\Upsilon_{i+1}, \pi\vee_r$, and by construction there exist sequents $\Lambda_l, \Lambda_r \in \mathcal{T}_i$ with $\Lambda_* \subseteq \Upsilon_{i+1}, \pi\vee_*$. If $\pi \vee_l \not \in \Lambda_l$ or $\pi \vee_r \not \in \Lambda_r$, then $\Lambda_l \subseteq \Upsilon_{i+1} $ or $\Lambda_r \subseteq \Upsilon_{i+1}$ holds and hence already $\mathcal{T}_i$ contains a sequent $\Lambda$ with $\Lambda \subseteq \Upsilon_{i+1}$.
If $\Lambda_l = \Gamma_l, \pi \vee_l$ and $\Lambda_r = \Gamma_r, \pi \vee_r$ with $\pi {\vee_*} \not \in \Gamma_*$ then \ensuremath{\textsf{IC}_G}\xspace can use the inference \begin{equation}\label{eq:or-inference} \inference[$(\vee)$]{\Gamma_l, \pi \vee_l & \Gamma_r, \pi \vee_r}{\Gamma_l, \Gamma_r, \pi} \end{equation} to derive $\mathcal{T}_i \vdash \mathcal{T}_i \cup \{ \Gamma_l, \Gamma_r, \pi \} = \mathcal{T}_{i+1}$, and $\Gamma_l, \Gamma_r, \pi \subseteq \Upsilon_{i+1}$ holds, which can be seen as follows: assume there is a $\pi' \in \Gamma_l, \Gamma_r, \pi$ with $\pi' \not \in \Upsilon_{i+1}$. Since $\pi \in \Upsilon_{i+1}$, w.l.o.g., $\pi' \in \Gamma_l$. But then also $\Gamma_l \not \subseteq \Upsilon_{i+1}, \pi \vee_l $ would hold, since $\pi' \neq \pi {\vee_l}$ because $\pi {\vee_l} \not \in \Gamma_l$. \end{itemize} Proceeding in this manner, starting from $\mathcal{T}_0 = \mathcal{S}$, we can construct a derivation that yields a set $\mathcal{T} = \mathcal{T}_\ell$ of sequents containing a sequent $\Lambda$ such that $\Lambda \subseteq \Upsilon_\ell = \Phi$. \qed \end{proof} \paragraph{\textit{Proof of Theorem~\ref{theo:emptiness-test-to-inv-calc}.1.}} We show this by induction on the number $k$ of steps in the derivation $Q_0 \vartriangleright \dots \vartriangleright Q_k = Q$. Again, Lemma~\ref{lem:dead-states-equal-axioms} yields the base case. For the induction step, let $Q_0 \vartriangleright \dots \vartriangleright Q_i \vartriangleright Q_{i+1} = Q_i \cup \{ \Phi \}$ be a derivation of the emptiness test and $\mathcal{S}_i$ a set of sequents such that $\mathcal{S}_0 \mathbin{\vdash\!\!^*} \mathcal{S}_i$ and $ Q_i \subseteq \comp{ \mathcal{S}_i}$. Such a set exists by the induction hypothesis because the derivation $Q_0 \vartriangleright \dots \vartriangleright Q_i$ is of length $i$. Now let $ Q_i \vartriangleright Q_i \cup \{ \Phi \} = Q_{i+1}$ be the derivation of the emptiness test.
If already $\Phi \in Q_i$ then $ Q_{i+1} \subseteq \comp {\mathcal{S}_i}$ and we are done. If $\Phi \not \in Q_i$, then $Q_0\subseteq Q_i$ implies that $\Delta_G(\Phi,\Phi) \neq \emptyset$. Since $\emptyset$ is an active state, we know that $\emptyset \not \in Q_i$, and for $Q_i \vartriangleright Q_{i+1}$ to be a possible derivation of the emptiness test, $\Delta_G(\Phi,\Phi) = \exp {\Psi_1} \times \dots \times \exp {\Psi_n} \neq\{ (\emptyset, \dots, \emptyset) \}$ must hold, i.e., there must be a $\Psi_i \neq \emptyset$ such that $\exp {\Psi_i} \subseteq Q_i \subseteq \comp {\mathcal{S}_i}$. Hence $\pi_i \in \Phi$ and $\Psi_i = \{ \pi_i \Diamond \} \cup \{ \pi \Box \mid \pi \in \Phi \mbox{ is a $\Box$-path}\}$. Lemma~\ref{lem:inverse-calculus-for-propositional-closure} yields the existence of a set of sequents $\mathcal{T}$ with $\mathcal{S}_i \mathbin{\vdash\!\!^*} \mathcal{T}$ containing a sequent $\Lambda$ with $\Lambda \subseteq \Psi_i$. This sequent is either of the form $\Lambda = \Gamma \Box, \pi_i \Diamond$ or $\Lambda = \Gamma \Box$ for some $\Gamma \subseteq \Phi$. In the former case, \ensuremath{\textsf{IC}_G}\xspace can use a $\Diamond$-inference \begin{equation*}\label{eq:diamond-inference} \inference[$(\Diamond)$]{\Gamma \Box, \pi_i \Diamond}{\Gamma,\pi_i} \end{equation*} and in the latter case a $\Diamond^+$-inference \begin{equation*}\label{eq:diamond-plus-inference} \inference[$(\Diamond^+)$]{\Gamma \Box}{\Gamma, \pi_i} \end{equation*} to derive $\mathcal{S}_0 \mathbin{\vdash\!\!^*} \mathcal{S}_i \mathbin{\vdash\!\!^*} \mathcal{T} \vdash \mathcal{T} \cup \{ \Gamma, \pi_i \} = \mathcal{S}$ and $\Phi \in \comp{ \Gamma, \pi_i}$ holds. \qed \section{Optimizations} \label{sec:opti} Since the inverse calculus can be seen as an on-the-fly implementation of the emptiness test, optimizations of the inverse calculus also yield optimizations of the emptiness test.
We use the connection between the two approaches to provide an easier proof of the fact that the optimizations of \ensuremath{\textsf{IC}_G}\xspace{} introduced by Voronkov \cite{Voronkov-ToCL-2001} do not destroy completeness of the calculus. \subsection{Unreachable states / redundant sequents} States that cannot occur on any run starting with an initial state have no effect on the language accepted by the automaton. We call such states \emph{unreachable}. In the following, we will determine certain types of unreachable states. \begin{definition} Let $\pi, \pi_1, \pi_2 \in\Pi_G$. \begin{itemize} \item The \emph{modal length} of $\pi$ is the number of occurrences of $\Box$ and $\Diamond$ in $\pi$. \item $\pi_1, \pi_2 \in \Pi_G$ form a \emph{$\vee$-fork} if $\pi_1 = \pi {{\vee_l}} \pi_1'$ and $\pi_2 = \pi {{\vee_r}} \pi_2'$ for some $\pi, \pi_1', \pi_2'$. \item $\pi_1,\pi_2$ are \emph{$\Diamond$-separated} if $\pi_1 = \pi_1' \Diamond \pi_1''$ and $\pi_2 = \pi_2' \Diamond \pi_2''$ such that $\pi_1', \pi_2'$ have the same modal length and $\pi_1' \neq \pi_2'$. \end{itemize} \end{definition} \begin{lemma}\label{lem:non-wellformed-sets} Let $\mathfrak{A}_G$ be the formula automaton for a $\mathsf{K}$-formula $G$ in NNF and $\Phi \in Q_G$. If $\Phi$ contains a $\vee$-fork, two $\Diamond$-separated paths, or two paths of different modal length, then $\Phi$ is unreachable. \end{lemma} The lemma shows that we can remove such states from $\mathfrak{A}_G$ without changing the accepted language. Sequents containing a $\vee$-fork, two $\Diamond$-separated paths, or two paths of different modal length represent only unreachable states, and are thus redundant, i.e., inferences involving such sequents need not be considered. \begin{definition}[Reduced automaton] Let $\bar Q$ be the set of states of $\mathfrak{A}_G$ that contain a $\vee$-fork, two $\Diamond$-separated paths, or two paths of different modal length.
The \emph{reduced} automaton $\mathfrak{A}'_G = (Q'_G, \Sigma_G, \exp { \{ \epsilon \} }, \Delta'_G)$ is defined by $$ Q'_G := Q_G \setminus \bar Q\ \ \ \mbox{and}\ \ \ \Delta'_G := \Delta_G \cap (Q'_G \times \Sigma_G \times Q'_G \times \dots \times Q'_G). $$ \end{definition} Since the states in $\bar Q$ are unreachable, $L(\mathfrak{A}_G) = L(\mathfrak{A}'_G)$. From now on, we consider $\mathfrak{A}'_G$ and define $\comp \cdot$ relative to the states of $\mathfrak{A}'_G$: $\comp \Gamma = \{ \Phi \in Q'_G \mid \Gamma \subseteq \Phi \}$. \subsection{$G$-orderings / redundant inferences} In the following, the applicability of the propositional inferences of the inverse calculus will be restricted to those where the affected paths are maximal w.r.t.\ a total ordering of $\Pi_G$. In order to maintain completeness, one cannot consider arbitrary orderings in this context. Two paths $\pi_1,\pi_3$ are \emph{brothers} iff there exists a \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-path $\pi$ such that $\pi_1 = \pi \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_l$ and $\pi_3 = \pi \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_r$ or $\pi_1 = \pi \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_r$ and $\pi_3 = \pi \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_l$. \begin{definition}[$G$-ordering] Let $G$ be a $\mathsf{K}$-formula in NNF.
A total ordering $\succ$ of $\Pi_G$ is called a \emph{$G$-ordering} iff \begin{enumerate} \item $\pi_1 \succ \pi_2$ whenever \begin{enumerate} \item the modal length of $\pi_1$ is strictly greater than the modal length of $\pi_2$; or \item $\pi_1,\pi_2$ have the same modal length, the last symbol of $\pi_1$ is $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_*$, and the last symbol of $\pi_2$ is \ensuremath{\mathbin{\rlap{$\Box$}\kern.5pt\raisebox{.5pt}{$\lozenge$}}}; or \item $\pi_1,\pi_2$ have the same modal length and $\pi_2$ is a prefix of $\pi_1$ \end{enumerate} \item There is no path between brothers, i.e., there exist no $G$-paths $\pi_1,\pi_2, \pi_3$ such that $\pi_1 \succ \pi_2 \succ \pi_3$ and $\pi_1,\pi_3$ are brothers. \end{enumerate} \end{definition} For the example formula $G$ of Figure~\ref{fig:g-paths}, a $G$-ordering $\succ$ can be defined by setting $\nu_9 \succ \nu_8 \succ \dots \succ \nu_1 \succ \nu_0$. Voronkov \cite{Voronkov-ToCL-2001} shows that $G$-orderings exist for every $\mathsf{K}$-formula $G$ in NNF. Using an arbitrary, but fixed $G$-ordering $\succ$, the applicability of the propositional inferences is restricted as follows. \begin{definition}[Optimized Inverse Calculus] For a sequent $\Gamma$ and a path $\pi$ we write $\pi \succ \Gamma$ iff $\pi \succ \pi'$ for every $\pi' \in \Gamma$. \begin{itemize} \item An inference $\inference[$({\wedge_*})$]{\Gamma, \pi {\wedge_*}}{\Gamma, \pi}$ \emph{respects} $\succ$ iff $\pi {\wedge_*} \succ \Gamma$. \item An inference $\inference[$(\vee)$]{\Gamma_l, \pi {\vee_l} & \Gamma_r, \pi {\vee_r}}{\Gamma_l, \Gamma_r, \pi}$ \emph{respects} $\succ$ iff $\pi {\vee_l} \succ \Gamma_l$ and $\pi {\vee_r} \succ \Gamma_r$. \item The $\Diamond$- and $\Diamond^+$-inferences always respect $\succ$. 
\end{itemize} The optimized inverse calculus \ensuremath{\textsf{IC}^\succ_G}\xspace works as \ensuremath{\textsf{IC}_G}\xspace, but for each derivation $\mathcal{S}_0 \vdash \dots \vdash \mathcal{S}_k$ the following \emph{restrictions} must hold: \begin{itemize} \item For every step $\mathcal{S}_i \vdash \mathcal{S}_{i+1}$, the employed inference respects $\succ$, and \item $\mathcal{S}_i$ must not contain $\vee$-forks, $\Diamond$-separated paths, or paths of different modal length. \end{itemize} \end{definition} To distinguish derivations of \ensuremath{\textsf{IC}_G}\xspace and \ensuremath{\textsf{IC}^\succ_G}\xspace, we will use the symbol $\mathbin{{\vdash\!\!\!_\succ}}$ in derivations of $\ensuremath{\textsf{IC}^\succ_G}\xspace$. In \cite{Voronkov-ToCL-2001}, correctness of \ensuremath{\textsf{IC}^\succ_G}\xspace is shown. \begin{fact}[\cite{Voronkov-ToCL-2001}] \label{fact:optimised-calculus-correct} Let $G$ be a $\ensuremath{\mathsf{K}}$-formula in NNF and $\succ$ a $G$-ordering. Then $G$ is unsatisfiable iff $\{ \epsilon \} \in \mathcal{S}_0^{\mathbin{{\vdash\!\!\!_\succ}}}$. \end{fact} Using the correspondence between the inverse method and the emptiness test of $\mathfrak{A}'_G$, we will now give an alternative, and in our opinion simpler, proof of this fact. Since \ensuremath{\textsf{IC}^\succ_G}\xspace is merely a restriction of \ensuremath{\textsf{IC}_G}\xspace, soundness (i.e., the if-direction of the fact) is immediate. Completeness requires more work. In particular, the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure} needs to be reconsidered since the propositional inferences are now restricted: we must show that the $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}$-inferences employed in that proof respect (or can be made to respect) $\succ$. To this purpose, we will follow \cite{Voronkov-ToCL-2001} and introduce the notion of $\succ$-compactness.
For $\succ$-compact sets, we can be sure that all applicable $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}$-inferences respect $\succ$. To ensure that all the sets $\Upsilon_{i}$ constructed in the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure} are $\succ$-compact, we again follow Voronkov and employ a special selection strategy. \begin{definition}[$\succ$-compact, $\mathsf{select}_\succ$]\label{def:compact} Let $G$ be a $\mathsf{K}$-formula in NNF and $\succ$ a $G$-ordering. An arbitrary set $\Phi \subseteq \Pi_G$ is \emph{$\succ$-compact} iff, for every \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-path $\pi \in \Phi$ that is not p.e.\ in $\Phi$, $\pi \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \succ \Phi$. The selection function $\mathsf{select}_\succ$ is defined as follows: if $\Phi$ is not p.e., then let $\{ \pi_1, \dots, \pi_m \}$ be the set of \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-paths that are not p.e.\ in $\Phi$. From this set, $\mathsf{select}_\succ$ selects the path $\pi_i$ such that the paths $\pi_i \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_*$ are the two smallest elements in $\{ \pi_j \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \mid 1 \leq j \leq m \}$. \end{definition} The function $\mathsf{select}_\succ$ is well-defined because of Condition (2) of $G$-orderings. The definition of $\succ$-compactness ensures that $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}$-inferences applicable to sequents that are not propositionally expanded respect $\succ$. \begin{lemma}\label{lem:selection-enforces-compactness} Let $G$ be a $\mathsf{K}$-formula in NNF, $\succ$ a $G$-ordering, and $\mathsf{select}_\succ$ the selection function as defined above. Let $\Phi = \{ \epsilon \}$ or $\Phi = \Gamma \Box, \pi_i \Diamond$ with $\Box$-paths $\Gamma$ and a $\Diamond$-path $\pi_i$, all of equal modal length.
If $\mathsf{T}_\Phi$, as defined in the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure}, is generated using $\mathsf{select}_\succ$ as selection function, then every node $\Psi$ of $\mathsf{T}_\Phi$ is $\succ$-compact. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma~5.8.3 in \cite{Voronkov-ToCL-2001}. It is given by induction on the depth of the node $\Psi$ in the tree $\mathsf{T}_\Phi$. For the root $\Phi$ there are two possibilities. If $\Phi = \{ \epsilon \}$ and $\epsilon$ is a $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}$-path, then $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_l$ and $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_r$ have the same modal length as $\epsilon$ and $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \succ \epsilon$ by Condition (1c) of $G$-orderings. If $\Phi = \Gamma \Box, \pi_i \Diamond$ and $\pi \in \Phi$ is a $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}$-path, then $\pi \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \succ \Phi$ holds by Condition (1b) of $G$-orderings because the last symbol of every path in $\Phi$ is \ensuremath{\mathbin{\rlap{$\Box$}\kern.5pt\raisebox{.5pt}{$\lozenge$}}}. For the induction step, let $\Psi$ be a node in $\mathsf{T}_\Phi$ which we have already shown to be $\succ$-compact. We show that then also its successor nodes (if any) are $\succ$-compact. \begin{itemize} \item If $\Psi$ is an $\wedge$-node with selected $\wedge$-path $\pi \in \Psi$, then the successor node of $\Psi$ is $\Psi' = \Psi, \pi {\wedge_l}, \pi {\wedge_r}$. Let $\pi' \in \Psi'$ be a $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}$-path that is not p.e.\ in $\Psi'$. There are two possibilities: \begin{itemize} \item $\pi' = \pi {\wedge_*}$.
In this case, since $\pi {\wedge_*} \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \succ \pi {\wedge_*}$ by Condition (1c) of $G$-orderings and $\pi {\wedge_*} \succ \Psi$, $\pi' \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \succ \Psi'$ holds. \item $\pi' \neq \pi {\wedge_*}$. Then, $\pi' \in \Psi$ and $\pi' \neq \pi$ holds because $\pi$ is p.e.\ in $\Psi'$. Since $\Psi$ is $\succ$-compact, $\pi' \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \succ \nu$ for every $\nu \in \Psi$. It remains to show that $\pi' \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_* \succ \pi \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}_*$, which follows from the fact that $\pi$ was selected by $\mathsf{select}_\succ$. \end{itemize} \item If $\Psi$ is an $\vee$-node and the selected $\vee$-path is $\pi \in \Psi$, then, w.l.o.g., $\Psi' = \Psi, \pi {\vee_l}$. The same arguments as before apply. \qed \end{itemize} \end{proof} Given this lemma, it is easy to show that the construction employed in the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure} also works for \ensuremath{\textsf{IC}^\succ_G}\xspace, provided that we restrict the set $\Phi$ as in Lemma~\ref{lem:selection-enforces-compactness}: \begin{lemma}\label{lem:opt-inverse-calculus-for-propositional-closure} Let $\Phi = \{ \epsilon \}$ or $\Phi = \Gamma \Box, \pi_i \Diamond$ with $\Box$-paths $\Gamma$ and a $\Diamond$-path $\pi_i$, all of equal modal length, and $\mathcal{S}$ a set of sequents such that $\exp \Phi \subseteq \comp \mathcal{S}$. Then there exists a set of sequents $\mathcal{T}$ with $\mathcal{S} \mathbin{\vdash\!\!\!_\succ\!\!\!^*} \mathcal{T}$ such that there exists $\Lambda \in \mathcal{T}$ with $\Lambda \subseteq \Phi$. \end{lemma} \begin{proof}We use the same construction as in the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure}, but with the special selection function $\mathsf{select}_\succ$ defined above.
From Lemma~\ref{lem:selection-enforces-compactness} we have that all nodes $\Upsilon_i$ in $\mathsf{T}_\Phi$ are $\succ$-compact. All we have to do is to make sure that the employed inferences respect $\succ$. We refer to the inferences by the numbers assigned to them in the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure}. \begin{itemize} \item [(\ref{eq:and-inference})] Since $\Upsilon_{i+1}$ is $\succ$-compact and $\pi \in \Upsilon_{i+1}$ is not p.e.\ in $\Upsilon_{i+1}$, $\pi {\wedge_l} \succ \Upsilon_{i+1}$ and hence $\pi {\wedge_l} \succ \Gamma$ because $\Gamma \subseteq \Upsilon_{i+1}$. \item [(\ref{eq:and-inference-2})] W.l.o.g., assume $\pi {\wedge_l} \succ \pi {\wedge_r}$. (If this is not the case, then reverse the order of the two inferences.) Since $\Upsilon_{i+1}$ is $\succ$-compact, $\Gamma \subseteq \Upsilon_{i+1}$ and $\pi \in \Upsilon_{i+1}$ is not p.e., $\pi {\wedge_l} \succ \Gamma$ holds as well as $\pi {\wedge_l} \succ \pi {\wedge_r}$. Also $\pi {\wedge_r} \succ \Gamma$ holds, which means that both inferences respect $\succ$. \item [(\ref{eq:or-inference})] Since $\Upsilon_{i+1}$ is $\succ$-compact and $\pi \in \Upsilon_{i+1}$ is not p.e., we have $\pi {\vee_*} \succ \Upsilon_{i+1}$ and since both $\Gamma_l$ and $\Gamma_r$ are subsets of $\Upsilon_{i+1}$, also $\pi {\vee_l} \succ \Gamma_l$ and $\pi {\vee_r} \succ \Gamma_r$ hold. \qed \end{itemize} \end{proof} \paragraph{Alternative Proof of Fact~\ref{fact:optimised-calculus-correct}.} As mentioned before, soundness (the if-direction) is immediate. For the only-if-direction, if $G$ is not satisfiable, then $L(\mathfrak{A}'_G) = \emptyset$ and there is a set of states $Q$ with $Q_0 \mathbin{\vtr\!\!\!^*} Q$ and $\exp {\{ \epsilon \}} \subseteq Q$.
Using Lemma~\ref{lem:opt-inverse-calculus-for-propositional-closure} we show that there is a derivation of \ensuremath{\textsf{IC}^\succ_G}\xspace that simulates this derivation, i.e., there is a set of sequents $\mathcal{S}$ with $\mathcal{S}_0 \mathbin{\vdash\!\!\!_\succ\!\!\!^*} \mathcal{S}$ and $Q \subseteq \comp \mathcal{S}$. The proof is by induction on the length $m$ of the derivation $Q_0 \vartriangleright \dots \vartriangleright Q_m = Q$ and is totally analogous to the proof of Theorem~\ref{theo:emptiness-test-to-inv-calc}. The base case is Lemma~\ref{lem:dead-states-equal-axioms}, which also holds for \ensuremath{\textsf{IC}^\succ_G}\xspace and the reduced automaton. The induction step uses Lemma~\ref{lem:opt-inverse-calculus-for-propositional-closure} instead of Lemma~\ref{lem:inverse-calculus-for-propositional-closure}, but this is the only difference. Hence, $Q_0 \mathbin{\vtr\!\!\!^*} Q$ and $\exp {\{ \epsilon \}} \subseteq Q$ imply that there exists a derivation $\mathcal{S}_0 \mathbin{\vdash\!\!\!_\succ\!\!\!^*} \mathcal{S}$ such that $\exp {\{ \epsilon \}} \subseteq \comp {\mathcal{S}}$. Lemma~\ref{lem:opt-inverse-calculus-for-propositional-closure} yields a derivation $\mathcal{S} \mathbin{\vdash\!\!\!_\succ\!\!\!^*} \mathcal{T}$ with $\{ \epsilon \} \in \mathcal{T} \subseteq \mathcal{S}_0^{\mathbin{{\vdash\!\!\!_\succ}}}$. \qed \section{Preliminaries} First, we briefly introduce the modal logic \ensuremath{\mathsf{K}}{} and some technical definitions related to \ensuremath{\mathsf{K}}-formulae, which are used later on to formulate the inverse calculus and the automata approach for \ensuremath{\mathsf{K}}. Then, we define the type of automata used to decide satisfiability (w.r.t.\ global axioms) in \ensuremath{\mathsf{K}}. These so-called looping automata \cite{vardi94:_reason_infin_comput} are a specialization of B\"uchi tree automata. \subsubsection*{Modal Formulae} We assume the reader to be familiar with the basic notions of modal logic.
For a thorough introduction to modal logics, refer to, e.g., \cite{blackburn01:_modal_logic}. \ensuremath{\mathsf{K}}-formulae are built inductively from a countably infinite set $\mathcal{P} = \{ p_1, p_2, \dots \}$ of propositional atoms using the Boolean connectives $\wedge$, $\vee$, and $\neg$ and the unary modal operators $\Box$ and $\Diamond$. The semantics of \ensuremath{\mathsf{K}}-formulae is defined as usual, based on Kripke models $\mathcal{M} = (W,R,V)$ where $W$ is a non-empty set, $R \subseteq W \times W$ is an accessibility relation, and $V : \mathcal{P} \rightarrow 2^W$ is a valuation mapping propositional atoms to the set of worlds they hold in. The relation $\models$ between models, worlds, and formulae is defined in the usual way. Let $G, H$ be \ensuremath{\mathsf{K}}-formulae. Then $G$ is \emph{satisfiable} iff there exists a Kripke model $\mathcal{M} = (W,R,V)$ and a world $w \in W$ with $\mathcal{M}, w \models G$. The formula $G$ is \emph{satisfiable w.r.t.\ the global axiom $H$} iff there exists a Kripke model $\mathcal{M} = (W,R,V)$ and a world $w \in W$ such that $\mathcal{M}, w \models G$ and $\mathcal{M}, w' \models H$ for all $w'\in W$. \ensuremath{\mathsf{K}}-satisfiability is \textsc{PSpace}-complete~\cite{ladner:1977a}, and \ensuremath{\mathsf{K}}-satisfiability w.r.t.\ global axioms is \textsc{ExpTime}-complete \cite{Spaan93a}. A \ensuremath{\mathsf{K}}-formula is in \emph{negation normal form} (NNF) if $\neg$ occurs only in front of propositional atoms. Every \ensuremath{\mathsf{K}}-formula can be transformed (in linear time) into an equivalent formula in NNF using de Morgan's laws and the duality of the modal operators. For the automata and calculi considered here, sub-formulae of $G$ play an important role and we will often need operations going from a formula to its super- or sub-formulae.
As observed in \cite{Voronkov-ToCL-2001}, these operations become easier when dealing with ``addresses'' of sub-formulae in $G$ rather than with the sub-formulae themselves. \begin{definition}[$G$-Paths] \label{def:g-paths} For a $\ensuremath{\mathsf{K}}$-formula $G$ in NNF, the \emph{set of $G$-paths} $\Pi_G$ is a set of words over the alphabet $\{ {\vee_l}, {\vee_r}, {\wedge_l}, {\wedge_r}, \Box, \Diamond \}$. The set $\Pi_G$ and the sub-formula $G|_\pi$ of $G$ addressed by $\pi \in \Pi_G$ are defined inductively as follows: \begin{itemize} \item $\epsilon \in \Pi_G$ and $G|_\epsilon = G$ \item if $\pi \in \Pi_G$ and \begin{itemize} \item $G|_\pi = F_1 \wedge F_2$ then $\pi {\wedge_l}, \pi {\wedge_r} \in \Pi_G$, $G|_{\pi {\wedge_l}} = F_1$, $G|_{\pi {\wedge_r}} = F_2$, and $\pi$ is called \emph{$\wedge$-path} \item $G|_\pi = F_1 \vee F_2$ then $\pi {\vee_l}, \pi {\vee_r} \in \Pi_G$, $G|_{\pi {\vee_l}} = F_1$, $G|_{\pi {\vee_r}} = F_2$, and $\pi$ is called \emph{$\vee$-path} \item $G|_\pi = \Box F$ then $\pi \Box \in \Pi_G$, $G|_{\pi \Box} = F$, and $\pi$ is called \emph{$\Box$-path} \item $G|_\pi = \Diamond F$ then $\pi \Diamond \in \Pi_G$, $G|_{\pi \Diamond} = F$, and $\pi$ is called \emph{$\Diamond$-path} \end{itemize} \item $\Pi_G$ is the smallest set that satisfies the previous conditions. \end{itemize} \end{definition} We use ${\wedge_*}$ and ${\vee_*}$ as placeholders for ${\wedge_l}, {\wedge_r}$ and ${\vee_l}, {\vee_r}$, respectively. Also, we use $\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}$ and $\ensuremath{\mathbin{\rlap{$\Box$}\kern.5pt\raisebox{.5pt}{$\lozenge$}}}$ as placeholders for $\wedge, \vee$ and $\Box,\Diamond$, respectively. If $\pi$ is a $\wedge$- or a $\vee$-path, then $\pi$ is called \emph{\ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-path}. If $\pi$ is a $\Box$- or a $\Diamond$-path, then $\pi$ is called \emph{\ensuremath{\mathbin{\rlap{$\Box$}\kern.5pt\raisebox{.5pt}{$\lozenge$}}}-path}.
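To make the inductive definition concrete, the following Python sketch (again our own illustration; formulae are assumed to be encoded as nested tuples \texttt{('and', F1, F2)}, \texttt{('or', F1, F2)}, \texttt{('box', F)}, \texttt{('dia', F)}, \texttt{('not', F)} with strings as atoms, and \texttt{'al','ar','vl','vr'} stand for $\wedge_l,\wedge_r,\vee_l,\vee_r$) enumerates $\Pi_G$ together with the sub-formulae the paths address.

```python
# Hypothetical sketch: compute the map pi -> G|_pi for a formula in NNF.
# A path is a tuple over {'al','ar','vl','vr','box','dia'}; () is epsilon.

def paths(g, pi=()):
    """Return a dict mapping every G-path to the sub-formula it addresses."""
    result = {pi: g}                      # epsilon addresses G itself
    if isinstance(g, tuple):
        op = g[0]
        if op == 'and':                   # pi is a "wedge-path"
            result.update(paths(g[1], pi + ('al',)))
            result.update(paths(g[2], pi + ('ar',)))
        elif op == 'or':                  # pi is a "vee-path"
            result.update(paths(g[1], pi + ('vl',)))
            result.update(paths(g[2], pi + ('vr',)))
        elif op == 'box':                 # pi is a "box-path"
            result.update(paths(g[1], pi + ('box',)))
        elif op == 'dia':                 # pi is a "diamond-path"
            result.update(paths(g[1], pi + ('dia',)))
    return result                         # 'not'-formulae and atoms are leaves
```

For the formula $G = \Diamond \neg p_1 \wedge (\Box p_2 \wedge \Box (\neg p_2 \vee p_1))$ of Figure~\ref{fig:g-paths}, the path ${\wedge_r}{\wedge_r}$, i.e.\ \texttt{('ar','ar')}, is mapped to $\Box(\neg p_2 \vee p_1)$, matching the example in the text.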
\begin{figure}[t] \begin{center} \input{g-paths.pstex_t} \end{center} \caption{The set $\Pi_G$ for $G = \Diamond \neg p_1 \wedge (\Box p_2 \wedge \Box (\neg p_2 \vee p_1))$} \label{fig:g-paths} \end{figure} Figure~\ref{fig:g-paths} shows an example of a \ensuremath{\mathsf{K}}-formula $G$ and the corresponding set $\Pi_G$, which can be read off the edge labels. For example, ${\wedge_r}{\wedge_r}$ is a $G$-path and $G|_{{\wedge_r}{\wedge_r}} = \Box (\neg p_2 \vee p_1)$. \subsubsection*{Looping Automata} For a natural number $n$, let $[n]$ denote the set $\{1, \dots, n\}$. An \emph{$n$-ary infinite tree over the alphabet $\Sigma$} is a mapping $t : [n]^* \rightarrow \Sigma$. An \emph{$n$-ary looping tree automaton} is a tuple $\mathfrak{A} = (Q,\Sigma,I, \Delta)$, where $Q$ is a finite set of states, $\Sigma$ is a finite alphabet, $I \subseteq Q$ is the set of initial states, and $\Delta \subseteq Q \times \Sigma \times Q^n$ is the transition relation. Sometimes, we will view $\Delta$ as a function from $Q \times \Sigma$ to $2^{Q^n}$ and write $\Delta(q,\sigma)$ for the set $\{ \mathbf{q} \mid (q,\sigma, \mathbf{q}) \in \Delta \}$. A \emph{run} of $\mathfrak{A}$ on a tree $t$ is an $n$-ary infinite tree $r$ over $Q$ such that \[ (r(p),t(p),(r(p1),\dots,r(pn))) \in \Delta \] for every $p \in [n]^*$. The automaton $\mathfrak{A}$ \emph{accepts} $t$ iff there is a run $r$ of $\mathfrak{A}$ on $t$ such that $r(\epsilon) \in I$. The set $L(\mathfrak{A}) := \{ t \mid \text{$\mathfrak{A}$ accepts $t$} \}$ is the \emph{language} accepted by $\mathfrak{A}$. Since looping tree automata are special B\"uchi tree automata, emptiness of their accepted language can effectively be tested using the well-known (quadratic) emptiness test for B\"uchi automata \cite{VaWo86}. However, for looping tree automata this algorithm can be specialized into a simpler (linear) one. Though this is well-known in the automata theory community, there appears to be no reference for the result.
Intuitively, the algorithm works by computing inactive states. A state $q \in Q$ is \emph{active} iff there exists a tree $t$ and a run of $\mathfrak{A}$ on $t$ in which $q$ occurs; otherwise, $q$ is \emph{inactive}. It is easy to see that a looping tree automaton accepts at least one tree iff it has an active initial state. How can the set of inactive states be computed? Obviously, a state from which no successor states are reachable is inactive. Moreover, a state is inactive if every transition possible from that state involves an inactive state. Thus, one can start with the set \[ Q_0 := \{ q \in Q \mid \forall \sigma \in \Sigma . \Delta(q,\sigma) = \emptyset \} \] of obviously inactive states, and then propagate inactiveness through the transition relation. We formalize this propagation process in a way that allows for an easy formulation of our main results. A \emph{derivation} of the emptiness test is a sequence $ Q_0 \vartriangleright Q_1 \vartriangleright \dots \vartriangleright Q_k$ such that $ Q_i \subseteq Q$ and $ Q_i \vartriangleright Q_{i+1}$ iff $ Q_{i+1} = Q_i \cup \{ q \}$ with \[ q \in \{ q' \in Q \mid \forall \sigma \in \Sigma . \forall (q_1, \dots, q_n) \in \Delta(q',\sigma) . \exists j . q_j \in Q_i \} . \] We write $Q_0 \mathbin{\vtr\!\!\!^*} P$ iff there is a $k \in \mathbb{N}$ and a derivation $Q_0 \vartriangleright \dots \vartriangleright Q_k$ with $P = Q_k$. The emptiness test answers ``$L(\mathfrak{A}) = \emptyset$'' iff there exists a set of states $P$ such that $ Q_0 \mathbin{\vtr\!\!\!^*} P$ and $I \subseteq P$. Note that $Q \vartriangleright P$ implies $Q \subseteq P$ and that $Q\subseteq Q'$ and $Q \vartriangleright P$ imply $Q' \mathbin{\vtr\!\!\!^*} Q' \cup P$.
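The propagation of inactiveness can also be sketched operationally. The following Python fragment is our own illustration, not from the paper; it uses a naive fixpoint iteration rather than the linear worklist refinement, and it assumes \texttt{delta} maps a pair \texttt{(state, letter)} to a list of successor tuples (the relation $\Delta(q,\sigma)$).

```python
# Hypothetical sketch of the emptiness test for looping tree automata:
# start from the obviously inactive states Q0 and propagate inactiveness.

def inactive_states(states, alphabet, delta):
    """Compute the closure of the obviously inactive states."""
    # Q0: states with no transition for any letter
    inactive = {q for q in states
                if all(not delta.get((q, a), []) for a in alphabet)}
    changed = True
    while changed:                        # naive fixpoint; a worklist over
        changed = False                   # transitions would make this linear
        for q in states:
            if q in inactive:
                continue
            # q becomes inactive if every transition from q
            # involves at least one inactive successor state
            if all(any(qj in inactive for qj in tup)
                   for a in alphabet for tup in delta.get((q, a), [])):
                inactive.add(q)
                changed = True
    return inactive

def is_empty(states, alphabet, delta, initial):
    """L(A) is empty iff every initial state is inactive."""
    return set(initial) <= inactive_states(states, alphabet, delta)
```

For example, in a unary automaton where state \texttt{'a'} can only move to the dead state \texttt{'b'} while \texttt{'c'} loops on itself, both \texttt{'a'} and \texttt{'b'} are inactive but \texttt{'c'} is active.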
Consequently, the \emph{closure} $Q_0^\vartriangleright$ of $Q_0$ under $\vartriangleright$, defined by $ Q_0^\vartriangleright := \bigcup \{ P \mid Q_0 \mathbin{\vtr\!\!\!^*} P \}$, can be calculated starting with $Q_0$, and successively adding states $q$ to the current set $Q_i$ such that $Q_i\vartriangleright Q_i\cup\{q\}$ and $q\not\in Q_i$, until no more states can be added. It is easy to see that this closure consists of the set of inactive states, and thus $L(\mathfrak{A}) = \emptyset$ iff $I \subseteq Q_0^\vartriangleright$. As described until now, this algorithm runs in time polynomial in the number of states. By using clever data structures and a propagation algorithm similar to the one for satisfiability of propositional Horn formulae \cite{dowling84:_linear_time_testin_horn}, one can in fact obtain a linear emptiness test for looping tree automata. \section{Global axioms} When considering satisfiability of $G$ w.r.t.\ the global axiom $H$, we must take subformulae of $G$ and $H$ into account. We address subformulae using paths in $G$ and $H$. \begin{definition}[$(G,H)$-Paths] For $\ensuremath{\mathsf{K}}$-formulae $G,H$ in NNF, the set of $(G,H)$-paths $\Pi_{G,H}$ is a subset of $\{ \epsilon_G, \epsilon_H\}{\cdot}\{ {\vee_l}, {\vee_r}, {\wedge_l}, {\wedge_r}, \Box, \Diamond \}^*$. The set $\Pi_{G,H}$ and the subformula $(G,H)|_\pi$ of $G,H$ addressed by a path $\pi \in \Pi_{G,H}$ are defined inductively as follows: \begin{itemize} \item $\epsilon_G \in \Pi_{G,H}$ and $(G,H)|_{\epsilon_G} = G$,\ \ and\ \ $\epsilon_H \in \Pi_{G,H}$ and $(G,H)|_{\epsilon_H} = H$ \item if $\pi \in \Pi_{G,H}$ and $(G,H)|_\pi = F_1 \wedge F_2$ then $\pi {\wedge_l}, \pi {\wedge_r} \in \Pi_{G,H}$, $(G,H)|_{\pi {\wedge_l}} = F_1$, $(G,H)|_{\pi {\wedge_r}} = F_2$, and $\pi$ is called \emph{$\wedge$-path}. \item The other cases are defined analogously (see also Definition~\ref{def:g-paths}).
\item $\Pi_{G,H}$ is the smallest set that satisfies the previous conditions. \end{itemize} \end{definition} The definitions of \emph{p.e.} and \emph{clash} are extended to subsets of $\Pi_{G,H}$ in the obvious way, with the \emph{additional requirement} that, for $\Phi \neq \emptyset$ to be p.e., $\epsilon_H \in \Phi$ must hold. This additional requirement enforces the global axiom. \begin{definition}[Formula Automaton with Global Axioms]\label{def:axiom-automaton} For $\mathsf{K}$-for\-mu\-lae $G,H$ in NNF, let $\{ \pi_1, \dots, \pi_n \}$ be an enumeration of the $\Diamond$-paths in $\Pi_{G,H}$. The $n$-ary looping automaton $\mathfrak{A}_{G,H}$ is defined by \[ \mathfrak{A}_{G,H} := (Q_{G,H}, \Sigma_{G,H}, \exp{\{ \epsilon_G \}}, \Delta_{G,H}), \] where $Q_{G,H} := \Sigma_{G,H} := \{ \Phi \subseteq \Pi_{G,H} \mid \Phi \text{ is p.e.} \}$ and the transition relation $\Delta_{G,H}$ is defined as for the automaton $\mathfrak{A}_G$ in Definition~\ref{def:automaton}. \end{definition} \begin{theorem}\label{lem:axioms-emptiness-and-non-satisfiability} $G$ is satisfiable w.r.t.\ the global axiom $H$ iff $L(\mathfrak{A}_{G,H}) \neq \emptyset$. \end{theorem} \begin{proof} The proof is completely analogous to the proof of Theorem~\ref{lem:emptiness-and-non-satisfiability}. We use the same constructions for both directions. Let $\{\pi_1, \dots, \pi_n\}$ be an enumeration of the $\Diamond$-paths in $\Pi_{G,H}$. For the \emph{if}-direction, let $L(\mathfrak{A}_{G,H}) \neq \emptyset$, and let $t,r : [n]^* \rightarrow \{ \Phi \subseteq \Pi_{G,H} \mid \Phi \text{ is p.e.} \}$ be a tree accepted by $\mathfrak{A}_{G,H}$ and a corresponding run of $\mathfrak{A}_{G,H}$. By construction of $\mathfrak{A}_{G,H}$, $t(w) = r(w)$ for every $w \in [n]^*$. We construct a Kripke model $\mathcal{M} = (W,R,V)$ from $t$ by setting \begin{align*} W & = \{ w \in [n]^* \mid t(w) \neq \emptyset \}\\ R & = \{ (w,wi) \in W \times W \mid i \in [n] \}\\ V & = \lambda P . \{ w \in W \mid \exists \pi \in t(w) .
(G,H)|_\pi = P \} \quad \text{ for all propositional atoms $P$ } \end{align*} \begin{claim} For all $w \in W$, if $\pi \in t(w)$ then $\mathcal{M},w \models (G,H)|_\pi$. \end{claim} \noindent \textit{Proof of the claim.} The claim is proved by induction on the structure of \ensuremath{\mathsf{K}}-formulae. Let $w \in W$ be a world and $\pi \in \Pi_{G,H}$ be a path such that $\pi \in t(w)$. \begin{itemize} \item if $(G,H)|_\pi = P$ is a propositional atom, then $w \in V(P)$ by the definition of $V$, and hence $\mathcal{M},w \models (G,H)|_\pi$. \item if $(G,H)|_\pi = \neg P$ is a negated propositional atom, then, since $t(w)$ is clash free, there is no $\pi' \in t(w)$ such that $(G,H)|_{\pi'} = P$. Thus, $w \not \in V(P)$ and hence $\mathcal{M},w \models \neg P$. \item if $(G,H)|_\pi = F_1 \wedge F_2$ then $\pi$ is a $\wedge$-path, and since $t(w)$ is p.e., $\{ \pi {\wedge_l}, \pi {\wedge_r}\} \subseteq t(w)$. By induction, $\mathcal{M}, w \models (G,H)|_{\pi{\wedge_*}}$ and hence $\mathcal{M},w \models (G,H)|_\pi$. \item if $(G,H)|_\pi = F_1 \vee F_2$ then $\pi$ is a $\vee$-path, and since $t(w)$ is p.e., $\{ \pi {\vee_l}, \pi {\vee_r}\} \cap t(w) \neq \emptyset$. By induction, $\mathcal{M}, w \models (G,H)|_{\pi{\vee_l}}$ or $\mathcal{M}, w \models (G,H)|_{\pi{\vee_r}}$ and hence $\mathcal{M},w \models (G,H)|_\pi$. \item if $(G,H)|_\pi = \Diamond F$ then $\pi$ is a $\Diamond$-path and, w.l.o.g., assume $\pi = \pi_i$. Since $\pi_i \in r(w)$, $\pi_i \Diamond \in r(wi) = t(wi)$ holds and hence $wi \in W$ and $(w,wi) \in R$. By induction, we have that $\mathcal{M}, wi \models (G,H)|_{\pi_i \Diamond}$ and hence $\mathcal{M}, w \models (G,H)|_{\pi_i}$. \item if $(G,H)|_\pi = \Box F$ and $(w,w') \in R$ then $w' = wi$ for some $i \in [n]$ and $t(wi) \neq \emptyset$ holds and by construction of $\mathfrak{A}_{G,H}$, this implies $\pi \Box \in r(wi) = t(wi)$.
By induction, this implies $\mathcal{M}, wi \models (G,H)|_{\pi \Box}$ and since $wi = w'$ and $w'$ has been chosen arbitrarily, $\mathcal{M}, w \models (G,H)|_\pi$. \end{itemize} This finishes the proof of the claim. Since $t(\epsilon) = r(\epsilon) \in \exp{ \{ \epsilon_G \} }$ and hence $\epsilon_G \in t(\epsilon)$, $\mathcal{M}, \epsilon \models (G,H)|_{\epsilon_G}$ and $G = (G,H)|_{\epsilon_G}$ is satisfiable. Also, since $t(w)$ is p.e., $\epsilon_H \in t(w)$ for every $w \in W$ and, by the claim, $\mathcal{M}, w \models H = (G,H)|_{\epsilon_H}$ holds for every $w \in W$. Hence $G$ is satisfiable w.r.t.\ the global axiom $H$. For the \emph{only if}-direction, we first show an auxiliary claim: for a set $\Psi \subseteq \Pi_{G,H}$, we define $\mathcal{M}, w \models \Psi$ iff $\mathcal{M}, w \models (G,H)|_\pi$ for every $\pi \in \Psi$. \begin{claim} If $\Psi \subseteq \Pi_{G,H}$ and $w \in W$ such that $\mathcal{M}, w \models \Psi$, then there is a $\Phi \in \exp \Psi$ such that $\mathcal{M}, w \models \Phi$. \end{claim} \noindent \textit{Proof of the claim.} Let $\Psi \subseteq \Pi_{G,H}$ and $w \in W$ such that $\mathcal{M}, w \models \Psi$. We will show how to construct an expansion of $\Psi$ with the desired property. If $\Psi$ is already p.e., then $\Psi \in \exp \Psi$ and we are done. \begin{itemize} \item If $\Psi$ is not p.e.\ because $\epsilon_H \not \in \Psi$ then, because $\mathcal{M}, w \models H$, $\Psi' = \Psi \cup \{ \epsilon_H \}$ is a set with $\mathcal{M}, w \models \Psi'$ that is ``one step closer'' to being p.e.\ than $\Psi$. \item If $\Psi$ is not p.e.\ and $\epsilon_H \in \Psi$ then let $\pi \in \Psi$ be a \ensuremath{\mathbin{\rlap{$\vee$}\wedge}}-path that is not yet expanded in $\Psi$. \begin{itemize} \item If $\pi$ is a $\wedge$-path then $(G,H)|_\pi = F_1 \wedge F_2$ and since $\mathcal{M}, w \models (G,H)|_\pi$, also $\mathcal{M}, w \models F_1 = (G,H)|_{\pi \wedge_l}$ and $\mathcal{M}, w \models F_2 = (G,H)|_{\pi \wedge_r}$.
Hence $\mathcal{M}, w \models \Psi \cup \{ \pi{\wedge_l}, \pi{\wedge_r} \}$ and $\Psi' = \Psi \cup \{ \pi{\wedge_l}, \pi{\wedge_r}\}$ is a set with $\mathcal{M}, w \models \Psi'$ that is ``one step closer'' to being p.e.\ than $\Psi$. \item If $\pi$ is a $\vee$-path then $(G,H)|_\pi = F_1 \vee F_2$ and since $\mathcal{M}, w \models (G,H)|_\pi$, also $\mathcal{M}, w \models F_1 = (G,H)|_{\pi \vee_l}$ or $\mathcal{M}, w \models F_2 = (G,H)|_{\pi \vee_r}$. Hence $\mathcal{M}, w \models \Psi \cup \{ \pi{\vee_l}\}$ or $\mathcal{M}, w \models \Psi \cup \{ \pi{\vee_r}\}$, and hence we can obtain a set $\Psi'$ with $\mathcal{M}, w \models \Psi'$ that is again ``one step closer'' to being p.e.\ than $\Psi$. \end{itemize} \end{itemize} Restarting this process with $\Psi = \Psi'$ eventually yields an expansion $\Phi$ of the initial set $\Psi$ with $\mathcal{M},w \models \Phi$, which proves the claim. Let $\mathcal{M} = (W,R,V)$ be a Kripke model and $w \in W$ a world such that $\mathcal{M}, w \models G$ and $\mathcal{M}, w' \models H$ for all $w' \in W$. From $\mathcal{M}$ and the claim, we inductively define a tree $t$ that is accepted by $\mathfrak{A}_{G,H}$. To this purpose, we also inductively define a function $f : [n]^* \rightarrow W$ such that $\mathcal{M}, f(p) \models t(p)$ for all $p$. We start by setting $f(\epsilon) = w$ and $t(\epsilon) = \Phi$ for a $\Phi \in \exp { \{ \epsilon_G \} }$ such that $\mathcal{M}, w \models \Phi$. From the claim we have that such a set $\Phi$ exists because $\mathcal{M}, w \models G = (G,H)|_{\epsilon_G}$. If $f(p)$ and $t(p)$ are already defined, then, for $i \in [n]$, we define $f(pi)$ and $t(pi)$ as follows: \begin{itemize} \item if $\pi_i \in t(p)$ then $\mathcal{M}, f(p) \models (G,H)|_{\pi_i}$ and hence there is a $w' \in W$ such that $(f(p),w') \in R$ and $\mathcal{M}, w' \models (G,H)|_{\pi_i \Diamond}$.
If $\pi \in t(p)$ is a $\Box$-path, then also $\mathcal{M}, w' \models (G,H)|_{\pi\Box}$ holds. Hence $\mathcal{M}, w' \models \{ \pi_i \Diamond \} \cup \{ \pi \Box \mid \pi \in t(p) \text{ is a $\Box$-path } \}$. We set $f(pi) = w'$ and $t(pi) = \Phi$ for a $\Phi \in \exp {\{ \pi_i \Diamond \} \cup \{ \pi \Box \mid \pi \in t(p) \text{ is a $\Box$-path } \}}$ with $\mathcal{M}, w' \models \Phi$, which exists by the claim. \item if $\pi_i \not \in t(p)$, then we set $f(pi) = w$ for an arbitrary $w \in W$ and $t(pi) = \emptyset$ \end{itemize} In both cases, we have defined $f(pi)$ and $t(pi)$ such that $\mathcal{M}, f(pi) \models t(pi)$. It is easy to see that $t$ is accepted by $\mathfrak{A}_{G,H}$ with the run $r = t$. Hence $L(\mathfrak{A}_{G,H}) \neq \emptyset$, which is what we needed to show. \qed \end{proof} \begin{definition}[The Inverse Calculus with Global Axioms] Let $G,H$ be \ensuremath{\mathsf{K}}-formulae in NNF and $\Pi_{G,H}$ the set of paths of $G,H$. Sequents are subsets of $\Pi_{G,H}$, and operations on sequents are defined as before. In addition to the inferences from Figure \ref{fig:inv-calc}, the inverse calculus for $G$ w.r.t.\ the global axiom $H$, \ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace, employs the inference \[ \inference[$(\textit{ax})$]{\Gamma, \epsilon_H}{\Gamma}. \] \end{definition} From now on, $\comp \cdot$ is defined w.r.t.\ the states of $\mathfrak{A}_{G,H}$, i.e., $\comp \Gamma := \{ \Phi \in Q_{G,H} \mid \Gamma \subseteq \Phi \}$. \begin{theorem}[\ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace and the emptiness test for $\mathfrak{A}_{G,H}$ simulate each other]% \label{theo:axioms-inv-calc-to-emptiness-test}% \label{theo:axioms-emptiness-test-to-inv-calc}% Let $\mathbin{{\vdash\!\!\!_{\textit{ax}}}}$ denote derivation steps of \ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace, and $\vartriangleright$ derivation steps of the emptiness test for $\mathfrak{A}_{G,H}$.
\begin{enumerate} \item Let $Q\subseteq Q_{G,H}$ be a set of states such that $Q_0 \mathbin{\vtr\!\!\!^*} Q$. Then there exists a set of sequents $\mathcal{S}$ with $\mathcal{S}_0 \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*} \mathcal{S} \text{ and } Q \subseteq \comp \mathcal{S}$. \item Let $\mathcal{S}$ be a set of sequents such that $\mathcal{S}_0 \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*} \mathcal{S}$. Then there exists a set of states $Q \subseteq Q_{G,H}$ with $Q_0 \mathbin{\vtr\!\!\!^*} Q \text{ and } \comp \mathcal{S} \subseteq Q$. \end{enumerate} \end{theorem} Lemmas~\ref{lem:dead-states-equal-axioms}, \ref{lem:propositional-inferences-for-free}, and \ref{lem:modal-inference-simulated-by-emptiness-test}, restated for $\mathfrak{A}_{G,H}$ and \ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace, can be shown as before. The following lemma deals with the $\textit{ax}$-inference of \ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace. \begin{lemma}\label{lem:axioms-inference-for-free} Let $\mathcal{S} \mathbin{{\vdash\!\!\!_{\textit{ax}}}} \mathcal{T}$ be a derivation step of \ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace that employs an \textit{ax}-inference. Then $\comp \mathcal{S} = \comp \mathcal{T}$. \end{lemma} \begin{proof} Let $\mathcal{T} = \mathcal{S} \cup \{ \Gamma \}$, where $\Gamma$ is obtained from a sequent $\Gamma, \epsilon_H \in \mathcal{S}$ by the inference $\inference[$(\textit{ax})$]{\Gamma, \epsilon_H}{\Gamma}$. Then $\comp \mathcal{T} = \comp \mathcal{S} \cup \comp \Gamma$. Since $\mathcal{S} \subseteq \mathcal{T}$, $\comp \mathcal{S} \subseteq \comp \mathcal{T}$ holds immediately. If $\Phi \in \comp \Gamma$, then, since $\Phi$ is p.e., $\epsilon_H \in \Phi$ and $\Phi \in \comp { \Gamma, \epsilon_H } \subseteq \comp \mathcal{S}$. \qed \end{proof} The proof of Theorem~\ref{theo:axioms-inv-calc-to-emptiness-test}.2 is now analogous to the proof of Theorem~\ref{theo:inv-calc-to-emptiness-test}.2.
For the proof of Theorem~\ref{theo:axioms-inv-calc-to-emptiness-test}.1, Lemma~\ref{lem:inverse-calculus-for-propositional-closure} needs to be re-proved because the change in the definition of p.e.\ now also implies that $\epsilon_H \in \Phi$ holds for every set $\Phi \in \exp \Psi$ for any $\Psi \neq \emptyset$ (see Lemma~\ref{lem:axioms-inverse-calculus-for-propositional-closure}). This is where the new inference \textit{ax}{} comes into play. In all other respects, the proof of Theorem~\ref{theo:axioms-inv-calc-to-emptiness-test}.1 is analogous to the proof of Theorem~\ref{theo:inv-calc-to-emptiness-test}.1. \begin{lemma}\label{lem:axioms-inverse-calculus-for-propositional-closure} Let $\Phi \subseteq \Pi_{G,H}$ be a set of paths and $\mathcal{S}$ a set of sequents such that $\exp \Phi \subseteq \comp \mathcal{S}$. Then there exists a set of sequents $\mathcal{T}$ with $\mathcal{S} \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*} \mathcal{T}$ such that there exists $\Lambda \in \mathcal{T}$ with $\Lambda \subseteq \Phi$. \end{lemma} \begin{proof} If $\epsilon_H \in \Phi$ then we can use the same construction used in the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure} to construct the set $\mathcal{T}$ such that $\mathcal{S} \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*} \mathcal{T}$ and there is a $\Lambda \in \mathcal{T}$ with $\Lambda \subseteq \Phi$. If $\epsilon_H \not \in \Phi$, then set $\Psi = \Phi,\epsilon_H$ and again use the construction from the proof of Lemma~\ref{lem:inverse-calculus-for-propositional-closure} to construct a set $\mathcal{T}$ such that $\mathcal{S} \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*} \mathcal{T}$ and there is a $\Lambda \in \mathcal{T}$ with $\Lambda \subseteq \Psi$. If $\epsilon_H \not \in \Lambda$ then we are done, since then also $\Lambda \subseteq \Phi$.
If $\Lambda = \Gamma, \epsilon_H$ for some $\Gamma$ with $\epsilon_H \not \in \Gamma$, then $\Gamma \subseteq \Phi$ and $\mathcal{T} \mathbin{{\vdash\!\!\!_{\textit{ax}}}} \mathcal{T} \cup \{ \Gamma \}$ can be derived by \ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace using the inference $\inference[$(\textit{ax})$]{\Gamma, \epsilon_H}{\Gamma}$. \qed \end{proof} \begin{corollary} \ensuremath{\textsf{IC}^\textit{ax}_{G,H}}\xspace yields an \textsc{ExpTime}{} decision procedure for satisfiability w.r.t.\ global axioms in \ensuremath{\mathsf{K}}. \end{corollary} The following algorithm yields the desired procedure: \begin{algorithm}\label{alg:inverse-calculus} Let $G, H$ be \ensuremath{\mathsf{K}}-formulae in NNF. To test satisfiability of $G$ w.r.t.\ $H$, calculate $\mathcal{S}_0^{\mathbin{{\vdash\!\!\!_{\textit{ax}}}}}$. If $\{\emptyset, \{ \epsilon_G \}\} \cap \mathcal{S}_0^{\mathbin{{\vdash\!\!\!_{\textit{ax}}}}} \neq \emptyset$, then answer ``not satisfiable,'' and ``satisfiable'' otherwise. \end{algorithm} Correctness of this algorithm follows from Theorems~\ref{lem:axioms-emptiness-and-non-satisfiability} and~\ref{theo:axioms-inv-calc-to-emptiness-test}. If $G$ is not satisfiable w.r.t.\ $H$, then $L(\mathfrak{A}_{G,H}) = \emptyset$, and there exists a set of states $Q$ with $Q_0 \mathbin{\vtr\!\!\!^*} Q$ and $\exp {\{ \epsilon_G \}} \subseteq Q$. Thus, there exists a set of sequents $\mathcal{S}$ with $\mathcal{S}_0 \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*} \mathcal{S}$ such that $Q \subseteq \comp {\mathcal{S}}$. With (the appropriately reformulated) Lemma~\ref{lem:inverse-calculus-for-propositional-closure}, there exists a set of sequents $\mathcal{T}$ with $\mathcal{S} \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*} \mathcal{T}$ such that there is a sequent $\Lambda \in \mathcal{T}$ with $\Lambda \subseteq \{ \epsilon_G \}$. Consequently, $\Lambda = \emptyset$ or $\Lambda = \{ \epsilon_G \}$.
Since $\mathcal{S}_0 \mathbin{\vdash\!\!\!_{\textit{ax}}\!\!\!^*}\mathcal{S}_0^{\mathbin{{\vdash\!\!\!_{\textit{ax}}}}}$, there exists a set of (inactive) states $Q$ such that $Q_0 \mathbin{\vtr\!\!\!^*} Q$ and $\comp{\mathcal{S}_0^{\mathbin{{\vdash\!\!\!_{\textit{ax}}}}}} \subseteq Q$. Since $\exp{\{\epsilon_G\}}\subseteq\comp{\{\epsilon_G\}}\subseteq\comp{\emptyset}$, we know that $\{\emptyset, \{ \epsilon_G \}\} \cap \mathcal{S}_0^{\mathbin{{\vdash\!\!\!_{\textit{ax}}}}} \neq \emptyset$ implies $\exp{\{\epsilon_G\}}\subseteq Q$. Consequently, $L(\mathfrak{A}_{G,H}) = \emptyset$ and thus $G$ is not satisfiable w.r.t.\ $H$. For the complexity, note that there are only exponentially many sequents. Consequently, it is easy to see that the saturation process that leads to $\mathcal{S}_0^{\mathbin{{\vdash\!\!\!_{\textit{ax}}}}}$ can be realized in time exponential in the size of the input formulae.
\section{Introduction} On the technical level, quantum mechanics (QM) is a set of mathematically formulated prescriptions that serve for calculations of probabilities of different measurement outcomes. The calculated probabilities agree with experiments. This is the fact! From a pragmatic point of view, this is also enough. Pragmatic physicists are interested only in these pragmatic aspects of QM, which is fine. Nevertheless, many physicists are not only interested in the pragmatic aspects, but also want to understand nature on a deeper conceptual level. Besides, a deeper understanding of nature on the conceptual level may also induce a new development of pragmatic aspects. Thus, the conceptual understanding of physical phenomena is also an important aspect of physics. Unfortunately, the conceptual issues turn out to be particularly difficult in the most fundamental physical theory currently known -- quantum theory. Textbooks on QM usually emphasize the pragmatic technical aspects, while the discussions of the conceptual issues are usually avoided or reduced to simple authoritative claims without a detailed discussion. This causes a common (but wrong!) impression among physicists that all conceptual problems of QM are already solved or that the unsolved problems are not really physical (but rather ``philosophical''). The purpose of the present paper is to warn students, teachers, and practitioners that some of the authoritative claims on conceptual aspects of QM that they have often heard or read may actually be wrong, that a certain number of serious physicists still cope with these foundational aspects of QM, and that there is not yet a general consensus among experts on answers to some of the most fundamental questions. To emphasize that some widely accepted authoritative claims on QM are not really proven, I refer to them as ``myths''.
In the paper, I review the main facts that support these myths, but also explain why these facts do not really prove the myths, and review the main alternatives. The paper is organized such that each section is devoted to another myth, while the title of each section carries the basic claim of the corresponding myth. (An exception is the concluding section, where I attempt to identify the common origin of all these myths.) The sections are roughly organized from more elementary myths towards more advanced ones, but they do not necessarily need to be read in that order. The style of presentation is adjusted to readers who are already familiar with the technical aspects of QM, but want to complete their knowledge with a better understanding of the conceptual issues. Nevertheless, the paper attempts to be very pedagogical and readable by a wide nonexpert audience. However, to keep the balance and readability by a wide physics audience, at the risk of making the paper less pedagogical, in some places I am forced to omit some technical details (especially in the most advanced sections, Secs.~\ref{QFTP} and \ref{BH}), keeping only those conceptual and technical details that are essential for understanding why some myths are believed to be true and why they may not be so. Readers interested in more technical details will find them in more specialized cited references, many of which are pedagogically oriented reviews. As a disclaimer, it is also fair to stress that although this paper is intended to be a review of different views and interpretations of various aspects of QM, it is certainly not completely neutral and unbiased. Some points of view are certainly emphasized more than the others, while some are not even mentioned, reflecting subjective preferences of the author.
Moreover, the reader does not necessarily need to agree with all conclusions and suggestions presented in this paper, as they also express a subjective opinion of the author, which, of course, is also open to further criticism. By dissolving various myths in QM, it is certainly not the intention of the author to create new ones. The main intention of the author is to provoke new thinking of the reader about various aspects of QM that previously might have been taken by him/her for granted, not necessarily to convince the reader that the views presented here are the right ones. Even the claims that are proclaimed as ``facts'' in this paper may be questioned by a critical reader. It also cannot be overemphasized that ``myths'' in this paper do not necessarily refer to claims that are wrong, but merely to claims about which there is not yet a true consensus. \section{In QM, there is a wave-particle duality} \subsection{Wave-particle duality as a myth} In introductory textbooks on QM, as well as in popular texts on QM, a conceptually strange character of QM is often verbalized in terms of {\it wave-particle duality}. According to this duality, fundamental microscopic objects such as electrons and photons are neither pure particles nor pure waves, but both waves and particles. Or more precisely, in some conditions they behave as waves, while in other conditions they behave as particles. However, in more advanced and technical textbooks on QM, the wave-particle duality is rarely mentioned. Instead, such serious textbooks talk only about waves, i.e., wave functions $\psi({\bf x},t)$. The waves do not need to be plane waves of the form $\psi({\bf x},t)=e^{i({\bf kx}-\omega t)}$, but, in general, may have an arbitrary dependence on ${\bf x}$ and $t$. At time $t$, the wave can be said to behave as a particle if, at that time, the wave is {\em localized} around a single value of ${\bf x}$.
In the ideal case, if \begin{equation}\label{x} \psi({\bf x})=\sqrt{\delta^3({\bf x}-{\bf x}')} , \end{equation} then the position ${\bf x}$ of the particle has a definite value ${\bf x}'$. The state (\ref{x}) is the eigenstate of the position operator, with the eigenvalue ${\bf x}'$. Typically, the wave attains such a localized-particle shape through a wave-function collapse associated with a measurement of a particle position. Moreover, the wave may appear as a pointlike particle for a long time if the particle position is measured many times in sequence with a small time interval between two measurements. This makes the wave appear as a classical particle with a trajectory, which occurs, e.g., in cloud chambers. However, the position operator is just one of many (actually, infinitely many) hermitian operators in QM. Each hermitian operator corresponds to an observable, and it is widely accepted (which, as we shall see later, is also one of the myths) that the position operator does not enjoy any privileged role. From that, widely accepted, point of view, there is nothing dual about QM; electrons and photons {\em always} behave as waves, while a particlelike behavior corresponds only to the special case (\ref{x}). In this sense, the wave-particle duality is nothing but a myth. But why then is the wave-particle duality so often mentioned? One reason is philosophical; the word ``duality'' sounds very ``deep'' and ``mysterious'' from a philosophical point of view, and some physicists obviously like it, despite the fact that a dual picture is not supported by the usual technical formulation of QM. Another reason is historical; in early days of QM, it was an experimental fact that electrons and photons sometimes behave as particles and sometimes as waves, so a dual interpretation was perhaps natural at that time, when quantum theory was not yet well understood.
From the above, one may conclude that the notion of ``wave-particle duality'' should be completely removed from a modern talk on QM. However, this is not necessarily so. Such a concept may still make sense if interpreted in a significantly different way. One way is purely linguistic; it is actually common to say that electrons and photons are ``particles'', having in mind that the word ``particle'' has a very different meaning than the same word in classical physics. In this sense, electrons and photons are both ``particles'' (because we call them so) and ``waves'' (because that is what, according to the usual interpretation, they really are). Another meaningful way of retaining the notion of ``wave-particle duality'' is to understand it as a quantum-classical duality, because each classical theory has a corresponding quantum theory, and vice versa. However, the word ``duality'' is not the best word for this correspondence, because the corresponding quantum and classical theories do not enjoy the same rights. Instead, the classical theories are merely approximations of the quantum ones. \subsection{Can wave-particle duality be taken seriously?} However, is it possible that the ``wave-particle duality'' has a literal meaning; that, in some sense, electrons and photons really {\em are} both particles and waves? Most experts on the foundations of QM will probably say -- no! Nevertheless, such a definite ``no'' is also an unproved myth. Of course, such a definite ``no'' is correct if it refers only to the usual formulation of QM. But who says that the usual formulation of QM is the ultimate theory that will never be superseded by an even better theory? (A good scientist will never say that for any theory.) In fact, such a modification of the usual quantum theory already exists. I will refer to it as the {\em Bohmian} interpretation of QM \cite{bohm}, but it is also known under the names ``de Broglie-Bohm'' interpretation and ``pilot-wave'' interpretation.
(For recent pedagogic expositions of this interpretation, see \cite{tumul,pas}, for a pedagogic comparison with other formulations of QM, see \cite{mnogoAJP}, and for an unbiased review of advantages and disadvantages of this interpretation, see \cite{pas2}.) This interpretation consists of {\em two} equations. One is the standard Schr\"odinger equation that describes the wave-aspect of the theory, while the other is a classical-like equation that describes a particle trajectory. The equation for the trajectory is such that the force on the particle depends on the wave function, so that the motion of the particle differs from that in classical physics, which, in turn, can be used to explain all (otherwise strange) quantum phenomena. In this interpretation, {\em both} the wave function and the particle position are fundamental entities. If any known interpretation of QM respects a kind of wave-particle duality, then it is the Bohmian interpretation. More on this interpretation (which also provides a counterexample to some other myths of QM) will be presented in subsequent sections. \section{In QM, there is a time-energy uncertainty relation} \subsection{The origin of a time-energy uncertainty relation} For simplicity, consider a particle moving in one dimension. In QM, operators corresponding to the position $x$ and the momentum $p$ satisfy the commutation relation \begin{equation}\label{comrelxp} [\hat{x},\hat{p}]=i\hbar , \end{equation} where $[A,B]\equiv AB-BA$. As is well known, this commutation relation implies the position-momentum Heisenberg uncertainty relation \begin{equation}\label{Hxp} \Delta x \Delta p \geq \frac{\hbar}{2} . \end{equation} It means that one cannot measure both the particle momentum and the particle position with arbitrary accuracy. For example, the wave function corresponding to a definite momentum is an eigenstate of the momentum operator \begin{equation}\label{momop} \hat{p}=-i\hbar\frac{\partial}{\partial x} .
\end{equation} It is easy to see that such a wave function must be proportional to a plane wave $e^{ipx/\hbar}$. On the other hand, the wave function corresponding to an eigenstate of the position operator is essentially a $\delta$-function (see (\ref{x})). It is clear that a wave function cannot be both a plane wave and a $\delta$-function, which, in the usual formulation of QM, explains why one cannot measure both the momentum and the position with perfect accuracy. There is a certain analogy between the couple position-momentum and the couple time-energy. In particular, a wave function that describes a particle with a definite energy $E$ is proportional to a plane wave $e^{-iEt/\hbar}$. Analogously, one may imagine that a wave function corresponding to a definite time is essentially a $\delta$-function in time. In analogy with (\ref{Hxp}), this represents the essence of the reason for writing the time-energy uncertainty relation \begin{equation}\label{HtE} \Delta t \Delta E \geq \frac{\hbar}{2} . \end{equation} In introductory textbooks on QM, as well as in popular texts on QM, the time-energy uncertainty relation (\ref{HtE}) is often presented as a fact enjoying the same rights as the position-momentum uncertainty relation (\ref{Hxp}). Nevertheless, there is a great difference between these two uncertainty relations. Whereas the position-momentum uncertainty relation (\ref{Hxp}) is a fact, the time-energy uncertainty relation (\ref{HtE}) is a myth! \subsection{The time-energy uncertainty relation is not fundamental} Where does this difference come from? The main difference lies in the fact that energy is {\em not} represented by an operator analogous to (\ref{momop}), i.e., energy is not represented by the operator $i\hbar\partial/\partial t$. Instead, energy is represented by a much more complicated operator called the Hamiltonian, usually having the form \begin{equation} \hat{H}=\frac{\hat{p}^2}{2m}+V(\hat{x}) .
\end{equation} Nothing forbids the state $\psi(x,t)$ from being an eigenstate of $\hat{H}$ at a definite value of $t$. This difference has a deeper origin in the fundamental postulates of QM, according to which quantum operators are operators on the space of functions depending on $x$, {\em not} on the space of functions depending on $t$. Thus, space and time have very different roles in nonrelativistic QM. While $x$ is an operator, $t$ is only a classical-like parameter. A total probability that must be equal to 1 is an integral of the form \begin{equation}\label{integ1} \int_{-\infty}^{\infty} dx\, \psi^*(x,t)\psi(x,t) , \end{equation} not an integral of the form \begin{equation}\label{integ2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dx\, dt\, \psi^*(x,t)\psi(x,t) . \end{equation} In fact, if $\psi(x,t)$ is a solution of the Schr\"odinger equation, then, when the integral (\ref{integ1}) is finite, the integral (\ref{integ2}) is not finite. An analogous statement is also true for one or more particles moving in 3 dimensions; the probability density $\psi^*\psi$ is not to be integrated over time. As the time $t$ is not an operator in QM, a commutation relation analogous to (\ref{comrelxp}) but with the replacements $x\rightarrow t$, $p\rightarrow H$, does not make sense. This is another reason why the time-energy uncertainty relation (\ref{HtE}) is not really valid in QM. Nevertheless, there are attempts to replace the parameter $t$ with another quantity $T$, so that an analog of (\ref{comrelxp}) \begin{equation}\label{comrelTH} [\hat{T},\hat{H}]=-i\hbar \end{equation} is valid. However, there is a theorem due to Pauli that says that this is impossible \cite{pauli}.
The two main assumptions of the Pauli theorem are that $T$ and $H$ must be hermitian operators (because only hermitian operators have real eigenvalues corresponding to real physical quantities) and that the spectrum of $H$ must be bounded from below (which corresponds to the physical requirement that energy should not be able to become arbitrarily negative, because otherwise such a system would not be physically stable). Note that $p$, unlike $H$, does not need to be bounded from below. For a simple proof of the Pauli theorem, consider the operator \begin{equation}\label{pauli1} \hat{H}'\equiv e^{-i\epsilon\hat{T}/\hbar} \hat{H} e^{i\epsilon\hat{T}/\hbar} , \end{equation} where $\epsilon$ is a positive parameter with the dimension of energy. It is sufficient to consider the case of small $\epsilon$, so, by expanding the exponential functions and using (\ref{comrelTH}), one finds \begin{equation}\label{pauli2} \hat{H}' \approx \hat{H}-\epsilon . \end{equation} Now assume that the spectrum of $\hat{H}$ is bounded from below, i.e., that there exists a ground state $|\psi_0\rangle$ with the property $\hat{H}|\psi_0\rangle=E_0|\psi_0\rangle$, where $E_0$ is the minimal possible energy. Consider the state \begin{equation}\label{pauli3} |\psi\rangle=e^{i\epsilon\hat{T}/\hbar}|\psi_0\rangle . \end{equation} Assuming that $\hat{T}$ is hermitian (i.e., that $\hat{T}^{\dagger}=\hat{T}$) and using (\ref{pauli1}) and (\ref{pauli2}), one finds \begin{equation}\label{pauli4} \langle\psi| \hat{H}| \psi\rangle=\langle\psi_0 |\hat{H}'| \psi_0\rangle \approx E_0-\epsilon < E_0 . \end{equation} This shows that there exists a state $|\psi\rangle$ with an energy smaller than $E_0$. This is in contradiction with the assumption that $E_0$ is the minimal energy, which proves the theorem!
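As a side remark (an illustration of mine, not part of the original argument), in any {\em finite} dimension the relation (\ref{comrelTH}) fails for an even simpler reason: the trace of a commutator always vanishes, while the trace of $-i\hbar$ times the unit matrix does not. Pauli's argument is needed precisely because infinite-dimensional operators, such as $\hat{x}$ and $\hat{p}$, evade this trace obstruction:

```python
import numpy as np

# In finite dimension n, tr([T,H]) = 0 for ANY pair of matrices (tr(TH) = tr(HT)),
# while [T,H] = -i*hbar*1 would require tr = -i*hbar*n != 0.  (Units: hbar = 1.)
rng = np.random.default_rng(1)
n = 4
T = rng.normal(size=(n, n))
H = rng.normal(size=(n, n))
comm = T @ H - H @ T

print(np.trace(comm))             # zero, up to rounding
print(np.trace(-1j * np.eye(n)))  # -i*n: what the commutation relation would demand
```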
There are attempts to modify some of the axioms of the standard form of quantum theory so that the commutation relation (\ref{comrelTH}) can be consistently introduced (see, e.g., \cite{busch,bostr} and references therein), but the viability of such modified axioms of QM is not widely accepted among experts. Although (\ref{HtE}) is not a fundamental relation, in most practical situations it is still true that the uncertainty $\Delta E$ and the duration of the measurement process $\Delta t$ roughly satisfy the inequality (\ref{HtE}). However, there exists also an explicit counterexample that demonstrates that it is possible in principle to measure energy with arbitrary accuracy during an arbitrarily short time-interval \cite{ahar}. It remains true that the characteristic evolution time of quantum states is given by an uncertainty relation (\ref{HtE}), but this evolution time is not to be unequivocally identified with the duration of a quantum measurement. In this sense, the time-energy uncertainty relation (\ref{HtE}) is not as fundamental as the position-momentum uncertainty relation (\ref{Hxp}). While different roles of space and time should not be surprising in nonrelativistic QM, one may expect that space and time should play a more symmetrical role in relativistic QM. More on relativistic QM will be said in Sec.~\ref{RQM}, but here I only note that even in relativistic QM space and time do not play completely symmetric roles, because even there integrals similar to (\ref{integ2}) do not have a physical meaning, while those similar to (\ref{integ1}) do. Thus, even in relativistic QM, a time-energy uncertainty relation does not play a fundamental role. \section{QM implies that nature is fundamentally random} \subsection{Fundamental randomness as a myth} QM is a theory that gives predictions on probabilities for different outcomes of measurements. But this is not a privileged property of QM; classical statistical mechanics also does this.
Nevertheless, there is an important difference between QM and classical statistical mechanics. The latter is known to be an effective approximate theory, useful when not all fundamental degrees of freedom are under experimental or theoretical control, while the underlying more fundamental classical dynamics is completely deterministic. On the other hand, the usual form of QM does not say anything about actual deterministic causes that lie behind the probabilistic quantum phenomena. This fact is often used to claim that QM implies that nature is fundamentally random. Of course, if the usual form of QM is really the ultimate truth, then it is true that nature is fundamentally random. But who says that the usual form of QM really {\em is} the ultimate truth? (A serious scientist will never claim that for any current theory.) {\it A priori}, one cannot exclude the existence of some {\em hidden variables} (not described by the usual form of QM) that provide a deterministic cause for all seemingly random quantum phenomena. Indeed, from the experience with classical pseudorandom phenomena, the existence of such deterministic hidden variables seems a very natural hypothesis. Nevertheless, QM is not that cheap; in QM there exist rigorous no-hidden-variable theorems. These theorems are often used to claim that hidden variables cannot exist and, consequently, that nature is fundamentally random. However, each theorem has assumptions. The main assumption is that hidden variables must reproduce the statistical predictions of QM. Since these statistical predictions are verified experimentally, one is not allowed to relax this assumption. However, this assumption alone is not sufficient to provide a theorem. In the actual constructions of these theorems, there are also some additional ``auxiliary" assumptions, which, however, turn out to be physically crucial!
Thus, what these theorems actually prove is that hidden variables, if they exist, cannot have these additional assumed properties. Since there is no independent proof that these additional assumed properties are necessary ingredients of nature, the assumptions of these theorems may not be valid. (I shall discuss one version of these theorems in more detail in Sec.~\ref{NOREAL}.) Therefore, the claim that QM implies fundamental randomness is a myth. \subsection{From analogy with classical statistical mechanics to the Bohmian interpretation} Some physicists, including one winner of the Nobel prize \cite{hooft}, take very seriously the possibility that some sort of deterministic hidden variables may underlie the usual form of QM. In fact, the best known and most successful hidden-variable extension of QM, the Bohmian interpretation, emerges rather naturally from the analogy with classical statistical mechanics. To see this, consider a classical particle whose position is not known with certainty. Instead, one deals with a statistical ensemble in which only the probability density $\rho({\bf x},t)$ is known. The probability must be conserved, i.e., $\int d^3x\, \rho=1$ for each $t$. Therefore, the probability must satisfy the local conservation law (known also as the continuity equation) \begin{equation}\label{cons} \partial_t \rho +\nabla\cdot(\rho {\bf v})=0 , \end{equation} where ${\bf v}({\bf x},t)$ is the velocity of the particle at the position ${\bf x}$ and the time $t$. In the Hamilton-Jacobi formulation of classical mechanics, the velocity can be calculated as \begin{equation}\label{v} {\bf v}({\bf x},t)=\frac{\nabla S({\bf x},t)}{m}, \end{equation} where $S({\bf x},t)$ is a solution of the Hamilton-Jacobi equation \begin{equation}\label{HJ} \frac{(\nabla S)^2}{2m}+V({\bf x},t)=-\partial_t S , \end{equation} $V({\bf x},t)$ is an arbitrary potential, and $m$ is the mass of the particle.
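As a quick sanity check of (\ref{v}) and (\ref{HJ}) (an illustrative example of mine, not from the original text): for a free particle, $V=0$, the function $S=px-p^2t/2m$ solves the Hamilton-Jacobi equation and yields the constant classical velocity $v=p/m$. A short symbolic sketch:

```python
import sympy as sp

# Free-particle check of the Hamilton-Jacobi equation (HJ) and the velocity (v):
# S = p*x - p**2*t/(2m) should give (dS/dx)**2/(2m) + dS/dt = 0 (since V = 0),
# and the velocity dS/dx / m should be the constant p/m.
x, t, p, m = sp.symbols('x t p m', positive=True)
S = p * x - p**2 * t / (2 * m)

hj_residual = sp.diff(S, x)**2 / (2 * m) + sp.diff(S, t)  # left minus right side of (HJ)
v = sp.diff(S, x) / m                                      # equation (v)
print(sp.simplify(hj_residual), v)                         # -> 0  p/m
```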
The independent real equations (\ref{cons}) and (\ref{HJ}) can be written in a more elegant form as a single complex equation. For that purpose, one can introduce a complex function \cite{ros} \begin{equation}\label{psi} \psi=\sqrt{\rho} e^{iS/\hbar} , \end{equation} where $\hbar$ is an arbitrary constant with the dimension of action, so that the exponent in (\ref{psi}) is dimensionless. With this definition of $\psi$, Eqs. (\ref{cons}) and (\ref{HJ}) are equivalent to the equation \begin{equation}\label{schcl} \left( \frac{-\hbar^2 \nabla^2}{2m} +V-Q \right) \psi= i\hbar \partial_t \psi , \end{equation} where \begin{equation}\label{Q} Q\equiv -\frac{\hbar^2}{2m}\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} . \end{equation} Indeed, by inserting (\ref{psi}) into (\ref{schcl}) and multiplying by $\psi^*$, it is straightforward to check that the real part of the resulting equation leads to (\ref{HJ}), while the imaginary part leads to (\ref{cons}) with (\ref{v}). The similarity of the classical equation (\ref{schcl}) to the quantum Schr\"odinger equation \begin{equation}\label{6.1} \left( \frac{-\hbar^2 \nabla^2}{2m} +V \right) \psi= i\hbar \partial_t \psi \end{equation} is obvious and remarkable! However, there are also some differences. First, in the quantum case, the constant $\hbar$ is not arbitrary, but equal to the Planck constant divided by $2\pi$. The second difference is the fact that (\ref{schcl}) contains the $Q$-term that is absent in the quantum case (\ref{6.1}). Nevertheless, the physical interpretations are the same; in both cases, $|\psi({\bf x},t)|^2$ is the probability density of particle positions. On the other hand, we know that classical mechanics is fundamentally deterministic. This is encoded in the fact that Eq. (\ref{schcl}) alone does {\em not} provide a {\em complete} description of classical systems. Instead, one is also allowed to use Eq.
(\ref{v}), which says that {\em the velocity of the particle is determined whenever its position is determined}. The classical interpretation of this is that a particle {\em always} has a definite position and velocity and that the initial position and velocity uniquely determine the position and velocity at any time $t$. From this point of view, nothing seems more natural than to assume that an analogous statement is true also in the quantum case. This assumption represents the core of the Bohmian deterministic interpretation of QM. To see the most obvious consequence of such a classical-like interpretation of the Schr\"odinger equation, note that the Schr\"odinger equation (\ref{6.1}) corresponds to a Hamilton-Jacobi equation in which $V$ in (\ref{HJ}) is replaced by $V+Q$. This is why $Q$ is often referred to as the {\em quantum potential}. The quantum potential induces a quantum force. Thus, a quantum particle trajectory satisfies a modified Newton equation \begin{equation}\label{qnewt} m\frac{d^2{\bf x}}{dt^2}=-\nabla(V+Q) . \end{equation} Such modified trajectories can be used to explain otherwise strange-looking quantum phenomena (see, e.g., \cite{pas}), such as the two-slit experiment. Note that, so far, I have only discussed a single-particle wave function $\psi({\bf x},t)$. When one generalizes this to many-particle wave functions, an additional important feature of the Bohmian interpretation becomes apparent -- the nonlocality. However, I defer the explicit discussion of nonlocality to Sec.~\ref{L/NL}. \subsection{Random or deterministic?} As we have seen above, the analogy between classical statistical mechanics and QM can be used to interpret QM in a deterministic manner. However, this analogy does not prove that such a deterministic interpretation of QM is correct. Indeed, such deterministic quantum trajectories have never been directly observed.
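To make the notion of a guided trajectory concrete, here is a small numerical sketch of my own (not from the original text), using the standard freely spreading Gaussian packet with $\hbar=m=1$ and initial width $\sigma_0=1$, for which $\psi\propto e^{-x^2/4s_t}$ with $s_t=1+it/2$. The guidance velocity $v=(\hbar/m)\,{\rm Im}(\partial_x\psi/\psi)$ then carries the particle outward with the spreading packet:

```python
import numpy as np

# Bohmian trajectory in a free Gaussian packet (units with hbar = m = sigma0 = 1).
# psi(x,t) ∝ exp(-x**2/(4*s_t)), s_t = 1 + 1j*t/2; the x-independent prefactor
# cancels in the guidance formula v = Im(d_x psi / psi) = Im(-x/(2*s_t)).
def velocity(x, t):
    s_t = 1.0 + 0.5j * t
    return np.imag(-x / (2.0 * s_t))

# Integrate dx/dt = v(x,t) with a hand-rolled RK4 scheme, from x(0) = 1 to t = 2.
x, t, dt = 1.0, 0.0, 1e-3
while t < 2.0:
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    t += dt

sigma_ratio = np.sqrt(1.0 + t**2 / 4.0)   # packet width sigma(t)/sigma0
print(x, sigma_ratio)                      # the trajectory tracks the spreading width
```

The trajectory scales exactly with the packet width, $x(t)=x(0)\,\sigma(t)/\sigma_0$, a well-known property of Bohmian trajectories in a free Gaussian packet: the particle simply rides along with the spreading wave.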
On the other hand, the Bohmian interpretation can explain why these trajectories are practically unobservable \cite{duerr}, so the lack of experimental evidence does not disprove this interpretation. Most experts familiar with the Bohmian interpretation agree that its observable predictions are consistent with those of the standard interpretation, but they often prefer the standard interpretation because it seems simpler to them, since it does not contain Eq. (\ref{v}). I call this {\em technical simplicity}. On the other hand, the advocates of the Bohmian interpretation argue that this technical extension of QM makes QM simpler on the {\em conceptual} level. Nevertheless, it seems that most contemporary physicists consider technical simplicity more important than conceptual simplicity, which explains why most physicists prefer the standard purely probabilistic interpretation of QM. In fact, by applying a QM-motivated technical criterion of simplicity, it can be argued that even classical statistical mechanics represented by (\ref{schcl}) can be considered complete, in which case even classical mechanics can be interpreted as a purely probabilistic theory \cite{nikolcl}. But the fact is that nobody knows with certainty whether the fundamental laws of nature are probabilistic or deterministic. \section{QM implies that there is no reality besides the measured reality} \label{NOREAL} This is the central myth in QM, and many other myths are based on this one. Therefore, it deserves a particularly careful analysis. \subsection{QM as the ultimate scientific theory?} On one hand, the claim that ``there is no reality besides the measured reality" may seem to lie at the heart of the scientific method.
All scientists agree that the empirical evidence is the ultimate criterion for acceptance or rejection of any scientific theory, so, from this point of view, such a claim may seem rather natural. On the other hand, most scientists (apart from quantum physicists) do not find such a radical interpretation of the scientific method appealing. In particular, many consider such an interpretation too anthropomorphic (was there any reality before humans or living beings existed?), while the history of science has surprised us several times by discovering that we (the human beings) are not as important a part of the universe as we thought we were. Some quantum physicists believe that QM is so deep and fundamental that it is not just a science that merely applies already prescribed scientific methods, but {\em the} science that answers the fundamental ontological and epistemological questions on the deepest possible level. But is such a (certainly not modest) belief really founded? What are the true facts from which such a belief emerged? Let us see! \subsection{From a classical variable to a quantumlike representation} Consider a simple real physical {\em classical} variable $s$ that can attain only two different values, say $s_1=1$ and $s_2=-1$. By assumption, such a variable cannot change continuously. Nevertheless, a quantity that can still change continuously is the {\em probability} $p_n(t)$ that, at a given time $t$, the variable attains the value $s_n$. The probabilities must satisfy \begin{equation}\label{prob} p_1(t) + p_2(t)=1 . \end{equation} The average value of $s$ is given by \begin{equation}\label{av} \langle s \rangle = s_1p_1(t)+s_2p_2(t) . \end{equation} Although $s$ can attain only two values, $s_1=1$ and $s_2=-1$, the average value of $s$ can continuously change with time and attain an arbitrary value between $1$ and $-1$. The probabilities $p_n$ must be real and non-negative.
A simple formal way to ensure this is to write $p_n=\psi_n^*\psi_n$, where $\psi_n$ are auxiliary quantities that may be negative or even complex. It is also convenient to view the numbers $\psi_n$ (or $\psi_n^*$) as components of a {\em vector}. This vector can be represented either as a column \begin{equation}\label{col} |\psi\rangle \equiv \left( \begin{array}{c} \psi_1 \\ \psi_2 \end{array} \right) , \end{equation} or a row \begin{equation}\label{row} \langle \psi| \equiv (\psi_1^* , \psi_2^*) . \end{equation} The norm of this vector is \begin{equation}\label{norm} \langle \psi|\psi \rangle=\psi_1^*\psi_1+\psi_2^*\psi_2=p_1 + p_2 . \end{equation} Thus, the constraint (\ref{prob}) can be viewed as a constraint on the norm of the vector -- the norm must be unit. By introducing two special unit vectors \begin{equation}\label{basis} |\phi_1\rangle = |\!\uparrow\,\rangle \equiv \left( \begin{array}{c} 1 \\ 0 \end{array} \right) , \;\;\;\; |\phi_2\rangle = |\!\downarrow\,\rangle \equiv \left( \begin{array}{c} 0 \\ 1 \end{array} \right) , \end{equation} one finds that the probabilities can be expressed in terms of vector products as \begin{equation}\label{p12} p_1=|\langle \phi_1|\psi\rangle|^2 , \;\;\;\; p_2=|\langle \phi_2|\psi\rangle|^2 . \end{equation} It is also convenient to introduce a diagonal {\em matrix} $\sigma$ that has the values $s_n$ on the diagonal and zeros at all other places: \begin{equation}\label{sigma} \sigma\equiv \left( \begin{array}{cc} s_1 & 0 \\ 0 & s_2 \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) . \end{equation} The special vectors (\ref{basis}) have the property \begin{equation}\label{s12} \sigma |\phi_1\rangle=s_1|\phi_1\rangle , \;\;\;\; \sigma |\phi_2\rangle=s_2|\phi_2\rangle , \end{equation} which shows that (i) the vectors (\ref{basis}) are the eigenvectors of the matrix $\sigma$ and (ii) the eigenvalues of $\sigma$ are the allowed values $s_1$ and $s_2$.
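These relations are elementary linear algebra and can be checked in a few lines (a numerical sketch; the sample amplitudes $\psi_1=0.6$, $\psi_2=0.8i$ are my own arbitrary choice, picked to have unit norm):

```python
import numpy as np

# Check of (norm), (p12), and (av) for a sample two-component state.
psi = np.array([0.6, 0.8j])      # |psi>, sample amplitudes with unit norm
phi1 = np.array([1.0, 0.0])      # |up>   of (basis)
phi2 = np.array([0.0, 1.0])      # |down> of (basis)

norm = np.vdot(psi, psi).real            # (norm): equals p1 + p2
p1 = abs(np.vdot(phi1, psi))**2          # (p12)
p2 = abs(np.vdot(phi2, psi))**2
avg = 1.0 * p1 + (-1.0) * p2             # (av) with s1 = 1, s2 = -1
print(norm, p1, p2, avg)                 # -> 1, 0.36, 0.64, -0.28 (up to rounding)
```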
The average value (\ref{av}) can then be formally written as \begin{equation} \langle s \rangle =\langle \psi(t)|\sigma |\psi(t) \rangle . \end{equation} What does all this have to do with QM? First, a discrete spectrum of the allowed values is typical of quantum systems; after all, discrete spectra are often referred to as ``quantized" spectra, which, indeed, is why QM attained its name. (Note, however, that it would be misleading to claim that quantized spectra are the most fundamental property of quantum systems. Some quantum variables, such as the position of a particle, do not have quantized spectra.) A discrete spectrum contradicts some common prejudices about classical physical systems because such a spectrum does not allow a continuous change of the variable. Nevertheless, a discrete spectrum alone does not yet imply quantum physics. The formal representation of probabilities and average values in terms of complex numbers, vectors, and matrices as above is, of course, inspired by the formalism widely used in QM; yet, this representation by itself does not yet imply QM. The formal representation in terms of complex numbers, vectors, and matrices can still be interpreted in a classical manner. \subsection{From the quantumlike representation to quantum variables} The really interesting things that deviate significantly from the classical picture emerge when one recalls the following formal algebraic properties of vector spaces: Consider an arbitrary $2\times 2$ unitary matrix $U$, $U^{\dagger}U=1$. (In particular, $U$ may or may not be time dependent.) Consider a formal transformation \begin{eqnarray} & |\psi'\rangle =U|\psi\rangle , \;\;\;\; \langle\psi'|=\langle\psi|U^{\dagger} , & \nonumber \\ & \sigma'=U\sigma U^{\dagger} . & \end{eqnarray} (This transformation refers to {\em all} vectors $|\psi\rangle$ or $\langle\psi|$, including the eigenvectors $|\phi_1\rangle$ and $|\phi_2\rangle$.)
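Before interpreting this transformation, one can confirm numerically that it changes the components of all vectors and matrices while leaving the norm, the probabilities, and the spectrum intact (an illustrative sketch of mine, with a random unitary $U$ and the same sample state as before):

```python
import numpy as np

# Under |psi'> = U|psi>, sigma' = U sigma U^dagger, nothing physical changes.
rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)                      # a random 2x2 unitary matrix

psi = np.array([0.6, 0.8j])                 # sample state (my own choice)
phi1 = np.array([1.0, 0.0])
phi2 = np.array([0.0, 1.0])
sigma = np.diag([1.0, -1.0])

psi_p = U @ psi                             # primed (transformed) quantities
phi1_p, phi2_p = U @ phi1, U @ phi2
sigma_p = U @ sigma @ U.conj().T

print(np.vdot(psi_p, psi_p).real)                 # norm: still 1
print(abs(np.vdot(phi1_p, psi_p))**2,             # p1: still 0.36
      abs(np.vdot(phi2_p, psi_p))**2)             # p2: still 0.64
print(np.sort(np.linalg.eigvals(sigma_p).real))   # spectrum: still [-1, 1]
```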
In the theory of vector spaces, such a transformation can be interpreted as a new representation of the {\em same} vectors. Indeed, such a transformation does not change the physical properties, such as the norm of the vector (\ref{norm}), the probabilities (\ref{p12}), and the eigenvalues in (\ref{s12}), calculated in terms of the primed quantities $|\psi'\rangle$, $\langle\psi'|$, and $\sigma'$. This means that {\em the explicit representations}, such as those in (\ref{col}), (\ref{row}), (\ref{basis}), and (\ref{sigma}), {\em are irrelevant}. Instead, {\em the only physically relevant properties are abstract, representation-independent quantities, such as scalar products and the spectrum of eigenvalues}. What does this mean physically? One possibility is not to take it too seriously, regarding it as merely an artefact of an artificial vector-space representation of certain physical quantities. However, the history of theoretical physics teaches us that formal mathematical symmetries often have a deeper physical message. So let us try to take it seriously, to see where it will lead us. Since the representation is not relevant, it is natural to ask whether there are {\em other} matrices (apart from $\sigma$) that do {\em not} have the form of (\ref{sigma}), but still have the same spectrum of eigenvalues as $\sigma$. The answer is {\em yes}! But then we are in a very strange, if not paradoxical, position; we have started with a consideration of a {\em single} physical variable $s$ and arrived at a result that seems to suggest the existence of some {\em other}, equally physical, variables. As we shall see, this strange result lies at the heart of the (also strange) claim that there is no reality besides the measured reality. But let us not jump to conclusions too early! Instead, let us first study the mathematical properties of these additional physical variables.
Since an arbitrary $2\times 2$ matrix is defined by 4 independent numbers, each such matrix can be written as a linear combination of 4 independent matrices. One convenient choice of 4 independent matrices is \begin{eqnarray}\label{pauli} & 1 \equiv \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) , \;\;\;\; \sigma_1 \equiv \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) , & \nonumber \\ & \sigma_2 \equiv \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) , \;\;\;\; \sigma_3 \equiv \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) . & \end{eqnarray} The matrix $\sigma_3$ is nothing but the renamed matrix $\sigma$ in (\ref{sigma}). The matrices $\sigma_i$, known also as the Pauli matrices, are chosen so that they satisfy the familiar, symmetric-looking commutation relations \begin{equation}\label{comsig} [\sigma_j,\sigma_k]=2i\epsilon_{jkl}\sigma_l \end{equation} (where summation over repeated indices is understood). The matrices $\sigma_i$ are all hermitian, $\sigma_i^{\dagger}=\sigma_i$, which implies that their eigenvalues are real. Moreover, all three $\sigma_i$ have the eigenvalues $1$ and $-1$. The most explicit way to see this is to construct the corresponding eigenvectors. The eigenvectors of $\sigma_3$ are $|\!\uparrow_3\rangle \equiv |\!\uparrow\,\rangle$ and $|\!\downarrow_3\rangle \equiv |\!\downarrow\,\rangle$ defined in (\ref{basis}), with the eigenvalues $1$ and $-1$, respectively.
Analogously, it is easy to check that the eigenvectors of $\sigma_1$ are \begin{eqnarray}\label{basis1} & |\!\uparrow_1\rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c} 1 \\ 1 \end{array} \right)= \displaystyle\frac{ |\!\uparrow\,\rangle + |\!\downarrow\,\rangle }{\sqrt{2}} , & \nonumber \\ & |\!\downarrow_1\rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c} 1 \\ -1 \end{array} \right)= \displaystyle\frac{ |\!\uparrow\,\rangle - |\!\downarrow\,\rangle }{\sqrt{2}} , & \end{eqnarray} with the eigenvalues $1$ and $-1$, respectively, while the eigenvectors of $\sigma_2$ are \begin{eqnarray}\label{basis2} & |\!\uparrow_2\rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c} 1 \\ i \end{array} \right)= \displaystyle\frac{ |\!\uparrow\,\rangle + i|\!\downarrow\,\rangle }{\sqrt{2}} , & \nonumber \\ & |\!\downarrow_2\rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c} 1 \\ -i \end{array} \right)= \displaystyle\frac{ |\!\uparrow\,\rangle - i|\!\downarrow\,\rangle }{\sqrt{2}} , & \end{eqnarray} with the same eigenvalues $1$ and $-1$, respectively. The commutation relations (\ref{comsig}) are invariant under the unitary transformations $\sigma_i\rightarrow \sigma_i'=U\sigma_i U^{\dagger}$. This suggests that the commutation relations themselves are more physical than the explicit representation given by (\ref{pauli}). Indeed, the commutation relations (\ref{comsig}) can be recognized as the algebra of the generators of the group of rotations in 3 spatial dimensions. There is nothing quantum mechanical about that; in classical physics, matrices represent {\em operators}, that is, abstract objects that act on vectors by changing (in this case, rotating) them. However, in the usual formulation of classical physics, there is a clear distinction between operators and physical variables -- the latter are not represented by matrices.
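All of the algebraic statements above, the commutation relations (\ref{comsig}), hermiticity, the common spectrum $\{1,-1\}$, and the explicit eigenvectors (\ref{basis1}) and (\ref{basis2}), are straightforward to verify numerically (a check of mine):

```python
import numpy as np

# The Pauli matrices (pauli) and their algebraic properties.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Commutation relations (comsig): [sigma_j, sigma_k] = 2i eps_{jkl} sigma_l.
assert np.allclose(s1 @ s2 - s2 @ s1, 2j * s3)
assert np.allclose(s2 @ s3 - s3 @ s2, 2j * s1)
assert np.allclose(s3 @ s1 - s1 @ s3, 2j * s2)

# Hermiticity and the common spectrum {1, -1}.
for s in (s1, s2, s3):
    assert np.allclose(s, s.conj().T)
    assert np.allclose(np.linalg.eigvalsh(s), [-1, 1])

# Explicit eigenvectors (basis1) and (basis2), eigenvalues +1 and -1.
up1 = np.array([1, 1]) / np.sqrt(2)
dn1 = np.array([1, -1]) / np.sqrt(2)
up2 = np.array([1, 1j]) / np.sqrt(2)
dn2 = np.array([1, -1j]) / np.sqrt(2)
assert np.allclose(s1 @ up1, up1) and np.allclose(s1 @ dn1, -dn1)
assert np.allclose(s2 @ up2, up2) and np.allclose(s2 @ dn2, -dn2)
print("all Pauli identities check out")
```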
In contrast, in our formulation, the matrices $\sigma_i$ have a double role; mathematically, they are operators (because they act on vectors $|\psi\rangle$), while physically, they represent physical variables. From symmetry, it is natural to assume that all three $\sigma_i$ variables are equally physical. This assumption is one of the central assumptions of QM that makes it different from classical mechanics. For example, the spin operator of spin $\frac{1}{2}$ particles in QM is given by \begin{equation} S_i=\frac{\hbar}{2}\sigma_i , \end{equation} where the 3 labels $i=1,2,3$ correspond to the 3 space directions $x,y,z$, respectively. Thus, in the case of spin, it is clear that $\sigma_3\equiv \sigma_z$ cannot be more physical than $\sigma_1\equiv \sigma_x$ or $\sigma_2\equiv \sigma_y$, despite the fact that $\sigma_3$ corresponds to the initial physical variable with which we started our considerations. On the other hand, the fact that the new variables $\sigma_1$ and $\sigma_2$ emerged from the initial variable $\sigma_3$ suggests that, in some sense, these 3 variables are not really completely independent. Indeed, a nontrivial relation among them is encoded in the nontrivial commutation relations (\ref{comsig}). In QM, two variables are really independent only if their commutator vanishes. (For example, recall that, unlike (\ref{comsig}), the position operators $x_i$ and the momentum operators $p_i$ in QM satisfy $[x_i,x_j]=[p_i,p_j]=0$. In fact, this is the ultimate reason why the most peculiar aspects of QM are usually discussed using the example of spin variables, rather than position or momentum variables.) \subsection{From quantum variables to quantum measurements} Now consider the state \begin{equation}\label{50:50} |\psi\rangle = \frac{ |\!\uparrow\,\rangle + |\!\downarrow\,\rangle }{\sqrt{2}} .
\end{equation} In our initial picture, this state merely represents a situation in which there are $50:50$ chances that the system has the value of $s$ equal to either $s=1$ or $s=-1$. Indeed, if one performs a measurement to find out what that value is, one will obtain one and only one of these two values. By doing such a measurement, the observer gains new information about the system. For example, if the value turns out to be $s=1$, this gain of information can be described by a ``collapse" \begin{equation}\label{coll} |\psi\rangle \rightarrow |\!\uparrow\,\rangle , \end{equation} as the state $|\!\uparrow\,\rangle$ corresponds to a situation in which one is certain that $s=1$. At this level, there is nothing mysterious and nothing intrinsically quantum about this collapse. However, in QM, the state (\ref{50:50}) contains more information than said above! (Otherwise, there would be no physical difference between the two different states in (\ref{basis1}).) From (\ref{basis1}), we see that {\em the ``uncertain" state (\ref{50:50}) corresponds to a situation in which one is absolutely certain that the value of the variable $\sigma_1$ is equal to $1$}. On the other hand, if one performs the measurement of $s=\sigma_3$ and obtains the value as in (\ref{coll}), then the postmeasurement state \begin{equation}\label{50:50.2} |\!\uparrow\,\rangle = \displaystyle\frac{ |\!\uparrow_1\rangle + |\!\downarrow_1\rangle }{\sqrt{2}} \end{equation} implies that the value of $\sigma_1$ is no longer known with certainty. This means that, in some way, the measurement of $\sigma_3$ destroys the {\em information} on $\sigma_1$. But the crucial question is not whether the information on $\sigma_1$ has been destroyed, but rather whether the {\em value itself} of $\sigma_1$ has been destroyed. In other words, is it possible that, all the time, irrespective of the performed measurements, $\sigma_1$ has the value $1$?
The fact is that if this were the case, then it would contradict the predictions of QM! The simplest way to see this is to observe that, after the first measurement with the result (\ref{coll}), one can perform a new measurement, in which one measures $\sigma_1$. From (\ref{50:50.2}), one sees that there are $50\%$ chances that the result of the new measurement will give the value $-1$. That is, there is a $0.5\cdot 0.5=0.25$ probability that the sequence of the two measurements will correspond to the collapses \begin{equation}\label{2coll} |\!\uparrow_1\rangle \rightarrow |\!\uparrow\,\rangle \rightarrow |\!\downarrow_1\rangle . \end{equation} In (\ref{2coll}), the initial value of $\sigma_1$ is $1$, while the final value of $\sigma_1$ is $-1$. Thus, QM predicts that the value of $\sigma_1$ may change during the process of the two measurements. Since the predictions of QM are in agreement with experiments, we are forced to accept this as a fact. This demonstrates that QM is {\em contextual}, that is, that the measured values depend on the context, i.e., on the measurement itself. This property by itself is still not intrinsically quantum; in classical physics, too, the result of a measurement may depend on the measurement. Indeed, in classical mechanics there is nothing mysterious about it; there, a measurement is a physical process that, as any other physical process, may influence the values of the physical variables. But can we talk about the value of a variable irrespective of measurements? From a purely experimental point of view, we certainly cannot. Here, however, we are talking about theoretical physics. So does the theory allow one to talk about that? Classical theory certainly does. But what about QM? If {\em all} theoretical knowledge about the system is described by the state $|\psi\rangle$, then quantum theory does {\em not} allow that! This is the fact.
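The two-measurement sequence (\ref{2coll}) is easy to check with elementary linear algebra. The following sketch (my own illustration in numpy, not part of the original argument) represents the $\sigma_3$ and $\sigma_1$ eigenstates as 2-component vectors, verifies a commutation relation of the type (\ref{comsig}), and reproduces the $0.25$ probability:

```python
import numpy as np

# Pauli matrices; nonvanishing commutators [sigma_i, sigma_j] = 2i eps_ijk sigma_k
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
assert np.allclose(s1 @ s2 - s2 @ s1, 2j * s3)

# eigenvectors: |up>, |down> of sigma_3 and |up_1>, |down_1> of sigma_1
up, down = np.array([1, 0], complex), np.array([0, 1], complex)
up1, down1 = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)

# start certain that sigma_1 = +1
psi = up1
p_up = abs(np.vdot(up, psi))**2         # probability to measure sigma_3 = +1
psi = up                                # collapse after obtaining +1
p_down1 = abs(np.vdot(down1, psi))**2   # probability to then measure sigma_1 = -1

# joint probability of the sequence |up_1> -> |up> -> |down_1>
assert np.isclose(p_up * p_down1, 0.25)
```

The final value of $\sigma_1$ thus differs from its initial value with probability $1/4$, exactly as argued above.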
But, can we be sure that we shall never discover some more complete theory than the current form of QM, such that this more complete theory talks about theoretical values of variables irrespective of measurements? From the example above, it is not possible to draw such a conclusion. Nevertheless, physicists are trying to construct more clever examples from which such a conclusion could be drawn. Such examples are usually referred to as ``no-hidden-variable theorems". But what do these theorems really prove? Let us see! \subsection{From quantum measurements to no-hidden-variable theorems} To find such an example, consider a system consisting of {\em two independent} subsystems, such that each subsystem is characterized by a variable that can attain only two values, $1$ and $-1$. The word ``independent" (which will turn out to be the crucial word) means that the corresponding operators commute and that the Hamiltonian does not contain an interaction term between these two variables. For example, this can be a system with two free particles, each having spin $\frac{1}{2}$. In this case, the state $|\!\uparrow\,\rangle \otimes |\!\downarrow\,\rangle \equiv |\!\uparrow\,\rangle |\!\downarrow\,\rangle$ corresponds to the state in which the first particle is in the state $|\!\uparrow\,\rangle$, while the second particle is in the state $|\!\downarrow\,\rangle$. (The variables of the first and the second subsystem are represented by the operators $\sigma_j\otimes 1$ and $1\otimes\sigma_k$, respectively, which indeed commute.) Instead of (\ref{50:50}), consider the state \begin{equation}\label{EPR} |\psi\rangle = \frac{ |\!\uparrow\,\rangle |\!\downarrow\,\rangle + |\!\downarrow\,\rangle |\!\uparrow\,\rangle }{\sqrt{2}} . \end{equation} This state constitutes the basis for the famous Einstein-Podolsky-Rosen-Bell paradox.
This state says that if the first particle is found in the state $|\!\uparrow\,\rangle$, then the second particle will be found in the state $|\!\downarrow\,\rangle$, and vice versa. In other words, the second particle will always take a direction opposite to that of the first particle. In an oversimplified version of the paradox, one can wonder how the second particle knows about the state of the first particle, given the assumption that there is no interaction between the two particles. However, this oversimplified version of the paradox can be easily resolved in classical terms by observing that the particles do not necessarily need to interact, because they could have had their (mutually opposite) values all the time even before the measurement, while the only role of the measurement was to reveal these values. The case of a single particle discussed through Eqs.~(\ref{50:50})-(\ref{2coll}) suggests that a true paradox can only be obtained when one assumes that the variable corresponding to $\sigma_1$ or $\sigma_2$ is also a physical variable. Indeed, this is what has been obtained by Bell \cite{bell}. The paradox can be expressed in terms of an {\em inequality} that the correlation functions among different variables must obey if the measurement is merely a revelation of the values that the noninteracting particles had before the measurement. (For more detailed pedagogic expositions, see \cite{merm1,laloe}.) The predictions of QM turn out to be in contradiction with this Bell inequality. The experiments violate the Bell inequality and confirm the predictions of QM (see \cite{genov} for a recent review). This is the fact! However, instead of presenting a detailed derivation of the Bell inequality, for pedagogical purposes I shall present a simpler example that does not involve inequalities, but leads to the same physical implications.
The first no-hidden-variable theorem without inequalities was found by Greenberger, Horne, and Zeilinger \cite{GHZ} (for pedagogic expositions, see \cite{merm2,jord,laloe}), for a system with 3 particles. However, the simplest such theorem is the one discovered by Hardy \cite{hardy} (see also \cite{hardy-more}) that, like the one of Bell, involves only 2 particles. Although pedagogic expositions of the Hardy result also exist \cite{jord,merm3,laloe}, it still seems not to be widely known in the physics community, so here I present a very simple exposition of it (so simple that one may really wonder why it was not discovered earlier). Instead of (\ref{EPR}), consider a slightly more complicated state \begin{equation}\label{H1} |\psi\rangle = \frac{ |\!\downarrow\,\rangle |\!\downarrow\,\rangle + |\!\uparrow\,\rangle |\!\downarrow\,\rangle + |\!\downarrow\,\rangle |\!\uparrow\,\rangle }{\sqrt{3}} . \end{equation} Using (\ref{basis1}), we see that this state can also be written in two alternative forms as \begin{equation}\label{H2} |\psi\rangle = \frac{ \sqrt{2} |\!\downarrow\,\rangle |\!\uparrow_1\rangle + |\!\uparrow\,\rangle |\!\downarrow\,\rangle }{\sqrt{3}} , \end{equation} \begin{equation}\label{H3} |\psi\rangle = \frac{ \sqrt{2} |\!\uparrow_1\rangle |\!\downarrow\,\rangle + |\!\downarrow\,\rangle |\!\uparrow\,\rangle }{\sqrt{3}} . \end{equation} From these 3 forms of $|\psi\rangle$, we can infer the following: \newline (i) From (\ref{H1}), at least one of the particles is in the state $|\!\downarrow\,\rangle$. \newline (ii) From (\ref{H2}), if the first particle is in the state $|\!\downarrow\,\rangle$, then the second particle is in the state $|\!\uparrow_1\rangle$. \newline (iii) From (\ref{H3}), if the second particle is in the state $|\!\downarrow\,\rangle$, then the first particle is in the state $|\!\uparrow_1\rangle$.
\newline Now, by classical reasoning, from (i), (ii), and (iii) one infers that \newline (iv) It is impossible that both particles are in the state $|\!\downarrow_1\rangle$. \newline But is (iv) consistent with QM? If it is, then $\langle \downarrow_1\!|\langle \downarrow_1\!|\psi\rangle$ must be zero. However, using (\ref{H2}), $\langle \downarrow_1\!|\!\uparrow_1\rangle =0$, and an immediate consequence of (\ref{basis1}), $\langle \downarrow_1\!|\!\uparrow\rangle = -\langle \downarrow_1\!|\!\downarrow\rangle =1/\sqrt{2}$, we see that \begin{equation} \langle \downarrow_1\!|\langle \downarrow_1\!|\psi\rangle = \frac{ \langle \downarrow_1\!|\!\uparrow\,\rangle \langle \downarrow_1\!|\!\downarrow\,\rangle }{\sqrt{3}} =\frac{-1}{2\sqrt{3}} , \end{equation} which is not zero. Therefore, (iv) is wrong in QM; there is a finite probability for both particles to be in the state $|\!\downarrow_1\rangle$. This is the fact! But what exactly is wrong with the reasoning that led to (iv)? The fact is that there are {\em several}(!) possibilities. Let us briefly discuss them. \subsection{From no-hidden-variable theorems to physical interpretations} One possibility is that classical logic cannot be used in QM. Indeed, this motivated the development of a branch of QM called {\em quantum logic}. However, most physicists (as well as mathematicians) consider a deviation from classical logic too radical. Another possibility is that only one matrix, say $\sigma_3$, corresponds to a genuine physical variable. In this case, the true state of a particle can be $|\!\uparrow\,\rangle$ or $|\!\downarrow\,\rangle$, but not a state such as $|\!\uparrow_1\rangle$ or $|\!\downarrow_1\rangle$. Indeed, such a possibility corresponds to our starting picture, in which there is only one physical variable called $s$ that was later artificially represented by the matrix $\sigma\equiv\sigma_3$. Such an interpretation may seem reasonable, at least for some physical variables.
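Before discussing the interpretations further, note that the Hardy computation above is easy to verify numerically. The sketch below (my own illustration with numpy, not part of the original argument) builds the state (\ref{H1}), checks that the subsystem operators $\sigma_j\otimes 1$ and $1\otimes\sigma_k$ commute, and reproduces the amplitude $-1/(2\sqrt{3})$, i.e., a probability $1/12$ that both particles are found in the state $|\!\downarrow_1\rangle$:

```python
import numpy as np

up, down = np.array([1, 0], complex), np.array([0, 1], complex)
down1 = (up - down) / np.sqrt(2)   # <down_1|up> = 1/sqrt(2), <down_1|down> = -1/sqrt(2)

# the Hardy state (|dd> + |ud> + |du>)/sqrt(3)
psi = (np.kron(down, down) + np.kron(up, down) + np.kron(down, up)) / np.sqrt(3)

# the two subsystems are independent: sigma_j x 1 and 1 x sigma_k commute
s1 = np.array([[0, 1], [1, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
A, B = np.kron(s1, np.eye(2)), np.kron(np.eye(2), s3)
assert np.allclose(A @ B, B @ A)

# amplitude for BOTH particles in |down_1>: -1/(2 sqrt(3)), probability 1/12
amp = np.vdot(np.kron(down1, down1), psi)
assert np.isclose(amp, -1 / (2 * np.sqrt(3)))
assert np.isclose(abs(amp)**2, 1 / 12)
```

So the classically derived claim (iv) fails, exactly as computed in the text.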
However, if $\sigma_3$ corresponds to the spin in the $z$-direction, then it does not seem reasonable that the spin in the $z$-direction is more physical than that in the $x$-direction or the $y$-direction. Singling out one preferred variable breaks the symmetry, which, at least in some cases, does not seem reasonable. The third possibility is that one should distinguish between the claims that ``the system {\em has} a definite value of a variable" and ``the system is {\em measured} to have a definite value of a variable". This interpretation of QM is widely accepted. According to this interpretation, the claims (i)-(iii) refer only to the results of measurements. These claims assume that $\sigma=\sigma_3$ is measured for at least one of the particles. Consequently, the claim (iv) is valid only if this assumption is fulfilled. In contrast, if $\sigma_3$ is not measured at all, then it is possible to measure both particles to be in the state $|\!\downarrow_1\rangle$. Thus, the paradox that (iv) seems to be both correct and incorrect is merely a manifestation of quantum contextuality. In fact, {\em all} no-hidden-variable theorems can be viewed as manifestations of quantum contextuality. However, there are at least two drastically different versions of this quantum-contextuality interpretation. In the first version, it does not make sense even to talk about the values that are not measured. I refer to this version as the {\em orthodox} interpretation of QM. (The orthodox interpretation can be further divided into a {\em hard} version, in which it is claimed that such unmeasured values simply do not exist, and a {\em soft} version, according to which such values perhaps might exist, but one should not talk about them because one cannot know about the existence of something that is not measured.) In the second version, the variables have some values even when they are not measured, but the process of measurement is a physical process that may influence these values.
The second version assumes that the standard formalism of QM is not complete, i.e., that a more accurate description of physical systems than that provided by standard QM is possible. According to this version, ``no-hidden-variable" theorems (such as the one of Bell or Hardy) do not really prove that hidden variables cannot exist, because these theorems {\em assume} that there are no interactions between particles, while this assumption may be violated at the level of hidden variables. The most frequent argument for the validity of this assumption is the locality principle, which then must be violated by any hidden-variable completion of standard QM. However, since the assumption of the absence of interaction between particles is much more general than the assumption that it is the locality principle that forbids such an interaction, and, at the same time, since the discussion of the locality principle deserves a separate section, I delegate the detailed discussion of locality to the next section, Sec.~\ref{L/NL}. Most pragmatic physicists seem to (often tacitly) accept the soft-orthodox interpretation. From a pragmatic point of view, such an attitude seems rather reasonable. However, physicists who want to understand QM at the deepest possible level can hardly be satisfied with the soft version of the orthodox interpretation. They are forced either to adopt the hard-orthodox interpretation or to think about the alternatives (like hidden variables, preferred variables, or quantum logic). Among those physicists who cope with the foundations of QM at the deepest level, the hard-orthodox point of view seems to dominate. (If it did not dominate, then I would not call it ``orthodox".) However, even the advocates of the hard-orthodox interpretation do not really agree on what exactly this interpretation means. Instead, there are a number of subvariants of the hard-orthodox interpretation that differ in the fundamental ontology of nature.
Some of them are rather anthropomorphic, attributing a fundamental role to observers. However, most of them attempt to avoid anthropomorphic ontology, for example by proposing that the concept of {\em information} on reality is more fundamental than the concept of reality itself \cite{zeil}, or that reality is relative or ``relational" \cite{rov1,rov2}, or that correlations among variables exist, while the variables themselves do not \cite{mermcor}. Needless to say, all such versions of the hard-orthodox interpretation necessarily involve deep (and dubious) philosophical assumptions and postulates. To avoid philosophy, an alternative is to adopt a softer version of the orthodox interpretation (see, e.g., \cite{medina}). The weakness of the soft versions is that they do not even try to answer the fundamental questions one may ask, though their advocates often argue that these questions are not physical, but rather metaphysical or philosophical. Let us also discuss in more detail the possibility that one variable is more physical than the others, i.e., that only this preferred variable corresponds to the genuine physical reality. Of course, it does not seem reasonable that spin in the $z$-direction is more physical than that in the $x$- or the $y$-direction. However, it is not so unreasonable that, for example, the particle position is a more fundamental variable than the particle momentum or energy. (After all, most physicists will agree that this is so in classical mechanics, despite the fact that the Hamiltonian formulation of classical mechanics treats position and momentum on an equal footing.) Indeed, in practice, all quantum measurements eventually reduce to an observation of the {\em position} of something (such as the needle of the measuring apparatus).
In particular, the spin of a particle is measured by a Stern-Gerlach apparatus, in which an inhomogeneous magnetic field deflects particles with one orientation of the spin to one side and those with the opposite orientation to the other. Thus, one does not really observe the spin itself, but rather the position of the particle. In general, assume that one wants to measure the value of the variable described by the operator $A$. It is convenient to introduce an orthonormal basis $\{ |\psi_a\rangle \}$ such that each $|\psi_a\rangle$ is an eigenvector of the operator $A$ with the eigenvalue $a$. The quantum state can be expanded in this basis as \begin{equation} |\psi\rangle = \sum_a c_a |\psi_a\rangle , \end{equation} where (assuming that the spectrum of $A$ is not degenerate) $|c_a|^2$ is the probability that the variable will be measured to have the value $a$. To perform a measurement, one must introduce the degrees of freedom of the measuring apparatus, which, before the measurement, is described by some state $|\phi\rangle$. In an ideal measurement, the interaction between the measured degrees of freedom and the degrees of freedom of the measuring apparatus must be such that the total quantum state exhibits entanglement between these two degrees of freedom, so that the total state takes the form \begin{equation}\label{meas} |\Psi\rangle = \sum_a c_a |\psi_a\rangle |\phi_a\rangle , \end{equation} where the $|\phi_a\rangle$ are orthonormal states of the measuring apparatus. Thus, whenever the measuring apparatus is found in the state $|\phi_a\rangle$, one can be certain (at least theoretically) that the state of the measured degree of freedom is given by $|\psi_a\rangle$. Moreover, from (\ref{meas}) it is clear that the probability for this to happen is equal to $|c_a|^2$, the same probability as that without introducing the measuring apparatus.
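The entangling interaction leading to (\ref{meas}) can be made concrete for the smallest possible case: a single qubit measured by a one-qubit ``apparatus". In the sketch below (my own illustration; the CNOT gate and the coefficients $0.6, 0.8$ are hypothetical stand-ins for an ideal measurement interaction), the apparatus probabilities come out equal to $|c_a|^2$:

```python
import numpy as np

# measured degree of freedom: |psi> = c_0 |psi_0> + c_1 |psi_1>
c = np.array([0.6, 0.8], dtype=complex)        # illustrative coefficients
phi = np.array([1, 0], dtype=complex)          # apparatus ready state |phi>

# an ideal measurement interaction entangles system and apparatus,
# |psi_a>|phi> -> |psi_a>|phi_a>; for qubits, a CNOT gate does exactly this
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
Psi = CNOT @ np.kron(c, phi)                   # total state sum_a c_a |psi_a>|phi_a>

# probability of finding the apparatus in |phi_a> equals |c_a|^2,
# the same as without introducing the apparatus
p_phi0 = abs(Psi[0])**2 + abs(Psi[2])**2
p_phi1 = abs(Psi[1])**2 + abs(Psi[3])**2
assert np.isclose(p_phi0, abs(c[0])**2)
assert np.isclose(p_phi1, abs(c[1])**2)

# perfect correlation: apparatus in |phi_a> implies system in |psi_a>
assert np.isclose(abs(Psi[1]), 0) and np.isclose(abs(Psi[2]), 0)
```

The off-diagonal components vanish, so finding the apparatus in $|\phi_a\rangle$ indeed certifies the system state $|\psi_a\rangle$.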
Although the theory of quantum measurement outlined above is usually not discussed in practical textbooks on QM, it is actually a part of the standard form of quantum theory and does not depend on the interpretation. (For modern practical introductory lectures on QM in which the theory of measurement is included, see, e.g., \cite{lectqm}.) What this theory of quantum measurement suggests is that, in order to reproduce the statistical predictions of standard QM, it is not really necessary that all hermitian operators called ``observables" correspond to genuine physical variables. Instead, it is sufficient that only one or a few preferred variables that are really measured in practice correspond to genuine physical variables, while the rest of the ``observables" are merely hermitian operators that do not correspond to true physical reality \cite{naive}. This is actually the reason why the Bohmian interpretation discussed in the preceding section, in which the preferred variables are the particle positions, is able to reproduce the quantum predictions for {\em all} quantum observables, such as momentum, energy, spin, etc. Thus, the Bohmian interpretation combines two possibilities discussed above: one is the existence of a preferred variable (the particle position), and the other is the hidden variable (the particle position existing even when it is not measured). To conclude this section, QM does not prove that there is no reality besides the measured reality. Instead, there are several alternatives. In particular, such reality may exist, but then it must be contextual (i.e., it must depend on the measurement itself). The simplest (although not necessary) way to introduce such reality is to postulate it only for one or a few preferred quantum observables. \section{QM is local/nonlocal} \label{L/NL} \subsection{Formal locality of QM} Classical mechanics is local.
This means that a physical quantity at some position ${\bf x}$ and time $t$ may be influenced by another physical quantity only if the latter is attached to the same ${\bf x}$ and $t$. For example, two spatially separated local objects cannot communicate directly, but only via a third physical object that can move from one object to the other. In the case of $n$ particles, the requirement of locality can be written as a requirement that the Hamiltonian $H({\bf x}_1, \ldots,{\bf x}_n,{\bf p}_1, \ldots,{\bf p}_n)$ should have the form \begin{equation}\label{hcl} H=\sum_{l=1}^{n} H_l({\bf x}_l,{\bf p}_l) . \end{equation} In particular, a nontrivial 2-particle potential of the form $V({\bf x}_1-{\bf x}_2)$ is forbidden by the principle of locality. Note that such a potential is {\em not} forbidden in Newtonian classical mechanics. However, the known fundamental interactions are relativistic interactions that do not allow such instantaneous communication. At best, such a nonlocal potential can be used as an approximation valid when the particles are sufficiently close to each other and their velocities are sufficiently small. The quantum Hamiltonian is obtained from the corresponding classical Hamiltonian by a replacement of the classical positions and momenta by the corresponding quantum operators. Thus, the quantum Hamiltonian takes the same local form as the classical one. Since the Schr\"odinger equation \begin{equation}\label{schloc} H|\psi(t)\rangle =i\hbar\partial_t|\psi(t)\rangle \end{equation} is based on this local Hamiltonian, any change of the wave function induced by the Schr\"odinger equation (\ref{schloc}) is local. This is the fact. For this reason, it is often claimed that QM is local to the same extent as classical mechanics is. \subsection{(Non)locality and hidden variables} The principle of locality is often used as the crucial argument against hidden variables in QM.
For example, consider two particles entangled such that their wave function (with the spatial and temporal dependence of the wave functions suppressed) takes the form (\ref{H1}). Such a form of the wave function can be kept even when the particles become spatially separated. As we have seen, the fact that (iv) is inconsistent with QM can be interpreted as QM contextuality. However, we have seen that there are two versions of QM contextuality -- the orthodox one and the hidden-variable one. The principle of locality excludes the hidden-variable version of QM contextuality, because this version requires interactions between the two particles, which are impossible when the particles are (sufficiently) spatially separated. However, it is important to emphasize that the principle of locality is an {\em assumption}. We know that the Schr\"odinger equation satisfies this principle, but we do not know if this principle must be valid for {\em any} physical theory. In particular, subquantum hidden variables might not satisfy this principle. Physicists often object that nonlocal interactions contradict the theory of relativity. However, there are several responses to such objections. First, the theory of relativity is a theory just as any other -- nobody can be certain that it is absolutely correct at all levels (including the unexplored ones). Second, nonlocality by itself does not necessarily contradict relativity. For example, a local relativistic-covariant field theory (see Sec.~\ref{QFT}) can be defined by an action of the form $\int d^4x\, {\cal L}(x)$, where ${\cal L}(x)$ is the local Lagrangian density transforming (under arbitrary coordinate transformations) as a scalar density. A nonlocal action may have the form $\int d^4x \int d^4x'\,{\cal L}(x,x')$. If ${\cal L}(x,x')$ transforms as a bi-scalar density, then such a nonlocal action is relativistically covariant.
Third, the nonlocality needed to explain quantum contextuality requires {\em instantaneous} communication, which is often claimed to be excluded by the theory of relativity, as the velocity of light is the maximal possible velocity allowed by that theory. However, this is actually a myth about the theory of relativity; this theory by itself does {\em not} exclude faster-than-light communication. It excludes it {\em only} if some {\em additional} assumptions on the nature of matter are used. The best-known counterexamples are {\em tachyons} \cite{tachyon} -- hypothetical particles with negative mass squared that move faster than light and fully respect the theory of relativity. Some physicists argue that faster-than-light communication contradicts the principle of causality, but this is also nothing but a myth \cite{liberati,nikolcaus}. (As shown in \cite{nikolcaus}, this myth can be traced back to one of the most fundamental myths in physics, according to which time fundamentally differs from space by having a property of ``lapsing".) Finally, some physicists find it absurd, or at least difficult, to conceive of physical laws in which information between distant objects is transferred instantaneously. It is ironic that they probably did not have such mental problems many years ago, when they did not yet know about the theory of relativity but {\em did} know about the instantaneous Newton law of gravitation and the instantaneous Coulomb law of electrostatics. To conclude this paragraph: hidden variables, if they exist, must violate the principle of locality, which may or may not violate the theory of relativity. To illustrate the nonlocality of hidden variables, I consider the example of the Bohmian interpretation.
For a many-particle wave function $\Psi({\bf x}_1,\ldots,{\bf x}_n,t)$ that describes $n$ particles with mass $m$, it is straightforward to show that the generalization of (\ref{Q}) is \begin{equation}\label{Qn} Q({\bf x}_1,\ldots,{\bf x}_n,t) = -\frac{\hbar^2}{2m} \displaystyle\frac{\displaystyle\sum_{l=1}^{n} \nabla^2_l \sqrt{\rho({\bf x}_1,\ldots,{\bf x}_n,t)}} {\sqrt{\rho({\bf x}_1,\ldots,{\bf x}_n,t)}} . \end{equation} When the wave function exhibits entanglement, i.e., when $\Psi({\bf x}_1,\ldots,{\bf x}_n,t)$ is {\em not} a local product of the form $\psi_1({\bf x}_1,t)\cdots\psi_n({\bf x}_n,t)$, then $Q({\bf x}_1,\ldots,{\bf x}_n,t)$ is not of the form $\sum_{l=1}^{n}Q_l({\bf x}_l,t)$ (compare with (\ref{hcl})). In the Bohmian interpretation, this means that $Q$ is a quantum potential which (in the case of entanglement) describes a nonlocal interaction. For attempts to formulate the nonlocal Bohmian interaction in a relativistically covariant way, see, e.g., \cite{durpra1,durpra2, hort,nikolfpl3,nikddw,nikmft}. \subsection{(Non)locality without hidden variables?} Concerning the issue of locality, the most difficult question is whether QM itself, without hidden variables, is local or not. The fact is that there is no consensus among experts on this issue. It is known that quantum effects, such as the Einstein-Podolsky-Rosen-Bell effect or the Hardy effect, cannot be used to transmit information. This is because the choice of the state to which the system will collapse is {\em random} (as we have seen, this randomness may be either fundamental or effective), so one cannot choose to transmit the message one wants. In this sense, QM is local. On the other hand, the correlation among different subsystems is nonlocal, in the sense that one subsystem is correlated with another subsystem in a way that cannot be explained in a local manner in terms of properties preexisting before the measurement.
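The nonseparability of the quantum potential (\ref{Qn}) for an entangled wave function can be checked symbolically. In the sketch below (my own example with a hypothetical entangled Gaussian, $n=2$ particles, units $\hbar=m=1$; the wave function is real and positive, so $\sqrt{\rho}=\Psi$), a separable $Q=Q_1(x_1)+Q_2(x_2)$ would have a vanishing mixed derivative, which is not the case here:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# a hypothetical entangled (non-product) two-particle wave function
psi = sp.exp(-(x1 - x2)**2 / 2 - (x1 + x2)**2 / 4)

# quantum potential Q = -(1/2) (nabla_1^2 + nabla_2^2) sqrt(rho) / sqrt(rho),
# with sqrt(rho) = psi since psi is real and positive
Q = sp.simplify(-(sp.diff(psi, x1, 2) + sp.diff(psi, x2, 2)) / (2 * psi))

# a separable Q(x1, x2) = Q_1(x1) + Q_2(x2) would give d^2 Q / dx1 dx2 = 0;
# here the mixed derivative does not vanish -> nonlocal interaction
mixed = sp.simplify(sp.diff(Q, x1, x2))
assert mixed != 0
```

For this particular wave function the mixed derivative works out to the constant $3/2$, so the two particles influence each other through $Q$ no matter how far apart their positions are.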
Thus, there are good reasons for the claim that QM is {\em not} local. Owing to the nonlocal correlations discussed above, some physicists claim that it is a fact that QM is not local. Nevertheless, many experts do not agree with this claim, so it cannot be regarded as a fact. Of course, at the conceptual level, it is very difficult to conceive how nonlocal correlations can be explained without nonlocality. Nevertheless, hard-orthodox quantum physicists are trying to do that (see, e.g., \cite{zeil,rov1,rov2,mermcor}). In order to save the locality principle, they, in one way or another, deny the existence of objective reality. Without objective reality, there is nothing to be objectively nonlocal. What remains is the wave function, which satisfies a local Schr\"odinger equation and does not represent reality, but only the information on reality, while reality itself does not exist in an objective sense. Many physicists (including myself) have problems with thinking about information on reality without objective reality itself, but this does not prove that such thinking is incorrect. To conclude, the fact is that, so far, there has been no final proof with which most experts would agree that QM is either local or nonlocal. (For the most recent attempt to establish such a consensus, see \cite{niknonloc}.) There is only agreement that if hidden variables (that is, objective physical properties existing even when they are not measured) exist, then they must be nonlocal. Some experts consider this a proof that they do not exist, whereas other experts consider this a proof that QM is nonlocal. They consider these as proofs because they are reluctant to give up either the principle of locality or the existence of objective reality. Nevertheless, more open-minded (some will say -- too open-minded) people admit that neither of these two ``crazy" possibilities (nonlocality and absence of objective reality) should be {\em a priori} excluded.
\section{There is a well-defined relativistic QM} \label{RQM} \subsection{Klein-Gordon equation and the problem of probabilistic interpretation} The free Schr\"odinger equation \begin{equation}\label{sch} \frac{-\hbar^2 \nabla^2}{2m} \psi({\bf x},t)= i\hbar \partial_t \psi({\bf x},t) \end{equation} is not consistent with the theory of relativity. In particular, it treats space and time in completely different ways, which contradicts the principle of relativistic covariance. Eq.~(\ref{sch}) corresponds only to a nonrelativistic approximation of QM. What is the corresponding relativistic equation from which (\ref{sch}) can be derived as an approximation? Clearly, the relativistic equation must treat space and time on an equal footing. For that purpose, it is convenient to choose units in which the velocity of light is $c=1$. To simplify the equations further, it is also convenient to choose units in which $\hbar=1$. Introducing coordinates $x^{\mu}$, $\mu=0,1,2,3$, where $x^0=t$, while $x^1,x^2,x^3$ are the space coordinates, the simplest relativistic generalization of (\ref{sch}) is the Klein-Gordon equation \begin{equation}\label{KG} (\partial^{\mu}\partial_{\mu}+m^2)\psi(x)=0, \end{equation} where $x=\{ x^{\mu} \}$, summation over repeated indices is understood, $\partial^{\mu}\partial_{\mu}=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}$, and $\eta^{\mu\nu}$ is the diagonal metric tensor with $\eta^{00}=1$, $\eta^{11}=\eta^{22}=\eta^{33}=-1$. However, the existence of this relativistic wave equation does {\em not} imply that relativistic QM exists. This is because there are {\em interpretational} problems with this equation. In nonrelativistic QM, the quantity $\psi^*\psi$ is the probability density, having the property \begin{equation}\label{prob=1} \frac{d}{dt}\int d^3x\, \psi^*\psi =0, \end{equation} which can be easily derived from the Schr\"odinger equation (\ref{sch}).
This property is crucial for the consistency of the probabilistic interpretation, because the integral $\int d^3x\, \psi^*\psi$ is the sum of all probabilities for the particle to be at all possible places, which must be equal to $1$ at each time $t$. If $\psi$ is normalized so that this integral is equal to $1$ at $t=0$, then (\ref{prob=1}) ensures that it is equal to $1$ at each $t$. However, when $\psi$ satisfies (\ref{KG}) instead of (\ref{sch}), then the consistency requirement (\ref{prob=1}) is not fulfilled. Consequently, {\em in relativistic QM based on (\ref{KG}), $\psi^*\psi$ cannot be interpreted as the probability density}. In order to solve this problem, one can introduce the Klein-Gordon current \begin{equation}\label{cur} j_{\mu}=i\psi^* \!\stackrel{\leftrightarrow\;}{\partial_{\mu}}\! \psi , \end{equation} where $a \!\stackrel{\leftrightarrow\;}{\partial_{\mu}}\! b \equiv a(\partial_{\mu}b) -(\partial_{\mu} a)b$. Using (\ref{KG}), one can show that this current satisfies the local conservation law \begin{equation} \partial_{\mu}j^{\mu}=0, \end{equation} which implies that \begin{equation}\label{prob=1rel} \frac{d}{dt}\int d^3x\, j^0 =0 . \end{equation} Eq.~(\ref{prob=1rel}) suggests that, in the relativistic case, it is $j^0$ that should be interpreted as the probability density. More generally, if $\psi_1(x)$ and $\psi_2(x)$ are two solutions of (\ref{KG}), then the scalar product defined as \begin{equation}\label{scalprod} (\psi_1,\psi_2)=i\int d^3x \, \psi_1^*(x) \!\stackrel{\leftrightarrow\;}{\partial_0}\! \psi_2(x) \end{equation} does not depend on time. The scalar product (\ref{scalprod}) does not look relativistically covariant, but there is a way to write it in a relativistically covariant form. The constant-time spacelike hypersurface with the infinitesimal volume $d^3x$ can be generalized to an arbitrarily curved spacelike hypersurface $\Sigma$ with the infinitesimal volume $dS^{\mu}$ oriented in a timelike direction normal to $\Sigma$.
Eq.~(\ref{scalprod}) then generalizes to \begin{equation}\label{scalprod2} (\psi_1,\psi_2)=i\int_{\Sigma} dS^{\mu} \, \psi_1^*(x) \!\stackrel{\leftrightarrow\;}{\partial_{\mu}}\! \psi_2(x) , \end{equation} which, owing to the 4-dimensional Gauss law, does not depend on $\Sigma$ when $\psi_1(x)$ and $\psi_2(x)$ satisfy (\ref{KG}). However, there is a problem again. The general solution of (\ref{KG}) can be written as \begin{equation}\label{e48} \psi(x)=\psi^+(x) + \psi^-(x), \end{equation} where \begin{eqnarray}\label{psi+-} & \psi^+(x)= \displaystyle\sum_{{\bf k}} c_{{\bf k}} e^{-i(\omega_{{\bf k}}t-{\bf k}{\bf x}) } , & \\ \nonumber & \psi^-(x)= \displaystyle\sum_{{\bf k}} d_{{\bf k}} e^{i(\omega_{{\bf k}}t-{\bf k}{\bf x}) } . & \end{eqnarray} Here $c_{{\bf k}}$ and $d_{{\bf k}}$ are arbitrary complex coefficients, and \begin{equation} \omega_{{\bf k}}\equiv\sqrt{{\bf k}^2+m^2} \end{equation} is the frequency. For obvious reasons, $\psi^+$ is called a {\em positive-frequency} solution, while $\psi^-$ is called a {\em negative-frequency} solution. (The positive- and negative-{\em frequency} solutions are often referred to as positive- and negative-{\em energy} solutions, respectively. However, such a terminology is misleading because in field theory, which will be discussed in the next section, energy cannot be negative, so it is better to speak of positive and negative frequency.) The nonrelativistic Schr\"odinger equation contains only the positive-frequency solutions, which can be traced back to the fact that the Schr\"odinger equation contains a first time derivative, instead of a second time derivative that appears in the Klein-Gordon equation (\ref{KG}). For a positive-frequency solution the quantity $\int d^3x\, j^0$ is positive, whereas for a negative-frequency solution this quantity is negative. Since the sum of all probabilities must be positive, the negative-frequency solutions represent a problem for the probabilistic interpretation. 
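The sign statements above can be checked directly: for a single plane-wave mode of unit amplitude, the density $j^0$ built from (\ref{cur}) equals $+2\omega_{\bf k}$ for a positive-frequency solution and $-2\omega_{\bf k}$ for a negative-frequency one. A sympy sketch in one spatial dimension (again my own illustration):

```python
import sympy as sp

t, x, k = sp.symbols('t x k', real=True)
m = sp.Symbol('m', positive=True)
omega = sp.sqrt(k**2 + m**2)

def j0(psi):
    """Time component of the Klein-Gordon current: i psi* <-d_t-> psi."""
    return sp.simplify(sp.I*(sp.conjugate(psi)*sp.diff(psi, t)
                             - sp.diff(sp.conjugate(psi), t)*psi))

psi_pos = sp.exp(-sp.I*(omega*t - k*x))  # positive-frequency solution
psi_neg = sp.exp( sp.I*(omega*t - k*x))  # negative-frequency solution

print(j0(psi_pos))  # +2*omega_k: positive density
print(j0(psi_neg))  # -2*omega_k: negative density
```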
One may propose that only positive-frequency solutions are physical, but even this does not solve the problem. Although the integral $\int d^3x\, j^0$ is strictly positive in that case, the local density $j^0(x)$ may still be negative in some regions of spacetime, provided that the superposition $\psi^+$ in (\ref{psi+-}) contains terms with two or more different positive frequencies. Thus, even with strictly positive-frequency solutions, the quantity $j^0$ cannot be interpreted as a probability density. \subsection{Some attempts to solve the problem} Physicists sometimes claim that there are no interpretational problems with the Klein-Gordon equation because the coefficients $c_{{\bf k}}$ and $d_{{\bf k}}$ in (\ref{psi+-}) (which are the Fourier transforms of $\psi^+$ and $\psi^-$, respectively) are time independent, so the quantities $c^*_{{\bf k}}c_{{\bf k}}$ and $d^*_{{\bf k}}d_{{\bf k}}$ can be consistently interpreted as probability densities in the momentum space. (More precisely, if $c_{{\bf k}}$ and $d_{{\bf k}}$ are independent, then these two probability densities refer to particles and antiparticles, respectively.) Indeed, in practical applications of relativistic QM, one is often interested only in scattering processes, in which the probabilities of different momenta contain all the information that can be compared with actual experiments. From a practical point of view, this is usually enough. Nevertheless, in principle, it is possible to envisage an experiment in which one measures the probabilities in the position (i.e., configuration) space, rather than in the momentum space. A complete theory should have predictions for all quantities that can be measured in principle.
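The locally negative $j^0$ claimed above for purely positive-frequency superpositions is easy to exhibit numerically. A minimal numpy sketch; the two wave numbers, amplitudes, and grid below are arbitrary illustrative choices:

```python
import numpy as np

m = 1.0
k1, k2 = 3.0, 0.0                      # two distinct wave numbers
w1, w2 = np.sqrt(k1**2 + m**2), np.sqrt(k2**2 + m**2)
c1, c2 = 1.0, 1.5                      # both modes have POSITIVE frequency

x = np.linspace(-10.0, 10.0, 4001)
t = 0.0
psi  = c1*np.exp(-1j*(w1*t - k1*x)) + c2*np.exp(-1j*(w2*t - k2*x))
dpsi = -1j*w1*c1*np.exp(-1j*(w1*t - k1*x)) - 1j*w2*c2*np.exp(-1j*(w2*t - k2*x))

# j^0 = i (psi* d_t psi - (d_t psi)* psi), real by construction
j0 = (1j*(np.conj(psi)*dpsi - np.conj(dpsi)*psi)).real

print(j0.min())   # negative in some regions of space
print(j0.mean())  # yet the spatial average (the total "probability") is positive
```

The interference term oscillates with amplitude $2(\omega_1+\omega_2)|c_1 c_2|$, which for suitable amplitudes exceeds the constant part $2\omega_1|c_1|^2+2\omega_2|c_2|^2$, driving $j^0$ below zero locally while the integral stays positive.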
Besides, if the standard interpretation of the nonrelativistic wave function in terms of the probability density in the position space is correct (which, indeed, is experimentally confirmed), then this interpretation must be derivable from a more accurate theory -- relativistic QM. Thus, the existence of the probabilistic interpretation in the momentum space does not really solve the problem. It is often claimed that the problem of relativistic probabilistic interpretation in the position space is solved by the Dirac equation. As we have seen, the problems with the Klein-Gordon equation can be traced back to the fact that it contains a second time derivative, instead of a first one. The relativistically covariant wave equation that contains only first derivatives with respect to time and space is the Dirac equation \begin{equation}\label{dirac} (i\gamma^{\mu}\partial_{\mu}-m)\psi(x)=0 . \end{equation} Here $\gamma^{\mu}$ are the $4\times 4$ Dirac matrices that satisfy the anticommutation relations \begin{equation}\label{anticom} \{ \gamma^{\mu}, \gamma^{\nu} \}=2\eta^{\mu\nu} , \end{equation} where $\{A,B\}\equiv AB+BA$. The Dirac matrices are related to the Pauli matrices $\sigma_i$ discussed in Sec.~\ref{NOREAL}, which satisfy $\{ \sigma_i, \sigma_j \}=2\delta_{ij}$. (For more details, see, e.g., \cite{BD1}.) It turns out that $\psi$ in (\ref{dirac}) is a 4-component wave function called a spinor that describes particles with spin $\frac{1}{2}$. The conserved current associated with (\ref{dirac}) is \begin{equation}\label{curdir} j^{\mu}=\bar{\psi}\gamma^{\mu}\psi , \end{equation} where $\bar{\psi}\equiv \psi^{\dagger}\gamma^0$. In particular, (\ref{anticom}) implies $\gamma^0\gamma^0=1$, so (\ref{curdir}) implies \begin{equation} j^0=\psi^{\dagger}\psi , \end{equation} which cannot be negative. Thus, the Dirac equation does not have problems with the probabilistic interpretation. However, this still does not mean that the problems of relativistic QM are solved.
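The algebra (\ref{anticom}) and the positivity of $j^0=\psi^{\dagger}\psi$ can be verified concretely in the standard Dirac representation of the $\gamma^{\mu}$ matrices; a numerical sketch (the representation chosen is one conventional choice among several):

```python
import numpy as np

I2, O2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation: gamma^0 = diag(1,1,-1,-1), gamma^i built from sigma_i
gamma = [np.block([[I2, O2], [O2, -I2]]).astype(complex)]
gamma += [np.block([[O2, s], [-s, O2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# check {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2*eta[mu, nu]*np.eye(4))

# j^0 = psi^dagger psi is manifestly non-negative for an arbitrary spinor
psi = np.random.randn(4) + 1j*np.random.randn(4)
print((np.conj(psi) @ psi).real)  # a non-negative number
```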
This is because the Dirac equation describes only particles with spin $\frac{1}{2}$. Particles with different spins also exist in nature. In particular, the Klein-Gordon equation describes particles with spin $0$, while the wave equations for spin $1$ particles are essentially the Maxwell equations, which are second-order differential equations for the electromagnetic potential $A^{\mu}$ and lead to the same interpretational problems as the Klein-Gordon equation. There are various proposals for a more direct solution to the problem of probabilistic interpretation of the Klein-Gordon equation (see, e.g., \cite{newt,ghose,gavr,nikolfpl3,nikolfol}). However, all these proposed solutions have certain disadvantages and none of these proposals is widely accepted as the correct solution. Therefore, without this problem being definitively solved, it cannot be said that there exists a well-defined relativistic QM. \section{Quantum field theory solves the problems of relativistic QM} \label{QFT} It is often claimed that the interpretational problems with relativistic QM discussed in the preceding section are solved by a more advanced theory -- {\em quantum field theory} (QFT). To see how QFT solves these problems and whether this solution is really satisfactory, let me briefly review what QFT is and why it was introduced. \subsection{Second quantization of particles} A theoretical concept closely related to QFT is the {\em method of second quantization}. It was introduced to formulate in a more elegant way the fact that many-particle wave functions should be either completely symmetric or completely antisymmetric under exchange of any two particles, which comprises the principle that identical particles cannot be distinguished. Let \begin{equation}\label{wf1} \psi({\bf x},t)=\sum_k a_k f_k({\bf x},t) \end{equation} be the wave function expanded in terms of some complete orthonormal set of solutions $f_k({\bf x},t)$.
(For free particles, $f_k({\bf x},t)$ are usually taken to be the plane waves $f_k({\bf x},t) \propto e^{-i(\omega t - {\bf kx})}$.) Unlike the particle position ${\bf x}$, the wave function $\psi$ does not correspond to an operator. Instead, it is just an ordinary number that determines the probability density $\psi^*\psi$. This is so in the ordinary ``first" quantization of particles. The method of second quantization promotes the wave function $\psi$ to an operator $\hat{\psi}$. (To avoid confusion, from now on, operators are always denoted by a hat.) Thus, instead of (\ref{wf1}), we have the operator \begin{equation}\label{wf2} \hat{\psi}({\bf x},t)=\sum_k \hat{a}_k f_k({\bf x},t) , \end{equation} where the coefficients $a_k$ are also promoted to the operators $\hat{a}_k$. Similarly, instead of the complex conjugated wave function $\psi^*$, we have the hermitian conjugated operator \begin{equation}\label{wf2*} \hat{\psi}^{\dagger}({\bf x},t)=\sum_k \hat{a}^{\dagger}_k f_k^*({\bf x},t) . \end{equation} The orthonormal solutions $f_k({\bf x},t)$ are still ordinary functions as before, so that the operator $\hat{\psi}$ satisfies the same equation of motion (e.g., the Schr\"odinger equation in the nonrelativistic case) as $\psi$. In the case of bosons, the operators $\hat{a}_k,\hat{a}_k^{\dagger}$ are postulated to satisfy the commutation relations \begin{eqnarray}\label{combos} & [\hat{a}_k,\hat{a}_{k'}^{\dagger}]=\delta_{kk'} , & \nonumber \\ & [\hat{a}_k,\hat{a}_{k'}]= [\hat{a}_k^{\dagger},\hat{a}_{k'}^{\dagger}]=0 . \end{eqnarray} These commutation relations are postulated because, as is well known from the case of the first-quantized harmonic oscillator (discussed also in more detail in the next section), such commutation relations lead to a representation in which $\hat{a}_k^{\dagger}$ and $\hat{a}_k$ are raising and lowering operators, respectively.
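The raising/lowering structure implied by (\ref{combos}) can be made concrete by representing a single mode's $\hat{a}_k$ as a matrix on a truncated Fock space. In this sketch the truncation dimension is an artifact of the illustration, and the commutator necessarily fails in the last truncated level:

```python
import math
import numpy as np

nmax = 8                                         # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1.0, nmax)), k=1)  # lowering operator for one mode
adag = a.T                                       # raising operator
N = adag @ a                                     # number operator a^dagger a

# N counts quanta: its eigenvalues are 0, 1, ..., nmax-1
assert np.allclose(np.diag(N), np.arange(nmax))

# the state (a^dagger)^n |0> / sqrt(n!) is an n-quantum eigenstate of N
vac = np.zeros(nmax); vac[0] = 1.0
n = 3
state = np.linalg.matrix_power(adag, n) @ vac / math.sqrt(math.factorial(n))
assert np.allclose(N @ state, n*state)

# [a, a^dagger] = 1 holds except in the last row/column (truncation artifact)
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(nmax - 1)))  # True
```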
Thus, an $n$-particle state with the wave function $f(k_1,\ldots,k_n)$ in the $k$-space can be abstractly represented as \begin{equation}\label{nf} |n_f\rangle =\sum_{k_1,\ldots,k_n} f(k_1,\ldots,k_n) \, \hat{a}^{\dagger}_{k_1} \cdots \hat{a}^{\dagger}_{k_n} |0\rangle , \end{equation} where $|0\rangle$ is the ground state of second quantization, i.e., the vacuum state containing no particles. Introducing the operator \begin{equation}\label{N} \hat{N}=\sum_{k} \hat{a}^{\dagger}_k \hat{a}_k , \end{equation} and using (\ref{combos}), one can show that \begin{equation}\label{N2} \hat{N}|n_f\rangle = n |n_f\rangle . \end{equation} Since $n$ is the number of particles in the state (\ref{nf}), Eq.~(\ref{N2}) shows that $\hat{N}$ is the operator of the number of particles. The $n$-particle wave function in the configuration space can then be written as \begin{equation}\label{wfn} \psi({\bf x}_1,\ldots,{\bf x}_n, t)= \langle 0|\hat{\psi}({\bf x}_1,t)\cdots\hat{\psi}({\bf x}_n,t)|n_f\rangle . \end{equation} From (\ref{combos}) and (\ref{wf2}) we see that $\hat{\psi}({\bf x},t)\hat{\psi}({\bf x}',t)= \hat{\psi}({\bf x}',t)\hat{\psi}({\bf x},t)$, which implies that the ordering of the $\hat{\psi}$-operators in (\ref{wfn}) is irrelevant. This means that (\ref{wfn}) automatically represents a bosonic wave function completely symmetric under exchange of any two of the arguments ${\bf x}_a$, $a=1,\ldots,n$. For the fermionic case, one replaces the commutation relations (\ref{combos}) with similar anticommutation relations \begin{eqnarray}\label{comfer} & \{\hat{a}_k,\hat{a}_{k'}^{\dagger}\}=\delta_{kk'} , & \nonumber \\ & \{\hat{a}_k,\hat{a}_{k'}\}= \{\hat{a}_k^{\dagger},\hat{a}_{k'}^{\dagger}\}=0 , \end{eqnarray} which, in a similar way, lead to completely antisymmetric wave functions. \subsection{Quantum fields} The method of second quantization outlined above is nothing but a convenient mathematical trick. It does not bring any new physical information.
However, the mathematical formalism used in this trick can be {\em reinterpreted} in the following way: The fundamental quantum object is neither the particle with the position-operator $\hat{{\bf x}}$ nor the wave function $\psi$, but a new {\em hermitian operator} \begin{equation}\label{field} \hat{\phi}({\bf x},t) = \hat{\psi}({\bf x},t) + \hat{\psi}^{\dagger}({\bf x},t) . \end{equation} This hermitian operator is called {\em field} and the resulting theory is called {\em quantum field theory} (QFT). It is a quantum-operator version of a {\em classical} field $\phi({\bf x},t)$. (A prototype of classical fields is the electromagnetic field satisfying Maxwell equations. Here, for pedagogical purposes, we do not study the electromagnetic field, but only the simplest scalar field $\phi$.) Using (\ref{wf2}), (\ref{wf2*}), and (\ref{combos}), one obtains \begin{eqnarray} [\hat{\phi}({\bf x},t),\hat{\phi}({\bf x}',t)] & = & \sum_k f_k({\bf x},t) f_k^*({\bf x}',t) \nonumber \\ & & -\sum_k f_k^*({\bf x},t) f_k({\bf x}',t) . \end{eqnarray} Thus, by using the completeness relations \begin{equation} \sum_k f_k({\bf x},t) f_k^*({\bf x}',t)= \sum_k f_k^*({\bf x},t) f_k({\bf x}',t)=\delta^3({\bf x}-{\bf x}') , \end{equation} one finally obtains \begin{equation}\label{comf} [\hat{\phi}({\bf x},t),\hat{\phi}({\bf x}',t)]=0. \end{equation} Thus, from (\ref{wf2}), (\ref{wf2*}), (\ref{combos}), and (\ref{field}) one finds that (\ref{wfn}) can also be written as \begin{equation}\label{wfn2} \psi({\bf x}_1,\ldots,{\bf x}_n, t)= \langle 0|\hat{\phi}({\bf x}_1,t)\cdots\hat{\phi}({\bf x}_n,t)|n_f\rangle , \end{equation} which, owing to (\ref{comf}), provides the complete symmetry of $\psi$. The field equations of motion are derived from their own actions. 
For example, the Klein-Gordon equation (\ref{KG}) for $\phi(x)$ (instead of $\psi(x)$) can be obtained from the classical action \begin{equation} A=\int d^4x {\cal L} , \end{equation} where \begin{equation}\label{lagr} {\cal L}(\phi,\partial_{\alpha}\phi)= \frac{1}{2} [(\partial^{\mu}\phi)(\partial_{\mu}\phi) -m^2\phi^2 ] \end{equation} is the Lagrangian density. The canonical momentum associated with this action is a fieldlike quantity \begin{equation} \pi(x)=\frac{\partial{\cal L}}{\partial (\partial_0\phi(x))} =\partial^0\phi(x) . \end{equation} The associated Hamiltonian density is \begin{equation}\label{hamden} {\cal H}=\pi \partial_0\phi - {\cal L}= \frac{1}{2} [\pi^2 + (\nabla\phi)^2 +m^2\phi^2] . \end{equation} This shows that the field energy \begin{equation}\label{hamfi} H[\pi,\phi]=\int d^3x\, {\cal H}(\pi({\bf x}),\phi({\bf x}), \nabla\phi({\bf x}) ) \end{equation} (where the time-dependence is suppressed) cannot be negative. This is why, in relativistic QM, it is better to speak of negative frequencies than of negative energies. (In (\ref{hamfi}), the notation $H[\pi,\phi]$ denotes that $H$ is {\em not} a function of $\pi({\bf x})$ and $\phi({\bf x})$ at some particular values of ${\bf x}$, but a {\em functional}, i.e., an object that depends on the {\em whole} functions $\pi$ and $\phi$ at {\em all} values of ${\bf x}$.) By analogy with the particle commutation relations $[\hat{x}_l,\hat{p}_m]=i\delta_{lm}$, $[\hat{x}_i,\hat{x}_j]=[\hat{p}_i,\hat{p}_j]=0$, the fundamental field-operator commutation relations are postulated to be \begin{eqnarray}\label{comfund} & [\hat{\phi}({\bf x}),\hat{\pi}({\bf x}')]=i\delta^3({\bf x}-{\bf x}') , & \nonumber \\ & [\hat{\phi}({\bf x}),\hat{\phi}({\bf x}')]= [\hat{\pi}({\bf x}),\hat{\pi}({\bf x}')]=0 . \end{eqnarray} Here it is understood that all fields are evaluated at the same time $t$, so the $t$ dependence is not written explicitly. 
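The Legendre transform leading from the Lagrangian density (\ref{lagr}) to the Hamiltonian density (\ref{hamden}) is short enough to verify symbolically. A sympy sketch treating $\partial_0\phi$ and $|\nabla\phi|$ as independent symbols (an illustration only, not a substitute for the field-theoretic derivation):

```python
import sympy as sp

phi, phi_t, grad_phi, m = sp.symbols('phi phi_t grad_phi m', real=True)

# Lagrangian density (lagr): (1/2)[(d_0 phi)^2 - (grad phi)^2 - m^2 phi^2]
L = sp.Rational(1, 2)*(phi_t**2 - grad_phi**2 - m**2*phi**2)

# canonical momentum: pi = dL/d(d_0 phi) = d_0 phi
pi = sp.diff(L, phi_t)
assert pi == phi_t

# Hamiltonian density: H = pi * d_0 phi - L
H = sp.expand(pi*phi_t - L)
target = sp.Rational(1, 2)*(pi**2 + grad_phi**2 + m**2*phi**2)
print(sp.simplify(H - target))  # vanishes: H is a sum of squares, hence >= 0
```

That ${\cal H}$ comes out as a sum of squares is precisely why the field energy (\ref{hamfi}) cannot be negative.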
Thus, now (\ref{comf}) is one of the fundamental (not derived) commutation relations. Since $\hat{\phi}(x)$ is an operator in the Heisenberg picture that satisfies the Klein-Gordon equation, the expansion (\ref{field}) with (\ref{wf2}) and (\ref{wf2*}) can be used. One of the most important things gained from quantization of fields is the fact that now the commutation relations (\ref{combos}) do {\em not} need to be postulated. Instead, they can be derived from the fundamental field-operator commutation relations (\ref{comfund}). (The fermionic field-operators satisfy similar fundamental relations with commutators replaced by anticommutators, from which (\ref{comfer}) can be derived.) The existence of the Hamiltonian (\ref{hamfi}) allows us to introduce the functional Schr\"odinger equation \begin{equation}\label{funcsch} H[\hat{\pi},\phi] \Psi[\phi;t)=i\partial_t \Psi[\phi;t) , \end{equation} where $\Psi[\phi;t)$ is a functional of $\phi({\bf x})$ and a function of $t$, while \begin{equation} \hat{\pi}({\bf x}) = -i\frac{\delta}{\delta\phi({\bf x})} \end{equation} is the field analog of the particle-momentum operator $\hat{p}_j=-i\partial/\partial x_j$. (For a more careful definition of the functional derivative $\delta/\delta\phi({\bf x})$ see, e.g., \cite{ryder}.) Unlike the Klein-Gordon equation, the functional Schr\"odinger equation (\ref{funcsch}) is a first-order differential equation in the time derivative. Consequently, the quantity \begin{equation}\label{rhofi} \rho[\phi;t)=\Psi^*[\phi;t)\Psi[\phi;t) \end{equation} can be consistently interpreted as a conserved probability density. It represents the probability that the field has the configuration $\phi({\bf x})$ at the time $t$. \subsection{Does QFT solve the problems of relativistic QM?} After this brief overview of QFT, we are finally ready to examine the validity of the title of this section. How does QFT help in solving the interpretational problems of relativistic QM?
According to QFT, the fundamental objects in nature are not particles, but fields. Consequently, the fundamental wave function(al) that needs to have a well-defined probabilistic interpretation is not $\psi({\bf x},t)$, but $\Psi[\phi;t)$. Thus, the fact that, in the case of the Klein-Gordon equation, $\psi({\bf x},t)$ cannot be interpreted probabilistically, is no longer a problem from this more fundamental point of view. However, does it really solve the problem? If QFT is really a more fundamental theory than the first-quantized quantum theory of particles, then it should be able to reproduce {\em all} good results of this less fundamental theory. In particular, from the fundamental axioms of QFT (such as the axiom that (\ref{rhofi}) represents the probability in the space of fields), one should be able to deduce that, at least in the nonrelativistic limit, $\psi^*\psi$ represents the probability in the space of particle positions. However, one {\em cannot} deduce it solely from the axioms of QFT. One possibility is to completely ignore, or even deny \cite{zeh}, the validity of the probabilistic interpretation of $\psi$, which indeed is in the spirit of QFT viewed as a fundamental theory, but then the problem is to reconcile it with the fact that such a probabilistic interpretation of $\psi$ is in agreement with experiments. Another possibility is to supplement the axioms of QFT with an additional axiom that says that $\psi$ in the nonrelativistic limit determines the probabilities of particle positions, but then such a set of axioms is not coherent, as it does not specify the meaning of $\psi$ in the relativistic case. Thus, instead of saying that QFT solves the problems of relativistic QM, it is more honest to say that it merely sweeps them under the carpet. \section{Quantum field theory is a theory of particles} \label{QFTP} What is the world made of?
A common answer is that it is made of {\em elementary particles}, such as electrons, photons, quarks, gluons, etc. On the other hand, all modern theoretical research in elementary-particle physics is based on quantum {\em field} theory (QFT) \cite{BD2,ryder,cheng}. So, is the world made of particles or fields? A frequent answer given by elementary-particle physicists is that QFT is actually a theory of particles, or more precisely, that particles are actually more fundamental physical objects, while QFT is more like a mathematical tool that describes -- the particles. Indeed, the fact that the motivation for introducing QFT partially emerged from the method of second quantization (see Sec.~\ref{QFT}) supports this interpretation according to which QFT is nothing but a theory of particles. But is that really so? Is it really a fundamental property of QFT that it describes particles? Let us see! \subsection{A first-quantized analog of particles in QFT} From the conceptual point of view, fields and particles are very different objects. This is particularly clear for classical fields and particles, where all concepts are clear. So, if there exists a relation between {\em quantum} fields and particles that does not have an analog in the classical theory of fields and particles, then such a relation must be highly nontrivial. Indeed, this nontrivial relation is related to the nontrivial commutation relations (\ref{combos}) (or (\ref{comfer}) for fermionic fields). The classical fields commute, which implies that the classical coefficients $a_k$, $a_k^*$ do not satisfy (\ref{combos}). Without these commutation relations, we could {\em not} introduce $n$-particle states (\ref{nf}). However, are the commutation relations (\ref{combos}) sufficient for having a well-defined notion of particles? To answer this question, it is instructive to study the analogy with the first-quantized theory of particles. 
Consider a quantum particle moving in one dimension, having a Hamiltonian \begin{equation}\label{ham1} \hat{H}=\frac{\hat{p}^2}{2m}+V(\hat{x}) . \end{equation} We introduce the operators \begin{eqnarray}\label{a1p} & \hat{a}=\frac{1}{\sqrt{2}} \left( \sqrt{m\omega}\hat{x} +i \displaystyle\frac{\hat{p}}{\sqrt{m\omega}} \right) , & \nonumber \\ & \hat{a}^{\dagger}= \frac{1}{\sqrt{2}} \left( \sqrt{m\omega}\hat{x} -i \displaystyle\frac{\hat{p}}{\sqrt{m\omega}} \right) , \end{eqnarray} where $\omega$ is some constant with the dimension of energy (or frequency, which, since $\hbar=1$, has the same dimension as energy). Using the commutation relation $[\hat{x},\hat{p}]=i$, we obtain \begin{equation} [\hat{a},\hat{a}^{\dagger}]=1 . \end{equation} This, together with the trivial commutation relations $[\hat{a},\hat{a}]=[\hat{a}^{\dagger},\hat{a}^{\dagger}]=0$, shows that $\hat{a}^{\dagger}$ and $\hat{a}$ are the raising and lowering operators, respectively. As we speak of {\em one} particle, the number operator $\hat{N}=\hat{a}^{\dagger}\hat{a}$ now cannot be called the number of particles. Instead, we use a more general terminology (applicable to (\ref{N}) as well) according to which $\hat{N}$ is the number of ``quanta". But quanta of what? It is easy to show that (\ref{ham1}) can be written as \begin{equation}\label{ham2} \hat{H}=\omega\left( \hat{N}+\frac{1}{2} \right) +\left[ V(\hat{x})-\frac{m\omega^2\hat{x}^2}{2} \right] . \end{equation} In the special case in which $V(\hat{x})=m\omega^2\hat{x}^2/2$, which corresponds to the harmonic oscillator, the square bracket in (\ref{ham2}) vanishes, so the Hamiltonian can be expressed in terms of the $\hat{N}$-operator only.
In this case, the (properly normalized) state \begin{equation}\label{staten} |n\rangle = \frac{(\hat{a}^{\dagger})^n}{\sqrt{n!}} |0\rangle \end{equation} has the energy $\omega (n+1/2)$, so the energy can be viewed as a sum of the ground-state energy $\omega/2$ and the energy of {\em $n$ quanta with energy $\omega$}. This is why the number operator $\hat{N}$ plays an important physical role. However, the main point that can be inferred from (\ref{ham2}) is the fact that, for a {\em general} potential $V(\hat{x})$, {\em the number operator $\hat{N}$ does not play any particular physical role}. Although the spectrum of quantum states can often be labeled by a discrete label $n'=0,1,2,\ldots$, this label, in general, has nothing to do with the operator $\hat{N}$ (i.e., the eigenstates (\ref{staten}) of $\hat{N}$ are not the eigenstates of the Hamiltonian (\ref{ham2})). Moreover, in general, the spectrum of energies $E(n')$ may have a more complicated dependence on $n'$, so that, unlike the harmonic oscillator, the spectrum of energies is {\em not equidistant}. Thus, in general, the state of a system {\em cannot} be naturally specified by a number of ``quanta" $n$. If one insists on representing the system in terms of the states (\ref{staten}), then one can treat the square bracket in (\ref{ham2}) as a perturbation $V_{\rm I}(\hat{x})$. (Here ``I" stands for ``interaction".) From (\ref{a1p}) one finds \begin{equation}\label{e81} \hat{x}=\frac{\hat{a}+\hat{a}^{\dagger}}{\sqrt{2m\omega}} , \end{equation} so $V_{\rm I}(\hat{x})=V_{\rm I}(\hat{a},\hat{a}^{\dagger})$. Consequently, various terms in the perturbation expansion can be represented in terms of creation and destruction of quanta, owing to the occurrence of $\hat{a}^{\dagger}$ and $\hat{a}$, respectively. However, treating the square bracket in (\ref{ham2}) as a perturbation is completely arbitrary. 
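The non-equidistance claimed above is easy to confirm numerically: represent $\hat{x}$ and $\hat{p}$ via (\ref{e81}) in a truncated basis of the states (\ref{staten}) and diagonalize (\ref{ham1}) with and without an anharmonic term. A sketch with $\hbar=m=\omega=1$ and an illustrative quartic perturbation (the truncation size and coupling are arbitrary choices of the illustration):

```python
import numpy as np

nmax = 200                          # truncation; the low eigenvalues are converged
a = np.diag(np.sqrt(np.arange(1.0, nmax)), k=1)
adag = a.T
x = (a + adag)/np.sqrt(2.0)         # Eq. (81) with m = omega = 1
p = -1j*(a - adag)/np.sqrt(2.0)

def lowest_levels(lam, nlevels=5):
    """Diagonalize H = p^2/2 + x^2/2 + lam*x^4 in the truncated Fock basis."""
    H = (p @ p)/2 + (x @ x)/2 + lam*np.linalg.matrix_power(x, 4)
    return np.sort(np.linalg.eigvalsh(H))[:nlevels]

gaps_harm = np.diff(lowest_levels(0.0))  # harmonic: gaps all equal omega = 1
gaps_anh  = np.diff(lowest_levels(0.2))  # quartic term: gaps NOT equidistant

print(gaps_harm)
print(gaps_anh)
```

The harmonic gaps are all exactly $\omega$, while the anharmonic gaps grow with the level index, so no labeling of the spectrum by a number of equal ``quanta" survives.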
Such a treatment is nothing but a mathematical convenience and does not make the states (\ref{staten}) more physical. This is particularly clear for the cases in which the original system with the Hamiltonian (\ref{ham1}) can be solved analytically, without a perturbation expansion. The creation and destruction of quanta appearing in the perturbation expansion do not correspond to actual physical processes. These ``processes" of creation and destruction are nothing but a verbalization of certain mathematical terms appearing {\em only} in one particular method of calculation -- the perturbation expansion with the square bracket in (\ref{ham2}) treated as the perturbation. Last but not least, even if, despite the unnaturalness, one decides to express everything in terms of the operators (\ref{a1p}) and the states (\ref{staten}), there still may remain an ambiguity in choosing the constant $\omega$. All this demonstrates that, in general, {\em QM is not a theory of ``quanta" attributed to the operator $\hat{N}$.} \subsection{Particles in perturbative QFT} The analogy between the notion of ``quanta" in the first-quantized theory of particles and the notion of ``particles" in QFT is complete. For example, the QFT analog of (\ref{e81}) is the field operator in the Schr\"odinger picture \begin{equation} \hat{\phi}({\bf x})=\sum_k \hat{a}_k f_k({\bf x}) + \hat{a}_k^{\dagger} f_k^*({\bf x}) , \end{equation} which corresponds to (\ref{field}) with (\ref{wf2}) and (\ref{wf2*}), at fixed $t$. If $f_k({\bf x},t)$ are the plane waves proportional to $e^{-i(\omega_{\bf k}t-{\bf k}{\bf x})}$, then the quantum Hamiltonian obtained from the Lagrangian density (\ref{lagr}) turns out to be \begin{equation}\label{hamfield} \hat{H}=\sum_{\bf k} \omega_{\bf k}\left(\hat{N}_{\bf k} +\frac{1}{2} \right) , \end{equation} with $\hat{N}_{\bf k}\equiv \hat{a}_{\bf k}^{\dagger} \hat{a}_{\bf k}$, which is an analog of the first term in (\ref{ham2}).
This analogy is related to the fact that (\ref{lagr}) represents a relativistic-field generalization of the harmonic oscillator. (The harmonic-oscillator Lagrangian is quadratic in ${\bf x}$ and its derivative, while (\ref{lagr}) is quadratic in $\phi$ and its derivatives.) The Hamiltonian (\ref{hamfield}) has a clear physical interpretation: ignoring the term $1/2$ (which corresponds to an irrelevant ground-state energy $\sum_{\bf k} \omega_{\bf k}/2$), for each $\omega_{\bf k}$ there can be only an {\em integer} number $n_{\bf k}$ of quanta with energy $\omega_{\bf k}$, so that their total energy sums up to $n_{\bf k}\omega_{\bf k}$. (Concerning the irrelevance of the ground-state energy above, it should be noted that it is often claimed that this energy is relevant for the experimentally confirmed Casimir effect, but the fact is that this effect can be derived even without referring to the ground-state energy \cite{jaffe}.) These quanta are naturally interpreted as ``particles" with energy $\omega_{\bf k}$. However, the Lagrangian (\ref{lagr}) is only a special case. In general, a Lagrangian describing the field $\phi$ may have a form \begin{equation}\label{lagr2} {\cal L}= \frac{1}{2} (\partial^{\mu}\phi)(\partial_{\mu}\phi) -V(\phi) , \end{equation} where $V(\phi)$ is an arbitrary potential. Thus, in general, the Hamiltonian contains an additional term analogous to that in (\ref{ham2}), which destroys the ``particle"-interpretation of the spectrum. Whereas the formal mathematical analogy between first quantization and QFT (which implies the irrelevance of the number operator $\hat{N}$) is clear, there is one crucial {\em physical} difference: Whereas in first quantization there is really no reason to attribute a special meaning to the operator $\hat{N}$, there is {\em experimental} evidence that this is not so for QFT. The existence of particles is an {\em experimental fact}!
Thus, if one wants to describe the experimentally observed objects, one must either reject QFT (which, indeed, is what many elementary-particle physicists were doing in the early days of elementary-particle physics and some of them are doing it even today \cite{schub}), or try to artificially adapt QFT such that it remains a theory of particles even with general interactions (such as those in (\ref{lagr2})). From a pragmatic and phenomenological point of view, the latter strategy turns out to be surprisingly successful! For example, in the case of (\ref{lagr2}), one artificially defines the interaction part of the Lagrangian as \begin{equation} V_{\rm I}(\phi)=V(\phi)-\frac{1}{2}m^2\phi^2 , \end{equation} and treats it as a perturbation of the ``free" Lagrangian (\ref{lagr}). For that purpose, it is convenient to introduce a mathematical trick called {\em interaction picture}, which is a picture that interpolates between the Heisenberg picture (where the time dependence is attributed to fields $\phi$) and the Schr\"odinger picture (where the time dependence is attributed to states $|\Psi\rangle$). In the interaction picture, the field satisfies the {\em free} Klein-Gordon equation of motion, while the time evolution of the state is governed only by the interaction part of the Hamiltonian. This trick allows one to use the free expansion (\ref{field}) with (\ref{wf2}) and (\ref{wf2*}), despite the fact that the ``true" quantum operator $\hat{\phi}({\bf x},t)$ in the Heisenberg picture cannot be expanded in that way. In fact, {\em all} operators in the interaction picture satisfy the free equations of motion, so the particle-number operator can also be introduced in the same way as for free fields. Analogously to the case of first quantization discussed after Eq.~(\ref{e81}), certain mathematical terms in the perturbation expansion can be pictorially represented by the so-called {\em Feynman diagrams}. (For technical details, I refer the reader to \cite{BD2,cheng}.) 
In some cases, the final measurable results obtained in that way turn out to be in excellent agreement with experiments. \subsection{Virtual particles?} The calculational tool represented by Feynman diagrams suggests an often abused picture according to which ``real particles interact by exchanging virtual particles". Many physicists, especially nonexperts, take this picture literally, as something that really and objectively happens in nature. In fact, I have {\em never} seen a popular text on particle physics in which this picture was {\em not} presented as something that really happens. Therefore, this picture of quantum interactions as processes in which virtual particles are exchanged is one of the most abused myths, not only in quantum physics, but in physics in general. Indeed, there is a consensus among experts on the foundations of QFT that such a picture should not be taken literally. The fundamental principles of quantum theory do not even contain a notion of a ``virtual" state. The notion of a ``virtual particle" originates {\em only} from a specific mathematical method of calculation, called perturbative expansion. In fact, the perturbative expansion represented by Feynman diagrams can be introduced even in {\em classical} physics \cite{thorn,penco}, but nobody attempts to verbalize these classical Feynman diagrams in terms of classical ``virtual" processes. So why is such a verbalization tolerated in quantum physics? The main reason is the fact that the standard interpretation of quantum theory does not offer a clear ``canonical" ontological picture of the actual processes in nature, but only provides the probabilities of measurement outcomes. In the absence of such a ``canonical" picture, physicists take the liberty to introduce various auxiliary intuitive pictures that sometimes help them think about the otherwise abstract quantum formalism. Such auxiliary pictures, by themselves, are not a sin.
However, a potential problem occurs when one forgets why such a picture has been introduced in the first place and starts to take it too literally. \subsection{Nonperturbative QFT} In some cases, the picture of particles suggested by the ``free'' part of the Lagrangian does not really correspond to particles observed in nature. The best known example is {\em quantum chromodynamics} (QCD), a QFT describing strong interactions between quarks and gluons. In nature we do not observe quarks, but rather more complicated particles called {\em hadrons} (such as protons, neutrons, and pions). In an oversimplified but often abused picture, hadrons are built of 2 or 3 quarks glued together by gluons. However, since free quarks are never observed in nature, the perturbative expansion, so successful for some other QFTs, is not very successful in the case of QCD. Physicists are forced to develop other approximate methods to deal with it. The most successful such method is the so-called {\em lattice} QCD (for an introductory textbook see \cite{creutz} and for pedagogic reviews see \cite{davies,sharpe}). In this method, the spacetime continuum is approximated by a finite lattice of spacetime points, allowing the application of brute-force numerical methods of computation. This method allows one to compute the expectation values of products of fields in the ground state, starting from first principles. However, to extract information about particles from these purely field-theoretic quantities, one must {\em assume} a relation between these expectation values and the particle quantities. This relation is not derived from lattice QCD itself, but rather from the known relation between fields and particles in perturbative QFT. Consequently, although this method reproduces the experimental hadron data more-or-less successfully, the concept of particle in this method is no clearer than that in the perturbative approach. 
Thus, the notion of real particles is not derived from first principles and nothing in the formalism suggests a picture of the ``exchange of virtual particles''. \subsection{Particles and the choice of time} As we have seen, although the notion of particles in interacting QFTs cannot be derived from first principles (or at least we do not yet know how to do that), there are heuristic mathematical procedures that introduce a notion of particles that agrees with experiments. However, there are circumstances in QFT in which the theoretical notion of particles is even more ambiguous, while present experiments are not yet able to resolve these ambiguities. Consider again the {\em free} field expanded as in (\ref{field}) with (\ref{wf2}) and (\ref{wf2*}). The notion of particles rests on a clear separation between the creation operators $a_k^{\dagger}$ and the destruction operators $a_k$. The definition of these operators is closely related to the choice of the complete orthonormal basis $\{ f_k({\bf x},t) \}$ of solutions to the classical Klein-Gordon equation. However, there are infinitely many different choices of this basis. The plane-wave basis \begin{equation}\label{pwb} f_{\bf k}({\bf x},t) \propto e^{-i(\omega_{\bf k}t-{\bf k}{\bf x})} \equiv e^{-ik\cdot x} \end{equation} is only one particularly convenient choice. Different choices may lead to different creation and destruction operators $a_k^{\dagger}$ and $a_k$, and thus to different notions of particles. How do we know which choice is the right one? Eq.~(\ref{pwb}) suggests a physical criterion according to which the modes $f_k({\bf x},t)$ should be chosen such that they have a positive frequency. However, the notion of frequency assumes the notion of time. On the other hand, according to the theory of relativity, there is no unique choice of the time coordinate. Therefore, the problem of the right definition of particles reduces to the problem of the right definition of time. 
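As a quick numerical aside of my own (not part of the original argument), one can verify by finite differences that a plane wave of the form (\ref{pwb}) solves the Klein-Gordon equation $(\partial_t^2-\partial_x^2+m^2)f=0$ in 1+1 dimensions, provided the positive frequency satisfies the mass-shell condition $\omega_{\bf k}=\sqrt{{\bf k}^2+m^2}$:

```python
import cmath
import math

# Numerical check that f(t,x) = exp(-i(w t - k x)) solves the 1+1-dimensional
# Klein-Gordon equation (d_t^2 - d_x^2 + m^2) f = 0 for w = sqrt(k^2 + m^2).
m, k = 1.0, 0.7                  # illustrative values
w = math.sqrt(k**2 + m**2)       # positive-frequency (mass-shell) choice

def f(t, x):
    return cmath.exp(-1j * (w * t - k * x))

t, x, h = 0.3, -0.2, 1e-4
d2t = (f(t + h, x) - 2 * f(t, x) + f(t - h, x)) / h**2  # second time derivative
d2x = (f(t, x + h) - 2 * f(t, x) + f(t, x - h)) / h**2  # second space derivative
residual = d2t - d2x + m**2 * f(t, x)
print(abs(residual))  # ~ 0 up to finite-difference error
```

Any other choice of $\omega$ would leave a nonzero residual $(-\omega^2+k^2+m^2)f$, which is why the mode basis and the notion of frequency are tied together.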
Fortunately, the last exponential function in (\ref{pwb}) shows that the standard plane waves $f_{\bf k}(x)$ are Lorentz invariant, so that different time coordinates related by a Lorentz transformation lead to the same definition of particles. However, Lorentz transformations relate only proper coordinates attributed to inertial observers in flat spacetime. The {\em general} theory of relativity allows much more general coordinate transformations, such as those that relate an inertial observer with an accelerating one. (For readers who are not familiar with the general theory of relativity there are many excellent introductory textbooks, but my favorite, which I highly recommend to beginners, is \cite{carrol}. For an explicit construction of the coordinate transformations between an inertial observer and an arbitrarily moving one in flat spacetime, see \cite{nels,nikpra}, and for instructive applications, see \cite{nikpra,nikajp,niktwin}. Nevertheless, to make this paper readable by those who are not familiar with general relativity, in the rest of Sec.~\ref{QFTP}, as well as in Sec.~\ref{BH}, I omit some technical details that require a better understanding of general relativity, keeping only the details that are really necessary to understand the quantum aspects themselves.) Different choices of time lead to different choices of the positive-frequency bases $\{ f_k(x) \}$, and thus to different creation and destruction operators $a_k^{\dagger}$ and $a_k$. 
If \begin{eqnarray}\label{2phi} & \hat{\phi}(x)=\displaystyle\sum_k \hat{a}_kf_k(x)+\hat{a}_k^{\dagger}f_k^*(x) , & \nonumber \\ & \hat{\phi}(x)=\displaystyle\sum_l \hat{\bar{a}}_l\bar{f}_l(x)+ \hat{\bar{a}}_l^{\dagger}\bar{f}_l^*(x) & \end{eqnarray} are two such expansions in the bases $\{ f_k(x) \}$ and $\{ \bar{f}_l(x) \}$, respectively, it is easy to show that the corresponding creation and destruction operators are related by a linear transformation \begin{eqnarray}\label{bogol} & \hat{\bar{a}}_l = \displaystyle\sum_k \alpha_{lk} \hat{a}_k + \beta^*_{lk} \hat{a}_k^{\dagger} , & \nonumber \\ & \hat{\bar{a}}_l^{\dagger} = \displaystyle\sum_k \alpha^*_{lk} \hat{a}_k^{\dagger} + \beta_{lk} \hat{a}_k , & \end{eqnarray} where \begin{equation}\label{bogcoef} \alpha_{lk}\equiv (\bar{f}_l,f_k) , \;\;\; \beta^*_{lk}\equiv (\bar{f}_l,f_k^*) , \end{equation} are given by the scalar products defined as in (\ref{scalprod2}). (To derive (\ref{bogol}), take the scalar product of both expressions in (\ref{2phi}) with $\bar{f}_{l'}$ on the left and use the orthonormality relations $(\bar{f}_{l'},\bar{f}_{l})=\delta_{l'l}$, $(\bar{f}_{l'},\bar{f}^*_{l})=0$.) The transformation (\ref{bogol}) between the two sets of creation and destruction operators is called a {\em Bogoliubov transformation}. Since both bases are orthonormal, the Bogoliubov coefficients (\ref{bogcoef}) satisfy \begin{equation}\label{bogol2} \sum_k (\alpha_{lk}\alpha^*_{l'k}-\beta^*_{lk}\beta_{l'k})=\delta_{ll'} , \end{equation} where the negative sign is a consequence of the fact that negative-frequency solutions have negative norms, i.e., $(f_k^*,f_{k'}^*)=-\delta_{kk'}$, $(\bar{f}_l^*,\bar{f}_{l'}^*)=-\delta_{ll'}$. One can show that (\ref{bogol2}) ensures that $\hat{\bar{a}}_l$ and $\hat{\bar{a}}_l^{\dagger}$ satisfy the same commutation relations (\ref{combos}) as $\hat{a}_k$ and $\hat{a}_k^{\dagger}$ do. 
A physically nontrivial Bogoliubov transformation is one in which at least some of the $\beta_{lk}$ coefficients are not zero. Two different definitions of the particle-number operators are \begin{equation} \hat{N}=\sum_k \hat{N}_k , \;\;\; \hat{\bar{N}}=\sum_l \hat{\bar{N}}_l , \end{equation} where \begin{equation} \hat{N}_k=\hat{a}_k^{\dagger}\hat{a}_k , \;\;\; \hat{\bar{N}}_l=\hat{\bar{a}}_l^{\dagger}\hat{\bar{a}}_l . \end{equation} In particular, from (\ref{bogol}), it is easy to show that the vacuum $|0\rangle$ defined by $\hat{N}|0\rangle=0$ satisfies \begin{equation}\label{parcreat} \langle 0|\hat{\bar{N}}_l|0\rangle=\sum_k |\beta_{lk}|^2 . \end{equation} For a nontrivial Bogoliubov transformation, this means that the no-particle state $|0\rangle$ is actually a state full of particles when the particles are defined by $\hat{\bar{N}}$ instead of $\hat{N}$! Conversely, the no-particle state $|\bar{0}\rangle$ defined by $\hat{\bar{N}}|\bar{0}\rangle=0$ is a state full of particles when the particles are defined with $\hat{N}$. So, what is the right operator of the number of particles, $\hat{N}$ or $\hat{\bar{N}}$? The fact is that, in general, a clear universally accepted answer to this question is not known! Instead, there are several possibilities that we discuss below. One possibility is that the dependence of the particle concept on the choice of time means that the concept of particles depends on the observer. The best known example of this interpretation is the Unruh effect \cite{unruh,unruh2,birdav}, according to which a uniformly accelerating observer perceives the standard Minkowski vacuum (defined with respect to the time of an inertial observer in Minkowski flat spacetime) as a state with a huge number of particles with a thermal distribution of particle energies, with the temperature proportional to the acceleration. 
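The statements above are easy to check explicitly in the simplest single-mode case $\hat{\bar{a}}=\alpha\hat{a}+\beta\hat{a}^{\dagger}$ with $\alpha=\cosh r$, $\beta=\sinh r$ (a squeeze-type example of my own choosing, worked in a truncated Fock space): the canonical commutator survives because $\alpha^2-\beta^2=1$, and the original vacuum contains $\langle 0|\hat{\bar{N}}|0\rangle=|\beta|^2=\sinh^2 r$ barred particles.

```python
import numpy as np

# Single-mode Bogoliubov transformation in a truncated Fock space (illustrative sketch).
N = 40  # truncation dimension; exact for the low-lying states used below
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T

r = 0.3                                 # squeezing parameter (arbitrary choice)
alpha, beta = np.cosh(r), np.sinh(r)    # satisfies alpha^2 - beta^2 = 1
abar = alpha * a + beta * adag          # transformed annihilation operator
abar_dag = alpha * adag + beta * a

# canonical commutator [abar, abar_dag] on the vacuum sector
comm = abar @ abar_dag - abar_dag @ abar
print(comm[0, 0])                       # 1, since alpha^2 - beta^2 = 1

# mean number of barred particles in the original vacuum |0>
vac = np.zeros(N); vac[0] = 1.0
n_mean = vac @ (abar_dag @ abar) @ vac
print(n_mean, np.sinh(r)**2)            # equal: <0|Nbar|0> = |beta|^2 = sinh^2 r
```

The second printout is the single-mode version of (\ref{parcreat}): the $\hat{N}$-vacuum is not the $\hat{\bar{N}}$-vacuum.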
Indeed, this effect (not yet experimentally confirmed!) can be obtained by two independent approaches. The first approach is by a Bogoliubov transformation as indicated above, leading to \cite{unruh,birdav} \begin{equation}\label{unruh} \langle 0|\hat{\bar{N}}_l|0\rangle=\frac{1}{e^{2\pi\omega_l/a}-1}, \end{equation} where $a$ is the proper acceleration perceived by the accelerating observer, $\omega_l$ is the frequency associated with the solution $\bar{f}_l(x)$, and we use units in which $\hbar=c=1$. (Coordinates of a uniformly accelerating observer are known as Rindler coordinates \cite{rind}, so the quantization based on particles defined with respect to the Rindler time is called Rindler quantization.) We see that the right-hand side of (\ref{unruh}) looks just like a Bose-Einstein distribution at the temperature $T=a/2\pi$ (in units in which the Boltzmann constant is also equal to unity). The second approach is by studying the response of a theoretical model of an accelerating particle detector, using only the standard Minkowski quantization without the Bogoliubov transformation. However, these two approaches are {\em not} equivalent \cite{padmun,nikolun}. Besides, such a dependence of particles on the observer is not relativistically covariant. In particular, it is not clear which of the definitions of particles, if any, acts as a source for a (covariantly transforming) gravitational field. An alternative is to describe particles in a unique covariant way in terms of local particle currents \cite{nikolcur}, but such an approach requires a unique choice of a preferred time coordinate. For example, for a hermitian scalar field (\ref{field}), the particle current is \begin{equation}\label{enik1} \hat{j}^{\rm P}_{\mu}(x)=i\hat{\psi}^{\dagger}(x) \!\stackrel{\leftrightarrow\;}{\partial_{\mu}}\! 
\hat{\psi}(x) , \end{equation} which (unlike (\ref{cur}) with (\ref{e48})) requires the identification of the positive- and negative-frequency parts $\hat{\psi}(x)$ and $\hat{\psi}^{\dagger}(x)$, respectively. Noting that the quantization of fields themselves based on the functional Schr\"odinger equation (\ref{funcsch}) also requires a choice of a preferred time coordinate, it is possible that a preferred time coordinate emerges dynamically from some nonstandard covariant method of quantization, such as that in \cite{nikddw}. Another possibility is that the concept of particles as fundamental objects simply does not make sense in QFT \cite{birdav,dav}. Instead, all observables should be expressed in terms of local fields that do not require the artificial identification of the positive- and negative-frequency parts. For example, such an observable is the Hamiltonian density (\ref{hamden}), which represents the $T^0_0$-component of the covariant energy-momentum tensor $T^{\mu}_{\nu}(x)$. Whereas such an approach is very natural from the theoretical point of view according to which QFT is nothing but a quantum theory of fields, the problem is to reconcile it with the fact that the objects observed in high-energy experiments are -- particles. \subsection{Particle creation by a classical field} When the classical metric $g_{\mu\nu}(x)$ has a nontrivial dependence on $x$, the Klein-Gordon equation (\ref{KG}) for the field $\phi(x)$ generalizes to \begin{equation} \left( \frac{1}{\sqrt{|g|}} \partial_{\mu}\sqrt{|g|} g^{\mu\nu}\partial_{\nu} +m^2 \right) \phi=0 , \end{equation} where $g$ is the determinant of the matrix $g_{\mu\nu}$. In particular, if the metric is time dependent, then a solution $f_k(x)$ having a positive frequency at some initial time $t_{\rm in}$ may behave as a superposition of positive- and negative-frequency solutions at some final time $t_{\rm fin}$. 
At the final time, the solutions that behave as positive-frequency ones are some other solutions $\bar{f}_l(x)$. In this case, it seems natural to define particles with the operator $\hat{N}$ at the initial time and with $\hat{\bar{N}}$ at the final time. If the time-independent state in the Heisenberg picture is given by the ``vacuum'' $|0\rangle$, then $\langle 0|\hat{N}|0\rangle=0$ means that there are no particles at $t_{\rm in}$, while (\ref{parcreat}) can be interpreted as a consequence of an evolution of the particle-number operator, so that (\ref{parcreat}) refers only to $t_{\rm fin}$. This is the essence of the mechanism of particle creation by a classical gravitational field. The best known example is particle creation by the collapse of a black hole, also known as Hawking radiation \cite{hawk}. (For more details, see also the classic textbook \cite{birdav}, a review \cite{brout}, and a pedagogic review \cite{jacob}.) Similarly to the Unruh effect (\ref{unruh}), the Hawking particles have the distribution \begin{equation}\label{unruhbh} \langle 0|\hat{\bar{N}}_l|0\rangle=\frac{1}{e^{8\pi GM\omega_l}-1}, \end{equation} where $G$ is the Newton gravitational constant (which, in units $\hbar=c=1$, has dimension (energy)$^{-2}$) and $M$ is the black-hole mass. This is the result obtained by defining particles with respect to a specific time, namely, the time of an observer static with respect to the black hole and staying far from the black-hole horizon. Although (\ref{unruhbh}) looks exactly like a quantum Bose-Einstein thermal distribution at the temperature \begin{equation}\label{Thawk} T=\frac{1}{8\pi GM} , \end{equation} this distribution is independent of the validity of the bosonic quantum commutation relations (\ref{combos}). 
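To get a feeling for the magnitudes involved, one can restore SI units in the Unruh and Hawking temperatures, $T_{\rm U}=\hbar a/2\pi ck_B$ and $T_{\rm H}=\hbar c^3/8\pi GMk_B$. The following numerical sketch (the Earth-gravity and solar-mass inputs are my own illustrative choices, not from the text) shows why neither effect has been observed directly:

```python
import math

# Restoring SI units in the Unruh and Hawking temperatures:
#   T_Unruh   = hbar * a   / (2*pi*c*kB)
#   T_Hawking = hbar * c^3 / (8*pi*G*M*kB)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 / (kg s^2)
kB = 1.380649e-23        # J / K

def unruh_temperature(a):
    """Unruh temperature (K) for proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * kB)

def hawking_temperature(M):
    """Hawking temperature (K) of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

M_sun = 1.989e30  # kg
print(unruh_temperature(9.81))      # ~ 4e-20 K for Earth-like accelerations
print(hawking_temperature(M_sun))   # ~ 6e-8 K, far below the 2.7 K cosmic background
```

Such tiny temperatures are one reason why the experimental status of both effects remains open, as emphasized above.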
Instead, it turns out that the crucial ingredient leading to a thermal distribution is the existence of the horizon \cite{padmrep}, which is a {\em classical} observer-dependent general-relativistic object existing not only for black holes, but also for accelerating observers in flat spacetime. Thus, the origin of this thermal distribution can be understood even with classical physics \cite{padmclas}, while only the mechanism of particle creation itself is intrinsically quantum. In the literature, the existence of thermal Hawking radiation often seems to be widely accepted as a fact. Nevertheless, since it is not yet experimentally confirmed and since it rests on the theoretically ambiguous concept of particles in curved spacetime (the dependence on the choice of time), certain doubts about its existence are still reasonable (see, e.g., \cite{padmprl,nikolcur,bel} and references therein). Thus, the existence of Hawking radiation can also be qualified as a myth. The classical gravitational field is not the only classical field that seems capable of producing particles from the vacuum. The classical electric field also seems to be able to produce particle-antiparticle pairs \cite{schw,manog,brout}, by a mechanism similar to the gravitational one. For discussions of the theoretical ambiguities behind this predicted effect, see \cite{padmprl,nikolcur,nikolcrit}. \subsection{Particles, fields, or something else?} Having in mind all these foundational problems with the concept of particle in QFT, it is still impossible to clearly and definitely answer the question of whether the world is made of particles or fields. Nevertheless, practically oriented physicists may not find this question disturbing as long as the formalism, no matter how incoherent it may appear to be, gives correct predictions on measurable quantities. Such a practical attitude seems to be justified by the vagueness of the concept of reality inherent to QM itself. 
Indeed, one can adopt a hard version of the orthodox interpretation of QM according to which information about reality is more fundamental than reality itself and use it to justify the noncovariant dependence of particles (as well as some other quantities) on the observer \cite{peres}. However, in the standard orthodox QM, where rigorous no-hidden-variable theorems exist (see Sec.~\ref{NOREAL}), at least the operators are defined unambiguously. Thus, even the hard-orthodox interpretation of QM is not sufficient to justify the interpretation of the particle-number-operator ambiguities as different realities perceived by different observers. An alternative to this orthodox approach is an objective-realism approach in which both particles and fields separately exist, a picture that seems to be particularly coherent in the Bohmian interpretation \cite{nikolpf}. Finally, there is a possibility that the world is made neither of particles nor of fields, but of strings. (For an excellent pedagogic introduction to string theory see \cite{zwie}. In particular, this book also breaks one myth in physics -- the myth that string theory is mathematically an extremely complicated theory that most other physicists cannot easily understand. For a more concise pedagogic introduction to string theory see also \cite{szabo}.) In fact, many string theorists speak about the existence of strings as a definite fact. Fortunately, there is still a sufficiently large number of authoritative physicists who are highly skeptical about string theory, which does not allow string theory to become a widely accepted myth. Nevertheless, string theory possesses some remarkable theoretical properties that make it a promising candidate for a more fundamental description of nature. According to this theory, particles are not really pointlike, but extended one-dimensional objects. 
Their characteristic length, however, is very short, which is why they appear pointlike with respect to our current experimental abilities to probe short distances. However, just as for particles, there is a first quantization of strings, as well as a second quantization, which leads to string field theory. Thus, even if string theory is correct, there is still a question whether the fundamental objects are strings or string fields. However, while first quantization of strings is well understood, string field theory is not. Moreover, there are indications that string field theory may {\em not} be the correct approach to treat strings \cite{polc}. Consequently, particles (which represent an approximation of strings) may be more fundamental than fields. Concerning the issue of objective reality, there are indications that the Bohmian interpretation of strings may be even more natural than that of particles \cite{nikolstr1,nikolstr3}. The Bohmian interpretation of strings also breaks some other myths inherent to string theory \cite{nikolstr2}. \section{Black-hole entropy is proportional to its surface} \label{BH} As this claim is not yet a part of standard textbooks, it is not yet a true myth. Nevertheless, in the last 10 or 20 years this claim has been repeated by experts in the field so often (the claim itself is about 30 years old) that it is very likely that it will soon become a true myth. Before that happens, let me warn the wider physics community that this claim is actually very dubious. \subsection{Black-hole ``entropy'' in classical gravity} The claim in the title of this section is actually a part of a more general belief that there exists a deep relation between black holes and thermodynamics. The first evidence supporting this belief came from certain classical properties of black holes that, on the mathematical level, resemble the laws of thermodynamics \cite{beken,hawk2}. 
(For general pedagogic overviews, see, e.g., \cite{carrol,hawk3} and for an advanced pedagogic review with many technical details, see \cite{town}.) Black holes are dynamical objects that can start their evolution from a huge number of different initial states, but eventually end up in a highly symmetric stationary equilibrium state specified only by a few global conserved physical quantities, such as their mass $M$ (i.e., energy $E$), electric charge $Q$, and angular momentum $J$. The physical laws governing the behavior of such equilibrium black holes formally resemble the laws governing the behavior of systems in thermodynamic equilibrium. The well-known four laws of thermodynamics have the following black-hole analogues: \begin{itemize} \item Zeroth law: There exists a local quantity called the surface gravity $\kappa$ (which can be viewed as the general-relativistic analog of the Newtonian gravitational field $GM/r^2$) that, in equilibrium, turns out to be constant everywhere on the black-hole horizon. This is an analog of the temperature $T$, which is constant in thermodynamic equilibrium. \item First law: This is essentially the law of energy conservation, which, both in the black-hole and the thermodynamic case, has an origin in even more fundamental laws. As such, this analogy should not be surprising, but it is interesting that in both cases the conservation of energy takes a mathematically similar form. For black holes it reads \begin{equation}\label{dM} dM=\frac{\kappa}{8\pi G}dA +\Omega dJ +\Phi dQ, \end{equation} where $A$ is the surface area of the horizon, $\Omega$ is the angular velocity of the horizon, and $\Phi$ is the electrostatic potential at the horizon. This is analogous to the thermodynamic first law \begin{equation}\label{dE} dE=TdS-pdV+\mu dN , \end{equation} where $S$ is the entropy, $p$ is the pressure, $V$ is the volume, $\mu$ is the chemical potential, and $N$ is the number of particles. 
In particular, note that the black-hole analog of the entropy $S$ is a quantity proportional to the black-hole surface area $A$. This allows us to introduce the black-hole ``entropy'' \begin{equation}\label{S=A} S_{\rm bh}=\alpha A , \end{equation} where $\alpha$ is an unspecified constant. \item Second law: Although the fundamental microscopic physical laws are time reversible, the macroscopic laws are not. Instead, disorder tends to increase with time. In the thermodynamic case, this means that entropy cannot decrease with time, i.e., $dS\geq 0$. In the gravitational case, owing to the attractive nature of the gravitational force, it turns out that the black-hole surface area cannot decrease with time, i.e., $dA\geq 0$. \item Third law: It turns out that, by a realistic physical process, it is impossible to reach the state with $\kappa=0$. This is analogous to the third law of thermodynamics according to which, by a realistic physical process, it is impossible to reach the state with $T=0$. \end{itemize} Although the analogy as presented above is suggestive, it is clear that classical black-hole parameters are conceptually very different from the corresponding thermodynamic parameters. Indeed, the formal analogies above were not taken very seriously at the beginning. In particular, an ingredient that is missing for a full analogy between classical black holes and thermodynamic systems is -- radiation with a thermal spectrum. Classical black holes (i.e., black holes described by the classical Einstein equation of gravity) do not produce radiation with a thermal spectrum. \subsection{Black-hole ``entropy'' in semiclassical gravity} A true surprise came when Hawking discovered \cite{hawk} that {\em semiclassical} (i.e., gravity is treated classically while matter is quantized) black holes not only radiate (which, by itself, is not a big surprise), but radiate exactly with a thermal spectrum at a temperature proportional to $\kappa$. 
In the special case of a black hole with $J=Q=0$, this temperature is equal to (\ref{Thawk}). Since $dJ=dQ=0$, we attempt to write (\ref{dM}) as \begin{equation}\label{td1} dS_{\rm bh}=\frac{dM}{T} , \end{equation} which corresponds to (\ref{dE}) with $dV=dN=0$. From (\ref{Thawk}), we see that \begin{equation}\label{cons1} \frac{dM}{T}=8\pi GMdM . \end{equation} From the Schwarzschild form of the black-hole metric in spherical spatial coordinates ($r,\vartheta,\varphi$) (see, e.g., \cite{carrol}) \begin{equation} ds^2=\left( 1-\frac{2GM}{r} \right) dt^2 -\frac{dr^2}{1-\displaystyle\frac{2GM}{r}} -r^2 (d\vartheta^2 + {\rm sin}^2\vartheta \, d\varphi^2 ) , \end{equation} we see that the horizon corresponding to the singular behavior of the metric is at the radius \begin{equation} r=2GM . \end{equation} Consequently, the surface area of the horizon is equal to \begin{equation}\label{bhpom} A=4\pi r^2=16\pi G^2 M^2. \end{equation} Therefore, (\ref{S=A}) implies \begin{equation}\label{cons2} dS_{\rm bh}=\alpha\, 32\pi G^2 MdM. \end{equation} Thus, we see that (\ref{cons1}) and (\ref{cons2}) are really consistent with (\ref{td1}), provided that $\alpha=1/4G$. Therefore, (\ref{S=A}) becomes \begin{equation}\label{S=A2} S_{\rm bh}=\frac{A}{4G} . \end{equation} In fact, (\ref{S=A2}) turns out to be a generally valid relation, for arbitrary $J$ and $Q$. Now, with the results (\ref{Thawk}) and (\ref{S=A2}), the analogy between black holes and thermodynamics seems to be more complete. Nevertheless, it is still only an analogy. Moreover, thermal radiation (which is a kinematical effect depending only on the metric) is not directly logically related to the four laws of classical black-hole ``thermodynamics'' (for which the validity of the dynamical Einstein equations is crucial) \cite{viss}. Still, many physicists believe that such a striking analogy cannot be a pure formal coincidence. Instead, they believe that there is some even deeper meaning of this analogy. 
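The consistency argument above can be replayed numerically (a finite-difference sketch of my own, in units $\hbar=c=k_B=1$): with the Schwarzschild surface gravity $\kappa=1/4GM$, area $A=16\pi G^2M^2$, temperature $T=1/8\pi GM$, and $S_{\rm bh}=A/4G$, the first law $dM=(\kappa/8\pi G)\,dA$ holds at $dJ=dQ=0$, and $dS_{\rm bh}$ reproduces $dM/T=8\pi GM\,dM$, which is exactly the statement that $\alpha=1/4G$:

```python
import math

# Numerical sanity checks of the Schwarzschild relations used above:
#   kappa = 1/(4GM),  A = 16*pi*G^2*M^2,  T = 1/(8*pi*G*M),  S_bh = A/(4G)
# in units hbar = c = kB = 1.
G = 1.0  # none of the identities depend on the numerical value of G

def area(M): return 16 * math.pi * G**2 * M**2
def kappa(M): return 1 / (4 * G * M)
def temperature(M): return 1 / (8 * math.pi * G * M)
def S_bh(M): return area(M) / (4 * G)

M, dM = 1.0, 1e-7  # dM << M, so second-order terms are negligible

# First law at dJ = dQ = 0: dM should equal (kappa / 8 pi G) dA
dA = area(M + dM) - area(M)
recovered_dM = kappa(M) / (8 * math.pi * G) * dA
print(recovered_dM)             # ~ dM

# Consistency of alpha = 1/4G: dS_bh should equal dM / T
dS = S_bh(M + dM) - S_bh(M)
print(dS, dM / temperature(M))  # both ~ 8*pi*G*M*dM
```

Of course, this only confirms the internal algebra of the analogy; it says nothing about whether $S_{\rm bh}$ really is an entropy, which is the point questioned below.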
In particular, as classical horizons hide information from observers, while the orthodox interpretation of QM suggests a fundamental role of the information available to observers, it is believed that this could be a key to a deeper understanding of the relation between relativity and quantum theory \cite{peres}. As the correct theory of quantum gravity is not yet known (for reviews of various approaches to quantum gravity, see \cite{carl,alvar}), there is a belief that this deeper meaning will be revealed one day when we better understand quantum gravity. Although this belief may turn out to be true, at the moment there is no real proof that this necessarily must be so. A part of this belief is that (\ref{S=A2}) is {\em not} merely a quantity {\em analogous} to entropy, but that it really {\em is} the entropy. However, in standard statistical physics (from which thermodynamics can be derived), entropy is a quantity proportional to the number of the microscopic physical degrees of freedom. On the other hand, the derivation of (\ref{S=A2}) as sketched above does not provide a direct answer to the question what, if anything, these microscopic degrees of freedom are. In particular, they cannot simply be the particles forming the black hole, as there is no reason why the number of particles should be proportional to the surface area $A$ of the black-hole boundary. Indeed, as entropy is an extensive quantity, one expects that it should be proportional to the black-hole volume, rather than to its surface. It is believed that quantum gravity will provide a more fundamental answer to the question why the black-hole entropy is proportional to its surface, rather than to its volume. Thus, the program of finding a microscopic derivation of Eq.~(\ref{S=A2}) is sometimes referred to as the ``holy grail'' of quantum gravity. (The expression ``holy grail'' fits nicely with my expression ``myth''.) 
\subsection{Other approaches to black-hole entropy} Some results in quantum gravity already suggest a microscopic explanation of the proportionality of the black-hole entropy to its surface. For example, a loop representation of quantum-gravity kinematics (for reviews, see, e.g., \cite{rov,rovbook}) leads to a finite value of the entropy of a surface, which coincides with (\ref{S=A2}) if one additional free parameter of the theory is adjusted appropriately. However, loop quantum gravity does not provide a new answer to the question why the black-hole entropy should coincide with the entropy of its boundary. Instead, it uses a classical argument for this, based on the observation that the degrees of freedom behind the horizon are invisible to outside observers, so that only the boundary of the black hole is relevant to the physics observed by outside observers. (The book \cite{rovbook} contains a nice pedagogic presentation of this classical argument. Besides, it contains an excellent pedagogic presentation of the relational interpretation of general relativity, which, in particular, may serve as a motivation for the conceptually much more dubious relational interpretation of QM \cite{rov1,rov2} mentioned in Sec.~\ref{NOREAL}.) Such an explanation of the black-hole entropy is not what one is really searching for, as it does not completely support the four laws of black-hole ``thermodynamics'', since the other extensive quantities, such as mass $M$ and charge $Q$, contain information about the matter content of the {\em interior}. What one wants to obtain is that the entropy of the {\em interior} degrees of freedom is proportional to the boundary of the interior. A theory that is closer to achieving this goal is string theory, which, among other things, also contains a quantum theory of gravity. Strings are one-dimensional objects containing an infinite number of degrees of freedom. However, not all degrees of freedom need to be excited. 
In low-energy states of strings, only a few degrees of freedom are excited, which corresponds to states that we perceive as standard particles. However, if the black-hole interior consists of one or a few self-gravitating strings in highly excited states, then the entropy associated with the microscopic string degrees of freedom is of the order of $GM^2$ (for reviews, see \cite{zwie,horow}). This coincides with the semiclassical black-hole ``entropy'', as the latter is also of the order of $GM^2$, which can be seen from (\ref{S=A2}) and (\ref{bhpom}). The problem is that strings do not necessarily need to be in highly excited states, so the entropy of strings does not need to be of the order of $GM^2$. Indeed, the black-hole interior may certainly contain a huge number of standard particles, which corresponds to a huge number of strings in low-excited states. It is not clear why the entropy should be proportional to the black-hole surface even then. A possible reinterpretation of the relation (\ref{S=A2}) is that it does not necessarily denote the actual value of the black-hole entropy, but only an upper limit on it. This idea evolved into a modern paradigm called the {\em holographic principle} (see \cite{bousso} for a review), according to which the boundary of a region of space contains a lot of information about the region itself. However, a clear general physical explanation of the conjectured holographic principle, or of the conjectured upper limit on the entropy in a region, is still missing. Finally, let me mention that the famous black-hole entropy {\em paradox}, which seems to suggest the destruction of entropy owing to the black-hole radiation (for pedagogic reviews, see \cite{gidd,stro}), is much easier to solve when $S_{\rm bh}$ is not interpreted as true entropy \cite{nikolbh}. Nevertheless, I will not discuss it further here, as this paper is not about quantum paradoxes (see, e.g., \cite{laloe}), but about quantum myths. 
\section{Discussion and conclusion} As we have seen, QM is full of ``myths", that is, claims that are often presented as definite facts, despite the fact that the existing evidence supporting these claims is not sufficient to proclaim them as true facts. To show that they are not true facts, I have also discussed the drawbacks of this evidence, as well as some alternatives. In the paper, I have certainly not mentioned all myths existing in QM, but I hope that I have caught the most famous and most fundamental ones, appearing in several fundamental branches of physics ranging from nonrelativistic quantum mechanics of single particles to quantum gravity and string theory. The question that I attempt to answer now is -- why are the myths in QM so numerous? Of course, one of the reasons is certainly the fact that we still do not completely understand QM at the most fundamental level. However, this fact by itself does not explain why quantum physicists (who are supposed to be exact scientists) are so tolerant and sloppy about arguments that are not really proofs, thus allowing the myths to form. To find a deeper reason, let me first note that the results collected and reviewed in this paper show that the source of disagreement among physicists on the validity of various myths is not of mathematical origin, but of a conceptual one. However, in classical mechanics, which is well-understood not only on the mathematical, but also on the conceptual level, similar disagreements among physicists almost never occur. Thus, the common origin of myths in QM must lie in the fundamental {\em conceptual difference between classical and quantum mechanics}. But, in my opinion, the main conceptual difference between classical and quantum mechanics that makes the latter less understood on the conceptual level is the fact that the former introduces a clear notion of objective reality even without measurements. 
(This is why I referred to the myth of Sec.~\ref{NOREAL} as the {\em central} myth in QM.) Thus, I conclude that {\em the main reason for the existence of myths in QM is the fact that QM does not give a clear answer to the question what, if anything, objective reality is}. To support the conclusion above, let me illustrate it by a simple model of objective reality. Such a model may seem to be naive and unrealistic, or may be open to further refinements, but here its only purpose is to demonstrate how a model with explicit objective reality immediately gives clear unambiguous answers to the question of whether the myths discussed in this paper are true or not. The simple model of objective reality I discuss is a Bohmian-particle interpretation, according to which particles are objectively existing pointlike objects having deterministic trajectories guided by (also objectively existing) wave functions. To make the notion of particles and their ``instantaneous" interactions at a distance unique, I assume that there is a single preferred system of relativistic coordinates, roughly coinciding with the global system of coordinates with respect to which the cosmic microwave background is homogeneous and isotropic. Now let me briefly consider the basic claims of the titles of all sections of the paper. Is there a wave-particle duality? Yes, because both particles and wave functions objectively exist. Is there a time-energy uncertainty relation? No, at least not at the fundamental level, because the theory is deterministic. Is nature fundamentally random? No, in the sense that both waves and particle trajectories satisfy deterministic equations. Is there reality besides the measured reality? Yes, by the central assumption of the model. Is QM local or nonlocal? It is nonlocal, as it is a hidden-variable theory consistent with standard statistical predictions of QM. Is there a well-defined relativistic QM? 
Yes, because, by assumption, relativistic particle trajectories are well defined with the aid of a preferred system of coordinates. Does quantum field theory (QFT) solve the problems of relativistic QM? No, because particles are not less fundamental than fields. Is QFT a theory of particles? Yes, because, by assumption, particles {\em are} fundamental objects. (If the current version of QFT is not completely compatible with the fundamental notion of particles, then it is QFT that needs to be modified.) Is black-hole entropy proportional to its surface? To obtain a definite answer to this last question, I have to further specify my model of objective reality. For simplicity, I assume that gravity is {\em not} quantized (currently known facts do not actually exclude this possibility), but determined by a classical-like equation that, at least at sufficiently large distances, has the form of a classical Einstein equation in which ``matter" is determined by the actual particle positions and velocities. (For more details of such a model, see \cite{niksemicl}.) In such a model, the four laws of black-hole ``thermodynamics" are a direct consequence of the Einstein equation, and there is nothing to be explained about that. The quantity $S_{\rm bh}$ is only {\em analogous} to entropy, so the answer to the last question is -- no. Whatever (if anything) the true quantum mechanism of objective particle creation near the black-hole horizon might be (owing to the existence of a preferred time, the mechanism based on the Bogoliubov transformation seems viable), the classical properties of gravity near the horizon imply that the distribution of particle energies will be thermal far from the horizon, which also does not require an additional explanation and is {\em not} directly related to the four laws of black-hole thermodynamics \cite{viss}. Of course, with a different model of objective reality, the answers to some of the questions above may be different. 
But the point is that the answers are immediate and obvious. With a clear notion of objective reality, there is not much room for myths and speculations. It does not prove that objective reality exists, but suggests that this is a possibility that should be considered more seriously. To conclude, the claim that the fundamental principles of quantum theory are today completely understood, so that it only remains to apply these principles to various practical physical problems -- is also a myth. Instead, quantum theory is a theory which is not yet completely understood at the most fundamental level and is open to further fundamental research. Throughout this paper, I have demonstrated this by discussing various fundamental myths in QM for which a true proof does not yet really exist. I have also demonstrated that all these myths are, in one way or another, related to the central myth in QM according to which objective unmeasured reality does not exist. I hope that this review will contribute to a better general conceptual understanding of quantum theory and make readers more cautious and critical before accepting various claims on QM as definite facts. \section*{Acknowledgments} \addcontentsline{toc}{section}{Acknowledgments} As this work comprises the foundational background for a large part of my own scientific research in several seemingly different branches of theoretical physics, it is impossible to name all my colleagues specialized in different branches of physics who indirectly influenced this work through numerous discussions and objections that, in particular, helped me become more open-minded by understanding how the known physical facts can be viewed and interpreted in many different inequivalent ways, without contradicting the facts themselves. Therefore, I name only J. M. Karim\"aki, who suggested concrete improvements of this paper itself. 
I am also grateful to the anonymous referees whose constructive critical objections stimulated further improvements and clarifications in the paper. This work was supported by the Ministry of Science and Technology of the Republic of Croatia.
\section{Introduction} In positive characteristic $p$, there are several discrete invariants associated with abelian varieties, e.g., the $p$-rank, the Newton polygon, and the Ekedahl--Oort type. These invariants give information about the Frobenius morphism and the number of points of the abelian variety defined over finite fields. It is a natural question to ask which of these invariants can be realized by Jacobians of smooth curves. For any prime $p$, genus $g$ and $f$ such that $0 \leq f \leq g$, Faber and van der Geer prove in \cite{FVdG} that there exists a smooth curve of genus $g$ defined over $\overline{\FF}_p$ which has $p$-rank $f$. Much less is known about the Newton polygon, more precisely, the Newton polygon of the characteristic polynomial of Frobenius. For $g=1,2,3$, it is known that every possible Newton polygon occurs for a smooth curve of genus $g$. Beyond genus $3$, very few examples of Newton polygons are known to occur. In \cite[Expectation 8.5.4]{oort05}, for $g \geq 9$, Oort observed that it is unlikely for all Newton polygons to occur for Jacobians of smooth curves of genus $g$. This project focuses on Newton polygons of cyclic covers of the projective line ${\mathbb P}^1$. One case which is well-understood is when the cover is branched at $3$ points, especially when the cover is of prime degree (see \cite{GR}, \cite{honda}, \cite{Weil}, \cite{yui}). In this case, the Jacobian is an abelian variety with complex multiplication and its Newton polygon can be computed using the Shimura--Taniyama theorem. In Section \ref{sec_dim0}, we give a survey of this material, including composite degree. Although this material is well-known, it has not been systematically analyzed for this application. We use this method to tabulate numerous Newton polygons having $p$-rank $0$ which occur for Jacobians of smooth curves. We now describe the main results of this paper in more detail. 
By \cite[Theorem 2.1]{VdGVdV}, if $p=2$ and $g \in \NN$, then there exists a supersingular curve of genus $g$ defined over $\overline{\FF}_2$. For this reason, we restrict to the case that $p$ is odd in the following result. In the first application, we verify the existence of supersingular curves in the following cases. In Remark \ref{Runlikely}, we explain why the $g=9$, $g=10$ and $g=11$ cases are especially interesting. \begin{theorem} \label{Tintro1} (See Theorem \ref{Tapp1}) Let $p$ be odd. There exists a smooth supersingular curve of genus $g$ defined over $\overline{\FF}_p$ in the following cases:\\ $g=4$ and $p \equiv 2 \bmod 3$, or $p \equiv 2,3,4 \bmod 5$;\\ $g=5$ and $p \equiv 2,6,7,8,10 \bmod 11$;\\ $g=6$ and $p \not\equiv 1,3,9 \bmod 13$ or $p \equiv 3,5,6 \bmod 7$;\\ $g=7$ and $p \equiv 14 \bmod 15$ or $p \equiv 15 \bmod 16$;\\ $g=8$ and $p \not\equiv 1 \bmod 17$; \\ $g=9$ and $p \equiv 2,3,8,10,12,13,14,15,18 \bmod 19$;\\ $g=10$ and $p \equiv 5,17,20 \bmod 21$;\\ $g=11$ and $p \equiv 5,7,10,11,14,15,17,19,20,21,22 \bmod 23$. \end{theorem} The second application is Theorem \ref{TotherNP}: under certain congruence conditions on $p$, we prove that nine new Newton polygons that have $p$-rank $0$ but are not supersingular occur for Jacobians of smooth curves. For context for the third application, recall that every abelian variety is isogenous to a factor of a Jacobian. This implies that every rational number $\lambda \in [0,1]$ occurs as a slope for the Newton polygon of the Jacobian of a smooth curve. In almost all cases, however, there is no control over the other slopes in the Newton polygon. In the third application, when $d=5$ or $d=11$, under congruence conditions on $p$, we show that the slopes $1/d$ and $(d-1)/d$ occur for a smooth curve in characteristic $p$ of arbitrarily large genus with complete control over the other slopes in the Newton polygon. 
Namely, under these congruence conditions on $p$ and for all $g \geq d$, we prove that there exists a smooth curve of genus $g$ defined over $\overline{\FF}_p$ whose Newton polygon contains the slopes $1/d$ and $(d-1)/d$ with multiplicity $d$ and slopes $0$ and $1$ with multiplicity $g-d$. This was proven earlier, for all $p$, when $d=2$ \cite[Theorem 2.6]{FVdG}; $d=3$ \cite[Theorem 4.3]{Pr:large}; and $d=4$ \cite[Corollary 5.6]{AP:gen}. Let $G_{1, d-1} \oplus G_{d-1,1}$ denote the $p$-divisible group with slopes $1/d, (d-1)/d$. \begin{theorem} \label{Tintro2} (See Theorem \ref{Tapp2}) For the following values of $d$ and $p$ and for all $g \geq d$, there exists a smooth curve of genus $g$ defined over $\overline{\FF}_p$ whose Jacobian has $p$-divisible group isogenous to $(G_{1, d-1} \oplus G_{d-1,1}) \oplus (G_{0,1} \oplus G_{1,0})^{g-d}$: \begin{enumerate} \item $d=5$ for all $p \equiv 3,4,5,9 \bmod 11$; \item $d=11$ for all $p \equiv 2,3,4,6,8,9,12,13,16,18 \bmod 23$. \end{enumerate} \end{theorem} In future work, we determine new results about Newton polygons of curves arising in positive-dimensional special families of cyclic covers of the projective line. This work relies on the Newton polygon stratification of PEL-type Shimura varieties. Then we attack the same questions for arbitrarily large genera using a new induction argument for Newton polygons of cyclic covers of ${\mathbb P}^1$. We use the Newton polygons found in this paper as base cases in this induction process. \subsection*{Organization of the paper} \mbox{ } \\ Section \ref{sec_prelim} contains basic definitions and facts about group algebras, cyclic covers of ${\mathbb P}^1$, and Newton polygons. Section \ref{sec_dim0} focuses on the Jacobians of cyclic covers branched at exactly three points. We review the Shimura--Taniyama method for computing the Newton polygon and provide examples. Section \ref{sec_table} contains tables of data. 
Section \ref{Sapplication} contains the proofs of the three theorems. \subsection*{Acknowledgements} \mbox{ } \\ This project began at the \emph{Women in Numbers 4} workshop at the Banff International Research Station. Pries was partially supported by NSF grant DMS-15-02227. We thank the referee for the valuable feedback and comments. \section{Notation and background} \label{sec_prelim} \subsection{The group algebra $\QM$}\label{prelim_gpalg} \mbox{ } \\ For an integer $m\geq 2$, let $\mu_m:=\mu_m(\CC)$ denote the group of $m$-th roots of unity in $\CC$. For each positive integer $d$, we fix a primitive $d$-th root of unity $\zeta_d=e^{2\pi i/d}\in\CC$. Let $K_d=\QQ(\zeta_d)$ be the $d$-th cyclotomic field over $\QQ$ of degree $\phi(d)$. Let $\QM$ denote the group algebra of $\mu_m$ over $\QQ$. It has an involution $*$ induced by the inverse map on $\mu_m$, i.e., $\zeta^*:= \zeta^{-1}$ for all $\zeta\in\mu_m$. The $\QQ$-algebra $\QM$ decomposes as a product of finitely many cyclotomic fields, namely \[\QM=\prod_{0<d\mid m}K_d.\] The involution $*$ on $\QM$ preserves each cyclotomic factor $K_d$, and for each $d\mid m$, the restriction of $*$ to $K_d$ agrees with complex conjugation. Let $\CT$ denote the set of homomorphisms $\tau:\QM\to\CC$. In the following, we write \[\QM\otimes_\QQ\CC=\prod_{\tau\in\CT} \CC,\] and for each $(\QM\otimes_\QQ\CC)$-module $W$, we write $W=\oplus_{\tau\in\CT} W_\tau$, where $W_\tau$ denotes the subspace of $W$ on which $a\otimes 1\in\QM\otimes_\QQ\CC$ acts as $\tau(a)$. For convenience, we fix an identification $\CT=\ZZ/m\ZZ$ by defining, for $n\in \ZZ/m\ZZ$, \[\tau_n(\zeta):=\zeta^n, \text{ for all }\zeta\in\mu_m.\] Note that, for any $n\in\ZZ/m\ZZ$, and $a\in\QM$, \[{\tau}_{-n}(a)=\tau_{n}(a^*)=\overline{\tau_n(a)},\] where $z\mapsto \overline{z}$ denotes complex conjugation on $\CC$. In the following, we write $\tau_n^*:=\tau_{-n}$. 
\begin{remark} For each $\tau\in\CT$, the homomorphism $\tau:\QM\to\CC$ factors via a projection $\QM\to K_d$, for a unique positive divisor $d$ of $m$. We refer to $d$ as {\em the order of} $\tau$. Indeed, for each $n\in\ZZ/m\ZZ$, the homomorphism $\tau_n$ factors via the cyclotomic field $K_d$ if and only if $d$ is the exact order of $n$ in $\ZZ/m\ZZ$. \end{remark} For each rational prime $p$, we fix an algebraic closure $\QQ_p^{\rm alg}$ of $\QQ_p$, and an identification $\CC\simeq \CC_p$, where $\CC_p$ denotes the $p$-adic completion of $\QQ_p^{\rm alg}$. We denote by $\QQ_p^{\rm un}$ the maximal unramified extension of $\QQ_p$ in $\QQ_p^{\rm alg}$, and by $\sigma$ the Frobenius of $\QQ_p^{\rm un}$. Assume that $p$ does not divide $m$. Then $\QM$ is unramified at $p$ (i.e., the group $\mu_m$ is \'etale over $\ZZ_p$), and, for each $\tau\in\CT$, the homomorphism $\tau:\QM\to \CC\simeq \CC_p$ factors via the subfield $\QQ_p^{\rm un}\subset\CC_p$. In particular, \[\QM\otimes_\QQ \QQ_p^{\rm un}=\prod_{\tau\in\CT} \QQ_p^{\rm un}.\] There is a natural action of the Frobenius $\sigma$ on the set $\CT$, defined by $\tau\mapsto \tau^\sigma:=\sigma\circ \tau$. Then $\tau_n^\sigma=\tau_{pn}$ for all $n\in\ZZ/m\ZZ$. We write $\CO$ for the set of $\sigma$-orbits $\co$ in $\CT$. For each $\tau\in\CT$, we denote by $\co_\tau$ its $\sigma$-orbit. The set $\CO$ is in one-to-one correspondence with the set of primes $\p$ of $\QM$ above $p$. We write $\p_\co$ for the prime above $p$ associated with an orbit $\co$ in $\CT$. For each $\sigma$-orbit $\co\in \CO$, the order of $\tau$ is the same for all $\tau\in \co$ and we denote this order by $d_\co$. Let $K_{d_\co, \p_\co}$ denote the completion of $K_{d_\co}$ along the prime $\p_\co$. Then \[\QM\otimes_\QQ\QQ_p=\prod_{\co\in\CO} K_{d_\co, \p_\co}.\] \subsection{Cyclic covers of the projective line}\label{prelim_curve} \mbox{ } \\ Fix an integer $m\geq 2$, together with a triple of positive integers $a=(a(1),a(2), a(3))$. 
We refer to such a pair $(m,a)$ as a {\em monodromy datum} if \begin{enumerate} \item $a(i)\not\equiv 0\bmod m$, for all $i=1, 2, 3$, \item $\gcd(m, a(1), a(2), a(3))=1$, \item $a(1)+a(2)+a(3)\equiv 0 \bmod m$. \end{enumerate} Fix a monodromy datum $(m,a)$. The equation \begin{equation} \label{Ecurve} y^m=x^{a(1)} (x-1)^{a(2)} \end{equation} defines a smooth projective curve $C=C_{(m,a)}$ defined over $\QQ$. The function $x$ on $C$ yields a map $C \to {\mathbb P}^1$, and there is a $\mu_m$-action on $C$ over ${\mathbb P}^1$ given by $\zeta\cdot (x,y)=(x,\zeta\cdot y)$ for all $\zeta\in\mu_m$ (more precisely, this action is defined on the base change of $C$ from $\QQ$ to $K_m$). The curve $C$, together with this $\mu_m$-action, is a $\mu_m$-Galois cover of the projective line $\PP$; it is branched at $0,1,\infty$ and has local monodromy $a(1)$ at $0$, $a(2)$ at $1$ and $a(3)$ at $\infty$. By the hypotheses on the monodromy datum, for primes $p \nmid m$ the reduction of $C$ at $p$ is a geometrically irreducible curve of genus $g$, where \begin{equation} \label{Egenus} g=g(m,a)=1+\frac{m-\gcd(a(1),m)-\gcd(a(2),m)-\gcd(a(3),m)}{2}. \end{equation} \begin{remark} The isomorphism class of the curve $C=C_{(m,a)}$ depends only on the equivalence class of the monodromy datum $(m,a)$, where two monodromy data $(m,a)$ and $(m',a')$ are equivalent if $m=m'$, and the images of $a,a'$ in $(\ZZ/m\ZZ)^3$ are in the same orbit under the action of $(\ZZ/m\ZZ)^*\times\Sigma_3$, where $\Sigma_3$ is the symmetric group of degree $3$. \end{remark} Let $V:=H^1(C(\CC), \QQ)$ denote the first Betti cohomology group of $C$. Then $V$ is a $\QM$-module, and there is a decomposition $V\otimes_{\QQ}\CC=\oplus_{\tau \in \CT}V_\tau$. In addition, $V$ has a Hodge structure of type $(1,0)+(0,1)$, with the $(1,0)$ piece given by $H^0(C(\CC), \Omega^1_{C})$ via the Betti--de Rham comparison. We denote by $V^+$ (resp.\ $V^-$) the $(1,0)$ (resp.\ $(0,1)$) piece. 
Both $V^+$ and $V^-$ are $\QM$-modules, so there are decompositions \[V^+=\oplus_{\tau\in \CT} V^+_\tau \ {\rm and} \ \quad V^-=\oplus_{\tau\in \CT} V^-_\tau.\] Let $\cf(\tau):=\dim_{\CC} V^+_\tau$. For any $q\in \QQ$, let $\langle q\rangle$ denote the fractional part of $q$. By \cite[Lemma 2.7, Section 3.2]{moonen} (see also \cite{deligne-mostow}), \begin{equation}\label{DMeqn} \cf(\tau_n)=\begin{cases} -1+\sum_{i=1}^3\langle\frac{-na(i)}{m}\rangle \text{ if $n\not\equiv 0 \bmod m$}\\ 0\text{ if $n\equiv 0 \bmod m$}.\end{cases} \end{equation} We call $\cf=(\cf(\tau_1), \ldots, \cf(\tau_{m-1}))$ the \emph{signature type} of the monodromy datum $(m,a)$. \begin{remark}\label{rmk_dimV} Let $n(\tau):=\dim_\CC V_\tau$. For all $\tau\in \CT$, one sees that $\dim_\CC V^+_{\tau^*}=\dim_\CC V^-_{\tau}$ and thus $\cf(\tau)+\cf(\tau^*)=n(\tau)$. Note that $n(\tau)$ depends only on the order of $\tau$, and thus only on the orbit $\co_\tau$. If $\co \in \CO$, we sometimes write $n(\co)=n(\tau)$, for any $\tau\in\co$. \end{remark} \subsection{Newton polygons}\label{prelim_NP} \mbox{ } \\ Let $X$ denote a $g$-dimensional abelian scheme over an algebraically closed field $\FF$ of positive characteristic $p$. If $\FF$ is an algebraic closure of $\FF_p$, the finite field of $p$ elements, then there exists a finite subfield $\FF_0\subset \FF$ such that $X$ is isomorphic to the base change to $\FF$ of an abelian scheme $X_0$ over $\FF_0$. Let $W(\FF_0)$ denote the Witt vector ring of $\FF_0$. Consider the action of Frobenius $\varphi$ on the crystalline cohomology group $H^1_{\rm cris}(X_0/W(\FF_0))$. There exists an integer $n$ such that $\varphi^n$, the composite of $n$ Frobenius actions, is a linear map on $H^1_{\rm cris}(X_0/W(\FF_0))$. The Newton polygon $\nu(X)$ of $X$ is defined as the multi-set of rational numbers $\lambda$ such that $n\lambda$ are the valuations at $p$ of the eigenvalues of Frobenius for this action. 
Note that the Newton polygon is independent of the choice of $X_0$, $\FF_0$, and $n$. Here is an alternative definition, which works over an arbitrary algebraically closed field $\FF$. For each $n \in \NN$, consider the multiplication-by-$p^n$ morphism $[p^n]:X \to X$ and its kernel $X[p^n]$. The $p$-divisible group of $X$ is denoted by $X[p^\infty] = \varinjlim X[p^n]$. For each pair $(c,d)$ of non-negative relatively prime integers, fix a $p$-divisible group $G_{c,d}$ of codimension $c$, dimension $d$, and thus height $c+d$. By the Dieudonn\'e--Manin classification \cite{maninthesis}, there is an isogeny of $p$-divisible groups \[X[p^\infty] \sim \oplus_{\lambda=\frac{d}{c+d}} G_{c,d}^{m_\lambda},\] where $(c,d)$ ranges over pairs of non-negative relatively prime integers. The Newton polygon is the multi-set of values of the slopes $\lambda$. By identifying $H^1_{\rm cris}(X/W(\FF))$ with the (contravariant) Dieudonn\'e module of $X$, it is possible to show that these definitions are equivalent. The slopes of the Newton polygon are in $\QQ \cap [0,1]$. The Newton polygon is typically drawn as a lower convex polygon, with endpoints $(0,0)$ and $(2g,g)$ and slopes equal to the values of $\lambda$, with multiplicity $(c+d)m_\lambda$. It is symmetric and has integral breakpoints. The Newton polygon is an isogeny invariant of $X$; it is determined by the multiplicities $m_\lambda$. Given an abelian variety or $p$-divisible group $\CA$ defined over a local field of mixed characteristic $(0,p)$, by abuse of notation, we may write $\nu(\CA)$ for the Newton polygon of its special fiber. In this paper, we use $ord$ to denote the Newton polygon with slopes $0$ and $1$, each with multiplicity $1$, and $ss$ to denote the Newton polygon with slope $1/2$ with multiplicity $2$. For $s<t \in \ZZ_{>0}$ with ${\rm gcd}(s,t)=1$, we use $(s/t, (t-s)/t)$ to denote the Newton polygon with slopes $s/t$ and $(t-s)/t$, each with multiplicity $t$. 
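As a concrete illustration (mine, not part of the paper), the genus formula \eqref{Egenus} and the signature formula \eqref{DMeqn} of Section \ref{prelim_curve} can be evaluated mechanically with exact rational arithmetic; a minimal Python sketch:

```python
from fractions import Fraction
from math import gcd

def genus(m, a):
    # genus formula for the mu_m-cover y^m = x^{a(1)} (x - 1)^{a(2)}:
    # g = 1 + (m - gcd(a(1), m) - gcd(a(2), m) - gcd(a(3), m)) / 2
    return 1 + (m - sum(gcd(ai, m) for ai in a)) // 2

def signature(m, a):
    # f(tau_n) = -1 + sum_i <-n a(i)/m>, where the fractional part
    # <-n a(i)/m> equals ((-n a(i)) mod m) / m, computed exactly
    return {n: -1 + sum(Fraction((-n * ai) % m, m) for ai in a)
            for n in range(1, m)}

m, a = 29, (1, 1, 27)   # a datum with m = 2g + 1 and g = 14
sig = signature(m, a)
# dim V^+ = g, and f(tau_n) + f(tau_{-n}) = 1 since 29 divides no a(i)
print(genus(m, a), sum(sig.values()))   # both equal 14
```

The dictionary `sig` is exactly the signature type $(\cf(\tau_1),\ldots,\cf(\tau_{m-1}))$ of the monodromy datum.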
\begin{definition} The \emph{$p$-rank} of $X$ is defined to be $\dim_{\FF_p}\hom(\mu_p,X)$. Equivalently, the $p$-rank of $X$ is the multiplicity of the slope $0$ in the Newton polygon. \end{definition} \begin{definition} Given a finite set of lower convex polygons $\{\nu_i \mid i=1,\dots ,n\}$, $n\geq 2$, each $\nu_i$ having end points $(0,0)$ and $(h_i, d_i)$ and with slope $\lambda$ occurring with multiplicity $m_{i, \lambda}$, their {\em amalgamate sum} $\sum_{i=1}^n\nu_i$ is the lower convex polygon having end points $(0,0)$ and $(\sum_{i=1}^nh_i, \sum_{i=1}^nd_i)$ and with slope $\lambda$ occurring with multiplicity $\sum_{i=1}^n m_{i, \lambda}$. \end{definition} For any finite set of $p$-divisible groups $\{G_i \mid i=1,\dots , n\}$, $n\geq 2$, with Newton polygons $\nu(G_i)$, the Newton polygon of the $p$-divisible group $\oplus _{i=1}^n G_i$ is the amalgamate sum $\sum_{i=1}^n\nu(G_i)$. \section{Newton polygons of curves with complex multiplication}\label{sec_dim0} As in Section \ref{prelim_curve}, we fix a monodromy datum $(m,a)$, and consider the $\mu_m$-Galois cover $C_{(m,a)} \to \PP$ branched at $0,1, \infty$ with local monodromy $a=(a(1),a(2),a(3))$ as in \eqref{Ecurve}. We write $C=C_{(m,a)}$ and let $J=J_{(m,a)}$ be the Jacobian ${\rm Jac}(C)$ of $C$. The action of $\mu_m$ on $C$ induces an action of $\QM$ on $J$. Also, the equation of $C$ naturally defines integral models $\cC$ and $\CJ={\rm Jac}(\cC)$ of $C$ and $J$ over $\ZZ$ \cite[Section 4]{wewersthesis}; the curve $\cC$ has good reduction at all primes $p$ such that $p\nmid m$ by \cite[XIII, Corollary~2.12, Proposition 5.2]{SGA1}. \subsection{Shimura--Taniyama method}\label{sec_ST} \mbox{ } \\ It is well-known that $J$ is an abelian variety with complex multiplication, but we record a proof here with a refined statement on the CM algebra contained in $\End(J_{\QQ^{\rm alg}})\otimes \QQ$. 
We say that an abelian variety $A$ over $\QQ^{\rm alg}$ has complex multiplication (CM) by a $\QQ$-algebra $E$ if $E$ is an \'etale $\QQ$-subalgebra of $\End(A_{\QQ^{\rm alg}})\otimes\QQ$ of degree $2\dim A$ over $\QQ$. In particular, if $E=\prod E_i$ then $A$ has CM by $E$ if and only if $A$ is isogenous to $\prod A_i$ with $A_i$ an abelian variety with CM by $E_i$. Also, if $A$ has CM by $E$ then $H_1(A,\QQ)$ is free of rank $1$ over $E$ (\cite[Definition 3.2, and Proposition 3.6]{milneCM}). \begin{lemma}\label{lem_CM} The abelian variety $J$ has complex multiplication by $\prod_{d} K_d$, where the product is taken over all $d$ such that $1<d\mid m$ and $d\nmid a(i)$ for any $i=1,2,3$. \end{lemma} \begin{proof} By Hodge theory, $J_{\overline{\QQ}}$ is isogenous to $\prod_{1<d\mid m}A_d$, where $A_d$ is an abelian variety whose first Betti cohomology group is isomorphic to $\oplus_{\tau\text{ of order }d}V_\tau$. For $0\neq n\in \ZZ/m\ZZ$, the order $d$ of $n$ is $d=m/\gcd(n,m)$. Let $x_n$ be the number of elements in $a$ which are not divisible by $d$. If none of the $a(i)$ is divisible by $d$, then $x_n=3$. Otherwise, $x_n=2$ since $\gcd(m, a(1),a(2),a(3))=1$ and $a(1)+a(2)+a(3) \equiv 0 \bmod m$. For example, when $\gcd(n,m)=1$, then $d=m$ and hence $x_n=3$. Similarly, if $m$ is even and $n=m/2$, then $d=2$ and $x_n=2$. By \eqref{DMeqn}, $\frf(\tau_n)+\frf(\tau_{-n})=x_n-2$. Hence $A_d=\{1\}$, the trivial abelian variety, if $x_n=2$, and $A_d$ is a simple abelian variety of dimension $[K_d:\QQ]/2$ such that $\End(A_d)\otimes \QQ=K_d$ otherwise. Then $J$ has complex multiplication by the product of the fields $K_d$, where the product is taken over all $d$ such that $x_n=3$, i.e., such that $d\nmid a(i)$ for $i=1,2,3$. \end{proof} By Lemma \ref{lem_CM}, the abelian variety $J$ has potentially good reduction everywhere. This means that there exists an abelian scheme over some finite extension of $\ZZ_p$ such that its generic fiber is $J$. 
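To make the divisibility condition in Lemma \ref{lem_CM} concrete, one can list the cyclotomic factors $K_d$ and check that the resulting algebra has degree $2g$ over $\QQ$, as a CM algebra must. A small Python sketch (illustrative only; the example data are my own choices):

```python
from math import gcd

def phi(d):
    # Euler totient; [K_d : Q] = phi(d)
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def cm_degrees(m, a):
    # degrees of the factors K_d in the lemma: divisors 1 < d | m
    # with d dividing none of the a(i)
    return [phi(d) for d in range(2, m + 1)
            if m % d == 0 and all(ai % d != 0 for ai in a)]

def genus(m, a):
    # genus formula for the cover y^m = x^{a(1)} (x - 1)^{a(2)}
    return 1 + (m - sum(gcd(ai, m) for ai in a)) // 2

# the CM algebra is etale of degree 2 * dim(J) = 2g over Q:
for m, a in [(29, (1, 1, 27)), (9, (1, 1, 7))]:
    assert sum(cm_degrees(m, a)) == 2 * genus(m, a)
```

For $m=29$ prime there is a single factor $K_{29}$ of degree $28=2\cdot 14$; for $m=9$, $a=(1,1,7)$ the factors are $K_3$ and $K_9$ of degrees $2$ and $6$, matching $g=4$.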
For $p \nmid m$, since $C$ already has good reduction at $p$, so does $J$; no extension of $\ZZ_p$ is needed, and the Jacobian $\CJ$ is a smooth integral model of $J$ defined over $\ZZ_p$. The $\QM$-action on $J$ extends naturally to a $\QM$-action on $\CJ$. Let $\CJ[p^\infty]$ be the associated $p$-divisible group scheme of $\CJ$. The $\QM$-action on $\CJ$ induces a $(\QM\otimes_\QQ \QQ_p)$-action on $\CJ[p^\infty]$ and thus a canonical decomposition \[\CJ[p^\infty]=\oplus_{\co \in \CO'} \CJ[\p_\co^\infty],\] where $\CO'$ is the subset of $\CO$ with $d_\co \nmid a(i)$ for any $i=1,2,3$ and each $p$-divisible group $\CJ[\p_\co^\infty]$ has height $\# \co$. To state the theorem by Shimura--Taniyama and Tate on $\nu(\CJ[\p_\co^\infty])$, we introduce the following notation. Recall that $\frf(\tau)=\dim_{\CC} V^+_\tau$. By the proof of Lemma \ref{lem_CM}, $\frf(\tau)\in \{0,1\}$ and $\frf(\tau)+\frf(\tau^*)=1$ for all $\tau \in \CT$ such that the order of $\tau$ does not divide $a(i)$ for any $i=1,2,3$. For $\epsilon \in \{0,1\}$, define \begin{equation} \label{EdefS} S_\epsilon=\{\tau \in \CT \mid \text{the order of }\tau \text{ does not divide }a(1),a(2), a(3), \text{ and }\frf(\tau)=\epsilon\}. \end{equation} For $\co \in \CO'$, set $\alpha_\co=\#( \co\cap S_1)$ and $\beta_\co=\#(\co\cap S_0)$. Note that $\alpha_\co+\beta_\co=\# \co$. \begin{theorem} \label{thm_ST} (Shimura--Taniyama formula \cite[Section 5]{Tate}) The only slope of the Newton polygon $\nu(\CJ[\p_\co^\infty])$ is $\alpha_\co/\# \co$. \end{theorem} \begin{proof} For completeness, we briefly sketch Tate's local proof as in \cite[Section 5]{Tate}. First, we recall the notion of a $p$-divisible group with complex multiplication. Let $G$ be a $p$-divisible group defined over the ring of integers ${\mathcal O}_L$ of a finite extension $L$ of $\Q_p$ such that $L\subset \Q_p^{\rm alg}$. 
We say that $G$ over ${\mathcal O}_L$ has complex multiplication by a local field $K$, for $K$ a finite extension of $\Q_p$, if $G$ has height $[K:\Q_p]$ and is equipped with a $\Q_p$-linear action of $K$ defined over ${\mathcal O}_L$ such that, for each $\tau\in H:={\rm Hom}_{\Q_p}(K, \Q_p^{\rm alg})$, $$\cf(\tau):=\dim_{\Q_p^{\rm alg}} ({\rm Lie}(G)\otimes_{{\mathcal O}_L} \Q_p^{\rm alg})_\tau\in\{0,1\}.$$ For $\Phi:=\{\tau\in H \mid \cf(\tau)=1\}$, the pair $(K,\Phi)$ is called the CM-type of $G$. Let $k_L$ denote the residue field of $L$ and set $G_0:=G\times_{{\mathcal O}_L} k_L$, the reduction of $G$ over $k_L$. We observe that, by definition, if $G$ over ${\mathcal O}_L$ has CM-type $(K,\Phi)$, then $G_0$ is isoclinic (i.e., the Newton polygon of $G_0$ has only one slope) of slope $\#\Phi/[K:\Q_p]$. Indeed, the existence of a $\Q_p$-linear embedding of $K$ into ${\rm End}(G)\otimes \QQ$, with $[K:\Q_p]={\rm height}(G)$, implies that $G_0$ is isoclinic. Also, by the definition of CM-type, the dimension of $G$ is equal to $\#\Phi$, because $$\dim (G)= {\rm rk}_{{\mathcal O}_L} {\rm Lie}(G)= \sum_{\tau\in H}\cf(\tau)=\#\Phi.$$ We deduce that the slope of $G_0$ is $\frac{\dim (G)}{{\rm height }(G)}=\frac{\#\Phi}{[K:\Q_p]}$. To conclude, it suffices to observe that for $p \nmid m$, Lemma \ref{lem_CM} implies that, for each $\co\in\CO'$, the $p$-divisible group $\CJ[\p_\co]$, after passing to a finite extension of $\ZZ_p$,\footnote{The Newton polygon is independent of the definition field of $\CJ$ and we pass to a finite extension of $\ZZ_p$ such that the CM-action is defined over this larger local ring so that we can apply Tate's theory.} has complex multiplication by $K_{d_\co,\p_\co}$ with CM-type $(K_{d_\co,\p_\co},\co\cap S_1)$. \end{proof} \begin{corollary}\label{ST_ss} Assume that all orbits $\co\in \CO'$ are self-dual, i.e., $\co=\co^*$. Then $\CJ$ has supersingular reduction at $p$. 
\end{corollary} \begin{proof} For each $\co\in\CO'$, if $\tau\in \co$, then $\tau^*\in \co$. Hence $\alpha_\co=\beta_\co$, and the only slope of the Newton polygon $\nu(\CJ[\p_\co^\infty])$ is $\alpha_\co/(\alpha_\co+\beta_\co) = 1/2$. \end{proof} \begin{remark} \label{pinert} Let $K_m^+$ be the maximal totally real subfield of $\Km$. If each (or, equivalently, one) prime of $K_m^+$ above $p$ is inert in $\Km/K_m^+$, then all $\sigma$-orbits $\co\in\CO'$ are self-dual. E.g., if $p\equiv -1\bmod m$, then for all $n\in(\ZZ/m\ZZ)^*$, the associated orbit is $\co_n=n\langle p\rangle=\{n,-n\}=\co_{-n}=\co^*_n$. \end{remark} \begin{example} \label{example_ss} Let $g \geq 1$, $m=2g+1$ and $a = (1,1,m-2)$. The equation $y^m=x(x-1)$ defines a smooth projective curve $\cC$ over $\ZZ[1/m]$ with geometrically irreducible fibers. It has genus $g$ by \eqref{Egenus}. Its Jacobian $\CJ$ over $\ZZ[1/m]$ has complex multiplication by $\prod_{1<d\mid m}\Kd$. Suppose that $p\nmid m$ and the order of $p\in (\ZZ/m\ZZ)^*$ is even (i.e., the inertia degree of $(\Km)_{\p}$ is even for any $\p$ over $p$). When $m$ is prime, this implies that all primes of $K_m^+$ above $p$ are inert in $K_m/K_m^+$. Then $\CJ$ has supersingular reduction at $p$ by Corollary \ref{ST_ss}. When $p\nmid 2m$, this is \cite[Theorem 1]{honda}. See also \cite[Lemma 1.1]{GR}. \end{example} \begin{remark} The Newton polygon of the Fermat curve $F_m: x^m+y^m=z^m$ is studied in \cite{yui}, with the connection to Jacobi sums going back to \cite{Weil}. Fix $k$ with $2 \leq k \leq m-1$ and consider the inertia type $a=(1, k-1, m-k)$. Then the $\mu_m$-Galois cover $\cC_{m,a} \to \PP$ with inertia type $a$ is a quotient of the Fermat curve. In certain cases, this is sufficient to determine the Newton polygon of $\cC_{m,a}$. Let $f$ be the order of $p$ modulo $m$. If $f$ is even and $p^{f/2} \equiv -1 \bmod m$, then $F_m$ is supersingular, so $\cC_{m,a}$ is supersingular as well. 
If $f$ is odd, then the slope $1/2$ does not occur in the Newton polygon of $F_m$ or $\cC_{m,a}$. Information about the $p$-rank of $F_m$ and $\cC_{m,a}$ can be found in \cite{gonzalez}. \end{remark} \subsection{Slopes with large denominators} \mbox{ } \\ In this section, we use Theorem \ref{thm_ST} to construct curves of genus $g \geq 1$ whose Newton polygon contains only slopes with large denominators. Consider a monodromy datum $(m,a)$ with $m=2g+1$ and $a=(a(1),a(2),a(3))$. We further assume that $d \nmid a(i)$ for any $1<d\mid m$. For convenience, via the identification $\CT=\ZZ/m\ZZ$, we identify the sets $S_0$ and $S_1$ from \eqref{EdefS} as subsets of $\ZZ/m\ZZ$. If $p$ is a rational prime not dividing $m$, we write $\langle p\rangle\subset (\mathbb{Z}/m\ZZ)^*$ for the cyclic subgroup generated by the congruence class of $p\bmod m$. Then $\langle p \rangle$ acts naturally on $\mathbb{Z}/m\ZZ$. For $n\not\equiv 0\bmod m$, we write $n\langle p \rangle$ for the $\langle p \rangle$-orbit of $n$ in $\mathbb{Z}/m\ZZ$. The following proposition is a special case of Theorem \ref{thm_ST}. \begin{proposition}\label{prop_ST} Assume $p \nmid m$. Then the slopes of the Newton polygon $\nu(\CJ)$ at $p$ are naturally indexed by the cosets of $\hp$ in $(\mathbb{Z}/d\ZZ)^*$ for all $1<d\mid m$. For each orbit $n\hp$, the associated slope is \[\lambda_{n\hp}:=\frac{\# n\hp \cap S_1}{\#n\hp}.\] \end{proposition} Note that when the inertia type $a$ is fixed, the Newton polygon $\nu(\CJ)$ at a prime $p$ depends only on the associated subgroup $\hp\subset (\mathbb{Z}/m\ZZ)^*$. In particular, if $m=2g+1$ is prime, then $\nu(\CJ)$ at $p$ depends only on the order of $p$ in $ (\mathbb{Z}/m\ZZ)^*$. See also \cite[Theorem~2]{honda} for the case when $a=(1,1,m-2)$. \begin{corollary} \label{Clargeden} Assume that $m=2g+1$ is prime and let $f$ be a prime divisor of $g$.
If $p$ is a prime with $p \nmid m$ such that the reduction of $p$ has order $f$ in $ (\mathbb{Z}/m\ZZ)^*$, then every slope $\lambda$ of the Newton polygon $\nu(\CJ)$ at $p$ with $\lambda\neq 0,1$ has denominator $f$. In particular, if the $p$-rank of $\CJ$ is $0$, then every slope has denominator $f$. \end{corollary} For any $m$, $g$ and $f$ as above, there are infinitely many primes $p$ satisfying the hypotheses of Corollary \ref{Clargeden} by the Chebotarev density theorem. \begin{proof} Under the hypotheses, if $\lambda_{n\hp}\neq 0,1$, then $f$ is the denominator of the fraction $\lambda_{n\hp}$ by Proposition \ref{prop_ST}. When the $p$-rank is $0$, then there is no slope $0$ or $1$ by definition. \end{proof} \begin{example} \label{Enotsg7} Let $g=14$ and $m=29$. For $p \equiv 7,16,20,23,24,25 \bmod 29$, the inertia degree is $f=7$. For each choice of the inertia type, the Newton polygon is $(2/7, 5/7)\oplus (3/7, 4/7)$. \end{example} A prime number $\ell$ is a \emph{Sophie Germain prime} if $2\ell+1$ is also prime. The rest of the section focuses on the case when $g$ is a Sophie Germain prime. \begin{corollary} Suppose $g$ is an odd Sophie Germain prime. Let $p$ be a prime, $p \neq 2g+1$. Then, one of the following occurs. \begin{enumerate} \item If $p \equiv 1 \bmod 2g+1$, then $\nu(\CJ)=ord^g$. \item If $p \equiv -1 \bmod 2g+1$, then $\nu(\CJ)=ss^g$. \item If $p$ has order $g$ modulo $2g+1$, then $\nu(\CJ)=(\alpha/g,(g-\alpha)/g)$, for $\alpha=\# \hp \cap S_1$. \item If $p$ has order $2g$ modulo $2g+1$, then $\nu(\CJ)=ss^g$. \end{enumerate} \end{corollary} \begin{proof} Let $m=2g+1$. Under the Sophie Germain assumption on $g$, a prime $p\neq m$ has order either $1$, $2$, $g$, or $2g$ modulo $m$. Cases (2) and (4) follow from Corollary \ref{ST_ss}, and Cases (1) and (3) follow from Proposition \ref{prop_ST}. \end{proof} Note that $p$ has order $g$ modulo $2g+1$ if and only if $p$ is a quadratic residue other than $1$ modulo $2g+1$.
In this case, if $a=(1,1,2g-1)$, then $S_1=\{1, \ldots, g\}$ and $\alpha$ is the number of quadratic residues modulo $2g+1$ in $S_1$. \begin{example} \label{Esgg5} Let $g=5$ and $m=11$. Let $p \equiv 3,4,5,9 \bmod 11$. If $a=(1,1,9)$, then $\nu(\CJ)=(1/5,4/5)$. If $a=(1,2,8)$, then $\nu(\CJ)=(2/5,3/5)$. \end{example} \begin{example} \label{Esgg11} Let $g=11$ and $m=23$. Let $p \equiv 2, 3, 4, 6, 8, 9, 12, 13, 16, 18 \bmod 23$. If $a=(1,1,21)$, then $\nu(\CJ)=(4/11,7/11)$. If $a=(1,4,18)$, then $\nu(\CJ)=(1/11,10/11)$. \end{example} In Examples \ref{Esgg5}--\ref{Esgg11}, the listed Newton polygons are the only ones that can occur, under these conditions on $p$, as $a$ varies among all possible inertia types for $\mu_m$-Galois covers of the projective line branched at three points. We did not find other examples of curves whose Newton polygon has slopes $1/d$ and $(d-1)/d$ using this method. \begin{example} \label{Esgg1013} Let $g=1013$ and $m=2027$. Suppose the congruence class of $p$ modulo $2027$ is contained in $\langle 3 \rangle$ in $(\ZZ/2027\ZZ)^*$. If the inertia type is $a=(1,1,2025)$, then $\nu(\CJ)=(490/1013,523/1013)$. \end{example} \section{Tables}\label{sec_table} The following tables contain all the Newton polygons which occur for cyclic degree $m$ covers of the projective line branched at three points when $3 \leq m \leq 12$. Each inertia type $a=(a_1, a_2, a_3)$ is included, up to permutation and the action of $(\ZZ/m)^*$. The signature is computed by \eqref{DMeqn} and written as $(f(1), \ldots, f(m-1))$. We denote by $ord$ the Newton polygon of $G_{0,1} \oplus G_{1,0}$, which has slopes $0$ and $1$, each with multiplicity $1$, and by $ss$ the Newton polygon of $G_{1,1}$, which has slope $1/2$ with multiplicity $2$.
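The orbit-and-slope computation of Proposition \ref{prop_ST}, which underlies the tables below, is easy to carry out by machine. The following Python sketch is purely illustrative (it is not the code used for this paper): it takes the set $S_1$ as explicit input, e.g.\ $S_1=\{1,\dots,g\}$ for the inertia type $a=(1,1,m-2)$ as noted above, rather than computing it from the signature formula \eqref{DMeqn}.

```python
# Slopes of the Newton polygon nu(J) at p, following Proposition "prop_ST":
# each <p>-orbit n<p> in (Z/mZ) \ {0} contributes the slope
# #(n<p> cap S_1) / #(n<p>), with multiplicity equal to the orbit size.
from fractions import Fraction

def newton_polygon_slopes(m, p, S1):
    assert m % p != 0, "we assume p does not divide m"
    seen, slopes = set(), []
    for n in range(1, m):
        if n in seen:
            continue
        orbit, x = set(), n
        while x not in orbit:            # orbit of n under multiplication by p
            orbit.add(x)
            x = (x * p) % m
        seen |= orbit
        lam = Fraction(len(orbit & S1), len(orbit))
        slopes += [lam] * len(orbit)
    return sorted(slopes)

# Example "Esgg5": m=11, p=3, a=(1,1,9), S_1={1,...,5};
# the slopes come out 1/5 and 4/5, each with multiplicity 5, i.e. nu=(1/5,4/5).
slopes = newton_polygon_slopes(11, 3, {1, 2, 3, 4, 5})
```

For $p\equiv -1\bmod m$ every orbit is $\{n,-n\}$ and all slopes come out $1/2$, recovering the supersingular case of Corollary \ref{ST_ss}.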
\begin{center} $m=3$\\ \begin{tabular}{ |c|c|c|c| } \hline & p & $ 1 \bmod 3$ & $2 \bmod 3$ \\ \hline & {prime orbits} & split & $(1,2)$ \\ \hline a & signature & &\\ \hline $(1,1,1)$ & $(1,0)$ & $ord$ & $ss$ \\ \hline \end{tabular} \end{center} \bigskip \begin{center} $m=4$\\ \begin{tabular}{ |c|c|c|c| } \hline & p & $ 1 \bmod 4$ & $3 \bmod 4$ \\ \hline & {prime orbits} & split & $(1,3),(2)$ \\ \hline a & signature & &\\ \hline $(1,1,2)$ & $(1,0,0)$ & $ord$ & $ss$ \\ \hline \end{tabular} \end{center} \bigskip \begin{center} $m=5$\\ \begin{tabular}{ |c|c|c|c|c| } \hline & p & $ 1 \bmod 5$ & $2,3 \bmod 5$ & $ 4 \bmod 5$\\ \hline & {prime orbits} & split & $(1,2,3,4)$ & $(1,4)$, $(2,3)$ \\ \hline a & signature & && \\ \hline $(1,1,3)$ & $(1,1,0,0)$ & $ord^2$ & $ss^2$ & $ss^2$\\ \hline \end{tabular} \end{center} \bigskip \begin{center} $m=6$\\ \begin{tabular}{ |c|c|c|c| } \hline & p & $1 \bmod 6$ & $ 5 \bmod 6$\\ \hline & {prime orbits} & split & $(1,5),(2,4),(3)$ \\ \hline a & signature & & \\ \hline $(1,1,4)$ & $(1,1,0,0,0)$ & $ord^2$ & $ ss^2$ \\ \hline $(1,2,3)$ & $(1,0,0,0,0)$ & $ord$ & $ ss$ \\ \hline \end{tabular} \end{center} \bigskip \begin{small} \begin{center} $m=7$\\ \begin{tabular}{ |c|c|c|c|c|c| } \hline & p & $1 \bmod 7$ & $2,4 \bmod 7$& $3,5 \bmod 7$ & $6 \bmod 7$\\ \hline & {prime orbits} & split & $(1,2,4),(3,5,6)$ & $(1,2,3,4,5,6)$ & $(1,6),(2,5),(3,4)$ \\ \hline a & signature & &&& \\ \hline (1,1,5) & (1,1,1,0,0,0) & $ord^3 $& (1/3,2/3)& $ss^3 $ & $ss^3$ \\ \hline (1,2,4) & (1,1,0,1,0,0) & $ord^3$ & $ord^3$& $ss^3$ & $ss^3$ \\ \hline \end{tabular} \end{center} \end{small} \bigskip \begin{small} \begin{center} $m=8$\\ \begin{tabular}{ |c|c|c|c|c|c| } \hline & p & $1 \bmod 8$ & $ 3 \bmod 8$& $ 5\bmod 8$ & $ 7 \bmod 8$\\ \hline & & & $(1,3),(2,6)$ & $(1,5),(3,7)$ & $(1,7),(2,6)$ \\ &{prime orbits}& split & $(5,7),(4)$ & $(2),(4),(6)$ & $(3,5),(4)$ \\ \hline a & signature & &&& \\ \hline (1,1,6) & (1,1,1,0,0,0,0) & $ord^3 $& $ord^2 \oplus ss$ & $ord 
\oplus ss^2$ & $ss^3$ \\ \hline (1,2,5) & (1,1,0,0,1,0,0) & $ord^3$ & $ss^3$& $ord^3$ & $ss^3$ \\ \hline (1,3,4) & (1,0,1,0,0,0,0) & $ ord^2$ & $ ord^2$& $ss^2$ & $ss^2$ \\ \hline \end{tabular} \end{center} \end{small} \bigskip \begin{center} \begin{small} $m=9$\\ \begin{tabular}{ |c|c|c|c|c|c| } \hline & p & $ 1 \bmod 9$ & $ 2,5 \bmod 9$& $4,7 \bmod 9$ & $8 \bmod 9$\\ \hline & & & $(1,2,4,8,7,5)$ & $(1,4,7),(2,8,5)$ & $(1,8),(2,7)$ \\ &{prime orbits}& split & $(3,6)$ & $(3),(6)$ & $(4,5),(3,6)$ \\ \hline a & signature & &&& \\ \hline (1,1,7) & (1,1,1,1,0,0,0,0) & $ord^4$ & $ss^4$& $(1/3,2/3) \oplus ord$ & $ss^4$ \\ \hline (1,2,6) & (1,1,0,0,1,0,0,0) & $ord^3$ & $ss^3$& $(1/3,2/3)$ & $ss^3$ \\ \hline (1,3,5) & (1,1,0,1,0,0,0,0) & $ord^3$ & $ss^3$& $(1/3,2/3)$ & $ss^3 $ \\ \hline \end{tabular} \end{small} \end{center} \bigskip \begin{center} $m=10$\\ \begin{tabular}{ |c|c|c|c|c| } \hline & p & $ 1 \bmod 10$ & $3,7 \bmod 10$ & $ 9 \bmod 10$\\ \hline & & & $(1,3,9,7)$ & $(1,9),(2,8)$ \\ &{prime orbits} & split & $(2,6,8,4),(5)$ & $(3,7),(4,6),(5)$ \\ \hline a & signature & && \\ \hline (1,1,8) & (1,1,1,1,0,0,0,0,0) & $ord^4$ & $ss^4$ & $ss^4$\\ \hline (1,2,7) & (1,1,1,0,0,1,0,0,0) & $ord^4$ & $ss^4$ & $ss^4$\\ \hline (1,4,5) & (1,0,1,0,0,0,0,0,0) & $ord^2$ & $ss^2$ & $ss^2$\\ \hline \end{tabular} \end{center} \bigskip \begin{small} \begin{center} $m=11$\\ \begin{tabular}{ |c|c|c|c|c|c| } \hline & p & $ 1 \bmod 11$ & $2,6,7,8 \bmod 11$ & $ 3,4,5,9 \bmod 11$ & $ 10 \bmod 11$\\ \hline & & & & $(1,3,4,5,9)$ & $(1,10),(2,9)$ \\ &{prime orbits} & split & inert & $(2,6,7,8,10)$ & $(3,8),(4,7),(5,6)$ \\ \hline a & signature & &&& \\ \hline (1,1,9) & (1,1,1,1,1,0,0,0,0,0) & $ord^5$ & $ss^5$ & $(1/5,4/5)$& $ss^5$\\ \hline (1,2,8) & (1,1,1,0,0,1,1,0,0,0) & $ord^5$ & $ss^5$ & $(2/5,3/5)$& $ss^5$\\ \hline \end{tabular} \end{center} \end{small} \bigskip \begin{center} \begin{small} $m=12$\\ \begin{tabular}{ |c|c|c|c|c|c| } \hline & p & $1 \bmod 12$ & $5 \bmod 12$ & $ 7 \bmod 12$& $ 
11 \bmod 12$\\ \hline & & & &$(1,7),(3,9)$ & $(1,11),(4,8)$ \\ & & & $(1,5),(2,10),(3)$ & $(2),(4),(5,11)$ & $(3,9),(2,10)$ \\ & {prime orbits}& split & $(4,8),(6),(7,11),(9)$ & $(6),(8),(10)$ & $(5,7),(6)$ \\ \hline a & signature & &&& \\ \hline (1,1,10) & (1,1,1,1,1,0,0,0,0,0,0) & $ord^5$ & $ord^3 \oplus ss^2$ & $ord^2 \oplus ss^3$ & $ss^5$\\ \hline (1,2,9) & (1,1,1,0,0,0,1,0,0,0,0) & $ord^4$ & $ord \oplus ss^3$ & $ord^3 \oplus ss$ & $ss^4$\\ \hline (1,3,8) & (1,1,0,0,1,0,0,0,0,0,0) & $ord^3$ & $ord^2 \oplus ss$ & $ord \oplus ss^2$ & $ss^3$\\ \hline (1,4,7) & (1,1,0,1,0,0,1,0,0,0,0) & $ord^4$ & $ss^4$ &$ord^4$ & $ss^4$\\ \hline (1,5,6) & (1,0,1,0,1,0,0,0,0,0,0) & $ord^3$ & $ord^3$ & $ss^3$ & $ss^3$\\ \hline \end{tabular} \end{small} \end{center} \section{Applications} \label{Sapplication} In the previous section, we computed the Newton polygons of cyclic degree $m$ covers of the projective line branched at $3$ points. We carried out the calculation of the Newton polygon for all inertia types that arise when $m \leq 23$. Many of these were not previously known to occur for the Jacobian of a smooth curve. We collect a list of the most interesting of these Newton polygons, restricting to the ones with $p$-rank $0$ and $4 \leq g \leq 11$. In the third part of the section, we deduce some results for arbitrarily large genera $g$. By \cite[Theorem 2.1]{VdGVdV}, if $p=2$ and $g \in \NN$, then there exists a supersingular curve of genus $g$ defined over $\overline{\FF}_2$ (or even over $\FF_2$). For this reason, we restrict to the case that $p$ is odd in the following result. \begin{theorem} \label{Tapp1} (Theorem \ref{Tintro1}) Let $p$ be odd. 
There exists a smooth supersingular curve of genus $g$ defined over $\overline{\FF}_p$ in the following cases: \begin{center} \begin{tabular}{ |c|c|c| } \hline genus & congruence & where \\ \hline 4 & $p \equiv 2 \bmod 3$ & $m=9$, $a=(1,1,7)$\\ & $p \equiv 2,3,4 \bmod 5$ & $m=10$, $a=(1,1,8)$ \\ \hline 5 & $p \equiv 2,6,7,8,10 \bmod 11$ & $m=11$, any $a$\\ \hline 6 & $p \not\equiv 1,3,9 \bmod 13$ & $m=13$, any $a$\\ & $p \equiv 3,5,6 \bmod 7$ & $m=14$, $a=(1,1,12)$\\ \hline 7 & $p \equiv 14 \bmod 15$ & $m=15$, $a=(1,1,13)$\\ & $p \equiv 15 \bmod 16$ & $m=16$, $a=(1,1,14)$\\ \hline 8 & $p \not \equiv 1 \bmod 17$ & $m=17$, any $a$\\ \hline 9 & $p \equiv 2,3,8,10,12,13,14,15,18 \bmod 19$ & $m=19$, any $a$ \\ \hline 10 & $p \equiv 5,17,20 \bmod 21$ & $m=21$, $a=(1,1,19)$\\ \hline 11 & $p \equiv 5,7,10,11,14,15,17,19,20,21,22 \bmod 23$& $m=23$, any $a$\\ \hline \end{tabular} \end{center} \end{theorem} \begin{proof} We compute the table using Corollary \ref{ST_ss} and Remark \ref{pinert}. The genus is determined from \eqref{Egenus}. For example, for $g=5$, the congruence classes of $p\bmod 11$ are the quadratic non-residues modulo $11$. A prime above $p$ is inert in $K_{11}/K_{11}^+$ if and only if $p$ is a quadratic non-residue modulo $11$. (The same holds for $g=9$ and $m=19$, and also $g=11$ and $m=23$). For example, for $g=4$, the condition $p \equiv 3,7,9 \bmod 10$ covers all the cases $p \equiv 2,3,4 \bmod 5$ since $p$ is odd. For $p\equiv 3,7\bmod 10$, $p$ is inert in $K_5$. For $p\equiv -1\bmod 10$, each orbit $\co\in\CO'$ is self-dual. \end{proof} \begin{remark} \label{Runlikely} The existence of a smooth supersingular curve of genus $9$, $10$ or $11$ is especially interesting for the following reason. The dimension of $\CA_g$ is $(g+1)g/2$ and the dimension of the supersingular locus in $\CA_g$ is $\lfloor g^2/4 \rfloor$. Thus the supersingular locus has codimension $25$ in $\CA_9$, $30$ in $\CA_{10}$ and $36$ in $\CA_{11}$.
The dimension of $\CM_g$ is $3g-3$ for $g \geq 2$. Since $\dim(\CM_9) = 24$, $\dim(\CM_{10})=27$, and $\dim(\CM_{11})=30$, the supersingular locus and open Torelli locus form an unlikely intersection in $\CA_9$, $\CA_{10}$ and $\CA_{11}$. See \cite[Section 5.3]{priesCurrent} for more explanation. \end{remark} \begin{remark} In future work, when $5 \leq g \leq 9$, we prove that there exists a smooth supersingular curve of genus $g$ defined over $\overline{\FF}_p$ for sufficiently large $p$ satisfying other congruence conditions. \end{remark} In the next result, we collect some other new examples of Newton polygons of smooth curves with $p$-rank $0$. \begin{theorem} \label{TotherNP} There exists a smooth curve of genus $g$ defined over $\overline{\FF}_p$ with the given Newton polygon of $p$-rank $0$ in the following cases: \begin{center} \begin{tabular}{ |c|c|c|c| } \hline genus & Newton polygon & congruence & where \\ \hline 5 & $(1/5,4/5)$ & $3,4,5,9 \bmod 11$ & $m=11$, $a=(1,1,9)$ \\ \hline 5 & $(2/5,3/5)$ & $3,4,5,9 \bmod 11$ & $m=11$, $a=(1,2,8)$ \\ \hline 6 & $(1/3,2/3)^2$ & $3,9 \bmod 13$ & $m=13$, $a=(1,2,10)$\\ & & $9,11 \bmod 14$ & $m=14$, $a=(1,1,12)$\\ \hline 7 & $(1/4,3/4) \oplus ss^3$ & $2,8 \bmod 15$ & $m=15$, $a=(1,1,13)$ \\ \hline 9 & $(4/9,5/9)$ & $4,5,6,9,16,17 \bmod 19$ & $m=19$, $a=(1,2,16)$ \\ \hline 9 & $(1/3,2/3)^3$ & $4,5,6,9,16,17 \bmod 19$ & $m=19$, $a=(1,1,17)$ \\ & & $7,11 \bmod 19$ & $m=19$, $a=(1,2,16)$ \\ \hline 10 & $(1/3,2/3)^3 \oplus ss$ & $2 \bmod 21$ & $m=21$, $a=(1,1,19)$\\ \hline 11 & $(1/11, 10/11)$ & $2,3,4,6,8,9,12,13,16,18 \bmod 23$ & $m=23$, $a=(1,4,18)$ \\ \hline 11 & $(4/11, 7/11)$ & $2,3,4,6,8,9,12,13,16,18 \bmod 23$ & $m=23$, $a=(1,1,21)$ \\ \hline \end{tabular} \end{center} \end{theorem} \begin{proof} We compute the table using the Shimura--Taniyama method, as stated in Proposition \ref{prop_ST}. The genus is determined from \eqref{Egenus}, and the signature type from \eqref{DMeqn}.
For example, for $m=15$ and $a=(1,1,13)$, the curve $C_{(m,a)}$ has genus 7 and signature type $(1,1,\dots,1,0,0,\dots,0)$. Let $p$ be a prime such that $p\equiv 2\bmod 15$. The congruence class of $p$ has order $2$ modulo $3$ and order $4$ modulo $5$. Hence the prime $p$ is inert in $K_3$ and in $K_5$, and splits as a product of two primes in $K_{15}$. We write $\p_3$ (resp.\ $\p_5$, and $\p_{15},{\p}'_{15}$) for the primes of $K_3$ (resp.\ $K_5$, and $K_{15}$) above $p$. Continuing this case, by Corollary \ref{ST_ss}, the Newton polygon of $\CJ[\p^\infty_3]$ (resp.\ $\CJ[\p^\infty_5]$) has slope 1/2, with multiplicity 1 (resp.\ 2). On the other hand, the two orbits in $(\ZZ/15\ZZ)^*$ are $\co=\langle 2\rangle=\{2,4,8,1\}$ and $\co'=7\langle 2\rangle=\{7,14,13,11\}$. (In particular, $\co'=\co^*$, hence $\p'_{15}={\p}^*_{15}$ in $K_{15}$.) Hence, $\alpha_\co= 3$ and $\alpha_{\co'}=1$. By Theorem \ref{thm_ST}, the Newton polygon of $\CJ[\p^\infty_{15}]$ (resp.\ $\CJ[{\p'}^\infty_{15}]$) has slope $3/4$ (resp.\ $1/4$). For example, for $m=21$ and $a=(1,1,19)$, the curve $C_{(m,a)}$ has genus 10 and signature type $(1,1,\dots,1,0,0,\dots,0)$. The congruence class of $p \equiv 2 \bmod 21$ has order $2$ modulo $3$ and order $3$ modulo $7$. Hence the prime $p$ is inert in $K_3$, and splits as a product of two primes in $K_7$ and in $K_{21}$. We write $\p_3$ (resp.\ $\p_7,\p'_7$, and $\p_{21},{\p}'_{21}$) for the primes of $K_3$ (resp.\ $K_7$, and $K_{21}$) above $p$. Continuing this case, by Corollary \ref{ST_ss}, the Newton polygon of $\CJ[\p^\infty_3]$ has slope 1/2, with multiplicity 1. The two orbits in $(\ZZ/7\ZZ)^*$ are $\co_7=\langle 2\rangle=\{2,4,1\}$ and $\co_7'=3\langle 2\rangle=\{3,6,5\}$. Hence, $\alpha_{\co_7}= 2$ and $\alpha_{\co_7'}=1$. By Theorem \ref{thm_ST}, the Newton polygon of $\CJ[\p^\infty_{7}]$ (resp.\ $\CJ[{\p'}^\infty_{7}]$) has slope $2/3$ (resp.\ $1/3$).
Similarly, the two orbits in $(\ZZ/21\ZZ)^*$ are $\co_{21}=\langle 2\rangle=\{2,4,8,16,11,1\}$ and $\co_{21}'=5\langle 2\rangle=\{5,10,20,19,17,13\}$. Hence, $\alpha_{\co_{21}}= 4$ and $\alpha_{\co_{21}'}=2$. Thus the Newton polygon of $\CJ[\p^\infty_{21}]$ (resp.\ $\CJ[{\p'}^\infty_{21}]$) has slope $4/6=2/3$ (resp.\ $2/6=1/3$). \end{proof} Consider the $p$-divisible group $G_{1, d-1} \oplus G_{d-1,1}$ with slopes $1/d, (d-1)/d$. \begin{theorem} \label{Tapp2} (Theorem \ref{Tintro2}) For the following values of $d$ and $p$ and for all $g \geq d$, there exists a smooth curve of genus $g$ defined over $\overline{\FF}_p$ whose Jacobian has $p$-divisible group isogenous to $(G_{1, d-1} \oplus G_{d-1,1}) \oplus (G_{0,1} \oplus G_{1,0})^{g-d}$: \begin{enumerate} \item $d=5$ for all $p \equiv 3,4,5,9 \bmod 11$; \item $d=11$ for all $p \equiv 2,3,4,6,8,9,12,13,16,18 \bmod 23$. \end{enumerate} \end{theorem} \begin{proof} By Example \ref{Esgg5} (resp.\ Example \ref{Esgg11}) for $d=5$ (resp.\ $d=11$), under this congruence condition on $p$, there exists a smooth projective curve of genus $g=d$ defined over $\overline{\FF}_p$ whose $p$-divisible group is isogenous to $G_{1, d-1} \oplus G_{d-1,1}$. Note that the Newton polygon for $G_{1, d-1} \oplus G_{d-1,1}$ is the lowest Newton polygon in dimension $d$ with $p$-rank $0$. Thus there is at least one component of the $p$-rank $0$ stratum of $\CM_d$ such that the generic geometric point of this component represents a curve whose Jacobian has $p$-divisible group isogenous to $G_{1, d-1} \oplus G_{d-1,1}$. The result is then immediate from \cite[Corollary~6.4]{priesCurrent}. \end{proof} \bibliographystyle{amsplain}
\section{The Early Years: Computer Science Meets Geometry, Algebra and Topology} Our current view of persistent homology can be traced back to work of Patrizio Frosini (1992) on size functions \cite{frosini1992measuring}, and of Vanessa Robins (1999) \cite{robins1999towards} on using experimental data to infer the topology of attractors in dynamical systems. Both approaches rely on singular homology as a shape descriptor, which leads to what is known today as the \emph{``homology inference problem''}: Given a finite set $X$ (the data) sampled from/around a topological space $\mathbb{X}$ (e.g., the attractor), how can one infer the homology of $\mathbb{X}$ from $X$ with high confidence? See for instance \cite{niyogi2008finding} for the case when $\mathbb{X}$ is a compact Riemannian submanifold of Euclidean space, and $X \subset \mathbb{X}$ is sampled according to the intrinsic uniform distribution. From here on out it will be useful to think of $X$ and $\mathbb{X}$ as subspaces of a bounded metric space $(\mathbb{M}, \rho)$. In this case, one can formalize the statement ``$X$ approximates $\mathbb{X}$'' by saying that if $Z\subset \mathbb{M}$, $\epsilon \geq 0$, and $Z^{(\epsilon)} := \{x\in \mathbb{M} : \rho(x, Z) \leq \epsilon\}$, then the Hausdorff distance \[ d_H (X,\mathbb{X}) := \inf \left\{\epsilon > 0 : X \subset \mathbb{X}^{(\epsilon)} \mbox{ and } \mathbb{X} \subset X^{(\epsilon)}\right\} \] is small. The goal is then to approximate the topology of $\mathbb{X}$ by that of $X^{(\epsilon)}$. Below in Figure \ref{fig:OffSetFiltration} we illustrate the evolution of $X^{(\epsilon)}$ as $\epsilon$ increases. \begin{figure}[htb!] \centering \includegraphics[width= \textwidth]{filtration.png} \caption{Some examples of $X^{(\epsilon)}$ for $X\subset \mathbb{R}^2$ sampled around the unit circle, and $\epsilon$ values $0 < \epsilon_1 < \epsilon_2< \epsilon_3$. 
} \label{fig:OffSetFiltration} \end{figure} In order to capture the multiscale nature of $\mathcal{X} = \{X^{(\epsilon)}\}_{\epsilon }$, and deal with the instability of topological features in $X^{(\epsilon)}$ as $\epsilon$ changes, Frosini and Robins introduced (independently) the idea of \emph{homological persistence}: for $\epsilon, \delta \geq 0$ let \[ \iota^{\epsilon,\delta}: X^{(\epsilon)} \hookrightarrow X^{(\epsilon + \delta)} \] be the inclusion, and consider the induced linear map in homology with coefficients in a field $\mathbb{F}$ \[ \iota^{\epsilon,\delta}_* : H_n\left(X^{(\epsilon)} ; \mathbb{F}\right) \longrightarrow H_n\left(X^{(\epsilon + \delta)}; \mathbb{F} \right) \] The image of $\iota^{\epsilon,\delta}_*$ is the $\delta$-persistent $n$-th homology group of the filtered space $\mathcal{X}$ at $\epsilon$, denoted $H_n^{\epsilon,\delta}(\mathcal{X}; \mathbb{F})$; and $\mathsf{rank}\left(\iota_*^{\epsilon,\delta}\right)$ is the persistent Betti number $\beta^{\epsilon, \delta}_n(\mathcal{X}; \mathbb{F}) $. The design of algorithms to efficiently compute/approximate these integers is of course predicated on first replacing the spaces $X^{(\epsilon)}$ by finite, combinatorial models of their topology. Fortunately there is a vast literature on how to do this. Take for instance the Vietoris-Rips complex, first introduced by Leopold Vietoris in the nineteen-twenties in an attempt to define a homology theory for general metric spaces \cite{vietoris1927hoheren}. It is defined, for $Z\subset \mathbb{M}$ and $\epsilon \geq 0$, as the abstract simplicial complex \[ R_\epsilon(Z) := \big\{ \{z_0,\ldots, z_k\} \subset Z: \rho(z_i, z_j) \leq \epsilon \; \mbox{ for all }\; 0 \leq i,j \leq k\big\} \] Below in Figure \ref{fig:RipsFiltration} we show an example of how $R_\epsilon(Z)$ evolves as $\epsilon$ increases, for $Z\subset \mathbb{R}^2$ sampled around the unit circle, and for $\epsilon$ values $0< \epsilon_1 < \epsilon_2 < \epsilon_3$. 
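A brute-force construction makes the definition of $R_\epsilon(Z)$ concrete. The following Python sketch is purely illustrative (it enumerates all vertex subsets up to a dimension cap, which is exponential in $|Z|$ and nothing like the optimized constructions in actual TDA software):

```python
# Brute-force Vietoris-Rips complex: a subset of Z spans a simplex
# exactly when all of its pairwise distances are at most eps.
from itertools import combinations
import math

def rips_complex(points, eps, max_dim=2):
    simplices = []
    for k in range(1, max_dim + 2):          # k vertices = a (k-1)-simplex
        for sigma in combinations(range(len(points)), k):
            if all(math.dist(points[i], points[j]) <= eps
                   for i, j in combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices

# Four points on a unit square: at eps = 1 the complex is a 4-cycle
# (4 vertices, 4 edges, no triangles), a combinatorial circle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
cx = rips_complex(square, 1.0)
```

Since any simplex of $R_\epsilon(Z)$ is also a simplex of $R_{\epsilon+\delta}(Z)$, running this for increasing $\epsilon$ produces exactly the nested family discussed next.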
\begin{figure}[htb!] \centering \includegraphics[width= \textwidth]{rips_filtration.png} \caption{Some examples of the Rips complex, for points sampled around the unit circle in $\mathbb{R}^2$.} \label{fig:RipsFiltration} \end{figure} Notice that $R_\epsilon(Z) \subset R_{\epsilon + \delta} (Z)$ whenever $\delta \geq 0$; in other words, $\mathcal{R} (Z) = \{R_\epsilon(Z)\}_{\epsilon}$ is a filtered simplicial complex. Janko Latschev shows in \cite{latschev2001vietoris} that when $\mathbb{X}$ is a closed Riemannian manifold, there is an $\epsilon_0 > 0$, so that if $ 0 < \epsilon \leq \epsilon_0$, then there exists $\delta > 0 $ for which $d_H(X, \mathbb{X}) < \delta$ implies that the geometric realization of $R_\epsilon(X)$ is homotopy equivalent to $\mathbb{X}$. Discarding the manifold hypothesis --- which is not expected to hold in general applications --- highlights the value of persistence as a homology inference tool. Indeed, in \cite{chazal2008towards} Chazal, Oudot and Yan show that if $\mathbb{X} \subset \mathbb{R}^d$ is compact with positive \emph{weak feature size}\footnote{this is a notion of how complex the embedding of $\mathbb{X}$ into Euclidean space is.} \cite{chazal2005weak}, and $X \subset \mathbb{R}^d$ is finite with $d_H(X,\mathbb{X})$ small, then there exists a range for $\epsilon > 0$ where $H^{\epsilon, 3\epsilon}_n(\mathcal{R}(X) ; \mathbb{F}) $ is isomorphic to $ H_n\left(\mathbb{X}^{(\epsilon)}; \mathbb{F}\right)$. It is worth noting that while these theorems deal with small $\epsilon$, far less is known about the large-scale regime. Indeed, aside from trivial examples, the circle is (essentially) the only space $Z$ for which the homotopy type of $R_\epsilon(Z)$ is known explicitly for all $\epsilon > 0 $ \cite{adamaszek2017vietoris, adamaszek2017vietorisEllipses}. 
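In dimension $0$, persistence along the filtration $\mathcal{R}(Z)$ can already be computed by elementary means: components of $R_\epsilon(Z)$ merge as $\epsilon$ grows, and union-find tracks the merges. A minimal sketch (again illustrative, not one of the optimized algorithms from the literature; all classes are born at $\epsilon=0$, so which of two merging classes dies is a convention here):

```python
# 0-dimensional persistence of the Rips filtration via union-find:
# process edges by increasing length; each merge of two components
# kills one 0-dimensional class, producing the interval [0, eps).
import math
from itertools import combinations

def h0_barcode(points):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    bars = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[max(ri, rj)] = min(ri, rj)
            bars.append((0.0, eps))          # one component dies at eps
    bars += [(0.0, math.inf)] * (len(points) - len(bars))
    return bars
```

The finite death times are exactly the edge lengths of a minimal spanning tree of the point set; components that never merge contribute intervals $[0,\infty)$.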
The efficient computation of the persistent Betti numbers of a finite filtered simplicial complex $\mathcal{K} = \{K_0 \subset K_1 \subset \cdots \subset K_J = K \}$ was addressed by Edelsbrunner, Letscher and Zomorodian in (2000) \cite{edelsbrunner2000topological}, for subcomplexes of a triangulated 3-sphere and homology with coefficients in $\mathbb{F}_2 = \{0,1\}$. This restriction was a tradeoff between generality and speed: the algorithm was based on previous work of Delfinado and Edelsbrunner \cite{delfinado1995incremental} to compute (standard) Betti numbers incrementally in time $O(N \alpha(N))$, where $N$ is the number of simplices of $K$ and $\alpha$ is the inverse of the Ackermann function \cite{cormen2009introduction}. Since the Ackermann function grows very rapidly, its inverse $\alpha$ grows very slowly. Though limited in generality, the approach by Delfinado and Edelsbrunner highlights the following idea: If $K_{j}$ is obtained from $ K_{j-1} $ by adding a single simplex $\tau\in K$, and $H_n(K_{j-1}; \mathbb{F}) \longrightarrow H_n(K_j;\mathbb{F})$ is not an isomorphism, then either $\tau$ is an $n$-simplex creating a new homology class, or it is an $(n+1)$-simplex eliminating a class from $K_{j-1}$. Thus, simplices in $K$ that either create or annihilate a given persistent homology class can be put in pairs $(\tau,\sigma)$ of the form creator-annihilator. These pairings are in fact a byproduct of the incremental algorithm of Delfinado and Edelsbrunner. The \emph{barcode} is also introduced in \cite{edelsbrunner2000topological} as a visualization tool for persistence: each pair $(\tau, \sigma)$ yields an interval $[j, \ell)$, where $j$ (birth time) is the smallest index so that $\tau \in K_j$, and $\ell > j$ (death time) is the smallest index for which $\sigma \in K_\ell$. Thus, long intervals indicate stable homological features throughout $\mathcal{K}$, while short ones reflect topological noise.
The resulting multiset of intervals (as repetitions are allowed) is called a barcode. The notation is $\mathsf{bcd}_n(\mathcal{K})$. Moreover, the barcode subsumes the persistent Betti numbers, since $\beta_n^{\epsilon, \delta}(\mathcal{K};\mathbb{F})$ is the number of intervals $[j,\ell) \in \mathsf{bcd}_n(\mathcal{K})$ with $j \leq \epsilon$ and $\ell > \epsilon +\delta $. Below in Figure \ref{fig:FilteredComplex} we show an example of a filtered simplicial complex, the simplicial pairings $(\tau, \sigma)$, and the resulting barcodes. \begin{figure}[htb!] \centering \includegraphics[width=\textwidth]{simplicial_filtration.png} \caption{A filtered simplicial complex $\mathcal{K} = \{K_0 \subset \cdots \subset K_8\}$, along with the simplicial pairings $(\tau, \sigma)$, and the resulting barcodes for homology in dimensions 0 (orange) and 1 (green).} \label{fig:FilteredComplex} \end{figure} \section{Here Comes the Algebra} The developments up to this point can be thought of as the computational and geometric era of persistent homology. Around 2005 the focus started to shift towards algebra. Zomorodian and Carlsson introduced in \cite{zomorodian2005computing} the persistent homology \[ PH_n(\mathcal{K}; \mathbb{F}) := \bigoplus_{j \in \mathbb{Z} } H_n(K_j; \mathbb{F}) \;\;\; , \;\;\; \mathcal{K} = \{K_j\}_{j \in \mathbb{Z}} \] of a filtered complex $\mathcal{K}$, as the graded module over $\mathbb{F}[t]$ with left multiplication by $t$ on $j$-homogeneous elements given by the linear map \[\phi_j : H_n(K_j ; \mathbb{F}) \longrightarrow H_n(K_{j+1};\mathbb{F})\] induced by the inclusion $K_j \hookrightarrow K_{j+1}$. Since then, $PH_n(\mathcal{K};\mathbb{F})$ is referred to in the literature as a persistence module. More generally \cite{ bdss:1, bubenik2014categorification}, let $\mathsf{J}$ and $\mathsf{C}$ be categories with $\mathsf{J}$ small (i.e., so that its objects form a set). 
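Concretely, the persistent Betti number $\beta^{\epsilon,\delta}_n$ is the rank of the composition of the structure maps $\phi_j$ between the two indices. A toy Python check on a made-up persistence module $\mathbb{Q}\to\mathbb{Q}^2\to\mathbb{Q}$ (the module and its maps are invented for illustration; its barcode is $\{[0,2),\,[1,\infty)\}$):

```python
# Persistent Betti numbers as ranks of composed structure maps phi_k,
# with exact rank computation over Q via Gaussian elimination.
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def persistent_betti(phis, i, j):
    """Rank of phi_{j-1} o ... o phi_i, for i < j (phis[k]: M(k) -> M(k+1))."""
    A = phis[i]
    for k in range(i + 1, j):
        A = matmul(phis[k], A)
    return rank(A)

# Toy module Q -> Q^2 -> Q with barcode {[0,2), [1,inf)}:
phis = [[[1], [0]],      # phi_0: e |-> (e, 0)
        [[0, 1]]]        # phi_1: (x, y) |-> y
```

Here $\beta^{0,1} = 1$ (the class born at $0$ survives to time $1$) while the composite $\phi_1\circ\phi_0$ has rank $0$, since that class dies by time $2$.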
The category of $\mathsf{J}$-indexed persistence objects in $\mathsf{C}$ is defined as the functor category $\mathsf{Fun}(\mathsf{J}, \mathsf{C})$; its objects are functors $F:\mathsf{J}\rightarrow \mathsf{C}$, and its morphisms are natural transformations $\varphi: F_1 \Rightarrow F_2$. The typical indexing category comes from having a partially ordered set $(P,\preceq)$, and letting $\underline{P}$ denote the category with objects $\mathsf{Obj}(\underline{P}) = P$, and a unique morphism from $p_1$ to $p_2$ whenever $p_1 \preceq p_2$. We'll abuse notation and denote this morphism by $p_1 \preceq p_2$, instead of the categorical notation $p_1 \rightarrow p_2$. It can be readily checked that if $\mathsf{Mod}_R$ denotes the category of (left) modules over a commutative ring $R$ with unity, and $g\mathsf{Mod}_{R[t]}$ is the category of $\mathbb{Z}$-graded modules over the polynomial ring $R[t]$, then \begin{equation}\label{eq:PerMod_CatEquiv} \begin{array}{ccl} \mathsf{Fun}(\underline{\mathbb{Z}}, \mathsf{Mod}_R) & \longrightarrow & g\mathsf{Mod}_{R[t]} \\[.2cm] M \;\;, \;\; \varphi & \mapsto & \bigoplus\limits_{j\in \mathbb{Z}}M(j) \;\; , \;\; \bigoplus\limits_{j\in \mathbb{Z}}\varphi_j \end{array} \end{equation} is an equivalence of categories. On the graded $R[t]$-module side, multiplication by $t$ on $j$-homogeneous elements is given by $M( j \leq j+1) : M(j) \longrightarrow M(j+1)$. This equivalence shows why/how the evolution of homological features in a $\mathbb{Z}$-filtered complex $\mathcal{K}$, can be encoded as the algebraic structure of the persistence module $PH_n(\mathcal{K};\mathbb{F})$. \subsection{Persistence Modules and Barcodes} When $PH_n(\mathcal{K};\mathbb{F})$ is finitely generated as an $\mathbb{F}[t]$-module --- e.g. 
if $K_j = \emptyset$ for $j < 0$ and $\bigcup\limits_{j \in \mathbb{Z}} K_j$ is finite --- then one has a graded isomorphism \begin{equation}\label{eq:PerMod_Decomp} PH_n(\mathcal{K};\mathbb{F}) \cong \left( \bigoplus_{q=1}^{Q} t^{n_q} \cdot \mathbb{F}[t] \right) \oplus \left( \bigoplus_{\ell=1}^{L} \left(t^{m_\ell} \cdot \mathbb{F}[t]\right)/(t^{ m_\ell + d_\ell}) \right) \end{equation} for $n_q, m_\ell \in \mathbb{Z}$ and $d_\ell \in \mathbb{N} $ \cite{webb1985decomposition}. The decomposition (\ref{eq:PerMod_Decomp}) is unique up to permutations, and thus the intervals \begin{align*} [n_1,\infty),[n_2,\infty), \ldots, & [n_Q,\infty), \\ &[m_1, m_1 + d_1), [m_2,m_2 + d_2) ,\ldots, [m_L , m_L + d_L) \end{align*} provide a complete discrete invariant for (i.e., they uniquely determine) the $\mathbb{F}[t]$-isomorphism type of $PH_n(\mathcal{K};\mathbb{F})$. Moreover, this multiset recovers the barcode $\mathsf{bcd}_n(\mathcal{K})$ of Edelsbrunner, Letscher and Zomorodian \cite{edelsbrunner2000topological}. Carlsson and Zomorodian also observe that $PH_n(\mathcal{K};\mathbb{F})$ is in fact the homology of an appropriate chain complex of graded $\mathbb{F}[t]$-modules. Hence, a graded version of the Smith Normal Form \cite{dumas2003computing} computes the barcode decomposition (\ref{eq:PerMod_Decomp}), providing a general-purpose algorithm. This opened the floodgates; barcodes could now be computed as a linear algebra problem for any finite filtered simplicial complex $ K_0 \subset \cdots \subset K_J = K $, over any (in practice finite) field of coefficients, and up to any homological dimension. The resulting matrix reduction algorithm, implemented initially in the JPlex library (now javaPlex) \cite{adams2014javaplex}, runs in polynomial time: its worst-case time complexity is $O(N^3)$, where $N$ is the number of simplices of $K$. In fact Dmitriy Morozov exhibits in \cite{morozov2005persistence} a finite filtered complex of dimension 2, attaining the worst case.
This shows that the cubic bound is tight for general barcode computations. While this sounds potentially slow, especially compared to the time complexity $O(N\cdot \alpha(N))$ of the sequential algorithm, Morozov's example should be contrasted with filtrations arising from applications. In practice the matrices to be reduced are sparse, and computing their associated barcode decomposition takes at worst matrix-multiplication time $O(N ^{\omega})$ \cite{milosavljevic2011zigzag}, where $\omega \approx 2.373 $ \cite{williams2012multiplying}. Over the last ten years or so there has been a flurry of activity towards better implementations and faster persistent homology computations. A recent survey \cite{otter2017roadmap} compares several leading open source libraries for computing persistent homology. All of them implement different optimizations, exploiting new theoretical developments and novel heuristics/approximations. For instance, one improvement is to first simplify the input filtered complex without changing its persistent homology (e.g., using discrete Morse theory \cite{mischaikow2013morse}); another is to compute persistent cohomology, since it is more efficient than persistent homology and gives the same answer \cite{de2011dualities}. The shift towards algebra has had other important consequences; specifically: 1) a better understanding of stability for barcodes, and 2) several theorems describing how the choice of categories $\mathsf{J} $ and $\mathsf{C}$ impacts the computability of isomorphism invariants for objects in $\mathsf{Fun}(\mathsf{J}, \mathsf{C})$. Let me say a few words about stability. \subsection{The Stability of Persistence} Let $\mathbb{X}$ be a triangulable topological space (e.g., a smooth manifold) and let $f: \mathbb{X} \longrightarrow \mathbb{R}$ be a tame function (this is a generalization of being Morse).
The prototypical example in TDA arises from a compact set $X\subset \mathbb{R}^d$ by letting $f_X : \mathbb{R}^d \longrightarrow \mathbb{R}$ be $$f_X(y) = \inf\limits_{x\in X} \|x - y\|.$$ Thus $f_X^{-1}(-\infty, \epsilon] = X^{(\epsilon)}$. Given $f: \mathbb{X} \longrightarrow \mathbb{R}$, let $\mathsf{bcd}_n(f)$ denote the barcode for the $n$-th persistent homology of $\left\{f^{-1}(-\infty,\epsilon]\right\}_{\epsilon\in \mathbb{R}}$. Drawing inspiration from Morse theory, Cohen-Steiner, Edelsbrunner and Harer introduced in 2007 \cite{cohen2007stability} two foundational ideas: (1) the bottleneck distance $d_B(\,\cdot\, ,\,\cdot\,)$ between barcodes, and (2) the stability theorem asserting that for tame $f,g : \mathbb{X} \longrightarrow \mathbb{R}$ one has\footnote{A similar result was established in \cite{d2003optimal} for $n=0$.} \[ d_B(\mathsf{bcd}_n(f), \mathsf{bcd}_n(g)) \leq \|f - g\|_\infty. \] In particular, if $X, Y \subset \mathbb{R}^d$ are compact and $\mathcal{X} = \left\{X^{(\epsilon)}\right\}_\epsilon$, $\mathcal{Y}= \left\{Y^{(\epsilon)}\right\}_\epsilon$, then $d_B(\mathsf{bcd}_n(\mathcal{X}), \mathsf{bcd}_n(\mathcal{Y})) \leq d_H(X, Y)$. This inequality implies that slight changes to the input data change the barcodes only slightly, which is key for applications where (Hausdorff) noise plays a role. Towards the end of 2008 Chazal et al.\ solidified the idea of stability with the introduction of interleavings for $\mathbb{R}$-indexed persistence modules \cite{chazal2009proximity}. The construction is as follows. For $\delta \geq 0$ let $T_\delta : \underline{\mathbb{R}} \longrightarrow \underline{\mathbb{R}} $ be the translation functor $T_\delta(\epsilon) = \epsilon + \delta$.
A $\delta$-interleaving between two persistence vector spaces $V,W:\underline{\mathbb{R}}\longrightarrow \mathsf{Mod}_\mathbb{F}$ is a pair $(\varphi,\psi)$ of natural transformations \[ \varphi: V \Rightarrow W\circ T_{\delta} \;\;\;\;\;\;\; \mbox{ and } \;\;\;\;\;\;\; \psi: W \Rightarrow V\circ T_\delta \] so that $\psi_{\epsilon + \delta} \circ \varphi_\epsilon = V(\epsilon \leq \epsilon + 2\delta)$ and $\varphi_{\epsilon + \delta} \circ \psi_\epsilon = W(\epsilon \leq \epsilon + 2\delta)$ for all $\epsilon \in \mathbb{R}$. The interleaving distance between $V$ and $W$, denoted $d_I(V,W)$, is defined as the infimum over all $\delta \geq 0$ for which $V$ and $W$ are $\delta$-interleaved, and as $\infty$ if no interleaving exists. It readily follows that $d_I$ is an extended (since infinity can be a value) pseudometric on $\mathsf{Fun}(\underline{\mathbb{R}}, \mathsf{Mod}_\mathbb{F})$, and that $d_I(V,W) = 0$ whenever $V\cong W$. The converse, however, is false in general (more on this later). Chazal et al.\ \cite{chazal2009proximity} show that if $V: \underline{\mathbb{R}} \longrightarrow \mathsf{Mod}_\mathbb{F}$ satisfies $\mathsf{rank}\big( V(\epsilon < \epsilon')\big) < \infty$ for all pairs $\epsilon < \epsilon'$ (this is called being $\mathsf{q}$-tame), then $V$ has a well-defined barcode $\mathsf{bcd}(V) $ (see \cite{crawley2015decomposition} for a shorter proof when $\mathsf{dim}_\mathbb{F} V(\epsilon) < \infty$ for all $\epsilon$; this is called being pointwise finite). Moreover, if $V,W$ are $\mathsf{q}$-tame, then one has the algebraic stability theorem $d_B(\mathsf{bcd}(V), \mathsf{bcd}(W)) \leq d_I(V,W)$. This turns out to be an equality: \[ d_B(\mathsf{bcd}(V), \mathsf{bcd}(W)) = d_I(V,W), \] which nowadays is referred to as the Isometry Theorem; the first known proof is due to Lesnick \cite{lesnick2015theory}.
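To make the bottleneck distance concrete, here is a minimal brute-force sketch (my own illustration, not taken from the references above): intervals of one barcode are matched to intervals of the other or to the diagonal, matching an interval to the diagonal costs half its length, and $d_B$ is the smallest achievable maximum cost. This is only practical for very small barcodes; real implementations use geometric matching algorithms instead of enumerating permutations.

```python
from itertools import permutations

def _dist(I, J):
    # L-infinity distance between intervals (birth, death); death may be inf
    db = abs(I[0] - J[0])
    dd = 0.0 if I[1] == J[1] == float('inf') else abs(I[1] - J[1])
    return max(db, dd)

def _diag_cost(I):
    # cost of matching an interval to the diagonal: half its length
    return (I[1] - I[0]) / 2.0

def bottleneck(bcd1, bcd2):
    # pad each barcode with "diagonal" slots for the other's intervals,
    # then brute-force over all perfect matchings of the padded lists
    A = list(bcd1) + ['diag'] * len(bcd2)
    B = list(bcd2) + ['diag'] * len(bcd1)
    best = float('inf')
    for perm in permutations(range(len(B))):
        cost = 0.0
        for i, j in enumerate(perm):
            I, J = A[i], B[j]
            if I == 'diag' and J == 'diag':
                c = 0.0
            elif I == 'diag':
                c = _diag_cost(J)
            elif J == 'diag':
                c = _diag_cost(I)
            else:
                c = _dist(I, J)
            cost = max(cost, c)
        best = min(best, cost)
    return best
```

For instance, matching $[1,9)$ to $[0,10)$ costs $1$, which beats sending both intervals to the diagonal (cost $5$), so the bottleneck distance between these two singleton barcodes is $1$.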
As I said earlier, $d_I(V,W)$ can be zero for $V $ and $W$ nonisomorphic, and thus $\mathsf{bcd}(V)$ is not a complete invariant in the $\mathsf{q}$-tame $\mathbb{R}$-indexed case. This can be remedied as follows. Let $\mathsf{qFun}(\underline{\mathbb{R}}, \mathsf{Mod}_\mathbb{F})$ denote the full subcategory of $\mathsf{Fun}(\underline{\mathbb{R}},\mathsf{Mod}_\mathbb{F})$ comprised of $\mathsf{q}$-tame persistence modules. The ephemeral category $\mathsf{eFun}(\underline{\mathbb{R}}, \mathsf{Mod}_\mathbb{F})$ is the full subcategory of $\mathsf{qFun}(\underline{\mathbb{R}},\mathsf{Mod}_\mathbb{F})$ with objects $V : \underline{\mathbb{R}} \longrightarrow \mathsf{Mod}_\mathbb{F}$ satisfying $V(\epsilon < \epsilon' ) = 0$ for all $\epsilon < \epsilon'$. The observable category $\mathsf{oFun}(\underline{\mathbb{R}},\mathsf{Mod}_\mathbb{F})$ is the quotient category \[ \mathsf{qFun}(\underline{\mathbb{R}},\mathsf{Mod}_\mathbb{F})/ \mathsf{eFun}(\underline{\mathbb{R}},\mathsf{Mod}_\mathbb{F}). \] As shown by Chazal et al.\ in \cite{chazal2014observable}, $d_I$ descends to an extended metric on the observable category, and taking barcodes \[ \mathsf{bcd} : \big(\mathsf{oFun}(\underline{\mathbb{R}}, \mathsf{Mod}_\mathbb{F}) , d_I\big) \longrightarrow \big(\mathsf{Bcd} , d_B\big) \] is an isometry. Hence, the barcode is a complete invariant for the isomorphism type of observable $\mathbb{R}$-indexed persistence vector spaces. In summary, the modern view of stability is algebraic: persistence modules are compared via interleavings, which one then tries to relate to the bottleneck distance between the associated barcodes, if these exist. \subsection{Changing Indexing Categories: Multi-d Persistence, Quivers and Zigzags} One of the early realizations in TDA was the usefulness of having filtrations indexed by more than one parameter (1999) \cite{frosini1999size}.
For instance, given a data set $X\subset \mathbb{R}^d$ one might want to focus on densely-populated regions \cite{carlsson2008local}, or on portions with high/low curvature \cite{carlsson2005persistence}. This leads naturally to $\mathbb{Z}^n$-filtered complexes: $\{K_\mathbf{u}\}_{\mathbf{u} \in \mathbb{Z}^n }$, $\mathbf{u} = (u_1,\ldots, u_n)$, where $K_{\mathbf{u}} \subset K_{\mathbf{v}}$ whenever $\mathbf{u} \preceq \mathbf{v}$ (i.e., $u_1 \leq v_1, \ldots, u_n \leq v_n$). In such a multi-filtered complex, each filtering direction $u_1,\ldots, u_n$ is meant to capture an attribute: e.g.\ distance/scale, density, curvature, etc. Taking homology with coefficients in $\mathbb{F}$ yields objects in $\mathsf{Fun}(\underline{\mathbb{Z}^n}, \mathsf{Mod}_\mathbb{F})$, and just like before, $\mathbb{Z}^n$-indexed persistence $\mathbb{F}$-vector spaces correspond to $n$-graded modules over the $n$-graded polynomial ring $P_n := \mathbb{F}[t_1,\ldots, t_n]$. Parameterizing the isomorphism classes of said modules, for $n\geq 2$, turns out to be much more involved than the barcode classification for $n=1$. Indeed, around 2009 Carlsson and Zomorodian \cite{carlsson2009theory} showed that the isomorphism type of a finitely generated $n$-graded $P_n$-module is uniquely determined by the following data: two finite multisets $\xi_0,\xi_1 \subset \mathbb{Z}^n$ encoding the location and multiplicity of birth-death events, and a point in the quotient of an algebraic variety $\mathcal{RF}(\xi_0,\xi_1)$ by the action of an algebraic group $G_{\xi_0}$. The multisets $\xi_0, \xi_1$ are the discrete portions of the resulting isomorphism invariant, while $\mathcal{RF}(\xi_0,\xi_1)/ G_{\xi_0} $ parameterizes the (potentially) continuous part. Here is an example due to Carlsson and Zomorodian \cite{carlsson2009theory} illustrating how complicated this quotient can be.
For $n=2$, consider the isomorphism classes of $P_2$-modules having $\xi_0 = \{(0,0), (0,0)\}$ and $\xi_1 = \{(3,0),(2,1), (1,2), (0,3)\}$. If $\mathsf{Gr}_1(\mathbb{F}^2)$ denotes the Grassmannian of lines in $\mathbb{F}^2$, then \[ \mathcal{RF}(\xi_0,\xi_1) = \mathsf{Gr}_1(\mathbb{F}^2) \times \mathsf{Gr}_1(\mathbb{F}^2) \times \mathsf{Gr}_1(\mathbb{F}^2) \times \mathsf{Gr}_1(\mathbb{F}^2) \] and $G_{\xi_0} $ turns out to be the general linear group $\mathsf{GL}_2(\mathbb{F})$ acting diagonally on $\mathsf{Gr}_1(\mathbb{F}^2)^4$. Since $\mathsf{Gr}_1(\mathbb{F}^2)^4 / \mathsf{GL}_2(\mathbb{F})$ contains a copy of $\mathbb{F} \smallsetminus \{0,1\}$, and each point in this set yields a distinct isomorphism class of $P_2$-modules, it follows that there is no complete discrete invariant for (finite!) multi-d persistence. The vast majority of recent results in multidimensional persistence focus on computable descriptors/visualizations of its intricate algebraic structure. Besides introducing the parametrization $$\xi_0,\xi_1, \mathcal{RF}(\xi_0,\xi_1)/G_{\xi_0},$$ Carlsson and Zomorodian also propose the rank invariant: for a $\mathsf{q}$-tame module $V :\underline{\mathbb{Z}^n} \longrightarrow \mathsf{Mod}_\mathbb{F}$, it is defined as the function $\rho_V$ sending each pair $\mathbf{u} \preceq \mathbf{v} $ in $\mathbb{Z}^n$ to the integer $\mathsf{rank} \, V(\mathbf{u} \preceq \mathbf{v})$. $\rho_V$ is computable (see \cite{carlsson2009computing} for a polynomial-time algorithm), it is discrete, and it is an invariant of the isomorphism type of $V$. When $n=1$ one can recover $\mathsf{bcd}(V)$ from $\rho_V$ and vice versa, and thus $\rho_V$ is complete in the 1-dimensional case.
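For $n=1$ the passage from $\rho_V$ back to the barcode is a simple inclusion-exclusion. The following self-contained sketch (my own, assuming a finite $\mathbb{Z}$-indexed module whose bars are supported in a known window $[lo, hi]$) recovers the multiplicity of each interval $[b,d)$ from rank data alone:

```python
def rank_fn(barcode):
    # rank invariant of a direct sum of interval modules:
    # r(i, j) = rank V(i <= j) = #{ [b, d) in barcode : b <= i <= j < d }
    def r(i, j):
        return sum(1 for (b, d) in barcode if b <= i <= j < d)
    return r

def recover_barcode(r, lo, hi):
    # multiplicity of [b, d) via inclusion-exclusion on the rank invariant:
    # mult = r(b, d-1) - r(b-1, d-1) - r(b, d) + r(b-1, d)
    bars = []
    for b in range(lo, hi + 1):
        for d in range(b + 1, hi + 2):
            mult = r(b, d - 1) - r(b - 1, d - 1) - r(b, d) + r(b - 1, d)
            bars.extend([(b, d)] * mult)
    return sorted(bars)
```

Running `recover_barcode` on the rank function of any finite barcode returns that barcode, which is the sense in which $\rho_V$ is complete for $n=1$; the analogous inclusion-exclusion fails to be complete once $n\geq 2$.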
Knudson notes in \cite{knudson2008refinement} that $\xi_0(V)$ and $\xi_1(V)$ are in fact the locations/multiplicities of birth events in the graded modules $\mathsf{Tor}^{P_n}_0 (V,\mathbb{F})$ and $\mathsf{Tor}^{P_n}_1(V,\mathbb{F})$, respectively; here $\mathbb{F}$ is identified with the $P_n$-module \[ \mathbb{F}[t_1,\ldots, t_n]/(t_1,\ldots, t_n). \] The higher-dimensional analogs $\mathsf{Tor}^{P_n}_j(V,\mathbb{F})$, $j\geq 2$, lead to a family of finite multisets $\xi_j(V) \subset \mathbb{Z}^n$, each with its own geometric interpretation, serving as isomorphism invariants for $V$. Other approaches to invariants for multidimensional persistence include the Hilbert Series of Harrington et al.\ \cite{harrington2017stratifying}, the extended algebraic functions of Skryzalin and Carlsson \cite{skryzalin2017numeric}, and the feature counting invariant of Scolamiero et al.\ \cite{scolamiero2017multidimensional}. Lesnick and Wright have recently released RIVET, the Rank Invariant Visualization and Exploration Tool \cite{lesnick2015interactive}. Put simply, RIVET uses the fact that if $ V : \underline{\mathbb{R}^2} \longrightarrow \mathsf{Mod}_\mathbb{F}$ is $\mathsf{q}$-tame and $L\subset \mathbb{R}^2$ is a line with nonnegative slope (hence a totally ordered subset of $(\mathbb{R}^2,\preceq)$), then $V^L : \underline{L} \longrightarrow \mathsf{Mod}_\mathbb{F}$, the 1-dimensional persistence vector space obtained by restricting $V$ to $L$, has a well-defined barcode $\mathsf{bcd}\left(V^L\right)$. The key feature in RIVET is a graphical interface which, for finite bi-filtrations, displays $\mathsf{bcd}\left(V^L\right)$ interactively as the user varies $L$. This is particularly useful for parameter selection and for the exploratory analysis of data sets with filtering functions.
Multidimensional persistence is a great example of how a seemingly innocuous change in indexing category, say from $\underline{\mathbb{Z}}$ to $\underline{\mathbb{Z}^2}$, can lead to a widely different and much more complicated classification problem. With this in mind, one would like to have a systematic approach to address the ensuing complexity. The representation theory of quivers \cite{derksen2005quiver} offers one such avenue. It turns out that the classification of finite $\mathsf{J}$-indexed persistence vector spaces $ V: \mathsf{J} \longrightarrow \mathsf{Mod}_\mathbb{F}$ can be studied directly from the shape of the indexing category $\mathsf{J}$. Indeed, let $G(\mathsf{J})$ be the finite directed (multi)graph with the objects of $\mathsf{J}$ as vertices, and one arrow for every morphism that is neither an identity nor a composition. Also, let $\gor{G}(\mathsf{J})$ be the undirected graph obtained from $G(\mathsf{J})$ by forgetting arrow directions. When $\gor{G}(\mathsf{J})$ is acyclic, Gabriel's theorem \cite{gabriel1972unzerlegbare} implies that pointwise finite objects in $\mathsf{Fun}(\mathsf{J}, \mathsf{Mod}_\mathbb{F})$ can be classified via complete discrete invariants if and only if the connected components of $\gor{G}(\mathsf{J})$ are Dynkin diagrams of the types described in Figure \ref{fig:Dynkin} below. \begin{figure}[htb!] \centering \includegraphics[width=0.7\textwidth]{Dynkin_diagrams.png} \caption{Dynkin diagrams of type $A_n$ for $n\geq 1$, $D_n$ for $n\geq 4$, and $E_n$ for $n = 6,7,8$.} \label{fig:Dynkin} \end{figure} Here is an example of how this result can be used to avoid unpleasant surprises: suppose that $G(\mathsf{J})$ is the graph with vertices $x_0 ,\ldots ,x_N$ and $N\geq 5$ edges $x_n\rightarrow x_0$, $n =1,\ldots, N$ (see Examples 3 and 8 in \cite{derksen2005quiver}).
While the resulting $\mathsf{J}$-indexed persistence vector spaces $V : \mathsf{J} \longrightarrow \mathsf{Mod}_\mathbb{F}$ may look simple (just star-shaped, right?), the connected graph $\gor{G}(\mathsf{J})$ is not a Dynkin diagram, and the ensuing classification problem is in fact of ``wild type'': complete invariants must include continuous high-dimensional pieces, just like in multidimensional persistence. These ideas entered the TDA lexicon around 2010 with the definition of zigzag persistence by Carlsson and de Silva \cite{carlsson2010zigzag}. Regular persistence addresses the problem of identifying stable homological features in a monotone system of spaces and continuous maps $$X_1 \rightarrow X_2 \rightarrow \cdots \rightarrow X_J.$$ Zigzag persistence, on the other hand, is a generalization to the non-monotone case. Here is a practical example: suppose one has an ordered sequence of spaces $X_1,\ldots, X_J$ (e.g., from time varying data), but no obvious maps $X_j \rightarrow X_{j+1}$. The need to track topological features as $j$ varies leads one to consider the system \[ X_1 \hookrightarrow X_1 \cup X_2 \hookleftarrow X_2 \hookrightarrow \cdots \hookleftarrow X_j \cup X_{j+1} \hookrightarrow \cdots \hookleftarrow X_J \] and the resulting \emph{zigzag diagram} $$V_1 \rightarrow V_2 \leftarrow V_3 \rightarrow \cdots \leftarrow V_n$$ at the homology level. More generally, a (finite) zigzag is a sequence of vector spaces $V_1,\ldots, V_n$ and linear maps $V_j \rightarrow V_{j+1}$ or $V_{j} \leftarrow V_{j+1}$. The sequence of arrow directions, e.g.\ $\tau = (\mathsf{left},\mathsf{left},\mathsf{right},\ldots,\mathsf{right},\mathsf{left})$, is the zigzag type. Since any choice of $\tau$ forces the indexing category $\mathsf{J}_\tau$ to satisfy $\gor{G}(\mathsf{J}_\tau) = A_n$ (one of the aforementioned Dynkin diagrams), Gabriel's theorem implies that finite zigzags $$V: \mathsf{J}_\tau \longrightarrow \mathsf{Mod}_\mathbb{F}$$ are completely classified by a discrete invariant.
Just as for regular 1-dimensional persistence, the invariant turns out to be a barcode, which can be efficiently computed \cite{milosavljevic2011zigzag}, and for which there is a zigzag stability theorem recently established by Botnan and Lesnick \cite{botnan2016algebraic}. When the graph $\gor{G}(\mathsf{J})$ has cycles, the functoriality of objects in $\mathsf{Fun}(\mathsf{J}, \mathsf{Mod}_\mathbb{F})$ is captured by the notion of a quiver with relations. The taxonomy from Gabriel's theorem no longer applies, but one can still find some answers in the representation theory of associative algebras. A particularly important instance is when the cycles of $\gor{G}(\mathsf{J})$ are not oriented cycles in $G(\mathsf{J})$; in this case the algebras of interest are finite dimensional (hence Artinian) and Auslander-Reiten theory \cite{auslander1997representation} becomes relevant. Escolar and Hiraoka \cite{escolar2016persistence} have recently put these ideas to use in the context of persistent objects indexed by commutative ladders; that is, the persistence of a morphism between two zigzags of the same type: \[ \begin{tikzcd} \bullet \ar{r} \ar{d}& \bullet \ar{d}& \bullet \ar{l}\ar{d} & \ar{l}\ar{r}\cdots & \bullet \ar{r}\ar{d}& \bullet \ar{d}& \bullet \ar{d} \ar{l}\\ \bullet \ar{r} & \bullet & \bullet \ar{l} & \ar{l}\ar{r}\cdots & \bullet \ar{r} & \bullet & \bullet \ar{l} \end{tikzcd} \] The resulting theory sits somewhere between zigzag persistence and multidimensional persistence: short ladders (length $\leq $ 4) have complete discrete invariants, but longer ones do not. Escolar and Hiraoka present an algorithm for computing these invariants, and also an interesting application to computational chemistry. I think this is a good place for me to stop; hopefully it is also a good starting point for the reader interested in persistent homology. There are several books covering many of the ideas I presented here, as well as many others.
The interested reader would certainly benefit from these resources \cite{ghrist2014elementary, chazal2016structure, edelsbrunner2010computational, oudot2015persistence}.
\section{ Introduction } In a recent publication~\cite{bib-newJADE} we have presented a study of event shape observables and determinations of \as\ using data of \epem\ annihilations at $\sqrt{s} = 22$ to $44$~GeV recorded with the former JADE detector \cite{bib-JADEdet} at the PETRA collider. This study provided valuable information which was not available before from $\epem$-annihilations in the PETRA energy range. The results on \as, obtained in a similar manner as those from the experiments at LEP, demonstrated that the energy dependence of \asq\ is in good agreement with the prediction of Quantum Chromodynamics (QCD). Evolved to the $\znull$ mass scale, the results are in good agreement with those obtained at LEP, and are of similar precision. In addition, power corrections, applied to analytic QCD calculations of the mean values of event shape distributions, were found to qualitatively and quantitatively match the effects of hadronisation. Thus QCD could be tested without the need of phenomenological hadronisation models. Meanwhile the perturbative calculations for the jet broadening variables have been improved by including a proper treatment of quark recoil~\cite{bib-new-jetbroadening}. Furthermore, for the $C$-parameter a resummation of leading and next-to-leading logarithm terms to all orders of \as\ (NLLA) became available~\cite{bib-C-resummation}. Besides these advances in the perturbative description of event shape observables, progress was made in the understanding of non-perturbative power corrections to the event shape observables and their mean values. In particular, two-loop calculations of such corrections were performed \cite{bib-Milan-factor} which modify the one-loop result of the power correction to the event shapes by a multiplicative factor (the Milan factor).
In this paper we complement our previous publication by a new \oaa+NLLA determination of \as\ from the $C$-parameter and update the determination from jet broadening at $\sqrt{s} = 35$ and $44$ GeV in Sections~\ref{sec-procedure} and \ref{sec-alphas}. We also apply power corrections to the $C$-parameter distributions and re-investigate those for the mean values of the thrust, heavy jet mass and both jet broadening observables in Section~\ref{sec-meanvalues}. We start in Section~\ref{sec-data} with a brief summary of the data samples used and draw conclusions from our results in Section~\ref{sec-conclusions}. \section{ Data samples and Monte Carlo simulation } \label{sec-data} We analysed data recorded with the JADE detector in 1984 to 1986 at centre-of-mass energies of $39.5$-$46.7$~GeV and around $35$~GeV. The JADE detector was operated from 1979 until 1986 at the PETRA electron-positron collider at centre-of-mass energies of $\sqrt{s} = 12$ to $46.7$ GeV. A detailed description of the JADE detector can be found in~\cite{bib-naroska,bib-JADEdet}. The main components of the detector were the central jet chamber to measure charged particle tracks and the lead glass calorimeter to measure energy depositions of electromagnetic showers, which together covered almost the whole solid angle of $4\pi$. Multihadronic events were selected by the standard JADE selection cuts~\cite{bib-JADEtrigger}. All charged particle tracks, assumed to be pions, with a total momentum of $|\vec{p}| > 100$~MeV$/c$ were considered in the analysis. Energy clusters in the electromagnetic calorimeter, assumed to be photons, were considered if their energies exceeded $150$ MeV after correction for energy deposited by associated tracks.
Background from two-photon processes and $\tau$-pair events, and from events with hard initial state photon radiation, was removed by cuts on the visible energy $E_{\mathrm{vis}} = \sum E_i$, the total missing momentum $p_{\mathrm{miss}} = |\sum \vec{p}_i|$ ($\vec{p}_i$ and $E_i$ are the 3-momentum and the energy of the tracks and clusters), the longitudinal momentum balance relative to the \epem\ beam axis, $p_{\mathrm{bal}} = |\sum p^z_i/E_{\mathrm{vis}}|$, and the polar angle of the thrust axis, $\theta_{T}$: \begin{itemize} \item $E_{\mathrm{vis}} > \sqrt{s}/2$~; \item $p_{\mathrm{miss}} < 0.3\cdot \sqrt{s}$~; \item $p_{\mathrm{bal}} < 0.4$~; \item $|\cos\theta_{T}| < 0.8$~. \end{itemize} Thus the backgrounds from $\gamma\gamma$ and $\tau$-pair events were reduced to less than $0.1\%$ and $1\%$, respectively \cite{bib-JADEeventsel}. The numbers of events which were retained after these cuts for this analysis are listed in Table~\ref{tab-eventnumbers}. \begin{table} \begin{center} \begin{tabular}{|c|c||c|c|} \hline year & $\sqrt{s}\ [\mathrm{GeV}]$ & data & MC \\ \hline\hline 1984/85 & $40$-$48$ & $\ 6158$ & $14\thinspace 497$ \\ \hline 1986 & $35$ & $20\thinspace 926$ & $25\thinspace 123$ \\ \hline \end{tabular} \end{center} \caption{\label{tab-eventnumbers} Number of events in data and in Monte Carlo detector simulation retained after application of the multihadron selection cuts described in the text. } \end{table} Corresponding Monte Carlo detector simulation data for $35$ and $44$~GeV were based on the QCD parton shower event generator JETSET 6.3~\cite{bib-JETSET}. The simulated events at $35$~GeV used coherent branching for the parton shower while the $44$~GeV events used non-coherent branching \footnote{The different treatment of coherence in these samples of simulated data has no visible influence on the results of this study. }. The main parameters used for event generation are given in Section~\ref{subsec-systematics}.
Both samples included a simulation of the acceptance and resolution of the JADE detector. \section{Experimental procedure} \label{sec-procedure} From the data samples described in the previous section, the event shape distribution of the $C$-parameter was determined. The $C$-parameter is defined as~\cite{bib-C-parameter} \begin{displaymath} C = 3 (\lambda_1 \lambda_2 + \lambda_2 \lambda_3 + \lambda_3 \lambda_1 ) \end{displaymath} where $\lambda_\gamma$, $\gamma=1, 2, 3$, are the eigenvalues of the momentum tensor \begin{displaymath} \Theta^{\alpha\beta} = \frac{\sum_i \vec{p}_i^{\,\alpha} \vec{p}_i^{\,\beta} / |\vec{p}_i|} {\sum_j |\vec{p}_j|} \ \ \ . \end{displaymath} \subsection{Correction procedure} \label{subsec-correction} Limitations of the detector's acceptance and resolution, and effects due to initial state photon radiation, were corrected for by applying a bin-by-bin correction procedure to the event shape distributions. The correction factors were defined by the ratio of the distribution calculated from events generated by JETSET 6.3 at {\em hadron level} over the same distribution at {\em detector level}. The {\em hadron level} distributions were obtained from JETSET 6.3 generator runs without detector simulation and without initial state radiation, using all particles with lifetimes $\tau > 3\cdot 10^{-10}$~s. Events at {\em detector level} contained initial state photon radiation and a detailed simulation of the detector response, and were processed in the same way as the data. Next, the data distributions were further corrected for hadronisation effects. This was done by applying bin-by-bin correction factors derived from the ratio of the distribution at {\em parton level} over the same distribution at {\em hadron level}, which were calculated from JETSET generated events before and after hadronisation, respectively. The correction factors are typically of the order of $10$ to $20$\%, growing larger towards the $2$-jet region.
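The $C$-parameter definition above is straightforward to evaluate numerically. The following short sketch (an illustration, not part of the original analysis code) builds the linearised momentum tensor $\Theta^{\alpha\beta}$ from a list of particle 3-momenta and returns $C$ from its eigenvalues:

```python
import numpy as np

def c_parameter(momenta):
    """C = 3*(l1*l2 + l2*l3 + l3*l1), with l1..l3 the eigenvalues of Theta.

    momenta: iterable of particle 3-momenta, shape (N, 3).
    """
    p = np.asarray(momenta, dtype=float)
    norms = np.linalg.norm(p, axis=1)
    # Theta^{ab} = ( sum_i p_i^a p_i^b / |p_i| ) / sum_j |p_j|
    theta = np.einsum('ia,ib,i->ab', p, p, 1.0 / norms) / norms.sum()
    l1, l2, l3 = np.linalg.eigvalsh(theta)
    return 3.0 * (l1 * l2 + l2 * l3 + l3 * l1)
```

A perfectly collinear back-to-back (2-jet) configuration gives $C=0$ and an isotropic one gives $C=1$, consistent with the 2-jet region and the 3-jet boundary at $C=0.75$ mentioned above.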
In the case of the $C$-parameter the correction factors are also large next to the $3$-jet boundary, which is at $C=0.75$. The data distributions, thus corrected to the {\em parton level}, can be compared to analytic QCD calculations. \subsection{Systematic uncertainties} \label{subsec-systematics} Systematic uncertainties of the corrected data distributions were investigated by modifying details of the event selection and of the correction procedure. For each variation the whole analysis was repeated and any deviation from the main result was considered a systematic error. In general, the maximum deviation from the main result for each kind of variation was regarded as a symmetric systematic uncertainty. The main result was obtained using the default selection and correction procedure as described above. In detail, we considered either tracks or clusters only for the measurement of the event shape distributions. We varied the cut on $\cos\theta_T$ by $\pm 0.1$. The cut on $p_{\mathrm{miss}}$ was either removed or tightened to $p_{\mathrm{miss}} < 0.25 \cdot \sqrt{s}$. Similarly, the momentum balance requirement was either restricted to $p_{\mathrm{bal}} < 0.3$ or dropped. We also varied the cut on the visible energy $E_{\mathrm{vis}}$ by $\pm 0.05 \cdot \sqrt{s}$. In order to check the residual contributions from $\tau$-pair events we also required at least seven well-measured charged tracks. To study the impact of the hadronisation model of the JETSET 6.3 generator, the values of several significant model parameters were varied around their tuned values from Reference~\cite{bib-JADEtune} used for our main result. Different sets of correction factors to correct the data from {\em hadron level} to {\em parton level} were generated by varying single parameters of the JETSET generator.
The variations were chosen to be similar to the one-standard-deviation percentage limits obtained by the OPAL Collaboration from a parameter tuning of JETSET at $\sqrt{s} = \mz$~\cite{bib-OPALtune}. In detail, we investigated the effects due to the parton shower, hadronisation parameters, and quark masses. The amount of gluon radiation during the parton shower development was modified by varying $\Lambda_{\mathrm{LLA}}$ by $\pm 50$~MeV around the tuned value of $400$~MeV. To vary the onset of hadronisation, we altered the parton shower cut-off parameter $Q_0$ by $\pm 0.5$~GeV around the tuned value of $1$~GeV. We used the full observed variation of \as\ to reflect a variation of $Q_0$ between $0$ and $2$~GeV. The width $\sigma_0=300$~MeV of the transverse momentum distribution in the hadronisation process was varied by $\pm 30$~MeV. The LUND symmetric fragmentation function, applied to hadronise events of up, down and strange quarks, was varied by changing the $a$ parameter from $0.5$ by $\pm 0.225$, whereas the $b$ parameter was kept fixed at $0.9$. As a systematic variation we used the LUND~\cite{bib-JETSET} instead of the Peterson et al.~\cite{bib-Peterson} fragmentation function for charm and bottom quarks. The effects due to the bottom quark mass were studied by restricting the model calculations which were used to determine the correction factors to up, down, strange, and charm quarks (udsc) only. Any deviation from our main result due to mass effects was treated as an asymmetric error. \section{ Determination of \as } \label{sec-alphas} \subsection{ Corrected event shape distributions} After applying the corrections for detector and initial state radiation effects we obtained the $C$-parameter event shape distributions at {\em hadron level}. In Table~\ref{tab-eventshapes-35+44GeV} the corrected data values are listed with statistical errors and experimental systematic uncertainties. The mean values of the distributions are also given.
Our measured results for the jet broadening event shape distributions can be found in Ref.~\cite{bib-newJADE}. \subsection{ Determination of \as\ using \oaa+NLLA calculations} We determined \as\ by \chisq\ fits to event shape distributions of $C$ and also $B_T$ and $B_W$ corrected to the {\em parton level}. For the sake of direct comparison with other published results we chose the so-called ln($R$)-matching scheme to merge the \oaa\ with the NLLA calculations. The fits to the $C$-parameter distributions applied the resummation results obtained in~\cite{bib-C-resummation}. For the fits to the jet broadening measures we used the improved calculation of Ref.~\cite{bib-new-jetbroadening}, which includes a proper treatment of the quark recoil against an ensemble of soft gluons that is essential in the calculation of the jet broadening distributions. The renormalisation scale factor, $\xmu \equiv \mu/\sqrt{s}$, was set to $\xmu = 1$ for the main result. Here, the value of $\mu$ defines the energy scale at which the theory is renormalised. The fit ranges for each observable were determined by choosing the largest range for which the hadronisation uncertainties remained below about $10$~\%, for which the \chisqd\ of the fits did not exceed the minimum by more than a factor of two, and by aiming at results for \as\ that are independent of changes of the fit range. The remaining changes when enlarging or reducing the fit range by one bin on either side were taken as systematic uncertainties. Only statistical errors were considered in the fit, thus resulting in \chisqd\ larger than unity. The finally selected fit ranges, the \as\ results of the \chisq\ fits and of the study of systematic uncertainties are tabulated in Tables~\ref{tab-asresult-44GeV} and \ref{tab-asresult-35GeV}, and are shown in Figures~\ref{fig-asresult-44GeV} and \ref{fig-asresult-35GeV}. We also varied the renormalisation scale factor in the range $\xmu = 0.5$ to $2.0$.
We found variations larger than the uncertainties from the detector correction and the hadronisation model dependence. The dependence of the fit result for \as\ on \xmu\ indicates the importance of higher order terms in the theory. It should be pointed out that the improved perturbative calculation for the jet broadening resulted in a larger $\alpha_s$ value from the fit to the data. The systematic uncertainties are not affected by the new calculation but the $\chi^2$ improved slightly. We combined these new results with the \as\ results of our previous publication~\cite{bib-newJADE} replacing the results obtained from the jet broadening observables. A single \as\ value was obtained from the individual determinations from thrust, heavy jet mass, total and wide jet broadening, $C$-parameter and differential 2-jet rate following the procedure described in References~\cite{bib-OPALresummed,bib-eventshapes,bib-globalalphas}. This procedure accounts for correlations of the systematic uncertainties. At each energy, a weighted average of the six \as\ values was calculated with the reciprocal of the square of the respective total error used as a weight. In the case of asymmetric errors we took the average of the positive and negative error to determine the weight. For each of the systematic checks, the mean of the \as\ values from all considered observables was determined. Any deviation of this mean from the weighted average of the main result was taken as a systematic uncertainty. With this procedure we obtained as final results for \as \begin{eqnarray*} \as(35\ {\mathrm{GeV}}) & = & 0.1448 \pm 0.0010{\mathrm{(stat.)}} \ ^{+0.0117} _{-0.0069}{\mathrm{(syst.)}} \\ \as(44\ {\mathrm{GeV}}) & = & 0.1392 \pm 0.0017{\mathrm{(stat.)}} \ ^{+0.0104} _{-0.0072}{\mathrm{(syst.)}} \ . 
\end{eqnarray*} The systematic errors at $35$ and $44$~GeV are the quadratic sums of the experimental uncertainties ($\pm 0.0017$, $\pm 0.0032$), the effects due to the Monte Carlo modelling ($^{+0.0070}_{-0.0035}$, $^{+0.0050}_{-0.0027}$) and the contributions due to the variation of the renormalisation scale ($^{+0.0092}_{-0.0057}$, $^{+0.0086}_{-0.0058}$). It should be noted that the modelling uncertainties due to quark mass effects contribute significantly to the total error. \section{Mean Values of Distributions and QCD Power Corrections} \label{sec-meanvalues} \subsection{ Power corrections} The value of \as\ can also be assessed from the energy dependence of the mean values of event shape distributions. Perturbative calculations exist for the mean values of thrust, heavy jet mass, total and wide jet broadening and $C$-parameter up to \oaa. The perturbative prediction for the mean value of an observable $\cal F$ is given by the expression \begin{displaymath} \langle {\cal F}^{\mathrm{pert.}} \rangle = A_{\cal F} \left( \frac{\as}{2\pi}\right) + (B_{\cal F} - 2 A_{\cal F}) \left( \frac{\as}{2\pi}\right)^2 \end{displaymath} where the coefficients $A_{\cal F}$ and $B_{\cal F}$ were determined from the \oaa\ perturbative calculations~\cite{bib-ERT,bib-NLLA-1,bib-LEP1report,bib-EVENT2}. The term $-2 A_{\cal F}$ accounts for the difference between the total cross-section used in the measurement and the Born level cross-section used in the perturbative calculation.
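As a cross-check of the quadratic combination quoted above, the four systematic totals can be reproduced directly from the individual components (values transcribed from the text; a simple sketch):

```python
import math

def quad_sum(*components):
    """Combine independent error components in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# 35 GeV: experimental, Monte Carlo modelling, renormalisation-scale components
up_35   = quad_sum(0.0017, 0.0070, 0.0092)   # upper total, ~ +0.0117
down_35 = quad_sum(0.0017, 0.0035, 0.0057)   # lower total, ~ -0.0069
# 44 GeV
up_44   = quad_sum(0.0032, 0.0050, 0.0086)   # upper total, ~ +0.0104
down_44 = quad_sum(0.0032, 0.0027, 0.0058)   # lower total, ~ -0.0072
```

Each combination agrees with the quoted total systematic error to the last quoted digit.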
\begin{table}
\begin{center}
\begin{tabular}{|c||c|r||c|} \hline
Observable $\cal F$ & $A_{\cal F}$ & \multicolumn{1}{c||}{$B_{\cal F}$} & $a_{\cal F}$ \\ \hline\hline
$\langle T \rangle$ & $2.103$ & $44.99$ & $-1$ \\
$\langle M_H^2/s \rangle$ & $2.103$ & $23.24$ & $0.5$ \\
$\langle B_T \rangle$ & $4.066$ & $64.24$ & $0.5$ \\
$\langle B_W \rangle$ & $4.066$ & $-9.53$ & $0.25$ \\
$\langle C \rangle$ & $8.638$ & $146.8$ & $3\pi/2$ \\ \hline
\end{tabular}
\end{center}
\caption{\label{tab-powcor} Coefficients of the perturbative prediction~\protect\cite{bib-ERT,bib-NLLA-1,bib-LEP1report,bib-EVENT2} and coefficients and parameters of the power corrections~\protect\cite{bib-Milan-factor} to the mean values of the event shape observables. Note that the definition of $a_{\cal F}$ does not include the $2{\cal M}/\pi$ factor introduced in~\protect\cite{bib-Milan-factor}. }
\end{table}
The numerical values of these coefficients are summarised in Table~\ref{tab-powcor}. In this study we corrected for hadronisation effects by additive power-suppressed corrections ($1/\sqrt{s}$) to the perturbative predictions of the mean values of the event shape observables. The non-perturbative effects are due to the emission of gluons of very low energy, which cannot be treated perturbatively owing to the divergence of the perturbative expressions for \as\ at low scales. In the calculations of References~\cite{bib-webber} and \cite{bib-Milan-factor}, which we used in this analysis, a non-perturbative parameter \begin{displaymath} \bar{\alpha}_0(\mu_I) = \frac{1}{\mu_I} \int_0^{\mu_I} {\mathrm{d}}k\ \ \as(k) \end{displaymath} was introduced to replace the divergent portion of the perturbative expression for $\as(\sqrt{s})$ below an infrared matching scale $\mu_I$. The general form of the power correction to the mean value of an observable $\cal F$ was first given in Reference~\cite{bib-webber}. It was the result of a one-loop calculation.
In References~\cite{bib-Milan-factor} similar calculations have been performed at two loops. It could be shown that the two-loop result modifies the one-loop result only by a factor ${\cal M} \approx 1.8$, which is known as the Milan factor. An additional factor $2/\pi$ is due to the different definitions of the non-perturbative parameters $\alpha_{\mathrm{eff}}$, being the so-called ``effective coupling''~\cite{bib-effective-coupling}, and $\bar{\alpha}_0$ as defined above. The two-loop result for the power correction assumes for thrust, heavy jet mass and $C$-parameter the form~\cite{bib-Milan-factor} \begin{eqnarray*} \langle {\cal F}^{\mathrm{pow.}} \rangle & = & a_{\cal F} \frac{4 C_F}{\pi} \cdot \frac{2{\cal M}}{\pi} \cdot \left( \frac{\mu_I}{\sqrt{s}}\right) \cdot \nonumber \\ & & \cdot \left[ \bar{\alpha}_{0}(\mu_I)-\as(\sqrt{s}) - \frac{\beta_0}{2\pi} \left(\ln\frac{\sqrt{s}}{\mu_I}+\frac{K}{\beta_0} +1\right)\as^2(\sqrt{s}) \right]\ , \end{eqnarray*} where $C_F = 4/3$, while for the two jet broadening measures the correction is logarithmically enhanced by a factor $\ln(\sqrt{s}/Q_B)$. We approximated $Q_B$ by $\mu_I$ such that \begin{eqnarray*} \langle {\cal F}^{\mathrm{pow.}} \rangle & = & a_{\cal F} \frac{4 C_F}{\pi} \cdot \frac{2{\cal M}}{\pi} \cdot \left( \frac{\mu_I}{\sqrt{s}}\right) \cdot \ln\left(\frac{\sqrt{s}}{\mu_I}\right) \cdot \nonumber \\ & & \cdot \left[ \bar{\alpha}_{0}(\mu_I)-\as(\sqrt{s}) - \frac{\beta_0}{2\pi} \left(\ln\frac{\sqrt{s}}{\mu_I}+\frac{K}{\beta_0} +1\right)\as^2(\sqrt{s}) \right]\ . \end{eqnarray*} With this approximation, an extra correction without the logarithmic enhancement~\cite{bib-Milan-factor} was left out because it is small and strongly anti-correlated with the enhanced correction. Furthermore, we found that the available data on the jet broadening are not yet sensitive to this extra correction.
The factor $\beta_0 = (11 C_A-2 N_f)/3$ in the two expressions stems from the QCD $\beta$-function of the renormalisation group equation. It depends on the number of colours, $C_A = 3$, and the number of active quark flavours, $N_f$, for which we used $N_f=5$ throughout the analysis. The term $K = (67/18-\pi^2/6) C_A - 5/9 \cdot N_f$ originates from the choice of the $\overline{\mathrm{MS}}$ renormalisation scheme. The remaining coefficient $a_{\cal F}$ is listed in Table~\ref{tab-powcor} for the event shapes considered. \subsection{ Determination of \as\ using power corrections} We determined $\as(\mz)$ by \chisq\ fits of the expression \begin{displaymath} \langle {\cal F}\rangle = \langle {\cal F}^{\mathrm{pert.}} \rangle + \langle {\cal F}^{\mathrm{pow.}} \rangle \end{displaymath} to the mean values of the five observables, thrust, heavy jet mass, $C$-parameter, and total and wide jet broadening, including the measured mean values obtained by other experiments at different centre-of-mass energies~\cite{bib-L3alphas,bib-OPALNLLA,bib-meanvalues,bib-DELPHI-powcor}. For the central values of \as\ from the fits we chose a renormalisation scale factor of $\xmu=1$ and an infrared scale of $\mu_I=2$~GeV. The \chisqd\ of all fits were between $0.8$ ($\langle M_H^2/s\rangle$) and $4.4$ ($\langle B_T\rangle$). We estimated the systematic uncertainties by varying \xmu\ from $0.5$ to $2$ and $\mu_I$ from $1$ to $3$~GeV. The results of the fits are shown in Figure~\ref{fig-as-powcor} and the numeric values are tabulated in Table~\ref{tab-as-powcor}. The table presents the values for \as\ and for $\bar{\alpha}_0$, the experimental errors and systematic uncertainties of the fit results. We consider these results based on power corrections as a test of the new theoretical prediction~\cite{bib-Milan-factor}. It should be noted that the theoretically expected universality of $\bar{\alpha}_0$ is observed only at the level of $30\%$.
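To illustrate how the fitted prediction $\langle {\cal F}\rangle = \langle {\cal F}^{\mathrm{pert.}}\rangle + \langle {\cal F}^{\mathrm{pow.}}\rangle$ is assembled, the sketch below evaluates $\langle C\rangle$ at $\sqrt{s}=35$~GeV from the coefficients of Table~\ref{tab-powcor}. The inputs $\as(35~\mathrm{GeV})=0.145$ and $\bar{\alpha}_0=0.5$ are illustrative assumptions, not fit results:

```python
import math

CF, CA, NF = 4.0 / 3.0, 3.0, 5.0
MILAN = 1.8                                   # two-loop Milan factor M
beta0 = (11 * CA - 2 * NF) / 3.0
K = (67.0 / 18.0 - math.pi**2 / 6.0) * CA - 5.0 / 9.0 * NF

def pert_mean(A, B, alpha_s):
    """<F^pert> = A (as/2pi) + (B - 2A) (as/2pi)^2."""
    x = alpha_s / (2 * math.pi)
    return A * x + (B - 2 * A) * x * x

def power_corr(a_F, roots, alpha_s, alpha0, mu_I=2.0):
    """Two-loop power correction for T, M_H^2/s and C (no log enhancement)."""
    bracket = (alpha0 - alpha_s
               - beta0 / (2 * math.pi)
                 * (math.log(roots / mu_I) + K / beta0 + 1) * alpha_s**2)
    return a_F * (4 * CF / math.pi) * (2 * MILAN / math.pi) * (mu_I / roots) * bracket

# <C> at sqrt(s) = 35 GeV with the illustrative inputs above
mean_C = pert_mean(8.638, 146.8, 0.145) + power_corr(3 * math.pi / 2, 35.0, 0.145, 0.5)
```

With these inputs the perturbative term is about $0.27$ and the power correction about $0.13$; for the jet broadenings the extra factor $\ln(\sqrt{s}/\mu_I)$ would multiply the correction.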
Employing the procedure used in Section~\ref{sec-alphas} to combine the individual \as\ values, we obtained \begin{displaymath} \as(\mz) = 0.1188\ ^{+0.0044}_{-0.0034} \end{displaymath} where the error is the experimental uncertainty ($\pm 0.0016$), the renormalisation scale uncertainty ($^{+0.0033}_{-0.0023}$) and the uncertainty due to the choice of the infrared scale ($^{+0.0024}_{-0.0019}$), all combined in quadrature. This result is in good agreement with the world average value~\cite{bib-world-alphas-sb} of $\as^{\mathrm{w.a.}}(\mz)=0.119\pm 0.006$. \section{Summary and Conclusions } \label{sec-conclusions} Data recorded by the JADE experiment at centre-of-mass energies around $35$ and $44$~GeV were analysed in terms of event shape distributions. The measured distributions were corrected for detector and initial state photon radiation effects using original Monte Carlo simulation data for $35$ and $44$~GeV. The simulated data are based on the JETSET parton shower generator, version~6.3. The same event generator was also employed to correct the data for hadronisation effects in order to determine the strong coupling constant \as. Our measurements of \as\ are based on the most complete theoretical calculations available to date. For all observables, theoretical calculations exist in \oaa\ and in the next-to-leading log approximation. These two calculations were combined using the ln($R$)-matching scheme. We found the improved perturbative calculation of the jet broadening to describe the data slightly better. With these calculations, values of \as\ are obtained that are about $3\%$ higher than previously and more consistent with those from the other event shape observables.
Combining the values of \as\ obtained in this analysis with those from our previous publication and using the new values obtained from jet broadening, the final values at the two centre-of-mass energies are \begin{eqnarray*} \as(44\ {\mathrm{GeV}}) & = & 0.139 \ ^{+0.011} _{-0.007} \\ \as(35\ {\mathrm{GeV}}) & = & 0.145 \ ^{+0.012} _{-0.007} \ , \end{eqnarray*} where the errors are formed by adding in quadrature the statistical, experimental systematic, Monte Carlo modelling and higher order QCD uncertainties. The dominant contributions to the total error came from the choice of the renormalisation scale and from uncertainties due to quark mass effects. Evolving our \as\ measurements to $\sqrt{s} = \mz$, the results obtained at $35$ and $44$~GeV transform to $0.123\,^{+0.008}_{-0.005}$ and $0.123\,^{+0.008}_{-0.006}$, respectively. The combination of these values gives $\asmz = 0.123\,^{+0.008}_{-0.005}$. The energy dependence of the mean values of the distributions can be directly compared with analytic QCD predictions plus power corrections for hadronisation effects involving a universal non-perturbative parameter $\bar{\alpha}_0$~\cite{bib-webber, bib-Milan-factor}. Our studies resulted in \begin{displaymath} \as(\mz) = 0.119\ ^{+0.004} _{-0.003} \end{displaymath} which is in good agreement with our results from the \oaa+NLLA fits and also with the world average value. The universality of the non-perturbative parameter $\bar{\alpha}_0$ is found only at a level of $30$\%. \medskip \bigskip\bigskip\bigskip
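The quoted evolution of the two measurements to $\sqrt{s}=\mz$ can be reproduced to good approximation with the one-loop renormalisation group equation; this is a simplified sketch (the quoted central values use higher-order running, so the one-loop numbers come out marginally higher, near $0.124$):

```python
import math

MZ = 91.1876  # GeV

def beta0(nf):
    """Leading beta-function coefficient, 11 - 2 nf / 3."""
    return 11.0 - 2.0 * nf / 3.0

def evolve_alphas_1loop(alpha_s, scale_from, scale_to, nf=5):
    """One-loop running: d(1/alpha_s)/d ln(mu) = beta0 / (2 pi)."""
    inv = 1.0 / alpha_s + beta0(nf) / (2 * math.pi) * math.log(scale_to / scale_from)
    return 1.0 / inv

a35 = evolve_alphas_1loop(0.1448, 35.0, MZ)   # from the 35 GeV measurement
a44 = evolve_alphas_1loop(0.1392, 44.0, MZ)   # from the 44 GeV measurement
```

Both evolved values land within a fraction of a percent of the quoted $0.123$.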
\section{Introduction} Entanglement, viewed as one of the key features of the quantum world that has no classical counterpart, is perhaps \emph{the most challenging subject of modern quantum theory}. There are two distinct directions for characterizing entanglement. One is to find proper criteria for detecting entanglement, and the other is to find a ``good'' entanglement measure, namely, to define the best measure quantifying the amount of entanglement of a given state. Among a number of entanglement measures, \emph{concurrence} is a subject of intense research interest \cite{Hill,Wootters,Wootters2,Gao,Chen,Fan,Rungta,Albeverio,Zhang,Chattopadhyay,Chattopadhyay2,Chen2,Berrada,Huang,Jafarpour,Salimi,Augusiak,Li}, which has been shown to play a key role in analyzing the ultrabright source of entangled photon pairs \cite{Dousse}, describing quantum phase transitions in various interacting quantum many-body systems \cite{Osterloh}, affecting macroscopic properties of solids significantly \cite{Ghosh}, exploring dynamics of entanglement for noisy qubits undergoing dipole-dipole interaction \cite{Altintas} and revealing distinct scaling behavior for different types of multipartite entanglement \cite{Carvalho}, etc. Concurrence was originally derived from the entanglement of formation (EOF), which is used to compute the amount of entanglement for pure states in two-qubit systems \cite{Hill}. Because the EOF is a monotonically increasing function of the concurrence, the concurrence itself can also be regarded as an entanglement measure. Afterward, the concept of concurrence was extended to two-qubit mixed states by means of the convex roof construction \cite{Wootters}, and then, to arbitrary but finite-dimensional bipartite as well as multipartite systems for both pure and mixed states \cite{Chen,Rungta}. Continuous-variable systems can also be used for quantum information processing and quantum computing \cite{Braunstein}.
Most analyses of entanglement in continuous-variable systems rely on expressing the states of the system in terms of some discrete but infinite basis. The following questions then arise naturally: Can the concept of concurrence be extended to the infinite-dimensional case? Is it still a ``well-defined'' entanglement measure? In the present paper, we answer these questions affirmatively. In \cite{ZW}, the \emph{partial Hermitian conjugate} (PHC) criterion for pure states in finite-dimensional systems was proposed and then generalized in \cite{GY} to the infinite-dimensional case. The PHC criterion says that a bipartite pure state is separable if and only if it is PHC invariant (see below). The authors of \cite{GY,ZW} also pointed out that one may obtain an entanglement measure from the PHC criterion since, for any entangled pure state, its PHC is not equal to itself, and thus the trace norm or the Hilbert-Schmidt norm of the difference between them may be an entanglement measure of the given state. Interestingly, as we will show, the entanglement measure induced by the Hilbert-Schmidt norm, the \emph{PHC measure}, coincides with the concurrence. This result contributes to a more profound understanding of the concurrence. In this paper, we consider the bipartite system consisting of two parties A and B which are associated with the state spaces $H_A$ and $H_B$, respectively, with $\dim H_A\otimes H_B\leq+\infty$. We denote by $\rho_A$ and $\rho_B$ the reduced density operators of $\rho$ with respect to the subsystems A and B, respectively, i.e., $\rho_A={\rm Tr}_B(\rho)$ and $\rho_B={\rm Tr}_A(\rho)$.
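For numerical work the reduced density operators ${\rm Tr}_B(\rho)$ and ${\rm Tr}_A(\rho)$ can be realised on a finite truncation of $H_A\otimes H_B$ by reshaping the density matrix and contracting one index pair; a minimal numpy sketch (the function name and dimensions are illustrative):

```python
import numpy as np

def partial_trace(rho, dA, dB, keep='A'):
    """Reduced density operator of rho acting on H_A (x) H_B,
    for finite truncations of dimensions dA and dB."""
    r = rho.reshape(dA, dB, dA, dB)          # indices (i, k; j, l)
    if keep == 'A':
        return np.einsum('ikjk->ij', r)      # rho_A = Tr_B(rho): sum over k
    return np.einsum('kikj->ij', r)          # rho_B = Tr_A(rho): sum over k
```

For a product state $\rho_A\otimes\rho_B$ the two reductions return the respective factors, which makes a convenient sanity check.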
A bipartite state $\rho$ acting on $H=H_A\otimes H_B$ is called \emph{separable} if it can be written as \begin{eqnarray} \rho=\sum\limits_ip_i\rho_i^A\otimes\rho_i^B,\quad \sum\limits_ip_i=1, \ p_i\geq0 \end{eqnarray} or if it is a limit of states of the above form under the trace norm topology \cite{Werner}, where $\rho_i^A$ and $\rho_i^B$ are pure states on the subsystems associated with the Hilbert spaces $H_A$ and $H_B$, respectively. A state that is not separable is said to be \emph{entangled}. In particular, if a state can be represented in the form of Eq.(1), it is called \emph{countably separable} \cite{Holevo}. It is worth mentioning that, with increasing state space dimension, quantifying entanglement becomes more and more difficult in practice. The structure of this paper is as follows. In Sec.II we extend the concept of the concurrence to infinite-dimensional bipartite systems and show that it is a continuous function under the trace-class norm topology. This result is new even for the finite-dimensional case, and enables us to prove that the concurrence is also a well-defined monotonic entanglement measure in the infinite-dimensional case. Going further, another entanglement measure closely related to the concurrence, the \emph{tangle}, is investigated. The PHC measure is introduced and discussed in Sec.III. A brief conclusion is given in the last section. \section{Concurrence for infinite-dimensional bipartite states} We start by reviewing some results from the finite-dimensional case. For a bipartite pure state $|\psi\rangle\in H_A\otimes H_B$ with $\dim H_A\otimes H_B<+\infty$, the concurrence $C(|\psi\rangle)$ of $|\psi\rangle$ is defined in \cite{Rungta} by \begin{eqnarray} C(|\psi\rangle)=\sqrt{2[1-{\rm Tr}(\rho_A^2)]}, \end{eqnarray} where $\rho_A={\rm Tr}_B(|\psi\rangle\langle\psi|)$.
Equivalently, \begin{eqnarray*} C(|\psi\rangle) =\sqrt{\sum\limits_{i,j,k,l}|a_{ik}a_{jl}-a_{il}a_{jk}|^2} \end{eqnarray*} provided that $|\psi\rangle=\sum\limits_{i,j}a_{ij}|i\rangle|j'\rangle$, where $\{|i\rangle\}$ and $\{|j'\rangle\}$ are given orthonormal bases of $H_A$ and $H_B$, respectively. The concurrence is extended to mixed states by means of convex roof construction \cite{Rungta2}, \begin{eqnarray} C(\rho) =\min\limits_{\{p_i,|\psi_i\rangle\}}\{\sum\limits_i p_i C(|\psi_i\rangle)\}, \end{eqnarray} where the minimum is taken over all possible ensembles of $\rho$ (here, $\{p_i,|\psi_i\rangle\}$ is called an ensemble of $\rho$ whenever $\rho=\sum\limits_ip_i|\psi_i\rangle\langle\psi_i|$ with $\{p_i\}$ a probability distribution and $\{|\psi_i\rangle\}$ a family of pure states). The tangle is another measure closely related to the concurrence. The tangle $\tau(|\psi\rangle)$ for pure state $|\psi\rangle$ is defined by $\tau(|\psi\rangle)=C^2(|\psi\rangle)$, and the tangle for mixed state $\rho$ is defined by \begin{eqnarray} \tau(\rho) =\min\limits_{\{p_i,|\psi_i\rangle\}} \{\sum\limits_i p_i C^2(|\psi_i\rangle)\} \end{eqnarray} (Ref. \cite{Coffman}). Note that, although the tangle and the concurrence are equivalent to each other as entanglement measures for pure states, they are different for mixed states. In fact, it holds that $\tau(\rho)\geq C^2(\rho)$ and the equality holds in the case of two-qubit states \cite{Osborne}. It is evident that $\rho$ is separable if and only if $C(\rho)=\tau(\rho)=0$. With the same spirit in mind, we extend the concepts of concurrence and tangle to infinite-dimensional bipartite systems. \\ \noindent {\bf Definition 1.} \ Let $|\psi\rangle\in H_A\otimes H_B$ with $\dim H_A\otimes H_B=+\infty$ be a pure state. 
\begin{eqnarray} C(|\psi\rangle):=\sqrt{2(1-{\rm Tr}(\rho_A^2))}, \end{eqnarray} is called the concurrence of $|\psi\rangle$, where $\rho_A={\rm Tr}_B(|\psi\rangle\langle\psi|)$.\\ Since the eigenvalues of $\rho_A$ coincide with that of $\rho_B={\rm Tr}_A(|\psi\rangle\langle\psi|)$, with no loss of generality, we always use the reduced density operators with respect to the subsystem A. It is clear that $C(|\psi\rangle)=0$ if and only if $|\psi\rangle$ is separable. For a mixed state $\rho$, the concurrence of $\rho$ can be defined by means of the generalized convex roof construction, namely, \begin{eqnarray} C(\rho):=\inf\limits_{\{p_i,|\psi_i\rangle\}} \{\sum\limits_i p_i C(|\psi_i\rangle)\}, \end{eqnarray} where the infimum is taken over all possible ensembles $\{p_i,|\psi_i\rangle\}$ of $\rho$. The following proposition provides two computational formulas of the concurrence for pure states.\\ \noindent{\bf Proposition 1} \ Let $|\psi\rangle\in H_A\otimes H_B$ with $\dim H_A\otimes H_B=+\infty$ be a pure state. (1) If the Fourier expansion of $|\psi\rangle$ with respect to some given product basis $\{|i\rangle|j'\rangle\}$ of $H_A\otimes H_B$ is $|\psi\rangle=\sum\limits_{i,j}a_{ij}|i\rangle|j'\rangle$, then \begin{eqnarray} C(|\psi\rangle) =\sqrt{\sum\limits_{i,j,k,l}|a_{ik}a_{jl}-a_{il}a_{jk}|^2}. \end{eqnarray} (2) If the Schmidt decomposition of $|\psi\rangle$ is $|\psi\rangle=\sum\limits_k\lambda_k|k\rangle|k'\rangle$, then \begin{eqnarray} C(|\psi\rangle)=\sqrt{2\sum\limits_{k\neq l}\lambda_k^2\lambda_l^2}. \end{eqnarray} \noindent{\sl Proof} \ (1) Consider the operator $D=D_{|\psi\rangle}=(a_{ij}) : H_B\rightarrow H_A$ defined by $D|j^\prime \rangle=\sum_i a_{ij}|i\rangle$. Since ${\rm Tr}(DD^\dag)=\sum\limits_{i,j}|a_{ij}|^2=1$, $D$ is a Hilbert-Schmidt operator. With $\rho=|\psi\rangle\langle\psi|$, it is easily checked that $\rho_A=DD^\dag$. 
As ${\rm Tr}((DD^\dag)^2)=\sum\limits_{i,j,k,l}a_{ik}\bar{a}_{il}a_{jl}\bar{a}_{jk}$, we have \begin{eqnarray*} &&1-{\rm Tr}(\rho_A^2)=(\sum\limits_{i,j}a_{ij}\bar{a}_{ij})^2-\sum\limits_{i,j,k,l}a_{ik}\bar{a}_{il}a_{jl}\bar{a}_{jk}\\ &=&\sum\limits_{i,j,k,l} (a_{ik}\bar{a}_{ik}a_{jl}\bar{a}_{jl}-a_{ik}\bar{a}_{il}a_{jl}\bar{a}_{jk})\\ &=&\frac{1}{2}\sum\limits_{i,j,k,l} (a_{ik}a_{jl}-a_{il}a_{jk})(\bar{a}_{ik}\bar{a}_{jl}-\bar{a}_{il}\bar{a}_{jk})\\ &=&\frac{1}{2}\sum\limits_{i,j,k,l}|a_{ik}a_{jl}-a_{il}a_{jk}|^2. \end{eqnarray*} Hence $C(|\psi\rangle)=\sqrt{2(1-{\rm Tr}(\rho_A^2))} =\sqrt{\sum\limits_{i,j,k,l}|a_{ik}a_{jl}-a_{il}a_{jk}|^2}$, as desired. (2) can be checked similarly and we omit its proof here.\hfill$\qed$ \\ It is known that the concurrence is an entanglement measure for finite-dimensional systems since it meets the following conditions: (i) $E(\rho)=0$ if and only if $\rho$ is separable; (ii) $E(\rho)=E(U_A\otimes U_B \rho U_A^\dag\otimes U_B^\dag)$ holds for any local unitary operators $U_A$ and $U_B$ on the subsystems $H_A$ and $H_B$, respectively; (iii) $E$ is LOCC monotonic, i.e., $E(\Lambda(\rho))\leq E(\rho)$ holds for any local operation and classical communication (LOCC) $\Lambda$ \cite{Fan}. The conditions (i)-(iii) above are necessary for any entanglement measure $E$ \cite{Vedral}. Generally, an entanglement measure $E$ also satisfies (iv) $E(\sum\limits_ip_i\rho_i)\leq\sum\limits_ip_iE(\rho_i)$ for a mixed state $\rho=\sum\limits_ip_i\rho_i$, where $p_i\geq0$, $\sum\limits_ip_i=1$ (see \cite{Vedral2}). If (iii)-(iv) are satisfied by an entanglement measure $E$, then it is called an entanglement monotone \cite{Vidal}. In what follows we show that the concurrence $C$ defined in Eqs.(5)-(6) for infinite-dimensional systems is also an entanglement monotone, i.e., (i)-(iv) are satisfied by $C$ in the infinite-dimensional case as well. Checking that $C$ meets condition (ii) is straightforward.
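The agreement of the defining expression $C(|\psi\rangle)=\sqrt{2(1-{\rm Tr}(\rho_A^2))}$ with the two formulas of Proposition 1 can be checked numerically on a finite truncation with random coefficients; a sketch (not part of the argument; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
# random coefficient matrix a_ij of a (truncated) pure state, normalised
A = rng.normal(size=(4, 5)) + 1j * rng.normal(size=(4, 5))
A /= np.linalg.norm(A)                       # sum_ij |a_ij|^2 = 1

rhoA = A @ A.conj().T                        # rho_A = D D^dagger
C_def = np.sqrt(2 * (1 - np.trace(rhoA @ rhoA).real))

# Proposition 1(1): C^2 = sum_ijkl |a_ik a_jl - a_il a_jk|^2
d = np.einsum('ik,jl->ijkl', A, A) - np.einsum('il,jk->ijkl', A, A)
C_coef = np.sqrt(np.sum(np.abs(d) ** 2))

# Proposition 1(2): Schmidt coefficients are the singular values of (a_ij)
lam = np.linalg.svd(A, compute_uv=False)
C_schmidt = np.sqrt(2 * ((lam**2).sum() ** 2 - (lam**4).sum()))
```

All three expressions agree to machine precision, as the proof of Proposition 1 guarantees.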
The condition (i) is obviously satisfied by the concurrence in the finite-dimensional case. This is because every separable state $\rho$ in a finite-dimensional bipartite system is countably separable; that is, there exists an ensemble $\{p_i,|\psi_i\rangle\}$ of $\rho$ such that the $|\psi_i\rangle$s are separable pure states, and thus we get immediately that $0\leq C(\rho)\leq \sum_i p_iC(|\psi_i\rangle)=0$ as $C(|\psi_i\rangle)=0$. However, that separability of $\rho$ implies $C(\rho)=0$ is no longer obvious in the infinite-dimensional case, since there do exist separable states in infinite-dimensional systems that are not \emph{countably separable} \cite{Holevo}. For such separable states, there does not exist any ensemble $\{p_i,|\psi_i\rangle\}$ of $\rho$ such that the $|\psi_i\rangle$s are separable, and one cannot get $C(\rho)=0$ directly. It is clear that, if $C$ is continuous, then $C(\rho)=0$ whenever $\rho$ is separable, because every separable state is a limit of countably separable states. The continuity of the concurrence $C$ is established in Proposition 2, which is not obvious even for finite-dimensional systems.\\ \noindent{\bf Proposition 2} \ The concurrence is continuous for both finite- and infinite-dimensional systems, i.e., \begin{eqnarray} \lim_{n\rightarrow\infty}C(\rho_n) =C(\rho)\quad {\rm whenever}\quad \lim_{n\rightarrow\infty}\rho_n=\rho \end{eqnarray} in the trace-norm topology.\\ \noindent{\sl Proof} \ To prove the continuity of $C$, let us extend the concurrence of states to self-adjoint trace-class operators. Let $A$ be a self-adjoint trace-class operator acting on $H_A\otimes H_B$. We define the concurrence of $A$ by $$ C(A)={\rm Tr}(|A|)C(\frac{|A|}{{\rm Tr}(|A|)}),$$ where $|A|=(A^\dag A)^{\frac{1}{2}}$.
It is clear that $$C(A)=\inf\limits_{\{\lambda_i, |\psi_i\rangle\}} \sum_i\lambda_iC(|\psi_i\rangle),$$ where the infimum is taken over all $\{\lambda_i, |\psi_i\rangle\}$ with $\lambda_i\geq 0$, $\sum_i\lambda_i={\rm Tr}(|A|)$ and $|A|=\sum_i\lambda_i|\psi_i\rangle\langle\psi_i|$. It is an immediate consequence of the definition that, if $0\leq |A|\leq |B|$, then $C(A)\leq C(B)$. Assume that $\rho_n,\rho\in {\mathcal S}(H_A\otimes H_B)$ and $\lim_{n\rightarrow\infty}\rho_n=\rho$. Let $\vartheta_n=\rho-\rho_n$ and let \begin{eqnarray*}\vartheta_n =\sum_{k(n)}\lambda_{k(n)}|\eta_{k(n)}\rangle\langle\eta_{k(n)}| \end{eqnarray*} be its spectral decomposition. We claim that \begin{eqnarray} C(\rho)=C(\rho_n+\vartheta_n)\leq C(\rho_n)+C(\vartheta_n). \end{eqnarray} For any $\varepsilon>0$, there exist ensembles $\{p_{k(n)}$, $|\psi_{k(n)}\rangle\}$ and $\{q_{l(n)}$, $|\phi_{l(n)}\rangle\}$ of $\rho_n$ and ${|\vartheta_n|}$, respectively, and $0<\epsilon_1,\epsilon_2< \varepsilon$, such that \begin{eqnarray*}C(\rho_n)=\sum_{k(n)}p_{k(n)}C(|\psi_{k(n)}\rangle)-\frac{\epsilon_1}{2} \end{eqnarray*} and \begin{eqnarray*} C({|\vartheta_n|})=\sum_{l(n)}q_{l(n)}C(|\phi_{l(n)}\rangle)-\frac{\epsilon_2}{2}. \end{eqnarray*} We compute \begin{eqnarray*} &&C(\rho_n+\vartheta_n)\leq C(\rho_n+|\vartheta_n|)\\ &\leq&\sum_{k(n)}p_{k(n)}C(|\psi_{k(n)}\rangle) +\sum_{l(n)}q_{l(n)}C(|\phi_{l(n)}\rangle)\\ &=&C(\rho_n)+C(\vartheta_n)+\frac{\epsilon_1+\epsilon_2}{2}. \end{eqnarray*} Since $\varepsilon$ is arbitrarily given, the claim is proved. Similarly, using $C(\rho_n)=C(\rho-\vartheta_n)\leq C(\rho+|\vartheta_n|)$, we obtain \begin{eqnarray*} C(\rho_n)\leq C(\rho)+C(|\vartheta_n|), \end{eqnarray*} which, together with Eq.(10), implies that \begin{eqnarray*} |C(\rho_n)-C(\rho)|\leq C(|\vartheta_n|). 
\end{eqnarray*} Observing that $C(\vartheta_n)\rightarrow0$ $(n\rightarrow\infty)$ since $C(\vartheta_n)\leq\sum_{k(n)}\sqrt{2}|\lambda_{k(n)}|$ and ${\rm Tr}(|\vartheta_n|) =\sum_{k(n)}|\lambda_{k(n)}|\rightarrow0$, we get $\lim_{n\rightarrow\infty}C(\rho_n)=C(\rho)$, as desired. \hfill$\qed$\\ We now begin to check that $C$ satisfies properties (iii)-(iv). For the finite-dimensional case, Vidal \cite{Vidal} proposed a nice recipe for determining entanglement monotones by proving that the convex roof extension of a pure state measure $E$ satisfying the two conditions below is an entanglement monotone (Ref. \cite[Theorem 2]{Vidal}):\\ (a) For a pure state $|\psi\rangle$, $\rho_A={\rm Tr}_B(|\psi\rangle\langle\psi|)$, define a function $f$ by $f(\rho_A)=E(|\psi\rangle)$; then $$f(U\rho_AU^\dag)=f(\rho_A);$$ and (b) $f$ is concave, namely, $$f(\lambda\rho_1+(1-\lambda)\rho_2)\geq\lambda f(\rho_1)+(1-\lambda)f(\rho_2)$$ for any density matrices $\rho_1$, $\rho_2$, and any $0\leq\lambda\leq1$.\\ For infinite-dimensional bipartite systems, every LOCC admits a form of \begin{eqnarray} \Lambda(\rho) =\sum\limits_{i=1}^N(A_i\otimes B_i)\rho (A_i^\dag \otimes B_i^\dag) \end{eqnarray} with $\sum\limits_{i=1}^NA_i^\dag A_i\otimes B_i^\dag B_i\leq I_A\otimes I_B$, where $N$ may be $+\infty$ and the series converges in the strong operator topology \cite{HJ}. \if false It is proved that, any LOCC $\Lambda$ of bipartite pure state using two-way communication can be simulated by a one-way communication protocol \cite{Lo}.
Namely, $\Lambda$ can be simulated by $\Lambda'(|\psi\rangle\langle\psi|)=\sum\limits_{i}(I_A\otimes \Lambda_i^B)(A_i\otimes I_B|\psi\rangle\langle\psi| A_i^\dag\otimes I_B)$, where $\Lambda_i^B$s are quantum operations on the second system [namely, $\Lambda_i^B$ is a completely positive trace preserving linear map, it admits a form of $\Lambda_i^B(\cdot)=\sum\limits_k M_{k(i)}(\cdot)M_{k(i)}^\dag$ with $\sum\limits_{k(i)}M_{k(i)}^\dag M_{k(i)}=I_B$, where the series converges in the strong operator topology (see \cite{HJ})], $\sum\limits_i A_i^\dag A_i\leq I_A$ and $\Lambda_i^B$s are quantum operations conditional on the result $i$, where the series converges in the strong operator topology \cite{HJ}. \fi Let $\mathcal{S}(H_A\otimes H_B)$ be the set of all quantum states acting on $H_A\otimes H_B$. According to the entanglement monotone scenario discussed in \cite{Vidal}, in order to prove that a function $E: {\mathcal S}(H_A\otimes H_B)\rightarrow{\mathbb {R}}^+$ satisfying (i)-(ii) is LOCC monotonic, we only need to consider the sequence of LOCC $\{\Lambda_{B,k}\}$ or $\{\Lambda_{A,l}\}$, of the form \begin{eqnarray} \Lambda_{B,k}(\rho)=\sum_{i(k)}(I_A\otimes B_{i(k)})\rho(I_A\otimes B_{i(k)}^\dag) \end{eqnarray} or \begin{eqnarray} \Lambda_{A,l}(\rho)=\sum_{j(l)}(A_{j(l)}\otimes I_B)\rho(A_{j(l)}^\dag\otimes I_B), \end{eqnarray} where $\sum_{i(k)}B_{i(k)}^\dag B_{i(k)}\leq I_B$ and $\sum_{j(l)}A_{j(l)}^\dag A_{j(l)}\leq I_A$ (here, the series converges in the strong operator topology) with $\sum\limits_k{\rm Tr}(\Lambda_{B,k}(\rho))=\sum\limits_l{\rm Tr}(\Lambda_{A,l}(\rho))=1$, $B_{i(k)}$s (resp. $A_{j(l)}$s) are operators from $H_B$ (resp. $H_A$) into $H_{B'}$ (resp. $H_{A'}$) for some Hilbert space $H_{B'}$ (resp. $H_{A'}$), and where $k$ (resp. $l$) labels different outcomes if at some stage of local manipulations part B (resp. A) performs a measurement. With no loss of generality, hereafter we consider the LOCC $\{\Lambda_{B,k}\}$ as in Eq.(12). 
Applying $\Lambda_{B,k}$ to $\rho$, the state becomes $$\rho_k'=\frac{\Lambda_{B,k}(\rho)}{p_k}$$ with probability $p_k={\rm Tr}(\Lambda_{B,k}(\rho))$. Therefore, the final state is $\rho'=\sum_kp_k\rho_k'$. By \cite{Vidal}, if $E(\rho')\leq E(\rho)$ holds for $\Lambda_{B,k}$, then the condition (iii) holds for $E$. We show below that, in the infinite-dimensional case, if $E$ is continuous on quantum states under the trace norm topology, then (a)-(b) are sufficient conditions for $E$ to be an entanglement monotone as well.\\ \noindent{\bf Proposition 3} \ Let $E$ be an entanglement measure for pure states in infinite-dimensional systems and define $E(\rho):$=$\inf\limits_{\{p_i,|\psi_i\rangle\}}$ $\{\sum\limits_i p_i E(|\psi_i\rangle)\}$ for mixed states $\rho$. Let $f(\rho_A)=E(|\psi\rangle\langle\psi|)$, $\rho_A={\rm Tr}_B(|\psi\rangle\langle\psi|)$. Assume that $E$ is continuous and that $f$ satisfies (a)-(b). Then $E$ is an entanglement monotone, i.e., $E$ satisfies (iii)-(iv).\\ \noindent{\sl Proof} \ We assume that $f$ satisfies conditions (a) and (b), namely, (a) $f(U\rho_AU^\dag)=f(\rho_A)$ for any unitary operator $U$ on $H_A$, and (b) $f$ is concave. By (a), we know that $E(\rho)$ is invariant under local unitary operations, i.e., $E(U_A\otimes U_B\rho U_A^\dag\otimes U_B^\dag)=E(\rho)$ for any unitary operators $U_A$ and $U_B$ acting on $H_A$ and $H_B$ respectively (notice that condition (iii) implies that $E(\rho)$ is invariant under local unitary operations). In what follows, we show that (iii) holds for $E$ and the LOCC $\{\Lambda_{B,k}\}$, from which, according to the entanglement monotone scenario proposed in \cite{Vidal}, we can thus obtain that (iii) holds for $E$ and any LOCC $\Lambda$. We assume first that $\rho$ is a pure state, $\rho=|\psi\rangle\langle\psi|$. If part B performs $\Lambda_{B,k}$ on subsystem B as in Eq.(12), then the state becomes $\rho_k'=\frac{\Lambda_{B,k}(\rho)}{p_k}$ with probability $p_k={\rm Tr}(\Lambda_{B,k}(\rho))$.
Writing $\rho_{A,k}'={\rm Tr}_B(\rho_k')$, we obtain $\rho_A=\sum_kp_k\rho_{A,k}'$. For any ensemble $\{r_{kl},|\psi_{kl}\rangle\}$ of $\rho_k'$, we have $$E(\rho_k')\leq\sum_lr_{kl}E(|\psi_{kl}\rangle).$$ This yields \begin{eqnarray*} &&E(\rho)=f(\rho_A)=f(\sum_kp_k\rho_{A,k}')\\ &=&f(\sum_{k,l}p_kr_{kl}\rho_{A,kl}') \geq\sum_{k,l}p_kr_{kl}f(\rho_{A,kl}')\\ &=&\sum_{k,l}p_kr_{kl}E(|\psi_{kl}\rangle) \geq\sum_kp_kE(\rho_k'), \end{eqnarray*} where $\rho_{A,kl}'={\rm Tr}_B(|\psi_{kl}\rangle\langle\psi_{kl}|)$; the first inequality holds since $f$ is concave and continuous. Therefore, (iii) is satisfied by $E$ if $\rho$ is pure. Assume now that $\rho$ is mixed. Perform $\Lambda_{B,k}$ on $\rho$ and denote $\rho_k'=\frac{\Lambda_{B,k}(\rho)}{p_k}$ with probability $p_k={\rm Tr}(\Lambda_{B,k}(\rho))$. Observe that, for any $\varepsilon>0$, there exist an ensemble $\{t_j,|\eta_j\rangle\}$ of $\rho$ and $0<\epsilon_1<\varepsilon$ such that \begin{eqnarray*} E(\rho)=\sum_jt_jE(|\eta_j\rangle)-\frac{\epsilon_1}{2}. \end{eqnarray*} For each $j$, let \begin{eqnarray*} \rho_{jk}'=\frac{1}{t_{jk}}\Lambda_{B,k}(|\eta_j\rangle\langle\eta_j|), \end{eqnarray*} where $t_{jk}={\rm Tr}(\Lambda_{B,k}(|\eta_j\rangle\langle\eta_j|))$. Then \begin{eqnarray*} \rho_k'=\frac{1}{p_k}\sum_jt_jt_{jk}\rho_{jk}' \end{eqnarray*} and \begin{eqnarray*} E(|\eta_j\rangle)\geq\sum_kt_{jk}E(\rho_{jk}') \end{eqnarray*} by what was proved for pure states above. For each pair $(j,k)$, suppose that $\{t_{jkl},|\psi_{jkl}\rangle\}$ is an ensemble of $\rho_{jk}'$ such that \begin{eqnarray*} E(\rho_{jk}')=\sum_lt_{jkl}E(|\psi_{jkl}\rangle)-\frac{\epsilon_{jk}}{2},\quad 0<\epsilon_{jk}<\frac{\varepsilon}{2^k}.
\end{eqnarray*} We achieve that \begin{eqnarray*} E(\rho)&=&\sum_jt_jE(|\eta_j\rangle)-\frac{\epsilon_1}{2}\\ &\geq& \sum_{j,k}t_jt_{jk}E(\rho_{jk}')-\frac{\epsilon_1}{2}\\ &=&\sum_{j,k,l}t_jt_{jk}t_{jkl} E(|\psi_{jkl}\rangle)-\epsilon'\\ &\geq&\sum_kp_kE(\rho_k')-\epsilon' \end{eqnarray*} for some $\epsilon'<\varepsilon$. Since $\varepsilon$ is arbitrarily given, we see that (iii) is satisfied for mixed states as well. Now we show that (iv) is valid. Let $\rho=\sum_kp_k\rho_k$. For any given $\varepsilon>0$, there exists an ensemble of $\rho_k$, $\{q_{kl},|\phi_{kl}\rangle\}$, and $0<\epsilon<\varepsilon$ such that \begin{eqnarray*} E(\rho_k)\geq\sum_lq_{kl}E(|\phi_{kl}\rangle)-\frac{\epsilon}{2^k}. \end{eqnarray*} As $\{p_kq_{kl},|\phi_{kl}\rangle \}_{k,l}$ is an ensemble of $\rho$, this entails that \begin{eqnarray*}E(\rho)\leq\sum_kp_k\sum_lq_{kl}E(|\phi_{kl}\rangle)\leq\sum_kp_kE(\rho_k)+{\varepsilon}, \end{eqnarray*} from which we see that $E(\rho)\leq\sum_kp_kE(\rho_k) $, finishing the proof. \hfill$\qed$ \\ Based on Proposition 3, we show below that the concurrence for infinite-dimensional systems defined in Eqs.(4)-(5) satisfies conditions (i)-(iv) and thus it is a well-defined entanglement measure (monotone).\\ \noindent{\bf Theorem 1} \ The concurrence defined in Eqs.(4)-(5) is an entanglement monotone.\\ \noindent{\sl Proof} \ By Proposition 2 and Proposition 3, we only need to verify that the function $f$ defined by $f(\rho_A)=C(|\psi\rangle)$ with $\rho_A={\rm Tr}_B(|\psi\rangle\langle\psi|)$ satisfies (a) and (b). Note that, for any $\rho\in{\mathcal S}(H_A)$, we have $f(\rho)=\sqrt{2(1-{\rm Tr}(\rho^2))}$. Thus (a) is obvious. We check that $f$ is concave. For any given states $\rho_1$ and $\rho_2$ on $H_A$, let \begin{eqnarray*} \rho=\lambda\rho_1+(1-\lambda)\rho_2,\quad 0\leq\lambda\leq1. 
\end{eqnarray*} Then \begin{eqnarray*} f(\lambda\rho_1+(1-\lambda)\rho_2)\geq\lambda f(\rho_1)+(1-\lambda)f(\rho_2) \end{eqnarray*} holds provided that \if false \begin{eqnarray} {\rm Tr}(\rho_1\rho_2) +\sqrt{(1-{\rm Tr}(\rho_1^2))(1-{\rm Tr}(\rho_2^2))}\leq1. \end{eqnarray} Using the fact $(\rho_1-\rho_2)(\rho_1-\rho_2)^\dag=(\rho_1-\rho_2)^2=\rho_1^2+\rho_2^2-\rho_1\rho_2-\rho_2\rho_1\geq0$, one has\fi \begin{eqnarray*} {\rm Tr}(\rho_1^2)+{\rm Tr}(\rho_2^2)\geq2{\rm Tr}(\rho_1\rho_2). \end{eqnarray*} But the last inequality is always valid, since ${\rm Tr}((\rho_1-\rho_2)^2)\geq0$. Thus, $f$ is concave. \hfill$\qed$\\ In the same spirit as in the finite-dimensional case, we define the tangle of a pure state in infinite-dimensional systems by \begin{eqnarray} \tau(|\psi\rangle)=C^2(|\psi\rangle). \end{eqnarray} If $|\psi\rangle=\sum\limits_k\lambda_k|k\rangle|k'\rangle$ is the Schmidt decomposition of $|\psi\rangle$ \cite{GY}, then \begin{eqnarray} \tau(|\psi\rangle)=2(1-{\rm Tr}(\rho_A^2))=2\sum\limits_{k\neq l}\lambda_k^2\lambda_l^2. \end{eqnarray} The tangle of a mixed state $\rho$, $\tau(\rho)$, can be naturally defined by \begin{eqnarray} \tau(\rho):=\inf\limits_{\{p_i,|\psi_i\rangle\}} \{\sum\limits_i p_i C^2(|\psi_i\rangle)\}, \end{eqnarray} where the infimum is taken over all possible ensembles $\{p_i,|\psi_i\rangle\}$ of $\rho$. By Proposition 2 and Theorem 1, $\tau$ is continuous and satisfies the conditions (i)-(iv) as well. Therefore, $\tau$ is a well-defined entanglement measure, too. For a mixed state $\rho$, $C(\rho)\neq \sqrt{2[1-{\rm Tr}(\rho_A^2)]}$ in general. For the finite-dimensional case, it is shown in \cite{Zhang,Mintert} that $C^2(\rho)\leq2[1-{\rm Tr}(\rho_A^2)]$. In fact, we have the following result.\\ \noindent{\bf Proposition 4} \ Let $\rho\in{\mathcal S}(H_A\otimes H_B)$ with $\dim H_A\otimes H_B\leq+\infty$. Then \begin{eqnarray} C^2(\rho)\leq\tau(\rho)\leq2[1-{\rm Tr}(\rho_A^2)].
\end{eqnarray} \noindent{\sl Proof} \ For any $\epsilon>0$, there exists $\{p_i,|\psi_i\rangle\}$ such that $\rho=\sum\limits_ip_i|\psi_i\rangle\langle\psi_i|$ and \begin{eqnarray*} \tau(\rho)\geq\sum\limits_ip_iC^2(|\psi_i\rangle)-\epsilon. \end{eqnarray*} Then we have \begin{eqnarray*} &&C^2(\rho)\leq(\sum\limits_ip_iC(|\psi_i\rangle))^2\\ &=&(\sum\limits_i\sqrt{p_i}\sqrt{p_i}C(|\psi_i\rangle))^2\\ &\leq&(\sum\limits_ip_i)(\sum\limits_ip_iC^2(|\psi_i\rangle))\\ &\leq&\tau(\rho)+\epsilon, \end{eqnarray*} which establishes the inequality $C^2(\rho)\leq\tau(\rho)$ since $\epsilon>0$ is arbitrary. Let $\rho_{i,A}={\rm Tr}_B(|\psi_i\rangle\langle\psi_i|)$. One has \begin{eqnarray*}\tau(\rho)&\leq &\sum\limits_ip_iC^2(|\psi_i\rangle)\\ &=&\sum\limits_ip_i[2(1-{\rm Tr}(\rho_{i,A}^2))]\\ &=&2(1-\sum\limits_ip_i{\rm Tr}(\rho_{i,A}^2))\\ &\leq&2(1-{\rm Tr}(\rho_A^2)) \end{eqnarray*} due to the convexity of ${\rm Tr}(\rho^2)$ \cite{Zhang}. \hfill$\qed$\\ \section{PHC measure: viewing concurrence from another perspective } In this section, we establish another entanglement measure, the PHC measure, which is based on the PHC criterion \cite{ZW,GY}, and show that it coincides with the concurrence. It therefore provides an alternative perspective for understanding the concurrence. It is known that entanglement measures may be induced from entanglement criteria. For example, negativity and convex-roof extended negativity are two kinds of entanglement measures induced from the elegant PPT criterion \cite{Vidal3,Lee}. In \cite{GY,ZW}, a necessary and sufficient condition of separability for pure states is proposed. We first review some notation. \\ \noindent{\bf Definition 2} (\cite{ZW,GY}). \ Let $\rho=|\psi\rangle\langle\psi|$ be a pure state acting on $H_A\otimes H_B$ with $\dim H_A\otimes H_B\leq+\infty$ and \begin{eqnarray*} |\psi\rangle=\sum\limits_k\lambda_k|k\rangle|k^{\prime}\rangle \end{eqnarray*} be the Schmidt decomposition of $|\psi\rangle$.
Then \begin{eqnarray*} \rho=\sum\limits_{k,l}\lambda_k\lambda_l|k\rangle|k^{\prime}\rangle\langle l|\langle l^{\prime}| \end{eqnarray*} and the partial Hermitian conjugate of $\rho$ is defined by \begin{eqnarray}\rho^{\rm PHC}=\sum\limits_{k,l}\lambda_k\lambda_l|k\rangle|l^{\prime}\rangle \langle l|\langle k^{\prime}|. \end{eqnarray} It is shown in \cite{ZW,GY} that a pure state $\rho=|\psi\rangle\langle\psi|$, $|\psi\rangle\in H_A\otimes H_B$, is separable if and only if $\rho^{\rm PHC}=\rho$. Consequently, $\rho^{\rm PHC}\neq\rho$ implies that $\rho$ is entangled, and also that $\|\rho_\psi-\rho_\psi^{\rm PHC}\|_2>0$ (here, $\|\cdot\|_2$ denotes the Hilbert-Schmidt norm, i.e., $\|A\|_2=[{\rm Tr}(A^\dag A)]^{\frac{1}{2}}$). In what follows, we show that the PHC criterion does provide us with an entanglement measure.\\ \noindent{\bf Definition 3} \ Let $|\psi\rangle\in H_A\otimes H_B$ with $\dim H_A\otimes H_B\leq+\infty$ be a pure state. The PHC measure of $|\psi\rangle$ is defined by \begin{eqnarray} E_{\rm PHC}(|\psi\rangle):=\|\rho_\psi-\rho_\psi^{\rm PHC}\|_2. \end{eqnarray} For a mixed state $\rho$, \begin{eqnarray} E_{\rm PHC}(\rho) :=\inf\limits_{\{p_i,|\psi_i\rangle\}} \{\sum\limits_i p_i E_{\rm PHC}(|\psi_i\rangle)\}, \end{eqnarray} where the infimum is taken over all possible ensembles of $\rho$. The main result of this section is the following.\\ \noindent{\bf Theorem 2} \ The PHC entanglement measure coincides with the concurrence, i.e., $E_{\rm PHC}(\rho)=C(\rho)$ for any state $\rho$ acting on $H_A\otimes H_B$ with $\dim H_A\otimes H_B\leq+\infty$.\\ \noindent{\sl Proof} \ By the generalized convex roof construction, we only need to show \begin{eqnarray} E_{\rm PHC}(|\psi\rangle)=C(|\psi\rangle) \end{eqnarray} for all pure states $|\psi\rangle$. Let $|\psi\rangle=\sum\limits_k\lambda_k|k\rangle |k^{\prime}\rangle$ be the Schmidt decomposition of $|\psi\rangle$.
Then $\rho^{\rm PHC}=\sum\limits_{k,l}\lambda_k\lambda_l|k\rangle|l^{\prime}\rangle \langle l|\langle k^{\prime}|$. Therefore \begin{eqnarray*} &&\rho_\psi-\rho_\psi^{\rm PHC}\\ &=&\sum\limits_{k,l}\lambda_k\lambda_l|k\rangle|k^{\prime}\rangle\langle l|\langle l^{\prime}|-\sum\limits_{i,j}\lambda_i\lambda_j|i\rangle |j^{\prime}\rangle \langle j|\langle i^{\prime}|\\ &=&\sum\limits_{k,l}\lambda_k\lambda_l|k\rangle\langle l| \otimes(|k^\prime\rangle\langle l^\prime|-|l^\prime\rangle\langle k^\prime|). \end{eqnarray*} Now, \begin{eqnarray*} &&(\rho_\psi-\rho_\psi^{\rm PHC})(\rho_\psi-\rho_\psi^{\rm PHC})^\dagger\\ &=&\sum\limits_{k,l,i}\lambda_k\lambda_l^2\lambda_i|k\rangle\langle i|\otimes(|k^\prime\rangle\langle i^\prime|+\langle k^\prime|i^\prime\rangle|l^\prime\rangle\langle l^\prime|\\ &&-\langle l^\prime|i^\prime\rangle|k^\prime\rangle\langle l^\prime| -\langle k^\prime|l^\prime\rangle|l^\prime\rangle\langle i^\prime|) \end{eqnarray*} implies that \begin{eqnarray*} {\rm Tr}((\rho_\psi-\rho_\psi^{\rm PHC})(\rho_\psi-\rho_\psi^{\rm PHC})^\dagger)=2\sum\limits_{k\neq l}\lambda_k^2\lambda_l^2. \end{eqnarray*} Therefore \begin{eqnarray*} E_{\rm PHC}(|\psi\rangle)=\|\rho_\psi-\rho_\psi^{\rm PHC}\|_2=\sqrt{2\sum\limits_{k\neq l}\lambda_k^2\lambda_l^2}=C(|\psi\rangle) \end{eqnarray*} by Proposition 1, completing the proof. \hfill$\qed$\\ Thus, the PHC measure can also be regarded as a ``well-defined'' entanglement measure. Although the PHC measure is the same as the concurrence, it sheds new light on the nature of the concurrence. \section{Conclusion} Summarizing, the concepts of the concurrence and the tangle for infinite-dimensional bipartite quantum systems are introduced. These two functions are continuous under the trace norm topology. This enables us to prove that the concurrence as well as the tangle are still well-defined monotonic entanglement measures.
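As a numerical sanity check of the identity $E_{\rm PHC}=C$ on pure states (an illustrative sketch; the random state and the dimensions $2\times3$ are arbitrary choices), one can extract the Schmidt coefficients of $|\psi\rangle$ by a singular value decomposition and compare $\sqrt{2(1-{\rm Tr}(\rho_A^2))}$ with $\sqrt{2\sum_{k\neq l}\lambda_k^2\lambda_l^2}$:

```python
import numpy as np

def concurrence(psi, dA, dB):
    # C(|psi>) = sqrt(2(1 - Tr(rho_A^2))), rho_A the reduced state on side A
    M = psi.reshape(dA, dB)               # coefficient matrix of |psi>
    rhoA = M @ M.conj().T
    return np.sqrt(2 * (1 - np.trace(rhoA @ rhoA).real))

def phc_measure(psi, dA, dB):
    # E_PHC(|psi>) = ||rho - rho^PHC||_2 = sqrt(2 sum_{k != l} lambda_k^2 lambda_l^2),
    # with lambda_k the Schmidt coefficients (singular values of M)
    lam2 = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False) ** 2
    return np.sqrt(2 * (np.sum(lam2) ** 2 - np.sum(lam2 ** 2)))

rng = np.random.default_rng(1)
psi = rng.normal(size=6) + 1j * rng.normal(size=6)   # random state on C^2 (x) C^3
psi /= np.linalg.norm(psi)
print(np.isclose(concurrence(psi, 2, 3), phc_measure(psi, 2, 3)))  # True
```

Since $\sum_k\lambda_k^2=1$ for a normalized state, both expressions reduce to $\sqrt{2(1-\sum_k\lambda_k^4)}$, in agreement with the proof of Theorem 2.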
The relationship between them is discussed and an upper bound is proposed: $C(\rho)\leq\sqrt{\tau(\rho)}\leq\sqrt{2[1-{\rm Tr}(\rho_A^2)]}$, where the equalities hold whenever $\rho$ is a pure state. Based on the partial Hermitian conjugate criterion, the PHC measure is introduced. Moreover, this measure coincides with the concurrence and is thus a well-defined entanglement measure, which answers a question raised in \cite{ZW,GY}.\\ {\bf Acknowledgements.}\ This work is partially supported by the Natural Science Foundation of China (11171249, 11101250) and the Research Start-up Fund for Doctors of Shanxi Datong University.
\section{Introduction} \name{intro} A classical result of Dirichlet states that for any ${\mathbf{x}} \in {\mathbb{R}}^n$ there are infinitely many $q\in{\mathbb{N}}$ such that $\|q {\mathbf{x}} - {\bf p} \| < {q^{-1/n}}$ for some ${\bf p}\in{\mathbb{Z}}^n$. One says that ${\mathbf{x}} \in {\mathbb{R}}^n$ is {\sl badly approximable\/} if the right hand side of the above inequality cannot be improved by an arbitrary positive constant. In other words, if there is $c>0$ such that for any ${\bf p} \in {\mathbb{Z}}^n, \, q \in {\mathbb{N}}$ one has \begin{equation} \label{eq: defn ba} \|q {\mathbf{x}} - {\bf p} \| \geq \frac{c}{q^{1/n}}\,. \end{equation} Here $\|\cdot\|$ can be any norm on ${\mathbb{R}}^n$, which unless otherwise specified will be chosen to be the supremum norm. We denote the set of all badly approximable vectors in ${\mathbb{R}}^n$ by ${\bold{Bad}}_n$, or ${\bold{Bad}}$ if the dimension is clear from the context. It is well known that the Lebesgue measure of ${\bold{Bad}}$ is zero; but nevertheless this set is quite large. Namely it is {\sl thick\/}, that is, its intersection with every open set in ${\mathbb{R}}^n$ has full \hd\ (Jarnik \cite{Jarnik} for $n = 1$, Schmidt \cite{Schmidt games, Schmidt:book} for $n > 1$). In fact Schmidt established a stronger property of the set ${\bold{Bad}}$: that it is a so-called winning set for a certain game which he invented for that occasion, see \S\ref{sec: games} for more detail. In particular, the latter property implies that for any countable sequence of similitudes (compositions of translations and homotheties) ${f}_i:{\mathbb{R}}^n\to{\mathbb{R}}^n$, the intersection $\cap_i {f}_i({\bold{Bad}})$ is thick as well.
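As a numerical aside (an illustration, not part of the argument): for $n=1$, \equ{eq: defn ba} says that $q\cdot\|qx\|_{\mathbb{Z}}$ stays bounded away from zero, where $\|\cdot\|_{\mathbb{Z}}$ denotes the distance to the nearest integer. The golden ratio is the classical badly approximable number, while $\pi$ admits the famously good rational approximation $355/113$:

```python
from math import pi, sqrt

def dist_to_Z(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def worst_constant(x, Q):
    # min over 1 <= q <= Q of q * ||q x||, i.e. the best constant c
    # witnessed so far in the one-dimensional version of (eq: defn ba)
    return min(q * dist_to_Z(q * x) for q in range(1, Q + 1))

phi = (1 + sqrt(5)) / 2
print(worst_constant(phi, 2000) > 0.35)   # True: bounded away from 0 (liminf is 1/sqrt(5))
print(worst_constant(pi, 2000) < 0.01)    # True: q = 113 makes q*||q*pi|| tiny
```

The contrast illustrates the definition only; it does not, of course, decide whether a given real number is badly approximable.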
Our purpose in this paper is to introduce a modification of Schmidt's game, and apply it to similarly study a weighted generalization of the notion of \ba\ vectors. Take a vector ${\bf r} = (r_i \mid 1\le i\le n)$ such that \eq{defn r}{ r_i > 0\quad\text{and}\quad\sum_{i=1}^n r_i = 1\,, } thinking of each $r_i$ as a weight assigned to $x_i$. It is easy to show that the following multiparameter version of the aforementioned result of Dirichlet holds: for ${\bf r}$ as above and any ${\mathbf{x}} = (x_1,\dots,x_n)\in {\mathbb{R}}^n$ there are infinitely many $q\in{\mathbb{N}}$ such that \eq{rba}{ \max_{1\le i \le n}|q x_i- p_i|^{1/r_i} < {q^{-1}} \text{ for some }{\bf p} = (p_1,\dots,p_n)\in{\mathbb{Z}}^n\,.} This motivates the following definition: say that ${\mathbf{x}}$ is {\sl ${\bf r}$-badly approximable\/} if the right hand side of \equ{rba} cannot be improved by an arbitrary positive constant; in other words, if there is $c>0$ such that for any ${\bf p} \in {\mathbb{Z}}^n, \, q \in {\mathbb{N}}$ one has \eq{defn rba} {\max_{1\le i \le n}|q x_i- p_i|^{1/r_i} \geq \frac{c}{q}\,. } Following \cite{PV-bad} and \cite{KTV}, denote by ${\bold{Bad}}({\bf r})$ the set of ${\bf r}$-badly approximable vectors. It is not hard to make sense of the above definition when one or more of the components of ${\bf r}$ are equal to zero: one simply needs to ignore these components, following the convention $a^{\infty} = 0$ when $0 \le a < 1$. For example, ${\bold{Bad}}(1,0) = {\bold{Bad}}_1 \times {\mathbb{R}}$ and ${\bold{Bad}}(0,1) = {\mathbb{R}}\times {\bold{Bad}}_1$. Also it is clear that ${\bold{Bad}}_n = {\bold{Bad}}({\bf n})$ where \eq{def n}{{\bf n} = (1/n,\dots,1/n)\,.} One of the main results of \cite{PV-bad} states that the set ${\bold{Bad}}({\bf r})$ is thick for any ${\bf r}$ as above (this was conjectured earlier in \cite{K-india}).
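The existence statement \equ{rba} can be probed numerically; the sketch below (illustrative only, with an arbitrarily chosen point and weights) searches for $q$ satisfying the weighted inequality, using that it is equivalent to $\|q x_i\|_{\mathbb{Z}} < q^{-r_i}$ for each $i$:

```python
from math import sqrt

def dist_to_Z(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def weighted_good_q(x, r, Q):
    # q <= Q with max_i |q x_i - p_i|^(1/r_i) < 1/q,
    # i.e. ||q x_i|| < q^(-r_i) for every coordinate i
    return [q for q in range(1, Q + 1)
            if all(dist_to_Z(q * xi) < q ** (-ri) for xi, ri in zip(x, r))]

x = (sqrt(2) - 1, sqrt(3) - 1)   # an arbitrary point of R^2
r = (2 / 3, 1 / 3)               # weights with r_1 + r_2 = 1
good = weighted_good_q(x, r, 10**4)
print(len(good) > 0)             # True: Dirichlet-type q's exist
```

By the pigeonhole argument behind \equ{rba} such $q$ occur for every choice of $x$ and admissible $r$; the search only exhibits finitely many of them.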
A complete proof is given in \cite{PV-bad} for the case $n = 2$, but the method, based on some ideas of Davenport, straightforwardly extends to higher dimensions as noted by the authors of \cite{PV-bad}. A slightly different proof can be found in \cite{KTV}. In this paper we present a modification (in our opinion, a simplification) of the argument from the aforementioned papers which yields a stronger result. Namely, in \S\S \ref{sec: games}--\ref{sec: contr} we describe a variation of Schmidt's game, which we call the {\sl modified Schmidt game\/} (to be abbreviated by MSG) induced by a family of contracting automorphisms of ${\mathbb{R}}^n$, and study properties of winning sets of those modified games. We show that winning sets of MSGs are thick (Corollary \ref{cor: msg3}), and a countable intersection of sets winning for the same game is winning as well (Theorem \ref{thm: countable general}). In \S\ref{sec: win} we prove \begin{thm}\name{thm: main} Let ${\bf r}$ be as in \equ{defn r}, and let $\mathcal{{F}}^{({\bf r})} = \{{\Phi}^{({\bf r})}_t: t > 0\}$ be the one-parameter semigroup of linear contractions of ${\mathbb{R}}^n$ defined by \eq{defn ar}{{\Phi}^{({\bf r})}_t = {\rm diag}(e^{-(1+r_1)t},\dots,e^{-(1+r_n)t})\,.} Then the set $\,{\bold{Bad}}({\bf r})$ is a winning set for the modified Schmidt game induced by $\mathcal{{F}}^{({\bf r})}$; in particular, it is thick. \end{thm} Note that the original Schmidt game can be viewed as an MSG induced by the family of homotheties of ${\mathbb{R}}^n$; thus Schmidt's theorem on ${\bold{Bad}}$ being a winning set is a special case of Theorem \ref{thm: main}.
The countable intersection property of winning sets of MSGs makes it possible to intersect ${\bold{Bad}}({\bf r})$ with its countably many dilates and translates (see a remark after Theorem \ref{thm: precise}), as well as establish, in a simpler way, another result of \cite{PV-bad}, namely that the set \eq{tripleint}{{\bold{Bad}}(r_1,r_2) \cap {\bold{Bad}}(1,0) \cap {\bold{Bad}}(0,1)} is thick for any $0 < r_1,r_2 < 1$ with $r_1 + r_2 = 1$. This and other concluding remarks are made in \S\ref{sec: next}. \smallskip {\bf Acknowledgements:} The authors are grateful for the hospitality of the Tata Institute of Fundamental Research (Mumbai) where they had several conversations which eventually led to the results described in this paper. Thanks are also due to Elon Lindenstrauss for motivating discussions, to the referee for useful comments, and to the Max Planck Institute for Mathematics (Bonn) where the paper was completed. This work was supported by BSF grant 2000247, ISF grant 584/04, and NSF Grants DMS-0239463 and DMS-0801064. \section{Modified Schmidt Games}\name{sec: games} \subsection{Schmidt's game}\name{special} Let $(E,d)$ be a complete metric space, and let $\Omega {\, \stackrel{\mathrm{def}}{=}\, } E\times {\mathbb{R}}_+$ (the set of formal balls in $E$). Following \cite{Schmidt games}, define a partial ordering (Schmidt's containment) on $\Omega$ as follows: \eq{cont} {(x',r') \le_s (x,r)\quad \iff \quad d(x',x) + r' \leq r\,. } To each pair $(x,r)\in\Omega$ we associate a closed ball in $E$ via the `ball' function $B$: $ B(x,r) {\, \stackrel{\mathrm{def}}{=}\, } \{y\in E : d(x,y) \le r\}$. Note that $(x', r') \leq_s (x,r)$ implies $B(x', r') \subset B(x,r)$; while in Euclidean space these conditions are in fact equivalent, in a general metric space the converse need not hold.
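The distinction between $\le_s$ and plain ball containment can be made concrete; the following sketch (an illustration for $E={\mathbb{R}}$ with the usual metric) implements \equ{cont}:

```python
def leq_s(xp, rp, x, r):
    # Schmidt's containment (eq: cont): (x', r') <=_s (x, r) iff d(x', x) + r' <= r
    return abs(xp - x) + rp <= r

# In Euclidean space <=_s is equivalent to containment of the closed balls:
print(leq_s(0.5, 0.5, 0.0, 1.0))   # True:  B(0.5, 0.5) = [0, 1] sits inside B(0, 1)
print(leq_s(0.6, 0.5, 0.0, 1.0))   # False: B(0.6, 0.5) = [0.1, 1.1] sticks out
```

For the failure of the converse in a general metric space, take $E={\mathbb{Z}}$ with the discrete metric: $B(0,0.5)=B(0,0.9)=\{0\}$, so $B(0,0.9)\subset B(0,0.5)$, yet $d(0,0)+0.9=0.9>0.5$, so $(0,0.9)\not\le_s(0,0.5)$.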
Now pick $0 < \alpha,\beta < 1$ and consider the following game, commonly referred to as {\sl Schmidt's game\/}, played by two players, whom we will call\footnote{Schmidt originally named his players `white' and `black'; in the subsequent literature letters $A$ and $B$ were often used instead. We are grateful to Andrei Zelevinsky for suggesting the Alice/Bob nomenclature following a convention common in computer science.} Alice and Bob. The game starts with Bob choosing $x_1\in E$ and $r > 0$, hence specifying a pair $\omega_1 {\, \stackrel{\mathrm{def}}{=}\, } (x_1,r)$. Alice may now choose any point $x_1'\in E$ provided that $\omega_1'{\, \stackrel{\mathrm{def}}{=}\, } (x_1',\alpha r)\le_s\omega_1$. Next, Bob chooses a point $x_2\in E$ such that $\omega_2{\, \stackrel{\mathrm{def}}{=}\, }(x_2,\alpha \beta r)\le_s \omega_1'$, and so on. Continuing in the same manner, one obtains a nested sequence of balls in $E$: $$ B(\omega_1) \supset B(\omega_1') \supset B(\omega_2) \supset B(\omega_2') \supset \ldots\supset B(\omega_k) \supset B(\omega_k') \supset \ldots $$ A subset $S$ of $E$ is called {\sl $(\alpha,\beta)$-winning \/} if Alice can play in such a way that the unique point of intersection \eq{int}{ \bigcap_{k = 1}^\infty B(\omega_k) = \bigcap_{k = 1}^\infty B(\omega_k')} lies in $S$, no matter how Bob plays. $S$ is called {\sl $\alpha$-winning \/} if it is $(\alpha,\beta)$-winning for all $\beta > 0$, and {\sl winning \/} if it is $\alpha$-winning for some $\alpha > 0$. We will denote balls chosen by Bob (resp., Alice) by $B_k {\, \stackrel{\mathrm{def}}{=}\, } B(\omega_k) $ and $A_k {\, \stackrel{\mathrm{def}}{=}\, } B(\omega_k')$. \smallskip The following three theorems are due to Schmidt \cite{Schmidt games}. \begin{thm} \name{thm: countable} Let $S_i\subset E$, $i\in{\mathbb{N}}$, be a sequence of $\alpha$-winning sets for some $0 < \alpha < 1$; then $\cap_{i = 1}^\infty S_i$ is also $\alpha$-winning.
\end{thm} \begin{thm} \name{thm: full dim} Suppose the game is played on $E={\mathbb{R}}^n$ with the Euclidean metric; then any winning set is thick. \ignore{ \eq{conclusion full dim}{ \dim(S\cap U) \ge \frac{\log c_n\beta^{-n}}{|\log \alpha\beta|}\,, } where $c_n$ is a constant depending only on $n$; in particular any $\alpha$-winning subset of ${\mathbb{R}}^n$ has \hd\ $n$.} \end{thm} \begin{thm} \name{thm: ba} For any $n\in{\mathbb{N}}$, ${\bold{Bad}}_n$ is $(\alpha,\beta)$-winning whenever $2\alpha < 1 + \alpha\beta$; in particular, it is $\alpha$-winning for any $0 < \alpha \le 1/2$. \end{thm} It can also be shown that for various classes of continuous maps of metric spaces, the images of winning sets are also winning for suitably modified values of constants. See \cite[Theorem 1]{Schmidt games} and \cite[Proposition 5.3]{Dani survey} for details. \subsection{A modification}\name{modified} We now introduce a variant of this game, which is in fact a special case of the general framework of $(\frak F, \frak S)$-games described by Schmidt in \cite{Schmidt games}. As before, let $E$ be a complete metric space, and let $\mathcal{C}(E)$ stand for the set of nonempty compact subsets of $E$. Fix $t_*\in{\mathbb{R}}\cup\{-\infty\}$ and define $\Omega = E \times (t_*,\infty)$\footnote{Note that everywhere one could replace ${\mathbb{R}}$ with some fully ordered semigroup. This more general setup presents no additional difficulties but we omit it to simplify notation.}. Suppose in addition that we are given \begin{itemize} \item[(a)] a partial ordering $\le$ on $\Omega$, and \item[(b)] a monotonic function $\psi: (\Omega,\le)\to \big(\mathcal{C}(E),\subset\big)$. \end{itemize} Here monotonicity means that $\omega' \le\omega$ implies $\psi(\omega') \subset \psi(\omega)$.
Now fix $a_* \geq 0$ and suppose that the following property holds: \begin{itemize} \item[(MSG0)] For any $(x,t)\in\Omega$ and any $s >a_*$ there exists $x' \in E$ such that $(x', t+s)\le (x,t)$. \end{itemize} Pick two numbers ${\mathbf{a}}$ and ${\mathbf{b}}$, both bigger than $a_*$. Now Bob begins the $\psi$-$({\mathbf{a}}, {\mathbf{b}})$-game by choosing $x_1\in E$ and $t_1 > t_*$, hence specifying a pair $\omega_1 {\, \stackrel{\mathrm{def}}{=}\, } (x_1,t_1)$. Alice may now choose any point $x_1'\in E$ provided that $\omega_1'{\, \stackrel{\mathrm{def}}{=}\, } (x_1',t_1 + {\mathbf{a}})\le\omega_1$. Next, Bob chooses a point $x_2\in E$ such that $\omega_2{\, \stackrel{\mathrm{def}}{=}\, }(x_2,t_1 + {\mathbf{a}} + {\mathbf{b}})\le \omega_1'$, and so on. Continuing in the same manner, one obtains a nested sequence of compact subsets of $E$: $$ B_1 = \psi(\omega_1) \supset A_1 = \psi(\omega_1') \supset \ldots\supset B_k = \psi(\omega_k) \supset A_k = \psi(\omega_k') \supset \ldots $$ where $\omega_k = (x_k,t_k)$ and $\omega_k' = (x_k',t_k')$ with \eq{si ti}{t_k = t_1+(k-1)({\mathbf{a}}+{\mathbf{b}})\text{ and } t_k' = t_1+(k-1)({\mathbf{a}}+{\mathbf{b}}) + {\mathbf{a}} \,.} Note that Bob and Alice can always make their choices by virtue of (MSG0), and that the intersection \eq{intmodified}{ \bigcap_{k = 1}^\infty \psi(\omega_k) = \bigcap_{k = 1}^\infty \psi(\omega_k')} is nonempty and compact. Let us say that $S\subset E$ is {\sl $({\mathbf{a}}, {\mathbf{b}})$-winning\/} for the {\sl modified Schmidt game corresponding to\/} $\psi$, to be abbreviated as $\psi$-MSG, if Alice can proceed in such a way that the set \equ{intmodified} is contained in $S$ no matter how Bob plays. Similarly, say that $S$ is an {\sl ${\mathbf{a}}$-winning\/} set of the game if $S$ is $({\mathbf{a}}, {\mathbf{b}})$-winning for any choice of ${\mathbf{b}} > a_*$, and that $S$ is {\sl winning} if it is ${\mathbf{a}}$-winning for some ${\mathbf{a}} > a_*$. Note that we are suppressing $a_*$ and $t_*$ from our notation, hopefully this will cause no confusion.
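The bookkeeping in \equ{si ti} is easy to tabulate; the sketch below (with illustrative parameter values) lists the times $t_k$ and, under the specialization $\psi(x,t)=B(x,e^{-t})$, ${\mathbf{a}}=-\log\alpha$, ${\mathbf{b}}=-\log\beta$ recovering Schmidt's original game, the corresponding ball radii:

```python
import math

def schedule(t1, a, b, k):
    # Eq. (si ti): t_k = t1 + (k-1)(a+b) and t_k' = t_k + a
    tk = t1 + (k - 1) * (a + b)
    return tk, tk + a

# Illustrative values: classical Schmidt game with alpha = 1/4, beta = 1/2,
# i.e. a = -log(alpha), b = -log(beta), and radius e^{-t} at time t
alpha, beta = 0.25, 0.5
a, b = -math.log(alpha), -math.log(beta)
radii = [math.exp(-schedule(0.0, a, b, k)[0]) for k in (1, 2, 3)]
print([round(rad, 6) for rad in radii])  # Bob's radii shrink by alpha*beta each round
```

Since $t_{k+1}-t_k={\mathbf{a}}+{\mathbf{b}}$, each of Bob's radii is $\alpha\beta$ times the previous one, which is exactly the radius schedule of the original $(\alpha,\beta)$-game.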
Clearly the game described above coincides with the original $(\alpha, \beta)$-game if we let \eq{rn}{ \begin{aligned} \psi(x,t) = B(x,e^{-t}), \quad (x',t')\le(x,t) \Leftrightarrow (x',e^{-t'}) \le_s (x,e^{-t}),\\ {\mathbf{a}} =- \log \alpha, \ {\mathbf{b}} =- \log \beta, \ a_*=0,\ t_* = - \infty\,.\qquad\quad \end{aligned}} Here is some more notation which will be convenient later. For $t > t_*$ we let $$\Omega_t{\, \stackrel{\mathrm{def}}{=}\, } \{(x,t) : x\in E\}\,,$$ so that $\Omega$ is a disjoint union of the `slices' $\Omega_t$, $t > t_*$. Then for $s > 0$ and $\omega\in\Omega_t$ define $$ I_s(\omega) {\, \stackrel{\mathrm{def}}{=}\, } \{\omega'\in \Omega_{t+s} : \omega' \le \omega\}\,. $$ In other words, $ I_{\mathbf{a}}(\omega)$ and $ I_{\mathbf{b}}(\omega)$ are the sets of allowed moves of Alice and Bob respectively starting from position $\omega$. Using this notation, condition (MSG0) can be reworded as \begin{itemize} \item[(MSG0)] $I_s(\omega) \ne\varnothing$ for any $\omega\in\Omega$, $s >a_*$. \end{itemize} \subsection{General properties}\name{general} Remarkably, even in the quite general setup described in \S\ref{modified}, an analogue of Theorem \ref{thm: countable} holds and can be proved by a verbatim repetition of the argument from \cite{Schmidt games}: \begin{thm} \name{thm: countable general} Let a metric space $E$, a partially ordered $\Omega= E\times (t_*,\infty)$ and $\psi$ be as above, let ${\mathbf{a}} > a_*$, and let $S_i\subset E$, $i\in{\mathbb{N}}$, be a sequence of ${\mathbf{a}}$-winning sets of the $\psi$-MSG. Then $\cap_{i = 1}^\infty S_i$ is also ${\mathbf{a}}$-winning. \end{thm} \begin{proof} Take an arbitrary ${\mathbf{b}} > a_*$, and make Alice play according to the following rule. At the first, third, fifth \dots\ move Alice will make a choice according to an $({\mathbf{a}}, 2{\mathbf{a}} + {\mathbf{b}},S_1)$-strategy (that is, will act as if playing an $({\mathbf{a}}, 2{\mathbf{a}} + {\mathbf{b}})$-game trying to reach $S_1$).
At the second, sixth, tenth \dots\ move she will use an $({\mathbf{a}}, 4{\mathbf{a}} + 3{\mathbf{b}},S_2)$-strategy. In general, at the $k$th move, where $k \equiv 2^{i -1} (\text{mod}\, 2^i)$, she will play the $\big({\mathbf{a}}, {\mathbf{a}} + (2^i - 1)({\mathbf{a}} + {\mathbf{b}})\big)$-game trying to reach a point in $S_i$. It is easy to see that, playing this way, Alice can enforce that the intersection of the chosen sets belongs to $S_i$ for each $i$.\end{proof} Here are two more general observations about MSGs and their winning sets. \begin{lem} \name{lem: dummy} Let $E$, $\Omega$ and $\psi$ be as above, and suppose that $S\subset E$, ${\mathbf{a}}, {\mathbf{b}} > a_*$ and ${t_0} > t_*$ are such that whenever Bob initially chooses $\omega_1 \in \Omega_{t}$ with $t \ge {t_0}$, Alice can win the game. Then $S$ is an $({\mathbf{a}},{\mathbf{b}})$-winning set of the $\psi$-MSG. \end{lem} \begin{proof} Regardless of the initial move of Bob, Alice can make arbitrary (dummy) moves waiting until $t_k$ becomes at least ${t_0}$, and then apply the strategy she is assumed to have. \end{proof} This lemma shows that the collection of $({\mathbf{a}},{\mathbf{b}})$-winning sets of a given $\psi$-MSG depends only on the `tail' of the family $\{\Omega_t\}$ and not on the value of $t_*$. \begin{lem} \name{lem: product} Let $E_1,E_2$ be complete metric spaces, and consider two games corresponding to $\psi_i: \Omega_i\to\mathcal{C}(E_i)$, where $\Omega_i = E_i \times (t_*,\infty)$. Suppose that $S_i\subset E_i$ is an $({\mathbf{a}}, {\mathbf{b}})$-winning set of the $\psi_i$-MSG, $i = 1,2$. Then $S_1\times S_2$ is an $({\mathbf{a}}, {\mathbf{b}})$-winning set of the $\psi$-MSG played on $E = E_1\times E_2$ with the product metric, where $\psi$ is defined by $$\psi(x_1,x_2,t) = \psi_1(x_1,t)\times \psi_2(x_2,t)\,.$$ \end{lem} \begin{proof} Play a game in the product space by playing two separate games in each of the factors.
\end{proof} It is also possible to write down conditions on $f:E\to E$, quite restrictive in general, sending winning sets of the $\psi$-MSG to winning sets. We will exploit this theme in \S \ref{images contr}. \subsection{Dimension estimates}\name{dimest} Our next goal is to generalize Schmidt's lower estimate for the \hd\ of winning sets in ${\mathbb{R}}^n$. Note that in general it is not true, even for the original Schmidt game \equ{rn} played on an arbitrary complete metric space, that winning sets have positive \hd: see Proposition \ref{prop: lower dim example} for a counterexample. We are going to make some assumptions that will be sufficient to ensure that a winning set for the $\psi$-MSG is big enough. Namely we will assume: \begin{itemize} \item[(MSG1)] For any open $\varnothing\ne U \subset E$ there is $\omega \in \Omega$ such that $\psi(\omega) \subset U$. \item[(MSG2)] There exist $C,\sigma > 0$ such that {\rm diam}$\big(\psi(\omega)\big) \le C e^{-\sigma t}$ for all $t \ge t_*$, $\omega \in \Omega_t$. \end{itemize} We remark that it follows from (MSG1) that any $({\mathbf{a}},{\mathbf{b}})$-winning set of the game is dense, and from (MSG2) that the intersection \equ{intmodified} consists of a single point. To formulate two additional assumptions, we suppose that we are given a locally finite Borel measure $\mu$ on $E$ satisfying the following conditions: \begin{itemize} \item[($\mu$1)] $\mu\big(\psi(\omega)\big) > 0$ for any $\omega \in \Omega$.
\item[($\mu$2)] For any ${\mathbf{a}} > a_*$ there exist $c,\rho>0$ with the following property: $\forall\,\omega \in \Omega$ with ${\rm diam}\big(\psi(\omega)\big) \le \rho$ and $\forall\,{\mathbf{b}} > a_*$ $\exists\,\theta_1,\dots,\theta_N\in I_{\mathbf{b}}(\omega)$ such that $\psi(\theta_i), \ i = 1,\dots,N,$ are essentially disjoint, and that for every $\theta_i'\in I_{\mathbf{a}}(\theta_i)$, $ i = 1,\dots,N$, one has $$\mu\big(\bigcup_i \psi(\theta_i')\big) \geq c \mu\big(\psi(\omega)\big)\,.$$ \end{itemize} The utility of the latter admittedly cumbersome condition will become clear in the sequel, see Proposition \ref{cor: Federer}. Here and hereafter we say that $A,B\subset E$ are {\sl essentially disjoint\/} if $\mu(A\cap B) = 0$. In particular, it follows from ($\mu$1) and (MSG1) that such a measure $\mu$ must have full support (this will be our standing assumption from now on). Also, note that (MSG0) is a consequence of ($\mu$2). Now recall that the {\sl lower pointwise dimension\/} of $\mu$ at $x\in E$ is defined by\footnote{This and other properties, such as the Federer property introduced in \S\ref{sec: fulldim}, are usually stated for open balls, but versions with closed balls are clearly equivalent, modulo a slight change of constants if necessary.} $$ \underline{d}_\mu(x) {\, \stackrel{\mathrm{def}}{=}\, } \liminf_{r\to 0}\frac{\log \mu\big(B(x,r)\big)}{\log r}\,, $$ and for $U\subset E$ let us put $$ \underline{d}_\mu(U) {\, \stackrel{\mathrm{def}}{=}\, } \inf_{x\in U}\underline{d}_\mu(x) \,. $$ It is known, see e.g.\ \cite[Proposition 4.9(a)]{Falconer} or \cite[Theorem 7.1(a)]{Pesin}, that $ \underline{d}_\mu(U)$ is a lower bound for the \hd\ of $U$ for any nonempty open $U\subset E$, and very often it is possible to choose $\mu$ such that $ \underline{d}_\mu(x)$ is equal to $\dim(E)$ for every $x$.
For instance this is the case when $\mu$ {\sl satisfies a power law\/}, that is, if there exist $\gamma, c_1, c_2, r_0 > 0$ such that \eq{pl}{c_1r^\gamma \le \mu\big(B(x,r)\big) \le c_2 r^\gamma\text{ whenever }r \le r_0\text{ and } x\in E } (then necessarily $\dim(U) = \gamma$ for any nonempty open $U\subset E$). \begin{thm} \name{thm: WD} Suppose that $E$, $\Omega$, $\psi$ and a measure $\mu$ on $E$ are such that {\rm (MSG0--2)} and {\rm ($\mu$1--2)} hold. Take ${\mathbf{a}},{\mathbf{b}} > a_*$ and let $S$ be an $({\mathbf{a}},{\mathbf{b}})$-winning set of the $\psi$-MSG. Then for any open $\varnothing \ne U\subset E$, one has \eq{wd}{\dim (S \cap U) \geq \underline{d}_{\mu}(U) + \frac1\sigma\left({ \frac{\log c}{{\mathbf{a}}+{\mathbf{b}}} }\right) \,,} where $\sigma$ is as in {\rm (MSG2)} and $c$ as in {\rm ($\mu$2)}. In particular, $\dim (S \cap U) $ is not less than $ \underline{d}_{\mu}(U) $ whenever $S$ is winning. \end{thm} Before proving this theorem let us observe that it generalizes Theorem \ref{thm: full dim}, with Lebesgue measure playing the role of $\mu$. Indeed, conditions (MSG0--2) are trivially satisfied in the case \equ{rn}. It is also clear that ($\mu$1) holds and that $ \underline{d}_{\mu}(x) = n$ for all $x\in{\mathbb{R}}^n$.
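Indeed, Lebesgue measure $\lambda$ on ${\mathbb{R}}^n$ satisfies a power law with $\gamma = n$: if $v_n$ denotes the volume of the unit ball, then $$\lambda\big(B(x,r)\big) = v_n r^n\quad\text{for all }x\in{\mathbb{R}}^n\text{ and }r > 0\,,$$ so \equ{pl} holds with $c_1 = c_2 = v_n$ and arbitrary $r_0$, whence $\underline{d}_\lambda(x) = n$ for every $x$.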
As for ($\mu$2), note that there exists a constant $\bar c$, depending only on $n$, such that for any $0<\beta < 1$, the unit ball in ${\mathbb{R}}^n$ contains a disjoint collection of closed balls $D_i'$ of radius $\beta$ whose total relative measure is at least $\bar c$; and no matter how balls $D_i\subset D_i'$ of radius $\alpha\beta$ are chosen, their total relative measure will not be less than $\bar c \alpha^n$. Rescaling, one obtains ($\mu$2). See Lemma \ref{prop: msg3} and Proposition \ref{cor: Federer} for further generalizations. \smallskip For the proof of Theorem \ref{thm: WD} we will use a construction suggested in \cite{McMullen, Urbanski} and formalized in \cite{KM}. Let $E$ be a complete metric space equipped with a locally finite Borel measure $\mu$. Say that a countable family ${\mathcal A}$ of compact subsets of $E$ of positive measure is {\sl tree-like\/} (or {\sl tree-like with respect to $\mu$}) if ${\mathcal A}$ is the union of finite subcollections ${\mathcal A}_k$, $k\in{\mathbb{Z}}_+$, such that ${\mathcal A}_0 = \{A_0\}$ and the following four conditions are satisfied: \begin{itemize} \item[(TL0)] $\mu(A) > 0$ for any $A\in {\mathcal A}\,;$ \smallskip \item[(TL1)] $\forall\,k\in{\mathbb{N}}\quad \forall\, A, B \in {\mathcal A}_k\quad\text{either }A=B\quad\text{or}\quad {\mu}(A\cap B) = 0\,;$ \smallskip \item[(TL2)] $\forall\,k\in{\mathbb{N}}\quad \forall\,B \in {\mathcal A}_k\quad\,\exists\,A\in {\mathcal A}_{k-1} \quad\text{such that}\quad B\subset A\,;$ \smallskip \item[(TL3)] $\forall\,k\in{\mathbb{N}}\quad \forall\,A \in {\mathcal A}_{k-1} \quad\, \exists\,B\in {\mathcal A}_{k} \quad\text{such that}\quad B\subset A $.
\end{itemize} Then one has $A_0\supset \cup {\mathcal A}_1\supset \cup {\mathcal A}_2\dots$, a decreasing intersection of nonempty compact sets (here and elsewhere we denote $\cup \mathcal{A}_k = \bigcup_{A\in \mathcal{A}_k}A$), which defines the (nonempty) {\sl limit set\/} of ${\mathcal A}$, $$ {\bf{A}_\infty} = \bigcap_{k\in {\mathbb{N}}}\cup {\mathcal A}_k\,. $$ Let us also define the {\sl $k$th stage diameter\/} $d_k({\mathcal A})$ of ${\mathcal A}$: $$ d_k({\mathcal A}){\, \stackrel{\mathrm{def}}{=}\, }\max_{A\in{\mathcal A}_k}\text{diam}(A)\,, $$ and say that ${\mathcal A}$ is {\sl strongly tree-like\/} if it is tree-like and in addition \begin{itemize} \item[(STL)] \qquad $\lim_{k\to\infty}d_k({\mathcal A}) = 0\,.$ \end{itemize} Finally, for $k\in {\mathbb{Z}}_+$ let us define the $k$th stage `density of children' of ${\mathcal A}$ by $$ \Delta_k({\mathcal A}) {\, \stackrel{\mathrm{def}}{=}\, } \min_{B\in{\mathcal A}_k}\frac{{\mu}\big(\cup {\mathcal A}_{k+1}\cap B\big)}{{\mu}(B)}\,, $$ the latter being always positive due to (TL3). The following lemma, proved in \cite{bad} and generalizing results of C.~McMullen \cite[Proposition 2.2]{McMullen} and M.~Urbanski \cite[Lemma 2.1]{Urbanski}, provides a needed lower estimate for the \hd\ of ${\bf{A}_\infty}$: \begin{lem}\name{lem: Urbanski} Let ${\mathcal A}$ be a strongly tree-like (relative to $\mu$) collection of subsets of $A_0$. Then for any open $U$ intersecting ${\bf{A}_\infty}$ one has $$ \dim ({\bf{A}_\infty} \cap U) \geq \underline{d}_\mu(U) - \limsup_{k\to\infty}\frac{ \sum_{i=0}^{k}\log\Delta_i({\mathcal A})}{\log\, d_{k}({\mathcal A})}\,. $$ \end{lem} Note that even though \cite[Lemma 2.5]{bad} is stated for $E = {\mathbb{R}}^n$, its proof, including the Mass Distribution Principle on which the lower estimate for the \hd\ is based, is valid in the generality of an arbitrary complete metric space.
\begin{proof}[Proof of Theorem \ref{thm: WD}] Our goal is to find a \stl\ collection ${\mathcal A}$ of sets whose limit set is a subset of $S \cap U$. It will be constructed by considering possible moves for Bob at each stage of the game, and the corresponding counter-moves specified by Alice's winning strategy. Fix ${\mathbf{a}},{\mathbf{b}} > a_*$ for which $S$ is $({\mathbf{a}},{\mathbf{b}})$-winning. By assumption (MSG1), Bob may begin the game by choosing $t_1 > t_*$ and $\omega_1 \in \Omega_{t_1}$ such that $\psi(\omega_1)\subset U$ and ${\rm diam}\big(\psi(\omega_1)\big) < \rho$, where $\rho$ is as in ($\mu$2). Since $S$ is winning, Alice can choose $\omega_1' \in I_{\mathbf{a}}(\omega_1)$ such that $A_0{\, \stackrel{\mathrm{def}}{=}\, } \psi(\omega_1')$ has nonempty intersection with $S$; it will be the ground set of our tree-like family. Now let $\theta_1, \ldots, \theta_N\in I_{\mathbf{b}}(\omega_1')$ be as in ($\mu$2) for $\omega = \omega_1'$. Each of these could be chosen by Bob at the next step of the game. Since $S$ is $({\mathbf{a}},{\mathbf{b}})$-winning, for each of the above choices $\theta_i$ Alice can pick $\theta_i' \in I_{\mathbf{a}}(\theta_i)$ such that every sequence of possible further moves of Bob can be counteracted by Alice resulting in her victory in the game. The collection of images $ \psi(\theta_i')$ of these choices of Alice, essentially disjoint in view of ($\mu$2), will comprise the first level ${\mathcal A}_1$ of the tree. Repeating the same for each of the choices we obtain ${\mathcal A}_2$, ${\mathcal A}_3$ etc. Property (TL0) follows from ($\mu$1), and (TL1--3) are immediate from the construction. Also, in view of (MSG2) and \equ{si ti}, the $k$th stage diameter $d_k$ is not bigger than $C e^{-\sigma (t_1 + k({\mathbf{a}} + {\mathbf{b}}) + {\mathbf{a}})}$, hence (STL).
Since Alice makes choices using her winning strategy, the limit set ${\bf{A}_\infty}$ of the collection must lie in $S$. Assumption ($\mu$2) implies that $\Delta_k({\mathcal A})$ is bounded below by a positive constant $c$ independent of $k$ and ${\mathbf{b}}$. Applying Lemma \ref{lem: Urbanski} we find \begin{equation*} \begin{aligned} \dim ({\bf{A}_\infty} \cap U)& \geq \underline{d}_\mu(U) -\limsup_{k\to\infty}\frac{ (k+1)\big(\log c\big)} {\log\, C - \sigma \big(t_1 + k({\mathbf{a}} + {\mathbf{b}}) + {\mathbf{a}}\big) }\\ & = \underline{d}_\mu(U) + \frac1\sigma\left({ \frac{\log c}{{\mathbf{a}}+{\mathbf{b}}} }\right) \to_{{\mathbf{b}} \to \infty} \underline{d}_\mu(U)\,.
\end{aligned} \end{equation*} \end{proof} \section{Games induced by contracting automorphisms}\name{sec: contr} \subsection{Definitions}\name{def contr} In this section we take $E = H$ to be a connected Lie group with a right-invariant Riemannian metric $d$, and assume that it admits a one-parameter group of automorphisms $\{{\Phi}_t: t \in{\mathbb{R}}\}$ such that $\Phi_t$ is contracting for $t > 0$ (recall that ${\Phi}:H\to H$ is {\sl contracting\/} if for every $g\in H$, ${\Phi}^k(g)\to e$ as $k \to \infty$).
It is not hard to see that $H$ must be simply connected and nilpotent, and the differential of each ${\Phi}_t$, $t > 0$, must be a linear isomorphism of the Lie algebra $\goth h$ of $H$ all of whose eigenvalues have modulus strictly less than $1$. In other words, ${\Phi}_t = \exp(tX)$ where $X \in {\rm End} ( \goth h)$ and the real parts of all eigenvalues of $X$ are negative. Note that $X$ is not assumed to be diagonalizable, although this will be the case in our main example. Say that a subset $D_0$ of $H$ is {\sl admissible\/} if it is compact and has non-empty interior. For such $D_0$ and any $t \in{\mathbb{R}}$ and $x\in H$, define \eq{def psi}{\psi(x,t) = {\Phi}_{t}(D_0)x\,, } and then introduce a partial ordering on $\Omega {\, \stackrel{\mathrm{def}}{=}\, } H \times {\mathbb{R}}$ by \eq{def order}{ (x',t')\le (x,t)\quad\iff\quad\psi(x',t')\subset\psi(x,t)\,. } Monotonicity of $\psi$ is immediate from the definition, and we claim that, with $t_* = -\infty$ and some $a_*$, it satisfies conditions (MSG0--2). Indeed, let $\sigma>0$ be any number such that the real parts of all the eigenvalues of $X$ are smaller than $-\sigma$. Then, since $D_0$ is bounded, it follows that for some $c_0 > 0$ one has \eq{dist squeeze}{d\big({\Phi}_t(g),{\Phi}_t(h)\big) \le c_0 e^{-\sigma t}\quad\text{for all }g,h\in D_0\,,} thus (MSG2) is satisfied (recall that the metric is chosen to be right-invariant, so all the elements of $\psi(\Omega_t)$ are isometric to ${\Phi}_{t}(D_0)$). For the same reasons, for any open $U\subset H$ there exists $s = s(U) > 0$ such that $U$ contains a translate of ${\Phi}_{t}(D_0)$ for any $t \ge s$, which implies (MSG1). Since $D_0$ is assumed to have nonempty interior, (MSG0) follows as well, with $a_* = s(\operatorname{Int} D_0)$.
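For instance, in the main example of this paper (see \S\ref{sec: win}), $H = {\mathbb{R}}^n$ and ${\Phi}_t = {\Phi}^{({\bf r})}_t$ as in \equ{defn ar} maps the unit cube onto a box with sidelengths $e^{-(1+r_1)t},\dots,e^{-(1+r_n)t}$; that is, $$X = -\,{\rm diag}(1+r_1,\dots,1+r_n)\,,$$ so that one can take any $\sigma < 1 + \min_i r_i$ in (MSG2), while the constant $\delta = -\operatorname{Tr}(X)$ appearing in \equ{measures} below equals $\sum_i(1+r_i) = n+1$.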
We denote $\mathcal{F} {\, \stackrel{\mathrm{def}}{=}\, } \{{\Phi}_t : t > 0\}$ and refer to the game determined by \equ{def psi} and \equ{def order} as the modified Schmidt game {\sl induced\/} by $\mathcal{F}$. Note that in this situation the map $\psi$ is injective, i.e.\ the pair $(x,t)$ is uniquely determined by $D_0$ and the translate ${\Phi}_t(D_0)x$. Consequently, without loss of generality we can describe the game in the language of choosing translates of ${\Phi}_{\mathbf{a}}(D)$ or ${\Phi}_{\mathbf{b}}(D)$ inside $D$, where $D$ is a domain chosen at some stage of the game. Clearly when $H = {\mathbb{R}}^n$, $D_0$ is the closed unit ball and ${\Phi}_t = e^{-t}\text{Id}$, we recover Schmidt's original game. Note also that we have suppressed $D_0$ from the notation. This is justified in light of the following proposition: \begin{prop} \name{prop: initial domain} Let $D_0, D_0'$ be admissible, and define $\psi$ and $\psi'$ as in \equ{def psi} using $D_0$ and $D_0'$ respectively. Let $s>0$ be such that for some $x,x'\in H$, \eq{squeeze}{ {\Phi}_{s}(D_0)x\subset D_0' \text{ and } {\Phi}_{s}(D_0')x'\subset D_0 } (such an $s$ exists in light of \equ{dist squeeze} and the admissibility of $D_0, D_0'$). Suppose that ${\mathbf{b}} > 2s$ and $S\subset H$ is $({\mathbf{a}}, {\mathbf{b}})$-winning for the $\psi$-MSG; then it is $({\mathbf{a}} + 2s,{\mathbf{b}} - 2s)$-winning for the $\psi'$-MSG. In particular, if $S$ is ${\mathbf{a}}$-winning for the $\psi$-MSG, then it is $({\mathbf{a}}+ 2s)$-winning for the $\psi'$-MSG. \end{prop} \begin{proof} We will show how, using an existing $({\mathbf{a}}, {\mathbf{b}})$-strategy of the $\psi$-MSG, one can define an $({\mathbf{a}} + 2s,{\mathbf{b}} - 2s)$-strategy for the $\psi'$-MSG.
Given a translate $B'_{k}$ of ${\Phi}_{t}(D_0')$ chosen by Bob at the $k$th step, pick a translate $B_{k}$ of ${\Phi}_{t+s}(D_0)$ contained in it, and then, according to the given $\psi$-winning strategy, a translate $A_{k}$ of ${\Phi}_{t+s + {\mathbf{a}}}(D_0)$. In the latter one can find a translate of ${\Phi}_{t+2s + {\mathbf{a}}}(D_0')$; this will be the next choice $A'_{k}$ of Alice. Indeed, any move that could be made by Bob in response, that is, a translate $B'_{k+1}$ of ${\Phi}_{t+2s + {\mathbf{a}}+ {\mathbf{b}} - 2s}(D_0') = {\Phi}_{t+ {\mathbf{a}}+ {\mathbf{b}} }(D_0')$ inside $A'_{k}$, will contain a translate of ${\Phi}_{t+ {\mathbf{a}}+ {\mathbf{b}} + s}(D_0)$, and the latter can be viewed as a move responding to $A_{k}$ according to the $\psi$-strategy. Thus the process can be continued, eventually yielding a point from $S$ in the intersection of all the chosen sets. \end{proof} \subsection{A dimension estimate}\name{dimest contr} Choose a Haar measure $\mu$ on $H$ (note that $\mu$ is both left- and right-invariant since $H$ is unimodular). Our next claim is that conditions ($\mu$1--2) are also satisfied. Indeed, since $D_0$ is admissible, $\mu(D_0)$ is positive, and one has \eq{measures}{\mu\big({\Phi}_t(D_0)x\big) = e^{-\delta t}\mu(D_0)\quad\text{for any }x\in H,\,t \in{\mathbb{R}}\,,} hence ($\mu$1), where $\delta = - \operatorname{Tr}(X)$. Also, in view of \equ{measures} and the definition of $\psi$, to verify ($\mu$2) it suffices to show \begin{lem}\name{prop: msg3} Let $D_0$ be admissible and let $a_*$ be as in {\rm (MSG0)}.
Then there exists $\bar c > 0$ such that for any $b > a_*$, $D_0$ contains essentially disjoint right translates $D_1, \ldots, D_N$ of ${\Phi}_b(D_0)$ such that \eq{density}{\mu\big(\bigcup_i D_i\big) \geq \bar c \mu(D_0)\,.} \end{lem} Indeed, if this holds, then, in view of \equ{measures} the conclusion of the lemma holds with $D_0$ replaced by ${\Phi}_t(D_0)x$ for every $x$ and $t$, and then, by (MSG0) and \equ{measures}, ($\mu$2) holds with $c = \bar c e^{-\delta {\mathbf{a}}}$. \smallskip For the proof of Lemma \ref{prop: msg3} we use the following result from \cite{KM}: \begin{prop}\name{lem: tess} Let $H$ be a connected simply connected nilpotent Lie group. Then for any $r>0$ there exists a neighborhood $V$ of identity in $H$ with piecewise-smooth boundary and with ${\rm diam}(V)<r$, and a countable subset $\Delta\subset H$ such that $H = \bigcup_{\gamma\in\Delta}\overline{V}\gamma$ and \eq{disj}{V\gamma_1 \cap V\gamma_2 = \varnothing\text{ for different }\gamma_1, \gamma_2 \in \Delta\,.} \end{prop} For example, if $H = {\mathbb{R}}^n$ one can take $V$ to be the unit cube, $$V = \big\{(x_1,\dots,x_n): |x_j|<1/2\big\}\,,$$ and $\Delta = {\mathbb{Z}}^n$, or rescale both $V$ and $\Delta$ to obtain domains of arbitrarily small diameter. See \cite[Proposition 3.3]{KM} for a proof of the above proposition. \begin{proof} [Proof of Lemma \ref{prop: msg3}] First note that, since $b$ is assumed to be greater than $a_*$, $D_0$ contains at least one translate of ${\Phi}_b(D_0)$, thus the left hand side of \equ{density} is not less than $e^{-\delta b}\mu(D_0)$. Now, in view of \equ{measures}, while proving the lemma one can replace $D_0$ by ${\Phi}_t(D_0)$ for any $t\ge 0$. Thus without loss of generality one can assume that $D_0$ is contained in $V$ as in Proposition \ref{lem: tess} with $r \le 1$, and that \equ{dist squeeze} holds $\forall\,g,h\in V$. Now choose a nonempty open ball $B\subset D_0$.
We are going to estimate from below the measure of the union of sets of the form ${\Phi}_b(D_0\gamma)$, where $\gamma \in \Delta$, contained in $B$; they are disjoint in view of \equ{disj}. Since \begin{equation*} \bigcup_{\gamma \in \Delta,\,{\Phi}_b(\overline V\gamma)\subset B}{\Phi}_b(\overline V\gamma) = \bigcup_{\gamma \in \Delta,\,{\Phi}_b(\overline V\gamma)\cap B\ne\varnothing}{\Phi}_b(\overline V\gamma) \ \ \ssm \bigcup_{\gamma \in \Delta,\,{\Phi}_b(\overline V\gamma)\cap \partial B\ne\varnothing}{\Phi}_b(\overline V\gamma) \,,\end{equation*} we can conclude that the measure of the set in the left hand side is not less than $$ \mu(B) - \mu\left(\big\{ {\rm diam}\big({\Phi}_b( V)\big)\text{-neighborhood of }\partial B\big\}\right).$$ Clearly for any $0 < \varepsilon < 1$ the measure of the $\varepsilon$-neighborhood of $\partial B$ is bounded from above by $c'\varepsilon$ where $c'$ depends only on $B$. In view of \equ{dist squeeze} and \equ{measures}, it follows that \begin{equation*} \begin{split} \mu\left( \bigcup_{\gamma \in \Delta,\,{\Phi}_b(D_0\gamma)\subset B}{\Phi}_b(D_0\gamma)\right) &\ge \frac{ \mu(D_0)}{ \mu(V)} \left(\mu(B) - c_0 c' e^{-\sigma b} {\rm diam}(V)\right)\\ & = \mu(D_0)\left(\frac{ \mu(B)}{ \mu(V)} - \frac{ c_0 c' {\rm diam}(V)}{ \mu(V)} e^{-\sigma b}\right), \end{split} \end{equation*} which is not less than $\frac{ \mu(B)}{ 2\mu(V)} \mu(D_0)$ if $e^{-\sigma b} \le{ \mu(B)}/{ 2c_0c' {\rm diam}(V)}$. Combining this with the remark made in the beginning of the proof, we conclude that \equ{density} holds with $$\bar c = \min\left( \frac{ \mu(B)}{ 2\mu(V)}, \left(\frac{ \mu(B)}{ 2c_0c' {\rm diam}(V)} \right)^{\delta/\sigma}\right). $$ \end{proof} In view of the discussion preceding Lemma \ref{prop: msg3}, an application of Theorem \ref{thm: WD} yields \begin{cor}\name{cor: msg3} Any winning set for the MSG induced by $\mathcal{F}$ as above is thick. \end{cor} \subsection{Images of winning sets}\name{images contr} One of the nice features of Schmidt's original game is the stability of the class of its winning sets under certain maps, see e.g.\ \cite[Theorem 1]{Schmidt games} or \cite[Proposition 5.3]{Dani survey}. We close this section by describing some self-maps of $H$ which send winning sets of the game induced by $ {\mathcal{F}}$ to winning sets: \begin{prop}\name{prop: affine commuting} Let $\varphi$ be an automorphism of $H$ commuting with $\Phi_t$ for all $t$. Then there exists $s > 0$ (depending on $\varphi$ and the choice of an admissible $D_0$) such that the following holds. Take $t_0\in{\mathbb{R}}$ and $x_0\in H$, and consider \eq{def f}{ f:H\to H, \ x\mapsto \Phi_{t_0}\big( \varphi (x)\big)x_0\,. } Then for any ${\mathbf{a}} > a_*$, ${\mathbf{b}} > a_* + 2s$ and any $S\subset H$ which is $({\mathbf{a}},{\mathbf{b}})$-winning for the MSG induced by ${\mathcal{F}}$, the set $f(S)$ is $({\mathbf{a}}+2s, {\mathbf{b}}-2s)$-winning for the same game.
\end{prop} \begin{proof} Since $D_0$ is admissible and $\varphi$ is a homeomorphism, there exists $s > 0$ such that some translates of both $\varphi(D_0)$ and $\varphi^{-1}(D_0)$ contain $\Phi_s(D_0)$. Then, for $f$ as in \equ{def f}, since $\varphi$ is a group homomorphism and $\Phi_t\circ \varphi = \varphi \circ \Phi_t$ for all $t$, it follows that \eq{bilipschitz}{ \begin{aligned}\text{for any }t \in {\mathbb{R}}\text{ and }x \in H \quad\exists\, x', x'' \in H\text{ such that}\quad\qquad\\ f\big(\Phi_{t}(D_0)x \big)\supset \Phi_{t+t_0+s}(D_0)x' ,\ f^{-1}\big(\Phi_{t}(D_0)x \big)\supset \Phi_{t-t_0+s}(D_0)x''\,. \end{aligned} } Suppose that Alice and Bob are playing the game with parameters $({\mathbf{a}}+2s, {\mathbf{b}}-2s)$ and target set $f(S)$. Meanwhile their clones $\widetilde{\text{Alice}}$ and $\widetilde{\text{Bob}}$ are playing with parameters $({\mathbf{a}},{\mathbf{b}})$, and we are given a strategy for $\widetilde{\text{Alice}}$ to win on $S$. Let $B_k = \Phi_t(D_0)x$ be a move made by Bob at the $k$th stage of the game. Thus by \equ{bilipschitz}, $f^{-1}(B_k)$ contains a set $\widetilde B_k = \Phi_{t-t_0+s}(D_0)y$ for some $y\in H$. Then, in response to $\widetilde B_k$ as if it were $\widetilde{\text{Bob}}$'s choice, $\widetilde{\text{Alice}}$'s strategy specifies $\widetilde A_k=\Phi_{t-t_0+s+{\mathbf{a}}}(D_0)y' \subset \widetilde B_k$, a move which ensures convergence to a point of $S$. Again by \equ{bilipschitz}, the set $f(\widetilde A_k)$ contains $A_k = \Phi_{t+{\mathbf{a}}+2s}(D_0)x'$ for some $x'\in H$, which can be chosen by Alice as her next move. Now for any choice made by Bob of $$B_{k+1} = \Phi_{t+{\mathbf{a}}+2s+ {\mathbf{b}}-2s}(D_0)z= \Phi_{t+{\mathbf{a}}+ {\mathbf{b}}}(D_0)z\subset A_k$$ Alice can proceed as above, since $f^{-1}(B_{k+1})$ will contain a valid move for $\widetilde{\text{Bob}}$ in response to $\widetilde A_k$.
Continuing this way, Alice can enforce $$\bigcap_{k = 1}^\infty A_k = \bigcap_{k = 1}^\infty f(\widetilde A_k) = f\left(\bigcap_{k = 1}^\infty \widetilde A_k\right)\in f(S)\,,$$ winning the game. \end{proof} \section{${\bold{Bad}}({\bf r})$ is winning}\name{sec: win} In this section we take $H = {\mathbb{R}}^n$ and prove Theorem \ref{thm: main}, that is, exhibit a strategy for the MSG induced by $\mathcal{F}^{({\bf r})}$ as in \equ{defn ar} which ensures that Alice can always zoom to a point in ${\bold{Bad}}({\bf r})$. Our argument is similar to that from \cite{PV-bad}, which in turn is based on ideas of Davenport. In view of remarks made at the end of the previous section, we can make an arbitrary choice for the initial admissible domain $D_0$, and will choose it to be the unit cube in ${\mathbb{R}}^n$, so that translates of ${\Phi}^{({\bf r})}_t(D_0)$ are boxes with sidelengths $e^{-(1+r_1)t},\dots,e^{-(1+r_n)t}$. The main tool will be the so-called `simplex lemma', the idea of which is attributed to Davenport in \cite{PV-bad}. Here is a version suitable for our purposes. \begin{lem} \name{lem: simplex} Let $D\subset {\mathbb{R}}^n$ be a box with sidelengths $\rho_1,\dots,\rho_n$, and for $N > 0$ denote by $\,\mathcal{Q}(N)$ the set of rational vectors ${\bf p}/q$ written in lowest terms with $0 < q < N$. Also let ${f}$ be a nonsingular affine transformation of ${\mathbb{R}}^n$ and $J$ the Jacobian of ${f}$ (that is, the absolute value of the determinant of its linear part).
Suppose that \eq{smallvolume}{\rho_1\cdots\rho_n < \frac{J}{n!N^{n+1}}\,.} Then there exists an affine hyperplane $\mathcal{L}$ such that ${f}\big(\mathcal{Q}(N)\big)\cap D \subset\mathcal{L} $. \end{lem} \begin{proof} Apply \cite[Lemma 4]{KTV} to the set ${f}^{-1}(D)$. \end{proof} Now let us state a strengthening of Theorem \ref{thm: main}: \begin{thm}\name{thm: precise} Let ${\bf r}$ be as in \equ{defn r}, ${\Phi}_t = {\Phi}^{({\bf r})}_t$ as in \equ{defn ar}, and $\psi$ as in \equ{def psi} where $D_0 = [-1/2,1/2]^n$. Then for any nonsingular affine transformation ${f}$ of ${\mathbb{R}}^n$ whose linear part commutes with ${\Phi}_1$ and any $a > \max_i\frac{\log 2}{1 + r_i}$, ${f}\big({\bold{Bad}}({\bf r})\big)$ is an $a$-winning set for the $\psi$-MSG. \end{thm} We remark that in this case $a$ can be chosen independently of the linear part of $f$; note that this is not guaranteed by a general result as in Proposition \ref{prop: affine commuting}, but relies on special properties of the set ${\bold{Bad}}({\bf r})$.
Consequently, in view of Theorem \ref{thm: countable general}, for any sequence $\{L_i\}$ of nonsingular diagonal matrices and a sequence $\{{\bf y}_i\}$ of vectors in ${\mathbb{R}}^n$, the intersection $\bigcap_{i=1}^\infty \left(L_i\big({\bold{Bad}}({\bf r})\big) + {\bf y}_i\right)$ is also $a$-winning, hence thick. \begin{proof}[Proof of Theorem \ref{thm: precise}] We first claim that ${\bf y} \in {f}\big({\bold{Bad}}( {\bf r})\big) $ if and only if there is $c'>0$ such that \eq{shifted}{\max_{1 \leq i \leq n} \left|y_i-{f}\left(\frac{{\bf p}}{q} \right)_i \right| \geq \frac{c'}{q^{1+r_i}} } for all ${\bf p} \in {\mathbb{Z}}^n$ and $q \in {\mathbb{N}}$ (here ${f}(\mathbf{x})_i$ denotes the $i$th coordinate of ${f}(\mathbf{x})$). To see this, let ${\bf r} = (r_i)$ be as in \equ{defn r}, and let \eq{decomposition}{{\mathbb{R}}^n = \bigoplus V_j} be the eigenspace decomposition for ${\Phi}_1$. Letting $\{{\mathbf{e}}_i\}$ denote the standard basis of ${\mathbb{R}}^n$, we have that $V_j = {\rm span} \left( {\mathbf{e}}_i :i \in I_j\right)$ where $I_j$ is a maximal subset of $\{1, \ldots, n\}$ with $r_i$ the same for all $i \in I_j$. Since the linear part of ${f}$ commutes with ${\Phi}_1$, it preserves each $V_j$, so that we may write $${f}({\mathbf{x}}) = {\mathbf{x}}_0 + \sum_j A_jP_j({\mathbf{x}}),$$ where ${\mathbf{x}}_0 \in {\mathbb{R}}^n$, $P_j$ is the projection onto $V_j$ determined by the direct sum decomposition \equ{decomposition}, and $A_j : V_j \to V_j$ is an invertible linear map.
Let $K$ be a positive constant such that for each ${\mathbf{x}} \in V_j$, $$\frac{1}{K} \|{\mathbf{x}} \| \leq \|A_j {\mathbf{x}}\| \leq K\|{\mathbf{x}}\|.$$ With these choices it is easy to show that \equ{defn rba} implies \equ{shifted} for ${\bf y} = {f}({\mathbf{x}})$ with $c' = c^{\max r_i}/K$, and similarly that \equ{shifted} for ${\bf y} = {f}({\mathbf{x}})$ implies \equ{defn rba} with $c= \left({c'}/{K} \right)^{1+ \max r_i^{-1}}.$ Let us fix $a > \max_i\frac{\log 2}{1 + r_i}$ and ${t_0} > 0$ such that \eq{t1}{e^{-{t_0}(n+1)} < \frac J {2^n n!} \,,} where $J$ is the Jacobian of ${f}$. We will specify a strategy for Alice. Bob makes a choice of (arbitrarily large) $b$ and an initial rectangle, that is, a translate $B_1$ of ${\Phi}_{t_1}(D_{0})$ for some $t_1$ which we demand to be at least ${t_0}$ (the latter is justified by Lemma \ref{lem: dummy}). We then choose a positive constant $c'$ such that \eq{c1}{e^{(a+b)(1+r_i)}c' < \left (\tfrac12 - e^{-a(1+r_i)} \right)e^{-t_0(1+r_i)}} for each $i$ (we remark that $\tfrac12 - e^{-a(1+r_i)}$ is positive because of the choice of $a$). Our goal will be to prove the following \begin{prop}\name{prop: induction} For any choices of $B_1,\dots,B_k$ made by Bob it is possible for Alice to choose $A_{k}\subset B_{k}$ such that whenever ${\bf y}\in A_{k}$, inequality \equ{shifted} holds for all ${\bf p},q$ with $0 < q < e^{(k-1)(a+b)}$. \end{prop} If the above claim is true, then the intersection point ${\bf y}$ of all the chosen boxes will satisfy \equ{shifted} for all ${\bf p}\in{\mathbb{Z}}^n$ and $q\in{\mathbb{N}}$, that is, will belong to ${f}\big({\bold{Bad}}({\bf r})\big)$.\end{proof} \begin{proof}[Proof of Proposition \ref{prop: induction}] We proceed by induction on $k$. In case $k = 1$, the statement is trivially true since the set of $q\in{\mathbb{N}}$ with $0 < q < 1$ is empty. Now suppose that $A_1,\dots,A_{k-1}$ are chosen according to the claim, and Bob picks $B_k\subset A_{k-1}$.
Note that $B_k$ is a box with sidelengths $\rho_i {\, \stackrel{\mathrm{def}}{=}\, } e^{-(t_1 + (k-1) (a + b))(1+r_i)}$, $i = 1,\dots,n$. By induction, for each ${\bf y}\in B_k\subset A_{k-1}$ \equ{shifted} holds for all ${\bf p},q$ with $0 < q < e^{(k-2)(a+b)}$. Thus we need to choose $A_{k}\subset B_{k}$ such that the same is true for \eq{new}{ e^{(k-2)(a+b)}\le q < e^{(k-1)(a+b)}\,.} Let ${\bf p}/q$, written in lowest terms, be such that \equ{new} holds, and that \equ{shifted} does not hold for some ${\bf y}\in B_k$; in other words, for each $i$ one has $$|y_i- {f}({\bf p}/q)_i|< \frac{c'}{q^{1+r_i}} \le \frac{e^{(a+b)(1+r_i)} c'}{e^{(k-1)(a+b)(1+r_i)}}$$ for some ${\bf y}\in B_k$. Denote by $\tilde {\bf y}$ the center of $B_k$, so that $$|y_i - \tilde y_i| \le \frac{\rho_i}2 \,;$$ then for each $i$, \begin{equation*} |\tilde y_i- {f}({\bf p}/q)_i| < \left(e^{(a+b)(1+r_i)}c' + e^{-t_0(1+r_i)}/2\right)e^{-(k-1)(a+b)(1+r_i)} \under{\equ{c1}}< \rho_i \,. \end{equation*} Thus, if we denote by $D$ the box centered at $\tilde {\bf y}$ with sidelengths $2\rho_i$, we can conclude that ${f}({\bf p}/q)\in D$; but also ${\bf p}/q\in \mathcal{Q}(e^{(k-1)(a+b)})$, and $$2^n\rho_1\cdots\rho_n = 2^n e^{-\left(t_0 + (k-1)(a+b)\right)(n+1)} \under{ \equ{t1}}< \frac J {{ n! }(e^{(k-1)(a+b)})^{n+1}} \,.$$ Therefore, by Lemma \ref{lem: simplex}, there exists an affine hyperplane $\mathcal{L}$ containing all ${f}({\bf p}/q)$ as above. \ignore{ By induction, for each ${\mathbf{x}}\in B_k$ \equ{shifted} holds for all ${\bf p},q$ with $0 < q < e^{(k-1)(a+b)}$. Thus we need to choose $A_{k+1}\subset B_{k}$ such that the same is true for $ e^{(k-1)(a+b)}\le q < e^{k(a+b)}$.
Let ${\bf p}/q$, written in lowest terms, be such that $ e^{(k-1)(a+b)}\le q < e^{k(a+b)}$ and that \equ{shifted} does not hold for some ${\mathbf{x}}\in B_k$, in other words, for each $i$ one has $$|x_i- {f}({\bf p}/q)_i|< \frac{c^{r_i}}{q^{1+r_i}} \le \frac{e^{(a+b)(1+r_i)}c^{r_i}}{e^{k(a+b)(1+r_i)}}$$ for some ${\mathbf{x}}\in B_k$. Denote by $\tilde {\mathbf{x}}$ the center of $B_k$, so that $$\max_i|x_i - \tilde x_i| \le \frac12 e^{-(t_0 + k (a + b))(1+r_i)}\,;$$ then for each $i$, \begin{equation*} \begin{split} |\tilde x_i- p_i/q| &< \left(e^{(a+b)(1+r_i)}c^{r_i} + e^{-t_0}/2\right)e^{-k(a+b)(1+r_i)}\\ &\under{\equ{t1}, \equ{c1}}< \frac3{8(n!J)^{1/n}}{e^{-k(a+b)(1+r_i)}}\,. \end{split} \end{equation*} Thus, if we denote by $D$ the box centered at $\tilde {\mathbf{x}}$ with sidelengths $$\rho_i = \frac3{4(n!J)^{1/n}}e^{-k(a+b)(1+r_i)}\,,$$ we can conclude that ${\bf p}/q\in D$; but also ${\bf p}/q\in \mathcal{Q}(e^{k(a+b)})$, and $$\rho_1\cdots\rho_n = \big(\tfrac34\big)^n \frac1{ n!J } e^{-k(a+b)(n+1)} < 1/{ n!J } \big(e^{k(a+b)}\big)^{n+1} \,.$$ Therefore, by Lemma \ref{lem: simplex}, there exists an affine hyperplane $\mathcal{L}$ containing all ${\bf p}/q$ as above. } Clearly it will be advantageous for Alice to stay as far from all those vectors as possible, i.e., choose $A_{k}\subset B_{k}$ to be a translate of ${\Phi}_{t_1 + (k-1) (a + b) + a}(D_0)$ which maximizes the distance from $\mathcal{L}$. A success is guaranteed by the assumption $a > \max_i\frac{\log 2}{1 + r_i}$, which amounts to saying that for each $i$, the ratio of the length of the $i$th side of the new box to the length of the $i$th side of $B_k$ is $e^{-a(1+r_i)} < 1/2$. This implies that for each ${\mathbf{x}}\in A_{k}$ chosen this way and any ${\mathbf{x}}'\in \mathcal{L}$, there exists $i$ such that $ |x_i - x'_i|$ is not less than the length of the $i$th side of $B_k$ times $(\frac12 - e^{-a(1+r_i)})$.
Therefore, whenever \equ{new} holds and ${\mathbf{x}}\in A_{k}$, for some $i$ one has \begin{equation*} \begin{split} |x_i - {f}({\bf p}/q)_i| &\ge e^{-(t_0 + (k-1) (a + b))(1+r_i)}\big(\tfrac12 - e^{-a(1+r_i)}\big)\\ & = e^{-t_0(1+r_i)}\big(\tfrac12 - e^{-a(1+r_i)}\big)e^{-(a + b)(1+r_i)}e^{-( (k-2) (a + b))(1+r_i)}\\ &\ge e^{-t_0(1+r_i)}\big(\tfrac12 - e^{-a(1+r_i)}\big)e^{-(a + b)(1+r_i)} q^{-(1 + r_i)} \under{\equ{c1}}\ge c' q^{-(1 + r_i)}\,, \end{split} \end{equation*} establishing \equ{shifted}. \end{proof} \ignore{Combining this with Propositions \ref{prop: general dilations} and \ref{prop: general bilipschits} we obtain: \begin{cor}\name{cor: for intro} Let $f_1, f_2, \ldots$ be a sequence of invertible maps ${\mathbb{R}}^n \to {\mathbb{R}}^n$ of the form $f_i = A_i \circ B_i \circ C_i$, where: \begin{itemize} \item Each $A_i$ is an $({\mathcal{F}}^{({\bf r})})$-dilation. \item Each $B_i$ is $({\mathcal{F}}^{({\bf r})}, D_0, s)$-bilipschitz for some fixed $s$. \item Each $C_i$ is an affine map whose linear part commutes with $\Phi_t$ for all $t$; \end{itemize} Then $\dim \, \bigcap_i {f}_i\big({\bold{Bad}}({\bf r})\big) = n.$ \end{cor} } \section{Concluding remarks}\name{sec: next} \subsection{Dimension of winning sets for Schmidt's game}\name{sec: fulldim} The formalism developed in \S\S\ref{sec: games}--\ref{sec: contr} appears to be quite general, and we expect it to be useful in a wide variety of situations. In particular, new information can be extracted even for Schmidt's original game. Namely, here we state a condition on a metric space sufficient to conclude that any winning set of Schmidt's game \equ{rn} has large enough dimension. This will be another application of Theorem \ref{thm: WD}.
Recall that a locally finite Borel measure $\mu$ on a metric space $X$ is called {\sl Federer\/}, or doubling, if there exist $K>0$ and $\rho > 0$ such that for all $x \in {\rm supp}\,\mu$ and $0 < r < \rho$, \eq{defn Federer}{ \mu\big({B}(x,3r) \big) \leq K \mu\big({B}(x,r) \big)\,. } \begin{prop} \name{cor: Federer} Let $E$ be a complete metric space which is the support of a Federer measure $\mu$. Then there exist $c_1,c_2 > 0$, depending only on $K$ as in \equ{defn Federer}, such that whenever $0 < \alpha < 1$, $0 < \beta < 1/2$, and $S$ is an $(\alpha, \beta)$-winning set for Schmidt's game as in \equ{rn} played on $E$ and $ \varnothing \ne U \subset E$ is open, one has \eq{federer estimate}{\dim (S \cap U) \geq \underline{d}_{\mu}(U) - \frac{c_1|\log \alpha| + c_2}{|\log \alpha|+ |\log \beta|} \,.} In particular, $\dim (S\cap U) \geq \underline{d}_{\mu}(U)$ if $S$ is winning. \end{prop} Clearly Theorem \ref{thm: full dim} is a special case of the above result. In addition, Proposition \ref{cor: Federer} generalizes a recent result of L.\ Fishman \cite[Thm.\ 3.1 and Cor.\ 5.3]{Fishman} that for a measure $\mu$ satisfying a power law (see \equ{pl}; this condition obviously implies Federer) a winning set for Schmidt's original game \equ{rn} played on $E = {\rm supp} \, \mu$ has full \hd. See \cite[Example 7.5]{bad} for an example of a subset of ${\mathbb{R}}$ (a similar construction is possible in ${\mathbb{R}}^n$ for any $n$) supporting a measure of full \hd\ which is Federer but does not satisfy a power law. \ignore{It follows that the metric space constructed in Proposition \ref{prop: lower dim example} cannot support a Federer measure of positive dimension.} \begin{proof}[Proof of Proposition \ref{cor: Federer}] We need to check the assumptions of Theorem \ref{thm: WD}. Conditions (MSG0--2) are immediate, and ($\mu$1) holds since ${\rm supp}\,\mu = E$.
Thus it remains only to verify ($\mu$2). It will be convenient to switch back to Schmidt's multiplicative notation of \S\ref{special}. Fix $0 < \alpha < 1$; we claim that there exists $c' > 0$ such that for any $x,x'\in E$ and $0 < r < \rho$ one has \eq{strong federer}{{B}(x', \alpha r) \subset {B}(x, r) \implies \mu\big({B}(x' ,\alpha r) \big) \geq c' \mu\big({B}(x ,r) \big)\,.} Indeed, choose $m\in{\mathbb{N}}$ and $c' > 0$ such that \eq{def const}{\alpha/6 < 3^{-m} \leq \alpha/2\text{ and }c' = K^{-m}\,.} Iterating \equ{defn Federer} $m$ times, we find $\mu\big({B}(x' ,\alpha r)\big) \geq c'\mu \big({B}(x',2r) \big)$ for any $x' \in E$ and $r>0$. Since for any $x' \in {B}(x, r)$ the latter ball is contained in $ {B}(x', 2r)$, \equ{strong federer} follows. Now take $0 < \beta < 1/2$ and $\omega = (x,r)\in E\times {\mathbb{R}}_+$ with $r < \rho$, and let $x_i,\ i=1, \ldots, N$, be a maximal collection of points such that $\theta_i {\, \stackrel{\mathrm{def}}{=}\, } (x_i,\beta r)\le_s \omega$ and balls $ B(\theta_i)$ are pairwise disjoint. By maximality, $$B\big(x,(1-\beta)r\big) \subset \bigcup_{i=1}^N {B}(x_i, 3\beta r)\,.$$ (Indeed, otherwise there exists $y\in B\big(x,(1-\beta)r\big)$ with $d(y,x_i) > 3\beta r$ for each $i$, which implies that $(y,\beta r)\le_s \omega$ and $ B(y,\beta r)$ is disjoint from $ B(\theta_i)$ for each $i$.) In view of \equ{strong federer}, for any choices of $\theta_i' {\, \stackrel{\mathrm{def}}{=}\, } (x_i', \alpha \beta r) \le_s \theta_i $ one has $\mu\big( B(\theta_i')\big) \geq c'\mu\big({B}(\theta_i)\big)$. This implies \begin{equation*} \begin{aligned} \mu\big (\bigcup B(\theta_i') \big) & = \sum \mu\big ( B(\theta_i') \big) \geq c' \sum \mu \big({B}(\theta_i) \big) \\ & \geq \frac{c'}{K} \sum \mu \big({B}(x_i, 3\beta r) \big) \geq \frac{c'}{K}\mu\big( B(x,(1-\beta)r)\big)\\ &\geq \frac{c'}{K}\mu\big( B(x,r/2) \big)\geq \frac{c'}{K^2}\mu\big( B(\omega) \big) \,.
\end{aligned} \end{equation*} Hence ($\mu$2) holds with $c = c'/K^2$, and \equ{federer estimate}, with explicit $c_1$ and $c_2$ (one can take $c_1 = \log K / \log 3$ and $c_2 = \log K \,(3-\log 2/\log 3)$), follows from \equ{wd} and \equ{def const}. \end{proof} \ignore{We remark that, in view of \equ{wd} and an estimate for $c$ obtained above, whenever $S$ is $(\alpha,\beta)$-winning one has $${\dim (S \cap U) \geq \underline{d}_{\mu}(U) - \frac{\log K \left(\frac{|\log \alpha/2|}{\log 3} + 3\right)}{|\log \alpha|+ |\log \beta|} \,.} $$ \medskip} \ignore{ Then one has $\sigma = 1$, $\delta = n = \underline{d}_\mu(x)$ for all $x$, and $N(s) \ge c_n e^{ns}$ for some constant $c_n$ dependent only on $n$. } \ignore{We will say that $\mu$ is {\sl strong Federer with respect to ${\mathcal D}$} if for any $p>0$ there is $c=c(p)>0$ such that for all $t \geq t_*$, all $D \in {\mathcal D}_t$ and all $D' \in {\mathcal D}_{t+p}$ with $D' \subset D$, one has $\mu(D') \geq c\mu(D).$ Note that this assumption implies that for any $D \in {\mathcal D}_t$ and any $p>0$, a maximal collection of disjoint elements of ${\mathcal D}_{t+p}$ contained in $D$ is finite. Specializing to the setup \equ{rn} of Schmidt's original game and taking $p = \log 3$, this implies that there is $c>0$ such that if $B = {B}(x,3r)$ and $B'={B}(z, r) \subset B$ then $\mu(B') \geq c\mu(B).$ This implies the Federer condition and is implied by the power law condition (see \cite{bad} for a discussion of conditions on measures). For an example of a measure $\mu$ which is strongly Federer (with respect to ${\mathcal D}$ as in \equ{rn}) but does not satisfy a power law, see \cite[Example 7.5]{bad}.
} The next proposition shows that, as was mentioned in \S\ref{dimest}, without additional assumptions on a metric space the conclusion of Theorem \ref{thm: full dim} could fail: \begin{prop}\name{prop: lower dim example} There exists a complete metric space $E$ of positive \hd\ containing a countable (hence zero-dimensional) winning set $S$ for the game \equ{rn}. \end{prop} \begin{proof} Let $X = \{0,1,2\}^{{\mathbb{N}}}$, equipped with the metric $$d\big((x_n), (y_n) \big) = 3^{-k}, \ \ \mathrm{where \ } k = \min\{j: x_j \neq y_j\}.$$ Let $E \subset X$ be the subset of sequences in which the digit 0 can only be followed by 0; i.e. $$x_{\ell} =0, \ k \geq \ell \implies x_k =0. $$ Then $E$ is a closed subset of $X$ and so is a complete metric space when equipped with the restriction of $d$. Let $S$ be the set of sequences in $E$ in which the digit 0 appears. Then $S$ is a countable dense subset of $E$ but no point in $S$ is an accumulation point of $E \smallsetminus S$. In particular $\dim (S) =0$, and it is easily checked that $\dim (E) = \log 2 / \log 3 >0.$ Let $\alpha = 1/27,$ and let $\beta$ be arbitrary. Suppose that Bob chooses $\omega = (x, r)$, where $x = (x_n)$. Letting Alice play arbitrarily we can assume that $r<1$. Let $\ell \in {\mathbb{N}}$ be chosen so that $3^{-(\ell+1)} < r \leq 3^{-\ell}.$ Note that $ B(\omega)$ contains all sequences $(y_n)$ with $y_i = x_i$ for all $i \leq \ell$, and in particular the sequence $z = (x_1, \ldots, x_{\ell}, x_{\ell+1}, 0, 0, \ldots) \in S.$ Now Alice chooses $\omega' = (z, \alpha r)$; it is easy to see that $\omega'\le_s \omega$ and that $ B(\omega')=\{z\}$ (a singleton), since any other sequence in this ball must begin with $(x_1, \ldots, x_{\ell}, x_{\ell+1}, 0)$.
\end{proof} It is not hard to see that such an example can be realized as a compact subset of ${\mathbb{R}}$ with the induced metric (e.g.\ by identifying sequences $(x_n)$ with real numbers $0.x_1x_2\dots$ expanded in base $3$). It is also worth remarking that another special case of our general framework is an $(\alpha,\beta)$-game played on an arbitrary metric space $E$ but with Schmidt's containment relation \equ{cont} replaced by \eq{weakcont} {(x',r') \le (x,r)\quad \iff \quad B(x',r') \subset B(x,r)\,, } similarly to the way it was done in \equ{def order}. The two conditions are equivalent when $E$ is a Euclidean space. However in general, e.g.\ when $E$ is a proper closed subset of ${\mathbb{R}}$ or ${\mathbb{R}}^n$ such as those considered in \cite{Fishman}, \equ{weakcont} is weaker, and the classes of winning sets for the two games could differ. Still, by modifying the argument of this subsection one can show that the conclusions of both propositions hold when the game is played according to the weaker containment relation. \subsection{Sets of the form \equ{tripleint} and their generalizations}\name{sec: int} Take $E = {\mathbb{R}}^n$ and let $\mathcal{F}$ be a one-parameter semigroup of its linear contracting transformations. Suppose that $E = E_1\oplus E_2$ where both $E_1$ and $E_2$ are invariant under $\mathcal{F}$, denote by $\mathcal{F}_1$ the restriction of $\mathcal{F}$ to $E_1$, and suppose that $S_1\subset E_1$ is a winning subset of the MSG induced by $\mathcal{F}_1$. Then it immediately follows from Lemma \ref{lem: product} that $S_1\times E_2$ is a winning subset of the MSG induced by $\mathcal{F}$. Applying it to $\mathcal{F} = \mathcal{F}^{({\bf r})}$ as in \equ{defn ar} we obtain \begin{prop}\name{prop: intersections} For ${\bf r}$ as in \equ{defn r} and $1\le k \le n$, define ${\bf{s}}\in{\mathbb{R}}^k$ by \eq{defn s}{ s_i = \frac{1 + (k+ 1) r_i - \sum_{l=1}^k r_l}{k + \sum_{l=1}^k r_l}\,,\quad i = 1,\dots, k\,.
} Then ${\bold{Bad}}({\bf{s}}) \times {\mathbb{R}}^{n-k}$ is a winning set for the MSG induced by $\mathcal{F}^{({\bf r})}$, and therefore so is its intersection with ${\bold{Bad}}({\bf r})$. \end{prop} \begin{proof} Note that ${\bf{s}}$ is defined so that $\sum_i s_i$ is equal to $ 1$, and the vector $(1 + s_1,\dots,1+s_k)$ is proportional to $(1 + r_1,\dots,1+r_k)$. Therefore the semigroup $\mathcal{F}^{({\bf{s}})}$ is simply a reparameterization of the restriction of $\mathcal{F}^{({\bf r})}$ to ${\mathbb{R}}^k$, and the claim follows from Lemma~\ref{lem: product}. \end{proof} It is clear that the winning property of the set \equ{tripleint} follows from a special case of the above proposition. The same scheme of proof, which seems to be much less involved than that of \cite{PV-bad}, is applicable to multiple intersections of sets of weighted \ba\ vectors. E.g.\ given ${\bf r}\in{\mathbb{R}}^3$ with $r_1 + r_2 + r_3 = 1$, equation \equ{defn s} can be used to define $s_{ij}$ for $i,j = 1,\dots,3$, $i\ne j$, such that \eq{sum 1}{\sum_{i} s_{ij} = 1\text{ for }j = 1,2,3\,,} and that \eq{7int}{ \begin{aligned} {\bold{Bad}}({\bf r}) &\cap {\bold{Bad}}(s_{13},s_{23},0) \cap {\bold{Bad}}(s_{12},0,s_{32}) \cap {\bold{Bad}}(0, s_{21},s_{31}) \\ &\cap {\bold{Bad}}(1,0,0) \cap {\bold{Bad}}(0,1,0)\cap {\bold{Bad}}(0,0,1) \end{aligned}} is a winning set for the MSG induced by $\mathcal{F}^{({\bf r})}$, and therefore is thick. Take for example ${\bf r} = (\frac12, \frac13, \frac16)$; our conclusion is that $$\begin{aligned} {\bold{Bad}}({\bf r}) &\cap {\bold{Bad}}(\tfrac{10}{17}, \tfrac7{17},0) \cap {\bold{Bad}}(\tfrac9{16},0,\tfrac7{16}) \cap {\bold{Bad}}(0, \tfrac2{3},\tfrac1{3}) \\ &\cap {\bold{Bad}}(1,0,0) \cap {\bold{Bad}}(0,1,0)\cap {\bold{Bad}}(0,0,1) \end{aligned}$$ is thick.
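To illustrate the arithmetic behind the first of these weight vectors, one can apply \equ{defn s} with $k = 2$ to ${\bf r} = (\frac12, \frac13, \frac16)$: here $\sum_{l=1}^2 r_l = \frac56$, so $$s_1 = \frac{1 + 3\cdot\frac12 - \frac56}{2 + \frac56} = \frac{5/3}{17/6} = \frac{10}{17}\,,\qquad s_2 = \frac{1 + 3\cdot\frac13 - \frac56}{2 + \frac56} = \frac{7/6}{17/6} = \frac{7}{17}\,.$$ Indeed $s_1 + s_2 = 1$ and $(1 + s_1, 1 + s_2) = \frac{18}{17}\,(1 + r_1, 1 + r_2)$, which is exactly the normalization and proportionality required in the proof of Proposition \ref{prop: intersections}.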
We remark that the assertion made in \cite[p.\ 32]{PV-bad}, namely that given ${\bf r}$ as above, the set \equ{7int} is thick for an {\it arbitrary\/} choice of $s_{ij}$ satisfying \equ{sum 1}, does not seem to follow from either our methods of proof or those of \cite{PV-bad}. \subsection{Games and dynamics}\name{sec: dyn} The appearance of the semigroup $\mathcal{F}^{({\bf r})}$ in our analysis of the set ${\bold{Bad}}({\bf r})$ can be naturally explained from the point of view of homogeneous dynamics. Let $G=\operatorname{SL}_{n+1}({\mathbb{R}})$, $\Gamma = \operatorname{SL}_{n+1}({\mathbb{Z}})$. The \hs\ $G/\Gamma$ can be identified with the space of unimodular lattices in ${\mathbb{R}}^{n+1}$. To a vector ${\mathbf{x}}\in{\mathbb{R}}^n$ one associates a unipotent element $\tau({\mathbf{x}}) = \left( \begin{array}{cc} I_n & {\mathbf{x}} \\ 0 & 1 \end{array} \right)$ of $G$, which gives rise to a lattice $$\tau({\mathbf{x}}) {\mathbb{Z}}^{n+1} = \left\{\left( \begin{array} {c} q {\mathbf{x}} - {\bf p} \\ q \end{array} \right): q\in {\mathbb{Z}},\ {\bf p}\in{\mathbb{Z}}^n\right\}\in G/\Gamma\,.$$ Then, given ${\bf r}$ as in \equ{defn r}, consider the one-parameter subgroup $ \{g^{({\bf r})}_t\}$ of $G$, where \eq{def gtr}{g^{({\bf r})}_t {\, \stackrel{\mathrm{def}}{=}\, } {\rm diag}(e^{r_1t}, \ldots, e^{r_n t}, e^{-t})\,. } It was observed by Dani \cite{Dani-div} for ${\bf r} = {\bf n}$ and by the first named author \cite{K-matrices} for arbitrary ${\bf r}$ that ${\mathbf{x}}\in{\bold{Bad}}({\bf r})$ if and only if the trajectory $$\big\{g^{({\bf r})}_t {\tau}({\mathbf{x}}){\mathbb{Z}}^{n+1} : t \ge 0\big\}$$ is bounded in $G/\Gamma$.
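In the simplest case $n = 1$, ${\bf r} = (1)$, this correspondence can be seen by a standard computation: here $g^{({\bf r})}_t = {\rm diag}(e^t, e^{-t})$, and the nonzero vectors of the lattice $\tau(x){\mathbb{Z}}^2$ with $q \neq 0$ are of the form $(qx - p, q)$. The quantity $$\max\big(e^t|qx - p|,\; e^{-t}|q|\big)\,,$$ minimized over $t \geq 0$, is comparable to $\big(|q|\,|qx - p|\big)^{1/2}$, while vectors with $q = 0$ play no role since their norms do not decay along the trajectory. Hence, by Mahler's compactness criterion, the trajectory $\big\{g^{({\bf r})}_t\tau(x){\mathbb{Z}}^2 : t \ge 0\big\}$ is bounded if and only if $\inf_{p\in{\mathbb{Z}},\,q\in{\mathbb{N}}} q\,|qx - p| > 0$, i.e.\ if and only if $x$ is badly approximable.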
Note that the $g^{({\bf r})}_t$-action on $G/\Gamma$ is partially hyperbolic, and it is straightforward to verify that the $\tau({\mathbb{R}}^n)$-orbit foliation is $g^{({\bf r})}_t$-invariant, and that the action on the foliation induced by the $g^{({\bf r})}_t$-action on $G/\Gamma$ is realized by $\mathcal{F}^{({\bf r})}$. Namely, one has $$g^{({\bf r})}_t {\tau}({\mathbf{x}})y = \tau\big({\Phi}^{({\bf r})}_{-t}({\mathbf{x}})\big)g^{({\bf r})}_t y$$ for any $y \in G/\Gamma$. Dani used Schmidt's result on the winning property of the set ${\bold{Bad}}$ and the aforementioned correspondence to prove that the set of points of $G/\Gamma$ with bounded $g^{({\bf n})}_t$-trajectories, where ${\bf n}$ is as in \equ{def n}, is thick. Later, in \cite{KM}, this was established for arbitrary flows $(G/\Gamma,g_t)$ `with no non-trivial quasiunipotent factors'. In fact the following was proved: denote by $H^+$ the $g_1$-expanding horospherical subgroup of $G$, that is, $$H^+ = \{h\in G : g_{-t}hg_t \to e\text{ as }t\to\infty\}\,;$$ then for any $y\in G/\Gamma$ the set \eq{ehs}{ \big\{h\in H^+ : \{g_thy : t \ge 0\} \text{ is bounded in } G/\Gamma\big\} } is thick. The main result of the present paper strengthens the above conclusion in the case $G=\operatorname{SL}_{n+1}({\mathbb{R}})$, $\Gamma = \operatorname{SL}_{n+1}({\mathbb{Z}})$ and $g_t = g^{({\bf r})}_t $ as in \equ{def gtr}. Namely, consider the subgroup $H = \tau({\mathbb{R}}^n)$ of $H^+$ (the latter for generic ${\bf r}$ is isomorphic to the group of all upper-triangular unipotent matrices). Then for any $y\in G/\Gamma$, the intersection of the set \equ{ehs} with an arbitrary coset $H h'$ of $H$ in $H^+$ is winning for a certain MSG determined only by ${\bf r}$ (hence is thick).
In particular, in view of Theorem \ref{thm: countable} and Proposition \ref{prop: affine commuting}, this implies that for an arbitrary countable sequence of points $y_k\in G/\Gamma$, the intersection of all sets $ \big\{g\in G : \{g_t gy_k \} \text{ is bounded in } G/\Gamma\big\}$ is thick. We note that the proof in \cite{KM} is based on mixing of the $g_t$-action on $G/\Gamma$, while to establish the aforementioned stronger winning property mixing does not seem to be enough, and additional arithmetic considerations are necessary. In a recent work \cite{new}, for any flow $(G/\Gamma,g_t)$ with no nontrivial quasiunipotent factors we describe a class of subgroups $H$ of the $g_1$-expanding horospherical subgroup of $G$ which are normalized by $g_t$ and have the property that for any $y\in G/\Gamma$, the set \eq{bounded}{ \big\{h\in H : \{g_thy : t \ge 0\} \text{ is bounded in } G/\Gamma\big\} } is winning for the MSG induced by the contractions $h\mapsto g_{-t}hg_t$. The argument is based on reduction theory for arithmetic groups, that is, on an analysis of the structure of cusps of arithmetic \hs s. Another result obtained in \cite{new} is that for $G$, $\Gamma$, $\{g_t\}$, $H$, $y$ as above and any $z\in G/\Gamma$, the sets \eq{nondense}{ \left\{h\in H : z\notin \overline{\{g_thy : t \ge 0\}} \right\} } are also winning for the same MSG. Again this strengthens existing results in the literature on the thickness of those sets; see \cite{K}. Combining the two statements above and using the intersection property of the winning sets \equ{bounded} and \equ{nondense}, one finds a way to construct orbits which are both bounded and stay away from a given countable subset of $G/\Gamma$, which settles a conjecture made by Margulis in \cite{Ma}. \subsection{Systems of linear forms}\name{sec: sys} A special case of the general theorem mentioned in the previous subsection is a generalization of the main result of the present paper to the case of systems of linear forms.
Namely, let $m,n$ be positive integers, denote by $\mr$ the space of $m\times n$ matrices with real entries (systems of $m$ linear forms in $n$ variables), and say that \amr\ is {\sl $({\bf r},{\bf{s}})$-\ba\/} if $$ \inf_{{\bf p}\in{\mathbb{Z}}^m,\,{\bf q}\in{\mathbb{Z}}^n\nz} \max_i |Y_i{\bf q} - p_i|^{1/r_i} \cdot \max_j|q_j|^{1/s_j} > 0\,, $$ where $Y_i$, $i = 1,\dots,m$, are the rows of $Y$ and ${\bf r}\in{\mathbb{R}}^m$ and ${\bf{s}}\in{\mathbb{R}}^n$ are such that \eq{def rs}{r_i,s_j > 0\quad\text{and}\quad\sum_{i=1}^m r_i = 1 = \sum_{j=1}^n s_j\,.} (Here the components of the vectors ${\bf r},{\bf{s}}$ can be thought of as weights assigned to the linear forms $Y_i$ and the integers $q_j$ respectively.) The correspondence described in the previous subsection extends to the matrix set-up, with $G = \operatorname{SL}_{m+n}({\mathbb{R}})$ and $\Gamma = \operatorname{SL}_{m+n}({\mathbb{Z}})$ and \begin{equation*} \label{eq: new defn g_t} g^{({\bf r},{\bf{s}})}_t = {\rm diag}(e^{r_1t}, \ldots, e^{r_{m} t}, e^{-s_1t}, \ldots, e^{-s_{n}t}) \end{equation*} acting on $G/\Gamma$. This way one can show that the set ${\bold{Bad}}({\bf r},{\bf{s}})\subset \mr$ of $({\bf r},{\bf{s}})$-\ba\ systems is winning for the MSG induced by the semigroup of contractions ${\Phi}_t : (y_{ij}) \mapsto ( e^{- (r_i + s_j)t }y_{ij})$ of $\mr$ (a special case where all weights are equal is a theorem of Schmidt \cite{Schmidt}). This generalizes Theorem \ref{thm: main} and strengthens \cite[Corollary 4.5]{di}, where it was shown that ${\bold{Bad}}({\bf r},{\bf{s}})$ is thick for any choice of ${\bf r},{\bf{s}}$ as in \equ{def rs}. \subsection{Playing games on other metric spaces}\name{sec: other} The paper \cite{KTV}, where it was first proved that the set of weighted badly approximable vectors in ${\mathbb{R}}^n$ has full \hd, contains a discussion of analogues of the sets ${\bold{Bad}}({\bf r})$ over local fields other than ${\mathbb{R}}$.
In \cite[\S\S5.3--5.5]{KTV} it is explained how to apply the methods of \cite[\S\S2--4]{KTV} to studying weighted badly approximable vectors in vector spaces over ${\mathbb{C}}$ as well as over non-Archimedean fields\footnote{See also \cite{kr} where Schmidt's result on the winning property of the set of \ba\ systems of linear forms is extended to the field of formal power series.}. Similarly one can apply the methods of the present paper to replace Theorems 17--19 of \cite{KTV} by stronger statements that the corresponding sets are winning sets of certain MSGs. For that one needs to generalize the set-up of \S \ref{sec: contr} and consider modified Schmidt games induced by contracting automorphisms of arbitrary locally compact topological groups (not necessarily real Lie groups). Another theme of the papers \cite{KTV} and \cite{bad} is intersecting the set of badly approximable vectors with some nice fractals in ${\mathbb{R}}^n$. For example \cite[Theorem 11]{KTV}, slightly generalized in \cite[Theorem 8.4]{bad}, states the following: let $\mu = \mu_1\times\dots\times \mu_d$, where each $\mu_i$ is a measure on ${\mathbb{R}}$ satisfying a power law (called `condition (A)' in \cite{KTV}); then $\dim\big({\bold{Bad}}({\bf r}) \cap{\rm supp}\,\mu\big) = \dim({\rm supp}\,\mu)$. Following an approach developed recently by Fishman \cite{Fishman}, it seems possible to strengthen this result; in particular, one can consider a modified Schmidt game played on $E = {\rm supp}\,\mu$, with $\mu$ as above, and prove that the intersection of $E$ with $ {\bold{Bad}}({\bf r})$ is a winning set of this game. \subsection{Schmidt's Conjecture}\name{sec: schmidt} Finally we would like to mention a question posed by W.\ Schmidt \cite{Schmidt:open} in 1982: is it true that for ${\bf r}\ne {\bf r}'$, the intersection of ${\bold{Bad}}({\bf r})$ and ${\bold{Bad}}({\bf r}')$ is nonempty? 
Schmidt conjectured that the answer is affirmative in the special case $n = 2$, ${\bf r} = (\frac13, \frac23)$ and ${\bf r}' = (\frac23, \frac13)$, pointing out that disproving his conjecture would amount to proving Littlewood's Conjecture (see \cite{EKL} for its statement, history and recent developments). Unfortunately, the results of the present paper do not give rise to any progress related to Schmidt's Conjecture. Indeed, each of the weight vectors ${\bf r}$ comes with its own set of rules for the corresponding modified Schmidt game, and there is no reason to believe that winning sets of different games must have nonempty intersection. One can also observe that ${\bold{Bad}}(\frac23, \frac13) = {f}\big({\bold{Bad}}(\frac13, \frac23)\big)$ where ${f}$ is the reflection of ${\mathbb{R}}^2$ around the line $y = x$. This reflection however does not commute with $\mathcal{F}^{(1/3,2/3)}$, hence Theorem \ref{thm: precise} cannot be used to conclude\footnote{Recently a solution to the conjecture was announced by D.\ Badziahin, A.\ Pollington and S.\ Velani.} that ${f}\big({\bold{Bad}}(\frac13, \frac23)\big)$ is a winning set of the MSG induced by $\mathcal{F}^{(1/3,2/3)}$.
\section{Introduction} \label{sect:intro} In 2012, Deng, Guan and Zhang \cite{squ_def1} introduced the squeezing function of a bounded domain $\Omega$ in $\mathbb{C}^n$ as follows. For any $z \in \Omega$, let $\mathcal{F}_{\Omega}(z)$ be the collection of all embeddings $f$ from $\Omega$ to $\mathbb{C}^n$ such that $f(z)=0$. Let $B(0;r)= \left\lbrace z \in \mathbb{C}^n \: : \: \| z \| < r \right\rbrace $ denote the $n$-dimensional open ball centered at the origin $0$ with radius $r>0$. Then the squeezing function $S_\Omega (z)$ of $\Omega$ at $z$ is defined to be \[ S_\Omega (z) = \sup\limits_{f \in \mathcal{F}_{\Omega}(z)} \left\lbrace \frac{a}{b} \: : \: B(0;a) \subset f(\Omega ) \subset B(0;b) \right\rbrace. \] \paragraph{Remark:} \begin{enumerate} \item For the supremum in the definition of the squeezing function, we can restrict the family $\mathcal{F}_{\Omega}(z)$ to the subfamily of functions $f$ such that $f(\Omega)$ is bounded. \item For any $\lambda \neq 0$, we have $f \in \mathcal{F}_{\Omega}(z)$ if and only if $\lambda f \in \mathcal{F}_{\Omega}(z)$. As a consequence, we may assume that $b=1$. \end{enumerate} It is clear from the definition that the squeezing function on $\Omega$ is positive and bounded above by $1$. Also, it is invariant under biholomorphisms, that is, $S_{g(\Omega)} (g(z)) =S_{\Omega} (z)$ for any biholomorphism $g$ of $\Omega$. If the squeezing function of a domain $\Omega$ is bounded below by a positive constant, i.e., if there exists a positive constant $c$ such that $S_\Omega (z) \geq c >0$ for all $z \in \Omega$, then the domain $\Omega$ is said to be \textit{holomorphic homogeneous regular} by Liu, Sun and Yau \cite{liu2004canonical}, or to have the \textit{uniform squeezing property} by Yeung \cite{yeung2009geometry}. The consideration of such domains appears naturally when one applies the Bers embedding theorem to the Teichm\"uller space of genus $g$ hyperbolic Riemann surfaces.
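As a basic illustration of the definition, the unit ball itself has constant squeezing function: for any $z \in B(0;1)$ there is an automorphism $\varphi_z$ of $B(0;1)$ with $\varphi_z(z) = 0$, and taking $f = \varphi_z$ in the definition yields $f\big(B(0;1)\big) = B(0;1)$, so that \[ S_{B(0;1)}(z) = 1 \quad \text{for all } z \in B(0;1). \] By the biholomorphic invariance noted above, the same holds for any bounded domain biholomorphic to $B(0;1)$.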
The squeezing function is interesting because it provides some geometric information about the domain $\Omega$. For instance, Joo and Kim proved in \cite{joo2016boundary} that if $\Omega \subset \mathbb{C}^2$ is a bounded domain with smooth pseudoconvex boundary and if $p \in \partial \Omega$ is of finite type such that $\lim_{\Omega \ni z \to p} S_{\Omega} (z)=1$, then $\partial \Omega$ is strictly pseudoconvex at $p$. For another instance, Zimmer showed in \cite{zimmer2018gap} and \cite{zimmer2019characterizing} that if $\Omega \subset \mathbb{C}^n$ is a bounded convex domain with $C^{2,\alpha}$ boundary and $K$ is a compact subset of $\Omega$ such that $S_\Omega (z) \geq 1 - \epsilon$ for every $z \in \Omega \setminus K$ and for some positive constant $\epsilon = \epsilon (n)$, then $\Omega$ is strictly pseudoconvex. In addition to providing geometric information, the squeezing function is related to some estimates of intrinsic metrics on $\Omega$. For example, in \cite{deng2016properties}, Deng, Guan and Zhang showed that \[ S_\Omega (z) K_\Omega (z, v) \leq C_\Omega (z,v) \leq K_\Omega (z,v) \] for any point $z$ in $\Omega$ and for any tangent vector $v \in T_z{\Omega}$, where $C_\Omega$ and $K_\Omega$ denote the Carath\'{e}odory seminorm and Kobayashi seminorm on $\Omega$ respectively. For other properties and applications of squeezing functions, see \cite{squ_def1,fornaess2016estimate,fornaess2018domain,fornaess2015estimate,fornaess2016non,kim2016uniform,nikolov2018behavior,nikolov2017boundary,zimmer2018smoothly}. Given a bounded domain $\Omega \subset \mathbb{C}^n$, it is then natural to ask whether one can estimate or even compute the precise form for the squeezing function $S_\Omega (z)$ on $\Omega$. In \cite{arosio2017squeezing}, Arosio, Forn{\ae}ss, Shcherbina and Wold provided an estimate of $S_\Omega (z)$ for $\Omega= \mathbb{P}^1 \backslash K$ where $K$ is a Cantor set. 
In \cite{squ_def1}, Deng, Guan and Zhang showed that the squeezing functions of classical symmetric domains are certain constants (using a result of Kubota in \cite{kubota1982note}); they also showed that the squeezing function of the $n$-dimensional punctured unit ball $B(0;1) \setminus \{0\}$ is given by $S_{B(0;1) \setminus \{0\}} (z)=\| z \| $. We now consider the $n=1$ case, and introduce the following notation which will be used in this paper: \begin{itemize} \item $\mathbb{D}_{r}= \{z\in\mathbb{C}:|z|<r\}$, the disk of radius $r$ centered at $0$, and $\mathbb{D}=\mathbb{D}_{1}$; \item $C_{r}= \{z\in\mathbb{C}:|z|=r\}$, the circle of radius $r$ centered at $0$; \item $A_{r}= \{z\in\mathbb{C}:r<|z|<1\}$, the annulus with inner radius $r$ and outer radius $1$. \end{itemize} By the Riemann mapping theorem, the simply-connected case is trivial: $S_{D}(z)\equiv 1$ for any simply-connected domain $D$. In \cite{squ_def1}, Deng, Guan and Zhang considered the squeezing function of an annulus $A_r$. They conjectured that for any $z \in A_r$ with $|z| \geq \sqrt{r}>0$, \[ S_{A_r} (z) = \sigma^{-1} \left( \log \dfrac{(1+|z|)(1-r)}{(1-|z|)(1+r)} \right) \] where \[ \sigma (z)= \log \dfrac{1+|z|}{1-|z|}.\] In this paper, we will disprove this conjecture by establishing the formula for $S_{A_{r}}(z)$. This also answers a question asked by Wold about the precise form of $S_{A_r}(z)$ in his lecture given in the Mini-workshop on Complex Analysis and Geometry at the Institute for Mathematical Sciences, NUS in May 2017. \begin{theorem} \label{MainResult1} For $0<r<1$ and $r<|z|<1$, \[ S_{A_r} (z) = \max \left\lbrace |z| , \frac{r}{|z|} \right\rbrace. \] \end{theorem} \paragraph{Remark:}\begin{enumerate} \item The case of the punctured disk $A_0=\mathbb{D} \setminus \{0\}$ follows by letting $r \to 0$, so that $S_{A_0} (z) = |z|$. This is the $n=1$ case of the result of Deng, Guan and Zhang for the punctured ball in $\mathbb{C}^n$ referred to above.
\item Since any doubly-connected domain (other than the punctured plane) is conformally equivalent to $A_{r}$ for some $0\leq r<1$, this result determines the squeezing function in the doubly-connected case up to biholomorphisms. \end{enumerate} Let us define \[ \widetilde{\mathcal{F}}_{r}(z) = \left\lbrace f\in \mathcal{F}_{A_r}(z) : f(A_r) \subset \mathbb{D}, f(\partial \mathbb{D})=\partial \mathbb{D} \right\rbrace, \] \[ \widetilde{S}_{r} (z) = \sup\limits_{f \in \widetilde{\mathcal{F}}_{r}(z)} \left\lbrace a \: : \: \mathbb{D}_{a} \subset f(A_r ) \subset \mathbb{D} \right\rbrace. \] By restricting $\mathcal{F}_{A_{r}}(z)$ to $\widetilde{\mathcal{F}}_{r}(z)$, we will see in Section \ref{sect:proof} that Theorem \ref{MainResult1} will follow if we show that \[\widetilde{S}_{r}(z)=|z|.\] To do this, we will identify a candidate for the extremal function in $\widetilde{\mathcal{F}}_{r}(z)$. This will be the conformal map from $A_r$ onto a circularly slit disk, that is, a domain of the form $\mathbb{D} \setminus L$ where $L$ is a proper subarc of the circle with radius $R\in(0,1)$ and center $0$. Through the results of Crowdy in \cite{crowdy2005schwarz} and \cite{crowdy2011schottky}, this conformal map can be expressed explicitly in terms of the Schottky-Klein prime function (see Theorem \ref{CrowdyConformal}). It will be shown that, in this case, the radius of the slit is $|z|$. Then the following theorem will show that this conformal map is indeed extremal (as suggested by Wold in the lecture mentioned above). \begin{theorem} \label{MainResult3} Let $\widetilde{E} \subset \mathbb{D}$ be a closed set with $0 \notin \widetilde{E}$ for which there exists a constant $y>0$ such that $|z| \geq y$ for all $z \in \widetilde{E}$. Furthermore, assume that $\Omega=\mathbb{D} \setminus \widetilde{E}$ is doubly connected.
If $g$ is a conformal map of $A_{r}$ onto $\Omega$, for some $r\in(0,1)$, such that $g$ maps $\partial\mathbb{D}$ onto $\partial\mathbb{D}$, then we have \[ | g^{-1}(0) | \geq y. \] \end{theorem} To prove this result, we will start with the case where $\widetilde{E}$ is a circular arc so that $\Omega=\mathbb{D} \setminus \widetilde{E}$ is a circularly slit disk. We will then grow a curve from the circular slit so that $\widetilde{E}$ is now the union of the curve with the circular slit; the conformal maps onto these domains $\Omega$ will then satisfy a version of the Loewner differential equation due to Komatu (see \cite{Komatu_proof,Komatu_origin}). Studying this differential equation will enable us to prove Theorem \ref{MainResult3}. The remaining cases for $\widetilde{E}$ then follow by letting the length of the circular slit tend to $0$. \ \\ Finally, we obtain the following lower bound for the squeezing function on product domains in $\mathbb{C}^{n}$. \begin{theorem} \label{thm:several} Suppose that $\Omega \subset \mathbb{C}^n$ and $\Omega=\Omega_1 \times \cdots \times \Omega_n$ where $\Omega_i$ is a bounded domain in $\mathbb{C}$ for each $i$. Then for $z=(z_1, \cdots , z_n) \in \Omega$, we have \[ S_{\Omega} (z) \geq \left( S_{\Omega_1}(z_1)^{-2} + \cdots + S_{\Omega_n}(z_n)^{-2} \right)^{-1/2}. \] \end{theorem} \paragraph{Remark:} The argument we use to prove the above theorem can be modified to obtain a similar result when the $\Omega_{i}$ are not necessarily planar. In \cite{squ_def1}, Deng, Guan and Zhang show that this inequality holds with equality when each $\Omega_i$ is a classical symmetric domain. Theorem \ref{thm:several} allows us to use the formula given in Theorem \ref{MainResult1} for the squeezing function of a doubly-connected domain to get a lower bound on the squeezing function of the product of several doubly-connected and simply-connected domains.
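This combination of the two theorems is easy to check numerically. The following Python snippet (an illustrative sanity check, not part of the proofs; the helper names are ours) evaluates the bound of Theorem \ref{thm:several} for $\Omega = A_r \times \mathbb{D}$, using $S_{A_r}(z_1)=\max\{|z_1|, r/|z_1|\}$ from Theorem \ref{MainResult1} and $S_{\mathbb{D}} \equiv 1$, and confirms that it reduces to $r/\sqrt{r^2+|z_1|^2}$ for $|z_1| \leq \sqrt{r}$ and to $|z_1|/\sqrt{1+|z_1|^2}$ for $|z_1| \geq \sqrt{r}$.

```python
import math

def S_annulus(r, mod_z):
    # Squeezing function of the annulus A_r (Theorem 1): max(|z|, r/|z|).
    return max(mod_z, r / mod_z)

def product_bound(s_list):
    # Lower bound of Theorem 3: (sum_i S_i^{-2})^{-1/2}.
    return sum(s ** -2 for s in s_list) ** -0.5

r = 0.09  # sample inner radius, so sqrt(r) = 0.3
for mod_z1 in [0.12, 0.2, 0.3, 0.5, 0.8, 0.95]:
    bound = product_bound([S_annulus(r, mod_z1), 1.0])  # S_D = 1 on the disk
    if mod_z1 <= math.sqrt(r):
        expected = r / math.sqrt(r ** 2 + mod_z1 ** 2)
    else:
        expected = mod_z1 / math.sqrt(1 + mod_z1 ** 2)
    assert abs(bound - expected) < 1e-12
```

The two branches agree at $|z_1| = \sqrt{r}$, where $S_{A_r}$ attains its minimum $\sqrt{r}$.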
For example, considering $\Omega = A_r \times \mathbb{D}$, Theorem \ref{thm:several} together with Theorem \ref{MainResult1} yields \[ S_{A_r \times \mathbb{D}}(z) \geq \begin{cases} \dfrac{r}{\sqrt{r^2+|z_1|^2}} & \mbox{if } r < |z_1| \leq \sqrt{r} \\ \dfrac{|z_1|}{\sqrt{1+|z_1|^2}} & \mbox{if } \sqrt{r} \leq |z_1| < 1. \end{cases} \] Obtaining the exact form for $S_{A_r \times \mathbb{D}}(z)$ would be of interest. The rest of the paper is organised as follows. Firstly, in Section \ref{sect:prelim}, we review some results and concepts that are necessary for this paper, including the formula for the conformal map of an annulus $A_r$ to a circularly slit disk $\mathbb{D} \setminus L$ in terms of the Schottky-Klein prime function and a version of the Loewner differential equation that we will need. Then, in Section \ref{sec:main_proof1}, we give a proof of Theorem \ref{MainResult3}; the proof of Theorem \ref{MainResult1} is provided in Section \ref{sect:proof} and we prove Theorem \ref{thm:several} in Section \ref{sect:several}. Finally, we discuss the multiply-connected cases in Section \ref{sect:conjecture}. \section{Preliminary Results} \label{sect:prelim} \subsection{Basic Definitions and Notations} \label{sect:prelim:subsect:notation} Throughout this paper, we will make use of the following definitions and notations: \begin{itemize} \item For $z \in \mathbb{C}^n$ and $r>0$, $B(z;r)$ denotes the open ball centered at $z$ with radius $r$. When $n=1$, we also set $\mathbb{D}_r=B(0;r)$ and in particular, $\mathbb{D}=B(0;1)$. Then $C_r = \partial \mathbb{D}_r$. \item Let $p>0$ and $r=e^{-p}$ so that $0<r<1$. Then $A_r$ denotes the annulus centered at $0$ with inner radius $r$ and outer radius $1$. In this case, $A_r$ is said to be of \textit{modulus} $p$. \item Let $\Omega$ be a doubly-connected domain in $\mathbb{C}$.
The \textit{modulus} $p$ of $\Omega$ is defined to be the unique positive real number $p$ such that there exists a biholomorphism $\phi$ from $\Omega$ to $A_r$ where $r=e^{-p}$. \item By a \textit{(doubly-connected) circularly slit disk}, we refer to a domain of the form $\mathbb{D} \setminus L$ where $L$ is a proper closed subarc of the circle with radius $R \in (0,1)$ and center $0$. \item For any set $E\subset\mathbb{C}$, $\partial E$ denotes the topological boundary of $E$ in $\mathbb{C}$. \item Let $\gamma : I \to \mathbb{C}$ be a curve where $I$ is an interval in $\mathbb{R}$. We will write $\gamma I$ instead of $\gamma (I)$ for notational simplicity. \item We assume that the argument function $\mathrm{Arg}$ takes values in $[0,2\pi)$. \end{itemize} \subsection{The Schottky-Klein Prime Function} \label{sect:prelim:subsect:SKPF} The Schottky-Klein prime function $\omega(z,y) $ on the annulus $A_{r}$ is defined by \begin{equation} \label{def:Schottky prime} \omega(z,y) = (z-y) \prod_{n=1}^{\infty} \dfrac{(z-r^{2n}y)(y-r^{2n} z)}{(z-r^{2n} z)(y-r^{2n} y)} \qquad \mbox{for $z,y \in \mathbb{C} \setminus \{ 0\} $. } \end{equation} Moreover, $\omega(z,y)$ satisfies the following symmetry properties (see \cite{baker1897abel,crowdy2011schottky} or \cite{hejhal1972theta}): \begin{equation} \label{prop1:Schottky prime} \overline{ \omega ( \overline{z}^{-1} , \overline{y}^{-1} )} = \dfrac{- \omega (z, y)}{z y} \end{equation} and \begin{equation} \label{prop2:Schottky prime} \omega ( r^{-2} z , y ) = - \dfrac{ z \: \omega (z , y ) }{r^{2} y} . \end{equation} Recall from the previous section that a circularly slit disk is a domain of the form $\mathbb{D} \setminus L$ where $L$ is a proper subarc of the circle with radius $R\in(0,1)$ and center 0. In \cite{crowdy2011schottky}, Crowdy established the following result.
\begin{theorem} \label{CrowdyConformal} Let $y$ be a point in $A_r$ and define \begin{equation} \label{ConformalMap} f(z,y) = \dfrac{\omega(z,y)}{|y| \omega(z,\overline{y}^{-1})} \qquad \mbox{for $z,y \in A_r $.} \end{equation} Then $f( \cdot,y)$ is a conformal map from $A_r$ onto a circularly slit disk with $f(\partial\mathbb{D})=\partial\mathbb{D}$, and $y$ is mapped to $0$. \end{theorem} See also \cite{book_crowdy2020}. Theorem \ref{CrowdyConformal} allows us to compute the radius of the circular arc in $f(A_r)$. For any $z=re^{i \theta}$ on the inner circle $C_r$, we have $r^2 \overline{z}^{-1} = z$ and hence \begin{align*} | f(z,y) |^2 &= f(z,y) \overline { f(z,y)} = \left( \dfrac{\omega(z,y)}{|y| \omega(z,\overline{y}^{-1})} \right) \left( \dfrac{ \overline {\omega (z,y)} }{|y| \overline {\omega(z,\overline{y}^{-1})} } \right) = \dfrac{1}{|y|^2} \left( \dfrac{\omega(z,y)}{ \omega(z,\overline{y}^{-1})} \right) \left( \dfrac{ \overline {\omega ( r^2 \overline{z}^{-1}, y )} }{ \overline {\omega ( r^2 \overline{z}^{-1}, \overline{y}^{-1}) }} \right). \end{align*} Using (\ref{prop1:Schottky prime}) and (\ref{prop2:Schottky prime}), \begin{align*} | f(z,y) |^2&= \dfrac{1}{|y|^2} \left( \dfrac{\omega(z,y)}{ \omega(z,\overline{y}^{-1})} \right) \left( \dfrac{r^{-2}z y \: \omega ( r^{-2} z, \overline{y}^{-1} ) }{ r^{-2}z \overline{y}^{-1} \: \omega ( r^{-2} z, y) } \right) \\ &= \dfrac{1}{|y|^2} \left( \dfrac{\omega(z,y)}{ \omega(z,\overline{y}^{-1})} \right) \left( \dfrac{ y \left( - r^{-2} z \overline{y} \right) \omega ( z, \overline{y}^{-1} ) }{ \overline{y}^{-1} \left( - r^{-2} z y^{-1} \right) \omega ( z, y) } \right) \\&= \dfrac{1}{|y|^2} \, (y \overline{y}) (y \overline{y}) = |y|^2. \end{align*} This shows that the radius of the circular arc in $f(A_r)$ is $|y|$ and, in particular, it does not depend on $r$. Note that if $\phi$ is a conformal map from a circularly slit disk $\Omega_1$ to another circularly slit disk $\Omega_2$ such that $\phi$ maps $\partial\mathbb{D}$ to $\partial\mathbb{D}$ and $\phi(0)=0$, then $\phi$ must be a rotation (Lemma 6.3 in \cite{conway2012functions}). We restate this result as the following lemma.
\begin{lemma} \label{Lemma1} For any annulus $A_r$ and for any circularly slit disk $\Omega$ whose arc has radius $y$, if $f$ is a conformal map which maps $A_r$ onto $\Omega$ with $f(\partial \mathbb{D}) = \partial \mathbb{D}$, then we have \[ |f^{-1} (0)| =y. \] \end{lemma} \paragraph{Remark:} We thank the referee for informing us that Lemma \ref{Lemma1} is the same as Lemma $3$ of \cite{reich1960canonical}. Our proof of the above lemma is different from that of Lemma $3$ in \cite{reich1960canonical}. When $y$ is positive, we have the following lemma. \begin{lemma} \label{lemma:sym} Let $\Omega=\mathbb{D} \setminus L$ be a circularly slit disk such that $L$ is symmetric across the real axis, that is, $\overline{z} \in L$ if and only if $z \in L$. Let $y$ be a point on an annulus $A_r$ such that $r<y<1$. Let $g: A_r \to \Omega$ be the conformal map from $A_r$ onto $\Omega$ such that $g(y)=0$ and $g(\partial \mathbb{D}) =\partial \mathbb{D}$. Then for any $z \in A_r$, \[g(z)=\overline{g(\overline{z})}.\] In particular, if $p_1$ and $p_2$ denote the two end points of $L$ and $\pi_1,\pi_2 \in C_r$ denote the two points on the inner boundary of $A_r$ such that $g(\pi_1)=p_1$ and $g(\pi_2)=p_2$; then we have $\overline{\pi_1} = \pi_2$. \end{lemma} \begin{proof} Consider the map $G$ defined by $G(z)=\overline{g(\overline{z})}$. Then $G$ is a conformal map of $A_{r}$ onto $\Omega$ with $G(\partial\mathbb{D})=\partial\mathbb{D}$ and $G(y)=0$. Hence, $G^{-1}\circ g$ is a conformal automorphism of $A_{r}$ that fixes $y$ and does not interchange the boundary components. Since any such conformal automorphism of an annulus is the identity mapping, this implies that $G^{-1} \circ g$ is the identity and so $g(z)=\overline{g(\overline{z})}$ on $A_r$. Let $\{ p_{i,n}\} \subset \Omega$ be sequences of points such that $p_{2,n}=\overline{p_{1,n}} $ and $\lim\limits_{n \to \infty} p_{i,n} = p_i$. Define $\pi_{i,n}=g^{-1} (p_{i,n}) \in A_r$ for $i=1,2$. 
It follows that for each $n\in \mathbb{N}$, \[ g(\pi_{2,n}) = p_{2,n} = \overline{p_{1,n}} = \overline{ g(\pi_{1,n}) }= g (\overline{\pi_{1,n}} ) .\] Since $g$ is conformal, we have $\overline{\pi_{1,n}}=\pi_{2,n}$ for each $n\in \mathbb{N}$. Note that $\lim\limits_{n \to \infty} \pi_{i,n} = \pi_i$. Hence we conclude that $\overline{\pi_1} =\pi_2$. \end{proof} \subsection{Approximating by Slit Domains} \label{sect:prelim:subsect:Caratheodory} Let $\mathcal{D}$ be the collection of doubly-connected domains $\Omega$ such that $\Omega \subset A_r$ for some $r>0$ and $\partial \mathbb{D}$ is one of the boundary components of $\Omega$. Let $\{ \Omega_n \}$ be a sequence in $\mathcal{D}$. Define the \textit{kernel} $\Omega$ of the sequence $\{ \Omega_n \}$ as follows: \begin{itemize} \item If there exists some $\Omega^* \in \mathcal{D}$ such that $\Omega^* \subset \bigcap\limits_{n=1}^{\infty} \Omega_n$, then the kernel $\Omega$ is defined to be the maximal doubly-connected domain in $\mathcal{D}$ such that for any compact subset $K$ of $\Omega$, there exists an $N \in \mathbb{N}$ so that $K \subset \Omega_n$ whenever $n>N$; \item otherwise, the kernel $\Omega$ is defined to be $\partial \mathbb{D}$. \end{itemize} Then a sequence $\{ \Omega_n \}$ converges to $\Omega$ \textit{in the sense of kernel convergence} if $\Omega$ is the kernel of every subsequence of $\{ \Omega_n \}$. Let $\{ \Omega_n \}$ be a sequence of doubly-connected domains in $\mathcal{D}$. Since every doubly-connected domain of finite modulus is conformally equivalent to an annulus $A_{r}$ for some $r\in(0,1)$, there exists a sequence $\{ \psi_n \}$ of conformal maps such that $\psi_n$ maps $A_{r_n}$ onto $\Omega_n$ and $\psi_n$ is normalized appropriately. A version of the Carath\'{e}odory kernel convergence theorem for doubly-connected domains will show that the kernel convergence of $\{ \Omega_n \}$ implies local uniform convergence of $\{ \psi_n \}$. This is Theorem 7.1 in \cite{Komatu_proof}.
We restate this result in a form which we will need later. \begin{theorem} \label{dense2} Suppose that $r>0$ and $r<y<1$. Let $\{ \Omega_n \}$ be a sequence of doubly connected domains in $\mathcal{D}$ such that $y\in \bigcap_{n=1}^\infty \Omega_n$. Let $\{ r_n \}$ be a sequence with $r< r_n <1$ for $n \geq 1$ such that there exists a conformal map $\Phi_n$ of $\Omega_n$ onto $A_{r_n}$ satisfying $\Phi_n (y) > 0$ and $\Phi_n (\partial \mathbb{D})= \partial \mathbb{D}$ for every $n$. Then the kernel convergence of $\Omega_n$ to a doubly connected domain $\Omega$ in $\mathcal{D}$ implies that the sequence $\{r_n\}$ converges to $r$ and that the sequence $\{ \Phi_n \}$ converges locally uniformly to a conformal map $\Phi$ of $\Omega$ onto $A_r$ satisfying $\Phi(y)>0$ and $\Phi (\partial \mathbb{D})= \partial \mathbb{D}$. \end{theorem} Theorem \ref{dense2} can be obtained from Theorem 7.1 in \cite{Komatu_proof} by renormalising the conformal maps. This theorem leads to the following proposition. \begin{proposition} \label{dense} Let $E \subset \overline{A}_r$ be a closed set such that $A_r \setminus E$ is doubly connected and $E \cap C_r \neq \emptyset$. Assume there exists some $y \in A_r \setminus E$ with $y>0$ and let $\Phi$ be the conformal map from $A_r \setminus E$ onto some annulus $A_{r'}$ normalized such that $\Phi (y) >0$ and $\Phi (\partial \mathbb{D})= \partial \mathbb{D}$. \begin{enumerate} \item Suppose that $\partial E \cap A_{r}$ is a Jordan arc. Then we can find a Jordan arc $\gamma : [0,T) \to \overline{A_r}\setminus\{y\}$ satisfying $\gamma (0) \in C_r$ and $\gamma (0,T) \subset A_r$, and an increasing function $q:[0,T) \to [r,r')$ with $q(0)=r$ and $q(t) \to r'$ as $t \to T$, such that the conformal maps $\Phi_t$ of $A_r \setminus \gamma (0,t] $ onto $A_{q(t)}$ with $\Phi_{t} (y)>0$ and $\Phi_t (\partial \mathbb{D})= \partial \mathbb{D}$ satisfy $\Phi_t \to \Phi$ locally uniformly as $t \to T$.
\item For the cases where $\partial E \cap A_r$ is not a Jordan arc, we can find an increasing sequence $\lbrace q_{n} \rbrace$ with $r<q_{n}<1$ for all $n$ and $q_{n}\to r'$ as $n\to \infty$; a sequence of Jordan arcs $\Gamma_{n}\subset \overline{A_{r}} \setminus \{y \}$ which start from $C_{r}$; and conformal maps $\Phi_{n}$ which map $A_r\setminus\Gamma_{n}$ onto the annulus $A_{q_{n}}$ with $\Phi_{n}(y)>0$ and $\Phi_n (\partial \mathbb{D})= \partial \mathbb{D}$ such that $\Phi_n \to \Phi$ locally uniformly as $n \to \infty$. \end{enumerate} \end{proposition} \begin{proof} For part 1, suppose first that $\partial E \cap C_r \neq \emptyset$. Then we can find a Jordan arc $\gamma:[0,T] \to \overline{A_r}$ such that $|\gamma(0)|=r$ and $\gamma(0,T) =\partial E \cap A_r$. Otherwise, $\partial E \cap C_r = \emptyset$. Then $E$ is bounded by $C_r$ and a closed curve in $A_r$. In this case, we define a Jordan arc $\gamma:[0,T] \to \overline{A_r}$ such that $\gamma [0,T_1]$ is the straight line segment in $E$ from a point in $C_r$ to a point in $\partial E$ for some $0<T_1<T$ and $\gamma [T_1,T) =\partial E \cap A_r$. We define $\Omega_t = A_r \setminus \gamma (0,t] $. The conformal equivalence of any doubly-connected domain to an annulus implies that there exists an increasing function $q:[0,T] \to [r,r']$ with $q(0)=r$ and $q(T)=r'$ and a family of conformal maps $\Phi_t$ of $\Omega_t$ onto $A_{q(t)}$ with $\Phi_t (y) > 0$ and $\Phi_t (\partial \mathbb{D})= \partial \mathbb{D}$. Then as $t \to T$, $ \Omega_t \to A_r \setminus E$ in the sense of kernel convergence. Hence, by Theorem \ref{dense2}, the family $\{ \Phi_t\}$ converges locally uniformly to a conformal map $\Phi$ of $A_r \setminus E$ onto $A_{r'}$ such that $\Phi (y) > 0$ and $\Phi (\partial \mathbb{D})= \partial \mathbb{D}$. This proves part 1.
For part 2, since $A_r \setminus E$ is doubly connected, there exists an annulus $A_s$ and a conformal map $f$ of $A_s$ onto $A_r \setminus E$ for some $s>0$ such that $f(\partial \mathbb{D}) = \partial \mathbb{D}$. For any $0< \delta < 1-s$, we let \[E_\delta = \overline{f(\{ z \in A_s : s< |z| < s+ \delta \}) }.\] Also, when $\delta$ is small enough, we have $y \notin E_\delta$. Since $f$ is conformal, $\partial E_{\delta} \cap A_r = f(C_{s+\delta})$ is an analytic Jordan arc. So part 1 of the proposition applies to $A_r \setminus E_{\delta}$. That is, we can find a Jordan arc $\gamma_{\delta} : [0,T) \to \overline{A_r}\setminus\{y\}$ satisfying $\gamma_\delta (0) \in C_r$ and $\gamma_\delta (0,T) \subset A_r$, and an increasing function $q_\delta :[0,T) \to [r,r'_\delta)$ with $q_{\delta}(0)=r$ and $q_\delta (t) \to r'_\delta$ as $t \to T$, such that the conformal maps $\Phi_{t}^{\delta}$ of $A_r \setminus \gamma_{\delta} (0,t] $ onto $A_{q_{\delta}(t)}$ with $\Phi_{t}^{\delta} (y)>0$ and $\Phi_{t}^{\delta} (\partial \mathbb{D})= \partial \mathbb{D}$ satisfy $\Phi_{t}^{\delta} \to \Phi^\delta$ locally uniformly as $t \to T$. Letting $\delta \to 0$ and applying a diagonal argument, we get the desired result. More precisely, let $\{ t_n \} \subset [0,T)$ be an increasing sequence with $t_n \to T$; the desired result then follows by taking $\Gamma_n = \gamma_{1/n} (0,t_n]$ and $\Phi_n = \Phi_{t_n}^{1/n}$. \end{proof} The simply-connected version of this proposition is given in Theorem 3.2 in \cite{duren2001univalent}. This proposition allows us to consider slit domains in the proof of Theorem \ref{MainResult3} via a Loewner-type differential equation which is introduced in the next section. This is analogous to the approach to solving various coefficient problems for univalent functions on $\mathbb{D}$ (including the De Branges-Bieberbach Theorem); see \cite{duren2001univalent} and references therein.
\subsection{The Loewner-type Differential Equation in Annuli} \label{sect:prelim:subsect:LDE} In this section, we introduce the Loewner differential equation, which is a differential equation for the conformal maps (normalized and parametrized appropriately) onto a slit domain. We first introduce the classical setting in annuli, where the slit grows from the outer boundary. For $p>0$, define $r=e^{-p}$ and $r_t=e^{-p+t}$. Suppose that $\gamma : [0,T] \to \overline{A_r}$ is a Jordan arc satisfying $\gamma (0) \in \partial \mathbb{D}$ and $\gamma (0,T] \subset A_r$, such that $A_r \setminus \gamma (0,t]$ has modulus $p-t$. Then there exists a family of conformal maps $\phi_t : A_r \setminus \gamma (0,t] \to A_{r_t}$, continuously differentiable in $t$, such that \[ \alpha (t) := \phi_t ( \gamma (t) ) \in \partial \mathbb{D} \] and $\phi_t (z)$ satisfies Komatu's version of the Loewner differential equation on an annulus (see \cite{Komatu_proof}), \begin{equation} \label{LDE_out} \partial_t \phi_t (z) = \phi_t (z) \mathcal{K}_{r_t} (\phi_t (z),\alpha (t) ) . \end{equation} Here, for $\alpha \in \partial \mathbb{D}$ and $r>0$, $\mathcal{K}_r(z,\alpha)$ is Villat's kernel, defined by $\mathcal{K}_r(z,\alpha) = \mathcal{K}_r \left( \frac{z}{\alpha} \right),$ where \[ \mathcal{K}_r(z) = \lim\limits_{N \to \infty} \sum\limits^N_{n=-N} \dfrac{r^{2n}+z}{r^{2n}-z} . \] For our purposes, we need a version of Loewner-type differential equations where the curve grows from the inner boundary circle of $A_{r_0}$. Let $C_{r_0}$ be the circle centered at $0$ with radius $r_0$. Suppose that $\gamma : [0,T] \to \overline{A_{r_0}}$ is a Jordan arc satisfying $\gamma (0) \in C_{r_0}$ and $\gamma (0,T] \subset A_{r_0}$ such that $A_{r_0} \setminus \gamma (0,t]$ has modulus $p-t$. Define the inversion map $\rho_t (z)=\dfrac{r_t}{z}$, a conformal automorphism of $A_{r_{t}}$ that interchanges the inner and outer boundaries of $A_{r_t}$.
Hence $\rho_t \circ \gamma $ is a Jordan arc satisfying the conditions given at the beginning of this subsection; for this Jordan arc $\rho_t \circ \gamma $, let $\phi_t$ be the corresponding conformal map satisfying (\ref{LDE_out}). Then we define \[ \Phi_t(z):= \rho_t \circ \phi_t \circ \rho^{-1}_0 (z) = \dfrac{r_t}{\phi_t \left( \dfrac{r_0}{z} \right)} . \] Clearly $\Phi_t $ is a conformal map from $A_{r_0} \setminus \gamma (0,t]$ onto $A_{r_t}$ with $\Phi_t (\gamma (t)) \in C_{r_t}$. By the chain rule, $\Phi_t$ satisfies the differential equation \begin{equation*} \partial_t \Phi_t (z) = \Phi_t (z) \left( 1- \mathcal{K}_{r_t} \left( \dfrac{r_t}{\Phi_t (z)}, e^{-i \beta (t) } \right) \right) \end{equation*} where $\beta (t) $ satisfies $\Phi_t ( \gamma (t) ) = r_t e^{i \beta (t)} \in C_{r_t} $. Let $y_0>0$ be a fixed point in $A_{r_0}$. By composing $\Phi_t$ with a suitable rotation, we can normalize $\Phi_t$ such that $y_t:=\Phi_t(y_0)>0$ for any $t$. Then $\Phi_t $ satisfies \begin{align} \partial_t \Phi_t (z) &= \Phi_t (z) \left( 1- \mathcal{K}_{r_t} \left( \dfrac{r_t}{\Phi_t (z)}, e^{-i \beta (t) } \right) + i J(r_t,y_t, \beta (t) ) \right) \label{eq:LDE_in} \\ &= \Phi_t (z) \left( 1- \lim\limits_{N \to \infty} \sum\limits_{n=-N}^N \dfrac{r_t^{2n-1} \Phi_t (z) + e^{i\beta (t) } }{r_t^{2n-1} \Phi_t (z) - e^{i\beta (t) } } + i J(r_t,y_t, \beta (t) ) \right) \nonumber \end{align} for some function $ J(r_t,y_t, \beta (t) )$. Notice that the normalization amounts to multiplying $\Phi_t$ by a rotation factor at each time $t$, so the function $ J(r_t,y_t, \beta (t) )$ is real-valued. Also, since $y_t >0$ for any $t$, we have \[ \partial_t \mathrm{Im} \left( \log y_t \right) =0 \] and hence we have \[ J(r_t,y_t, \beta (t) ) = \mathrm{Im} \left[ \mathcal{K}_{r_t} \left( \dfrac{r_t}{y_t}, e^{-i\beta (t) } \right) \right]. \] We call the function $\beta(t)$ the \textit{Loewner driving function} of the curve $\gamma$.
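Two boundary properties of Villat's kernel explain why the flow (\ref{LDE_out}) preserves the unit circle while the inner radius grows at unit logarithmic speed: pairing the terms $n$ and $-n$ of the series shows that $\mathrm{Re}\, \mathcal{K}_r(z) = 0$ on $|z|=1$, while pairing $n$ and $1-n$ shows that $\mathrm{Re}\, \mathcal{K}_r(z) = 1$ on $|z|=r$, matching $\partial_t \log r_t = 1$. The following Python snippet (a numerical sanity check we add for illustration, not part of the paper) verifies both facts via the symmetric truncation of the series.

```python
import cmath

def villat(r, z, N=60):
    # Symmetric truncation of Villat's kernel
    #   K_r(z) = lim_{N -> inf} sum_{n=-N}^{N} (r^{2n} + z) / (r^{2n} - z).
    return sum((r ** (2 * n) + z) / (r ** (2 * n) - z) for n in range(-N, N + 1))

r = 0.4
for t in [0.3, 1.0, 2.5]:
    # On the outer circle |z| = 1: Re K_r = 0, so |phi_t| = 1 is preserved.
    assert abs(villat(r, cmath.exp(1j * t)).real) < 1e-9
    # On the inner circle |z| = r: Re K_r = 1, so log of the inner radius
    # moves at unit speed, consistent with r_t = e^{-p+t}.
    assert abs(villat(r, r * cmath.exp(1j * t)).real - 1) < 1e-9
```

The symmetric range $n=-N,\dots,N$ matters here: summing over an asymmetric range shifts the real part by a constant, which is why the kernel is defined through the symmetric limit.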
We now define \[ Q(r,y,\theta,w) := 1- \lim\limits_{N \to \infty} \sum\limits_{n=-N}^N \dfrac{r^{2n-1} w + e^{i \theta} }{r^{2n-1} w - e^{i\theta} } + i J(r,y, \theta), \] \[ R (r,\theta,w) := \mathrm{Re}\left[ Q(r,y,\theta,w) \right], \qquad \mbox{$r>0$} \] (note that the real part of $Q$ does not depend on $y$, since $J$ is real-valued) and \[ I (r,y,\theta,w) := \mathrm{Im}\left[ Q(r,y,\theta,w) \right], \qquad \mbox{$r>0$, $y>0$}. \] Hence \[ \frac{ \partial_t \Phi_t (z)}{\Phi_t (z)} = Q(r_t,y_t,\beta (t) , \Phi_t (z)) = R(r_t,\beta (t) , \Phi_t (z)) +i I (r_t,y_t,\beta (t), \Phi_t (z)). \] Substituting $z=y_0$ in the above equation and noting that $y_t =\Phi_t (y_0)>0$, we get \begin{equation} \label{LDE} \partial_t \log y_t = P(r_t , y_t, \beta (t) ) \end{equation} where \[ P (r,y,\theta) := R(r,\theta,y) = \mathrm{Re}\left[ 1 - \lim\limits_{N \to \infty} \sum\limits_{n=-N}^N \dfrac{r^{2n-1} y + e^{i \theta }} { r^{2n-1} y - e^{i \theta} } \right] \] for $0<r < y<1$. \subsection{Multi-slit Loewner-Type Differential Equation} \label{subsec:Prelim-Muti} In this subsection, we will develop a version of the Loewner differential equation for multiple slits on an annulus. Write $r_0=e^{-p}$. Let $y_0$ be a point in $A_{r_0}$, and let $\gamma_1:[0,T_1] \to \overline{A_{r_0}}$ and $\gamma_2:[0,T_2] \to \overline{A_{r_0}}$ be Jordan arcs such that $\gamma_1[0,T_1] \cap \gamma_2[0,T_2] =\emptyset$. Moreover, $\gamma_1$ and $\gamma_2$ are parametrized such that $A_{r_0} \setminus \left( \gamma_1(0,t_1] \cup \gamma_2(0,t_2] \right)$ has modulus $p-|\tau|$, where $\tau = ( t_1 , t_2)$ and $|\tau| := t_1 +t_2 $. We now make the following construction which is illustrated in Figure \ref{fig:fig1}. \begin{itemize} \item Let $y_{(0,0)}:=y_0$. \item Let $\widetilde{\Phi}_{(t_1,0)}$ be the conformal map of $A_{r_0} \setminus \gamma_1(0,t_1]$ onto $A_{r_{t_1}}$ with $\widetilde{y}_{(t_1,0)} := \widetilde{\Phi}_{(t_1,0)} (y_{(0,0)}) >0 $.
\item Let $\widehat{\Phi}_{(0,t_2)}$ be the conformal map of $A_{r_0} \setminus \gamma_2(0,t_2]$ onto $A_{r_{t_2}}$ with $\widehat{y}_{(0,t_2)} := \widehat{\Phi}_{(0,t_2)} (y_{(0,0)}) >0 $. \item Let $\widetilde{\Phi}_{\tau}$ be the conformal map of $A_{r_{t_2}} \setminus \widehat{\gamma}_1(0,t_1]$ onto $A_{r_{|\tau|}}$ with $\widetilde{y}_{\tau} = \widetilde{\Phi}_{\tau} (\widehat{y}_{(0,t_2)}) >0 $. Here $\widehat{\gamma}_1= \widehat{\Phi}_{(0,t_2)} \circ \gamma_1$ is the image of $\gamma_1(0,t_1]$ under $\widehat{\Phi}_{(0,t_2)}$. Note that, by conformal invariance and the fact that $A_{r_0} \setminus \left( \gamma_1(0,t_1] \cup \gamma_2(0,t_2] \right)$ has modulus $p-|\tau|$, the domain $A_{r_{t_2}} \setminus \widehat{\gamma}_1 (0,t_1]$ has modulus $p-|\tau|$. \item Let $\widehat{\Phi}_{\tau}$ be the conformal map of $A_{r_{t_1}} \setminus \widetilde{\gamma}_2(0,t_2]$ onto $A_{r_{|\tau|}}$ with $\widehat{y}_{\tau} = \widehat{\Phi}_{\tau} (\widetilde{y}_{(t_1,0)}) >0 $. Here $\widetilde{\gamma}_2= \widetilde{\Phi}_{(t_1,0)} \circ \gamma_2$ is the image of $\gamma_2 (0,t_2]$ under $ \widetilde{\Phi}_{(t_1,0)}$. Note that, by conformal invariance and the fact that $A_{r_0} \setminus \left( \gamma_1(0,t_1] \cup \gamma_2(0,t_2] \right)$ has modulus $p-|\tau|$, the domain $A_{r_{t_1}} \setminus \widetilde{\gamma}_2 (0,t_2]$ has modulus $p-|\tau|$. \item Let $\Phi_{\tau} $ be the conformal map of $A_{r_0} \setminus \left( \gamma_1 (0,t_1] \cup \gamma_2 (0,t_2] \right)$ onto $A_{r_{|\tau|}}$ with $y_{\tau} := \Phi_{\tau} (y_{(0,0)})$. \item Let $r_{|\tau|} e^{i \xi_1 (\tau) } = \Phi_{\tau} ( \gamma_1 (t_1))$ and $r_{|\tau|} e^{i \xi_2 (\tau)} = \Phi_{\tau} ( \gamma_2 (t_2))$.
\end{itemize} \begin{figure} \centering \includegraphics[width=12cm]{fig1.png} \caption{Construction of the two-slit Loewner differential equation.} \label{fig:fig1} \end{figure} Note that the only conformal automorphism of an annulus which fixes a point and does not interchange the boundary components is the identity mapping. Hence, we have \begin{equation} \label{Eq3} \Phi_{\tau} = \widehat{\Phi}_{\tau} \circ \widetilde{\Phi}_{(t_1,0)} = \widetilde{\Phi}_{\tau} \circ \widehat{\Phi}_{(0,t_2)} \end{equation} and \[ y_{\tau} = \Phi_{\tau} (y_{(0,0)}) = \widehat{\Phi}_{\tau} (\widetilde{y}_{(t_1,0)}) = \widetilde{\Phi}_{\tau} (\widehat{y}_{(0,t_2)}). \] Then $\widetilde{\Phi}_{\tau}$ satisfies (\ref{eq:LDE_in}) \[ \partial_{t_{1}} \widetilde{\Phi}_{\tau}(w) = \widetilde{\Phi}_{\tau} (w) Q( r_{|\tau|} , y_\tau ,\xi_1 (\tau) , \widetilde{\Phi}_{\tau}(w)). \] By substituting $w=\widehat{\Phi}_{(0,t_{2})}(z)$ and $w=\widehat{y}_{(0,t_2)}$ respectively, and by (\ref{Eq3}) we have \[\partial_{t_{1}} \Phi_{\tau}(z) = \Phi_{\tau} (z) Q( r_{|\tau|} , y_\tau ,\xi_1 (\tau) , \Phi_{\tau}(z))\] and \begin{equation} \label{LDE:tau-t2} \partial_{t_1} \log y_{\tau} = P(r_{|\tau|},y_{\tau}, \xi_1 (\tau)). \end{equation} Similarly, $\widehat{\Phi}_{\tau}$ also satisfies (\ref{eq:LDE_in}), \[ \partial_{t_2}\widehat{\Phi}_{\tau}(w) = \widehat{\Phi}_{\tau}(w) Q( r_{|\tau|} , y_\tau ,\xi_2 (\tau) , \widehat{\Phi}_{\tau}(w)). \] Substituting $w=\widetilde{\Phi}_{(t_{1},0)}(z)$ and $w=\widetilde{y}_{(t_1,0)}$ respectively, we have \[\partial_{t_2}\Phi_{\tau}(z) = \Phi_{\tau}(z) Q( r_{|\tau|} , y_\tau ,\xi_2 (\tau) , \Phi_{\tau}(z))\] and \begin{equation} \label{LDE:tau-t1} \partial_{t_2}\log y_{\tau} = P(r_{|\tau|},y_{\tau},\xi_2 (\tau)). \end{equation} Now let $\gamma_3:[0,T_3] \to \overline{A_{r_0}}$ be another Jordan arc such that $\gamma_3(0,T_3] \cap \gamma_2(0,T_2] = \emptyset$ and $\gamma_3(0,T_3] \cap \gamma_1(0,T_1] = \emptyset$.
Moreover, $\gamma_1,\gamma_2$ and $\gamma_3$ are parametrized such that $A_{r_0} \setminus \left( \gamma_1(0,t_1] \cup \gamma_2(0,t_2] \cup \gamma_3(0,t_3] \right)$ has modulus $p-|\tau|$, where $\tau =(t_1,t_2,t_3)$ and $|\tau| = t_1+t_2+t_3$. A similar construction to the above allows us to find a family of conformal maps \[ \Phi_{\tau}: A_{r_0} \setminus \left( \gamma_1(0,t_1] \cup \gamma_2(0,t_2] \cup \gamma_3(0,t_3] \right) \to A_{r_{|\tau|}}\] with $y_\tau :=\Phi_\tau (y_{(0,0,0)}) >0$, where $y_{(0,0,0)}=y_0$. These satisfy \[\partial_{t_i}\Phi_{\tau}(z) = \Phi_{\tau}(z) Q( r_{|\tau|} , y_\tau ,\xi_i (\tau) , \Phi_{\tau}(z))\] and \[\partial_{t_i}\log y_{\tau} = P(r_{|\tau|},y_{\tau},\xi_i (\tau))\] for $i=1,2,3$. Moreover, suppose that $t_1,t_2,t_3$ are real-valued functions of $s$, that is, $t_1=t_1(s)$, $t_2=t_2(s)$ and $t_3=t_3(s)$. By the chain rule, $\Phi_\tau$ and $y_\tau$ satisfy \begin{equation} \label{MLDE} \partial_{s} \log \Phi_{\tau} (z) = \sum\limits_{i=1}^3 (\partial_s t_i) Q \left( r_{|\tau|},y_\tau, \xi_i (\tau) , \Phi_{\tau} (z) \right) \end{equation} and \begin{equation} \label{MLDE2} \partial_{s}\log y_{\tau} = \sum\limits_{i=1}^3 (\partial_s t_i) P(r_{|\tau|},y_{\tau},\xi_i (\tau) ). \end{equation} \section{Proof of Theorem \ref{MainResult3}} \label{sec:main_proof1} \subsection{Idea of the Proof} \label{subsec:idea_of the proof} Suppose that $y>0$. Let $E\subset\mathbb{D}$ be a closed set with $0\not\in E$ and $|z|\geq y$ for all $z\in E$. Let $g$ be a conformal map of an annulus $A_{r}$ onto $\mathbb{D}\setminus E$. By further composing with a rotation, we can assume that $g^{-1}(0) > 0$. We need to show that $g^{-1}(0)\geq y$. We first consider the case where $E$ is the union of a circular arc $L$ (with radius $y$ centred at $0$) and a Jordan arc starting from $L$. Proposition \ref{dense} will allow us to obtain the general case by an approximation argument.
Denote by $f$ the conformal map of the annulus $A_{{r}_{0}}$ onto a circularly slit domain $\mathbb{D}\setminus L$ which maps a point $y\in A_{r_{0}}$ with $y>0$ to $0$. Let $\widetilde{\gamma}$ be a Jordan arc growing from the circular arc $L$. In other words, $\widetilde{\gamma} : [0,T] \to \mathbb{D}$ is a Jordan arc satisfying $\widetilde{\gamma} (0) \in L$ and $\widetilde{\gamma} (0,T] \subset \Omega$, such that $\Omega \setminus \widetilde{\gamma} (0,t]$ has modulus $-\log r_t = -(\log r_0)-t$. Now, we let $\gamma = f^{-1} \circ \widetilde{\gamma}$. Let $y_0=y$ and we then define $y_{t}$ and $\beta(t)$ as in Section \ref{sect:prelim:subsect:LDE}. Now let $f_{t}$ be the conformal map of the annulus $A_{{r}_{t}}$ onto a circularly slit domain $\mathbb{D} \setminus L_t$, where $L_t$ is a circular arc, which maps the point $y_{t}\in A_{r_{t}}$ to $0$. The maps $f_{t}$ can each be extended continuously to the inner circle $C_{r_{t}}$ of $A_{r_{t}}$ and $f_{t}$ maps $C_{r_{t}}$ onto the circular arc $L_t$. The preimages under $f_{t}$ of the two endpoints of the circular arc $L_t$ then partition $C_{r_{t}}$ into two circular arcs which are symmetric under the transformation $z\mapsto \overline{z}$ according to Lemma \ref{lemma:sym}. We call these circular arcs $\Gamma^{+}_{t}$ and $\Gamma^{-}_{t}$ respectively, where $\Gamma_{t}^{+}$ is the circular arc which intersects the negative real axis. It can be shown that $\beta(t)\in\Gamma^{+}_{t}$ for all $t \in [0,T]$ implies that $y_{t}$ is strictly increasing. Similarly, $\beta(t)\in\Gamma^{-}_{t}$ for all $t \in [0,T]$ implies that $y_{t}$ is strictly decreasing. It would thus be sufficient to show that if $|\widetilde{\gamma}(t)|>y$, then $\beta(t)\in\Gamma^{+}_{t}$ for all $t \in [0,T]$. However, in general, this may not be the case. 
The idea of our method is as follows: Since $|\widetilde{\gamma}(t)|>y$, we can extend the length of $L$ to a longer circular arc $L^{*}_0$ without ever intersecting $\widetilde{\gamma}$. As the curve $\widetilde{\gamma}$ grows from $L^{*}_0$, we simultaneously shrink $L^{*}_{0}$ to $L^{*}_t$ such that at time $T$, $L^{*}_T$ coincides with $L$. By choosing a suitable rate at which $L^{*}_t$ shrinks to $L$ from each end of the circular arc $L^{*}_{0}$, we will be able to show that the preimage of 0 at each $t$, $y^{*}_{t}$, is now strictly increasing. This will prove the desired result since $y_{T}^{*}=y_{T}$ (as $L^{*}_{T}=L$ ) and $y_{0}^{*}=y_{0}$. The second equality follows from the fact that extending the length of $L$ to get $L^{*}_0$ does not change the preimage of $0$ by Lemma \ref{Lemma1}. It is for this part of the argument that we will need to use the three-slit Loewner differential equation from Section \ref{subsec:Prelim-Muti}: the three slits will be the curve $\gamma$, the circular arc in the clockwise direction and the circular arc in the anticlockwise direction. The rest of this section provides the formal construction of the above argument. \subsection{Properties of $P(r,y,\theta)$ in the Loewner-Type Differential Equation} \label{sect:main_cal} In this subsection, we study properties of the differential equation in (\ref{LDE}). The following lemma gives some properties of $P(r,y,\theta )$ that we will need. \begin{lemma} \label{lem:P} For $y \in (r,1)$, we have \begin{enumerate} \item $P(r,y,\theta)= P(r,y,2\pi-\theta)$ for $\theta \in [0,2\pi]$. \item $P(r,y,\theta)$ is increasing in $\theta$ for $\theta \in [0,\pi]$. \item $P(r,y,\theta)$ is decreasing in $\theta$ for $\theta \in [\pi,2\pi]$. 
\end{enumerate} \end{lemma} \begin{proof} As $y>0$, we can see that \[ 1 - \lim\limits_{N \to \infty} \sum\limits_{n=-N}^N \dfrac{r^{2n-1} y + e^{i (2\pi-\theta) }} { r^{2n-1} y - e^{i (2\pi -\theta )} } = \overline{ 1 - \lim\limits_{N \to \infty} \sum\limits_{n=-N}^N \dfrac{r^{2n-1} y + e^{i \theta }} { r^{2n-1} y - e^{i \theta } } }. \] Taking real parts of the above equation proves part 1. To prove part 2, we write $r=e^{-p}$, $p>0$, and define \[ A (z; p) = \prod\limits_{k=1}^\infty \left( 1- e^{-2kp} \right) \left( 1- e^{-(2k-1)p+iz} \right) \left( 1- e^{-(2k-1)p-iz} \right). \] This function $A$ is related to the Jacobi theta function $\vartheta_4$ defined in Section 13.19 of \cite{Book:theta}. Direct calculations show that \begin{align*} P(r,y,\theta) &= 2 \mathrm{Im} \left( \frac{ A' (\theta + i \ln y ; p)}{ A (\theta + i \ln y ; p)} \right) \end{align*} where $A'(z;p) = \partial_z A(z;p)$. It then follows that \begin{align*} \partial_\theta P(r,y,\theta) &= 2 \mathrm{Im} \left( \frac{ A'' (\theta + i \ln y ; p) A (\theta + i \ln y ; p) - \left( A' (\theta + i \ln y ; p) \right)^2 }{ \left( A (\theta + i \ln y ; p) \right)^2} \right). \end{align*} Define \[ G_1 (z;p) = \frac{ A'' (z ; p) A (z ; p) - \left( A' (z ; p) \right)^2 }{ \left( A (z ; p) \right)^2}. \] Then $G_1 (z;p)$ is an elliptic function (of $z$) with periods $2\pi$ and $2ip$. Note that $G_1$ has poles of order $2$ at $z=2n\pi + (2m-1)ip$ for any $n,m\in \mathbb{Z}$ and these are the only poles of $G_1$. Let $\wp$ be the Weierstrass $\wp$ function with periods $2\pi$ and $2ip$. It follows that there exists a constant $c_1$ such that $G_2 (z):=G_1 (z;p)-c_1 \wp (z + ip)$ has no poles on $\mathbb{C}$. Thus $G_2 (z) = c_2$ for some constant $c_2$ by Liouville's theorem. This means that we have \[ G_1 (z;p) = c_1 \wp (z + ip) + c_2 \] for some constants $c_1 ,c_2$.
By considering the Laurent series expansions of $G_1$ and $\wp$ and comparing coefficients, we find that $c_1 =-1$ and $c_2$ is real, that is, \[ G_1 ( z ; p ) = - \wp \left( z+ ip \right) + c \] for some real constant $c$. (For details, see the derivation of equation (3.3) in Dixit and Solynin \cite{Paper:kernel}.) Hence, \begin{align*} \partial_\theta P(r,y,\theta) &= - 2\mathrm{Im} \left( \wp \left( z +ip \right) \right) \end{align*} where $z = \theta + i \ln y$. Note that $\wp $ maps the interior of the rectangle $R$ with vertices $z_1= 0$, $z_2= -ip$, $z_3= \pi - ip$ and $z_4= \pi$ conformally onto the lower half plane $\mathcal{H}=\lbrace z \in \mathbb{C} \: : \: \mathrm{Im} (z)<0 \rbrace$ and maps $\partial R$ injectively onto the real line (see, for example, Section 13.25 of \cite{Book:theta}). Then, for any fixed $p = - \ln r$, for any $\theta \in (0, \pi)$ and for any $y \in (r,1)$, the point $z= \theta + i \ln y$ lies in the interior of $R$ and hence $\partial_\theta P(r,y,\theta) >0$. Also, when $\theta=0$ or $\theta= \pi$, the point $z= \theta + i \ln y$ lies on $\partial R$ and hence $\partial_\theta P(r,y,\theta) =0$. This proves part 2. Part 3 follows from parts 1 and 2. \end{proof} As a consequence of Lemma \ref{lem:P}, we have the following lemma. \begin{lemma} \label{lem:theta} Let $y \in (r,1)$. Suppose $\theta_1,\theta_2 \in [0,2\pi)$ satisfy $|\pi-\theta_1| \leq |\pi-\theta_2|$. Then we have \[ P(r,y,\theta_1) \geq P(r,y,\theta_2). \] \end{lemma} \begin{proof} When $\pi\leq\theta_{1}\leq\theta_{2}\leq2\pi$ or $0\leq\theta_{2}\leq\theta_{1}\leq \pi$, the result is a direct consequence of parts 2 and 3 of Lemma \ref{lem:P}. The remaining cases reduce to these using part 1 of Lemma \ref{lem:P}. \end{proof} \subsection{The key result} \label{subsec:proof:yt} Let $\Omega$ be a circularly slit disk $\mathbb{D} \setminus L$ where $L$ is a circular arc centered at $0$ with radius $y_0$ for some $y_0 \in (0,1)$.
Let $f$ be a conformal map of $A_{r_0}$ onto $\Omega$ such that $\left| f(z) \right| \to y_0$ as $\left| z \right| \to r_0$. By Lemma \ref{Lemma1}, we have $|f^{-1}(0)|=y_0$. By composing $f$ with a rotation if necessary, we can assume without loss of generality that $f^{-1}(0)=y_0$. Suppose that $\widetilde{\gamma} : [0,T] \to \mathbb{D}$ is a Jordan arc satisfying $\widetilde{\gamma} (0) \in L$ and $\widetilde{\gamma} (0,T] \subset \Omega$, such that $\Omega \setminus \widetilde{\gamma} (0,t]$ has modulus $-(\log r_0)-t$ and let $\gamma = f^{-1} \circ \widetilde{\gamma}$. \begin{figure} \centering \includegraphics[width=12cm]{fig2.png} \caption{Construction of $\Phi_{\tau}$.} \label{fig:fig2} \end{figure} Let $\gamma_+: [0,T_+] \to \overline{A_{r_0}}$ be the Jordan arc such that $f\circ \gamma_+$ starts from an endpoint of the circular arc $L$ and extends $L$ along the circular arc in the anticlockwise direction, i.e. $|f\circ \gamma_+ (t)|=y_0$ for all $t \in [0,T_+]$ and $f\circ \gamma_+ [0,T_+] \cap L = f\circ \gamma_+ (0)$. Similarly, let $\gamma_-: [0,T_-] \to \overline{A_{r_0}}$ be the Jordan arc such that $f\circ \gamma_-$ starts from an endpoint of the circular arc $L$ and extends $L$ along the circular arc in the clockwise direction, i.e. $|f\circ \gamma_- (t)|=y_0$ for all $t \in [0,T_-]$ and $f\circ \gamma_-[0,T_-] \cap L = f\circ \gamma_- (0) \neq f\circ \gamma_+ (0)$. Moreover, $\gamma$, $\gamma_-$ and $\gamma_+$ are parametrized such that $A_{r_0} \setminus \left( \gamma(0,t_1] \cup \gamma_+(0,t_2] \cup \gamma_-(0,t_3] \right)$ has modulus $p-|\tau|$, where $\tau= (t_1,t_2,t_3)$ and $|\tau|=t_1+t_2+t_3$. Since, by assumption, $| \tilde{\gamma}(t)| > y_0$ for all $t \in (0,T]$, we have that $\widetilde{\gamma}$ does not intersect the circle of radius $y_{0}$. Hence $\gamma (0,T] \cap \gamma_-(0,T_-] =\emptyset$ and $\gamma (0,T] \cap \gamma_+ (0,T_+] =\emptyset$. We can also assume that $\gamma_- [0,T_-] \cap \gamma_+ [0,T_+] =\emptyset$. 
Define $r_{|\tau|}$ and the conformal maps \[ \Phi_{\tau} : A_{r_0} \setminus \left( \gamma (0,t_1] \cup \gamma_+ (0,t_2] \cup \gamma_- (0,t_3] \right) \to A_{r_{|\tau|}} \] as in Section \ref{subsec:Prelim-Muti}, where the three curves are $\gamma,\gamma_{-},\gamma_{+}$. Let $y_{\tau}=\Phi_{\tau}(y_{0})$. In particular, for $\tau=(0,0,0)$, \[ y_{(0,0,0)} = \Phi_{(0,0,0)} (y_0)= \mathrm{id} (y_0) = y_0.\] Also let $\beta(\tau)=\mathrm{Arg} \left( \Phi_{\tau}(\gamma(t_{1})) \right) \in [0,2\pi )$, $\xi_{+}(\tau)=\mathrm{Arg} \left( \Phi_{\tau}(\gamma_{+}(t_{2})) \right) \in [0,2\pi)$ and $\xi_{-}(\tau)=\mathrm{Arg} \left( \Phi_{\tau}(\gamma_{-}(t_{3})) \right) \in [0,2\pi)$. Now suppose that $a(s)$ is a real-valued differentiable function for $s\in[0,T]$ such that $\partial_s a(s)\in[0,1]$ for all $s\in[0,T]$. From now on, we assume that $\tau $ is a function of $s$ of the form \begin{equation} \tau (s) =(s, t_{+}(s) , \ t_{-}(s) )\label{tau} \end{equation} for $s \in[0,T]$, where $t_{+}(s)=(T-a(T))-(s-a(s))$ and $t_{-}(s)=a(T)-a(s)$, so that $|\tau (s)| \equiv T$. This construction is illustrated in Figure \ref{fig:fig2}. Since $\partial_{s}a(s)\in[0,1]$, both $a(s)$ and $s-a(s)$ are non-decreasing and hence $t_{+}(s)$ and $t_{-}(s)$ are non-negative and non-increasing for $s\in [0,T]$. The function $a(s)$ controls the rate at which the circular arc \[L\cup (f\circ\gamma_{+})[0,T_{+}]\cup (f\circ\gamma_{-})[0,T_{-}]\] shrinks from its clockwise and anticlockwise ends. In the following lemma, we choose a particular function $a(s)$ that will enable us to apply Lemma \ref{lem:theta}. \begin{lemma} \label{lemma:Fixing} There exists a real-valued differentiable function $a^{*}(s)$ with $0 \leq \partial_s a^*(s) \leq 1$ and $a^{*}(0) \geq 0$ such that, defining $\tau (s)$ as in equation (\ref{tau}) with $a(s)=a^{*}(s)$, we have \[ 2\pi -\xi_+(\tau (s) ) = \xi_{-}(\tau (s) ) \] for all $s \in [0,T] $.
\end{lemma} \begin{proof} Recall equation (\ref{MLDE}), \[ \partial_{s} \log \Phi_{\tau} (z) = \sum\limits_{i=1}^3 (\partial_s t_i) Q \left( r_{|\tau|},y_\tau, \xi_i (\tau) , \Phi_{\tau} (z) \right) . \] As $|\tau (s) |=T$ and noting that $s, t_{+}(s),t_{-}(s)$ are real-valued, taking imaginary parts on both sides of this equation yields \begin{equation*} \partial_{s} \mathrm{Arg} \left( \Phi_{\tau (s) } (z) \right) = H \left( r_{T},y_{\tau (s) }, \Phi_{\tau (s) } (z), \theta (s) , \partial_s a (s) \right) \end{equation*} where $\theta(s)= \left(\beta(\tau (s) ),\xi_{+}( \tau (s) ),\xi_{-}( \tau (s) ) \right)$ and \[ H(r,y,w,\theta,\lambda) = I(r,y,w,\theta_{1}) - (1-\lambda) I(r,y,w,\theta_2) - \lambda I(r,y,w,\theta_3) \] for $\theta=(\theta_{1},\theta_{2},\theta_{3})$. First of all, notice that when $s=0$, \[ \Omega_0 = \mathbb{D} \setminus \left( L \cup \left( f \circ \gamma_{+} (0,t_{+}(0)] \right) \cup \left( f \circ\gamma_{-} (0,t_{-}(0) ] \right) \right) \] is a circularly slit domain, and $r_T e^{i \xi_{+}(\tau(0))}$ and $r_T e^{i \xi_{-}(\tau(0))}$ are mapped to the end points $ f ( \gamma_{+} (t_{+}(0) ))$, $f (\gamma_{-} ( t_{-}(0) ) )$ of the circular slit under the conformal map $f \circ \Phi^{-1}_{\tau(0)} $. By applying a rotation to $\Omega_{0}$, Lemma \ref{lemma:sym} implies that \begin{equation*} \label{initialcondition} \xi_{+}(\tau(0))= 2\pi- \xi_-(\tau (0)). \end{equation*} Suppose that $\epsilon>0$ is sufficiently small and $s\in(0,T-\epsilon]$. Note that both $\gamma_{+}(t_{+}( s+ \epsilon ))$ and $\gamma_{-}(t_{-}(s+\epsilon ))$ have two preimages under the boundary extension of $\Phi_{\tau (s)}^{-1}$. We define $u (s) $ and $v (s)$ to be the preimages under $\Phi_{\tau (s)}^{-1}$ of $\gamma_{+}(t_{+}( s+ \epsilon ))$ and $\gamma_{-}(t_{-}(s+\epsilon ))$ respectively, chosen such that \[ \mathrm{Arg} (u(s)),\mathrm{Arg} (v (s))\in (\xi_{-}(\tau (s)),\xi_{+}(\tau (s))) \subset [0,2\pi).
\] Again, by applying a rotation to $\Omega_{0}$, Lemma \ref{lemma:sym} implies that \begin{equation} \label{initialcondition2} \mathrm{Arg} (u(0)) = 2\pi - \mathrm{Arg} (v (0)) . \end{equation} Then by the chain rule, \begin{align*} & \partial_{s} \mathrm{Arg} \left( u(s) \right) \\ = & H(r_{T},y_{\tau (s)},u(s),\theta(s), \partial_s a(s) )+ \mathrm{Im}\left[ \frac{\Phi_{\tau (s)}'(\gamma_{+}(t_{+}(s+\epsilon ) ))}{\Phi_{\tau (s)}(\gamma_{+}(t_{+}(s+\epsilon )))}\partial_{s} [\gamma_{+}(t_{+}(s+\epsilon))] \right] \\ & \partial_{s} \mathrm{Arg} \left( v(s) \right) \\ = & H(r_{T},y_{\tau (s)},v(s),\theta(s), \partial_s a(s) )+ \mathrm{Im}\left[ \frac{\Phi_{\tau (s)}'(\gamma_{-}(t_{-}(s+\epsilon ) ))}{\Phi_{\tau (s)}(\gamma_{-}(t_{-}(s+\epsilon )))}\partial_{s} [\gamma_{-}(t_{-}(s+\epsilon))] \right] \end{align*} Note that $\Phi_{\tau (s)} (z)$ is locally $2$ to $1$ at $\gamma_{+} ( t_{+} (s) )$ and $\gamma_{-} ( t_{-} (s) )$. Hence, $\Phi_{\tau (s)}'(\gamma_{+}(t_{+}(s)))=0$ and $\Phi_{\tau (s)}'(\gamma_{-}(t_{-}(s)))=0$. So for small enough $\epsilon$, $\Phi_{\tau (s)}'(\gamma_{+}(t_{+}(s+\epsilon )))$ and $\Phi_{\tau (s)}'(\gamma_{-}(t_{-}(s+\epsilon )))$ are bounded. Also, note that $\Phi_{\tau (s) } (z) \neq 0 $ near $\gamma_{-} (0, t_{-} (s)] \cup \gamma_{+} (0,t_{+} (s) ]$. Thus, \[ \mathrm{Im}\left[ \frac{\Phi_{\tau (s) }'(\gamma_{+}(t_{+}(s+\epsilon )))}{\Phi_{\tau (s)}(\gamma_{+}(t_{+}(s+\epsilon )))}\partial_{s} [\gamma_{+}(t_{+}(s+\epsilon ))] \right] \] and \[ \mathrm{Im} \left[ \frac{\Phi_{\tau (s)}'(\gamma_{-}(t_{-}(s+\epsilon)))}{\Phi_{\tau (s)}(\gamma_{-}(t_{-}(s+\epsilon )))} \partial_{s}[\gamma_{-}(t_{-}(s+\epsilon ))] \right] \] are bounded. Also, note that $H(r,y,w,\theta,\lambda)$ is continuous with respect to each variable for $w \neq re^{i\theta_1},re^{i\theta_2},re^{i\theta_3}$. When $\lambda=0$, $H(r,y,w,\theta,0)$ is bounded near $w= re^{i\theta_3}$, and has a simple pole at $w=re^{i\theta_2}$. 
The pole at $w=re^{i\theta_2}$ arises from the expression $\mathrm{Im} \left( \frac{w+re^{i\theta_2}}{w-re^{i\theta_2}} \right)$ coming from the term $-I(r,y,w,\theta_{2})$ in the definition of $H(r,y,w,\theta,\lambda)$. In particular, the pole at $w=re^{i\theta_2}$ has residue $2$. Hence, for any given $M>0$, we can find some $\epsilon>0$ such that $H(r_{T},y_{\tau (s)},u(s),\theta(s),0) < -M$, since $\mathrm{Arg} (u(s)) \in (\xi_{-}(\tau (s)),\xi_{+}(\tau (s)))$ so that $\mathrm{Arg} (u(s))$ approaches the pole at $\mathrm{Arg}(w)=\xi_{+}(\tau(s))$ from the left. This implies that, when $\partial_s a(s)=0$, \[ \partial_{s} \mathrm{Arg} \left( u(s) \right) +\partial_{s} \mathrm{Arg} \left( v(s) \right) \rightarrow -\infty \text{ as } \epsilon\rightarrow 0.\] Similarly, when $\lambda=1$, $H(r,y,w,\theta,1)$ is bounded near $w= re^{i\theta_2}$, and has a simple pole at $w=re^{i\theta_3}$. Again, the pole at $w=re^{i\theta_3}$ has residue $2$. Hence, for any given $M>0$, we can find some $\epsilon>0$ such that $H(r_{T},y_{\tau (s)},v(s),\theta(s),1) >M$, since $\mathrm{Arg} (v(s)) \in (\xi_{-}(\tau (s)),\xi_{+}(\tau (s)))$ so that $\mathrm{Arg} (v(s))$ approaches the pole at $\mathrm{Arg}(w)=\xi_{-}( \tau (s) )$ from the right. This implies that, when $\partial_s a(s)=1$, \[ \partial_{s} \mathrm{Arg} \left( u(s) \right) +\partial_{s} \mathrm{Arg} \left( v(s) \right) \rightarrow \infty \text{ as } \epsilon\rightarrow 0.\] Consequently, the intermediate value theorem implies that, for each $s\in[0,T]$, we can find $\lambda_{\epsilon}(s)\in [0,1]$ such that, taking $\partial_s a(s)=\lambda_{\epsilon}(s)$, \[ \partial_{s} \mathrm{Arg} \left( u(s) \right) +\partial_{s} \mathrm{Arg} \left( v(s) \right) =0.\] Hence, with \[a(s)=\int_{0}^{s}\lambda_{\epsilon}(u) \, du,\] and using equation (\ref{initialcondition2}), we have \begin{equation} \label{eq:fixed} \mathrm{Arg} \left( u(s) \right)=2\pi-\mathrm{Arg} \left( v(s) \right) \text{ for all } s. \end{equation} Let $\lambda^{*}(s) $ be the pointwise limit of $\lambda_\epsilon (s)$ as $\epsilon \to 0$.
For all $s\in [0,T]$, $0 \leq \lambda_{\epsilon}(s) \leq 1$ and hence $0\leq \lambda^* (s)\leq 1$. Moreover, $\lambda^* (s)$ is integrable by the dominated convergence theorem. Define \[ a^* (s) = \int_0^s \lambda^* (u) \, du. \] Then with $a(s)=a^{*}(s)$ and using equation (\ref{eq:fixed}), we have \[ \xi_{+}(\tau (s))=2\pi-\xi_{-}(\tau (s)) \text{ for all } s.\] \end{proof} As a consequence of Lemma \ref{lemma:Fixing}, we obtain the following key result. \begin{proposition} \label{thm:key} If $| \widetilde{\gamma}(t)| > y_0$ for all $t \in (0,T]$, we have $y_T > y_0$. \end{proposition} \begin{proof} Let $a(s)=a^*(s)$ where $a^{*}(s)$ is given in Lemma \ref{lemma:Fixing}. When $s=0$, we have \[ y_{\tau (0)}=y_0 \] by Lemma \ref{Lemma1}, as $L \cup f\circ\gamma_{+} [0,T-a(T)+a(0)] \cup f\circ\gamma_{-} [0,a(T)-a(0)]$ is a circular arc. When $s=T$, we have $y_{(T,0,0)}=y_T$. So it suffices to show that $\partial_{s}\log y_{\tau (s)} >0$. By construction, we have \[ 2\pi - \xi_{-} (\tau (s)) = \xi_{+} (\tau (s)) \] for all $s \in [0,T] $. Note that we have $|\pi -\beta (\tau (0))| < |\pi - \xi_{-} (\tau (0))|=|\pi - \xi_{+} (\tau (0))|$. Since $\beta(\tau (s))$ lies strictly between $\xi_{-} (\tau (s))$ and $\xi_{+} (\tau (s))$, it follows that $|\pi -\beta (\tau (s))| < |\pi - \xi_{-} (\tau (s)) |=|\pi - \xi_{+} (\tau (s))|$ for all $s \in [0,T]$. Now note that equation (\ref{MLDE2}) can be rewritten as \begin{align*} \partial_{s}\log y_{\tau (s)} =& \left(1-\partial_s a \right) \left( P(r_{T},y_{\tau (s)}, \beta (\tau (s)) )- P(r_{T},y_{\tau (s)}, \xi_{+} (\tau (s))) \right) \\ & +\partial_s a \left( P(r_{T},y_{\tau (s)}, \beta (\tau (s)) )- P(r_{T},y_{\tau (s)}, \xi_{-} (\tau (s))) \right). \end{align*} As $0 \leq \partial_s a(s) \leq 1$ for all $s\in [0,T]$, Lemma \ref{lem:theta} implies that \[\partial_{s}\log y_{\tau (s)} >0.\] The result follows. \end{proof} The above proposition allows us to prove Theorem \ref{MainResult3}. \begin{proof}[Proof of Theorem \ref{MainResult3}.]
Define $E_{\min}= \left\lbrace z \in E : |z| \leq |w| \mbox{ for all $w\in E$} \right\rbrace $, so that $E_{\min}$ consists of all the points in $E$ closest to the origin. Thus, $E_{\min}$ is contained in the circle centered at $0$ of radius $y_{0}$ for some $y_{0} \geq y$, and clearly $E_{\min} \subset \partial E$. Note that either $E_{\min} =E$ or $E_{\min} \subsetneq E$. If $E_{\min}=E$, then the connected set $E$ is a circular arc centered at $0$. In this situation, Lemma \ref{Lemma1} implies that $|g^{-1}(0)|=y_{0}$. If $E_{\min} \subsetneq E$, then there are two subcases: either there exists a connected component $L$ of $E_{\min}$ containing more than one point, or every connected component of $E_{\min}$ is a single point. Suppose first that there is a connected component $L$ of $E_{\min}$ containing more than one point. We first assume that $E \setminus L$ is a Jordan arc $\widetilde{\gamma}$. Then we can find $0<r_0<1$ such that $A_{r_0}$ has the same modulus as the domain $\mathbb{D} \setminus L$. Note that $y_0>r_0$, and let $f(\cdot,y_0)$ be the conformal map of $A_{r_0}$ onto $\mathbb{D} \setminus L$ with $f(y_0,y_0)=0$. Define $\gamma : [0,T] \to \overline{A_{r_0}}$ to be the Jordan arc such that $\gamma (0) \in C_{r_0}$ and the image of $\gamma(0,T]$ under $f(\cdot,y_0)$ is $\widetilde{\gamma}$. In addition, $\gamma$ is parametrized such that $A_{r_0} \setminus \gamma (0,t]$ has modulus $-(\log r_0)-t$. We define $y_t$ as in Section \ref{sect:prelim:subsect:LDE}, namely $y_t=\Phi_t (y_0) $ where $\Phi_t$ is a conformal map from $A_{r_0} \setminus \gamma (0,t]$ onto $A_{r_t}$. Note that $\Phi_T=\Phi_{\tau (T)}$ and hence $y_T=y_{\tau (T)}$. Then by Proposition \ref{thm:key}, $y_T > y_0$. Since $g^{-1}(0)=y_T$, we have $|g^{-1}(0)| > y_0$. The case where $E \setminus L$ is not a Jordan arc follows from part 2 of Proposition \ref{dense}. The final case, where every connected component of $E_{\min}$ is a single point, follows from the previous case by letting the arc length of $L$ shrink to $0$.
In all cases, $|g^{-1}(0)| \geq y_{0}\geq y$. \end{proof} \section{Proof of the Main Result} \label{sect:proof} Recall that the squeezing function is defined to be \[ S_{A_r} (z) = \sup\limits_{f \in \mathcal{F}_{A_r}(z)} \left\lbrace a \: : \: \mathbb{D}_{a} \subset f(A_r ) \subset \mathbb{D} \right\rbrace \] where \[ \mathcal{F}_{A_{r}}(z)=\left\lbrace f : \mbox{$f$ is a conformal map from $A_r$ to $\mathbb{C}$ such that $f(z)=0$} \right\rbrace. \] To simplify notation, we write $S_{r}(z)=S_{A_{r}}(z)$ and $\mathcal{F}_{r}(z)=\mathcal{F}_{A_{r}}(z)$. We have the following corollary of Theorem \ref{MainResult3}. \begin{corollary} \label{reduce2} Let \[ \widetilde{\mathcal{F}}_{r}(z) = \left\lbrace f \in \mathcal{F}_{r}(z) : f(A_r) \subset \mathbb{D}, f(\partial \mathbb{D})=\partial \mathbb{D} \right\rbrace \] and define \[ \widetilde{S}_{r}(z) = \sup\limits_{f \in \widetilde{\mathcal{F}}_{r}(z)} \left\lbrace a \: : \: \mathbb{D}_{a} \subset f(A_r ) \right\rbrace. \] Then \[ \widetilde{S}_{r}(z) = |z| . \] \end{corollary} \begin{proof} The conformal map $f$ of $A_{r}$ onto a circularly slit disk with $z$ mapping to $0$ is in $\widetilde{\mathcal{F}}_{r}(z)$. Lemma \ref{Lemma1} implies that $\widetilde{S}_r(z) \geq |z|$. Now suppose that we can find $f^{*}\in \widetilde{\mathcal{F}}_{r}(z)$ such that $\mathbb{D}_{a^{*}}\subset f^{*}(A_{r})$ for some $a^{*}>|z|$. Let $E$ be the bounded component of the complement of $f^{*}(A_{r})$ in $\mathbb{C}$. Then $|w|\geq a^{*}$ for all $w\in E$. Theorem \ref{MainResult3} implies that $|(f^{*})^{-1}(0)|\geq a^{*}$, which is a contradiction since $(f^{*})^{-1}(0)=z$. \end{proof} It now remains to prove Theorem \ref{MainResult1}. For any bounded doubly-connected domain $\Omega$, $\partial\Omega$ has two connected components: we denote the component that separates $\Omega$ from $\infty$ (i.e. the outer boundary) by $\partial^{o}\Omega$, and the other component (i.e. the inner boundary) by $\partial^{i}\Omega$.
We decompose the family $\mathcal{F}_{r}(z)$ into two disjoint subfamilies \[\mathcal{F}^{1}_{r}(z)=\left\{ f\in \mathcal{F}_r(z): f(\partial\mathbb{D})=\partial^{o}f(A_{r})\right\}\] and \[\mathcal{F}^{2}_{r}(z)=\left\{ f\in \mathcal{F}_r(z): f(\partial\mathbb{D})=\partial^{i}f(A_{r})\right\}.\] $\mathcal{F}^{1}_{r}(z)$ consists of functions that map the outer boundary to the outer boundary; $\mathcal{F}^{2}_{r}(z)$ consists of functions that interchange the inner and outer boundaries. We will consider a squeezing function on each subfamily separately. Define \[ S^{1}_{r} (z) = \sup\limits_{f \in \mathcal{F}^{1}_{r}(z)} \left\lbrace a \: : \: \mathbb{D}_{a} \subset f(A_{r}) \subset \mathbb{D} \right\rbrace \] and \[ S^{2}_{r} (z) = \sup\limits_{f \in \mathcal{F}^{2}_{r}(z)} \left\lbrace a \: : \: \mathbb{D}_{a} \subset f(A_{r} ) \subset \mathbb{D} \right\rbrace. \] Then the squeezing function satisfies $S_{r}(z)=\max\{S^{1}_{r}(z),S^{2}_{r}(z)\}$. \begin{lemma}\label{reduce1} \[S^{1}_{r}(z)=S^{2}_{r}\left(\frac{r}{z}\right).\] \end{lemma} \begin{proof} This follows from the fact that $f\in\mathcal{F}^{1}_{r}(z)$ if and only if $ f \circ \rho \in \mathcal{F}^{2}_{r}(\frac{r}{z})$, where $\rho(z)=\frac{r}{z}$. \end{proof} \begin{proof}[Proof of Theorem \ref{MainResult1}.] By Corollary \ref{reduce2} and Lemma \ref{reduce1}, it is sufficient to prove that $S_{r}^{1}(z)=\widetilde{S}_{r}(z)$. First we note that $\widetilde{\mathcal{F}}_{r}(z)\subset \mathcal{F}_{r}^{1}(z)$ and hence $\widetilde{S}_{r}(z)\leq S^{1}_{r}(z)$. Since $\mathcal{F}^{1}_{r}(z)$ is a normal family of holomorphic functions, it follows easily that we can replace the $\sup$ in the definition of $S^{1}_{r}(z)$ with $\max$. Let $f\in \mathcal{F}^{1}_{r}(z)$ be any function that attains this maximum with corresponding $a$, i.e.
\[\mathbb{D}_{a}\subset f(A_{r})\subset \mathbb{D} \text{ and } a=S^{1}_{r}(z).\] We denote by $\Omega$ the simply-connected domain satisfying $0 \in \Omega$ and $\partial \Omega = \partial^{o}f(A_{r})$. Note that $\mathbb{D}_{a}\subset\Omega\subset\mathbb{D}$. By the Riemann mapping theorem, we can find a conformal map $g$ of $\mathbb{D}$ onto $\Omega$ such that $g(0)=0$. Let $F=g^{-1}\circ f$. Then $F\in \widetilde{\mathcal{F}}_{r}(z)$ and \[ \mathbb{D}_{a}\subset f(A_{r}) \Rightarrow g^{-1}(\mathbb{D}_{a})\subset F(A_{r}).\] In addition, $g$ maps the unit disk into itself and so by the Schwarz lemma, $g( \mathbb{D}_{a} )\subset \mathbb{D}_{a}$. Combining this with the above, we deduce that \[\mathbb{D}_{a}\subset g^{-1}(\mathbb{D}_{a})\subset F(A_{r}).\] Hence $a\leq\sup\{\rho: \mathbb{D}_{\rho}\subset F(A_{r})\}$ which implies that $S^{1}_{r}(z)\leq \widetilde{S}_{r}(z)$. Therefore $S_{r}^{1}(z)=\widetilde{S}_{r}(z)$ as required. \end{proof} \section{The squeezing function on product domains in $\mathbb{C}^{n}$} \label{sect:several} It remains to prove Theorem \ref{thm:several}. \begin{proof}[Proof of Theorem \ref{thm:several}] The squeezing function is scale invariant and hence in the definition of the squeezing function $S_{\Omega_{i}}(z_{i})$, we can restrict the family $\mathcal{F}_{\Omega_{i}}(z_i)$ to \[ \mathcal{F}^{b}_{\Omega_{i}} ( z_i )=\{f\in\mathcal{F}_{\Omega_{i}}(z_{i}):|f(w)|< 1 \text{ for all } w\in\Omega_{i} \}.\] Since $\mathcal{F}^{b}_{\Omega_{i}}(z_i)$ is a normal family, one can easily show that there is a function $f_i$ in $\mathcal{F}^{b}_{\Omega_{i}}(z_i)$ attaining the supremum in the definition of $S_{\Omega_i}(z_i)$. By scaling, we can assume also that $\sup\{ \left| f_{i}(w) \right| : w\in\Omega_{i}\}=1$. Consider $\lambda_i = S_{\Omega_i}^{-1}(z_i)$ and $g(w)=(g_1(w_1) , \cdots , g_n (w_n))$ where $g_i = \lambda_i f_i$. 
Since the $g_i$ are holomorphic and injective for all $i$, $g(w)$ is a holomorphic embedding of $\Omega$ into $\mathbb{C}^n$. Also, $f_{i}(z_{i})=0$ for each $i$ and thus $g\in\mathcal{F}_{\Omega}(z)$. Moreover, since $f_i$ attains the supremum in $S_{\Omega_i}(z_i)$, we have $\mathbb{D}_{\lambda_i^{-1}} \subset f_i (\Omega_i) \subset \mathbb{D}$ and hence $\mathbb{D} \subset g_i (\Omega_i) \subset \mathbb{D}_{\lambda_i}$. It follows that \[\mathbb{D}^{n}\subset g (\Omega) \subset \mathbb{D}_{\lambda_1}\times\cdots\times \mathbb{D}_{\lambda_{n}}.\] However, $B(0;1)\subset \mathbb{D}^{n}$ and $\mathbb{D}_{\lambda_1}\times\cdots\times \mathbb{D}_{\lambda_{n}}\subset B(0;\Lambda)$ where \[\Lambda = \sqrt{\lambda_{1}^{2}+\cdots+\lambda_{n}^{2}}.\] Hence \[B(0;1)\subset g (\Omega) \subset B(0;\Lambda)\] and we deduce that \[ S_{\Omega} (z) \geq \frac{1}{\Lambda}=\left( S_{\Omega_1}^{-2}(z_1) + \cdots + S_{\Omega_n}^{-2}(z_n) \right)^{-1/2}. \] \end{proof} \section{The squeezing function on multiply-connected domains} \label{sect:conjecture} We now discuss some future directions regarding the squeezing function on planar domains of higher connectivity. Let $\Omega \subset \mathbb{C}$ be a finitely connected domain with disjoint boundary components $\gamma_0 , \gamma_1, \ldots, \gamma_n$. As a corollary of conformal equivalence for finitely connected regions \cite{conway2012functions}, every finitely connected domain with non-degenerate boundary components is conformally equivalent to a circular domain (i.e., the unit disk with smaller disks removed). Thus we can assume that $\gamma_{0}$ is the unit circle and $\gamma_{1},\ldots, \gamma_{n}$ are circles contained inside the unit disk. In light of our results, for a fixed $z\in \Omega$, we propose that the function which attains the maximum in the extremal problem \[ \sup\limits_{f \in \mathcal{F}_{\Omega}(z)} \left\lbrace \frac{a}{b} \: : \: \mathbb{D}_a \subset f(\Omega ) \subset \mathbb{D}_b \right\rbrace
\] is given by the conformal map of $\Omega$ onto a circularly slit disk of the same connectivity (i.e. the unit disk with proper arcs of circles centered at $0$ removed). For $j=1,\ldots,n$, let $\mu_{j}$ be a M\"{o}bius transformation that interchanges $\gamma_{j}$ and the unit circle $\partial \mathbb{D}$, and let $\Omega_{j}=\mu_{j}(\Omega)$. We also let $\mu_{0}$ be the identity mapping and $\Omega_{0}=\Omega$. Then, for $j=0,\ldots,n$, let $f_{j}$ denote the conformal map of $\Omega_{j}$ onto a circularly slit disk with $f_{j}(\mu_{j}(z))=0$ and let $\mathrm{Rad}(\Omega_j)$ be the minimum of the radii of the circular arcs in $f_{j}(\Omega_{j})$. Note that $\mathrm{Rad}(\Omega_{j})$ does not depend on the choice of $\mu_{j}$ or $f_{j}$ (by Theorem 6.2 in \cite{conway2012functions}). We make the following conjecture regarding the squeezing function on $\Omega$. \begin{flushleft} \textbf{Conjecture.\:} \textit{ $S_{\Omega} (z) = \max \{ \mathrm{Rad}(\Omega_j) \: : \: j=0,1,\ldots, n\}$.} \end{flushleft} The Schottky-Klein prime function $\omega(\cdot,\cdot)$ can be defined on domains of connectivity $n$ in terms of the Schottky group of $\Omega$ (see \cite{crowdy2011schottky}). By \cite{crowdy2005schwarz}, the same expression as in Theorem \ref{CrowdyConformal}, \[ f( \cdot , z ) = \dfrac{\omega( \cdot ,z )}{|z| \omega( \cdot ,\overline{z}^{-1})}, \] gives the formula for the conformal map of $\Omega$ onto a circularly slit disk mapping $z$ to $0$. In addition, an expression for $\mathrm{Rad}(\Omega)$ is also given in \cite{crowdy2005schwarz}. Recently, B\"ohm and Lauf \cite{bohm} have obtained an expression for a version of the Loewner differential equation on $n$-connected circularly slit disks. Using Crowdy's version of the Schwarz-Christoffel formula for multiply-connected domains (given in \cite{crowdy2005schwarz}), it should be possible to express the Loewner differential equation in terms of the Schottky-Klein prime function.
It is anticipated that the methods in our paper could then be used to prove this conjecture. \bigskip \paragraph{Acknowledgments:} The first author was partially supported by the RGC grant 17306019. The second author was partially supported by a HKU studentship. \bibliographystyle{spmpsci}
\section{Introduction} Understanding the structure and dynamics of complex networks found in nature, society, and elsewhere has been greatly facilitated by recent advances in the physics of networks~\cite{Newman:2003ia,Barabasi:2005}. Fundamental network problems that have garnered interest include the highly skewed (often power-law) degree (connectivity) distributions, the identification of communities or modules in networks, and various critical phenomena and their implications~\cite{Barabasi:1999ef,Newman:2008,Cho08032013}. ``Centrality'' is another frequently studied concept, representing the influence, relevance, or power of a node~\cite{Freeman:1979kg,Newman:2010fk}. Perhaps the best-known example is Google's PageRank of webpages, based on a combination of the topology of the hyperlink network and how well the contents of a webpage match the user search terms (query)~\cite{Page:1998bf}. The idea of ranking the nodes based on their relative strengths or relevance -- which we can view as effectively representing competitions between the nodes -- can be useful in many networks; in fact, in many complex systems -- natural, social, or man-made -- the competition-and-reward mechanism is an essential ingredient for their functions and dynamics. Even our daily lives involve continuous decision making based on competitions or comparisons between alternatives in many contexts, ranging from tasks as mundane as choosing where to dine to those as consequential as making critical political or business decisions. \begin{figure} \includegraphics[width=80mm]{01.eps} \caption{(a) An actual incomplete competition network (middle) can be thought of as an intermediary stage of a competition schedule that starts from an empty network (left) and ends as a complete network (right) when all possible competitions have taken place between nodes.
The natural ranking of nodes, applicable in a complete network, is to be estimated (inferred) from the information of wins and losses available at the incomplete stage. (b) The setup of a single-strength-parameter model for estimating the expected win score of a team from a potential contest. The distribution of strength parameters $\set{\phi}$ can be chosen so that $p_{j\leftarrow i}\equiv\phi_i/(\phi_i+\phi_j)$, the probability that $i$ beats $j$, is fully consistent with Bayes' formula.} \label{fig01} \end{figure} The \textbf{dominance hierarchy} or \textbf{ranking} refers to the linear ordering of things from the strongest to the weakest based on the results of competitions or comparisons. In the case where the things undergo pairwise (one-to-one) competitions, the entire set of competitions can be represented as a directed network where an arrow points from the winner to the loser of the competition (Fig.~\ref{fig01}~(a)). Food webs in ecological systems (with an arrow pointing from a predator to its prey), sport schedules (with an arrow pointing from the winner to the loser of a game), and merchandise preference testing (with an arrow pointing from the preferred merchandise to those not preferred) are common examples of pairwise competition networks. The dominance hierarchy goes by different names -- the ``trophic level'' in ecology, and the ``ranking'' or ``standings'' in sports, for example -- although the underlying concept is identical. In the remainder of this letter, for convenience we use familiar sports terminology, e.g. ranking, contestant (or player or team), win, loss, tie, and so forth. A \textbf{complete competition} is one in which every player competes against everybody else, also called a round robin~(Fig.~\ref{fig01}~(a)). It is represented by a full (complete) network. In such a competition, determining the ranking is the easiest: we can simply rank the players in the decreasing order of their total wins $\set{W}$, i.e. the out-degree.
When there exists a tie (multiple players with the same $W$), we can employ the following \emph{tie breaker}: we consider the reduced round robin among those tied, and rank them according to their wins therein. This can be applied iteratively to obtain the final ranking in a very simple manner. We call the ranking of nodes obtained this way the ``Natural Ranking,'' as it results from the complete and thus the fairest competition -- every player competes against every other. Note that this is applicable to multiple round robins as well, as long as each node pair contests an equal number of times. (Note that no tie may be further broken in some cases, for instance when three teams $i$, $j$, and $k$ have the same total wins ($w_i=w_j=w_k$), and $i$ lost to $j$, $j$ lost to $k$, and $k$ lost to $i$, i.e. $\set{\sigma_{ij},\sigma_{jk},\sigma_{ki}}=\set{1,1,1}$ in adjacency matrix notation.) Despite its simplicity and intuitive nature, natural ranking is often inapplicable, as many real-world competition networks are incomplete; expecting a real-world competition to be complete is perhaps excessively stringent and unnatural in reality for several reasons. First, the cost of a complete competition can be very high even for moderately large systems -- in a network of $n$ contestants, the number of competitions required is ${n\choose 2}\sim O(n^2)$ -- and thus in the case of the popular US college football league of 120 teams, for instance, there may simply not be enough time in a year if one cares for the athletes' health. Second, there may be insurmountable physical constraints, as in an ecological food web where the spatial separation between the habitats of two species may hinder them from interacting directly~\cite{Williams:2000oh}.
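Before moving on, the iterative tie-breaking step of the natural ranking can be made concrete in code. The following is a minimal sketch of our own (the data layout, with \texttt{beats[(p, q)]} true when $p$ beat $q$ in the round robin, is an assumption of the sketch, not part of the letter):

```python
def natural_ranking(players, beats):
    """Natural Ranking of a complete round robin: sort by total wins,
    breaking ties via the reduced round robin among the tied players."""
    wins = {p: sum(beats[(p, q)] for q in players if q != p) for p in players}
    ranking = []
    for w in sorted(set(wins.values()), reverse=True):
        tied = [p for p in players if wins[p] == w]
        if len(tied) == 1:
            ranking.extend(tied)
            continue
        # reduced round robin restricted to the tied players
        sub_wins = {p: sum(beats[(p, q)] for q in tied if q != p) for p in tied}
        if len(set(sub_wins.values())) > 1:
            # the reduced wins separate at least two players: recurse
            ranking.extend(natural_ranking(tied, beats))
        else:
            # unbreakable tie (e.g. a three-team cycle): keep arbitrary order
            ranking.extend(tied)
    return ranking
```

For instance, with four teams where $B$ beat $A$ and $D$, $A$ beat $C$ and $D$, $C$ beat $B$, and $D$ beat $C$, the two tied pairs $\set{A,B}$ and $\set{C,D}$ are broken by their head-to-head results, giving the ranking $B, A, D, C$.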
We can, nevertheless, try to estimate the final natural ranking by imagining that the actual incomplete network we have on hand is merely an intermediary stage of the ``schedule'' of a complete competition that starts from an empty network and ends with a complete one when all competitions have been made (see Fig.~\ref{fig01}~(a)). Then this becomes the problem of inference of a quantity based on currently available information (data), for which the Bayesian framework is one of the most accepted ones in statistics~\cite{MacKay:2003tp}. It can be presented compactly as follows: Labeling $\mathcal{P}(x)$ the current estimate (also called the Prior) of the distribution of a parameter $x$, and $P(\mathcal{D}|x)$ the probability of data $\mathcal{D}$ given $x$ (called the Likelihood), the Bayes formula tells us that one should update $\mathcal{P}(x)$ via \begin{align} C\cdot P(\mathcal{D}|x)\cdot\mathcal{P}(x) \rightarrow \mathcal{P}(x) \label{bayesian} \end{align} where the new $\mathcal{P}(x)$ is called the Posterior, and $C$ is the normalization factor so that $\int\mathcal{P}(x)\d x=1$. Here we use the Bayesian formula~Eq.~\eref{bayesian} to estimate $\{\widetilde{W}\}$, the projected total win score based on an incomplete competition network to obtain the projected natural ranking. As a generalization of the out-degree, $\widetilde{W}_i$ of node $i$ is the sum of two quantities: The number of actual wins thus far (which we call $w_i$) and the expected number of wins from yet-to-be-played games. Since the latter quantity is equal to the sum of the probabilities of winning the games, our goal becomes estimating $p_{ij}=p_{i\leftarrow j}$, our estimation of the probability that $i$ gets defeated by $j$ given the current state of the competition. To decide $p_{i\from j}$ consistent with Eq.~\eref{bayesian}, we consider the following. First, when we have no basis on which to judge the two teams' strengths, e.g. 
when they have not played any game yet, we are maximally ignorant of $p_{i\from j}$. This means that it can be any value with equal probability, i.e. $\mathcal{P}(p_{i\from j})=1$ for $p_{i\from j}\in[0,1]$~\cite{Jaynes:2003ba,MacKay:2003tp}. Now assume that we observe that $i$ loses to $j$, i.e. we have a datum $\mathcal{D}=\set{\sigma_{ij}=1}$. Using Eq.~\eref{bayesian} we have the updated $\mathcal{P}(p_{i\from j}) = C p_{i\from j} \mathcal{P}(p_{i\from j})=2p_{i\from j}$. This step offers the foundation for estimating $\mathcal{P}(p_{i\from j})$ between a yet-to-play node pair at any point in the schedule, reflecting the strengths of each contestant implied by their performance record. To achieve this we introduce a strength parameter $\phi_i\in[0,\infty)$ for each contestant such that $p_{i\from j}$ between two contestants is \begin{align} p_{i\from j} \equiv \frac{\phi_j}{\phi_i+\phi_j}. \label{defpij} \end{align} Using this and the distribution of the strength that we write as $\Phi(\phi)$ we now have \begin{align} \dist{p_{ij}} &\equiv \int_0^{\infty}\int_0^{\infty}\delta\biggl(p_{ij}-\frac{\phi_j}{\phi_i+\phi_j}\biggr)\Phi(\phi_{i})\Phi(\phi_{j})\d\phi_i\d\phi_j. \label{fp} \end{align} The $\Phi(\phi)$ consistent with Eq.~\eref{bayesian} and Bayes' formula can be found as follows. For the flat $\mathcal{P}(p_{i\from j})=1$, we can check using Eq.~\eref{fp} that $\Phi(\phi)=\ex{-\phi}$ for both $\phi_i$ and $\phi_j$ reproduces it. When we observe $\sigma_{ij}=1$, we have $\mathcal{P}(p_{i\from j})=2p_{i\from j}$, which is satisfied by the following changes: \begin{align} \Phi(\phi_i) \to \ex{-\phi_i}~~~\textrm{and}~~~\Phi(\phi_j) \to \phi_j\ex{-\phi_j}, \label{newcP} \end{align} agreeing with the intuition that $\phi_j$ is likely larger than $\phi_i$, as $\sigma_{ij}=1$ implies that $j$ is likely stronger than $i$. This procedure can be repeated to find a general pattern. Assume now that $j$ (with the one win against $i$) competes against $k$ that has no win, i.e.
$\Phi(\phi_k)=\ex{-\phi_k}$. We use $\Phi(\phi_j)$, $\Phi(\phi_k)$, and Eqs.~\eref{defpij}~and~\eref{fp} to find the prior $\mathcal{P}(p_{kj})=2p_{kj}$ between $j$ and $k$. Then using Eq.~\eref{bayesian} again we have the following two possible updates: \begin{align} \mathcal{P}(p_{kj}) \leftarrow \begin{cases} C\cdot p_{kj}\cdot 2p_{kj} = 3p_{kj}^{2}, &\mbox{if $\sigma_{kj}=1$,} \\ C\cdot(1-p_{kj})\cdot 2p_{kj} = 6(1-p_{kj})p_{kj}, &\mbox{if $\sigma_{jk}=1$.} \end{cases} \label{cPupdatefar} \end{align} In a fashion similar to Eq.~\eref{newcP}, the following update rules for the $\Phi$s are consistent with Eq.~\eref{cPupdatefar} for \emph{the winner}, while no change is necessary for the loser: \begin{align} \Phi(\phi_{j}): \phi_{j}\ex{-\phi_{j}} \to \frac{\phi_{j}^{2}\ex{-\phi_{j}}}{2},~\mbox{if $\sigma_{kj}=1$} \nonumber \\ \Phi(\phi_{k}): \ex{-\phi_{k}} \to \phi_{k}\ex{-\phi_{k}},~\mbox{if $\sigma_{jk}=1.$} \end{align} Generally, at a point in the schedule when a player has gathered $w$ wins, its $\Phi(\phi)$ is given as \begin{align} \Phi(\phi;w) = \frac{\phi^{w}\ex{-\phi}}{w!}. \label{Phiphi} \end{align} Using this and Eq.~\eref{fp}, the $\mathcal{P}(p_{i\from j})$ between two teams with $w_i$ and $w_j$ actual wins is \begin{align} \mathcal{P}(p_{i\from j})\bigl|_{w_{i},w_{j}} = \frac{\Gamma(w_{i}+w_{j}+2)}{\Gamma(w_{i}+1)\Gamma(w_{j}+1)}(1-p_{i\from j})^{w_{i}}p_{i\from j}^{w_{j}}, \end{align} from which we have the following simple expected win score gain for $i$: \begin{align} \av{\Delta\widetilde{W}_i}=\av{p_{j\from i}}=\int_{0}^{1}p_{j\from i}\mathcal{P}(p_{j\from i})\,\d p_{j\from i}=\frac{w_{i}+1}{w_{i}+w_{j}+2}.
\end{align} Finally, at any given point in the competition, the expected final win score for team $i$ is \begin{align} \widetilde{W}_{i} &=\sum_{j\in\Omega_i}\sigma_{ji}+\sum_{j\notin\Omega_i}\av{p_{j\from i}} \nonumber \\ &=w_{i}+\sum_{j\notin\Omega_i}\frac{w_{i}+1}{w_{i}+w_{j}+2}, \label{finalscore} \end{align} where $\Omega_i$ is the set of nodes that $i$ has competed against. Once a competition becomes complete, the second term is zero. In an incomplete competition network, however, the non-zero second term serves as a tiebreaker for teams with the same $w$; an inspection of its functional form tells us that having beaten a stronger opponent counts more than having beaten a weaker one, which is very intuitive -- in sports this is often called the ``strength of schedule''. Using the exact form of $\Phi(\phi)$, Eq.~\eref{Phiphi}, we can calculate the variance of $\widetilde{W}_{i}$ from \begin{align} \Delta^{2}\widetilde{W}_{i} &= \bigl\langle (W_{i}-\av{W_{i}})^{2}\bigr\rangle \nonumber \\ &= \sum_{j\notin\Omega_i}(p_{j\from i}-p_{j\from i}^{2})+2\sum_{(j<k)\notin\Omega_i}\bigl[\av{\sigma_{ji}\sigma_{ki}}-p_{j\from i}p_{ki}\bigr], \end{align} which needs to be marginalized over $\phi$ in the fashion of Eq.~\eref{fp}. The first part is simple enough: \begin{align} \sum_{j\notin\Omega_i}(p_{j\from i}-p_{j\from i}^{2}) &\to \sum_{j\notin\Omega_i}\frac{1+w_{i}}{2+w_{j}+w_{i}}\biggl(1-\frac{1+w_{i}}{2+w_{j}+w_{i}}\biggr) \nonumber \\ &= \sum_{j\notin\Omega_i} \frac{(1+w_{j})(1+w_{i})}{(2+w_{i}+w_{j})^{2}}. \end{align} To evaluate the second part, we note that $\av{\sigma_{ji}\sigma_{ki}}\ne p_{j\from i}p_{ki}=\av{\sigma_{ji}}\av{\sigma_{ki}}$; we say that $\sigma_{ji}$ and $\sigma_{ki}$ are \emph{connected} via $i$, analogous to evaluating Feynman diagrams in field theory. For applications in network theory, see~\cite{Park:2004sm,Park:2010fk}.
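Before completing the variance calculation, the building block $\av{p_{j\from i}}=(w_{i}+1)/(w_{i}+w_{j}+2)$ admits a quick numerical check: Eq.~\eref{Phiphi} is a Gamma distribution with shape $w+1$ and unit scale, so sampling the strengths and averaging $\phi_i/(\phi_i+\phi_j)$ should reproduce the closed form. A small Monte Carlo sketch of our own:

```python
import random

def expected_win_prob_mc(w_i, w_j, n_samples=200_000, seed=1):
    """Monte Carlo estimate of <p_{j<-i}> = E[phi_i / (phi_i + phi_j)]
    with phi ~ Gamma(shape=w+1, scale=1), i.e. Phi(phi; w) = phi^w e^-phi / w!."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        phi_i = rng.gammavariate(w_i + 1, 1.0)
        phi_j = rng.gammavariate(w_j + 1, 1.0)
        total += phi_i / (phi_i + phi_j)
    return total / n_samples

def expected_win_prob(w_i, w_j):
    """Closed-form expected win probability for i against j."""
    return (w_i + 1) / (w_i + w_j + 2)
```

For example, a team with three wins facing a team with one win is expected to win with probability $4/6=2/3$, and the sampled average agrees to well within the Monte Carlo error.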
To evaluate $\av{\sigma_{ji}\sigma_{ki}}$ correctly we need the \emph{joint} probability distribution of $p_{j\from i}$ and $p_{ki}$, given as \begin{align} \mathcal{P}&(p_{j\from i},p_{ki}) \nonumber \\ &= \int_{\set{\phi}}\delta\biggl(p_{j\from i}-\frac{\phi_{i}}{\phi_{j}+\phi_{i}}\biggr)\delta\biggl(p_{ki}-\frac{\phi_{i}}{\phi_{k}+\phi_{i}}\biggr) \nonumber \\ &~~~~~~~~~~\times\Phi(\phi_{i})\Phi(\phi_{j})\Phi(\phi_{k})\mathrm{d}\phi_{i}\mathrm{d}\phi_{j}\mathrm{d}\phi_{k} \nonumber \\ &= \frac{(\frac{1}{p_{j\from i}}-1)^{w_j}(\frac{1}{p_{ki}}-1)^{w_k}(w_j+w_k+w_i+2)!}{(\frac{1}{p_{j\from i}}+\frac{1}{p_{ki}}-1)^{w_j+w_k+w_i+3}p_{j\from i}^{2}p_{ki}^{2}~w_i!w_j!w_k!}, \label{fjiki} \end{align} from which we have \begin{align} \av{\sigma_{ji}\sigma_{ki}} &=\int_{0}^{1}\int_{0}^{1}p_{j\from i}p_{ki}\mathcal{P}(p_{j\from i},p_{ki})\,\d p_{j\from i}\,\d p_{ki} \nonumber \\ &\equiv \mathcal{B}(w_{i},w_{j},w_{k}), \end{align} for which we have no closed-form expression at the time of this writing, although a numerical evaluation is straightforward using symbolic computation packages such as Mathematica. Finally, the variance is \begin{align} \Delta^{2}\widetilde{W}_{i} = \sum_{j\notin\Omega_i}&\frac{(1+w_{i})(1+w_{j})}{(2+w_{i}+w_{j})^{2}}+\sum_{(j,k)\notin\Omega_i}\biggl[\mathcal{B}(w_i,w_j,w_k) \nonumber \\ &-\frac{(w_i+1)^{2}}{(w_i+w_j+2)(w_i+w_k+2)}\biggr]. \label{variance} \end{align} \begin{figure} \includegraphics[width=80mm]{02.eps} \caption{(a) The calculated projected final win scores $\set{\widetilde{W}}$ and their uncertainties $\set{\Delta\widetilde{W}}$ for the universities that participated in the 2010 US college football schedule. The expected scores of universities with identical actual wins are separated by the strength of schedule incorporated in our method.
(Inset: Monte Carlo simulation results.) (b) The method can be used to estimate the appropriate size of playoff tournaments by investigating the number of highest-ranking teams with overlapping final score ranges in a simulated schedule. A Monte Carlo simulation shows that the network connectance (density) needs to be $\sim 70\%$ for the currently proposed four-team playoff system in US college football to be reasonable.} \label{fig02} \end{figure} We now apply our method to US college football to showcase its features and potential. The governing body of the sport, the BCS (short for Bowl Championship Series), employs, as in other sports, an ``official'' ranking system for the purpose of setting schedules or seeding tournaments~\cite{Stefani:1997bb,Callagan:2004mh,Dunnavant:2004ns}. Given the popularity of the sport and the substantial benefits -- financial and otherwise -- to successful contestants, a robust ranking method is essential. Yet the official BCS ranking system, a mixture of human polls and select computer algorithms, is annually an object of outcry from dissatisfied fans. The fundamental origin of the problem is, as mentioned above, the incompleteness of the competition (only $\sim10\%$ of the games are played). Our method applied to this network, presented in Fig.~\ref{fig02}, shows quantitatively the severity of the problem and suggests possible solutions. Fig.~\ref{fig02}~(a) shows the projected win scores $\widetilde{W}$ with the error bars indicating the square root of the variance $\set{\Delta \widetilde{W}=(\Delta^{2} \widetilde{W})^{1/2}}$ as the measure of the uncertainty in $\set{\widetilde{W}}$. First, we note the separation in $\widetilde{W}$ between teams with the same $w$, originating from the strength of schedule in Eq.~\eref{finalscore}, as expected. Also useful for our purposes are the $\set{\Delta \widetilde{W}}$.
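The strength-of-schedule separation in Eq.~\eref{finalscore} can be seen in a toy standing (a hypothetical example of our own, not taken from the 2010 schedule): teams $A$ and $B$ both have one actual win, but $A$'s win came against the strong team $S$, so $A$'s remaining opponents are weaker on average and its projected score comes out higher.

```python
def projected_score(i, wins, played, teams):
    """Projected final win score of Eq. (finalscore): actual wins plus the
    expected gains (w_i+1)/(w_i+w_j+2) summed over the unplayed opponents."""
    w_i = wins[i]
    return w_i + sum((w_i + 1) / (w_i + wins[j] + 2)
                     for j in teams if j != i and j not in played[i])

# Hypothetical standings: A beat the strong team S, B beat the weak team Wk.
teams = ["A", "B", "S", "Wk", "X"]
wins = {"A": 1, "B": 1, "S": 3, "Wk": 0, "X": 1}
played = {"A": {"S"}, "B": {"Wk"}, "S": {"A"}, "Wk": {"B"}, "X": set()}
```

Here $\widetilde{W}_A = 1 + \frac12 + \frac23 + \frac12 = 8/3$ exceeds $\widetilde{W}_B = 1 + \frac12 + \frac13 + \frac12 = 7/3$, even though both teams have $w=1$: the expected-gain term acts exactly as a strength-of-schedule tiebreaker.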
These uncertainties clearly show the fundamental limits of the current BCS system, which picks the top two teams for the lucrative national championship match: the expected range of $\widetilde{W}$ for the first-ranked team (University of Texas-Austin) overlaps with that of the 30th-ranked team, indicating that the uncertainty is too significant to justify the current BCS method. In 2014 the BCS is poised to adopt a four-team (two-round) playoff tournament to ameliorate the problem, but our results suggest that this too is insufficient -- in fact, a larger playoff tournament of 32 teams would be more reasonable. Using the fact that the uncertainty decreases as more games are played, we investigated numerically (by creating random schedules beyond what was actually played in the year) the reasonable size of playoff tournaments as a function of the fraction of the games played (i.e. the connectance of the network), shown in Fig.~\ref{fig02}~(b): about 30\% of the possible games need to be played for a sixteen-team playoff to be reasonable, 50\% for an eight-team playoff, and 70\% -- nearly seven times the current fraction -- for the four-team playoff soon to be implemented. In this letter we saw that the natural ranking is an attractive ranking method, as it is intuitive and straightforward, but in many real-life networks it unfortunately cannot be used, being applicable only to the rare complete round-robin competitions. We therefore proposed an analytical model and method that allow us to use the concept of natural ranking in an incomplete network by framing it as a Bayesian inference problem. Starting from the fundamental Bayesian formula, Eq.~\eref{bayesian}, we established a one-parameter model that incorporates a clear update rule as new information (wins and losses) is uncovered as the competition progresses.
Bayesian inference is fundamentally distribution-based, meaning that it produces not one specific value of a variable but a range of values. This allowed us to estimate not only the mean expectation of the final win scores of teams but also their uncertainties (variances), enabling us to answer important questions of practical value, such as the sufficiency of a given playoff system in a major sport. We hope to see our general method applied to the study of various ranking problems in many complex systems. We would like to thank Thilo Gross for useful discussions. This work was supported by the National Research Foundation of Korea (Grant NRF-20100004910), Korea Advanced Institute of Science and Technology, and Kyung Hee University (Grant KHU-201020100116). \bibliographystyle{apsrev}
\section{Introduction} The goal of this paper is to develop a framework to extend the rough solution theory for the conformally rescaled Einstein constraint equations when the mean curvature is constant. In that case, the conformally rescaled constraint equations decouple, leaving only a semilinear elliptic equation to solve. In attempting to extend the rough solution theory in this setting, one is confronted with the problem of solving a semilinear elliptic equation with distributional coefficients. If these coefficients do not lie in certain Sobolev spaces with somewhat exacting restrictions on their indices, the resulting elliptic problem will not be well-defined in the usual weak sense. In an effort to circumvent these restrictions on the Sobolev classes of our coefficients, we develop a method to reformulate the ill-posed, semilinear PDE with singular coefficients as a PDE in what is known as a Colombeau algebra. These Colombeau algebras contain the space of distributions via an embedding, so one solves the PDE in the Colombeau algebra and attempts to associate the generalized Colombeau solution with a distribution, thereby obtaining a distributional solution to the original ill-defined problem. \subsection{The Einstein Constraint Equations and Conformal Method} The Einstein field equation $G_{\mu\nu} = \kappa T_{\mu\nu}$ can be formulated as an initial value (or Cauchy) problem where the initial data consist of a Riemannian metric $\hat{g}_{ab}$ and a symmetric tensor $\hat{k}_{ab}$ on a specified $3$-dimensional manifold ${\mathcal M}$ \cite{HE75,RW84}. However, one is not able to freely specify such initial data. Like Maxwell's equations, the initial data $\hat{g}_{ab}$ and $\hat{k}_{ab}$ must satisfy constraint equations, which take the form \begin{align} \hat{R}-\hat{k}^{ab}\hat{k}_{ab}+\hat{k}^2 = 2\kappa\hat{\rho}, \label{eq1:5aug12}\\ \hat{D}_b\hat{k}^{ab}-\hat{D}^a\hat{k} = \kappa \hat{j}^a.
\label{eq2:5aug12} \end{align} Here $\hat{R}$ and $\hat{D}$ are the scalar curvature and covariant derivative associated with $\hat{g}_{ab}$, $\hat{k}$ is the trace of $\hat{k}_{ab}$, and $\hat{\rho}$ and $\hat{j}^a$ are matter terms obtained by contracting $T_{\mu\nu}$ with a vector field normal to ${\mathcal M}$. As the Cauchy formulation of the Einstein field equations is one of the most important means of modeling and studying astrophysical phenomena, knowledge of the constraint equations is very important because of the influence that solutions to these equations have on solutions to the evolution problem. Moreover, a number of central questions in general relativity are addressed entirely through the study of the constraint equations alone (cf.~\cite{BI04} for discussion). Equation \eqref{eq1:5aug12} is known as the Hamiltonian constraint while \eqref{eq2:5aug12} is known as the momentum constraint, and collectively the two expressions are known as the Einstein constraint equations. These equations form an underdetermined system of four equations to be solved for the twelve unknowns $\hat{g}_{ab}$ and $\hat{k}_{ab}$. In order to transform the constraint equations into a determined system, one divides the unknowns into freely specifiable data and determined data by using what is known as the conformal method. In this method, introduced by Lichnerowicz \cite{AL44} and York \cite{JY71}, we assume that the metric $\hat{g}_{ab}$ is known up to a conformal factor and that the trace $\hat{k}$ and a term proportional to a trace-free, divergence-free part of $\hat{k}_{ab}$ are known. Therefore the determined data in this formulation of the constraints are the conformal factor $\phi$ and a vector field ${\bf w}$ whose symmetrized derivative represents the undetermined portion of $\hat{k}_{ab}$.
One obtains the following system: \begin{align} &-8\Delta\phi + R\phi + \frac23\tau^2\phi^5-[\sigma_{ab} +({\mathcal L} {\bf w})_{ab}][\sigma^{ab} +({\mathcal L} {\bf w})^{ab}]\phi^{-7}-2\kappa\rho\phi^{-3} = 0\label{eq4:7aug12}, \\ &-D_b({\mathcal L} {\bf w})^{ab}+\frac23D^a\tau \phi^6+\kappa j^a = 0, \label{eq5:7aug12} \end{align} which forms a determined, coupled nonlinear system of elliptic equations that is referred to as the conformal, transverse, traceless (CTT) formulation of the constraints. In equations \eqref{eq4:7aug12}-\eqref{eq5:7aug12} the quantities $g_{ab}, \sigma_{ab}$, $\tau$, $\rho$, $j^a$ are freely specified and satisfy \begin{align}\label{eq1:15nov12} &\hat{g}_{ab} = \phi^4g_{ab}, \quad \hat{k}^{ab} = \phi^{-10}[\sigma^{ab}+({\mathcal L} {\bf w})^{ab}] + \frac13\phi^{-4}\tau g^{ab},\\ &\hat{j}^a = \phi^{-10}j^a, \qquad \hat{\rho} = \phi^{-8}\rho, \end{align} and $\Delta, {\mathcal L}, D$ and $R$ are the Laplace-Beltrami operator, conformal Killing operator, covariant derivative, and scalar curvature associated with $g_{ab}$. For a given choice of $g_{ab}, \sigma_{ab}, \tau, \rho, j^a$, if one can solve \eqref{eq4:7aug12}-\eqref{eq5:7aug12} for $\phi$ and ${\bf w}$, one obtains a solution to the constraint equations \eqref{eq1:5aug12}-\eqref{eq2:5aug12} by using Eq. \eqref{eq1:15nov12} to reconstruct the physical solutions $\hat{g}_{ab}$ and $\hat{k}_{ab}$. \subsection{Solution Theory for the CTT Formulation} The solution theory for the CTT formulation of the Einstein constraint equations on a closed manifold ${\mathcal M}$ can be roughly classified according to the Yamabe class of the given metric $g_{ab}$, the properties of $\tau$ (the mean extrinsic curvature), and the regularity of the specified data $(g_{ab},\tau, \sigma,\rho,{\bf j})$. The mean curvature plays perhaps the largest role.
If the mean curvature is constant, then the analysis of the conformal formulation simplifies greatly because the Hamiltonian constraint and the momentum constraint decouple, leaving a single semilinear elliptic PDE to analyze. For $C^2$ metrics, the classical solution theory for the conformal formulation with constant mean curvature (CMC) is now understood for all three Yamabe classes, and is summarized in \cite{JI95}. The solution theory for low-regularity data $(g_{ab},\tau, \sigma, \rho,{\bf j})$, or so-called ``rough solution theory'', is also well developed in the CMC case. The most complete rough solution theory to date appears in~\cite{HNT09}, and allows for metrics $g_{ab} \in W^{s,p}$ with any pair $s,p$ satisfying $s> \frac3p$, and specified data $\sigma,~\rho,~{\bf j}$ satisfying \begin{align}\label{conditions1} \sigma &\in W^{e-1,q},\\ \rho &\in W^{s-2,p}, \nonumber\\ {\bf j} &\in {\bf W}^{e-2,q},\nonumber \end{align} where $q$ and $e$ satisfy \begin{align}\label{conditions2} \frac1q &\in (0,1)\cap \Bigl[\frac{3-p}{3p},\frac{3+p}{3p}\Bigr]\cap \Bigl[\frac{1-d}{3},\frac{3+sp}{6p}\Bigr),\\ e &\in [1,\infty)\cap [s-1,s]\cap \Bigl[\frac3q+d-1,\frac3q+d\Bigr]\cap\Bigl(\frac3q+\frac d2,\infty\Bigr),\nonumber \end{align} with $d = s-p$. There are also some additional assumptions on the Yamabe class of $g_{ab}$ and the signs of $\sigma,~\rho,~\text{and}~{\bf j}$ (cf. \cite{CB04,DMa05,DMa06,HNT08,HNT09}). \subsection{Rough Solutions to the Constraint Equations} There is an incentive to develop a low-regularity solution framework for the Einstein field equations to model plausible astronomical phenomena such as cosmic strings and gravitational waves \cite{GKOS01}.
The solutions to the constraint equations not only place a restriction on which metrics and extrinsic curvature tensors can be considered as initial data, but they also determine the function spaces of maximal globally hyperbolic solutions to the evolution problem \cite{BI04}. The solution theory for the constraint equations must therefore keep pace with the theory for the evolution equations, in order to avoid limiting the further theoretical development of the evolution problem. Historically, the rough solution theory of the constraints has in fact lagged behind that of the evolution problem. The local well-posedness result for quasilinear hyperbolic systems in \cite{HKM76} allows for initial data $(g,K)$ in $H^{s}\times H^{s-1}$ for $s>\frac52$; however, it was not until \cite{CB04,DMa05,DMa06} that solutions of this regularity were known to exist for the constraint equations, and even these initial results were restricted to CMC solutions. Low-regularity solutions became increasingly important when Klainerman and Rodnianski developed {\em a priori} estimates in \cite{KR05} for the time existence of solutions to the vacuum Einstein equations in terms of the $H^{s-1}\times H^{s-1}$ norm of $(Dg, K)$, this time with $s>2$. This prompted Maxwell's work on the CMC case in \cite{DMa05} and \cite{DMa06}, Choquet-Bruhat's work on the CMC case in \cite{CB04}, and the work of Holst et al. on both the CMC and non-CMC cases in \cite{HNT08,HNT09}. One of the difficulties associated with obtaining rough solutions to the conformal formulation is that in general, Sobolev spaces are not closed under multiplication. With the exception of the Banach spaces $W^{s,p}({\mathcal M})$ with $s> d/p$ (where $d$ is the spatial dimension), the product of two Sobolev functions in a given space will not in general lie in that space.
This restriction is a by-product of a more general problem, namely that there is no well-behaved definition of distributional multiplication that allows for the multiplication of arbitrary distributions. One is instead confined to work in subspaces such as Sobolev spaces, where point-wise multiplication is only well-defined for certain choices of Sobolev indices. This greatly limits the Sobolev spaces that one may consider when attempting to develop a weak formulation of a given elliptic partial differential equation, and in particular places a restriction on the regularity of the specified data $(g_{ab},\tau,\sigma,\rho,{\bf j})$ of the CTT equations. In order to overcome these limitations, we develop a framework to solve semilinear elliptic problems similar to the Hamiltonian constraint in generalized function spaces known as Colombeau algebras. This work is a natural extension of the work done by Mitrovic and Pilipovic in \cite{MP06}, where the authors found generalized solutions to linear elliptic equations with distributional coefficients. The advantage of solving PDEs in these generalized function spaces is that it allows one to circumvent the restrictions associated with Sobolev coefficients and data, and thereby consider problems with coefficients and data of much lower regularity. \subsection{Low Regularity Semilinear Elliptic Problems} If the mean curvature $\tau$ is constant, the CTT formulation \eqref{eq4:7aug12}-\eqref{eq5:7aug12} reduces to \begin{align} -\Delta \phi + \frac18 R\phi + \frac{1}{12}\tau^2\phi^5-\sigma^2\phi^{-7} - 2\kappa\rho\phi^{-3} = 0. \end{align} Locally, on a given chart element $\Omega = \psi(U)$, this problem assumes the form \begin{align}\label{eq1:18dec11} -\sum_{i,j=1}^3 D_i(a^{ij} D_j u) &+ b_1u^5-b_2u^{-7}-b_3u^{-3} = 0 ~~~\text{on $\Omega$},\\ &u = \varphi \quad \text{on $\partial\Omega$} \nonumber.
\end{align} In an effort to extend the rough solution theory of the constraints, we are interested in solving \eqref{eq1:18dec11} with minimal regularity assumptions on the coefficients $a^{ij}, b_1, b_2 , b_3$ and boundary data $\varphi$. Therefore, in this paper we consider a family of elliptic, semilinear Dirichlet problems that are of the form \begin{align} \label{problem} -\sum_{i,j=1}^ND_i(a^{ij}D_ju) &+ \sum^K_{i=1}b^i u^{n_i} = 0 \quad \text{ in $\Omega$},\\ &u= \rho \quad \text{on $\partial\Omega$}, \nonumber \end{align} where $a^{ij}, b^i$ and $\rho$ are potentially distributional and $n_i \in \mathbb{Z}$ for each $i$. The main contributions of this article are an existence result for \eqref{problem} in a Colombeau-type algebra, and an existence result in $H^{1}(\Omega)$ for an ill-posed, critical exponent problem of the form \begin{align}\label{eq3:25oct11} -\Delta u +au^m&+bu^i =0 ~~~~\text{in $\Omega$},\\ &u = \rho \quad \text{on $\partial\Omega$}, \nonumber \end{align} where $m \ge 5$, $1\le i \le 4$ are in $\mathbb{N}$, $\Omega \subset \mathbb{R}^3$, $b\in L^{\infty}(\Omega)$ and $a \in L^p(\Omega)$ with $\frac65 \le p < \infty$. The framework we use to prove existence for \eqref{problem} consists of embedding the singular data and coefficients into a Colombeau-type algebra so that multiplication of the distributional coefficients is well-defined. To solve \eqref{eq3:25oct11}, we do not explicitly require the Colombeau machinery that we develop to solve \eqref{problem}, but we use similar ideas to produce a sequence of functions that converge to a solution of \eqref{eq3:25oct11} in $H^{1}(\Omega)$. The Colombeau solution framework for this paper is based mainly on the ideas found in~\cite{MP06}. Here we extend the work done by Mitrovic and Pilipovic in~\cite{MP06} to include a certain collection of semilinear problems. 
While Pilipovic and Scarpalezos solved a divergence-type quasilinear problem in a Colombeau-type algebra in \cite{PS06}, the class of nonlinear problems we consider here does not fit naturally into that framework. Here we provide a solution method, distinct from those posed in \cite{PS06} and \cite{MP06}, that is better suited to the class of semilinear problems that we are interested in solving. The setup of our problem closely follows that of~\cite{MP06}: given the semilinear Dirichlet problem \eqref{problem}, we consider the family of problems \begin{align} \label{problem2} P_{\epsilon}(x,D)u_{\epsilon} & = f_{\epsilon}(x,u_{\epsilon}) \quad \text{on $\Omega$},\\ u_{\epsilon} &= \rho_{\epsilon} \quad \text{on $\partial\Omega$},\nonumber \end{align} where $f_{\epsilon}$, $\rho_{\epsilon}$, and $P_{\epsilon}(x,D)$ are obtained by convolving the data and coefficients of \eqref{problem} with a certain mollifier. Thus a solution to the problem in a Colombeau algebra is a net of solutions to the above family satisfying certain growth estimates in $\epsilon$. This is discussed in detail in Sections \ref{Colombeau} and \ref{netsofproblems}. This basic concept underlies the solution processes both in our paper and in \cite{MP06} and \cite{PS06}. However, our solution process within the Colombeau algebra is quite distinct from that laid out in~\cite{MP06}, where the authors used linear elliptic theory to determine a family of solutions and then classical elliptic {\em a priori} estimates to prove certain growth estimates. Most notably, those authors developed a precise maximum-principle-type argument to obtain the polynomial growth estimates required to find a solution. Our strategy for solving~\eqref{problem} differs in a number of ways. First, in Section \ref{bounds} we develop a family of {\em a priori} $L^{\infty}$ bounds for the family of problems \eqref{problem2}.
Then in Section \ref{supersolution} we show that these estimates determine sub- and super-solutions to \eqref{problem2}. We then employ the method of sub- and super-solutions in Section \ref{subsolution} to determine a family of solutions. Finally, $\epsilon$-growth estimates on the sub- and super-solutions are established in Section \ref{supersolution}, and in Section \ref{results} these estimates are used in conjunction with the {\em a priori} estimates of Section \ref{holder} to prove the necessary $\epsilon$-growth estimates on our family of solutions. This paper can be broken down into two distinct but related parts. The first part is dedicated to solving \eqref{eq3:25oct11}. Our solution to this problem does not explicitly require the techniques that we develop to solve problems with distributional data in Colombeau algebras, and relies only on standard elliptic PDE theory. However, the ideas that we use to solve the problem are closely related: we obtain a solution by solving a family of problems similar to \eqref{problem2} and then show that these solutions converge to a function in $H^{1}(\Omega)$. We therefore present our existence result for \eqref{eq3:25oct11} first, to convey the benefit that the more general Colombeau solution strategy has not only for solving problems in the Colombeau algebra, but also for obtaining solutions in more classical spaces. The remainder of the paper is dedicated to developing the Colombeau framework described in the preceding paragraph. This consists of defining an algebra appropriate for a Dirichlet problem and properly posing a semilinear elliptic problem in the algebra. Once a well-posed elliptic problem in the Colombeau algebra has been formulated, we discuss the conditions under which the problem has a solution in the algebra and, finally, describe how to translate a given problem of the form \eqref{problem} into a problem that can be solved in the algebra.
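The convolution regularization that produces the family \eqref{problem2} can be illustrated numerically in one dimension. The following is a sketch of our own (using the standard compactly supported bump-function mollifier and a naive normalized quadrature); it is purely illustrative and not part of the analytical framework:

```python
import math

def mollifier(x, eps):
    """Standard bump function supported on [-eps, eps], scaled by 1/eps."""
    t = x / eps
    if abs(t) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - t * t)) / eps

def mollify(f, x, eps, n=2001):
    """(f * rho_eps)(x) by a simple Riemann sum over [x - eps, x + eps];
    the kernel is renormalized to unit mass to absorb quadrature error."""
    h = 2 * eps / (n - 1)
    num = den = 0.0
    for k in range(n):
        y = x - eps + k * h
        w = mollifier(x - y, eps)
        num += w * f(y) * h
        den += w * h
    return num / den
```

Applied to a step discontinuity, the mollified function is smooth: it interpolates through the jump (value $\approx 1/2$ at the jump itself) and agrees with the original function at distance greater than $\epsilon$ from it, mirroring how the nets $a^{ij}_\epsilon$, $b^i_\epsilon$, $\rho_\epsilon$ approximate the rough data as $\epsilon \to 0$.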
It should be noted that while the intention is to find solutions to \eqref{problem}, the main result pertaining to Colombeau algebras in this paper is Theorem~\ref{thm1june27}, which is the main solution result for semilinear problems in our particular Colombeau algebra. {\em Outline of the paper.} The remainder of the paper is structured as follows. In Section~\ref{Example1} we motivate this article by proving the existence of a solution to \eqref{eq3:25oct11}. In Section~\ref{prelim} we state a number of preliminary results and develop the technical tools required to solve~\eqref{problem}. Among these tools and results are the explicit {\em a priori} estimates found in~\cite{MP06} and a description of the Colombeau framework in which the coefficients and data will be embedded. Then in Section~\ref{overview} we state the main existence result in Theorem~\ref{thm1june27}, give a statement and proof of the method of sub- and super-solutions in Theorem~\ref{thm2june27}, and then give an outline of the method of proof of Theorem~\ref{thm1june27}. Following our discussion of elliptic problems in Colombeau algebras, we discuss a method to embed \eqref{problem} into the algebra in order to apply our Colombeau existence theory. The remainder of the paper is dedicated to developing the tools to prove Theorem~\ref{thm1june27}. In Section~\ref{bounds1} we determine {\em a priori} $L^{\infty}$ bounds on solutions to our semilinear problem and a net of sub- and super-solutions satisfying explicit $\epsilon$-growth estimates. Finally, in Section~\ref{results} we utilize the results from Section~\ref{bounds1} to prove the main result outlined in Section~\ref{overview}. \section{Solution Construction using a Sequence of Approximate Problems}\label{Example1} If $\Omega \subset \mathbb{R}^3$, the Sobolev embedding theorem tells us that $H^{1}(\Omega)$ embeds compactly into $L^p(\Omega)$ for $1 \le p < 6$ and continuously for $1 \le p \le 6$.
Given functions $u,v \in H^{1}(\Omega)$, this upper bound on $p$ places a constraint on the values of $i$ for which the product $u^iv$ is integrable. In particular, Sobolev embedding and standard H\"{o}lder inequalities imply that this product will be integrable for arbitrary elements of $H^{1}(\Omega)$ only if $1 \le i \le 5$. Moreover, if $a \in L^{\infty}(\Omega)$, the term $au^5v$ will also be integrable. However, if $a$ is an unbounded function in $L^p(\Omega)$ for some $p \ge 1$, then this product is not necessarily integrable without some sort of {\em a priori} bounds on $a, u$, and $v$. Therefore, the following problem does not have a well-defined weak formulation in $H^{1}(\Omega)$: \begin{align}\label{eq1:24oct11} -\Delta u +au^m&+bu^i = 0 \quad \text{in $\Omega$},\\ &u = \rho \quad \text{on $\partial\Omega$} \nonumber, \end{align} where $\Omega \subset\subset \Omega'$, $m \ge 5$, $1 \le i \le 4$ are in $\mathbb{N}$, $\rho \in H^{1}(\Omega')$, $b \in L^{\infty}(\Omega')$ and $a \in L^p(\Omega')$ for $\frac65 \le p < \infty$. The objective of this section is to find a solution to the above problem. In order to solve \eqref{eq1:24oct11}, we solve a sequence of smooth approximate problems and use a compactness argument to obtain a convergent subsequence. We first define necessary notation and then present the statements of two theorems that will be necessary for our discussion in this section. Then we prove the existence of a solution to \eqref{eq1:24oct11}. Finally, we show that if a solution exists, then under certain conditions we can construct a net of problems whose solutions converge to the given solution.
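The exponent restriction $1 \le i \le 5$ mentioned above can be made explicit by a standard computation. For $u,v \in H^{1}(\Omega)$ with $\Omega \subset \mathbb{R}^3$, H\"{o}lder's inequality with the conjugate exponents $\frac6i$ and $\frac{6}{6-i}$, together with the embedding $H^{1}(\Omega) \hookrightarrow L^6(\Omega)$, gives
\begin{align*}
\int_{\Omega} |u^iv|~dx \le \|u\|^i_{L^6}\|v\|_{L^{\frac{6}{6-i}}} \le C\|u\|^i_{H^1}\|v\|_{H^1},
\end{align*}
which is finite for arbitrary $u,v \in H^1(\Omega)$ precisely when $\frac{6}{6-i} \le 6$, that is, when $1 \le i \le 5$.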
\subsection{Overview of Spaces and Results for the Critical Exponent Problem}\label{prelim1} For the remainder of the paper, for a fixed domain $\Omega\subset \mathbb{R}^n$, we denote the standard Sobolev norms on $\Omega$ by \begin{align}\label{Sobolev} &\|u\|_{L^p} = \left( \int_{\Omega} |u(x)|^p~dx\right)^{\frac1p},\\ &\|u\|_{W^{k,p}} = \left(\sum_{i=0}^k \|D^iu\|^p_{L^p}\right)^{\frac1p} \nonumber. \end{align} For the special case that $p=2$, we let $H^k(\Omega) = W^{k,2}(\Omega)$ and let $H^1_0(\Omega)$ denote the functions in $H^1(\Omega)$ that have trace zero. Furthermore, let \begin{align} &\text{ess sup $u$} = \hat{u},\\ &\text{ess inf $u$} = \check{u}.\nonumber \end{align} In our subsequent work we will also require regularity conditions on the domain $\Omega$ and its boundary. Therefore, we will need the following definition, taken from \cite{GiTr77}: \begin{definition} A bounded domain $\Omega \subset \mathbb{R}^n$ and its boundary are of class $C^{k,\alpha}$, $0 \le \alpha \le1$, if for each $x_0 \in \partial\Omega$ there is a ball $B = B(x_0)$ and a one-to-one mapping $\Psi$ of $B$ onto $D \subset \mathbb{R}^n$ such that: \begin{enumerate} \item $\Psi(B \cap \Omega) \subset \mathbb{R}^n_+$, \item $\Psi(B \cap \partial\Omega) \subset \partial\mathbb{R}^n_+$, \item $\Psi \in C^{k,\alpha}(B), ~~~ \Psi^{-1} \in C^{k,\alpha}(D). $ \end{enumerate} \end{definition} We say that a domain $\Omega$ is of class $C^{\infty}$ if for a fixed $0 \le \alpha \le 1$ it is of class $C^{k,\alpha}$ for each $k \in \mathbb{N}$. Additionally, for this section and the next we will require the following theorem and proposition: \begin{theorem} \label{thm1:24oct11} Suppose $\Omega \subset \mathbb{R}^n$ is a $C^{\infty}$ domain and assume $f:\overline{\Omega} \times \mathbb{R}^+\to \mathbb{R}$ is in $C^{\infty}(\overline{\Omega}\times\mathbb{R}^+)$ and $\rho \in C^{\infty}(\overline{\Omega})$.
Let $L$ be an elliptic operator of the form \begin{align} Lu = -D_i(a^{ij}D_{j}u) + cu, \quad\text{and}\quad a^{ij}, c \in C^{\infty}(\overline{\Omega}). \end{align} Suppose that there exist sub- and super-solutions $u_-:\overline{\Omega} \to\mathbb{R}$ and $u_+:\overline{\Omega}\to \mathbb{R}$ such that the following hold: \begin{enumerate} \item $u_-,u_+ \in C^{\infty}(\overline{\Omega})$, \item $0<u_-(x) < u_+(x) \hspace{3mm}\forall x\in \overline{\Omega}.$ \end{enumerate} Then there exists a solution $u\in C^{\infty}(\overline{\Omega})$ to \begin{align} \label{eq3june30-relabled-by-mjh} Lu &= f(x,u) \hspace{3mm} \text{on $\Omega$},\\ & u = \rho \quad \text{on $\partial\Omega$}, \nonumber \end{align} such that $u_-(x)\le u(x) \le u_+(x). $ \end{theorem} \begin{proposition} \label{prop2:24oct11} Let $u$ be a solution to a semilinear equation of the form \begin{align}\label{eq1:24nov11} -\sum_{i,j=1}^ND_i (a^{ij} D_j u) &+ \sum_{i=1}^K b^iu^{n_i}=0 \text{ in $\Omega$}, \\ u &= \rho, \quad \rho(x) > 0\hspace{2mm} \text{on} \hspace{2mm}\partial\Omega \nonumber \end{align} where $a^{ij}, b^i$ and $\rho \in C^{\infty}(\ol{\Omega}) $. Suppose that the semilinear operator in \eqref{eq1:24nov11} has the property that $n_i>0$ for all $1 \le i \le K$. Let $n_K$ be the largest positive exponent and suppose that $b^K(x)>0$ in $\overline{\Omega}$. Define \begin{align}\label{eq3:24oct11} &\beta' = \inf\left\{c\in\mathbb{R} ~:~ \sum_{i=1}^K\inf_{x\in\ol{\Omega}}b^i(x)y^{n_i}> 0 \hspace{3mm}\forall y\in (c,\infty)\right\},\\ &\beta = \max\{\beta', \sup_{x\in \partial\Omega} \rho(x)\}. \end{align} \noindent Then if $u \in H^1(\Omega)$ is a positive weak solution to Eq. \eqref{eq1:24nov11}, it follows that $ 0 \le u \le \beta < \infty$. \end{proposition} For the proof of Theorem~\ref{thm1:24oct11}, see Section~\ref{subsolution}. A more detailed version of Proposition~\ref{prop2:24oct11} and its proof can be found in Section~\ref{bounds1}.
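To illustrate Proposition~\ref{prop2:24oct11} with a simple (hypothetical) example, consider the problem
\begin{align*}
-\Delta u + u^3 - u = 0 \quad \text{in $\Omega$}, \qquad u = \rho \quad \text{on $\partial\Omega$}, \qquad \rho \equiv 2.
\end{align*}
Here $K = 2$, $b^1 = -1$, $n_1 = 1$, $b^2 = 1$, $n_2 = 3$, and $y^3 - y > 0$ for all $y \in (c,\infty)$ exactly when $c \ge 1$, so $\beta' = 1$. Since $\sup_{\partial\Omega}\rho = 2$, the proposition yields the {\em a priori} bound $0 \le u \le \beta = \max\{1,2\} = 2$ for any positive weak solution.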
With these tools in hand, we now prove the existence of a solution to a problem of the form \eqref{eq1:24oct11}. \subsection{Existence of a Solution to an Ill-Posed Critical Exponent Problem}\label{example2} For the following discussion, let $\Omega' \subset \mathbb{R}^3$ be an open and bounded domain and assume that $\Omega \subset\subset \Omega'$ is also open and of $C^{1,\alpha}$-class. Here we seek a weak solution $u\in H^{1}(\Omega)$ to the problem \begin{align}\label{eq2:19oct11} -\Delta u +au^m&+bu^i =0 \quad \text{in $\Omega$},\\ u & = \rho \quad \text{on $\partial\Omega$}, \nonumber \end{align} where $m \ge 5$, $1 \le i \le 4$ are in $\mathbb{N}$, \begin{align}\label{eq1:21oct11} b \in L^{\infty}(\Omega'), \quad a \in L^p(\Omega'), \quad \frac65 \le p <\infty, \quad \rho \in H^{1}(\Omega') , \end{align} and \begin{align}\label{eq2:21oct11} \check{a} >0, \quad \hat{b} <0, \quad \text{and} \quad \check{\rho} >0. \end{align} If our test function space is $H^{1}_0(\Omega)$, Eq. \eqref{eq2:19oct11} is ill-posed due to the term $au^m$. The weak formulation of Eq. \eqref{eq2:19oct11} would contain the integral $$ \int_{\Omega} au^mv~dx, $$ where $v\in H^{1}_0(\Omega)$, $a\in L^p(\Omega)$, and $u\in H^{1}(\Omega)$. For these choices of function spaces this integral need not be finite: H\"{o}lder's inequality would require $\frac1p + \frac{m}{6} + \frac16 \le 1$, i.e. $m \le 5 - \frac6p$, which fails for every $m \ge 5$ when $p < \infty$. We show that this problem does in fact have a weak solution by regularizing the coefficients of our problem and solving a sequence of approximating problems. We obtain the following proposition. \begin{proposition}\label{prop1:8nov11} The semilinear problem \eqref{eq2:19oct11} has a solution $u\in H^{1}(\Omega)$ if $a$, $b$, and $\rho$ satisfy the conditions in \eqref{eq1:21oct11} and \eqref{eq2:21oct11}.
\end{proposition} \begin{proof} To determine a solution to \eqref{eq2:19oct11}, we consider the sequence of solutions to the approximate problems \begin{align}\label{eq1:19oct11} -\Delta u_n +a_n(u_n)^m+b_n(u_n)^i &=0 \quad \text{in $\Omega$},\\ u_n &= \rho_n \quad \text{on $\partial\Omega$}, \nonumber \end{align} where $a_n= a\ast \phi_n$, $b_n= b\ast \phi_n$, and $\rho_n = \rho\ast \phi_n$, and $\phi_n = n^3\phi(nx)$ is a positive mollifier with $\int \phi(x)~dx = 1$. Given that $\phi$ is a positive mollifier, it is clear that for each $n\in \mathbb{N}$, $$ \check{a}_{n}>0, \quad \hat{b}_n < 0, \quad \text{and} \quad \check{\rho}_n>0. $$ We first verify that the sequence of problems \eqref{eq1:19oct11} has a solution for each $n$. To do this, we will utilize Theorem~\ref{thm1:24oct11} and Proposition~\ref{prop2:24oct11}. Let $\beta_n$ have the same properties as $\beta$ in Proposition \ref{prop2:24oct11} for the sequence of problems \eqref{eq1:19oct11}. Then using the notation in Proposition \ref{prop2:24oct11}, we can write explicit expressions for the $\beta$ associated with \eqref{eq2:19oct11} and for $\beta_n$. It is not hard to show that $$ \beta = \max\left\{\left(-\frac{\check{b}}{\check{a}}\right)^{\frac{1}{m-i}}, \hat{\rho}\right\}, $$ and $$ \beta_n'= \left(-\frac{\check{b}_n}{\check{a}_n}\right)^{\frac{1}{m-i}}, \quad \beta_n = \max\left\{\beta_n', \hat{\rho}_n\right\}. $$ By Proposition~\ref{prop2:24oct11}, for each $n \in \mathbb{N}$, $\beta_n$ determines an {\em a priori} upper bound for the approximate problems. Furthermore, it is not difficult to see that for each $n\in\mathbb{N}$, $0$ and $\beta_n$ are sub- and super-solutions for \eqref{eq1:19oct11}. See Section~\ref{supersolution} and Theorem~\ref{thm2june27} for more details.
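For completeness, we sketch the computation behind these expressions. Since $\check{a} > 0$ and $\check{b} \le \hat{b} < 0$, for $y > 0$ we have
\begin{align*}
\check{a}\,y^m + \check{b}\,y^i > 0 \iff y^{m-i} > -\frac{\check{b}}{\check{a}} \iff y > \left(-\frac{\check{b}}{\check{a}}\right)^{\frac{1}{m-i}},
\end{align*}
so the infimum over admissible thresholds $c$ in the definition of $\beta'$ is $\left(-\check{b}/\check{a}\right)^{\frac{1}{m-i}}$. The identical computation with $\check{a}_n$ and $\check{b}_n$ in place of $\check{a}$ and $\check{b}$ gives $\beta_n'$.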
Therefore, because $\rho_n, a_n, b_n \in C^{\infty}(\overline{\Omega})$ for $n$ sufficiently large, Theorem~\ref{thm1:24oct11} implies that for such $n$ there exists a solution $u_n \in C^{\infty}(\overline{\Omega})$ to \eqref{eq1:19oct11} satisfying $0 \le u_n\le \beta_n$. Now observe that for each $n\in \mathbb{N}$, $\beta_n \le \beta$, which follows from the fact that \begin{align}\label{eq1:22oct11} -b_{n}(x) = \int (-b(y)) \phi_{n}(x-y)~dy \le \int (-\check{b})\phi_{n}(x-y)~dy = -\check{b}, \end{align} and $a_{n}(x) \ge \check{a}$, which is verified by a similar calculation. Therefore, by standard $L^p$ elliptic regularity theory, \begin{align} \|u_n\|_{W^{2,p}} \le& C( \|-a_n(u_n)^m-b_n(u_n)^i\|_{L^p} + \|u_n\|_{L^p}) \\ \le &C(\beta_n^m\|a_n\|_{L^p}+\beta_n^i\|b_n\|_{L^p}+\beta_n)< M < \infty, \nonumber \end{align} where $M$ is independent of $n$, given that $\beta_n \le \beta$, $a_n \to a$ in $L^p$, and $b_n \to b$ in $L^p$. Because $p > \frac65$ and $\Omega$ is of $C^{1,\alpha}$-class, $W^{2,p}(\Omega)$ embeds compactly into $H^{1}(\Omega)$. Therefore, there exists a convergent subsequence $u_{n_j} \to u $ in $H^{1}(\Omega)$. We claim now that $u$ satisfies the following two properties: \begin{enumerate} \item $0 \le u \le \beta$ almost everywhere, \item $u$ weakly solves \eqref{eq2:19oct11}. \end{enumerate} The inequality $0 \le u \le \beta$ a.e.\ follows from the fact that $u_{n_j} \to u$ in $H^{1}(\Omega)$ and $$ 0 \le u_{n_j} \le \beta_{n_j} \le \beta \quad \text{for each $j \in \mathbb{N}$}. $$ Indeed, if we assume that $u>\beta$ on some set of nonzero measure, then for some $n$ the set $A_n =\{x\in \Omega: u(x) > \beta+\frac1n\}$ has positive measure. Then for all $j\in \mathbb{N}$, we have that $$ \int |u_{n_j} - u|^2~dx \ge \int_{A_n} |u_{n_j}-u|^2~dx \ge \frac{1}{n^2} \mu(A_n) > 0. $$ But this clearly contradicts the fact that $u_{n_j} \to u$ in $H^{1}(\Omega)$. A similar argument shows that $u \ge 0$ a.e.\ in $\Omega$.
Finally, we want to show that $u$ weakly solves \eqref{eq2:19oct11}. Let $\epsilon >0$. Then for any $v\in H^{1}_0(\Omega)$ we have that \begin{align} & \left|\int \left( \nabla u\cdot\nabla v + au^mv + bu^i v\right)~ dx\right| = \left|\int \left(\nabla u\cdot\nabla v + au^mv + bu^i v\right)~ dx \right. \\ & \qquad \qquad \qquad \qquad \left. - \int \left(\nabla u_{n_j}\cdot\nabla v + a_{n_j}(u_{n_j})^mv + b_{n_j}(u_{n_j})^i v\right)~ dx\right|, \nonumber \end{align} given that $u_{n_j}$ solves \eqref{eq1:19oct11}. Then expanding the second line of the above equation we find that \begin{align} \label{eq3:19oct11} &\left|\int \nabla u\cdot\nabla v + au^mv + bu^i v~ dx\right| \\ & \qquad \le \int \left|\nabla u\cdot\nabla v- \nabla u_{n_j}\cdot\nabla v\right|~dx+\int\left| au^mv- a_{n_j}(u_{n_j})^mv\right|~dx \nonumber \\ & \qquad \qquad + \int\left|bu^i v- b_{n_j}(u_{n_j})^i v\right|~ dx \\ & \qquad \le \int \left|\nabla u\cdot\nabla v- \nabla u_{n_j}\cdot\nabla v\right|~dx +\int\left| au^mv-a(u_{n_j})^mv\right|~dx \nonumber \\ & \qquad \qquad +\int\left|a(u_{n_j})^mv- a_{n_j}(u_{n_j})^mv\right|~dx+ \int\left|bu^i v-b(u_{n_j})^iv\right|~dx \nonumber \\ & \qquad \qquad + \int \left|b(u_{n_j})^iv- b_{n_j}(u_{n_j})^i v\right|~ dx. \label{eq3:19oct11-b} \end{align} Every term in \eqref{eq3:19oct11-b} tends to $0$ given that $u_{n_j} \to u$ in $H^{1}(\Omega)$, $a_{n_j} \to a$ in $L^p(\Omega)$, $b_{n_j} \to b$ in $L^p(\Omega)$ and $0 \le u \le \beta$. To show that the expression $$ \int\left| au^mv-a(u_{n_j})^mv\right|~dx \to 0, $$ we apply H\"{o}lder's inequality to obtain $$ \int\left| au^mv-a(u_{n_j})^mv\right|~dx \le \| a\|_{L^{\frac65}} \|u^mv-u_{n_j}^mv\|_{L^6}. $$ Given that $u_{n_j} \to u $ in $H^{1}(\Omega)$, we have $u_{n_j} \to u$ a.e., where we pass to a subsequence if necessary. Therefore $u_{n_j}^mv \to u^mv$ a.e.
Finally, we observe that $$ |u^m-u_{n_j}^m|^6|v|^6 \le 64\beta^{6m}|v|^6, $$ and given that $v \in H^{1}(\Omega)$ and $\Omega$ is bounded, the Dominated Convergence Theorem implies that $\|u^mv-u_{n_j}^mv\|_{L^6} \to 0$. Therefore $$ \int\left| au^mv-a(u_{n_j})^mv\right|~dx \to 0. $$ We apply a similar argument to show that the lower order terms in Eq. \eqref{eq3:19oct11} converge to zero, and conclude that $u$ is a weak solution to \eqref{eq2:19oct11}. \end{proof} \section{Preliminary Material: H\"{o}lder Spaces and Colombeau Algebras} \label{prelim} We now begin to develop the Colombeau algebra framework that will be used to solve \eqref{problem}. We first define H\"{o}lder spaces and state precise versions of the classical Schauder estimates given in \cite{MP06}. The definition of the Colombeau algebra in which we will be working, together with these classical elliptic regularity estimates, makes H\"{o}lder spaces the most natural choice in which to do our analysis. Therefore we will work almost exclusively with H\"{o}lder spaces for the remainder of the paper. Following our discussion of function spaces, we define the Colombeau algebra in which we will work and then formulate an elliptic, semilinear problem in this space. \subsection{Function Spaces and Norms} \label{holder} In this paper we will make frequent use of Schauder estimates on H\"{o}lder spaces defined on an open set $\Omega \subset \mathbb{R}^n$. Here we give notation for the H\"{o}lder norms and then state the regularity estimates that will be used. All notation and results are taken from~\cite{GiTr77}. Assume that $\Omega \subset \mathbb{R}^n$ is open, connected and bounded.
Then define the following norms and seminorms: \begin{align} &[u]_{\alpha;\Omega} = \sup_{\stackrel{x,y \in \Omega}{x \ne y}} \frac{|u(x)-u(y)|}{|x-y|^{\alpha}},\\ &[u]_{k,0;\Omega} = \sup_{|\beta|=k} \sup_{x\in\ol{\Omega}}|D^{\beta}u|,\\ &[u]_{k,\alpha;\Omega} = \sup_{|\beta|=k}[D^{\beta}u]_{\alpha;\Omega},\\ &\|u\|_{C^k(\overline{\Omega})} =|u|_{k;\Omega}= \sum_{j=0}^{k}[u]_{j,0;\Omega},\\ &\|u\|_{C^{k,\alpha}(\overline{\Omega})} =|u|_{k,\alpha;\Omega} = |u|_{k;\Omega}+[u]_{k,\alpha;\Omega}. \end{align} We interpret $C^{k,\alpha}(\overline{\Omega})$ as the subspace of functions $f \in C^k(\overline{\Omega})$ such that $f^{(k)}$ is $\alpha$-H\"{o}lder continuous. Similarly, we view $C^{k,\alpha}(\Omega)$ as the subspace of functions $f\in C^k(\Omega)$ such that $f^{(k)}$ is locally $\alpha$-H\"{o}lder continuous (i.e., on compact sets $K\subset\subset\Omega$). Now we consider the equation \begin{align} \label{eq4june29} Lu = a^{ij}D_{ij}u + b^iD_iu +cu &=f \quad \text{in $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$}, \end{align} where $L$ is a strictly elliptic operator satisfying $$ a^{ij} = a^{ji} \quad \text{and} \quad a^{ij}(x)\xi_i\xi_j \ge \lambda|\xi|^2, \quad x\in \Omega, \quad \xi \in \mathbb{R}^n. $$ The following regularity theorems can be found in~\cite{GiTr77} and~\cite{MP06}. See~\cite{GiTr77} for proofs. Note that the constant $C$ in the following theorems has no dependence on $\Lambda$ or $\lambda$. \begin{theorem} \label{thm1june30} Assume that $\Omega$ is a $C^{2,\alpha}$-class domain in $\mathbb{R}^n$ and that $u\in C^{2,\alpha}(\overline{\Omega})$ is a solution of~\eqref{eq4june29}, where $f\in C^{\alpha}(\overline{\Omega})$ and $\rho\in C^{2,\alpha}(\overline{\Omega})$. Additionally assume that $$ |a^{ij}|_{0,\alpha;\Omega}, |b^i|_{0,\alpha;\Omega}, |c|_{0,\alpha;\Omega} \le \Lambda.
$$ Then there exists $C>0$ such that $$ |u|_{2,\alpha;\Omega} \le C\left(\frac{\Lambda}{\lambda}\right)^3(|u|_{0;\Omega}+|\rho|_{2,\alpha;\Omega}+|f|_{0,\alpha;\Omega}). $$ \end{theorem} \noindent This theorem can then be extended to higher order derivatives by repeatedly applying Theorem~\ref{thm1june30}. See~\cite{MP06} for details. We summarize this result in the next theorem. \begin{theorem} \label{thm2june30} Let $\Omega$ be a $C^{k+2,\alpha}$-class domain and $u\in C^2({\Omega})\cap C^0(\overline{\Omega})$ be a solution of~\eqref{eq4june29}, where $f\in C^{k,\alpha}(\overline{\Omega})$ and $\rho\in C^{k+2,\alpha}(\overline{\Omega})$. Additionally assume that $$ |a^{ij}|_{k,\alpha;\Omega}, |b^i|_{k,\alpha;\Omega}, |c|_{k,\alpha;\Omega} \le \Lambda. $$ Then $u\in C^{k+2,\alpha}(\overline{\Omega})$ and $$ |u|_{k+2,\alpha;\Omega} \le C^{k+1}\left(\frac{\Lambda}{\lambda}\right)^{3(k+1)}(|u|_{0;\Omega}+|\rho|_{k+2,\alpha;\Omega}+|f|_{k,\alpha;\Omega}), $$ where $C$ is the constant from Theorem~\ref{thm1june30}. \end{theorem} \subsection{Colombeau Algebras} \label{Colombeau} Now that we have defined the basic function spaces that we will be working with and stated the regularity theorems that will be required to obtain the necessary growth estimates, we are ready to define the Colombeau algebra with which we will be working and to formulate our problem in this algebra. Let $V$ be a topological vector space whose topology is given by an increasing family of seminorms $\mu_k$. That is, for $u \in V$, $\mu_{i}(u) \le \mu_{j}(u)$ if $i \le j$.
Then letting $I = (0,1]$, we define the following: \begin{align} \label{eq1july7} \mathcal{E}_V &= (V)^I \hspace{4mm} \text{where $u\in \mathcal{E}_V$ is a net $(u_{\epsilon})$ of elements in $V$ with $\epsilon \in (0,1]$}, \\ \mathcal{E}_{M,V} &= \{ (u_{\epsilon}) \in \mathcal{E}_V \hspace{2mm} | \hspace{2mm} \forall k \in \mathbb{N} \hspace{2mm} \exists a\in \mathbb{R} \hspace{2mm} : \hspace{2mm} \mu_k(u_{\epsilon}) = \mathcal{O}(\epsilon^a) \hspace{2mm} \text{as } \epsilon \to 0\}, \\ \mathcal{N}_V &= \{ (u_{\epsilon}) \in \mathcal{E}_{M,V} \hspace{2mm} | \hspace{2mm} \forall k \in \mathbb{N} \hspace{2mm} \forall a\in \mathbb{R} \hspace{2mm} : \hspace{2mm} \mu_k(u_{\epsilon}) = \mathcal{O}(\epsilon^a) \hspace{2mm} \text{as } \epsilon \to 0\} \label{eq1:15nov11}. \end{align} Then the polynomial generalized extension of $V$ is formed by considering the quotient $\mathcal{G}_V = \mathcal{E}_{M,V}/\mathcal{N}_V$. We now give a few examples of generalized extensions. See \cite{MP06,GKOS01} for a more detailed discussion. \begin{definition}\label{eq1:3nov11} If $V = \mathbb{C}$, $r \in \mathbb{C}$, $\mu_k(r) =|r|$, then one obtains $\overline{\mathbb{C}}$, the ring of generalized constants. This ring contains all nets of complex numbers that grow no faster than a polynomial in $\epsilon^{-1}$ as $\epsilon \to 0$. For example, $(\textnormal{e}^{\epsilon^{-1}}) \not\in \overline{\mathbb{C}}$ given that this net grows exponentially in $\epsilon^{-1}$ as $\epsilon \to 0$. \end{definition} \begin{definition}\label{ex1:27oct11} Let $\Omega \subset \mathbb{R}^n$ be an open set, $U_k \subset\subset \Omega$ an exhaustive sequence of compact sets and $\alpha \in \mathbb{N}^n_0$ a multi-index. Then if $$V = C^{\infty}(\Omega), \hspace{2mm}f \in C^{\infty}(\Omega), \hspace{4mm} \mu_{k}(f)= \sup\{|D^{\alpha}f(x)| \hspace{1mm} : \hspace{1mm} x \in U_k,\hspace{2mm}|\alpha| \le k\},$$ one obtains $\mathcal{G}^s(\Omega)$, the simplified Colombeau algebra.
\end{definition} \begin{definition}\label{def1:3feb13} If $V=C^{\infty}(\overline{\Omega})$, where $\Omega\subset \mathbb{R}^n$ is bounded and $$\mu_k(f)=\sup\{|D^{\alpha} f(x) |:|\alpha|\le k,\hspace{2mm} x\in\overline{\Omega}\},$$ we denote the generalized extension by $\mathcal{G}(\overline{\Omega})$. The set $\mathcal{E}_{M,C^{\infty}(\overline{\Omega})}$ will be denoted by $\mathcal{E}_M(\overline{\Omega})$ and be referred to as the space of moderate elements. The set $\mathcal{N}_{C^{\infty}(\overline{\Omega})}$ will be denoted by $\mathcal{N}(\overline{\Omega})$ and will be referred to as the space of null elements. \end{definition} Both $\mathcal{G}^s(\Omega)$ and $\overline{\mathbb{C}}$ were developed by Colombeau and laid the basis for the more general construction described in \eqref{eq1july7}-\eqref{eq1:15nov11}. See~\cite{JCo84} for more details. As in~\cite{MP06}, for the purposes of this paper we are concerned with $\mathcal{G}(\overline{\Omega})$, given that we are interested in solving the Dirichlet problem and require a well-defined boundary value. If $(u_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$ is a representative of an element $u \in \mathcal{G}(\overline{\Omega})$, we shall write $u = [(u_{\epsilon})]$ to indicate that $u$ is the equivalence class of $(u_{\epsilon})$. At times we will drop the parentheses and simply write $[u_{\epsilon}]$. Addition and multiplication of elements in $\mathcal{G}(\overline{\Omega})$ are defined in terms of addition and multiplication of representatives. That is, if $u = [(u_{\epsilon})]$ and $v = [(v_{\epsilon})]$, then $uv = [(u_{\epsilon}v_{\epsilon})]$ and $u+v = [(u_{\epsilon}+v_{\epsilon})]$. Derivations are defined for $u = [(u_{\epsilon})] \in \mathcal{G}(\overline{\Omega})$ by $\partial_{x_i} u = [(\partial_{x_i}u_{\epsilon})]$.
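As a quick illustration of these definitions, take $\Omega = (0,1) \subset \mathbb{R}$. The net $u_{\epsilon}(x) = \sin(x/\epsilon)$ is moderate, since for every $k$,
\begin{align*}
\sup_{x\in\ol{\Omega}}\left|D^k u_{\epsilon}(x)\right| \le \epsilon^{-k} = \mathcal{O}(\epsilon^{-k}),
\end{align*}
and therefore represents an element of $\mathcal{G}(\ol{\Omega})$. On the other hand, the constant net $v_{\epsilon} \equiv \textnormal{e}^{-1/\epsilon}$ is null, since $\textnormal{e}^{-1/\epsilon} = \mathcal{O}(\epsilon^a)$ as $\epsilon \to 0$ for every $a \in \mathbb{R}$, so $[(v_{\epsilon})] = 0$ in $\mathcal{G}(\ol{\Omega})$.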
\begin{theorem} With the above definitions of addition, multiplication and differentiation, $\mathcal{G}(\overline{\Omega})$ is an associative, commutative, differential algebra. \end{theorem} \begin{proof} This follows from the fact that component-wise addition, multiplication, and differentiation make $V^I = (C^{\infty}(\overline{\Omega}))^I$ into a differential algebra. By design, $\mathcal{E}_M(\overline{\Omega})$ is the largest sub-algebra of $(C^{\infty}(\overline{\Omega}))^I$ that contains $\mathcal{N}(\overline{\Omega})$ as an ideal. Therefore $\mathcal{G}(\overline{\Omega})$ is a differential algebra as well. See~\cite{GKOS01}. \end{proof} Now that we have given the basic definition of a Colombeau algebra, we can discuss how distributions can be embedded into a space of this type. \subsection{Embedding Schwartz Distributions into Colombeau Algebras}\label{embed} While the algebras defined above are somewhat unwieldy, these spaces are well suited for analyzing problems with distributional data. The primary reason for this is that for a given open set $\Omega\subset \mathbb{R}^n$, the Schwartz distributions $\mathcal{D}'(\Omega)$ can be linearly embedded into ${\mathcal G}^s(\Omega)$. This allows one to define an {\em extrinsic} notion of distributional multiplication that is consistent with the pointwise product of $C^{\infty}(\Omega)$ functions. Here we briefly discuss the method used to embed $\mathcal{D}'(\Omega)$ into $\mathcal{G}^s(\Omega)$. Given that we will primarily be working with the generalized extension ${\mathcal G}(\ol{\Omega})$ defined in \eqref{def1:3feb13}, we will then discuss how to embed certain subsets of ${\mathcal D}'(\Omega)$ into ${\mathcal G}(\ol{\Omega})$. We begin by recalling the definitions of the spaces that will be relevant to our discussion.
The Schwartz distributions on an open set $\Omega\subset \mathbb{R}^n$ are denoted $\mathcal{D}'(\Omega)$ and are defined to be the dual of $\mathcal{D}(\Omega)$, the space of $C^{\infty}(\Omega)$ functions with compact support contained in $\Omega$. For a given $\varphi \in \mathcal{D}(\Omega)$ and $T \in \mathcal{D}'(\Omega)$, the action of $T$ on $\varphi$ will be denoted by $\left\langle T, \varphi \right\rangle $. We let ${\mathcal E}'(\Omega)\subset {\mathcal D}'(\Omega)$ denote the space of compactly supported distributions. Finally, we define the space of Schwartz functions ${\mathcal S}(\mathbb{R}^n)$ by \begin{align} {\mathcal S}(\mathbb{R}^n) = \left\{ f \in C^{\infty}(\mathbb{R}^n)~|~\|f\|_{\alpha,\beta} < \infty, ~~ \forall \alpha,\beta \right\}, \quad \|f\|_{\alpha,\beta} = \sup_{x\in\mathbb{R}^n}|x^{\alpha} D^{\beta}f(x)|, \end{align} where $\alpha, \beta$ are multi-indices. Let $\varphi \in {\mathcal D}(\mathbb{R}^n)$ satisfy \begin{align}\label{eq1:5feb13} \varphi(x) \ge 0, \quad \int_{\mathbb{R}^n} \varphi(x)~dx = 1, \quad \varphi_{\epsilon}(x) = \epsilon^{-n}\varphi\left(\frac{x}{\epsilon}\right) \to \delta(x) \quad \text{as } \epsilon \to 0. \end{align} So $\varphi(x)$ is a standard, positive mollifying function. To construct our embedding, we will also require another function with more restrictive properties. Let $\psi \in \mathcal{S}(\mathbb{R}^n)$ be a function such that $\psi \equiv 1$ on some neighborhood of $0$. Then define $\phi \in \mathcal{S}(\mathbb{R}^n)$ by $\phi = \mathcal{F}^{-1}[\psi]$, the inverse Fourier transform of $\psi$. It is easy to see that \begin{align}\label{eq4:28oct11} \int_{\mathbb{R}^n} \phi \hspace{1mm}dx = 1 \hspace{2mm} \text{ and} \hspace{2mm} \int_{\mathbb{R}^n} x^{\alpha}\phi \hspace{1mm}dx = 0 \quad \forall |\alpha| \ge 1. \end{align} Let $\phi_{\epsilon} = \epsilon^{-n}\phi\left(\frac{x}{\epsilon}\right)$. The properties of $\phi$ specified in \eqref{eq4:28oct11} are extremely important.
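Indeed, the properties in \eqref{eq4:28oct11} follow directly from the choice $\phi = \mathcal{F}^{-1}[\psi]$. Assuming the convention $\mathcal{F}[\phi](\xi) = \int_{\mathbb{R}^n}\phi(x)\textnormal{e}^{-ix\cdot\xi}~dx$, differentiating under the integral sign gives
\begin{align*}
\int_{\mathbb{R}^n} x^{\alpha}\phi(x)~dx = i^{|\alpha|}\, D^{\alpha}\mathcal{F}[\phi](0) = i^{|\alpha|}\, D^{\alpha}\psi(0),
\end{align*}
which equals $1$ for $\alpha = 0$ and vanishes for every $|\alpha| \ge 1$ because $\psi \equiv 1$ on a neighborhood of $0$.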
By convolving with the function $\phi_{\epsilon}$ and using the sheaf properties of the space ${\mathcal G}^s(\Omega)$, one is able to construct a linear embedding \begin{align}\label{eq3:28oct11} &\it{i} : \mathcal{D}'(\Omega) \to \mathcal{G}^s(\Omega). \end{align} See \cite{GKOS01} for details. An important property of this embedding is that for any ${f, g \in C^{\infty}(\Omega)}$, ${\it{i}(fg) = \it{i}(f)\it{i}(g)}$. Therefore, multiplication in ${\mathcal G}^s(\Omega)$ is an extension of the point-wise multiplication of $C^{\infty}$ functions. We now discuss a method of embedding certain subsets of ${\mathcal D}'(\Omega)$ into ${\mathcal G}(\overline{\Omega})$. The reason that we must restrict our embedding to certain subsets of ${\mathcal D}'(\Omega)$ is that our generalized extension ${\mathcal G}(\ol{\Omega})$ is defined on the closed set $\ol{\Omega}$. Colombeau algebras of this form no longer have a sheaf structure, and so we can no longer take advantage of the general embedding \eqref{eq3:28oct11} constructed in \cite{GKOS01}. However, as in the case of the embedding constructed in \cite{GKOS01}, our main tool for constructing our embedding will be convolution with the function $\phi$ satisfying the properties in \eqref{eq4:28oct11}. The most natural way to associate a given element $u \in {\mathcal D}'(\Omega)$ with a net of $C^{\infty}(\ol{\Omega})$ functions is by mollifying $u$ with a function like $\varphi$ defined in \eqref{eq1:5feb13}. But in order for our embedding to preserve point-wise multiplication of $C^{\infty}(\ol{\Omega})$ functions, we need the functions that we convolve with $u$ to have the same properties as the $\phi_{\epsilon}$ defined in \eqref{eq4:28oct11}. However, $\phi_{\epsilon} \in {\mathcal S}(\mathbb{R}^n)$ for each $\epsilon\in (0,1]$, so convolution with an arbitrary element $u \in {\mathcal D}'(\Omega)$ is not well-defined.
This is where the sheaf properties of ${\mathcal G}^s(\Omega)$ are instrumental in constructing the embedding \eqref{eq3:28oct11}. However, we no longer have this option, and therefore focus on finding subsets of ${\mathcal D}'(\Omega)$ for which the convolution is defined. Given $f\in C^{\infty}(\ol{\Omega})$, we again observe that the convolution $(f\ast \phi_{\epsilon})$ is not well-defined for all $x \in \ol{\Omega}$, $\epsilon \in (0,1]$. This follows because $f$ is not defined outside of $\ol{\Omega}$. What we seek is a way to extend $C^{\infty}(\ol{\Omega})$ functions to $C^{\infty}(\mathbb{R}^n)$ functions, and more generally, elements of ${\mathcal D}'(\Omega)$ to ${\mathcal D}'(\mathbb{R}^n)$, so that the convolution has meaning. We note that it is not possible to extend an arbitrary element of ${\mathcal D}'(\Omega)$ to ${\mathcal D}'(\mathbb{R}^n)$, so we will restrict ourselves to a subspace of ${\mathcal D}'(\Omega)$. The following theorem, taken from \cite{RA75}, will provide us with a large subspace of ${\mathcal D}'(\Omega)$ that we can extend. \begin{theorem}\label{thm1:4feb13} Suppose that $\Omega' \subset\subset \Omega$, and that $\Omega$ is bounded and of $C^{\infty}$-class. Then there exists a total extension operator $E$, which has the property that for each $0 \le k\le \infty$, $1 \le p \le \infty$, \begin{align} &E: W^{k,p}(\Omega) \to W^{k,p}(\mathbb{R}^n),\\ &E(u)|_{\Omega} = u,\nonumber \end{align} and \begin{align}\label{eq2:6feb13} \|Eu\|_{W^{k,p}(\mathbb{R}^n)} \le C(n,p)\|u\|_{W^{k,p}(\Omega)}. \end{align} \smallskip \noindent Moreover, $E$ can be extended to ${\mathcal E}'(\Omega') \subset {\mathcal E}'(\Omega)$ so that if $u \in {\mathcal E}'(\Omega')$, \begin{align} &E(u)|_{\Omega} = u \\ &E(u)|_{\Omega^c} = 0. \nonumber \end{align} \end{theorem} \begin{proof} Let $ \alpha = d(\Omega',\partial\Omega) > 0$.
Given that $\Omega$ is of $C^{\infty}$-class, we may cover $\partial\Omega$ with finitely many balls of radius $\alpha/2$ (or smaller if necessary) that are $C^{\infty}$-diffeomorphic with some subset of ${B_1(0) \cap \mathbb{R}^n_+}$. As in the proof of Theorem 4.28 in \cite{RA75}, we may use these neighborhoods to construct a total extension operator which has the property that for every $0 \le k \le \infty$, $1 \le p \le \infty$, \begin{align} &E: W^{k,p}(\Omega) \to W^{k,p}(\mathbb{R}^n),\\ &E(f)|_{\Omega} = f. \end{align} We can extend this extension operator to ${\mathcal E}'(\Omega')$. It is well known that for any $u \in W^{k,p}(\Omega)$, there exists an approximating net $\{u_{\epsilon}\} \subset C^{\infty}(\ol{\Omega})$ such that $u_{\epsilon} \to u$ in $W^{k,p}(\Omega)$. See the Global Approximation Theorem in \cite{LE98}. Using the same argument as in the proof of this theorem, we can obtain an approximating net of $C^{\infty}(\ol{\Omega})$ functions for $u\in {\mathcal E}'(\Omega')$. We have that $u = \partial^{\alpha} f$ for some continuous $f$ with support in an arbitrary neighborhood of $\text{supp}(u)$. By shifting the argument of $f$ and mollifying up to the boundary in each of the balls covering $\partial\Omega$ defined above, and then applying a partition of unity argument, we obtain a net $\{u_{\epsilon} \} \subset C^{\infty}(\ol{\Omega})$ such that $u_{\epsilon} \to u$ in ${\mathcal D}'(\Omega)$. Furthermore, for this net $\{u_{\epsilon}\}$ there exists $\epsilon_0 \in (0,1)$ for which $u_{\epsilon} \equiv 0$ on $\Omega\cap \ol{\Omega'}^c$ if $0<\epsilon < \epsilon_0$. For a given $u \in {\mathcal E}'(\Omega')$, we let $\{u_{\epsilon}\}$ denote this approximating net and we define $$ E(u) = \lim_{\epsilon\to 0} E(u_{\epsilon}). $$ Based on the properties of $u_{\epsilon}$, this extension extends $u$ by zero outside of $\Omega$.
We note that for $u\in W^{k,p}(\Omega) \cap {\mathcal E}'(\Omega')$, this definition of the extension on ${\mathcal E}'(\Omega')$ is consistent with the extension on $W^{k,p}(\Omega)$ given the properties of $E$ in \eqref{eq2:6feb13}. \end{proof} We now define the following subspace of ${\mathcal D}'(\Omega)$ that we will embed into ${\mathcal G}(\ol{\Omega})$. Fix an open subset $\Omega' \subset\subset \Omega$, where $\Omega$ is of $C^{\infty}$-class. Let \begin{align} {\mathcal F}'(\Omega) = {\mathcal E}'(\Omega') + \left( \bigcup_{0 \le k \le \infty, 1 \le p \le \infty} W^{k,p}(\Omega) \right), \end{align} \noindent where the above notation indicates the subspace formed by the sum of ${\mathcal E}'(\Omega')$ and the union of the Sobolev spaces as subspaces of ${\mathcal D}'(\Omega)$. \begin{theorem}\label{thm1:5feb13} Let $E$ be the extension operator defined in Theorem~\ref{thm1:4feb13} and let $\phi_{\epsilon} \in {\mathcal S}(\mathbb{R}^n)$ be the net of functions defined in \eqref{eq4:28oct11}. Then the map \begin{align}\label{eq1:6feb13} &\it{i}: {\mathcal F}'(\Omega) \to {\mathcal G}(\ol{\Omega}),\\ &\it{i}(u) = \left.(E(u)\ast \phi_{\epsilon})\right|_{\ol{\Omega}} + {\mathcal N}(\ol{\Omega}) \nonumber, \end{align} is a linear embedding of ${\mathcal F}'(\Omega)$ into ${\mathcal G}(\ol{\Omega})$. \end{theorem} \begin{proof} By the linearity of the extension operator $E$, we observe that for any $u \in {\mathcal E}'(\Omega')$ and $v\in W^{k,p}(\Omega)$, $0 \le k \le \infty$, $1 \le p \le \infty$, $E(u+v)$ is well defined and unique in the distributional sense. Therefore, for any element $u \in {\mathcal F}'(\Omega)$, $E(u) \in {\mathcal D}'(\mathbb{R}^n)$ is unique and $\it{i}$ is well-defined.
For any $u \in W^{k,p}(\Omega)$ and multi-index $\alpha$, we have that \begin{align}\label{eq3:6feb13} \partial^{\alpha}(&E(u)\ast \phi_{\epsilon}) = \\ &\int E(u)(y)\partial^{\alpha}\phi_{\epsilon}(x-y)~dy = \int E(u)(x-\epsilon y)\epsilon^{-|\alpha|}\partial^{\alpha}\phi(y)~dy = {\mathcal O}(\epsilon^{-|\alpha |}), \nonumber \end{align} given that $E(u) \in W^{k,p}(\mathbb{R}^n)$ and $\phi \in {\mathcal S}(\mathbb{R}^n)$. So $\it{i}(u) \in {\mathcal G}(\ol{\Omega})$. A similar argument can be used to show that $\it{i}(v) \in {\mathcal G}(\ol{\Omega})$ if $v \in {\mathcal E}'(\Omega')$. By linearity, $\it{i}(u) \in {\mathcal G}(\ol{\Omega})$ for any $u \in {\mathcal F}'(\Omega)$. Now we only need to show that $\it{i}$ is injective. Suppose that $\it{i}(u) \in {\mathcal N}(\ol{\Omega})$. Then $(E(u) \ast \phi_{\epsilon})(x) \to 0$ uniformly on $\ol{\Omega}$. Therefore, for any $\psi \in {\mathcal D}(\Omega)$, $$ \langle u, \psi \rangle = \lim_{\epsilon \to 0} \langle u\ast \phi_{\epsilon}, \psi \rangle = \lim_{\epsilon \to 0} \langle E(u) \ast \phi_{\epsilon}, \psi\rangle = 0. $$ So $u \equiv 0$ in ${\mathcal D}'(\Omega)$ and $\it{i}$ is injective. \end{proof} The embedding $\it{i}$ has the important property that it preserves point-wise multiplication of $C^{\infty}(\ol{\Omega})$ functions. We prove this by following the argument in \cite{GKOS01}. We first observe that we may embed $f\in C^{\infty}(\ol{\Omega})$ into ${\mathcal G}(\ol{\Omega})$ by the map \begin{align}\label{eq4:6feb13} &\sigma: C^{\infty}(\ol{\Omega}) \to {\mathcal G}(\ol{\Omega}),\\ &\sigma(f) = (f)_{\epsilon} + {\mathcal N}(\ol{\Omega})\nonumber \end{align} where $(f)_{\epsilon}$ is the constant net such that $f_{\epsilon} = f$ for all $\epsilon \in (0,1]$. \begin{proposition}\label{prop1:6feb13} The embedding $\it{i}$ has the property that $\it{i}|_{C^{\infty}(\ol{\Omega}) }= \sigma$.
\end{proposition} \begin{proof} This follows from the proof of Proposition 1.2.11 in \cite{GKOS01} and the fact that if $u \in C^{\infty}(\ol{\Omega})$, then $E(u) \in C^{\infty}(\mathbb{R}^n)$ and $E(u)|_{\ol{\Omega}} = u$. \end{proof} Proposition~\ref{prop1:6feb13} allows us to conclude that $\it{i}$ preserves point-wise multiplication of $C^{\infty}(\ol{\Omega})$ functions. Indeed, if $f,g \in C^{\infty}(\ol{\Omega})$, then $$ \it{i}(fg) = \sigma(fg) = \sigma(f)\sigma(g) = \it{i}(f)\it{i}(g). $$ \noindent Now that we have a means of embedding a rather large class of elements of ${\mathcal D}'(\Omega)$ into ${\mathcal G}(\ol{\Omega})$ that are useful for solving PDE, we can begin to formulate what a semilinear problem in ${\mathcal G}(\ol{\Omega})$ looks like. \subsection{Nets of Semilinear Differential Operators} \label{netsofproblems} We begin by defining a semilinear differential operator on $\mathcal{G}(\ol{\Omega})$. Our construction strongly resembles the construction by Mitrovic and Pilipovic in \cite{MP06}. For $\epsilon<1$, if $(a^{ij}_{\epsilon})$, $(b^i_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$, we obtain a net of operators by defining $A_{\epsilon}$ to be $$ A_{\epsilon}u_{\epsilon} = - \sum_{i,j=1}^N D_i( a_{\epsilon}^{ij}D_j u_{\epsilon}) + \sum_{i=1}^K b^i_{\epsilon}u_{\epsilon}^{n_i} = - a^{ij}_{\epsilon} D_iD_ju_{\epsilon}- (D_ia^{ij}_{\epsilon})(D_j u_{\epsilon})+\sum_{i=1}^K b^i_{\epsilon}(u_{\epsilon})^{n_i}, $$ where $n_i \in \mathbb{Z}$ and summation over the repeated indices $i,j$ is implied in the expanded expression. Under certain conditions, we can view a net of operators of the above form as an operator on $\mathcal{G}(\ol{\Omega})$; here we determine conditions that guarantee this net of operators is a well-defined operator on $\mathcal{G}(\overline{\Omega})$. Given an element $u$ in $\mathcal{G}(\ol{\Omega})$, we first need to ensure that $(A_{\epsilon}u_{\epsilon}) \in \mathcal{E}_{M}(\ol{\Omega})$.
Based on how derivations and multiplication are defined in $\mathcal{G}(\ol{\Omega})$, the only serious obstacle to this is if $n_i <0$ for some $i \le K$. Therefore, we must guarantee that the element $((u_{\epsilon})^{n_i})$ is a well-defined representative in $\mathcal{G}(\overline{\Omega})$ if $n_i <0$. It suffices to ensure that $u =[(u_{\epsilon})]$ has an inverse in $\mathcal{G}(\overline{\Omega})$. This is true if for each representative $(u_{\epsilon})$ of $u$, there exist $\epsilon_0 \in (0,1]$, $m\in \mathbb{N}$, and a constant $C>0$ such that for all $\epsilon \in (0,\epsilon_0)$, $\inf_{x\in \overline{\Omega}}|u_{\epsilon}(x)| \ge C\epsilon^m$. See \cite{GKOS01} for more details. So $u\in \mathcal{G}(\overline{\Omega})$ must possess this property in order for the above operator to have any chance of being well-defined. For the rest of this section we assume that $u$ satisfies this condition. Now suppose that $(\overline{a}^{ij}_{\epsilon})$, $(\overline{b}^i_{\epsilon})$ are in $\mathcal{E}_M(\overline{\Omega})$, and let $$ \overline{A}_{\epsilon}u_{\epsilon} = -\sum_{i,j=1}^N D_i( \overline{a}_{\epsilon}^{ij}D_j u_{\epsilon}) + \sum_{i=1}^K \overline{b}^i_{\epsilon}u_{\epsilon}^{n_i} = -\overline{a}^{ij}_{\epsilon}D_iD_ju_{\epsilon}-(D_i \overline{a}^{ij}_{\epsilon})(D_ju_{\epsilon})+\sum_{i=1}^K \overline{b}^i_{\epsilon}(u_{\epsilon})^{n_i}. $$ We say that $(A_{\epsilon}) \sim (\overline{A}_{\epsilon})$ if $(a^{ij}_{\epsilon}-\overline{a}^{ij}_{\epsilon}),(b^i_{\epsilon}-\overline{b}^i_{\epsilon}) \in \mathcal{N}^s(\overline{\Omega})$. Then $(A_{\epsilon})\sim (\overline{A}_{\epsilon})$ if and only if $(A_{\epsilon}u_{\epsilon}-\overline{A}_{\epsilon}u_{\epsilon})\in\mathcal{N}(\overline{\Omega})$ for all $(u_{\epsilon})\in\mathcal{E}_M(\overline{\Omega})$ due to the fact that the above operators are linear in $(a^{ij}_{\epsilon})$ and $(b^i_{\epsilon})$. Let $\mathcal{A}$ be the family of nets of differential operators of the above form and define $\mathcal{A}_0=\mathcal{A}/\sim$.
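The invertibility condition above can be illustrated numerically. The following is a hedged sketch with ad hoc nets chosen for illustration only ($u_{\epsilon}=\epsilon^2$ and $u_{\epsilon}=e^{-1/\epsilon}$ are not objects from the construction above): a net bounded below by a power of $\epsilon$ has a moderate inverse, while a positive net decaying faster than every power of $\epsilon$ does not.

```python
import math

# Hedged illustration (ad hoc nets): a net bounded below by a power of
# epsilon has a moderate inverse, while a net decaying faster than every
# power of epsilon has an inverse that outgrows every eps^{-m}.
def inverse_net(u_of_eps, eps_list):
    return [1.0 / u_of_eps(e) for e in eps_list]

eps_list = [0.1, 0.05, 0.02, 0.01, 0.005]

# u_eps = eps^2 satisfies inf |u_eps| >= eps^2, and 1/u_eps = eps^{-2}
moderate = inverse_net(lambda e: e**2, eps_list)
assert all(abs(g * e**2 - 1.0) < 1e-9 for g, e in zip(moderate, eps_list))

# u_eps = exp(-1/eps) is positive but beats every power: its inverse
# eventually exceeds eps^{-m} for any fixed m (here m = 20)
flat = inverse_net(lambda e: math.exp(-1.0 / e), eps_list)
assert flat[-1] > eps_list[-1] ** (-20)
```

Only nets of the first kind, bounded below by $C\epsilon^m$, are admissible arguments for the negative powers $u^{n_i}$, $n_i<0$.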
Then for $A\in \mathcal{A}_0$ and $u = [(u_{\epsilon})]\in \mathcal{G}(\overline{\Omega})$, define $$ A:\mathcal{G}(\overline{\Omega})\to\mathcal{G}(\overline{\Omega}) \text{ by } Au=[A_{\epsilon}u_{\epsilon}], $$ where \begin{align} \label{eq1june26} [A_{\epsilon}u_{\epsilon}]=[-a^{ij}_{\epsilon}][D_iD_ju_{\epsilon}]+[-D_ia^{ij}_{\epsilon}][D_j u_{\epsilon}] + \sum_{i=1}^K[b^i_{\epsilon}][u^{n_i}_{\epsilon}]. \end{align} Using this definition, $A\in \mathcal{A}_0$ is a well-defined operator on $\mathcal{G}(\overline{\Omega})$. We summarize this statement in the following proposition. \begin{proposition} \label{prop1july1} $\mathcal{A}_0$ is a well-defined class of differential operators from $\mathcal{G}(\overline{\Omega})$ to $\mathcal{G}(\overline{\Omega})$. \end{proposition} \begin{proof} Based on the construction of $\mathcal{A}_0$, it is clear that for a given representative $(u_{\epsilon})$ of $u \in \mathcal{G}(\overline{\Omega})$, $(A_{\epsilon}u_{\epsilon})$ and $(\overline{A}_{\epsilon}u_{\epsilon})$ represent the same element in $\mathcal{G}(\overline{\Omega})$. Furthermore, given a representative $(A_{\epsilon})$ of $\mathcal{A}_0$, we also have that $[A_{\epsilon}u_{\epsilon}] = [A_{\epsilon}\overline{u}_{\epsilon}]$ for any two representatives of $u\in \mathcal{G}(\overline{\Omega})$. To see this, we first observe that for each $\epsilon$, every term in $A_{\epsilon}u_{\epsilon}$ is linear except for the $(u_{\epsilon})^{n_i}$ terms. So to verify the previous statement it suffices to show that for each $n_i \in \mathbb{Z}$, $((u_{\epsilon})^{n_i}) = ((\overline{u}_{\epsilon})^{n_i})+(\overline{\eta}_{\epsilon})$, where $(\overline{\eta}_{\epsilon}) \in \mathcal{N}(\overline{\Omega})$. Given that $[(u_{\epsilon})] = [(\overline{u}_{\epsilon})]$ in $\mathcal{G}(\overline{\Omega})$, we have $(\overline{u}_{\epsilon}) = (u_{\epsilon})+(\eta_{\epsilon})$ for $(\eta_{\epsilon}) \in \mathcal{N}(\overline{\Omega})$.
For fixed $\epsilon$, $n_i \in \mathbb{Z}^+$, $$ (\overline{u}_{\epsilon})^{n_i} = (u_{\epsilon}+\eta_{\epsilon})^{n_i} = \sum_{j=0}^{n_i} \binom{n_i}{j} (u_{\epsilon})^j(\eta_{\epsilon})^{n_i-j} = (u_{\epsilon})^{n_i} + \overline{\eta}_{\epsilon}, $$ where $\overline{\eta}_{\epsilon}$ consists of the summands that each contain some nonzero power of $\eta_{\epsilon}$. Clearly the net $(\overline{\eta}_{\epsilon}) \in \mathcal{N}(\overline{\Omega})$. If $n_i \in \mathbb{Z}^-$, then for a fixed $\epsilon$, $$ (\overline{u}_{\epsilon})^{n_i} =\frac{1}{(u_{\epsilon}+\eta_{\epsilon})^{|n_i|}} = \frac{1}{\sum_{j=0}^{|n_i|} \binom{|n_i|}{j} (u_{\epsilon})^j(\eta_{\epsilon})^{|n_i|-j} }= \frac{1}{(u_{\epsilon})^{|n_i|} + \overline{\eta}_{\epsilon}}. $$ By looking at the difference $$ (u_{\epsilon})^{n_i}- \frac{1}{(u_{\epsilon})^{|n_i|} + \overline{\eta}_{\epsilon}} = \frac{\overline{\eta}_{\epsilon}}{((u_{\epsilon})^{|n_i|})((u_{\epsilon})^{|n_i|}+\overline{\eta}_{\epsilon})} = \hat{\eta}_{\epsilon}, $$ we see that the net $((u_{\epsilon})^{n_i}) = ((\overline{u}_{\epsilon})^{n_i}) + (\hat{\eta}_{\epsilon})$, where $(\hat{\eta}_{\epsilon}) \in \mathcal{N}(\overline{\Omega})$. Therefore for any $u\in \mathcal{G}(\overline{\Omega})$ possessing an inverse, and any $A \in \mathcal{A}_0$, the expression $Au = [A_{\epsilon}u_{\epsilon}] \in \mathcal{G}(\overline{\Omega})$ is well-defined. \end{proof} \subsection{The Dirichlet Problem in $\mathcal{G}(\overline{\Omega})$} \label{Dirichlet} Using the above definition of $\mathcal{A}_0$, we can now define our semilinear Dirichlet problem on $\mathcal{G}(\ol{\Omega})$. Let $u,\rho \in \mathcal{G}(\overline{\Omega})$, where $\Omega \subset \mathbb{R}^n$ is open, bounded and of $C^{\infty}$-class. Then let $E$ be a total extension operator for $\Omega$ such that for $f \in C^{\infty}(\ol{\Omega})$, $Ef \in C^{\infty}(\mathbb{R}^n)$ and $Ef|_{\ol{\Omega}} =f$. See \cite{RA75} for details.
Using $E$, we may define $u|_{\partial\Omega}=\rho|_{\partial\Omega}$ for elements $u, \rho \in \mathcal{G}(\ol{\Omega})$ if there are representatives $(u_{\epsilon})$ and $(\rho_{\epsilon})$ such that $$ u_{\epsilon}|_{\partial\Omega}=\rho_{\epsilon}|_{\partial\Omega}+n_{\epsilon}|_{\partial\Omega}, $$ where $n_{\epsilon}$ is a net of $C^{\infty}$ functions defined in a neighborhood of $\partial\Omega$ such that \begin{align} \label{boundary} \sup_{x\in\partial\Omega}|n_{\epsilon}(x)|=\textit{o}(\epsilon^a)\hspace{3mm} \forall a\in \mathbb{R}. \end{align} This ensures that $u|_{\partial\Omega}=\rho|_{\partial\Omega}$ does not depend on the choice of representatives~\cite{MP06}. With this definition of boundary equivalence, for a given operator $A \in \mathcal{A}_0$, the Dirichlet problem \begin{align}\label{eq1:1nov11} Au &= 0 \hspace{3mm} \text{in $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$} \nonumber \end{align} is well-defined in $\mathcal{G}(\overline{\Omega})$. Now we state the conditions under which the above problem can be solved in $\mathcal{G}(\ol{\Omega})$. \section{Overview of the Main Results} \label{overview} We begin this section by stating the main existence result for the Dirichlet problem \eqref{eq1:1nov11}. Let $A\in\mathcal{A}_0$ be an operator on $\mathcal{G}(\overline{\Omega})$ defined by \eqref{eq1june26}.
Also assume that the coefficients of $A$ have representatives $(a^{ij}_{\epsilon}), (b^i_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$ that satisfy the following properties for $\epsilon \in (0,1)$: \begin{align} \label{eq1june27} &a^{ij}_{\epsilon} = a^{ji}_{\epsilon}, \hspace{5mm} a^{ij}_{\epsilon}\xi_i\xi_j \ge \lambda_\epsilon|\xi|^2 \ge C_1\epsilon^a|\xi|^2, \\ &|a^{ij}_{\epsilon}|_{k+1,\alpha;\Omega}, \hspace{2mm} |b^i_{\epsilon}|_{k,\alpha;\Omega} \le \Lambda_{k,\epsilon} \le C_2(k)\epsilon^{b(k)}, \quad \forall k \in \mathbb{N} \nonumber \\ &b^1_{\epsilon} \le -C_3\epsilon^c, \hspace{3mm} \hspace{3mm} \{n_i:n_i< 0\} \ne \emptyset, \hspace{3mm} n_1=\min\{n_i : n_i < 0\} \nonumber \\ &b^K_{\epsilon} \ge C_4\epsilon^d, \hspace{3mm} \{n_i: n_i > 0 \} \ne \emptyset ,\hspace{3mm} n_K=\max\{n_i : n_i > 0\}, \nonumber \end{align} where $C_1, C_2, C_3$ and $C_4$ are positive constants independent of $\epsilon$ and the constants \\ ${a,b,c, d \in \mathbb{R}}$ are also independent of $\epsilon$. The notation $C_2(k)$ and $b(k)$ is meant to indicate that these constants may depend on $k$. Then the following Dirichlet problem has a solution in $\mathcal{G}(\overline{\Omega})$: \begin{align} \label{eq3june27} Au =[A_{\epsilon}&u_{\epsilon}] = 0 \hspace{2mm}\text{in }\Omega,\\ u &=\rho \quad \text{on $\partial\Omega$}. \nonumber \end{align} We summarize this result in the following theorem, which will be the focus of the remainder of the paper: \begin{theorem} \label{thm1june27} Suppose that $A:\mathcal{G}(\overline{\Omega}) \to \mathcal{G}(\overline{\Omega})$ is in $\mathcal{A}_0$ and that the conditions of \eqref{eq1june27} hold. Assume that $\rho \in \mathcal{G}(\overline{\Omega})$ has a representative $(\rho_{\epsilon})$ such that for $\epsilon < 1$, $\rho_{\epsilon} \ge C\epsilon^a$ for some $C>0$ and $a\in \mathbb{R}$. Then there exists a solution to the Dirichlet problem \eqref{eq3june27} in $\mathcal{G}(\overline{\Omega})$. 
\end{theorem} \begin{proof} The proof will be given in Section~\ref{results}. \end{proof} \begin{remark}\label{rem1:11nov11} We can actually weaken the assumptions in \eqref{eq1june27} so that the conditions on the representatives $(a^{ij}_{\epsilon}), (b^1_{\epsilon}),(b^K_{\epsilon}), (\rho_{\epsilon})$ only have to hold for all $\epsilon \in (0,\epsilon_{0})$ for some $\epsilon_0 \in (0,1)$. Suppose that this is the case, and that using these conditions we are able to show that for all $\epsilon \in (0,\epsilon_0)$, there exists $u_{\epsilon}$ that solves \begin{align}\label{eq1:11nov11} A_{\epsilon}u_{\epsilon} &= 0 \quad \text{ in $\Omega$},\\ u_{\epsilon} &= \rho_{\epsilon} \quad \text{on $\partial\Omega$}. \nonumber \end{align} If $u_{\epsilon}$ satisfies the additional property that for all $k \in \mathbb{N}$, there exists some $\epsilon_0' \in (0,\epsilon_0)$, $C>0$, and $a \in \mathbb{R}$ such that for all $\epsilon \in (0,\epsilon_0')$, $|u_{\epsilon}|_{k,\alpha} \le C\epsilon^a$, then we can form a solution $(v_{\epsilon}) \in \mathcal{E}_M(\ol{\Omega})$ to \eqref{eq3june27} by defining $v_{\epsilon} = u_{\epsilon}$ for $\epsilon \in (0, \epsilon_0)$ and $v_{\epsilon} = u_{\epsilon_0}$ for $\epsilon \in [\epsilon_0,1]$. The solution theory that we develop to prove Theorem~\ref{thm1june27} with the stronger conditions \eqref{eq1june27} will also imply the existence of the partial net $(u_{\epsilon})$ of solutions to \eqref{eq1:11nov11} in the event that the constraints outlined in \eqref{eq1june27} only hold for $\epsilon \in (0,\epsilon_0) \subset (0,1)$. We will require this fact when we consider how to embed and solve \eqref{problem} in $\mathcal{G}(\ol{\Omega})$ later on in Section~\ref{embed}. \end{remark} We begin assembling the tools we will need to prove Theorem~\ref{thm1june27}. The first tool we need is a method capable of solving a large class of semilinear problems. 
The method of sub- and super-solutions meets this need, and we discuss this process of solving elliptic, semilinear problems in the following section. \subsection{The Method of Sub- and Super-Solutions} \label{subsolution} In Theorem~\ref{thm2june27} below, we state a fixed-point result that will be essential in proving Theorem~\ref{thm1june27}. This fixed-point result is known as the method of sub- and super-solutions due to the fact that for a given operator $A$, the method relies on finding a sub-solution $u_-$ and a super-solution $u_+$ such that $u_- \le u_+$. A large part of this paper is devoted to finding a net of positive sub- and super-solutions for~\eqref{eq3june27} and establishing growth conditions for them. In what follows, let \begin{align} \label{eq1june29} Lu = -D_i(a^{ij}D_{j}u) + cu, \end{align} be an elliptic operator, where $$ a^{ij} = a^{ji}, \quad a^{ij}\xi_i\xi_j \ge \lambda|\xi|^2 \text{ for some } \lambda > 0, \quad \text{and} \quad a^{ij}, c \in C^{\infty}(\overline{\Omega}). $$ We now state and prove the sub- and super-solution fixed-point result for these assumptions. \begin{theorem} \label{thm2june27} Suppose $\Omega \subset \mathbb{R}^n$ is a $C^{\infty}$ domain and assume $f:\overline{\Omega} \times \mathbb{R}^+\to \mathbb{R}$ is in $C^{\infty}(\overline{\Omega}\times\mathbb{R}^+)$ and $\rho \in C^{\infty}(\overline{\Omega})$. Let $L$ be of the form \eqref{eq1june29}. Suppose that there exist functions $u_-:\overline{\Omega} \to\mathbb{R}$ and $u_+:\overline{\Omega}\to \mathbb{R}$ such that the following hold: \begin{enumerate} \item $u_-,u_+ \in C^{\infty}(\overline{\Omega})$, \item $0<u_-(x) \le u_+(x) \hspace{3mm}\forall x\in \overline{\Omega}$, \item $ Lu_- \le f(x,u_-)$, \item $ Lu_+ \ge f(x,u_+)$, \item $ u_- \le \rho \hspace{2mm}\text{on} \hspace{2mm} \partial\Omega$, \item $ u_+ \ge \rho \hspace{2mm}\text{on} \hspace{2mm} \partial\Omega$.
\end{enumerate} Then there exists a solution $u$ to \begin{align} \label{eq3june30} Lu &= f(x,u) \hspace{3mm} \text{on $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$}, \nonumber \end{align} such that \begin{itemize} \item[(i)] $u\in C^{\infty}(\overline{\Omega})$, \item[(ii)] $u_-(x)\le u(x) \le u_+(x). $ \end{itemize} \end{theorem} \begin{proof} The general approach of the proof will be to construct a monotone sequence $\{u_n\}$ that is point-wise bounded above and below by our super- and sub-solutions, $u_+$ and $u_-$. We will then apply elliptic regularity estimates and the Arzela-Ascoli Theorem to conclude that the sequence $\{u_n\}$ has a $C^{\infty}(\overline{\Omega})$ limit $u$ that is a solution to \begin{align} Lu &= f(x,u) \hspace{3mm}\text{on $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$}. \nonumber \end{align} Given that $u_-(x), u_+(x) \in C^{\infty}(\overline{\Omega})$, the interval $[\min_{\ol{\Omega}} u_-, \max_{\ol{\Omega}} u_+] \subset \mathbb{R}^+$ is well-defined. We then restrict the domain of the function $f$ to the compact set $K = \overline{\Omega}\times [\min_{\ol{\Omega}} u_-, \max_{\ol{\Omega}} u_+]$. Given that $f \in C^{\infty}(\overline{\Omega}\times \mathbb{R}^+)$, it is clearly in $C^{\infty}(\overline{\Omega}\times [\min_{\ol{\Omega}} u_-, \max_{\ol{\Omega}} u_+])$ and so the function $|\frac{\partial f(x,t)}{\partial t}|$ is continuous and attains a maximum on $ K$. Denoting this maximum value by $m$, let $M= \max\{m, -\inf_{x\in \ol{\Omega}} c(x)\}$. Then consider the operator $$ Au = Lu + Mu, $$ and the function $$ F(x,t) = Mt+f(x,t). $$ Note that this choice of $M$ ensures that $F(x,t)$ is an increasing function in $t$ on $K$ and that $A$ is an invertible operator. Also, we clearly have the following: \begin{align} A(u) = F(x,u) &\Longleftrightarrow Lu = f(x,u), \\ A(u_-) \le F(x,u_-) &\Longleftrightarrow L(u_-) \le f(x,u_-), \\ A(u_+) \ge F(x,u_+) &\Longleftrightarrow L(u_+) \ge f(x,u_+).
\end{align} The first step in the proof is to construct the sequence $\{u_n\}$ iteratively. Let $u_1$ satisfy the equation \begin{align} A(u_1) &= F(x,u_-) \hspace{2mm} \text{on $\Omega$},\\ u_1&= \rho \quad \text{on $\partial\Omega$}. \nonumber \end{align} We observe that for $u, v\in H^1_0(\Omega)$, the operator $A$ satisfies $$ C_1\|u\|_{H^1(\Omega)}^2 \le \left\langle Au,u \right\rangle, \hspace{5mm} \text{ and} \hspace{5 mm} \left\langle Au,v\right\rangle \le C_2\|u\|_{H^1(\Omega)}\|v\|_{H^1(\Omega)}, $$ where $$ \left\langle u,v\right\rangle = \int_{\Omega} uv dx, \hspace{5mm}\text{and} \hspace{5 mm} \left\langle Lu,v\right\rangle = \int_{\Omega} (a^{ij} D_ju D_iv + cuv) dx. $$ Therefore the Lax-Milgram theorem implies that there exists a weak solution ${u_1 \in H^1(\Omega)}$ satisfying ${u_1-\rho \in H^1_0(\Omega)}$. Given our assumptions on $F(x,t)$ and $\rho$, ${F(x,u_-) \in H^{m}(\Omega)}$ and ${\rho \in H^m(\Omega)}$ for all $m \in \mathbb{N}$. Therefore, by standard elliptic regularity arguments, $u_1 \in H^m(\Omega)$ for all $m\in \mathbb{N}$. This, the assumption that $\Omega$ is of $C^{\infty}$-class and the assumption that $a^{ij}, c, \rho \in C^{\infty}(\ol{\Omega})$ imply that $u_1\in C^{\infty}(\overline{\Omega})$ and $u_1 = \rho \hspace{2mm} \text{on $\partial\Omega$}$. Therefore, we may iteratively define the sequence $\{u_j\} \subset C^{\infty}(\overline{\Omega})$ where \begin{align} \label{eq3june29} A(u_{j}) &= F(x,u_{j-1}) \hspace{3mm} \text{on $\Omega$},\\ u_{j} &= \rho \quad \text{on $\partial\Omega$}. \nonumber \end{align} The next step is to verify that the sequence $\{u_j\}$ is a monotone increasing sequence satisfying $u_- \le u_1 \le \cdots \le u_{j-1} \le u_j \le \cdots \le u_+$. We prove this by induction. First we observe that \begin{align} A(u_- -u_1) \le F(x,u_-)-F(x,u_-) &= 0 \hspace{3mm}\text{on $\Omega$}, \\ ( u_-- u_1)|_{\partial\Omega} &\le 0.
\nonumber \end{align} Therefore, by the weak maximum principle, $u_-\le u_1$ on $\overline{\Omega}$. Now suppose that $u_{j-1} \le u_{j}$. Then \begin{align} A(u_j-u_{j+1}) = F(x,u_{j-1})-F(x,u_j) &\le 0 \hspace{3mm}\text{ on $\Omega$}, \\ ( u_j-u_{j+1})|_{\partial\Omega} &= 0, \nonumber \end{align} given that $F(x,t)$ is an increasing function in the variable $t$ and $u_{j-1} \le u_j$. The weak maximum principle again implies that $u_j \le u_{j+1}$, so by induction we have that $\{u_j\}$ is a monotone increasing sequence that is point-wise bounded below by $u_-(x)$. Now we show that our increasing sequence is point-wise bounded above by $u_+(x)$ by proceeding in a similar manner. Given that $u_- \le u_+$ and $u_+$ is a super-solution, we have that \begin{align} A(u_1-u_+) \le F(x,u_-)-F(x,u_+) &\le 0 \hspace{3mm} \text{ on $\Omega$},\\ (u_1-u_+)|_{\partial\Omega} &\le 0 . \nonumber \end{align} The weak maximum principle implies that $u_1\le u_+$. Now assume that $u_j \le u_+$. Then \begin{align} A(u_{j+1}-u_+) \le F(x,u_j)-F(x,u_+) &\le 0 \hspace{3mm} \text{on $\Omega$},\\ (u_{j+1} - u_+)|_{\partial\Omega} &\le 0, \nonumber \end{align} given that $F(x,t)$ is an increasing function and $u_j \le u_+$. So by induction the sequence $\{u_j\}$ is a monotone increasing sequence that is point-wise bounded above by $u_+(x)$ and point-wise bounded below by $u_-(x)$. Up to this point, we have constructed a monotone increasing sequence ${\{u_j\} \subset C^{\infty}(\overline{\Omega})}$ such that for each $j$, $u_j$ satisfies the Dirichlet problem \eqref{eq3june29} and is point-wise bounded below by $u_-$ and above by $u_+$. The next step will be to apply the Arzela-Ascoli theorem and a bootstrapping argument to conclude that this sequence converges to $u\in C^{\infty}(\overline{\Omega})$. We first show that it converges to $u\in C(\overline{\Omega})$ by an application of the Arzela-Ascoli Theorem.
Clearly the family of functions $\{u_j\}$ is point-wise bounded, so it is only necessary to establish the equicontinuity of the sequence. Given that each function $u_j$ solves the problem \eqref{eq3june29}, by standard $L^p$ elliptic regularity estimates (cf.~\cite{GiTr77}) we have that $$ \|u_j\|_{W^{2,p}} \le C(\|u_j\|_{L^p}+\|F(x,u_{j-1})\|_{L^p}). $$ The regularity of $F(x,t)$ and the sequence $\{u_j\}$ along with the above estimate and the compactness of $\overline{\Omega}\times [\inf u_-, \sup u_+]$ imply that there exists a constant $N$ such that $ \|F(x,u_{j-1})\|_{L^p} \le N$ for all $j$. Therefore, if $p> n$, the above bound and the fact that $u_- \le u_j \le u_+$ imply that for each $j\in \mathbb{N}$, $$ |u_j|_{1,\alpha;\Omega} \le C\|u_j\|_{W^{2,p}} < \infty, $$ where $\alpha = 1-\frac{n}{p}$. This implies that the sequence $\{u_j\}$ is equicontinuous. The Arzela-Ascoli Theorem then implies that there exists a $u\in C(\overline{\Omega})$ and a subsequence $\{u_{j_k}\}$ such that $u_{j_k} \to u$ uniformly. Furthermore, due to the fact that the sequence $\{u_j\}$ is monotone increasing, we actually have that $u_j \to u$ uniformly on $\overline{\Omega}$. Once we have that $u_{j} \to u $ in $C(\overline{\Omega})$, we apply $L^p$ regularity theory again to conclude that \begin{align}\label{eq1june30} |u_{j} - u_{k}|_{1,\alpha;\Omega} \le& C\|u_j-u_k\|_{W^{2,p}}\\ \le& C'(\|u_{j}-u_{k}\|_{L^p}+\|F(x,u_{j-1})-F(x,u_{k-1})\|_{L^p}).\nonumber \end{align} Note that the above estimate follows from the fact that $u_{j}-u_{k}$ satisfies \begin{align} A(u_{j}-u_{k}) &= F(x,u_{j-1})-F(x,u_{k-1}) \hspace{3mm} \text{on $\Omega$}, \\ (u_{j} - u_{k})|_{\partial\Omega} &= 0. \nonumber \end{align} Given that $u_j \to u$ in $C(\overline{\Omega})$, \eqref{eq1june30} implies that the sequence $\{u_{j}\}$ is a Cauchy sequence in $C^{1}(\overline{\Omega})$.
The completeness of $C^{1}(\overline{\Omega})$ then implies that the sequence has a limit $v \in C^{1}(\overline{\Omega})$, and given that $u_{j} \to u$ in $C(\overline{\Omega})$, it follows that $u = v$. Similarly, by repeating the above argument and using higher order $L^p$ estimates we have that \begin{align}\label{eq2june30} |u_{j} - u_{k}|_{2,\alpha;\Omega} \le &C(\|u_j-u_k\|_{W^{3,p}} ) \\ \le & C'(\|u_j-u_k\|_{W^{1,p}}+\|F(x,u_{j-1})-F(x,u_{k-1})\|_{W^{1,p}})\nonumber, \end{align} where $u_{j} \to u$ in $C^{1}(\overline{\Omega})$ as $j \to \infty$. Again, \eqref{eq2june30}, the regularity of $F$ and the fact that $u_j \to u$ in $C^1(\ol{\Omega})$ imply that the sequence $\{u_{j} \}$ is Cauchy in $C^{2}(\overline{\Omega})$. A simple induction argument then shows that $u \in C^{\infty}(\overline{\Omega})$. The final step of the proof is to show that $u$ is an actual solution to the problem~\eqref{eq3june30}. It suffices to show that $u$ is a weak solution to the above problem. It is clear that $u = \rho \hspace{2mm} \text{on $\partial\Omega$}$, so we only need to show that $u$ satisfies \eqref{eq3june30} on $\Omega$. Fix $v \in H^1_0(\Omega)$. Then based on the definition of the sequence $\{u_j\}$, we have $$ \int_{\Omega}( a^{ij}D_j u_j D_i v + (c+M)u_{j}v )dx = \int_{\Omega} (f(x,u_{j-1})+ Mu_{j-1})v dx. $$ As $u_j \to u$ uniformly on $\overline{\Omega}$, we let $j\to \infty$ to conclude that $$ \int_{\Omega}( a^{ij}D_j u D_i v + (c+M)uv )dx = \int_{\Omega} (f(x,u)+ Mu)v dx. $$ Upon canceling the terms involving $M$ from both sides, we find that $u$ is a weak solution. \end{proof} \subsection{Outline of the Proof of Theorem~\ref{thm1june27}} \label{over} Now that the sub- and super-solution fixed-point theorem is in place, we give an outline for how to prove Theorem~\ref{thm1june27}.
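First, however, the monotone iteration at the heart of the proof above can be sketched numerically in one dimension. This is a hedged illustration with ad hoc model data ($Lu=-u''$ with $c=0$, $f(x,u)=1-u^3$, boundary value $\rho=1/2$, constant sub- and super-solutions $u_-=1/2$ and $u_+=1$, and $M=3$ dominating $|\partial f/\partial u|$ on $[1/2,1]$); it is not the operator class of Theorem~\ref{thm1june27}.

```python
import numpy as np

# Hedged 1D sketch of the monotone iteration (ad hoc model data):
# solve A u_j = F(x, u_{j-1}) with A = -d^2/dx^2 + M, F = f + M t,
# starting from the constant sub-solution u_- = 0.5.
N, rho, M = 99, 0.5, 3.0
h = 1.0 / (N + 1)

# tridiagonal discretization of -d^2/dx^2 + M on the interior grid
A = (np.diag(np.full(N, 2.0 / h**2 + M))
     + np.diag(np.full(N - 1, -1.0 / h**2), 1)
     + np.diag(np.full(N - 1, -1.0 / h**2), -1))

f = lambda u: 1.0 - u**3

u = np.full(N, 0.5)                     # start the iteration at u_-
for _ in range(50):
    rhs = f(u) + M * u                  # F(x, u_{j-1})
    rhs[0] += rho / h**2                # Dirichlet boundary contributions
    rhs[-1] += rho / h**2
    u_next = np.linalg.solve(A, rhs)
    assert np.all(u_next >= u - 1e-9)   # iterates increase monotonically
    u = u_next

# the limit stays trapped between the sub- and super-solution
assert np.all(u >= 0.5 - 1e-9) and np.all(u <= 1.0 + 1e-9)
```

On this model the iterates increase from the sub-solution and remain below the super-solution, mirroring the induction in the proof. The outline of the proof of Theorem~\ref{thm1june27} is as follows.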
\begin{itemize} \label{steps} \item[Step 1:] { \it Formulation of the problem.} We phrase~\eqref{eq3june27} in a way that allows us to solve a net of semilinear elliptic problems. We assume that the coefficients of $A$ and boundary data $\rho$ have representatives $(a^{ij}_{\epsilon}), (b^i_{\epsilon}),$ and $(\rho_{\epsilon})$ in $\mathcal{E}_M(\overline{\Omega})$ satisfying the assumptions \eqref{eq1june27}. Then for this particular choice of representatives, we solve the family of problems: \begin{align}\label{11feb15eq1} A_{\epsilon}u_{\epsilon} = -\sum_{i,j=1}^N D_i( a_{\epsilon}^{ij}D_j u_{\epsilon}) &+ \sum_{i=1}^K b^i_{\epsilon}u_{\epsilon}^{n_i}=0 \quad \text{ in $\Omega$},\\ u_{\epsilon} &= \rho_{\epsilon} \quad \text{on $\partial\Omega$}. \nonumber \end{align} Then we must verify that the net of solutions satisfies $(u_{\epsilon})\in \mathcal{E}_M(\overline{\Omega})$ and that~\eqref{11feb15eq1} is satisfied for the other representatives of $A,\rho,u$. \item[Step 2:] {\it Determine $L^{\infty}$-estimates and a net of generalized constant sub-solutions and super-solutions}. We determine constant, {\em a priori} $L^{\infty}$ bounds such that for a positive net of solutions $(u_{\epsilon})$ of the semilinear problem \eqref{11feb15eq1}, there exist constants $a_1,a_2 \in \mathbb{R}$, $C_1, C_2>0$ independent of $\epsilon\in (0,1)$ such that $$C_1\epsilon^{a_1}<\alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon} <C_2\epsilon^{a_2}.$$ These estimates are constructed in such a way that for each $\epsilon$, the pair $\alpha_{\epsilon}, \beta_{\epsilon}$ are sub- and super-solutions for \eqref{11feb15eq1}. \item[Step 3:]{\it Apply fixed-point theorem to solve each semilinear problem in \eqref{11feb15eq1}}. Using the sub- and super-solutions $\alpha_{\epsilon}, \beta_{\epsilon}$, we apply Theorem~\ref{thm2june27} to obtain a net of solutions $(u_{\epsilon}) \in C^{\infty}(\overline{\Omega})$.
\item[Step 4:]{\it Verify that the net of solutions $(u_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$}. Here we show that the net of solutions satisfies the necessary growth conditions in $\epsilon$ using the growth conditions on the sub- and super-solutions and Theorem~\ref{thm1june30}. \item[Step 5:]{\it Verify that the solution is well-defined}. Once we have determined that the net of solutions $(u_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$, we conclude that $[(u_{\epsilon})] \in \mathcal{G}(\overline{\Omega})$ is a solution to the Dirichlet problem \eqref{eq3june27} by showing that the solution is independent of the representatives chosen. Note that most of the work for this step was done in Proposition~\ref{prop1july1}. \end{itemize} We shall carry out the above steps in our proof of Theorem~\ref{thm1june27} in Section~\ref{results}. We still need to determine a net of sub- and super-solutions for \eqref{eq1june27}, which we do in Section~\ref{bounds1}. But before we move on to this and the other steps in the above outline, we briefly return to the motivating problem \eqref{problem} by discussing how to embed a problem with distributional data into $\mathcal{G}(\ol{\Omega})$. \subsection{Embedding a Semilinear Elliptic PDE with Distributional Data into $\mathcal{G}(\ol{\Omega})$.} \label{embed2} Now that we have defined what it means to solve a differential equation in $\mathcal{G}(\ol{\Omega})$, we are ready to return to the problem discussed at the beginning of the paper. We are interested in solving an elliptic, semilinear Dirichlet problem of the form \begin{align}\label{eq1:27oct11} -\sum_{i,j=1}^ND_i(a^{ij}D_ju) &+ \sum^K_{i=1}b^i u^{n_i} = 0 \quad \text{ in $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$} \nonumber, \end{align} where $a^{ij}, b^i$ and $\rho$ are potentially distributional and $n_i \in \mathbb{Z}$ for each $i$.
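The mechanism behind the embedding used for such data, uniform convergence of mollifications of continuous functions, can be sketched numerically. In the sketch below the Gaussian merely stands in for the mollifier $\phi$ of \eqref{eq4:28oct11}, and the test function $|x|$ is an arbitrary continuous choice; both are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: (f * phi_eps) -> f uniformly for continuous f.
# A Gaussian stands in for the paper's mollifier phi (an assumption),
# and f(x) = |x| is an arbitrary continuous test function.
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
f = np.abs(x)

def mollify(vals, eps):
    t = np.arange(-4.0 * eps, 4.0 * eps + dx, dx)
    phi = np.exp(-(t / eps) ** 2)
    phi /= phi.sum() * dx               # normalize to unit mass
    return np.convolve(vals, phi * dx, mode="same")

errs = []
for eps in (0.2, 0.1, 0.05):
    # sup-norm error on [-0.1, 0.1], away from truncation at the edges
    errs.append(np.max(np.abs(mollify(f, eps) - f)[900:1101]))

assert errs[0] > errs[1] > errs[2]      # uniform error shrinks with eps
```

The sup-norm error is largest at the kink of $|x|$ and decays linearly in the mollification scale, which is the behavior the embedding exploits for continuous data.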
If we can formulate this problem as a family of equations similar to \eqref{11feb15eq1}, then it can readily be solved in $\mathcal{G}(\ol{\Omega})$ by Theorem~\ref{thm1june27}. The key to formulating our problem with singular data as a net of problems is Theorem~\ref{thm1:5feb13}. Suppose that $\Omega' \subset\subset \Omega$ and $\Omega$ is of $C^{\infty}$-class. For this choice of $\Omega'$, we can construct an extension operator $E$ as in Theorem~\ref{thm1:4feb13} and then use Theorem~\ref{thm1:5feb13} to define an embedding of ${\mathcal F}'(\Omega)$ into ${\mathcal G}(\ol{\Omega})$, where we defined ${\mathcal F}'(\Omega)\subset {\mathcal D}'(\Omega)$ in Section~\ref{embed}. If we are given a problem of the form \eqref{eq1:27oct11} with data $a^{ij}, b^i, \rho$ in $\mathcal{F}'(\Omega)$, then we may use Theorem~\ref{thm1:5feb13} to embed the coefficients $a^{ij}, b^i$ and $\rho$ into $\mathcal{G}(\ol{\Omega})$. We will denote a representative of the image of each of these terms in $\mathcal{G}(\ol{\Omega})$ by $(a^{ij}_{\epsilon}), (b^i_{\epsilon})$ and $(\rho_{\epsilon})$. Then for a choice of representatives, we obtain a net of problems of the form \eqref{11feb15eq1}. In order to solve this net of problems using Theorem~\ref{thm1june27}, we need there to exist a choice of representatives $(a^{ij}_{\epsilon}), (b^i_{\epsilon})$ and $(\rho_{\epsilon})$ that satisfy the conditions specified in \eqref{eq1june27}. While these conditions might seem exacting, this solution framework still admits a wide range of interesting problems. This is evident when one considers the following proposition: \begin{proposition}\label{eq1:24feb12} Let $\Omega'\subset\subset \Omega$, where $\Omega$ is bounded and of $C^{\infty}$-class, and define ${\mathcal F}'(\Omega)$ as in Section~\ref{embed}. Let $n_i \in \mathbb{Z}$ be a collection of integers for $1 \le i \le K$ and assume that there exist $1 \le i,j \le K$ such that $n_i <0$ and $n_j >0$.
Then assume that $$n_1 = \min\{n_i:~ n_i < 0 \}, \quad \text{and} \quad n_K = \max\{n_i:~ n_i > 0\}.$$ Suppose that $a^{ij}, b^1, b^K, \rho \in C(\ol{\Omega})$ and $b^2, \cdots, b^{K-1} \in \mathcal{F}'(\Omega)$. Additionally assume that $a^{ij}$ satisfies the symmetry and ellipticity conditions and that $\rho>0$, $b^1<0$ and $b^K>0$ in $\Omega$. Then the problem \begin{align}\label{eq1:28oct11} -\sum_{i,j=1}^ND_i(a^{ij}D_ju) &+ \sum^K_{i=1}b^i u^{n_i} = 0 \quad \text{ in $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$} \nonumber, \end{align} admits a solution in $\mathcal{G}(\ol{\Omega})$. \end{proposition} \begin{proof} This follows from Theorem~\ref{thm1:5feb13}, Theorem~\ref{thm1june27}, Remark~\ref{rem1:11nov11} and the fact that \\$(a^{ij}\ast \phi_{\epsilon})$, $(b^1\ast \phi_{\epsilon})$, $(b^K\ast \phi_{\epsilon})$ and $(\rho \ast \phi_{\epsilon})$ converge uniformly to $a^{ij}, b^1, b^K$ and $\rho$ in $\ol{\Omega}$. For $\epsilon$ sufficiently small, the corresponding problem \eqref{11feb15eq1} in $\mathcal{G}(\ol{\Omega})$ will satisfy the conditions specified in \eqref{eq1june27}. Therefore, Theorem~\ref{thm1june27} and Remark~\ref{rem1:11nov11} imply the result. \end{proof} With the issue of solving \eqref{eq1:27oct11} at least partially resolved, we return to the task of proving Theorem~\ref{thm1june27}. We begin by establishing some {\em a priori} $L^{\infty}$-bounds for a solution to our semilinear problem \eqref{eq1:28oct11} if the given data is smooth. \section{Sub- and Super-Solution Construction and Estimates} \label{bounds1} Given an operator $A \in \mathcal{A}_0$ with coefficients satisfying \eqref{eq1june27}, our solution strategy for the Dirichlet problem \eqref{eq3june27} is to solve the family of problems \eqref{11feb15eq1} and then establish the necessary growth estimates. In order for this to be a viable strategy, we first need to show that \eqref{11feb15eq1} has a solution for each $\epsilon \in (0,1)$.
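Before doing so, we briefly sketch the kind of $\epsilon$-growth the embedded coefficients exhibit. The sketch assumes, as in the proof of Proposition~\ref{eq1:24feb12}, that the embedding acts by convolution with a standard mollifier $\phi_{\epsilon}(x) = \epsilon^{-N}\phi(x/\epsilon)$; the choice of the Dirac distribution as a sample coefficient is ours and is purely illustrative. For a singular coefficient $b^i = \delta_{x_0} \in \mathcal{F}'(\Omega)$ with $x_0 \in \Omega$, the mollified representative satisfies
$$
|\,\delta_{x_0} \ast \phi_{\epsilon}\,|_{0;\Omega}
\;=\; \sup_{x\in\Omega}\,\epsilon^{-N}\left|\phi\!\left(\frac{x-x_0}{\epsilon}\right)\right|
\;\le\; \epsilon^{-N}\,|\phi|_{0;\mathbb{R}^N}
\;=\; \mathcal{O}(\epsilon^{-N}),
$$
so the net is moderate. By contrast, for a continuous coefficient such as $b^1$, the mollification converges uniformly on $\ol{\Omega}$; since $b^1 < 0$ on the compact set $\ol{\Omega}$, we have $b^1 \ast \phi_{\epsilon} \le \frac{1}{2}\sup_{\ol{\Omega}} b^1 < 0$ for $\epsilon$ sufficiently small, so the corresponding sign conditions required of the net in \eqref{eq1july2} below hold with exponent zero.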
Given that $n_i <0$ for some $1 \le i \le K$, for each $\epsilon$, we must restrict the operator $$A_{\epsilon}u_{\epsilon} = -\sum_{i,j=1}^N D_i(a_{\epsilon}^{ij}D_j u_{\epsilon})+ \sum_{i=1}^K b^i_{\epsilon} u_{\epsilon}^{n_i},$$ to a subset of functions in $C^{\infty}(\ol{\Omega})$ to guarantee that $A_{\epsilon}$ is well-defined. In particular, for each $\epsilon$ we consider functions $u_{\epsilon}\in C^{\infty}(\ol{\Omega})$ such that $0 < \alpha_{\epsilon}\le u_{\epsilon} \le \beta_{\epsilon} < \infty$ for some choice of $\alpha_{\epsilon}$ and $\beta_{\epsilon}$. The first part of this section is dedicated to making judicious choices of $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ for each $\epsilon$ such that a solution $u_{\epsilon}$ to \eqref{11feb15eq1} exists that satisfies $\alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}$. Once a net of solutions $(u_{\epsilon})$ is determined, it is necessary to show that if $(u_{\epsilon}) \in \mathcal{E}_M(\ol{\Omega})$, then an operator $A \in \mathcal{A}_0$ whose coefficients satisfy \eqref{eq1june27} is well-defined for $(u_{\epsilon})$. Recall that $A$ is only a well-defined operator for elements $u \in \mathcal{G}(\ol{\Omega})$ satisfying $u_{\epsilon} \ge C\epsilon^a$ for $\epsilon \in (0,\epsilon_0) \subset (0,1)$, $a\in \mathbb{R}$ and some constant $C$ independent of $\epsilon$. This will require us to establish certain $\epsilon$-growth estimates on $\alpha_{\epsilon}$, which we do later in this section. \subsection{$L^{\infty}$ Bounds for the Semilinear Problem} \label{bounds} We begin by determining the net of {\em a priori} bounds $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ described above. For now we disregard the $\epsilon$ notation.
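To fix ideas before stating the general estimates, it may help to compute the bounds in a model case of our own choosing (constant coefficients; this example is illustrative and is not used elsewhere). Take $K = 2$, $b^1 \equiv -1$ with $n_1 = -1$, $b^2 \equiv 1$ with $n_2 = 3$, and $\rho \equiv 2$, so that for $y > 0$,
$$
\sum_{i=1}^{K} b^i\,y^{n_i} \;=\; -\,y^{-1} + y^{3},
$$
which is negative exactly for $y \in (0,1)$ and positive exactly for $y \in (1,\infty)$. The constructions of Proposition~\ref{prop1july20} below then give $\alpha_1' = \beta' = 1$, and consequently
$$
\alpha_1 \;=\; \min\{1,\;\inf_{\partial\Omega}\rho\} \;=\; 1,
\qquad
\beta \;=\; \max\{1,\;\sup_{\partial\Omega}\rho\} \;=\; 2,
$$
so every positive weak solution of this model problem is trapped in the interval $[1,2]$.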
In the following proposition we determine {\em a priori} estimates for a weak solution $u\in H^{1}(\Omega)$ to a problem of the form \begin{align} \label{eq1june20} -\sum_{i,j=1}^ND_i (a^{ij} D_j u) &+ \sum_{i=1}^K b^iu^{n_i}=0 \text{ in $\Omega$}, \\ u &= \rho \quad \text{on $\partial\Omega$} ,\nonumber \end{align} with certain conditions imposed on the coefficients and exponents. In particular, in the following proposition we assume that $\Omega \subset \mathbb{R}^n$ is connected, bounded, and of $C^{\infty}$-class, and $a^{ij}, b^i, \rho \in C(\overline{\Omega})$ with $\rho >0$ in $\ol{\Omega}$. \begin{proposition} \label{prop1july20} Suppose that the semilinear operator in \eqref{eq1june20} has the property that $n_i>0$ for some $1 \le i \le K$. Let $n_K$ be the largest positive exponent and suppose that $b^K(x)>0$ in $\overline{\Omega}$. Additionally, assume that one of the following two cases holds: \begin{align} \text{{(1)}} &\text{ $n_i < 0$ for some $1 \le i < K$ and if $n_1= \min\{n_i: n_i < 0\}$}, \label{eq1:10feb13} \\ & \qquad \text{then $b^1(x) < 0$ in $\overline{\Omega}$.} \nonumber \\ \text{{(2)}} &\text{ $n_K$ is odd and $0<n_i~$ for all $1 \le i\le K$. } \label{eq2:10feb13} \end{align} If case \eqref{eq1:10feb13} holds, define \begin{align}\label{eq3:10feb13} &\alpha_1' = \sup_{c\in\mathbb{R}_+} \left\{\sum_{i=1}^K\sup_{x\in\ol{\Omega}}b^i(x)y^{n_i}< 0 \hspace{3mm}\forall y\in (0,c)\right\},\\ &\alpha_1 = \min\{\alpha_1', \inf_{x\in \partial\Omega}\rho(x)\}. \end{align} If case \eqref{eq2:10feb13} holds, define \begin{align} &\alpha_2' = \sup_{c\in\mathbb{R}} \left\{\sum_{i=1}^K\sup_{x\in\ol{\Omega}}b^i(x)y^{n_i}< 0 \hspace{3mm}\forall y\in (-\infty,c)\right\},\\ &\alpha_2 = \min\{\alpha_2', \inf_{x\in \partial\Omega}\rho(x)\}.
\end{align} If case \eqref{eq1:10feb13} or case \eqref{eq2:10feb13} holds, define \begin{align}\label{eq4:10feb13} &\beta' = \inf_{c\in\mathbb{R}} \left\{\sum_{i=1}^K\inf_{x\in\ol{\Omega}}b^i(x)y^{n_i}> 0 \hspace{3mm}\forall y\in (c,\infty)\right\},\\ &\beta = \max\{\beta', \sup_{x\in \partial\Omega} \rho(x)\}. \end{align} \noindent Under these assumptions and definitions, if case \eqref{eq1:10feb13} holds and $u \in H^1(\Omega)$ is a positive weak solution to Eq. \eqref{eq1june20}, then $0< \alpha_1 \le u \le \beta < \infty $. Otherwise, if case \eqref{eq2:10feb13} holds and $u \in H^1(\Omega)$ is a weak solution to Eq. \eqref{eq1june20}, then $ -\infty < \alpha_2 \le u \le \beta < \infty$. \end{proposition} \begin{remark} We observe that Eq. \eqref{eq1june20} does not have a well-defined weak formulation for arbitrary $u \in H^1(\Omega)$. The way to interpret Proposition~\ref{prop1july20} is that if we seek a positive solution $u \in H^1(\Omega)$ that weakly solves Eq. \eqref{eq1june20} when condition \eqref{eq1:10feb13} holds, then we only need to look for solutions in $H^1(\Omega)\cap [\alpha_1,\beta]$, where $[\alpha_1,\beta]$ denotes the $L^{\infty}(\Omega)$ interval of functions $u$ such that $\alpha_1\le u \le \beta$ a.e. Similarly, we only need to look for $u \in H^1(\Omega)\cap [\alpha_2,\beta]$ if condition \eqref{eq2:10feb13} holds. \end{remark} \begin{remark} Note that for the purposes of proving Theorem~\ref{thm1june27}, we are primarily concerned with case \eqref{eq1:10feb13}. This is the case that we will focus on for the remainder of the paper. However, with a little extra work we could easily generalize Theorem~\ref{thm1june27} to allow for $n_i > 0$ for all $1 \le i \le K$ and $n_K> 0$ odd. Then we could use case \eqref{eq2:10feb13} to establish the necessary bounds.
\end{remark} \begin{proof} We first note that in all cases $\alpha_1$, $\alpha_2$ and $\beta$ are well-defined given the conditions on $b^1(x)$ and $b^K(x)$ and the exponents $n_i$ for $1 \le i \le K$. In particular, the assumption that $b^1(x) < 0$ in \eqref{eq1:10feb13} ensures that $\alpha_1$ is well-defined and the assumption that $n_K$ is odd and $n_i >0$ ensures that $\alpha_2$ is well-defined. Based on the definitions of $\alpha_1$, $\alpha_2$ and $\beta$, if $u$ is a solution to \eqref{eq1june20} (we assume $u$ is nonnegative in the case of \eqref{eq1:10feb13}), then it is easy to verify that the functions $\overline{\phi}_1 = (u-\beta)^+$ and $\underline{\phi}_1 = (u-\alpha_1)^-$ are in $H^1_0(\Omega)$ if \eqref{eq1:10feb13} holds and $\underline{\phi}_2 = (u-\alpha_2)^-$ and $\overline{\phi}_2 = (u-\beta)^+$ are in $H_0^1(\Omega)$ if \eqref{eq2:10feb13} holds. Indeed, we may write $u = u_0+u_D$, where $u_0 \in H^{1}_0(\Omega)$ and we have that \begin{align} &0 \le \ol{\phi}_1 = (u-\beta)^+ = (u_0+u_D -\beta)^+ \le (u_D - \beta)^+ +u_0^+ \label{eq5:10feb13}\\ &0 \ge \underline{\phi}_1 = (u-\alpha_1)^- = (u_0+u_D - \alpha_1)^- \ge (u_D-\alpha_1)^- + u_0^- . \label{eq6:10feb13} \end{align} Taking the trace of Eqs. \eqref{eq5:10feb13} and \eqref{eq6:10feb13} and using the definition of $\alpha_1$ and $\beta$ we find that $\underline{\phi}_1$ and $\ol{\phi}_1$ are in $H^1_0(\Omega)$. By applying a similar argument we can conclude that $\underline{\phi}_2 \in H^1_0(\Omega)$. Define the set \begin{align*} \overline{\mathcal{Y}} &=\left\{x\in\overline{\Omega}~|~u\ge \beta \right\} \end{align*} if case \eqref{eq1:10feb13} or \eqref{eq2:10feb13} holds. If case \eqref{eq1:10feb13} holds, let \begin{align*} \underline{\mathcal{Y}}_1 &= \{x\in\overline{\Omega}~|~0< u \le \alpha_1\}, \end{align*} and if case \eqref{eq2:10feb13} holds, let \begin{align*} \underline{\mathcal{Y}}_2 &=\{x \in \overline{\Omega}~|~u < \alpha_2\}. 
\end{align*} Then if $u \in H^1(\Omega)$ is a positive weak solution to~\eqref{eq1june20}, supp($\underline{\phi}_1$) = $\underline{\mathcal{Y}}_1$. Similarly, if $u \in H^1(\Omega)$ is a weak solution to $\eqref{eq1june20}$, then supp($\overline{\phi}_1$) = supp($\ol{\phi}_2$) = $\overline{\mathcal{Y}}$ and supp($\underline{\phi}_2$) = $\underline{\mathcal{Y}}_2$. We have the following string of inequalities for $\underline{\phi}_1$ if condition \eqref{eq1:10feb13} holds: \begin{align} C_2\|\underline{\phi}_1\|^2_{H^1(\Omega)} &\le C_1\|\nabla((u-\alpha_1)^-)\|^2_{L^2(\Omega)} \\ &\le \int_{\Omega} a^{ij} D_j((u-\alpha_1)^-)D_i ((u-\alpha_1)^-) ~dx \nonumber \\ & = \int_{\Omega}a^{ij} D_j(u-\alpha_1)D_i ((u-\alpha_1)^-)~ dx \nonumber \\ &= \int_{\underline{{\mathcal Y}}_1}(-\sum_{i=1}^K b^i(x)u^{n_i}) (u-\alpha_1)~ dx \le 0. \nonumber \end{align} We can make a similar argument to show that $\|\underline{\phi}_2\|_{H^1(\Omega)} = 0$ if condition \eqref{eq2:10feb13} holds. We also have the following string of inequalities for $\overline{\phi} = \ol{\phi}_1 = \ol{\phi}_2$ if either condition \eqref{eq1:10feb13} or \eqref{eq2:10feb13} holds: \begin{align} C_2\|\ol{\phi}\|^2_{H^1(\Omega)} & \le C_1\|\nabla((u-\beta)^+)\|^2_{L^2(\Omega)} \\ &\le \int_{\Omega} a^{ij} D_j ((u-\beta)^+)D_i ((u-\beta)^+)~ dx \nonumber \\ &= \int_{\Omega} a^{ij} D_j (u-\beta)D_i ((u-\beta)^+) ~dx \nonumber \\ &=\int_{\overline{\mathcal{Y}}}(-\sum_{i=1}^K b^i(x)u^{n_i})(u-\beta) ~dx \le 0. \nonumber \end{align} The above inequalities force $\|\underline{\phi}_1\|_{H^1(\Omega)} = \|\ol{\phi}\|_{H^1(\Omega)} = 0$, which implies the result.
\end{proof} Now that we've established $L^{\infty}$-bounds for solutions to \eqref{eq1june20}, we can apply these bounds for each fixed $\epsilon$ to determine a net of bounds for the following net of problems: \begin{align} \label{eq1july1} -\sum_{i,j=1}^ND_i (a_{\epsilon}^{ij} D_j u_{\epsilon}) &+ \sum_{i=1}^K b_{\epsilon}^i u_{\epsilon}^{n_i}=0\quad \text{ in $\Omega$}\\ u_{\epsilon} &= \rho_{\epsilon} \quad \text{on $\partial\Omega$} , \nonumber \end{align} where $(a^{ij}_{\epsilon}), (b^i_{\epsilon}), (\rho_{\epsilon}) \in \mathcal{E}_M(\ol{\Omega})$ satisfy the following for all $\epsilon <1$: \begin{align} \label{eq1july2} &a^{ij}_{\epsilon} = a^{ji}_{\epsilon}, \hspace{5mm} a^{ij}_{\epsilon}\xi_i\xi_j \ge \lambda_\epsilon|\xi|^2 \ge C_1\epsilon^{a_1}|\xi|^2 \\ &|a^{ij}_{\epsilon}|_{k,\alpha;\Omega}, \hspace{2mm} |b^i_{\epsilon}|_{k,\alpha;\Omega} \le \Lambda_{k,\epsilon} \le C_2(k)\epsilon^{a_2(k)}, \hspace{3mm} \forall k \in \mathbb{N} \nonumber \\ &b^1_{\epsilon} \le -C_3\epsilon^{a_3}, \hspace{3mm} \{n_i: n_i< 0 \} \ne \emptyset, \hspace{3mm} n_1=\min\{n_i : n_i < 0\} \nonumber \\ &b^K_{\epsilon} \ge C_4\epsilon^{a_4}, \hspace{3mm} \{n_i: n_i > 0\} \ne \emptyset, \hspace{3mm} n_K=\max\{n_i : n_i > 0\} \nonumber \\ &\rho_{\epsilon} \ge C_5\epsilon^{a_5} \nonumber, \end{align} where $C_1,\cdots,C_5$ are positive constants and $a_1,\cdots,a_5 \in \mathbb{R}$, all independent of $\epsilon$. The notation $C_2(k)$ and $a_2(k)$ is meant to indicate that these constants may depend on $k$. \begin{proposition} \label{prop2july20} Suppose that for each fixed $\epsilon \in (0,1]$, $u_{\epsilon}$ is a positive solution to~\eqref{eq1july1} with coefficients satisfying~\eqref{eq1july2}. Then there exist $L^{\infty}$-bounds $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ such that for each $\epsilon$, $0< \alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}$.
\end{proposition} \begin{proof} For each fixed $\epsilon$, if the assumptions in~\eqref{eq1july2} hold, then case \eqref{eq1:10feb13} of Proposition~\ref{prop1july20} is satisfied. Therefore, for each ${\epsilon \in (0,1]}$, there exist $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ such that ${0<\alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}}$. \end{proof} \subsection{Sub- and Super-Solutions} \label{supersolution} In the previous section we showed that if the data of \eqref{eq1july1} satisfies \eqref{eq1july2} and if $u_{\epsilon}\in C^{\infty}(\ol{\Omega})$ solves \eqref{eq1july1} for each $\epsilon$, then $0< \alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}$. Now, for each $\epsilon \in (0,1]$, we want to show that there actually exists a solution $u_{\epsilon}\in C^{\infty}(\ol{\Omega})$ satisfying $0<\alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}$. The key to proving this result lies in the fact that $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ are sub- and super-solutions to \eqref{eq1july1} for each $\epsilon$. \begin{proposition}\label{prop1:15nov11} Suppose that the coefficients in the net of problems~\eqref{eq1july1} satisfy~\eqref{eq1july2}. Then there exists a net $(u_{\epsilon}) \in (C(\ol{\Omega}))^I$ such that for each $\epsilon$, $u_{\epsilon}$ solves~\eqref{eq1july1} and ${0 < \alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}}$, where $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ are the bounds established in Proposition~\ref{prop2july20}. \end{proposition} \begin{proof} To solve the above family of problems in~\eqref{eq1july1}, we show that the net of $L^{\infty}$-bounds $(\alpha_{\epsilon})$ and $(\beta_{\epsilon})$ found in Proposition~\ref{prop2july20} is a net of sub- and super-solutions to~\eqref{eq1july1}. We then apply Theorem~\ref{thm2june27} to conclude that for each $\epsilon$, there exists a solution $u_{\epsilon} \in C^{\infty}(\overline{\Omega})$.
Fix $\epsilon$ and let $\alpha'_{\epsilon}$ and $\beta'_{\epsilon}$ be defined by~\eqref{eq3:10feb13} and~\eqref{eq4:10feb13} respectively, and let \begin{align*} \alpha_{\epsilon} &=\min\{\alpha'_{\epsilon},\inf_{x\in\partial\Omega}\rho_{\epsilon}(x)\}, \\ \beta_{\epsilon} &= \max\{\beta'_{\epsilon},\sup_{x\in\partial\Omega}\rho_{\epsilon}(x)\}. \end{align*} The conditions in Eq. \eqref{eq1july2} and the fact that $\rho_{\epsilon} > 0$ imply that $\alpha_{\epsilon} > 0$. Then the definition of $\alpha_{\epsilon}$ implies that \begin{align} A_{\epsilon}\alpha_{\epsilon} &= \sum_{i=1}^K b^i_{\epsilon}(\alpha_{\epsilon})^{n_i} \le \sum_{i=1}^K \sup_{x\in\overline{\Omega}}b^i_{\epsilon}(\alpha_{\epsilon})^{n_i}\le 0,\\ \alpha_{\epsilon} & \le \inf_{x\in\partial\Omega}\rho_{\epsilon}(x) \le \rho_{\epsilon}, \nonumber \end{align} which shows that $\alpha_{\epsilon}$ is a sub-solution for each $\epsilon$. Similarly, the conditions in Eq. \eqref{eq1july2} and the definition of $\beta'_{\epsilon}$ imply that \begin{align} A_{\epsilon}\beta_{\epsilon} &= \sum_{i=1}^K b^i_{\epsilon}(\beta_{\epsilon})^{n_i} \ge \sum_{i=1}^K \inf_{x\in\overline{\Omega}}b^i_{\epsilon}(\beta_{\epsilon})^{n_i}\ge 0,\\ \beta_{\epsilon} & \ge \sup_{x\in\partial\Omega}\rho_{\epsilon} \ge \rho_{\epsilon}, \nonumber \end{align} which shows that $\beta_{\epsilon}$ is a super-solution for each $\epsilon$. What remains is to show that $\alpha_{\epsilon} \le \beta_{\epsilon}$. Given the definition of $\alpha_{\epsilon}$ and $\beta_{\epsilon}$, it suffices to show that $\alpha'_{\epsilon} \le \beta'_{\epsilon}$. Define $$ \gamma_{\epsilon}=\inf_{c\in\mathbb{R}}\{\sum_{i=1}^K\sup_{x\in\overline{\Omega}}b^i_{\epsilon}(x)d^{n_i} \ge 0 \hspace{3mm} \forall d\in(c,\infty)\}. $$ Then we have that $\alpha'_{\epsilon}\le\gamma_{\epsilon}$ by the definition of $\alpha_{\epsilon}'$.
Furthermore, for a fixed $\epsilon$, given the assumptions on $b_{\epsilon}^i(x)$, $$ \sum_{i=1}^K\inf_{x\in\overline{\Omega}}b_{\epsilon}^i(x)y^{n_i} \le \sum_{i=1}^K\sup_{x\in\overline{\Omega}}b_{\epsilon}^i(x)y^{n_i}, \hspace{3mm} \forall y\in \mathbb{R}. $$ Therefore the definition of $\beta_{\epsilon}'$ and the above inequality imply that $ \gamma_{\epsilon} \le \beta'_{\epsilon}$. Hence $\alpha'_{\epsilon}\le\beta'_{\epsilon}$ and the interval $[\alpha_{\epsilon},\beta_{\epsilon}]$ is a nonempty subset of $\mathbb{R}^+$. For each $\epsilon \in (0,1]$, the hypotheses of Theorem~\ref{thm2june27} are satisfied for the elliptic problem \eqref{eq1july1}, so we may conclude that there exists a net of solutions $(u_{\epsilon}) \in (C^{\infty}(\ol{\Omega}))^I$ satisfying $0<\alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}$ for each fixed $\epsilon$. \end{proof} The final task in this section is to show that an operator $A \in \mathcal{A}_0$, with coefficients satisfying \eqref{eq1july2}, is a well-defined operator on any element $(u_{\epsilon}) \in \mathcal{E}_M(\ol{\Omega})$ satisfying $$ \alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon} \quad \forall \epsilon \in (0,1].$$ Recall that in Section~\ref{netsofproblems} we determined that $A$ is only well-defined for invertible $u \in \mathcal{G}(\ol{\Omega})$. Therefore, it suffices to show that $(\alpha_{\epsilon}), (\beta_{\epsilon})$ and $(\frac{1}{\alpha_{\epsilon}}),(\frac{1}{\beta_{\epsilon}})$ are generalized constants in the sense of \eqref{eq1:3nov11}, which we verify in the following lemma. \begin{lemma} \label{lem1july4} Let $(\alpha_{\epsilon})$ and $(\beta_{\epsilon})$ be the net of sub- and super-solutions to~\eqref{eq1july1} determined in Section~\ref{bounds}. Suppose that the coefficients of~\eqref{eq1july1} satisfy~\eqref{eq1july2}.
Then $(\alpha_{\epsilon})$, $(\beta_{\epsilon})$, $(\frac{1}{\alpha_{\epsilon}})$, and $(\frac{1}{\beta_{\epsilon}})$ are in $\overline{\mathbb{C}}$, the ring of generalized constants. \end{lemma} \begin{remark} Note that if $(\frac{1}{\alpha_{\epsilon}}) \in \ol{\mathbb{C}}$, then this implies that there exists an $\epsilon_0\in (0,1)$, some constant $C$ independent of $\epsilon$ and $a\in \mathbb{R}$ such that $\alpha_{\epsilon} \ge C\epsilon^a$ for all $\epsilon \in (0,\epsilon_0)$. Then if $(u_{\epsilon}) \in \mathcal{E}_M(\ol{\Omega})$ satisfies $\alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}~$ for each $\epsilon$, $(\frac{1}{\alpha_{\epsilon}}) \in \ol{\mathbb{C}}$ implies that $u = [(u_{\epsilon})]$ is invertible in $\mathcal{G}(\ol{\Omega})$. See Section~\ref{netsofproblems} and \cite{GKOS01} for more details. \end{remark} \begin{proof} We need to show that there exist constants $D_1,D_2$ independent of $\epsilon$ and $\epsilon_0 \in (0,1)$ such that for all $\epsilon \in (0,\epsilon_0)$, \begin{align*} \alpha_{\epsilon}&\ge D_1\epsilon^{b_1} \hspace{3mm} \text{for some $b_1\in\mathbb{R}$}, \\ \beta_{\epsilon} &\le D_2\epsilon^{b_2} \hspace{3mm} \text{for some $b_2\in\mathbb{R}.$} \end{align*} So it is necessary to verify that there exist constants $D_1$ and $D_2$ so that for $\epsilon$ sufficiently small \begin{align*} \alpha'_{\epsilon} &\ge D_1\epsilon^{b_1}, \hspace{3mm} \text{and} \hspace{3mm}\inf_{x\in \partial\Omega} \rho_{\epsilon} \ge D_1\epsilon^{b_1}, \\ \beta'_{\epsilon} &\le D_2\epsilon^{b_2}, \hspace{3mm} \text{and} \hspace{3mm}\sup_{x\in\partial\Omega} \rho_{\epsilon} \le D_2\epsilon^{b_2}. \end{align*} Given that $(\rho_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$, $$\sup_{x\in\partial\Omega} \rho_{\epsilon} \le \sup_{x\in\overline{\Omega}}\rho_{\epsilon} = \mathcal{O}(\epsilon^b),$$ for some $b\in \mathbb{R}$.
This and the assumption on $(\rho_{\epsilon})$ in \eqref{eq1july2} imply that we only need to obtain the necessary $\epsilon$-bounds on $\alpha'_{\epsilon}$ and $\beta'_{\epsilon}$. For now, drop the $\epsilon$ notation and consider $\alpha'$ defined in \eqref{eq3:10feb13}. For a given function $f$, define $$ \underline{\gamma}_{f} = \sup_{c\in\mathbb{R}_+}\left\{f(y) \le 0 \hspace{3mm}\forall y\in (0,c)\right\}.$$ Given that $$\alpha' = \sup_{c\in\mathbb{R}_+} \left\{\sum_{i=1}^K\sup_{x\in\ol{\Omega}}b^i(x)y^{n_i}< 0 \hspace{3mm}\forall y\in (0,c)\right\},$$ it is clear that for another function $f(y)$ such that $$f(y) \ge \sum_{i=1}^K\sup_{x\in\ol{\Omega}}b^i(x)y^{n_i} \hspace{3mm} \text{on $(0,c)$},$$ if $\underline{\gamma}_f$ is defined and $\alpha' \in (0,c)$, it must hold that $\underline{\gamma}_f \le \alpha'$. Let $C_1 =|\{n_i:n_i\ge0\}|$ and $C_2 =|\{n_i:n_i<0\}|$ and if $C_2 > 1$, let $n_{i_2} =\min\{n_i: n_1<n_i<0\}$. Note that $C_1, C_2 \ge 1$ based on the assumptions in~\eqref{eq1july2}. Then recalling that $b^1(x)<0$, $b^K(x)>0$ correspond to the coefficients of the terms with the smallest negative and largest positive exponent of $\sum^K_{i=1} b^i(x)u^{n_i}$, if $\sup_{x\in\overline{\Omega}}|b^i(x)| \le \Lambda$ for each $i$, the following must hold for $y\in (0,1)$: \begin{align} \sum_{i=1}^K\sup_{x\in\ol{\Omega}}b^i(x)y^{n_i}\le \sup_{x\in\overline{\Omega}}b^1(x)y^{n_1}+C_1\Lambda +(C_2-1)\Lambda y^{n_{i_2}}. \end{align} Define $$ d=\left(\frac{-\sup_{x\in\ol{\Omega}} (b^1(x))}{2(C_2-1)\Lambda}\right)^{\frac{1}{n_{i_2}-n_1}} $$ if $ C_2 > 1$ and let $d =1$ if $C_2 =1$. Then let $c = \min\{1,d\}$. The definition of $c$ implies that $$(C_2-1)\Lambda y^{n_{i_2}}\le -\frac{\sup_{x\in\overline{\Omega}}b^1(x)}{2} y^{n_1},$$ for all $y\in (0,c)$. So for $y\in (0,c),$ $$ \sum_{i=1}^K\sup_{x\in\overline{\Omega}}b^i(x)y^{n_i}\le \frac{\sup_{x\in\overline{\Omega}}b^1(x)}{2} y^{n_1} + C_1\Lambda =f(y).
$$ Then if $\alpha' \in (0,c)$, $\alpha' \ge \underline{\gamma}_f$. Given that $f(y)$ is a monotone increasing function on $\mathbb{R}_+$, $\underline{\gamma}_f$ is the unique positive root of $f(y)$. Thus, $$\underline{\gamma}_f = \left(\frac{-\sup_{x\in\ol{\Omega}}b^1(x)}{2C_1\Lambda}\right)^{\frac{1}{-n_1}},$$ which implies that if $\alpha' \in (0,c)$, $$\alpha' \ge \left(\frac{-\sup_{x\in\ol{\Omega}}b^1(x)}{2C_1\Lambda}\right)^{\frac{1}{-n_1}}.$$ Similarly, for a fixed $\epsilon \in (0,1)$, define $$ d_{\epsilon}=\left(\frac{-\sup_{x\in \ol{\Omega}}( b_{\epsilon}^1(x))}{2(C_2-1)\Lambda_{\epsilon}}\right)^{\frac{1}{n_{i_2}-n_1}}, $$ if $C_2 > 1 $ and let $d_{\epsilon} = 1$ if $C_2 =1$. Let $c_{\epsilon} = \min\{1,d_{\epsilon}\}$. Then for $y \in (0,c_{\epsilon})$, we have that $$(C_2-1)\Lambda_{\epsilon} y^{n_{i_2}}\le -\frac{\sup_{x\in\overline{\Omega}}b_{\epsilon}^1(x)}{2} y^{n_1}.$$ So the above arguments imply that if $\alpha_{\epsilon}' \in (0,c_{\epsilon})$, then $\alpha_{\epsilon}' \ge \underline{\gamma}_{f,\epsilon}$ and $$\alpha'_{\epsilon} \ge \left(\frac{-\sup_{x\in \ol{\Omega}}b^1_{\epsilon}(x)}{2C_{1}\Lambda_{\epsilon}}\right)^{\frac{1}{-n_1}}.$$ Given the assumptions on $b_{\epsilon}^1(x)$ and $\Lambda_{\epsilon}$ in~\eqref{eq1july2}, in this case we have that $\alpha'_{\epsilon} \ge C\epsilon^a$ for some constant $C>0$, $a\in \mathbb{R}$ and $\epsilon$ sufficiently small. Now we must show that $c_{\epsilon} \ge C\epsilon^a$ for some constant $C>0$, $a\in \mathbb{R}$ and $\epsilon$ sufficiently small in the event that $\alpha'_{\epsilon} \notin (0,c_{\epsilon})$. It suffices to show that $d_{\epsilon} \ge C\epsilon^a$ in the event that $C_2 > 1$. But clearly, for $\epsilon$ sufficiently small $$ d_{\epsilon} = \left(-\frac{\sup_{x\in \ol{\Omega}}b^1_{\epsilon}(x)}{2(C_2-1)\Lambda_{\epsilon}}\right)^{\frac{1}{n_{i_2}-n_1}} \ge C\epsilon^{a}, $$ given the assumptions on $b^1_{\epsilon}$ and $\Lambda_{\epsilon}$ in~\eqref{eq1july2}.
Therefore $\alpha'_{\epsilon} \ge D_1\epsilon^a$ for some constant $D_1>0$, $a \in \mathbb{R}$ and $\epsilon$ sufficiently small. Now we determine bounds on the net $(\beta'_{\epsilon})$. Again, we temporarily drop the $\epsilon$ and only consider $\beta'$. Recall that $$\beta' = \inf_{c\in\mathbb{R}} \left\{\sum_{i=1}^K\inf_{x\in\ol{\Omega}}b^i(x)y^{n_i}> 0 \hspace{3mm}\forall y\in (c,\infty)\right\}.$$ For a given function $f(y)$, define $$ \overline{\gamma}_f=\inf_{c\in\mathbb{R}} \left\{f(y) \ge 0 \hspace{3mm}\forall y\in (c,\infty)\right\}. $$ Then if $f(y) \le \sum_{i=1}^K\inf_{x\in\ol{\Omega}}b^i(x)y^{n_i}$ on some interval $(c,\infty)$ and $\beta' \in (c,\infty)$, it must hold that $\overline{\gamma}_f \ge \beta'$ if $\ol{\gamma}_{f}$ is defined. Let $C_1,C_2$ be as before and let ${n_{i_1} = \max\{n_i:0 \le n_i<n_K\}}$ if $C_1 > 1$. If $y>1$, then $$ \sum_{i=1}^K\inf_{x\in\ol{\Omega}}b^i(x)y^{n_i}\ge \inf_{x\in\ol{\Omega}}( b^K(x))y^{n_K}-(C_1-1)\Lambda y^{n_{i_1}}-C_2\Lambda. $$ Now define $$ d = \left(\frac{2(C_1-1)\Lambda}{\inf_{x\in\ol{\Omega}} (b^K(x))}\right)^{\frac{1}{n_K-n_{i_1}}} $$ if $C_1 > 1$ and let $d = 1$ if $C_1 =1$. Let $c = \max\{1,d\}$. Then our choice of $d$ ensures that if $C_1>1$, then $$ -(C_1-1)\Lambda y^{n_{i_1}} \ge -\frac{\inf_{x\in\ol{\Omega}}(b^K(x))y^{n_K}}{2}, $$ for $y \ge d$, and that for $y \in (c,\infty)$, $$ \sum_{i=1}^K\inf_{x\in\ol{\Omega}}b^i(x)y^{n_i}\ge \frac{\inf_{x\in\ol{\Omega}}( b^K(x))}{2}y^{n_K}-C_2\Lambda =f(y). $$ So if $\beta' \in (c,\infty)$, $\beta' \le \overline{\gamma}_f$, where $\overline{\gamma}_f$ is the unique positive root of $f$ on $\mathbb{R}_+$ given that $f$ is monotone increasing on this interval. So if $\beta' \in (c, \infty)$, $$ \beta' \le \overline{\gamma}_f= \left(\frac{2C_2\Lambda}{\inf_{x\in\ol{\Omega}}( b^K(x))}\right)^{\frac{1}{n_K}}.
$$ By defining \begin{align} d_{\epsilon} = \left(\frac{2(C_1-1)\Lambda_{\epsilon}}{\inf_{x\in\ol{\Omega}} b_{\epsilon}^K(x)}\right)^{\frac{1}{n_K-n_{i_1}}}, \quad \text{and} \quad c_{\epsilon} = \max\{1,d_{\epsilon}\}, \end{align} and applying the above argument for $\beta'$ to the net $(\beta'_{\epsilon})$ for each fixed $\epsilon$, it is clear that if $\beta_{\epsilon}' \in (c_{\epsilon},\infty)$, then $$ \beta'_{\epsilon} \le \left(\frac{2C_2\Lambda_{\epsilon}}{\inf_{x\in\ol{\Omega}} b_{\epsilon}^K(x)}\right)^{\frac{1}{n_K}} \le C\epsilon^a, $$ given the assumptions on $b^K_{\epsilon}$ and $\Lambda_{\epsilon}$ in~\eqref{eq1july2}. Now assume that $\beta'_{\epsilon} \notin (c_{\epsilon},\infty)$. Then it suffices to show that if $C_1>1$, then $d_{\epsilon} \le C\epsilon^a$ for $\epsilon$ sufficiently small, some constant $C>0$ and $a \in \mathbb{R}$. But again, this is clearly true given the assumptions~\eqref{eq1july2} and the fact that $$ d_{\epsilon} = \left(\frac{2(C_1-1)\Lambda_{\epsilon}}{\inf_{x\in\ol{\Omega}} b_{\epsilon}^K(x)}\right)^{\frac{1}{n_K-n_{i_1}}}. $$ Therefore $\beta'_{\epsilon} \le D_2\epsilon^{b_2}$ for some constant $D_2>0$, $b_2 \in \mathbb{R}$ and $\epsilon$ sufficiently small, which completes the proof. \end{proof} \section{Proof of the Main Results} \label{results} We now prove Theorem~\ref{thm1june27} using the results from Section~\ref{bounds1}. For clarity, we break the proof up into the steps outlined in Section~\ref{over}. \subsection{Proof of Theorem~\ref{thm1june27}}\label{proofofthm} \begin{proof} \begin{itemize} \item[Step 1:] { \it Formulation of the problem.} For convenience, we restate the problem and the formulation that we will use to find a solution. Given an operator $A \in \mathcal{A}_0$, defined by~\eqref{eq1june26}, we want to solve the following Dirichlet problem in $\mathcal{G}(\overline{\Omega})$: \begin{align} \label{eq1july4} Au &= 0 \hspace{3mm} \text{in $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$} \nonumber. \end{align} We phrase~\eqref{eq1july4} in a way that allows us to solve a net of semilinear elliptic problems.
We assume that the coefficients of $A$ and boundary data $\rho$ have representatives $(a^{ij}_{\epsilon}), (b^i_{\epsilon}),$ and $(\rho_{\epsilon})$ in $\mathcal{E}_M(\overline{\Omega})$ satisfying the assumptions~\eqref{eq1june27}. Then for this particular choice of representatives, our strategy for solving~\eqref{eq1july4} is to solve the family of problems \begin{align} \label{eq2july4} A_{\epsilon}u_{\epsilon} = -\sum_{i,j=1}^N D_i( a_{\epsilon}^{ij}D_j u_{\epsilon}) + \sum_{i=1}^K b^i_{\epsilon}u_{\epsilon}^{n_i} &= 0 \text{ in $\Omega$},\\ u_{\epsilon}&= \rho_{\epsilon} \quad \text{on $\partial\Omega$} \nonumber, \end{align} and then show that the net of solutions $(u_{\epsilon}) \in \mathcal{E}_M(\ol{\Omega})$. \item[Step 2:] {\it Determine $L^{\infty}$-estimates and a net of sub-solutions and super-solutions}. In Section~\ref{bounds1}, we concluded that for each $\epsilon$, the pair $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ determine sub- and super-solutions to~\eqref{eq2july4} such that $0<\alpha_{\epsilon}<\beta_{\epsilon}$. Furthermore, in Lemma~\ref{lem1july4} we concluded that there exist $C_1,C_2>0$ and $a_1,a_2 \in \mathbb{R}$ such that for $\epsilon$ sufficiently small, the nets $(\alpha_{\epsilon})$ and $(\beta_{\epsilon})$ satisfy $C_1\epsilon^{a_1}\le \alpha_{\epsilon} < \beta_{\epsilon} \le C_2\epsilon^{a_2}$, thereby verifying that $(\alpha_{\epsilon}), (\beta_{\epsilon}), (\frac{1}{\alpha_{\epsilon}}), (\frac{1}{\beta_{\epsilon}}) \in \overline{\mathbb{C}}$, the ring of generalized constants. \item[Step 3:]{\it Apply fixed-point theorem to solve each semilinear problem in~\eqref{11feb15eq1}.} This follows from Proposition~\ref{prop1:15nov11}. We briefly reiterate the proof here. We simply verify the hypotheses of Theorem~\ref{thm2june27}.
For each fixed $\epsilon$ we have sub- and super-solutions $\alpha_{\epsilon}$ and $\beta_{\epsilon}$ satisfying $0<\alpha_{\epsilon}<\beta_{\epsilon}$ and $a^{ij}_{\epsilon}, b^i_{\epsilon}, \rho_{\epsilon} \in C^{\infty}(\overline{\Omega})$ satisfying~\eqref{eq1july2}. Finally, $\Omega$ is of $C^{\infty}$-class and the function $$f(x,y) = -\sum_{i=1}^Kb^i_{\epsilon}(x)y^{n_i} \in C^{\infty}(\overline{\Omega}\times \mathbb{R}^+),$$ so we may apply Theorem~\ref{thm2june27} to conclude that there exists a net of solutions $(u_{\epsilon})$ to~\eqref{eq1july1} satisfying $0<\alpha_{\epsilon} \le u_{\epsilon} \le \beta_{\epsilon}$. \item[Step 4:]{\it Verify that the net of solutions $(u_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega}).$} Now that it is clear that a solution exists for~\eqref{eq1july1} for each $\epsilon \in (0,1]$, it is necessary to establish estimates that show that the net of solutions $(u_{\epsilon})$ is in $\mathcal{E}_M(\overline{\Omega})$. That is, we want to show that for each $k \in \mathbb{N}$ and all multi-indices $|\beta | \le k$, there exists $a \in \mathbb{R}$ such that $$ \sup_{x \in \overline{\Omega}}\{|D^{\beta}u_{\epsilon}(x)|\} = \mathcal{O}(\epsilon^a). $$ By standard interpolation inequalities, it suffices to show that for $\gamma \in (0,1)$ and each $k \in \mathbb{N}$, there exists an $a \in \mathbb{R}$ such that $$ |u_{\epsilon}|_{k,\gamma;\Omega} = \mathcal{O}(\epsilon^a). $$ By Theorem~\ref{thm1june30}, we have that if $u_{\epsilon}$ is a solution to~\eqref{eq1july1} with coefficients satisfying~\eqref{eq1july2}, then \begin{align} |u_{\epsilon}|_{2,\gamma;\Omega}\le C\left(\frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}}\right)^3 (|u_{\epsilon}|_{0;\Omega}+|\rho_{\epsilon}|_{2,\gamma;\Omega}+\sum_{i=1}^K |b^i_{\epsilon}(u_{\epsilon})^{n_i}|_{0,\gamma;\Omega}). 
\end{align} Observe that \begin{align} \label{eq1july5} |u_{\epsilon}^{n_i}|_{0,\gamma;\Omega} \le |u^{n_i}_{\epsilon}|_{0;\Omega} + n_i[u_{\epsilon}]_{0,\gamma;\Omega}|u_{\epsilon}|^{n_i-1}_{0;\Omega} \end{align} if $n_i>0$ and \begin{align} \label{eq2july5} |u_{\epsilon}^{n_i}|_{0,\gamma;\Omega} \le |u^{n_i}_{\epsilon}|_{0;\Omega} + \frac{1}{(\inf_{\Omega} u_{\epsilon}^{-n_i})^2}(-n_i)[u_{\epsilon}]_{0,\gamma;\Omega}|u_{\epsilon}|^{-n_i-1}_{0;\Omega}, \end{align} if $n_i<0$. The above inequalities imply that \begin{align} |u_{\epsilon}|_{2,\gamma;\Omega}&\le C\left(\frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}}\right)^3(|u_{\epsilon}|_{0;\Omega}+|\rho_{\epsilon}|_{2,\gamma;\Omega} \\ & \qquad +\sum_{i=1}^K |b^i_{\epsilon}(x)|_{0,\gamma;\Omega}(C_1(n_i,\alpha_{\epsilon},\beta_{\epsilon})+C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon})|u_{\epsilon}|_{0,\gamma;\Omega})), \nonumber \end{align} where $$C_1(n_i,\alpha_{\epsilon},\beta_{\epsilon}) = \beta_{\epsilon}^{n_i} \hspace{2mm} \text{ and} \hspace{2mm} C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon}) = n_i\beta_{\epsilon}^{n_i-1}, \hspace{2mm} \text{ if} \hspace{2mm} n_i>0 \hspace{2mm} \text{ and}$$ $$C_1(n_i,\alpha_{\epsilon},\beta_{\epsilon}) = \alpha_{\epsilon}^{n_i} \hspace{2mm} \text{ and} \hspace{2mm} C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon})=\frac{(-n_i)\beta_{\epsilon}^{-n_i-1}}{\alpha_{\epsilon}^{-2n_i}} \hspace{2mm} \text{ if} \hspace{2mm} n_i<0.$$ Application of the interpolation inequality $$|u_{\epsilon}|_{0,\gamma} \le C(\delta_{\epsilon}^{-1}|u_{\epsilon}|_0 + \delta_{\epsilon}|u_{\epsilon}|_{2,\gamma}),$$ where $\delta_{\epsilon}$ is arbitrarily small and $C$ is independent of $\delta_{\epsilon}$, implies that \begin{align} |u_{\epsilon}|_{2,\gamma;\Omega} & \le C\left(\frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}}\right)^3(|u_{\epsilon}|_{0;\Omega} +|\rho_{\epsilon}|_{2,\gamma;\Omega} \\ &\qquad +\sum_{i=1}^K |b^i_{\epsilon}(x)|_{0,\gamma;\Omega}(C_1(n_i,\alpha_{\epsilon},\beta_{\epsilon}) \nonumber \\ &
\qquad +C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon})(C(\delta_{\epsilon}^{-1}|u_{\epsilon}|_{0;\Omega}+\delta_{\epsilon}|u_{\epsilon}|_{2,\gamma;\Omega}) ))). \nonumber \end{align} Therefore, \begin{align} &\left(1-\delta_{\epsilon}\left( \frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}} \right) \sum_{i=1}^K |b^i_{\epsilon}(x)|_{0,\gamma;\Omega}C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon})\right)|u_{\epsilon}|_{2,\gamma;\Omega} \\ & \qquad \quad \le C\left(\frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}}\right)^3(|u_{\epsilon}|_{0;\Omega}+|\rho_{\epsilon}|_{2,\gamma;\Omega} \nonumber \\ & \qquad \quad \quad +\sum_{i=1}^K |b^i_{\epsilon}(x)|_{0,\gamma;\Omega}( C_1(n_i,\alpha_{\epsilon},\beta_{\epsilon})+C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon})\delta_{\epsilon}^{-1}|u_{\epsilon}|_{0;\Omega})). \nonumber \end{align} But given the assumptions on $\Lambda_{\epsilon}$, $\lambda_{\epsilon}$, the bounds previously established for the nets $(\alpha_{\epsilon}) \text{ and } (\beta_{\epsilon})$ in Lemma~\ref{lem1july4}, and given that $(b^i_{\epsilon}(x)) \in \mathcal{E}_M(\overline{\Omega})$, there exists $\epsilon_0 \in (0,1)$, $a \in \mathbb{R}$ and $C>0$ such that for all $\epsilon \in (0,\epsilon_0)$, $$ \left( \frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}} \right) \sum_{i=1}^K |b^i_{\epsilon}(x)|_{0,\gamma}C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon}) \le C\epsilon^a . $$ Therefore, choosing $$ \delta_{\epsilon} = \frac{1}{2C\epsilon^a}, $$ it is clear that for $\epsilon \in (0,\epsilon_0)$, \begin{align} |u_{\epsilon}|_{2,\gamma;\Omega} &\le C\left(\frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}}\right)^3(|u_{\epsilon}|_{0;\Omega}+|\rho_{\epsilon}|_{2,\gamma;\Omega} \\ & \qquad + \sum_{i=1}^K |b^i_{\epsilon}(x)|_{0,\gamma;\Omega}( C_1(n_i,\alpha_{\epsilon},\beta_{\epsilon})+C_2(n_i,\alpha_{\epsilon},\beta_{\epsilon},\epsilon^a)|u_{\epsilon}|_{0;\Omega})). 
\nonumber \end{align} Given that $(\alpha_{\epsilon})$, $(\beta_{\epsilon}) \in \ol{\mathbb{C}}$, $\alpha_{\epsilon}\le u_{\epsilon}\le \beta_{\epsilon}$ and $(\rho_{\epsilon})$, $ (b^i_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$, the above inequality implies that for some $a\in \mathbb{R}$, $$ |u_{\epsilon}|_{2,\gamma;\Omega} = \mathcal{O}(\epsilon^a). $$ Now we need to utilize the $\epsilon$-growth conditions on $|u_{\epsilon}|_{2,\gamma;\Omega}$ and induction to show that for any $k >2$ that \begin{align} \label{eq2july4-B} |u_{\epsilon}|_{k,\gamma;\Omega}=\mathcal{O}(\epsilon^a) \quad \text{for some $a\in \mathbb{R}$}. \end{align} Let $(u_{\epsilon})$ be a smooth net of solutions to~\eqref{eq2july4} and additionally assume that \eqref{eq2july4-B} holds for all $j\le k$. Let $\nu$ be a multi-index of length $k-1$. Then by differentiating both sides of~\eqref{eq2july4}, we see that for each $\epsilon$, $u_{\epsilon}$ satisfies the Dirichlet problem \begin{align} \label{eq3uly4} \sum_{i,j=1}^N D^{\nu}( -D_i( a_{\epsilon}^{ij}D_j u_{\epsilon})) &= -\sum_{i=1}^K D^{\nu}( b^i_{\epsilon}u_{\epsilon}^{n_i}) \text{ in $\Omega$} \\ D^{\nu}u_{\epsilon} &= D^{\nu}\rho_{\epsilon} \quad \text{on $\partial\Omega$} . \nonumber \end{align} Rearranging the above equation and applying the multi-index product rule we find that \begin{align} \label{eq4july4} \sum_{i,j=1}^N a^{ij}_{\epsilon}D_{ij}(D^{\nu}u_{\epsilon}) &= -\sum_{i,j=1}^N D^{\nu}((D_ia_{\epsilon}^{ij})(D_ju_{\epsilon})) \\ & \qquad -\sum_{i,j=1}^N\sum_{\stackrel{\sigma+\mu = \nu}{\sigma \ne \nu}}\frac{\nu!}{\sigma!\mu!}(D^{\mu}a_{\epsilon}^{ij})(D^{\sigma}D_{ij}u_{\epsilon}) \nonumber \\ & \qquad +\sum_{i=1}^K\sum_{\sigma+\mu = \nu}\frac{\nu!}{\sigma!\mu!}(D^{\mu}b^i_{\epsilon})(D^{\sigma}((u_{\epsilon})^{n_i})). 
\nonumber \end{align} Therefore, we may apply Theorem~\ref{thm1june30} to~\eqref{eq4july4} to conclude that for an arbitrary multi-index $\nu$ such that $|\nu| = k-1$, \begin{align} |D^{\nu}u_{\epsilon}|_{2,\gamma;\Omega} &\le C\left(\frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}}\right)^3 \left(|D^{\nu}u_{\epsilon}|_{0;\Omega} + |D^{\nu}\rho_{\epsilon}|_{2,\gamma;\Omega} \right. \\ & \qquad + |\sum_{i,j=1}^N D^{\nu}((D_ia_{\epsilon}^{ij})(D_ju_{\epsilon}))|_{0,\gamma;\Omega} \nonumber \\ & \qquad +\sum_{i,j=1}^N \sum_{\stackrel{\sigma+\mu = \nu}{\sigma \ne \nu}}\frac{\nu!}{\sigma!\mu!}|D^{\mu}a_{\epsilon}^{ij}|_{0,\gamma;\Omega}|D^{\sigma}D_{ij}u_{\epsilon}|_{0,\gamma;\Omega} \nonumber \\ & \qquad +\sum_{i=1}^K\sum_{\sigma+\mu = \nu}\frac{\nu!}{\sigma!\mu!}|D^{\mu}b^i_{\epsilon}|_{0,\gamma;\Omega}|D^{\sigma}((u_{\epsilon})^{n_i})|_{0,\gamma;\Omega} ) \nonumber \\ & \le C\left(\frac{\Lambda_{\epsilon}}{\lambda_{\epsilon}}\right)^3(|D^{\nu}u_{\epsilon}|_{0;\Omega} + |D^{\nu}\rho_{\epsilon}|_{2,\gamma;\Omega} \nonumber \\ & \qquad +\sum_{i,j=1}^N \sum_{\sigma+\mu = \nu}\frac{\nu!}{\sigma!\mu!}|D^{\mu}(D_ia_{\epsilon}^{ij})|_{0,\gamma;\Omega}|D^{\sigma}(D_ju_{\epsilon})|_{0,\gamma;\Omega} \nonumber \\ & \qquad +\sum_{i,j=1}^N \sum_{\stackrel{\sigma+\mu = \nu}{\sigma \ne \nu}}\frac{\nu!}{\sigma!\mu!}|D^{\mu}a_{\epsilon}^{ij}|_{0,\gamma;\Omega}|D^{\sigma}D_{ij}u_{\epsilon}|_{0,\gamma;\Omega} \nonumber \\ & \qquad +\sum_{i=1}^K \sum_{\sigma+\mu = \nu}\frac{\nu!}{\sigma!\mu!}|D^{\mu}b^i_{\epsilon}|_{0,\gamma;\Omega}|D^{\sigma}((u_{\epsilon})^{n_i})|_{0,\gamma;\Omega} ). \nonumber \end{align} By our inductive hypothesis and the assumptions on the coefficients, it is immediate that every term in the above expression is $\mathcal{O}(\epsilon^a)$ for some $a \in \mathbb{R}$ except for the last term. 
So to show $$ |D^{\nu}u_{\epsilon}|_{2,\gamma;\Omega} = \mathcal{O}(\epsilon^a) \quad \text{for some $a \in \mathbb{R}$}, $$ it suffices to show that $$ \sum_{i=1}^K \sum_{\sigma+\mu = \nu}\frac{\nu!}{\sigma!\mu!}|D^{\mu}b^i_{\epsilon}|_{0,\gamma;\Omega}|D^{\sigma}((u_{\epsilon})^{n_i})|_{0,\gamma;\Omega} = \mathcal{O}(\epsilon^a) \quad \text{for some $a \in \mathbb{R}$}. $$ Given that $b^i_{\epsilon} \in \mathcal{E}_M(\ol{\Omega})$ for each $1 \le i \le K$, $$ |D^{\mu}b^i_{\epsilon}|_{0,\gamma;\Omega} = \mathcal{O}(\epsilon^a) \quad \text{for some $a\in \mathbb{R}$}. $$ Therefore, it is really only necessary to show that for any multi-index $\sigma$, such that $|\sigma| = j \le k-1$, that there exists an $a \in \mathbb{R}$ such that $$ |D^{\sigma}((u_{\epsilon})^{n_i})|_{0,\gamma;\Omega} = \mathcal{O}(\epsilon^a). $$ But observe that $D^{\sigma}((u_{\epsilon})^{n_i})$ is a sum of terms of the form $$ (u_{\epsilon})^{n_i-m}D^{\sigma_1}u_{\epsilon}D^{\sigma_2}u_{\epsilon}\cdots D^{\sigma_m}u_{\epsilon}, $$ where $\sigma_1+\sigma_2 +\cdots \sigma_m = \sigma$ and $m \le j \le k-1$. This follows immediately from the chain rule. Therefore we have the following bound: \begin{align} |D^{\sigma}((u_{\epsilon})^{n_i})|_{0,\gamma;\Omega} & \le (n_i)|(u_{\epsilon})^{n_i-1}|_{0,\gamma;\Omega}|D^{\sigma}u_{\epsilon}|_{0,\gamma;\Omega} \\ & \qquad +\sum_{\sigma_1+\sigma_2=\sigma}\frac{\sigma!}{\sigma_1!\sigma_2!}(n_i)(n_i-1)|(u_{\epsilon})^{n_i-2}|_{0,\gamma;\Omega} \nonumber \\ & \qquad \qquad \cdot |D^{\sigma_1}u_{\epsilon}|_{0,\gamma;\Omega}|D^{\sigma_2}u_{\epsilon}|_{0,\gamma;\Omega}+ \cdots \nonumber \\ & \qquad +\sum_{\sigma_1+\sigma_2+\cdots+\sigma_j = \sigma}\frac{\sigma!}{\sigma_1!\sigma_2!\cdots \sigma_j!}(n_i)(n_i-1) \nonumber \\ & \qquad \qquad \cdots(n_i-j) |(u_{\epsilon})^{n_i-j}|_{0,\gamma;\Omega} |D^{\sigma_1}u_{\epsilon}|_{0,\gamma;\Omega} \nonumber \\ & \qquad \qquad \cdots |D^{\sigma_j}u_{\epsilon}|_{0,\gamma;\Omega}. 
\nonumber \end{align} Using~\eqref{eq1july5} and~\eqref{eq2july5}, for each $m \le j$ we may bound the terms of the form $|(u_{\epsilon})^{n_i-m}|_{0,\gamma;\Omega}$ using $|u_{\epsilon}|_{0, \gamma;\Omega}$, $\alpha'_{\epsilon}$ and $\beta'_{\epsilon}$. Then our inductive hypothesis and the growth conditions on $(\alpha'_{\epsilon})$ and $(\beta'_{\epsilon})$ imply that $$ |D^{\sigma}((u_{\epsilon})^{n_i})|_{0,\gamma;\Omega} = \mathcal{O}(\epsilon^a) \quad \text{for some $a \in \mathbb{R}$}. $$ This implies that $$ |D^{\nu}u_{\epsilon}|_{2,\gamma;\Omega} = \mathcal{O}(\epsilon^a) \quad \text{ for some $a \in \mathbb{R}$}. $$ As $\nu$ was an arbitrary multi-index such that $|\nu| = k-1$, this implies there exists $a\in \mathbb{R}$ such that $$ |u_{\epsilon}|_{k+1,\gamma;\Omega} = \mathcal{O}(\epsilon^a). $$ Therefore, $(u_{\epsilon}) \in \mathcal{E}_M(\overline{\Omega})$. \item[Step 5:]{\it Verify that the solution is well-defined}. Proposition~\ref{prop1july1} and the definition of the Dirichlet problem in $\mathcal{G}(\overline{\Omega})$ given in Section~\ref{Dirichlet} imply that $[(u_{\epsilon})]$ is indeed a solution to the problem \begin{align} \label{welldefined} Au &= 0 \hspace{3mm} \text{in $\Omega$},\\ u &= \rho \quad \text{on $\partial\Omega$} , \nonumber \end{align} in $\mathcal{G}(\overline{\Omega})$. To see this, we consider other representatives $(\overline{a}_{\epsilon}^{ij}), (\overline{b}^i_{\epsilon}), (\overline{\rho}_{\epsilon})$, and $(\overline{u}_{\epsilon})$ of $[(a^{ij}_{\epsilon})], [(b^i_{\epsilon})], [(\rho_{\epsilon})]$, and $[(u_{\epsilon})]$.
Then the proof of Proposition~\ref{prop1july1} clearly implies that \begin{align} -\sum_{i,j=1}^N D_i(\overline{a}_{\epsilon}^{ij}D_j\overline{u}_{\epsilon}) &+ \sum_{i=1}^K \overline{b}^i_{\epsilon}(\overline{u}_{\epsilon})^{n_i} = \eta_{\epsilon} \hspace{3mm} \text{in $\Omega$},\\ \overline{u}_{\epsilon} &= \overline{\rho}_{\epsilon}+\overline{\eta}_{\epsilon} \quad \text{on $\partial\Omega$} ,\nonumber \end{align} where $\eta_{\epsilon} \in \mathcal{N}(\overline{\Omega})$ and $\overline{\eta}_{\epsilon}$ is a net of functions satisfying~\eqref{boundary}. But this implies that this choice of representatives also satisfies~\eqref{welldefined} in $\mathcal{G}(\overline{\Omega})$, so our solution $[(u_{\epsilon})]$ is independent of the representatives used. \end{itemize} \end{proof} This completes our proof of Theorem~\ref{thm1june27}. We now conclude by giving a brief summary and making some final remarks. \section{Summary and Remarks} \label{sec:conc} We began the paper with an example to motivate the Colombeau algebra method for solving the target semilinear problem \eqref{problem} with potentially distributional data. In particular, in Section~\ref{Example1} we proved the existence of a solution to a simpler ill-posed critical exponent problem~\eqref{eq3:25oct11} in Proposition~\ref{prop1:8nov11}. Our proof technique consisted of mollifying the data of the original problem, and then solving a sequence of ``approximate'' problems with the smooth coefficients. We then obtained a sequence of solutions that yielded a convergent subsequence. This proof framework, which required only basic elliptic PDE theory, was modeled on the more general Colombeau approach that we subsequently developed and applied in the remainder of the paper to solve the more difficult problem~\eqref{problem}.
Following the approach of Mitrovic and Pilipovic in~\cite{MP06}, in Section~\ref{prelim} we stated a number of preliminary results and developed necessary technical tools for solving~\eqref{problem}. Among these tools and results were the explicit {\em a priori} estimates found in~\cite{MP06}, and a description of the Colombeau framework in which the coefficients and data were embedded. In particular, in Section~\ref{holder} we introduced notation for H\"{o}lder norms and stated two {\em a priori} estimates from~\cite{GiTr77} that were made more precise by Mitrovic and Pilipovic in~\cite{MP06}. In Section~\ref{Colombeau}, we then introduced the general framework for constructing Colombeau-type algebras and the Colombeau algebra $\mathcal{G}(\overline{\Omega})$ used in this paper. We then stated the main result in Section~\ref{overview}, namely Theorem~\ref{thm1june27}, and also gave a statement and proof of the method of sub- and super-solutions as Theorem~\ref{thm2june27}. We then gave a detailed outline of the plan of the proof of Theorem~\ref{thm1june27}, the execution of which was the focus of the remainder of the paper. In Section~\ref{embed2} we also discussed methods to embed \eqref{problem} into the algebra for applying our Colombeau existence theory. The remainder of the paper was then dedicated to developing the remaining tools necessary to prove Theorem~\ref{thm1june27}, and then carrying out the proof. In Section~\ref{bounds1} we determined {\em a priori} $L^{\infty}$ bounds of solutions to our semilinear problem and a net of sub- and super-solutions satisfying explicit $\epsilon$-growth estimates. We first determined a net of $L^{\infty}$ bounds for positive solutions to our problem. In Section~\ref{supersolution} we then showed that this net of $L^{\infty}$ bounds is in fact a net of sub- and super-solutions contained in $\overline{\mathbb{C}}$, the ring of generalized constants described in Section~\ref{Colombeau}.
Finally, after developing sub- and super-solutions and some related results in Section~\ref{bounds1}, we proved the main result, Theorem~\ref{thm1june27}, in Section~\ref{results}, following the plan we had laid out in Section~\ref{overview}. We note that although we set up the problem in a manner similar to that used by Mitrovic and Pilipovic in~\cite{MP06}, our approach to solving our semilinear problem was distinct from theirs; we first determined a net of solutions $(u_{\epsilon})$ to the family of semilinear problems~\eqref{eq2july4} by using the method of sub- and super-solutions (Theorem~\ref{thm2june27}), and our net of sub- and super-solutions determined in Section~\ref{supersolution}. Once our net of solutions was determined, we then employed Theorem~\ref{thm1june30} and our net of sub- and super-solutions to show that our net of solutions was contained in $\mathcal{E}_M(\overline{\Omega})$. In this article we have attempted to develop some basic tools to allow for a more general study of the Einstein constraint equations with distributional data. Our goal was to extend the current solution theory for scalar, critical exponent semilinear problems such as the Lichnerowicz equation, allowing for more irregular data than is currently covered by the existing solution theories (cf.~\cite{HNT08,HNT09} for a summary of the known results for the CMC, near-CMC, and Far-CMC cases through 2009). As a next step, we hope to use the tools developed in this article to extend the near-CMC and Far-CMC existence framework for rough metrics developed in \cite{HNT08,DMa05,DMa06,CB04} to cover the rough data example studied by Maxwell in \cite{DMa11}. \section*{Acknowledgments} \label{sec:ack} MH was supported in part by NSF Awards~1217175 and 1065972. CM was supported in part by NSF Award~1065972.
\section{Introduction} The indications for neutrino oscillations from the atmospheric and solar neutrino anomalies and from the LSND experiment are now so overwhelming that the discourse in neutrino physics has changed. One no longer asks if these particles indeed oscillate; rather, one debates the most plausible pattern of masses and mixing angles, and whether the existence of a sterile neutrino is required. Of course, all of the indications for oscillations are to various degrees preliminary, yet so intriguing that it is difficult to resist their charm. It is a truism that astrophysics and cosmology play a unique role in neutrino physics, and conversely, that these light, weakly interacting particles are absolutely crucial for some of the most interesting astrophysical phenomena such as core-collapse supernovae and for the universe at large. Therefore, in this brief survey of current topics in neutrino astrophysics it behoves us to discuss what astrophysics and cosmology contribute to the current debate in neutrino physics and what the future perspectives are. To this end we begin in Sec.~\ref{sec:neutrinooscillations} with an overview of the current indications for neutrino oscillations and possible global interpretations. Astrophysical neutrinos, i.e.\ those from the Sun and from cosmic-ray interactions in the upper atmosphere, play a dominant role in this context. In Sec.~\ref{sec:cosmology} we next turn to the cosmological arguments relevant to neutrino physics (dark matter, structure formation, cosmic microwave background, big-bang nucleosynthesis). Supernova (SN) neutrinos are the topic of Sec.~\ref{sec:supernova} where we discuss the role of neutrino masses and oscillations in this environment, the interpretation of the SN~1987A neutrino burst, and what one could learn from a future galactic SN.
The recent developments in high-energy neutrino astronomy are touched upon in Sec.~\ref{sec:neutrinoastronomy}, while in Sec.~\ref{sec:electromagneticproperties} astrophysical aspects of neutrino electromagnetic properties are briefly discussed in the light of the current evidence for oscillations. Finally, in Sec.~\ref{sec:conclusions} we summarize our conclusions. \section{Evidence for Neutrino Oscillations} \label{sec:neutrinooscillations} \subsection{Atmospheric Neutrinos} The current evidence for neutrino oscillations arises from the atmospheric neutrino anomaly, the solar neutrino problem, and the LSND experiment. It is probably fair to say that at present the most convincing indication comes from atmospheric neutrinos. We thus begin our short survey with this spectacular case that has changed the perception of this field. The Earth is immersed in a diffuse flux of high-energy cosmic rays consisting of protons and nuclei. The upper atmosphere acts as a ``beam dump'' where these particles quickly lose their energy by the production of secondary pions (and some kaons) which subsequently decay according to the simple scheme \begin{eqnarray}\label{eq:beamdump} \pi^+\to\mu^++\nu_\mu,\qquad \mu^+\to e^++\nu_e+\bar\nu_\mu, \nonumber\\ \pi^-\to\mu^-+\bar\nu_\mu,\qquad \mu^-\to e^-+\bar\nu_e+\nu_\mu. \end{eqnarray} The expected unequal flavor distribution $\nu_e:\nu_\mu:\nu_\tau\approx 1:2:0$ allows one to use the atmospheric neutrino flux to search for flavor oscillations. Of course, at energies beyond a few GeV the muons do not all decay before hitting the Earth so that the $\nu_\mu/\nu_e$ flavor ratio increases with energy. Still, while the absolute neutrino flux predictions have large uncertainties, perhaps on the 20\% level, the expected flavor ratio is thought to be nearly model independent and calculable for all relevant energies to within a few percent~\cite{Gaisser96,Honda95,Agrawal96}. 
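The $1:2:0$ flavor bookkeeping implied by Eq.~(\ref{eq:beamdump}) can be tallied mechanically. The following standalone Python sketch (purely illustrative, not part of any flux calculation) simply counts the neutrinos produced per decay chain:

```python
from collections import Counter

# Final-state neutrinos of each pion decay chain in Eq. (1):
#   pi+ -> mu+ nu_mu,   mu+ -> e+ nu_e nubar_mu
#   pi- -> mu- nubar_mu, mu- -> e- nubar_e nu_mu
chains = {
    "pi+": ["nu_mu", "nu_e", "nubar_mu"],
    "pi-": ["nubar_mu", "nubar_e", "nu_mu"],
}

tally = Counter()
for products in chains.values():
    for nu in products:
        tally[nu.split("_")[-1]] += 1  # count nu and nubar together, by flavor

# nu_e : nu_mu : nu_tau = 2 : 4 : 0, i.e. the 1:2:0 ratio quoted in the text
print(tally["e"], tally["mu"], tally.get("tau", 0))  # 2 4 0
```

At energies beyond a few GeV, where not all muons decay in flight, the second and third entries of each chain would be suppressed, which is why the $\nu_\mu/\nu_e$ ratio grows with energy.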
First events from atmospheric neutrinos were measured in two pioneering experiments in the mid-sixties~\cite{Achar65,Reines65}, but it is only since the late eighties that several large underground detectors began to address the question of flavor oscillations in earnest~\cite{Nusex89,Frejus89,Frejus95,Kamiokande88,% Kamiokande94,Kamiokande98,IMB91,IMB92,Soudan99,Macro98,SuperK98a,% SuperK98b}. Around 1988 the Kamiokande water Cherenkov detector revealed a significantly reduced $\nu_\mu/\nu_e$ flavor ratio---the atmospheric neutrino anomaly~\cite{Kamiokande88}. There was no alternative explanation to oscillations, but a ``smoking-gun'' signature became available only with the high counting rates of SuperKamiokande~\cite{SuperK98a,SuperK98b} which has taken data since April~1996. For a given solid angle, the atmospheric neutrino flux from above should be equal to that produced in the atmosphere of the antipodes because the $r^{-2}$ flux dilution with distance cancels a corresponding increase in surface area. However, SuperKamiokande observed a pronounced up-down-asymmetry in the multi-GeV sample (visible energy deposition in the detector exceeding 1.33~GeV). Using the zenith-angle range $-1.0\leq\cos\theta\leq-0.2$ as defining ``up,'' and the corresponding range $0.2\leq\cos\theta\leq1.0$ for ``down,'' the $\nu_e+\bar\nu_e$ flux shows a ratio~\cite{Kajita98} ${\rm up/down}=0.93^{+0.13}_{-0.12}$ while $\nu_\mu+\bar\nu_\mu$ has $0.54^{+0.06}_{-0.05}$. It is this up-down-asymmetry which gives one confidence that there is no simple explanation in terms of the neutrino production process in the atmosphere or the experimental flavor identification. Neutrino oscillations, on the other hand, provide a simple and consistent interpretation. 
In the usual two-flavor formalism with a vacuum mixing angle $\Theta$, the appearance probability for the oscillation from a flavor $\nu$ to $\nu'$ is \begin{equation}\label{eq:oscillation} P(\nu\to\nu')=\sin^22\Theta\,\, \sin^2\left(1.27\,\frac{\Delta m_\nu^2}{\rm eV^2}\, \frac{L}{{\rm km}}\,\frac{{\rm GeV}}{E_\nu}\right). \end{equation} If the $\nu_\mu$'s oscillate into $\nu_\tau$'s with a nearly maximal mixing angle and if $\Delta m_\nu^2$ is of order $10^{-3}~{\rm eV}^2$, one obtains the observed behavior since the relevant energies are a few GeV and $L$ to the other side of the Earth is around $10^4~{\rm km}$. The detailed 90\% CL contours for the allowed range of mixing parameters from different signatures in Kamiokande and SuperKamiokande are summarized in Fig.~\ref{fig:atmo1}. Meanwhile, more data have been taken, shifting the curve~(1) to somewhat larger $\Delta m_\nu^2$ values~\cite{DPF99atmo} with the boundaries shown in Table~\ref{tab:osci}. \begin{figure}[b] \epsfxsize=5.8cm \hbox to\hsize{\hss\epsfbox{fig01.eps}\hss} \caption{Allowed mixing parameters at 90\% CL from atmospheric neutrinos for $\nu_\mu\to\nu_\tau$ oscillations~\protect\cite{Kajita98}. They are based on the contained events in SuperKamiokande~(1) and Kamiokande~(2), the upward through-going muons in SuperKamiokande~(3) and Kamiokande~(4), and the stopping fraction of upward going muons in SuperKamiokande~(5). (Figure reproduced with kind permission of T.~Kajita.) \label{fig:atmo1}} \end{figure} \begin{figure}[ht] \epsfxsize=7cm \hbox to\hsize{\hss\epsfbox{fig02.eps}\hss} \caption{$L/E_\nu$ plot for the fully contained events at SuperKamiokande~\protect\cite{SuperK98b}. The points show the ratio of measured counts over Monte Carlo expectation in the absence of oscillations. The dashed lines show the expectation for $\nu_\mu\to\nu_\tau$ oscillations with $\Delta m_\nu^2=2.2\times10^{-3}~{\rm eV^2}$ and $\sin^22\Theta=1$. (Figure reproduced with kind permission of T.~Kajita.) 
\label{fig:atmo2}} \end{figure} Equation~(\ref{eq:oscillation}) suggests that one should plot the data according to their $L/E_\nu$ as in Fig.~\ref{fig:atmo2}. This representation provides perhaps the most convincing argument for the reality of atmospheric neutrino oscillations. The flat distribution of the $\nu_e$ points excludes $\nu_\mu\to\nu_e$ oscillations as a dominant channel, in agreement with the CHOOZ limits on this mode~\cite{Chooz98}. Therefore, $\nu_\mu\to\nu_\tau$ or oscillations into a sterile channel $\nu_\mu\to\nu_s$ are favored. A calculation of the $\nu_\mu\to\nu_s$ oscillation probability must include the refrac\-tive energy shift in the Earth. Recall that the neutrino weak potential~is \begin{equation} V_{\rm weak}=\pm\,\frac{G_F n_B}{2\sqrt2}\times \cases{-2Y_n+4Y_e&for $\nu_e$,\cr -2Y_n&for $\nu_{\mu,\tau}$,\cr 0&for $\nu_s$,\cr} \end{equation} where the upper sign refers to neutrinos, the lower sign to antineutrinos, $G_F$ is the Fermi constant, $n_B$ the baryon density, $Y_n$ the neutron and $Y_e$ the electron number per baryon (both about 1/2 in normal matter). Numerically we have \begin{equation} \frac{G_F n_B}{2\sqrt2}= 1.9\times10^{-14}~{\rm eV}~\frac{\rho}{\rm g~cm^{-3}}. \end{equation} The dispersion relation is $E_\nu=V_{\rm weak}+ \sqrt{p_\nu^2+m_\nu^2}$ so that $V_{\rm weak}$ should be compared with $m_\nu^2/2p_\nu$. For $\Delta m_\nu^2$ around $10^{-3}~{\rm eV}^2$, $p_\nu$ of a few GeV, and $\rho$ of a few $\rm g~cm^{-3}$, the energy difference between $\nu_\mu$ and $\nu_s$ arising from $V_{\rm weak}$ is about the same as that from $\Delta m_\nu^2/2p_\nu$. The resulting modification of the oscillation pattern can cause rather peculiar zenith-angle distributions~\cite{Liu98a,Liu98b,Lipari98}, but the current data do not allow one to exclude the $\nu_s$ channel. 
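To make the orders of magnitude in the preceding paragraphs concrete, here is a small standalone Python check (illustrative parameter values only, not a fit): it evaluates Eq.~(\ref{eq:oscillation}) for downward- and upward-going atmospheric neutrinos, and compares the refractive shift $V_{\rm weak}$ with the vacuum term $\Delta m_\nu^2/2p_\nu$ using the numerical factor $1.9\times10^{-14}~{\rm eV}\times\rho/({\rm g~cm^{-3}})$ given above.

```python
import math

def p_appearance(sin2_2theta, dm2_ev2, L_km, E_GeV):
    # Two-flavor appearance probability, Eq. (2) of the text
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative atmospheric parameters: maximal mixing, dm2 = 3e-3 eV^2, E = 1 GeV
p_down = p_appearance(1.0, 3e-3, 20.0, 1.0)    # ~20 km overhead: almost no effect
p_up   = p_appearance(1.0, 3e-3, 1.3e4, 1.0)   # through the Earth: O(1) conversion
print(f"P(down) = {p_down:.4f}, P(up) = {p_up:.2f}")

# Matter term vs vacuum term for the nu_mu -> nu_s channel:
dm2_over_2p = 1e-3 / (2 * 5e9)   # eV, for dm2 = 1e-3 eV^2 and p = 5 GeV
v_weak = 1.9e-14 * 5.0           # eV, G_F n_B / (2 sqrt2) at rho = 5 g/cm^3
print(f"dm2/2p = {dm2_over_2p:.1e} eV, V_weak = {v_weak:.1e} eV")  # comparable
```

The last two numbers come out within a factor of order unity of each other, which is exactly the regime in which matter effects distort the $\nu_\mu\to\nu_s$ zenith-angle distribution.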
While the $\nu_\tau$ is quasi-sterile in the detector because of the large mass of the $\tau$-lepton, there is still an important difference to a $\nu_s$ because the $\nu_\tau$ produces pions in neutral-current collisions such as $\nu N\to N \nu \pi^0$ which can be seen by $\pi^0\to2\gamma$. With better statistics and a dedicated analysis one may be able to distinguish the $\nu_\tau$ and $\nu_s$ oscillation channels~\cite{Vissani98,Learned98,Hall98}. The evidence for atmospheric neutrino oscillations is very compelling, yet an independent confirmation is urgently needed. Hopefully it will come from one of the long-baseline experiments where an accelerator neutrino beam is directed toward a distant detector. The most advanced project is the K2K experiment~\cite{K2K} between KEK and Kamioka with a baseline of 250~km. Other projects include detectors in the Soudan mine at a distance of 730~km from Fermilab~\cite{Minos,EmulsionSandwich}, or in the Gran Sasso Laboratory at 732~km from CERN~\cite{Icarus,Noe,Opera}. \begin{figure}[b] \epsfxsize=10cm \hbox to\hsize{\hss\epsfbox{fig03.eps}\hss} \caption{Solar neutrino flux at Earth. Continuum spectra in ${\rm cm^{-2}~s^{-1}~MeV^{-1}}$, line spectra in ${\rm cm^{-2}~s^{-1}}$. Solid lines are the sources of dominant experimental significance. \label{fig:sunspectrum}} \end{figure} \subsection{Solar Neutrinos} The Sun, like other hydrogen-burning stars, liberates nuclear binding energy by the effective fusion reaction $4p+2e^-\to{}^4{\rm He}+2\nu_e+ 26.73~{\rm MeV}$ so that its luminosity implies a $\nu_e$ flux at Earth of $6.6\times10^{10}~{\rm cm^{-2}~s^{-1}}$. In detail, the production of helium involves primarily the pp-chains---the CNO cycle is important in stars more massive than the Sun. 
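The quoted flux of $6.6\times10^{10}~{\rm cm^{-2}~s^{-1}}$ follows from energy bookkeeping alone. A rough standalone estimate (standard constants; it neglects the few percent of the fusion energy carried away by the neutrinos themselves, hence the slightly low result) goes as follows:

```python
import math

L_sun = 3.846e33            # solar luminosity, erg/s
AU    = 1.496e13            # Sun-Earth distance, cm
MeV   = 1.602e-6            # erg
Q     = 26.73 * MeV         # energy release per 4p + 2e- -> He4 + 2 nu_e

fusions_per_s = L_sun / Q           # helium production rate
nu_rate = 2 * fusions_per_s         # two nu_e per helium nucleus
flux = nu_rate / (4 * math.pi * AU**2)
print(f"phi(nu_e) ~ {flux:.1e} /cm^2/s")   # ~6.4e10, near the quoted 6.6e10
```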
The expected solar neutrino flux is shown in Fig.~\ref{fig:sunspectrum}, where solid lines are for the three contributions which are most important for the measurements, \begin{eqnarray} \hbox to 1.7cm{pp:\hfil} &\hbox to5.0cm{$p+p\to{}^2{\rm H}+e^++\nu_e$\hfil} &(E_\nu<0.420~{\rm MeV}), \nonumber\\ \hbox to 1.7cm{Beryllium:\hfil} &\hbox to5.0cm{$e^-+{}^7{\rm Be}\to{}^7{\rm Li}+\nu_e$\hfil} &(E_\nu=0.862~{\rm MeV}), \nonumber\\ \hbox to 1.7cm{Boron:\hfil} &\hbox to5.0cm{$p+{}^7{\rm Be}\to{}^8{\rm B} \to{}^8{\rm Be}^*+e^++\nu_e$\hfil} &(E_\nu\alt15~{\rm MeV}). \end{eqnarray} A crucial feature of these reactions is that the beryllium and boron neutrinos both arise from ${}^7$Be which may either capture a proton or an electron so that their relative fluxes depend on the branching ratio between the two reactions. \begin{figure}[b] \epsfxsize=9cm\hbox to\hsize{\hss\epsfbox{fig04.eps}\hss} \caption{Solar neutrino fluxes measured in five experiments vs.\ theoretical predictions from a standard solar model~\protect\cite{BP98}. (Figure courtesy of J.~Bahcall.) \label{fig:sunproblem}} \end{figure} The solar neutrino flux has been measured in five different experiments with three different spectral response characteristics; the relevant energy range is indicated by the hatched bars above Fig.~\ref{fig:sunspectrum}. The radiochemical gallium experiments GALLEX~\cite{Gallex99a,Gallex99b} and SAGE~\cite{Sage} reach to the lowest energies and pick up fluxes from all source reactions. The Homestake chlorine experiment~\cite{Homestake} picks up beryllium and boron neutrinos, while the Kamiokande~\cite{Kamiokande} and SuperKamiokande~\cite{SuperKsun,DPF99solar} water Cherenkov detectors see only the upper part of the boron flux. All of the experiments see a flux deficit relative to standard-solar model predictions as summarized in Fig.~\ref{fig:sunproblem} and in a recent overview~\cite{Bahcall98a}. 
It has been widely discussed that there is no way to account for the measured fluxes by any apparent astrophysical or nuclear-physics modification of the standard solar models, so that an explanation in terms of neutrino oscillations is difficult to avoid~\cite{Bahcall98a,Castellani97}. Moreover, at something like the 99.8\% CL one cannot account for the measurements by an energy-independent global suppression factor~\cite{Bahcall98a}. Therefore, one cannot appeal to neutrino oscillations with an arbitrary $\Delta m_\nu^2$ and a large mixing angle. One viable possibility is vacuum oscillations with a large mixing angle and a $\Delta m_\nu^2$ around $10^{-10}~{\rm eV}^2$, providing an oscillation length of order the Sun-Earth distance and thus an energy-dependent suppression factor. Second, one can have solutions with $\Delta m_\nu^2$ in the neighborhood of $10^{-5}~{\rm eV}^2$ where the mass difference between the oscillating flavors (for energies of order 1~MeV) can be canceled by the neutrino refractive effect in the Sun, leading to resonant or MSW oscillations~\cite{Mikheyev85,Kuo89}, again with an energy-dependent suppression factor of the $\nu_e$ flux. In this case one may have a nearly maximal mixing angle, or a small one as shown in Table~\ref{tab:osci}. The large-angle MSW region does not provide a credible fit for $\nu_e\to\nu_s$ oscillations, while the other solutions are possible for the $\nu_e\to\nu_{\mu,\tau}$ or $\nu_e\to\nu_s$ channels, of course with somewhat different contours of preferred mixing parameters~\cite{Bahcall98a}. It is noteworthy that the distortion of the recoil-electron spectrum measured at SuperKamiokande seems to single out the vacuum case as the preferred solution~\cite{DPF99solar}, although this must be considered a rather preliminary conclusion at present.
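For orientation, the standard MSW resonance condition $\Delta m_\nu^2\cos2\Theta/2E_\nu=\sqrt2\,G_F n_e$ can be combined with the numerical weak potential quoted earlier to locate the resonance layer inside the Sun. The following standalone sketch uses illustrative small-angle parameters and an assumed core value $Y_e\approx0.7$:

```python
dm2 = 1e-5      # eV^2, small-angle MSW region
E   = 1e6       # eV: a typical ~1 MeV solar neutrino
Y_e = 0.7       # electrons per baryon near the solar centre (assumed)

vac_scale = dm2 / (2 * E)                  # eV; cos(2 theta) ~ 1 for small mixing
# nu_e - nu_mu potential difference: 4 Y_e * (G_F n_B / 2 sqrt2)
#                                  = 4 Y_e * 1.9e-14 eV * rho[g/cm^3]
rho_res = vac_scale / (4 * Y_e * 1.9e-14)  # g/cm^3
print(f"resonance density ~ {rho_res:.0f} g/cm^3")
```

The result, of order $10^2~{\rm g~cm^{-3}}$, lies below the central solar density of roughly $150~{\rm g~cm^{-3}}$, so MeV-range neutrinos indeed cross an MSW resonance on their way out of the Sun for this choice of $\Delta m_\nu^2$.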
\subsection{LSND} The LSND (Liquid Scintillation Neutrino Detector) experiment is the only case of a pure laboratory experiment which shows indications for neutrino oscillations~\cite{LSND96}. It utilizes a proton beam at the Los Alamos National Laboratory in the US. The protons are directed at a target where neutrinos arise from the same basic mechanism Eq.~(\ref{eq:beamdump}), upper line, that produces them in the atmosphere. From $\pi^+$ decay-in-flight one obtains a $\nu_\mu$ beam of up to 180~MeV while the subsequent decay-at-rest of stopped $\mu^+$'s provides a $\bar\nu_\mu$ beam of less than 53~MeV. The beam should not contain any $\bar\nu_e$'s; they can be detected by $\bar\nu_e p\to n e^+$ in coincidence with $np\to d\gamma$(2.2~MeV). For energies above 36~MeV, the 1993--95 data included 22 such events above an expected background of $4.6\pm0.6$; this excess is interpreted as evidence for $\bar\nu_\mu\to\bar\nu_e$ oscillations. The LSND data favor a large range of $\nu_e$-$\nu_\mu$-mixing parameters. After taking the exclusion regions of other experiments into account, one is left with a sliver of mixing parameters in the range indicated in Table~\ref{tab:osci}. The KARMEN experiment is also sensitive in this range, but has not seen any events~\cite{KARMEN98}. This lack of confirmation, however, does not exclude the LSND evidence as the non-observation of only a few expected events is not a statistically persuasive conflict. Moreover, if one excludes the background-infested 20--36~MeV data in LSND one finds a much broader range of allowed mixing parameters than could have been probed by KARMEN~\cite{Caldwell98b}. Within 2--3 years all of the LSND area will be covered with high sensitivity by MiniBooNE~\cite{Boone}, a new experiment at Fermilab, which will settle this case. 
\begin{table}[b] \caption{Experimentally favored neutrino mass differences and mixing angles.\label{tab:osci}} \smallskip \hbox to\hsize{\hss\vbox{\hbox{\begin{tabular}[4]{llll} \hline\noalign{\vskip2pt}\hline\noalign{\vskip2pt} Experiment&Favored Channel&$\Delta m^2$ [$\,\rm eV^2$] &$\sin^22\Theta$\\ \noalign{\vskip2pt}\hline\noalign{\vskip2pt} LSND&$\bar\nu_\mu\to\bar\nu_e$&0.2--10&(0.2--$3)\times10^{-2}$\\ Atmospheric&$\nu_\mu\to\nu_\tau$&(1--8)${}\times10^{-3}$&0.85--1\\ &$\nu_\mu\to\nu_s$&(2--7)${}\times10^{-3}$&0.85--1\\ Solar\\ \quad Vacuum&$\nu_e\to{}$anything&$(0.5$--$8)\times10^{-10}$&0.5--1\\ \quad MSW (small angle)&$\nu_e\to{}$anything&(0.4--1)${}\times10^{-5}$ &$10^{-3}$--$10^{-2}$\\ \quad MSW (large angle) &$\nu_e\to\nu_\mu$ or $\nu_\tau$& (3--30)${}\times10^{-5}$&0.6--1\\ \noalign{\vskip2pt} \hline \noalign{\vskip2pt} \hline \end{tabular}}}\hss} \end{table} \subsection{Global Interpretation} In Table~\ref{tab:osci} we summarize the neutrino oscillation channels and mixing parameters indicated by the atmospheric and solar neutrino anomalies and the LSND experiment. Clearly there is no straightforward interpretation because there are too many indications! If only three different mass eigenstates $m_i$, $i=1,2,3$, exist, the mass splittings must satisfy \begin{equation} \sum_{\rm Splittings} \Delta m_\nu^2 =(m_3^2-m_2^2)+(m_2^2-m_1^2)+(m_1^2-m_3^2)=0, \end{equation} a trivial condition which is not met by the independent $\Delta m_\nu^2$ from Table~\ref{tab:osci}. Some of the experiments may not be due to a single $\Delta m_\nu^2$ but rather to nontrivial three-flavor oscillation patterns~\cite{Acker97,Cardall97,Teshima98,Thun98}. Even then it appears that one must ignore some of the experimental evidence or stretch the errors beyond plausible limits to accommodate all experiments in a three-flavor scheme. 
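The incompatibility stated above can be made explicit with a brute-force check (standalone Python; the $\Delta m_\nu^2$ values are order-of-magnitude representatives from Table~\ref{tab:osci}): no assignment of signs lets the three measured splittings sum to zero.

```python
from itertools import product

lsnd, atm, solar = 1.0, 3e-3, 1e-5   # representative |dm2| values in eV^2

def closes(a, b, c, tol=0.05):
    # Do the three signed splittings sum to (approximately) zero?
    return abs(a + b + c) <= tol * max(abs(a), abs(b), abs(c))

found = any(closes(sa * lsnd, sb * atm, sc * solar)
            for sa, sb, sc in product((+1, -1), repeat=3))
print(found)   # False: three mass eigenstates cannot carry all three dm2 scales
```

Since the largest splitting dominates any signed sum of three such disparate scales, the closure condition fails for every sign choice, which is the quantitative content of the statement that a three-flavor scheme cannot accommodate all the data.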
If one has to throw out one of the indications, LSND is usually taken as the natural victim because there is no independent confirmation, and because the other cases simply look too strong to be struck from the list. Once LSND has been disposed of, a typical mass and mixing scheme may be as shown in Fig.~\ref{fig:mass1} where the small-angle MSW solution has been taken for solar neutrinos. \begin{figure}[ht] \epsfxsize=7cm \hbox to\hsize{\hss\epsfbox{fig05.eps}\hss} \caption{Hierarchical mass and mixing scheme to account for solar and atmospheric neutrinos, the former by the small-angle MSW solution. The flavor content of each mass eigenstate is indicated by the fill-patterns. (Figure~\protect\cite{Smirnov99} reproduced with kind permission of A.~Smirnov.) \label{fig:mass1}} \end{figure} However, the large mixing angle which is needed to account for the atmospheric neutrino anomaly suggests that more than one mixing angle may be large. Moreover, the spectral distortion observed in SuperKamiokande suggests that the solar vacuum solution may be preferred~\cite{DPF99solar}. Of course, the vastly different values for $\Delta m_\nu^2$ implied by atmospheric neutrinos and the solar vacuum solution look unnatural. Shrugging off this objection, there are several workable schemes involving more than one large mixing angle, for example bi-maximal mixing or threefold maximal mixing~\cite{Smirnov99}. It is also conceivable that the mass differences are not representative of the masses themselves, i.e.\ that all three flavors have, say, an eV-mass with small splittings as implied by solar and atmospheric neutrinos (degenerate mass pattern). Of course, such a scheme is very different from the hierarchical patterns that we know in the quark and charged-lepton sectors, but the large mixing angle or angles look very unfamiliar, too.
If the neutrinos are Majorana particles, one may still evade bounds on the effective $\nu_e$ Majorana mass $\langle m_{\nu_e}\rangle_{\rm eff}$ relevant for neutrinoless $\beta\beta$ decay. For example, in the bi-maximal mixing case there is an exact cancellation so that $\langle m_{\nu_e}\rangle_{\rm eff}=0$ in the limit where the mass differences can be neglected relative to the common mass scale. At the present time there is no objective reason to ignore LSND. Taking it seriously, a very radical conclusion follows: there must be four independent mass eigenstates, i.e.\ at least one low-mass neutrino degree of freedom beyond the three sequential flavors. This fourth flavor $\nu_s$ would have to be sterile with regard to the standard weak interactions. Probably the most natural mass and mixing pattern is one like Fig.~\ref{fig:mass2}, but there are also other possibilities~\cite{Smirnov99,Valle98}. \begin{figure}[ht] \epsfxsize=7cm \hbox to\hsize{\hss\epsfbox{fig06.eps}\hss} \caption{Representative four-flavor mass and mixing scheme to account for all experimental evidence. (Figure~\protect\cite{Smirnov99} reproduced with kind permission of A.~Smirnov.) \label{fig:mass2}} \end{figure} Of course, it would be an extremely radical and unexpected finding if the oscillation experiments had not only turned up evidence for neutrino masses, but for an additional, previously unsuspected low-mass sterile neutrino. A confirmation of LSND by MiniBooNE~\cite{Boone} would make this conclusion difficult to avoid, so that this new experiment is perhaps the most urgent current effort in experimental neutrino physics. \section{Cosmology} \label{sec:cosmology} \subsection{Big-Bang Nucleosynthesis} Massive neutrinos and the existence of sterile neutrinos can have a variety of important cosmological consequences.
One immediately wonders whether a fourth neutrino flavor conflicts with the well-known big-bang nucleosynthesis (BBN) limit on the effective number of thermally excited primordial neutrino degrees of freedom~\cite{Malaney93,Sarkar96,Schramm98}. However, there are several questions. The first and most obvious one is whether the observationally inferred light-element abundances strictly exclude a fourth flavor at the epoch of BBN. The unfortunate answer is that, while a fourth flavor clearly would make a very significant difference, BBN is not in a position to exclude this possibility with the sort of confidence that would be required to dismiss the sterile-neutrino hypothesis~\cite{Olive98}. Second, a sterile neutrino need not attain thermal equilibrium in the first place. It is excited by oscillations in conjunction with collisions so that its contribution to the cosmic energy density at the BBN epoch depends on the mass difference and mixing angle with an active flavor~\cite{Barbieri91b,Cline92,Enqvist92,Shi93,Cardall96,Kirilova97,Bilenky98}. If the atmospheric neutrino anomaly is due to $\nu_\mu\to\nu_s$ oscillations, the large mixing angle and large $\Delta m_\nu^2$ imply that the sterile neutrino would be fully excited at the time of BBN. On the other hand, for the small-angle MSW solution or the vacuum solution of the solar neutrino problem, it is barely excited so that the additional energy density is negligible. Therefore, of the different four-flavor patterns BBN favors those where $\nu_e$-$\nu_s$ oscillations solve the solar neutrino problem over those where $\nu_\mu$-$\nu_s$ oscillations explain the atmospheric neutrino anomaly. Even this conclusion can be avoided if a lepton asymmetry of order $10^{-5}$ exists at the time of the primordial $\nu_\mu\to\nu_s$ oscillations~\cite{Foot95}.
It may be possible to create such asymmetries among the active neutrinos by oscillations between, say, $\nu_\tau$ ($\bar\nu_\tau$) and sterile states~\cite{Shi96,Foot97}, although the exact requirements on the mass and mixing parameters are controversial in some cases~\cite{Bell98,Foot98,Shi98a,Foot98b,Shi98b}. Be that as it may, a sterile neutrino provides for a rich oscillation phenomenology in the early universe, but at the same time BBN is not quite enough of a precision tool to distinguish seriously between different four-flavor patterns. As it stands, BBN would benefit more from pinning down the neutrino mass and mixing pattern experimentally than the other way round. \subsection{Dark Matter} Irrespective of the possible existence of a sterile neutrino, it has become difficult to dispute that neutrinos have masses. Therefore, they could play an important role for the cosmological dark matter. Standard calculations in the framework of the big-bang cosmology reveal that the present-day universe contains about $100~{\rm cm^{-3}}$ neutrinos and antineutrinos per active flavor~\cite{Kolb90}, leading to a cosmological mass fraction of \begin{equation} \Omega_\nu h^2=\sum_{i=1}^3\frac{m_i}{93~{\rm eV}}, \end{equation} where $h$ is the Hubble constant in units of $100~{\rm km~s^{-1}~Mpc^{-1}}$. The observed age of the universe together with the measured expansion rate reveals that $\Omega h^2\alt0.4$, leading to the most restrictive limit on the masses of all neutrino flavors~\cite{Gershtein66,Cowsik72}. Once we believe the current indications for oscillations, the mass differences are so small that this limit reads $m_\nu\mathrel{\mathpalette\vereq<} 13~{\rm eV}$ for the common mass scale of all flavors, roughly identical with the world-averaged tritium endpoint limit on $m_{\nu_e}$ of about 15~eV~\cite{Caso98}. If the neutrino masses were in this range they could be the cosmic dark matter as first pointed out more than 25 years ago~\cite{Cowsik73}.
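The quoted mass limit follows directly from the relation above; a minimal sketch, assuming three degenerate flavors sharing the bound $\Omega h^2\alt0.4$:

```python
def omega_nu_h2(masses_ev):
    """Cosmological neutrino mass fraction: Omega_nu h^2 = sum(m_i) / 93 eV."""
    return sum(masses_ev) / 93.0

# Invert the bound Omega h^2 < 0.4 for three degenerate flavors:
m_common = 0.4 * 93.0 / 3.0
print(f"{m_common:.1f} eV")                       # ~12.4 eV, i.e. the ~13 eV quoted
print(f"{omega_nu_h2(3 * [m_common]):.2f}")       # recovers 0.40
```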
However, it was quickly recognized that neutrinos do not make for a good universal dark matter candidate. The simplest counter-argument (``Tremaine-Gunn-limit'') arises from the phase space of spiral galaxies which cannot accommodate enough neutrinos to account for their dark matter unless the neutrino mass obeys a {\it lower} limit~\cite{Tremaine79,Madsen91}. For typical spiral galaxies it is~\cite{Salucci97} $m_\nu\agt20~{\rm eV}$, for dwarf galaxies even $m_\nu\agt100$--200~eV, difficult to reconcile with the cosmological upper limit. \subsection{Large-Scale Structure} The Tremaine-Gunn-limit is only the tip of the iceberg of evidence against neutrino dark matter. The most powerful argument arises from cosmic structure formation. At early times the universe was extremely smooth as demonstrated by the tiny amplitude of the temperature fluctuations of the cosmic microwave background radiation across the sky. The present-day distribution of matter, on the other hand, is very clumpy. There are stars, galaxies, clusters of galaxies, and large-scale coherent structures on scales up to about $100~{\rm Mpc}$. A perfectly homogeneous expanding universe stays that way forever. The standard theory~\cite{Kolb90,Boerner92,Coles95,Primack97} for the formation of structure has it that the universe was initially almost, but not quite, perfectly homogeneous, with a tiny modulation of its density field. The action of gravity enhances the density contrast as time goes on, leading to the observed structures. The outcome of this evolution depends on the initial spectrum of density fluctuations which is usually taken to be approximately flat, i.e.\ of the ``Harrison-Zeldovich-type,'' corresponding to the power-law-index $n=1$. However, the {\it effective\/} spectrum relevant for structure formation is the processed spectrum which obtains at the epoch when the universe becomes matter dominated. 
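The Tremaine-Gunn argument above can be sketched numerically: a halo of velocity dispersion $\sigma$ and core radius $r$ has density of order $\sigma^2/(2\pi G r^2)$, which must fit into the maximum neutrino phase-space density $\sim g\,(m\sigma)^3/(2\pi\hbar)^3$ per unit volume. This is only an order-of-magnitude estimate under assumed halo parameters, not the published derivation:

```python
import math

hbar = 1.0546e-27   # erg s
G    = 6.674e-8     # cm^3 g^-1 s^-2
eV   = 1.783e-33    # g

def m_min_ev(sigma, r, g=2):
    """Lower neutrino-mass bound from requiring the halo density
    sigma^2/(2 pi G r^2) not to exceed m * g * (m*sigma)^3 * (4 pi/3) / (2 pi hbar)^3."""
    m4 = 3 * (2 * math.pi * hbar) ** 3 / (8 * math.pi ** 2 * G * g * sigma * r ** 2)
    return m4 ** 0.25 / eV

kpc = 3.086e21  # cm
print(f"{m_min_ev(150e5, 10 * kpc):.0f} eV")  # spiral-like halo: a few tens of eV
```

Smaller, slower dwarf galaxies push the bound up to the 100--200~eV quoted in the text.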
As the matter which makes up the cosmic fluid can diffuse around, the smallest-scale density fluctuations will be wiped out. This effect is particularly important for weakly interacting particles which can diffuse far while they are relativistic. Low-mass particles stay relativistic for a long time and thus wipe out the primordial fluctuations up to large scales. More massive particles become nonrelativistic earlier and thus have this effect only on small scales. One speaks of ``hot dark matter'' (HDM) if the particle masses are small enough that all fluctuations are wiped out beyond scales which later correspond to a galaxy. Conversely, ``cold dark matter'' (CDM) has this effect only on sub-galactic scales. One way of presenting the results of calculations of structure formation is to show the expected power-spectrum of the present-day matter distribution (Fig.~\ref{fig:struc}) which can be compared to the observed galaxy distribution. The theory of structure formation then predicts the form, but not the amplitude of the spectrum, which can be fit either on large scales to the observed temperature fluctuations of the cosmic microwave background radiation as observed by the COBE satellite, or else on small scales to the observed galaxy distribution. Figure~\ref{fig:struc} illustrates that HDM (neutrinos) suppresses essentially all small-scale structure below a cut-off corresponding to a supercluster scale and thus does not seem to be able to account for the observations. \begin{figure}[b] \epsfxsize=8cm \hbox to\hsize{\hss\epsfbox{fig07.eps}\hss} \caption{Comparison of matter-density power spectra for cold dark matter (CDM), tilted cold dark matter (TCDM), hot dark matter (HDM), and mixed hot plus cold dark matter (MDM) for large-scale structure formation~\protect\cite{Steinhardt95}. All curves are normalized to COBE and include only the linear approximation; nonlinear corrections become important on scales below about $10\,\rm Mpc$.
(Figure reproduced with kind permission of P.~Steinhardt.) \label{fig:struc}} \end{figure} While cold dark matter works impressively well, it has the problem of producing too much clustering on small scales. Ways out include a primordial power spectrum which is not strictly flat (tilted dark matter), a mix of cold and hot dark matter, or the assumption of a cosmological constant. Currently there is a broad consensus that some variant of a CDM cosmology where structure forms by gravitational instability from primordial density fluctuations of approximately the Harrison-Zeldovich type is probably how our universe works. Thus, while it is widely accepted that neutrinos are not the main dark-matter component, quite conceivably they contribute something like 20\%, giving rise to a hot plus cold dark matter (HCDM) scenario which avoids the overproduction of small-scale structure of a pure CDM cosmology~\cite{Primack95,Pogosyan95,Klypin97,Gross98}. A HDM fraction exceeding about 20\% is inconsistent with the size of voids in the galaxy distribution~\cite{Ghigna94}. It was claimed that the HCDM picture with about 20\% HDM provides the best fit to all current large-scale structure data~\cite{Gross98,Gawiser98}. Moreover, if LSND is confirmed, especially with a $\Delta m_\nu^2$ of around $6~{\rm eV}^2$, there would be a cosmic HDM component of just the right magnitude~\cite{Primack95}. The LSND signal and the HCDM cosmologies have become closely intertwined issues. \begin{figure}[b] \epsfxsize=6cm \hbox to\hsize{\hss\epsfbox{fig08.eps}\hss} \caption{Effect of a 1~eV neutrino mass on the power spectrum of the distribution of bright-red galaxies compared with the expected $1\,\sigma$ sensitivity of the Sloan Digital Sky Survey (error boxes)~\protect\cite{Hu98}. Upper curves: $\Omega_M=1$, $h=0.5$ with or without a neutrino mass. Lower curves: $\Omega_M=0.2$, $h=0.65$.
(Figure reproduced with kind permission of M.~Tegmark) \label{fig:sloan}} \end{figure} However, important arguments against a HCDM scenario have appeared. First, a cosmological model with the critical amount of dark matter is hard to reconcile with all the evidence on the matter density; something like 30\% looks far more convincing. Moreover, the high-redshift type~Ia supernova Hubble diagram now indicates the existence of a cosmological constant $\Lambda$~\cite{Perlmutter98,Garnavich98,Riess98,Schmidt98}. If correct, one is naturally led to a critical cosmological model with something like 5\% baryonic matter, 25\% CDM, and 70\% ``vacuum energy.'' Likewise, the observed abundance of high-redshift ($z\sim3$) galaxies is reproduced in this type of $\Lambda$CDM model, but not by HCDM~\cite{Somerville98}. A small amount of HDM is still possible in a $\Lambda$CDM scenario, but not especially needed for anything~\cite{Primack98}. The cosmic large-scale structure is sensitive to small neutrino masses, whether or not they are needed. Put another way, the unknown common mass scale which is left open by oscillation experiments has a measurable impact on the power spectrum of the large-scale matter distribution. For example, the upcoming Sloan Digital Sky Survey~\cite{Sloan} will produce precision data where a neutrino mass as small as 0.1~eV makes a noticeable difference~\cite{Hu98}, even though a statistically meaningful neutrino mass limit may not lie far below 1~eV. This is illustrated in Fig.~\ref{fig:sloan} where the expected Sloan sensitivity to the power spectrum of bright red galaxies is compared with theoretical predictions in a universe with the critical mass in dark matter ($\Omega_M=1$) and a low-density universe ($\Omega_M=0.2$), each time with or without a 1~eV neutrino. In the long-term future, weak lensing of galaxies by large-scale structure may provide even more precise information on cosmological parameters.
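The size of this effect can be estimated with the linear-theory rule of thumb that a small hot fraction $f_\nu=\Omega_\nu/\Omega_M$ suppresses small-scale power by $\Delta P/P\approx-8f_\nu$; this scaling, and the numbers below, are assumed illustrative approximations valid only while the result is small:

```python
def power_suppression(m_nu_ev, omega_m, h, n_flavors=1):
    """Small-scale power suppression Delta P / P ~ -8 * Omega_nu / Omega_M
    for n_flavors degenerate neutrinos of mass m_nu_ev (linear theory,
    rule of thumb valid only for small hot fractions)."""
    omega_nu = n_flavors * m_nu_ev / (93.0 * h ** 2)
    return -8.0 * omega_nu / omega_m

print(f"{power_suppression(1.0, 1.0, 0.5):.2f}")   # 1 eV in Omega_M=1, h=0.5: ~ -34%
print(f"{power_suppression(0.1, 1.0, 0.5):.3f}")   # even 0.1 eV leaves a few-percent imprint
```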
An ultimate sensitivity to a neutrino mass as low as 0.1~eV has been suggested~\cite{Hu98b}. \subsection{Cosmic Microwave Background Radiation} Another sensitive probe of large-scale structure is the cosmic microwave background radiation (CMBR), and more specifically the power-spectrum of its temperature fluctuations across the sky. The anticipated sky maps of the future MAP~\cite{MAP} and PLANCK~\cite{PLANCK} satellite missions have already received advance praise as the ``Cosmic Rosetta Stone''~\cite{Bennett97} because of the wealth of cosmological precision information they are expected to reveal~\cite{White94,Jungman96b,Bond97,Hu98c}. \begin{figure}[ht] \hbox to\hsize{\hss\epsfxsize=6.7cm\epsfbox{fig09.eps}\hss} \caption{{\it Top:} CMBR fluctuation spectrum for SCDM with $h=0.5$, $\Omega_M=1$, $\Omega_B=0.05$, and $N_{\rm eff}=3$ (solid line)~\protect\cite{Hannestad99}. The dotted line is for $N_{\rm eff}=4$, and the dashed line when two of these four neutrinos have equal masses corresponding together to $\Omega_{\rm HDM}=0.2$ ($\Omega_{\rm CDM}=0.75$). {\it Bottom:}~Relative difference of these nonstandard models to SCDM. The shaded band represents the cosmic variance. (Spectra calculated with the CMBFAST~\protect\cite{CMBFAST} package.)} \label{fig:cmbr} \end{figure} CMBR sky maps are characterized by their fluctuation spectrum $C_\ell=\langle a^{}_{\ell m} a^*_{\ell m}\rangle$ where $a_{\ell m}$ are the coefficients of a spherical-harmonic expansion. Figure~\ref{fig:cmbr} (solid line) shows $C_\ell$ for standard cold dark matter (SCDM) with $N_{\rm eff}=3$ for the effective number of neutrino degrees of freedom. Sterile neutrinos increase the radiation content and thus modify this pattern in a characteristic way illustrated by the dotted line, which corresponds to $N_{\rm eff}=4$. 
While this shift appears small, the lower panel of Fig.~\ref{fig:cmbr} shows that for $\ell\mathrel{\mathpalette\vereq>} 200$ it is large on the scale of the expected measurement precision. It is fundamentally limited by the ``cosmic variance'' $\Delta C_\ell/C_\ell=\sqrt{2/(2\ell+1)}$, i.e.\ by the fact that at our given location in the universe we can measure only $2\ell+1$ numbers $a_{\ell m}$ to obtain the expectation value $\langle a^{}_{\ell m} a^*_{\ell m}\rangle$. The actual sensitivity will be worse, but the cosmic variance gives us an optimistic idea of what one may hope to achieve. The true sensitivity to $\Delta N_{\rm eff}$ is further limited by our lack of knowledge of several other cosmological parameters. Even then it is safe to assume that we are sensitive to $|\Delta N_{\rm eff}|\alt0.3$, and much better with prior knowledge of other parameters~\cite{Jungman96b}. Thus it appears that the CMBR is a more powerful tool to measure $N_{\rm eff}$ than the standard BBN argument, although a more pessimistic assessment was put forth in a more recent analysis~\cite{Hu98c}. If LSND is right, some of the neutrinos have eV masses which imprint themselves on the CMBR fluctuation spectrum~\cite{Ma95,Dodelson96}. For example, if the atmospheric neutrino anomaly is due to $\nu_\mu$-$\nu_s$-oscillations, we will have approximately $N_{\rm eff}=4$, and two of these states will have an eV-range mass. The CMBR imprint of this scenario is illustrated with the dashed curve in Fig.~\ref{fig:cmbr} where $\Omega_\nu=0.2$. With $\Omega_{2\nu}h^2=2m_\nu/93~{\rm eV}$ and taking $h=0.5$ this implies $m_\nu\approx2.4~{\rm eV}$, well within the range suggested by~LSND. 
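The cosmic-variance floor quoted above is easy to evaluate; a minimal sketch:

```python
import math

def cosmic_variance(ell):
    """Fractional uncertainty Delta C_l / C_l = sqrt(2/(2l+1)): at each
    multipole only 2l+1 independent a_lm exist to estimate <|a_lm|^2>."""
    return math.sqrt(2.0 / (2 * ell + 1))

for ell in (2, 200, 1000):
    print(ell, f"{cosmic_variance(ell):.3f}")
# l=2: ~63%; l=200: ~7%; l=1000: ~3% -- the high multipoles carry the precision
```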
The range of $\Delta N_{\rm eff}$ and the HDM fraction that can be determined by the future CMBR sky maps, together with large-scale galaxy surveys, cannot be foretold with certainty, but surely these cosmological precision observables are significantly affected by the currently debated neutrino mass and mixing patterns. Cosmology may be our best bet to pin down the overall neutrino mass scale which is left undetermined by oscillation experiments. \section{Supernova Physics} \label{sec:supernova} \subsection{Kinematical Mass Limits} When SN~1987A exploded on 23 February 1987 in the Large Magellanic Cloud at a distance of about 50~kpc (165,000~lyr), it produced the third case of a measured neutrino signal from an astrophysical source after the Sun and the Earth's atmosphere. Therefore, we turn to the role of masses and mixings for SN neutrinos in general, and for the SN~1987A burst in particular. A type~II SN explosion~\cite{Brown82,Bethe90,Petschek90} marks the end of the life of a massive star ($M\agt8\,M_\odot$) which has developed a degenerate iron core, surrounded by several burning shells. As the core reaches its Chandrasekhar limit of 1--$2\,M_\odot$ (solar masses) it becomes unstable and collapses down to nuclear density ($3\times10^{14}~{\rm g~cm^{-3}}$) where the equation of state stiffens and the implosion is halted. At this point a shock wave forms which ejects the mantle of the progenitor star---the SN explosion is the reversed core implosion. At about nuclear density and a temperature of several 10~MeV the newly formed neutron star is opaque to neutrinos which are thus emitted from a shell at about unit optical depth, the ``neutrino sphere,'' crudely with a thermal spectrum. 
One expects that the total binding energy~\cite{Sato87,Janka89} \begin{equation}\label{eq:bindingenergy} E_{\rm b}=\hbox{1.5--4.5}\times10^{53}~{\rm erg} \end{equation} is roughly equipartitioned among all (anti)neutrino flavor degrees of freedom and that it is emitted within several seconds. This picture agrees well with the SN~1987A observations in the Kamiokande~\cite{Hirata88} and IMB~\cite{Bratton88} water Cherenkov detectors and the Baksan Scintillator Telescope~\cite{Alexeyev87} which were all primarily sensitive to the positrons from the $\bar\nu_e+p\to n+e^+$ capture reaction. A neutrino mass can manifest itself by a time-of-flight dispersion of the SN burst~\cite{Zatsepin68}. The neutrino arrival time from a distance $D$ is delayed by \begin{equation}\label{eq:sndelay} \Delta t=2.57~{\rm s}\, \left(\frac{D}{50~{\rm kpc}}\right)\, \left(\frac{10~{\rm MeV}}{E_\nu}\right)^2\, \left(\frac{m_\nu}{10~{\rm eV}}\right)^2. \end{equation} As the $\bar\nu_e$'s from SN~1987A were registered within a few seconds and had energies in the 10~MeV range, the $m_{\nu_e}$ limit is around 10~eV. Detailed analyses reveal that the pulse duration is consistently explained by the SN cooling time and that $m_{\nu_e}\mathrel{\mathpalette\vereq<} 20~{\rm eV}$ is implied at something like 95\% CL~\cite{Loredo89,Kernan95}. The high-statistics observation of a future galactic SN with a large detector like SuperKamiokande allows one to improve the $m_{\nu_e}$-sensitivity to about $3~{\rm eV}$ because one can use the fast rise-time of the signal as a dispersion measure rather than the overall burst duration itself~\cite{Totani98}. On the other hand, the neutral-current signal in a large water Cherenkov detector like SuperKamiokande or SNO provides a direct handle on $m_{\nu_\mu}$ and $m_{\nu_\tau}$ of no better than 30~eV~\cite{Seckel91,Krauss92,Fiorentini97a,Beacom98}.
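Eq.~(\ref{eq:sndelay}) is just $\Delta t=(D/2c)(m_\nu/E_\nu)^2$ in convenient units; a small numerical sketch of how the SN~1987A burst duration constrains $m_{\nu_e}$ (the specific event energies are illustrative):

```python
def sn_delay_s(D_kpc, E_MeV, m_eV):
    """Time-of-flight delay Delta t = (D/2c)(m/E)^2, normalized as in the
    text: 2.57 s at D = 50 kpc, E = 10 MeV, m = 10 eV."""
    return 2.57 * (D_kpc / 50.0) * (10.0 / E_MeV) ** 2 * (m_eV / 10.0) ** 2

# A 10 eV neutrino at a typical 20 MeV event energy from the LMC:
print(f"{sn_delay_s(50, 20, 10):.2f} s")  # ~0.6 s, below the few-second burst
print(f"{sn_delay_s(50, 20, 30):.1f} s")  # a 30 eV mass would smear the burst by ~6 s
```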
Even with a future neutral-current detector like OMNIS it is not realistically possible to probe $m_{\nu_\mu}$ and $m_{\nu_\tau}$ down to a few eV~\cite{Cline94,Smith97}. \subsection{SN~1987A and Flavor Oscillations} While the SN~1987A limit on $m_{\nu_e}$ is not truly interesting for the current debate, the event energies bear on the large-angle solutions of the solar neutrino problem, and especially on the vacuum solution. In typical numerical simulations one finds for the average energies for the different flavors~\cite{Janka93} \begin{equation}\label{eq:energies} \langle E_{\nu}\rangle=\cases{10{-}12\,{\rm MeV}&for $\nu_e$,\cr 14{-}17\,{\rm MeV}&for $\bar\nu_e$,\cr 24{-}27\,{\rm MeV}&for $\nu_{\mu,\tau}$ and $\bar\nu_{\mu,\tau}$,} \end{equation} so that $\langle E_{\nu_e}\rangle:\langle E_{\bar\nu_e}\rangle: \langle E_{\rm others}\rangle\approx \frac{2}{3}:1:\frac{5}{3}$. Large mixing angle oscillations between $\bar\nu_e$ and $\bar\nu_\mu$ would partially swap their fluxes and thus ``stiffen'' the $\bar\nu_e$ spectrum observable at Earth~\cite{Kernan95,Wolfenstein87,Lagage87,Smirnov94,Jegerlehner96}. (We take $\bar\nu_\mu$ to stand for either $\bar\nu_\mu$ or~$\bar\nu_\tau$.) Therefore, some of the SN~1987A events would have been oscillated $\bar\nu_\mu$'s which should have been correspondingly more energetic. \begin{figure}[ht] \epsfxsize=7cm \hbox to\hsize{\hss\epsfbox{fig10.eps}\hss} \caption{Best-fit values for the spectral $\bar\nu_e$ temperature $T_{\bar\nu_e}$ and the neutron-star binding energy $E_{\rm b}$, as well as contours of constant likelihood corresponding to 95\% confidence regions~\protect\cite{Jegerlehner96}. They are based on a joint analysis between the Kamiokande and IMB data, assuming maximum mixing and the indicated values for $\tau=T_{\bar\nu_\mu}/T_{\bar\nu_e}$, where $\tau=1$ corresponds to no oscillations. The hatched region represents the predictions of Eqs.~(\protect\ref{eq:bindingenergy}) and~(\protect\ref{eq:energies}). 
\label{fig:sn}} \end{figure} A maximum-likelihood analysis of the $\bar\nu_e$ spectral temperature and the neutron-star binding energy inferred from the Kamiokande~\cite{Hirata88} and IMB~\cite{Bratton88} data (Fig.~\ref{fig:sn}) reveals that even in the no-oscillation case there is only marginal overlap with the theoretical expectation of Eq.~(\protect\ref{eq:energies}). The observed neutrinos were softer than predicted, especially at Kamiokande. Including a spectral swap exacerbates this problem in that the energies should have been even higher. In Fig.~\ref{fig:sn} we show 95\% likelihood contours for the inferred $\bar\nu_e$ spectral temperature $T_{\bar\nu_e}=\langle E_{\bar\nu_e}\rangle/3$ and the neutron-star binding energy $E_{\rm b}$ for maximum $\bar\nu_e$-$\bar\nu_\mu$-mixing and for several values of $\tau=T_{\bar\nu_\mu}/T_{\bar\nu_e}$. Even for moderate spectral differences a maximum mixing between $\bar\nu_e$ and the other flavors causes a conflict with the SN~1987A data~\cite{Smirnov94,Jegerlehner96}. It may be premature to exclude the solar vacuum solution on these grounds as the spectral differences may have been overestimated. They arise because of flavor-dependent opacities. The electron-flavored neutrinos are trapped by $\nu_e n\to p e^-$ and $\bar\nu_e p\to n e^+$. The other flavors interact by neutral-current collisions which have smaller cross sections so that these particles emerge from deeper and hotter layers. They escape from their ``transport sphere'' where collisions are no longer effective, but most critical for their spectrum is the ``energy sphere'' where they last exchanged energy with the medium~\cite{Janka95}. Electron scattering $\nu e^-\to e^-\nu$ was taken to dominate the energy exchange and $e^+e^-\to \nu\bar\nu$ the pair production.
However, the dominant pair-process is nucleonic bremsstrahlung~\cite{Suzuki91,Hannestad98} $NN\to NN\nu\bar\nu$, the dominant energy-exchange processes are recoils and inelasticities in $\nu N\to N\nu$ scattering~\cite{Janka96,Hannestad98}. Including these effects clearly makes the $\bar\nu_\mu$ spectrum more similar to $\bar\nu_e$. A preliminary estimate suggests that the remaining spectral differences may be small enough to avoid a conflict between SN~1987A and the solar vacuum solution~\cite{Hannestad98}. Since neutrino oscillations can be crucial for the interpretation of the signal from a future galactic SN~\cite{Qian94,Choubey98,Fuller98}, one should indeed spend more effort at understanding details of the spectra formation process~\cite{Hardy99}. An interesting case which does not depend on the spectral differences is the ``prompt $\nu_e$ burst,'' originating from the deleptonization of the outer core layers at about 100~ms after bounce when the shock wave breaks through the edge of the collapsed core. This ``deleptonization burst'' propagates through the mantle and envelope of the progenitor star so that resonant oscillations take place for a large range of mixing parameters between $\nu_e$ and some other flavor, notably for some of those values where the MSW effect operates in the Sun~\cite{Mikheyev86,Notzold87,Rosen88}. In a Cherenkov detector one can see this burst by $\nu_e$-$e$-scattering which is forward peaked, but one would have expected only a fraction of an event from SN~1987A. The first event in Kamiokande may be attributed to this signal, but this interpretation is statistically insignificant. The experimental signal of the prompt $\nu_e$ burst from a future galactic SN is closely intertwined with the mixing parameters which solve the solar neutrino problem. \subsection{Flavor Oscillations and Supernova Physics} Flavor oscillations can have interesting ramifications for SN physics itself, independently of neutrino flux measurements at Earth. 
As galactic SNe are rare (one every few decades, or even fewer) it is not guaranteed that we will observe neutrinos from another SN anytime soon. Therefore, it is even more important to use the SN phenomenon itself as a laboratory for neutrino physics. \begin{figure}[b] \hbox to\hsize{\hss\epsfxsize=7cm\epsfbox{fig11.eps}\hss} \caption{Mixing parameters between $\nu_e$ and $\nu_\mu$ or $\nu_\tau$ where a spectral swap would help explode supernovae~\protect\cite{Fuller92} and where it would prevent r-process nucleosynthesis~\protect\cite{Qian93,Qian95,Sigl95}. \label{fig:snosci}} \end{figure} For example, flavor oscillations can help with the explosion~\cite{Fuller92}. The standard scenario of a type~II SN explosion has it that a shock wave forms near the edge of the core when its collapse halts at nuclear density and that this shock wave ejects the mantle of the progenitor star. However, in typical numerical calculations the shock wave stalls so that this ``prompt explosion'' scenario does not seem to work. In the ``delayed explosion'' picture the shock wave is revived by neutrino heating, perhaps in conjunction with convection, but even then it appears difficult to obtain a successful or sufficiently energetic explosion. The efficiency of neutrino heating can be increased by resonant flavor oscillations which swap the $\nu_e$ flux with, say, the $\nu_\tau$ one. Therefore, what passes through the shock wave as a $\nu_e$ was born as a $\nu_\tau$ at the proto neutron star surface. It has on average higher energies and thus is more effective at transferring energy. In Fig.~\ref{fig:snosci} the shaded range of mixing parameters is where SNe are helped to explode, assuming a ``normal'' neutrino mass spectrum with $m_{\nu_e}<m_{\nu_\tau}$. Below the shaded region the resonant oscillations take place beyond the shock wave and thus do not affect the explosion.
A few seconds after core bounce the shock wave has long taken off, leaving behind a relatively dilute ``hot bubble'' above the neutron-star surface. This region is one suspected site for the r-process heavy-element synthesis, which requires a neutron-rich environment~\cite{Woosley92,Witti94,Hoffman96,Meyer97,% Meyer98}. The neutron-to-proton ratio, which is governed by the beta reactions $\nu_e+n\to p+e^-$ and $\bar\nu_e+p\to n+e^+$, is shifted to a neutron-rich phase if $\langle E_{\nu_e}\rangle<\langle E_{\bar\nu_e}\rangle$ as for standard neutrino spectra. Resonant oscillations can again swap the $\nu_e$ flux with another one, inverting this hierarchy of energies. In the hatched range of mixing parameters shown in Fig.~\ref{fig:snosci} the r-process would be disturbed~\cite{Qian93,Qian95,Sigl95}, in conflict with the upper range of LSND-inspired mass differences. On the other hand, oscillations $\nu_e\to\nu_s$ into a sterile neutrino could actually help the r-process by depleting the neutron-stealing $\nu_e$ flux~\cite{Nunokawa97a,Caldwell98}. \subsection{Pulsar Kicks by Oscillations?} Radio pulsars often move with velocities~\cite{Lyne94,Lorimer97,Cordes98} of several $100~{\rm km~s^{-1}}$, a phenomenon yet to be explained. The acceleration probably takes place in the context of their formation in a core-collapse SN, i.e.\ they likely receive a kick at birth. One explanation appeals to a ``neutrino rocket'' because the momentum carried by the neutrino burst is so large that an emission anisotropy as small as 1\% suffices to account for a recoil of about $300~{\rm km~s^{-1}}$. However, even such a small anisotropy is difficult to explain. Pulsars tend to have strong magnetic fields which may well be suspected to cause the asymmetry. The neutrino refractive index depends on the direction of the neutrino momentum relative to ${\bf B}$. 
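The 1\% figure is simple momentum bookkeeping; a rough numerical sketch with assumed fiducial values for the radiated energy and the neutron-star mass:

```python
# Recoil from an anisotropic neutrino burst (order-of-magnitude estimate)
E_b  = 3e53          # erg, total energy radiated in neutrinos (assumed)
c    = 3e10          # cm/s
M_ns = 1.4 * 2e33    # g, neutron-star mass (assumed 1.4 solar masses)

anisotropy = 0.01    # 1% emission asymmetry
v_kick = anisotropy * (E_b / c) / M_ns
print(f"{v_kick / 1e5:.0f} km/s")  # a few hundred km/s, the observed scale
```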
Under suitable conditions, resonant neutrino oscillations occur between the neutrinospheres of $\nu_e$ and $\nu_\tau$, deforming the effective $\nu_\tau$~sphere. The $\nu_\tau$'s would then emerge from regions of varying effective temperature and thus, it was argued, would be emitted anisotropically~\cite{Kusenko96}. This argument was then taken up in several papers with modified neutrino oscillation scenarios~\cite{Kusenko97,Akhmedov97a,Grasso98,Horvat98}. Unfortunately, this intriguing idea does not work for plausible magnetic field strengths~\cite{Janka99}. The oscillations take place in the ``atmosphere'' of the neutron star, while the neutrino flux is fixed much deeper inside. The atmosphere adjusts itself to transport the neutrino flux, not the other way round. Neutrino oscillations in the atmosphere leave the overall flux unchanged except for a higher-order backreaction effect which obtains because of the anisotropically modified atmospheric structure. It may still be that a neutrino rocket effect is responsible for the pulsar kicks, but the cause of the anisotropy remains unclear, as does the question whether it is related to nonstandard neutrino properties. \subsection{Neutrino Mass Limit from Neutron-Star Stability?} In a thought-provoking paper~\cite{Fischbach96} it was recently claimed that neutron stars provide a {\it lower\/} neutrino mass limit of $m_\nu\agt0.4~{\rm eV}$. Two-neutrino exchange between fermions gives rise to a long-range force. A neutrino may also pass around several fermions, so to speak, producing a much smaller potential. This multibody neutrino exchange, it was argued, would be a huge effect in neutron stars because combinatorial factors among many neutrons win out against the smallness of the potential for a given set of them. One way out is to suppress the long-range nature of neutrino exchange by a nonzero $m_\nu$.
This idea triggered a series of papers where it was shown that a proper resummation of a seemingly divergent series of terms leads to a well-behaved and small ``neutron-star self-energy,'' invalidating the claim of a lower neutrino mass limit~\cite{Kiers98,Abada96,Abada98,Arafune98}. As naively expected, there is no mysterious long-range force from neutrino exchange, but these papers are still interesting reading for anyone interested in questions of neutrino physics in media. \section{Neutrino Astronomy} \label{sec:neutrinoastronomy} \subsection{Neutrino Telescopes} For twenty years after the first observation of solar neutrinos at the Homestake detector, neutrino astronomy remained a one-experiment field. The SN~1987A neutrino observations mark a turning point---the number of experiments and observatories has multiplied since about that time, with more than a dozen previous, operating or projected neutrino detectors measuring solar and atmospheric neutrinos or searching for a new SN burst. The neutrino sky at low energies is dominated by these sources with a solar $\nu_e$ flux of around $6.6\times10^{10}~{\rm cm^{-2}~s^{-1}}$ in the MeV range and that from a SN at a distance of 10~kpc of around $3\times10^{12}~{\rm cm^{-2}~s^{-1}}$ in the 10--100~MeV range during the burst of a few seconds. At around 1~GeV the atmospheric neutrino flux for all flavors together and integrated over all angles is $dN_\nu/d\ln E_\nu\approx0.7~{\rm cm^{-2}~s^{-1}}$, dropping with energy approximately as $E_\nu^{-2}$. A new development is the emergence of huge neutrino telescopes with the goal of observing astrophysical sources of neutrinos with energies in the TeV range and beyond~\cite{Gaisser95,Halzen97a,Halzen98a}. The existence of cosmic rays with energies reaching beyond $10^{20}~{\rm eV}$ proves that they must have been accelerated somewhere, but the nature of the accelerators remains mysterious. 
Protons are deflected in the micro-Gauss galactic magnetic field so that the cosmic rays hitting the Earth do not point back to their sources, a problem not shared by neutrinos. High-energy neutrinos are expected from ``cosmic beam dumps'' whenever the protons interact with matter or even photons to produce pions---the Earth's atmosphere as a neutrino source is the simplest case in point. Estimates of the expected neutrino fluxes vary, but certainly one needs detectors far exceeding the size of SuperKamiokande. For a useful neutrino Cherenkov telescope one probably needs a cubic kilometer of water or ice instrumented with photomultipliers which can be placed on a grid with a typical spacing of order 30~m. There are now several such utopian-sounding projects under way. A small but functioning instrument has been deployed in Lake Baikal~\cite{Baikal97}, but it will probably not grow to the ${\rm km^3}$ scale. Two Mediterranean projects, NESTOR~\cite{Nestor99} and ANTARES~\cite{Antares98}, are in the R\&D and feasibility-study phase. At present the most advanced detector with a realistic ${\rm km}^3$ perspective is AMANDA~\cite{Amanda98} at the South Pole (Fig.~\ref{fig:amanda}). The antarctic ice is used both as a Cherenkov medium and as a mechanical support structure for strings of photomultipliers which are frozen into 2~km deep holes. \begin{figure} \hbox to\hsize{\hss\epsfxsize=11.8cm\epsfbox{fig12.eps}\hss} \caption{Schematic view of the AMANDA South Pole high-energy neutrino telescope~\protect\cite{Amanda98}. (Figure reproduced with permission of F.~Halzen.) \label{fig:amanda}} \end{figure} The main focus of these exciting projects is neutrino astronomy, i.e.\ to study the sky in a new form of radiation and to learn about the nature of the astrophysical sources. However, high-energy neutrino astronomy has several important ramifications of direct particle-physics interest. 
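The instrumentation scale quoted above can be made concrete with a back-of-envelope count. The sketch below assumes, purely for illustration, a uniform cubic grid with the quoted $\sim$30~m spacing; it is not a description of any actual detector layout:

```python
# Rough count of optical modules needed to instrument one cubic kilometer
# on a uniform grid. The edge length and spacing are the order-of-magnitude
# figures quoted in the text, not a real detector design.
side_m = 1000.0     # detector edge length in meters (assumed)
spacing_m = 30.0    # typical photomultiplier spacing (as quoted)

modules_per_edge = int(side_m / spacing_m) + 1   # grid points along one edge
n_modules = modules_per_edge ** 3

print(n_modules)  # a few times 10^4 optical modules
```

The answer, a few times $10^4$ photomultipliers, illustrates why such projects dwarf SuperKamiokande's roughly $10^4$ tubes in a far smaller volume.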
\subsection{Search for Particle Dark Matter} First, one may search for dark matter in the form of weakly interacting massive particles (WIMPs), especially in the guise of the supersymmetric neutralinos. The case for these particles has become stronger as massive neutrinos no longer seem tenable as a main dark-matter constituent. Galactic WIMPs are accreted by the Sun or Earth where they annihilate with each other, leading to a secondary GeV--TeV neutrino flux. Depending on details of the assumed supersymmetric model, this ``indirect'' method to search for particle dark matter is competitive with the direct laboratory experiments~\cite{Jungman96,Bergstrom97}. \subsection{Tau-Neutrinos from Astrophysical Sources} Neutrinos produced in cosmic beam dumps should have the same flavor content as those produced in the atmosphere. If atmospheric neutrinos indeed oscillate, so do the ones from high-energy astrophysical sources. If the $\nu_\mu\to\nu_\tau$ oscillation channel is what explains the atmospheric anomaly, then the astrophysical beam dumps produce a flux which includes high-energy $\nu_\tau$'s. One signature in a Cherenkov detector is the so-called double-bang event~\cite{Learned95}, which consists of a big hadronic shower from the initial $\nu_\tau$ interaction, a muon-like $\tau$-track, and then a second big particle cascade when the $\tau$ decays. This could be 100~m downstream from the first interaction if the primary energy was in the PeV ($10^{15}~{\rm eV}$) range as expected from active galactic nuclei (AGNs) as neutrino sources~\cite{Halzen97b}. However, such signatures may be difficult to detect in a first-generation telescope like AMANDA. The Earth is opaque to neutrinos with energies above something like 100~TeV, but $\nu_\tau$'s can still make it to the detector from below~\cite{Halzen98b}. 
The main idea is that a $\tau$ produced in a charged-current interaction of the primary $\nu_\tau$ decays back into a $\nu_\tau$ before losing much energy, thereby piling up $\nu_\tau$'s at energies around 100~TeV. Moreover, this effect would manifest itself by a flat zenith-angle dependence of the source intensity at the highest energies~\cite{Halzen98b}. The atmospheric neutrino anomaly has rather immediate consequences for high-energy neutrino astronomy! \subsection{Neutrino Masses} Besides AGNs, gamma-ray bursts are one of the favored suspects for producing the highest-energy cosmic rays and for producing high-energy neutrinos~\cite{Waxman97}. Their pulsed nature allows one to search for neutrino masses by time-of-flight dispersion in analogy to the SN~1987A mass limit. Since typical gamma-ray bursts are at cosmological distances of order 1000~Mpc, one gains enormously in Eq.~(\ref{eq:sndelay}) relative to SN~1987A, but of course the final mass sensitivity depends on the time structure (perhaps as short as milliseconds) and the observed neutrino energies. If neutrinos with energies as high as $10^{22}~{\rm eV}$ are copiously produced in astrophysical sources, and if eV-mass neutrinos exist as a hot dark matter component and are locally clustered, then high-energy particle cascades would be initiated which could produce, as secondary products, the highest-energy observed cosmic rays with energies beyond $10^{20}~{\rm eV}$~\cite{Weiler97,Fargion97,Yoshida98}. The universe is opaque to protons above $4\times10^{19}~{\rm eV}$, the Greisen-Zatsepin-Kuzmin cutoff, due to photo-pion production on the cosmic microwave radiation. Therefore, the highest-energy cosmic rays, if they are protons, must have a local source, but the observed events do not point toward any plausible structure which might serve as such. Neutrinos thus offer one of many speculative explanations for the puzzle of the highest-energy cosmic rays. 
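The time-of-flight argument behind the dispersion limits mentioned above is simple to sketch numerically: in the relativistic limit a neutrino of mass $m$ and energy $E$ arrives later than a massless particle by $\Delta t \simeq (D/2c)\,(m/E)^2$ over a distance $D$. The parameter choices below (a 10~kpc SN with 10~MeV neutrinos, a 1000~Mpc burst with TeV neutrinos, and a 10~eV test mass) are illustrative, not fits to any data:

```python
# Time-of-flight delay Delta_t ~ (D / 2c) * (m/E)^2 for a relativistic
# neutrino of mass m and energy E over a distance D.
C = 2.998e8      # speed of light, m/s
KPC = 3.086e19   # one kiloparsec in meters

def tof_delay(distance_m, mass_eV, energy_eV):
    """Arrival-time delay in seconds relative to a massless particle."""
    return distance_m / (2.0 * C) * (mass_eV / energy_eV) ** 2

# Galactic SN: D = 10 kpc, E = 10 MeV, m = 10 eV -> delay of order a second,
# comparable to the burst duration, hence the SN 1987A-type mass limits.
sn_delay = tof_delay(10 * KPC, 10.0, 10e6)

# Gamma-ray burst: D = 1000 Mpc, E = 1 TeV, m = 1 eV -> delay far below the
# millisecond time structure; the 10^5-fold longer baseline is what makes
# pulsed cosmological sources sensitive probes despite the high energies.
grb_delay = tof_delay(1e6 * KPC, 1.0, 1e12)
```

The comparison shows the trade-off stated in the text: the cosmological baseline gains a factor $\sim10^5$ in $D$, while the much higher neutrino energies suppress the delay quadratically.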
\section{Neutrino Electromagnetic Properties} \label{sec:electromagneticproperties} \subsection{Form Factors} A survey of neutrino astrophysics would be incomplete without a discussion of neutrino electromagnetic properties which could have several important astrophysical consequences. The most general neutrino interaction with the electromagnetic field is~\cite{Mohapatra91,Winter91} \begin{equation} {\cal L}_{\rm int}=-F_1\bar\psi\gamma_\mu\psi A^\mu -G_1\bar\psi\gamma_\mu\gamma_5\psi\,\partial_\nu F^{\mu\nu} -{\textstyle\frac{1}{2}} \bar\psi\sigma_{\mu\nu}(F_2+G_2\gamma_5)\psi F^{\mu\nu}, \end{equation} where $\psi$ is the neutrino field, $A^\mu$ the electromagnetic vector potential, and $F^{\mu\nu}$ the field-strength tensor. The form factors are functions of $Q^2$ with $Q$ the energy-momentum transfer. In the $Q^2\to0$ limit $F_1$ is a charge, $G_1$ an anapole moment, $F_2$ a magnetic, and $G_2$ an electric dipole moment. Charge neutrality implies $F_1(0)=0$. What remains is a charge radius which, like the anapole moment, vanishes in the $Q^2\to0$ limit. Therefore, it provides only a contact interaction and, as such, a correction to processes with $Z^0$ exchange~\cite{Degrassi89,Gongora92}. As astrophysics provides no precision test for the effective strength of neutral-current interactions, these form factors are best probed in laboratory experiments~\cite{Salati94}. Therefore, the only astrophysically interesting possibilities are magnetic and electric dipole and transition moments. If the standard model is extended to include neutrino Dirac masses, the magnetic dipole moment is $\mu_\nu=3.20\times10^{-19}\,\mu_{\rm B}\,m_\nu/{\rm eV}$ where $\mu_{\rm B}=e/2m_e$ is the Bohr magneton~\cite{Mohapatra91,Winter91}. An electric dipole moment $\epsilon_\nu$ violates CP; both it and $\mu_\nu$ are forbidden for Majorana neutrinos. Flavor mixing implies electric and magnetic transition moments for both Dirac and Majorana neutrinos, but they are even smaller due to a GIM cancellation. 
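The smallness of the standard-model Dirac moment quoted above is worth making explicit. The snippet below simply evaluates the quoted formula; the 1~eV mass is an illustrative choice, and the $3\times10^{-12}\,\mu_{\rm B}$ comparison value is the globular-cluster bound discussed in this section:

```python
# Standard-model (Dirac-mass-induced) magnetic moment, as quoted in the text:
# mu_nu = 3.20e-19 * (m_nu / eV) in Bohr magnetons.
def dirac_moment_muB(mass_eV):
    return 3.20e-19 * mass_eV

mu = dirac_moment_muB(1.0)   # illustrative 1 eV Dirac neutrino

# Even for a 1 eV mass, the prediction undershoots the ~3e-12 mu_B
# globular-cluster bound by roughly seven orders of magnitude.
ratio = 3e-12 / mu
```

Hence any experimentally or astrophysically interesting moment indeed requires physics well beyond a minimally extended standard model.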
Neutrino electromagnetic form factors which are large enough to be of experimental or astrophysical interest require a more radical extension of the standard model, for example the existence of right-handed currents. \subsection{Astrophysical Limits} Assuming that neutrinos have nonstandard electric or magnetic dipole or transition moments, how large can they be? Astrophysics, not laboratory experiments, provides the most restrictive limits. Dipole or transition moments allow for several interesting processes (Fig.~\ref{fig:processes}). For the purpose of deriving limits, the most important case is $\gamma\to\nu\bar\nu$ which is kinematically possible in a plasma because the photon acquires a dispersion relation which roughly amounts to an effective mass. Even without anomalous couplings, the plasmon decay proceeds because the charged particles of the medium provide an effective neutrino-photon interaction~\cite{Adams63,Zaidi65,Haft94}. Put another way, even standard neutrinos have nonvanishing electromagnetic form factors in a medium~\cite{DOlivo89,Altherr94}. \begin{figure} \hbox to\hsize{\hss\epsfxsize=7cm\epsfbox{fig13.eps}\hss} \caption{Processes with neutrino electromagnetic dipole or transition moments.\label{fig:processes}} \end{figure} The standard plasma process dominates the neutrino production in white dwarfs or the degenerate helium core of globular-cluster red giants. The presence of a direct neutrino-photon coupling by a dipole or transition moment enhances the neutrino losses, delaying the ignition of helium. Observations of globular-cluster stars thus reveal a limit~\cite{Raffelt90,Raffelt92,Castellani93,Catelan96,Raffelt96} \begin{equation}\label{eq:dipolelimit} \mu_\nu\alt3\times10^{-12}\,\mu_{\rm B}, \end{equation} applicable to magnetic and electric dipole and transition moments for Dirac and Majorana neutrinos. Of course, the final-state neutrinos must be lighter than the photon plasma mass of around 10~keV for the relevant conditions. 
A slightly weaker bound obtains from the white-dwarf luminosity function~\cite{Blinnikov94}. Right-handed (sterile) states are produced in electromagnetic spin-flip collisions if neutrinos have Dirac dipole or transition moments. The duration of the SN~1987A neutrino signal precludes excessive cooling by sterile states, yielding a limit on $\mu_\nu({\rm Dirac})$ which is numerically equivalent to Eq.~(\ref{eq:dipolelimit})~\cite{Barbieri88a,Ayala98}. The corresponding laboratory limits are much weaker~\cite{Caso98}. The most restrictive bound is $\mu_{\nu_e}<1.8\times10^{-10}\,\mu_{\rm B}$ at 90\% CL from a measurement of the $\bar\nu_e$-$e$-scattering cross section involving a reactor source. A significant improvement should become possible with the MUNU experiment~\cite{Broggini99}, but it is unlikely that the globular-cluster limit can be reached anytime soon. \begin{figure}[b] \hbox to\hsize{\hss\epsfxsize=7cm\epsfbox{fig14.eps}\hss} \caption{Astrophysical limits on neutrino dipole moments. The light-shaded~\protect\cite{Ressell90} and dark-shaded~\protect\cite{Biller98,Raffelt98} exclusion range is from the absence of excessive cosmic diffuse background photons. The dashed line represents the approximation formula in Eq.~(\protect\ref{eq:munulimits}), bottom line. \label{fig:munu}} \end{figure} A neutrino mass eigenstate $\nu_i$ may decay to another one $\nu_j$ by the emission of a photon, where the only contributing form factors are the magnetic and electric transition moments. 
The inverse radiative lifetime is found to be~\cite{Mohapatra91,Winter91} \begin{eqnarray}\label{eq:radiativedecay} \tau_\gamma^{-1}&=&\frac{|\mu_{ij}|^2+|\epsilon_{ij}|^2}{8\pi} \left(\frac{m_i^2-m_j^2}{m_i}\right)^3\nonumber\\ &=&5.308~{\rm s}^{-1} \left(\frac{\mu_{\rm eff}}{\mu_{\rm B}}\right)^2 \left(\frac{m_i^2-m_j^2}{m_i^2}\right)^3 \left(\frac{m_i}{{\rm eV}}\right)^3, \end{eqnarray} where $\mu_{ij}$ and $\epsilon_{ij}$ are the transition moments while $|\mu_{\rm eff}|^2\equiv|\mu_{ij}|^2+|\epsilon_{ij}|^2$. Radiative neutrino decays have been constrained from the absence of decay photons of reactor $\bar\nu_e$ fluxes~\cite{Oberauer87}, the solar $\nu_e$ flux~\cite{Raffelt85}, and the SN~1987A neutrino burst~\cite{Feilitzsch88,Chupp89,Kolb89,Bludman92,Oberauer93}. For $m_\nu\equiv m_i\gg m_j$ these limits can be expressed as \begin{equation}\label{eq:munulimits} \frac{\mu_{\rm eff}}{\mu_{\rm B}}\;\mathrel{\mathpalette\vereq<}\; \cases{\hbox to1.8cm{$0.9{\times}10^{-1}$\hfil}({\rm eV}/m_\nu)^2 &Reactor ($\bar\nu_e$),\cr \hbox to1.8cm{$0.5{\times}10^{-5}$\hfil}({\rm eV}/m_\nu)^2 &Sun ($\nu_e$),\cr \hbox to1.8cm{$1.5{\times}10^{-8}$\hfil}({\rm eV}/m_\nu)^2 &SN~1987A (all flavors),\cr \hbox to1.8cm{$1.0{\times}10^{-11}$\hfil}({\rm eV}/m_\nu)^{9/4} &Cosmic background (all flavors).\cr} \end{equation} In this form the SN~1987A limit applies for $m_\nu\mathrel{\mathpalette\vereq<} 40~{\rm eV}$. The decay of cosmic background neutrinos would contribute to the diffuse photon backgrounds, excluding the shaded areas in Fig.~\ref{fig:munu}. They are approximately delineated by the dashed line, corresponding to the analytic expression in Eq.~(\ref{eq:munulimits}). More restrictive limits obtain for certain masses above 3~eV from the absence of emission features from several galaxy clusters~\cite{Henry81,Davidsen91,Bershady91}. 
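To see the scales involved, one can evaluate Eq.~(\ref{eq:radiativedecay}) directly. The sketch below takes the globular-cluster bound of Eq.~(\ref{eq:dipolelimit}) for $\mu_{\rm eff}$ and an illustrative mass pattern $m_i=1~{\rm eV}$, $m_j\approx0$; these inputs are assumptions for the sake of the estimate:

```python
# Radiative lifetime from Eq. (radiativedecay):
# 1/tau = 5.308 s^-1 * (mu_eff/mu_B)^2 * ((m_i^2 - m_j^2)/m_i^2)^3 * (m_i/eV)^3
def radiative_lifetime_s(mu_eff_muB, m_i_eV, m_j_eV=0.0):
    phase = ((m_i_eV**2 - m_j_eV**2) / m_i_eV**2) ** 3
    rate = 5.308 * mu_eff_muB**2 * phase * m_i_eV**3   # decays per second
    return 1.0 / rate

# mu_eff at the globular-cluster bound, m_i = 1 eV, m_j ~ 0 (illustrative):
tau = radiative_lifetime_s(3e-12, 1.0)

AGE_OF_UNIVERSE_S = 4.3e17
# tau ~ 2e22 s, several orders of magnitude beyond the age of the universe.
```

A lifetime of order $10^{22}~{\rm s}$ anticipates the conclusion drawn next: for eV-scale masses the allowed radiative decays are far too slow to be observable.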
For low-mass neutrinos the $m_\nu^3$ phase-space factor in Eq.~(\ref{eq:radiativedecay}) is so punishing that the globular-cluster limit is the most restrictive one for $m_\nu$ below a few eV, i.e.\ in the mass range which today appears favored from neutrino oscillation experiments. Turning this around, if neutrino mass differences are indeed as small as currently believed, the globular-cluster limit implies that radiative neutrino decays do not have observable consequences. \subsection{Spin and Spin-Flavor Precession} Neutrinos with magnetic or electric dipole moments spin-precess in external magnetic fields~\cite{Fujikawa80,Okun86a}, an effect which may have a number of astrophysical consequences for $\mu_\nu$-values below the globular-cluster limit of Eq.~(\ref{eq:dipolelimit}). For example, solar neutrinos can precess into sterile and thus undetectable states in the Sun's magnetic field~\cite{Werntz70,Cisneros71,Voloshin86a}. The same applies to SN neutrinos in the galactic magnetic field, where an important effect obtains for $\mu_\nu\agt10^{-12}\,\mu_{\rm B}$. Moreover, the high-energy sterile states emitted by spin-flip collisions from the inner SN core could precess back into active ones and cause events with anomalously high energies in SN neutrino detectors, an effect which probably requires $\mu_\nu({\rm Dirac})\alt10^{-12}\,\mu_{\rm B}$ from the SN~1987A signal~\cite{Barbieri88a,Notzold88}. For the same $\mu_\nu$-range one may expect an anomalous rate of energy transfer to the shock wave in a SN, helping with the explosion \cite{Dar87c,Nussinov87a,Goldman88,Voloshin88,Okun88a,Blinnikov88}. The refractive energy shift in a medium for active neutrinos relative to sterile ones creates a barrier to spin precessions~\cite{Voloshin86b}. The neutrino mass difference has the same effect if the precession is between different flavors through a transition moment~\cite{Schechter81}. Combining the effects one arrives at spin-flavor precession in a medium. 
The mass difference and the refractive term can cancel, leading to resonant oscillations in the spirit of the MSW effect~\cite{Akhmedov88a,Akhmedov88b,Barbieri88b,Lim88}. Large magnetic fields exist in SN cores so that spin-flavor precession could play an important role, with possible consequences for the explosion mechanism, r-process nucleosynthesis, or the measurable neutrino signal~\cite{Athar95,Totani96,Akhmedov97,Bruggen97,Nunokawa97b}. The downside of this richness of phenomena is that there are so many unknown parameters (electromagnetic neutrino properties, masses, mixing angles), as well as an unknown magnetic field strength and distribution, that it is difficult to come up with reliable limits or requirements on neutrino properties. The SN phenomenon is probably too complicated to serve as a laboratory to pin down electromagnetic neutrino properties, but it clearly is an environment where these properties could have far-reaching consequences. Resonant spin-flavor precessions can explain all solar neutrino data~\cite{Akhmedov95,Guzzo98}, but require somewhat large toroidal magnetic fields in the Sun since the neutrino magnetic (transition) moments have to obey the globular-cluster limit of Eq.~(\ref{eq:dipolelimit}). The main original motivation for magnetically induced oscillations was an apparent correlation between the Homestake solar neutrino data and indicators of solar magnetic activity. Very recent re-analyses reveal that there is no significant correlation with sunspots~\cite{Walther97}, but also that the hypothesis of a constant flux should be rejected with a significance level of 0.1--6\%, depending on the test~\cite{Sturrock97}. For Majorana neutrinos, the spin-flavor precession amounts to transitions between neutrinos and antineutrinos. The observation of antineutrinos from the Sun would be a diagnostic for this effect~\cite{Barbieri91,Fiorentini97b,Pastor98}, and probably the only convincing one. 
\section{Conclusions} \label{sec:conclusions} As it stands, the most titillating question of neutrino physics is no longer whether these elusive particles have masses at all, but rather whether a fourth, hitherto unsuspected and otherwise noninteracting degree of freedom exists to reconcile all current indications for neutrino oscillations. If, shockingly, this were the case, the mass differences suggested by LSND would imply that neutrinos are significant as a hot dark matter component, corresponding to an eV-mass for one or two flavors, which is what nowadays one means with a ``cosmologically significant neutrino mass.'' Sterile neutrinos and a cosmological hot dark matter component have become closely intertwined issues. Oscillation experiments reveal only mass differences, leaving a common offset from zero undetermined. Even if LSND is right, the common mass scale may exceed the indicated mass difference, and if LSND is wrong and sterile neutrinos do not exist, the sequential neutrinos could still have nearly degenerate eV-masses and play a role for hot dark matter. Fixing the common mass scale may soon become the major challenge of neutrino physics. There are few realistic opportunities to achieve this goal. While neutrinoless $\beta\beta$ decay experiments and precise tritium endpoint $\beta$-spectra remain crucial, cosmology likely will play a key role for this task. The cosmological precision information expected from the MAP and PLANCK microwave background missions and from large-scale redshift surveys is in principle sensitive to sub-0.1~eV masses. Whether or not they will actually pin down such a small mass remains to be seen, but surely they cannot ignore it, as it is one of about a dozen nontrivial cosmological parameters which are not fixed by other data. A direct kinematical mass limit from signal dispersion of a future galactic supernova could get down to about 3~eV for $\nu_e$, probably not good enough for the questions at hand. 
If high-energy neutrinos from pulsed sources such as gamma-ray bursts are observed in upcoming neutrino telescopes, one may get down to much smaller masses. The atmospheric neutrino anomaly requires a large mixing angle, suggesting that all mixing angles in the neutrino sector could be large, in stark contrast to what is observed in the quark sector. A large mixing angle between $\nu_e$ and other flavors radically changes the interpretation of the SN~1987A neutrino signal and that from a future galactic SN. Therefore, it is of paramount importance to develop a better theoretical understanding of the neutrino spectra formation in SNe to see if swapping flavors by oscillations indeed has significant and observable effects. Apart from this important issue it does not look as if neutrino oscillations have much to do with SN physics itself, i.e.\ with the explosion mechanism, pulsar kicks, or r-process nucleosynthesis, except perhaps if sterile neutrinos exist. The large mixing angle implied by atmospheric neutrinos definitely means that the neutrinos from ``cosmic beam dumps'' have a modified flavor spectrum, presumably containing a large fraction of $\nu_\tau$'s, which produce unique signatures in high-energy neutrino telescopes. Neutrino physics and neutrino astrophysics are at a crossroads. On the one hand, it is now almost impossible to deny that neutrinos oscillate and thus presumably have small masses. On the other hand, unless a sterile neutrino truly exists, there is a sense that neutrino masses are too small to be of very much cosmological or astrophysical interest. Neutrino astrophysics could turn out to be more interesting than one would have originally suspected, or more boring, depending on whether sterile states exist or not. Either way, it may not be long until the neutrino mass and mixing pattern has been reconstructed. The main beneficiary may be neutrino astronomy. 
As we better understand the behavior of the neutrino beam from distant sources, neutrino astronomy will return to its roots and focus on the physics of the sources rather than worrying about the behavior of the radiation. It may not be long until flavor oscillations in neutrino astronomy are as commonplace a phenomenon as the Faraday effect in radio astronomy! \section*{Acknowledgments} This work was supported, in part, by the Deutsche Forschungsgemeinschaft under grant No.\ SFB-375. \newpage \section*{References} \frenchspacing
\section{#1}\setcounter{equation}{0}} \makeatletter \let\old@startsection=\@startsection \let\oldl@section=\l@section \renewcommand{\@startsection}[6]{\old@startsection{#1}{#2}{#3}{#4}{#5}{#6\mathversion{bold}}} \renewcommand{\l@section}[2]{\oldl@section{\mathversion{bold}#1}{#2}} \makeatother \numberwithin{equation}{section} \newcommand{\mathcal{J}}{\mathcal{J}} \def{1\over \mathcalJ}{{1\over \mathcal{J}}} \def\tilde\lambda{\tilde\lambda} \def \mathcal{D} {\pta} \def \theta {\theta} \def \tau {\tau} \def \rho {\rho} \def {\rm N} {{\rm N}} \def{\tilde w}{{\tilde w}} \def{K}{{K}} \def{w}{{w}} \def{\lambda}{{\lambda}} \def{\theta}{{\theta}} \def { \bar w} {{ \bar w}} \def {\vec n} {{\vec n}} \def \mathcal{O} {{\mathcal O}} \def \mu {\mu} \def \vec \sigma {\vec \sigma} \def i.e. {i.e.} \def \tilde {\tilde} \newcommand{\indup}[1]{_{\mathrm{#1}}} \newcommand{\indups}[1]{_{\mathrm{\scriptscriptstyle #1}}} \newcommand{\supup}[1]{^{\mathrm{#1}}} \newcommand{\rep}[1]{{\mathbf{#1}}} \newcommand{\matr}[2]{\left(\begin{array}{#1}#2\end{array}\right)} \newcommand{\alg}[1]{\mathfrak{#1}} \newcommand{\grp}[1]{\mathrm{#1}} \newcommand{\grp{SU}}{\grp{SU}} \newcommand{\grp{U}}{\grp{U}} \newcommand{\grp{SO}}{\grp{SO}} \newcommand{\grp{SL}}{\grp{SL}} \newcommand{\grp{PSU}}{\grp{PSU}} \newcommand{\alg{su}}{\alg{su}} \newcommand{\alg{so}}{\alg{so}} \newcommand{\alg{osp}}{\alg{osp}} \newcommand{\alg{sl}}{\alg{sl}} \newcommand{\alg{psu}}{\alg{psu}} \newcommand{\alg{u}}{\alg{u}} \newcommand{\mathrm{w}}{\mathrm{w}} \newcommand{\frac{1}{4}}{\frac{1}{4}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\phi}{\partial} \newcommand{\nu}{\nu} \newcommand{\footnote}{\footnote} \usepackage{color} \newcommand{\colb}[1]{\textcolor{blue}{#1}} \newcommand{\colr}[1]{\textcolor{red}{#1}} \newcommand{\colg}[1]{\textcolor{green}{#1}} \def\VF#1{{\color [rgb]{0,0.6,0} [VF: #1]}} \def\VGMP#1{{\color [rgb]{0.6,0.0,0.6} [VGMP: #1]}} \def\EV#1{{\color [rgb]{0.9,0,0} [EV: #1]}} \def {\mathbb{E}} {{\mathbb{E}}} 
\def {\mathbb{K}} {{\mathbb{K}}} \def\nonumber{\nonumber} \def\varphi{\varphi} \def\mathcal{S}{\mathcal{S}} \def\rho{\rho} \def\alpha{\alpha} \def\beta{\beta} \def\kappa{\kappa} \def\mathcal{E}{\mathcal{E}} \DeclareMathOperator{{\rm sn}}{sn} \DeclareMathOperator{{\rm cn}}{cn} \DeclareMathOperator{{\rm dn}}{dn} \def\sigma{\sigma} \def\rho{\rho} \def\mathcal{O}{\mathcal{O}} \def\mathcal{D}{\mathcal{D}} \def\omega{\omega} \def Lam\'e\ {Lam\'e\ } \def \Omega {\Omega } \def\tilde{\tilde} \def \over {\over} \def\label{\label} \def \cite {\cite} \setcounter{tocdepth}{2} \begin{document} \renewcommand{\thefootnote}{\arabic{footnote}} \overfullrule=0pt \parskip=2pt \parindent=12pt \headheight=0in \headsep=0in \topmargin=0in \oddsidemargin=0in \vspace{ -3cm} \thispagestyle{empty} \vspace{-1cm} \begin{flushright} \footnotesize HU-EP-15/58\\ \end{flushright}% \begin{center} \vspace{1.2cm} {\Large\bf \mathversion{bold} Precision calculation of 1/4-BPS Wilson loops in AdS$_5\times \textup{\textrm{S}}^5$ } \vspace{0.8cm} { V.~Forini$^{a,}$\footnote{ {\tt $\{$valentina.forini,edoardo.vescovi$\}$@\,physik.hu-berlin.de}}, V.~Giangreco M. Puletti$^{b,}$\footnote{ {\tt [email protected]}}, L.~Griguolo$^{c,}$\footnote{ {\tt [email protected]}}, D.~Seminara$^{d,}$\footnote{ {\tt [email protected]}}, E.~Vescovi$^{a,1}$} \vskip 0.5cm \small {\em $^{a}$Institut f\"ur Physik, Humboldt-Universit\"at zu Berlin, IRIS Adlershof, \\Zum Gro\ss en Windkanal 6, 12489 Berlin, Germany \vskip 0.05cm $^{b}$ University of Iceland, Science Institute, Dunhaga 3, 107 Reykjavik, Iceland \vskip 0.05cm $^{c}$ Dipartimento di Fisica e Scienze della Terra, Universit\'a di Parma and INFN Gruppo Collegato di Parma, Viale G.P. Usberti 7/A, 43100 Parma, Italy \vskip 0.05cm $^{d}$ Dipartimento di Fisica, Universit\'a di Firenze and INFN Sezione di Firenze, Via G. 
Sansone 1, 50019 Sesto Fiorentino, Italy } \normalsize \end{center} \vspace{0.3cm} \begin{abstract} \noindent We study the strong coupling behaviour of $1/4$-BPS circular Wilson loops (a family of ``latitudes'') in ${\cal N}=4$ Super Yang-Mills theory, computing the one-loop corrections to the relevant classical string solutions in AdS$_5\times$S$^5$. Supersymmetric localization provides an exact result that, in the large 't Hooft coupling limit, should be reproduced by the sigma-model approach. To avoid ambiguities due to the absolute normalization of the string partition function, we compare the \emph{ratio} between the generic latitude and the maximal 1/2-BPS circle: Any measure-related ambiguity should simply cancel in this way. We use the Gel'fand-Yaglom method with Dirichlet boundary conditions to calculate the relevant functional determinants, which present some complications with respect to the standard circular case. After a careful numerical evaluation of our final expression we still find disagreement with the localization answer: The difference is encoded into a precise ``remainder function''. We comment on the possible origin and resolution of this discordance. \end{abstract} \newpage \tableofcontents \newpage \section{Introduction and main result} \label{sec:intro} Establishing the harmony between exact QFT results obtained through the localization procedure for BPS-protected Wilson loops in $\mathcal{N}=4$ SYM and their stringy counterparts is a thorny issue beyond the supergravity approximation. For the $1/2$-BPS circular Wilson loop~\cite{Berenstein:1998ij,Drukker:1999zq}, in the fundamental representation, supersymmetric localization~\cite{Pestun:2007rz} in the gauge theory confirms the all-loop prediction based on a large $N$ resummation of ladder Feynman diagrams~\cite{Erickson:2000af}, generalized to finite $N$ in~\cite{Drukker:2000rr}. 
On the string theory side, this should equate the disc partition function for the $\textup{\textrm{AdS}}_5\times \textup{\textrm{S}}^5$ superstring. Its one-loop contribution, encoding fluctuations above the classical solution, has been formally written down in~\cite{Drukker:2000ep}, explicitly evaluated in~\cite{Kruczenski:2008zk}~\footnote{See also~\cite{Sakaguchi:2007ea}.} using the Gel'fand-Yaglom method, reconsidered in~\cite{Kristjansen:2012nz} with a different choice of boundary conditions and reproduced in~\cite{Buchbinder:2014nia}~\footnote{See Appendix B in~\cite{Buchbinder:2014nia}.} with the heat-kernel technique. No agreement was found with the subleading correction in the strong coupling ($\lambda\gg1$) expansion of the gauge theory result in the planar limit \begin{equation}\label{circlevev} \log\langle\mathcal{W}\left(\lambda,\theta_0=0\right)\rangle=\log{\textstyle\frac{2}{\sqrt{\lambda}}}I_1(\sqrt{\lambda}) =\sqrt{\lambda}-\frac{3}{4}\,\log\lambda+\frac{1}{2}\,\log\frac{2}{\pi}+\mathcal{O}(\lambda^{-\frac{1}{2}})~, \end{equation} where $I_1$ is the modified Bessel function of the first kind, the meaning of the parameter $\theta_0$ is clarified below, and the term proportional to $\log\lambda$ in \eqref{circlevev} is argued to originate from the $\grp{SL}(2,\mathbb{R})$ ghost zero modes on the disc~\cite{Drukker:2000rr}. The discrepancy occurs in the $\lambda$-independent part above~\footnote{See formula \eqref{kructirziu} below.}, originating from the one-loop effective action contribution \emph{and} an unknown, overall numerical factor in the measure of the partition function. The situation becomes even worse when considering a loop winding $n$ times around itself \cite{Kruczenski:2008zk,Bergamin:2015vxa}, where even the functional dependence on $n$ is not reproduced by the one-loop string computation. 
The case of different group representations has also been considered: For the $k$-symmetric and $k$-antisymmetric representations, whose gravitational description is given in terms of D3- and D5-branes, respectively, the first stringy correction again does not match the localization result \cite{Faraggi:2014tna}. Interestingly, the Bremsstrahlung function of ${\cal N}=4$ SYM, derived in \cite{Correa:2012at} again using a localization procedure, is instead correctly reproduced \cite{Drukker:2011za} through a one-loop computation around the classical cusp solution \cite{Drukker:1999zq,Drukker:2007qr}. Localization has proven to be one of the most powerful tools for obtaining non-perturbative results in quantum supersymmetric gauge theories~\cite{Pestun:2007rz}: An impressive number of new exact results have been derived in different dimensions, mainly when the theories are formulated on spheres or products thereof~\cite{Pestun:2007rz, Kapustin:2009kz}. In order to gain further intuition on the relation between localization and sigma-model perturbation theory in different and more general settings, we re-examine this issue, addressing the problem of how to eliminate the ambiguity related to the partition-function measure as follows. We consider the string dual to a non-maximal circular Wilson loop - the family of 1/4-BPS operators with path corresponding to a latitude in $\textup{\textrm{S}}^2\subset \textup{\textrm{S}}^5$ parameterized by an angle $\theta_0$ and studied at length in~\cite{Drukker:2005cu,Drukker:2006ga,Drukker:2007qr} - and evaluate the corresponding string one-loop path integral. We then calculate the \emph{ratio} between the latter and the corresponding one representing the maximal circle - the case $\theta_0=0$ in \eqref{circlevev}. 
Our underlying assumption is that the measure is actually independent of the geometry of the worldsheet associated to the Wilson loop~\footnote{ As for the \emph{topological} contribution of the measure, its relevance in canceling the divergences occurring in evaluating quantum corrections to the string partition function was first discussed in~\cite{Drukker:2000ep} after the observations of~\cite{Forste:1999qn,Forste:1999cp}. We use this general argument below, see discussion around \eqref{Eulervolume}. }, and therefore in such a ratio measure-related ambiguities should simply cancel. It appears non-trivial to actually prove the background independence of the measure, whose diffeo-invariant definition in fact explicitly includes the worldsheet fields~\footnote{See for example the discussion in~\cite{Roiban:2007jf}.}. Our assumption -- also suggested in~\cite{Kruczenski:2008zk} -- seems however a reasonable one, especially in light of the absence of zero modes in the classical solutions considered here~\footnote{In the presence of zero modes, a possible dependence of the path-integral measure on the classical solution comes from the integration over the associated collective coordinates. In this framework, see the discussion in \cite{Zarembo:2002an}.} and of the explicit example of (the string dual to) the ratio of a cusped Wilson loop with a straight line~\cite{Drukker:2011za}, where a perfect agreement exists between sigma-model perturbation theory and localization/integrability results~\cite{Correa:2012at}~\footnote{See also~\cite{Forini:2010ek}, which analyzes the (string dual to the) ratio between the Wilson loop of ``antiparallel lines'' and the straight line.}. The family of 1/4-BPS latitude Wilson loops falls under the more general class of 1/8-BPS Wilson loops with arbitrary shape on a two-sphere introduced in~\cite{Drukker:2007dw,Drukker:2007yx,Drukker:2007qr} and studied in~\cite{Pestun:2009nn}.
There is strong evidence that they localize to Yang-Mills theory on $\textup{\textrm{S}}^2$ in the zero-instanton sector \cite{Pestun:2009nn,Young:2008ed,Bassetto:2008yf,Drukker:2007qr} and their vacuum expectation values are therefore related to the 1/2-BPS one by a simple rescaling. As originally argued in~\cite{Drukker:2006ga}, the expectation value of such latitude Wilson loops is obtained from the one of the maximal circle provided one replaces $\lambda$ with an effective 't Hooft coupling $\lambda'=\lambda \cos^2\theta_0$. The ratio of interest then follows easily \begin{gather}\label{mainratio} \frac{\langle\mathcal{W}\left(\lambda,\theta_0\right)\rangle}{\langle\mathcal{W}\left(\lambda,0\right)\rangle}\biggr\rvert_{\rm loc}=e^{\sqrt{\lambda}\left(\cos\theta_0-1\right)}\left[(\cos\theta_0)^{-\frac{3}{2}}+\mathcal{O} (\lambda^{-\frac{1}{2}} )\right] +\mathcal{O}\left(e^{-\sqrt{\lambda}}\right)\,, \end{gather} where in the large $\lambda$ expansion only the dominant exponential contribution is kept (and $\rm loc$ stands for ``localization''). In terms of string one-loop effective actions $\Gamma=-\log Z\equiv -\log\langle W\rangle$, this leads to the prediction \begin{gather}\label{mainratiolog} \log\frac{\langle\mathcal{W}\left(\lambda,\theta_0\right)\rangle}{\langle\mathcal{W}\left(\lambda,0\right)\rangle}\biggr\rvert_{\rm loc}=\left[ \Gamma(\theta_0=0)-\Gamma(\theta_0)\right]_{\textrm{loc}}=\sqrt{\lambda}\,(\cos\theta_0-1 )-\frac{3}{2}\log\cos\theta_0+\mathcal{O} (\lambda^{-\frac{1}{2}} ) ~, \end{gather} where the leading term comes from the regularized minimal-area surface of the strings dual to these Wilson loops, while the semiclassical string fluctuations in the string sigma-model account for the subleading correction. As usual, the one-loop contribution derives from the evaluation of ratios of functional determinants in the quadratic expansion of the type IIB Green-Schwarz action about the string classical background.
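The rescaling rule behind \eqref{mainratio} is easy to test numerically. The sketch below (an illustration of the effective-coupling replacement $\lambda'=\lambda\cos^2\theta_0$; the sample values of $\lambda$ and $\theta_0$ are arbitrary) evaluates the exact planar ratio from the Bessel representation in \eqref{circlevev} and compares it with the right-hand side of \eqref{mainratiolog}:

```python
# Illustrative check of (mainratiolog) via lambda' = lambda cos^2(theta_0);
# the sample values lambda = 1600, theta_0 = pi/6 are arbitrary choices.
import math

def logW(lam, terms=100):
    # log <W> = log[(2/sqrt(lam)) I_1(sqrt(lam))], I_1 from its power series
    x = math.sqrt(lam)
    t = x / 2.0
    s = t
    for k in range(terms):
        t *= (x / 2.0) ** 2 / ((k + 1) * (k + 2))
        s += t
    return math.log(2.0 / x * s)

lam, t0 = 1600.0, math.pi / 6.0
exact_ratio = logW(lam * math.cos(t0) ** 2) - logW(lam)
predicted = math.sqrt(lam) * (math.cos(t0) - 1.0) - 1.5 * math.log(math.cos(t0))
print(exact_ratio, predicted)  # agree up to O(lambda^{-1/2})
```

The two values differ only by the $\mathcal{O}(\lambda^{-1/2})$ terms dropped in \eqref{mainratiolog}.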
The axial symmetry of the worldsheet surface reduces these two-dimensional spectral problems to infinitely many one-dimensional spectral problems. To solve them, we use the Gel'fand-Yaglom method originally developed in \cite{Gelfand:1959nq} and later improved in a series of papers \cite{Forman1987, Forman1992, McKane:1995vp, Kirsten:2003py, Kirsten:2004qv, Kirsten:2007ev}. A concise review of this technique is presented in Appendix \ref{app:gelfand_yaglom}. Unlike other procedures ({\it e.g.} the heat-kernel method \cite{Bergamin:2015vxa}), this method of regularizing determinants effectively introduces a fictitious boundary for the worldsheet surface, besides the expected conformal one. We then proceed with the analytical computation of the functional determinants by imposing Dirichlet boundary conditions on the bosonic and fermionic fluctuation fields at the conformal (AdS boundary) and fictitious boundaries, whose contribution effectively vanishes in the chosen regularization scheme \cite{Frolov:2004bh, Dekel:2013kwa}. We emphasise that this procedure differs from the one employed in \cite{Kruczenski:2008zk}, since the non-diagonal matrix structure of the fermionic-fluctuation operator for arbitrary $\theta_0$ prevents us from factorizing the fermionic determinants into a product of two contributions. In the $\theta_0\to 0 $ limit, we analytically recover the constant one-loop coefficient in the expansion of the 1/2-BPS circular Wilson loop as found in \cite{Kruczenski:2008zk,Buchbinder:2014nia} \begin{equation}\label{kructirziu} \log\langle\mathcal{W}\left(\lambda,\theta_0=0\right)\rangle =\sqrt{\lambda}-\frac{3}{4}\,\log (\lambda)+\log c+\frac{1}{2}\,\log\frac{1}{2\pi}+\mathcal{O}(\lambda^{-\frac{1}{2}})~, \end{equation} up to an unknown contribution of ghost zero-modes (the constant $c$). The expression above is in disagreement with the gauge theory prediction \eqref{circlevev}.
We regularize and normalize the latitude Wilson loop with respect to the circular case. The summation of the one-dimensional Gel'fand-Yaglom determinants is quite difficult, due to the appearance of some Lerch-type special functions, and we were not able to obtain a direct analytic result. We therefore resort to a numerical approach. Our analysis shows that the disagreement between sigma-model and localization results \eqref{mainratiolog} is not washed out. Within the attained numerical accuracy, we claim that the resulting $\theta_0$-dependent discrepancy is very well quantified as \begin{eqnarray} \log\frac{\langle\mathcal{W}\left(\lambda,\theta_0\right)\rangle}{\langle\mathcal{W}\left(\lambda,0\right)\rangle}\biggr\rvert_{\rm sm} = \sqrt{\lambda}\,(\cos\theta_0-1 )-\frac{3}{2}\log\cos\theta_0+\log \cos\frac{\theta_0}{2} +\mathcal{O} (\lambda^{-\frac{1}{2}} ) ~, \end{eqnarray} suggesting that the ``remainder function'' should be \begin{equation} {\rm Rem }(\theta_0)=\log \cos\frac{\theta_0}{2}. \end{equation} \bigskip Before proceeding with the numerical analysis, a series of non-trivial steps has been performed, and the final expression appears as the result of precise cancellations. As already remarked, the fermionic determinants do not trivially factorize, and consequently we have to solve a coupled Schr\"{o}dinger system in order to apply the Gel'fand-Yaglom method. It turns out that the decoupling of the fictitious boundary relies on delicate compensations between bosonic and fermionic contributions, involving different terms of the sum. As a matter of fact, after removing the infrared regulator we obtain the correct ultraviolet divergences from the resulting effective actions. These are subtracted from the final ratio in order to obtain a well-behaved sum, amenable to numerical treatment. Unfortunately the final result is inconsistent with the QFT analysis, opening the possibility that something subtle is missing in our procedure.
On the other hand, we think that our investigation elucidates several points, at least in the standard setup for solving the spectral problem, and should thus be helpful for further developments. We will comment on the possible origin of the discrepancy at the end of the manuscript. \bigskip The paper proceeds as follows. In Section \ref{sec:classical} we recall the classical setting; in Section \ref{sec:determinants} we evaluate the relevant functional determinants, which we collect in Section \ref{sec:partitionfunctions} to form the corresponding partition functions. Section \ref{sec:conclusions} contains concluding remarks on the disagreement with the localization result and its desirable explanation. After a comment on notation in Appendix \ref{app:notation}, we devote Appendix \ref{app:gelfand_yaglom} to a concise survey of the Gel'fand-Yaglom method. Appendix \ref{app:detailsferm} elucidates some properties which simplify the evaluation of the fermionic contribution to the partition function, while in Appendix \ref{bclower} we comment on a possible different choice of boundary conditions for the lower Fourier modes, which does not affect our results. \section{Classical string solutions dual to latitude Wilson loops} \label{sec:classical} The classical string surface describing the strong coupling regime of the $1/4$-BPS {\it latitude} was first found in~\cite{Drukker:2005cu} and discussed in detail in \cite{Drukker:2006ga,Drukker:2007qr}.
Endowing the $\textup{\textrm{AdS}}_5\times \textup{\textrm{S}}^5$ space with a Lorentzian metric in global coordinates \begin{eqnarray} ds^2_{\textrm{10D}} & = & -\cosh^2\rho dt^2+d\rho^2+\sinh^2\rho \left( d\chi^2 +\cos^2\chi d\psi^2+\sin^2\chi d\varphi_1^2 \right)\nonumber\\ && +d\theta^2+\sin^2\theta d\phi^2+\cos^2\theta \left(d\vartheta_1^2+\sin^2\vartheta_1\left(d\vartheta_2^2+\sin^2\vartheta_2 d\varphi_2^2\right) \right)\,, \label{metric_old} \end{eqnarray} with the AdS radius set to 1, the corresponding classical configuration in $\textup{\textrm{AdS}}_3\times \textup{\textrm{S}}^2$ \begin{equation} \label{background_old} \begin{split} t&=0, \qquad \rho=\rho(\sigma), \qquad \chi=0, \qquad \psi=\tau, \qquad \qquad \varphi_1=\text{const}, \\ \theta&=\theta(\sigma), \qquad \phi=\tau, \qquad \vartheta_1=0, \qquad \vartheta_2=\text{const}, \qquad \varphi_2=\text{const}, \end{split} \end{equation} parametrizes a string worldsheet, ending on a unit circle at the boundary of $\textup{\textrm{AdS}}_5$ and on a latitude sitting at polar angle $\theta_0$ on a two-sphere inside the compact space \footnote{There exist other solutions with more wrapping in $\textup{\textrm{S}}^5$, but they are not supersymmetric~\cite{Drukker:2006ga}.}. Here the polar angle $\theta$ spans the interval $[-\frac{\pi}{2},\frac{\pi}{2}].$ The worldsheet coordinates instead take values in the range $\tau\in [0,2\pi)$ and $\sigma\in [0,\infty)$. \\ The ansatz \eqref{background_old} does not propagate along the time direction and defines a Euclidean surface embedded in a Lorentzian target space. It satisfies the equations of motion (supplemented by the Virasoro constraints in the Polyakov formulation) when we set \begin{equation} \label{rho_and_theta} \begin{split} & \sinh\rho(\sigma)=\frac{1}{\sinh\sigma},\qquad \qquad~ \cosh\rho(\sigma)=\frac{1}{\tanh\sigma}, \\ & \sin\theta(\sigma)=\frac{1}{\cosh\left(\sigma_0\pm\sigma\right)}, \qquad \cos\theta(\sigma)=\tanh\left(\sigma_0\pm\sigma\right).
\end{split} \end{equation} An integration constant in \eqref{rho_and_theta} that shifts $\sigma$ was chosen to be zero so that the worldsheet boundary at $\sigma=0$ is located at the boundary of $\textup{\textrm{AdS}}_5$. The remaining one, $\sigma_0\in[0,\infty)$, spans the one-parameter family of latitudes on $\textup{\textrm{S}}^5$ at the boundary $\sigma=0$, whose angular position $\theta_0\in[0,\frac{\pi}{2}]$ relates to $\sigma_0$ through \begin{gather} \label{theta_0_and_sigma_0} \cos\theta_0=\tanh\sigma_0. \end{gather} Here the dual gauge theory operator interpolates between two notable cases. The 1/2-BPS circular case falls under this class of Wilson loops when the latitude in $\textup{\textrm{S}}^2$ shrinks to a point for $\theta_0=0$, which implies $\theta(\sigma)=0$ and $\sigma_0=+\infty$ from (\ref{rho_and_theta})-(\ref{theta_0_and_sigma_0}). In this case the string propagates only in $\textup{\textrm{AdS}}_3$. The other case is the circular 1/4-BPS Zarembo Wilson loop when the worldsheet extends over a maximal circle of $\textup{\textrm{S}}^2$ for $\theta_0=\frac{\pi}{2}$ and $\sigma_0=0$ \cite{Zarembo:2002an}~\footnote{ See also~\cite{Miwa:2015bta} for an analysis of the contribution to the string partition function due to (broken) zero modes of the solution in~\cite{Zarembo:2002an}.}. The double sign in \eqref{rho_and_theta} accounts for the existence of two solutions, effectively doubling the range of $\theta_0$: The stable (unstable) configuration minimizes (maximizes) the action functional and wraps the north pole $\theta=0$ (south pole $\theta=\pi$) of $\textup{\textrm{S}}^5$.
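One can verify that the upper-sign profile in \eqref{rho_and_theta} obeys the first-order equations $\rho'(\sigma)=-\sinh\rho$ and $\theta'(\sigma)=-\sin\theta$, consistent with the second-order equations of motion for this ansatz. A finite-difference sketch (our own consistency check; the value of $\sigma_0$ and the sample points are arbitrary):

```python
# Consistency check (ours): the upper-sign profile in (rho_and_theta) obeys
# rho' = -sinh(rho) and theta' = -sin(theta); sample points are arbitrary.
import math

sigma0 = 0.8                      # modulus, cos(theta_0) = tanh(sigma_0)
h = 1e-5                          # finite-difference step

def rho(s):
    return math.asinh(1.0 / math.sinh(s))

def theta(s):
    return math.asin(1.0 / math.cosh(sigma0 + s))

res = []
for s in (0.5, 1.0, 2.0):
    drho = (rho(s + h) - rho(s - h)) / (2 * h)
    dth = (theta(s + h) - theta(s - h)) / (2 * h)
    res.append((drho + math.sinh(rho(s)), dth + math.sin(theta(s))))
print(res)  # all entries numerically zero
```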
\\ The semiclassical analysis is more conveniently carried out in the stereographic coordinates $\upsilon^m$ ($m=1,2,3$) of $\textup{\textrm{S}}^3\subset \textup{\textrm{AdS}}_5$ and $w^n$ ($n=1,2,3,4,5$) of $\textup{\textrm{S}}^5$ \begin{gather} \label{metric_new} ds^2_{\textrm{10D}} = -\cosh^2\rho dt^2+d\rho^2+\sinh^2\rho\, \frac{d\upsilon_m d\upsilon_m}{ (1+\frac{\upsilon^2}{4} )^2} +\frac{d w_n d w_n}{ (1+\frac{w^2}{4} )^2}\,,\\ \upsilon^2= \upsilon_m \upsilon_m \qquad w^2= w_n w_n \end{gather} where the classical solution reads~\footnote{The background of $\varphi_1,\varphi_2,\vartheta_2$ was set to zero in \eqref{background_old}, but the bosonic quadratic Lagrangian does not have the standard form (kinetic and mass terms for the eight physical fields) in the initial angular coordinates.} \begin{equation} \label{background_new} \begin{split} t&=0, \qquad\quad~~~ \rho=\rho(\sigma), \qquad ~~~ \upsilon_1=2\sin\tau, \qquad~~~ \upsilon_2=2\cos\tau, \qquad~~~ \upsilon_3=0\,,\\ w_1&=w_2=0, \qquad w_3=2\cos\theta(\sigma), \qquad w_4=2\sin\theta(\sigma)\sin\tau, \qquad w_5=2\sin\theta(\sigma)\cos\tau\, . \end{split} \end{equation} The induced metric on the worldsheet depends on the latitude angle $\theta_0$ through the conformal factor ($\sigma^i=(\tau,\sigma)$)\begin{gather}\label{induced_metric} ds^2_\textrm{2D} =h_{ij}d\sigma^i d\sigma^j =\Omega^2(\sigma) \left(d\tau^2+d\sigma^2\right)\,,\qquad \Omega^2(\sigma)\equiv\sinh^2\rho(\sigma)+\sin^2\theta(\sigma)\, . 
\end{gather} The two-dimensional Ricci curvature is then \begin{align}\label{R2} ^{(2)}\!\!\, R =&-\frac{2\,\partial_\sigma^2\log\Omega(\sigma)}{\Omega^2(\sigma)}=\\ =&\mbox{\small $\displaystyle -\frac{ \left(2 \cosh 2 \sigma _0\pm 2 \sinh \sigma _0 \sinh \left(6 \sigma\pm3 \sigma _0 \right)-3 \cosh \left(2 \left(\sigma\pm \sigma _0 \right)\right)+6 \cosh \left(4 \sigma\pm 2 \sigma _0 \right)+3 \cosh 2 \sigma \right)}{4{\cosh}\left( \sigma _0\right) {\cosh}^3\left(2 \sigma\pm\sigma_0 \right)}$}\, .\nonumber \end{align} The string dynamics is governed by the type IIB Green-Schwarz action, whose bosonic part is the usual Nambu-Goto action \begin{gather} \label{bosonic_action} S_B = T\int d\tau d\sigma \sqrt{h}\equiv \int d\tau d\sigma \mathcal{L}_B \end{gather} in which $h$ is the determinant of the induced metric \eqref{induced_metric} and the string tension $T=\frac{\sqrt{\lambda}}{2\pi}$ depends on the 't~Hooft coupling $\lambda$. The leading contribution to the string partition function comes from the regularized classical area~\cite{Drukker:2006ga} \begin{flalign} \label{classical_action} S_B^{(0)}(\theta_0) = \frac{\sqrt{\lambda}}{2\pi} \int_{0}^{2\pi} d\tau\int_{\epsilon_0}^{\infty}d\sigma\left[\sin^{2}\theta(\sigma)+\sinh^{2}\rho(\sigma)\right] =\sqrt{\lambda}\left(\mp\cos\theta_{0}+\frac{1}{\epsilon}+\mathcal{O}(\epsilon)\right)~. \end{flalign} Following \cite{Kruczenski:2008zk} we have chosen to distinguish the cutoff $\epsilon_0$ in the worldsheet coordinate from the cutoff $\epsilon=\tanh\epsilon_0$ in the Poincar\'{e} radial coordinate $z$ of $\textup{\textrm{AdS}}$. The pole in the IR cutoff $\epsilon$ in \eqref {classical_action} keeps track of the boundary singularity of the $\textup{\textrm{AdS}}$ metric and it is proportional to the circumference of the boundary circle. 
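As a cross-check of \eqref{classical_action} (upper sign), the $\sigma$-integral of the conformal factor can be done in closed form: a primitive of $\Omega^2(\sigma)=\frac{1}{\sinh^2\sigma}+\frac{1}{\cosh^2(\sigma_0+\sigma)}$ is $F(\sigma)=-\coth\sigma+\tanh(\sigma_0+\sigma)$, so that $\int_{\epsilon_0}^{\infty}\Omega^2\,d\sigma=\coth\epsilon_0-\tanh(\sigma_0+\epsilon_0)=\frac{1}{\epsilon}-\cos\theta_0+\mathcal{O}(\epsilon)$. A short numerical sketch (the closed-form primitive is our own evaluation; the sample values of $\sigma_0$ and $\epsilon_0$ are arbitrary):

```python
# Sketch checking (classical_action), upper sign: a primitive of
# Omega^2 = 1/sinh^2(s) + 1/cosh^2(s0+s) is F(s) = -coth(s) + tanh(s0+s),
# hence the area integral equals 1/eps - cos(theta_0) + O(eps).
import math

s0 = 0.8                           # arbitrary sample modulus
omega2 = lambda s: 1.0 / math.sinh(s) ** 2 + 1.0 / math.cosh(s0 + s) ** 2
F = lambda s: -1.0 / math.tanh(s) + math.tanh(s0 + s)

# F'(s) = Omega^2(s), checked by central finite differences
h = 1e-5
fd_err = max(abs((F(s + h) - F(s - h)) / (2 * h) - omega2(s))
             for s in (0.3, 1.0, 2.5))

eps0 = 0.01                        # worldsheet cutoff
area = F(1e3) - F(eps0)            # integral from eps0 to (numerical) infinity
eps, cos_t0 = math.tanh(eps0), math.tanh(s0)
print(fd_err, area - (1.0 / eps - cos_t0))  # both small; second is O(eps)
```

The pole $\frac{1}{\epsilon}=\coth\epsilon_0$ is reproduced exactly, while the finite part approaches $-\cos\theta_0=-\tanh\sigma_0$ linearly in the cutoff.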
The standard regularization scheme, equivalent to considering a Legendre transform of the action \cite{Drukker:1999zq, Drukker:2005kx}, consists of adding a term $-\sqrt{\lambda} \chi_b$ proportional to the boundary part of the Euler number \begin{eqnarray} \label{Eulerboundary} \chi_b(\theta_0)&=&\frac{1}{2\pi}\int ds \,\, {\kappa}_g \\\nonumber &=&\frac{3-\cosh (2 \epsilon_0)+\cosh(2\epsilon_0\pm2\sigma_0)+\cosh (4\epsilon_0\pm2\sigma_0)}{4 \sinh \epsilon_0 \cosh (\epsilon_0\pm\sigma_0) \cosh (2\epsilon_0\pm \sigma_0)} = \frac{1}{\epsilon}\,+\mathcal{O}(\epsilon) . \end{eqnarray} Here $\kappa_g$ stands for the geodesic curvature of the boundary at $\sigma=\epsilon_0$ and $ds$ is the invariant line element. With this subtraction, the regularized classical area takes the value \begin{equation} \label{classical_action_reg} S^{\left(0\right)}_B(\theta_0)-\sqrt{\lambda} \chi_b(\theta_0) = \mp\sqrt{\lambda} \cos\theta_{0}~. \end{equation} The (upper-sign) solution dominates the string path integral and is responsible for the leading exponential behaviour in \eqref{mainratio}; in the following, we will therefore restrict to the upper signs in \eqref{rho_and_theta}. \section{One-loop fluctuation determinants} \label{sec:determinants} This section focuses on the semiclassical expansion of the string partition function around the stable classical solution \eqref{background_new} (taking upper signs in \eqref{rho_and_theta}) and the determinants of the differential operators describing the semiclassical fluctuations around it. The $2\pi$-periodicity in $\tau$ allows one to trade the 2D spectral problems for infinitely many 1D spectral problems for the (Fourier-transformed in $\tau$) differential operators in $\sigma$. Let us call $\mathcal{O}$ one of these one-loop operators.
For each Fourier mode $\omega$, the evaluation of the determinant $\textrm{Det}_{\omega}\mathcal{O}$ is a one-variable eigenvalue problem on the semi-infinite line $\sigma\in[0,\infty)$, which we solve using the Gel'fand-Yaglom method, a technique based on $\zeta$-function regularization reviewed in Appendix \ref{app:gelfand_yaglom}. Taking the product over all frequencies $\omega$ (which are integers or half-integers according to the periodicity of the operator $\mathcal{O}$) then gives the full determinant \\ \begin{equation} \label{detomega} \textrm{Det}\mathcal{O}=\prod_{\omega}\textrm{Det}_{\omega}\mathcal{O}. \end{equation} All our worldsheet operators are intrinsically singular on this range of $\sigma$, since their principal symbol diverges at $\sigma=0$, reflecting the boundary singularity of the $\textup{\textrm{AdS}}_5$ metric. Moreover the interval is non-compact, making the spectra continuous and more difficult to deal with. We consequently introduce an IR cutoff at $\sigma=\epsilon_0$ (related to the $\epsilon=\tanh\epsilon_0$ cutoff in $z$) and another one at a large value $\sigma=R$~\cite{Kruczenski:2008zk}. While the former is necessary in order to tame the near-boundary singularity, the latter has to be regarded as a mere regularization artifact descending from a small fictitious boundary on the tips of the surfaces in $\textup{\textrm{AdS}}_3$ and $\textup{\textrm{S}}^2$. Indeed it disappears in the one-loop effective action. \subsection{Bosonic sector} \label{subsec:bosonicdets} The derivation of the bosonic fluctuation Lagrangian around the minimal-area surface (\ref{background_new}) is readily available in Section 5.2 of~\cite{Forini:2015mca}.
The one-loop fluctuation Lagrangian in static gauge is \begin{gather} \mathcal{L}^{\left(2\right)}_B\equiv\Omega^2(\sigma)\, y^T \, \mathcal{O}_B\left(\theta_0\right) \,y~, \end{gather} where the differential operator $\mathcal{O}_B\left(\theta_0\right)$ acts on the vector of fluctuation fields orthogonal to the worldsheet $y\equiv\left(y_i\right)_{i=1,\dots, 8}$. In components it reads~\footnote{To compare with~\cite{Forini:2015mca}, in the notation used therein, notice that the bosonic Lagrangian is derived as \begin{eqnarray} && \mathcal L_B^{(2)}= \delta^{\alpha\beta}\phi_\alpha y_i \phi_\beta y^i - \delta^{\alpha\beta} \left(\phi_\alpha y^i A_{\beta \, ij} y^j + A^i_{\alpha\, j}y^j \phi_\beta y_i\right)+ \left(\delta^{\alpha\beta}A^\ell_{\alpha\, i}A_{\beta\, \ell j} -\sqrt{\gamma} \mathcal M_{i j}\right) y^i y^j \,,\quad \end{eqnarray} which defines in an obvious way $m_{ij}$ and $n_{ij}$ in \eqref{Lagrbos}. } \begin{gather}\label{Lagrbos} \left[\mathcal{O}_B\left(\theta_0\right)\right]_{ij}=-\frac{1}{\Omega^2(\sigma)} \delta_{ij} \left(\partial_\tau^2+\partial_\sigma^2\right)+m_{ij} + n_{ij} \partial_\tau\,, \end{gather} where the non-vanishing entries of the matrices are~\footnote{There would be an overall minus sign in the kinetic and mass term of the $y_1$ fluctuation, which we disregard in \eqref{Lagrbos} to simplify the formula, considering that it does not play a practical role in the evaluation of determinants with the Gel'fand-Yaglom method and is reabsorbed in the Wick rotation of the time coordinate $t$.} \begin{equation} \begin{split} \!\!\!\!\!\!\!\!\! m_{11}&=m_{22}=m_{33}= \frac{2}{\Omega^2(\sigma)\,\sinh^2\sigma}\,,\qquad m_{44}= m_{55}=m_{66}=-\frac{2}{\Omega^2(\sigma)\,\cosh^2\left(\sigma+\sigma_0\right)}\,, \\ \!\!\!\!\!\!\!\!\! m_{77}&= m_{88}= \frac{-2+3\tanh^{2}\left(2\sigma+\sigma_{0}\right)}{\Omega^2(\sigma)}, \qquad\!\!\!\!\! n_{78} = -n_{87}= \frac{2\tanh\left(2\sigma+\sigma_{0}\right)}{\Omega^2(\sigma)}\,.
\end{split} \end{equation} The worldsheet excitations decouple in the bosonic sector, apart from $y_7$ and $y_8$ which are coupled through a $2\times 2$ matrix-valued differential operator. The determinant of the bosonic operator is decomposed into the product \begin{gather} \label{prodbos} \textrm{Det}\mathcal{O}_B\left(\theta_0\right)= \textrm{Det}^3\mathcal{O}_1\, \textrm{Det}^3\mathcal{O}_2\left(\theta_0\right)\, \textrm{Det}\mathcal{O}_3\left(\theta_0\right). \end{gather} Going to Fourier space ($\partial_{\tau}\to i\omega$), formula \eqref{prodbos} holds for each frequency $\omega$ with \begin{eqnarray}\label{O1} \!\!\! \mathcal{O}_1& \equiv& -\partial^2_\sigma+\omega^2+\frac{2}{\sinh^{2}\sigma}\\\label{O2} \!\!\! \mathcal{O}_2\left(\theta_0\right) & \equiv& -\partial^2_\sigma+\omega^2-\frac{2}{\cosh^{2}\left(\sigma+\sigma_{0}\right)} \\\label{O3} \!\!\! \mathcal{O}_3\left(\theta_0\right) & \equiv& \left(\begin{array}{cc} -\partial^2_\sigma+\omega^2-2+3\tanh^{2}\left(2\sigma+\sigma_{0}\right) & 2\,i\,\tanh\left(2\sigma+\sigma_{0}\right)\omega\\ -2\,i\,\tanh\left(2\sigma+\sigma_{0}\right)\omega& -\partial^2_\sigma+\omega^2-2+3\tanh^{2}\left(2\sigma+\sigma_{0}\right) \end{array}\right) \end{eqnarray} The unitary matrix $U=\frac{1}{\sqrt{2}}\big( \begin{smallmatrix} i & 1\\ -i & 1 \end{smallmatrix}\big)$ diagonalizes the operator \eqref{O3} \begin{eqnarray}\label{O3plusminus} \mathcal{O}_3\left(\theta_0\right)&=&U^{\dagger}\,{\rm diag}\{ \mathcal{O}_{3+}, \mathcal{O}_{3-} \}\,U\,,\nonumber\\ \mathcal{O}_{3+}\left(\theta_0\right)&=&-\partial_{\sigma}^{2}+\omega^{2}-2+3\tanh^{2}\left(2\sigma+\sigma_{0}\right)-2\omega\tanh\left(2\sigma+\sigma_{0}\right)\,,\\ \mathcal{O}_{3-}\left(\theta_0\right)&=&-\partial_{\sigma}^{2}+\omega^{2}-2+3\tanh^{2}\left(2\sigma+\sigma_{0}\right)+2\omega\tanh\left(2\sigma+\sigma_{0}\right)~.\nonumber \end{eqnarray} We performed a rescaling by $\sqrt{h}=\Omega^2(\sigma)$ (as in the analogous computations
of~\cite{Kruczenski:2008zk,Forini:2010ek,Drukker:2011za}) which will not affect the final determinant ratio \eqref{Zlatitude} (see discussions in Appendix A of~\cite{Drukker:2000ep} and in \cite{Kruczenski:2008zk,Forini:2010ek,Drukker:2011za}) and is actually instrumental for the analysis in Appendices~\ref{app:gelfand_yaglom_formulas} and \ref{sec:square}. \noindent We rewrite \eqref{prodbos} as follows \begin{gather} \label{bosonic_operator_product} \textrm{Det}_{\omega} \mathcal{O}_B\left(\theta_0\right)= \textrm{Det}_{\omega}^3\mathcal{O}_1\, \textrm{Det}_{\omega}^3\mathcal{O}_2\left(\theta_0\right)\, \textrm{Det}_{\omega}\mathcal{O}_{3+}\left(\theta_0\right)\, \textrm{Det}_{\omega}\mathcal{O}_{3-}\left(\theta_0\right)~, \end{gather} where all the determinants are taken at fixed $\omega$. To reconstruct the complete bosonic contribution we have to perform an infinite product over all possible frequencies. \noindent The operator $\mathcal{O}_1$ does not depend on $\theta_0$, and indeed also appears among the circular Wilson loop fluctuation operators~\cite{Kruczenski:2008zk}. While its contribution formally cancels in the ratio \eqref{mainratiolog}, we report it below along with the others for completeness. Both $\mathcal{O}_2\left(\theta_0\right) $ and $\mathcal{O}_3\left(\theta_0\right) $ become massless (scalar- and matrix-valued respectively) operators in the circular Wilson loop limit, which is clear for $\mathcal{O}_3\left(\theta_0\right)$ upon diagonalization \emph{and} an integer shift in $\omega$~\footnote{In the language of~\cite{Forini:2015mca}, this shift corresponds to a different choice of orthonormal vectors that are orthogonal to the string surface.}, irrelevant for the determinant at given frequency, as long as we do not take products over frequencies into consideration. Thus, in this limit one recovers the bosonic partition function of~\cite{Kruczenski:2008zk}. 
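The algebra behind the diagonalization \eqref{O3plusminus} is a purely matrix-level $2\times2$ identity, since $-\partial^2_\sigma$ enters both diagonal entries: for any numbers $a$ (standing for the common diagonal part) and $b$ (standing for $2\omega\tanh(2\sigma+\sigma_0)$) one has $U^{\dagger}\,{\rm diag}(a-b,\,a+b)\,U=\big(\begin{smallmatrix} a & ib\\ -ib & a\end{smallmatrix}\big)$. A sketch with arbitrary sample values:

```python
# Numerical 2x2 identity underlying (O3plusminus): with a in place of the
# common diagonal entry and b in place of 2*omega*tanh(2s+s0),
# U^dagger diag(a-b, a+b) U reproduces [[a, i b], [-i b, a]].
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

a, b = 3.7, 1.2                    # arbitrary sample values
r = 1.0 / math.sqrt(2.0)
U = [[r * 1j, r], [-r * 1j, r]]
D = [[a - b, 0.0], [0.0, a + b]]
M = mat_mul(dagger(U), mat_mul(D, U))
print(M)  # [[a, i b], [-i b, a]] up to rounding
```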
The evaluation of the one-dimensional spectral problems is outlined in Appendix \ref{app:gelfand_yaglom_formulas}. The fields satisfy Dirichlet boundary conditions at the endpoints of the compactified interval $\sigma\in[\epsilon_0, R]$. Then we take the $R\to\infty$ limit of the regularized determinants at fixed $\omega$ and $\epsilon_0$. As evident from the expressions below, taking the limit on the physical IR cutoff ($\epsilon$ in $z$ or equivalently $\epsilon_0$ in $\sigma$) would drastically change the $\omega$-dependence at this stage and thus would spoil the product over the frequencies. It is a crucial a posteriori observation that only by keeping $\epsilon_0$ \emph{finite} while sending $R$ to infinity does one precisely reproduce the expected large-$\omega$ (UV) divergences~\cite{Drukker:2000ep, Forini:2015mca}. This comes at the price of more complicated results for the bosonic (and especially fermionic) determinants. Afterwards we will remove the IR divergence in the one-loop effective action by normalizing the latitude with respect to the circular solution. The solutions of the differential equations governing the different determinants are singular for a small subset of frequencies: we shall treat these special values separately when reporting the solutions. For the determinant of the operator $\mathcal{O}_1$ in \eqref{O1} in the limit of large $R$ one obtains~\cite{Kruczenski:2008zk} \begin{flalign}\label{detO1} \textrm{Det}_{\omega} \mathcal{O}_1=\begin{cases} e^{|\omega| (R-\epsilon_0 )} \frac{(|\omega| +\coth\epsilon_0)}{2 |\omega| (|\omega| +1)}& \,\qquad\qquad \omega\neq0\\ R \coth\epsilon_0& \,\qquad\qquad \omega=0 \end{cases} \end{flalign} and only the case $\omega=0$ has to be considered separately.
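The large-$R$ value \eqref{detO1} can be reproduced with the Gel'fand-Yaglom prescription itself: integrate the initial value problem $f''=Vf$, $f(\epsilon_0)=0$, $f'(\epsilon_0)=1$ and read off $f(R)$. A Python sketch (classical RK4 integration; the step size and the sample values $\omega=2$, $\epsilon_0=0.5$, $R=6$ are arbitrary):

```python
# Gel'fand-Yaglom sketch for O_1 = -d^2/ds^2 + w^2 + 2/sinh^2(s):
# solve f'' = V f with f(eps0) = 0, f'(eps0) = 1; the determinant at fixed
# frequency is f(R), compared below with the quoted large-R formula (detO1).
import math

def gelfand_yaglom(V, a, b, n=6000):
    # classical RK4 for the first-order system (f, fp)' = (fp, V*f)
    h = (b - a) / n
    f, fp, s = 0.0, 1.0, a
    for _ in range(n):
        k1 = (fp, V(s) * f)
        k2 = (fp + h / 2 * k1[1], V(s + h / 2) * (f + h / 2 * k1[0]))
        k3 = (fp + h / 2 * k2[1], V(s + h / 2) * (f + h / 2 * k2[0]))
        k4 = (fp + h * k3[1], V(s + h) * (f + h * k3[0]))
        f += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        fp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        s += h
    return f

w, eps0, R = 2, 0.5, 6.0
V = lambda s: w * w + 2.0 / math.sinh(s) ** 2
num = gelfand_yaglom(V, eps0, R)
ref = math.exp(w * (R - eps0)) * (w + 1.0 / math.tanh(eps0)) / (2 * w * (w + 1))
print(num / ref)  # -> 1 up to exponentially small finite-R corrections
```

The residual deviation from $1$ is exponentially small in $R$, consistent with \eqref{detO1} being the strict $R\to\infty$ asymptotics.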
Next we examine the initial value problem (\ref{O2first})-(\ref{O2second}) associated to $\mathcal{O}_2(\theta_0)$, whose solution is \begin{flalign} f_{(II)1}(\sigma) =\begin{cases} \frac{1}{2 \omega \cosh \left(\sigma +\sigma_0\right) \cosh \left(\sigma_0+\epsilon_0\right)} \left(\cosh \left(\sigma+\epsilon_0 +2 \sigma_0 \right) \sinh (\omega (\sigma -\epsilon_0 )) \vphantom{\frac{(\omega -1) \sinh ((\omega +1) (\sigma -\epsilon_0 ))}{2 (\omega +1)}}+ \right.&\\ \qquad\qquad\qquad \left.+\frac{(\omega +1) \sinh ((\omega -1) (\sigma -\epsilon_0 ))}{2 (\omega -1)}+\frac{(\omega -1) \sinh ((\omega +1) (\sigma -\epsilon_0 ))}{2 (\omega +1)}\right) & \,\ \omega\neq-1,0,1\\ \frac{2\left(\sigma-\epsilon_0\right)-\sinh2\left(\sigma_{0}+\epsilon_0\right)+\sinh2\left(\sigma+\sigma_{0}\right)}{4\cosh\left(\sigma_{0}+\epsilon_0\right)\cosh\left(\sigma+\sigma_{0}\right)}& \,\ \omega=-1,1\\ \frac{\left(\sigma-\epsilon_0\right)\sinh\left(\sigma_{0}+\epsilon_0\right)\sinh\left(\sigma+\sigma_{0}\right)+\sinh\left(\sigma-\epsilon_0\right)}{\cosh\left(\sigma_{0}+\epsilon_0\right)\cosh\left(\sigma+\sigma_{0}\right)}& \,\ \omega=0 \,. \end{cases} \end{flalign} The determinant is then given by $f_{(II)1}(R)$ and for $R$ large one obtains the simpler expression \begin{flalign}\label{detO2} \textrm{Det}_{\omega} \mathcal{O}_2(\theta_0)=\begin{cases} e^{|\omega| (R-\epsilon_0 )}\frac{ (|\omega| +\tanh (\sigma_0+\epsilon_0 ))}{2 |\omega| (|\omega| +1)}\,& \,\qquad\qquad \omega\neq0\\ R\,\tanh(\sigma_0+\epsilon_0) & \,\qquad\qquad \omega=0\,. \end{cases} \end{flalign} We repeat the same procedure for $\mathcal{O}_{3+}\left(\theta_0\right)$. 
From the solutions \begin{flalign} f_{(II)1}(\sigma) =\begin{cases} \frac{(\omega +1) e^{2 \left(\sigma +\sigma_0+\epsilon_0\right)} \sinh \left((\omega -1) \left(\sigma-\epsilon_0 \right)\right)+(\omega -1) \sinh \left((\omega +1) \left(\sigma-\epsilon_0 \right)\right)} {\left(\omega ^2-1\right)\sqrt{(1+e^{4 \sigma+2 \sigma_0})(1+e^{2 \sigma_0+4 \epsilon_0})}} & \,\qquad \omega\neq-1,0,1\\ \frac{\left(e^{2 \sigma -2\epsilon_0}-e^{-2\sigma+2 \epsilon_0}+4 e^{2 \sigma +2 \sigma_0+2 \epsilon_0} \left(\sigma-\epsilon_0 \right)\right)} {4 \sqrt{(1+e^{4 \sigma +2 \sigma_0})(1+e^{2 \sigma_0+4 \epsilon_0})}}& \,\qquad \omega=-1,1\\ \frac{\left(e^{ \sigma-\epsilon_0 }-e^{-\sigma+ \epsilon_0}\right) \left(e^{2 \left(\sigma +\sigma_0+\epsilon_0\right)}+1\right)}{2 \sqrt{(1+e^{4 \sigma +2 \sigma_0})(1+e^{2 \sigma_0+4 \epsilon_0})}} & \,\qquad \omega=0 \end{cases} \end{flalign} one finds for large $R$ \begin{flalign}\label{detO3plus} ~~ \textrm{Det}_{\omega} \mathcal{O}_{3+}(\theta_0)=\begin{cases} \frac{e^{R (\omega-1)-\sigma_0-(\omega +1) \epsilon_0} \left(\omega +(\omega +1) e^{2 \sigma_0+4 \epsilon_0}-1\right)} {2 \left(\omega ^2-1\right) \sqrt{1+e^{2 \sigma_0+4 \epsilon_0}}}& \,\qquad\qquad \omega\geq2\\ \frac{R e^{\sigma_0+2 \epsilon_0 }}{\sqrt{1+e^{2 \sigma_0+4 \epsilon_0 }}} & \,\qquad\qquad \omega=1\\ \frac{e^{-R (\omega-1 )+\sigma_0+(\omega +1) \epsilon_0}}{2(1-\omega ) \sqrt{1+e^{2 \sigma_0+4 \epsilon_0}}}& \,\qquad\qquad \omega\leq0\,. 
\end{cases} \end{flalign} In view of the relation $\mathcal{O}_{3-}\left(\theta_0\right)=\mathcal{O}_{3+}\left(\theta_0\right)\mid_{\omega\to-\omega}$, which follows from \eqref{O3plusminus}, we can easily deduce the results for $\textrm{Det}_{\omega} \mathcal{O}_{3-}(\theta_0)$ by flipping the frequency in the lines above \begin{flalign}\label{detO3minus} \qquad \textrm{Det}_{\omega} \mathcal{O}_{3-}(\theta_0)=\begin{cases} \frac{e^{R (\omega+1 )+\sigma_0+(-\omega +1) \epsilon_0}}{2(1+\omega ) \sqrt{1+e^{2 \sigma_0+4 \epsilon_0}}}& \,\qquad\omega\geq0\\ \frac{R e^{\sigma_0+2 \epsilon_0 }}{\sqrt{1+e^{2 \sigma_0+4 \epsilon_0 }}} & \,\qquad \omega=-1\\ \frac{e^{-R (\omega +1)-\sigma_0-(-\omega +1) \epsilon_0} \left(-\omega +(-\omega +1) e^{2 \sigma_0+4 \epsilon_0}-1\right)}{2 \left(\omega ^2-1\right) \sqrt{1+e^{2 \sigma_0+4 \epsilon_0}}}& \,\qquad \omega\leq-2\,. \end{cases} \end{flalign} Notice that a shift of $\omega\to\omega-1$ in $\textrm{Det}_\omega\mathcal{O}_{3+}(\theta_0)$ and $\omega\to\omega+1$ in $\textrm{Det}_\omega\mathcal{O}_{3-}(\theta_0)$ gives back the symmetry around $\omega=0$ in the distribution of power-like and exponential large-$R$ divergences which characterizes the other determinants \eqref{detO1} and \eqref{detO2}. Such a shift -- also useful for the circular Wilson loop limit as discussed below \eqref{bosonic_operator_product} -- does not affect the determinant, and we will perform it in Section \ref{sec:partitionfunctions}. 
\bigskip \subsection{Fermionic sector} \label{sec:fermionic_Lagrangian} The fluctuation analysis in the fermionic sector can be carried out following again the general approach of~\cite{Forini:2015mca}, which includes the local $\grp{SO}(1,9)$ rotation in the target space~\cite{Kavalov:1986nx,Sedrakian:1986np,Langouche:1987mw,Langouche:1987my,Langouche:1987mx,Drukker:2000ep} that allows one to cast the quadratic Green-Schwarz fermionic action into eight contributions for two-dimensional spinors on the curved worldsheet background. The standard Type IIB $\kappa$-symmetry gauge-fixing for the rotated fermions $\Psi^I$ \begin{equation} \label{kappa_symmetry} \Psi^1=\Psi^2\equiv\Psi\, \end{equation} leads to the Lagrangian~\footnote{We perform the computations in a Lorentzian signature for the induced worldsheet metric and only at the end Wick-rotate back. The difference with (5.37)-(5.38) of~\cite{Forini:2015mca} is only in the labeling of the spacetime directions.} \begin{eqnarray} \mathcal{L}_F^{\left(2\right)} &=& 2i \,\Omega^2(\sigma)\,\bar{\Psi}\, \mathcal{O}_F\left(\theta_0\right)\,\Psi\, \end{eqnarray} where the operator $\mathcal{O}_F\left(\theta_0\right)$ is given by \begin{eqnarray} \label{fermionic_operator_initial} \mathcal{O}_F\left(\theta_0\right) &=&\frac{i}{\Omega(\sigma)}\left(\Gamma_{4}\partial_{\tau}+\Gamma_{3}\partial_{\sigma}-a_{34}(\sigma)\Gamma_{3}+a_{56}(\sigma)\Gamma_{456}\right)\nonumber\\ &&+\frac{1}{\Omega (\sigma)^{2}}\left(\sinh^{2}\rho(\sigma)\Gamma_{012}+\sin^{2}\theta(\sigma)\Gamma_{0123456}\right)\,. \end{eqnarray} The coefficients $a_{34} (\sigma )$ and $a_{56} (\sigma )$ can be expressed as derivatives of the functions appearing in the classical solution: \begin{flalign} a_{34} (\sigma ) & =-\frac{1}{2}\frac{d}{d\sigma}\log\Omega (\sigma )\quad\mathrm{and}\quad a_{56}(\sigma) =\frac{1}{4}\frac{d}{d\sigma}\log\frac{\cosh\rho(\sigma)+\cos\theta(\sigma)}{\cosh\rho(\sigma)-\cos\theta(\sigma)}.
\end{flalign} In the $\theta_0\to0$ limit (hence $\theta(\sigma)\to0$), one gets \begin{eqnarray} \mathcal{O}_{F}\left(\theta_0=0\right) \label{fermionic_action_circle_us} =i\sinh\sigma\Gamma_{4}\partial_{\tau}+i\sinh\sigma\Gamma_{3}\partial_{\sigma}-\frac{i}{2}\cosh\sigma\Gamma_{3}+\frac{i}{2}\sinh\sigma\Gamma_{456}+\Gamma_{012}\,, \end{eqnarray} which coincides with the operator found in the circular Wilson loop analysis of~\cite{Kruczenski:2008zk}~\footnote{See formula (5.17) therein.}, once we go back to Minkowski signature and reabsorb the connection-related $\Gamma_{456}$-term via the $\tau$-dependent rotation~$\Psi\to\exp\left(-\frac{\tau}{2}\Gamma_{56}\right)\Psi$. In Fourier space this results in a shift of the integer fermionic frequencies $\omega$ by one half, turning periodic fermions into anti-periodic ones. In the general case \eqref{fermionic_operator_initial} we cannot eliminate all the connection-related terms $-a_{34}(\sigma)\Gamma_{3}+a_{56}(\sigma)\Gamma_{456}$, since the associated normal bundle is non-flat~\cite{Forini:2015mca}~\footnote{The appearance of gauge connections in the covariant derivatives, associated with the structure of the normal bundle, is discussed at length in~\cite{Forini:2015mca} and references therein. In particular, see discussion in Section 5.2 of~\cite{Forini:2015mca} for both the latitude and the circular Wilson loop limit.}. Nevertheless, performing the above $\tau$-rotation at the level of \eqref{fermionic_operator_initial} has the merit of simplifying the circular limit, making a direct connection with known results. This is how we will proceed: For now, we continue with the analysis of the fermionic operator in the form \eqref{fermionic_operator_initial} without performing any rotation. Then, in Section \ref{sec:partitionfunctions}, we shall take into account the effect of this rotation by relabelling the fermionic Fourier modes in terms of a suitable choice of half-integers.
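The effect of this rotation on the periodicity of the modes can be made concrete in a few lines: on a $\Gamma_{56}$-eigenstate the rotation $\exp\left(-\frac{\tau}{2}\Gamma_{56}\right)$ acts as a half-angle phase (here taken as $e^{-i\tau/2}$; the sign depends on the eigenvalue), which turns a $2\pi$-periodic Fourier mode into an anti-periodic one. A minimal Python sketch with an arbitrary sample frequency:

```python
import cmath, math

def mode(omega, tau):
    """A 2*pi-periodic Fourier mode exp(i*omega*tau) with integer omega."""
    return cmath.exp(1j * omega * tau)

def rotated_mode(omega, tau):
    """The same mode after the half-angle rotation, modeled as exp(-i*tau/2)."""
    return cmath.exp(-1j * tau / 2) * mode(omega, tau)

tau, omega = 0.7, 3
# the original mode is periodic under tau -> tau + 2*pi ...
assert abs(mode(omega, tau + 2 * math.pi) - mode(omega, tau)) < 1e-12
# ... while the rotated mode is anti-periodic (it picks up a minus sign),
# i.e. its effective frequency is shifted by one half
assert abs(rotated_mode(omega, tau + 2 * math.pi) + rotated_mode(omega, tau)) < 1e-12
```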
The analysis of the fermionic operator \eqref{fermionic_operator_initial} drastically simplifies noticing that the set of mutually-commuting matrices $\{\Gamma_{12},\Gamma_{56},\Gamma_{89}\}$ commutes with the operator itself and leaves invariant the spinor constraint \eqref{Weyl} and the fermionic gauge fixing (\ref{kappa_symmetry}). By means of the projectors \begin{flalign} \mathcal{P}_{12}^{\pm} \equiv\frac{\mathbb{I}_{32}\pm i\Gamma_{12}}{2},\ \ \ \ \mathcal{P}_{56}^{\pm} \equiv\frac{\mathbb{I}_{32}\pm i\Gamma_{56}}{2}\ \ \ \ \mathrm{and}\ \ \ \ \mathcal{P}_{89}^{\pm} \equiv\frac{\mathbb{I}_{32}\pm i\Gamma_{89}}{2}, \end{flalign} we decompose the $32\times 32$ fermionic operator into eight blocks of $2\times 2$ operators labeled by the triplet $\{p_{12},p_{56},p_{89}=-1,1\}$. Formally this can be seen as the decomposition into the following orthogonal subspaces \begin{flalign} \mathcal{O}_{F}\left(\theta_0\right)&= \underset{p_{12},p_{56},p_{89}=-1,1}{\bigoplus}\mathcal{O}_{F}^{p_{12},p_{56},p_{89}}\left(\theta_0\right)\\ \Psi&= \underset{p_{12},p_{56},p_{89}=-1,1}{\bigotimes}\Psi^{p_{12},p_{56},p_{89}} \end{flalign} where each operator \begin{eqnarray}\label{fermionic_operator_decomposed} \mathcal{O}_{F}^{p_{12},p_{56},p_{89}}\left(\theta_0\right) &\equiv& \frac{i}{\Omega(\sigma)}\left(\Gamma_{4}\partial_{\tau}+\Gamma_{3}\partial_{\sigma}-a_{34}(\sigma)\Gamma_{3}-ip_{56}a_{56}(\sigma)\Gamma_{4}\right)\\ &&+\frac{1}{\Omega^{2}(\sigma)}\left(-ip_{12}\sinh^{2}\rho(\sigma)\Gamma_{0}-p_{12}p_{56}\sin^{2}\theta(\sigma)\Gamma_{034}\right)\nonumber \end{eqnarray} acts on the eigenstates $\Psi^{p_{12},p_{56},p_{89}}$ of $\{\mathcal{P}_{12}^{\pm},\mathcal{P}_{56}^{\pm},\mathcal{P}_{89}^{\pm}\}$ with eigenvalues $\{\frac{1\pm \,p_{12}}{2},\frac{1\pm \,p_{56}}{2},\frac{1\pm \,p_{89}}{2}\}$. Notice that the operator defined in \eqref{fermionic_operator_decomposed} actually does not depend on the label $p_{89}$. 
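The projector algebra underlying this decomposition is elementary and easy to check explicitly. In the following toy sketch (ours, not the actual $32\times32$ representation) the real antisymmetric matrix $J$ plays the role of a product $\Gamma_{ab}$ of two distinct gamma matrices, for which $(\Gamma_{ab})^2=-\mathbb{I}$, so that $(\mathbb{I}\pm iJ)/2$ are complementary orthogonal projectors:

```python
# 2x2 toy model: J^2 = -I, hence (iJ)^2 = +I and P_pm = (I pm iJ)/2
# are idempotent, mutually orthogonal, and complete.
I2 = [[1, 0], [0, 1]]
J  = [[0, -1], [1, 0]]          # J @ J = -I2

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(a, A, b, B):            # entrywise a*A + b*B
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

Pp = lin(0.5, I2, 0.5j, J)      # P_+ = (I + iJ)/2
Pm = lin(0.5, I2, -0.5j, J)     # P_- = (I - iJ)/2

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

assert close(mul(Pp, Pp), Pp)               # idempotent
assert close(mul(Pp, Pm), [[0, 0], [0, 0]]) # orthogonal
assert close(lin(1, Pp, 1, Pm), I2)         # complete: P_+ + P_- = I
```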
Then the spectral problem reduces to the computation of eight 2D functional determinants \footnote{A non-trivial matrix structure is also encountered in the fermionic sector of the circular Wilson loop~\cite{Kruczenski:2008zk}, but the absence of a background geometry in $\textup{\textrm{S}}^5$ leads to a simpler gamma structure. It comprises only three gamma combinations ($\Gamma_0,\Gamma_4,\Gamma_{04}$), whose algebra allows their identification with the three Pauli matrices without the need of labelling the subspaces.} \begin{gather} \label{prod_fermionic_operator_decomposed} \textrm{Det}\mathcal{O}_{F}\left(\theta_0\right) =\prod_{p_{12},p_{56},p_{89}=\pm1}\textrm{Det}\mathcal{O}_{F}^{p_{12},p_{56},p_{89}}\left(\theta_0\right). \end{gather} A deeper look at the properties of $\mathcal{O}_{F}^{p_{12},p_{56},p_{89}}$ allows us to focus just on the case of $p_{12}=p_{56}=p_{89}=1$. In fact, as motivated in detail in Appendix \ref{ferm111}, the total determinant can be rewritten as follows \begin{equation}\label{detfermall1mirror} \textrm{Det}\mathcal{O}_{F}\left(\theta_0\right)=\prod_{\omega\in\mathbb{Z}}\,\textrm{Det}_{\omega}[(\mathcal{O}_{F}^{{1},1,1}(\omega) )^2]^2 \textrm{Det}_{\omega}[(\mathcal{O}_{F}^{{1},1,1}(-\omega) )^2]^2~\,. \end{equation} Using the matrix representation (\ref{representation_gamma}) and going to Fourier space, we obtain \begin{eqnarray}\label{O111} \mathcal{O}_{F}^{1,1,1}(\theta_0) &\equiv&\Big[ \frac{i}{\Omega(\sigma)}\big(-i\omega \sigma_2+\sigma_{1}\partial_{\sigma}-a_{34}(\sigma)\sigma_{1}+ia_{56}(\sigma)\sigma_2\big)\\\nonumber &&+\frac{1}{\Omega^{2}(\sigma)}\big(\sinh^{2}\rho(\sigma)\sigma_{3}-\sin^{2}\theta(\sigma){\mathbb{I}_2}\big)\Big]\otimes M \equiv \mathcal{\widetilde{O}}_{F}^{1,1,1}\otimes M~, \end{eqnarray} where $M=\sigma_2\otimes {\mathbb{I}_4}\otimes\sigma_1$. For simplicity of notation, from now on we will denote by $O_{F}^{1,1,1}(\theta_0)$ the first factor in the definition above.
In a similar spirit to the analysis for the bosonic sector, we start by finding the solutions of the homogeneous problem \begin{gather} \label{homogeneous_equation_fermions} \mathcal{{O}}_{F}^{1,1,1}\left(\theta_0\right) \bar{f}(\sigma)=0 \end{gather} where $\bar{f}(\sigma)$ denotes the two component spinor $ \left(f_{1}(\sigma), f_{2}(\sigma)\right)^T$. The system of coupled first-order differential equations now reads \begin{eqnarray} \label{first_equation} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&\left(- \sin^{2}\theta(\sigma)+\sinh^{2}\rho(\sigma)\right)f_{1}(\sigma)+i \Omega(\sigma)\left(\partial_{\sigma}-\omega-a_{34}(\sigma)+a_{56}(\sigma)\right)f_{2}(\sigma)=0,\\ \label{second_equation} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&\left(-\sin^{2}\theta(\sigma)-\sinh^{2}\rho(\sigma)\right)f_{2}(\sigma)+i \Omega(\sigma)\left(\partial_{\sigma}+\omega-a_{34}(\sigma)-a_{56}(\sigma)\right)f_{1}(\sigma)=0. \end{eqnarray} We can cast it into a second-order differential equation for one of the unknown functions. Solving (\ref{second_equation}) for $f_2(\sigma)$ \begin{gather} f_{2} (\sigma)=\frac{i }{\Omega\left(\sigma\right)}\left(\partial_{\sigma}+\omega-\frac{1}{2\tanh\sigma}-\frac{\tanh\left(\sigma+\sigma_{0}\right)}{2}\right)f_{1}(\sigma)\,, \end{gather} and then plugging it into (\ref{first_equation}) one obtains \begin{gather} \label{Schoedinger_p56PLUS} \!\!\! f_{1}^{''}(\sigma)-\Big(\frac{1}{2\sinh^{2}\sigma}-\frac{1}{2\cosh^{2}\left(\sigma+\sigma_{0}\right)}+\Big(\frac{1}{2\tanh\sigma}+\frac{\tanh(\sigma+\sigma_{0})}{2}-\omega\Big)^{2}\Big)\,f_{1}(\sigma)=0. \end{gather} It is worth noticing that the Gel'fand-Yaglom method has naturally led to an auxiliary Schr\"{o}dinger equation for a fictitious particle on a semi-infinite line, subject to a supersymmetric potential $V(\sigma)=-W'(\sigma)+W^2(\sigma)$ derived from the prepotential $W(\sigma)=\frac{1}{2\tanh\sigma}+\frac{\tanh(\sigma+\sigma_{0})}{2}-\omega$.
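One can verify numerically that the bracket in \eqref{Schoedinger_p56PLUS} is indeed of the form $-W'+W^2$. The following sketch (with arbitrary sample values of $\sigma_0$ and $\omega$) compares the two expressions, approximating $W'$ by a central difference:

```python
import math

sigma0, omega = 0.8, 3.0   # illustrative sample values

def W(s):
    """Prepotential W(sigma) = coth(sigma)/2 + tanh(sigma + sigma0)/2 - omega."""
    return 1.0 / (2.0 * math.tanh(s)) + math.tanh(s + sigma0) / 2.0 - omega

def V_susy(s, h=1e-5):
    """Supersymmetric potential V = -W' + W^2, with W' via central difference."""
    Wp = (W(s + h) - W(s - h)) / (2.0 * h)
    return -Wp + W(s) ** 2

def V_schrodinger(s):
    """The bracket appearing in the second-order equation for f_1."""
    return (1.0 / (2.0 * math.sinh(s) ** 2)
            - 1.0 / (2.0 * math.cosh(s + sigma0) ** 2)
            + W(s) ** 2)

for s in (0.3, 1.0, 2.5):
    assert abs(V_susy(s) - V_schrodinger(s)) < 1e-6
```

The agreement reflects the identity $-\frac{d}{d\sigma}\big(\frac{1}{2}\coth\sigma+\frac{1}{2}\tanh(\sigma+\sigma_0)\big)=\frac{1}{2\sinh^2\sigma}-\frac{1}{2\cosh^2(\sigma+\sigma_0)}$.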
Traces of supersymmetry are not surprising: They represent a vestige of the supercharges unbroken by the classical background~\footnote{The same property is shown by (5.26) in \cite{Kruczenski:2008zk}. }.\\ \\ As in the bosonic case, we have to separately discuss some critical values of the frequencies. We only report the independent solutions of the equations above, where the constants $c_{i,1}$ and $c_{i,2}$ have to be fixed by the desired initial value problem ($i=I,\,II$). \begin{flalign} \label{ferm_sol1} f_{(i)1}(\sigma) =\begin{cases} \frac{c_{i,1}e^{\sigma\left(1+\omega\right)}+c_{i,2}e^{\sigma\left(1-\omega\right)+\sigma_{0}}\left(2\omega^{2}\cosh\left(\sigma+\sigma_{0}\right)\sinh\sigma+\omega\cosh\left(2\sigma+\sigma_{0}\right)+\sinh\sigma_{0}\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)}} & \,\omega\neq-1,0,1\\ \frac{c_{i,1}e^{2\sigma}+c_{i,2}\left(-4\sigma e^{2\sigma+2\sigma_{0}}-2e^{2\sigma_{0}}+2-e^{-2\sigma}\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)}} & \ \omega=1\\ \frac{c_{i,1}e^{\sigma}+c_{i,2}\left(-e^{-\sigma}-e^{3\sigma+2\sigma_{0}}+2\sigma e^{\sigma}\left(e^{2\sigma_{0}}-1\right)\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)}} & \, \omega=0\\ \frac{c_{i,1}+c_{i,2}\left(4\sigma-2e^{2\sigma}+2e^{2\sigma+2\sigma_{0}}-e^{4\sigma+2\sigma_{0}}\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)}} & \, \omega=-1\\ \end{cases} \end{flalign} \begin{flalign} \label{ferm_sol2} f_{(i)2}(\sigma) =\begin{cases} c_{i,1}\frac{2i e^{\sigma\left(2+\omega\right)+\sigma_{0}}\left(-\cosh\left(2\sigma+\sigma_{0}\right)+2\omega\cosh\left(\sigma+\sigma_{0}\right)\sinh\sigma\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)\left(e^{4\sigma+2\sigma_{0}}+1\right)}}+ &\\ \qquad -c_{i,2}\frac{i
e^{\sigma\left(2-\omega\right)+2\sigma_{0}}\left(2\omega+\sinh\left(2\sigma+2\sigma_{0}\right)+2\omega\sinh\sigma_{0}\sinh\left(2\sigma+\sigma_{0}\right)-\sinh2\sigma\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)\left(e^{4\sigma+2\sigma_{0}}+1\right)}}& \!\!\!\!\!\!\!\!\!\!\!\!\omega\neq-1,0,1\\ -c_{i,1}\frac{i \left(2e^{\sigma}-e^{3\sigma}+e^{3\sigma+2\sigma_{0}}\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)\left(e^{4\sigma+2\sigma_{0}}+1\right)}} &\\ \qquad -c_{i,2}\frac{i \left(2e^{5\sigma+4\sigma_{0}}-e^{-\sigma+2\sigma_{0}}+e^{-\sigma}+4\left(\sigma+1\right)e^{3\sigma+2\sigma_{0}}\left(1-e^{2\sigma_{0}}\right)-4\left(2\sigma+1\right)e^{\sigma+2\sigma_{0}}\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)\left(e^{4\sigma+2\sigma_{0}}+1\right)}} & \, \omega=1\\ -c_{i,1}\frac{i \sqrt{1+e^{4\sigma+2\sigma_{0}}}}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)}} &\\ \qquad -c_{i,2}\frac{i \left(e^{2\sigma}-6e^{2\sigma+2\sigma_{0}}+e^{2\sigma+4\sigma_{0}}+2\left(\sigma-1\right)e^{4\sigma+2\sigma_{0}}\left(e^{2\sigma_{0}}-1\right)+2\left(\sigma+1\right)\left(e^{2\sigma_{0}}-1\right)\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)\left(e^{4\sigma+2\sigma_{0}}+1\right)}} & \ \omega=0\\ -c_{i,1}\frac{i \left(e^{\sigma}-e^{\sigma+2\sigma_{0}}+2e^{3\sigma+2\sigma_{0}}\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)\left(e^{4\sigma+2\sigma_{0}}+1\right)}} &\\ \qquad -c_{i,2}\frac{i 
\left(2e^{-\sigma}-e^{5\sigma+2\sigma_{0}}+e^{5\sigma+4\sigma_{0}}+4\left(\sigma-1\right)e^{\sigma}\left(1-e^{2\sigma_{0}}\right)+4e^{3\sigma+2\sigma_{0}}\left(2\sigma-1\right)\right)}{\sqrt{\left(e^{2\sigma}-1\right)\left(e^{2\sigma_{0}}+1\right)\left(e^{2\sigma+2\sigma_{0}}+1\right)\left(e^{4\sigma+2\sigma_{0}}+1\right)}}& \, \omega=-1 \end{cases} \end{flalign} We are now ready to evaluate the determinants using the results of Appendix \ref{sec:square}, namely considering Dirichlet boundary conditions for the square of the first order differential operator. Having in mind the solutions above and how they enter in \eqref{GY_Y} and \eqref{OY}, it is clear that even the \emph{integrand} in \eqref{op_square_Dirichlet} is already significantly complicated. A simplification occurs by recalling that our final goal is taking the $R\to \infty$ limit of all determinants and combining them in the ratio of bosonic and fermionic contributions. As stated above in the bosonic analysis and shown explicitly below, for the correct large-$\omega$ divergences to be reproduced, it is crucial to send $R\to\infty$ while keeping $\epsilon$ finite.
In Appendix \ref{largeRsimplified} we sketch how to use the main structure of the matrix of the solutions $Y(\sigma)$ to obtain the desired large-$R$ expressions for the determinants in a more direct way.\\ \\ The determinant of the operator $O_F^{1,1,1}$ for modes $\omega\neq\{-1,0,1\}$ reads for large $R$ \begin{align}\label{finalferm} & \textrm{Det}_{\omega\geq2}[(\mathcal{O}_{F}^{{1},1,1})^2]~ = \frac{ a_0\,e^{2 \omega (R-\epsilon_0 )}}{ \omega ^2\, (1+\omega)^2 (\omega-1 )}\,\Big[a_1\,\Phi \left(e^{-2 \epsilon_0 },1,\omega \right)+a_2\,\Phi (-e^{-2 (\sigma_0+\epsilon_0 )},1,\omega )+a_3\Big]\nonumber \\ & \textrm{Det}_{\omega\leq-2}[(\mathcal{O}_{F}^{{1},1,1})^2]=\frac{b_0\,e^{-2 \omega (R-\epsilon_0 )}}{ \omega \, (1-\omega )^2}\,\Big[b_1\,\Phi \left(e^{-2 \epsilon_0 },1,-\omega \right)+b_2\,\Phi (-e^{-2 (\sigma_0+\epsilon_0 )},1,-\omega )+b_3\Big] \end{align} where $\Phi(z,s,a)$ is the Lerch transcendent \eqref{lerch_def}. The Lerch function is used here merely as a tool for writing the determinants compactly. In fact, for the values of $\omega$ relevant to us it can be written in terms of elementary functions, but its expression becomes more and more unwieldy as $\omega$ increases. The coefficients $a_i$ and $b_i$ can also be expressed in terms of elementary functions.
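For positive integer order the reduction to elementary functions follows from the identity $\Phi(z,1,m)=z^{-m}\big(-\log(1-z)-\sum_{k=1}^{m-1}z^k/k\big)$, valid for $|z|<1$. A quick numerical sketch of this identity, with illustrative values of $z$ mimicking the two arguments appearing in the determinants above:

```python
import math

def lerch_phi(z, m, terms=4000):
    """Phi(z, 1, m) from its defining series, for |z| < 1 and integer m >= 1."""
    return sum(z ** n / (n + m) for n in range(terms))

def lerch_phi_elementary(z, m):
    """The same function in elementary form: z^(-m) (-log(1-z) - sum_{k<m} z^k/k)."""
    return (-math.log(1.0 - z) - sum(z ** k / k for k in range(1, m))) / z ** m

for z in (0.5, -0.7):      # stands in for e^{-2 eps0} and -e^{-2(sigma0+eps0)}
    for m in (1, 2, 5):
        assert abs(lerch_phi(z, m) - lerch_phi_elementary(z, m)) < 1e-10
```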
For the $a_i$ we have \begin{align}\nonumber a_0&=e^{-R-\frac{3 \sigma _0}{2}} \,\frac{\sinh \epsilon_0 \left(\tanh\sigma _0+1\right)\cosh \left(\sigma _0+\epsilon_0 \right)}{8 \sqrt{2\cosh \left(\sigma _0+2 \epsilon_0 \right)}}\,\\ a_1&=4 \,\text{sech}\sigma_0 (\tanh \sigma_0+\omega )^2\,\\\nonumber a_2&=4 [2 \left(1-\omega ^2\right) \omega ^2 \cosh \sigma_0-2 \left(1-\omega ^2\right) \omega \sinh\sigma _0+\text{sech}\sigma _0 \left(\text{sech}^2\sigma_0+\omega ^2-1\right)]\,\\\nonumber a_3 &=\tanh ^2 \sigma _0\, (\coth\epsilon_0+1) \,\text{csch}\epsilon_0\, \text{sech}\,\left(\sigma _0+\epsilon_0 \right)\, \big[e^{\sigma _0} \left(\cosh \sigma _0 -2 \sinh \sigma _0-\sinh (2 \epsilon_0 -\sigma _0)\right)\\\nonumber &+\cosh(2\sigma _0+2\epsilon_0))\big]+2 \omega \, \Big[- \omega ^2 \cosh ^2 \sigma _0 \, \text{csch}\epsilon_0\, \text{sech} (\sigma _0+\epsilon_0)\\ \nonumber &+ \cosh \sigma _0 \big(2 \omega ^2+\omega +3 \omega ^2\, \text{csch}\epsilon_0\,\cosh (\sigma_0+2 \epsilon_0) \, \text{sech}(\sigma _0+\epsilon_0) +\omega \coth ^2\epsilon_0+2 \coth\epsilon_0-2\big)\\\nonumber & +2 \,\big(3 \omega \,\cosh \epsilon_0\,\text{sech} (\sigma _0+\epsilon_0 ) -\sinh\sigma_0\,\left(\omega -2 \omega \coth \epsilon_0-\text{csch}^2\epsilon_0\right)-\text{sech}\sigma _0 (\coth \epsilon_0+1)\big) \Big]\, \nonumber \end{align} while for the $b_i$ we get \begin{equation} \begin{split} b_0&=e^{R-\frac{\sigma _0}{2}}\text{sech}^2\sigma _0\, \frac{ \sinh\epsilon_0\,(\tanh\sigma _0+1) \cosh \left(\sigma _0+\epsilon_0 \right)}{8 \sqrt{2\cosh(\sigma _0+2 \epsilon_0)}}\,\label{finalfermomeganotspecial} \\ b_1&=-2\,\\ b_2&=-2 \,\big[\,\omega \big(\omega \cosh (2\sigma _0)+\sinh(2 \sigma_0)\big)+\omega ^2-1\,\big]\,\\ b_3&=-\cosh ^2\sigma_0\,\big[4 \omega \tanh(\sigma _0+\epsilon_0) -2 \omega\,\coth\epsilon_0+\text{csch}^2\epsilon_0\,\big]-\omega\\ &- \cosh(2 \sigma_0)(\omega +1)-\sinh (2 \sigma_0)+\cosh(\epsilon_0 -\sigma _0) \text{sech}\,(\sigma_0+\epsilon_0)\,. 
\end{split} \end{equation} The determinants of the lower modes have to be computed separately and they are given by \begin{align}\nonumber & \textrm{Det}_{\omega=1}[(\mathcal{O}_{F}^{{1},1,1})^2]\!=\! R\,e^{R}\,\frac{e^{-\frac{\sigma_0}{2}} (\tanh\sigma_0+1) \sinh \epsilon_0 \cosh (\sigma_0+\epsilon_0)}{\left(e^{2 \sigma_0}+1\right)^3 \sqrt{2 \cosh (\sigma_0+2 \epsilon_0)}} \Big[-2 e^{4\sigma_0} \Big(\log \frac{e^{2 \epsilon_0 }-1}{e^{2 (\sigma_0+\epsilon_0 )}+1}+ \\ \label{specialmode1} &+2 \sigma_0\Big) +\frac{(e^{2 \sigma _0}+1) \big(e^{6 \sigma _0+4 \epsilon_0 }+(e^{2 \epsilon_0 }+1) e^{4 \sigma _0+2 \epsilon_0 } +e^{2 \sigma _0} (-5 e^{2 \epsilon_0 }+3 e^{4 \epsilon_0 }+3 )+ (e^{2 \epsilon_0 }-1)^2\big)} {(e^{2 \epsilon_0 }-1)^2 (e^{2 (\sigma _0+\epsilon_0)}+1)} \Big]\, \\\nonumber \label{specialmode0} & \textrm{Det}_{\omega=0}[(\mathcal{O}_{F}^{{1},1,1})^2]\!=\! R\,e^{R}\,\frac{e^{-\frac{\sigma_0}{2}} (\tanh\sigma_0+1) \sinh \epsilon_0 \cosh (\sigma_0+\epsilon_0)}{\left(e^{2 \sigma_0}+1\right)^2 \sqrt{2 \cosh (\sigma_0+2 \epsilon_0)}}\Big[-2 e^{2 \sigma_0} \Big(\log\frac{e^{2 \epsilon_0 }-1}{e^{2 (\sigma_0+\epsilon_0 )}+1}+\\ &+2 \sigma_0\Big) +\frac{\left(e^{2\sigma_0}+1\right) \left(-e^{2\sigma_0}+3 e^{2 (\sigma_0+\epsilon_0 )} +e^{4 (\sigma_0+\epsilon_0 )}-e^{2 \epsilon_0 }+e^{4 \epsilon_0 }+1\right)}{\left(e^{2 \epsilon_0 }-1\right)^2 \left(e^{2 (\sigma_0+\epsilon_0 )}+1\right)} \Big]\, \\\nonumber & \textrm{Det}_{\omega=-1}[(\mathcal{O}_{F}^{{1},1,1})^2]\!=\! 
e^{3R}\,\frac{e^{-\frac{\sigma_0}{2}} (\tanh\sigma_0+1) \sinh \epsilon_0 \cosh (\sigma_0+\epsilon_0)}{8\,\left(e^{2 \sigma_0}+1\right)^2 \sqrt{2 \cosh (\sigma_0+2 \epsilon_0)}} \Big[-2 e^{2\sigma_0} \Big(\log \frac{e^{2 \epsilon_0 }-1}{e^{2 (\sigma_0+\epsilon_0 )}+1} +\\\label{specialmodes} &+2 \sigma_0\Big) +\frac{\left(e^{2\sigma_0}+1\right) \left(e^{4\sigma_0} \left(2 e^{2 \epsilon_0 }-1\right)+e^{2\sigma_0} \left(7 e^{2 \epsilon_0 }-2 e^{4 \epsilon_0 }-3\right)+e^{2 \epsilon_0 }\right)}{\left(e^{2 \epsilon_0 }-1\right)^2 \left(e^{2 (\sigma_0+\epsilon_0 )}+1\right)} \Big]\,. \end{align} \subsection{The circular Wilson loop limit} \label{circlimit} We report here the $\sigma_0\to\infty$ limit of all the bosonic and fermionic determinants, representing the circular Wilson loop case $\theta_0=0$. The result for $\textrm{Det}_{\omega}\mathcal{O}_1$ in \eqref{detO1} stays obviously the same, while for the limits of \eqref{detO2}, \eqref{detO3plus} and \eqref{detO3minus} one easily gets \begin{flalign}\label{detO2cir} \textrm{Det}_{\omega} \mathcal{O}_2(\theta_0=0)&=\begin{cases} \frac{e^{\left| \omega \right| (R-\epsilon_0)}}{2 \left| \omega \right| }& \,\qquad\qquad \omega\neq0\\ R&\,\qquad\qquad \omega=0 \end{cases}\\ \nonumber\\ \label{detO3pluscir} \textrm{Det}_{\omega} \mathcal{O}_{3+}(\theta_0=0)&=\begin{cases} \frac{e^{ (R- \epsilon_0)(\omega -1)}}{2 (\omega -1)}& \,\qquad\qquad \omega\geq2\\ R& \,\qquad\qquad \omega=1\\ \frac{e^{ -(R- \epsilon_0)(\omega -1)}}{2(1-\omega) }& \,\qquad\qquad \omega\leq0 \end{cases}\\ \nonumber\\ \label{detO3minuscir} \textrm{Det}_{\omega} \mathcal{O}_{3-}(\theta_0=0)&=\begin{cases} \frac{e^{ (R- \epsilon_0)(\omega +1)}}{2(1 +\omega) }& \,\qquad\qquad \omega\geq0\\ R& \,\qquad\qquad \omega=-1\\ -\frac{e^{ -(R- \epsilon_0)(\omega +1)}}{2 (\omega+1)}& \,\qquad\qquad \omega\leq-2 \,.\\ \end{cases} \end{flalign} The fermionic contributions \eqref{finalferm}-\eqref{specialmodes} reduce in this limit to \begin{align} 
\textrm{Det}_{\omega}\Big[\left(\mathcal{O}_{F}^{{1},1,1}(\theta_0=0)\right)^2\Big]= \begin{cases} \frac{e^{(R-\epsilon_0 )(2 \omega -1) }\left(\omega(e^{2 \epsilon_0 }-1)+1\right) }{4 (\omega -1) \omega ^2 \,(e^{2 \epsilon_0 }-1)}&\,\qquad\qquad\omega\geq 2 \\ \frac{R \,e^{R+\epsilon_0 }}{2 \left(e^{2 \epsilon_0 }-1\right)}&\, \qquad\qquad \omega=0,1 \\ \frac{ e^{3 (R- \epsilon_0) }\,(2 e^{2 \epsilon_0 }-1)}{16 (e^{2 \epsilon_0 }-1 )}&\, \qquad \qquad\omega=-1\,\\ \frac{e^{-(R-\epsilon_0 )(2 \omega -1) }\left((\omega -1) e^{2 \epsilon_0 }-\omega\right) }{4 (\omega -1)^2 \omega \, (e^{2 \epsilon_0 }-1 )}&\,\qquad\qquad\omega\leq-2 \,.\\ \end{cases} \end{align} \section{One-loop partition functions} \label{sec:partitionfunctions} We now put together the determinants evaluated in the previous sections and present the one-loop partition functions for the open strings representing the latitude ($\theta_0\neq0$) and the circular ($\theta_0=0$) Wilson loop, finally calculating their ratio.\\ In the case of fermionic determinants, as motivated by the discussion below \eqref{fermionic_action_circle_us}, we will consider the relevant formulas \eqref{finalferm}-\eqref{specialmodes} relabelled using half-integer Fourier modes. In fact, once projected onto the subspace labelled by $(p_{12},p_{56},p_{89})$, the spinor $\Psi$ is an eigenstate of $\Gamma_{56}$ with eigenvalue $-i p_{56}$ and the rotation $\Psi\to\exp\left(-\frac{\tau}{2}\Gamma_{56}\right)\Psi$ reduces to a shift of the Fourier modes by $\omega\to\omega+\frac{p_{56}}{2}$. This in particular means that below we will consider \eqref{finalferm}-\eqref{specialmodes} effectively evaluated for $\omega=s+\frac{1}{2}$ and labeled by the half-integer frequency $s$. In the bosonic sector -- as discussed around \eqref{bosonic_operator_product} and \eqref{detO3minus} -- we set $\omega=\ell+1$ in $\textrm{Det}_\omega\mathcal{O}_{3+}$ together with $\omega=\ell-1$ in $\textrm{Det}_\omega\mathcal{O}_{3-}$.
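To make the effect of these shifts explicit in the simplest setting, substituting $\omega=\ell+1$ in the circular-limit determinant \eqref{detO3pluscir} gives
\begin{flalign}
\textrm{Det}_{\ell+1}\, \mathcal{O}_{3+}(\theta_0=0)=\begin{cases} \frac{e^{ (R- \epsilon_0)\,\ell}}{2\,\ell}& \,\qquad\qquad \ell\geq1\\ R& \,\qquad\qquad \ell=0\\ \frac{e^{ (R- \epsilon_0)\,|\ell|}}{2\,|\ell|}& \,\qquad\qquad \ell\leq-1\,, \end{cases}
\end{flalign}
which is precisely the pattern of \eqref{detO2cir}; the analogous statement holds for $\mathcal{O}_{3-}$ with $\omega=\ell-1$.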
This relabeling of the frequencies gives the $R$-divergences in \eqref{detO3plus} and \eqref{detO3minus} a distribution centered around $\ell=0$ ({\it i.e.} with a divergence $\sim R$ for $\ell=0$ and $\sim e^{|\ell |R}$ for $\ell\neq 0$), in the same way as for the other bosonic determinants \eqref{detO1} and \eqref{detO2}. This will turn out to be useful when discussing the cancellation of the $R$-dependence. Recalling also \eqref{classical_action}, we write the formal expression of the one-loop string partition function \begin{equation}\label{Zlatitude} \!\!\!\!\! Z({\theta_0})=e^{\sqrt{\lambda}\cos\theta_0} \frac{\prod_{s\in\mathbb{Z}+1/2}\,\big[\,\textrm{Det}_{s}(\mathcal{O}_{F}^{{1},1,1})^2\,\textrm{Det}_{-s}(\mathcal{O}_{F}^{{1},1,1})^2\,\big]^{4/2}} {\prod_{\ell\in\mathbb{Z}}\big[\textrm{Det}_\ell\mathcal{O}_1(\theta_0)\big]^{3/2}\, \big[\textrm{Det}_\ell\mathcal{O}_2(\theta_0)\big]^{3/2}\,\big[\textrm{Det}_{\ell}\mathcal{O}_{3+}\left(\theta_0\right)\big]^{1/2} \big[\textrm{Det}_{\ell}\mathcal{O}_{3-}(\theta_0)\big]^{1/2}}~\,.
\end{equation} To proceed, we rewrite \eqref{Zlatitude} as the (still unregularized) sum \begin{eqnarray}\label{sumunreg} \Gamma({\theta_0}) &\equiv& -\log Z({\theta_0}) \equiv -\sqrt{\lambda}\cos\theta_0+\Gamma^{(1)}({\theta_0}) \\ \Gamma^{(1)}({\theta_0}) &\equiv& \sum_{\ell\in\mathbb{Z}}\Omega_{\ell}^{B}(\theta_0)-\sum_{s\in\mathbb{Z}+1/2}\Omega_s^{F}(\theta_0)~,\nonumber \end{eqnarray} where the (weighted) bosonic and fermionic contributions read \begin{eqnarray} \Omega_{\ell}^{B}(\theta_0)&=&\frac{3}{2}\log\big[\textrm{Det}_\ell\mathcal{O}_1(\theta_0)\big]+\frac{3}{2}\log\big[\textrm{Det}_\ell\mathcal{O}_2(\theta_0)\big]+\frac{1}{2}\log\big[\textrm{Det}_{\ell}\mathcal{O}_{3+}\big]+\frac{1}{2}\log[\textrm{Det}_{\ell}\mathcal{O}_{3-}\big]\nonumber\\ \Omega_{s}^{F}(\theta_0)&=& \frac{4}{2}\log\big[\,\textrm{Det}_{s}(\mathcal{O}_{F}^{{1},1,1})^2\big]+\frac{4}{2}\log \big[\textrm{Det}_{-s}(\mathcal{O}_{F}^{{1},1,1})^2\,\big]\,. \end{eqnarray} Equation \eqref{sumunreg} has the same form, with effectively antiperiodic fermions, as the one encountered in~\cite{Kruczenski:2008zk,Dekel:2013kwa}. \\ Introducing the small exponential regulator $\mu$, we proceed with the ``supersymmetric regularization'' of the one-loop effective action proposed in~\cite{Frolov:2004bh, Dekel:2013kwa} \begin{eqnarray}\label{susyreg} \Gamma^{(1)}(\theta_0)&=&\sum _{\ell\in \mathbb{Z}} e^{-\mu |\ell|}\left[\Omega ^{B}_{\ell}(\theta_0)-\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(\theta_0)+\Omega^{F}_{\ell-\frac{1}{2}}(\theta_0)}{2}\right]\nonumber\\ &&+\frac{\mu}{2}\Omega^{F}_{\frac{1}{2}}(\theta_0) \,+\frac{\mu}{2} \sum _{\ell\geq 1} e^{-\mu \ell}\left(\Omega ^{F}_{\ell+\frac{1}{2}}(\theta_0)-\Omega^{F}_{\ell-\frac{1}{2}}(\theta_0)\right)\,. \end{eqnarray} In the first sum (where the divergence is the same as in the original sum) one can remove the regulator by sending $\mu\to0$, and use a cutoff regularization in the summation index $| \ell |\leq\Lambda$.
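The mechanism behind the first sum in \eqref{susyreg} can be illustrated on a purely schematic toy model (ours, not the actual mode sums): if the bosonic and fermionic contributions both grow linearly in the mode number, pairing each bosonic mode with the average of its two half-integer fermionic neighbours cancels the growth and leaves a convergent sum, in the spirit of~\cite{Frolov:2004bh}:

```python
# Toy frequency sums: omega_b and omega_f grow linearly, so the separate
# sums diverge with the cutoff, but the paired combination converges.
def omega_b(l):
    return abs(l) + 1.0 / (1.0 + l * l)     # illustrative, not the real Omega^B

def omega_f(s):
    return abs(s)                           # illustrative, not the real Omega^F

def paired_sum(L):
    return sum(omega_b(l) - (omega_f(l + 0.5) + omega_f(l - 0.5)) / 2.0
               for l in range(-L, L + 1))

# the paired sum stabilizes as the cutoff grows ...
assert abs(paired_sum(4000) - paired_sum(2000)) < 1e-3
# ... while the purely bosonic sum keeps diverging
assert sum(omega_b(l) for l in range(-2000, 2001)) > 1e6
```

Here the paired summand reduces to $1/(1+\ell^2)$ for $\ell\neq0$, so only the subleading structure of the modes survives the pairing.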
Importantly, the non-physical regulator $R$ disappears in \eqref{susyreg}. While in~\cite{Kruczenski:2008zk}~\footnote{In this reference a regularization slightly different from~\cite{Frolov:2004bh, Dekel:2013kwa} was adopted.} the $R$-dependence drops out in each summand, here the cancellation is a subtle effect of the regularization scheme, and comes in the form of a cross-cancellation between the first and the second line once the sums have been carried out. The difference in the $R$-divergence cancellation mechanism is a consequence of the different arrangement of fermionic frequencies in our regularization scheme \eqref{susyreg}. In the circular case ($\theta_0=0$) this cancellation can be seen analytically, as in \eqref{firstsumcircle}-\eqref{othertermscircle} below. The same can then be inferred for the general latitude case, since in the normalized one-loop effective action $\Gamma^{(1)}(\theta_0)-\Gamma^{(1)}(\theta_0=0)$ one observes (see below) that the $R$-dependence drops out in each summand. A non-trivial consistency check of \eqref{susyreg} is to confirm that in the large-$\ell$ limit the expected UV divergences~\cite{Drukker:2000ep, Forini:2015mca} are reproduced. Importantly, for this to happen one cannot take the limit $\epsilon_0\to0$ in the determinants above \emph{before} considering $\ell\gg1$, which is why we kept the complicated expressions for the fermionic determinants above.
Using for the Lerch transcendent in \eqref{finalferm} \begin{equation} \label{lerch_def} \Phi(z,s,a)\equiv\sum_{n=0}^{\infty} \frac{z^n}{(n+a)^s} \end{equation} the asymptotic behavior for $|a|\gg1$ ({\it i.e.} $|\ell|\gg1$ in \eqref{finalferm}) \cite{Ferreira} \begin{equation} \Phi (z,s,a)\sim \text{sgn}(a) \left(\frac{s (s+1) z \left(z+1\right) a^{-s-2}}{2 (1-z)^3}-\frac{s\, z\, a^{-s-1}}{(1-z)^2}+\frac{a^{-s}}{1-z}\right)~, \end{equation} one finds that the leading $\Lambda$-divergence is logarithmic, and -- as expected from an analysis in terms of the Seeley-De Witt coefficients~\cite{Drukker:2000ep,Forini:2015mca} -- proportional to the volume part of the Euler number \begin{equation}\label{logdivergence} \Gamma^{(1)}({\theta_0}) = -\chi_v(\theta_0) \sum_{1\ll|\ell |\leq\Lambda}\,\frac{1}{2 |\ell|} +O(\Lambda^0) =-\chi_v(\theta_0) \log\Lambda +O(\Lambda^0)~,\qquad\qquad\Lambda\to\infty \end{equation} where \begin{eqnarray} \label{Eulervolume} \chi_v(\theta_0)&=&\frac{1}{4\pi} \int_{0}^{2\pi} d\tau\int_{\epsilon_0}^{\infty}d\sigma \sqrt{h} \,\, {^{(2)}\!\!\, R} \\\nonumber &=&1-\frac{3-\cosh (2 \epsilon_0)+\cosh (2 (\sigma_0+\epsilon_0))+\cosh (2 (\sigma_0+2\epsilon_0))}{4 \sinh \epsilon_0 \cosh (\sigma_0+\epsilon_0) \cosh (\sigma_0+2\epsilon_0)}=1-\frac{1}{\epsilon}+\mathcal{O}(\epsilon)\,, \end{eqnarray} and we notice that this limit is independent of $\sigma_0$ ($\theta_0$). This divergence should be cancelled via completion of the Euler number with its boundary contribution \eqref{Eulerboundary} and inclusion of the (opposite-sign) measure contribution, as discussed in~\cite{Drukker:2000ep,Kruczenski:2008zk}. With this in mind, we will proceed by subtracting \eqref{logdivergence} by hand in $\Gamma^{(1)}(\theta_0)$ and in $\Gamma^{(1)}(\theta_0\!=0)$.
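The quoted large-$a$ expansion is straightforward to check numerically. A Python sketch for $s=1$ and $a>0$ (the case relevant for \eqref{finalferm}), with arbitrary sample values of $z$:

```python
import math

def lerch_series(z, a, terms=2000):
    """Phi(z, 1, a) from the defining series, for |z| < 1."""
    return sum(z ** n / (n + a) for n in range(terms))

def lerch_asymptotic(z, a):
    """Three-term large-a expansion for s = 1 and a > 0."""
    return (z * (z + 1) / ((1 - z) ** 3 * a ** 3)
            - z / ((1 - z) ** 2 * a ** 2)
            + 1 / ((1 - z) * a))

for z in (0.6, -0.8):      # stands in for e^{-2 eps0} and -e^{-2(sigma0+eps0)}
    for a in (100, 400):
        rel = abs(lerch_series(z, a) - lerch_asymptotic(z, a)) / lerch_series(z, a)
        assert rel < 1e-3  # residual error is O(a^{-4})
```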
\subsection{The circular Wilson loop} The UV-regulated partition function in the circular Wilson loop limit reads \begin{eqnarray}\label{susyregcircle} \Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0\!=0)&=& \sum _{|\ell | \leq \Lambda} \left[\Omega ^{B}_{\ell}(0)-\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(0)+\Omega^{F}_{\ell-\frac{1}{2}}(0)}{2} \right]+\chi_v(0) \log\Lambda\nonumber\\ &&+\frac{\mu}{2}\Omega^{F}_{\frac{1}{2}}(0) \,+\frac{\mu}{2} \sum _{\ell\geq 1} e^{-\mu \ell}\left(\Omega ^{F}_{\ell+\frac{1}{2}}(0)-\Omega^{F}_{\ell-\frac{1}{2}}(0)\right)\,. \end{eqnarray} The first line is now convergent and its total contribution evaluates for $\Lambda\to\infty$ to \begin{eqnarray}\label{firstsumcircle} &&\sum_{|\ell | \leq \Lambda} \left[\Omega ^{B}_{\ell}(0)-\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(0)+\Omega^{F}_{\ell-\frac{1}{2}}(0)}{2}\right]+\chi_v(0) \log\Lambda\nonumber\\ &=&\sum_{\ell=3}^{\Lambda} \log\frac{16(\ell-1)^2(\ell+1)\left(\ell+\frac{1}{\epsilon}\right)^{3}}{\ell^2 \left(2\ell+1+\frac{1}{\epsilon}\right)^2 \left(2\ell-1+\frac{1}{\epsilon}\right)^2}+\log\frac{1536 e^{-2R}\epsilon^{5/2}(1+2\epsilon)^3}{(1+3\epsilon)^4(1+5\epsilon)^2(1-\epsilon)} +\chi_v(0) \log\Lambda\nonumber\\ &=& -2R +\log\frac{16 \,\, \Gamma\left(\frac{3}{2}+\frac{1}{2\epsilon}\right)^4}{(1-\epsilon) \,\sqrt{\epsilon} \, \Gamma\left(2+\frac{1}{\epsilon}\right)^3}~, \end{eqnarray} where $\Gamma$ is the Euler gamma function. The $R$-dependence in \eqref{firstsumcircle} cancels against the $\mathcal{O}(\mu^0)$ contribution stemming from the regularization-induced sum in the second line of \eqref{susyregcircle} \begin{eqnarray}\label{othertermscircle} \!\!\!\!\!\!
&& \frac{\mu}{2} \sum _{\ell\geq 1} e^{-\mu \ell}\left(\Omega ^{F}_{\ell+\frac{1}{2}}(0)-\Omega^{F}_{\ell-\frac{1}{2}}(0)\right)\nonumber \\ &&=\mu \sum_{\ell\geq3} e^{-\ell\,\mu} \, \Big[\,2 R+\log \frac{(\ell-1) \ell (1-\epsilon) (2\ell+1+\frac{1}{\epsilon})} {(\ell+1)^2 (1+\epsilon)(2\ell-1+\frac{1}{\epsilon})}\Big] \\ &&= 2R-2\,\,\textrm{arctanh}\,\epsilon\,.\nonumber \end{eqnarray} Summing all contributions and finally taking $\epsilon\to0$, the result is precisely as in~\cite{Kruczenski:2008zk} \begin{equation}\label{Gammacircular} \Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0\!=0)=\frac{1}{\epsilon }\left(\log \frac{\epsilon }{4} +1\right) +\frac{1}{2} \log (2 \pi )~, \end{equation} despite the different frequency arrangement we commented on. We have checked that the same result is obtained employing $\zeta$-function regularization in the sum over $\ell$. The same finite part was found in \cite{Buchbinder:2014nia} via heat kernel methods. There is no theoretical motivation for the $\log\epsilon/\epsilon$-divergences appearing in \eqref{Gammacircular}, which will be cancelled in the ratio \eqref{mainratiolog}. In~\cite{Kruczenski:2008zk}, this kind of subtraction has been done by considering the ratio between the circular and the straight line Wilson loop. \subsection{Ratio between latitude and circular Wilson loops} \label{ratio} In this section we describe the evaluation of the ratio \eqref{mainratiolog} \begin{eqnarray}\label{mainratioresult} \log\frac{Z\left(\lambda,\theta_0\right)}{Z\left(\lambda,0\right)}= \sqrt{\lambda} (\cos\theta_0-1)+ \Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0\!=0)-\Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0) \end{eqnarray} where $\Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0\!=0)$ is in \eqref{susyregcircle} and $\Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0)$ is regularized analogously. The complicated fermionic determinants \eqref{finalferm}-\eqref{finalfermomeganotspecial} make an analytical treatment highly non-trivial, and we proceed numerically. 
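Before turning to the latitude numerics, the Abel-type limit \eqref{othertermscircle} of the circular case offers a simple test of the numerical setup. The sketch below (with arbitrary sample values of $R$ and $\epsilon$) reproduces $2R-2\,\textrm{arctanh}\,\epsilon$ to the expected $\mathcal{O}(\mu\log\mu)$ accuracy:

```python
import math

# Sample values; R and eps are arbitrary here, mu is the small regulator.
R, eps, mu = 1.3, 0.1, 1e-4
c = 1.0 / eps

def summand(l):
    """The bracket in the regularization-induced sum of the circular case."""
    return 2 * R + math.log((l - 1) * l * (1 - eps) * (2 * l + 1 + c)
                            / ((l + 1) ** 2 * (1 + eps) * (2 * l - 1 + c)))

total = mu * sum(math.exp(-mu * l) * summand(l) for l in range(3, 400000))
assert abs(total - (2 * R - 2 * math.atanh(eps))) < 1e-2
```

The limit works because $\mu\sum_{\ell}e^{-\mu\ell}f(\ell)\to\lim_{\ell\to\infty}f(\ell)$ as $\mu\to0$ for a convergent $f$, and here $f(\ell)\to 2R+\log\frac{1-\epsilon}{1+\epsilon}=2R-2\,\textrm{arctanh}\,\epsilon$.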
First, we spell out \eqref{mainratioresult} as \begin{eqnarray}\label{gammanumerics} \!\!\!\!\! \Gamma^{(1)}_{\textrm{UV-reg}}(0)-\Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0) &=& \sum _{\ell =-2}^{2} \textstyle\left[\Omega ^{B}_{\ell}(0)-\Omega ^{B}_{\ell}(\theta_0) -\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(0)+\Omega^{F}_{\ell-\frac{1}{2}}(0)}{2} +\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(\theta_0)+\Omega^{F}_{\ell-\frac{1}{2}}(\theta_0)}{2}\right]\nonumber\\ &+&\sum_{\ell=3}^{\Lambda} 2\!\textstyle\left[\Omega ^{B}_{\ell}(0)-\Omega ^{B}_{\ell}(\theta_0) -\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(0)+\Omega^{F}_{\ell-\frac{1}{2}}(0)}{2} +\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(\theta_0)+\Omega^{F}_{\ell-\frac{1}{2}}(\theta_0)}{2}\right]\nonumber\\ &-&\left(\chi_v(\theta_0)-\chi_v(0)\right) \log\Lambda +\frac{\mu}{2}\,\big[\,\Omega^{F}_{\frac{1}{2}}(0)-\Omega^{F}_{\frac{1}{2}}(\theta_0) \,\big]\\ &+&\frac{\mu}{2} \sum _{\ell\geq 1} e^{-\mu \ell} \left[\Omega ^{F}_{\ell+\frac{1}{2}}(0)-\Omega^{F}_{\ell-\frac{1}{2}}(0) -\Omega ^{F}_{\ell+\frac{1}{2}}(\theta_0)+\Omega^{F}_{\ell-\frac{1}{2}}(\theta_0)\right]\nonumber \end{eqnarray} where we separated the lower modes $|\ell |\leq 2$ from the sum in the second line~\footnote{This is convenient because of the different form for the special modes \eqref{specialmode1}-\eqref{specialmodes} together with the relabeling discussed above.}, and in the latter we have used parity $\ell\to-\ell$. The sum multiplied by the small regulator $\mu$ is zero in the limit $\mu\to0$~\footnote{This can be proved analytically, since the summand behaves as $\mu \, e^{-\mu \ell} \ell^{-2}$ for large $\ell$: the sum then vanishes as $\mu\to0$.}.
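The vanishing claimed in the footnote is easy to see numerically: for a summand decaying like $\ell^{-2}$, the Abel-regularized sum scales linearly with $\mu$. A minimal sketch:

```python
import math

def regulated(mu, L=200000):
    """mu * sum_{l>=1} exp(-mu*l)/l^2, bounded above by mu * pi^2/6."""
    return mu * sum(math.exp(-mu * l) / l ** 2 for l in range(1, L))

# the regulated sum shrinks linearly with mu, hence vanishes as mu -> 0
assert regulated(1e-2) < 2e-2
assert regulated(1e-3) < 2e-3
assert regulated(1e-4) < 2e-4
```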
The sum with large cutoff $\Lambda$ can then be numerically evaluated using the Euler-Maclaurin formula \begin{eqnarray} \sum_{\ell=m+1}^{n}f\left(\ell\right) &=& \int_{m}^{n}f\left(\ell\right)d\ell+\frac{f\left(n\right)-f\left(m\right)}{2}+\sum_{k=1}^{p}\frac{B_{2k}}{\left(2k\right)!}\left[f^{(2k-1)}\left(n\right)-f^{(2k-1)}\left(m\right)\right]\nonumber\\ &&-\int_{m}^{n}f^{(2p)}\left(\ell\right) \: \frac{B_{2p}\left(\{\ell\} \right)}{\left(2p\right)!}d\ell \,, \qquad \qquad p\geq1\,, \end{eqnarray} in which $B_n(x)$ is the $n$-th Bernoulli polynomial, $B_n=B_n(0)$ is the $n$-th Bernoulli number, $\{\ell\}$ is the fractional part of $\ell$, $f(\ell)$ is the summand in the second line of \eqref{gammanumerics}, so $m=2$, $n=\Lambda$. After some manipulations to improve the rate of convergence of the integrals, we safely send $\Lambda\to\infty$ in order to evaluate the normalized effective action \begin{eqnarray} \label{gammanumerics2} \Delta\Gamma(\theta_0)_{\textrm{sm}}&\equiv&\left[\Gamma^{(1)}_{\textrm{UV-reg}}(0)-\Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0)\right]_{\textrm{sm}} \\\nonumber &=& \sum _{\ell =-2}^{2} \left[\Omega ^{B}_{\ell}(0)-\Omega ^{B}_{\ell}(\theta_0) -\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(0)+\Omega^{F}_{\ell-\frac{1}{2}}(0)}{2} +\frac{\Omega ^{F}_{\ell+\frac{1}{2}}(\theta_0)+\Omega^{F}_{\ell-\frac{1}{2}}(\theta_0)}{2}\right]\nonumber\\\nonumber && +\int_{2}^{\infty}\left[f\left(\ell\right)-\frac{\chi_{v}(\theta_0)-\chi_{v}(0)}{\ell}\right]d\ell-\left(\chi_{v}(\theta_0)-\chi_{v}(0)\right)\log2\\ && -\frac{f\left(2\right)}{2} -\sum_{k=1}^{3}\frac{B_{2k}}{\left(2k\right)!} f^{(2k-1)}\left(2\right)-\frac{1}{6!}\int_{2}^{\infty}f^{(6)}\left(\ell\right) \: B_{6}\left(\{\ell\}\right)d\ell \nonumber\,.
\end{eqnarray} In order to gain numerical stability for large $\ell$, above we have set $p=3$, we have cast the Lerch transcendents inside $\Omega^F_s\left(\theta_0\right)$ -- see \eqref{finalferm} -- into hypergeometric functions \begin{gather} \Phi(z,1,a) = \frac{{}_2 F_1(1,a;a+1;z)}{a} \,, \qquad\qquad |z|<1 \wedge z\neq0\, , \end{gather} and we have approximated the derivatives $f^{(k)}(\ell)$ by finite-difference expressions \begin{gather} f^{(k)}(\ell) \,\, \to \,\, {\Delta\ell}^{-k} \sum_{i=0}^{k} (-1)^i \binom{k}{i} f \left(\ell+ (\textstyle\frac{k}{2}-i )\Delta\ell \right) \,, \qquad \qquad \Delta\ell \ll 1~. \end{gather} At this stage, the expression \eqref{gammanumerics2} is only a function of the latitude parameter $\sigma_0$ ({\it i.e.} the polar angle $\theta_0$ in \eqref{theta_0_and_sigma_0}) and of two parameters -- the IR cutoff $\epsilon_0$ and the derivative discretization $\Delta\ell$, both small compared to a given $\sigma_0$. We have tuned them in order to confidently extract four decimal digits. \begin{figure} \centering \begin{subfigure}[b]{0.39\textwidth} \includegraphics[scale=0.5]{plot1.png} \caption{Comparison between $\Delta\Gamma(\theta_0)_{\rm sm}$ in \eqref{gammanumerics2} (orange dots) and $\Delta\Gamma(\theta_0)_{\rm loc}$ in \eqref{gammalocalization} (blue line). We set $\epsilon_0=10^{-7}$, $\Delta\ell=10^{-9}$. } \label{fig:Deltagamma} \end{subfigure} ~ \qquad\qquad\qquad \begin{subfigure}[b]{0.43\textwidth} \includegraphics[scale=0.5]{plot2.png} \caption{Fitting of the discrepancy \eqref{discrepancy} (red dots) with the test function $-\frac{1}{2}\log(1+e^{-2\sigma_0})$ (black line). We set $\epsilon_0=10^{-7}$, $\Delta\ell=10^{-9}$. 
The interval covers approximately $0.8\degree \leq \theta_0 \leq 89.4\degree$.} \label{fig:Remainder} \end{subfigure} ~ \caption{Comparison between string sigma-model perturbation theory and the predictions coming from supersymmetric localization for the ratio between latitude and circular Wilson loops in terms of the corresponding one-loop sigma-model (differences of) effective actions. }\label{fig:numerics} \end{figure} In Figure \ref{fig:Deltagamma} we compare the regularized one-loop effective action obtained from the perturbation theory of the string sigma-model \eqref{gammanumerics2} to the gauge theory prediction from \eqref{mainratio} \begin{gather} \label{gammalocalization} \Delta\Gamma(\theta_0)_{\textrm{loc}}\equiv\left[\Gamma^{(1)}_{\textrm{UV-reg}}(0)-\Gamma^{(1)}_{\textrm{UV-reg}}(\theta_0)\right]_{\textrm{loc}}= -\frac{3}{2}\log\tanh\sigma_0 \end{gather} for different values of $\sigma_0$. Data points cover almost entirely~\footnote{When pushed to higher accuracy, numerics is computationally expensive in the vicinity of the two limiting cases ($\sigma_0=0,\,\theta_0=\frac{\pi}{2}$) and ($\sigma_0=\infty,\,\theta_0=0$).} the finite-angle region between the Zarembo Wilson loop ($\sigma_0=0,\,\theta_0=\frac{\pi}{2}$) and the circular Wilson loop ($\sigma_0=\infty,\,\theta_0=0$). The vanishing of the normalized effective action in the large-$\sigma_0$ region is a trivial check of the normalization. As the opposite limit $\sigma_0=0$ is approached, the difference \eqref{gammanumerics2} bends up ``following'' the localization curve \eqref{gammalocalization} but also significantly deviates from it, and the measured discrepancy is incompatible with our error estimation. Numerics is however accurate enough to quantify the gap between the two plots over a wide range.
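The Euler-Maclaurin evaluation used above can be illustrated on a convergent toy summand; the sketch below keeps only the $p=1$ terms (dropping the remainder integral) and approximates the integral by a trapezoidal rule. All names and the choice of toy summand are illustrative, not taken from the paper:

```python
import math

def euler_maclaurin_p1(f, df, m, n, steps=200_000):
    """Approximate sum_{l=m+1}^{n} f(l) via Euler-Maclaurin with p = 1,
    dropping the remainder integral (B_2 = 1/6)."""
    h = (n - m) / steps
    # composite trapezoidal approximation of the integral of f over [m, n]
    integral = h * (sum(f(m + k * h) for k in range(1, steps))
                    + 0.5 * (f(m) + f(n)))
    return integral + 0.5 * (f(n) - f(m)) + (1 / 6) / 2 * (df(n) - df(m))

f  = lambda l: 1 / l**2          # toy summand (convergent tail)
df = lambda l: -2 / l**3         # its derivative, needed for the B_2 term

direct = sum(f(l) for l in range(3, 1001))       # brute-force sum, l = 3..1000
approx = euler_maclaurin_p1(f, df, 2, 1000)      # Euler-Maclaurin estimate
```

Already at $p=1$ the two agree to roughly the size of the first dropped Bernoulli correction; the paper's evaluation uses $p=3$ for higher accuracy.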
Figure \ref{fig:Remainder} shows that, surprisingly, this gap perfectly overlaps with a very simple function of $\sigma_0$ within the sought accuracy \begin{equation}\label{discrepancy} {\rm Rem }(\theta_0)\equiv\Delta\Gamma(\theta_0)_{\rm sm}-\Delta\Gamma(\theta_0)_{\rm loc} \approx -\frac{1}{2} \log(1+e^{-2\sigma_0})=\log\cos\textstyle{\frac{\theta_0}{2}}~. \end{equation} We notice at this point that the \emph{same} simple result above can be obtained by taking in \eqref{gammanumerics} the limit $\epsilon\to0$ \emph{before} performing the sums. As one can check, in this limit UV and IR divergences cancel in the ratio~\footnote{This is also due to the volume part of the Euler number $\chi_v(\theta_0)$ being independent of $\sigma_0$ up to $\epsilon$ corrections, see \eqref{Eulervolume}.}, the special functions in the fermionic determinants disappear and, because the summands drastically simplify, one can proceed analytically, obtaining the same result found numerically. We remark however that such an inversion of the order of sum and limit on the IR cutoff cannot be a priori justified, as it would improperly relate the $\Lambda$ cutoff with a $1/\epsilon$ cutoff ({\it e.g.} forcing $\ell$ to be smaller than $1/\epsilon$). As emphasized above, in this limit the effective actions for the latitude and circular case separately do not reproduce the expected UV divergences. Therefore, the fact that in this limit the summands in the difference \eqref{gammanumerics} show a special property of convergence (which we have not analyzed in detail) and lead to the exact result is a priori highly non-obvious, rendering the numerical analysis carried out in this section a necessary step.
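The two forms of the remainder function in \eqref{discrepancy} can be checked to agree identically, assuming the relation $\cos\theta_0=\tanh\sigma_0$ between the latitude angle and $\sigma_0$ (our reading of \eqref{theta_0_and_sigma_0}); indeed $\cos^2\frac{\theta_0}{2}=\frac{1+\tanh\sigma_0}{2}=\frac{1}{1+e^{-2\sigma_0}}$. A minimal numerical check:

```python
import math

# Assumed parameter relation: cos(theta_0) = tanh(sigma_0).
def rem_sigma(s):
    # -1/2 log(1 + e^{-2 sigma_0})
    return -0.5 * math.log(1 + math.exp(-2 * s))

def rem_theta(s):
    # log cos(theta_0 / 2), with theta_0 obtained from sigma_0
    theta = math.acos(math.tanh(s))
    return math.log(math.cos(theta / 2))

# The two expressions coincide for any sigma_0 > 0.
checks = [(s, rem_sigma(s), rem_theta(s)) for s in (0.1, 0.5, 1.0, 3.0)]
```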
On a related note, the simplicity of the result \eqref{discrepancy} and the possibility of getting an analytical result for the maximal circle $\theta_0=0$ suggest that the summation \eqref{susyreg} could have been performed analytically also in the latitude case $\theta_0\neq0$. We have not further investigated this direction. \section{Conclusions} \label{sec:conclusions} In this paper we calculated the ratio between the $\textup{\textrm{AdS}}_5\times \textup{\textrm{S}}^5$ superstring one-loop partition functions of two supersymmetric Wilson loops with the same topology. In so doing, we address the question of whether this procedure -- which should eliminate possible ambiguities related to the measure of the partition function, under the assumption that the latter only depends on worldsheet topology -- leads to a long-sought agreement with the exact result known via localization at this order, formula \eqref{gammalocalization}. Our answer is that, in the standard setup we have considered for the evaluation of the one-loop determinants (Gel'fand-Yaglom approach with Dirichlet boundary conditions at the boundaries, one of which is fictitious\footnote{See also Appendix~\ref{bclower} where a minimally different choice for the boundary conditions on the bosonic and fermionic modes with small Fourier mode is considered, and shown not to affect the final result.}), the agreement is not found. A simple numerical fit allows us to quantify exactly a ``remainder function'', formula \eqref{discrepancy}~\footnote{See also discussion below \eqref{discrepancy}, where we notice that the same result is obtained analytically via the {\it a priori} not justified ``order-of-limits'' inversion.}.
As already emphasized, the expectation that considering the ratio of string partition functions dual to Wilson loops with the same topology should cancel measure-related ambiguities is founded on the assumption that the partition function measure actually does \emph{not} depend on the particular classical solution considered. Although motivated in light of literature examples similar in spirit (see Introduction), this remains an assumption, and it is not possible to exclude a priori a geometric interpretation for the observed discrepancy. One reasonable expectation is that the disagreement should be cured by a change of the world-sheet computational setup, tailored so as to naturally lend itself to a regularization scheme equivalent to the one (implicitly) assumed by the localization calculation~\footnote{Morally, this resembles the quest for an ``integrability-preserving'' regularization scheme, different from the most natural one suggested by worldsheet field theory considerations, in the worldsheet calculations of light-like cusps in $\mathcal{N}=4$ SYM~\cite{Giombi:2009gd} and ABJM theory~\cite{McLoughlin:2008he}.}. One possibility is a choice of boundary conditions for the fermionic spectral problem~\footnote{For the bosonic sector, we do not find a reasonable alternative to the Dirichlet boundary conditions.} different from the standard ones here adopted for the squared fermionic operator~\footnote{For example, instead of squaring one could consider the Dirac-like first-order operator~(3.29). Then, Dirichlet boundary conditions would lead to an overdetermined system for the arbitrary integration constants of the $2\times2$ matrix-valued, first-order eigenvalue problem. The question of the non-obvious alternative to consider is likely to be tied to a search for SUSY-preserving boundary conditions on the lines of~\cite{Sakai:1984vm}.}. Also, ideally one should evaluate determinants in a diffeomorphism-preserving regularization scheme.
Since it treats the worldsheet coordinates asymmetrically, the by now standard procedure of employing the Gel'fand-Yaglom technique for the effective (after Fourier-transforming in $\tau$) one-dimensional case at hand does not by definition fall in this class. In other words, the choice of using a $\zeta$-function-like regularization -- the Gel'fand-Yaglom method -- in $\sigma$ and a cutoff regularization in Fourier $\omega$-modes is a priori arbitrary. To bypass these issues it would be desirable to fully develop a higher-dimensional formalism on the lines of~\cite{Dunne:2006ct, Kirsten:2010eg}. A likewise fully two-dimensional method to deal with the spectral problems is the heat kernel approach, which has been employed at least for the circular Wilson loop case (where the relevant string worldsheet is the homogeneous subspace $\textup{\textrm{AdS}}_2$) in~\cite{Buchbinder:2014nia,Bergamin:2015vxa}. As explained there, the procedure bypasses the need for a large $\sigma$ regulator and makes $\epsilon$ appear only in the $\textup{\textrm{AdS}}_2$ regularized volume, the latter being a constant multiplying the traced heat kernel and thus appearing as an overall factor in the effective action. This is different from what happens with the Gel'fand-Yaglom method, where different modes carry a different $\epsilon$-structure and one has to identify and subtract by hand the $\epsilon$-divergence in the one-loop effective action. However, little is known about explicit heat kernel expressions for the spectra of Laplace and Dirac operators in arbitrary two-dimensional manifolds, as is the case as soon as the parameter $\theta_0$ is turned on. The application of the heat kernel method for the latitude Wilson loop then seems feasible only in a perturbative approach, {\it i.e.} in the small $\theta_0$ regime when the worldsheet geometry is nearly $\textup{\textrm{AdS}}_2$~\footnote{We are grateful to A. Tseytlin for a discussion on these points.}.
It is highly desirable to address these or further possibilities in future investigations. \section*{Acknowledgements} We acknowledge useful discussions with Xinyi Chen-Lin, Amit Dekel, Sergey Frolov, Simone Giombi, Jaume Gomis, Thomas Klose, Shota Komatsu, Martin Kruczenski, Daniel Medina Rincon, Diego Trancanelli, Pedro Vieira, Leo Pando Zayas, and in particular with Nadav Drukker, Arkady Tseytlin, and Konstantin Zarembo. We also thank A. Tseytlin and the Referee of the published version for useful comments on the manuscript. The work of VF and EV is funded by DFG via the Emmy Noether Programme \emph{``Gauge Field from Strings''}. VF thanks the Yukawa Institute for Theoretical Physics in Kyoto, the Centro de Ciencias de Benasque ``Pedro Pascual'', and the Institute of Physics in Yerevan and in Tbilisi for their kind hospitality during the completion of this work. The research of VGMP was supported in part by the University of Iceland Research Fund. EV acknowledges support from the Research Training Group GK 1504 \emph{``Mass, Spectrum, Symmetry''} and from the Seventh Framework Programme [FP7-People-2010-IRSES] under grant agreement n. 269217 (UNIFY), and would like to thank the Perimeter Institute for Theoretical Physics and NORDITA for hospitality during the completion of this work. All authors would like to thank the Galileo Galilei Institute for Theoretical Physics for hospitality during the completion of this work. \\
\section{Conclusion} We address the problem of unsupervised sarcasm generation that models several sarcasm factors including reversal of valence and semantic incongruity with the context. The key contribution of our approach is the modeling of commonsense knowledge in a retrieve-and-edit generation framework. A human evaluation based on four criteria shows that our generation approach significantly outperforms a state-of-the-art model. Compared with human-generated sarcasm, our model shows promise particularly for creativity, humor and sarcasticness, but less for grammaticality. A bigger challenge in sarcasm generation and, more generally, creative text generation is to capture the difference between creativity (novel but well-formed material) and nonsense (ill-formed material). Language models conflate the two, so developing methods that are nuanced enough to recognize this difference is key to future progress. \section*{Acknowledgments} This work was supported in part by the MCS program under Cooperative Agreement N66001-19-2-4032 and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The authors would like to thank Christopher Hidey, John Kropf, Anusha Bala and Christopher Robert Kedzie for useful discussions. The authors also thank members of PLUSLab at the University of Southern California and the anonymous reviewers for helpful comments. \section{Unsupervised Sarcasm Generation} \label{sec:approach} An overview of the sarcasm generation pipeline is shown in Figure \ref{figure:pipeline}. In this section, we detail the three main modules that are designed to instantiate the key sarcasm factors.
\subsection{Reversal of Valence} \label{sec:rev1} As sarcasm is a type of verbal irony used to mock or convey contempt, in most sarcastic messages we encounter a positive sentiment towards a negative situation (i.e., ironic criticism \cite{kreuz2002asymmetries}). This observation is also supported by research on sarcasm detection, particularly on social media. Hence, for our sarcasm generation task, we focus on transforming a literal utterance with negative valence into positive valence. To implement the reversal of valence, as highlighted in the yellow background in Figure \ref{figure:pipeline}, we first identify the evaluative words and replace them with their lexical antonyms using WordNet \cite{miller1995wordnet}. As we expect the evaluative words to be negative words, we rely on the word-level negative scores obtained from SentiWordNet \cite{esuli2006sentiwordnet}. In the absence of words with negative polarity, we check if there is the negation word \textit{not} or a word ending with \textit{n't} and remove it. In case there are both negative words and \textit{not} (or words ending in \textit{n't}), we handle only one of them. Given the non-sarcastic example \textit{``zero visibility in fog makes driving {\bf difficult}''} shown in Figure \ref{figure:pipeline} and which we use as our running example, the reversal of valence module generates \textit{``zero visibility in fog makes driving {\bf easy}''}. \subsection{Retrieval of Commonsense Context} \label{sec:common} As discussed before, a straightforward reversal of valence might not generate sarcastic messages that display a clear semantic incongruity, and thus, additional context is needed. We propose an approach to retrieve relevant context for the sarcastic message based on commonsense knowledge. First, we generate commonsense knowledge based on ConceptNet (e.g., ``driving in zero visibility'' causes ``accidents'') (Section \ref{section:kb}).
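Before moving on to the retrieval step, the reversal-of-valence module described above can be sketched as follows; the toy dictionaries are stand-ins for the SentiWordNet negativity scores and WordNet antonyms actually queried:

```python
# Toy stand-ins for SentiWordNet scores and WordNet antonyms (the actual
# module queries those lexical resources instead).
NEG_SCORE = {"difficult": 0.6, "worst": 0.9, "gross": 0.8}
ANTONYM   = {"difficult": "easy", "worst": "best", "gross": "lovely"}

def reverse_valence(sentence):
    tokens = sentence.split()
    negatives = [t for t in tokens if NEG_SCORE.get(t, 0.0) > 0.5]
    if negatives:
        # Replace the evaluative negative word with its lexical antonym.
        target = negatives[0]
        return " ".join(ANTONYM[target] if t == target else t for t in tokens)
    # Otherwise drop a single negation ("not" or a token ending in "n't").
    kept, removed = [], False
    for t in tokens:
        if not removed and (t == "not" or t.endswith("n't")):
            removed = True
            continue
        kept.append(t)
    return " ".join(kept)

print(reverse_valence("zero visibility in fog makes driving difficult"))
# -> "zero visibility in fog makes driving easy"
```

When both a negative word and a negation are present, only the negative word is handled, matching the "handle only one of them" rule above.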
Second, we retrieve candidate context sentences that contain the commonsense concept from a retrieval corpus (Section \ref{section:retrieve}) and edit them for grammatical consistency with the input message (Section \ref{section:gr}). \subsubsection{Commonsense Reasoning} \label{section:kb} We extract nouns, adjectives, adverbs, and verbs from the non-sarcastic input messages and feed them as input to the COMET \cite{comet} model to generate commonsense knowledge (highlighted in green background in Figure \ref{figure:pipeline}). COMET is an adaptation framework for constructing commonsense knowledge based on pre-trained language models. It starts from a pre-trained GPT~\cite{gpt} model and fine-tunes it on commonsense knowledge tuples (in our case, ConceptNet~\cite{conceptnet}). These tuples provide COMET with the knowledge base structure and relations that must be learned, and COMET adapts the representations that the language model learned from the pre-training stage to add novel nodes to the seed knowledge graph. Our work only leverages the \textbf{causes} relation. For instance, from our running example, we first remove the stopwords and then extract nouns, adjectives, adverbs, and verbs including the terms \textit{zero}, \textit{visibility}, \textit{fog}, \textit{makes}, \textit{driving}, and \textit{difficult} to feed to COMET as inputs. In turn, COMET returns the probable causes with their probability scores. For the running example, COMET returns with the highest probability that these terms may cause an \textbf{accident} (illustrated in Figure \ref{figure:comet}). For further details regarding COMET, please see \newcite{comet}. \begin{figure*}[ht] \centering \includegraphics[scale=0.18]{comet.png} \caption{\label{figure:comet} Model predictions from COMET.
The edges are sorted by probability.} \end{figure*} \subsubsection{Retrieving Sentences Containing Commonsense Concepts} \label{section:retrieve} Once we obtain the most probable output from COMET, the next step is to retrieve sentences containing the commonsense word or phrase from a retrieval corpus. We impose several constraints: (a) the retrieved sentences should contain the commonsense concept at the beginning or at the end; (b) sentence length should be less than twice the number of tokens in the non-sarcastic input to keep consistency between the length of the non-sarcastic input and its sarcastic version. If the commonsense phrase is not present in the retrieval corpus, we retrieve sentences containing the nouns within the topmost phrase. For example, if COMET yields that \textit{microwave burger awful} causes the phrase \textbf{food to spoil}, and this phrase does not appear in any sentence in the retrieval corpus, we search for \textit{food} and later replace it in the retrieved sentence with \textit{food to spoil}. COMET often returns output with common phrases such as \textit{you to be}, \textit{you to get}, \textit{person will be}, \textit{you have}, which we also remove while keeping the main content word (i.e., the commonsense concept). We use Sentencedict.com, an online sentence dictionary where one can find high-quality sentences for almost every word, as the retrieval corpus, applying the above constraints.\footnote{https://sentencedict.com/} \subsubsection{Grammatical Consistency}\label{section:gr} We first check whether the retrieved sentences are consistent with the non-sarcastic input in terms of the pronouns. If the pronouns are mismatched, then we modify the pronoun of the retrieved sentence to match the pronoun of the non-sarcastic input. In case the non-sarcastic input does not have any pronoun, but the retrieved sentence does, we simply change that pronoun to ``I''.
For example, if the non-sarcastic input sentence is \textit{``Ignoring texts is literally the worst part of communication.''} and the retrieved commonsense sentence is \textit{``\textbf{He} has never suffered the torment of rejection.''}, we modify the retrieved sentence to \textit{``\textbf{I} have never suffered the torment of rejection.''} to keep the pronoun use consistent. After correcting the pronouns and proper names (in the same way as pronoun correction), we feed the corrected sentences into the Neural Grammatical Error Corrections System \cite{zhao2019improving} to correct any pronoun or gender-specific errors introduced by the replacements. \subsection{Ranking for Semantic Incongruity} \label{sec:ranking} After the grammatical error correction, the next step is to select the best context sentence from the retrieved results. Since we expect the context sentences to be incongruous with the sentence generated by the reversal of valence approach (Section \ref{sec:rev1}), we rank the context sentences by semantic incongruity scores and select the best candidate. We frame the problem of measuring semantic incongruity as a Natural Language Inference (NLI)~\cite{nli} task. The Multi-Genre NLI corpus \cite{multinli} covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization, making it an ideal choice as our NLI dataset.
We first fine-tune RoBERTa-large \cite{roberta}, a state-of-the-art pre-trained language model, for 3-way classification (i.e., contradiction, entailment, and neutral) on the Multi-NLI dataset. Next, for each retrieved sentence, we treat it as the \emph{premise} and the sentence generated by the reversal of valence as the \emph{hypothesis}, and thus, obtain a contradiction score from the trained model. Finally, the scores obtained for the \textit{contradiction} class are used as a proxy for the degree of \textit{semantic incongruity} and we select the context with the highest score. Figure \ref{figure:pipeline} shows the region with light purple background as our incongruity ranking module. \subsection{Implementation Details} We use the pre-trained COMET model\footnote{https://github.com/atcbosselut/comet-commonsense} for commonsense reasoning with a greedy decoding of five to generate a commonsense phrase and return the topmost phrase that has no lexical overlap with the input. If the generated phrase contains stopwords at the beginning, we remove them. For incorporating semantic incongruity, we use the RoBERTa-large model with 355M parameters and fine-tune it on MNLI. For the grammatical error correction model, we use an open-source pre-trained model.\footnote{https://github.com/zhawe01/fairseq-gec} \subsection{Dataset} \label{section:dataset} \label{sec:data} \newcite{phrasing} released a dataset of 4,762 pairs of speakers' sarcastic messages and hearers' interpretations by conducting a crowdsourcing experiment. \newcite{sign} introduced a dataset of 3,000 sarcastic tweets, each interpreted by five human judges, and presented a novel task of sarcasm interpretation. Both datasets were collected using the hashtag \textit{\#sarcasm} from Twitter. We merge these two datasets and choose non-sarcastic utterances no longer than 15 words.
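The contradiction-based ranking described above can be sketched as follows; the scorer is a stub standing in for the contradiction probabilities of the MNLI-fine-tuned RoBERTa-large model, with illustrative scores for the ``Burnt popcorn'' example:

```python
def contradiction_score(premise, hypothesis):
    # Stub: the real system returns softmax(RoBERTa(premise, hypothesis))
    # evaluated on the "contradiction" class.  Scores below are illustrative.
    toy = {
        ("The smell made me want to vomit.", "Burnt popcorn is lovely."): 0.91,
        ("Hold the bag in case I vomit.", "Burnt popcorn is lovely."): 0.74,
    }
    return toy.get((premise, hypothesis), 0.0)

def rank_contexts(contexts, reversed_sentence):
    # Each retrieved context is the NLI premise; the reversed-valence
    # sentence is the hypothesis.  The highest contradiction score wins.
    scored = [(contradiction_score(c, reversed_sentence), c) for c in contexts]
    return max(scored)[1]

best = rank_contexts(
    ["The smell made me want to vomit.", "Hold the bag in case I vomit."],
    "Burnt popcorn is lovely.",
)
```

In the full pipeline the selected context is appended to the reversed-valence sentence to form the final sarcastic message.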
For each literal non-sarcastic utterance we also keep the corresponding gold sarcastic message, which is useful for evaluation and comparison purposes. We randomly select 150 utterances as part of the test set (i.e., five times more than the size of the test data in \newcite{abhijit}), while ensuring that such utterances do not have high lexical overlap. We impose this constraint to evaluate how our methods deal with diverse data. \section{Experimental Setup} \label{section:exp_setup} \input{dataset} \begin{table*}[] \centering \begin{tabular}{|l|l|l|l|l|} \hline System & Sarcasticness & Creativity & Humor & Grammaticality \\ \hline {State-of-the-art \cite{abhijit}} & 1.63 & 1.60 & 1.50 & 1.46 \\ \hline Human Generated & \textbf{3.57} & 3.16 & \textbf{3.18} & 3.98 \\ \hline\hline Reversal of Valence (RV) & 3.00 & 2.80 & 2.72 & \textbf{\color{black}4.29} \\ \hline No Reversal of Valence (NoRV) & 1.79 & 2.28 & 2.09 & 3.91 \\ \hline No Semantic Incongruity (NSI) & 3.04 & 2.99 & 2.90 & 3.68 \\ \hline Full Model (FM) & 3.23* & \textbf{\color{black}3.24} & 3.08* & 3.69 \\ \hline \end{tabular} \caption{Average scores for generated sarcasm from all systems as judged by the Turkers. The scale ranges from 1 (\emph{not at all}) to 5 (\emph{very}). For creativity and grammaticality, our models are comparable to human annotation and significantly better than the state-of-the-art ($p<0.001$). For sarcasticness and humor, the full model is ranked 2nd by a small margin against the human-generated message (denoted by *).} \label{table:example2} \end{table*} \subsection{Systems for Experiment} Here, we benchmark the quality of the generated sarcastic messages by comparing multiple systems. \begin{enumerate} \item \textbf{Full Model (FM)}: This model consists of all three modules aimed at capturing reversal of valence, commonsense context, and semantic incongruity, respectively. \item \textbf{Reversal of Valence (RV)}: This model relies only on the reversal of valence component.
\item \textbf{No Reversal of Valence (NoRV)}: This model only retrieves commonsense contexts and ranks them based on semantic incongruity. \item \textbf{No Semantic Incongruity (NSI)}: This model relies only on the reversal of valence and retrieval of commonsense context, without ranking based on semantic incongruity. A randomly selected retrieved sentence is used. \item \textbf{MTS2019}: We make use of the model released by \newcite{abhijit} as it is the state-of-the-art sarcasm generation system.\footnote{https://github.com/TarunTater/sarcasm\_generation} \item \textbf{Human (Gold) Sarcasm}: As described in Section~\ref{sec:data}, we have gold sarcasm created by humans for every non-sarcastic utterance. \end{enumerate} \label{sec:results} \subsection{Evaluation Criteria} BLEU \cite{BLEU} is one of the most widely used automatic evaluation metrics for generation tasks such as Machine Translation. However, for creative text generation, it is not ideal to expect significant n-gram overlaps between the machine-generated and the gold-standard utterances. Hence, we performed a human evaluation. We evaluate a total of 900 generated utterances since our ablation study consisted of six different systems with 150 utterances each. Sarcasm is often linked with intelligence, creativity, and wit; thus we propose a set of four criteria to evaluate the generated output: (1) \textbf{Creativity} (``How creative are the utterances?''), (2) \textbf{Sarcasticness} (``How sarcastic are the utterances?''), (3) \textbf{Humour} (``How funny are the sentences?'') \cite{skalicky2018linguistic}, and (4) \textbf{Grammaticality} (``How grammatical are the sentences?''). We design an MTurk task where Turkers were asked to rate outputs from all the six systems. Each Turker was given the non-sarcastic utterance as well as a group of sarcastic utterances generated by all the six systems (randomly shuffled). Each criterion was rated on a scale from 1 (\emph{not at all}) to 5 (\emph{very}).
Finally, each utterance was rated by three individual Turkers. 55, 59, 66, and 60 Turkers attempted the HITs (inter-annotator agreement of 0.59, 0.53, 0.47 and 0.66 for the tasks on creativity, sarcasticness, humour and grammaticality, respectively using Spearman's correlation coefficient). \begin{table}[] \small \centering \begin{tabular}{|l|l|l|l|l|} \hline \multirow{2}{*}{Aspect} & \multicolumn{2}{l|}{FM vs Human} & \multicolumn{2}{l|}{FM vs MTS2019} \\ \cline{2-5} & win\% & lose\% & win\% & lose\% \\ \hline Sarcasticness & 34.0 & \textbf{55.3} & \textbf{90.0} & 6.0 \\ \hline Creativity & \textbf{48.0} & 36.0 & \textbf{95.3} & 4.0 \\ \hline Humor & 40.6 & \textbf{48.0} & \textbf{90.0} & 4.0 \\ \hline Grammaticality & 26.6 & \textbf{56.6} & \textbf{98.0} & 1.3 \\ \hline \end{tabular} \caption{Pairwise comparison between the full model (FM) and human generated sarcasm, and between the full model (FM) and the state-of-the-art model in \newcite{abhijit}. Win \% (lose \%) is the percentage of the FM gets a higher (lower) average score compared to the other method for the 150 human-rated sentences. The rest are ties.} \label{table:example3} \end{table} \begin{table*}[!ht] \small \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline Non Sarcastic & System & Sarcasm & S & C & H & G \\ \hline \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}I inherited \\unfavorable genes \\from my mother.\end{tabular}} & FM & \begin{tabular}[c]{@{}l@{}}I inherited great genes from my mother. \textbf{\color{black}Ugly} goes down \\to the bone.\end{tabular} & \textbf{5.0} & 4.0 & \textbf{3.6} & 3.6 \\ \cline{2-7} & RV & I inherited great genes from my mother. & 3.0 & 2.6 & 2.0 & 2.3 \\[5pt] \cline{2-7} & NoRV & \textbf{\color{black}Ugly} goes down to the bone. & 3.0 & 2.6 & 3.0 & \textbf{4.0} \\[5pt] \cline{2-7} & NSI & \begin{tabular}[c]{@{}l@{}}I inherited great genes from my mother. 
She makes me \\feel dowdy and \textbf{\color{black}ugly}.\end{tabular} & 2.6 & 3.6 & 3.0 & \textbf{4.0} \\[5pt] \cline{2-7} & MTS2019 & \begin{tabular}[c]{@{}l@{}}Butch tagging bullies apc seymour good temper\\[5pt] good mentor.\end{tabular} & 1.3 & 1.0 & 1.3 & 2.0 \\[5pt] \cline{2-7} & Human & Great I inherited all of my mother's GOOD genes & 2.3 & \textbf{4.3} & 2.0 & 2.6 \\[5pt] \hline \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}It is not fun to date \\a drug addict.\end{tabular}} & FM & \begin{tabular}[c]{@{}l@{}}It is fun to date a drug addict. Spent the night in a police \\cell after his \textbf{\color{black}arrest}.\end{tabular} & 4.3 & \textbf{5.0} & \textbf{4.6} & \textbf{5.0} \\[5pt] \cline{2-7} & RV & It is fun to date a drug addict. & \textbf{5.0} & 2.3 & 2.0 & 4.6 \\[5pt] \cline{2-7} & NoRV & Spent the night in a police cell after his \textbf{\color{black}arrest}. & 1.0 & 1.0 & 2.0 & 2.6 \\[5pt] \cline{2-7} & NSI & \begin{tabular}[c]{@{}l@{}}It is fun to date a drug addict. The feds completely \\screwed up the \textbf{\color{black}arrest}.\end{tabular} & 3.3 & 4.3 & 2.0 & 2.6 \\[5pt] \cline{2-7} & MTS2019 & \begin{tabular}[c]{@{}l@{}}Butch is a powerful addict in gente he is \\ an optimist great fun.\end{tabular} & 2.6 & 2.0 & 1.0 & 1.3 \\[5pt] \cline{2-7} & Human & Dating a drug addict .. Wouldn't that be fun. & 3.0 & 1.6 & 2.6 & 4.0 \\[5pt] \hline \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}I hate getting sick \\ from fast food.\end{tabular}} & FM & \begin{tabular}[c]{@{}l@{}}I love getting sick from fast food. \textbf{\color{black}Stomach ache} is just an\\ additional side effect.\end{tabular} & 3.3 & 3.6 & \textbf{5.0} & 3.6 \\[5pt] \cline{2-7} & RV & I love getting sick from fast food. & 3.3 & 2.6 & 3.6 & \textbf{5.0} \\[5pt] \cline{2-7} & NoRV & \textbf{\color{black}Stomach ache} is just an additional side effect. & 1.3 & 2.6 & 3.6 & 3.3 \\[5pt] \cline{2-7} & NSI & \begin{tabular}[c]{@{}l@{}}I love getting sick from fast food. 
I ate too much and got a \\terrible \textbf{\color{black}stomach ache}.\end{tabular} & 2.3 & 3.3 & 4.3 & \textbf{5.0} \\[5pt] \cline{2-7} & MTS2019 & \begin{tabular}[c]{@{}l@{}}I hate love sick to ikes sword lowest **** giving\\ stains giving stains on printers making pound accidents \\work bikinis in\end{tabular} & 1.0 & 1.3 & 1.3 & 1.0 \\[5pt] \cline{2-7} & Human & \begin{tabular}[c]{@{}l@{}}Shout out to the mcdonalds for giving me bad food and \\making me sick right before work in two hours.\end{tabular} & \textbf{4.0} & \textbf{4.3} & 4.0 & 4.3 \\[5pt] \hline \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Burnt popcorn is \\gross.\end{tabular}} & FM & \begin{tabular}[c]{@{}l@{}}Burnt popcorn is lovely. The smell made me want to \textbf{\color{black} vomit.}\end{tabular} & \textbf{4.6} & 3.0 & 3.3 & \textbf{5.0} \\[5pt] \cline{2-7} & RV & Burnt popcorn is lovely. & 4.0 & 2.0 & 3.6 & \textbf{5.0} \\[5pt] \cline{2-7} & NoRV & The smell made me want to \textbf{\color{black}vomit.} & 1.0 & 2.0 & 3.6 & 4.6 \\[5pt] \cline{2-7} & NSI & \begin{tabular}[c]{@{}l@{}}Burnt popcorn is lovely. Hold the bag in case I \textbf{\color{black} vomit}.\end{tabular} & 4.3 & 2.3 & 4.3 & \textbf{5.0} \\[5pt] \cline{2-7} & MTS2019 & \begin{tabular}[c]{@{}l@{}} reggae burnt popcorn lol .\end{tabular} & 2.3 & 1.3 & 2.0 & 1.0 \\[5pt] \cline{2-7} & Human & \begin{tabular}[c]{@{}l@{}}Gotta love the smell of burnt microwave popcorn.\end{tabular} & 3.3 & \textbf{3.3} & \textbf{4.0} & 4.0 \\[5pt] \hline \end{tabular} \caption{Examples of generated outputs from different systems. S, C, H, G represent Sarcasticness, Creativity, Humor and Grammaticality. Text in bolded black represents the commonsense word/phrase obtained from COMET given the non-sarcastic utterance.} \label{table:analysis} \end{table*} \section{Experimental Results} \subsection{Quantitative Scores} Table~\ref{table:example2} presents the scores for the above mentioned metrics of different systems averaged over 150 test utterances. 
Our full model, as well as the variations that ablate some components, improves over the state-of-the-art~\cite{abhijit} on all the criteria. The ablation in Table~\ref{table:example2} shows that our full model is superior to the individual modules in terms of sarcasticness, creativity and humor. For grammaticality, we observe that the Turkers scored shorter sentences higher (e.g., RV), which also explains why the NoRV model received a higher score than the full model. NoRV otherwise performed worse than all the other variations. In terms of creativity, our full model attains the highest average scores over all the other models, including sarcastic utterances composed by humans. For grammaticality, the reversal of valence model is the best, even better than the human-generated ones. The performance of the full model is the second best in terms of sarcasticness and humor, only slightly worse than human-generated sarcasm, showing the effectiveness of our approach that captures various factors of sarcasm. \subsection{Pairwise game between Full Model, State-of-the-art and Humans} Table \ref{table:example3} displays the pairwise comparisons between the full model (FM) and human-generated sarcasm, and between FM and \newcite{abhijit}, respectively. Given a pair of inputs, we decide win/lose/tie by comparing the average scores (over three Turkers) of both outputs. We see that FM dominates~\newcite{abhijit} on all the metrics and human-generated sarcasm on the creativity metric. For sarcasticness, although humans are better, the FM model still has a 34\% winning rate. \begin{figure}[t] \centering \includegraphics[scale=0.5]{sarc1.png} \caption{\label{figure:pie} Pie chart comparing the success rate of all the variations of our model.} \end{figure} \subsection{Ablation Study} We focus our ablation study on the metric of sarcasticness, as we consider this the main criterion for the success of generating sarcasm.
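For concreteness, the win/lose/tie bookkeeping used in these pairwise comparisons can be sketched in a few lines of Python. The rating triples below are illustrative toy values, not the actual study data.

```python
# Sketch of the pairwise comparison: each output is scored by three
# Turkers, systems are compared on per-utterance average scores, and
# equal averages count as ties.

def average(scores):
    """Mean of the Turker ratings for one output."""
    return sum(scores) / len(scores)

def pairwise_game(ratings_a, ratings_b):
    """Return (win%, lose%, tie%) for system A against system B.

    ratings_a, ratings_b: aligned lists of per-utterance rating triples.
    """
    wins = loses = ties = 0
    for triple_a, triple_b in zip(ratings_a, ratings_b):
        mean_a, mean_b = average(triple_a), average(triple_b)
        if mean_a > mean_b:
            wins += 1
        elif mean_a < mean_b:
            loses += 1
        else:
            ties += 1
    n = len(ratings_a)
    return 100 * wins / n, 100 * loses / n, 100 * ties / n

# Toy example with four utterances (hypothetical ratings).
fm    = [(5, 4, 4), (3, 3, 3), (2, 2, 1), (4, 4, 4)]
human = [(4, 4, 4), (3, 3, 3), (4, 5, 4), (3, 3, 3)]
win, lose, tie = pairwise_game(fm, human)  # 50.0, 25.0, 25.0
```

The same bookkeeping, applied per ablation variant, yields the win rates reported for the ablation study.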
As shown in Figure~\ref{figure:pie}, our best model (FM) outperforms the individual ablation modules. From the 150 examples, we kept the 60 with no ties for this comparison. The ablation component employing just \textit{Reversal of Valence} is second best for sarcasticness according to Figure~\ref{figure:pie}. Further, to understand the extent to which ranking the retrieved sentence based on the degree of incongruity helped generate better sarcasm, we took the outputs from FM and NSI for comparison. Out of the 150 utterances, 119 times there was no tie. Our best model (FM) wins 66\% of the time, while the NSI model wins 34\% of the cases. \section{Introduction} \label{section:introduction} Studies have shown that the use of sarcasm or verbal irony can increase creativity in both speakers and addressees~\cite{article}, and can serve different communicative purposes such as evoking humor and diminishing or enhancing critique \cite{burgers2012verbal}. Thus, developing computational models that generate sarcastic messages could impact many downstream applications, such as better conversational agents and creative or humorous content creation. While most computational work has focused on sarcasm detection \cite{sarc1,sarc6,riloff2013sarcasm,ghosh2015sarcastic,sarc4,muresan2016identification,sarc9,ghoshetal2017role,ghosh2018sarcasm}, research on sarcasm generation is in its infancy \cite{joshi2015sarcasmbot,abhijit}. \begin{table}[] \centering \small \begin{tabular}{|@{ }l@{ }|@{ }l@{ }|} \hline \textbf{Literal Input 1} & I hate getting sick from fast food.\\ \hline\hline \textbf{GenSarc1} & I love getting sick from fast food. \\ \hline \begin{tabular}[c]{@{}l@{}} \textbf{GenSarc2}\end{tabular} & \begin{tabular}[c]{@{}l@{}}[I love getting sick from fast food.]
[\\Stomach ache is just an additional side \\effect.]\end{tabular} \\ \hline \textbf{Human 1} & \begin{tabular}[c]{@{}l@{}}Shout out to the Mc donalds for giving\\ me bad food and making me sick right\\ before work in two hours.\end{tabular} \\ \hline \hline \textbf{Literal Input 2} & \begin{tabular}[c]{@{}l@{}}I inherited unfavorable genes from my \\mother.\end{tabular}\\ \hline\hline \textbf{GenSarc3} & \begin{tabular}[c]{@{}l@{}}I inherited great genes from my mother.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}} \textbf{GenSarc4}\end{tabular} & \begin{tabular}[c]{@{}l@{}}[I inherited great genes from my\\ mother.] [Ugly goes down to the bone.]\end{tabular} \\ \hline \textbf{Human 2} & \begin{tabular}[c]{@{}l@{}}Great I inherited all of my mother's \\GOOD genes\end{tabular} \\ \hline \end{tabular} \caption{A literal (non-sarcastic) input sentence and the respective sarcastic outputs. GenSarc1 and GenSarc3 simply reverse the valence, while GenSarc2 and GenSarc4 add commonsense context to create incongruity or enhance the humorous effect.} \label{table:example1} \vspace{-1em} \end{table} Sarcasm generation is a challenging problem since the generated utterance should have at least five characteristics (a.k.a. ``sarcasm factors'') \cite{burgers2012verbal}: 1) be evaluative; 2) be based on a reversal of valence between the literal and intended meaning; 3) be based on a semantic incongruity with the context, which can include shared commonsense or world knowledge between the speaker and the addressee; 4) be aimed at some target, and 5) be relevant to the communicative situation in some way. To simplify the problem, we focus on the task of generating a sarcastic utterance starting from a non-sarcastic utterance that conveys the speaker's intended meaning and that is evaluative. Consider the examples in Table~\ref{table:example1}.
Given the literal input ``I hate getting sick from fast food" or ``I inherited unfavorable genes from my mother", our task is to generate a sarcastic message that would convey this intended literal meaning. In this simplified task, we are not concerned with the fifth characteristic, while the first and, to some degree, the fourth are specified by the input (literal) utterances. Given the lack of ``training'' data for the sarcasm generation task, we propose a novel \emph{unsupervised approach} that has three main modules guided by the above-mentioned sarcasm factors: \begin{enumerate} \item{{\bf Reversal of Valence:} To generate sarcastic utterances that satisfy the second characteristic, we identify the evaluative word and use negation or lexical antonyms to generate the sarcastic utterance by reversing the valence (Section \ref{sec:rev1}). For example, given ``I \textbf{hate} getting sick from fast food'', this module will generate ``I \textbf{love} getting sick from fast food'' (GenSarc1 in Table~\ref{table:example1}).} \item{ {\bf Retrieval of Commonsense Context:} Adding commonsense context could be important to make explicit the semantic incongruity factor (e.g., GenSarc4 vs. GenSarc3 in Table~\ref{table:example1}), or could enhance the humorous effect of the generated sarcastic message (e.g., GenSarc2 vs. GenSarc1 in Table~\ref{table:example1}). We propose an approach in which relevant retrieved commonsense context sentences are added to the generated sarcastic message. First, we use COMET \cite{comet}, a pre-trained language model fine-tuned on ConceptNet \cite{conceptnet}, to generate relevant commonsense knowledge. COMET gives us that ``inherited unfavorable genes from my mother'' causes \emph{``to be ugly''} or that ``getting sick from fast food'' causes \emph{``stomach ache''} (Section \ref{section:kb}).
The derived commonsense concept is then used to retrieve relevant sentences --- from a corpus --- that could be added to the sentence obtained through reversal of valence (e.g., ``Stomach ache is just an additional side effect'' in Table~\ref{table:example1}) (Section \ref{section:retrieve}).} \item{ {\bf Ranking of Semantic Incongruity:} The previous module generates a list of candidate commonsense contexts. Next, we measure \emph{contradiction} between each of these commonsense contexts and the sentence generated by the reversal of valence approach (module 1), and select the commonsense context that received the highest contradiction score. Finally, we concatenate the selected context to the sentence obtained through reversal of valence. Here, conceptually, contradiction detection aims to capture the semantic incongruity between the output of valence reversal and its context. Contradiction scores are obtained from a model trained on the Multi-Genre NLI Corpus \cite{multinli} (Section \ref{sec:ranking}). } \end{enumerate} We test our approach on 150 non-sarcastic utterances randomly sampled from two existing data sets. We conduct human evaluation using several criteria: 1) how \emph{sarcastic} the generated message is; 2) how \emph{humorous} it is; 3) how \emph{creative} it is; and 4) how \emph{grammatical} it is. Evaluation via Amazon's Mechanical Turk (MTurk) shows that our system is better 34\% of the time compared to humans and 90\% of the time compared to a recently published reinforced hybrid baseline \cite{abhijit}. We also present a thorough ablation study of several variations of our system, demonstrating that incorporating more sarcasm factors (e.g., reversal of valence, commonsense context, and semantic incongruity) leads to higher-quality sarcastic utterances. We make the code and data from our experiments publicly available.
\footnote{\url{https://github.com/tuhinjubcse/SarcasmGeneration-ACL2020}} \section{Introduction} \label{section:introduction} Social media has stimulated the production of user-generated content that contains figurative language use such as sarcasm. Recognizing sarcasm is critical for understanding people's actual sentiments and beliefs \cite{whocares}. For instance, the utterance \textit{I love waiting at the doctor's office for hours} is sarcastic, expressing a negative sentiment toward the situation of \textit{waiting for hours at the doctor's office,} even if the speaker uses a positive sentiment word such as \textit{love}. \newcite{sardef1} define sarcasm as a type of interactional phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Sarcasm is also defined as an intensive, ironic construct that is intended to express contempt or ridicule\footnote{https://www.thefreedictionary.com/Sarcasm}. As it is often linked with intelligence, creativity, and wit, being able to generate sarcasm is a step forward in creative text generation. \begin{table}[] \centering \begin{tabular}{|l|} \hline I hate getting sick from fast food. \\ \hline \begin{tabular}[c]{@{}l@{}}Shout out to the McDonalds for \\giving me bad food and making me \\sick right before work in two hours.\end{tabular} \\ \hline \end{tabular} \caption{A literal (non-sarcastic) input sentence (top) and a human-generated sarcastic version of the same (bottom)} \label{table:example1} \end{table} Generating content that is novel or creative is a key requirement in many natural language generation tasks such as poetry generation \cite{poetry1, poetry2}, story generation \cite{story1,story2}, pun generation \cite{pun1,pun2}, etc. In this paper, we explore creative generation with a focus on sarcasm.
Generating sarcasm effectively can be useful to downstream applications such as conversation systems, making them sound more natural, intriguing and human-like. Current approaches to text generation require huge amounts of training data, but no large corpus of sarcasm exists. Even in the presence of such a corpus, learning the distribution of existing data and sampling from it is unlikely to lead to truly novel and creative sentences. Creative composition requires deviating from the norm (i.e., the training samples), whereas standard generation approaches seek to mimic the norm. Most research on sarcasm has focused on the task of \textit{sarcasm detection}, treating it as a binary classification task using either the utterance in isolation or adding contextual information such as conversation context, author context, visual context, or cognitive features \cite{sarc1,whocares,sarc3,sarc4,sarc5,sarc6,sarc7,sarc8,sarc9,sarc10,sarc11,sarc12}. There have been very few efforts towards automatic sarcasm generation, where the generation is conditioned on a literal input sentence as shown in Table \ref{table:example1}. For sarcasm, speakers often rely on un-uttered knowledge such as mutually shared information or world knowledge. The human brain excels at understanding such implicit meanings, as people learn from experiences and feel certain emotions based on appraisals and inferences from related experiences. Computers, by contrast, lack such world knowledge and can only rely on what they have learned from specific data. This motivates us to use commonsense knowledge as the principle guiding our sarcasm generation approach.
Prior work on sarcasm generation \cite{abhijit} has mostly focused on semantic incongruity in terms of contrasting sentiments. However, for effective generation of sarcasm we need to rely on various linguistic strategies, as pointed out in \cite{phrasing}. To this end, we build an end-to-end retrieval-based generation framework where, given a non-sarcastic utterance as input, we generate a sarcastic version. In doing so we make use of: \begin{itemize} \item \textbf{Lexical antonyms} and \textbf{negation} \item \textbf{Pragmatic inference} from COMET \cite{comet}, a pre-trained language model fine-tuned on ConceptNet \cite{conceptnet} \item An automatic way to measure \textbf{semantic incongruity} by relying on contradiction scores obtained from a model trained on the Multi-Genre NLI Corpus \cite{multinli} \end{itemize} We test our approach on 150 non-sarcastic utterances randomly sampled from existing data sets. For qualitative evaluation, we consider the human judgment of sarcastic intensity, creativity, humor and grammaticality of the generated sentences. Human evaluation shows that our system generates better sarcasm 34 percent of the time compared to humans, and 89.3 percent of the time compared to a recent state of the art. We also present a thorough ablation of the various components of our system, further demonstrating that a system incorporating all the strategies is the best one. \section{Problem Statement} \label{section:problem} \section{Qualitative Analysis} Table \ref{table:analysis} demonstrates several generation outputs from different modules, together with human ratings for different criteria. We notice that one of our modules often generates better sarcasm than humans. For instance, for the first and the second example in Table \ref{table:analysis}, all of FM, RV and NSI are better than the human-generated sarcasm.
In general, the generations from the FM model are more humorous, which is also a useful criterion for evaluating sarcasm besides sarcasticness \cite{skalicky2018linguistic}. We also observe that Turkers consistently rated generations from the FM model as more sarcastic than those from the NSI model, suggesting that there is a correlation between human scores of sarcasticness and incongruity. To support this observation, we took the contradiction scores from the RoBERTa model for both the best-ranked retrieved sentences (FM) and the randomly selected retrieved sentences (NSI). We then computed the correlation between the sarcasticness scores given by the humans and the automatic contradiction scores for both settings. For the FM model we obtain a higher Pearson correlation coefficient than for NSI, suggesting the important role of incongruity for sarcasm. \subsection{Limitations} While our best model combining different sarcasm factors does outperform the system with individual factors, there are sometimes exceptions. We notice that, in a few cases, the simple reversal of valence (RV) strategy is enough to generate sarcasm. For instance, for the literal input ``It is not fun to date a drug addict", just removing the negation word leads to a full score on sarcasticness without the additional commonsense module. Future work would include building a model that can decide whether just the RV strategy is sufficient or if we need to add additional commonsense context to it. Although incorporating incongruity ranking is useful, there are several cases where a randomly retrieved message may obtain a better sarcasticness score. Table~\ref{table:incongruity} presents such an example. Even though the retrieved message ``Please stop whirling me round; it makes me feel sick."
scores lower than ``The very thought of it makes me feel sick." in terms of incongruity with respect to ``I love being put in the hospital for dehydration", the former received a higher sarcasticness score, which suggests that the incongruity scores obtained from NLI are not perfect. The ordering of the commonsense context and the valence-reversed sentence is predetermined in our generation. Specifically, we always append the retrieved commonsense context after the valence-reversed output. Changing the order can sometimes make the sarcasm better and more humorous. The reason for our current ordering choice is that we always treat the valence-reversed version as the \textit{hypothesis} and the retrieved commonsense sentence as the \textit{premise} for the NLI model. We attempted reversing the order in preliminary experiments but received poor scores from the entailment model. In the future, we would like to generate more diverse sarcasm that is not tied to a fixed pattern. Finally, the generations are dependent on COMET and thus the quality will be governed by the accuracy of the COMET model. \begin{table}[t] \small \centering \begin{tabular}{|p{0.4cm}|l|} \hline NSI & \begin{tabular}[c]{@{}l@{}}I love being put in the hospital for dehydration. \\ Please stop whirling me round; it makes me\\ feel \textbf{sick}.\end{tabular} \\ \hline FM & \begin{tabular}[c]{@{}l@{}}I love being put in the hospital for dehydration. \\ The very thought of it makes me feel \textbf{sick}.\end{tabular} \\ \hline \end{tabular} \caption{Sarcastic generations from FM and NSI where NSI scores higher on sarcasticness} \label{table:incongruity} \vspace{-1.5em} \end{table} \section{Related Work} \label{section:related} \subsection{Sarcasm Generation} Research on sarcasm generation is in its infancy. \newcite{joshi2015sarcasmbot} proposed \textit{SarcasmBot}, a sarcasm generation system that implements eight rule-based sarcasm generators, each of which generates a certain type of sarcastic expression.
\newcite{sign} introduced a novel task of sarcasm interpretation, defined as the generation of a non-sarcastic utterance conveying the same message as the original sarcastic one. They use supervised machine translation models for this task in the presence of parallel data. However, it is impractical to assume the existence of large corpora for training supervised generative models using deep neural nets; we hence resort to unsupervised approaches. \newcite{abhijit} employed reinforced neural \emph{seq2seq} learning and information retrieval based approaches to generate sarcasm. Their models are trained using only unlabeled non-sarcastic and sarcastic opinions. They generated sarcasm as a disparity between a positive sentiment context and a negative situational context. We, in contrast, model sarcasm using semantic incongruity with the context, which could include shared commonsense or world knowledge. \subsection{Style Transfer} Prior work looked into \textit{unsupervised} text style/sentiment transfer~\cite{shen2017style,fu2017style,li2018delete}, which transfers a sentence from one style to another without changing the content. This is relevant to the reversal of valence for sarcasm generation. However, these transformations are mainly at the lexical and syntax levels rather than at the pragmatic level; in contrast, sarcastic utterances often include additional information associated with the context in which they occur~\cite{regel2009comprehension}, which is beyond text style/sentiment transfer. \subsection{Use of Commonsense for Irony Detection} The studies of irony and sarcasm are closely related, as sarcasm is defined as ``the use of verbal irony to mock someone or show contempt''. \newcite{van2018we} addressed the challenge of modeling implicit or prototypical sentiment in the framework of automatic irony detection.
They first manually annotated stereotypical ironic situations (e.g., flight delays) and later addressed the implicit sentiment held towards such situations automatically by using both a lexico-semantic commonsense knowledge base and a data-driven method. They, however, used it for irony detection, while we focus on sarcasm generation.\footnote{While we do not directly model the negative intent in sarcasm, the generated output could lead to sarcastic messages rather than just ironic ones, depending on the initial target given in the non-sarcastic message (e.g., a sample generation: ``Our politicians have everything under control. The nation is in danger of falling into anarchy.'')} \section{Sarcasm Factors Used in Generation} \label{sec:strategies} A sarcastic utterance must satisfy the sarcasm factors, i.e., the inherent characteristics of sarcasm \cite{attardo2000irony,burgers2012verbal}. In this research, we leverage the use of two particular factors to generate sarcasm. One is the \emph{reversal of valence} and the other is the \emph{semantic incongruity with the context}, which could include shared commonsense or world knowledge between the speaker and the hearer. \begin{figure*}[ht] \centering \includegraphics[scale=0.30]{pipeline.png} \caption{\label{figure:pipeline} Our complete pipeline for sarcasm generation. The components with highlighted background denote Reversal of Valence, Retrieval of Commonsense Context and Ranking based on Semantic Incongruity, respectively.} \vspace{-1em} \end{figure*} \subsection{Reversal of Valence} The first key sarcasm factor is the reversal of valence between the literal and the intended meaning \cite{burgers2012verbal}. Reversal of valence can be achieved in two ways: when the literal meaning of the sarcastic message is positive (e.g., ``that is a great outfit'' if the outfit is ugly) or when the literal meaning is negative (e.g., ``that is an ugly dress'' if the dress is really beautiful).
Arguably, the former is more likely to appear in sarcastic utterances. As the intended meaning is generally the opposite of its literal meaning in sarcastic utterances \cite{gibbs1986psycholinguistics}, replacing negative sentiment words with their lexical antonyms, or applying negation, can convert a non-sarcastic utterance into its sarcastic version. For example, given a non-sarcastic utterance ``Zero visibility in fog makes driving \textbf{difficult}", one could identify the evaluative negative word \textit{difficult} and replace it with its antonym \textit{easy}, thereby converting the utterance to the sarcastic ``Zero visibility in fog makes driving \textbf{easy}". Likewise, ``Drunk driving should be taken seriously" can be converted to its sarcastic counterpart, ``Drunk driving should \textbf{not} be taken seriously" by using negation. We propose a generation approach that is able to capture the reversal of valence (Section~\ref{sec:rev1}). \subsection{Semantic Incongruity} The second sarcasm factor, semantic incongruity, appears between the literal evaluation and the context, as in the example ``I love getting sick from fast food", where we have semantic incongruity between the positive word ``love" and the negative situation ``getting sick". However, often, the negative situation is absent from the utterance, and thus additional pragmatic inference is needed to understand the sarcastic intent. For example, the listener might miss the sarcastic intent in ``zero visibility in fog makes driving easy'', where the speaker meant to convey that it can cause \emph{``accidents''}. Adding ``suffered three cracked ribs in an accident.'' makes the sarcastic intent more explicit, while maintaining the acerbic wit of the speaker. In the next section, we propose a novel generation approach that incorporates such relevant commonsense knowledge as context for semantic incongruity (Section~\ref{sec:common} and Section~\ref{sec:ranking}).
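The two factors can be sketched together in a minimal, self-contained Python snippet: (1) reversal of valence via a tiny antonym lexicon, and (2) ranking of candidate commonsense contexts by a contradiction score. The antonym table and the toy scorer below are illustrative stand-ins; the full system identifies the evaluative word and obtains contradiction scores from an NLI model trained on MultiNLI.

```python
# Illustrative antonym lexicon; the real system would use a sentiment
# lexicon plus WordNet-style antonyms.
ANTONYMS = {"hate": "love", "difficult": "easy", "unfavorable": "great"}

def reverse_valence(sentence):
    """Replace the first known negative evaluative word with its antonym."""
    words = sentence.split()
    for i, w in enumerate(words):
        key = w.strip(".,").lower()
        if key in ANTONYMS:
            words[i] = w.replace(key, ANTONYMS[key])
            return " ".join(words)
    return sentence  # no evaluative word found; leave unchanged

def rank_by_incongruity(reversed_sent, candidates, contradiction_score):
    """Pick the candidate context most contradictory to the reversed sentence.

    contradiction_score(premise, hypothesis) -> float; in the full system
    this comes from an NLI model, with the retrieved context as premise
    and the valence-reversed sentence as hypothesis.
    """
    return max(candidates, key=lambda c: contradiction_score(c, reversed_sent))

# Toy stand-in scorer: fires when a positive hypothesis meets a negative premise.
def toy_scorer(premise, hypothesis):
    negative = {"sick", "ache", "ugly", "vomit"}
    pos = "love" in hypothesis.lower()
    neg = any(w in premise.lower() for w in negative)
    return 1.0 if (pos and neg) else 0.0

literal = "I hate getting sick from fast food."
reversed_sent = reverse_valence(literal)  # "I love getting sick from fast food."
contexts = ["The weather is nice today.",
            "Stomach ache is just an additional side effect."]
best = rank_by_incongruity(reversed_sent, contexts, toy_scorer)
```

Concatenating `reversed_sent` and `best` reproduces the shape of the GenSarc2-style outputs discussed earlier.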
\section{Introduction} \label{sec:1} Stochastic simulation is a powerful tool for analyzing large-scale complex systems. In most of the real situations, systems are highly complex, precluding the possibility of applying analytical solutions; in contrast, simulation makes it possible to accurately describe a system through the use of logically complex, and often non-mathematical models. Consequently, detailed dynamics of the system can be faithfully modeled, the system performance can be studied, and the best system design can be selected \citep{chen2011}. Now simulation has been a widely-used operations-research and management-science technique, e.g., in the management of power systems \citep{Benini1998}, production planning \citep{kleijnen1993}, supply chain network \citep{ding2005}, emergency department \citep{Ahmed2009}, etc. In these applications, the standard process for analyzing the system is to first establish estimators for measures of interest based on the simulation output, and then develop optimization methods to find the best design of the system. This process highlights the two main purposes of a constructed simulation model, for estimating the system performance and optimizing it over a set of system designs. Throughout the paper, we will refer to these two purposes of simulation as the \emph{estimation problem} and the \emph{optimization problem}. When conducting simulation experiments, a common practice is to first reveal and fix the covariate values for the problem under consideration, and then repeat experiments on the simulation model with various system designs. Here, covariates refer to some input information other than system designs to the simulation model which will also affect the system performance. In the literature, covariates are also known as the side information or context. 
For example, in queueing network design, covariates can be the arrival rate of the customers, which influences the queue length and the mean waiting time of the network. In disease treatment, covariates can be the biometric characteristics of the patients, which influence the efficacy of the treatment methods. However, given the computational expense of simulation experiments, a notable issue with this practice, for both the purposes of estimation and optimization, is that the time for obtaining the desired simulation results can be very long for some real systems. In addition to the huge monetary cost it incurs, it significantly limits the use of simulation for online problems in which system performance and the best system design are expected soon after the covariate values are revealed. This is also one of the key concerns for simulation-related research \citep{law2015}. To address this issue, \cite{hong2019} and \cite{shen2019} recently proposed a new framework of using simulation. Instead of running simulation after the covariate values are revealed, the new framework does it before that with randomly sampled covariate values that might possibly appear in future problem instances. It establishes an offline simulation dataset that is useful in describing the system. More importantly, this dataset serves for the purpose of prediction. When the covariate values of a certain problem are known, machine learning and data mining tools can be adopted to build predictive models and predict the performance of each design (the estimation problem) and the best design (the optimization problem) in real time\footnote{If certain adaptive methods are used to collect the covariate points, the predictive models need to be built iteratively, instead of once after all the covariate points are collected.}. 
For example, a doctor can learn the efficacy of the potential treatment methods and recommend a personalized treatment for a diabetic patient immediately upon his/her arrival by checking the simulation results under the same biometric characteristics (covariate values) of this patient \citep{bertsimas2017}. By doing so, the time for obtaining performance estimation and the best decision can be substantially reduced. It enables simulation to be used in a much broader range of applications for which simulation was hardly a feasible technique before. We call this framework \emph{simulation with covariates}. The framework of simulation with covariates is quite general and new. Many key questions remain largely unexplored. In this research, we focus on the use of this framework in prediction and consider a fundamental problem in it, the quantification of the relationship between the offline simulation efforts and the online prediction accuracy. This quantification provides a good assessment of the quality of the estimated system performance and the best design that can be achieved using the offline dataset. We consider a continuous covariate space and a finite number of system designs. We sample the covariate space using a fixed distribution, conduct the same number of simulation replications on all the designs and sampled covariate points, and construct a predictive model for each design for predicting its performance and selecting the best design. Our main research question is to study the convergence rates of the prediction errors with respect to the number of collected covariate points, so as to facilitate further decision making. We employ the stochastic kriging (SK) model as the predictive model. SK is one of the most extensively studied models for simulation output, e.g., in \cite{ankenman2010,chenx2013,qu2014,wang2018}.
It is a general-purpose model with fewer structural assumptions than linear and some nonlinear models, and tends to be more resistant to overfitting than general interpolators \citep{sabuncuoglu2002}. To evaluate the prediction errors of the estimation and optimization problems, we will use the maximal \emph{integrated mean squared error} (IMSE) and \emph{integrated probability of false selection} (IPFS), respectively. IMSE is the integral of the mean squared error of the SK model over the covariate space. An IMSE is associated with a system design, and describes the average MSE of the estimated system performance of this design over all the possible covariate values. The maximal IMSE corresponds to the largest IMSE across the designs. It serves as a measure for the worst-case error of the estimation problem, whose convergence rate governs the prediction errors for the performance of each design under consideration. IPFS is the integral of the probability of false selection, i.e., the probability of falsely selecting the best design using the SK predictions. It serves as a measure for the error of the optimization problem. In this study, we use a fixed distribution to sample the covariate space for three reasons. First, for real systems, covariates usually follow a fixed population distribution that can be estimated from historical data. Therefore, the offline dataset generated from this distribution can faithfully describe the distributional characteristics of the system and lead to more accurate estimation over the covariate space. Second, from the experiment design perspective, although more sophisticated sequential designs may have the benefit of using fewer design points in the covariate space, they may not be able to incorporate the distributional information due to the high computational cost in each iteration and may incur higher simulation cost for certain types of response surfaces.
In comparison, sampling from a fixed distribution has the advantage of being simple with a fixed prespecified offline simulation cost. The distributional information also helps achieve sufficiently good performance when the number of covariate points sampled is large, and this advantage becomes more obvious when the covariate space has a higher dimension. Third, the setting of fixed-distribution sampling enables us to theoretically derive concrete convergence rates for the two target measures. These convergence rates serve as a good benchmark against which improvement from future design methods with possibly faster convergence rates might be measured (theoretically or numerically). \subsection{Contributions} \label{sec:1.1} Our work makes three main contributions. First, we establish a formulation for characterizing the performance of simulation with covariates in both the estimation and optimization problems. As one of the first simulation-based real-time decision making frameworks, simulation with covariates resolves the long-standing issue of efficiency for simulation experiments, but the effectiveness of the decisions it produces has remained unclear. Our research builds an SK prediction model for each system design under study and proposes measures for the estimation and optimization problems that evaluate the quality of the prediction over all the possible problem instances that might be encountered. It lays the groundwork for theoretical analysis of simulation with covariates and other possible simulation frameworks of this kind. Second, we derive the convergence rates of the two target measures (the maximal IMSE and IPFS) with the number of sampled covariate points $m$ for three common types of SK covariance kernels: finite-rank kernels, exponentially decaying kernels and polynomially decaying kernels.
Derivation of the rates of the two measures is based on the upper bounds of the IMSE of a single SK model, and contains additional analysis on the structures of the target measures. Specifically, we show that the convergence rates of the two measures are both of the orders $1/m$, $(\log m)^{\frac{d}{\kappa_*}}/m$ and $m^{-\frac{2\nu_*}{2\nu_*+d}}$ for the three types of kernels, respectively. In these rates, $\kappa_*$ and $\nu_*$ are kernel parameters, and $d$ is the dimension of the covariates. We also show that the convergence rate of IPFS can be improved to exponential under additional mild assumptions on the tail of the MSE of each SK model. These results provide good insight into the practical performance that simulation with covariates can achieve. Third, based on the polynomial convergence rates of the maximal IMSE, we further propose a simple regression-based procedure to determine the number of distinct covariate points needed to achieve a target precision of the maximal IMSE in Section 5.3 of the Online Supplement. In addition, we numerically illustrate the convergence behaviors of the maximal IMSE and IPFS via several test examples, and show the impact of several factors on their convergence rates, including the problem structure, the dimension of the covariate space, the number of simulation replications and the sampling distribution. \subsection{Literature Review} \label{sec:1.2} There are two streams of literature related to this study. The first stream is kriging, or Gaussian process regression, which is a popular interpolation method for building metamodels \citep{Stein99,kleijnen2009}. It interpolates the response surface of an unknown function using the realization of a Gaussian random field, and has proven to be a highly effective tool for global metamodeling. In \cite{ankenman2010}, kriging was extended to simulation modeling, in which the observations of the unknown function are no longer deterministic, but are corrupted by random noises.
This extension is known as stochastic kriging (SK). \cite{chenx2013} and \cite{qu2014} further enhanced SK by utilizing the gradient information when it is available, called stochastic kriging with gradient estimators (SKG). \cite{wang2018} proved the monotonicity of MSE in a sequential setting for both SK and SKG. Theoretical properties of Gaussian process regression and the related kernel ridge regression have been previously studied in \citet{VarZan11}, \citet{Steetal09}, etc. Instead of a single SK model studied in those papers, in this research, we are interested in measures derived from multiple SK models corresponding to multiple designs. The second stream is ranking and selection (R\&S), in particular the fixed-budget R\&S. Fixed-budget R\&S is a basic problem in simulation-based optimization, seeking to determine the allocation of a fixed simulation budget in order to correctly select the best simulated system design among a finite set of alternatives. Popular methods in this field include the optimal computing budget allocation (OCBA, \cite{chen2000,chen2008,gao2017a,gao2017b}) and the value of information procedure (VIP, \cite{frazier2008,ryzhov2016}). In particular, \cite{gao2019selecting} utilized the OCBA approach to solve the R\&S problem with discrete covariates and derived the asymptotically optimal sampling rule. Similar to fixed-budget R\&S, this research is also set up with a finite number of designs, and samples them with a fixed simulation budget to make decisions. However, this research differs in its objective. It aims to analyze the convergence rates of the target measures based on an existing sampling scheme, instead of developing a new sampling scheme as in fixed-budget R\&S. The rest of the paper is organized as follows. Section \ref{sec:2} presents the formulation of the problem. Sections \ref{sec:3} and \ref{sec:4} provide the main convergence rate results on the maximal IMSE and IPFS.
Numerical examples are presented in Section \ref{sec:6}, followed by conclusions and discussion in Section \ref{sec:7}. A preliminary study of this research appeared in \cite{gao2019rate}. That paper only focused on the exponentially decaying kernels, and presented the convergence rates of the maximal IMSE and IPFS without proof. \section{Problem Formulation} \label{sec:2} In this section, we provide some preliminaries on the SK model and the definitions of the two target measures. For a summary of the key notation we use, please refer to Table 1 of the Online Supplement. Throughout the paper, the subscript $i$ is exclusively used to index the system design, and we will drop it when there is no ambiguity. \subsection{Stochastic Kriging} \label{sec:2.1} We consider a finite number $k$ of system designs. The performance of each design depends on $\mathbf{X}=(X_1,\ldots,X_d)^\top$, a vector of random covariates with support $\mathcal{X}\subseteq \mathbb{R}^d$. For each $i=1,2,\ldots,k$, let $Y_{il} (\mathbf{X})$ be the $l$-th simulation sample from design $i$ under covariate $\mathbf{X}$, and $y_i (\mathbf{X})$ be the mean of design $i$, where the mean is taken with respect to the simulation noise. We assume that for any $\mathbf{X}=\mathbf{x}$, $Y_{il} (\mathbf{x})= y_i (\mathbf{x})+\epsilon_{il} (\mathbf{x})$, where $\epsilon_{il} (\mathbf{x})$'s are mean-zero simulation noises and are independent across different $i$, $l$ and $\mathbf{x}$. The relationship between the performance $y_i (\mathbf{x})$ of design $i$ and $\mathbf{x}$ is generally unknown and can only be estimated via stochastic simulations.
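Before specializing to SK, the offline/online workflow of simulation with covariates can be sketched in a few lines. The snippet below is a toy illustration only: two hypothetical designs with known mean surfaces, a uniform covariate distribution, and a simple nearest-neighbor rule standing in for the predictive model; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Offline stage: simulate every design at covariate points from P_X ---
m, n = 200, 10                    # covariate points and replications per point
xm = rng.uniform(0, 1, m)         # P_X = Uniform[0, 1] (illustrative)
means = [lambda x: x ** 2,        # two hypothetical designs' mean surfaces
         lambda x: 0.5 - 0.5 * x]
Ybar = np.stack([f(xm) + rng.normal(0, 0.1, (n, m)).mean(axis=0)
                 for f in means])                  # k x m matrix of sample means

# --- Online stage: a covariate value x0 arrives; predict and select ---
def predict(x0):
    """Nearest-neighbor stand-in for the per-design predictive model."""
    j = np.abs(xm - x0).argmin()
    return Ybar[:, j]

best = predict(0.9).argmin()      # smaller mean is better
```

The online step involves no further simulation, which is what makes real-time decisions feasible.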
In this paper, we use the SK model to describe $y_i(\mathbf{x})$: \begin{align}\label{krigmodel1} & y_i(\mathbf{x}) = \mathbf{f}_i(\mathbf{x})^\top \boldsymbol{\beta}_i + M_i(\mathbf{x}),\quad i=1,\ldots,k, \end{align} where $\mathbf{f}_i(\mathbf{x})=(\mathrm{f}_{i1}(\mathbf{x}),\ldots,\mathrm{f}_{iq}(\mathbf{x}))^\top$ and $\boldsymbol{\beta}_i=(\beta_{i1},\ldots,\beta_{iq})^\top$ are a $q\times 1$ vector of known functions of $\mathbf{x}$ and a $q\times 1$ vector of unknown parameters; $M_i(\mathbf{x})$ is a realization (or sample path) of a mean zero stationary Gaussian process, with the covariance function $\boldsymbol{\Sigma}_{M,i}(\mathbf{x},\mathbf{x}')=\cov\left[M_i(\mathbf{x}),M_i(\mathbf{x}')\right]$ quantifying the covariance between $M_i(\mathbf{x})$ and $M_i(\mathbf{x}')$ for any $\mathbf{x},\mathbf{x}'\in \mathcal{X}$. Model \eqref{krigmodel1} with regressor functions $\mathbf{f}_i(\cdot)$ is sometimes called \textit{universal kriging} (\citealt{Stein99}). In our model setting, we assume that we randomly draw $m$ covariate (design) points $\mathbf{X}^m=\left\{\mathbf{X}_1,\ldots,\mathbf{X}_m\right\}$ of $\mathbf{X}$ from a sampling distribution $\mathbb{P}_{\mathbf{X}}$. For a given covariate point sample $\mathbf{x}^m=\left\{\mathbf{x}_1,\ldots,\mathbf{x}_m\right\}$, we perform $n_j$ replications at covariate $\mathbf{x}_j$ for each of the $k$ designs. We denote the sample mean for design $i$ and covariate $\mathbf{x}_j$ by $\overline Y_i(\mathbf{x}_j) = n_j^{-1}\sum_{l=1}^{n_j} Y_{il}(\mathbf{x}_j)$, and correspondingly the averaged simulation errors by $\overline \epsilon_i(\mathbf{x}_j) = n_j^{-1}\sum_{l=1}^{n_j} \epsilon_{il}(\mathbf{x}_j)$. For $i=1,\ldots,k$ and $j=1,\ldots,m$, we let $\mathbf{Y}_{ij}=(Y_{i1}(\mathbf{x}_j),\ldots, Y_{in_j}(\mathbf{x}_j))^\top$, and let $\overline \mathbf{Y}_i = \left(\overline Y_i(\mathbf{x}_1),\ldots, \overline Y_i(\mathbf{x}_m)\right)^\top$. 
For design $i$, let the $m \times q$ design matrix be $\mathcal{F}_i=(\mathbf{f}_i(\mathbf{x}_1),\ldots,\mathbf{f}_i(\mathbf{x}_m))^\top$. Let $\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}^m)$ be the $m\times m$ covariance matrix across all covariate points $\mathbf{x}_1,\ldots,\mathbf{x}_m$, i.e., for $s,t\in\{1,\ldots,m\}$, the $(s,t)$ entry of $\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}^m)$ is $[\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}^m)]_{st} = \cov\left[y_i(\mathbf{x}_s),y_i(\mathbf{x}_t)\right]$. For any $\mathbf{x}\in \mathcal{X}$, let $$\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x})=\left(\cov\left[y_i(\mathbf{x}),y_i(\mathbf{x}_1)\right], \ldots, \cov\left[y_i(\mathbf{x}),y_i(\mathbf{x}_m)\right]\right)^\top.$$ Let $\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)$ be the $m\times m$ covariance matrix of the averaged simulation errors across the $m$ covariate points for design $i$, i.e., for $s,t\in\{1,\ldots,m\}$, the $(s,t)$ entry of $\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)$ is $[\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)]_{st} = \cov\left[\overline \epsilon_{i}(\mathbf{x}_s), \overline \epsilon_{i}(\mathbf{x}_t)\right]$. Let $\boldsymbol{\Sigma}_{y,i}=\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}^m)+\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)$. To estimate $y_i(\mathbf{x}_0)$ at a test covariate point $\mathbf{x}_0\in \mathcal{X}$, we consider linear predictors of the form $\alpha_{i,0}(\mathbf{x}_0)+\bm{\alpha}_i(\mathbf{x}_0)^\top\overline \mathbf{Y}_i$, where $\alpha_{i,0}(\mathbf{x}_0)$ and $\bm{\alpha}_i(\mathbf{x}_0)$ are weights that depend on $\mathbf{x}_0$. The mean squared error (MSE) of such a predictor at $\mathbf{x}_0$ is given by $\mathrm{MSE}_{i}(\mathbf{x}_0)=\E[(y_i(\mathbf{x}_0)-\alpha_{i,0}(\mathbf{x}_0)-\bm{\alpha}_i(\mathbf{x}_0)^\top\overline \mathbf{Y}_i)^2]$, where the expectation is with respect to the randomness in $\overline \mathbf{Y}_i$, i.e., the simulation noise.
We call the predictor that minimizes $\mathrm{MSE}_{i}(\mathbf{x}_0)$ the MSE-optimal linear predictor. \cite{Stein99} (and also \citealt{ankenman2010}, \citealt{chenx2013}) has shown that the MSE-optimal linear predictor has the form \begin{align}\label{stockrig1} \widehat{y}_i(\mathbf{x}_0) &= \mathbf{f}_i(\mathbf{x}_0)^\top \widehat\boldsymbol{\beta}_{i} + \boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}_0)^\top \boldsymbol{\Sigma}_{y,i}^{-1} \left(\overline \mathbf{Y}_i - \mathcal{F}_i \widehat\boldsymbol{\beta}_{i}\right), \end{align} where $\widehat\boldsymbol{\beta}_i= \left(\mathcal{F}_i^\top \boldsymbol{\Sigma}_{y,i}^{-1} \mathcal{F}_i\right)^{-1}\mathcal{F}_i^\top \boldsymbol{\Sigma}_{y,i}^{-1} \overline \mathbf{Y}_i$. In addition, \citet{ankenman2010} has shown that the optimal MSE from Equation \eqref{stockrig1} at $\mathbf{x}_0\in \mathcal{X}$ is: \begin{align}\label{optimmse} \mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{x}_0) &= \boldsymbol{\Sigma}_{M,i}(\mathbf{x}_0,\mathbf{x}_0) - \boldsymbol{\Sigma}_{M,i}^\top(\mathbf{x}^m,\mathbf{x}_0) \left[\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}^m) + \boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)\right]^{-1} \boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}_0) \nonumber \\ & ~ + \eta_i(\mathbf{x}_0)^\top \left[\mathcal{F}_i^\top \left(\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}^m)+\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)\right)^{-1}\mathcal{F}_i\right]^{-1} \eta_i(\mathbf{x}_0), \end{align} where $\eta_i(\mathbf{x}_0)=\mathbf{f}_i(\mathbf{x}_0) - \mathcal{F}_i^\top \left(\boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}^m)+\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)\right)^{-1} \boldsymbol{\Sigma}_{M,i}(\mathbf{x}^m,\mathbf{x}_0)$. In the following, we define some useful notation. For any finite dimensional vector $\mathbf{v}$, we let $\|\mathbf{v}\|$ be its Euclidean norm.
For any generic matrix $A$, we use $A_{ab}$ to denote its $(a,b)$-entry, $cA$ to denote the matrix whose $(a,b)$-entry is $cA_{ab}$ for any constant $c\in \mathbb{R}$. For any positive definite matrix $A$, let $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ be its largest and smallest eigenvalues. For two sequences of positive numbers $\{a_l\}_{l\geq 1}$ and $\{b_l\}_{l\geq 1}$, $a_l\lesssim b_l$ means that $\limsup_{l\to\infty} a_l/b_l < \infty$, and $a_l\asymp b_l$ means that both $a_l\lesssim b_l$ and $b_l\lesssim a_l$ hold true. We introduce some concepts from the reproducing kernel Hilbert space (RKHS) theory that will be used in our theorems. Let $\mathbb{P}_{\mathbf{X}}$ be a probability distribution over $\mathcal{X}$, $L_2(\mathbb{P}_{\mathbf{X}})$ be the $L_2$ space under $\mathbb{P}_{\mathbf{X}}$. The inner product in $L_2(\mathbb{P}_{\mathbf{X}})$ is defined as $\langle f, g\rangle_{L_2(\mathbb{P}_{\mathbf{X}})} = {\E}_{\mathbf{X}} [f(\mathbf{X})g(\mathbf{X})]$ for any $f, g \in L_2(\mathbb{P}_{\mathbf{X}})$. For any $f\in L_2(\mathbb{P}_{\mathbf{X}})$, define the linear operator $[T_{\boldsymbol{\Sigma}_M}f](\mathbf{x}) = \int_{\mathcal{X}} \boldsymbol{\Sigma}_M(\mathbf{x},\mathbf{x}')f(\mathbf{x}')\mathrm{d} \mathbb{P}_{\mathbf{X}}(\mathbf{x}')$ for any $\mathbf{x}\in \mathcal{X}$. 
Since $\boldsymbol{\Sigma}_{M}(\cdot, \cdot)$ is a continuous symmetric non-negative definite kernel on $\mathcal{X}\times \mathcal{X}$, there exists an orthonormal basis $\left\{\phi_l(\mathbf{x}): l=1,2,\ldots\right\}$ with respect to $\mathbb{P}_{\mathbf{X}}$ consisting of eigenfunctions of the linear operator $T_{\boldsymbol{\Sigma}_M}$, i.e., $\int_{\mathcal{X}} \phi_l^2(\mathbf{x}) d\mathbb{P}_{\mathbf{X}}(\mathbf{x}) = 1$, $\int_{\mathcal{X}} \phi_l(\mathbf{x})\phi_{l'}(\mathbf{x}) d\mathbb{P}_{\mathbf{X}}(\mathbf{x}) = 0$ for $l\neq l'$, and $[T_{\boldsymbol{\Sigma}_M}\phi_l](\mathbf{x})=\mu_l \phi_l(\mathbf{x})$ for some eigenvalue $\mu_l\geq 0$, all $l=1,2,\ldots$ and $\mathbf{x}\in \mathcal{X}$. According to Mercer's theorem (e.g. Theorem 4.2 of \citealt{RasWil06}), the kernel $\boldsymbol{\Sigma}_{M}$ (which can be taken as any $\boldsymbol{\Sigma}_{M,i}$ for $i=1,\ldots,k$) has the series expansion $\boldsymbol{\Sigma}_{M}(\mathbf{x}, \mathbf{x}') = \sum_{l=1}^{\infty} \mu_l \phi_l(\mathbf{x}) \phi_l(\mathbf{x}')$ with respect to $\mathbb{P}_{\mathbf{X}}$ for any $\mathbf{x},\mathbf{x}'\in \mathcal{X}$, where we assume that the eigenvalues of $\boldsymbol{\Sigma}_{M}$ are sorted into the decreasing order $\mu_1\geq \mu_2 \geq \ldots\geq 0$. The trace of the kernel $\boldsymbol{\Sigma}_{M}$ is defined as $\tr(\boldsymbol{\Sigma}_{M})=\sum_{l=1}^{\infty} \mu_l$. Any function $f\in L_2(\mathbb{P}_{\mathbf{X}})$ has the series expansion $f(\mathbf{x}) = \sum_{l=1}^{\infty} \theta_l \phi_l(\mathbf{x})$, where $\theta_l = \langle f,\phi_l\rangle_{L_2(\mathbb{P}_{\mathbf{X}})}$. The reproducing kernel Hilbert space (RKHS) $\mathbb{H}$ attached to the kernel $\boldsymbol{\Sigma}_{M}$ is the space of all functions $f \in L_2(\mathbb{P}_{\mathbf{X}})$ such that its $\mathbb{H}$-norm $\|f\|_{\mathbb{H}}^2 = \sum_{l=1}^{\infty} \theta_l^2 / \mu_l<\infty$. We refer the readers to \citet{Gu02} and \citet{HsiEub15} for a complete treatment of the RKHS theory. 
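The eigenvalues $\mu_l$ of the operator $T_{\boldsymbol{\Sigma}_M}$ can be approximated numerically by replacing the integral with an average over a sample from $\mathbb{P}_{\mathbf{X}}$ (a Nystr\"om-type approximation): the eigenvalues of the kernel matrix divided by the sample size estimate the operator eigenvalues. The sketch below checks this for the classical Brownian-motion kernel $\min(x,x')$ under $\mathbb{P}_{\mathbf{X}}=\mathrm{Uniform}[0,1]$, whose eigenvalues are known in closed form, $\mu_l = ((l-1/2)\pi)^{-2}$; the sample size is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
X = rng.uniform(0, 1, N)              # sample from P_X = Uniform[0, 1]

# Brownian-motion kernel min(x, x'); under Uniform[0, 1] its operator
# has eigenvalues mu_l = ((l - 1/2) * pi)^(-2), l = 1, 2, ...
K = np.minimum.outer(X, X)

# Nystrom-type approximation: eigenvalues of K / N estimate those of the operator
mu_hat = np.sort(np.linalg.eigvalsh(K / N))[::-1]
mu_exact = ((np.arange(1, 4) - 0.5) * np.pi) ** -2.0
```

The leading estimated eigenvalues match the closed form to within a few percent, and the decay of `mu_hat` reflects the kernel's smoothness.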
Based on the decaying rates of eigenvalues, most commonly used covariance functions (kernels) can be categorized into three types: finite-rank kernels, exponentially decaying kernels, and polynomially decaying kernels. For a comprehensive review of covariance functions, see Chapter 4 of \citet{RasWil06}. \begin{enumerate} \item \textbf{Finite-rank kernels} satisfy $\mu_1\geq \ldots \geq \mu_{l_*} >0$ and $\mu_{l_*+1} = \mu_{l_*+2} =\ldots=0$ for some finite integer $l_*\in \mathbb{N}$. One example of finite-rank kernels is $\boldsymbol{\Sigma}_M(\mathbf{x},\mathbf{x}')= (1+\mathbf{x}^\top \mathbf{x}')^D$ for some fixed positive integer $D$ and any $\mathbf{x},\mathbf{x}'\in \mathcal{X}$. The sample paths generated from this kernel are polynomial functions of degree up to $D$, and the kernel has finite rank at most $D+1$ (\citealt{RasWil06}). If $D=1$, then $\boldsymbol{\Sigma}_M(\mathbf{x},\cdot)$ generates the class of linear functions in $\mathbf{x}$. \item \textbf{Exponentially decaying kernels} satisfy $\mu_l\asymp \exp(-c l^{\kappa/d})$ for some constants $c>0,\kappa>0$, with $d$ being the dimension of the covariate $\mathbf{x}$. The most important example is the squared exponential kernel $\boldsymbol{\Sigma}_M(\mathbf{x},\mathbf{x}')=\exp\left\{-\varphi\|\mathbf{x}-\mathbf{x}'\|^2\right\}$ for $\varphi>0$ and $\mathbf{x},\mathbf{x}'\in \mathcal{X}\subseteq \mathbb{R}^d$.
If $d=1$ and $\mathbb{P}_{\mathbf{X}}=N(0,(4a_1)^{-1})$ for some $a_1>0$, then it is known (\citealt{RasWil06} Section 4.3.1) that for $l=0,1,2,\ldots$, the eigenfunctions can be taken as $\phi_l(\mathbf{x})=(a_2/a_1)^{1/4}\exp\{-(a_2-a_1)\mathbf{x}^2\}H_{l}(\sqrt{2a_2}\mathbf{x})/\sqrt{2^l l!}$, and the corresponding eigenvalues are $\mu_l = \sqrt{2a_1/(a_1+a_2+\varphi)} \exp\{-l\log (1/a_3)\}$, where $a_2=\sqrt{a_1^2+2a_1\varphi}$, $a_3 = \varphi/(a_1+a_2+\varphi) \in (0,1)$, and $H_l(z) = (-1)^l \exp(z^2)\tfrac{d^l}{dz^l}\exp(-z^2)$ is the $l$th order Hermite polynomial. So $\mu_l\asymp \exp(-c l^\kappa)$ holds with $c=\log (1/a_3)$ and $\kappa=1$. In general, $\mu_l\asymp \exp(-c l^{\kappa/d})$ holds for infinitely smooth stationary kernels on a bounded domain $\mathcal{X} \subseteq \mathbb{R}^d$ (\citealt{SanSch16}). \item \textbf{Polynomially decaying kernels} satisfy $\mu_l \asymp l^{-2\nu/d-1}$ for some constant $\nu>0$ (such that $\tr(\boldsymbol{\Sigma}_M)<\infty$). One example is the kernel $\boldsymbol{\Sigma}_M(\mathbf{x},\mathbf{x}')=\min\{\mathbf{x},\mathbf{x}'\}$ for $\mathbf{x},\mathbf{x}'\in \mathcal{X}=[0,1]$. This kernel generates the first-order Sobolev class that contains all Lipschitz functions on $[0,1]$. If $\mathbb{P}_{\mathbf{X}}$ is the uniform distribution on $[0,1]$, then it is known that $\mu_l \asymp 1/l^2$ (\citealt{Gu02}). Another very important example is the Mat\'ern kernel $\boldsymbol{\Sigma}_{M,i}(\mathbf{x},\mathbf{x}')=\tfrac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2\nu}\varphi\|\mathbf{x}-\mathbf{x}'\|\right)^{\nu} K_{\nu}(\sqrt{2\nu}\varphi\|\mathbf{x}-\mathbf{x}'\|)$, where $K_{\nu}$ is the modified Bessel function of the second kind and the smoothness parameter $\nu$ satisfies $\nu >0$. The Mat\'ern kernel is widely used for fitting spatial surfaces with roughness controlled by $\nu$: a smaller $\nu$ generates rougher sample paths.
If $\mathcal{X}\subseteq \mathbb{R}^d$ is a bounded set, then the Mat\'ern kernel has eigenvalues decaying as $\mu_l \leq C l^{-2\nu/d-1}$ for some constant $C>0$ (\citealt{SanSch16}). \end{enumerate} \subsection{Target Measures} \label{sec:2.2} For the estimation problem and a given covariate point sample $\mathbf{x}^m=\left\{\mathbf{x}_1,\ldots,\mathbf{x}_m\right\}$, the optimal MSE of the linear predictor \eqref{stockrig1} for design $i$ is $\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)$, where the test point $\mathbf{X}_0$ is randomly drawn from the same distribution $\mathbb{P}_{\mathbf{X}}$ as for $\mathbf{X}^m$. The IMSE for the $i$-th design is the integral of $\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)$ with respect to the sampling distribution of $\mathbf{X}_0$ \begin{equation*} \text{IMSE}_i={\E}_{\mathbf{X}_0}\left[\mathrm{MSE}_{i,\mathrm{opt}} (\mathbf{X}_0)\right], \end{equation*} and the maximal IMSE is defined as $\max_{i\in\{1,\ldots,k\}}\text{IMSE}_i$. The maximal IMSE can thus be viewed as a measure of the prediction error of the worst MSE-optimal linear predictor among the $k$ designs, averaged over all possible locations in $\mathcal{X}$. Our goal for the estimation problem is to prove that as the simulation budget increases to infinity, the maximal IMSE decreases at a certain rate to zero, under the correct specification of Model \eqref{krigmodel1} and other necessary mild technical assumptions. In particular, for ease of presentation, we assume that all points in $\mathbf{x}^m$ receive the same number of simulation runs $n_1=\ldots =n_m =n$, i.e., we do not need to decide the allocation of simulation replications among different designs and covariate points. We will show that for any given $n$, the maximal IMSE converges to zero at a certain rate in $m$, where $m$ is the number of distinct points in $\mathbf{x}^m$.
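The $\text{IMSE}_i$ above can be estimated by Monte Carlo: draw many test points from $\mathbb{P}_{\mathbf{X}}$ and average $\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)$ over them. The sketch below does this for $k=2$ hypothetical designs with squared exponential kernels of different length scales, taking the trend as known so that the $\boldsymbol{\beta}$-estimation term of \eqref{optimmse} vanishes; all numbers are illustrative.

```python
import numpy as np

def se_kernel(A, B, phi):
    """Squared exponential kernel exp(-phi * (x - x')^2) in d = 1."""
    return np.exp(-phi * (A[:, None] - B[None, :]) ** 2)

rng = np.random.default_rng(2)
m, n, n0 = 40, 5, 5000
xm = rng.uniform(0, 1, m)                  # covariate sample X^m from P_X
x0 = rng.uniform(0, 1, n0)                 # Monte Carlo test points X_0 from P_X

imse = []
for phi in (1.0, 10.0):                    # k = 2 designs, different kernels
    S_inv = np.linalg.inv(se_kernel(xm, xm, phi) + np.eye(m) / n)
    k0 = se_kernel(xm, x0, phi)            # (m, n0) cross-covariances
    mse = 1.0 - np.einsum('ij,ik,kj->j', k0, S_inv, k0)
    imse.append(mse.mean())                # approximates E_{X_0}[MSE_opt(X_0)]

max_imse = max(imse)                       # maximal IMSE over the k designs
```

Rerunning with larger `m` shows the maximal IMSE shrinking, which is the convergence behavior quantified in this section.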
Intuitively, this goal is reasonable, because an SK model allows us to interpolate the unknown surface of $y_i(\mathbf{x})$ at a new location with higher accuracy if $m$ becomes larger. How fast the maximal IMSE converges to zero in terms of $m$ depends mainly on the smoothness of all the unknown true surfaces $y_i(\mathbf{x})$, $i=1,\ldots,k$. Since we assume that the true surface $y_i(\mathbf{x})$ is correctly specified as in Model \eqref{krigmodel1}, the convergence rate of the maximal IMSE equivalently depends on the properties of the covariance kernel $\boldsymbol{\Sigma}_{M,i}(\cdot,\cdot)$ and the functions $\mathbf{f}_i(\cdot)$. Note that the maximal IMSE is still random with respect to the covariate point sample $\mathbf{X}^m$, and our rate result for the maximal IMSE will be obtained in $\mathbb{P}_{\mathbf{X}^m}$-probability. For the optimization problem, given a configuration of designs $M_i(\cdot)$ and a covariate point sample $\mathbf{x}^m= \left\{\mathbf{x}_1,\ldots,\mathbf{x}_m\right\}$, the true best design $i^\circ(\mathbf{x}_0)$ and the estimated best design $\widehat i^\circ (\mathbf{x}_0)$ at test point $\mathbf{X}_0=\mathbf{x}_0$ are \begin{align}\label{bestdesigndef} & y^\circ(\mathbf{x}_0)=\min_{i\in\{1,\ldots,k\}}y_i (\mathbf{x}_0),\qquad i^\circ(\mathbf{x}_0) \in \arg\min_{i\in\{1,\ldots,k\}} y_i (\mathbf{x}_0), \nonumber \\ & \widehat y^\circ(\mathbf{x}_0)=\min_{i\in\{1,\ldots,k\}} \widehat y_i (\mathbf{x}_0), \qquad \widehat i^\circ (\mathbf{x}_0) \in \arg\min_{i\in\{1,\ldots,k\}} \widehat y_i (\mathbf{x}_0). \end{align} Typically in R\&S problems, the correct selection of the best design is defined as $\widehat i^\circ (\mathbf{x}_0)=i^\circ(\mathbf{x}_0)$. However, due to the continuous nature of $\mathbf{x}_0$ in the framework of simulation with covariates, the best design $i^\circ(\mathbf{x}_0)$ might not be unique for certain values of $\mathbf{x}_0$, causing ambiguity in this definition.
To resolve this issue, in this research, we focus on the event of good selection \citep{ni2017}. Similarly to the indifference-zone (IZ) formulation for R\&S problems \citep{kim2006}, suppose there is an IZ parameter $\delta_0>0$ specifying the minimal difference between design means that we believe is worth detecting. A good selection for $i^\circ(\mathbf{x}_0)$ happens when the mean of the estimated best design $y_{\widehat i^\circ (\mathbf{x}_0)}(\mathbf{x}_0)$ is better than $y^\circ(\mathbf{x}_0)+\delta_0$ for the test point $\mathbf{x}_0\in \mathcal{X}$; equivalently, a false (not good) selection happens when $y_{\widehat i^\circ (\mathbf{x}_0)}(\mathbf{x}_0)$ is no better than $y^\circ(\mathbf{x}_0)+\delta_0$. This definition allows some flexibility in determining the best design when the means of the top two designs are very close or exactly the same under some covariate value. Consequently, the probabilities of good selection $\mathrm{PCS}(\mathbf{x}_0)$ and false selection $\mathrm{PFS}(\mathbf{x}_0)$ among the $k$ alternatives at $\mathbf{x}_0$ are given by \begin{align} \label{pfs_def} \mathrm{PCS}(\mathbf{x}_0)&=\mathbb{P}_{\epsilon}\left(y_{\widehat i^\circ (\mathbf{x}_0)} (\mathbf{x}_0)-y^\circ(\mathbf{x}_0) < \delta_0\right), \nonumber \\ \mathrm{PFS}(\mathbf{x}_0)&=\mathbb{P}_{\epsilon}\left(y_{\widehat i^\circ (\mathbf{x}_0)} (\mathbf{x}_0)-y^\circ(\mathbf{x}_0) \geq \delta_0\right), \end{align} where $\mathbb{P}_{\epsilon}$ is the joint probability measure of all simulation error terms $\epsilon_{il}(\mathbf{x}_j)$ for $i=1,\ldots,k$, $j=1,\ldots,m$ and $l=1,\ldots,n$. To ease the burden of notation, we hide the dependence of $\mathrm{PCS}(\mathbf{x}_0)$ and $\mathrm{PFS}(\mathbf{x}_0)$ on the constant IZ parameter $\delta_0$.
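At a fixed test point, the probabilities in \eqref{pfs_def} can be approximated by simulating the noise in the predictions. The toy sketch below does this for two hypothetical designs whose predictors are treated as Gaussian with known root-MSE; the means, standard deviations and $\delta_0$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical designs at a fixed x0: true means and predictor root-MSEs
y = np.array([0.00, 0.05])        # y_1(x0), y_2(x0); design 1 is best
sd = np.array([0.10, 0.10])       # root-MSE of each predictor at x0
delta0 = 0.03                     # indifference-zone parameter

B = 200_000                       # Monte Carlo replications
y_hat = y + sd * rng.standard_normal((B, 2))   # noisy predictions
i_hat = y_hat.argmin(axis=1)                   # estimated best design
loss = y[i_hat] - y.min()                      # y_{i_hat}(x0) - y_circ(x0)
pfs = float((loss >= delta0).mean())           # good selection iff loss < delta0
```

Here the gap between the two means exceeds $\delta_0$, so a false selection occurs exactly when the inferior design is picked, and `pfs` estimates that probability.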
Consequently, the integrated PFS is defined as \begin{equation*} \text{IPFS}={\E}_M{\E}_{\mathbf{X}_0} \left[\mathrm{PFS}(\mathbf{X}_0)\right], \end{equation*} where $M$ contains the randomness from all $M_i(\cdot)$'s, $i=1,\ldots,k$, measuring the \textit{extrinsic uncertainty} (\citealt{ankenman2010}). Our goal for the optimization problem is to identify the convergence rate of IPFS with the number of covariate points $m$. Similarly to the maximal IMSE, IPFS is still random with respect to $\mathbf{X}^m$, and our rate result for IPFS will be obtained in $\mathbb{P}_{\mathbf{X}^m}$-probability. We note two key differences between our setting and existing research in the simulation literature. First, we assume that $\mathbf{X}_0$ is randomly drawn from $\mathbb{P}_{\mathbf{X}}$, independently of the random sample $\mathbf{X}^m$. Our treatment of both $\mathbf{X}^m$ and $\mathbf{X}_0$ is different from most SK studies (\citealt{ankenman2010}, \citealt{chenx2013}, \citealt{wang2018}), which usually treat $\mathbf{X}^m$ as fixed covariate points and $\mathbf{X}_0$ as uniformly sampled from $\mathcal{X}$. The randomness in $\mathbf{X}^m$ allows us to derive the asymptotic convergence rates of the two target measures for various types of covariance kernels. Second, although the maximal IMSE and IPFS are expected (integrated) measures, similar in appearance to the expected measure $\text{PCS}_{\text{E}}$ in the research of ranking and selection with covariates \citep{shen2019}, the expectations in these two papers are caused by different types of randomness, leading to intrinsic differences in the meaning and structure of these measures and the approaches used to analyze them. \cite{shen2019} considered a fixed number $m$ of covariate points, and the expectation in $\text{PCS}_{\text{E}}$ is with respect to the random covariate points, which seeks to assess the average selection quality over all the possible covariate values (problem instances).
In this paper, the expectation is with respect to the random test point, which seeks to assess the average prediction quality over all the possible covariate values (problem instances). This research also faces the randomness of the covariate point sample $\mathbf{X}^m$, and as discussed above, it is handled through the development of convergence rates in $\mathbb{P}_{\mathbf{X}^m}$-probability. \section{Convergence Rates of the Maximal IMSE} \label{sec:3} In this section, we study the convergence rate of the first target measure, the maximal IMSE. We make the following assumptions: \begin{enumerate}[label=A.\arabic*] \item \label{c1} For $i=1,\ldots,k$, Model \eqref{krigmodel1} is correctly specified with $M_i(\cdot)$ being a sample path of a Gaussian process with known covariance function $\boldsymbol{\Sigma}_{M,i}(\cdot,\cdot)$. For $i=1,\ldots,k$, $j=1,\ldots,m$, $l=1,\ldots,n$, $\epsilon_{il} (\mathbf{x}_j)$'s are random variables with mean zero and variance $\sigma_i^2(\mathbf{x}_j)$, and they are independent across different $i$, $j$, and $l$. The simulation errors $\epsilon_{il} (\mathbf{x}_j)$'s are independent of the Gaussian process $M_i(\mathbf{x})$ for all $i$, $j$, $l$ and $\mathbf{x}\in \mathcal{X}$. There exist finite constants $\underline \sigma_0^2$ and $\overline \sigma_0^2$ such that $0<\underline \sigma_0^2 \leq \sigma_i^2(\mathbf{x})\leq \overline \sigma^2_0 $ for all $i$ and $\mathbf{x} \in \mathcal{X}$. \item \label{c2} (Trace class kernel) The kernel $\boldsymbol{\Sigma}_{M,i}$ satisfies $\tr\left(\boldsymbol{\Sigma}_{M,i}\right)<\infty$ for $i=1,\ldots,k$. \item \label{c3} (Basis functions) Let $\left\{\phi_{i,l}(\mathbf{x}): l=1,2,\ldots\right\}$ be an orthonormal basis with respect to $\mathbb{P}_{\mathbf{X}}$ consisting of eigenfunctions of the linear operator $T_{\boldsymbol{\Sigma}_{M,i}}$.
There are positive constants $\rho_*$ and $r_* \geq 2$ common for all $i=1,\ldots,k$ such that ${\E}_{\mathbf{X}} \{\phi_{i,l}^{2r_*}(\mathbf{X})\} \leq \rho_*^{2r_*}$ for every $l=1,2,\ldots$. \item \label{c4} (Regressors) The regression functions satisfy $\mathrm{f}_{is}\in \mathbb{H}_i$ for all $i=1,\ldots,k$ and $s=1,\ldots,q$, where $\mathbb{H}_i$ is the RKHS attached to the kernel $\boldsymbol{\Sigma}_{M,i}$. Furthermore, $\lambda_{\min}\left({\E}_{\mathbf{X}}[\mathbf{f}_i(\mathbf{X})\mathbf{f}_i(\mathbf{X})^\top]\right)$ is lower bounded by a positive constant for all $i=1,\ldots,k$ if $\mathbf{X}$ follows the distribution $\mathbb{P}_{\mathbf{X}}$. \end{enumerate} \ref{c1} assumes independence of the simulation noise $\epsilon_{il} (\mathbf{x}_j)$ across different designs, covariate points and replications, so we do not consider the common random number technique in the simulation experiments. An implication of this setting is that learning the performance of one design does not help in learning the performance of another. \ref{c1} also makes a mild assumption on the second moment of the error distribution. For all derivations related to IMSE in this paper, we do not require $\epsilon_{il} (\mathbf{x}_j)$ to be normally distributed. The lower and upper bounds on the error variance are technical conditions, which are trivially satisfied if the errors are homoscedastic with a constant variance. \ref{c2} assumes that the operator associated with the kernel $\boldsymbol{\Sigma}_{M,i}$ is a trace class operator (\citealt{HsiEub15}). This will be verified later for all three types of kernels described above, whose eigenvalues typically decrease at least polynomially and are summable. \ref{c3} imposes a mild moment condition on the orthonormal basis functions.
Sometimes \ref{c3} can be strengthened to the assumption that the $L_{\infty}$ norms of $\phi_{i,l}(\mathbf{x})$'s are uniformly bounded for all $l=1,2,\ldots$ and all $\mathbf{x}\in \mathcal{X}$. For example, if $\mathcal{X}=[0,1]$ and $\mathbb{P}_{\mathbf{X}}$ is the uniform distribution on $\mathcal{X}$, then the eigenfunctions of the Mat\'ern covariance kernel with $\nu=1/2$ are the sine functions (Section 3.4.1 of \citealt{VT01}), whose $L_{\infty}$ norms are naturally bounded from above by a constant, so that \ref{c3} trivially holds. The quantities $\rho_*$ and $r_*$ do not need to depend on $i$, because if the $i$th design satisfies ${\E}_{\mathbf{X}} \{\phi_{i,l}^{2r_i}(\mathbf{X})\} \leq \rho_i^{2r_i}$ for $r_i\geq 2$, one can let $r_*=\min_{i\in\{1,\ldots,k\}} r_i\geq 2$ and $\rho_*=\max\left(\max_{i\in\{1,\ldots,k\}} \rho_i,1\right)$. By Jensen's inequality, ${\E}_{\mathbf{X}} \{\phi_{i,l}^{2r_*}(\mathbf{X})\} \leq \left[{\E}_{\mathbf{X}} \{\phi_{i,l}^{2r_i}(\mathbf{X})\} \right]^{r_*/r_i}\leq \rho_i^{2r_i\cdot r_*/r_i} \leq \rho_*^{2r_*}$ and \ref{c3} holds. \ref{c4} requires that the matrix ${\E}_{\mathbf{X}}[\mathbf{f}_i(\mathbf{X})\mathbf{f}_i(\mathbf{X})^\top]$ is nonsingular. This is a necessary condition for the identifiability of $\boldsymbol{\beta}_i$, since a singular ${\E}_{\mathbf{X}}[\mathbf{f}_i(\mathbf{X})\mathbf{f}_i(\mathbf{X})^\top]$ implies that some functions in $\{\mathrm{f}_{i1}(\mathbf{x}),\ldots,$ $\mathrm{f}_{iq}(\mathbf{x})\}$ can be written as a linear combination of the others, making it impossible to estimate $\boldsymbol{\beta}_i$. In most real applications, $\mathrm{f}_{is}$'s are highly smooth functions such as monomials; see p.12 of \citet{Stein99} for a cogent argument. In such cases, $\mathrm{f}_{is}\in \mathbb{H}_i$ is satisfied in general.
For example, if the domain $\mathcal{X}$ is a bounded set and the covariance kernel is a Mat\'ern kernel, then $\mathbb{H}_i$ is norm equivalent to a Sobolev space of functions with certain smoothness. Since a monomial $\mathrm{f}_{is}$ is infinitely differentiable, $\mathrm{f}_{is}$ lies in $\mathbb{H}_i$. We first restrict our discussion to a single SK model and drop the subscript $i$. From \eqref{optimmse}, for a given test point $\mathbf{x}_0$ and an SK model, we can decompose the optimal MSE into two parts: \begin{align}\label{msedecomp} \mathrm{MSE}_{\mathrm{opt}}(\mathbf{x}_0) &= \mathrm{MSE}_{\mathrm{opt}}^{(M)}(\mathbf{x}_0) + \mathrm{MSE}_{\mathrm{opt}}^{(\boldsymbol{\beta})}(\mathbf{x}_0), \nonumber \\ \mathrm{MSE}_{\mathrm{opt}}^{(M)}(\mathbf{x}_0) &= \boldsymbol{\Sigma}_{M}(\mathbf{x}_0,\mathbf{x}_0) - \boldsymbol{\Sigma}_{M}^\top(\mathbf{x}^m,\mathbf{x}_0) \left[\boldsymbol{\Sigma}_{M}(\mathbf{x}^m,\mathbf{x}^m)+ \boldsymbol{\Sigma}_{\epsilon}\right]^{-1} \boldsymbol{\Sigma}_{M}(\mathbf{x}^m,\mathbf{x}_0), \nonumber \\ \mathrm{MSE}_{\mathrm{opt}}^{(\boldsymbol{\beta})}(\mathbf{x}_0) &= \eta(\mathbf{x}_0)^\top \left[\mathcal{F}^\top \left(\boldsymbol{\Sigma}_{M}(\mathbf{x}^m,\mathbf{x}^m)+\boldsymbol{\Sigma}_{\epsilon}\right)^{-1}\mathcal{F}\right]^{-1} \eta(\mathbf{x}_0), \end{align} where $\eta(\mathbf{x}_0) =\mathbf{f}(\mathbf{x}_0) - \mathcal{F}^\top \left(\boldsymbol{\Sigma}_{M}(\mathbf{x}^m,\mathbf{x}^m)+\boldsymbol{\Sigma}_{\epsilon}\right)^{-1} \boldsymbol{\Sigma}_{M}(\mathbf{x}^m,\mathbf{x}_0)$. These are the two distinct contributions to the total MSE from estimating $M(\mathbf{x})$ and $\boldsymbol{\beta}$, respectively. The following two theorems provide upper bounds for the integrated $\mathrm{MSE}_{\mathrm{opt}}^{(M)}(\mathbf{X}_0)$ and $\mathrm{MSE}_{\mathrm{opt}}^{(\boldsymbol{\beta})}(\mathbf{X}_0)$ in \eqref{msedecomp}.
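To make the decomposition in \eqref{msedecomp} concrete, its two components can be evaluated directly for a toy SK model. A minimal numpy sketch follows; the squared exponential kernel, the linear trend basis, and all numerical values are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SK setup (all values are illustrative assumptions).
m, n, sigma2 = 15, 5, 0.5          # design points, replications, noise variance
x = rng.uniform(0.0, 1.0, m)       # covariate points x^m
x0 = 0.3                           # test point

def kern(a, b, tau2=1.0, phi=10.0):
    # squared exponential kernel playing the role of Sigma_M
    return tau2 * np.exp(-phi * (a - b) ** 2)

K = kern(x[:, None], x[None, :])   # Sigma_M(x^m, x^m)
k0 = kern(x, x0)                   # Sigma_M(x^m, x0)
Se = (sigma2 / n) * np.eye(m)      # Sigma_eps for homogeneous noise, n replications
A = np.linalg.inv(K + Se)

F = np.column_stack([np.ones(m), x])   # trend design matrix for f(x) = (1, x)
f0 = np.array([1.0, x0])

# The two components of the optimal MSE in the decomposition:
mse_M = kern(x0, x0) - k0 @ A @ k0             # contribution from estimating M
eta = f0 - F.T @ A @ k0
mse_beta = eta @ np.linalg.inv(F.T @ A @ F) @ eta   # contribution from estimating beta
mse_total = mse_M + mse_beta
```

Both components are nonnegative, and the first is bounded by the prior variance $\boldsymbol{\Sigma}_M(\mathbf{x}_0,\mathbf{x}_0)$, as the decomposition requires.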
Based on them, we can analyze the convergence behavior of the integrated $\mathrm{MSE}_{\mathrm{opt}}(\mathbf{x}_0)$, and consequently the maximal IMSE. \begin{theorem}\label{mseMthm} Under Assumptions \ref{c1}-\ref{c3}, the following relation holds \begin{align}\label{varbound1} & {\E}_{\mathbf{X}^m}{\E}_{\mathbf{X}_0} \left[\mathrm{MSE}_{\mathrm{opt}}^{(M)} (\mathbf{X}_0)\right]\leq \frac{2\overline \sigma_0^2}{mn} \gamma \left( \frac{\overline \sigma_0^2}{mn} \right) \nonumber \\ & ~~ + \underset{\zeta \in \mathbb{N}}{\inf} \,\left[ \left\{\frac{3mn}{\overline \sigma_0^2}\tr(\boldsymbol{\Sigma}_{M})+1\right\}\tr\left(\boldsymbol{\Sigma}_{M}^{(\zeta)}\right) + \tr(\boldsymbol{\Sigma}_{M}) \left\{ 300 \rho_*^2 \frac{b(m,\zeta,r_*) \gamma(\tfrac{\overline \sigma_0^2}{mn})}{\sqrt{m}} \right\}^{r_*} \right], \end{align} where \begin{align*} & b(m,\zeta,r_*) = \max \left( \sqrt{\max(r_*, \log \zeta)},~ \frac{\max(r_*,\log \zeta)}{m^{1/2 - 1/r_*}} \right), \\ & \gamma(a) = \sum_{l=1}^{\infty} \frac{\mu_l}{\mu_l+a} \text{ for any } a>0, ~~ \tr\left(\boldsymbol{\Sigma}_{M}^{(\zeta)}\right) = \sum_{l=\zeta+1}^{\infty} \mu_l \text{ for any } \zeta \in \mathbb{N}. \end{align*} \end{theorem} Theorem \ref{mseMthm} provides an upper bound for the expectation of the IMSE ${\E}_{\mathbf{X}_0} \left[\mathrm{MSE}_{\mathrm{opt}}^{(M)}(\mathbf{X}_0)\right]$. The reason we have another expectation ${\E}_{\mathbf{X}^m}$ before this IMSE is that $\mathbf{X}^m$ is a random sample from $\mathbb{P}_{\mathbf{X}}$ and hence this IMSE is also random in $\mathbf{X}^m$. The upper bound in Theorem \ref{mseMthm} takes a complicated form and some discussion is in order. First of all, the first term in the upper bound \eqref{varbound1} is the dominant term, while the terms inside the infimum are typically of smaller stochastic orders than the first term, as we will show later in the proof of Theorem \ref{jointmsenew1} for three types of kernels. 
Second, inside the first term in \eqref{varbound1}, the term $\gamma(\frac{\overline \sigma_0^2}{mn})$ is known as the \textit{effective dimensionality} of the kernel $\boldsymbol{\Sigma}_M$ with respect to $L_2(\mathbb{P}_{\mathbf{X}})$ (\citealt{Zha05}). As we will show later in Theorem \ref{jointmsenew1}, the term $\frac{\overline \sigma_0^2}{mn}\gamma(\frac{\overline \sigma_0^2}{mn})$ is the dominant term that determines the convergence rate of IMSE. Third, the terms inside the infimum sign are stochastic errors due to the randomness in $\mathbf{X}^m$, and under Assumptions \ref{c1}-\ref{c3}, they are of negligible orders by choosing a proper $\zeta \in \mathbb{N}$. For two random variables $U_m$ and $V_m$ that are measurable with respect to the sigma-algebra generated by $\mathbf{X}^m$, we use $U_m ~{\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} V_m$ to denote the relation that $|U_m/V_m|$ is bounded in $\mathbb{P}_{\mathbf{X}^m}$-probability. \begin{theorem}\label{msebetathm} Under Assumptions \ref{c1}-\ref{c4}, the following relation holds \begin{align}\label{varbound2} {\E}_{\mathbf{X}_0}\left[\mathrm{MSE}_{\mathrm{opt}}^{(\boldsymbol{\beta})} (\mathbf{X}_0)\right] {\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} & \frac{8q\tr(\boldsymbol{\Sigma}_M)}{\lambda_{\min}\left({\E}_{\mathbf{X}}[\mathbf{f}(\mathbf{X})\mathbf{f}(\mathbf{X})^\top]\right)} \Bigg\{8C_{\mathrm{f}}^2 \frac{\overline \sigma_0^2}{mn} \nonumber \\ &+ \inf_{\zeta\in \mathbb{N}} \Bigg[8C_{\mathrm{f}}^2 \frac{mn \overline \sigma_0^2}{ \underline \sigma_0^4} \rho_*^4 \tr\left(\boldsymbol{\Sigma}_M \right) \tr\left(\boldsymbol{\Sigma}_M^{(\zeta)}\right) + C_{\mathrm{f}}^2 \tr\left(\boldsymbol{\Sigma}_M^{(\zeta)}\right)\nonumber \\ &~+ C_{\mathrm{f}}^2 \tr\left(\boldsymbol{\Sigma}_M\right)\left\{ 200 \rho_*^2 \frac{ b(m,\zeta,r_*) \gamma(\tfrac{\overline \sigma_0^2}{mn})}{\sqrt{m}} \right\}^{r_*} \Bigg]\Bigg\}, \end{align} where $C_{\mathrm{f}}=\max_{1\leq s\leq q}\| \mathrm{f}_s \|_{\mathbb{H}}$,
$b(m,\zeta,r_*)$ and $\gamma(\cdot)$ are defined in Theorem \ref{mseMthm}. \end{theorem} Similar to the upper bound in Theorem \ref{mseMthm}, the terms inside the infimum can be made negligible compared to the leading term of $\frac{\overline \sigma_0^2}{mn}$ by choosing a proper $\zeta \in \mathbb{N}$. The upper bound in Theorem \ref{msebetathm} is a bound in probability, which means that as $m\to\infty$, the IMSE in \eqref{varbound2} is upper bounded in probability by the right-hand side. It is slightly weaker than the upper bound on the expectation of IMSE in Theorem \ref{mseMthm}, but suffices for deriving the convergence rate of the maximal IMSE. The following theorem gives our main rate result on the maximal IMSE. \begin{theorem}\label{jointmsenew1} Suppose that all $k$ designs have the sampling distribution $\mathbb{P}_{\mathbf{X}}$ for $\mathbf{X}^m$ and $\mathbf{X}_0$. Under Assumptions \ref{c1}-\ref{c4}, the following results hold with $r_*$ given in Assumption \ref{c3}: \begin{itemize} \item[(i)] \textit{(Finite-rank kernels)} If for every $i=1,\ldots,k$, $\boldsymbol{\Sigma}_{M,i}$ is a finite-rank kernel of rank $l_{*i}$, i.e., its eigenvalues satisfy $\mu_{i,1}\geq \mu_{i,2}\geq \ldots \geq \mu_{i,l_{*i}}>0$ and $\mu_{i,l_{*i}+1}=\mu_{i,l_{*i}+2}=\ldots=0$, then as $m\to\infty$, \begin{align}\label{eqadd1} \max_{i\in\{1,\ldots,k\}}{\E}_{\mathbf{X}_0}\left[\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)\right] ~{\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} ~ R^{F}(m,n)\equiv \max\left(\frac{1}{mn}, \frac{1}{m^{\frac{r_*}{2}}}\right). \end{align} \item[(ii)] \textit{(Exponentially decaying kernels)} If for every $i=1,\ldots,k$, $\boldsymbol{\Sigma}_{M,i}$ is a kernel with eigenvalues satisfying $\mu_{i,l} \leq c_{1i} \exp\left(-c_{2i} l^{\kappa_i/d} \right)$ for some constants $c_{1i}>0$, $c_{2i}>0$, $\kappa_i>0$ and all $l\in \mathbb{N}$. Let $\kappa_*=\min_{i\in\{1,\ldots,k\}}\kappa_i$.
Then, as $m\to\infty$, \begin{align}\label{eqadd2} \max_{i\in\{1,\ldots,k\}}{\E}_{\mathbf{X}_0}\left[\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)\right] ~{\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} ~ R^{E}(m,n) \equiv \max\left\{\frac{\left(\log(mn)\right)^{\frac{d}{\kappa_*}} }{mn}, \frac{ \left(\log(mn)\right)^{\frac{r_*(\kappa_*+d)}{\kappa_*}}}{m^{\frac{r_*}{2}}}\right\}. \end{align} \item[(iii)] \textit{(Polynomially decaying kernels)} If for every $i=1,\ldots,k$, $\boldsymbol{\Sigma}_{M,i}$ is a kernel with eigenvalues satisfying $\mu_{i,l} \leq c_{i} l^{-2\nu_i/d-1}$ for some constants $\nu_i>d/2$, $c_{i}>0$ and all $l\in \mathbb{N}$. Let $\nu_*=\min_{i\in\{1,\ldots,k\}}\nu_i$. Then, as $m\to\infty$, \begin{align}\label{eqadd3} \max_{i\in\{1,\ldots,k\}}{\E}_{\mathbf{X}_0}\left[\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)\right] ~{\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} ~ R^{P}(m,n) \equiv \max\left\{\frac{1}{(mn)^{\frac{2\nu_*}{2\nu_*+d}}}, \frac{n^{\frac{dr_*}{2\nu_*+d}}(\log (mn))^{r_*} }{m^{\frac{r_*(2\nu_*-d)}{2\nu_*+d}}} \right\}. \end{align} \end{itemize} \end{theorem} \begin{remark}\label{ratermk1} \textit{(Simplified convergence rates for fixed $n$)} The convergence rates of the maximal IMSE for the three types of kernels in Theorem \ref{jointmsenew1} appear somewhat complicated. However, since we perform the same number of simulation replications $n$ for each pair of covariate point and design, we can simplify the rate results by considering a \textit{fixed} $n$ and an increasing $m$ (to infinity). If $r_* > 2$ in Assumption \ref{c3}, then the larger terms in \eqref{eqadd1} and \eqref{eqadd2} are the first terms in the brackets; if $r_*>\tfrac{2\nu_*}{2\nu_*-d}$ in Case (iii), then the larger term in \eqref{eqadd3} is also the first term.
After dropping constant factors involving the fixed $n$, the convergence rates for the three kernels in Theorem \ref{jointmsenew1} can be simplified to: $1/m$ for Case (i), $(\log m)^{\frac{d}{\kappa_*}}/m$ for Case (ii), and $m^{-\frac{2\nu_*}{2\nu_*+d}}$ for Case (iii). \end{remark} The convergence rates of the maximal IMSE have been derived based on the upper bounds of ${\E}_{\mathbf{X}^m}{\E}_{\mathbf{X}_0} \left[\mathrm{MSE}_{\mathrm{opt}}^{(M)} (\mathbf{X}_0)\right]$ and ${\E}_{\mathbf{X}_0}\left[\mathrm{MSE}_{\mathrm{opt}}^{(\boldsymbol{\beta})} (\mathbf{X}_0)\right]$ in Theorems \ref{mseMthm} and \ref{msebetathm}. These rates are generally tight and cannot be improved. In Remark \ref{finitermk} below, we discuss the finite-rank kernels and formally prove in Theorem \ref{jointmsenew2} that the rate function $R^{F}(m,n)$ is optimal, in the sense that it cannot be improved further. \begin{remark}\label{finitermk} \textit{(Example of a finite-rank kernel)} To illustrate the tightness of the bounds in Theorem \ref{jointmsenew1}, we show that the rate $1/(mn)$ in \eqref{eqadd1} can be attained for fixed $n$ as $m\to\infty$. For simplicity, we assume that in Model \eqref{krigmodel1}, $\mathbf{f}_i(\mathbf{x})\equiv 0$ and $\epsilon_{il}(\mathbf{x})$ is a homogeneous white noise process with mean 0 and a common constant variance $\sigma^2>0$ for $l=1,2,...,n$, $i=1,\ldots,k$, and $\mathbf{x}\in \mathcal{X}$. Thus the model becomes $\overline Y_i(\mathbf{x}_j) = M_i(\mathbf{x}_j) + \overline \epsilon_i(\mathbf{x}_j)$ for $j=1,\ldots,m$ and $i=1,\ldots,k$. Let $\mathcal{X}\subseteq \mathbb{R}^d$, and let the $i$th covariance kernel be $\boldsymbol{\Sigma}_{M,i}(\mathbf{x},\mathbf{x}')=a_i(\mathbf{x}^\top \mathbf{x}'+b_i)$ for some known constants $a_i>0$ and $b_i>0$, $i=1,\ldots,k$. We analyze the MSE-optimal linear predictor in \eqref{stockrig1} and the asymptotic behavior of the optimal MSE in \eqref{optimmse}.
\begin{theorem}\label{jointmsenew2} \textit{(Exact rate for a finite-rank kernel)} Suppose that the covariance kernels are $\boldsymbol{\Sigma}_{M,i}(\mathbf{x},\mathbf{x}')=a_i\big(\mathbf{x}^\top \mathbf{x}'+b_i\big)$ for $\mathbf{x},\mathbf{x}'\in \mathcal{X}\subseteq \mathbb{R}^d$, known constants $a_i>0,b_i>0$ and $i=1,\ldots,k$. Under Assumptions \ref{c1}-\ref{c4} and the model setup described above, the MSE-optimal linear predictor in \eqref{stockrig1} and the optimal MSE in \eqref{optimmse} are given by \begin{align} \label{rank1blue} \widehat y_i(\mathbf{x}_0) & = a_i\widetilde{\mathbf{x}}_{i,0}^{\top} \mathbf{Z}_i^\top \left(a_i\mathbf{Z}_i \mathbf{Z}_i^\top + \frac{\sigma^2}{n}\mathbf{I}_m \right)^{-1} \overline{\mathbf{Y}}_i, \nonumber \\ \mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{x}_0) & = a_i\widetilde{\mathbf{x}}_{i,0}^{\top} \left(\mathbf{I}_{d+1} + \frac{a_i n}{\sigma^2}\mathbf{Z}_i^\top \mathbf{Z}_i \right)^{-1} \widetilde{\mathbf{x}}_{i,0}, \end{align} for any $\mathbf{x}_0\in \mathbb{R}^d$ and $i=1,\ldots,k$, where $\mathbf{I}_l$ is the $l\times l$ identity matrix, and \begin{align*} & \overline{\mathbf{Y}}_i=(\overline Y_i(\mathbf{x}_1),\ldots,\overline Y_i(\mathbf{x}_m))^\top \in \mathbb{R}^m, \\ & \widetilde{\mathbf{x}}_{i,0} = \left( \begin{array}{c} \sqrt{b_i} \\ \mathbf{x}_0 \end{array}\right) \in \mathbb{R}^{d+1}, \quad \mathbf{Z}_i = \left( \begin{array}{ccc} \sqrt{b_i} & \ldots & \sqrt{b_i} \\ \mathbf{x}_1 & \ldots & \mathbf{x}_m \end{array} \right)^\top \in \mathbb{R}^{m\times (d+1)}. \end{align*} Let $\mathbb{P}_{\mathbf{X}}$ be any sampling distribution on $\mathbb{R}^d$ for $\mathbf{X}_1,\ldots,\mathbf{X}_m,\mathbf{X}_0$, and assume that its second moment ${\E}_{\mathbf{X}_0}(\mathbf{X}_0 \mathbf{X}_0^\top)$ exists.
Then as $m\to\infty$, \begin{align}\label{mseank1limit} mn \cdot \max_{i\in \{1,\ldots,k\}}{\E}_{\mathbf{X}_0}\left[\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)\right] \to (d+1)\sigma^2, \quad \text{ almost surely in } \mathbb{P}_{\mathbf{X}^m}. \end{align} \end{theorem} Theorem \ref{jointmsenew2} shows that the maximal IMSE of the covariance kernel $\boldsymbol{\Sigma}_{M,i}(\mathbf{x},\mathbf{x}')=a_i\big(\mathbf{x}^\top \mathbf{x}'+b_i\big)$ decreases asymptotically at the rate $(d+1)\sigma^2/(mn)$. For fixed $n$, this shows that the rate $1/m$ given in \eqref{eqadd1} for finite-rank kernels is tight and cannot be improved. \end{remark} \section{Convergence Rates of IPFS} \label{sec:4} We next consider the problem of selecting the best design from the $k$ alternatives, with their mean functions given in Model \eqref{krigmodel1}, and study how fast $\mathrm{PFS}(\mathbf{X}_0)$ converges to 0 (or equivalently, how fast $\mathrm{PCS}(\mathbf{X}_0)$ converges to 1). Similar to the analysis of the maximal IMSE before, the convergence rate here is again in the average sense, obtained by taking expectations of $\mathrm{PFS}(\mathbf{X}_0)$ under three probability measures: (i) the joint Gaussian measure on $M_i(\cdot)$ ($i=1,\ldots,k$), denoted by $\mathbb{P}_M$ (with the expectation denoted by ${\E}_M$), induced by the $k$ independent Gaussian processes with mean zero and covariance function $\boldsymbol{\Sigma}_{M,i}(\cdot,\cdot)$ for $i=1,\ldots,k$; (ii) the probability measure of the test point $\mathbb{P}_{\mathbf{X}_0}$; and (iii) the probability measure of the sample $\mathbb{P}_{\mathbf{X}^m}$. In the following, $R(m,n)$ refers to the rate function of the maximal IMSE, which becomes $R^F(m,n)$, $R^E(m,n)$ or $R^P(m,n)$ under the corresponding kernels in Theorem \ref{jointmsenew1}. The following additional assumptions will lead to faster convergence rates of PFS in some particular scenarios.
\vspace{2mm} \begin{enumerate}[label=A.\arabic*] \setcounter{enumi}{4} \item \label{c6} The simulation errors $\epsilon_{il} (\mathbf{x})$'s are independent normal random variables following $N(0,\sigma_{i}^2(\mathbf{x}))$ for all $i=1,\ldots,k$, $l=1,\ldots,n$ and $\mathbf{x}\in \mathcal{X}$. \item \label{c7} For any given $\xi\in(0,1/2)$, there exist constants $w_1>0,w_2>0,m_0\geq 1$ that depend on $\xi$, such that for $m\geq m_0$, for any $t>0$, \begin{align} &\mathbb{P}_{\mathbf{X}^m} \left\{\mathbb{P}_{\mathbf{X}_0}\left(\frac{\max_{i\in \{1,\ldots,k\}}\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)}{R(m,n)}\geq t\right) \leq w_1 \exp\left(-w_2 t\right)\right\}\geq 1-\xi. \end{align} \item \label{c8} For any given $\xi\in(0,1/2)$, there exist constants $w_3>0,m_0\geq 1$ that depend on $\xi$, such that for $m\geq m_0$, \begin{align} \label{eq:c8} &\mathbb{P}_{\mathbf{X}^m} \left\{\frac{\max_{i\in \{1,\ldots,k\}}\sup_{\mathbf{x}_0\in \mathcal{X}}\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{x}_0)}{R(m,n)} \leq w_3 \right\}\geq 1-\xi. \end{align} \end{enumerate} Although \ref{c6} is stronger than \ref{c1} in that it assumes normally distributed observation noise, it is a common assumption in simulation-based optimization problems. We emphasize that the normality assumption in \ref{c6} is only needed for deriving tighter and exponentially small bounds for IPFS in Theorem \ref{pfsthm} below. Without \ref{c6}, we can still establish convergence rates of IPFS directly from the convergence rates of IMSE in Theorem \ref{jointmsenew1}; see Theorem \ref{pfsthm} Part (i). Assumption \ref{c7} requires that the normalized maximum of the $k$ MSE's has an exponentially decaying tail with a high probability. This is often the case when the MSE is distributed like a chi-square variable with an exponentially decaying right tail. \ref{c8} is an alternative condition stronger than \ref{c7}, requiring the supremum of the MSE over $\mathcal{X}$ to be bounded with a high probability.
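As a quick Monte Carlo illustration of the tail behavior behind \ref{c7}, a chi-square variable (a stand-in for the scaled maximal MSE) indeed satisfies a bound of the form $w_1\exp(-w_2 t)$; the constants $w_1=1$ and $w_2=1/2$ below are illustrative assumptions, not values derived in the paper:

```python
import math, random

random.seed(2)

# X ~ chi-square(1), standing in for the (normalized) maximal MSE.
N = 50_000
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(N)]

w1, w2 = 1.0, 0.5   # illustrative tail constants
for t in (2.0, 4.0, 6.0):
    tail = sum(s >= t for s in samples) / N   # empirical P(X >= t)
    print(f"t={t}: empirical tail {tail:.4f} vs bound {w1 * math.exp(-w2 * t):.4f}")
```

The empirical tail sits well below the exponential bound at each threshold, which is the behavior the assumption encodes.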
Both \ref{c7} and \ref{c8} can be rigorously verified for the finite-rank kernel in Remark \ref{finitermk} and Theorem \ref{jointmsenew2}; see Theorem 6 and its proof in the Online Supplement. \ref{c6} together with either \ref{c7} or \ref{c8} will allow tighter bounds for the tail probability of PFS, and hence sharper convergence rates of IPFS, as shown in the next theorem. \begin{theorem}\label{pfsthm} Suppose that all the $k$ designs have the sampling distribution $\mathbb{P}_{\mathbf{X}}$ for $\mathbf{X}^m$ and $\mathbf{X}_0$. Let $\delta_0$ be the IZ parameter in the definition of $\mathrm{PFS}(\mathbf{X}_0)$. \begin{itemize} \item[(i)] If Assumptions \ref{c1}-\ref{c4} hold, then as $m\to \infty$, $\E_M\E_{\mathbf{X}_0} [\mathrm{PFS}(\mathbf{X}_0)] {\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} R(m,n) $; \item[(ii)] If Assumptions \ref{c1}-\ref{c7} hold, then as $m\to \infty$, \begin{align*} & {\E}_M{\E}_{\mathbf{X}_0} \left[\mathrm{PFS}(\mathbf{X}_0)\right] {\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} \exp\left\{-\frac{1}{2}w_2^{1/2}\delta_0\left[R(m,n)\right]^{-1/2}\right\}, \end{align*} where $w_2$ is given in Assumption \ref{c7}; \item[(iii)] If Assumptions \ref{c1}-\ref{c6} and \ref{c8} hold, then as $m\to \infty$, \begin{align*} & {\E}_M{\E}_{\mathbf{X}_0} \left[\mathrm{PFS}(\mathbf{X}_0)\right] {\lesssim}_{\mathbb{P}_{\mathbf{X}^m}} \exp\left\{-\frac{1}{4}w_3^{-1}\delta_0^2\left[R(m,n)\right]^{-1}\right\}, \end{align*} where $w_3$ is given in Assumption \ref{c8}. \end{itemize} \end{theorem} The convergence rates of IPFS in Theorem \ref{pfsthm} include the measure $\mathbb{P}_M$ and its expectation ${\E}_M$, mainly for the convenience of technical treatment, so that our result is general and does not depend on the particular shapes of the $M_i(\cdot)$ functions. Theorem \ref{pfsthm} provides three convergence rates, from slower to faster, under sequentially stronger sets of assumptions.
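Before discussing each part, it helps to see how fast the IMSE rate functions $R(m,n)$ of Theorem \ref{jointmsenew1} themselves decay. The sketch below keeps only the dominant first terms and drops constants; the parameter values ($d=1$, $\kappa_*=2$, $\nu_*=1$, $n=10$) are illustrative assumptions:

```python
import math

# Dominant terms of the three IMSE rate functions (constants dropped).
# d, kappa (kappa_*), and nu (nu_*) are illustrative choices, not from the paper.
d, kappa, nu, n = 1, 2.0, 1.0, 10

def r_finite(m):   # finite-rank kernels: 1/(mn)
    return 1.0 / (m * n)

def r_exp(m):      # exponentially decaying eigenvalues: (log mn)^{d/kappa}/(mn)
    return math.log(m * n) ** (d / kappa) / (m * n)

def r_poly(m):     # polynomially decaying eigenvalues: (mn)^{-2nu/(2nu+d)}
    return (m * n) ** (-2.0 * nu / (2.0 * nu + d))

for m in (100, 1000, 10000):
    print(m, r_finite(m), r_exp(m), r_poly(m))
```

For these parameter values the finite-rank rate is the smallest and the polynomial-decay rate the largest at every $m$ shown, matching the ordering of kernel smoothness.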
In Part (i), if we only assume \ref{c1}-\ref{c4} without the normality assumption on the error terms, then by a direct application of Markov's inequality, the convergence rate of IPFS is at least as fast as that of the maximal IMSE given in Theorem \ref{jointmsenew1}. If the covariance kernels of the $k$ designs belong to one of the three types of kernels described before, then when $n$ is fixed, we know from Theorem \ref{jointmsenew1} and Remark \ref{ratermk1} that $R(m,n)$ converges to zero at the rate of $1/m$, $(\log m)^{\frac{d}{\kappa_*}}/m$ and $m^{-\frac{2\nu_*}{2\nu_*+d}}$ for the three types of kernels, respectively. As a result, Part (i) of Theorem \ref{pfsthm} implies that these polynomial rates for IMSE also hold for IPFS (and IPGS): when $n$ is fixed, IPFS converges to zero (and the IPGS converges to one) at least polynomially fast in $m$, at least at the rate of $1/m$, $(\log m)^{\frac{d}{\kappa_*}}/m$ and $m^{-\frac{2\nu_*}{2\nu_*+d}}$ for the three types of kernels, respectively. In Part (ii) of Theorem \ref{pfsthm}, the additional normality assumption of \ref{c6} and Assumption \ref{c7} yield sharper convergence rates of IPFS than Part (i), improving the polynomial rate to an exponential rate. In particular, following Theorem \ref{jointmsenew1} and Remark \ref{ratermk1}, if $n$ is fixed and $R(m,n)$ converges to zero at the rate of $1/m$, $(\log m)^{\frac{d}{\kappa_*}}/m$ and $m^{-\frac{2\nu_*}{2\nu_*+d}}$ for the three types of kernels, respectively, then Part (ii) of Theorem \ref{pfsthm} implies that the IPFS converges to zero (and the IPGS converges to one) at least exponentially fast in $m$, at least at the rate of $\exp(-c\sqrt{m})$, $\exp(-c\sqrt{m}(\log m)^{-\frac{d}{2\kappa_*}})$ and $\exp(-cm^{\frac{\nu_*}{2\nu_*+d}})$ for the three types of kernels, respectively, where the constant $c=w_2^{1/2}\delta_0/2$.
In Part (iii) of Theorem \ref{pfsthm}, the additional Assumptions \ref{c6} and \ref{c8} yield even sharper convergence rates of IPFS than Part (ii). Following Theorem \ref{jointmsenew1} and Remark \ref{ratermk1}, if $n$ is fixed and $R(m,n)$ converges to zero at the rate of $1/m$, $(\log m)^{\frac{d}{\kappa_*}}/m$ and $m^{-\frac{2\nu_*}{2\nu_*+d}}$ for the three types of kernels, respectively, then Part (iii) of Theorem \ref{pfsthm} implies that the IPFS converges to zero (and the IPGS converges to one) at least exponentially fast in $m$, at least at the rate of $\exp(-cm)$, $\exp(-cm(\log m)^{-\frac{d}{\kappa_*}})$ and $\exp(-cm^{\frac{2\nu_*}{2\nu_*+d}})$ for the three types of kernels, respectively, where the constant $c=w_3^{-1}\delta_0^2/4$. Each of these exponential rates converges to zero faster than the corresponding exponential rate from Part (ii). \begin{remark} Parts (ii) and (iii) of Theorem \ref{pfsthm} show that under additional assumptions on the distribution of the simulation noises and the tails of $\max_{i\in \{1,\ldots,k\}}\mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{X}_0)$ and $\max_{i\in \{1,\ldots,k\}} \sup_{\mathbf{x}_0\in \mathcal{X}} \mathrm{MSE}_{i,\mathrm{opt}}(\mathbf{x}_0)$, the convergence rate of IPFS can be exponentially fast. Note that this is distinct from the well-established exponential convergence rate of the PFS in R\&S obtained by comparing sample means of different designs (\citealt{dai1996,glynn2004}). In those studies, PFS is reduced by increasing the number of simulation replications for each design instead of increasing the number of covariate points, and its exponential convergence rate takes the form of $\exp(-\varrho n_{tot})$, where $n_{tot}$ is the total number of simulation samples and $\varrho$ is related to some large-deviations rate function.
\end{remark} \begin{remark} \textit{(On the independence across different designs)} In the development of convergence rates of the two target measures, we have assumed in \ref{c1} that the simulation samples are independent across different designs $i$. This assumption naturally holds when the designs are categorical, e.g., when the designs are the treatment methods for a certain disease. However, when the designs are represented as vectors in a metric space, they usually demonstrate spatial correlation, i.e., designs that are close to each other tend to have similar performance. For this case, our method and analysis can still be applied, but if the model can capture this spatial correlation between designs, it might lead to faster convergence rates for the maximal IMSE and IPFS. A possible way to do this is to build one SK model that includes both the covariates and designs as inputs for predicting the system performance. That model is substantially different from ours, and further investigation along this direction is beyond the scope of this paper. \end{remark} \begin{remark} \textit{(On the choices of $m$ and $n$)} In Theorems \ref{mseMthm}-\ref{pfsthm}, we have assumed that the number of replications $n_i$ for covariate points of design $i$ remains the same across different designs. In practice, it is possible that the decision maker wants to unevenly allocate the simulation samples among the designs to optimize some target measures. In this case, the $n_i$'s are no longer identical to each other. This falls within the well-studied problem of ranking and selection (R\&S) in simulation. For this purpose, our analysis can still be applied. We will discuss this direction in Section 4 of the Online Supplement.
When all the covariate points receive the same number of replications $n$, we can see that in all three cases of Theorem \ref{jointmsenew1}, the first term inside the maximum function in the rate expression is always a function of $n_{c}=mn$, while the second term depends on $m$ and $n$ separately. In order to make the maximal IMSE and IPFS decrease as fast as possible, we need to make the second term as small as possible, which means that for all three cases of Theorem \ref{jointmsenew1}, the best choice is to set $n=O(1)$, so that $m$ increases at the same order as $n_{c}$. Intuitively, this is because the maximal IMSE involves averaging the MSE over all potential locations $\mathbf{x}_0\in \mathcal{X}$, and we should use as many distinct covariate points as possible in order to cover more locations in $\mathcal{X}$. We emphasize that this analysis on the orders of $m$ and $n$ is only in the asymptotic sense based on our theoretical upper bounds. \end{remark} \begin{remark} \textit{(Determining the value of $m$)} When the $n_i$'s are of a constant order, Theorems \ref{jointmsenew1} and \ref{pfsthm} imply that the maximal IMSE and IPFS decrease no slower than a polynomial order of $m$. This theory supports a natural procedure to determine the number of covariate points $m$. First, for given $m$ and $n_i$'s, the maximal IMSE and IPFS can be either calculated by numerical integration, or approximated by simple Monte Carlo estimators; see Section 3 of the Online Supplement. Second, after we fit a sequence of SK models with different sample sizes $m$, we can further fit a linear regression model with the logarithm of the maximal IMSE or IPFS as the response variable and $\log m$ as the predictor. Third, based on this fitted linear model, we solve for the sample size $m^*$ such that the maximal IMSE or IPFS hits a small prespecified target precision.
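The three steps above can be sketched in a few lines. In the sketch below, the synthetic IMSE sequence (an exact power law $2.0\,m^{-0.8}$) and the target precision are assumptions chosen purely for illustration, standing in for Monte Carlo estimates from fitted SK models:

```python
import math

# Step 1 (stand-in): pretend the estimated maximal IMSE decays like 2.0 * m**(-0.8).
ms = [5, 8, 12, 18, 28, 42, 65, 100]
imse = [2.0 * m ** (-0.8) for m in ms]

# Step 2: least-squares fit of log(IMSE) = a + b * log(m).
lx = [math.log(m) for m in ms]
ly = [math.log(v) for v in imse]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
a = my - b * mx

# Step 3: solve a + b * log(m*) = log(target) for the required sample size m*.
target = 1e-3
m_star = math.exp((math.log(target) - a) / b)
print(round(m_star))
```

With an exact power law the fit recovers the slope $-0.8$, and $m^*$ solves the target equation exactly; with noisy Monte Carlo estimates the same recipe gives an approximate answer.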
This simple procedure for determining $m$ is often accurate with IMSE and can be slightly conservative with IPFS, since sometimes IPFS can decay exponentially fast in $m$ as shown in Theorem \ref{pfsthm}. We will illustrate the practical implementation of this procedure in Section 5.3 of the Online Supplement. \end{remark} \vspace{1mm} \section{Numerical Experiments} \label{sec:6} In this section, we adopt two benchmark functions and an M/M/1 queue example for numerical testing. These experiments provide a concrete illustration of the convergence rates of the maximal IMSE and IPFS, and show how factors such as the problem structure, covariance kernel, dimension of the covariate space, number of simulation replications and sampling distribution affect these rates. For all the experiments, we implement four types of covariance kernels ($\|\cdot\|$ denotes the Euclidean norm): \begin{enumerate} \item[(i)] Squared exponential kernel: $\boldsymbol{\Sigma}_M(\mathbf{x},\mathbf{x}')=\tau^2\exp\{-\varphi\|\mathbf{x}-\mathbf{x}'\|^2\}$, for $\mathbf{x},\mathbf{x}'\in \mathcal{X}$, $\tau^2>0$, and $\varphi>0$. \item[(ii)] Mat\'ern kernel with smoothness $\nu=5/2$: $\boldsymbol{\Sigma}_{M}(\mathbf{x},\mathbf{x}')=\tau^2(1+\sqrt{5}\varphi\|\mathbf{x}-\mathbf{x}'\| + \frac{5}{3}\varphi^2\|\mathbf{x}-\mathbf{x}'\|^2)\cdot \exp\{-\sqrt{5}\varphi\|\mathbf{x}-\mathbf{x}'\|\}$, for $\mathbf{x},\mathbf{x}'\in \mathcal{X}$, $\tau^2>0$, and $\varphi>0$. \item[(iii)] Mat\'ern kernel with smoothness $\nu=3/2$: $\boldsymbol{\Sigma}_{M}(\mathbf{x},\mathbf{x}')=\tau^2(1+\sqrt{3}\varphi\|\mathbf{x}-\mathbf{x}'\|)\cdot \exp\{-\sqrt{3}\varphi\|\mathbf{x}-\mathbf{x}'\|\}$, for $\mathbf{x},\mathbf{x}'\in \mathcal{X}$, $\tau^2>0$, and $\varphi>0$.
\item[(iv)] Exponential kernel (Mat\'ern kernel with smoothness $\nu=1/2$): $\boldsymbol{\Sigma}_{M}(\mathbf{x},\mathbf{x}')=\tau^2\exp\{-\varphi\|\mathbf{x}-\mathbf{x}'\|\}$, for $\mathbf{x},\mathbf{x}'\in \mathcal{X}$, $\tau^2>0$, and $\varphi>0$. \end{enumerate} Similar to \citet{ankenman2010}, the covariance matrices $\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)$'s are estimated by $\text{diag}\left\{\widetilde \sigma_i^2(\mathbf{x}_1)/n_1,\ldots, \widetilde \sigma_i^2(\mathbf{x}_m)/n_m\right\}$, where $\widetilde \sigma_i^2(\mathbf{x}_j)$ ($j=1,\ldots,m$) are estimated by the least-squares method based on the sample variances $\widehat \sigma_i^2(\mathbf{x}_j)=(n_j-1)^{-1}\sum_{l=1}^{n_j} [Y_{il}(\mathbf{x}_j)-\overline Y_i(\mathbf{x}_j)]^2$ ($j=1,\ldots,m$). Then given the estimated $\boldsymbol{\Sigma}_{\epsilon,i}(\mathbf{x}^m)$'s, for each of the four kernels, we estimate the parameters $\varphi$ and $\tau^2$ by maximum likelihood. The squared exponential kernel (i) belongs to the exponentially decaying kernels and the other three kernels (ii)-(iv) belong to the polynomially decaying kernels. The smoothness of sample paths decreases from kernel (i) to kernel (iv), with (i) giving the smoothest and (iv) the roughest sample paths. In all experiments below, we compute the estimated MSE at a single point $\mathbf{x}_0$ by the formula $\widehat{\mathrm{MSE}}(\mathbf{x}_0) = [\widehat y(\mathbf{x}_0) -y(\mathbf{x}_0)]^2$, where $y(\mathbf{x}_0)$ is the true function value at $\mathbf{x}_0$ and $\widehat y(\mathbf{x}_0)$ is the prediction from the fitted SK model. To evaluate the IMSE $\E_{\mathbf{X}_0}[\mathrm{MSE}_{\mathrm{opt}}(\mathbf{X}_0)]$ over the domain $\mathcal{X}$, we sample $T$ points of $\mathbf{x}_0$ from $\mathcal{X}$ according to the distribution $\mathbb{P}_{\mathbf{X}}$ and average their estimated MSEs $\widehat{\mathrm{MSE}}(\mathbf{x}_0)$.
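The Monte Carlo IMSE evaluation just described amounts to averaging squared prediction errors over draws from $\mathbb{P}_{\mathbf{X}}$. A minimal sketch, in which the "true" and "fitted" functions are stand-ins rather than an actual SK fit:

```python
import math, random

random.seed(1)

def y_true(x0):
    return math.sin(x0)              # stand-in for the true mean function

def y_hat(x0):
    return math.sin(x0) + 0.05 * x0  # stand-in for a fitted SK predictor

# IMSE estimate: average MSE-hat(x0) = (y_hat - y_true)^2 over T draws from P_X.
T = 10_000
draws = [random.uniform(0.0, 1.0) for _ in range(T)]
imse_hat = sum((y_hat(x) - y_true(x)) ** 2 for x in draws) / T
print(imse_hat)   # close to 0.0025 * E[X^2] = 0.0025/3 for uniform P_X on [0,1]
```

The same averaging over the $T$ draws, with the squared error replaced by the false-selection indicator, gives the IPFS estimate.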
In our experiments, $T$ is chosen as $10^3$, $10^4$, or $10^5$, depending on the dimension of $\mathbf{x}$. Monte Carlo estimates based on this setting of $T$ are in general accurate enough. Similarly, for each of the $T$ testing locations $\mathbf{x}_0$, we compute the true minimum mean performance $y^\circ(\mathbf{x}_0)$ and the estimated minimum mean performance $\widehat y^\circ(\mathbf{x}_0)$ according to \eqref{bestdesigndef}. Then the IPFS $\E_{\mathbf{X}_0}[\mathrm{PFS}(\mathbf{X}_0)]$ is computed by averaging over the $T$ points drawn from $\mathbb{P}_{\mathbf{X}}$. \subsection{Benchmark Functions} We consider the following common benchmark functions. In all cases, $\mathbf{x}=(x_1,\ldots,x_d)^\top\in \mathbb{R}^d$ is the covariate, $\mathbf{z}_i \in \mathbb{R}^d$'s are the ``solutions'' that index the different designs, and $\epsilon(\mathbf{x})$ is independent noise distributed as $N(0,(\sqrt{2})^2)$. 1. De Jong's function: \begin{equation}\label{eq:bench1} Y(\mathbf{x})=M(\mathbf{x})+\epsilon(\mathbf{x}) = \sum_{l=1}^d (x_l-z_l)^2+\epsilon(\mathbf{x}). \end{equation} For the function $M(\mathbf{x})$, the global minimum $\mathbf{x}^*$ is attained at $x_l=z_l$, $l=1,2,...,d$ with $M(\mathbf{x}^*)=0$. We consider 10 discrete designs with the $i$-th design $\mathbf{z}^i=(\underbrace{i,...,i}_{d})$, $i=1,2,...,10$. 2. Griewank's function: \begin{equation}\label{eq:bench4} Y(\mathbf{x})=M(\mathbf{x})+\epsilon(\mathbf{x})=\frac{1}{4000}\sum_{l=1}^d (x_l-z_l)^2-\prod_{l=1}^d \cos\left(\frac{x_l-z_l}{\sqrt{l}}\right)+1+\epsilon(\mathbf{x}). \end{equation} For the function $M(\mathbf{x})$, the global minimum $\mathbf{x}^*$ is attained at $x_l=z_l$, $l=1,2,...,d$ with $M(\mathbf{x}^*)=0$. We consider 10 discrete designs with the $i$-th design $\mathbf{z}^i=(\underbrace{i,...,i}_{d})$, $i=1,2,...,10$. Note that the performance of these functions depends on both the covariate $\mathbf{x}$ and the design (solution) $\mathbf{z}$.
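The two benchmark families above, together with the $N(0,(\sqrt{2})^2)$ simulation noise, can be coded directly; a sketch in which the function and variable names are our own:

```python
import math, random

random.seed(0)

def dejong(x, z):
    # De Jong's mean function M(x) = sum_l (x_l - z_l)^2
    return sum((xl - zl) ** 2 for xl, zl in zip(x, z))

def griewank(x, z):
    # Griewank's mean function; the l-th cosine is scaled by sqrt(l)
    return (sum((xl - zl) ** 2 for xl, zl in zip(x, z)) / 4000.0
            - math.prod(math.cos((xl - zl) / math.sqrt(l))
                        for l, (xl, zl) in enumerate(zip(x, z), start=1))
            + 1.0)

def simulate(mean_fn, x, z, n=10):
    # n replications with N(0, (sqrt 2)^2) noise, as in the experimental setup
    return [mean_fn(x, z) + random.gauss(0.0, math.sqrt(2.0)) for _ in range(n)]

# design i has solution z^i = (i, ..., i); M attains its minimum 0 at x = z
designs = [(float(i),) * 3 for i in range(1, 11)]
```

Both mean functions vanish at $\mathbf{x}=\mathbf{z}$, which is the global-minimum property stated above.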
We write the response as $y(\mathbf{x})$ to highlight the input $\mathbf{x}$ of the SK model. In this numerical test, we consider De Jong's function with $d=1$ and $3$ and Griewank's function with $d=1$ and $10$. To better understand the two test functions, we provide plots of them in Section 5.1 of the Online Supplement. De Jong's function is relatively smooth. Griewank's function is highly nonlinear with many oscillations, which makes SK modeling difficult when the number of covariate points $m$ is small. We consider three sampling distributions for $\mathbf{X}^m$: uniform, truncated normal and normal distributions. The covariate space is $\mathcal{X} = [1,10]^d$ when $d=1$ and $\mathcal{X} = [1,4]^d$ when $d=3,10$ for the uniform and truncated normal sampling, and $\mathcal{X} = \mathbb{R}^d$ for the normal sampling. For the truncated normal distribution, the mean and variance on each dimension are $(5.5,7^2)$ when $d=1$ and $(2.5,3^2)$ when $d=3,10$. The normal distribution on each dimension is $N(5.5,(\sqrt{3})^2)$ when $d=1$ and $N(2.5,1^2)$ when $d=3,10$. We let the number of covariate points $m$ increase geometrically from $m=5$ to $m=100$ in the set $\{5,8,12,18,28,42,65,100\}$, roughly with the common ratio of $1.53$ when $d=1$. When $d=3$, $m$ increases from $m=5$ to $m=280$ in the set $\{5,9,16,27,50,87,155,280\}$, roughly with the common ratio of $1.77$; when $d=10$, $m$ increases from $m=5$ to $m=1000$ in the set $\{ 5,11,23,49,103,220,470,1000 \}$, roughly with the common ratio of $2.13$. We fix the number of replications at each $\mathbf{x}$ for all designs at $n=10$. For the indifference-zone parameter $\delta_0$, we set $\delta_0=0.05$ for the one-dimensional De Jong's function, $\delta_0=0.1$ for the one-dimensional Griewank's function and the three-dimensional De Jong's function, and $\delta_0=0.2$ for the ten-dimensional Griewank's function.
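A geometrically increasing grid of sample sizes like the ones above can be generated programmatically. The helper below is our own construction: it reproduces the $d=1$ grid exactly, while the $d=3$ and $d=10$ grids in the text are only roughly geometric and differ from this formula in a few entries:

```python
def geometric_grid(m_min, m_max, num):
    # num sizes growing by a common ratio from m_min to m_max, rounded to integers
    ratio = (m_max / m_min) ** (1.0 / (num - 1))
    return [round(m_min * ratio ** k) for k in range(num)]

print(geometric_grid(5, 100, 8))   # the d = 1 grid used above
```

Here the implied common ratio is $(100/5)^{1/7}\approx 1.53$, matching the ratio quoted for $d=1$.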
The maximal IMSE and IPFS in all cases are estimated by the average of 100 macro Monte Carlo replications. The convergence rates of the two measures under different sampling distributions, test functions and covariance kernels are illustrated in Figures \ref{fig:test1d1}-\ref{fig:test2d10}. In the legends, \texttt{SqExp} means the squared exponential kernel, \texttt{Matern 5/2} means the Mat\'ern kernel with $\nu=5/2$, \texttt{Matern 3/2} means the Mat\'ern kernel with $\nu=3/2$, and \texttt{Exp} means the exponential kernel. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.99\textwidth]{randd1DeJongmse1-eps-converted-to.pdf} \includegraphics[width=0.99\textwidth]{randd1DeJongpfs1-eps-converted-to.pdf} \end{center} \caption{1-d De Jong's functions: maximal IMSE and IPFS under different covariance kernels and sampling distributions.} \label{fig:test1d1} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.92\textwidth]{randd1griewangkmse1-eps-converted-to.pdf} \includegraphics[width=0.92\textwidth]{randd1griewangkpfs1-eps-converted-to.pdf} \end{center} \caption{1-d Griewank's functions: maximal IMSE and IPFS under different covariance kernels and sampling distributions.} \label{fig:test2d1} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.92\textwidth]{rand3DDejong-eps-converted-to.pdf} \includegraphics[width=0.92\textwidth]{rand3DDejongpfs-eps-converted-to.pdf} \end{center} \caption{3-d De Jong's functions: maximal IMSE and IPFS under different covariance kernels and sampling distributions.} \label{fig:test1d3} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.92\textwidth]{randd10griewangkmse-eps-converted-to.pdf} \includegraphics[width=0.92\textwidth]{randd10griewangkpfs-eps-converted-to.pdf} \end{center} \caption{10-d Griewank's functions: maximal IMSE and IPFS under different covariance kernels and sampling distributions.} \label{fig:test2d10} \end{figure} In terms of convergence 
patterns, the maximal IMSE decreases as $m$ increases in all cases, and the decreasing trends are very close to linear when $m$ exceeds 28 for $d=1,3$ and 103 for $d=10$. Since the maximal IMSE and $m$ are plotted on logarithmic scales, this implies that when $m$ is large enough, the maximal IMSE decreases polynomially with $m$. This observation agrees with our rate results in Theorem \ref{jointmsenew1}. The IPFS also decreases as $m$ increases in all cases, and the convergence rates are no slower than those of the maximal IMSE. In some cases, such as the uniform and truncated normal sampling on the 10-dimensional Griewank's function, the decreasing trends of the logarithmic IPFS are superlinear, suggesting that the IPFS might enjoy convergence rates faster than polynomial. These observations agree with the rate results in Theorem \ref{pfsthm}. Comparing the performance of the four covariance kernels, we observe that the exponential kernel performs the worst, with the largest maximal IMSE and IPFS in all tested cases, and its disadvantage is more obvious on De Jong's function. This is mainly because the sample paths from the exponential kernel are rough (continuous but not differentiable) while De Jong's function is very smooth. This mismatch leads to poor fits and predictions, and thus large values of the two target measures. The disadvantage becomes minor on Griewank's function because the rough sample paths generated from the exponential kernel become appropriate for modeling the oscillations in Griewank's function. Among the other three kernels, the Mat\'ern kernel with $\nu=5/2$ and the squared exponential kernel often perform better because their sample paths are smoother. Among the three sampling distributions, the uniform and truncated normal sampling have very similar performance. These two distributions are defined on the same supports, i.e., $\mathcal{X} = [1,10]^d$ when $d=1$ and $\mathcal{X} = [1,4]^d$ when $d=3,10$.
The truncated normal is set with relatively large variances ($7^2$ when $d=1$ and $3^2$ when $d=3,10$), which results in sufficiently spread-out covariate points and hence performance similar to the uniform sampling. The performance of the normal sampling is a little different. This is because the normal sampling is defined on an infinite support, so the space over which the MSE and PFS are integrated is different. However, we can see that the normal sampling is effective in reducing the maximal IMSE and IPFS. The values of the two measures under normal sampling are basically of the same order as those under the uniform and truncated normal sampling. \subsection{M/M/1 Queue} \label{subsection:mm1} The M/M/1 queue has analytical performance measures, which makes estimating the PFS convenient. In this test, our example is taken from \cite{zhou2015}. Customers arrive at a system according to a Poisson process with rate $x$, and the service time of the server follows an exponential distribution with mean $1/\lambda$. We consider two types of cost: the service cost $c_u \lambda$, with $c_u$ being the per unit cost of the service rate, and the waiting cost, determined by the customers' mean waiting time $\E[\mathcal{W}(\lambda)]$ in the system. In addition, there is an upper bound $\mathcal{U}$ on the total cost. When the system is unstable (i.e., $x/\lambda\geq 1$), it incurs the cost $\mathcal{U}$. Therefore, the total cost $TC$ of this system is \begin{equation*} TC(x,\lambda)= \begin{cases} \min\{\E[\mathcal{W}(\lambda)]+c_u \lambda,\mathcal{U}\}, \ &\text{ if }x/\lambda<1;\\ \mathcal{U}, &\text{ otherwise.} \end{cases} \end{equation*} Note that for the M/M/1 queue, the mean waiting time $\E[\mathcal{W}(\lambda)]$ has the analytical form $1/(\lambda-x)$, and the total cost is minimized at $\lambda^*=x+1/\sqrt{c_u}$.
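The closed-form pieces above are easy to verify numerically. The sketch below uses an illustrative covariate value $x=2$ (with $c_u=0.1$ and $\mathcal{U}=2.5$ as in the experimental setup); it evaluates $TC(x,\lambda)$ on a grid and confirms that the grid minimizer is close to $\lambda^*=x+1/\sqrt{c_u}$:

```python
import numpy as np

def total_cost(x, lam, c_u=0.1, U=2.5):
    """TC(x, lambda): waiting cost 1/(lambda - x) plus service cost c_u*lambda,
    capped at U; an unstable system (x >= lambda) incurs the cost U."""
    if x >= lam:
        return U
    return min(1.0 / (lam - x) + c_u * lam, U)

x, c_u = 2.0, 0.1                                 # illustrative covariate value
lam_star = x + 1.0 / np.sqrt(c_u)                 # analytical minimizer
grid = np.linspace(x + 1e-3, 20.0, 200001)
lam_grid = grid[np.argmin([total_cost(x, l, c_u) for l in grid])]
print(abs(lam_grid - lam_star) < 1e-3)            # grid minimizer matches lam_star
```

The cap $\mathcal{U}$ never binds near the minimizer here, so the interior optimum of $1/(\lambda-x)+c_u\lambda$ is recovered.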
To fit into the framework of simulation with covariates, we consider 10 discrete designs with the $i$-th design $\lambda_i=6+0.3i$, $i=1,2,...,10$, and let $c_u=0.1$ and $\mathcal{U}=2.5$. The covariate $x$ is restricted to the open interval $\mathcal{X}=(0.5,4.5)$. We consider two sampling distributions $\mathbb{P}_{\mathbf{X}}$ for $\mathbf{X}^m$: uniform on $\mathcal{X}$ and truncated normal on $\mathcal{X}$ with mean 2.5 and variance $3^2$. We let $m$ take values in $\{5,10,20,40,80,160,320,640\}$ and $n$ take values in $\{5,10\}$. The maximal IMSE and IPFS are estimated by the average of 100 macro Monte Carlo replications. The results for the maximal IMSE and the IPFS across the 10 designs are summarized in Figures \ref{fig:mm1:mseheter} and \ref{fig:mm1:pfsheter}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{formula_heter_imse-eps-converted-to.pdf} \end{center} \caption{Maximal IMSE under different covariance kernels, sampling distributions and values of $n$.} \label{fig:mm1:mseheter} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{formula_heter_ipfs-eps-converted-to.pdf} \end{center} \caption{IPFS under different covariance kernels, sampling distributions and values of $n$.} \label{fig:mm1:pfsheter} \end{figure} Figure \ref{fig:mm1:mseheter} shows that on the logarithmic scale, the maximal IMSE across the 10 designs decreases almost linearly as $\log m$ increases, for all four kernels and both numbers of simulation replications tested. This observation agrees with our theory (Theorem \ref{jointmsenew1}) that the convergence rates of the maximal IMSE are polynomial in $m$ for the three types of covariance kernels, which include all four kernels implemented here. We note that this linear trend can be utilized to help an analyst make design decisions for achieving a target precision of the maximal IMSE.
More details are available in Section 5.3 of the Online Supplement. In Figure \ref{fig:mm1:mseheter}, increasing $n$ from $5$ to $10$ does not significantly reduce the maximal IMSE for any of the kernels. Among the four kernels, the exponential kernel gives a larger maximal IMSE than the other three, again due to the mismatch between its rough sample paths and the smooth target function, since $TC(x,\lambda)$ is always a smooth function of $x$ (infinitely differentiable) for all values of $\lambda_i$. Different sampling distributions on the covariate space do not seem to have a significant impact on the convergence pattern and rate. Figure \ref{fig:mm1:pfsheter} shows the convergence of the IPFS for $\delta_0=0.01$. It can be observed that the relative performance of the IPFS under different kernels, numbers of simulation replications and sampling distributions basically remains the same as that of the maximal IMSE, but the convergence rates of the IPFS are faster, demonstrating a superlinear pattern on the logarithmic scale. \begin{remark} In this research, we have employed SK models for system performance prediction. It is well known that the computational complexity of SK (or Gaussian process models) is $O(m^3)$, where $m$ is the number of covariate points. Although with a fixed sampling distribution for the covariate points we can collect all the covariate points in advance and build the SK models just once, this complexity keeps the computational time practically acceptable only when $m$ is no more than a few thousand, or tens of thousands when the offline simulation period is long. When $m$ becomes even larger than that, techniques from scalable Gaussian processes \citep{luo2013,hensman2013gaussian,wilson2015kernel} might be considered for improving the computational efficiency. \end{remark} \begin{remark}\label{remark:stat} In this research, we have adopted a fixed (static) distribution for sampling the covariate space.
In the meantime, there has been increasing interest in the development of adaptive design-of-experiment methods (\citealt{Garud2017}). As an initial investigation of the potential of adaptive methods for SK construction in simulation with covariates, we numerically compared our static sampling with an intuitive adaptive design procedure (Adaptive MSE Procedure). The results are provided in Section 5.2 of the Online Supplement. We observed that the static sampling considered in this research has empirical performance similar to the Adaptive MSE Procedure in general, and tends to be superior when (i) the dimension of the covariate space is high; (ii) the covariate distribution deviates from uniform; and (iii) the target function oscillates strongly. \end{remark} \section{Conclusions and Discussion} \label{sec:7} Simulation with covariates is a recently proposed framework for conducting simulation experiments \citep{hong2019,shen2019}. It comprises an offline simulation period and an online prediction period, and is able to substantially reduce the decision time. We provide a theoretical analysis of the predictive performance of the stochastic kriging model under this framework. We focus on two critical measures of the prediction errors, the maximal IMSE and the IPFS, and study their convergence rates, in order to understand the relationship between the offline simulation effort and the online prediction accuracy. For the maximal IMSE, we show that the convergence rates are $1/m$, $(\log m)^{\frac{d}{\kappa_*}}/m$ and $m^{-\frac{2\nu_*}{2\nu_*+d}}$ for finite-rank kernels, exponentially decaying kernels and polynomially decaying kernels respectively, where $m$ is the number of sampled covariate points, $\kappa_*$ and $\nu_*$ are kernel parameters, and $d$ is the dimension of the covariates.
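The three rates are easy to compare numerically. In this Python sketch, constants are omitted, so the values are meaningful only up to proportionality; the slope check uses the polynomial-decay case with $\nu_*=5/2$ and $d=1$:

```python
import numpy as np

def imse_rate(m, kernel, d=1, kappa=1.0, nu=2.5):
    """Theoretical maximal-IMSE rates, up to constants: 1/m for finite-rank
    kernels, (log m)^(d/kappa)/m for exponentially decaying kernels, and
    m^(-2 nu/(2 nu + d)) for polynomially decaying kernels."""
    m = np.asarray(m, dtype=float)
    if kernel == "finite-rank":
        return 1.0 / m
    if kernel == "exp-decay":
        return np.log(m) ** (d / kappa) / m
    if kernel == "poly-decay":
        return m ** (-2.0 * nu / (2.0 * nu + d))
    raise ValueError(kernel)

m = np.array([28, 42, 65, 100])
# On log-log axes a pure power law is linear; the fitted slope recovers
# the exponent -2*nu/(2*nu + d) = -5/6 for nu = 5/2, d = 1.
slope = np.polyfit(np.log(m), np.log(imse_rate(m, "poly-decay")), 1)[0]
print(slope)
```

This mirrors the log-log linearity observed in the benchmark figures once $m$ is large enough.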
For the IPFS, we show that the convergence rates are at least as fast as those of the maximal IMSE, and can be enhanced to exponential rates under some conditions. Since the rates derived for the maximal IMSE and IPFS are simple and concrete, and, to the best of our knowledge, are the first to characterize the convergence rates of the prediction errors in simulation with covariates, they serve as a benchmark against which improvements in rates from future prediction methods, built on possibly different assumptions, prediction models, covariance kernels and covariate point collection strategies, may be theoretically or numerically measured. In addition, the theoretical analysis in this research could be extended to facilitate new developments in simulation with covariates, e.g., when adaptive design procedures are used to explore the covariate space.
\section{Introduction} Repeated interaction schemes, a.k.a.\ collisional models~\cite{rep-int,rep-int2,Attal2,Karevski,rep-int3,rep-int4}, have played a vital role in the development of quantum optics~\cite{maser1,maser2,maser3,maser4} and the rapid evolution of quantum thermodynamics~\cite{Qth1,Qth2,Qth3,Qth4,Qth5}. The idealized and straightforward formalism has been crucial for designing and understanding quantum devices such as information engines~\cite{info1,info2,info3,Esposito rep. int.}, heat engines~\cite{Qth2, rep-int-engine,DenzlerLutz,enginePhil,PreB}, and quantum batteries~\cite{QBRepInt,QBRepInt2,BarraBattery,firstQB,quantacell1,quantacell2,campisibatt,qb1,qb2,qb3}. Recently, it was realized that the framework can be extended to deal with macroscopic reservoirs~\cite{PreB2,PreB}, expanding the reach of applications in quantum thermodynamics. For comprehensive reviews of the method and its applications, see~\cite{rev1} and~\cite{rev2}. In the simplest scenario, many copies of an auxiliary system in the Gibbs equilibrium thermal state interact sequentially with a system of interest. Each interaction step is described by a completely-positive trace-preserving (CPTP) map~\cite{BreuerBook}. The repeated interaction process corresponds to concatenations of the map, which will eventually bring the system to a nonequilibrium steady state or an equilibrium state. In equilibrium, heat does not flow to the environment, entropy is not produced, and the state is not sustained by work. When the repeated interaction brings the system to an equilibrium state, we say that we iterate a map with equilibrium. The thermodynamic quantities characterizing the process can be regarded as averages of stochastic counterparts defined on trajectories. These quantities satisfy fluctuation theorems~\cite{MHPPRE2015,HPNJP2013,Campisi,MHPPRX2018} that reveal essential properties of the process.
Particular attention has been drawn to efficiency fluctuations in different classical~\cite{ef1,ef2,ef3,ef4,ef5,ef6,ef7,ef8,ef9,ef10,ef11} or quantum~\cite{DenzlerLutz} engines due to their remarkable properties. This paper studies a quantum battery charged by a repeated interaction process. A quantum battery is a system that stores energy. The battery's charge is characterized by its ergotropy~\cite{ergotropy}, i.e., the maximum amount of energy that can be extracted with a unitary process. Once the energy is removed, the battery is recharged with a repeated interaction process that starts from the discharged state. In this way, we have a working cycle, and we analyze its thermodynamics. The most straightforward charging protocol considers the auxiliary systems in a nonequilibrium state. However, in that case, sustaining the charged state is dissipative. Ref.~\cite{BarraBattery} proposed a different kind of quantum battery, in which the charged state corresponds to the equilibrium state of the process. Therefore, the charge is preserved without dissipation and is provided by the work done in the recharging stage. We will study the efficiency fluctuations of the charging process. We will see that the equilibrium state of the battery determines its statistics. We will illustrate this in two examples. We will also study work fluctuations in the recharging stage. We discuss equilibrium fluctuations and the difference between fluctuations in the equilibrium states of battery systems and in the Gibbs equilibrium state. To characterize the fluctuations of the quantum system, we consider the two-point measurement scheme~\cite{esposito-mukamel-RMP}. Evaluating the fluctuations generally requires detailed information about the bath and the process; however, a simplification arises because we deal with maps with equilibrium. The remainder of this article is organized as follows. In section \ref{traj.map.sec}, we review the thermodynamics of CPTP maps, emphasizing the results for maps with equilibrium.
Then, in section \ref{sec:battery}, we introduce our system of study, namely the equilibrium quantum battery proposed in~\cite{BarraBattery}. Section \ref{sec.fluct} discusses the stochastic versions of the thermodynamic equalities and laws, emphasizing the results for maps with equilibrium again. Subsequently, in section \ref{sec:ejemplos}, we evaluate these fluctuations in two illustrative examples. We conclude this article in section \ref{secCONC}. \section{Thermodynamic description for completely positive trace-preserving maps} \label{traj.map.sec} Consider a system $S$ and a system $B$ that jointly evolve under the unitary $U=e^{-i\frac{\tau}{\hbar}(H_S+H_B+V)}$. The Hamiltonians $H_S$ and $H_B$ of $S$ and $B$ respectively are constant in time. The coupling between $S$ and $B$ during the time interval $(0,\tau)$ is given by the interaction energy $V$ and vanishes for $t<0$ and $t>\tau$. Initially, $S$ and $B$ are uncorrelated, i.e., their density matrix is the tensor product of the respective density matrices $\rho_{\rm tot}=\rho_S\otimes \omega_\beta(H_B)$, where $\omega_\beta(H_B)=\frac{e^{-\beta H_B}}{Z_B}$ is the Gibbs thermal state for $B$ with $\beta$ the inverse temperature, and $Z_B={\rm Tr} \,e^{-\beta H_B}.$ After the lapse of time $\tau$ the initial state $\rho_{\rm tot}$ changes to a new state, \begin{equation} \label{unitary} \rho'_{\rm tot}=U \left(\rho_S\otimes\omega_\beta(H_B)\right) U^\dag. \end{equation} In the following, we denote $\rho_S'={\rm Tr}_B\rho_{\rm tot}'$ and $\rho_B'={\rm Tr}_S\rho_{\rm tot}'$, where ${\rm Tr}_X$ is the partial trace over subsystem $X$. By tracing out $B$, one obtains a CPTP map $\mathcal{E}$ for the system $S$ evolution \begin{equation} \rho'_S=\mathcal{E}(\rho_S)={\rm Tr}_B\left[U \left(\rho_S\otimes\omega_\beta(H_B)\right) U^\dag\right]. 
\label{CPTP} \end{equation} The energy change of $S$, \begin{equation} \Delta E={\rm Tr}[H_S(\rho'_S-\rho_S)], \label{Av.Energy} \end{equation} can be written as the sum of \begin{equation} Q={\rm Tr}[H_B(\omega_\beta(H_B)-\rho'_B)], \label{Av.Heat} \end{equation} and \begin{equation} W={\rm Tr}[(H_S+H_B)(\rho'_{\rm tot}-\rho_{\rm tot})], \label{Av.Work} \end{equation} satisfying the first law $\Delta E=W+Q$. Note that $Q$ is minus the energy change of $B$; we call it the heat. $W$ is the energy change of the full $S+B$ system; we call it the switching work because it accounts for the energy cost of turning the interaction $V$ on and off at the beginning and end of the process, respectively~\cite{Barra2015,Chiara}. Consider the von Neumann entropy change \begin{equation} \Delta S_{\rm vN}=-{\rm Tr}[\rho_S'\ln\rho_S']+{\rm Tr}[\rho_S\ln\rho_S] \label{Av.Ent} \end{equation} of system $S$ and the heat $Q$ given in Eq.~(\ref{Av.Heat}). The entropy production $\Sigma=\Delta S_{\rm vN}-\beta Q$ is also given by~\cite{esposito-NJP} \begin{equation} \Sigma=D(\rho_{\rm tot}'||\rho_S'\otimes \omega_\beta(H_B))\geq 0, \label{Av.Ent.Prod} \end{equation} with $D(a||b)\equiv\Tr[a\ln a] - \Tr[a \ln b].$ The inequality in Eq.\eqref{Av.Ent.Prod} corresponds to the second law. Note that system $B$ does not need to be macroscopic; nevertheless, we will call it the bath. As in standard thermodynamics, analyzing the process $\rho_S\to\rho_S'={\mathcal E}(\rho_S)$ in terms of $\Delta E=W+Q$ and $\Sigma=\Delta S_{\rm vN}-\beta Q\geq 0$, with the quantities given in Eqs.~\eqref{Av.Energy},~\eqref{Av.Heat},~\eqref{Av.Work},~\eqref{Av.Ent}, and~\eqref{Av.Ent.Prod}, is very useful. Note that for their evaluation, particularly for the work, Eq.~(\ref{Av.Work}), and the entropy production, Eq.~(\ref{Av.Ent.Prod}), we need to know the full state $\rho_{\rm tot}'$.
\subsection{Maps with thermodynamic equilibrium} In a repeated interaction process, one concatenates $L$ CPTP maps $\mathcal{E}^L\equiv\mathcal{E}\circ\cdots\circ\mathcal{E}(\cdot)$ to describe a sequence of evolutions of a system coupled to a heat bath for a given lapse of time $\tau$. With each map $\mathcal{E}$, a fresh bath is introduced that exchanges heat with the system during the time that the interaction is turned on. The concatenated map $\mathcal{E}^L$ is also a CPTP map. The total work performed is the sum of the work done switching on and off the interaction energy with each bath. Similarly, the total heat is the sum of the heat exchanged with each bath. Let us assume that the map ${\mathcal E}$ has an attractive invariant state $\bar{\rho}$ defined as \[ \lim_{L\to\infty}{\mathcal E}^L(\rho_S)=\bar{\rho}, \,\forall \rho_S, \] and $\bar{\rho}={\mathcal E}(\bar{\rho})$. The process $\bar{\rho}\to\mathcal{E}(\bar{\rho})$ is thermodynamically characterized by $\Delta S_{\rm vN}=0=\Delta E$, see Eq.~(\ref{Av.Energy}) and Eq.~(\ref{Av.Ent}). If the entropy produced by the action of the map ${\mathcal E}$ on $\bar{\rho}$ is $\Sigma>0$, then we say that the invariant state is a non-equilibrium steady state. The invariant state is an {\it equilibrium state} if $\Sigma=0$, i.e., if the entropy production, Eq.~(\ref{Av.Ent.Prod}), vanishes by the action of $\mathcal E$ on $\bar{\rho}$. Maps with these particular states are called maps with equilibrium~\cite{StochPRE,StochPRE2}. According to Eq.~(\ref{Av.Ent.Prod}), $\Sigma=0$ for the steady state $\bar{\rho}$ if and only if $\bar{\rho}\otimes\omega_\beta(H_B)=U \left(\bar{\rho}\otimes\omega_\beta(H_B)\right) U^\dag$.
Equivalently, if the unitary $U$ in Eq.~(\ref{unitary}) satisfies $[U,H_0+H_B]=0$, where $H_0$ is an operator in the Hilbert space of the system, then the product state $\omega_\beta(H_0)\otimes\omega_\beta(H_B)$, with $\omega_\beta(H_0)=\frac{e^{-\beta H_0}}{Z_0}$, where $Z_0={\rm Tr}[e^{-\beta H_0}]$, is invariant under the unitary evolution in Eq.~(\ref{unitary}) and $\bar{\rho}=\omega_\beta(H_0)$ is an equilibrium state for the map in Eq.~(\ref{CPTP}). It follows from $[U,H_0+H_B]=0$ that the heat, Eq.\eqref{Av.Heat} and work, Eq.\eqref{Av.Work} simplify to \begin{equation} Q={\rm Tr}[H_0(\rho_S'-\rho_S)] \label{Eq.prop} \end{equation} and \begin{equation} W={\rm Tr}_S[(H_S-H_0)(\rho_S'-\rho_S)]. \label{Av.Work.Eq} \end{equation} The entropy production also reduces to an expression that does not involve the state of the bath. Indeed, we obtain \begin{equation} \Sigma =D(\rho_S||\omega_\beta(H_0))-D(\rho_S'||\omega_\beta(H_0)), \label{epthermal} \end{equation} which is positive due to the contracting character of the map~\cite{BreuerBook}. The averaged thermodynamic quantities for a map with equilibrium are only determined by the properties of the system of interest. If $H_0=H_S$, then the map is called thermal~\cite{terry2,terry3}. The equilibrium state is the Gibbs state $\omega_\beta(H_S)=e^{-\beta H_S}/Z_S$ with $Z_S={\rm Tr}[e^{-\beta H_S}]$, and the agent is passive because $W=0$ for every initial state $\rho_S$, see Eq.~(\ref{Av.Work.Eq}). When $H_0\neq H_S$, an active external agent has to provide (or extract) work to perform the map on a state $\rho_S$. However, once the system reaches the equilibrium state $\omega_\beta(H_0)$, the process $\omega_\beta(H_0)\to{\mathcal E}(\omega_\beta(H_0))=\omega_\beta(H_0)$ is performed with $W=0$, see Eq.~(\ref{Av.Work.Eq}), and $\Sigma=0$. Let us end this section with the following remark. 
Since the total evolution operator $U=e^{-i\frac{\tau}{\hbar}(H_S+H_B+V)}$ is time-independent, the equilibrium condition is satisfied by finding $H_0$ and $V$ such that $[H_0,H_S]=0$ and $[H_0+H_B,V]=0$~\cite{BarraBattery}. In this case, $H_S$ and $H_0$ share the same eigenbasis. To simplify the discussion of fluctuations, we consider non-degenerate eigenenergies. We denote the eigensystems as \[ H_S\ket{n}=E_n\ket{n},\quad H_0\ket{n}=E^0_n\ket{n}, \] with the eigenenergies in increasing order, $E_1< E_2<\cdots< E_N$. The eigenvalues $E_n^0$ are not necessarily ordered, but there is always a permutation, which we call $\pi$, of $(1,\ldots,N)\to (\pi_1,\ldots,\pi_N)$ such that $E_{\pi_1}^0\leq \cdots\leq E_{\pi_N}^0.$ \section{The battery} \label{sec:battery} As is well known, the Gibbs state $\omega_\beta(H_S)$ is passive, i.e., one cannot decrease (extract) its energy with a unitary operation~\cite{passivity1,passivity2}. This is not true for the equilibrium state \begin{equation} \label{cargado} \omega_\beta(H_0)=\sum_n \frac{e^{-\beta E_n^0}}{Z_0}\ket{n}\bra{n}, \end{equation} if a pair $(j,k)$ exists such that $(E_j-E_k)(E_j^0-E_k^0)<0$. In that case, the unitary operator $u$ with matrix elements $u_{ij}=\braket{i|u|j}=\delta_{\pi_i,j}$ extracts the ergotropy~\cite{ergotropy} \begin{equation} \label{ergotropy1} {\mathcal W}[\omega_\beta(H_0)]=\sum_{n=1}^N (E_{\pi_n}-E_n)\frac{e^{-\beta E_{\pi_n}^0}}{Z_0}> 0, \end{equation} where $\pi$ is the permutation that orders the $E_n^0$ increasingly. Once the ergotropy is extracted, the system is left in the passive state \begin{equation} \label{pasivo} \sigma_{\omega_\beta(H_0)}=u \omega_\beta(H_0) u^\dag=\sum_{n=1}^N\frac{e^{-\beta E_{\pi_n}^0}}{Z_0}\ket{n}\bra{n}. \end{equation} An equilibrium quantum battery was proposed in~\cite{BarraBattery} based on this observation. The system is driven by a repeated interaction process described by a map ${\mathcal E}$ with equilibrium state $\omega_\beta(H_0)$.
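The ergotropy formula can be made concrete with a minimal numerical sketch. The spectra below are hypothetical choices for illustration, and indices run from 0 rather than 1:

```python
import numpy as np

def ergotropy(E, E0, beta):
    """Ergotropy of omega_beta(H_0): sum_n (E_{pi_n} - E_n) e^{-beta E0_{pi_n}} / Z0,
    with E sorted increasingly and pi the permutation ordering the E0_n increasingly."""
    E, E0 = np.asarray(E, float), np.asarray(E0, float)
    pi = np.argsort(E0)                  # pi orders the E0_n increasingly
    Z0 = np.exp(-beta * E0).sum()
    return float(np.sum((E[pi] - E) * np.exp(-beta * E0[pi]) / Z0))

beta = 1.0
# Hypothetical three-level system: H_0 inverts the ordering of levels 1 and 2,
# so (E_1 - E_2)(E0_1 - E0_2) < 0 and the equilibrium state is active.
W_active = ergotropy([0.0, 1.0, 2.0], [0.0, 2.0, 1.0], beta)
W_passive = ergotropy([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], beta)
print(W_active > 0.0, W_passive == 0.0)   # active state stores work; aligned ordering does not
```

When the orderings of $E_n$ and $E_n^0$ agree, the permutation $\pi$ is the identity and the ergotropy vanishes, consistent with the passivity criterion above.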
Once the equilibrium is reached, it is kept at no cost ($W=0$), energy does not leak from it, and the battery's charge, characterized by the ergotropy ${\mathcal W}[\omega_\beta(H_0)]$, is preserved. Equilibrium states with ergotropy are called active. The thermodynamic cycle is as follows: the battery starts in the active equilibrium state; then the ergotropy \eqref{ergotropy1} is extracted, leaving the battery in the passive state \eqref{pasivo}, from which the repeated interaction process $\lim_{L\to\infty}{\mathcal E}^L(\sigma_{\omega_\beta(H_0)})$ recharges it. As a consequence of the second law, the recharging work $W_R={\rm Tr}_S[(H_S-H_0)(\omega_\beta(H_0)-\sigma_{\omega_\beta(H_0)})]$ is never smaller than the extracted ergotropy. In this way, the thermodynamic efficiency $0\leq \eta_{\rm th}\equiv {\mathcal W}[\omega_\beta(H_0)]/W_R\leq 1$ characterizes the operation of the device. \section{Fluctuations} \label{sec.fluct} \subsection{repeated interaction for a map with equilibrium} The thermodynamic quantities in Eqs.\eqref{Av.Energy}, \eqref{Av.Heat}, \eqref{Av.Work}, \eqref{Av.Ent} and \eqref{Av.Ent.Prod} were obtained as the average over their stochastic versions defined over trajectories using a two-point measurement scheme in~\cite{StochPRE}. Since all the relevant density matrices $\omega_\beta(H_S), \omega_\beta(H_0)$ and $\sigma_{\omega_\beta(H_0)}$ are diagonal in the system energy basis, we need only projective energy measurements in this work. A trajectory $\gamma=\{n;i_1,j_1,\ldots,i_L,j_L;m\}$ for the recharging process is defined by the initial and final energy outcomes, $\varepsilon_{i_k}$ and $\varepsilon_{j_k}$, for each auxiliary thermal system, and $E_n$ and $E_m$ for the system.
According to the two-point measurement scheme~\cite{esposito-mukamel-RMP}, its probability is \begin{equation} P^{(L)}_\gamma=|\langle j_1\cdots j_L m|U_L\cdots U_1|i_1\cdots i_L n\rangle|^2 \frac{e^{-\beta \sum_{k=1}^L\varepsilon_{i_k}}}{Z_B^L}p_{\rm ini}(n), \label{psm1} \end{equation} where $p_{\rm ini}(n)$ is the probability that the initial state of the system is $\ket{n}$, see Appendix~\ref{sec:appendix}. We now associate the stochastic thermodynamic quantities to these trajectories. The stochastic heat flow to the system $q_\gamma$ corresponds to the negative energy change of the bath, i.e., $q_\gamma=\sum_{k=1}^L(\varepsilon_{i_k}-\varepsilon_{j_k})$. According to the first law of stochastic thermodynamics~\cite{HPNJP2013}, the stochastic work is given by \begin{equation} w_\gamma=\Delta e_\gamma-q_\gamma, \label{w-gamma} \end{equation} where $\Delta e_\gamma=E_m-E_n$ is the stochastic energy change. These fluctuating quantities are studied through their distributions \begin{equation} p^{(L)}_w(x)=\sum_\gamma \delta(x-w_\gamma)P^{(L)}_\gamma,\quad p^{(L)}_{\Delta e}(x)=\sum_\gamma \delta(x-\Delta e_\gamma)P^{(L)}_\gamma,\quad p^{(L)}_q(x)=\sum_\gamma \delta(x-q_\gamma)P^{(L)}_\gamma, \label{p(w)} \end{equation} and, as for the averaged thermodynamic quantities, we need information on the state of the whole system to evaluate them. However, for maps with equilibrium, a stochastic trajectory is determined by the pair $\gamma=\{n,m\}$, see Appendix~\ref{sec:appendix}. Consequently these formulas simplify and become, $q_\gamma=E^0_m-E^0_n, w_\gamma=E_m-E_m^0-(E_n-E_n^0)$ with the distributions \begin{equation} \label{ecc:dist energ eq. map} p^{(L)}_{\Delta e}(x)=\sum_{n,m} \delta(x-[E_m-E_n])P^{(L)}_{n\to m}, \end{equation} \begin{equation} \label{ecc:dist trabajo eq. map} p^{(L)}_w(x)=\sum_{n,m} \delta(x-[(E_m-E_m^0)-(E_n-E_n^0)])P^{(L)}_{n\to m}, \end{equation} \begin{equation} \label{ecc:dist calor eq. 
map} p^{(L)}_q(x)=\sum_{n,m} \delta(x-[E_m^0-E_n^0])P^{(L)}_{n\to m}, \end{equation} and the trajectory probability \begin{equation} \label{transprob} P^{(L)}_{n\to m}=\bra{m}{\mathcal E}^L(\ket{n}\bra{n})\ket{m}p_{\rm ini}(n)=(T^L)_{m|n}\,p_{\rm ini}(n), \end{equation} in terms of the initial probability $p_{\rm ini}(n)$ and of the $L$ power of the stochastic matrix $T_{m|n}=\bra{m}{\mathcal E}(\ket{n}\bra{n})\ket{m}$. The averages $\int x p^{(L)}_{\Delta e}(x)dx,\int xp^{(L)}_w(x)dx,\int xp^{(L)}_q(x)dx$ reproduce Eqs.\eqref{Av.Energy}, \eqref{Eq.prop}, \eqref{Av.Work.Eq} with $\rho'_S={\mathcal E}^L(\rho_S)$ and $\rho_S=\sum_n p_{\rm ini}(n)\ket{n}\bra{n}$. \subsection{Fluctuations in the equilibrium state} \label{sec.eq.fluct} As noted before, all averaged thermodynamic quantities $\Delta E=\Delta S=\Sigma=W=Q=0$ vanish for a process in equilibrium. So, on average, the process $\omega_\beta(H_0)\to{\mathcal E}(\omega_\beta(H_0))=\omega_\beta(H_0)$ has no energy cost. However, if $H_0\neq H_S$, the agent is still active due to non-vanishing work fluctuations. For thermal maps $H_0=H_S$, and Eq.\eqref{ecc:dist trabajo eq. map} gives $p^{(L)}_w(x)=\delta(x)$. The external agent is truly passive. To analyze equilibrium fluctuations, we use Eqs.\eqref{ecc:dist energ eq. map}, \eqref{ecc:dist trabajo eq. map} and \eqref{ecc:dist calor eq. map} with $p_{\rm ini}(n)=\frac{e^{-\beta E_{n}^0}}{Z_0}$. \subsection{recharging process} \label{sec.rech.fluct} Since the recharging process starts from $\sigma_{\omega_\beta(H_0)}$, we take $p_{\rm ini}(n)=e^{-\beta E_{\pi_n}^0}/Z_0$, see Eq.\eqref{pasivo}, in the distributions Eqs.\eqref{ecc:dist energ eq. map}, \eqref{ecc:dist trabajo eq. map}, and \eqref{ecc:dist calor eq. map}. Since the charged state $\omega_\beta(H_0)$ is reached asymptotically, we take $L\to\infty$ to charge the battery fully. When $L$ is finite, we speak of partial recharging. 
However, in that case, we do not have a cyclic engine because the passive state associated with ${\mathcal E}^L(\sigma_{\omega_\beta(H_0)})$ is not $\sigma_{\omega_\beta(H_0)}$. In the limit $L\to \infty$ we have a well-defined cycle. Moreover, since ${\mathcal E}$ has a unique equilibrium state, we will find that $T$ is a regular stochastic matrix~\cite{Feller}, implying that $\lim_{L\to\infty}(T^L)_{m|n}=e^{-\beta E^0_m}/Z_0,\forall n$. Therefore, the limit in Eq.\eqref{transprob} \begin{equation} \label{Pcharge} P^{(\infty)}_{n\to m}=p_{\rm ini}(n)e^{-\beta E^0_m}/Z_0=e^{-\beta (E_{\pi_n}^0+E^0_m)}/Z_0^2, \end{equation} is independent of the map's details. Interestingly, the rate of convergence of $T^L$ to the equilibrium distribution depends on the parameters of the map $\mathcal E$. We discuss the fluctuations of a concatenated process $\mathcal E^L$ with finite $L$ later. The average of the stochastic energy change in the recharging process \begin{equation} \label{ergotropy2} \langle\Delta e_\gamma\rangle^{(\infty)}\equiv \sum_{n,m} (E_m-E_n)P^{(\infty)}_{n\to m}=\Tr[H_S(\omega_\beta(H_0)-\sigma_{\omega_\beta(H_0)})]={\mathcal W}(\omega_\beta(H_0)) \end{equation} is the ergotropy. The average stochastic work \begin{equation} \label{WR2} \langle w_\gamma\rangle^{(\infty)}\equiv \sum_{n,m} ((E_m-E_m^0)-(E_n-E_n^0))P^{(\infty)}_{n\to m}=\Tr[(H_S-H_0)(\omega_\beta(H_0)-\sigma_{\omega_\beta(H_0)})]=W_R \end{equation} is the recharging work. \subsection{extracting process} The extracting process is also fluctuating when we measure the battery's energy in the charged state and in the discharged state. We call $\kappa$ the stochastic trajectory in the ergotropy-extracting process and $\varpi_\kappa$ the stochastic extracted energy.
The probability $p_\kappa$ of $\kappa=(m',n)$ is the product of the transition probability from $\ket{m'}$ to $\ket{n}$ under the permutation $u$, $P^{\rm ext}_{m'\to n}=|\braket{n|u|m'}|^2=\delta_{\pi_n,m'}$, with the initial probability $e^{-\beta E_{m'}^0}/Z_0$, see Eq.\eqref{cargado}. The averaged extracted energy, \begin{equation} \langle{\varpi}_{\kappa}\rangle=\sum_\kappa \varpi_\kappa p_\kappa=\sum_{m',n}(E_{m'}-E_n)P^{\rm ext}_{m'\to n}\frac{e^{-\beta E_{m'}^0}}{Z_0}=\sum_{n}(E_{\pi_n}-E_n)\frac{e^{-\beta E_{\pi_n}^0}}{Z_0}={\mathcal W}(\omega_\beta(H_0)) \label{erg-av} \end{equation} is the ergotropy. Eq.\eqref{ergotropy2} and Eq.\eqref{erg-av} show the cycle's consistency, where the two processes, recharging ($\gamma$) and extracting ($\kappa$), connect the same states, $\omega_\beta(H_0)$ and $\sigma_{\omega_\beta(H_0)}$. \subsection{Fluctuating efficiency for the cycle} \label{sec.eff.fluct} In terms of Eq.\eqref{erg-av} and Eq.\eqref{WR2} we have the thermodynamic efficiency $\eta_{\rm th}=\frac{{\mathcal W}}{W_R}=\frac{\langle\varpi_\kappa\rangle}{\langle w_\gamma\rangle^{(\infty)}}$. As the thermodynamic efficiency is the ratio of the ergotropy to the recharging work, the fluctuating efficiency~\cite{DenzlerLutz} should be the ratio of their fluctuating equivalents. The fluctuating extracted energy is $\varpi_\kappa=E_{m'}-E_n$, and the fluctuating work is $w_\gamma=E_m-E_m^0-(E_n-E_n^0)$. Therefore, we define the fluctuating efficiency as \begin{equation} \label{flucteff} \eta_{\gamma\kappa}=\frac{\varpi_\kappa}{w_\gamma}=\frac{E_{m'}-E_n}{E_{m}-E_{m}^0-(E_n-E_n^0)}. \end{equation} Given the extracting trajectory $\kappa$, the probability of the recharging trajectory $\gamma$ is $P^{\rm ext}_{m'\to n}P^\infty_{n\to m}$.
Thus, the joint probability for the processes $\kappa$ and $\gamma$ is \[ p_{\gamma \kappa}=\frac{e^{-\beta E_{m'}^0}}{Z_0}P^{\rm ext}_{m'\to n}P^\infty_{n\to m}=\frac{e^{-\beta E_{m'}^0}}{Z_0}\delta_{\pi_n,m'}\frac{e^{-\beta E^0_m}}{Z_0}, \] and the distribution of the fluctuating efficiency is \begin{equation} \label{ecc:dist efficiency eq. map prima} p_\eta(x)=\sum_{\gamma,\kappa}\delta(x-\eta_{\gamma\kappa})p_{\gamma\kappa}= \sum_{n,m} \delta\left(x-\frac{E_{\pi_n}-E_n}{E_{m}-E_{m}^0-(E_n-E_n^0)}\right)\frac{e^{-\beta (E^0_m+E_{\pi_n}^0)}}{Z_0^2}. \end{equation} To simplify the notation we write this as \begin{equation} \label{etadistrinm} p_\eta(x) =\sum_{n,m} \delta\left(x-\eta_{nm}\right)P_{n\to m}, \end{equation} with \begin{equation} \label{flucteff2} \eta_{nm}=\frac{E_{\pi_n}-E_n}{E_{m}-E_{m}^0-(E_n-E_n^0)},\quad \text{and}\quad P_{n\to m}=\frac{e^{-\beta (E^0_m+E_{\pi_n}^0)}}{Z_0^2}. \end{equation} The probability $P_{n\to m}$ corresponds to Eq.\eqref{Pcharge}, and we omit the superscript. Trajectories with $w_\gamma=0$ have $\eta_{\gamma\kappa}=\infty$. Therefore, the average $\langle\eta_{\gamma\kappa}\rangle$ does not always exist, and if it does, $\eta_{\rm th}\neq \langle\eta_{\gamma\kappa}\rangle$, unless the stochastic work and efficiency are uncorrelated. In fact, $\langle \eta_{\gamma\kappa} w_\gamma\rangle=\langle \varpi_\kappa\rangle={\mathcal W}$. So only if $\langle \eta_{\gamma\kappa} w_\gamma\rangle =\langle \eta_{\gamma\kappa}\rangle W_R$ do we have $\langle\eta_{\gamma\kappa}\rangle=\eta_{\rm th}$. The thermodynamic and fluctuating efficiency can thus be very different. The following section discusses efficiency fluctuations for the cycle, heat and work fluctuations for the recharging process, and equilibrium fluctuations in two examples. \section{Examples} \label{sec:ejemplos} We illustrate our results in two simple examples. The first example is a single-qubit battery that we use to discuss equilibrium fluctuations (section~\ref{sec.eq.fluct}).
The second example is a two-qubit battery where we compute heat and work distributions in a partial recharging process (section~\ref{sec.rech.fluct}). In both, we compute the fluctuating efficiency distribution (section~\ref{sec.eff.fluct}). \subsection{Single-qubit battery} An interesting protocol, with $H_0=-H_S$, was discussed in~\cite{BarraBattery} for a system $S$ interacting with systems $B$, which are copies of $S$. The corresponding process ${\mathcal E}$ has the remarkable equilibrium state \[ \omega_\beta(-H_S)=\sum_{n=1}^N \frac{e^{\beta E_n}}{Z_+}\ket{n}\bra{n}, \] with $Z_+=\Tr[e^{+\beta H_S}]$, reached by a system interacting with copies of itself prepared in the Gibbs state $\omega_\beta(H_S)$. In this subsection, we consider the battery $S$ and the auxiliary systems $B$ to be identical qubits; i.e., the battery Hamiltonian is $H_S=(h/2) \sigma_S^z$, and the bath Hamiltonians are $H_B=(h/2) \sigma_B^z$, with $h>0$. Hereafter, $\sigma^x,\sigma^y$, and $\sigma^z$ are Pauli matrices. The coupling between the system and the bath qubit is \[ V=a(\sigma_S^+\sigma_B^++\sigma_S^-\sigma_B^-), \] with $\sigma^\pm=(\sigma^x\pm i \sigma^y)/2$, and is such that $[\sigma_B^z-\sigma_S^z, V]=0$, i.e., $H_0=-H_S$. In the basis defined by $\sigma^z\ket{\uparrow}=\ket{\uparrow}$ and $\sigma^z\ket{\downarrow}=-\ket{\downarrow}$, the eigenvalues and eigenvectors of $H_S$ and $H_0$ are \begin{align} E_2&=h/2, & E^0_2&=-h/2, & \ket{2}&=\ket{\uparrow}\\ E_1&=-h/2, & E^0_1&=h/2, & \ket{1}&=\ket{\downarrow} \end{align} and the ordering permutation is $(\pi_1,\pi_2)=(2,1)$.
Thus, on the above basis, the equilibrium state is \[ \omega_{\beta}(H_0)=\omega_{\beta}(-H_S) =\frac{e^{\beta \frac{h}{2}} }{Z}\ket{2}\bra{2}+\frac{e^{-\beta \frac{h}{2}}}{Z}\ket{1}\bra{1}, \] and the passive state for the system is \[ \sigma_{\omega_\beta(H_0)}=\omega_{\beta}(H_S)=\frac{e^{-\beta \frac{h}{2}} }{Z}\ket{2}\bra{2}+\frac{e^{\beta \frac{h}{2}}}{Z}\ket{1}\bra{1}, \] where $Z=Z_+=2\cosh(\beta h/2).$ The ergotropy of the battery in the equilibrium state $\omega_\beta(-H_S)$ is ${\mathcal W}=h\tanh(\beta h/2)$. From Eqs.\eqref{ergotropy2} and \eqref{WR2}, we see that the thermodynamic efficiency of the process is $\eta_{\rm th}=1/2,$ independent of the inverse temperature $\beta$. The recharging process in this single-qubit battery (1Q) is determined by the stochastic matrix (see Eq.\eqref{transprob}) \begin{equation} \label{Tqubitbattery} T_{1Q}=\left(\begin{array}{cc} 1-\frac{e^{\beta \frac{h}{2}}}{Z}g(a,h) & \frac{e^{-\beta\frac{h}{2}}}{Z}g(a,h)\\ \frac{e^{\beta\frac{h}{2}}}{Z}g(a,h) & 1-\frac{e^{-\beta \frac{h}{2}}}{Z}g(a,h) \end{array}\right) \end{equation} where $g(a,h)=\frac{a^2 \sin^2(\tau\sqrt{h^2+a^2}/\hbar)}{h^2+a^2}$ and $Z=e^{\beta \frac{h}{2}}+e^{-\beta \frac{h}{2}}$. It is a regular stochastic matrix if $g(a,h)\neq 0$. \subsubsection{Fluctuating efficiency} The fluctuating efficiency (see Eq.\eqref{flucteff2}) takes the values \[ \eta_{11}=\eta_{22}=\infty,\quad \eta_{12}=\eta_{21}=\frac{1}{2}. \] Its distribution Eq.\eqref{etadistrinm} is \[ p_\eta(x)=\delta(x-\infty)\mathrm{P}_\infty+\delta\left(x-\frac{1}{2}\right)\mathrm{P}_\frac{1}{2}, \] with \begin{equation} \label{peff1q} \mathrm{P}_\infty=P_{1\to 1}+P_{2\to 2}=\frac{2}{Z^2},\quad \mathrm{P}_\frac{1}{2}=P_{1\to2}+P_{2\to 1}=\frac{e^{\beta h}+e^{-\beta h}}{Z^2}. \end{equation} The explicit formulas at the right follow from Eq.\eqref{flucteff2}, which is valid if $g(a,h)\neq 0$ in $T_{1Q}$.
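The closed formulas in Eq.\eqref{peff1q} and the regularity of $T_{1Q}$ are easy to cross-check numerically. The following minimal sketch (the values of $\beta$, $h$, $a$ and $\tau$ are arbitrary illustrative choices, with $\hbar=1$) verifies that every column of $T_{1Q}^L$ converges to $e^{-\beta E^0_m}/Z_0$, and that $\mathrm{P}_\infty$ and $\mathrm{P}_{1/2}$ add up to one:

```python
import numpy as np

# Arbitrary test parameters (hbar = 1); any g(a, h) != 0 makes T_{1Q} regular
beta, h, a, tau = 1.0, 1.0, 0.7, 1.3
Z = 2.0 * np.cosh(beta * h / 2)
g = a**2 * np.sin(tau * np.sqrt(h**2 + a**2))**2 / (h**2 + a**2)

# Column-stochastic transition matrix, T[m, n] = T_{m|n}, basis (|1>, |2>)
T = np.array([
    [1 - np.exp(beta * h / 2) / Z * g,     np.exp(-beta * h / 2) / Z * g],
    [    np.exp(beta * h / 2) / Z * g, 1 - np.exp(-beta * h / 2) / Z * g],
])
assert np.allclose(T.sum(axis=0), 1.0)              # columns sum to one

# Regularity: each column of T^L converges to e^{-beta E^0_m}/Z_0,
# with (E^0_1, E^0_2) = (h/2, -h/2)
p_eq = np.array([np.exp(-beta * h / 2), np.exp(beta * h / 2)]) / Z
Tinf = np.linalg.matrix_power(T, 500)
assert np.allclose(Tinf, np.outer(p_eq, [1.0, 1.0]), atol=1e-10)

# P_{n->m} = e^{-beta(E^0_m + E^0_{pi_n})}/Z_0^2 with (pi_1, pi_2) = (2, 1)
P = np.outer(p_eq, p_eq[::-1])                      # P[m-1, n-1]
P_inf = P[0, 0] + P[1, 1]                           # trajectories with eta = infinity
P_half = P[0, 1] + P[1, 0]                          # trajectories with eta = 1/2
assert np.isclose(P_inf, 2 / Z**2)
assert np.isclose(P_half, (np.exp(beta * h) + np.exp(-beta * h)) / Z**2)
assert np.isclose(P_inf + P_half, 1.0)
```

Any choice with $g(a,h)\neq 0$ gives the same asymptotics; only the convergence rate of $T^L$ changes.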
\begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{eficiencias1qubit.pdf} \includegraphics[width=0.4\textwidth]{probnm1qubit.pdf} \caption{For the 1-qubit battery: (a) plots of $\mathrm{P}_\eta$ (Eq.\eqref{peff1q}) as a function of $\beta h$; (b) plots of $P_{n\to m}$ given by Eq.\eqref{flucteff2}.} \label{figura qubit} \end{figure} In Figure~\ref{figura qubit}a, we depict the probabilities $\mathrm{P}_\eta$ as functions of $\beta h$ and see that, for $\beta h\gg1$, the fluctuating efficiency equals the thermodynamic efficiency $1/2$ with probability 1 because, as we see in Figure~\ref{figura qubit}b, $P_{1\to 2}\to 1$, reflecting the charging character of the process. \subsubsection{Equilibrium fluctuations} Let us analyze the fluctuations when maintaining the charged state, i.e., those of the process $\omega_\beta(H_0)\to{\mathcal E}^L(\omega_\beta(H_0))=\omega_\beta(H_0)$, see section~\ref{sec.eq.fluct}. As we can verify in the examples above, and as shown in~\cite{StochPRE}, the transition matrices $T$ for maps with equilibrium satisfy the detailed balance condition $T_{m|n}e^{-\beta E_n^0}=T_{n|m}e^{-\beta E_m^0}$. From this, it is simple to show that $P^{(L)}_{n\to m}=P^{(L)}_{m\to n}$ with $p_{\rm ini}(n)=e^{-\beta E_n^0}/Z_0$ in Eq.\eqref{transprob}. We are interested in distinguishing fluctuations in an active equilibrium state from fluctuations in a Gibbs equilibrium state. The main difference is that the probability distribution of the equilibrium work fluctuations is $p_w(x)\neq\delta(x)$ for the former, reflecting an active agent, and $p_w(x)=\delta(x)$ for the latter, reflecting a passive agent. To investigate other differences, we consider our charging map ${\mathcal E}$ and a thermal map ${\mathcal E}^{\rm Thm}$ for a qubit. The map ${\mathcal E}^{\rm Thm}$ is obtained by coupling the qubit to an auxiliary thermal qubit with $V=a(\sigma_S^+\sigma_B^-+\sigma_S^-\sigma_B^+)$, and tracing out the auxiliary system.
The resulting map is thermal (i.e., a map with the Gibbs equilibrium state), and the transition matrix for this process is \[ T^{\rm Thm}=\left(\begin{array}{cc} 1-\frac{e^{-\beta \frac{h}{2}}}{Z}g(a,0) & \frac{e^{\beta\frac{h}{2}}}{Z}g(a,0)\\ \frac{e^{-\beta\frac{h}{2}}}{Z}g(a,0) & 1-\frac{e^{\beta \frac{h}{2}}}{Z}g(a,0) \end{array}\right) \] where $g(a,0)=\sin^2(\tau a/\hbar)$ and $Z=e^{\beta \frac{h}{2}}+e^{-\beta \frac{h}{2}}.$ $T^{\rm Thm}$ is a regular stochastic matrix if $g(a,0)\neq 0$. The most crucial difference between $T^{\rm Thm}$ and $T_{1Q}$ in Eq.\eqref{Tqubitbattery} is the position of the factors $e^{\pm \beta h/2}$. For the charging map, one can show $P^{(L)}_{2\to 2}>P^{(L)}_{1\to 1}$, reflecting the higher population of the excited state in the active equilibrium. Instead, for the thermal map $P^{(L){\rm Thm}}_{1\to 1}>P^{(L){\rm Thm}}_{2\to 2}$, reflecting the higher population of the ground state in Gibbs equilibrium. On the other hand, energy fluctuations due to $1\leftrightarrow 2$ transitions are qualitatively similar if $g(a,h)\approx g(a,0)$ for processes with finite $L$, but are indistinguishable for $L\to\infty$. Indeed, for $L\to\infty$ we have \[ P^{(\infty){\rm Thm}}_{1\to 2}=P^{(\infty){\rm Thm}}_{2\to 1}=\frac{1}{Z^2},\quad P^{(\infty){\rm Thm}}_{1\to 1}=\frac{e^{\beta h}}{Z^2},\quad P^{(\infty){\rm Thm}}_{2\to 2}=\frac{e^{-\beta h}}{Z^2} \] and for the charging map \[ P^{(\infty)}_{2\to 1}=P^{(\infty)}_{1\to 2}=\frac{1}{Z^2},\quad P^{(\infty)}_{2\to 2}=\frac{e^{\beta h}}{Z^2},\quad P^{(\infty)}_{1\to 1}=\frac{e^{-\beta h}}{Z^2}. \] \subsection{Two-qubit battery} We consider a two-qubit battery with Hamiltonian~\cite{BarraBattery} \[ H_S=\frac{h}{2}\left(\sigma^z_1+\sigma^z_2\right)+J\left(\sigma^x_1\sigma^x_2+\sigma^y_1\sigma^y_2\right), \] coupled with \[ V=J'(\sigma_{B}^x\sigma_1^x+\sigma_{B}^y\sigma_1^y), \] to auxiliary systems with Hamiltonian $H_B=\frac{h}{2}\sigma_{B}^z$, in the thermal state.
The corresponding map ${\mathcal E}$ has the equilibrium state $\omega_{\beta}(H_0)$ with $H_0=\frac{h}{2}\left(\sigma^z_1+\sigma^z_2\right)$. The eigenvalues and eigenvectors of $H_S$ and $H_0$ in the basis defined by $\sigma^z\ket{\uparrow}=\ket{\uparrow}$ and $\sigma^z\ket{\downarrow}=-\ket{\downarrow}$ are \begin{align} E_3&=h, & E^0_3&=h, & \ket{3}&=\ket{\uparrow\uparrow},\\ E_4&=2J, & E^0_4&=0, & \ket{4}&=(\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow})/\sqrt{2},\\ E_1&=-2J, & E^0_1&=0, & \ket{1}&=(\ket{\uparrow\downarrow}-\ket{\downarrow\uparrow})/\sqrt{2}, \\ E_2&=-h, & E^0_2&=-h, & \ket{2}&=\ket{\downarrow\downarrow}. \end{align} We take $2J>h>0$ such that $E_{i+1}>E_i$. The permutation that orders $E^0_{\pi_{i+1}}\geq E^0_{\pi_i}$ is $(\pi_1,\pi_2,\pi_3,\pi_4)=(2,1,4,3)$. Thus on the above basis, the equilibrium state is \[ \omega_{\beta}(H_0)=\frac{e^{-\beta h} }{Z_0}\ket{3}\bra{3}+\frac{1}{Z_0}(\ket{1}\bra{1}+\ket{4}\bra{4})+\frac{e^{\beta h} }{Z_0}\ket{2}\bra{2}, \] and the passive state for the system is \[ \sigma_{\omega_\beta(H_0)}=\frac{e^{\beta h} }{Z_0}\ket{1}\bra{1}+\frac{1}{Z_0}(\ket{2}\bra{2}+\ket{3}\bra{3})+\frac{e^{-\beta h} }{Z_0}\ket{4}\bra{4}, \] where $Z_0=2+2\cosh(\beta h).$ The ergotropy of the equilibrium state ${\mathcal W}=\Tr[H_S(\omega_{\beta}(H_0)-\sigma_{\omega_\beta(H_0)})]$ is \[ {\mathcal W}=(2J-h)\frac{\sinh \beta h}{1+\cosh \beta h}. \] The work done in the charging process $\sigma_{\omega_\beta(H_0)}\to \omega_\beta(H_0)$ is \[ W_R=2J \frac{\sinh \beta h}{1+\cosh \beta h}. \] We see that the thermodynamic efficiency is $\eta_{\rm th} = \mathcal W/W_R=1-\frac{h}{2J}$ independently of the inverse temperature $\beta$. 
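The ergotropy, the recharging work and the thermodynamic efficiency quoted above can be checked directly from the spectra and from the populations of $\omega_\beta(H_0)$ and $\sigma_{\omega_\beta(H_0)}$; a minimal numerical sketch (with arbitrary parameters satisfying $2J>h>0$):

```python
import numpy as np

# Arbitrary parameters with 2J > h > 0
beta, h, J = 0.7, 1.0, 1.3
E = np.array([-2 * J, -h, h, 2 * J])      # eigenvalues E_1..E_4 of H_S (increasing)
E0 = np.array([0.0, -h, h, 0.0])          # matching eigenvalues of H_0
Z0 = 2 + 2 * np.cosh(beta * h)

p_eq = np.exp(-beta * E0) / Z0            # populations of omega_beta(H_0) on |1>..|4>
pi = np.array([1, 0, 3, 2])               # (pi_1..pi_4) = (2,1,4,3), zero-based
p_pas = p_eq[pi]                          # populations of the passive state

W = np.dot(E, p_eq - p_pas)               # ergotropy
WR = np.dot(E - E0, p_eq - p_pas)         # recharging work

assert np.isclose(W, (2 * J - h) * np.sinh(beta * h) / (1 + np.cosh(beta * h)))
assert np.isclose(WR, 2 * J * np.sinh(beta * h) / (1 + np.cosh(beta * h)))
assert np.isclose(W / WR, 1 - h / (2 * J))   # thermodynamic efficiency
```

The last assertion reproduces $\eta_{\rm th}=1-h/(2J)$ for any $\beta$, as stated above.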
The recharging process in this two-qubit battery (2Q) is determined by the stochastic matrix (see Eq.\eqref{transprob}) \begin{equation} \label{T4x4} T_{2Q}=\frac{1}{(J^2+J'^2)^2}\left( \begin{array}{cccc} \Phi^2 & \frac{2}{(1+e^{\beta h})}\Phi \Psi & \frac{2e^{\beta h}}{(1+e^{\beta h})}\Phi \Psi & \Psi^2\\ \frac{2e^{\beta h}}{(1+e^{\beta h})}\Phi \Psi & \frac{e^{\beta h}(J^2+J'^2)^2+\Delta}{(1+e^{\beta h})} & 0 & \frac{2e^{\beta h}}{(1+e^{\beta h})}\Phi \Psi\\ \frac{2}{(1+e^{\beta h})}\Phi \Psi& 0 & \frac{(J^2+J'^2)^2+e^{\beta h}\Delta}{(1+e^{\beta h})} & \frac{2}{(1+e^{\beta h})}\Phi \Psi\\ \Psi^2 & \frac{2}{(1+e^{\beta h})}\Phi \Psi & \frac{2e^{\beta h}}{(1+e^{\beta h})}\Phi \Psi & \Phi^2 \end{array}\right), \end{equation} with \[ \Phi=J^2+J'^2\cos^2(\frac{\tau}{\hbar}\sqrt{J^2+J'^2}), \quad \Psi=J'^2\sin^2(\frac{\tau}{\hbar}\sqrt{J^2+J'^2}), \quad \Delta=(\Phi-\Psi)^2, \] which is a regular stochastic matrix except at points with $\Psi=0$ or $\Phi=0$, as one can check by computing $T^2$. \subsubsection{Fluctuating efficiency} For the fluctuating efficiency Eq.\eqref{flucteff2} we have \begin{align} \eta_{12}&=\eta_{13}=\eta_{21}=\eta_{34}=\eta_{42}=\eta_{43}=1-\frac{h}{2J}\\ \eta_{14}&=\eta_{41}=\frac{1}{2}(1-\frac{h}{2J})\\ \eta_{23}&=\eta_{32}=\infty\\ \eta_{24}&=\eta_{31}=-(1-\frac{h}{2J}) \end{align} and all $\eta_{nn}=\infty$.
Its distribution follows from Eq.\eqref{etadistrinm} and is \[ p_\eta(x)=\delta(x-\infty)\mathrm{P}_\infty+\delta\left(x-1+\frac{h}{2J}\right)\mathrm{P}_{(1-\frac{h}{2J})}+\delta\left(x+1-\frac{h}{2J}\right)\mathrm{P}_{-(1-\frac{h}{2J})}+\delta\left(x-\frac{1}{2}+\frac{h}{4J}\right)\mathrm{P}_{(1/2)(1-\frac{h}{2J})}, \] with \begin{align} &\mathrm{P}_\infty=P_{3\to2}+P_{2\to3}+\sum_n P_{n\to n}=3\frac{(e^{\beta h}+e^{-\beta h})}{Z_0^2},\label{28}\\ &\mathrm{P}_{(1-\frac{h}{2J})}=P_{1\to2}+P_{1\to3}+P_{2\to1}+P_{3\to4}+P_{4\to2}+P_{4\to3}=\frac{2+(e^{\beta h}+e^{-\beta h})^2}{Z_0^2},\label{29}\\ &\mathrm{P}_{-(1-\frac{h}{2J})}=P_{3\to1}+P_{2\to4}=\frac{2}{Z_0^2},\label{30}\\ &\mathrm{P}_{(1/2)(1-\frac{h}{2J})}=P_{1\to4}+P_{4\to1}=\frac{(e^{-\beta h}+e^{\beta h})}{Z_0^2}\label{31}. \end{align} The explicit formulas at the right are valid for parameters $\tau,J$ and $J'$ for which $T_{2Q}$ is regular. In Fig.~\ref{fig:Thermo}a, we plot the probabilities $\mathrm{P}_\eta$ in Eqs.~\eqref{28}, \eqref{29}, \eqref{30}, and \eqref{31} as a function of $\beta h$. We see that for small $\beta h$, where many transitions are assisted by heat, the average efficiency does not exist. On the other hand, when $\beta h\gg 1$, the efficiency goes to the thermodynamic efficiency with probability one because the work becomes deterministic. In Fig.~\ref{fig:Thermo}b, we see that $P_{1\to 2}$ goes to one in that limit. The next in importance are $P_{1\to 4}$, associated with the transition with the largest charge but reduced efficiency, and $P_{3\to 2}$, which contributes to $\eta=\infty$.
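As a consistency check, the probabilities in Eqs.~\eqref{28}--\eqref{31} follow from $P_{n\to m}$ in Eq.\eqref{flucteff2} alone; the sketch below (arbitrary value of $\beta h$) reproduces the closed forms and verifies that they are normalized:

```python
import numpy as np

beta, h = 0.9, 1.0                         # arbitrary parameters (any beta h > 0)
E0 = np.array([0.0, -h, h, 0.0])           # eigenvalues E^0_1..E^0_4
pi = np.array([1, 0, 3, 2])                # (pi_1..pi_4) = (2,1,4,3), zero-based
Z0 = 2 + 2 * np.cosh(beta * h)

# P[n, m] = exp(-beta (E^0_m + E^0_{pi_n})) / Z0^2
P = np.exp(-beta * (E0[np.newaxis, :] + E0[pi][:, np.newaxis])) / Z0**2

c = 2 * np.cosh(beta * h)                  # shorthand for e^{beta h} + e^{-beta h}
P_inf = P[2, 1] + P[1, 2] + np.trace(P)    # eta = infinity
P_pos = P[0, 1] + P[0, 2] + P[1, 0] + P[2, 3] + P[3, 1] + P[3, 2]
P_neg = P[2, 0] + P[1, 3]
P_half = P[0, 3] + P[3, 0]

assert np.isclose(P_inf, 3 * c / Z0**2)
assert np.isclose(P_pos, (2 + c**2) / Z0**2)
assert np.isclose(P_neg, 2 / Z0**2)
assert np.isclose(P_half, c / Z0**2)
assert np.isclose(P_inf + P_pos + P_neg + P_half, 1.0)   # normalization
```

The normalization follows analytically as well, since $3c+(2+c^2)+2+c=(c+2)^2=Z_0^2$ with $c=e^{\beta h}+e^{-\beta h}$.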
\begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{eficiencias.pdf} \includegraphics[width=0.4\textwidth]{probnm.pdf} \caption{For the 2-qubit battery: (a) plots of $\mathrm{P}_\eta$ as a function of $\beta h$; (b) plots of $P_{n\to m}$ given by Eq.\eqref{flucteff2}.} \label{fig:Thermo} \end{figure} \subsubsection{Heat and work fluctuations in the partial recharging process} Here, we consider the process ${\mathcal E}^L$ starting in the state $\sigma_{\omega_\beta(H_0)}$ and evaluate the heat and work distributions. Hence, we consider Eq.\eqref{ecc:dist trabajo eq. map} and Eq.\eqref{ecc:dist calor eq. map} with $P_{n \to m}^{(L)}=(T^L)_{m|n}\frac{e^{-\beta E_{\pi_n}^0}}{Z_0}$, with the permutation $\pi$ ordering the eigenvalues of $H_0$ increasingly. For the two-qubit battery we obtain \begin{align} p_w^{(L)}(x)&=\delta(x)A^{(L)}_0+\delta(x-4J)A^{(L)}_1+\delta(x-2J)A^{(L)}_3+\delta(x+4J)A^{(L)}_2+\delta(x+2J)A^{(L)}_4,\\ p_q^{(L)}(x)&=\delta(x)B^{(L)}_0+\delta(x-h)B^{(L)}_4+\delta(x-2h)B^{(L)}_2+\delta(x+h)B^{(L)}_3+\delta(x+2h)B^{(L)}_1 \end{align} with \begin{align} A^{(L)}_0&=P^{(L)}_{2\to 3}+P^{(L)}_{3\to 2}+\sum_n P^{(L)}_{n\to n},& B^{(L)}_0&=P^{(L)}_{1\to 4}+P^{(L)}_{4\to 1}+\sum_n P^{(L)}_{n\to n},\\ A^{(L)}_1&=P^{(L)}_{1\to 4},& B^{(L)}_1&=P^{(L)}_{3\to 2}, \\ A^{(L)}_2&=P^{(L)}_{4\to 1},& B^{(L)}_2&=P^{(L)}_{2\to 3},\\ A^{(L)}_3&=P^{(L)}_{1\to 2}+P^{(L)}_{1\to 3}+P^{(L)}_{2\to 4}+P^{(L)}_{3\to 4},& B^{(L)}_3&=P^{(L)}_{1\to 2}+P^{(L)}_{3\to 1}+P^{(L)}_{4\to 2}+P^{(L)}_{3\to 4},\\ A^{(L)}_4&=P^{(L)}_{2\to 1}+P^{(L)}_{3\to 1}+P^{(L)}_{4\to 2}+P^{(L)}_{4\to 3},& B^{(L)}_4&=P^{(L)}_{2\to 1}+P^{(L)}_{1\to 3}+P^{(L)}_{2\to 4}+P^{(L)}_{4\to 3} \end{align} where $A_i^{(L)}\neq B_i^{(L)}$ for finite $L$ but $A_i^{(\infty)}= B_i^{(\infty)}$ with \[ A_0^{(\infty)}= \frac{6 \cosh\beta h}{Z_0^2},\quad A_1^{(\infty)}=\frac{e^{\beta h}}{Z_0^2},\quad A_2^{(\infty)}=\frac{e^{-\beta h}}{Z_0^2},\quad A_3^{(\infty)}=\frac{e^{2\beta h}+3}{Z_0^2}, \quad
A_4^{(\infty)}=\frac{3+e^{-2\beta h}}{Z_0^2}. \] This means that the average work $W^{(L)}$ and the average heat $Q^{(L)}$, \[ W^{(L)}= 2J(A^{(L)}_3-A^{(L)}_4)+4J(A^{(L)}_1-A^{(L)}_2)\xrightarrow[L\to\infty]{}\frac{2J\sinh\beta h}{1+\cosh\beta h} \] \[ Q^{(L)}=h(B^{(L)}_4-B^{(L)}_3)+2h(B^{(L)}_2-B^{(L)}_1)\xrightarrow[L\to\infty]{}\frac{-h \sinh\beta h}{1+\cosh\beta h}, \] become proportional when $L\to\infty$. Since Markov chains converge exponentially to the stationary state, it is unnecessary to consider a large $L$ to observe the asymptotic distribution. However, since the convergence rate depends on the map's parameters, we see deviations from it near the points where $\Phi=0$ or $\Psi=0$ in Eq.\eqref{T4x4}. To illustrate this, we plot in Figure~\ref{fig:11} the probabilities $A^{(L)}_0,B^{(L)}_0,A^{(L)}_2$ and $B^{(L)}_2$ for various values of $L$ and varying a map parameter. \begin{figure}[H] \centering \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{figAB2.pdf}& \includegraphics[width=0.3\textwidth]{figAB20.pdf}& \includegraphics[width=0.3\textwidth]{figABteo.pdf}\\ \includegraphics[width=0.3\textwidth]{figpAB2.pdf} & \includegraphics[width=0.3\textwidth]{figpAB20.pdf}& \includegraphics[width=0.3\textwidth]{figpABteo.pdf}\\ \end{tabular} \caption{ Plots of the probabilities $A^{(L)}_i$ and $B^{(L)}_i$, with $L=2$ at the left (a and d) and $L=20$ at the center (b and e), with $i=0$ at the top (a and b) and $i=2$ at the bottom (d and e). At the right (c and f) we superpose the analytical result $A^{(\infty)}_i$ and $B^{(\infty)}_i$ to the data at the center for $L=20$. For the numerical computation we take $\beta=\tau/\hbar=1,J=J'=x$ and $h=0.6 x$. } \label{fig:11} \end{figure} \section{Discussion} \label{secCONC} We have studied stochastic fluctuations in repeated interaction processes subjected to the two-point energy measurement scheme.
Because the map ${\mathcal E}$ has an equilibrium state, all quantities are expressed in terms of system properties, simplifying their study because one does not need to measure the environment. We have shown that the equilibrium distribution of the map dominates the distributions, except at particular points in the parameter space of the map, where its details become essential. Near these zones, the convergence rate towards the asymptotic value is low, requiring larger values of $L$ to reach it. The quantum aspect of the system is relevant near these zones since the Planck constant appears in the parameters that set the convergence rate to the stationary state. We have applied these results to study active equilibrium fluctuations, fluctuations in the charging process of a quantum battery, and efficiency fluctuations of the cycle charging and extracting energy for the battery in two examples. The fluctuating efficiency converges to the thermodynamic efficiency of these examples in the low-temperature limit, where work fluctuations are negligible. On the other hand, at large temperatures, where heat assists many transitions, the efficiency may become infinite, preventing the existence of the average. For future research, it would be interesting to extend the results obtained here for the single-cycle efficiency to the case of an arbitrary number of cycles. As this number increases, universal statistical behaviors have been shown to appear in other machines~\cite{ef1,ef2,ef11}. \section*{Acknowledgments} F.B. gratefully acknowledges the financial support of FONDECYT grant 1191441 and the Millennium Nucleus ``Physics of active matter'' of ANID (Chile). \section*{Appendices} \begin{appendices} \section{Distributions for maps with equilibrium} \label{sec:appendix} Let us justify Eq.\eqref{psm1} and Eq.~\eqref{ecc:dist calor eq. map}. Eq.~\eqref{ecc:dist energ eq. map} and Eq.~\eqref{ecc:dist trabajo eq. map} follow from the same argument.
We consider that the system $S$ and all the copies of system $B$ start uncorrelated in a product state. We measure the energy of every subsystem and project the state to $\ket{i_1\cdots i_L n}$ with probability $\frac{e^{-\beta \sum_{k=1}^L\varepsilon_{i_k}}}{Z_B^L}p_{\rm ini}(n)$ because the copies of $B$ are in the Gibbs state. Then, the full system evolves unitarily by composing the unitary evolutions in which, at each step, only the system $S$ and one copy of $B$ interact. This is represented by the product $U_L\cdots U_1$, and the global state is $U_L\cdots U_1\ket{i_1\cdots i_L n}$. Then we measure the energy of $S$ and of each copy of $B$. According to the Born rule, after the measurement the total system is in the state $\ket{j_1\cdots j_L n_{a_L}}$ with probability \begin{equation} P_\gamma^{(L)}=|\langle j_1\cdots j_L n_{a_L}|U_L\cdots U_1|i_1\cdots i_L n_{a_0}\rangle|^2 \frac{e^{-\beta \sum_{k=1}^L\varepsilon_{i_k}}}{Z_B^L}p_{\rm ini}(n_{a_0}). \end{equation} More details are found in~\cite{StochPRE}. We use that result to derive Eq.~\eqref{ecc:dist calor eq. map} and, by extension, all other distributions for maps with equilibrium. Consider that \[ \langle j_1\cdots j_L n_{a_L}|U_L\cdots U_1|i_1\cdots i_Ln_{a_0}\rangle= \sum_{a_1a_2\ldots a_{L-1}}\langle n_{a_L}j_L|U_L|n_{a_{L-1}}i_L\rangle\cdots \langle n_{a_2} j_2|U_2|n_{a_1}i_2\rangle\langle n_{a_1} j_1|U_1|n_{a_0}i_1\rangle. \] Because $[H_0+H_B,U_k]=0$, the generic transition amplitude $\langle n_{a_k}j_{k}|U_k|n_{a_{k-1}}i_{k}\rangle=0$ unless $E^0_{a_k}+\varepsilon_{j_k}=E^0_{a_{k-1}}+\varepsilon_{i_k}$. Thus, in every trajectory $\gamma$ with non-vanishing probability we have \[ q_\gamma=\sum_k (\varepsilon_{i_k}-\varepsilon_{j_k})=\sum_k(E^0_{a_k}-E^0_{a_{k-1}})=E^0_{a_L}-E^0_{a_0}.
\] Hence \[ p_q^{(L)}(x)=\sum_\gamma\delta(x-q_\gamma)P^{(L)}_\gamma=\sum_\gamma\delta(x-(E^0_{a_L}-E^0_{a_0}))P^{(L)}_\gamma=\sum_{a_L,a_0}\delta(x-(E^0_{a_L}-E^0_{a_0}))\sum_{\gamma:a_L,a_0}P^{(L)}_\gamma, \] where, in the last sum, we sum over all trajectories $\gamma$ starting at $n_{a_0}$ and ending at $n_{a_L}$. This corresponds to taking the trace over all systems $B$ that interacted with $S$, and thus $\sum_{\gamma:a_L,a_0}P^{(L)}_\gamma=\bra{n_{a_L}}{\mathcal E}^L(\ket{n_{a_0}}\bra{n_{a_0}})\ket{n_{a_L}}p_{\rm ini}(n_{a_0})$. \end{appendices}
\section{Small $x$ Resummation} \subsection{Motivation} Current and forthcoming particle collider experiments involve very high energies, such that the momentum fractions $x$ of initial state partons are extremely small. The splitting functions that govern the evolution of parton densities $f_i(x,Q^2)$ with momentum scale $Q^2$, together with the coefficients that relate these partons to proton structure functions, are unstable at low Bjorken $x$ values due to terms behaving like $x^{-1}\alpha_S^n\log^m(1/x)$ where $n\geq m+1$. Although the standard DGLAP theory (where the splitting and coefficient functions are considered at a fixed order in $\alpha_S$) works well in parton fits, there is some evidence that a resummation of small $x$ logarithms is necessary. Previous work has shown that an LL analysis fails to describe data well. One resums small $x$ logarithms in the gluon density by solving the \emph{BFKL equation}~\cite{BFKL}, an integral equation for the unintegrated gluon 4-point function. One then relates this gluon to structure functions using the $k_T$ factorisation formalism~\cite{CollinskT,CatanikT} to obtain the resummed splitting and coefficient functions. \subsection{Solution of the BFKL equation} Introducing the double Mellin transformed unintegrated gluon density: \begin{equation} f(\gamma,N)=\int_0^\infty dk^2\,(k^2)^{-\gamma-1}\int_0^1 dx\, x^N f(x,k^2), \label{Mellin} \end{equation} the NLL BFKL equation in $(N,\gamma)$ space is a second-order differential equation in $\gamma$: \begin{align*} \frac{d^2f(\gamma,N)}{d\gamma^2}&=\frac{d^2f_I(\gamma,Q_0^2)}{d\gamma^2}-\frac{1}{\bar{\beta}_0 N}\frac{d(\chi_0(\gamma)f(\gamma,N))}{d\gamma}\notag\\ &+\frac{\pi}{3\bar{\beta}_0^2 N}\chi_1(\gamma)f(\gamma,N), \end{align*} with $\bar{\beta}_0=3/(\pi\beta_0)$.
The derivatives in $\gamma$ arise from the use of the LO running coupling $\alpha_S(k^2)=1/(\beta_0\log{k^2/\Lambda^2})$ in momentum space, and $\chi_n(\gamma)$ is the Mellin transform of the $n^{\text{th}}$-order BFKL kernel. One may solve this to give: \begin{equation} f(N,\gamma)=\exp\left(-\frac{X_1(\gamma)}{\bar{\beta}_0 N}\right)\int_\gamma^\infty A(\tilde{\gamma})\exp\left(\frac{X_1(\tilde{\gamma})}{\bar{\beta}_0 N}\right)d\tilde{\gamma} \label{sol} \end{equation} for some $A(\tilde{\gamma})$ and $X_1(\tilde{\gamma})$. One would ideally like to factorise the perturbative from the non-perturbative physics to make contact with the collinear factorisation framework. This can be achieved (up to power-suppressed corrections) by shifting the lower limit of the integral in equation (\ref{sol}) from $\gamma$ to $0$. Then one finds for the integrated gluon: \begin{equation} {\cal G}(N,t)={\cal G}_E(N,t){\cal G}_I(Q_0^2,N), \end{equation} where the perturbative piece is: \begin{equation} {\cal G}_E^1(N,t)=\frac{1}{2\pi\imath}\int_{1/2-\imath\infty}^{1/2+\imath\infty}\frac{f^{\beta_0}}{\gamma}\exp\left[\gamma t-X_1(\gamma,N)/(\bar{\beta}_0 N)\right]d\gamma, \end{equation} where $X_1$ can be derived from $\chi_0(\gamma)$ and $\chi_1(\gamma)$, and $f^{\beta_0}$ is a known function of $\gamma$. Structure functions have a similar form: \begin{equation} {\cal F}_E^1(N,t)=\frac{1}{2\pi\imath}\int_{1/2-\imath\infty}^{1/2+\imath\infty}\frac{h(\gamma,N)f^{\beta_0}}{\gamma}\exp\left[\gamma t-X_1(\gamma,N)/(\bar{\beta}_0 N)\right]d\gamma, \end{equation} where $h(\gamma,N)$ is a NLL order impact factor coupling the virtual photon with the BFKL gluon. If all impact factors are known, one can derive all necessary splitting and coefficient functions in double Mellin space (within a particular factorisation scheme) by taking ratios of the above quantities.
The non-perturbative dependence then cancels, and one obtains results in momentum and $x$ space by performing the inverse Mellin integrals either numerically or analytically. The exact NLL impact factors are not in fact known, but the LL results supplemented with the correct kinematic behaviour of the gluon have been calculated~\cite{Peschanski,WPT}. We have shown that one expects them to approximate well the missing NLL information in the true impact factors~\cite{WT1}. \\ Consistent implementation of small $x$ resummations in the massive sector requires the definition of a variable flavour number scheme that allows the massive impact factors to be disentangled in terms of heavy coefficient functions and matrix elements. We have devised such a scheme, the DIS($\chi$) scheme~\cite{WT2}. With resummations in both the massive and massless sectors, one has everything necessary to carry out a global fit to DIS and related data. First, the resummed splitting and coefficient functions are combined with the NLO DGLAP results using the prescription: \begin{equation*} P^{tot.}=P^{NLL}+P^{NLO}-\left[P^{NLL(0)}+P^{NLL(1)}\right], \end{equation*} where the subtractions remove the double counted terms, namely the LO and NLO (in $\alpha_S$) parts of the resummed results. Then the resulting improved splitting and coefficient functions interpolate between the resummed results at low $x$, and the conventional DGLAP results at high $x$. \section{Results} The resummed splitting functions $P_{+}$ ($\simeq P_{gg}+4/9 P_{qg}$ at small $x$) and $P_{qg}$ are shown in figure \ref{pgg}. One sees that the LL BFKL results are much more divergent than the standard NLO results, which are known to describe data well. The addition of the running coupling to the LL BFKL equation suppresses this divergence, but it is still unacceptable. Inclusion of the NLL BFKL kernel, however, leads to a significant dip of the splitting functions below the NLO results. 
This dip is also observed in other resummation approaches~\cite{ABF,CCSS} and has an important consequence in the global fit in that it resolves the tension between the Tevatron jet data (which favour a larger high $x$ gluon) and the H1 and ZEUS data (which prefer a larger low $x$ gluon). By momentum conservation, one cannot increase the gluon at both low and high $x$ in a standard NLO DGLAP fit. This is possible in the resummed approach, due to the dip in the splitting functions. \\ \begin{wrapfigure}{}{0.6\columnwidth} \centerline{\includegraphics[width=0.6\columnwidth]{white_chris.fig1.ps}} \caption{Splitting functions in the DIS scheme for $n_f=4$, $t=\log(Q^2/\Lambda^2)=6$: NLL+NLO (solid); LL with running coupling + LO (dashed); LL + LO (dot-dashed); NLO (dotted).}\label{pgg} \end{wrapfigure} Indeed, the gluon distribution at the parton input scale of $Q_0^2=1~\text{GeV}^2$ is positive definite over the entire $x$ range. This is in contrast to a NLO fit, where the gluon distribution is negative at small $x$ for low $Q^2$ values. Whilst a negative gluon is not disallowed, it can lead to negative structure functions, which are unphysical. The resummed gluon, however, leads to a prediction for the longitudinal structure function that is positive and growing at small $x$ and $Q^2$, in contrast to fixed-order results, which show a significant perturbative instability.\\ A consequence of a more sensible description for $F_L$ is that a turnover is observed in the reduced cross-section $\tilde{\sigma}=F_2-y^2/[1+(1-y)^2]\,F_L$ at high $y$. As seen in figure \ref{redxsec}, this is required by the HERA data. Furthermore, this feature is missing in NLO fits (but present at NNLO). Thus the resummations lead to qualitatively different behaviour, consistent with known consequences of higher orders in the fixed-order expansion. Overall, we find very compelling evidence of the need for BFKL effects in describing proton structure \cite{WT3}. \begin{figure}[t!]
\begin{center} \scalebox{0.5}{\includegraphics{white_chris.fig2.ps}} \caption{Reduced cross-section data, compared with both resummed and fixed order predictions.}\label{redxsec} \end{center} \end{figure} \begin{footnotesize}
\section{Introduction} Superposition and entanglement are at the basis of the remarkable advantages of using quantum mechanics over classical physics in quantum information science \cite{Yung19,Terhal18,Harrow17}. Such quantum resources are exploited in several technological applications ranging from quantum simulation \cite{Kokail21,Dalmonte18}, quantum metrology \cite{Carollo18, Carollo19, Dobrza14, Riedel10, Joo11} and quantum cryptography \cite{Schimpf21,Yin20} to quantum computing algorithms \cite{Bruss11,Zidan18}. Besides entanglement, other quantities have been discovered and proposed as quantum resources over the last years, such as, for example, discord \cite{Ollivier01,Henderson01}, coherence \cite{Baumgratz14}, steering \cite{Cavalcanti14}, and contextuality \cite{Grudka14}. Generating entanglement and superposition states of large systems is a goal which has kindled a growing interest in physical scenarios where small quantum systems can control and coherently manipulate mesoscopic environments \cite{Dong19,Villazon19,Koch16}. In this sense, in recent decades, great attention has been paid to the study of spin systems, which have been successfully applied in quantum information \cite{Bennett00,Das13,Troiani11}. The simplest imaginable setting consists of a single spin-qubit (either just a qubit, or a spin-1/2 or, more generally, a two-level system) that controls $N$ other spin-1/2's homogeneously distributed on a circle centred on it. This system is commonly known as a central spin system \cite{Hutton04}, and the Hamiltonian models used to describe it are called spin-star models. In such a star-shaped system the $N$ `environmental' spins do not interact with each other directly, but only with the central one, which hence plays the role of a bridge through which quantum correlations between the spins surrounding it can arise.
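To make the star geometry concrete, the following minimal sketch (our own illustration with an $XX$-type star coupling of unit strength, not a model taken from the cited works) builds the Hamiltonian of a central spin coupled to $N=3$ surrounding spins, with no direct coupling among the outer spins, and checks that the total magnetization is conserved:

```python
import numpy as np

# Pauli matrices and single-site identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n_sites):
    """Single-spin operator `op` acting on `site` of an n_sites register."""
    out = np.array([[1.0 + 0j]])
    for k in range(n_sites):
        out = np.kron(out, op if k == site else I2)
    return out

N = 3                    # outer spins; site 0 is the central spin
n = N + 1
# XX star coupling: the central spin interacts with every outer spin,
# while the outer spins do not interact directly with each other
H = sum(embed(sx, 0, n) @ embed(sx, j, n) + embed(sy, 0, n) @ embed(sy, j, n)
        for j in range(1, n))

Sz_tot = sum(embed(sz, k, n) for k in range(n))
assert np.allclose(H @ Sz_tot, Sz_tot @ H)   # total magnetization conserved
```

Because all interaction terms contain one operator of the central spin, any correlation between two outer spins can only build up through the central one, which is the bridging role described above.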
The interest in central spin models has grown remarkably thanks to: I) their suitability for describing the hyperfine interaction in quantum dots \cite{Urbaszek13} and the interactions between nuclear spins and nitrogen-vacancy centers in diamond \cite{Schwartz18,London13}; II) their broad applicability in different fields such as quantum information \cite{Yung11}, quantum metrology and sensing \cite{Sushkov14,He19}, quantum thermodynamics \cite{Arisoy21} and fundamental aspects \cite{Haddadi21}. Further, many works have investigated the quantum correlations and thermal entanglement arising among spins in the star framework \cite{Byrnes19,Haddadi19,Xu18,Anza10,Militello11}, as well as the Markovian and non-Markovian dynamics induced by a surrounding bath interacting with the spin-star system \cite{Motamedifar19,Wang13,Ferr08}. Recently, the idea of a spin-chain-star system has been proposed \cite{Ping13, Yao11}, in which the control spin occupies the center of a star of $M$ rays (chains), arranged at angular distance $2 \pi / M$, each of which hosts the same sequence of $N$ spins, generally equidistant. There is no interaction between different chains, and the spins along a given ray need not even be identical \cite{Zhu18}. The spin-spin interactions within each chain and with the control spin are described by Hamiltonian terms strictly related to the physical scenario to be studied. In \cite{Eoghan21}, for example, the same system has been analysed to study effects related to quantum Darwinism in such a structured environment. In this work we consider particular spin-chain-star systems characterized by the peculiarity that the spins in each chain are all identical and interact through $N$-wise interactions among themselves and with the central spin, Fig.~\ref{fig: confs a}. 
Such interactions can be implemented via quantum simulation apparatuses based on either trapped ions \cite{Barreiro11,Muller11} or superconducting circuits made of transmon qubits \cite{Mezzacapo14}. Moreover, these exotic couplings have been demonstrated to be exploitable to generate superposition states, and hence quantum correlations, in large spin chains under the experimental control of magnetic fields applied to the spin system \cite{GLSM,GVdCVM}. We analytically demonstrate that our spin-chain-star systems, under appropriate conditions, can be unitarily reduced to standard spin-1/2-star systems, Fig.~\ref{fig: confs b}. This result implies the possibility of applying to such a large system some of the results already achieved for standard spin-star systems, such as the eigenstates, the eigenvalues and the quantum dynamics \cite{Villazon20}. We show, indeed, that, by appropriately `translating' these results into the spin-chain-star language, it is possible to bring to light how to generate entangled states of the different chains. Further, we show as well how to produce different classes of entangled states depending on the specific topological configuration chosen for the spin-chain-star system. The paper is organized as follows. In Sec. \ref{SCSS} different types of $N$-wise spin-chain-star models together with their unitarily equivalent standard spin-star models are introduced and their possible integrability is analysed. The quantum dynamics and the possibility of generating different classes of entangled states (depending on the configuration) for an $XX$ spin-chain-star model are discussed in detail in Sec. \ref{QD}. Finally, concluding remarks are reported in Sec. \ref{Conc}. 
\begin{figure*}[htp] \begin{center} \subfloat[][]{\includegraphics[width=0.35\textwidth]{Figure_spin-chain-star_systems.pdf} \label{fig: confs a}} \hspace{2cm} \subfloat[][]{\includegraphics[width=0.25\textwidth]{Figure_spin-star_systems.pdf} \label{fig: confs b}} \captionsetup{justification=raggedright,format=plain,skip=4pt}% \caption{(a) A spin-chain-star system composed of eight five-spin chains. The central white circle represents the ancilla, while the ellipses embracing the spins of each chain and the ancilla represent the $N$-wise interactions. The overlap of the eight ellipses is a byproduct of the cartoon and has no physical meaning. (b) The standard spin-star system unitarily equivalent to the system shown in panel (a).} \label{fig: confs} \end{center} \end{figure*} \section{Spin-chain-star systems} \label{SCSS} In this section we propose a new class of spin-chain-star systems characterized by the presence of only many-body interactions of maximum order. We show that, thanks to the many-body $N$-wise interactions within each chain, such systems are unitarily equivalent to standard spin-star systems, which are widely considered and studied for their applications in several fields \cite{Yung11,Sushkov14,He19,Arisoy21,Haddadi21, Ma10}. \subsection{The $X$ Model} Consider the following model: \begin{equation} \label{Ham 2 chains} H = H_1 + H_2, \end{equation} with \begin{subequations} \begin{align} H_1 &= {\hbar \omega_a \over 2} \hat{\sigma}_a^z + \gamma_1 \hat{\sigma}_a^x \otimes \left[ \bigotimes_i^{M_1} \hat{\sigma}_{1i}^x \right], \\ H_2 &= {\hbar \omega_a \over 2} \hat{\sigma}_a^z + \gamma_2 \hat{\sigma}_a^x \otimes \left[ \bigotimes_j^{M_2} \hat{\sigma}_{2j}^x \right]. 
\end{align} \end{subequations} The physical system described by this Hamiltonian model can be thought of as a central spin-1/2, to which we refer as the ancilla, coupled to two spin-1/2 chains (the subscripts of the two terms in the Hamiltonian \eqref{Ham 2 chains} refer to the two different chains). The coupling which characterizes the two spin chains (including the ancilla) consists of $N$-wise interaction terms, that is, a type of interaction involving all the spins at once. It has been demonstrated \cite{GLSM} that the $M$-wise spin operator $\bigotimes_{i=1}^M \hat{\sigma}_{i}^x$ can be unitarily reduced as (see Appendix \ref{App}) \begin{equation} \label{Passage x} \bigotimes_{i=1}^M \hat{\sigma}_{i}^x ~ \rightarrow ~ \hat{\sigma}_1^x, \end{equation} where $\hat{\sigma}_1^x$ is intended to be a $2^M$-dimensional operator. From a physical and mathematical point of view, this means that the $M$-spin chain effectively behaves as a single two-level system and can therefore be formally treated as a single qubit. The origin of such a fortunate circumstance can be traced back to the existence of a set of constants of motion which generate an equally numbered set of dynamically invariant two-dimensional Hilbert subspaces \cite{GLSM}. This implies that, within each of these subspaces, the $M$-spin dynamics can be mapped into that of a single two-level system \cite{GLSM}. A particularly interesting subdynamics is that characterized by the Hilbert space spanned by the two states $\ket{\downarrow}^{\otimes M}$ and $\ket{\uparrow}^{\otimes M}$ (with $\hat{\sigma}^z\ket{\downarrow}= -\ket{\downarrow}$ and $\hat{\sigma}^z\ket{\uparrow}= +\ket{\uparrow}$). Therefore, the two $M_k$-spin operators $\bigotimes_i^{M_1} \hat{\sigma}_{1i}^x$ and $\bigotimes_j^{M_2} \hat{\sigma}_{2j}^x$ in $H_1$ and $H_2$ can be unitarily transformed into $\hat{\sigma}_{11}^x$ and $\hat{\sigma}_{21}^x$, respectively. In this way, the model in Eq. 
\eqref{Ham 2 chains}, after the appropriate unitary transformations, can be written as \begin{equation} \tilde{H} = \hbar\omega_a \hat{\sigma}_a^z + \gamma_1 \hat{\sigma}_a^x \otimes \hat{\sigma}_{1}^x + \gamma_2 \hat{\sigma}_a^x \otimes \hat{\sigma}_{2}^x, \end{equation} where the second index of the spin operator in the last two terms, indicating the first spin of each chain, has been omitted. Let us now consider a more general spin-chain system: \begin{equation} \label{X spin-chain-star} H_x = \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k \hat{\sigma}_a^x \otimes \left[ \bigotimes_j^{M_k} \hat{\sigma}_{kj}^x \right]. \end{equation} We call such a physical system a spin-chain-star system, since we can imagine the $N$ different chains to be disposed in a star-shaped configuration, each coupled to the same central spin. It is easy to convince oneself that this model too, analogously to the two-chain case, can be transformed through unitary transformations into the following simpler spin model: \begin{equation} \label{X spin-star} \tilde{H}_x = \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k \hat{\sigma}_a^x \otimes \hat{\sigma}_{k}^x. \end{equation} This result shows that a star-shaped spin-chain system can be formally described and mathematically treated as a standard spin-star system, that is, a system consisting of $N$ mutually uncoupled spin-1/2's, each interacting with a single central spin-1/2. This possibility stems from the fact that each $N$-wise-interacting spin chain can be effectively reduced to a single two-level system. 
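The reduction in Eq. \eqref{Passage x} can be checked numerically for small chains. The following sketch is our own illustration: the CNOT cascade used here is one explicit unitary realizing the map, not necessarily the operator constructed in \cite{GLSM}. It verifies that conjugating $\hat{\sigma}^x\otimes\dots\otimes\hat{\sigma}^x$ by a cascade of CNOTs, all controlled by the first spin, yields $\hat{\sigma}_1^x$.

```python
import numpy as np

# Pauli-x and single-qubit identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron_all(ops):
    """Kronecker product of a list of single-qubit operators."""
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, M):
    """CNOT on an M-qubit register: |0><0|_c (x) 1 + |1><1|_c (x) X_t."""
    P0 = np.diag([1.0, 0.0]).astype(complex)
    P1 = np.diag([0.0, 1.0]).astype(complex)
    term0 = kron_all([P0 if i == control else I2 for i in range(M)])
    term1 = kron_all([P1 if i == control else (X if i == target else I2)
                      for i in range(M)])
    return term0 + term1

def reduction_unitary(M):
    """Cascade of CNOTs with qubit 0 as control: maps X^(x)M -> X_1."""
    U = np.eye(2 ** M, dtype=complex)
    for t in range(1, M):
        U = cnot(0, t, M) @ U
    return U

def check_x_reduction(M):
    """True if U (X (x) ... (x) X) U^dag equals X on the first qubit only."""
    U = reduction_unitary(M)
    x_chain = kron_all([X] * M)                  # X (x) X (x) ... (x) X
    x_first = kron_all([X] + [I2] * (M - 1))     # X (x) 1 (x) ... (x) 1
    return np.allclose(U @ x_chain @ U.conj().T, x_first)

assert check_x_reduction(3)
assert check_x_reduction(5)
```

The same check works for any $M$; each CNOT conjugation multiplies $\hat{\sigma}_1^x$ by the target's $\hat{\sigma}^x$, so the cascade strips the chain one spin at a time.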
\subsection{The $XY$ Model} It is interesting to point out that the $N$-wise spin operator $\bigotimes_{j=1}^M \hat{\sigma}_{j}^y$ can also be unitarily reduced \cite{GLSM} (see Appendix \ref{App}): \begin{equation} \label{Passage y} \bigotimes_{j=1}^M \hat{\sigma}_{j}^y ~ \rightarrow ~ \Biggl[ (-1)^{M-1 \over 2} \gamma_{y} \prod_{j=1}^{(M-1)/2} \sigma_{2j+1}^{z} \Biggr] \hat{\sigma}_1^y, \end{equation} provided that the number $M$ of spins is odd \cite{GLSM}, as shown in Appendix \ref{App}. The operator realizing this transformation is the same as the one accomplishing that in Eq. \eqref{Passage x}. In the above expression the $\sigma_{2j+1}^{z}$ are constants of motion whose possible values are $\pm 1$. The specific values assigned to these integrals of motion identify a precise dynamically invariant subspace \cite{GLSM}. The Hilbert subspace spanned by $\ket{\downarrow}^{\otimes M}$ and $\ket{\uparrow}^{\otimes M}$, for example, is characterized by all the constants of motion being equal to 1, namely $\sigma_{2j+1}^{z}=1, ~ \forall ~ j$. 
Therefore, the following $XY$ spin-chain-star system \begin{equation} \label{XY spin-chain-star} H_{xy} = \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k^x \hat{\sigma}_a^x \otimes \left[ \bigotimes_j^{M_k} \hat{\sigma}_{kj}^x \right]+ \sum_k^N \gamma_k^y \hat{\sigma}_a^y \otimes \left[ \bigotimes_j^{M_k} \hat{\sigma}_{kj}^y \right], \end{equation} can be mapped into the standard $XY$ spin-star system, which reads \begin{equation} \label{XY spin-star} \tilde{H}_{xy} = \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k^x \hat{\sigma}_a^x \otimes \hat{\sigma}_{k}^x + \sum_k^N \gamma_k^y \hat{\sigma}_a^y \otimes \hat{\sigma}_{k}^y, \end{equation} provided that the number $M_k$ of spins in the $k$-th chain is chosen so that $(M_k-1)/2$ is even, and that all the chains coupled to the central ancilla are initially prepared in either $\ket{\downarrow}^{\otimes M_k}$ or $\ket{\uparrow}^{\otimes M_k}$, or in any arbitrary superposition of these two states (such as a GHZ-like state). \subsection{The $XYZ$ Model} It is possible to demonstrate that the same set of unitary operators realizing the transformations in Eqs. \eqref{Passage x} and \eqref{Passage y} also realizes the following transformation \cite{GLSM} (see Appendix \ref{App}) \begin{equation} \label{Passage z} \bigotimes_{l=1}^M \hat{\sigma}_{l}^z ~ \rightarrow ~ \Biggl[ \gamma_{z} \prod_{l=1}^{(M-1)/2} \sigma_{2l+1}^{z} \Biggr] \hat{\sigma}_1^z, \end{equation} where $M$ is odd. 
Also in this case, if the involved two-level subdynamics of each chain is the one characterized by the two states $\ket{\downarrow}^{\otimes M_k}$ and $\ket{\uparrow}^{\otimes M_k}$, then the $XYZ$ spin-chain-star system \begin{equation} \label{XYZ spin-chain-star} \begin{aligned} H_{xyz} = & \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k^x \hat{\sigma}_a^x \otimes \left[ \bigotimes_j^{M_k} \hat{\sigma}_{kj}^x \right]+ \\ & \sum_k^N \gamma_k^y \hat{\sigma}_a^y \otimes \left[ \bigotimes_j^{M_k} \hat{\sigma}_{kj}^y \right]+ \sum_k^N \gamma_k^z \hat{\sigma}_a^z \otimes \left[ \bigotimes_j^{M_k} \hat{\sigma}_{kj}^z \right], \end{aligned} \end{equation} is unitarily equivalent to the standard $XYZ$ spin-star model \begin{equation} \label{XYZ spin-star} \begin{aligned} \tilde{H}_{xyz} = & \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k^x \hat{\sigma}_a^x \otimes \hat{\sigma}_{k}^x + \\ & \sum_k^N \gamma_k^y \hat{\sigma}_a^y \otimes \hat{\sigma}_{k}^y + \sum_k^N \gamma_k^z \hat{\sigma}_a^z \otimes \hat{\sigma}_{k}^z. \end{aligned} \end{equation} A qualitative representation of a star-shaped system composed of eight five-spin chains is shown in Fig.~\ref{fig: confs a}. Its unitarily equivalent standard spin-star system is shown in Fig.~\ref{fig: confs b}. Finally, it is worth pointing out that the effective mathematical description, based on the unitary transformation procedure, is not affected by a possible time-dependence of the Hamiltonian parameters. This property stems from the fact that the unitary operators which transform the Hamiltonians do not depend on the Hamiltonian parameters and, more generally, on time. \subsection{In the presence of fields} In this subsection we show that our analysis, and hence the unitary reduction of a spin-chain-star model to a standard spin-star model, remains valid when fields applied to the entire chains are considered. Let us suppose each entire chain in the system to be subject to a uniform magnetic field. 
The models in Eqs. \eqref{X spin-chain-star}, \eqref{XY spin-chain-star} and \eqref{XYZ spin-chain-star} are then supplemented by the term \begin{equation} \label{Fields on chains} \sum_k^N \hbar \omega_0^k \sum_j^{M_k} \hat{\sigma}_{kj}^z. \end{equation} It is possible to convince oneself \cite{GLSM} that the unitary transformations acting as expressed in Eqs. \eqref{Passage x}, \eqref{Passage y} and \eqref{Passage z} convert the operators in Eq. \eqref{Fields on chains} into (see Appendix \ref{App}) \begin{equation} \sum_k^N \left[ 1 + \sum_{j=2}^{M_k} \prod_{i=2}^j \sigma_{ki}^z \right] \hbar \omega_0^k \hat{\sigma}_{k1}^z, \end{equation} where the $\sigma_{ki}^z$ are constants of motion as before. Within the subspace (for each chain) we are interested in, that is, the one spanned by the two states $\ket{\downarrow}^{\otimes M_k}$ and $\ket{\uparrow}^{\otimes M_k}$, the integrals of motion are all equal to 1. We can then write the following effective two-level operators \begin{equation} \sum_k^N M_k ~ \hbar \omega_0^k ~ \hat{\sigma}_{k}^z, \end{equation} each of which accounts for the magnetic field applied to a chain, whose dynamics is equivalent to that of a single spin-qubit system. Therefore, in this physical scenario, the unitarily transformed effective models in Eqs. \eqref{X spin-star}, \eqref{XY spin-star} and \eqref{XYZ spin-star} are modified by simply introducing such terms in the Hamiltonians. It is important to underline that the introduction of fields uniformly acting upon each chain does not alter the symmetry properties of the original Hamiltonians. In this way, the possibility of performing the same unitary operations on the Hamiltonian interaction terms is not affected. As a final remark, we wish to emphasize the importance of the symmetry properties possessed by the Hamiltonian(s). For the case analysed in this paper, the study of the Hamiltonian symmetries is fundamental for the exact solution of the dynamical problem. 
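Within the relevant subspace the bracket in the transformed field term reduces to $1+(M_k-1)=M_k$, which can be cross-checked numerically. The sketch below is an illustration assuming a CNOT-cascade realization of the reducing unitary, with the convention $\ket{\uparrow}=\ket{0}$ (the actual operator of \cite{GLSM} may differ); it verifies that the transformed term acts as $M\hat{\sigma}_1^z$ on the images of $\ket{\uparrow}^{\otimes M}$ and $\ket{\downarrow}^{\otimes M}$ for $M=3$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron_all(ops):
    """Kronecker product of a list of single-qubit operators."""
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, M):
    """CNOT on an M-qubit register."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    return (kron_all([P0 if i == control else I2 for i in range(M)])
            + kron_all([P1 if i == control else (X if i == target else I2)
                        for i in range(M)]))

def transformed_field_term(M):
    """U (sum_j Z_j) U^dag for the CNOT-cascade unitary U (control = qubit 0)."""
    U = np.eye(2 ** M, dtype=complex)
    for t in range(1, M):
        U = cnot(0, t, M) @ U
    z_total = sum(kron_all([Z if i == j else I2 for i in range(M)])
                  for j in range(M))
    return U @ z_total @ U.conj().T

M = 3
A = transformed_field_term(M)
up_image = np.zeros(2 ** M); up_image[0] = 1.0      # image of |up,up,up>  -> |0,0,0>
down_image = np.zeros(2 ** M); down_image[4] = 1.0  # image of |dn,dn,dn>  -> |1,0,0>
assert np.allclose(A @ up_image, M * up_image)      # eigenvalue +M
assert np.allclose(A @ down_image, -M * down_image) # eigenvalue -M
```

On the two subspace images the transformed operator thus reproduces the counting $M_k\,\hbar\omega_0^k\,\hat{\sigma}_k^z$ used in the text.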
More generally, the symmetries, besides allowing some models to be treated analytically, have profound physical implications. Indeed, the disclosure of symmetry-protected (sub-)dynamics in different physical systems \cite{GMIV, GMMM} can lead to the discovery of physical effects which turn out to be useful and applicable in different fields such as quantum metrology \cite{Yoshinaga21, Hatomura22}. \subsection{Integrability} \label{Int} It is worth pointing out that the fully isotropic $XXX$ spin-star model is integrable; precisely, it belongs to the class of $XXX$ Richardson-Gaudin integrable models \cite{Gaudin14,Villazon20}. The condition of integrability stems from the existence of an appropriate set of integrals of motion, allowing all eigenstates and related eigenvalues to be obtained through the use of Bethe ansatz techniques \cite{Villazon20}. This circumstance, combined with the fact that the $XXX$ spin-star model well describes systems with spherical symmetry (such as quantum dots in semiconductors with s-type conduction bands \cite{Hanson07}), has spurred several studies focused on the equilibrium and dynamical properties of such a model \cite{Claeys18,Faribault13}. Very recently, it has been demonstrated that the $XX$ model is integrable as well \cite{Villazon20}. This model naturally emerges in resonant dipolar spin systems in rotating frames \cite{Fernandez18,Ding14} and its eigenstates are divided into two classes: dark and bright states. The former are product states of the ancilla and of all the other environmental spins, so that the central spin is disentangled from the spin-bath. The latter can be written as combinations of dark states and exhibit entanglement between the ancilla and the other spins \cite{Villazon20}. On this basis we therefore claim that, when an $XX$ spin-chain-star model can be unitarily reduced to a standard spin-star one, we can derive the exact expressions of the eigenvalues and eigenvectors of the spin-chain-star system. 
It is important to stress that, in this case, full integrability cannot be invoked, since the spin-chain-star model is exactly solvable only within the specific subspaces where the mapping to an integrable spin-star model is possible. As previously said, one of these subspaces is the one spanned by the pair of states $\{ \ket{\uparrow}^{\otimes M}, \ket{\downarrow}^{\otimes M} \}$. Within other subspaces, instead, although the reduction to standard spin-star models is always possible, the effective unitarily equivalent models present inhomogeneities ($XYZ$) which spoil the integrability. \section{Exact solution for $XX$ spin-chain-star systems} \label{QD} \subsection{$W$-like and $GHZ$-like states of spin-chains} In this section we specialize to the $XX$ spin-chain-star system by setting $\gamma_k^x=\gamma_k^y, ~ \forall~k$ in Eq. \eqref{XY spin-chain-star}, namely \begin{equation} \label{XX spin-chain-star} H_{xx} = \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k \left\{ \hat{\sigma}_a^x \otimes \left[ \bigotimes_j^{M} \hat{\sigma}_{kj}^x \right]+ \hat{\sigma}_a^y \otimes \left[ \bigotimes_j^{M} \hat{\sigma}_{kj}^y \right] \right\}. \end{equation} Moreover, we consider $N$ spin-1/2 chains, each consisting of $M$ spin-qubits and satisfying the constraint that $(M-1)/2$ is an even number. Through the appropriate unitary transformations previously discussed, we thus get an effective standard $XX$ spin-star system (Eq. \eqref{XY spin-star} with $\gamma_k^x=\gamma_k^y, ~ \forall~k$) \begin{equation} \label{XX spin-star} \tilde{H}_{xx} = \hbar\omega_a \hat{\sigma}_a^z + \sum_k^N \gamma_k \left[ \hat{\sigma}_a^x \otimes \hat{\sigma}_{k}^x + \hat{\sigma}_a^y \otimes \hat{\sigma}_{k}^y \right]. \end{equation} We consider all the $N$ spin chains initialized in the $M$-spin state $\ket{\downarrow}^{\otimes M}$. 
As said in the previous section, the symmetries of the Hamiltonian lead to a two-dimensional dynamically invariant subspace spanned by $\ket{\downarrow}^{\otimes M}$ and $\ket{\uparrow}^{\otimes M}$. This means that the $k$-th chain ($k=1 \dots N$) can be effectively represented in terms of the dynamical variables $\sigma_k^x, ~ \sigma_k^y, ~ \sigma_k^z$ of a fictitious qubit. The following mapping (valid for each chain) \begin{equation} \label{Mapping} \ket{\downarrow}^{\otimes M} \Longleftrightarrow \ket{-}, \qquad \ket{\uparrow}^{\otimes M} \Longleftrightarrow \ket{+}, \end{equation} (with $\hat{\sigma}^z\ket{\pm}= \pm\ket{\pm}$) enables us to fix unambiguously the initial state of the fictitious standard spin-star system. We suppose the ancilla and the effective standard spin-star system initially prepared in the following state \begin{equation} \label{Initial state} \ket{\psi(0)} = \ket{\uparrow_a} \ket{-}^{\otimes N}, \end{equation} which in terms of spin-chain states is written as \begin{equation} \label{Initial state new} \ket{\psi(0)} = \ket{\uparrow_a} \ket{\downarrow_1}^{\otimes M} \dots \ket{\downarrow_k}^{\otimes M} \dots \ket{\downarrow_N}^{\otimes M}. \end{equation} It is known that the exact time evolution of this initial state under the $XX$ spin-star Hamiltonian is \cite{Jivulescu09,Ferraro08} \begin{equation} \label{Evolved state} \ket{\psi(t)} = \alpha(t) \ket{\uparrow_a} \ket{-}^{\otimes N} + \ket{\downarrow_a} \sum_{k}^N \beta_k(t) \ket{-_1 \dots +_k \dots -_N}, \end{equation} with \begin{subequations} \label{alpha beta} \begin{align} \alpha(t) &= \cos(\omega~t) + i {\omega_a \over \omega} \sin(\omega~t) \label{alpha}, \\ \beta_k(t) &= -i {\gamma_k/\hbar \over \omega} \sin(\omega~t), \label{beta} \end{align} \end{subequations} where \begin{equation} \label{omega} \omega=\sqrt{\sum_k (\gamma_k/\hbar)^2 + \omega_a^2}. 
\end{equation} We underline that, for the initial condition under scrutiny, only one effective frequency, namely $\omega$, characterizes the time evolution of the system. Thus, the $XX$ spin-star system comes back to its initial condition with period $T = 2\pi/\omega$ and behaves as if its dynamics were governed by an effective Hamiltonian describing $N$ different spins homogeneously coupled to the central one, that is, with the same effective coupling constant. Precisely, such an effective model can be obtained by substituting in Eq. \eqref{XY spin-chain-star} $\gamma_k^x=\gamma_k^y=\gamma_{eff}$ with $\gamma_{eff} = \hbar \omega$, which is independent of $k$. Moreover, it is interesting to note that, for $\gamma_k = \gamma, ~ \forall ~ k$ and $\omega_a=0$, when $t=(2n+1)\pi/(2\omega)$ (so that $\alpha(t)=0$), we get the state \begin{equation} \label{W-states} \ket{W} = \ket{\downarrow_a} \left[ {1 \over \sqrt{N}} \sum_{k}^N \ket{-_1 \dots +_k \dots -_N} \right], \end{equation} up to a global factor $\exp\{-i \pi/2\}$. For this scenario, the time behaviour of the probabilities $|\alpha(t)|^2$ and $|\beta_k(t)|^2 = |\beta(t)|^2, ~ \forall ~ k$, is shown in Fig. \ref{fig: alpha-beta}(a) in the case of $N=9$. We highlight that the $N$ two-level systems end up in a $W$-like state and that each of these $N$ two-level systems represents one of the $N$ $M$-spin chains. Thus, the ancilla-mediated (quantum) correlations which arise between the effective $N$ spin-1/2's can be interpreted as correlations established between the $N$ spin chains. This means that such a $W$-state is a `macro-state' consisting of a superposition of states involving all the $M$-spin chains in the system. 
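The amplitudes in Eqs. \eqref{alpha beta} and \eqref{omega} are simple enough to check numerically. The following sketch (setting $\hbar=1$; the function names are ours) verifies the unitarity of the evolution, the instant at which the $W$-like state is reached for $\omega_a=0$, and the suppression of population transfer for $\omega_a \neq 0$.

```python
import numpy as np

def omega_eff(gammas, omega_a, hbar=1.0):
    """Effective frequency, Eq. (omega)."""
    g = np.asarray(gammas, dtype=float) / hbar
    return np.sqrt(np.sum(g ** 2) + omega_a ** 2)

def alpha(t, gammas, omega_a, hbar=1.0):
    """Amplitude of the initial component, Eq. (alpha)."""
    w = omega_eff(gammas, omega_a, hbar)
    return np.cos(w * t) + 1j * (omega_a / w) * np.sin(w * t)

def betas(t, gammas, omega_a, hbar=1.0):
    """Amplitudes of the single-excitation components, Eq. (beta)."""
    w = omega_eff(gammas, omega_a, hbar)
    return -1j * (np.asarray(gammas, dtype=float) / (hbar * w)) * np.sin(w * t)

N = 9
gammas = np.ones(N)                      # homogeneous couplings, gamma = hbar = 1

# normalization |alpha|^2 + sum_k |beta_k|^2 = 1 at an arbitrary time
t = 0.7
norm = abs(alpha(t, gammas, 0.0)) ** 2 + np.sum(np.abs(betas(t, gammas, 0.0)) ** 2)
assert np.isclose(norm, 1.0)

# for omega_a = 0, alpha(t) = cos(omega t) vanishes at odd multiples of
# pi/(2 omega); there |beta_k|^2 = 1/N, i.e. the W-like state is reached
w = omega_eff(gammas, 0.0)               # equals 3 for N = 9
t_w = np.pi / (2 * w)
assert np.isclose(abs(alpha(t_w, gammas, 0.0)), 0.0)
assert np.allclose(np.abs(betas(t_w, gammas, 0.0)) ** 2, 1.0 / N)

# a non-zero ancilla frequency caps the transfer: max_t sum_k |beta_k|^2
# equals sum_k (gamma_k/hbar)^2 / omega^2 < 1
for wa in (1.0, 5.0):
    t_half = np.pi / (2 * omega_eff(gammas, wa))
    assert np.sum(np.abs(betas(t_half, gammas, wa)) ** 2) < 1.0
```

The same functions reproduce the curves of Fig.~\ref{fig: alpha-beta}(a) when sampled over one period.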
We can then write the $W$-like state in terms of the spin chains as \begin{equation} \label{W-states spin-chain} \ket{W} = \ket{\downarrow_a} \left[ {1 \over \sqrt{N}} \sum_{k}^N \ket{\downarrow_1}^{\otimes M} \dots \ket{\uparrow_k}^{\otimes M} \dots \ket{\downarrow_N}^{\otimes M} \right]. \end{equation} Therefore, we claim that the $XX$ spin-chain-star model [Eq. \eqref{XX spin-chain-star}] allows for the generation of `macro' $W$-like states of the chains and hence for the creation of large-scale entanglement between all the subsystems in the spin-chain-star physical scenario. \begin{figure}[htp] \begin{center} {\includegraphics[width=0.45\textwidth]{alpha-beta.pdf}} \captionsetup{justification=raggedright,format=plain,skip=4pt}% \caption{(a) Time dependence of $|\alpha(t)|^2$ (blue dashed line) and $|\beta_k(t)|^2 = |\beta(t)|^2, ~ \forall ~ k$ (red solid line) [see Eq. \eqref{alpha beta}] for $N=9$ $M$-spin chains, $\gamma_k=\gamma, ~ \forall ~ k$ and $\omega_a=0$. (b) Effects of the detuning on the time dependence of $|\beta_k(t)|^2 = |\beta(t)|^2, ~ \forall ~ k$, for $N=9$ and $\gamma_k=\gamma, ~ \forall ~ k$; different values of the ratio $\hbar \Delta / \gamma$ are considered: 0 (red dotted line), 1 (green dot-dashed line), 5 (blue dashed line) and 10 (black solid line).} \label{fig: alpha-beta} \end{center} \end{figure} \subsection{Effects of Detuning} Let us suppose that a uniform magnetic field is applied to all the identical $M$-spin chains of the system, namely \begin{equation} \label{Fields on chains det} {\hbar \omega_0 \over M} \sum_k^N \sum_j^{M} \hat{\sigma}_{kj}^z. \end{equation} In this case the unitarily equivalent spin-star model reads \begin{equation} \label{XX spin-star fields} \tilde{H}_{xx}' = \hbar\omega_a \hat{\sigma}_a^z + \hbar \omega_0 \sum_k^N \hat{\sigma}_{k}^z + \sum_k^N \gamma_k \left[ \hat{\sigma}_a^x \otimes \hat{\sigma}_{k}^x + \hat{\sigma}_a^y \otimes \hat{\sigma}_{k}^y \right]. 
\end{equation} It is easy to check that, for such a physical scenario, the expression of the evolved state in Eq. \eqref{Evolved state}, obtained when the spin-chain-star system is initially prepared in the state \eqref{Initial state}, remains formally identical, as does that of $\beta_k(t)$ in Eq. \eqref{beta} \cite{Jivulescu09,Ferraro08}. The only slight variation is found in the mathematical expressions of $\alpha(t)$ and $\omega$, which now become, respectively, \cite{Jivulescu09,Ferraro08} \begin{subequations} \begin{align} \alpha(t) &= \cos(\omega~t) - i ~ {\Delta \over \omega} \sin(\omega~t) \label{alpha det}, \\ \omega &= \sqrt{\sum_k (\gamma_k/\hbar)^2 + \Delta^2}, \end{align} \end{subequations} where the detuning is defined as follows: \begin{equation} \label{omega det} \Delta = \omega_0 - \omega_a. \end{equation} The effects of the detuning on the probabilities $|\beta_k(t)|^2$ are shown in Fig. \ref{fig: alpha-beta}(b), for $\gamma_k=\gamma, ~ \forall ~ k$. We see, as expected, that the greater the detuning, the lower the probabilities $|\beta_k(t)|^2$ (and the higher the complementary probability $|\alpha(t)|^2$), meaning that the system tends to remain in its initial condition for high values of the detuning. This result thus also demonstrates that, although a non-vanishing field $\omega_a$ is present on the ancilla spin, it is still possible to generate the $W$-state in Eq. \eqref{W-states}, provided that a further magnetic field of magnitude $\omega_a/M$ is uniformly applied to all the $M$-spin chains (so that $\Delta=0$). \subsection{Entanglement} In this section we investigate the possible quantum correlations emerging in the spin-chain-star system. To this end, we consider the case of $N=2$ chains, each made of $M$ spin-qubits, and $\gamma_k = \gamma, ~ \forall ~ k$. 
It is interesting to note that in this case, through the procedure previously described and for $\Delta=0$, if we get $-1$ by measuring the ancilla dynamical variable $\hat{\sigma}_a^z$ at $t= \pi/(2\omega)$, Eq. \eqref{Evolved state} predicts that the resulting spin-state of the two chains has the following form \begin{equation} \label{GHZ 2 chains} \ket{GHZ} = {\ket{\uparrow}^{\otimes M} \ket{\downarrow}^{\otimes M} + \ket{\downarrow}^{\otimes M} \ket{\uparrow}^{\otimes M} \over \sqrt{2}}. \end{equation} Since in this state the concurrence \cite{Wootters} between two generic spin-1/2's (in the same or different chains) vanishes, and a measurement of the collective $z$-component of each whole spin chain can give only the two values $\pm M$, it is legitimate to call this multi-spin state a $GHZ$-like state. The legitimacy of such a denomination of the state \eqref{GHZ 2 chains} can be convincingly strengthened simply by observing that, exploiting our mapping \eqref{Mapping}, the same state \eqref{GHZ 2 chains} can be written as the Bell state \begin{equation} \ket{GHZ} \rightarrow \ket{\Psi^+} = {\ket{+} \ket{-} + \ket{-} \ket{+} \over \sqrt{2}}, \end{equation} which is characterized by the maximum level of concurrence ($C=1$). We stress that, as a result of our analysis, the entanglement should be understood as the signature of the existence of quantum correlations between the spin chains. We wish to emphasize the added value of the representation in terms of fictitious two-level systems, since it provides information about quantum correlations established between the spin chains. 
According to this interpretation, it becomes relevant to calculate the time dependence of the concurrence exhibited by the two chains in the state \eqref{Evolved state}: \begin{equation} \begin{aligned} \ket{\psi(t)} =& \alpha(t) \ket{\uparrow_a} \ket{-} \ket{-} \\ &+\ket{\downarrow_a} \left[ \beta_1 (t) \ket{+} \ket{-} + \beta_2(t) \ket{-} \ket{+} \right] = \\ =&\alpha(t) \ket{\uparrow_a} \ket{\downarrow}^{\otimes M} \ket{\downarrow}^{\otimes M} \\ &+\ket{\downarrow_a} \left[ \beta_1 (t) \ket{\uparrow}^{\otimes M} \ket{\downarrow}^{\otimes M} + \beta_2(t) \ket{\downarrow}^{\otimes M} \ket{\uparrow}^{\otimes M} \right]. \end{aligned} \end{equation} It is possible to verify that the concurrence for the density matrix of the two effective spin-1/2's is \begin{equation} C(t) = 2 |\beta_1(t)| |\beta_2(t)|. \end{equation} We are therefore able to write the time dependence of the entanglement emerging between the two chains. The latter is maximal for $|\beta_1(t)| = |\beta_2(t)| = 1/\sqrt{2}$, corresponding to the $GHZ$-like state previously examined. Analogously, in the case of $N=3$ $M$-spin chains, by measuring $\sigma_a^z=-1$ at $t= \pi/(2\omega)$, the state reached by the spin system is \begin{equation} \begin{aligned} &\ket{W} = { \ket{+--} + \ket{-+-} + \ket{--+} \over \sqrt{3}} = \\ &{\ket{\uparrow}^{\otimes M} \ket{\downarrow}^{\otimes M} \ket{\downarrow}^{\otimes M} + \ket{\downarrow}^{\otimes M} \ket{\uparrow}^{\otimes M} \ket{\downarrow}^{\otimes M} + \ket{\downarrow}^{\otimes M} \ket{\downarrow}^{\otimes M} \ket{\uparrow}^{\otimes M} \over \sqrt{3}}, \end{aligned} \end{equation} which can be interpreted as a maximally entangled $W$-like state of the three $M$-spin chains. Also in this case we may infer the entanglement between the three spin chains with the help of the effective description involving three interacting two-level systems. 
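The expression $C(t)=2|\beta_1(t)||\beta_2(t)|$ can be cross-checked against the Wootters formula applied to the reduced state of the two effective qubits. The following sketch ($\hbar=1$, $\omega_a=0$; the computational-basis convention $\ket{+}=\ket{0}$, $\ket{-}=\ket{1}$ is our illustrative choice) traces out the ancilla from the evolved state and compares the numerical concurrence with the analytic one.

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])

def wootters_concurrence(rho):
    """Concurrence of a two-qubit density matrix (Wootters)."""
    YY = np.kron(Y, Y)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sqrt(np.clip(np.sort(np.linalg.eigvals(R).real)[::-1], 0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def evolved_concurrence(t, g1=1.0, g2=1.0, omega_a=0.0):
    """Numerical and analytic concurrence of the two effective chain-qubits."""
    w = np.sqrt(g1 ** 2 + g2 ** 2 + omega_a ** 2)
    a = np.cos(w * t) + 1j * (omega_a / w) * np.sin(w * t)
    b1 = -1j * (g1 / w) * np.sin(w * t)
    b2 = -1j * (g2 / w) * np.sin(w * t)
    # |psi> = a |up_a,-,-> + b1 |dn_a,+,-> + b2 |dn_a,-,+>, with |+>=|0>, |->=|1>
    psi = np.zeros(8, dtype=complex)
    psi[0b011] = a      # ancilla 0, chains (1,1)
    psi[0b101] = b1     # ancilla 1, chains (0,1)
    psi[0b110] = b2     # ancilla 1, chains (1,0)
    A = psi.reshape(2, 4)               # split off the ancilla index
    rho12 = A.T @ A.conj()              # partial trace over the ancilla
    return wootters_concurrence(rho12), 2 * abs(b1) * abs(b2)

num, analytic = evolved_concurrence(0.3)
assert np.isclose(num, analytic)
```

At $t=\pi/(2\omega)$, where $\alpha=0$ and $|\beta_1|=|\beta_2|=1/\sqrt{2}$, the same routine returns $C=1$, the maximally entangled case discussed above.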
It is easy to verify that, for the state \eqref{Evolved state} in the case of $N=3$, each pair $i$-$j$ of chains exhibits a non-vanishing concurrence equal to $2|\beta_i||\beta_j|$. Each pair of generic true spin-1/2's is instead disentangled, as in the case of two chains. This result can be extended to the case of $N$ spin chains. In this instance it is possible to check, indeed, that the entanglement between two generic true spins vanishes, while the concurrence for a generic pair of chains $i$-$j$ is $2|\beta_i||\beta_j|$. This means that the spin-chain-star system discussed here, besides generating quantum correlations in a large spin system, is suitable for generating different types of entangled states of the chains assumed in the model under scrutiny. The origin of such differences can be traced back to specific topological and structural properties of the spin-chain-star system, such as the number of chains and the number of spins per chain. \section{Conclusions} \label{Conc} In this work we have considered a special class of spin-chain-star systems. Precisely, we have focused our attention on spin chains characterized by many-body $N$-wise interactions, that is, interaction terms involving all the spins in a chain at once. We have taken into account several types of spin-chain-star models, including different types of $N$-wise interactions as well as the presence of local magnetic fields on both the ancilla and the spins in the chains. We have shown that each model we have considered can be analytically treated through unitary transformations and that each chain, for specific initial conditions, effectively behaves and can be thought of as a two-level system. This implies that a spin-chain-star system belonging to the class under scrutiny is unitarily equivalent to a standard spin-star system. 
Therefore, we can exploit the knowledge and the results obtained for the standard spin-star models and interpret them in terms of multiple-chain states. For example, we have demonstrated that, starting from a disentangled state of the spin-chain-star system, our model and scheme allow for the generation of a `macro'-entangled state of all the spin chains which form the system. Therefore, we can speak of quantum correlations arising both between the actual spins and between the spin chains globally described as two-level systems. In the case of $N=2$ and $N=3$ spin chains, thanks to the mapping into a spin-1/2-star model, we can affirm that the system evolves and reaches a maximally entangled superposition at appropriate instants of time. Moreover, we are also able to quantify the entanglement established between two spin chains through the calculation of the concurrence (since the two chains effectively behave as two spin-qubits). Finally, we have analysed how magnetic fields acting on the ancilla and (homogeneously) on the chains affect the probability of generating such a state. In this way, we have found the optimal experimental working condition needed to obtain a macro-entangled state. Our result thus paves the way to the generation of large-scale entanglement in spin systems made of several spins. Further investigations could concern quantum oscillator bath(s) interacting with the spin chains and/or the ancilla. In this case a numerical approach to the dynamics will likely be necessary. Several approaches could be used to deal with this scenario, from the standard Lindblad theory \cite{Gorini76, Lindblad76} to the partial Wigner transform \cite{Kapral99, kapral01, Sergi03, Sergi05, Sergi07, SGHM, SHGM} and the non-Hermitian formalism \cite{Feshbach58, Bender98, Mostafazadeh10, Rotter15, Sergi13, Sergi15, Brody12, GdCKM, GdCNM2, GMSVF}. 
\section*{Acknowledgements} RG and DV acknowledge financial support from the PRIN Project PRJ-0232 - Impact of Climate Change on the biogeochemistry of Contaminants in the Mediterranean sea (ICCC).
\section{Introduction} In these notes we delineate a new approach to the study of forces between two (but not limited to two) neutral atoms mediated by a quantum (in this case, electromagnetic) field, based on nonequilibrium quantum field theory \cite{NEqQFT}. This theory is ostensibly very different from the usual approaches researchers in atomic, molecular and optical (AMO) physics are familiar with in the treatment of atomic-optical systems, and it may at first sight appear to be too cumbersome or complicated to be necessary. However, with the advances of sophisticated and highly controllable experiments in AMO physics made possible by new high-precision instrumentation applied to cold atoms in optical lattices (see, e.g., the experiments and the theoretical analysis of \cite{Rey04}) or cavities (with the capability of tracking atoms in real time \cite{Orozco}) or nanoelectromechanical systems, we are entering an age where traditional theories will soon become inadequate. The method we introduce here has the advantage of being an amalgamation of quantum field theory and nonequilibrium statistical mechanics: the former is required for quantum field effects (customarily referred to as retardation, though more is involved), the latter for treating processes involving quantum dissipation and noise. Not only can this method reproduce all the effects and forces known in the last century, as detailed below \cite{Lon, DLP, CasPol}, it can also deal with phenomena and processes more recently brought to central attention by quantum foundational and information processing issues, such as quantum decoherence and entanglement dynamics, including non-Markovian processes (those carrying memories) which invariably appear when back-action is taken into consideration. Since this method can treat quantum back-action and feedback in a self-consistent manner, it is uniquely adept to quantum control considerations \cite{QControl}.
Unlike most known treatments of systems in near-equilibrium conditions based on linear or nonlinear response theory, this is a fully nonequilibrium dynamical description of the atoms' motion \footnote{One simple way to tell the difference is whether temperature is used ab initio or whether the system remains stationary.}. Although our attention is focussed on the forces between two neutral atoms derived by finding their relative trajectories (which are determined self-consistently by the atoms' interaction with the field and with each other via the field and the field's fluctuations in the presence of the atoms), one can also treat radiation reaction, dissipation, and fluctuation phenomena with non-Markovian behaviors \cite{BH09}. As this paper will hopefully illustrate, a small initial investment in this new method can pay off bountifully. We consider an assembly of $n$ neutral atoms (labeled by $a = 1, \ldots, n$) and model the internal degrees of freedom (idf) of the $a$th atom by a three-dimensional harmonic oscillator with coordinates $\vec{Q}_a$ (thus describing the atom's spontaneous and stimulated emission and absorption while interacting with a field). The atoms interact with an electromagnetic field (from near-field Coulomb force to far-field radiation) with vector potential $A^\mu$ through a dipole interaction, but not directly with one another. The force between them arises through field-mediated mutual influences. The non-relativistic trajectory of the $a$th atom is described by $\vec{z}_a$, which, unlike in most previous treatments, is a dynamical variable (not prescribed) determined self-consistently by a negotiation amongst all the other variables $(\vec{Q}_a, A^\mu)$. Our interest in this paper is primarily focussed on the center of mass motion of each atom and not on the microscopic details of the other variables.
The open quantum systems \cite{qos} approach can efficiently isolate the desired information about the atom's trajectory through a succession of coarse-graining procedures, as detailed below, which take into account the overall effects (back-action) of the remaining variables. Using the influence functional method we can incorporate the effects of the microscopic physics of the field and the atom's idf and derive an effective equation of motion for the atom's trajectory, from which the force between the two atoms can be extracted by appealing to Newton's second law. In this setup the dipole moment of the atom modeled by an oscillator is not permanent but only instantaneously non-vanishing. The uneven distribution of charge in the atom comes from two effects. First, semi-classically speaking, the magnitude and direction of the nuclear-electron separation (which is proportional to the dipole moment of the atom) will unpredictably vary in time even in the absence of quantum fields, and second, in the presence of quantum fields the atom is polarized by electric field fluctuations. Dipole moment fluctuations are the source of the interaction among neutral atoms and are responsible for two types of forces arising from distinctly different physical origins. \subsubsection{Intrinsic Fluctuation Force} In the quantum-field conception of a neutral atom the electronic wavefunction surrounding the nucleus has a fluctuating component, modeled in our approach by a quantum mechanical harmonic oscillator. As a whole the atom will always remain neutral. However, in time \textit{intrinsic fluctuations} of the oscillator, due to its quantum nature, lead to an uneven local distribution of charge in an otherwise (globally) neutral atom, which gives rise to an instantaneous dipole moment that couples to the attending electromagnetic field.
Radiation traveling away from the first (fluctuating) atom carries information about the orientation of its dipole (at the time of emission in the past) which eventually reaches and polarizes the second atom. The second atom's response to the field leads it also to produce a time-varying electric field that travels back to the first atom and is correlated with the activities of the fluctuating atom's idf, leading to a nonvanishing interaction energy. One can think of the second atom as a transponder which receives a signal from the fluctuating atom and then rebroadcasts it. In this analogy the fluctuating atom will receive a signal reflected from the transponder atom which encodes its own history. This is easily conceptualized if we consider the atoms to be so close that the light transit time between them is much smaller than all other characteristic time scales governing the dynamics. In such a case the retarded electric field is well approximated by the electrostatic field. Thus, an intrinsic fluctuation of the idf of one atom will source a static dipole electric field seen by the second atom. The second atom is polarized by this external field, leading it too to source a dipole field felt by the fluctuating atom. This process leads to an energetically favorable arrangement of the two atoms' dipole moments which gives rise to the attractive force between them. This type of force, due to intrinsic fluctuations in the neutral atoms' dipole moments, contains two well-known forces: 1) the van der Waals force, usually used to describe all interactions between neutral atoms and molecules categorically, and 2) the London force, which arises from the Coulombic interaction between atoms without permanent multipole moments and without consideration of retardation effects (which the Casimir-Polder force does include). We refer to forces of this type as \textbf{intrinsic fluctuation forces}.
\subsubsection{Induced Dipole Force} It goes without saying that the quantum field itself possesses intrinsic fluctuations. Any instantaneously generated local electric field will \textit{induce} non-vanishing dipole moments in both atoms. We classify the interaction of dipole moments induced by the fluctuations of the quantum field as \textbf{induced dipole forces}. We suggest making a clean separation between forces arising from intrinsic (discussed above) and induced (discussed here) fluctuations of the dipole moment of a neutral atom because the physical processes produce quite distinct results, as shown in later sections. The physical origin of this component of the force is the spatial correlation of field fluctuations. Any given field fluctuation will induce correlated dipole moments in the two atoms, much like a long-wavelength water wave on the ocean will raise and lower two nearby buoys in phase. The excitation of the dipoles by the field will lead to radiation that contains information about the emitter. When the radiation from one atom reaches the other, the correlation between the induced motion of each dipole moment at the time of emission, and the subsequent communication of that motion via radiation, leads to a nonvanishing interaction energy. A well-known force of this nature is that of Casimir and Polder (CP) \cite{CasPol}, who included considerations of the quantum nature of the field. This CP force (there is also the CP force between an atom and a mirror, which will be treated in our second paper) is a generalization of the London description including retardation corrections as well as effects of field quantization -- quantization being what imbues the field with its own intrinsic fluctuations. \subsubsection{Coarse-graining and Back-action} For a description of the forces between two atoms we need to know only the averaged effect of the quantum field and the oscillator's idf on the atom's trajectory; their details are not of great concern in this quest.
Imagine the transition amplitude for the \textit{total} system to evolve from some initial state $\left| \vec{z}_{in}, \varphi_{in} \right>$ to some final state $\left| \vec{z}_{out}, \varphi_{out} \right>$ in time $T$, where $\vec{z}$ labels the atom's position and $\varphi$ is a collective label for the state of all the remaining (environment) variables in the total system. Our primary interest is the time development of the atom's center of mass, for which the field and its interaction with the atom's idf play a central role through processes like dissipation and radiation reaction. For a given final position of the center of mass there can be many consistent final field and oscillator states, likely unobservable. Summing over all final environment states compatible with the atom's motion is \textit{necessary} when we are ignorant of the final state of the environment, whether we choose to ignore those details or they are not measurable. This leads to an effective transition amplitude for the trajectory of the atom \textit{alone}, where all environmental effects on the trajectory have been taken into account. Carrying out this process of \textit{coarse-graining}, where the final field and oscillator states are traced over, leads to an effective action that self-consistently accounts for all back-action of the field and the atom's idf on the atom's trajectory. The equation of motion for the atom, and thus the atom-atom force, can be obtained through a variation of this action \cite{BH09}. As we shall show, the present formulation goes beyond previous work in that we can derive the forces between two atoms under fully dynamical and nonequilibrium conditions. When the spacing between the atoms is held fixed we recover the well-known London and CP forces.
For the case where the atoms and field are not in thermal equilibrium we find a novel far-field scaling for the induced dipole force, diminishing as $1/z^3$ rather than $1/z^8$, and for the case when the two atoms are entangled we find a novel near-field scaling that enters at second order in perturbation theory as $q^2/z^2$, as opposed to the standard $q^4/z^7$, where $z$ quantifies the interatomic distance and $q$ the electronic charge. To the best of our knowledge this nonequilibrium force and the entanglement force behavior have not been reported in the literature. This paper is organized as follows: In Sec. 2 we introduce the microscopic details of the system by defining the action describing the dynamics of the entire system. The worldline influence functional method is adopted, whereby the environment degrees of freedom (field + oscillator) are traced over to find the time evolution of the reduced density matrix. The equations of motion for the atomic trajectories are then obtained from the saddle points of the reduced density matrix. In Sec. 3 the explicit form of the atom-atom force is computed, and the physical origin of each component is explained. In Sec. 4 an initially entangled state of the two oscillators is considered. It is found that the correlation among the oscillator coordinates leads to a contribution to the atom-atom force that enters at second order in perturbation theory. Sec. 5 concerns the possibility of detecting the nonequilibrium atom-atom force (when the field and atoms are not in thermal equilibrium), which is new, as well as the novel entanglement force outlined in Sec. 4. \section{The Model} We describe the microphysical degrees of freedom of the entire system through the following action \begin{equation} S[\vec{Q}_a,\vec{z}_a,A^\mu]= \sum_{a} (S_Q[\vec{Q}_a]+S_Z[\vec{z}_a]+S_{int}[\vec{Q}_a,\vec{z}_a,A^\mu] )+S_{E}[A^\mu] \end{equation} where the sum is over all atoms.
The action describing the dynamics of the oscillator is given by \begin{equation} S_Q[\vec{Q}_a]=\frac{\mu_a}{2}\int d\lambda[\dot{\vec{Q}}_a(\lambda)^2-\Omega^2_a\vec{Q}_a(\lambda)^2] \end{equation} where $\mu_a$ is the $a$th oscillator's reduced mass, $\lambda$ its worldline parameter, and $\Omega_a$ its natural frequency. The electromagnetic field action is given by \begin{equation} S_{E}[A^\mu]=-\frac{1}{4}\int d^4x F_{\mu\nu} F^{\mu\nu} \end{equation} (the subscript $E$ stands for the electric field) where $A^\mu$ is the $4$-vector potential and $F_{\mu \nu}=\partial_{\mu} A_{\nu}-\partial_{\nu} A_{\mu}$ is the field strength tensor. The action for the center of mass motion is \begin{equation} S_Z[\vec{z}_a]=\int d\lambda \bigg[ \frac{M_a}{2} {\dot{\vec{z}}_a}^2(\lambda)-V[\vec{z}_a] \bigg] \end{equation} where $M_a$ is the atom's total mass and $V[\vec{z}_a]$ is an external potential. In the dipole approximation, the potential energy for an atom interacting with the photon field takes the form $-q \ \vec{Q}\cdot \vec{E}[\vec{z}]$, where $q\vec{Q}$ is the atom's instantaneous dipole moment and $\vec{E}$ is the electric field, leading to the interaction action $ S_{int}[\vec{Q}_a, \vec{z}_a , A_\mu]= q_a \int d\lambda {Q}^i_a(\lambda) E_{i}[z^{\mu}_a(\lambda)] $. Above, $q_a$ quantifies the coupling of the $a$th atom to the field. [Greek indices will refer to spacetime components of a four-vector, zero referring to time, and Roman indices refer to spatial components, where we will exclusively use the letters $\{i,j,k\}$ to avoid confusion with the letter $a$ used to label atoms. Contraction of four-vectors is undertaken with the Minkowski metric with $(-,+,+,+)$ signature, and the Einstein summation convention is used throughout.] \subsubsection{Worldline Influence Functional } Assume that at time $t_{in}$ the quantum statistical state of the oscillators, trajectory, and field is described by a density operator $\hat{\rho}(t_{in})$.
This state is unitarily evolved from the initial time $t_{in}$ to a later time $t>t_{in}$, and can be expressed in terms of path integrals by considering matrix elements in an appropriate basis. To isolate the influence of the field on the dynamics of the atom we coarse-grain over the field variables to construct the field-reduced density matrix, $ \rho_r(\vec{Q}_a,\vec{Q}'_a;\vec{z}_a,\vec{z}'_a;t)=\int dA^{\mu} \ {\rho}(\vec{Q}_a,\vec{Q}'_a ;\vec{z}_a,\vec{z}'_a; A^\mu, A^\mu;t). $ By assuming that the field is initially uncorrelated with the other degrees of freedom the reduced density matrix takes the form, \begin{eqnarray} \label{rhor} \rho_r(\vec{Q}_a,\vec{Q}'_a;\vec{z}_a,\vec{z}'_a;t)= \prod_a \int d \vec{Q}_{in,a} \ d\vec{Q}'_{in,a} \int d\vec{z}_{in,a} \ d\vec{z}'_{in,a} \int_{\vec{Q}_{in,a}}^{\vec{Q}_a}\mathcal{D}\vec{Q}_a \int_{\vec{Q}_{in,a}'}^{\vec{Q}'_a}\ \mathcal{D}\vec{Q}'_a \int_{\vec{z}_{in,a}}^{\vec{z}_a}\mathcal{D}\vec{z} _a \int_{\vec{z}_{in,a}'}^{\vec{z}'_a} \mathcal{D}\vec{z}' _a \nonumber \\ \times e^{i(S_Q[\vec{Q}_a]+S_Z[\vec{z}_a]-S_Q[\vec{Q}'_a]-S_Z[\vec{z}'_a])} \rho_{Qa}(\vec{Q}_{in,a},\vec{Q}_{in,a}';t_{in}) \rho_Z(\vec{z}_{in,a},\vec{z}_{in,a}';t_{in}) \mathcal{F}[{J}^{\mu-}, {J}^{\nu+}] \end{eqnarray} which introduces the influence functional (IF) $\mathcal{F}[{J}^{\mu-}, {J}^{\nu+}]$ \cite{FeyVer}. If the initial state of the field is Gaussian in field variables (which includes vacuum and thermal states) the influence functional can be calculated exactly for the dipole field interaction. 
\begin{equation} \label{IF} \mathcal{F}[{J}^{\mu-}, {J}^{\nu+}]=\exp\bigg\{i\int d^4y\int d^4y' [J^{\mu-}(y)D^{ret}_{\mu \nu}(y,y')J^{\nu+}(y')+\frac{i}{4}J^{\mu-}(y)D^H_{\mu \nu}(y,y')J^{\nu-}(y')]\bigg\} \end{equation} Here the current density is \begin{equation} \label{CD} J_\mu(x)=- \sum_a q_a \int d\lambda \kappa_{i\mu} \delta^4(x^\mu-z^\mu_a(\lambda))Q^i_a(\lambda), \end{equation} $J^+=(J+J')/2$ and $J^-=J-J'$ are its semi-sum and difference, respectively, where the prime distinguishes histories, and $\kappa_{i\mu}=\partial_i \eta_{0 \mu}-\partial_0 \eta_{i \mu}$ is a differential operator that relates the photon field to the electric field by contraction, i.e. $E^i = \kappa^i_\mu A^\mu$. $D^{ret}_{\mu \nu}(y,y')$ and $D^H_{\mu \nu}(y,y')$ are the retarded Green's function and the Hadamard function for the field, respectively. In the Feynman gauge they can be expressed in terms of the retarded, $D_{ret}$, and Hadamard, $D_H$, Green's functions for a massless scalar field: \begin{equation} D^{ret}_{\mu \nu}(x,x')=\eta_{\mu \nu}D_{ret}(x,x') \ \ \ \ \ \ D^{H}_{\mu \nu}(x,x')=\eta_{\mu \nu}D_{H}(x,x') \end{equation} At zero temperature they take on the explicit form \begin{equation} D_{ret}(x,x')=\frac{1}{4\pi}\theta(t-t')\delta(\sigma) \ \ \ \ \ \ D_{H}(x,x')=-\frac{1}{4\pi^2 \sigma} \end{equation} where $\sigma$ is Synge's worldfunction, defined to be half the squared geodesic distance between the four-vectors $x$ and $x'$, $\sigma=(x-x')^2/2$. \subsubsection{Oscillator-Reduced Influence Functional} We isolate the net influence that the oscillator's idf $\vec{Q}_a$ and the field $A^\mu$ have on the trajectory by tracing over all final oscillator configurations, $\rho_{or}(\vec{z}_a, \vec{z}'_a; t)= \int d\vec{Q}_a \rho_r(\vec{Q}_a,\vec{Q}_a;\vec{z}_a,\vec{z}'_a;t)$, introducing the oscillator-reduced density matrix $\rho_{or}$.
\begin{eqnarray} \label{ordm} \rho_{or}(\vec{z}, \vec{z}'; t) = \int d\vec{z}_{in,1} \ d\vec{z}'_{in,1} \int_{\vec{z}_{in,1}}^{\vec{z}}\mathcal{D}\vec{z} _1 \int_{\vec{z}_{in,1}'}^{\vec{z}'} \mathcal{D}\vec{z}' _1 e^{i(S_Z[\vec{z}_1]-S_Z[\vec{z}'_1])} \rho_Z(\vec{z}_{in,1},\vec{z}_{in,1}';t_{in}) \mathcal{F}_Z[\vec{z}_1,\vec{z}'_1] \end{eqnarray} All the effects of the environment are now packaged in the oscillator-reduced IF, $\mathcal{F}_Z[\vec{z}_1^-,\vec{z}_1^+]$. The development has been simplified by working in the rest frame of the second atom; in so doing, $\vec{z}_2$ is no longer treated as a dynamical variable. \begin{eqnarray} \label{orIF1} \mathcal{F}_Z[\vec{z}^-_1,\vec{z}^+_1] = \prod_a \int d\vec{Q}_a d\vec{Q}_{in,a} d\vec{Q}'_{in,a} \int_{\vec{Q}_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}_a \int_{\vec{Q}'_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}'_a e^{i(S_Q[\vec{Q}_a]-S_Q[\vec{Q}'_a])} \rho_{Q_a}(\vec{Q}_{in,a},\vec{Q}'_{in,a} ;t_{in}) \mathcal{F}[J^{\mu-},J^{\nu+}] \end{eqnarray} To elucidate our approach we write (\ref{orIF1}) in a more suggestive form \begin{eqnarray} \label{orIF} \mathcal{F}_Z[\vec{z}^{+}_1,\vec{z}^{-}_1]= \mathcal{F}\left[\vec{z}^+_a, \vec{z}^-_a ; -i\frac{\delta}{\delta {\vec{j}_a}^+},-i\frac{\delta}{\delta {\vec{j}_a}^-} \right] \prod_{a} F_{a}[\vec{j}_a^+,\vec{j}_a^-] \bigg|_{{j_a}^\pm=0} \end{eqnarray} which defines the IF for a three dimensional harmonic oscillator, $F_{a}[\vec{j}_a^+,\vec{j}_a^-]$. To bring $ \mathcal{F}[J^{\mu-},J^{\nu+}]$ out of the path integrals in (\ref{orIF1}), $[Q^{k \pm}_a(\lambda)]^n$ is replaced with functional derivatives on the IF for the harmonic oscillators, $\left(-i\frac{\delta}{\delta {j_a}^{\mp}_k(\lambda)} \right)^n F_{a}[\vec{j}_a^+,\vec{j}_a^-] \bigg|_{{j_a}^{\pm}=0}$.
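The replacement invoked here is the standard source-term identity of generating functionals; schematically, with the source coupling $i\int d\lambda'\,[\vec{j}_a^{+}\cdot\vec{Q}_a^{-}+\vec{j}_a^{-}\cdot\vec{Q}_a^{+}]$,

```latex
\begin{equation*}
Q^{k\pm}_a(\lambda)\,
e^{\,i\int d\lambda'\,[\vec{j}_a^{+}\cdot\vec{Q}_a^{-}+\vec{j}_a^{-}\cdot\vec{Q}_a^{+}]}
=\left(-i\frac{\delta}{\delta j^{\mp}_{a,k}(\lambda)}\right)
e^{\,i\int d\lambda'\,[\vec{j}_a^{+}\cdot\vec{Q}_a^{-}+\vec{j}_a^{-}\cdot\vec{Q}_a^{+}]},
\end{equation*}
```

so every power of $Q^{k\pm}_a$ inside the path integral is generated by functional differentiation of $F_{a}[\vec{j}_a^+,\vec{j}_a^-]$, with the sources set to zero at the end.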
The explicit form for $F_{a}[\vec{j}_a^+,\vec{j}_a^-]$ is \begin{equation} \label{fo} F_{a}[\vec{j}_a^+,\vec{j}_a^-] =\int d{\vec{Q}_a } \int d{\vec{Q}_{in,a}} \ d{\vec{Q}'_{in,a}} \ \rho_{Q_a}(\vec{Q}_{in,a},\vec{Q}'_{in,a} ; t_{in} ) \int_{\vec{Q}_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}_a\int_{\vec{Q}'_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}'_a e^{iS_Q[\vec{Q}_a]-iS_Q[\vec{Q}'_a]+i\int d\lambda [\vec{j}_a^+\cdot\vec{Q}_a^-+\vec{j}_a^-\cdot\vec{Q}_a^+]} \end{equation} In the above, the dot product between current and oscillator coordinate, i.e. $\vec{j} \cdot \vec{Q}$, is taken with respect to a three dimensional Euclidean metric. For a Gaussian initial state, (\ref{fo}) can be evaluated exactly: \begin{eqnarray} \label{ } F_{a}[\vec{j}_a^+,\vec{j}_a^-] =\mathcal{N} \exp \bigg\{ i\int d\lambda d\lambda' [ \vec{j}_a^-(\lambda)\cdot\vec{j}_a^+(\lambda')g_{ret,a}(\lambda,\lambda')+\frac{i}{4}\vec{j}_a^-(\lambda) \cdot\vec{j}_a^-(\lambda')g_{H,a}(\lambda,\lambda')] \bigg\} \end{eqnarray} where $g_{ret,a}(\lambda,\lambda')$ and $g_{H,a}(\lambda,\lambda')$ (expressed below at $T=0$) are the retarded and Hadamard Green's functions for a one dimensional harmonic oscillator with natural frequency $\Omega_a$ and mass $\mu_a$, and $\mathcal{N}$ is a normalization constant. \begin{equation} \label{ } g_{ret,a}(\lambda,\lambda')=\frac{1}{\mu_a \Omega_a}\theta(\lambda-\lambda')\sin \Omega_a(\lambda-\lambda') \ \ \ \ \ g_{H,a}(\lambda,\lambda')=\frac{1}{\mu_a\Omega_a}\cos \Omega_a(\lambda-\lambda') \end{equation} \subsubsection{Decoherence and the Semi-Classical Limit} The complex norm of (\ref{orIF1}), at leading order in an expansion in $\vec{z}_1^-$, is \begin{eqnarray} \label{} | \mathcal{F}_Z[\vec{z}_1^-,\vec{z}_1^+] | = \exp\bigg\{-\int d\lambda d\lambda' \ z^{i-}_1(\lambda) N_{ij}(\lambda,\lambda') z^{j-}_1(\lambda') \bigg\}. \end{eqnarray} where $N_{ij}$ is a symmetric positive definite kernel.
Thus, we observe that the off-diagonal elements of the density matrix in (\ref{rhor}) are strongly suppressed for large values of $\vec{z}^-=\vec{z}-\vec{z}'$, as is indicative of decoherence of the quantum trajectory \cite{HPZ1}. Decoherence of the trajectory due to its interactions with the quantum fluctuations of the environment and the internal degrees of freedom of the atoms permits the existence of a semi-classical limit for the atom's path through space. Using a saddle-point approximation to evaluate (\ref{ordm}), one can show that the semi-classical dynamics is determined from the variation \begin{equation} \label{fd} \frac{\delta S_{CGEA}[\vec{z}^{+}_1,\vec{z}^{-}_1]}{\delta z^{k-}_1(\tau)}\bigg|_{z^{k-}_1=0}=0 \Longrightarrow M \ddot{z}_k(\tau)=f_k(\tau) \end{equation} where the so-called coarse-grained effective action is given by $S_{CGEA}[z_1^{i+},z_1^{i-}]=S_Z[\vec{z}_1]-S_Z[\vec{z}'_1]+S_{IF}[\vec{z}^+_1,\vec{z}^-_1]$, and $S_{IF}[\vec{z}^+_1,\vec{z}^-_1]=-i\ln \mathcal{F}_Z[\vec{z}_1^-,\vec{z}_1^+]$ defines the influence action. The force acting on the trajectory due to its interactions with the oscillators and field is given by \begin{equation} \label{inff} f_k(\tau)=\frac{\delta S_{IF}[\vec{z}^{+}_1,\vec{z}^{-}_1]}{\delta z^{k-}_1(\tau)}\bigg|_{z^{k-}_1=0}. \end{equation} For general atom motion this force contains all known effects, including the Lamb shift, radiation reaction, dissipation, and the atom-atom force. \section{Nonequilibrium Atom-Atom Force} The suppression of the reduced density matrix for off-diagonal elements justifies an expansion of (\ref{orIF}) for small values of ${\vec{z}_1}^-$. The linear order term yields the influence force and is represented by an infinite series in powers of the coupling. The local (spatially independent) terms in this expansion lead to the aforementioned Lamb shift, radiation reaction, and dissipation.
The atom-atom force can be obtained from this series by extracting the terms that depend upon the spatial separation of the atoms. To simplify the presentation we rewrite the influence functional for the atom's trajectory as $\mathcal{F}_Z= \left< e^{i S_{eff}} \right>_o$ where the form of $S_{eff}=-i \ln \mathcal{F}$ can be taken from (\ref{IF}) \begin{equation} \label{ } S_{eff}= \int d^4 x \int d^4 x' [ J^{\mu-}(x) D^{ret}_{\mu \nu}(x,x') J^{\nu+}(x') +\frac{i}{4} J^{\mu-}(x) D^{H}_{\mu \nu}(x,x') J^{\nu-}(x') ] \end{equation} and $\left< ...\right>_o$ is the expectation value with respect to both oscillators under the condition of no interactions. \begin{eqnarray} \label{ } \left< ...\right>_o= \prod_a \int d\vec{Q}_a d\vec{Q}_{in,a} d\vec{Q}'_{in,a} \ \rho_{Q_a}(Q_{in,a}, Q'_{in,a};t_{in}) \int_{\vec{Q}_{in,a}}^{\vec{Q}_a } \mathcal{D} \vec{Q}_a \int_{\vec{Q}'_{in,a}}^{\vec{Q}_a } \mathcal{D} \vec{Q}'_a e^{i(S_Q[\vec{Q}_a]-S_Q[\vec{Q}'_a])} (...) \end{eqnarray} $S_{eff}$ is a quadratic function of the current density (\ref{CD}), which depends on a sum of delta functions with support at each atom's position. Thus, one can see that the cross terms $S_{cross}$ between the two atoms' currents appearing in $S_{eff}$ will lead to terms that depend upon their spatial separation. \begin{equation} \label{CR} S_{cross}=\int d^4 x d^4 x' [ J_1^{\mu-}(x) D^{ret}_{\mu\nu}(x,x') J_2^{\nu+}(x')+J_2^{\mu-}(x) D^{ret}_{\mu\nu}(x,x') J_1^{\nu+}(x')+\frac{i}{2}J_1^{\mu-}(x) D^{H}_{\mu\nu}(x,x') J_2^{\nu-}(x') ] \end{equation} $J_a^\mu$ refers to the current density of the $a$th atom, with $a=1$ referring to the distinguished atom, upon which all the atom-atom forces we study here act. $\left< S_{cross} \right>_o$, the expectation value of $S_{cross}$, will vanish for initially uncorrelated and Gaussian oscillator states because it is linear in the coordinate of each oscillator.
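The vanishing of $\left< S_{cross} \right>_o$ can be made explicit: each term of (\ref{CR}) is bilinear in the two atoms' oscillator coordinates, so for initially uncorrelated states the expectation value factorizes,

```latex
\begin{equation*}
\left< Q^{i}_1(\lambda)\,Q^{j}_2(\lambda') \right>_o
=\left< Q^{i}_1(\lambda) \right>_o \left< Q^{j}_2(\lambda') \right>_o = 0 ,
\end{equation*}
```

since each Gaussian initial state considered here is centered at $\vec{Q}_a=0$.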
Therefore the leading order contribution to the atom-atom force will be proportional to the square of $S_{cross}$, at order $q_1^2q_2^2$. Expanding $\mathcal{F}_Z$ in powers of $S_{eff}$ we express the IF as \begin{eqnarray} \label{ } \mathcal{F}_Z = e^{iS_{IF}[z^\pm]} \approx 1+(\text{self-energy terms})-\frac{1}{2} \left< (S_{cross}[\vec{z}_a^\pm,\vec{Q}_a^\pm])^2\right>_o . \end{eqnarray} The leading order linear terms $\mathcal{O}(q^2_a)$ contain the back-action of the field on the motion of the atom itself only. We refer to them as ``self-energy terms'', borrowing terminology from particle physics; for a stationary atom these effects are unimportant. We focus on the quadratic term, which contains not only higher-order self-energy-type effects but also the leading order contribution to the atom-atom force contained in $S_{cross}$. Note that $S_{eff}$ contains a term linear in the retarded propagator for the field. It makes sense that the force will manifest as the square of (\ref{CR}) because, as described in the introduction, fluctuations in the dipole moment of one atom induce radiation that travels to the other, influences its dynamics, and induces the other atom to radiate. From a diagrammatic viewpoint this process requires two propagators. The leading order expression for the force can be derived from \begin{equation} \label{fAA} f_k(\tau)=\frac{\delta S_{IF}[\vec{z}_1^\pm]}{\delta z^{k-}_1}\bigg|_{z^{k-}=0}\approx -\frac{i}{2} \frac{\delta }{\delta z_1^{k-}(\tau)}\left< (S_{cross}[\vec{z}^\pm_a,\vec{Q}^\pm_a])^2\right>_o \bigg|_{z^{k-}=0} \end{equation} where the first equality holds for a stationary trajectory (radiation reaction and dissipation vanish).
Carrying out the variation in (\ref{fAA}) yields three contributions, the first two being \begin{equation} \label{fA} f^A_k(\tau)=\frac{1}{2}q_1^2 q_2^2 \int_{\lambda_i}^{\lambda_f} d\lambda \int_V d^4 x \int_V d^4 y \ g_{ret,1}(\tau, \lambda) E^{ij}_{ret}(z^\alpha(\lambda), y) G_H(x,y) \partial_k(x') E^{ret}_{ij}(x', x)|_{x'=z^\alpha(\tau)} \end{equation} \begin{equation} \label{fB} f^B_k(\tau)=\frac{1}{2}q_1^2 q_2^2 \int_{\lambda_i}^{\lambda_f} d\lambda \int_V d^4 x \int_V d^4 y \ g_{H,1}(\tau, \lambda) E^{ij}_{ret}(z^\alpha(\lambda), y) G_{ret}(x,y) \partial_k(x') E^{ret}_{ij}(x', x)|_{x'=z^\alpha(\tau)} \end{equation} where $E_{ij}(x,x')$ is the dyadic electric field Green's function, $E_{ij}(x,x')=\text{Tr} \{ \hat{\rho}_E \hat{E}_i(x) \hat{E}_j(x') \}$, which arises in our formalism through a contraction of the operator $\kappa^i_\mu$ with the photon field. Note also that after all functional derivatives are taken, $z_1 \rightarrow z$. $G_{ret}(x,x')$ and $G_{H}(x,x')$ are meant to generally represent the retarded and Hadamard functions for whatever the atom interacts with. For example, to find the surface-atom force the integration volume $V$ is taken to be the half space, and the Green's functions describing the physics of the media occupying that region are used. For the specific case of an atom located at $\vec{z}_2$, $G(x,x')\propto g_2(t,t') \delta^3(\vec{x}-\vec{z}_2)\delta^3(\vec{x}'-\vec{z}_2)$. More general cases will be considered in a future paper. The form of (\ref{fA}) and (\ref{fB}) can be explained by appealing to the heuristic description of the force given in the introduction. $f^A$ and $f^B$ arise from the \textit{intrinsic fluctuations} in the dipole moments of the atoms. This can be seen by noting that they contain the atom's Hadamard function, i.e. the symmetric two-point function for the oscillator degree of freedom.
The two retarded electric field Green's functions account for the transfer of information between the two atoms, and the retarded Green's function for the atom characterizes its response to an external field (see Fig.~\ref{fafb}). \begin{figure} \begin{center} \mbox{\subfigure{\includegraphics[width=3in]{fafb.pdf}}\quad \subfigure{\includegraphics[width=3in]{fc.pdf} } } \end{center} \caption{ The illustrations depict the physical origin of the intrinsic fluctuation and induced dipole forces. \textit{On the left}, intrinsic dipole fluctuations (represented by the shaded oval) 1. radiate information about their motion, and 2. this radiation induces a correlated dipole moment in the second atom (the solid black arrow denotes an induced dipole moment). The induced motion at $t_2$ leads to radiation that travels back to the fluctuating atom. At $t_3$ the radiation produced at step 2 will produce a local electric field near the fluctuating atom which carries information about its own fluctuations in the past. The illustration \textit{on the right} depicts the physical origin of the second component of the force, arising from field fluctuations and their spatial correlation. Step 1 shows how a field fluctuation induces correlated dipole moments in both atoms. The induced motion of the dipole moments will lead to radiation emitted from both atoms containing information about their motion (only left-moving radiation is included). At $t_2$ the radiation generated by the induced motion produces a local electric field around each atom that is correlated with its motion. } \label{fafb} \end{figure} The third component of the force, $f^C$, arises from \textit{induced fluctuations} of the atoms' dipole moments. The retarded Green's functions for the two oscillators, $g_{ret,a}$, characterize their response to a given field fluctuation.
The $k$th component of the induced dipole moment of the $a$th atom can be written as $d^k_{ind,a}= q_a \int d\lambda' g_{ret,a}(\lambda, \lambda') E^k[z^\alpha_a(\lambda')]$ where $E^k[z^\alpha_a(\lambda')]$ is the $k$th component of the electric field at the position of the atom. The symmetric two-point function of the induced dipole moment quantifies its fluctuations, $ \left< \{ d^j_{ind,a}(t), d^k_{ind,b}(t') \} \right>= q_a q_b \int d\lambda d\lambda' g_{ret,a}(t, \lambda) g_{ret,b}(t', \lambda') E^{jk}_H(z^\alpha_a(\lambda),z^\alpha_b(\lambda'))$. The remaining electric field propagator, $E^{ret}_{ij}$, carries information about the motion of one atom to the other and accounts for the form of $f^C$ (see Fig.~\ref{fafb}): \begin{eqnarray} \label{vdWC} f^C_k(\tau)= \frac{1}{2} q_1^2 q_2^2 \int_{\lambda_i}^{\lambda_f} d\lambda \int_V d^4 x \int_V d^4 y \ g_{ret,1}(\tau, \lambda)[ \partial_k(x') E^{ij}_{ret}(x', x) G_{ret}(x,y) E^{H}_{ij}(z^\alpha(\lambda), y) |_{x'=z^\alpha(\tau)} \nonumber \\ + E^{ij}_{ret}(z^\alpha(\tau), x) G_{ret}(x,y) \partial_k(x') E^{H}_{ij}(x', y) |_{x'=z^\alpha(\lambda)} ] \end{eqnarray} where $\partial_k(x)$ denotes differentiation with respect to $x^{k}$. The previous form of the force is valid for any atomic motion. However, a self-consistent treatment would require that the aforementioned `self-energy' terms be included in order to account for the back-action of the field on the atom itself. \subsection{Induced Dipole Force} In this section we calculate the induced dipole force explicitly by plugging in the retarded Green's function for the second oscillator, $g_{ret,2}$, and choosing $\vec{z}_2$ to be the origin of coordinates.
\begin{eqnarray} \label{ } f^C_k(\tau)= \frac{1}{2} q_1^2 q_2^2 \int_{\lambda_i}^{\lambda_f} d\lambda \int d t \int dt' \ g_{ret,1}(\tau, \lambda) g_{ret,2}(t,t') [ E^{ij}_{ret}(z^\alpha(\tau), t',\vec{0}) \partial_k(x') E^{H}_{ij}(x', t, \vec{0}) |_{x'=z^\alpha(\lambda)} \nonumber \\ + E^{ij}_{H}(z^\alpha(\lambda), t',\vec{0}) \partial_k(x') E^{ret}_{ij}(x', t, \vec{0}) |_{x'=z^\alpha(\tau)} ] \end{eqnarray} The derivatives operating on the various Green's functions can be simplified by employing the equation of motion for the field, the resultant form valid for general atom motion follows. \begin{eqnarray} \label{IDF} f^C_k= \frac{q_1^2q_2^2}{4\pi} \int d\lambda \int dt \int d t' g_{ret,1}(\tau,\lambda) g_{ret,2}(t,t') \theta(\tau,t') \bigg\{ \delta^{'''}[\sigma(z^\alpha(\tau);t', \vec{0})] \bigg[ \frac{1}{2} D_H^{''}[\sigma(z^\alpha(\lambda);t, \vec{0})] \nonumber \\ \times \sigma(z^\alpha(\tau);t', \vec{0})_k \bigg( \sigma(z^\alpha(\lambda);t, \vec{0})_j \ \sigma(z^\alpha(\lambda);t, \vec{0})^j \ \sigma(z^\alpha(\tau);t', \vec{0})_i \ \sigma(z^\alpha(\tau);t', \vec{0})^i + [ \sigma(z^\alpha(\lambda);t, \vec{0})_i \ \sigma(z^\alpha(\tau);t', \vec{0}) )^i ]^2 \bigg) \nonumber \\ + \delta^{''}[\sigma(z^\alpha(\tau);t', \vec{0})] \bigg[ \frac{1}{2} D_H^{'''}[\sigma(z^\alpha(\lambda);t, \vec{0})] \nonumber \\ \times \sigma(z^\alpha(\lambda);t, \vec{0})_k \bigg( \sigma(z^\alpha(\lambda);t, \vec{0})_j \ \sigma(z^\alpha(\lambda);t, \vec{0})^j \ \sigma(z^\alpha(\tau);t', \vec{0})_i \ \sigma(z^\alpha(\tau);t', \vec{0})^i + [ \sigma(z^\alpha(\lambda);t, \vec{0})_i \ \sigma(z^\alpha(\tau);t', \vec{0}) )^i ]^2 \bigg) \nonumber \\ + D_H^{''}[\sigma(z^\alpha(\lambda);t, \vec{0})] \bigg( 6 \sigma(z^\alpha(\tau);t', \vec{0})_i \ \sigma(z^\alpha(\tau);t', \vec{0})^i \sigma(z^\alpha(\lambda);t, \vec{0})_k + 2 \sigma(z^\alpha(\lambda);t, \vec{0})_i \ \sigma(z^\alpha(\tau);t', \vec{0}) )^i \ \sigma(z^\alpha(\tau);t', \vec{0})_k \bigg) \bigg] \nonumber \\ + 
\delta^{'}[\sigma(z^\alpha(\tau);t', \vec{0})] \bigg[ 4 \ D_H^{'''}[\sigma(z^\alpha(\lambda);t, \vec{0})] \sigma(z^\alpha(\tau);t', \vec{0})_i \ \sigma(z^\alpha(\tau);t', \vec{0})^i \ \sigma(z^\alpha(\lambda);t, \vec{0})_k \nonumber \\ + 20 \ D_H^{''}[\sigma(z^\alpha(\lambda);t, \vec{0})] \sigma(z^\alpha(\lambda);t, \vec{0})_k \bigg] \bigg\} \ \ \ \end{eqnarray} Here, primes on functions denote derivatives with respect to $\sigma$, and $\sigma_k=\partial_k \sigma$ denotes differentiation of $\sigma$ with respect to $x^k$. We can separate (\ref{IDF}) into four terms with differing numbers of $\sigma$-derivatives and specify a static trajectory for the distinguished atom to bring the derivatives outside of the integral, i.e., $d/d\sigma=z^{-1}d/dz$. To distinguish which Green's function a given $\sigma$-derivative acts on, we attach a dummy subscript to $z$ that should not be confused with an atom label. Once all derivatives are taken, $z_1$ and $z_2$ are set to $z$, the separation between the two atoms. The $t$-integral can be evaluated by substituting $ \delta(\sigma(x,x'))=\frac{\delta(t'-t+|\vec{x}-\vec{x'}|)}{|\vec{x}-\vec{x'}|}$.
\begin{eqnarray} \label{ } f^{C1}_z(\tau)=-\frac{q_1^2q_2^2}{4\pi} z \ \frac{1}{2} \left[ \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^3 \left( \frac{1}{z_1}\frac{d}{dz_1} \right)^2 +\left( \frac{1}{z_2}\frac{d}{dz_2} \right)^2 \left( \frac{1}{z_1}\frac{d}{dz_1} \right)^3 \right] \nonumber \\ \frac{1}{z_1} \int d\lambda \int dt \ g_{ret,1}(\tau,\lambda) g_{ret,2}(t,\tau-z_1) D_H [\sigma(z_2^\alpha(\lambda),t, \vec{0})] \bigg[ 2 z^4 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} \begin{eqnarray} \label{ } f^{C2}_z (\tau)=-\frac{q_1^2q_2^2}{4\pi} z \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^2 \left( \frac{1}{z_1}\frac{d}{dz_1} \right)^2 \frac{1}{z_1} \int d\lambda \int dt \ g_{ret,1}(\tau,\lambda) g_{ret,2}(t,\tau-z_1) D_H [\sigma(z_2^\alpha(\lambda),t, \vec{0})] \bigg[ 8 z^2 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} \begin{eqnarray} \label{ } f^{C3}_z(\tau)=-\frac{q_1^2q_2^2}{4\pi} z \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^3 \left( \frac{1}{z_1}\frac{d}{dz_1} \right) \frac{1}{z_1} \int d\lambda \int dt \ g_{ret,1}(\tau,\lambda) g_{ret,2}(t, \tau-z_1) D_H [\sigma(z_2^\alpha(\lambda),t, \vec{0})] \bigg[ 4 z^2 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} \begin{eqnarray} \label{ } f^{C4}_z(\tau)=-\frac{q_1^2q_2^2}{4\pi} z \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^2 \left( \frac{1}{z_1}\frac{d}{dz_1} \right) \frac{1}{z_1} \int d\lambda \int dt \ g_{ret,1}(\tau,\lambda) g_{ret,2}(t, \tau-z_1 ) D_H [\sigma(z_2^\alpha(\lambda),t, \vec{0})] \bigg[ 20 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} We can express the Green's function for the field through a mode sum and subsequently evaluate the $\lambda$ and $t$ integrals in the long time limit. The exact field-influenced dynamics of the oscillators will be dissipative, however this dissipative effect does not appear at this order in perturbation theory, but can be modeled phenomenologically by inclusion of an infinitesimal dissipation in the oscillator equation of motion i.e. 
$g_{ret,a}(t-t') \rightarrow g_{ret,a}(t-t')e^{-\epsilon (t-t')}$. At finite temperature the Hadamard function for the field can be obtained through periodicity in imaginary time, or by taking the trace of symmetrized field operators with respect to a thermal density matrix. Expressing the result as a mode sum, we find: \begin{eqnarray} \label{ } f^{C1}_z=-\frac{2}{\pi} z \ \frac{1}{2} \left[ \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^3 \left( \frac{1}{z_1}\frac{d}{dz_1} \right)^2 +\left( \frac{1}{z_2}\frac{d}{dz_2} \right)^2 \left( \frac{1}{z_1}\frac{d}{dz_1} \right)^3 \right] \nonumber \\ \frac{1}{z_1 z_2} \int_0^{\infty} d\omega \ \alpha_1(\omega) \alpha_2(\omega) \coth (\beta \omega/2) \sin \omega z_2 \cos \omega z_1 \bigg[ 2 z^4 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} \begin{eqnarray} \label{ } f^{C2}_z=-\frac{2}{\pi} z \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^2 \left( \frac{1}{z_1}\frac{d}{dz_1} \right)^2 \frac{1}{z_1 z_2} \int_0^{\infty} d\omega \ \alpha_1(\omega) \alpha_2(\omega) \coth (\beta \omega/2) \sin \omega z_2 \cos \omega z_1 \bigg[ 8 z^2 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} \begin{eqnarray} \label{ } f^{C3}_z=-\frac{2}{\pi} z \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^3 \left( \frac{1}{z_1}\frac{d}{dz_1} \right) \frac{1}{z_1 z_2} \int_0^{\infty} d\omega \ \alpha_1(\omega) \alpha_2(\omega) \coth (\beta \omega/2) \sin \omega z_2 \cos \omega z_1 \bigg[ 4z^2 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} \begin{eqnarray} \label{ } f^{C4}_z=-\frac{2}{\pi} z \left( \frac{1}{z_2}\frac{d}{dz_2} \right)^2 \left( \frac{1}{z_1}\frac{d}{dz_1} \right) \frac{1}{z_1 z_2} \int_0^{\infty} d\omega \ \alpha_1(\omega) \alpha_2(\omega) \coth (\beta \omega/2) \sin \omega z_2 \cos \omega z_1 \bigg[ 20 \bigg] \bigg|_{z_1=z_2=z} \end{eqnarray} As the retarded Green's functions for the atoms characterize the response of their dipole moments to an external field, they play the role of the dynamic polarizability, $\alpha$.
The result above is expressed in terms of the frequency-dependent form, $\alpha_a(\omega)= q_a^2(4\pi \mu_a(\Omega_a^2-\omega^2))^{-1}$, which can be derived from the classical equations of motion (the aforementioned infinitesimal dissipative term kills an imaginary part in the infinite time limit); $\beta$ is the field's inverse temperature. All the derivatives can be taken and the force can be expressed as an integral over frequency. \begin{eqnarray} \label{ } f^{C}_z =-\frac{2}{\pi z^7} \int_{0}^{\infty} d\omega \ \alpha_1(\omega) \alpha_2(\omega) \coth (\beta \omega/2) \bigg[ \omega z ( 18-8z^2 \omega^2 +z^4 \omega^4) \cos 2\omega z \nonumber \\ + (-9 +16 \omega^2 z^2 -3 \omega^4 z^4) \sin 2 \omega z \bigg] \end{eqnarray} This expression agrees with what can be found in the literature for the CP force in a finite temperature field \cite{Passante}. In the far field at zero temperature we recover the well-known form \cite{CPP95} \begin{equation} \label{ } f^C_z\approx - \frac{161}{4\pi} \alpha_1(0) \alpha_2(0) \frac{1}{z^8} \end{equation} where a UV regulator must be employed to render the frequency integrals finite. If, however, we take the dissipation to be zero (as it truly is in our perturbative approach), the force is altered because the polarizability acquires an imaginary part, $\alpha(\omega) \rightarrow (q^2/4\pi \mu \Omega)[\Omega/(\Omega^2-\omega^2)+ i (\pi/2) \delta(\omega-\Omega)- i (\pi/2) \delta(\omega+\Omega)]$. The imaginary term plays an important role when the quantum nature of the dipole moment of the oscillator is accounted for. When such a term is neglected, the contribution to the atom-atom force from $f^A$ and $f^B$ dominates in the far field as $1/z^3$ rather than $1/z^8$ at $T=0$. We denote the contribution of this imaginary term to the force by $\delta f^C_z$.
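As a consistency check on the quoted coefficient, note that in natural units the zero-temperature far-field force follows from differentiating the standard Casimir-Polder potential $U(z)=-23\,\alpha_1\alpha_2/(4\pi z^7)$. A minimal sympy sketch, assuming that form of $U$:

```python
import sympy as sp

z, a1, a2 = sp.symbols("z alpha_1 alpha_2", positive=True)

# standard zero-temperature Casimir-Polder potential (natural units)
U = -23 * a1 * a2 / (4 * sp.pi * z**7)

# force along z: f_z = -dU/dz
f = -sp.diff(U, z)

# reproduces the -161/(4 pi) * alpha_1 alpha_2 / z^8 form quoted above
assert sp.simplify(f + 161 * a1 * a2 / (4 * sp.pi * z**8)) == 0
```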
\begin{eqnarray} \label{ } \delta f^{C}_z =-\frac{q_1^2 q_2^2}{16 \pi^2 \mu_1 \mu_2 \ \Omega_1 \Omega_2 z^7} \int_{0}^{\infty} d\omega \ \coth (\beta \omega/2) \bigg[ \frac{\Omega_1 \delta(\omega-\Omega_2)}{\Omega_1^2-\Omega_2^2} + ( \Omega_1 \leftrightarrow \Omega_2) \bigg] \nonumber \\ \times \bigg[ -9-2 z^2\omega^2 -z^4 \omega^4 +(9-16 z^2\omega^2 +3 z^4 \omega^4)\cos (2\omega z) + z\omega (18-8 z^2\omega^2 + z^4 \omega^4)\sin (2\omega z) \bigg] \end{eqnarray} As $z \rightarrow \infty$ this term possesses the asymptotic scaling \begin{eqnarray} \label{ } \delta f^{C}_z \approx \frac{q_1^2 q_2^2}{16 \pi^2 \mu_1 \mu_2 \Omega_1 \Omega_2 z^3} \int_{0}^{\infty} d\omega \ \omega^4 \coth (\beta \omega/2) \bigg[ \frac{\Omega_1 \delta(\omega-\Omega_2)}{\Omega_1^2-\Omega_2^2} + (\Omega_1 \leftrightarrow \Omega_2) \bigg] \end{eqnarray} but at $T=0$, for $z \rightarrow 0$, $\delta f^C$ is subleading to the dominant $1/z^7$ near-field scaling from the London term. \subsection{Intrinsic Fluctuation Force} London's treatment of the atom-atom force can be reproduced by computing the interaction energy of two atoms interacting via the Coulomb potential. The force follows from the negative gradient of the perturbed energy eigenvalues. We obtain an analogous expression for the London force in our formulation, but with an additional contribution from retardation effects since we treat the field relativistically. The contributions to the force from $f^A_z$ and $f^B_z$ can be computed in the same way as $f^C_z$, so we omit the details of that calculation here and only state the result in the long-time, zero-temperature limit. \begin{equation} \label{FA} f^{A}_z=-\frac{q_1^2 q_2^2}{16 \pi^2 \mu_1 \mu_2 \Omega_1 \Omega_2} \frac{\Omega_1}{\Omega_1^2-\Omega_2^2} [ 9+2 \Omega_2^2 z^2 +\Omega^4_2 z^4] \frac{1}{z^7} \end{equation} $f^B_z$ can be obtained from $f^A_z$ by exchanging $\Omega_1$ and $\Omega_2$.
These terms are responsible for the near-field behavior and agree with those derived by London when retardation corrections to the field Green's function are neglected \cite{Lon}. \begin{equation} \label{ } f^A_z+f^B_z \approx f^{Lon}_z= -\frac{9 q_1^2 q_2^2}{ 16 \pi^2 \mu_1 \mu_2 \Omega_1 \Omega_2} \frac{1}{\Omega_1+\Omega_2} \frac{1}{z^7} \end{equation} The thermal version of the previous result does not make sense for a single oscillator, for which temperature is an ill-defined quantity, but it does for a gas of atoms. If the gas is sufficiently dilute, the force between two collections of trapped atoms can be approximated using the density distribution of the gas and $f_z$ \cite{APSS08}. The finite-temperature form follows, where $\beta_a$ is the inverse temperature of the $a$th oscillator (or trapped gas). \begin{equation} \label{} f^{A}_z=-\frac{q_1^2 q_2^2}{16 \pi^2 \mu_1 \mu_2 \Omega_1 \Omega_2} \frac{\Omega_1}{\Omega_1^2-\Omega_2^2} [ 9+2 \Omega_2^2 z^2 +\Omega^4_2 z^4] \coth( \beta_2 \Omega_2/2) \frac{1}{z^7} \end{equation} In the far field the leading-order behavior reduces to the following form. \begin{equation} \label{FAfar} f^{A}_z \approx -\frac{q_1^2 q_2^2}{16 \pi^2 \mu_1 \mu_2} \frac{\Omega^3_2 }{\Omega_1^2-\Omega_2^2} \coth( \beta_2 \Omega_2/2) \frac{1}{z^3} \end{equation} Note that when the field and the atoms are in thermal equilibrium this new asymptotic scaling cancels against an equal and opposite contribution contained in $\delta f^C_z$, and the standard far-field scaling $1/z^8$ is restored.
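The collapse of $f^A_z+f^B_z$ into the single $1/(\Omega_1+\Omega_2)$ London denominator is just a partial-fraction identity on the frequency prefactors, keeping only the distance-independent ``9'' pieces of each term. A quick symbolic check:

```python
import sympy as sp

W1, W2 = sp.symbols("Omega_1 Omega_2", positive=True)

# frequency factors of the z -> 0 pieces of f^A and f^B (the "+9" terms)
fA = W1 / (W1**2 - W2**2)
fB = W2 / (W2**2 - W1**2)   # f^B is f^A with Omega_1 <-> Omega_2

# their sum collapses to the London combination 1/(Omega_1 + Omega_2)
assert sp.simplify(fA + fB - 1 / (W1 + W2)) == 0
```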
When the atoms and field are out of thermal equilibrium this cancellation no longer occurs and the dominant contribution to the force scales like $1/z^3$; the zero-temperature contributions cancel, as indicated below: \begin{equation} \label{NS} f_z \approx -\frac{q_1^2 q_2^2}{8 \pi^2 \mu_1 \mu_2 } \frac{\Omega^3_2 }{\Omega_1^2-\Omega_2^2} \bigg[ \frac{1}{e^{\beta_2 \Omega_2}-1}-\frac{1}{e^{\beta \Omega_2}-1}\bigg] \frac{1}{z^3} +(\Omega_1 \leftrightarrow \Omega_2) \end{equation} \section{Entanglement Force} The previous derivation of the atom-atom force assumes that the initial state of the two oscillators is uncorrelated. If, however, the two atoms are initially entangled, then a new contribution to the atom-atom force arises. To our knowledge this force has not been reported in the literature. We begin by computing the oscillator-reduced IF for two initially entangled atoms. \begin{eqnarray} \label{orIF1E} \mathcal{F}_Z[\vec{z}^-_1,\vec{z}^+_1] = \prod_a \left[ \int d\vec{Q}_a d\vec{Q}_{in,a} d\vec{Q}'_{in,a} \int_{\vec{Q}_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}_a \int_{\vec{Q}'_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}'_a e^{i(S_Q[\vec{Q}_a]-S_Q[\vec{Q}'_a])} \right] \nonumber \\ \times \rho_Q(\vec{Q}_{in,1},\vec{Q}'_{in,1} ,\vec{Q}_{in,2},\vec{Q}'_{in,2} ;t_{in}) \mathcal{F}[J^{\mu-},J^{\nu+}] \end{eqnarray} Writing (\ref{orIF1E}) in a more suggestive form, \begin{eqnarray} \label{orIFE} \mathcal{F}_Z[\vec{z}^{+}_1,\vec{z}^{-}_1]= \mathcal{F}\left[\vec{z}^+_a, \vec{z}^-_a ; -i\frac{\delta}{\delta {\vec{j}_a}^+},-i\frac{\delta}{\delta {\vec{j}_a}^-} \right] F [\vec{j}_1^+,\vec{j}_1^-,\vec{j}_2^+,\vec{j}_2^-] \bigg|_{{j_a}^\pm=0} \end{eqnarray} defines the IF for two entangled harmonic oscillators, $F [\vec{j}_1^+,\vec{j}_1^-,\vec{j}_2^+,\vec{j}_2^-] $. To bring $\mathcal{F}[J^{\mu-},J^{\nu+}]$ out of the path integrals in (\ref{orIF1E}) we replace the oscillator coordinates with functional derivatives as before.
\begin{eqnarray} \label{foE} F [\vec{j}_1^+,\vec{j}_1^-,\vec{j}_2^+,\vec{j}_2^-] = \prod_a \int d{\vec{Q}_a } \int d{\vec{Q}_{in,a}} d{\vec{Q}'_{in,a}} \ \rho_{Q}(\vec{Q}_{in,1},\vec{Q}'_{in,1} ,\vec{Q}_{in,2},\vec{Q}'_{in,2} ; t_{in} ) \nonumber \\ \int_{\vec{Q}_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}_a\int_{\vec{Q}'_{in,a}}^{\vec{Q}_a} \mathcal{D}\vec{Q}'_a e^{iS_Q[\vec{Q}_a]-iS_Q[\vec{Q}'_a]+i\int d\lambda [\vec{j}_a^+\cdot\vec{Q}_a^-+\vec{j}_a^-\cdot\vec{Q}_a^+]} \end{eqnarray} For the initially entangled squeezed Gaussian state \begin{eqnarray} \label{ } \rho_Q(\vec{Q}_{in,1},\vec{Q}'_{in,1} ,\vec{Q}_{in,2},\vec{Q}'_{in,2} ;t_{in})= \left( \frac{\beta}{\pi \alpha} \right)^6 \exp \bigg\{ -\frac{1}{4} \bigg[ \beta^2 \left( (\vec{Q}_{in,1}+\vec{Q}_{in,2})^2 + (\vec{Q}'_{in,1}+\vec{Q}'_{in,2})^2 \right) \nonumber \\ + \frac{1}{\alpha^2} \left( (\vec{Q}_{in,1}-\vec{Q}_{in,2})^2 + (\vec{Q}'_{in,1}-\vec{Q}'_{in,2})^2 \right) \bigg] \bigg\} \end{eqnarray} (\ref{foE}) can be evaluated exactly. For this case, like components of the oscillator coordinates are entangled with equal magnitude in each direction, i.e., the parameters $\alpha$ and $\beta$ are common to each component. The influence functional for two entangled oscillators follows \begin{eqnarray} \label{ } F [\vec{j}_1^+,\vec{j}_1^-,\vec{j}_2^+,\vec{j}_2^-] = \tilde{\mathcal{N} } \exp \bigg\{ \frac{1}{8} \bigg( i B_{\Delta} g_\Delta+ i B_{\Sigma} g_\Sigma-\alpha^2 B_\Delta^2-\frac{1}{\beta^2} B_\Sigma^2-\frac{1}{\alpha^2} g_\Delta^2-\beta^2 g_\Sigma^2 +i \varphi \bigg) \bigg\} \end{eqnarray} where $\tilde{\mathcal{N}}$ is a normalization constant and $B_a$, $g_a$, and $\varphi$ are defined below.
\begin{equation} \label{ } B_a= \int dt \ J^-_a(t) [\cot \Omega T \sin \Omega(t-t_i)+\csc \Omega T \sin \Omega(t_f-t) ] \end{equation} \begin{equation} \label{ } g_a=\frac{1}{\mu \Omega} \int dt \ J^-_a(t) \sin \Omega(t-t_i) \end{equation} \begin{equation} \label{ } \varphi=\sum_a \bigg[ \frac{1}{\sin \Omega T} \int dt \ g_a J'_a(t) \sin \Omega(t_f-t) -\frac{1}{2} \mu \Omega g_a^2 \cot \Omega T +\frac{i}{2} \int dt dt' [J_a(t) J_a(t') g^a_F(t,t')+J'_a(t) J'_a(t') g^a_D(t,t') ] \bigg] \end{equation} The subscript $\Sigma$ denotes $C_\Sigma=C_1+C_2$, and the subscript $\Delta$ denotes $C_\Delta=C_1-C_2$. The entanglement force comes from the leading order contribution to $\mathcal{F}_Z$ that depends upon the spatial separation between the atoms. Previously we needed to consider the square of $S_{cross}$. However, when the two atoms are entangled there exists nonvanishing cross correlation between their coordinates such that $\left< S_{cross} \right>_o \neq 0$. So in distinction to the previous section we have \begin{eqnarray} \label{ } \mathcal{F}_Z = e^{iS_{IF}[z^\pm]} \approx 1+(\text{self energy terms})+i \left< S_{cross}[\vec{z}_a^\pm,\vec{Q}_a^\pm]\right>_o +\mathcal{O}({z^-}^2) \end{eqnarray} where the force can be derived from \begin{equation} \label{EF} f^{ent}_k(\tau) \approx -\frac{\delta }{\delta z_1^{k-}(\tau)}\left< S_{cross}[\vec{z}^\pm_a,\vec{Q}^\pm_a] \right>_o \bigg|_{z^{k-}=0}. 
\end{equation} Expanding $S_{cross}$ for small $z^{k-}_1$ we arrive at \begin{eqnarray} \label{ } S_{cross} \approx S_o+ q_1 q_2 \int d\lambda d\lambda' z_1^{k-}(\lambda) [ \partial_k \kappa_i^\mu \kappa^{\nu'}_j D^{ret}_{\mu \nu}( z^{\alpha+}_1(\lambda), z^{\alpha}_2(\lambda')) Q^{i+}_1(\lambda) Q^{j+}_2(\lambda') \nonumber \\ + \frac{1}{4} \partial_{k'} \kappa_i^{\mu'} \kappa^{\nu}_j D^{ret}_{\mu \nu}( z^{\alpha}_2(\lambda), z^{\alpha +}_1(\lambda')) Q^{i-}_1(\lambda) Q^{j-}_2(\lambda') + \frac{i}{2} \partial_{k} \kappa_i^{\mu} \kappa^{\nu'}_j D^{H}_{\mu \nu}( z^{\alpha +}_1(\lambda), z^{\alpha}_2(\lambda')) Q^{i+}_1(\lambda) Q^{j-}_2(\lambda') ] + \mathcal{O}({z^-}^2) \end{eqnarray} where a prime in the index of a derivative operator means differentiation with respect to the second argument. Only one term survives after we take the expectation value, that which contains the cross correlator $\left< Q^{i+}_1(\lambda) Q^{j+}_2(\lambda') \right>_o$ which equals \begin{equation} \label{ } \left< Q^{i+}_1(t) Q^{j+}_2(t') \right>_o= \delta^{ij} \ g_{ent}(t,t') = \frac{1}{4}\bigg(\frac{1}{\alpha^2}-\beta^2\bigg)\ \delta^{ij} \ \bigg[ \frac{1}{(\mu\Omega)^2} \sin \Omega(t-t_i) \sin \Omega(t'-t_i)- \frac{\alpha^2}{4 \beta^2} \csc^2 \Omega T S(t) S(t') \bigg] \end{equation} where $T=t_f-t_i$ and $S(t)= \sin \Omega(t-t_f)-\sin \Omega(t-t_i+T)$. After taking the expectation value and then using (\ref{EF}) we obtain the entanglement force. \begin{equation} \label{ } f^{ent}_k(\tau)= - q_1 q_2 \int d\lambda \ g_{ent}(\tau,\lambda) \partial_k \kappa_i^\mu \kappa^{i \nu'} D^{ret}_{\mu \nu}( z^{\alpha}_1(\tau), z^{\alpha}_2(\lambda)) \end{equation} All derivatives can be taken on the field's retarded Green's function and simplified using the equation of motion. 
We then specify the trajectory to be static to arrive at \begin{equation} \label{ } \partial_k \kappa^\mu_i \kappa^{i \nu} D^{ret}_{\mu \nu}(\sigma)= \delta_{k z} \left[ z^3 \left( \frac{d}{d \sigma} \right)^3 + 5 z \left( \frac{d}{d \sigma} \right)^2 \right] D_{ret}(\sigma). \end{equation} With a static trajectory specified, the $\sigma$-derivatives can be expressed in terms of $z$-derivatives and factored out of the integral. The retarded Green's function can be expressed as a delta function, $D_{ret}(\sigma(z_1^\alpha(\tau), z_2^\alpha(\lambda)))=\delta(\tau-\lambda-z)/(4\pi z)$. We work in a coordinate system centered on the $a=2$ atom (which is also the origin of the $xy$-plane), with the $z$ axis along the ray connecting the two atoms a distance $z$ apart (pointing from atom 2 to atom 1); the distinguished atom ($a=1$) is thus located at $(0, 0, z)$. This leads to the explicit expression for the entanglement force. \begin{equation} \label{ } f^{ent}_z(\tau)= -\frac{q_1 q_2}{4 \pi} \left[ z^3 \left( \frac{1}{z} \frac{d}{d z} \right)^3 + 5 z \left( \frac{1}{z} \frac{d}{d z} \right)^2 \right] \frac{1}{z} g_{ent}(\tau,\tau-z) \end{equation} In the infinite-time limit $\tau \rightarrow t_f \rightarrow \infty$, $g_{ent}(\tau,\tau-z) \sim \frac{1}{8}( \beta^2-1/\alpha^2)(\alpha^2/\beta^2- 1/(\mu \Omega)^2) \cos \Omega z$. The force vanishes in the far field but has a well-defined near-field limit, i.e., $\Omega z \rightarrow 0$. \begin{equation} \label{EFf} f^{ent}_z \sim - \frac{q_1 q_2}{32 \pi} \left( \beta^2-\frac{1}{\alpha^2} \right)\left(\frac{\alpha^2}{\beta^2}- \frac{1}{(\mu \Omega)^2} \right) \frac{\Omega^2}{z^2} \end{equation} This effect is due not only to entanglement between the two atoms but also to retardation. For the case considered above, where the degree of entanglement between like components of the atoms' dipole moments has the same magnitude, the interaction energy as described through the Coulomb potential vanishes.
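The near-field limit can be checked mechanically: apply the bracketed operator to $\cos(\Omega z)/z$ (the long-time form of $(1/z)\,g_{ent}$ with its constant prefactor stripped) and expand for small $\Omega z$; the would-be Coulomb-like $1/z^4$ piece cancels identically and the leading term is $\Omega^2/z^2$. A sympy sketch of this check:

```python
import sympy as sp

z, W = sp.symbols("z Omega", positive=True)

def D(expr):
    # the operator (1/z) d/dz from the static-trajectory substitution
    return sp.diff(expr, z) / z

h = sp.cos(W * z) / z                       # cos(Omega z)/z, prefactors stripped
expr = z**3 * D(D(D(h))) + 5 * z * D(D(h))

# the 1/z^4 (static) piece cancels; the leading near-field term is Omega^2/z^2
lead = sp.limit(z**2 * expr, z, 0)
assert sp.simplify(lead - W**2) == 0
```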
Thus, only through the inclusion of relativistic effects does any force manifest. However, the previous discussion can easily be generalized to the case where the magnitude of the parameters $\alpha$ and $\beta$ is not common to all directions. As such the initial state for the oscillators takes the generalized form \begin{eqnarray} \label{ } \rho_Q(\vec{Q}_{in,1},\vec{Q}'_{in,1} ,\vec{Q}_{in,2},\vec{Q}'_{in,2} ;t_{in})= \prod_j \left( \frac{\beta_j}{\pi \alpha_j} \right)^2 \exp \bigg\{ -\frac{1}{4} \bigg[ \beta_j^2 \left( (Q^j_{in,1}+Q^j_{in,2})^2 + (Q^{j'}_{in,1}+ Q^{j'}_{in,2})^2 \right) \nonumber \\ + \frac{1}{\alpha_j^2} \left( (Q^j_{in,1}-Q^j_{in,2})^2 + (Q^{j'}_{in,1} -Q^{j'}_{in,2})^2 \right) \bigg] \bigg\}. \end{eqnarray} The development follows closely that given for the previous case and so we only state the result in the near-field long-time limit, \begin{equation} \label{EFcoul} f^{ent}_z\approx -\frac{3}{4\pi} q_1q_2 \bigg[ \Delta_x +\Delta_y -2\Delta_z \bigg] \frac{1}{z^4} \end{equation} where we have used the shorthand $\Delta_j= \frac{1}{8}( \beta_j^2-1/\alpha_j^2)(\alpha_j^2/\beta_j^2- 1/(\mu \Omega)^2) $. Note that if the parameters $\alpha$ and $\beta$ are equal for all directions (\ref{EFcoul}) vanishes. The sign of the force can also be changed by the appropriate choice of the squeeze parameters $\alpha$ and $\beta$. \section{Possibility of Detection} \subsection{Atom and Field Out of Thermal Equilibrium} In this section we compute the relative magnitude for the atom-atom force when the field and atoms are not in thermal equilibrium to the force at zero temperature. We focus our attention on the case where the atom's are in their ground state and the field is in a thermal state of inverse temperature $\beta$. Measuring this new asymptotic scaling requires a balance between temperature and the first optical resonance of the atomic species used. 
For the case where $\Omega \beta \gg 1$, (\ref{NS}) is exponentially suppressed; this rules out the use of heavier atoms like Rb near room temperature. The only hope is to work in the regime where $\beta \Omega \gtrsim 1$, not only to prevent suppression by the Planck factor but also to prevent excitation of the atom, so that the measurement can be done before thermalization. The relative magnitude of (\ref{NS}) to $f^C$ in the far field shows when this new scaling will dominate. For realistic experiments, with atom-atom distances of the order of a micron, the high-temperature limit is beyond access for the temperatures and atomic species we are considering, so we replace $\delta f^C$ with its zero-temperature form (the appropriate factors of $c$ have also been restored to ensure that (\ref{Ratio}) is dimensionless). \begin{equation} \label{Ratio} \frac{\delta f^C}{f^C} =- \frac{8\pi}{161 c^5} \frac{\Omega_1^2 \Omega_2^5}{\Omega_1^2-\Omega_2^2} z^5 \frac{1}{e^{\beta \Omega_2}-1} +( \Omega_1 \leftrightarrow \Omega_2) \end{equation} If the atomic species are the same, the previous expression reduces to \begin{equation} \label{ } \frac{\delta f^C}{f^C} =\frac{24\pi}{322 } \left( \frac{\Omega z }{c} \right)^5 \frac{1}{e^{\beta \Omega}-1}. \end{equation} Tuning $\Omega$ to hydrogen's first optical resonance ($\Omega \approx 10 \text{eV}$, $\Omega \approx 2.4 \times 10^{15} \text{Hz}$, or $\Omega \approx 116{,}000 \text{K} $ ) we find \begin{equation} \label{ } \frac{\delta f^C}{f^C} \approx \frac{24\pi}{322 } \left( \frac{8 z}{\mu m} \right)^5 \frac{1}{e^{\beta \Omega}-1}. \end{equation} If the atomic species are different we find different behavior. In particular, when one atom's first optical resonance is very large, such that $\beta \Omega \gg 1$ (like Rb near room temperature), the Planck factor for that atom is strongly suppressed, so its contribution to the force can be ignored.
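To attach rough numbers to the identical-species ratio above, the sketch below assumes $\Omega z/c \approx 8\,(z/\mu\text{m})$ for hydrogen's resonance and scales $\Omega/c$ linearly for other (hypothetical) resonance energies; the values are illustrative only, and the largest separations strain the far-field assumptions:

```python
import math

def ratio_same_species(z_um, T_kelvin, omega_kelvin=116_000.0):
    """delta f^C / f^C for identical atoms: (24 pi/322) (Omega z/c)^5 / (e^{beta Omega} - 1).

    Hydrogen's first resonance (~116,000 K) gives Omega/c ~ 8 per micron;
    other resonance energies scale Omega/c linearly (assumed).
    """
    omega_z = 8.0 * (omega_kelvin / 116_000.0) * z_um   # dimensionless Omega z / c
    planck = 1.0 / (math.exp(omega_kelvin / T_kelvin) - 1.0)
    return (24 * math.pi / 322) * omega_z**5 * planck

# hydrogen at room temperature, z = 1 micron: hopelessly suppressed
print(ratio_same_species(1.0, 300.0))
# a hypothetical sub-eV resonance (~1000 K) needs far larger separations
print(ratio_same_species(100.0, 300.0, omega_kelvin=1000.0))
```

For hydrogen at room temperature the Planck factor is of order $e^{-387}$, so the effect is hopeless; only a much smaller resonance energy combined with much larger separations pushes the ratio past unity, consistent with the remark below about sub-eV molecular excitations.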
In such a case (\ref{Ratio}) takes the form \begin{equation} \label{} \frac{\delta f^C}{f^C} =- \frac{8\pi}{161} \left( \frac{\Omega z }{c} \right)^5 \frac{1}{e^{\beta \Omega}-1} =- \frac{8\pi}{161} \left( \frac{8 z}{\mu m} \right)^5 \frac{1}{e^{\beta \Omega}-1} \end{equation} with a different sign and a slightly different coefficient. For atoms, $ \delta f^C/f^C$ only becomes significantly greater than 1 at large distances and very high temperatures, and so is unlikely to be observable. However, these effects may play a role in the laboratory for molecules with sub-eV excitation energies. We leave a study of those effects for later work. \subsection{Entanglement Force} Now that we have an expression for the entanglement force at short distances, we check for regimes in which (\ref{EFf}) will dominate. To do this we take the ratio of the entanglement force to the near-field van der Waals force. After restoring all physical constants to yield the correct dimensions and taking both atoms to be the same species, we find \begin{equation} \label{ } \frac{f^{ent}}{f^{Lon}}=\frac{4 \epsilon_o}{9 c^2} \frac{\mu \Omega^4}{q^2} \left( \tilde{\beta}^2-\frac{1}{\tilde{\alpha}^2} \right) \left( \frac{\tilde{\alpha}^2}{\tilde{\beta}^2} -1 \right) z^5 \end{equation} where $\tilde{\beta}=\beta/\mu\Omega$ and $\tilde{\alpha}=\mu\Omega \alpha$. Tuning the frequency to the first optical resonance of hydrogen, taking the reduced mass to be the electron mass and $q$ to be the electronic charge, we find \begin{equation} \label{OOM} \frac{f^{ent}}{f^{Lon}} \approx 8.9 \times \left( \tilde{\beta}^2-\frac{1}{\tilde{\alpha}^2} \right) \left( \frac{\tilde{\alpha}^2}{\tilde{\beta}^2} -1 \right) \left( \frac{z}{ \text{nm} } \right)^5 \end{equation} The near-field condition requires that the distance between the atoms be much smaller than the wavelength associated with their first optical resonance. For hydrogen this wavelength is $\lambda= c /\Omega =122 \text{nm} $.
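Folding the whole prefactor of (\ref{OOM}) into a single number $P$, the ratio is $P\,(z/\text{nm})^5$, so the two forces balance at $z^{*}=P^{-1/5}$\,nm. A small numeric sketch (the values of $P$ are illustrative):

```python
def crossover_nm(P):
    """Distance (nm) where |f_ent| = |f_Lon|, from f_ent/f_Lon = P (z/nm)^5."""
    return P ** (-0.2)

for P in (0.01, 0.1, 1.0):
    print(f"P = {P}: forces balance near z = {crossover_nm(P):.2f} nm")
```

All of these crossover distances lie far below hydrogen's $\lambda \approx 122$\,nm, so the near-field condition is comfortably satisfied.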
So, for the case where the prefactor of (\ref{OOM}) is of order unity and the interatomic distances are in the range of a few nanometers, we find that the entanglement force dominates over the standard London form (see Fig.~\ref{POD}). \begin{figure} \begin{center} \includegraphics[width=4in]{RelativeMag_EFvsvdW.pdf} \caption{Relative magnitude of the entanglement force to the London force. The intersection of each curve with the black line yields the interatomic distance at which the two forces are equal in magnitude. The colors represent different values of the prefactor in (\ref{OOM}), i.e., $ P=8.9 \times \left( \tilde{\beta}^2-\frac{1}{\tilde{\alpha}^2} \right) \left( \frac{\tilde{\alpha}^2}{\tilde{\beta}^2} -1 \right)$: red is $P=0.01$, blue is $P=0.1$, and green is $P=1$. The plot shows that for interatomic distances satisfying the near-field condition $\Omega z \ll 1$, the entanglement force dominates for an order-unity prefactor at distances of a few nanometers. } \label{POD} \end{center} \end{figure} \section{Conclusion} In this paper we have laid down the theoretical groundwork for the study of interatomic forces under fully nonequilibrium conditions. As a first step, we have employed the influence functional formalism to derive a fully dynamical description of the atom-atom force for general atomic motion and initial states. We have found that a careful treatment of the infinite-time limit shows the existence of a novel far-field scaling when the atoms and field are not in thermal equilibrium. The dominance of this term in the laboratory would require a careful balance between temperature and the first optical resonance of the atomic species. For entangled atoms a novel near-field scaling is obtained that dominates the standard London force in certain regimes. These new forces could play an important role in quantum computing schemes involving entangled atoms.
\section{Introduction} Perovskites, primarily oxides, have had a tremendous impact on physics and technology for many years due to a plethora of intriguing physical phenomena such as magnetism, ferroelectricity, multiferroicity, superconductivity, piezoelectricity, colossal magnetoresistance, etc.~\cite{spaldin2019advances}. Some of these phenomena arise from the flexible crystal structure and the wide variety of elements that perovskites may contain. In the past decade, metal halide perovskites \ch{AMX3} with a monovalent inorganic or organic \ch{A^{1+}} cation, a divalent metal \ch{M^{2+}} cation, and a halide \ch{X^{1-}} anion such as \ch{F^{1-}}, \ch{Cl^{1-}}, \ch{Br^{1-}}, or \ch{I^{1-}} have become an appealing class of materials for next-generation high-performance optoelectronic, photonic, and spintronic devices and beyond due to potentially unique optical and electrical properties that cannot be observed in oxide perovskites~\cite{ricciardulli2021emerging,neumann2021manganese,cherniukh2021perovskite,long2020chiral,fu2019metal,snaith2018present}. On the other hand, many physical phenomena observed in oxide perovskites can also be present in metal halide perovskites. Ferroelectric properties were discovered in some organic metal halide perovskites, e.g., in \ch{CH3NH3PbI3}~\cite{shahrokhi2020emergence}, and in inorganic ones, such as orthorhombic \ch{CsPbBr3}~\cite{li2020evidence}, and hexagonal \ch{KNiCl3}~\cite{machida1994ferroelectricity}, \ch{TlFeCl3}~\cite{yamanaka2002structural}, \ch{RbMnBr3}~\cite{kato1994successive}, \ch{RbCoBr3}~\cite{morishita2000dielectric}, and \ch{RbFeBr3}~\cite{mitsui1994ferroelectric}. However, none of the synthesized inorganic metal halide fluoroperovskites \ch{AMF3} has been reported to be ferroelectric, with the only exception being \ch{CsPbF3}, with a lone-pair-active \ch{Pb^{2+}} cation, which was experimentally observed in the non-centrosymmetric space group $R3c$~\cite{berastegui2001low,smith2015interplay}.
Moreover, although a polar ground state was theoretically predicted in the \ch{NaCaF3}, \ch{NaCdF3}, \ch{LiMgF3}, and \ch{LiNiF3} fluoroperovskites~\cite{edwardson1989ferroelectricity,claeyssens2003abinitio,duan2004electronic}, to date there have been no reports on the synthesis of these crystals. Notwithstanding the foregoing, it was theoretically predicted in Ref.~\cite{garcia2014geometric} that synthesized orthorhombic fluoroperovskites have a ferroelectric instability in their hypothetical high-symmetry cubic phase, the degree of which correlates with the tolerance factor $t$~\footnote{The Goldschmidt tolerance factor $t$ is a dimensionless parameter defined as $t = \cfrac{r_{\ch{A}} + r_{\ch{F}}}{\sqrt{2} (r_{\ch{M}} + r_{\ch{F}})}$, where $r_{\ch{A}}$, $r_{\ch{M}}$, and $r_{\ch{F}}$ are ionic radii; it is used for predicting the stability of the fluoroperovskite crystal structure. Fluoroperovskites with \mbox{$t<0.78$} have the trigonal structure $R3c$ (\#161, $Z=6$), in the range of \mbox{$0.78<t<0.88$} the orthorhombic phase $Pnma$ (\#62, $Z=4$) is stabilized, the cubic phase $Pm\overline{3}m$ (\#221, $Z=1$) is adopted in the range of \mbox{$0.88<t<1.00$}, and the hexagonal structure $P6_{3}/mmc$ (\#194, $Z=6$) is realized for \mbox{$1.00<t<1.08$}~\cite{babel1967structural}. There are a few exceptions, e.g., \ch{KCuF3}\xspace and \ch{KCrF3}\xspace, in which the tetragonal $I4/mcm$ structure is realized due to the Jahn-Teller distortions. }\nocite{babel1967structural}. It is worth noting that the ferroelectric instability in fluoroperovskites originates from geometric ionic size effects without noticeable hybridization, which plays a crucial role in the oxide perovskites~\cite{garcia2014geometric}.
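For concreteness, the tolerance factor defined in the footnote is straightforward to evaluate. The sketch below uses assumed six-coordinate Shannon-type radii (in \AA), chosen for illustration, which land close to the $t \approx 0.78$ quoted below for \ch{NaMnF3}:

```python
import math

def tolerance_factor(r_A, r_M, r_F=1.33):
    """Goldschmidt tolerance factor t = (r_A + r_F) / (sqrt(2) * (r_M + r_F))."""
    return (r_A + r_F) / (math.sqrt(2) * (r_M + r_F))

# assumed six-coordinate Shannon radii (Angstrom): Na+ ~ 1.02, high-spin Mn2+ ~ 0.83
t_NaMnF3 = tolerance_factor(r_A=1.02, r_M=0.83)
print(f"t(NaMnF3) ~ {t_NaMnF3:.2f}")   # close to the 0.78 quoted in the text
```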
The latter prediction was experimentally corroborated: it has been observed that the bulk crystal of the orthorhombic fluoroperovskite \ch{NaMnF3}, with the lowest tolerance factor $t=0.78$, is an incipient multiferroic in which incipient ferroelectricity coexists and even interacts with antiferromagnetic ordering below the N{\'e}el temperature $T_{N}=66$\,K~\cite{dubrovin2020incipient}. Furthermore, it was shown that strained thin films of \ch{NaMnF3} are ferroelectric already at room temperature~\cite{garcia2016strain,yang2017room}. These intriguing results put on the agenda further detailed experimental studies of the lattice dynamics of cubic fluoroperovskites with different tolerance factors $t$, aiming to unveil any signs of incipient ferroelectricity in the high-symmetry cubic phase. In this paper, we report the results of a systematic study of the lattice dynamics of cubic fluoroperovskites by far-infrared spectroscopy, supported by appropriate theoretical calculations and analysis. We experimentally revealed that the low-frequency polar phonon at the $\Gamma$ point of the Brillouin zone softens on cooling in all studied crystals, similar to what is observed in incipient ferroelectrics. This frequency change correlates with the tolerance factor $t$ of the cubic fluoroperovskites, so that the lower the $t$, the more pronounced the frequency decrease on cooling. The coupling between the harmonic and anharmonic force constants of the softening polar phonons is experimentally observed in these crystals. Moreover, according to our harmonic first-principles simulations, the cubic fluoroperovskites tend toward lattice softening at all high-symmetry points of the Brillouin zone with a reduction of the tolerance factor $t$, which indicates a geometric origin of this effect.
\section{Methods} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig_1.pdf} \caption{\label{fig:structure} The crystal structure of cubic fluoroperovskites \ch{AMF3} with the tolerance factor (a)~$t<1$ and (b)~$t>1$. (c)~The Brillouin zone of a face-centered cubic lattice indicating the high-symmetry $\Gamma$ -- $M$ -- $R$ -- $\Gamma$ -- $X$ path used in the lattice dynamics calculations. The $k_{x}$, $k_{y}$, and $k_{z}$ are the primitive reciprocal lattice vectors. } \end{figure} The experimentally studied cubic fluoroperovskites \ch{KZnF3}, \ch{RbMnF3}, \ch{KNiF3}, and \ch{KMgF3} have a crystal structure belonging to the space group $Pm\overline{3}m$ (\#221, $Z=1$)~\cite{knight2017low,knox1961perovskite,okazaki1961crystal,windsor1966spin,vaitheeswaran2007high}. The lattice parameters $a$ of these crystals at room temperature are listed in Table~\ref{tab:phonon_parameters}. The perovskite unit cell contains five ions occupying the Wyckoff positions $1a$ (0, 0, 0) for \ch{A^1+}\xspace, $1b$ ($\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}$) for \ch{M^2+}\xspace, and $3c$ (0, $\frac{1}{2}$, $\frac{1}{2}$) for \ch{F^1-}\xspace, as shown in Figs.~\ref{fig:structure}(a) and~\ref{fig:structure}(b). The fluoroperovskites \ch{KZnF3} and \ch{KMgF3} are diamagnetic, while \ch{RbMnF3} and \ch{KNiF3} are antiferromagnetic below the N{\'e}el temperatures $T_{N}=83.5$\,K~\cite{lopez2014magnetic} and 244.8\,K~\cite{nouet1972determination,oleaga2015critical}, respectively. Single crystals of the cubic fluoroperovskites were grown by the Czochralski method~\cite{gesland1980growth}. The x-ray-oriented crystals were cut normal to the $a$ axis and optically polished. The surface size of the samples used in the far-infrared experiments was about $10\times{10}$\,mm$^{2}$. Samples for dielectric measurements were prepared in the form of plane-parallel optically polished plates with a thickness of about 0.5\,mm and an area of about 10\,mm$^{2}$.
The far-infrared (IR) reflectivity measurements were carried out in the spectral range of 30--700\,cm$^{-1}$ at near-normal incidence (the incident light beam was at 10$^{\circ}$ from the normal to the crystal surface) using a Bruker~IFS~125HR spectrometer equipped with a liquid-helium-cooled bolometer as a detector. Due to the cubic symmetry of the crystals, the three axes are equivalent, and the reflectivity measurements were performed with unpolarized light. Samples were attached to the cold finger of a closed-cycle helium cryostat (Cryomech~ST403), and the relative reflectivity spectra were measured during continuous cooling from 300 to 5\,K with respect to the reference reflectivity of a gold mirror at room temperature. No corrections for the surface quality and shape of the samples, or for shifts of the sample position due to thermal contraction of the cold finger, were applied. The absolute reflectivity spectra were obtained at room temperature using a Bruker~IFS~66v/S spectrometer in the range of 50--7500\,cm$^{-1}$ with DTGS (50--450\,cm$^{-1}$) and DLaTGS (450--7500\,cm$^{-1}$) detectors, which allowed us to determine the high-frequency dielectric permittivity $\varepsilon_{\infty}$. According to Refs.~\cite{markovin1976observation,krichevtsov1984isotropic}, the values of $\varepsilon_{\infty}$ in the cubic fluoroperovskites are characterized by weak temperature changes, which were neglected in the analysis. Measurements of the low-frequency dielectric permittivity $\varepsilon^{\textrm{lf}}_{0}$ were performed in the range from 20\,Hz to 1\,MHz using a precision RLC meter (AKTAKOM AM-3028). Electric contacts were deposited on the sample faces using silver paint to form a capacitor. Samples were placed in a helium-flow cryostat (Cryo CRC-102) and measurements were performed during continuous heating from 5 to 300\,K. Experimental data are presented only at 100\,kHz because no noticeable frequency dispersion was observed.
The dielectric losses were very small, on the order of 10$^{-5}$, with no perceptible temperature changes. \begin{figure*} \centering \includegraphics[width=2\columnwidth]{Fig_2.pdf} \caption{\label{fig:reflectivity} (a)--(d) Room temperature and (e)--(h) temperature colormap of far-infrared reflectivity spectra of the cubic fluoroperovskites \ch{KZnF3}, \ch{RbMnF3}, \ch{KNiF3}, and \ch{KMgF3}, respectively. The solid black lines are the results of fits based on the generalized oscillator model according to Eq.~\eqref{eq:epsilon_TOLO}. Color vertical dashed lines indicate $\omega_{\textrm{TO}}$ and $\omega_{\textrm{LO}}$ phonon frequencies, with $\omega_{\textrm{TO}}<\omega_{\textrm{LO}}$. Horizontal black dashed lines indicate the antiferromagnetic phase transition temperatures $T_{N}$. } \end{figure*} The experimental results were supported by first-principles calculations of the lattice dynamics within the density functional theory (DFT) framework~\cite{hohenberg1964inhomogeneous,kohn1965self} as implemented in the \textsc{vasp} code~\cite{kresse1996efficient,kresse1999from}. The projector augmented wave (PAW) method~\cite{blochl1994projector} was used to represent the valence and core electrons. The following electronic configurations of the valence electrons were applied: \ch{K} ($3p^{6}4s^{1}$, version 17Jan2003), \ch{Rb} ($4p^{6}5s^{1}$, version 06Sep2000), \ch{Ca} ($3s^{2}3p^{6}4s^{2}$, version 06Sep2000), \ch{Mn} ($3p^{6}4s^{2}3d^{5}$, version 02Aug2007), \ch{Co} ($3p^{6}3d^{7}4s^{2}$, version 23Apr2009), \ch{Zn} ($3d^{10}4s^{2}$, version 06Sep2000), \ch{Ni} ($3p^{6}3d^{8}4s^{2}$, version 06Sep2000), and \ch{F} ($2s^{2}2p^{5}$, version 08Apr2002). A Monkhorst-Pack $k$-point mesh of $8\times8\times8$ was used for Brillouin zone sampling of the primitive cell. The exchange-correlation functional was represented within the generalized gradient approximation (GGA) in the PBEsol parametrization~\cite{perdew2008restoring}.
Additionally, the $d$ electrons were corrected through the DFT$+U$ ($U=4$\,eV) approximation within the Liechtenstein formalism~\cite{liechtenstein1995density}. Born effective charges, dielectric properties, and lattice dynamics were calculated within the density functional perturbation theory (DFPT)~\cite{gonze1997dynamical} as implemented in the \textsc{vasp} code and analyzed through the Phonopy interface~\cite{togo2015first}. The longitudinal-transverse optical phonon (LO-TO) splitting near the $\Gamma$ point of the Brillouin zone was included using non-analytical corrections to the dynamical matrix~\cite{wang2010mixed}. Finally, the force constants $k$ of the phonon modes were calculated as eigenvalues of the force constant matrix~\footnote{The force constant matrix is defined by $C_{\alpha i, \beta j} = \cfrac{\partial F_{\alpha i}}{\partial r_{\beta j}}$, where $\alpha$ and $\beta$ label the ions, $i$ and $j$ label the Cartesian directions, $F$ is the force on the ion, and $r$ is the ion position. The acoustic sum rule is imposed to guarantee translational invariance as implemented in Phonopy~\cite{togo2015first}}. \section{Results and Discussion} \subsection{Far-infrared spectroscopy} The group-theoretical analysis for the cubic fluoroperovskites predicts five triply degenerate phonons \mbox{$\Gamma_{\textrm{total}} = 4 T_{1u} \oplus T_{2u}$}, among which \mbox{$\Gamma_{\textrm{IR}} = 3 T_{1u}$} are IR-active or polar~\cite{kroumova2003bilbao}. The three reflection bands observed in the far-infrared spectra $R(\omega)$ at room temperature shown in Figs.~\ref{fig:reflectivity}(a)--\ref{fig:reflectivity}(d) originate from the IR-active phonons in the studied crystals. The reflection band widths correspond to the difference between the LO and TO frequencies of the polar phonons, which arises from the long-range Coulomb interaction.
The far-infrared reflectivity spectra $R(\omega)$ were fitted using the Fresnel equation~\cite{born2013principles} \begin{equation} \label{eq:reflectivity} R(\omega) = \Bigl|\frac{\sqrt{\varepsilon(\omega)} - 1}{\sqrt{\varepsilon(\omega)} + 1}\Bigr|^2, \end{equation} with a factorized complex dielectric permittivity~\cite{gervais1974anharmonicity} \begin{equation} \label{eq:epsilon_TOLO} \varepsilon(\omega) = \varepsilon_{1}(\omega) - i\varepsilon_{2}(\omega) = \varepsilon_{\infty}\prod\limits_{j}\frac{{\omega^{2}_{j\textrm{LO}}} - {\omega}^2 + i\gamma_{j\textrm{LO}}\omega}{{\omega^{2}_{j\textrm{TO}}} - {\omega}^2 + i\gamma_{j\textrm{TO}}\omega}, \end{equation} where $\varepsilon_{\infty}$ is the high-frequency dielectric permittivity, and $\omega_{j\textrm{LO}}$, $\omega_{j\textrm{TO}}$, $\gamma_{j\textrm{LO}}$, and $\gamma_{j\textrm{TO}}$ correspond to the $\textrm{LO}$ and $\textrm{TO}$ frequencies ($\omega_{j}$) and dampings ($\gamma_{j}$) of the $j$th IR-active phonon, respectively. There is good agreement between the experimental (green) and fitted (black) lines, as shown in Figs.~\ref{fig:reflectivity}(a)--\ref{fig:reflectivity}(d). Deviations appear only near the highest-frequency phonon, presumably due to multiphonon processes involving zone-boundary phonons~\cite{young1969temperature}. The room-temperature values of the frequencies $\omega_{j}$ and dampings $\gamma_{j}$ of the $j=1$--$3$ polar phonons and the high-frequency dielectric permittivity $\varepsilon_{\infty}$ obtained from the fit are listed in Table~\ref{tab:phonon_parameters}. These parameters are in satisfactory agreement with the room-temperature data from the literature for the cubic fluoroperovskites~\cite{axe1967infrared,perry1967infrared,balkanski1967infrared,hofmeister1991comparison}.
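The Fresnel equation and the factorized dielectric permittivity above can be evaluated in a few lines; a minimal numerical sketch (not the actual fitting code used in this work, and the single-mode parameters below are hypothetical):

```python
import numpy as np

def epsilon_factorized(omega, eps_inf, modes):
    """Factorized dielectric permittivity: eps_inf times the product over modes of
    (w_LO^2 - w^2 + i*g_LO*w) / (w_TO^2 - w^2 + i*g_TO*w)."""
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for w_to, g_to, w_lo, g_lo in modes:
        eps *= (w_lo**2 - omega**2 + 1j * g_lo * omega) / \
               (w_to**2 - omega**2 + 1j * g_to * omega)
    return eps

def reflectivity(omega, eps_inf, modes):
    """Near-normal-incidence Fresnel reflectivity |(sqrt(eps) - 1)/(sqrt(eps) + 1)|^2."""
    n = np.sqrt(epsilon_factorized(omega, eps_inf, modes))
    return np.abs((n - 1.0) / (n + 1.0))**2

# Hypothetical single mode: (w_TO, gamma_TO, w_LO, gamma_LO) in cm^-1
omega = np.linspace(30.0, 700.0, 1000)
R = reflectivity(omega, eps_inf=2.0, modes=[(100.0, 1.0, 150.0, 1.0)])
```

Between $\omega_{\textrm{TO}}$ and $\omega_{\textrm{LO}}$ the model reflectivity approaches unity (the reststrahlen band), which is why the width of each reflection band measures the LO-TO splitting of the corresponding polar phonon.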
\begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{Fig_3.pdf} \caption{\label{fig:phonon} Temperature dependences of the frequencies $\omega_{j}$ of the $j=1$--$3$ polar phonons for the cubic fluoroperovskites (a)~\ch{KZnF3}, (b)~\ch{RbMnF3}, (c)~\ch{KNiF3}, and (d)~\ch{KMgF3}. The color circles represent the experimental data. The black lines correspond to the fit under the assumption of anharmonic temperature behavior in the absence of magnetic ordering according to Eq.~\eqref{eq:omega_anharmonism}. The green lines are fits of the shift due to spin-phonon coupling according to Eq.~\eqref{eq:spin_phonon}. Values of the spin-phonon coupling $\Delta\omega^{\textrm{SP}}$ (cm$^{-1}$) are given. Differences between the nonmagnetic and magnetic fit functions are shown by green filled areas. The paramagnetic and antiferromagnetic phases are shown on blue and red color-filled backgrounds, respectively. } \end{figure*} To determine the temperature evolution of the polar phonons in the studied crystals, we examined the far-infrared reflectivity spectra in the range from 5 to 300\,K, which are shown by the colormaps in Figs.~\ref{fig:reflectivity}(e)--\ref{fig:reflectivity}(h). The fitting of these spectra using Eqs.~\eqref{eq:reflectivity} and~\eqref{eq:epsilon_TOLO} allowed us to obtain the temperature dependences of the $\omega_{j\textrm{TO}}$ and $\omega_{j\textrm{LO}}$ phonon frequencies, which are shown by color circles in Fig.~\ref{fig:phonon}. \subsection{Softening of the polar mode} Our analysis reveals that the phonon frequency $\omega_{1\textrm{TO}}$ decreases (\textit{i.e.}, softens) by a few cm$^{-1}$ on cooling in all studied cubic fluoroperovskites, as shown in the bottom frames of Fig.~\ref{fig:phonon}. This softening is similar to that observed in the isostructural \ch{KCoF3} and \ch{RbCoF3} crystals previously reported in Ref.~\cite{dubrovin2019lattice}.
For our further analysis, it is convenient to convert the phonon frequency $\omega$ to the force constant $k$, which are related by $\omega = \sqrt{{k}/{\mu}}$, where $\mu$ is the reduced mass of the ions in the unit cell. According to the general principles of lattice dynamics theory~\cite{born1954dynamical}, the force constant $k$ can be represented as $k = k_{0} + k_{\textrm{ah}}$, where $k_{0}$ is a temperature-independent harmonic and $k_{\textrm{ah}}$ is a temperature-dependent anharmonic force constant. It was shown in Ref.~\cite{ridou1984anharmonicity} that in the cubic fluoroperovskites the quasi-harmonic force constant $k_{\textrm{qh}}$, related to the thermal expansion of the crystal, is an order of magnitude smaller than the anharmonic force constant $k_{\textrm{ah}}$, which allows us to neglect $k_{\textrm{qh}}$ in the analysis. Moreover, we suppose that the phonon frequency at low temperature, $\omega_{1\textrm{TO}}(5\,\mathrm{K})$, is determined by the harmonic force constant $k_{0}$ only, the anharmonic force constant $k_{\textrm{ah}}$ being neglected there. It is easy to see that the quantity ${\omega_{1\textrm{TO}}^{2}(T)}/{\omega_{1\textrm{TO}}^{2}(5\,\textrm{K})} - 1$ then represents the ratio of the anharmonic to harmonic force constants, ${k_{\textrm{ah}}(T)}/{k_{0}}$, of the 1TO polar phonon. The temperature dependences of this quantity for the considered cubic fluoroperovskites, together with the data for \ch{KCoF3} and \ch{RbCoF3} from Ref.~\cite{dubrovin2019lattice}, are shown in Fig.~\ref{fig:soft_mode}(a). It can be seen that a consistent decrease of the ratio ${k_{\textrm{ah}}(T)}/{k_{0}}$ takes place on cooling, with the most pronounced change of this value in \ch{KCoF3} with the lowest tolerance factor $t=0.94$, whereas the smallest reduction was observed in \ch{RbCoF3} with the largest value of $t=1$ among the presented cubic fluoroperovskites.
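The conversion described above amounts to two one-line relations; a minimal sketch (the frequencies in the usage note are hypothetical):

```python
def force_constant(omega, mu):
    """k = mu * omega^2, inverting omega = sqrt(k / mu)."""
    return mu * omega**2

def anharmonic_to_harmonic_ratio(omega_T, omega_5K):
    """k_ah(T)/k0 = omega(T)^2 / omega(5 K)^2 - 1, assuming that omega(5 K)^2
    is governed by the harmonic force constant k0 alone."""
    return (omega_T / omega_5K)**2 - 1.0
```

For example, a 1TO phonon stiffening from a hypothetical 130\,cm$^{-1}$ at 5\,K to 132\,cm$^{-1}$ at room temperature would give $k_{\textrm{ah}}/k_{0} \approx 0.031$.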
Figure~\ref{fig:soft_mode}(b) shows the extracted ratios of the force constants at room temperature, ${\Delta k_{\textrm{ah}}}/{k_{0}}$, in the studied crystals as a function of the tolerance factor $t$. A strong correlation between the ratio ${\Delta k_{\textrm{ah}}}/{k_{0}}$ and the tolerance factor $t$ is observed: the smaller the value of $t$, the larger the ratio ${\Delta k_{\textrm{ah}}}/{k_{0}}$ in the presented cubic fluoroperovskites. \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{Fig_4.pdf} \caption{\label{fig:soft_mode} (a)~Temperature and (b)~tolerance factor $t$ (at room temperature) dependences of the difference of squared phonon frequencies $\omega_{1\textrm{TO}}^{2}(T) - \omega_{1\textrm{TO}}^{2}(5\,\textrm{K})$, which represents the ratio of the anharmonic and harmonic force constants ${\Delta k_{\textrm{ah}}}/{k_{0}}$ in the cubic fluoroperovskites. The data for \ch{KCoF3} and \ch{RbCoF3} have been adapted from Ref.~\cite{dubrovin2019lattice}. (c)~Dependence of the squared phonon frequency $\omega_{1\textrm{TO}}^{2}(5\,\textrm{K})$ at low temperature, which represents the reduced harmonic force constant ${k_{0}}/{\mu}$, on the tolerance factor $t$ in two groups of the cubic fluoroperovskites: \ch{KMnF3}~\cite{axe1967infrared,perry1967infrared} (at room temperature), \ch{KCoF3}~\cite{dubrovin2019lattice}, \ch{KNiF3} and \ch{RbMnF3}, \ch{RbFeF3}~\cite{nakagawa1973transverse} (at room temperature), \ch{RbCoF3}~\cite{dubrovin2019lattice} with an insignificant difference of the reduced mass $\mu$. Temperature dependences of $\omega_{1\textrm{TO}}^{2}(T) - \omega_{1\textrm{TO}}^{2}(5\,\textrm{K})$, which represent the reduced anharmonic force constant ${k_{\textrm{ah}}(T)}/{\mu}$, in the groups of the cubic fluoroperovskites (d)~\ch{KCoF3}, \ch{KNiF3} and (e)~\ch{RbMnF3}, \ch{RbCoF3}, in which the difference in the value of $\mu$ is neglected.
} \end{figure*} To analyze the behavior of the harmonic force constant $k_{0}$, let us consider the squared frequency $\omega_{1\textrm{TO}}^{2}(5\,\textrm{K})$ at low temperature, also in the two groups of the cubic fluoroperovskites \ch{KMnF3}~\cite{axe1967infrared,perry1967infrared}, \ch{KCoF3}~\cite{dubrovin2019lattice}, \ch{KNiF3} and \ch{RbMnF3}, \ch{RbFeF3}~\cite{nakagawa1973transverse}, \ch{RbCoF3}~\cite{dubrovin2019lattice}, in which only the $3d$ ion changes. The $k_{0}$ values for \ch{KMnF3} and \ch{RbFeF3} are somewhat overestimated because the phonon frequencies $\omega_{1\textrm{TO}}$ are given in the literature only at room temperature, but this does not affect the general trend. As can be seen in Fig.~\ref{fig:soft_mode}(c), the value of $\omega_{1\textrm{TO}}^{2}(5\,\textrm{K})$ is reduced with decreasing tolerance factor $t$ in each of these two crystal groups, which reflects the corresponding change of the reduced harmonic force constant ${k_{0}}/{\mu}$. Therefore, this analysis leads us to conclude that in the cubic fluoroperovskites \ch{AMF3} there is a tangible coupling between the tolerance factor $t$ and the harmonic force constant $k_{0}$ of the 1TO phonon, such that a decrease of $t$ leads to a clear reduction of $k_{0}$. In order to reveal the temperature behavior of the anharmonic force constant $k_{\textrm{ah}}(T)$, the difference of squared phonon frequencies $\omega_{1\textrm{TO}}^{2}(T) - \omega_{1\textrm{TO}}^{2}(5\,\textrm{K})$, which represents the value ${k_{\textrm{ah}}(T)}/{\mu}$, was analyzed in the two groups of the cubic fluoroperovskites \ch{KCoF3}, \ch{KNiF3} and \ch{RbMnF3}, \ch{RbCoF3}, as shown in Figs.~\ref{fig:soft_mode}(d) and~\ref{fig:soft_mode}(e), respectively. In each of these groups, the difference in the values of $\mu$ can be neglected since only the \ch{M} ion, belonging to the $3d$ ions, changes, which allows comparing the quantity $k_{\textrm{ah}}(T)$ for crystals with different tolerance factors $t$.
According to Figs.~\ref{fig:soft_mode}(d) and~\ref{fig:soft_mode}(e), the value of $\omega_{1\textrm{TO}}^{2}(T) - \omega_{1\textrm{TO}}^{2}(5\,\textrm{K})$ for \ch{KCoF3} ($t=0.94$) and \ch{RbMnF3} ($t=0.96$) is larger than that for \ch{KNiF3} ($t=0.96$) and \ch{RbCoF3} ($t=1.0$), respectively; thus, the anharmonic force constant $k_{\textrm{ah}}$ increases as the tolerance factor $t$ decreases in the cubic fluoroperovskites. \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{Fig_5.pdf} \caption{\label{fig:dft_phonon_spectra} Phonon dispersion curves along the $\Gamma$ -- $M$ -- $R$ -- $\Gamma$ -- $X$ high-symmetry path of the Brillouin zone of the cubic fluoroperovskites with different tolerance factors $t$. \textcolor{newtext}{ The lines represent the calculated data. Imaginary frequencies are signified by negative numbers and correspond to unstable phonons. The circles denote experimental data adapted from neutron scattering for \ch{KCoF3}~\cite{holden1971excitations_1}, \ch{KZnF3}~\cite{lehner1982lattice}, and \ch{KMgF3}~\cite{salatin1993lattice}. The squares represent the polar phonon frequencies in the Brillouin zone center from our infrared spectroscopy experiments for \ch{KZnF3}, \ch{RbMnF3}, \ch{KNiF3}, and \ch{KMgF3}, and adapted from the literature for \ch{RbCaF3}~\cite{ridou1986temperature}, \ch{KMnF3}~\cite{axe1967infrared}, \ch{KCoF3}~\cite{dubrovin2019lattice}, and \ch{RbCoF3}~\cite{dubrovin2019lattice}.}} \end{figure*} A similar behavior of the polar phonons, but with more pronounced temperature changes, was observed in incipient ferroelectrics such as the oxide perovskites \ch{SrTiO3}~\cite{muller1979srti}, \ch{CaTiO3}~\cite{lemanov1999perovskite}, and \ch{EuTiO3}~\cite{kamba2007magnetodielectric}.
In these materials there is a soft polar mode whose frequency $\omega_{\textrm{SM}}$ tends toward zero on cooling but does not reach it down to the lowest temperatures, and the expected ferroelectric phase transition does not occur~\cite{kamba2021soft}. The Curie temperature $T_{C}$ in incipient ferroelectrics, at which the frequency $\omega_{\textrm{SM}}$ or its extrapolated linear part goes to zero, is usually negative, but it is sometimes positive, which suggests a ferroelectric transition that is nevertheless not realized due to the rather low value of this temperature~\cite{kvyatkovskii2001quantum}. Moreover, the harmonic force constant $k_{0}$ of the soft polar mode in incipient ferroelectrics has a small absolute value that is either negative, similar to the case of ferroelectrics, or positive~\cite{kvyatkovskii2001quantum}. As mentioned above, the temperature behavior of the phonon frequency $\omega$ is determined by the anharmonic force constant $k_{\textrm{ah}}$, the sign and value of which are defined by the mutual compensation of two terms with opposite temperature dependences arising from the third- and fourth-order anharmonicity~\cite{bruce1973lattice,bruce1981structural}. Thus, $k_\textrm{ah}$ is positive for ferroelectrics and incipient ferroelectrics, but it is negative for normal insulators. A delicate balance between these anharmonic terms occurs in the cubic fluoroperovskites, in which the anharmonicities mutually compensate each other as $t$ approaches 1. We believe that the revealed correlation of the harmonic $k_{0}$ and anharmonic $k_{\textrm{ah}}$ force constants, and of their ratio ${k_{\textrm{ah}}(T)}/{k_{0}}$, with the tolerance factor $t$ clearly indicates the existence of an incipient ferroelectric instability in the cubic fluoroperovskites.
It is worth noting that a similar consistent increase of the anharmonic force constant $\Delta k_{\textrm{ah}}$ with decreasing harmonic force constant $k_{0}$ was previously observed in the group of IV--VI materials \ch{PbS}, \ch{PbSe}, \ch{PbTe}, and \ch{SnTe}, but according to Ref.~\cite{kvyatkovskiui1988microscopic} this effect cannot be explained within the framework of the phenomenological theory of lattice anharmonicity. \subsection{Lattice dynamics simulations} To reveal the features of the lattice dynamics of the cubic fluoroperovskites under study, we calculated the phonon dispersion curves, including the LO-TO splitting, along the $\Gamma$ -- $M$ -- $R$ -- $\Gamma$ -- $X$ high-symmetry path of the Brillouin zone [see Fig.~\ref{fig:structure}(c)]. We also included the \ch{RbCaF3}, \ch{KMnF3}, \ch{KCoF3}, and \ch{RbCoF3} crystals to fully cover the $0.88<t<1.0$ range of stability of the cubic fluoroperovskites, as shown in Fig.~\ref{fig:dft_phonon_spectra}. The G-type antiferromagnetic (G-AFM) spin configuration was considered for the magnetic crystals. The obtained results are in satisfactory agreement with the reported data~\cite{holden1971excitations_1,lehner1982lattice,becher1989simulation,salatin1993lattice,salaun1995determination,vaitheeswaran2016calculated,ehsan2018dft} \textcolor{newtext}{as can be seen in Fig.~\ref{fig:dft_phonon_spectra}.} The calculated frequencies $\omega$ and force constants $k$ of the low-lying $T_{1u}$, $X_{5}$, $M_{2}$, and $R_{15'}$ phonons at the $\Gamma$, $X$, $M$, and $R$ points of the Brillouin zone, respectively, together with the obtained lattice parameters $a$ and tolerance factors $t$ of the cubic fluoroperovskites, are listed in Table~\ref{tab:dft_phonons_BZ}.
For the studied crystals, the obtained phonon frequencies $\omega$, dielectric strengths $\Delta\varepsilon$, and permittivities $\varepsilon_{0}$ and $\varepsilon_{\infty}$ at the $\Gamma$ point, as well as the lattice parameters $a$, are listed in Table~\ref{tab:dft_phonon_parameters}. It can be noted that the obtained values are in fair agreement with the experimental data presented in Table~\ref{tab:phonon_parameters} \textcolor{newtext}{and in the literature~\cite{perry1967infrared,axe1967infrared,balkanski1967infrared,nakagawa1973transverse,nakagawa1974infrared,ridou1986temperature,dubrovin2019lattice}.} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig_6.pdf} \caption{\label{fig:dft_DZ_force} Calculated force constants $k$ of the low-lying $T_{1u}$, $X_{5}$, $M_{2}$, and $R_{15'}$ phonons at the $\Gamma$, $X$, $M$, and $R$ high-symmetry points of the Brillouin zone, respectively, in the cubic fluoroperovskites \ch{AMF3} with different tolerance factors $t$. } \end{figure} The cubic structure is dynamically stable for the vast majority of the fluoroperovskites under study. However, the computed harmonic force constants $k$ of the phonons monotonically decrease with lowering of the tolerance factor $t$ at all high-symmetry points of the Brillouin zone, as shown in Fig.~\ref{fig:dft_DZ_force}. As a result, the crystals with small tolerance factors, \ch{RbCaF3} ($t=0.88$) and \ch{KMnF3} ($t=0.91$), have strongly unstable antiferrodistortive modes $R_{15'}$ and $M_{2}$ with negative force constants $k$ and imaginary frequencies $\omega$ at the $R$ and $M$ high-symmetry points and along the $M$--$R$ symmetry line, as can be seen from Figs.~\ref{fig:dft_phonon_spectra}, \ref{fig:dft_DZ_force}(c), \ref{fig:dft_DZ_force}(d), and Table~\ref{tab:dft_phonons_BZ}.
It is interesting to note that in the studied fluoroperovskites the lowest-frequency phonon branch covering a continuum of wave vectors along the $M$--$R$ edge of the Brillouin zone is remarkably flat, as shown in Fig.~\ref{fig:dft_phonon_spectra}, which is similar to the case found for some other cubic perovskites~\cite{lasota1997abinitio,rushchanskii2012first,lanigan2021two}. The $R_{15'}$ phonon force constant $k$ tending to zero on cooling drives the phase transition from the cubic $Pm\overline{3}m$ to the tetragonal $I4/mcm$ structure, which was experimentally observed at $T_{1}=193$\,K in \ch{RbCaF3}~\cite{rousseau1977rbcaf3} and at 186\,K in \ch{KMnF3}~\cite{kapusta1999revised}. A further temperature decrease drives the $M_{2}$ phonon force constant $k$ to zero, which induces the phase transition to the orthorhombic $Pnma$ structure below $T_{2}=65$\,K in \ch{RbCaF3}~\cite{knight2018high} and at 75\,K in \ch{KMnF3}~\cite{knight2020nuclear}. It should be noted that in the \ch{CsCaF3} ($t=0.94$) and \ch{KZnF3} ($t=0.95$) crystals with slightly higher tolerance factors, the experimental frequency of the low-lying phonon at the $R$ point also decreases on cooling but does not reach zero~\cite{ridou1984anharmonicity,lehner1982lattice}. Moreover, the experimental phonon frequencies at the $R$ point in the cubic fluoroperovskites from Ref.~\cite{holden1971excitations_1} follow the trend shown in Fig.~\ref{fig:dft_DZ_force}(d). The force constants $k$ at the $\Gamma$ and $X$ points are positive for all cubic fluoroperovskites with $0.88<t<1.0$ and become negative only in the high-symmetry cubic structure of the orthorhombic crystals with low values of $t$~\cite{garcia2014geometric}.
The dependence of the computed harmonic force constant $k$ of the 1TO phonon at the $\Gamma$ point on the tolerance factor $t$ is very close to that obtained experimentally for $k_{0}$, as shown in Figs.~\ref{fig:dft_DZ_force}(a) and~\ref{fig:soft_mode}(c), respectively. Thus, we revealed a correlation between the force constant $k$ values and the tolerance factor $t$, such that the values of $k$ decrease with a reduction of $t$ at all high-symmetry points of the Brillouin zone in the cubic fluoroperovskites. The obtained results are in good agreement with the calculated data for fluoroperovskites in Ref.~\cite{garcia2014geometric}. Moreover, the computed ionic Born effective charges in the studied crystals are close to the nominal values, in contrast to the ferroelectric oxide perovskites~\cite{zhong1994giant}, and show no correlation with the tolerance factor $t$, as shown in Table~\ref{tab:dft_born_charges}. Remarkably, our analysis suggests that the discovered trend in the cubic fluoroperovskites has a geometric origin related to a steric effect, namely, the filling of the unit-cell volume by ions with different ionic radii. This size effect was previously used to explain the multiferroicity observed in the \ch{BaMF4} crystal family~\cite{ederer2006origin,garcia2018direct}. It is worth noting that no such clear correlation between the calculated frequencies $\omega$, force constants $k$, and lattice parameters $a$ was observed in our study, emphasizing the exceptional importance of the tolerance factor $t$ for fluoroperovskites. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig_7.pdf} \caption{\label{fig:KNiF3_volume} (a)~Calculated frequencies $\omega$ of the polar phonons in the cubic fluoroperovskite \ch{KNiF3} in the (left frames) G-AFM and (right frames) FM states as a function of the lattice parameter $a$. The equilibrium value of $a$ is in the center.
Values of the obtained Gr{\"u}neisen parameters $\gamma$ for the phonons are given. (b)~Phonon frequency $\omega_{1\textrm{TO}}$ for \ch{KNiF3} in the G-AFM state as a function of biaxial epitaxial strain $\eta$. Negative and positive signs of $\eta$ correspond to compression and expansion, respectively. For $\eta\neq{0}$ the symmetry is tetragonal with the space group $P4/mmm$. } \end{figure} In order to gain insight into the quasi-harmonic behavior of the lattice dynamics of the cubic fluoroperovskites, we calculated the frequencies $\omega$ of the polar phonons in \ch{KNiF3} for changes of the lattice parameter $a$ of $\pm0.5\%$ with respect to its equilibrium value. We assume that the volume trend of the phonons of the other cubic fluoroperovskites is similar to that of \ch{KNiF3} due to the closeness of their dynamical properties. Figure~\ref{fig:KNiF3_volume}(a) shows that the frequencies $\omega$ of all phonons increase (\textit{i.e.}, harden) with decreasing $a$ in both the G-AFM and FM (ferromagnetic) states. Thus, the observed softening of the low-frequency 1TO polar phonons cannot be explained by the quasi-harmonic contribution from the thermal contraction of the crystals on cooling and is instead caused by anharmonic effects. The obtained values of the Gr{\"u}neisen parameters $\gamma$ for the polar phonons, which quantitatively reflect the sensitivity of the frequency to crystal volume changes, are also presented in Fig.~\ref{fig:KNiF3_volume}(a). It should be noted that there is an essential difference between the cubic fluoroperovskites and incipient ferroelectrics, despite the proximity of the anharmonic behavior of the softening polar phonons. Thus, it was theoretically predicted and experimentally corroborated that incipient ferroelectrics become genuine ones under epitaxial strain, e.g., \ch{SrTiO3}~\cite{pertsev2000phase,haeni2004room}, \ch{EuTiO3}~\cite{fennie2006magnetic,lee2010strong}, and \ch{NaMnF3}~\cite{garcia2016strain,yang2017room}.
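The mode Gr{\"u}neisen parameter quoted in the figure is defined as $\gamma = -\mathrm{d}\ln\omega/\mathrm{d}\ln V$; a minimal finite-difference sketch over two calculations at slightly different lattice parameters, in the spirit of the $\pm0.5\%$ scan described above (the frequencies in the usage line are hypothetical):

```python
import math

def mode_gruneisen(omega_minus, omega_plus, a_minus, a_plus):
    """Mode Grüneisen parameter gamma = -dln(omega)/dln(V) by central finite
    difference between two cell sizes; for a cubic cell, V = a^3."""
    dln_omega = math.log(omega_plus) - math.log(omega_minus)
    dln_V = 3.0 * (math.log(a_plus) - math.log(a_minus))
    return -dln_omega / dln_V

# Hypothetical phonon frequencies at a 0.5% smaller and larger lattice parameter
# (in units of the equilibrium a): frequency hardens under compression.
gamma = mode_gruneisen(omega_minus=152.0, omega_plus=148.0, a_minus=0.995, a_plus=1.005)
```

A positive $\gamma$ encodes exactly the behavior stated in the text: all polar phonons harden as $a$ decreases.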
According to our calculations, compressive and tensile biaxial epitaxial strain $\eta$ does not drive the phonon frequency $\omega_{1\textrm{TO}}$ to zero up to $\eta=\pm{5}\%$, as shown in Fig.~\ref{fig:KNiF3_volume}(b). Therefore, the studied cubic fluoroperovskites are not incipient ferroelectrics \textcolor{newtext}{in contrast to the orthorhombic \ch{NaMnF3}~\cite{garcia2016strain,yang2017room,dubrovin2018unveiling,dubrovin2020incipient}}. \textcolor{newtext}{Moreover, most cubic fluoroperovskites that are stable at zero pressure experimentally showed no structural changes even under high pressure~\cite{vaitheeswaran2007high,aguado2008high,vaitheeswaran2010high,mishra2011high}.} \subsection{Spin-phonon coupling} To reveal the effects of antiferromagnetic ordering on the phonon landscape, we fitted the temperature dependences of the phonon frequencies in the diamagnetic and paramagnetic phases using the expression~\cite{balkanski1983anharmonic} \begin{eqnarray} \label{eq:omega_anharmonism} \omega_{j}(T) = \omega_{j0} + A_{j} \left( 1 + \frac{2}{e^{\hbar\omega_{j0}/2k_{B}T} - 1} \right) \nonumber\\ + B_{j} \left( 1 + \frac{3}{e^{\hbar\omega_{j0}/3k_{B}T} - 1} + \frac{3}{(e^{\hbar\omega_{j0}/3k_{B}T} - 1)^2} \right), \end{eqnarray} where $\omega_{j0}$ is the harmonic frequency of the $j$th phonon, and $A_{j}$ and $B_{j}$ are parameters describing three- and four-phonon anharmonic processes, respectively. In this simple model, it is assumed that an optical phonon with frequency $\omega_{j0}$ decays into two (three-phonon process) or three (four-phonon process) acoustic phonons with frequencies ${\omega_{j0}}/{2}$ and ${\omega_{j0}}/{3}$, satisfying both energy and momentum conservation. In real crystals, the phonon anharmonicity is more complicated, and the contributions from these decay processes are usually very small~\cite{mendez1984temperature,lan2012phonon}.
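A minimal numerical sketch of this anharmonic fit function, using the wavenumber-to-temperature conversion $\hbar\omega/k_{B} = hc\tilde{\nu}/k_{B} \approx 1.4388$\,K$\cdot$cm for $\tilde{\nu}$ in cm$^{-1}$ (the parameter values in the usage lines are hypothetical, not the actual fit results):

```python
import numpy as np

HC_OVER_KB = 1.4388  # hbar*omega/k_B in kelvin for omega given in cm^-1

def omega_anharmonic(T, w0, A, B):
    """Balkanski-type anharmonic frequency: three-phonon (A) and four-phonon (B)
    decay channels; w0, A, B in cm^-1, T in kelvin."""
    n2 = 1.0 / np.expm1(HC_OVER_KB * w0 / (2.0 * T))  # decay into two phonons at w0/2
    n3 = 1.0 / np.expm1(HC_OVER_KB * w0 / (3.0 * T))  # decay into three phonons at w0/3
    return w0 + A * (1.0 + 2.0 * n2) + B * (1.0 + 3.0 * n3 + 3.0 * n3**2)

# Hypothetical parameters; as T -> 0 the frequency tends to w0 + A + B
T = np.linspace(5.0, 300.0, 60)
omega = omega_anharmonic(T, w0=200.0, A=-2.0, B=-0.3)
```

With negative $A$ and $B$ the frequency decreases on heating, the usual behavior for normal insulators; a soft mode that instead hardens on heating corresponds to effectively positive anharmonic terms.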
However, Eq.~\eqref{eq:omega_anharmonism} often gives good fits to experimental data, including the case of the studied fluoroperovskite crystals in the paramagnetic and diamagnetic phases, as shown by the black lines in the temperature range denoted by the blue background in Fig.~\ref{fig:phonon}. Because of this oversimplified approximation, the model parameters are not necessarily physically meaningful and are therefore not given or discussed. The deviations of the experimental frequencies of the polar phonons from the anharmonic fits below the N{\'e}el temperature $T_{N}$ in the antiferromagnets \ch{RbMnF3} and \ch{KNiF3}, caused by the spin-phonon coupling, are shown in Figs.~\ref{fig:phonon}(b) and~\ref{fig:phonon}(c), respectively. These frequency shifts, which result from the magnetic ordering, are described by the function~\cite{cottam2019spin} \begin{equation} \label{eq:spin_phonon} \omega^{\textrm{AFM}}(T) = \omega^{\textrm{NM}}(T) + \Delta\omega^{\textrm{SP}} \langle S_{i}\cdot{}S_{j} \rangle, \end{equation} where $\omega^{\textrm{AFM}}(T)$ and $\omega^{\textrm{NM}}(T)$ are the temperature dependences of the phonon frequency in the antiferromagnetic (AFM) and hypothetical nonmagnetic (NM) phases, $ \langle S_{i}\cdot{}S_{j} \rangle$ denotes the spin-pair correlation function, and $\Delta\omega^{\textrm{SP}}$ is the spin-phonon coupling constant, which is equal to the phonon frequency shift at low temperature.
The spin-pair correlation function $ \langle S_{i}\cdot{}S_{j} \rangle$, neglecting short-range magnetic ordering, is proportional to $M^{2}$, where $M$ is the magnetic order parameter, whose temperature dependence can be described using the Brillouin function~\cite{darby1967tables} \begin{equation} \label{eq:brillouin} B(x) = \frac{M}{M_{0}} = \frac{2S + 1}{2S} \coth{\Bigl(\frac{2S + 1}{2S}x\Bigr)} - \frac{1}{2S} \coth{\Bigl(\frac{x}{2S}\Bigr)}, \end{equation} where $x = \cfrac{3S}{S + 1} \cfrac{M}{M_{0}} \cfrac{T_{N}}{T}$, and $S$, $T$, $T_{N}$, $M$, and $M_{0}$ are the spin value, temperature, N{\'e}el temperature, spontaneous magnetization, and saturation magnetization, respectively. The difference between the anharmonic fits and the experimental data in the antiferromagnetic phases was fitted using Eq.~\eqref{eq:spin_phonon}, as shown by the green lines in Figs.~\ref{fig:phonon}(b) and~\ref{fig:phonon}(c) for \ch{RbMnF3} and \ch{KNiF3}, respectively. The obtained values of the spin-phonon coupling constant $\Delta\omega^{\textrm{SP}}$ for all phonons are also given in Figs.~\ref{fig:phonon}(b) and~\ref{fig:phonon}(c). It is clearly seen in Figs.~\ref{fig:phonon}(b) and~\ref{fig:phonon}(c) that in both crystals only the $\omega_{\textrm{1LO}}$, $\omega_{\textrm{2TO}}$, $\omega_{\textrm{2LO}}$, and $\omega_{\textrm{3TO}}$ phonon frequencies exhibit appreciable shifts $\Delta\omega^{\textrm{SP}}$ below $T_{N}$ due to the spin-phonon coupling, which is qualitatively similar to the results previously observed in \ch{KCoF3} and \ch{RbCoF3}~\cite{dubrovin2019lattice}. However, in \ch{RbMnF3} and \ch{KNiF3} the spin-phonon coupling was observed for $\omega_{\textrm{3TO}}$, whereas in \ch{KCoF3} and \ch{RbCoF3} the polar phonon with frequency $\omega_{\textrm{3LO}}$ was susceptible to the magnetic ordering.
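The self-consistent solution of the Brillouin relation and the resulting spin-phonon shift of Eq.~\eqref{eq:spin_phonon} can be sketched numerically as follows (a hedged Python sketch; the standard form of the Brillouin function is used, the fixed-point solver and function names are illustrative assumptions, and $\langle S_{i}\cdot{}S_{j}\rangle$ is approximated by its long-range part $\propto (M/M_{0})^{2}$):

```python
import numpy as np

def brillouin(S, x):
    """Standard Brillouin function B_S(x) for spin S."""
    a = (2.0 * S + 1.0) / (2.0 * S)
    b = 1.0 / (2.0 * S)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def reduced_magnetization(S, T, TN, n_iter=200, tol=1e-10):
    """Solve m = B_S(3S/(S+1) * m * TN/T) for m = M/M0 by fixed-point iteration."""
    if T >= TN:
        return 0.0
    m = 1.0  # start from saturation and iterate towards the self-consistent value
    for _ in range(n_iter):
        m_new = brillouin(S, 3.0 * S / (S + 1.0) * m * TN / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def phonon_freq_afm(T, omega_nm, d_omega_sp, S, TN):
    """Eq. (spin_phonon) with <Si.Sj> approximated by its long-range part m^2."""
    m = reduced_magnetization(S, T, TN)
    return omega_nm + d_omega_sp * m * m
```

For example, with $S=5/2$ and $T_{N}=83$~K (values relevant to \ch{RbMnF3}), the computed shift saturates at $\Delta\omega^{\textrm{SP}}$ as $T\to0$ and vanishes at $T\geq T_{N}$.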
The reason for this difference can presumably be related to the influence of the strongly anisotropic \ch{Co^{2+}} ion, which possesses the largest orbital momentum among the $3d^{n}$ ions and leads to symmetry lowering at the magnetostructural phase transition in \ch{KCoF3} and \ch{RbCoF3} caused by the spin-orbit interaction~\cite{julliard1975analyse}. The obtained spin-phonon coupling constants $\Delta\omega^{\textrm{SP}}$ for the phonons with $\omega_{\textrm{1LO}}$, $\omega_{\textrm{2TO}}$, and $\omega_{\textrm{2LO}}$ frequencies in \ch{KNiF3} are noticeably larger than in \ch{RbMnF3}, presumably due to the difference in the exchange integrals, which manifests itself in the corresponding difference of $T_{N}$ in these crystals~\cite{barocchi1978determination,chinn1970two}. It is known that $\Delta\omega^{\textrm{SP}}$ is proportional to the second derivative of the exchange integral with respect to the ion displacements of the phonon~\cite{granado1999magnetic}. Moreover, the mass of the \ch{Rb^{1+}} ion (85.5) significantly exceeds that of \ch{K^{1+}} (39.1), which also contributes to $\Delta\omega^{\textrm{SP}}$ being larger in \ch{KNiF3} than in \ch{RbMnF3}~\cite{granado1999magnetic}. The phonon with frequency $\omega_{3\textrm{TO}}$ has a relatively large value of $\Delta\omega^{\textrm{SP}}$, which differs only slightly between \ch{RbMnF3} and \ch{KNiF3} and is supposedly related to its strong anharmonicity, which leads to large temperature changes as shown in Figs.~\ref{fig:phonon}(b) and~\ref{fig:phonon}(c), respectively. \begin{figure*} \centering \includegraphics[width=2\columnwidth]{Fig_8.pdf} \caption{\label{fig:spin_phonon} (a)~Sketch of the ion displacements for the polar phonons in the cubic fluoroperovskites according to the DFT simulations. (b)~The \ch{M}--\ch{F}--\ch{M} superexchange bond angle $\theta_{0}$ and length $r_{0}$, which are dynamically changed by the ion displacements of the polar phonons.
$\textrm{F}_{\perp}$ and $\textrm{F}_{\parallel}$ indicate the displacements perpendicular and parallel to the \ch{M}--\ch{F}--\ch{M} bond, respectively. (c)~Computed absolute values of the ion displacements related to the polar phonons in \ch{RbMnF3} and \ch{KNiF3}. (d)~Relationship between the calculated dynamical changes of the bond angle ${\Delta\theta}/{\theta_{0}}$ and length ${\Delta{r}}/{r_{0}}$ and the computed frequency shift ${\Delta\omega^{\textrm{SP}}}/{\omega^{\textrm{NM}}}$ due to the spin-phonon coupling in \ch{RbMnF3} and \ch{KNiF3}. The picture was prepared using the \textsc{VESTA} software~\cite{momma2011vesta}. } \end{figure*} To gain further insight into the origin of the observed spin-phonon coupling in the cubic fluoroperovskites, we performed lattice-dynamics DFT simulations for \ch{RbMnF3} and \ch{KNiF3} also in the FM spin configuration. Since the NM phase is not accessible to DFT calculations of magnetic crystals, we approximated it by the averaged values between the G-AFM and FM states following Ref.~\cite{schleck2010elastic}. For the studied magnetic crystals, the computed frequencies $\omega$ of the polar phonons at the $\Gamma$ point for the G-AFM, FM, and NM states, together with the values of the spin-phonon coupling shifts $\Delta\omega^{\textrm{SP}}=\omega^{\textrm{AFM}}-\omega^{\textrm{NM}}$, are listed in Table~\ref{tab:dft_SP}. There is good qualitative agreement between the experimental and computed $\Delta\omega^{\textrm{SP}}$ in \ch{RbMnF3} and \ch{KNiF3}, as can be seen from Figs.~\ref{fig:phonon}(b)--\ref{fig:phonon}(c) and Table~\ref{tab:dft_SP}. The calculated frequency shift $\Delta\omega^{\textrm{SP}}$ due to spin-phonon coupling of the 1TO phonon has a meager absolute value in both crystals, in full agreement with our experiments. Furthermore, according to our calculations, the 1LO, 2TO, and 2LO phonons have a positive sign of $\Delta\omega^{\textrm{SP}}$, with the most pronounced effect for the 2TO phonon, consistent with the experimental observations.
The 3TO phonon exhibits an appreciable positive frequency shift $\Delta\omega^{\textrm{SP}}$ in the simulations, which also agrees with our experimental results. However, according to Figs.~\ref{fig:phonon}(b) and~\ref{fig:phonon}(c), the spin-phonon coupling for the polar phonons with frequency $\omega_{3\textrm{LO}}$ is insignificant in both crystals, which agrees with the calculation results for \ch{RbMnF3} only, while for \ch{KNiF3} the computed $\Delta\omega^{\textrm{SP}}$ has a significant negative value, as can be seen in Table~\ref{tab:dft_SP}. Figure~\ref{fig:spin_phonon}(a) shows the computed ion displacements for the polar phonons in the considered cubic fluoroperovskites. The TO polar phonon displacements are close to previously published data for these crystals~\cite{harada1970determination,nakagawa1973transverse}. The lowest-frequency 1TO phonon corresponds to the so-called Last mode, with opposite vibration of the \ch{A} cations and the \ch{MF6} octahedra. The second, 2TO, phonon corresponds to the vibrations of the \ch{M} cations against the fluoride octahedra, known as the Slater mode. Note that the Slater mode has the lowest frequency in the lead-free oxide perovskites, e.g., in \ch{KNbO3}, \ch{BaTiO3}, \ch{SrTiO3}, and \ch{EuTiO3}, and this mode dominates in ferroelectrics and incipient ferroelectrics~\cite{harada1970determination,hlinka2006infrared,rushchanskii2012first}. The 3TO polar phonon with the highest frequency represents the bending of the \ch{MF6} octahedra and corresponds to the Axe mode. The ion displacements for the LO polar phonons in the cubic fluoroperovskites are also shown in Fig.~\ref{fig:spin_phonon}(a). The spin-phonon coupling originates from the dependence of the exchange interaction on the positions of the ions.
In the antiferromagnetic cubic fluoroperovskites, phonons dynamically change the bond angle $\theta_{0}$ and length $r_{0}$ of the superexchange \ch{M}--\ch{F}--\ch{M} pathway, as shown in Fig.~\ref{fig:spin_phonon}(b). According to the Goodenough-Kanamori rules, the superexchange interaction is given by the relation $J \propto \cfrac{\tau^{2}}{\Delta - J_{\textrm{H}}}$, where $\tau$ is the hopping integral, which is proportional to the effective overlap between the wave functions of the electron orbitals of the magnetic \ch{M} cations via the \ch{F} anion, $\Delta$ is the energy difference between the \ch{F} and \ch{M} ion orbitals, and $J_{\textrm{H}}$ is the Hund coupling~\cite{goodenough1963magnetism,lee2011large}. The variation $\Delta\theta$ of the superexchange bond angle by phonons affects the exchange interaction $J$ through the change of the hopping integral $\tau$, whereas the length variation $\Delta{r}$ leads to a change of the energy difference $\Delta$ between the orbitals. The calculated absolute values of the ion displacements for the TO and LO polar phonons in \ch{RbMnF3} and \ch{KNiF3} are presented as a stacked bar graph in Fig.~\ref{fig:spin_phonon}(c). Together, these data allow us to reveal the origin of the spin-phonon coupling observed in the studied antiferromagnetic crystals. \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{Fig_9.pdf} \caption{\label{fig:dielectric} Temperature dependences of (bottom frames) the dielectric strength $\Delta\varepsilon_{j}$ for the $T_{1u}$ phonons $j=1-3$, (middle frames) the static dielectric permittivity $\varepsilon_{0}$, and (upper frames) the low-frequency dielectric permittivity $\varepsilon^{\textrm{lf}}_{0}$ at $f=100$\,kHz for the cubic fluoroperovskites (a)~\ch{KZnF3}, (b)~\ch{RbMnF3}, (c)~\ch{KNiF3}, and (d)~\ch{KMgF3}. The temperature dependence of $\varepsilon^{\textrm{lf}}_{0}$ for \ch{RbMnF3} has been adapted from Ref.~\cite{dubrovin2018unveiling}. The color circles correspond to the experimental data.
The black and green lines are fits assuming the anharmonic and spontaneous magnetodielectric effects, respectively. Values of the spontaneous magnetodielectric effect $\Delta\varepsilon^{\textrm{MD}}$ are given. The paramagnetic and antiferromagnetic phases are shown with blue and red color-filled backgrounds, respectively. } \end{figure*} Figure~\ref{fig:spin_phonon}(d) shows the relation between the relative changes of the bond angles ${\Delta\theta}/{\theta_{0}}$ and lengths ${\Delta{r}}/{r_{0}}$, estimated using the calculated ion displacements, and the computed phonon frequency shifts ${\Delta\omega^{\textrm{SP}}}/{\omega^{\textrm{NM}}}$ in \ch{RbMnF3} and \ch{KNiF3}. There is good agreement between ${\Delta\theta}/{\theta_{0}}$ and ${\Delta\omega^{\textrm{SP}}}/{\omega^{\textrm{NM}}}$ for the TO phonons in both crystals. The smallest value of ${\Delta\theta}/{\theta_{0}}$ corresponds to the smallest ${\Delta\omega^{\textrm{SP}}}/{\omega^{\textrm{NM}}}$ for the 1TO phonon, while the highest frequency shift is observed for the 2TO phonon, which has the most pronounced dynamic modulation of the superexchange bond angle, as shown in Fig.~\ref{fig:spin_phonon}(d). This result is in good agreement with similar calculations in oxide perovskites~\cite{lee2011large}. For the LO polar phonons, satisfactory agreement between ${\Delta\theta}/{\theta_{0}}$ and ${\Delta\omega^{\textrm{SP}}}/{\omega^{\textrm{NM}}}$ is observed only for the 1LO phonon in both crystals, whereas the frequency shifts expected from the calculated relative changes of the bond angle ${\Delta\theta}/{\theta_{0}}$ for the 2LO and 3LO phonons exceed the computed frequency shifts ${\Delta\omega^{\textrm{SP}}}/{\omega^{\textrm{NM}}}$, as can be seen in Fig.~\ref{fig:spin_phonon}(d). Apparently, this is because for the LO phonons the longitudinal displacements ${\Delta{r}}/{r_{0}}$ have a more significant effect on the spin-phonon coupling, which competes with the bond-angle changes ${\Delta\theta}/{\theta_{0}}$.
It is worth noting that the phonon-induced changes of the bond lengths ${\Delta{r}}/{r_{0}}$ deviate markedly from the frequency shifts ${\Delta\omega^{\textrm{SP}}}/{\omega^{\textrm{NM}}}$ for the TO phonons [see Fig.~\ref{fig:spin_phonon}(d)], which indicates that the variation of the energy difference $\Delta$ between the orbitals has a weaker effect on the superexchange interaction $J$ than the change of the hopping integral $\tau$ in this case, consistent with published data~\cite{lee2011large,son2019unconventional}. Thus, the frequency shifts $\Delta\omega^{\textrm{SP}}$ due to the spin-phonon coupling found in the antiferromagnets \ch{RbMnF3} and \ch{KNiF3} are closely related to the dynamical modulation of the \ch{M}--\ch{F}--\ch{M} bond angle by the TO phonons, while for the LO phonons a noticeable competing effect of the bond-length change is observed. \subsection{Dielectric properties} The experimental temperature dependences of the low-frequency dielectric permittivity $\varepsilon^{\textrm{lf}}_{0}(T)$ for the studied cubic fluoroperovskites are shown in the upper frames of Fig.~\ref{fig:dielectric}. The data for \ch{RbMnF3} have been adapted from Ref.~\cite{dubrovin2018unveiling}. The $\varepsilon^{\textrm{lf}}_{0}(T)$ mostly grows on cooling in the \ch{KZnF3}, \ch{RbMnF3}, and \ch{KMgF3} crystals, while more complex behavior was found in \ch{KNiF3}, in which a decrease of this quantity turns into an increase as the temperature is reduced. The relative changes of $\varepsilon^{\textrm{lf}}_{0}(T)$ are quite small: about 1\% for \ch{RbMnF3}, \ch{KNiF3}, and \ch{KMgF3} and about 5\% for \ch{KZnF3} in the measured temperature range, values that are quite typical for fluoroperovskites~\cite{dubrovin2018unveiling,dubrovin2019lattice,dubrovin2020spontaneous}.
The experimental frequencies of the polar phonons from Fig.~\ref{fig:phonon} allow us to obtain the dielectric strength $\Delta\varepsilon_{j}$ of a particular $j$th phonon from the expression~\cite{gervais1983long} \begin{equation} \label{eq:oscillator_strength_TOLO} \Delta\varepsilon_{j} = \frac{\varepsilon_{\infty}}{{\omega^{2}_{j\textrm{TO}}}}\frac{\prod\limits_{k}\bigl({\omega^{2}_{k\textrm{LO}}}-{\omega^{2}_{j\textrm{TO}}}\bigr)}{\prod\limits_{k\neq{}j}\bigl({\omega^{2}_{k\textrm{TO}}}-{\omega^{2}_{j\textrm{TO}}}\bigr)}, \end{equation} which corresponds to the contribution of this polar mode to the static dielectric permittivity $\varepsilon_{0} = \varepsilon_{\infty} + \sum_{j}\Delta\varepsilon_{j}$. The values of $\Delta\varepsilon$ and $\varepsilon_{0}$ calculated from the experiments at room temperature for the studied cubic fluoroperovskites are listed in Table~\ref{tab:phonon_parameters}. Figure~\ref{fig:dielectric} shows the temperature dependences of the dielectric strengths $\Delta\varepsilon_{j}$ (bottom frames) and the static dielectric permittivity $\varepsilon_{0}(T)$ (middle frames). The color circles correspond to the experimental data, whereas the black and green lines are the fits neglecting and including the antiferromagnetic ordering below $T_{N}$, respectively. There is a qualitative agreement between the temperature dependences of the low-frequency $\varepsilon^{\textrm{lf}}_{0}$ and static $\varepsilon_{0}$ dielectric permittivities in the studied crystals, as seen in the upper and middle frames of Fig.~\ref{fig:dielectric}. The temperature behaviors of $\Delta\varepsilon$ are very similar in the crystals under study and close to those previously observed in \ch{KCoF3} and \ch{RbCoF3}~\cite{dubrovin2019lattice}. On cooling, $\Delta\varepsilon_{1}$ grows rapidly while $\Delta\varepsilon_{2}$, on the contrary, decreases, whereas the temperature changes of $\Delta\varepsilon_{3}$ are quite insignificant in all the studied crystals.
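The dielectric strengths and the static permittivity can be evaluated directly from the measured TO and LO frequencies (a hedged Python sketch of Eq.~\eqref{eq:oscillator_strength_TOLO} with purely illustrative frequencies; as a consistency check, the summed strengths reproduce the generalized Lyddane-Sachs-Teller relation $\varepsilon_{0}=\varepsilon_{\infty}\prod_{k}\omega^{2}_{k\textrm{LO}}/\omega^{2}_{k\textrm{TO}}$):

```python
import numpy as np

def dielectric_strength(j, w_to, w_lo, eps_inf):
    """Delta-eps_j of the j-th polar mode from the factorized expression
    Eq. (oscillator_strength_TOLO); w_to, w_lo are lists of frequencies (cm^-1)."""
    w2 = w_to[j] ** 2
    num = np.prod([wl ** 2 - w2 for wl in w_lo])
    den = np.prod([w_to[k] ** 2 - w2 for k in range(len(w_to)) if k != j])
    return eps_inf / w2 * num / den

def static_permittivity(w_to, w_lo, eps_inf):
    """eps_0 = eps_inf + sum_j Delta-eps_j."""
    return eps_inf + sum(dielectric_strength(j, w_to, w_lo, eps_inf)
                         for j in range(len(w_to)))
```

By construction, the result of \texttt{static\_permittivity} agrees with the generalized LST value $\varepsilon_{\infty}\prod_{k}(\omega_{k\textrm{LO}}/\omega_{k\textrm{TO}})^{2}$.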
The temperature dependences of the low-frequency dielectric permittivity $\varepsilon^{\textrm{lf}}_{0}$ exhibit kinks at $T_{N}$ in the antiferromagnets \ch{RbMnF3} and \ch{KNiF3} due to the spontaneous magnetodielectric effect, as shown in the upper frames of Figs.~\ref{fig:dielectric}(b) and~\ref{fig:dielectric}(c), respectively. It should be noted that this effect was previously experimentally observed in some other magnetic fluoroperovskites with different crystal structures~\cite{dubrovin2018unveiling,dubrovin2019lattice,dubrovin2020incipient,dubrovin2020spontaneous}. The frequency shifts $\Delta\omega^{\textrm{SP}}$ of the polar phonons caused by the spin-phonon coupling lead to changes of the dielectric strengths $\Delta\varepsilon^{\textrm{MD}}$ as a result of the spontaneous magnetodielectric effect below $T_{N}$, as shown by the green lines in the bottom frames of Figs.~\ref{fig:dielectric}(b) and~\ref{fig:dielectric}(c). A well-pronounced spontaneous magnetodielectric effect $\Delta\varepsilon^{\textrm{MD}}$ is observed for $\Delta\varepsilon_{2}$ and $\Delta\varepsilon_{3}$ and has a negative sign in both crystals. It is worth noting that the absolute values of $\Delta\varepsilon^{\textrm{MD}}$ are fairly close in these crystals even though the spin-phonon coupling is more pronounced in \ch{KNiF3} than in \ch{RbMnF3}. This is because, according to Eq.~\eqref{eq:oscillator_strength_TOLO}, the $\Delta\varepsilon^{\textrm{MD}}$ of a polar phonon is determined by the relative changes of $\omega_{\textrm{TO}}$ and $\omega_{\textrm{LO}}$ caused by the frequency shifts $\Delta\omega^{\textrm{SP}}$ of not only this phonon but also the other polar phonons.
Good qualitative agreement is observed between the changes of the temperature behavior of the low-frequency $\varepsilon^{\textrm{lf}}_{0}$ and the static $\varepsilon_{0}$ dielectric permittivities due to the spontaneous magnetodielectric effect below $T_{N}$ in both crystals, as can be seen in the upper and middle frames of Figs.~\ref{fig:dielectric}(b) and~\ref{fig:dielectric}(c). It should be noted that the signs and relative values of the spontaneous magnetodielectric effect observed in the experiments are in satisfactory agreement with $\Delta\varepsilon^{\textrm{MD}} = \Delta\varepsilon^{\textrm{AFM}} - \Delta\varepsilon^{\textrm{NM}}$ obtained by using Eq.~\eqref{eq:oscillator_strength_TOLO} with the $\omega^{\textrm{AFM}}$ and $\omega^{\textrm{NM}}$ phonon frequencies from our DFT simulations for both antiferromagnetic crystals, as presented in Table~\ref{tab:dft_SP}. Remarkably, we have shown that the microscopic lattice-dynamics features of the cubic fluoroperovskites, such as the softening polar phonons and the frequency shifts due to the spin-phonon coupling, manifest themselves in macroscopic effects, which can be effectively studied using low-frequency dielectric spectroscopy. \section{Conclusions} In summary, we have systematically studied the lattice dynamics of the cubic fluoroperovskites by the far-infrared reflectivity technique, supported by first-principles calculations. We have experimentally demonstrated that the polar phonons at the $\Gamma$ point of the Brillouin zone, which are stable in all studied crystals, soften by several cm$^{-1}$ on cooling, indicating the proximity of the studied crystals to incipient ferroelectrics. We revealed that the harmonic $k_{0}$ and anharmonic $k_{\textrm{ah}}$ force constants associated with these softening polar phonons are mutually coupled and correlate with the tolerance factor $t$ of the cubic fluoroperovskites.
Furthermore, we observed that the lower the $t$, the smaller the value of $k_{0}$ and the more significant the temperature change of $k_{\textrm{ah}}$. Thus, the disclosed trend implies that the lower the tolerance factor $t$ of a cubic fluoroperovskite, the closer its anharmonic properties are to those of incipient ferroelectrics. However, according to our simulations, epitaxial strain does not lead to a ferroelectric instability and, hence, the cubic fluoroperovskites are not incipient ferroelectrics. Based on our first-principles calculations, we disclosed the physical origin of the tolerance factor $t$ in the cubic fluoroperovskites. According to our lattice-dynamics simulations, the computed harmonic force constants $k$ of the lowest-frequency phonons tend to decrease with a reduction of $t$ not only at the $\Gamma$ point, but at all high-symmetry points of the Brillouin zone. Thus, the revealed trends point to an incipient lattice instability in the cubic fluoroperovskites, which is realized only in the crystals with small values of $t$ and drives phase transitions from the cubic to the tetragonal and orthorhombic structures. However, these transitions lead to nonpolar rather than polar structures, since the phonon condensation that causes them occurs at the $M$ and $R$ points rather than at the $\Gamma$ point of the Brillouin zone. The disclosed correlation with the tolerance factor $t$ indicates the geometric origin of the observed incipient lattice instability, caused by the steric effect of filling the unit-cell volume with ions rather than by the formation of a strong covalent bond as in the oxide perovskites.
We found that the frequency shifts due to the spin-phonon coupling observed in the antiferromagnetic cubic fluoroperovskites can be understood in terms of the dynamical modulation of the \ch{M}--\ch{F}--\ch{M} bond angle induced by the relevant TO polar phonons, while a competing effect from the bond-length modulation is noticeable for the LO polar phonons. Finally, we have shown that low-frequency dielectric spectroscopy faithfully reflects the observed lattice-dynamics features, such as the softening of the polar phonons and the frequency shifts due to the spin-phonon coupling, in the studied insulating crystals. We believe that our results will stimulate further experimental and theoretical studies of the unusual lattice dynamics of highly anharmonic inorganic metal halide perovskites. These efforts will be relevant to further optimization of their physical properties for the rational design of multifunctional devices. Moreover, we can envisage that the cubic fluoroperovskites, with their different response of the TO and LO polar phonons to the antiferromagnetic ordering, are close to becoming model materials for the rapidly developing field of THz magnetophononics~\cite{stupakiewicz2020ultrafast,afanasiev2021ultrafast}. \section*{Acknowledgments} The single crystals provided by J.-Y.\,Gesland, P.\,P.\,Syrnikov, and S.\,V.\,Petrov were used in the experiments. We are grateful to O.\,A.\,Alekseeva and M.\,P.\,Scheglov for help with the x-ray orientation and preparation of the samples, and to D.\,A.\,Andronikova, A.\,S.\,Sheremet, R.\,G.\,Burkovsky, and A.\,K.\,Tagantsev for fruitful scientific discussions. R.\,M.\,D., N.\,V.\,S., and R.\,V.\,P. acknowledge the support of the Russian Foundation for Basic Research under project No.\,19-02-00457. N.\,N.\,N. and K.\,N.\,B. acknowledge the Ministry of Science and Higher Education of Russia under grant No.\,0039-2019-0004.
Calculations presented in this work were carried out using the GridUIS-2 experimental testbed, being developed under the Universidad Industrial de Santander (SC3-UIS) High Performance and Scientific Computing Centre, a development action with support from UIS Vicerrector\'ia de Investigaci\'on y Extensi\'on (VIE-UIS) and several UIS research groups, as well as other funding resources. Additionally, we acknowledge the support of the XSEDE facilities, a project of the National Science Foundation under grant number ACI-1053575. The authors also acknowledge the Texas Advanced Computing Center (with the Stampede2 and Bridges-2 supercomputers). We also acknowledge the use of the SuperComputing System (Thorny Flat) at WVU, which is funded in part by the National Science Foundation (NSF) Major Research Instrumentation Program (MRI) Award $\#$1726534. A.\,C.\,G.\,C. acknowledges grant No.\,2677: ``Quiralidad y Ordenamiento Magnético en Sistemas Cristalinos: Estudio Teórico desde Primeros Principios,'' supported by the VIE--UIS. A.\,H.\,R. acknowledges the support of the DMREF-NSF 1434897 and NSF OAC-1740111 projects.
\section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
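Putting the commands above together, a title and two separately defined authors might look like the following (the names, affiliations, and e-mail addresses are purely illustrative):
\begin{verbatim}
\title[Short Title]{A Full Title, Capitalized Appropriately}

\author{Ada Lovelace}
\email{[email protected]}
\affiliation{%
  \institution{Example University}
  \city{London}
  \country{United Kingdom}}

\author{Charles Babbage}
\email{[email protected]}
\affiliation{%
  \institution{Example University}
  \city{London}
  \country{United Kingdom}}
\end{verbatim}
Defining each author in a separate \verb|\author| block, with its own \verb|\email| and \verb|\affiliation|, is what enables accurate metadata extraction.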
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,[email protected]} \email{[email protected]} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. 
This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. 
\section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption – their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. 
If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{bibfile} \end{verbatim} where ``bibfile'' is the name, without the ``.bib'' suffix, of the \BibTeX\ file. \section{Conclusion}\label{sec:con} In this paper, we propose a novel multi-model architecture MuCoS{} for code search. Instead of training one single model for code snippets and queries, we train multiple models, each with specific features, and combine them, which helps us better capture the diverse meanings of code snippets and natural language queries. To train models with specific features, we use a data augmentation and separation strategy to force the models to capture features from specific perspectives. Then we use an ensemble learning strategy to combine our models. Our experimental study has shown that the proposed approach is effective and outperforms other state-of-the-art approaches. This work is ongoing. In the future, we plan to evaluate our methods on more datasets and to train more individual learners focusing on more features. Moreover, we want to build a selection module to help select the individual learners that are most suitable for the search query. 
\section{Experimental Setup} \label{sec:exp} \subsection{Dataset} To evaluate the effectiveness of our method, we use a widely used dataset for the code search task: CodeSearchNet \cite{Husain2019}, containing $500{,}754$ pairs of function-level Java code snippets and their descriptions. There are $454{,}443$ pairs in the training set, $30{,}655$ pairs in the validation set, and $26{,}909$ pairs in the test set. We extend the training dataset by collecting the same number of negative samples as positive samples. For each positive pair, we randomly select a mismatched query to replace the original query while keeping the code snippet unchanged, which yields a negative pair. \subsection{Evaluation Metrics} We use three widely used metrics for the evaluation of code search methods: FRank, SuccessRate@k, and MRR. The FRank, or best hit rank, is the rank of the first hit result in the result list. A smaller FRank implies lower inspection effort for finding the desired result \cite{Gu2018}. FRank represents the effectiveness of a single code search query. The $SuccessRate@k$ measures the percentage of queries for which at least one correct result exists in the top $k$ ranked results. In our evaluation, it is calculated as follows: $$ \text{SuccessRate@}k=\frac{1}{|Q|} \sum_{q=1}^{|Q|} \delta\left(\text { FRank }_{q} \leq k\right) $$ where $Q$ is a set of queries and $\delta\left(\cdot\right)$ is a characteristic function, i.e., $\delta\left(\cdot\right) = 1$ if the condition is satisfied and $\delta\left(\cdot\right) = 0$ otherwise. A higher $SuccessRate@k$ means better code search performance. The MRR is the average of the reciprocal ranks of the results of a set of queries $Q$. The reciprocal rank of a query is the inverse of its FRank. In our evaluation, it is calculated as follows: $$ MRR=\frac{1}{|Q|} \sum_{q=1}^{|Q|} \frac{1}{FRank_{q}}. 
$$ \section{Introduction} Code search is the most frequent developer activity in software development process \cite{Caitlin15}. Reusable code examples help improve the efficiency of developers in their developing process \cite{Brandt09, Shuai2020}. Given a natural language query that describes the developer's intent, the goal of code search is to find the most relevant code snippet from a large source code corpus. Many code search engines have been developed for code search. They mainly rely on traditional information retrieval (IR) techniques such as keyword matching \cite{Meili15} or a combination of text similarity and Application Program Interface (API) matching \cite{Lv15}. Recently, many works have taken steps to apply deep learning methods \cite{he2016deep,ChoMGBBSB14,wang2019tag2gauss,wang2019tag2vec,yang2020domain} to code search \cite{Gu2018,Cambronero2019,Yan2020,Li2020,Feng2020,Zhu2020,Shuai2020,Ye2020,Haldar2020,Ling2020,Ling2020a,wang2020cocogum}, using neural networks to capture deep and semantic correlations between natural language queries and code snippets, and have achieved promising performance improvements. These methods employ various types of model structures, including sequential models \cite{Gu2018,Cambronero2019,Yan2020,Li2020,Feng2020,Zhu2020,Shuai2020,Ye2020,Haldar2020}, graph models \cite{Ling2020, Guo2020}, and transformers \cite{Feng2020}. Existing deep learning code search methods mainly use a single model to represent queries and code snippets. However, code may have diverse information from different dimensions, such as business logic, specific algorithm, and hardware communication, making it hard for a single code representation module to cover all the perspectives. On the other hand, as a specific query may focus on several perspectives, it is difficult for a single query representation module to represent different user intents. 
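For concreteness, the SuccessRate@$k$ and MRR metrics defined in the Experimental Setup section reduce to a few lines of code. This is a minimal sketch, assuming the evaluation has already produced one FRank value per query:

```python
def success_rate_at_k(franks, k):
    """Fraction of queries whose first correct result appears in the top k."""
    return sum(1 for r in franks if r <= k) / len(franks)

def mrr(franks):
    """Mean reciprocal rank: average of 1/FRank over all queries."""
    return sum(1.0 / r for r in franks) / len(franks)

# Hypothetical FRank values for five queries.
franks = [1, 3, 2, 11, 1]
print(success_rate_at_k(franks, 10))  # 0.8 -- four of the five hits are in the top 10
print(mrr(franks))
```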
\begin{figure}[t] \begin{tcolorbox}[colback=white,colframe=yellow!50!black,boxrule=0.2mm,bottom = 0pt] \begin{lstlisting}[language=Java,escapechar=@,linewidth=0.99\columnwidth,xleftmargin=-12pt,frame=single,framesep=0mm,backgroundcolor=\color{white},tabsize=1, caption={},captionpos=b] public static String replaceHtmlEntities(String @\ghl{content}@, Map<String, Character> @\ghl{map}@) { for (Entry<String, Character> @\ghl{entry}@ : escapeStrings.entrySet()) { if (@\ghl{content}@.indexOf(@\ghl{entry}@.getKey()) != -1) { @\ghl{content}@ = @\ghl{content}@.replace(@\ghl{entry}@.getKey(), String.valueOf(@\ghl{entry}@.getValue())); } } return @\ghl{content}@; } \end{lstlisting} \end{tcolorbox} \begin{tcolorbox}[colback=white,colframe=yellow!50!black,boxrule=0.2mm,bottom = 0pt] \begin{lstlisting}[language=Java,escapechar=@,linewidth=0.99\columnwidth,xleftmargin=-12pt,frame=single,framesep=0mm,backgroundcolor=\color{white},tabsize=1, caption={},captionpos=b] public static String replaceHtmlEntities(String @\hl{var0}@, Map<String, Character> @\hl{var2}@) { for (Entry<String, Character> @\hl{var1}@ : escapeStrings.entrySet()) { if (@\hl{var0}@.indexOf(@\hl{var1}@.getKey()) != -1) { @\hl{var0}@ = @\hl{var0}@.replace(@\hl{var1}@.getKey(), String.valueOf(@\hl{var1}@.getValue())); } } return @\hl{var0}@; } \end{lstlisting} \end{tcolorbox} \caption{Code before and after variable renaming.} \label{fig:variable_renaming} \end{figure} \begin{figure}[t] \begin{tcolorbox}[colback=white,colframe=yellow!50!black,boxrule=0.2mm,bottom = 0pt] \begin{lstlisting}[language=Java,escapechar=@,linewidth=0.99\columnwidth,xleftmargin=-12pt,frame=single,framesep=0mm,backgroundcolor=\color{white},tabsize=1, caption={},captionpos=b] public void doAESEncryption() throws Exception{ if(!initAESDone) initAES(); cipher = Cipher.getInstance("AES/CBC/PKCS5Padding"); //System.out.println(secretKey.getEncoded()); @\hl{cipher.init(Cipher.ENCRYPT\_MODE, secretKey);}@ @\ghl{AlgorithmParameters params = 
cipher.getParameters();}@ iv = params.getParameterSpec(IvParameterSpec.class).getIV(); secretCipher = cipher.doFinal(secretPlain); clearPlain(); } \end{lstlisting} \end{tcolorbox} \begin{tcolorbox}[colback=white,colframe=yellow!50!black,boxrule=0.2mm,bottom = 0pt] \begin{lstlisting}[language=Java,escapechar=@,linewidth=0.99\columnwidth,xleftmargin=-12pt,frame=single,framesep=0mm,backgroundcolor=\color{white},tabsize=1, caption={},captionpos=b] public void doAESEncryption() throws Exception{ if(!initAESDone) initAES(); cipher = Cipher.getInstance("AES/CBC/PKCS5Padding"); //System.out.println(secretKey.getEncoded()); @\ghl{AlgorithmParameters params = cipher.getParameters();}@ @\hl{cipher.init(Cipher.ENCRYPT\_MODE, secretKey);}@ iv = params.getParameterSpec(IvParameterSpec.class).getIV(); secretCipher = cipher.doFinal(secretPlain); clearPlain(); } \end{lstlisting} \end{tcolorbox} \caption{Code before and after statement permutation.} \label{fig:permute_statement} \end{figure} \begin{figure*}[t] \includegraphics[width=1.0\textwidth]{images/CodeSearch.pdf} \caption{An overview of MuCoS{}. This framework consists of three phases: data augmentation/separation, individual encoder fine-tuning, and ensemble learning. We first generate three datasets, each of which focuses on a specific aspect of code snippets, based on data augmentation or separation. Then we learn three individual code search models by fine-tuning three pre-trained CodeBert models on the generated datasets, respectively. Finally, we apply a multi-layer perceptron to the concatenation of the encodings from the three individual models for ensemble learning.} \label{fig:overview} \end{figure*} To address the problems above, we propose \textbf{MuCoS{}}: \textbf{Mu}lti-Model for \textbf{Co}de \textbf{S}earch. First, we use a data augmentation strategy to train multiple models that focus on different perspectives of code. Then, we combine these models using an ensemble learning strategy. 
For one natural language query in the training corpus, there are, after data augmentation, multiple corresponding code snippets with the same functionality but different structures or variable names. We believe this data augmentation strategy induces the model to focus less on structure or variable information. To be specific, we combine three models which focus on structure, local variables, and API invocation information, respectively. The three models roughly represent three typical scenarios in code search where the query is sensitive to business logic, a specific algorithm, or hardware communication. Since programs carry both lexical information and structure information, to build the structure-focused model we can make the model rely less on local variables so that it focuses more on the structure. In practice, we employ a data augmentation technique, using the variable renaming method \cite{Rabin2020} to rename local variables to $varN$ as shown in Fig.\xspace~\ref{fig:variable_renaming}, while preserving the original semantics. Similarly, we learn a variable-focused model based on an augmented dataset where new code snippets are added by permuting two statements of an existing one whenever its semantics is unchanged \cite{Rabin2020}, as shown in Fig.\xspace~\ref{fig:permute_statement}. We also select programs that have API invocations from the JVM library to build a model that focuses on APIs. After learning several code retrieval models which represent different characteristics of code, we employ an ensemble strategy to combine the models. Our preliminary results suggest that our method achieves an improvement of around 12\% over the state-of-the-art baselines. To summarize, this paper makes the following contributions: \begin{itemize} \item We propose a novel multi-model architecture MuCoS{} for code search. We use an ensemble learning strategy to capture different perspectives of code information and query intents. 
\item We perform data augmentation based on the semantic invariance of programs to obtain multiple individual learners which focus on different features of code. To the best of our knowledge, we are the first to employ data augmentation for the semantic code search problem. \item We conduct extensive experiments to evaluate the effectiveness of our approach. The results show that our approach significantly outperforms the state-of-the-art methods by 12\% on the standard dataset and 14\% on the sampled dataset. \end{itemize} \section{Proposed Model: MuCoS{}}\label{sec:model} Fig.\xspace~\ref{fig:overview} illustrates the overall structure of the proposed model framework MuCoS{}. This framework consists of three phases: data augmentation/separation, individual encoder fine-tuning, and ensemble learning. The basic idea of our method is to learn several individual encoders, each of which focuses on a specific aspect of code snippets, based on different datasets. These datasets are generated from the original dataset through domain knowledge of the programming language, such as API information from the JVM library and semantically equivalent transformations of code snippets \cite{Rabin2020}. Finally, we leverage ensemble learning to integrate the individual modules. \subsection{Data Augmentation or Separation} The first step is to generate proper datasets for feeding the individual learners. We design three data augmentation or separation strategies for separately building a structure-focused dataset, a variable-focused dataset, and an API-focused dataset: \begin{itemize} \item \textbf{Structure-focused data generation:} To reduce the impact of local variable names and make the model focus more on the program structure, we use the variable renaming program transformation method of \cite{Rabin2020} to generate semantically equivalent code snippets with local variable names replaced by $varN$, as shown in Fig.\xspace~\ref{fig:variable_renaming}. Then we mix them with the original data. 
\item \textbf{Variable-focused data generation:} To reduce the impact of structure and make the model focus more on lexical information, we use the statement permutation program transformation method of \cite{Rabin2020} to change the program structure, swapping two independent statements (i.e., with no data or control dependency) in a basic block of a method while maintaining semantic equivalence, as shown in Fig.\xspace~\ref{fig:permute_statement}. Then we mix them with the original data. \item \textbf{API-focused data generation:} We select the samples whose code snippets contain API invocations from the JVM library as the training data for the API-focused model. \end{itemize} The general idea behind the first generation method is that the model will not rely heavily on variable names to determine the similarity between query-code pairs, because code snippets with different variable names correspond to the same query, and hence the model will pay more attention to structural characteristics. Similarly, the model fine-tuned on the second dataset will pay more attention to the variable names of code snippets. The third generation method is straightforward: we directly select the code snippets with API invocations. \subsection{Individual Model Fine-Tuning and Model Combination} We follow CodeBERT \cite{Feng2020} in using a multi-layer bidirectional Transformer as the model architecture. We feed positive and negative samples into the model at a ratio of 1:1. To build negative examples, for each positive sample we change the query into a randomly mismatched one while the code snippet remains unchanged. We use an ensemble learning strategy to combine the structure-focused model, variable-focused model, and API-focused model. 
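The variable-renaming transformation used for the structure-focused dataset (Fig.\xspace~\ref{fig:variable_renaming}) can be illustrated with a naive regex-based sketch. This is an assumption for illustration only; the actual transformation of \cite{Rabin2020} operates on the parse tree rather than on raw text:

```python
import re

def rename_locals(java_src, local_names):
    """Rewrite the given local variable names to var0, var1, ...

    Naive sketch: whole-word textual replacement, no AST analysis,
    so it does not handle shadowing, fields, or string literals.
    """
    for i, name in enumerate(local_names):
        java_src = re.sub(r'\b%s\b' % re.escape(name), 'var%d' % i, java_src)
    return java_src

src = "content = content.replace(entry.getKey(), String.valueOf(entry.getValue()));"
print(rename_locals(src, ["content", "entry"]))
# var0 = var0.replace(var1.getKey(), String.valueOf(var1.getValue()));
```

Each renamed snippet is paired with the original query, so the model sees the same description attached to lexically different but semantically identical code.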
We first concatenate the last-layer hidden states of these models, then append an MLP classifier with two linear layers on top of the concatenated representation, training and updating its weights using the original data. The loss functions of both individual model fine-tuning and ensemble learning are the cross entropy discriminating positive pairs from negative pairs. \section{Related Work}\label{sec:rw} Code search is a cross-field of natural language processing and software engineering which aims to retrieve, from a large code corpus, the code snippets that best match the developer's intent expressed in natural language. There are mainly two kinds of approaches to code search: information retrieval (IR) based \cite{Meili15, Lv15} and deep learning based \cite{Gu2018,Cambronero2019,Yan2020,Li2020,Feng2020,Zhu2020,Shuai2020,Ye2020,Haldar2020,Ling2020,Ling2020a}. Most of the existing code search engines rely on IR-based techniques, employing keyword matching or text similarity to retrieve code snippets. Recently, deep learning methods have become mainstream since they are better at capturing deep semantic correlations between code snippets and search queries and have shown promising performance. For example, \cite{Gu2018} proposed CODEnn, which jointly embeds code snippets and natural language descriptions into a high-dimensional vector space and then computes similarity. \cite{Feng2020} presented CodeBERT, a bimodal pre-trained model for natural language and programming language which can solve the code search problem. \section{Results}\label{sec:eval} In this section, we present the results of our experiments and answer three research questions. We also provide a case study. \subsection{RQ1. 
Does MuCoS{} outperform SOTA deep code search methods?} Table \ref{tab:baseline} shows the evaluation results of MuCoS{} compared to several state-of-the-art deep code search models: five baseline models provided by CodeSearchNet \cite{Husain2019}, the classical joint embedding model CODEnn \cite{Gu2018}, and the pre-trained model CodeBert \cite{Feng2020}. Among the SOTA models, MuCoS{} achieves the best performance, with an MRR 12\% higher than the best baseline. We test our method with five random seeds; the variance is of the order of $10^{-5}$, which is negligible and shows that our method is stable. \begin{table}[t] \centering \caption{Evaluation of different baselines on the full CodeSearchNet corpus.} \begin{tabular}{l|cccc} \toprule Model & S@1 & S@5 & S@10 & MRR\\ \midrule NBoW &0.499 & 0.698 & 0.752 & 0.589 \\ \hline 1D-CNN & 0.424 & 0.631 & 0.699 & 0.518\\ \hline biRNN & 0.485 & 0.685 & 0.743 & 0.644\\ \hline SelfAtt & 0.486 & 0.682 & 0.738 & 0.575\\ \hline ConvSelfAtt & 0.413 & 0.619 & 0.681 & 0.507\\ \hline CODEnn &0.146 & 0.146 & 0.146 & 0.146\\ \hline CodeBert & 0.642 & 0.792 & 0.825 & 0.708\\ \hline MuCoS{} & \textbf{0.750} & \textbf{0.843} & \textbf{0.860} & \textbf{0.793} \\ \bottomrule \end{tabular} \label{tab:baseline} \end{table} \subsection{RQ2. How does MuCoS{} perform compared to other baselines on a small dataset?} We use a small sampled dataset with 60k samples to test whether code search models can maintain good performance with little data. The training set is sampled from CodeSearchNet's training set and contains 60,000 samples, while the validation and test sets remain unchanged. The baselines are trained with default parameters. Table \ref{tab:sampled} shows that the performance of most baselines drops sharply on the small dataset. Our method and CodeBert have a significantly smaller performance drop than the other methods. 
We conjecture that pre-trained models can better adapt to small-data scenarios on the code search task. Moreover, our method still has a 14\% advantage over CodeBert on the small dataset. \begin{table}[!t] \centering \caption{Evaluation of different baselines on the sampled CodeSearchNet corpus.} \begin{tabular}{l|cccc} \toprule Model & S@1 & S@5 & S@10 & MRR\\ \midrule NBoW & 0.271 &0.441 &0.507 & 0.354 \\ \hline 1D-CNN & 0.052 &0.151 & 0.224 & 0.110\\ \hline biRNN & 0.178 & 0.364 & 0.454 & 0.270 \\ \hline SelfAtt & 0.298 & 0.487 & 0.562 & 0.388 \\ \hline ConvSelfAtt & 0.193 & 0.375 & 0.461 & 0.282 \\ \hline CODEnn &0.043 &0.043 &0.043 &0.043\\ \hline CodeBert & 0.624 & 0.773 & 0.805 & 0.661\\ \hline MuCoS{} & \textbf{0.702} & \textbf{0.815} & \textbf{0.831} & \textbf{0.754} \\ \bottomrule \end{tabular} \label{tab:sampled} \end{table} \subsection{RQ3. How do the individual models in MuCoS{} affect its overall effectiveness?} In this research question, we evaluate whether each individual model contributes to building our final model MuCoS{}. Fig.\xspace~\ref{fig:individual} shows the results. Surprisingly, the three models that capture individual features all perform better than the baselines. The reason may be that the baselines do not model the information of different features separately, which causes a baseline model to conflate information from different features. On the other hand, MuCoS{} is significantly better than the three individual learners, which is also in line with the basic assumptions of ensemble learning. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{images/individual_models.png} \vspace{5pt} \caption{Evaluation of the individual models in MuCoS{}.} \label{fig:individual} \end{figure} \subsection{Case Study} We now provide an example to show that our individual models can capture specific features and contribute to the performance of MuCoS{}. 
Fig.\xspace~\ref{fig:3-1} shows the corresponding code snippet for the natural language query ``get the field label''. We evaluate this query on our individual models and find that the correct result ranks first in the API-focused model, third in the structure-focused model, fifth in the var-focused model, and also first in our MuCoS{} model. In this case, the API-focused model captures the code features better than the structure-focused model and the var-focused model. Since the query overlaps heavily with the APIs in the code snippet, the API-focused model can capture more information and achieve better performance. It also contributes to the overall performance of MuCoS{}. \begin{figure}[t] \begin{tcolorbox}[colback=white,colframe=yellow!50!black,boxrule=0.2mm,bottom = 0pt] \begin{lstlisting}[language=Java,escapechar=@,linewidth=0.99\columnwidth,xleftmargin=-12pt,frame=single,framesep=0mm,backgroundcolor=\color{white},tabsize=1, caption={},captionpos=b] public static String createLabelWithNameSpace( final String namespace, final String fieldName, final ResourceBundle bundle ) { String label; try { label = bundle.getString( namespace + '.' + fieldName ); } catch ( MissingResourceException mre ) { label = generateLabelValue( fieldName ); } return label; } \end{lstlisting} \end{tcolorbox} \caption{The corresponding code snippet to the query ``get the field label''.} \label{fig:3-1} \end{figure}
\section{Introduction} At the end of the sixties, Parker \cite{pa1}, \cite{pa2} investigated particle creation in a dynamical universe using adiabatic vacuum states. Later, L\"uders and Roberts gave an exact mathematical definition in \cite{lr}, based on the algebraic approach to QFT of Haag and Kastler \cite{hk}. In their article the Klein-Gordon equation is solved by the Fourier method. As a part of this, they found the equation for the time-dependent part of the state distribution, which leads to another equation whose solutions describe adiabatic vacuum states and, as a limiting case, Hadamard states. They do not solve this equation exactly, but by iteration. All iterative solutions were obtained under the assumption of smoothness of the scale factor.\\ In the first part of this paper we give a construction of adiabatic vacuum states and Hadamard states according to L\"uders and Roberts \cite{lr}. In the second part we give a statement of the Hadamard condition, in the form of the characterization of the microlocal structure of the $2$-point Wightman distribution due to Radzikowski \cite{rad}. Here we assume the validity of the Hadamard condition and its corollaries to prove that the scale factor is smooth. \section{Hadamard Condition for Linear Klein-Gordon Field} Let $M$ be a globally hyperbolic four-dimensional Lorentzian manifold with metric tensor $g$ and Levi-Civita connection $\nabla$. Consider a scalar field $\Phi:M\to \mathbb {R}$ satisfying the Klein-Gordon equation \begin{equation} \label{kg} (\Box_{g}+m^{2})\Phi=0. \end{equation} The global hyperbolicity of $M$ implies that the Cauchy problem for this equation is well-posed. Denote by $C^{\infty}_{0}(M)$ the space of smooth functions with compact support on $M$ and by $C^{\infty}(M)$ the space of smooth functions on $M$. 
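For reference, in local coordinates the wave operator in (\ref{kg}) takes the standard form $$\Box_{g}\Phi=\frac{1}{\sqrt{|\det g|}}\,\partial_{\mu}\left(\sqrt{|\det g|}\ g^{\mu\nu}\partial_{\nu}\Phi\right),$$ which, with signature $(+,-,-,-)$, reduces to $\partial_{t}^{2}-\Delta$ in Minkowski spacetime.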
There are two uniquely determined continuous operators $$\Delta_{R},\Delta_{A}:C^{\infty}_{0}(M)\to C^{\infty}(M),$$ such that $$(\Box_{g}+m^{2})\Delta_{R}f = \Delta_{R}(\Box_{g}+m^{2})f=f,$$ and similarly for $\Delta_{A}$, and $$\begin{array}{rcl} {\rm supp}\ (\Delta_{A}f)&\subset & J^{-}({\rm supp}\ f),\\ {\rm supp}\ (\Delta_{R}f)&\subset & J^{+}({\rm supp}\ f), \end{array}$$ for $f\in C^{\infty}_{0}(M)$, where $J^{+}(S)$ means the causal future and $J^{-}(S)$ the causal past of a set $S\subset M$. These operators are called the retarded ($\Delta_{R}$) and the advanced ($\Delta_{A}$) propagator. Their difference $E=\Delta_{R}-\Delta_{A}$ is the propagator of the Klein-Gordon equation. The following holds: $$\begin{array}{rcl} (\Box_{g}+m^{2})Ef & = & E(\Box_{g}+m^{2})f=0,\\ {\rm supp}\ (Ef)&\subset & J^{+}({\rm supp}\ f)\cup J^{-}({\rm supp}\ f), \end{array}$$ for $f$ as above. The operators $\Delta_{R}$, $\Delta_{A}$ and $E$ have continuous extensions to operators $\Delta_{R}',\Delta_{A}', E':\mathcal {E'}(M)\to\mathcal {D'}(M)$, where $\mathcal {D'}(M)$ resp. $\mathcal {E'}(M)$ is the space of distributions resp. the space of distributions with compact support.\\ Now let $\Sigma$ be an arbitrary Cauchy hypersurface in $M$ with unit normal field $n^{\alpha}$ directed to the future cone. Then there exist operators $$\begin{array}{rclc} \rho_{0}: & C^{\infty}(M) & \to & C^{\infty}(\Sigma)\\ \ & f & \mapsto & f|_{\Sigma}\\ \rho_{1}: & C^{\infty}(M) & \to & C^{\infty}(\Sigma)\\ \ & f & \mapsto & (n^{\alpha}\nabla_{\alpha}f)|_{\Sigma}, \end{array}$$ with adjoints $\rho_{0}',\rho_{1}':\mathcal {E'}(\Sigma)\to\mathcal {E'}(M)$, see (e. 
g.\ \cite{jun},\cite{d}).\\ Let us note that using these operators we can construct, given Cauchy initial data $u_{0}$, $u_{1}\in C^{\infty}_{0}(\Sigma)$, solutions of the Klein-Gordon equation (for the details see \cite{d}).\\ This makes it possible to describe elements of the phase space using initial data on the Cauchy surface $\Sigma$. Suppose that $d^{3}\sigma$ is the volume form on this surface and define the real symplectic space $(\Gamma,\varsigma)$, where $\Gamma = C^{\infty}(\Sigma)\times C^{\infty}(\Sigma)$ is formed by the initial data of (\ref{kg}) and the real valued symplectic form $\varsigma$ is given by $$\varsigma\left(\left(^{u_{1}}_{p_{1}}\right),\left(^{u_{2}}_{p_{2}}\right)\right) =-\int_{\Sigma}[u_{1}p_{2}-u_{2}p_{1}]\ {\rm d}^{3}\sigma.$$ To this symplectic space $(\Gamma, \varsigma)$ there exists an associated Weyl algebra $\mathcal {A}[\Gamma,\varsigma]$, generated by elements $W(F)$, $F\in\Gamma$, subject to the relations $$W(F)^{*}=W(F)^{-1}=W(-F),$$ $$W(F_{1})W(F_{2})=e^{-\frac{i}{2}\varsigma (F_{1}, F_{2})}W(F_{1}+F_{2}), \ \mbox {for all}\ F_{1},F_{2}\in\Gamma.$$ This Weyl algebra is a local algebra of observables in the sense of Haag and Kastler \cite{hk}.
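To make the content of the Weyl relations transparent, one may formally write $W(F)=e^{i\Phi(F)}$ for a (hypothetical, unbounded) field operator $\Phi(F)$; this heuristic is not part of the algebraic framework, but applying the Baker-Campbell-Hausdorff formula to the second relation then recovers the canonical commutation relations over the phase space,
$$[\Phi(F_{1}),\Phi(F_{2})]=i\varsigma (F_{1}, F_{2})\,{\bf 1},$$
so that $\mathcal {A}[\Gamma,\varsigma]$ may be viewed as a bounded, exponentiated version of the CCR algebra over $(\Gamma,\varsigma)$.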
States on the algebra $\mathcal {A}$ are positive, normalized, linear functionals on $\mathcal {A}$.\\ Now we can define the {\em quasifree state} $\omega_{\mu}$ associated with $\mu$ by $$\omega_{\mu}(W(F))=e^{-\frac{1}{2}\mu (F, F)},$$ where $\mu$ is a real-valued scalar product on $\Gamma$ satisfying $$\frac{1}{4}|\varsigma (F_{1}, F_{2})|^{2}\leq \mu (F_{1}, F_{1})\mu (F_{2}, F_{2}).$$ So we are now in a position to introduce the two-point function $$\lambda^{(2)}(F_{1},F_{2})=\mu (F_{1}, F_{2})+\frac{i}{2}\varsigma (F_{1}, F_{2}),$$ and the Wightman two-point distribution \begin{equation} \label{2b} \Lambda^{(2)}(f_{1}, f_{2})=\lambda^{(2)}\left(\left(^{\rho_{0}Ef_{1}}_{\rho_{1}Ef_{1}}\right) ,\left(^{\rho_{0}Ef_{2}}_{\rho_{1}Ef_{2}}\right)\right), \end{equation} which enters into the definition of the Hadamard state.\\ To introduce the latter, we use the microlocal formulation discovered by Radzikowski in \cite{rad}. (An overview of the theory of pseudodifferential operators, microlocal analysis and wavefront sets can be found in the book \cite{gs}.) Denote by $T^{*}M$ the cotangent bundle of $M$. For $(x_{i},\xi_{i}) \in T^{*}M$, $i=1,2$, writing $(x_{1},\xi_{1})\sim (x_{2},\xi_{2})$ means that there is a null geodesic $\gamma$ such that $x_{1},x_{2}\in\gamma$, with $\xi_{1}$ tangent to $\gamma$ at $x_{1}$ and $\xi_{2}$ obtained from $\xi_{1}$ by parallel transport to $x_{2}$ along $\gamma$. We use Theorem 3.9 of \cite{jun} as the microlocal formulation of the Hadamard condition. \begin{definice} A quasifree state of a Klein-Gordon field on a globally hyperbolic spacetime is a {\em Hadamard state} if and only if the wavefront set of the two-point distribution (\ref{2b}) has the form $$WF(\Lambda^{(2)})=\{(x_{1},\xi_{1};x_{2},\xi_{2})\in T^{*}(M\times M)\setminus \{0\};(x_{1},\xi_{1})\sim (x_{2},\xi_{2}),\xi_{1}^{0}\geq 0\}.$$ \end{definice} Thus the $2$-point distribution of every Hadamard state has this microlocal structure.
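For orientation, the prototype is the Minkowski vacuum (corresponding to $a\equiv 1$ in the notation of the next section), whose two-point distribution is, in one standard convention,
$$\Lambda^{(2)}(x_{1},x_{2})=\frac{1}{(2\pi)^{3}}\int_{\mathbb {R}^{3}}\frac{{\rm d}^{3}k}{2\omega(\vec k)}\, e^{-i\omega(\vec k)(t_{1}-t_{2})+i\vec k\cdot (\vec x_{1}-\vec x_{2})},\qquad \omega(\vec k)=\sqrt{|\vec k|^{2}+m^{2}}.$$
Its singular support lies on pairs of points connected by null geodesics, and the covectors in its wavefront set satisfy the positive-frequency condition $\xi_{1}^{0}\geq 0$, in accordance with the definition above.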
Note that the Hadamard state is defined locally, but according to the ``local-to-global singularity theorem'' the local Hadamard condition implies the global Hadamard condition \cite{rad1}. \section{Adiabatic Vacuum and Hadamard States on Robertson-Walker Spacetimes} Let $M=\mathbb {R}\times\Sigma$ be a Lorentz manifold equipped with the Robertson-Walker metric \begin{equation} \label{metr} ds^{2}=dt^{2}-a^{2}(t)\left[\frac{dr^{2}}{1-\kappa r^{2}}+ r^{2}(d\theta^{2}+\sin^{2}\theta d\varphi^{2})\right], \end{equation} where $\varphi\in[0, 2\pi]$, $\theta\in [0, \pi]$, $r\in [0, \infty)$ if $\kappa=-1, 0$ and $\varphi\in[0, 2\pi]$, $\theta\in [0, \pi]$, $r\in [0, 1)$ if $\kappa=1$, with $a(t)\in C^{2}(\mathbb {R})$, $a(t)>0$ for all $t\in\mathbb{R}$. The function $a(t)$ is called a {\em scale factor}. The Riemannian manifolds $\Sigma^{\kappa}$ are defined as $$\begin{array}{rcl} \Sigma^{+}&=&\left\{x\in\mathbb {R}^{4};\ (x^{0})^{2}+\sum_{i=1}^{3}(x^{i})^{2}=1\right\},\\ \Sigma^{0}&=&\left\{x\in\mathbb {R}^{4};\ x^{0}=0\right\},\\ \Sigma^{-}&=&\left\{x\in\mathbb {R}^{4};\ (x^{0})^{2}-\sum_{i=1}^{3}(x^{i})^{2}=1,\ x^{0}>0\right\}. \end{array}$$ On $\Sigma^{\kappa}$ we consider the metric tensor $$s_{ij}=\left(\begin{array}{ccc} \frac{1}{1-\kappa r^{2}} & \ & \ \\ \ & r^{2} & \ \\ \ & \ & r^{2}\sin^{2}\theta \end{array}\right).$$ Cauchy surfaces are of the form $\Sigma_{t}=\{t\} \times\Sigma$ with $n^{\alpha}=(1,0,0,0)$. The hypersurfaces $\Sigma_{t}$ are therefore equipped with the metrics $h_{ij}=a^{2}(t)s_{ij}$, and in this setting we study the Klein-Gordon equation.
$$(\Box_{g}+m^{2})\Phi=\frac{\partial^{2}\Phi}{\partial t^{2}}+3\frac{\dot a(t)}{a(t)} \frac{\partial\Phi}{\partial t}+(-^{(3)}\Delta_{h}+m^{2})\Phi=0,$$ where $^{(3)}\Delta_{h}$ is the Laplace-Beltrami operator on the Cauchy surface $\Sigma$, $$^{(3)}\Delta_{h}=\frac{1}{a^{2}(t)}\left\{(1-\kappa r^{2})\frac{\partial^{2}}{\partial r^{2}} +\frac{2-3\kappa r^{2}}{r}\frac{\partial}{\partial r}+\frac{1}{r^{2}}\Delta (\theta, \varphi) \right\},$$ $$\Delta (\theta, \varphi)=\frac{1}{\sin\theta}\left[\frac{\partial}{\partial\theta} \left(\sin\theta\frac{\partial}{\partial\theta}\right)+\frac{1}{\sin\theta} \frac{\partial^{2}}{\partial\varphi^{2}}\right].$$ This is a linear partial differential equation solvable by the Fourier method. By separation of variables we get $$\Phi(t,r,\theta,\varphi)=\int T_{\vec k}(t)\phi_{\vec k}(r,\theta,\varphi)d\mu(\vec k),$$ where $T_{\vec k}(t)$ is obtained as the solution of the ordinary differential equation \begin{equation} \label{te} \ddot T_{\vec k}(t)+3\frac{\dot a(t)}{a(t)}\dot T_{\vec k}(t)+\omega^{2}_{k}(t)T_{\vec k}(t)=0, \end{equation} $$\omega^{2}_{k}(t)=\frac{E(k)}{a^{2}(t)}+m^{2},$$ and the measure is defined by $$\begin{array}{rcl} \int d\mu(\vec k)&=&\sum_{k=0}^{\infty}\sum_{l=0}^{k}\sum_{m=-l}^{l},\ \vec k=(k,l,m),\ E(k)=k(k+2)\ {\rm for}\ \kappa =1,\\ \int d\mu(\vec k)&=&\int_{\mathbb {R}^{3}}{\rm d}^{3}k,\ \vec k\in\mathbb {R}^{3},\ k=|\vec k|,\ E(k)=k^{2}\ {\rm for}\ \kappa =0,\\ \int d\mu(\vec k)&=&\int_{\mathbb {R}^{3}}{\rm d}^{3}k,\ \vec k\in\mathbb {R}^{3},\ k=|\vec k|,\ E(k)=k^{2}+1\ {\rm for}\ \kappa =-1. \end{array}$$ Each of these Riemannian manifolds has its own system of generalized eigenfunctions (for details see \cite {jun}).
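As an illustration, in the spatially flat case $\kappa=0$ the generalized eigenfunctions may be taken (with a standard normalization) to be the plane waves $\phi_{\vec k}(\vec y)=(2\pi)^{-3/2}e^{i\vec k\cdot\vec y}$, for which
$$-^{(3)}\Delta_{h}\,\phi_{\vec k}=\frac{|\vec k|^{2}}{a^{2}(t)}\,\phi_{\vec k}=\frac{E(k)}{a^{2}(t)}\,\phi_{\vec k},$$
so the separation ansatz reproduces exactly the frequency $\omega_{k}^{2}(t)=E(k)/a^{2}(t)+m^{2}$ entering (\ref{te}).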
This system of functions is complete and orthonormal, hence we can define a generalized Fourier transform $$\begin{array}{rcl} \tilde{}: L^{2}(\Sigma) & \to & L^{2}(\Sigma)\\ h & \mapsto & \tilde{h}(\vec k) =\int_{\Sigma}\overline{\phi_{\vec k}(\vec y)} h(\vec y)d^{3}\sigma, \end{array}$$ where $d^{3}\sigma=\sqrt{|s|}dy=(1-\kappa r^{2})^{-\frac{1}{2}}r^{2}\sin\theta drd\theta d\varphi$. The phase space $(\Gamma, \varsigma)$ of the initial data $\Gamma=C^{\infty}_{0}(\Sigma)\times a^{3}(t)C^{\infty}_{0}(\Sigma)$ has the symplectic form $$\varsigma(F_{1}, F_{2})=-a^{3}(t)\int_{\Sigma}[q_{1}p_{2}-q_{2}p_{1}]d^{3}\sigma,$$ for $F_{i}=\left(q_{i} \atop a^{3}(t)p_{i}\right)\in\Gamma$, $i=1,2$.\\ Following a theorem in \cite{jun} (for the proof see \cite{lr}) we introduce two parameters which will serve to describe certain states. \begin{veta} The homogeneous, isotropic quasifree states for the free Klein-Gordon field in a Robertson-Walker spacetime are given by the following 2-point function $$\lambda^{(2)}(F_{1}, F_{2})=\int \langle\overline{\tilde{F}_{1}(\vec k)}, S(k)\tilde{F}_{2}(\vec k)\rangle\, d\mu(\vec k),$$ $$ S(k)=\left(\begin{array}{cc} |p(k)|^{2} & -q(k)\overline {p(k)}\\ -\overline{q(k)}p(k) & |q(k)|^{2} \end{array}\right),$$ where $p(k)$ and $q(k)$ are (essentially bounded measurable) complex valued functions satisfying $$\overline{q(k)}p(k)-q(k)\overline{p(k)}=-i.$$ \end{veta} Using these parameters we can, according to \cite{jun}, write down the formulas defining the adiabatic vacuum states, which in the limit $n\to\infty$ give the Hadamard states. \begin{definice} An {\em adiabatic vacuum state of order} $n$ is a homogeneous, isotropic Fock state whose 2-point function is given by the functions $q(k)=T_{k}(t)$, $p(k)=a^{3}(t)\dot T_{k}(t)$, where $T_{k}(t)$ is a solution of the differential equation (\ref{te}) with initial conditions at time $t$ \begin{eqnarray} T_{k}(t) & = & W^{(n)}_{k}(t)\\ \dot T_{k}(t) & = & \dot W^{(n)}_{k}(t).
\end{eqnarray} Here, \begin{equation} W^{(n)}_{k}(t)=\frac{e^{-i\int_{t_{0}}^{t}\Omega^{[n]}_{k}(t')dt'}}{a^{3/2}(t) \sqrt{2\Omega^{[n]}_{k}(t)}} \end{equation} is iteratively defined by \begin{eqnarray} (\Omega^{[0]}_{k}(t))^{2} & = & \omega_{k}^{2}(t)=\frac{E(k)}{a^{2}(t)}+m^{2} \\ (\Omega_{k}^{[n+1]})^{2} & = & \omega_{k}^{2}(t)- \frac{3}{4}\left(\frac{\dot a(t)}{a(t)}\right)^{2}- \frac{3}{2}\frac{\ddot a(t)}{a(t)} +\frac{3}{4}\left(\frac{\dot\Omega_{k}^{[n]}(t)}{\Omega_{k}^{[n]}(t)}\right)^{2} -\frac{1}{2}\frac{\ddot\Omega_{k}^{[n]}(t)}{\Omega_{k}^{[n]}(t)}. \end{eqnarray} \end{definice} The functions $\Omega_{k}^{[n]}(t)$ are iterative solutions of the equation $$ \Omega_{k}^{2}(t) = \omega_{k}^{2}(t)-\frac{3}{4}\left(\frac{\dot a(t)}{a(t)}\right)^{2} -\frac{3}{2}\frac{\ddot a(t)}{a(t)} +\frac{3}{4}\left(\frac{\dot\Omega_{k}(t)}{\Omega_{k}(t)}\right)^{2}- \frac{1}{2}\frac{\ddot\Omega_{k}(t)}{\Omega_{k}(t)},$$ which is closely linked to the equation (\ref{te}), see \cite{lr}. From this definition we see that the adiabatic vacuum states depend essentially on the order of the iteration and on the initial time $t$. \section{Main Result} Now we will say that the {\it Hadamard condition is valid} if $$\Omega_{k}^{[n]}(t) {\rm \ exists\ for\ each\ natural\ number\ } n.\ \ \ \ {\rm (H1)}$$ This means, in particular, that $$\Omega_{k}^{[n]}(t) {\rm \ is\ twice\ continuously\ differentiable\ \ \ \ \ (H2)}$$ and $$\Omega_{k}^{[n]}(t)>0,\ \forall t\in\mathbb {R} {\rm \ for\ any\ }n.\ \ \ \ {\rm (H3)}$$ \begin{veta} Let $(M,g)$ be a Robertson-Walker spacetime with $a(t)\in C^{2}(\mathbb {R})$. Suppose that the Hadamard condition is valid. Then $a(t)$ is a smooth function. \end{veta} Proof: We use mathematical induction.
Calculating the first iteration we get $$ (\Omega^{[1]}_{k}(t))^{2} - \frac{1}{4a^{6}(t)\omega_{k}^{4}(t)} [4a^{6}(t)\omega_{k}^{6}(t)-$$ $$-3a^{4}(t)\dot a^{2}(t)\omega_{k}^{4}(t) +6a^{5}(t)\ddot a(t)\omega_{k}^{4}(t) +3[E(k)\dot a(t)]^{2}+$$ \begin{equation} +2E(k)(\ddot a(t)a(t)-\dot a^{2}(t))a^{2}(t)\omega_{k}^{2}(t) -2E(k)a^{6}(t)\dot a^{2}(t)]=0. \end{equation} The expression (10) depends linearly on the highest derivative of $a(t)$, so we can express $$\ddot a(t)=\frac{1}{2\omega_{k}^{2}(t)(3a^{5}(t)+E(k)a^{3}(t))} [4a^{6}(t)\omega_{k}^{6}(t) -3\dot a^{2}(t)a^{4}(t)\omega_{k}^{4}(t)- 3E^{2}(k)\dot a^{2}(t)$$ \begin{equation} -2E(k)\dot a^{2}(t)a^{2}(t)\omega_{k}^{2}(t)-2E(k)a^{6}(t)\dot a^{2}(t)- 4a^{6}(t)(\Omega^{[1]}_{k})^{2}(t)\omega_{k}^{2}(t)]. \end{equation} Since $a(t)\in C^{2}(\mathbb {R})$ by hypothesis, the right-hand side has a continuous derivative, hence so has the left-hand side, i. e. $a(t)\in C^{3}(\mathbb {R})$. This in turn means, by differentiating (11) again, that the right-hand side is in $C^{2}(\mathbb {R})$, hence $a(t)\in C^{4}(\mathbb {R})$.
From (10) we have $$\left(\Omega^{[1]}_{k}(t)\right)^{2}=\frac{6a^{5}(t)\omega_{k}^{4}(t)+2E(k)a^{3}(t)\omega_{k}^{2}(t)}{4a^{6}(t)\omega_{k}^{4}(t)}\,\ddot a(t)+$$ $$+({\rm terms\ involving\ only\ }a(t),\dot a(t)),$$ in particular, $\left(\Omega^{[1]}_{k}(t)\right)^{2}$ depends linearly on $\ddot a(t)$.\\ Now assume $a(t)\in C^{2n}(\mathbb {R})$ and consider the $n$-th iteration ($n\geq 2$) \begin{equation} (\Omega_{k}^{[n]})^{2} =\omega_{k}^{2}(t)- \frac{3}{4}\left(\frac{\dot a(t)}{a(t)}\right)^{2}- \frac{3}{2}\frac{\ddot a(t)}{a(t)} +\frac{3}{4}\left(\frac{\dot\Omega_{k}^{[n-1]}(t)}{\Omega_{k}^{[n-1]}(t)}\right)^{2} -\frac{1}{2}\frac{\ddot\Omega_{k}^{[n-1]}(t)}{\Omega_{k}^{[n-1]}(t)}, \end{equation} where $$\left(\Omega_{k}^{[n-1]}(t)\right)^{2}=F_{n}(a(t),\dot a(t),\ldots, a^{(2n-2)}(t)),$$ and assume that $F_{n}$ depends on $a^{(2n-2)}(t)$ linearly, $$\left(\Omega_{k}^{[n-1]}(t)\right)^{2}=f_{n}(a(t), \dot a(t),\dots, a^{(2n-3)}(t))a^{(2n-2)}(t)+$$ \begin{equation} +({\rm terms\ involving\ only\ }a(t), \dot a(t),\dots, a^{(2n-3)}(t)). \end{equation} This implies $$\left[{\left(\Omega_{k}^{[n-1]}(t)\right)^{2}}\right]\ddot\ =f_{n}(a(t), \dot a(t),\dots, a^{(2n-3)}(t))a^{(2n)}(t)+$$ \begin{equation} +({\rm terms\ involving\ only\ }a(t), \dot a(t),\dots, a^{(2n-1)}(t)), \end{equation} with $f_{n}\in C^{\infty}$.\\ By (9) \begin{equation} (\Omega_{k}^{[n]})^{2} -\bigg[\omega_{k}^{2}(t)- \frac{3}{4}\left(\frac{\dot a(t)}{a(t)}\right)^{2}- \frac{3}{2}\frac{\ddot a(t)}{a(t)} +\frac{3}{4}\left(\frac{\dot\Omega_{k}^{[n-1]}(t)}{\Omega_{k}^{[n-1]}(t)}\right)^{2} -\frac{1}{2}\frac{\ddot\Omega_{k}^{[n-1]}(t)}{\Omega_{k}^{[n-1]}(t)}\bigg]=0.
\end{equation} Since $$\frac{\dot \Omega_{k}^{[n-1]}(t)}{\Omega_{k}^{[n-1]}(t)}= \frac{1}{2}\frac{\left[{\left(\Omega_{k}^{[n-1]}(t)\right)^{2}}\right]\dot\ }{\big(\Omega_{k}^{[n-1]}(t)\big)^{2}},$$ and $$\frac{\ddot \Omega_{k}^{[n-1]}(t)}{\Omega_{k}^{[n-1]}(t)}= \frac{1}{2}\frac{\left[{\left(\Omega_{k}^{[n-1]}(t)\right)^{2}}\right]\ddot\ }{\big(\Omega_{k}^{[n-1]}(t)\big)^{2}} -\left(\frac{\dot\Omega_{k}^{[n-1]}(t)}{\Omega_{k}^{[n-1]}(t)}\right)^{2},$$ it follows from (13) and (14) that the left hand side of (15) depends linearly on $a^{(2n)}(t)$, $$\left(\Omega_{k}^{[n]}(t)\right)^{2}= ({\rm terms\ involving\ only\ }a(t), \dot a(t),\dots, a^{(2n-1)}(t))-$$ \begin{equation} -\frac{1}{4}\frac{f_{n}(a(t), \dot a(t),\dots, a^{(2n-3)}(t))}{\big(\Omega_{k}^{[n-1]}(t)\big)^{2}}a^{(2n)}(t). \end{equation} Thus $$a^{(2n)}(t)=({\rm terms\ involving\ only\ }a(t), \dot a(t),\dots, a^{(2n-1)}(t), \Omega_{k}^{[n-1]}(t))-$$ \begin{equation} -4\frac{\left(\Omega_{k}^{[n]}(t)\right)^{2}\left(\Omega_{k}^{[n-1]}(t)\right)^{2}} {f_{n}(a(t), \dot a(t),\dots, a^{(2n-3)}(t))}. \end{equation} The induction hypothesis $a(t)\in C^{2n}(\mathbb{R})$ together with (H2) implies that the right hand side of (17) is in $C^{2}(\mathbb {R})$. Thus $a^{(2n)}(t)\in C^{2}(\mathbb {R})$, i. e. $a(t)\in C^{2n+2}(\mathbb{R})$.\\ Besides, (16) shows that $\left(\Omega_{k}^{[n]}(t)\right)^{2}$ depends on $a^{(2n)}(t)$ linearly, i. e. (13) holds with $n+1$ in place of $n$. Consequently, by induction on $n$, we conclude that $a(t)\in C^{\infty}(\mathbb{R})$.\\ To make the passage from (16) to (17) completely rigorous, it remains to check that the denominator in (17) does not vanish, i. e.
$$f_{n}(a(t), \dot a(t),\dots, a^{(2n-3)}(t))\not=0.$$ Observe that, by (11), $$f_{2}(a(t), \dot a(t))=\omega_{k}^{2}(t)(3a^{5}(t)+E(k)a^{3}(t)),$$ which is nonzero since $a(t)>0$, while by (16) $$f_{n+1} (a(t), \dot a(t),\dots, a^{(2n-1)}(t))= -\frac{1}{4}\frac{f_{n} (a(t), \dot a(t),\dots, a^{(2n-3)}(t))}{\big(\Omega^{[n-1]}_{k}(t)\big)^{2}}.$$ Iteratively the last relation gives ($n\geq 1$) $$f_{n+1} (a(t), \dot a(t),\dots, a^{(2n-1)}(t))= \left(-\frac{1}{4}\right)^{n-1}f_{2}(a(t),\dot a(t))\prod_{i=1}^{n-1}\frac{1}{\big(\Omega^{[i]}_{k}(t)\big)^{2}},$$ and the product on the right-hand side is nonzero by (H3).\\ Remark: Observe that it follows from (13) and Theorem 2 that the $\Omega^{[n]}_{k}(t)$ are, in fact, not only $C^{2}$ but $C^{\infty}$. \section{Conclusion} We have proved that on the Robertson-Walker spacetime the validity of the Hadamard condition for the free scalar quantum field implies smoothness of the scale factor; together with the paper \cite {lr} we can thus state that in our case the validity of the Hadamard condition is equivalent to the smoothness of the scale factor of the Robertson-Walker spacetime.\\ There are still some open questions, for instance, whether we can derive the same result for other kinds of quantum fields, e. g. Hermitian scalar fields, spinor fields, etc., or whether it also holds for spacetimes other than the Robertson-Walker spacetime. \section{Acknowledgements} The author acknowledges the support from the M\v{S}MT under project MSM4781305904, and from the ESI in Vienna. Special thanks go to M. Engli\v{s} for helpful discussions.
\section{Introduction and Main results} In \cite{QS} the authors study the semilinear elliptic problem \begin{equation}\label{eq:main1} \Delta u = \left(-\log u\right) 1_{\{u > 0\}} \quad \hbox{and} \quad u \geq 0 \qquad \hbox{ in } B_1, \end{equation} proving optimal regularity as well as non-degeneracy of the solutions: essentially, if $d = d(x, \partial \{u > 0\})$, solutions behave like $d^2 |\log d|$. From here one could also show that the free boundary has zero Lebesgue measure. It should be remarked that the main points of interest are along $\partial \{ u > 0\}$. We assume that the origin is such a point, and will discuss the free boundary around the origin. Since such an analysis is local and disregards the behavior of solutions far away from the free boundary $\partial \{ u > 0\}$, we omit the boundary values on $\partial B_1$. Also, all statements about regularity will be uniform in the ball $B_{1/2}$, where the norms depend on some norm of the solution in the unit ball and on the dimension. One of the key difficulties in this problem becomes apparent when one studies the blow-up limits of such solutions, $\frac{u(r \cdot)}{ 2r^2 |\log r|}$: it is not hard to verify (see Section \ref{sec:prelim}) that these converge locally uniformly, along subsequences, to entire solutions of the classical obstacle problem \[ \Delta u = 1_{\{u > 0\}} \quad \hbox{and} \quad u \geq 0 \qquad \hbox{ in } {\mathbb R}^n. \] On the one hand, this problem is well-understood (see \cite{PSU} for a general reference); on the other hand, the scaling for this problem is different from the scaling of \eqref{eq:main1}. This lack of invariance makes it difficult to gain information from compactness arguments in this setting. The other major difficulty is more apparent when one differentiates the equation: $\nabla u$ solves $\Delta \nabla u = - \frac{\nabla u}{u} 1_{\{u > 0\}}$, which is extremely singular near the origin.
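The appearance of the obstacle problem in the blow-up can be seen from a heuristic computation: if $u$ solves \eqref{eq:main1} and $u_{r}(x)=\frac{u(rx)}{2r^{2}|\log r|}$, then
$$\Delta u_{r}(x)=\frac{r^{2}\,\Delta u(rx)}{2r^{2}|\log r|}=\frac{-\log u(rx)}{2|\log r|}\,1_{\{u_{r}>0\}},$$
and since $u(rx)$ is comparable to $r^{2}|\log r|$ at unit scale in $x$, one has $-\log u(rx)=2|\log r|+O(\log|\log r|)$, so the right-hand side formally tends to $1_{\{u_{r}>0\}}$ as $r\rightarrow 0$.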
As most methods for studying the regularity of the free boundary involve differentiating the equation in this fashion (directly or indirectly), they are difficult to apply here. In fact, the techniques employed in \cite{QS} do not seem sufficient to prove regularity of the free boundary or otherwise go beyond what is shown there. In this short note our goal is to explore some alternative tools available for studying problems like \eqref{eq:main1}. We consider a two-phase version of it here: \begin{equation}\label{eq:main2} \Delta u = -\lambda_+ \left(\log u^+ \right) 1_{\{u > 0\}} + \lambda_- \left(\log u^- \right) 1_{\{u < 0\}} \qquad \hbox{ in } B_1, \end{equation} where $u^\pm = \max (\pm u, 0)$. Unlike for \eqref{eq:main1}, the optimal regularity of solutions to this is not known in the literature, and does not follow from the arguments in \cite{QS}. Our main result here proves it using a rather different technique: \begin{theorem}\label{thm:main1} Let $u$ be a solution to \eqref{eq:main2}. Then $u \in C^{2-\log}(B_{1/2})$, i.e. \[ |\nabla u (x) -\nabla u (y) | \leq C|x -y| |\log |x - y||, \] for a constant $C(n, \lambda_+, \lambda_-, \int_{B_1}u^2)$. \end{theorem} The proof follows an approach to regularity using a Weiss-type monotonicity formula, somewhat like in \cite{ASUW}. The major difference is that it is not actually clear that the Weiss energy associated to this problem is almost monotone \emph{a priori}; unless one already knows the optimal regularity of solutions, there is potentially a non-integrable error term in the monotonicity formula. Our argument therefore simultaneously proves the monotonicity of the appropriate energy and the regularity of $u$, rather than just using the former to establish the latter. Moreover, to make this work we also have to use a nearly-optimal regularity result for $u$, Lemma \ref{lem:log2reg}, within the argument to estimate some of the problematic terms in the monotone quantity.
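For comparison, in the classical obstacle problem $\Delta u=1_{\{u>0\}}$, $u\geq 0$, the Weiss energy takes (up to normalizing constants) the form
$$W(u,r)=\frac{1}{r^{n+2}}\int_{B_{r}}\left(\frac{1}{2}|\nabla u|^{2}+u\right)-\frac{1}{r^{n+3}}\int_{\partial B_{r}}u^{2},$$
and $r\mapsto W(u,r)$ is nondecreasing, being constant precisely when $u$ is homogeneous of degree two. It is the analogue of this quantity for \eqref{eq:main2}, with its additional logarithmic terms, whose almost-monotonicity is not clear a priori.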
While we plan to study the regularity of the free boundary for \eqref{eq:main1}, \eqref{eq:main2} and related questions in future work, this seems to be more delicate and is not treated here. The structure of this paper is as follows: Section \ref{sec:prelim} establishes notation and collects useful results readily available in the literature. Then Section \ref{sec:subopt} covers straightforward suboptimal regularity results which are nonetheless needed in later sections. In Section \ref{sec:growth} we prove the key growth lemma, which contains the main ideas of this note. Finally, Section \ref{sec:proofmain} gives a proof of Theorem \ref{thm:main1} using this growth lemma and some PDE techniques, while Section \ref{sec:nondegeneracy} presents a nondegeneracy property that shows that Theorem \ref{thm:main1} is essentially optimal. \section{Preliminary Analysis} \label{sec:prelim} \subsection{Definitions and Notation} Let $\Omega$ be a smooth domain, and consider minimizers of the functional \[ E(u; \Omega) = \int_{\Omega} \frac{1}{2}|\nabla u|^2 + F(u), \] where \[ F(t) = \begin{cases} (\lambda_+ t_+ + \lambda_- t_-) (1 - \log |t|) & t\neq 0 \\ 0 & t = 0 . \end{cases} \] In particular, we say that \emph{$u$ is a minimizer of $E$ on $\Omega$} if $E(u; \Omega) \leq E(v; \Omega)$ among all functions $v$ in $H^1(\Omega)$ with $v - u \in H^1_0(\Omega)$. Given a function $v \in H^1(\Omega)$, it is not difficult to see that $E$ admits a minimizer $u$ with $u - v\in H^1_0(\Omega)$, using the direct method, but it is not clear whether or not $u$ is a unique minimizer. We do not address this question of uniqueness here, but our results apply to any minimizer. We will use the notation $E(u)$ for $E$ where the choice of domain $\Omega$ is clear from the context. Since our analysis is mainly local we will generally assume $\Omega = B_1(0)$. Minimizers of $E$ will be solutions of \eqref{eq:main2} on $B_1$, in the weak sense.
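To see where \eqref{eq:main2} comes from, one may differentiate $F$ away from $t=0$: for $t>0$,
$$F'(t)=\lambda_{+}(1-\log t)+\lambda_{+}t\left(-\frac{1}{t}\right)=-\lambda_{+}\log t,$$
and similarly $F'(t)=\lambda_{-}\log(-t)$ for $t<0$, so the first variation of $E$ at a minimizer yields \eqref{eq:main2} on $\{u\neq 0\}$.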
They will also be analytic functions on $\{u \neq 0\}$. Note that, on the other hand, it is not clear that every solution to \eqref{eq:main2} is a minimizer of $E$, as (unlike with the classical obstacle problem) the functional $E$ is not convex. We will deal only with minimizers in this paper. Let us define the following rescaled functions and rescaled $E$, for $r<1$: \[ u_r(x) = \frac{u(r x)}{r^2(1 - 2\log r)} \] and \[ E_r(v; \Omega) = \int_{\Omega} \frac{1}{2} |\nabla v|^2 + F_r(v), \] where \[ F_r(t) = \begin{cases} (\lambda_+ t_+ + \lambda_- t_-) (1 - \frac{\log |t|}{1 - 2 \log r} - \frac{\log (1 - 2\log r)}{1 - 2\log r}) & t \neq 0\\ 0 & t = 0. \end{cases} \] These have the following property: if $u$ minimizes $E$ on $B_1$, then $u_r$ minimizes $E_r$ on $B_1$. Note that at least when evaluated on a smooth, fixed $v$, the last two terms of $F_r$ tend to zero as $r\rightarrow 0$, leading to \[ E_0(v; \Omega) = \int_{\Omega} \frac{1}{2} |\nabla v|^2 + F_0(v), \] where \[ F_0(t) = \lambda_+ t_+ + \lambda_- t_-. \] This, however, is a convex functional whose minimizers coincide with solutions to the two-phase obstacle problem \begin{equation}\label{eq:2phaseobst} \Delta u = \lambda_+ 1_{\{u>0\}} - \lambda_- 1_{\{u < 0\}}. \end{equation} \subsection{Basic Regularity of Solutions} In \cite{GG}, the authors show (a much more general version of) the following: \begin{proposition} \label{prop:C1a} Let $u$ be a minimizer of \[ G(u) = \int_{B_1} \frac{1}{2}|\nabla u|^2 + g(u), \] where $|g(t) - g(s)|\leq C_0 |t - s|^{\alpha_0}$. Then \[ \|u\|_{C^{1,\alpha_1}(B_{1/2})} \leq C(n, C_0, \alpha_0, \|u\|_{L^2(B_1)}), \] where $\alpha_1$ depends only on $\alpha_0$. \end{proposition} Note that the function $F$ does not actually satisfy the assumptions here in general: while $F(t)$ is locally H\"older continuous, it grows like $|t||\log |t||$ for large $|t|$ and so does not admit a uniform modulus.
It does, however, satisfy the assumptions so long as $\sup_{B_1} |u|$ is bounded, and so the proposition may be applied with extra dependence on $\sup_{B_1}|u|$. The following proposition gives an estimate on this quantity, based only on the following fact about $F_r$: \[ |F_r(t)| \leq C |t| (1 + |\log |t|| ) \leq C_*(1 + t^2). \] Using this, we may apply the results of Sections 2 and 3 of \cite{GG2} to obtain: \begin{proposition}\label{prop:degiorgi} Let $u$ be a minimizer of $G$ on $B_1$, with $|g(t)| \leq C_*(1 + t^2)$. Then there is a constant $\alpha$ depending only on $n$ such that \[ \|u\|_{C^{0, \alpha}(B_{1/2})} \leq C(C_*, n, \|u\|_{L^2(B_1)}). \] \end{proposition} The proof there is based on directly verifying a Caccioppoli inequality and then applying De Giorgi's technique. A straightforward consequence of the above propositions is the following lemma: \begin{lemma}\label{lem:conv} Let $u_k$ be minimizers of $G_{k}(u) = \int \frac{1}{2}|\nabla u|^2 +g_k(u)$ on $B_1$ with \[ \sup_k \|u_k\|_{L^2(B_1)} \leq C < \infty. \] Assume that $g_k$ satisfy $|g_k(t) - g_k(s)|\leq C |t - s|^{\alpha_0}$ uniformly in $k$, and converge to $g(t)$ locally uniformly. Then, along a subsequence, $u_k \rightarrow u$ on $B_{1/2}$ in $C^{1,\alpha}$ topology for some $\alpha>0$, and $u$ is a minimizer of $G(u) = \int \frac{1}{2}|\nabla u|^2 +g(u)$. In particular, if $g_k = F_{r_k}$ with $r_k \searrow 0$, and $|u_k|\leq C$, the assumption on $g_k$ is verified and $u$ solves \eqref{eq:2phaseobst}. \end{lemma} \begin{proof} Applying Proposition \ref{prop:C1a}, we have that \[ \|u_k\|_{C^{1,\alpha_1}(B_{1/2})} \leq C \] uniformly in $k$. This immediately gives the convergence along a subsequence to a function $u$. Next, we note that to show that $u$ minimizes $G$, it suffices to check that for any $v \in C^{\infty}(B_{1/2})$ with $\supp(v-u) \subset\subset B_{1/2}$, we have that $G(v; B_{1/2}) \geq G(u; B_{1/2})$.
To that end, let $\eta$ be a smooth cutoff which is equal to one on $B_{1/2 - \rho}$ and vanishes on $\partial B_{1/2}$. Set $v_k = \eta v + (1-\eta) u_k$; this is a valid competitor for $G_k$, so \[ G_k(v_k; B_{1/2}) \geq G_k(u_k; B_{1/2}). \] Set $v_\infty = \eta v + (1-\eta) u$. Now, as $\nabla u_k \rightarrow \nabla u$, $g_k(u_k) \rightarrow g(u)$, and $g_k(v_k) \rightarrow g(v_\infty)$ uniformly, we have that \[ G(v_\infty; B_{1/2}) \geq G(u; B_{1/2}). \] Choosing $\rho$ small enough, we see that $v_\infty = v$, and this implies the conclusion. Finally, observe that the integrands $F_r$ satisfy $|F_r(t) - F_r(s)| \leq C_0 |t - s|^{\alpha_0}$ for any $\alpha_0 <1$ and $C_0$ independent of $r$ if $t, s < \frac{C}{r^{2}(1 - 2 \log r)}$, and converge uniformly to $F_0$ as $r\rightarrow 0$. The last conclusion follows by noting that \eqref{eq:2phaseobst} is the Euler-Lagrange equation for an $E_0$ minimizer. \end{proof} Recall that minimizers of $G$ need not be unique (except in the case $g = F_{0}$, when the functional is convex). As such, this lemma should not be thought of as a stability property for minimizers of $G$ but rather a closure property for minimizing families. Our intended use for it is in compactness and blow-up arguments. \section{Suboptimal Regularity} \label{sec:subopt} The function $F_r$ is continuous, and in fact satisfies \[ |F_r(t) - F_r(s)| \leq C |t - s| (1 + |\log |t - s||) \] uniformly in $r$. Indeed, when $|t - s| > |s|/2$, this follows from \begin{align*} |F_r(t) - F_r(s)| &\leq |F_r(t)| + |F_r(s)| \\ &\leq C[|t| (1 + |\log |t||) + |s| (1 + |\log |s||)] \\ &\leq C |t - s| (1 + |\log |t - s||), \end{align*} where the last inequality used that $|t| \leq |s| + |t - s| < 3 |t - s|$. On the other hand, if $|t -s | \leq \min\{|t|, |s|\}/2$, this implies that $t$ and $s$ have the same sign and \begin{align*} |F_r(t) - F_r(s)| & \leq |t-s|\max_{z\in [s,t]}|F'_r(z)|\\ & \leq C |t - s| (1 + \max\{|\log |s||, |\log |t||\})\\ & \leq C |t - s| (1 + |\log |t - s||). 
\end{align*} Setting \[ \eta(t) = C|t|(1 + |\log |t||) \] for the remainder of this section, we can obtain an optimal regularity estimate for minimizers of \[ G(u) = \int \frac{1}{2}|\nabla u|^2 +g(u), \] where \begin{equation}\label{eq:loglip} |g(t) - g(s)| \leq \eta(|t -s|). \end{equation} This is not the optimal regularity for minimizers of $E$ and $E_r$, which will be discussed in the next section, but surprisingly we will require it anyway. The proof is a simple application of the methods of \cite{GG}. The only properties of $\eta$ which are needed below are: \begin{enumerate} \item $\eta :[0, 1] \rightarrow [0, C]$ is an increasing continuous bijection. \item $\eta$ is concave. \item $t \mapsto \frac{\eta(t)}{t}$ is a nonincreasing function. \item $ \eta(t) \leq C t^{\frac{1}{2} + \alpha}$ for some $C, \alpha$ and all $t \in [0, 1]$. \end{enumerate} Note that (1) and (3) imply that $c t \leq \eta(t)$ for $t \leq 1$. \begin{lemma} \label{lem:harmonicapprox} Let $u$ be a minimizer of $G$ on $B_r$, $r\leq 1$, with $g$ satisfying \eqref{eq:loglip}. There is a $c_*(\eta)$ such that if $\osc_{B_r} u\leq c_*$ and $h$ is a harmonic function with the same boundary values as $u$ along $\partial B_r$, then \[ \fint_{B_r}|\nabla (u - h)|^2 \leq C \frac{\eta^2(r^2)}{r^2}. \] \end{lemma} \begin{proof} As $h$ is harmonic, $\int_{B_r} \nabla h \cdot \nabla(u - h) = 0$, so \[ \int_{B_r} |\nabla (u-h)|^2 = \int_{B_r} \nabla (u + h) \cdot \nabla (u - h) = \int_{B_r} |\nabla u|^2 - |\nabla h|^2. \] Using $h$ as a competitor for $u$ in the minimization of $G$ gives $G(u) \leq G(h)$, and so \begin{equation}\label{eq:harmonicapprox1} \int_{B_r} |\nabla u|^2 - |\nabla h|^2 \leq 2 \int_{B_r} g(h) - g(u) \leq 2\int_{B_r}\eta(|h -u|). \end{equation} From the assumption on the oscillation of $u$ and the maximum principle, $|u-h|\leq 2c_* < 1$, while the modulus $\eta$ is concave.
This can be used to show that \begin{equation}\label{eq:harmonicapprox2} \frac{1}{|B_r|}\int_{B_r}\eta(|h -u|) \leq \eta(\frac{1}{|B_r|}\int_{B_r}|h -u|) \leq \eta((\frac{1}{|B_r|} \int_{B_r} |h - u|^{\frac{2n}{n-2}})^{\frac{n-2}{2n}}) \leq \eta(2c_*) \end{equation} from Jensen's inequality. Applying the Sobolev embedding, we get \[ \int_{B_r}\eta(|h -u|) \leq |B_r| \eta( C |B_r|^{\frac{2 - n}{2n}} (\int |\nabla (u - h)|^2)^{\frac{1}{2}} ). \] Rewriting and plugging into \eqref{eq:harmonicapprox1} gives \[ \frac{1}{|B_r|}\int_{B_r} |\nabla (u-h)|^2 \leq 2\eta( C r (\frac{1}{|B_r|} \int_{B_r} |\nabla (u -h)|^2)^{1/2}). \] Setting $A = \frac{1}{|B_r|}\int_{B_r} |\nabla (u-h)|^2$, we have shown that \[ A \leq 2 \eta( C r \sqrt{A}). \] We also have $A \leq 2\eta(2c_*) < \frac{1}{C}$ by inserting \eqref{eq:harmonicapprox2} into \eqref{eq:harmonicapprox1} directly and taking $c_*$ small enough. Now, if $A \geq r^2/C^2$, then \[ \frac{\eta(C r \sqrt{A})}{C r \sqrt{A}} \leq \frac{\eta(r^2)}{r^2}, \] giving $ A \leq 2 C r \sqrt{A} \frac{\eta(r^2)}{r^2}$, and so $ A \leq 4 C^2 \frac{\eta^2(r^2)}{r^2}$. This means that \[ A \leq \max\{4 C^2 \frac{\eta^2(r^2)}{r^2}, r^2/C^2\} \leq C' \frac{\eta^2(r^2)}{r^2}. \] \end{proof} \begin{lemma}\label{lem:harmonic} Let $h$ be a harmonic function with finite Dirichlet energy on $B_r$ and with $h(0) = 0$, and let $s<r$. Then \[ \int_{B_s}|h|^2 \leq \left( \frac{s}{r}\right)^{n+2} \int_{B_r}|h|^2. \] \end{lemma} \begin{proof} Write $h = \sum_{i = 1}^\infty \alpha_i P_i$, where each $P_i$ is a homogeneous harmonic polynomial and the $P_i$ are orthonormal in $L^2(B_1)$. By the assumptions made, each $P_i$ is of degree at least $1$. Hence \[ \int_{B_t}|h|^2 = \sum_{i = 1}^\infty \alpha_i^2 t^{n + 2 \deg P_i}; \] after dividing by $t^{n+2}$ we see that the right hand side is a nondecreasing function. This gives the conclusion.
\end{proof} \begin{lemma} \label{lem:log2decay} Let $u$ be a minimizer of $G$ on $B_r$, $r\leq 1$, with $G$ satisfying \eqref{eq:loglip}. Assume that $\osc_{B_r} u\leq c_*$. Then there is a constant $C = C(n, \eta) > 0$ such that \[ \frac{1}{\rho^{n-2} \eta^2(\rho^2)}\int_{B_\rho}|\nabla u - \fint_{B_\rho}\nabla u|^2 \leq C\left( 1 + |\log \rho/r|^2 + \frac{1}{r^{n-2} \eta^2(r^2)}\int_{B_r}|\nabla u - \fint_{B_r}\nabla u|^2\right) \] for any $\rho < r$. \end{lemma} \begin{proof} First, fix $\rho_1 < \rho_2 \leq r$ and let $h$ be the harmonic function on $B_{\rho_2}$ which coincides with $u$ on $\partial B_{\rho_2}$. Then, from Lemma \ref{lem:harmonic} applied to the components of $\nabla h - \nabla h(0)$ (noting that $\fint_{B_{s}}\nabla h = \nabla h(0)$), we have \[ \int_{B_{\rho_1}}|\nabla h - \fint_{B_{\rho_1}}\nabla h|^2 \leq \left(\frac{\rho_1}{\rho_2}\right)^{n+2} \int_{B_{\rho_2}}|\nabla h - \fint_{B_{\rho_2}}\nabla h|^2. \] Applying Lemma \ref{lem:harmonicapprox} to $u$ and $h$ on $B_{\rho_2}$, this gives \[ \left(\int_{B_{\rho_1}}|\nabla u - \fint_{B_{\rho_1}}\nabla u|^2\right)^{1/2} \leq \left(\left(\frac{\rho_1}{\rho_2}\right)^{n+2} \int_{B_{\rho_2}}|\nabla u - \fint_{B_{\rho_2}}\nabla u|^2\right)^{1/2} + C \rho_2^{n/2-1} \eta(\rho_2^2). \] Setting $\phi(t) = \left(\frac{1}{t^{n-2} \eta^2(t^2)} \int_{B_{t}}|\nabla u - \fint_{B_{t}}\nabla u|^2\right)^{1/2}$, we have shown that for any $t < s \leq r$, \[ \phi(t) \leq \left(\frac{t^2\eta(s^2)}{s^2\eta(t^2)}\right)\phi(s) + C \frac{s^{n/2-1}\eta(s^2)}{t^{n/2-1}\eta(t^2)} \leq \phi(s) + C \frac{s^{n/2+1}}{t^{n/2+1}}. \] Setting $t = \tau s$ in the above gives \[ \phi(\tau s) \leq \phi(s) + C \tau^{-n/2-1}. \] Iterating, \begin{align*} \phi(\tau^{k+1} s) &\leq \phi(\tau^{k}s) + C \tau^{-n/2-1}\\ &\leq \phi(s) + C (k+1) \tau^{-n/2-1}\\ &\leq \phi(s) + C \frac{|\log(t/s)|}{|\log \tau|} \tau^{-n/2-1} \end{align*} where $t = \tau^{k + 1}s$.
Now fix $\tau < 1$ (for example, $\tau = 1/2$), set $s = r$, and choose $k$ such that $\rho \in [\tau^{k+1}s, \tau^k s]$, to obtain \[ \phi(\rho) \leq \phi(\tau^k r) + C \tau^{-n/2-1} \leq \phi(r) + C(\tau)(1 + |\log \rho/r| ). \] \end{proof} Below set \[ \eta_1^2(t) = \int_0^t \frac{\eta^2(s^2)}{s^2}(1 + |\log s|^2) \frac{ds}{s}. \] From assumption (4) on $\eta$, this is a finite increasing function of $t$. We will use that \[ \sum_{k = K}^\infty \frac{\eta^2(2^{-2k}r_0^2)}{2^{-2k}r_0^2} (1 + k^2) \approx \eta_1^2(2^{-K}r_0) \] and the doubling property \[ \eta_1(2r) \leq C\eta_1(r) \] below. For $\eta(t) = t(1 + |\log t|^J)$, one may compute $\eta_1(t) \approx t (1 + |\log t|^{J + 1})$. \begin{lemma}\label{lem:log2reg} Let $u$ be a minimizer of $G$ on $B_1$, with $G$ satisfying \eqref{eq:loglip}. Then for any $x,y\in B_{1/4}$, we have \[ |\nabla u(x) - \nabla u(y)| \leq C(n, \eta, \|u\|_{L^2(B_1)}) \eta_1(|x - y|). \] \end{lemma} The $\eta$ dependence here is both in the form of $\eta_1$ (which depends on $\eta$ explicitly) and in the constant (which depends on the constant in Lemma \ref{lem:log2decay}). \begin{proof} First, from applying Propositions \ref{prop:degiorgi} and \ref{prop:C1a}, we have that for $x\in B_{1/2}$, \begin{equation}\label{eq:log2reglip} |\nabla u(x)| \leq C. \end{equation} In particular, there is a fixed $r_0>0$ such that $\osc_{B_r(x)} u \leq c_*$ for every $x\in B_{1/4}$ and $r\leq r_0$. Applying Lemma \ref{lem:log2decay} with $r = r_0$, we see that for any $\rho < r_0<\frac{1}{4}$, \[ \fint_{B_\rho(x)}|\nabla u - \fint_{B_{\rho}(x)} \nabla u|^2 \leq C(r_0, \|u\|_{H^1(B_1)}) \frac{\eta^2(\rho^2)}{\rho^2} (1 + |\log \rho|^2).
\] This gives \begin{align*} |\fint_{B_{2^{-k-1}r_0}(x)}\nabla u - \fint_{B_{2^{-k}r_0}(x)}\nabla u|^2 & \leq \fint_{B_{2^{-k-1}r_0}(x)} |\nabla u - \fint_{B_{2^{-k}r_0}(x)}\nabla u|^2\\ & \leq 2^n\fint_{B_{2^{-k}r_0}(x)} |\nabla u - \fint_{B_{2^{-k}r_0}(x)}\nabla u|^2\\ & \leq C \frac{\eta^2(2^{-2k}r_0^2)}{2^{-2k}r_0^2} (1 + k^2) \end{align*} for $k \geq 0$. Summing, we have that the averages $\fint_{B_{2^{-k}r_0}(x)} \nabla u$ form a Cauchy sequence and \[ \left|\nabla u(x) - \fint_{B_{2^{-k}r_0}(x)} \nabla u\right|^2 \leq C \eta_1^2(2^{-k}r_0). \] Now take any $x,y\in B_{1/4}$. If $|x - y|\geq r_0/4$, then \eqref{eq:log2reglip} directly implies the conclusion. If not, let $k$ be such that $2^{-k-1}r_0 < |x-y| < 2^{-k}r_0$; we then have \begin{align*} |\nabla u(x) - \nabla u(y)|^2 &\leq C \left|\fint_{B_{2^{1-k}r_0}(x)}\nabla u - \fint_{B_{2^{-k}r_0}(y)}\nabla u\right|^2 + C \eta_1^2(2^{-k}r_0)\\ & \leq C\fint_{B_{2^{-k}r_0}(y)}\left|\nabla u - \fint_{B_{2^{1-k}r_0}(x)} \nabla u\right|^2 + C \eta_1^2(2^{-k}r_0)\\ & \leq C\fint_{B_{2^{1-k}r_0}(x)}\left|\nabla u - \fint_{B_{2^{1-k}r_0}(x)} \nabla u\right|^2 + C \eta_1^2(2^{-k}r_0)\\ &\leq C \eta_1^2(2^{-k}r_0)\\ &\leq C \eta_1^2(|x - y|). \end{align*} This gives the conclusion. \end{proof} \section{Optimal Growth via the Weiss Formula} \label{sec:growth} In this section we establish growth and monotonicity results for $u$ near points where $|u(0)|, |\nabla u(0)|$ are small. The results here are already interesting if $u(0) = |\nabla u(0)| = 0$ (i.e. at one-phase and branch points), though the greater generality will be helpful in the next section. \begin{remark} \label{rem:modulus} Let $u$ be an $E_r$ minimizer on $B_1$ with $|u(0)| \leq \beta$ and $ |\nabla u(0)| \leq \varepsilon$. Then applying Lemma \ref{lem:log2reg}, we see that $u$ admits the suboptimal modulus $\omega(t) = Ct^2 (1 + |\log t|)^2$ for $t |\log t| \geq \varepsilon$ and $t^2 |\log t| \geq \beta$.
In other words, $\sup_{B_r} |u| \leq \omega(r)$ as long as $r |\log r| \geq \varepsilon$, $r^2 |\log r| \geq \beta$, and $r \leq \frac{1}{2}$, and $\omega$ satisfies \[ \int_0^1 \frac{\log(\omega(r)) - 2 \log r}{r (1- 2\log r)^2} \, dr < \infty. \] If $\varepsilon = \beta = 0$, i.e. $u(0) = |\nabla u(0)| = 0$, then this is valid for all $r$. This integrability property is the only aspect of $\omega$ which will be relevant below; note that it would also remain valid for $\omega(t) = t^2 (1 + |\log t|)^p$ for any $p$, though not for $\omega(t) = t^{\alpha}$ with $\alpha < 2$ (hence the importance of the preceding section). Note that $\omega$ here depends on $n$ and $\int_{B_1}u^2$ only. \end{remark} \begin{remark} \label{rem:Fcontrol} So long as $\sup_{B_r}|u| \leq \omega(r)$ and $r \leq r_0(n, \omega)$, we have that $F_r(u_r) \geq c(n) |u_r| \geq 0$ on $B_1$. Indeed, \begin{align*} F_r(u_r) &\geq c |u_r|(1 - \frac{\log |u_r|}{1 - 2 \log r})\\ & \geq c |u_r|(1 - \frac{\log (\omega(r)/\mu(r))}{1 - 2 \log r})\\ & \geq c |u_r|(1 - \frac{\log (C(1 - \log r))}{1 - 2 \log r})\\ & \geq c |u_r|. \end{align*} \end{remark} Let \[ W(r) = \alpha(r)\int_{B_1} \left(|\nabla u_r|^2 + 2 F_r(u_r)\right) -2 \int_{\partial B_1} u_r^2 \] be a renormalized Weiss-type energy centered about the origin, where \[ \alpha(r) = 1 - \frac{1}{2 \log r} \geq 1 \] is an increasing function. Set \[ \mu(r) = r^2 (1 - 2\log r). \] Let us compute the derivative, in $r$, of this quantity $W$. First, \[ \partial_r u_r(x) = \partial_r \frac{u(r x)}{\mu(r)} = \frac{\nabla u_r(x) \cdot x}{r} - u_r(x) \frac{\mu'(r)}{\mu(r)}. \] The last factor can be written as \[ \frac{\mu'(r)}{\mu(r)} = \frac{2}{r} (1 - \frac{1}{1 - 2 \log r}). \] We also have \[ \partial_r |\nabla u_r|^2 = 2 \nabla u_r \nabla (\partial_r u_r), \] and \[ \partial_r F_r(u_r) = (\partial_r F_r)(u_r) + f_r(u_r) (\partial_r u_r), \] where $f_r(t) = \partial_t F_r(t)$; the Euler-Lagrange equation for $u_r$ then reads $\Delta u_r = f_r(u_r)$.
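The formula for $\mu'/\mu$ stated above, and the identity for $\alpha$ invoked in the computation that follows, are consistent; as a check (direct differentiation and elementary algebra):

```latex
\mu'(r) = 2r(1 - 2\log r) - 2r = -4r\log r,
\qquad
\frac{\mu'(r)}{\mu(r)} = \frac{-4r\log r}{r^2(1 - 2\log r)}
= \frac{2}{r}\left(1 - \frac{1}{1 - 2\log r}\right),
% and the same algebra identifies the two forms of alpha:
\alpha(r) = 1 - \frac{1}{2\log r} = \frac{1 - 2\log r}{-2\log r}
= \left(1 - \frac{1}{1 - 2\log r}\right)^{-1}.
```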
Combining and using the divergence theorem, \begin{align*} W'(r) &= \alpha(r)\int_{B_1} \left(2 \nabla u_r \nabla(\partial_r u_r) + 2 (\Delta u_r) (\partial_r u_r) + 2(\partial_r F_r)(u_r)\right) \\ &\qquad - 4 \int_{\partial B_1} u_r \partial_r u_r + \alpha'(r)\int_{B_1} \left(|\nabla u_r|^2 + 2 F_r(u_r)\right) \\ & \geq 2\alpha(r) \int_{B_1} (\partial_r F_r)(u_r) + 2 \int_{\partial B_1} (\alpha(r)\nabla u_r \cdot x - 2 u_r) \partial_r u_r \\ & \geq 2\alpha(r) \int_{B_1} (\partial_r F_r)(u_r) + \frac{2 \alpha(r)}{r} \int_{\partial B_1} (\nabla u_r \cdot x - \frac{2}{\alpha(r)} u_r)^2 \\ & := \frac{2 \alpha(r)}{r} \int_{\partial B_1} (\nabla u_r \cdot x - \frac{2}{\alpha(r)} u_r)^2 + Q(r). \end{align*} The first step used that $\alpha'$ is nonnegative, as is the integral that it is multiplied by, while the second step used that $\alpha(r) = (1 - \frac{1}{1 - 2 \log r})^{-1}$ and the computation of $\partial_r u_r$. A central point in our further discussions will be control over the error term $Q$. Let us expand it out: \[ (\partial_r F_r)(t) = (\lambda_+t_+ + \lambda_- t_-) \frac{2 r^3}{\mu^2(r)} (-\log |t| + 1 - \log(1 - 2 \log r)) . \] The key observation here is that since we are only concerned about bounding this from below, the single problematic situation in the above is when $u_r$ is large and hence $- |u_r| \log |u_r|$ is very negative. This motivates the following computation (assuming $r < r_0$ below, so that $\alpha(r) \leq C$): \begin{align*} Q(r) &= 2\alpha(r) \int_{B_1} (\partial_r F_r)(u_r)\\ &\geq - C \frac{r^3}{\mu^2(r)} \int_{B_1} |u_r|( (\log |u_r|)_+ + \log(1 - 2 \log r))\\ &\geq - C \frac{r^3}{\mu^2(r)} \int_{B_1} |u_r|( \log (\omega(r)/\mu(r)) + \log(1 - 2 \log r))\\ & \geq - C \nu(r) \int_{B_1} |u_r| \\ & \geq - C \nu(r) \int_{B_1}F_r(u_r), \end{align*} where $\nu(r) := \frac{1 + \log(1 - 2 \log r) + \log(\omega(r)/\mu(r)) }{r (1 - 2\log r)^2}$ is an integrable function on $[0,1]$.
We used here that $|u_r| \leq \omega(r)/\mu(r)$; the final inequality comes from Remark \ref{rem:Fcontrol}. To summarize, we have shown \begin{equation} \label{eq:Qest} Q(r) \geq - C \nu(r) \int_{B_1}F_r(u_r) \qquad r < r_0, \end{equation} with $\nu$ an integrable function. For any $H^1$ function $u$, let $P u$ denote the quadratic harmonic polynomial on $B_1$ minimizing \[ \int_{\partial B_1} |u - P u|^2. \] One may check that \begin{equation}\label{eq:subpoly} \int_{B_1} |\nabla (u - Pu)|^2 - 2 \int_{\partial B_1} |u - Pu|^2 = \int_{B_1} |\nabla u|^2 - 2 \int_{\partial B_1} u^2 \end{equation} by integrating by parts and using that $Pu$ is homogeneous of degree 2. \begin{lemma}\label{lem:BMOest} Let $u$ be an $E_r$ minimizer on $B_1$ with $|u(0)| \leq 1$ and $|\nabla u(0)|\leq 1$. Then \[ \int_{\partial B_1} |u - P u|^2 \leq C [W_0(u; r) + 1]. \] \end{lemma} Here $W_0(u; r) = \int_{B_1} \left(|\nabla u|^2 + 2 F_r(u)\right) - 2 \int_{\partial B_1} u^2$. \begin{proof} We argue by contradiction. Assuming this is not the case, there is a sequence of numbers $r_k \rightarrow r_\infty \in [0,1]$ and $u_k$ being $E_{r_k}$ minimizers such that \[ \int_{\partial B_1} |u_k - P u_k|^2 = C_k [W_0(u_k; r_k) + 1] = M_k, \] with $C_k \rightarrow \infty$. Note that $M_k \rightarrow \infty$ as well. Set $v_k = \frac{u_k - P u_k}{\sqrt{M_k}}$; these functions have \begin{align*} \int_{B_1} |\nabla v_k|^2 - 2\int_{\partial B_1} v_k^2 & = M_k^{-1}[\int_{B_1} |\nabla (u_k - P u_k)|^2 - 2\int_{\partial B_1} (u_k - P u_k)^2] \\ & = M_k^{-1}[\int_{B_1} |\nabla u_k|^2 - 2\int_{\partial B_1} u_k^2]\\ & \leq M_k^{-1}W_0(u_k; r_k) = \frac{W_0(u_k; r_k)}{C_k(W_0(u_k; r_k) + 1)} \leq C_k^{-1} \rightarrow 0. \end{align*} This gives \[ \int_{B_1} |\nabla v_k|^2 \leq 2 + C_k^{-1} \leq 3, \] so passing to a subsequence, the $v_k$ converge weakly in $H^{1}$ to a $v$ with \[ \int_{B_1} |\nabla v|^2 \leq \liminf_{k} \int_{B_1} |\nabla v_k|^2 \leq 2.
\] We also have that $v_k$ converge strongly in $L^2(B_1)$ and $L^2(\partial B_1)$, the latter giving \[ \int_{\partial B_1} v^2 = \lim_k \int_{\partial B_1} v_k^2 = 1 \] by the definitions of $M_k$ and $v_k$. From Propositions \ref{prop:degiorgi} and \ref{prop:C1a} applied to $v_k$, we have that $v_k$ converge locally on $B_1$ in $C^{1,\alpha}$ topology, so in particular $|v(0)| + |\nabla v(0)| \leq \lim M_k^{-1/2} = 0$. From Lemma \ref{lem:conv}, we have that as $F_{r_k}(\sqrt{M_k} t)/M_k \rightarrow 0$ locally uniformly (in $t$), $v$ is harmonic on $B_1$. It follows from the monotonicity of Almgren's frequency that \[ 2 \leq \lim_{s \rightarrow 0} \frac{s \int_{B_s} |\nabla v|^2}{\int_{\partial B_s} v^2} \leq \frac{\int_{B_1} |\nabla v|^2}{\int_{\partial B_1} v^2} \leq 2, \] and this equality implies that $v$ is a quadratic harmonic polynomial. However, $v_k$ is orthogonal to quadratic harmonic polynomials, and this passes to the limit by the strong convergence of $v_k$ in $L^2(\partial B_1)$: \[ 0 = \lim_k \int_{\partial B_1} v_k P v_k = \int_{\partial B_1} v P v = \int_{\partial B_1} v^2 \] giving $v = 0$; this contradicts that $\int_{\partial B_1} v_k^2 = 1$. \end{proof} \begin{theorem}\label{thm:logreg} Let $u$ be an $E$ minimizer on $B_1$. There exists a $\rho_W \leq 1$ and a $C_W$, depending only on $n, \lambda_\pm,$ and $\int_{B_1}u^2$, such that for any $M>0$ and $\rho < \rho_0 \leq \rho_W(n)$, if $|\nabla u(0)|\leq \frac{\rho}{2} |\log \rho|$, $|u(0)|\leq \frac{\rho^2}{4} |\log \rho|$, \[ E_{\rho_0}(u_{\rho_0}; B_1) \leq M, \] and \[ \sup_{r \in [\rho,\rho_0]} \int_{B_1}F_r(u_r) \leq C_W(1 + M), \] then \[ \sup_{r \in [\rho/2,\rho_0]} \int_{B_1} |\nabla (u_r - P u_r)|^2 + \int_{B_1}F_r(u_r) \leq C_W(1 + M). \] \end{theorem} \begin{proof} Observe that $|\nabla u_r(0)| \leq 1$, $|u_r(0)|\leq 1$, and $\sup_{B_r} |u|\leq \omega(r)$ for $r \geq \rho/2$ from Remark \ref{rem:modulus} and scaling. 
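The observation opening the proof can be verified directly from the hypotheses on $|u(0)|$ and $|\nabla u(0)|$; a sketch for $r \in [\rho/2, \rho_0]$ with $\rho_0$ small, using that $r \mapsto r(1-2\log r)$ and $r \mapsto r^2(1-2\log r)$ are increasing for small $r$ and that $1 - 2\log(\rho/2) \geq 2|\log \rho|$ for $\rho < 1$:

```latex
|\nabla u_r(0)| = \frac{r\,|\nabla u(0)|}{\mu(r)} = \frac{|\nabla u(0)|}{r(1 - 2\log r)}
\leq \frac{\tfrac{\rho}{2}|\log\rho|}{\tfrac{\rho}{2}\left(1 - 2\log(\rho/2)\right)}
\leq \frac{\tfrac{\rho}{2}|\log\rho|}{\rho\,|\log\rho|} = \frac{1}{2} \leq 1,
% and similarly for the value at the origin:
\qquad
|u_r(0)| = \frac{|u(0)|}{r^2(1 - 2\log r)}
\leq \frac{\tfrac{\rho^2}{4}|\log\rho|}{\tfrac{\rho^2}{4}\cdot 2|\log\rho|} = \frac{1}{2} \leq 1.
```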
First, we claim that \[ W(r) \leq W(\rho_0) - \int_{r}^{\rho_0} Q(s)ds \leq 2 M + C_1 C_W (1 + M) \int_{0}^{\rho_W} \nu(s)ds \] for all $r \geq \rho/2$. This is clear if $r\geq \rho$ from \eqref{eq:Qest}, forcing $\rho_W \leq r_0$. We can then use that $\int_{B_1} F_{r/2}(u_{r/2}) \leq C(n)\int_{B_1} F_{r}(u_{r})$, which is immediate from changing variables. Now, \begin{align*} \int_{B_1} &|\nabla (u_r - P u_r)|^2 + 2\alpha(r)\int_{B_1} F_r(u_r) \\ &= \int_{B_1} |\nabla (u_r - P u_r)|^2 + W(r) - \alpha(r)\int_{B_1} |\nabla u_r|^2 + 2 \int_{\partial B_1} u_r^2 \\ & \leq \int_{B_1} |\nabla (u_r - P u_r)|^2 + W(r) - \int_{B_1} |\nabla u_r|^2 + 2 \int_{\partial B_1} u_r^2 \\ & = \int_{B_1} |\nabla (u_r - P u_r)|^2 + W(r) - \int_{B_1} |\nabla (u_r - P u_r)|^2 + 2 \int_{\partial B_1} (u_r - P u_r)^2 \\ & \leq W(r) + 2 \int_{\partial B_1} (u_r - P u_r)^2 \\ & \leq W(r) + 2 C_2 (1 + W(r))\\ & \leq 2 C_2 + (2 C_2 +1) (2M + C_1 C_W (1 + M) \int_{0}^{\rho_W} \nu(s)ds) \end{align*} All the numbered constants depend only on $n$; the second line used that $\alpha(r)\geq 1$, after which we used \eqref{eq:subpoly}, our estimate of $W$, and Lemma \ref{lem:BMOest}. Therefore for $r \in [\rho/2, \rho]$, \begin{equation*} \int_{B_1} |\nabla (u_r - P u_r)|^2 + \int_{B_1} F_r(u_r) \leq C_3 (1 + M) + C_4 C_W (1 + M) \int_{0}^{\rho_W} \nu(s)ds. \end{equation*} We select $\rho_W$ so that \[ C_4 \int_{0}^{\rho_W} \nu(s)ds \leq \frac{1}{2}; \] this depends only on $n$. Then take $C_W$ so large that $C_3 \leq \frac{1}{2} C_W$; this gives \[ \int_{B_1} |\nabla (u_r - P u_r)|^2 + F_r(u_r) \leq C_W (1 + M) \] as promised. \end{proof} \begin{corollary}\label{cor:optimalgrowth} Let $u$ be an $E$ minimizer on $B_1$ with $|u(0)| \leq r_1^2 |\log r_1|$ and $ |\nabla u(0)| \leq r_1 |\log r_1|$. Then there is a $C = C(n, \lambda_\pm, \|u\|_{L^2(B_1)})$ such that \[ \sup_{B_r}|u| \leq C r^2 (1 + |\log r|). \] for $r_1 \leq r< \frac{1}{2}$. 
In particular, if $u(0) = |\nabla u(0)| = 0$, this holds for all $r$. \end{corollary} \begin{proof} Apply Remark \ref{rem:modulus} to $u$ to deduce that \begin{equation}\label{eq:logoff} \sup_{B_r} |u| \leq \omega(r) \end{equation} for $r \in [r_1, \frac{1}{2}]$. From Remark \ref{rem:Fcontrol}, this gives $F_r(u_r) \geq c |u_r|$ for $r_1 \leq r \leq r_0$. Set $\rho_0 = \min\{r_0, \rho_W \}$ and apply Theorem \ref{thm:logreg} repeatedly (with $M$ set to $E_{r_0}(u_{r_0}; B_1) \geq \int_{B_1}F_{r_1}(u_{r_1})$) to get \begin{equation}\label{eq:fullgrowth} \sup_{r \in [r_1,\rho_0]} \int_{B_1} |\nabla (u_r - P u_r)|^2 + F_r(u_r) \leq C_W (1 + M). \end{equation} Next, we consider $P u_r$. Select an orthonormal (in $L^2(\partial B_1)$) basis for the quadratic harmonic polynomials on $B_1$, $\{Q_i\}_{i = 1}^{J}$ (the space spanned by them is isomorphic to that of trace-free symmetric matrices over ${\mathbb R}^n$, via $Q \mapsto D^2 Q(0)$, so $J = n(n+1)/2 -1$). Let $S = \max_{B_1, i} |Q_i|$. Then \[ \int_{B_1}|\nabla P u_r|^2 = 2 \int_{\partial B_1} |P u_r|^2 = 2 \sum_{i = 1}^J (\int_{\partial B_1} Q_i u_r)^2 \leq 2 J S^2 ( \int_{\partial B_1} |u_r| )^2. \] By applying Chebyshev's inequality, we have that for some $\rho \in [\frac{1}{2}, 1]$, \[ \int_{B_1}|\nabla P u_{r\rho}|^2 \leq C ( \int_{\partial B_1} |u_{r\rho}| )^2 \leq C (\int_{\partial B_{\rho}} |u_r|)^2 \leq C (\int_{B_1} |u_r|)^2. \] As $F_r(u_r) \geq c |u_r|$, this may be rewritten: \[ \int_{B_1}|\nabla P u_{r\rho}|^2 \leq C (\int_{B_1} F_r(u_r))^2 \leq C (1 + M)^2. \] Combining this with the estimate on $u_{s} - Pu_s$ from \eqref{eq:fullgrowth}, \[ \int_{B_1}|\nabla u_{r/2}|^2 \leq C\int_{B_{\frac{1}{2\rho}}}|\nabla u_{r\rho}|^2 \leq C \int_{B_1}|\nabla u_{r\rho}|^2 \leq C(M). \] This is valid for every $r_1 \leq r \leq r_0$, regardless of the choice of $\rho$ previously.
Finally, we may directly apply Proposition \ref{prop:degiorgi} to obtain that \[ \sup_{B_{1/2}} |u_r| \leq C(\int_{B_1}|\nabla u_r|^2) \leq C(M) \] for all $r_1/2 \leq r \leq r_0/2$. Rescaling this gives the conclusion so long as $r \leq r_0/2$, but for $r \geq r_0/2$ the conclusion is immediate from \eqref{eq:logoff}. \end{proof} \section{Optimal Regularity} \label{sec:proofmain} Corollary \ref{cor:optimalgrowth} provides an optimal growth control for a minimizer $u$ near points where $u(0) = |\nabla u(0)| = 0$, and a useful estimate when $|\nabla u(0)|$ is small. In order to turn this into a regularity statement, we must consider the opposite situation: $u(0) = 0$ but $\nabla u(0)$ is large. The key point here is that in this setting, the behavior in directions orthogonal to $\nabla u$ is extremely regular, so the problem is largely one-dimensional. To exploit this we use a change of variables argument. \begin{lemma}\label{lem:biggrad} Let $u$ be a minimizer of $E_r$ on $B_1$ with $u(0) = 0$, and assume that $|\nabla u(0)| \geq \frac{1}{4}$. Then \[ |\nabla u(x) - \nabla u(y)| \leq C | x - y|(1 + \frac{|\log |x - y||}{|\log r|}) \] for $x, y \in B_{r_1}$, where $C, r_1$ depend only on $n$, $\lambda_\pm$, and $\int_{B_{1}}u^2$. \end{lemma} \begin{proof} First, apply Lemma \ref{lem:log2reg} to $u$ to obtain that \[ |\nabla u(x) - \nabla u(y)| \leq C | x - y||\log |x - y||^2 \] for $x, y \in B_{1/2}$, and $|\nabla u(0)|\leq C$. We may therefore write \[ |u(x) - x \cdot \nabla u(0)|\leq \omega(|x|), \] where $\omega$ is as in Remark \ref{rem:modulus}. Select a coordinate system $(x', x_n)$ with $e_n = \frac{\nabla u(0)}{|\nabla u(0)|}$, and let $Q_s = \{(x', x_n) : |x'|\leq s, |x_n|\leq s \}$ be a cylinder. For $s$ small and fixed, we have that on $Q_s$ the map $t \mapsto u(x + t e_n)$ is strictly monotone, and so the change of variables \[ \psi(x', x_n) = (x', u(x', x_n)) : Q_s \rightarrow U \] is a $C^{1,\alpha}$ diffeomorphism.
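The first-derivative relations for the inverse map quoted next are obtained by differentiating the identity defining it; explicitly:

```latex
% psi^{-1}(y', y_n) = (y', v(y', y_n)), so u(y', v(y', y_n)) = y_n.
% Differentiating in y_n, and then in y_i for i < n, via the chain rule:
u_n\, v_n = 1 \;\Longrightarrow\; v_n \circ \psi = \frac{1}{u_n},
\qquad
u_i + u_n\, v_i = 0 \;\Longrightarrow\; v_i \circ \psi = -\frac{u_i}{u_n}.
```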
Set $v : U \rightarrow {\mathbb R}$ to be the $n$-th component of the inverse map $\psi^{-1} : U \rightarrow Q_s$. Direct computation gives the relations \[ v_n \circ \psi = \frac{1}{u_n} \qquad v_i \circ \psi = - \frac{u_i}{u_n}, \] where $i < n$ and subscripts denote derivatives. A further computation gives \[ (\Delta u) \circ \psi^{-1} = L v := - \frac{1}{v_n} \sum_{i < n} v_{ii} + \frac{2}{v_n^2} \sum_{i < n} v_i v_{n i} - \frac{v_{nn}}{v_n^3} (1 + \sum_{i < n} v_i^2). \] These computations may be found in \cite{KN}. Now, $|u_n - u_n(0)|\leq \frac{1}{8}$ for small enough $s$, while $u_n(0) = |\nabla u(0)| \in [\frac{1}{4}, C]$, so $u_n, v_n \in [c, C]$ on $Q_s, U$, respectively. On the other hand, $|u_i| = |u_i - u_i(0)|\leq C s |\log s|^2$, so $|v_i|$ is small in terms of $s$. Thus for small enough $s$, $L$ is elliptic on $U$, with ellipticity constant $C$. Hence on $U$, $v$ satisfies the elliptic equation \[ L v = f_r(u \circ \psi^{-1}) = f_r(y_n). \] We also have $|v|\leq C$ on $U$. From standard elliptic theory and the fact that $|f_r(y_n)|\leq C( 1 + |\frac{\log |y_n|}{\log r}|)$, we may obtain that $\|v_{ij}\|_{L^p(Q_{s_0})} \leq C$ for some fixed $Q_{s_0} \subset\subset U$ and $p > n$. Apply the partial Schauder estimate from Theorem 2.10 in \cite{DK} to $v$ on $Q_{s_0}$ to obtain that \[ [v_{ij}]_{C^{0,\alpha}(Q_{s_0/2})} \leq C\left( \sup_{y'_1 \neq y'_2} \frac{|f_r(y_n) - f_r(y_n)|}{|y'_1 - y'_2|^\alpha} + C(\|\nabla v\|_{C^{0,\alpha}}) \|D^2 v\|_{L^p(Q_{s_0})} \right) \leq C \] for any $\alpha < 1$, $i \leq n$, and $j < n$; note that the first term vanishes, since $f_r(y_n)$ does not depend on $y'$. While $f_r(y_n)$ was required to be continuous there, the estimate is independent of its boundedness or continuity, and so can be obtained in the fashion written here by an approximation argument. As, for example, $|D^2 v(0, s_0/4)| \leq C$ from local elliptic estimates, this can be used to bound the full $C^{0,\alpha}$ norm of these derivatives: \[ \|v_{ij}\|_{C^{0,\alpha}(Q_{s_0/2})} \leq C.
\] As a consequence of this, we may rewrite $L v$ as \[ f_r(y_n) = L v (y) = h_1(y) - h_2(y) v_{nn}(y), \] where $h_1, h_2$ are $C^{0, \alpha}$ functions of $y$ with $h_2 \geq 1$. From this, we immediately obtain that \[ |v_{nn}(y)|\leq C( 1 + \frac{|\log |y_n||}{|\log r|}), \] and so \[ |v_i(x) - v_i(y)|\leq C|x - y|(1 + \frac{|\log |x - y||}{|\log r|}) \] by integrating. Changing variables back, we learn that on $Q_{s_1} \subseteq \psi^{-1}(Q_{s_0})$, \[ |\nabla u(x) - \nabla u(y)|\leq C|x - y|(1 + \frac{|\log |x - y||}{|\log r|}). \] \end{proof} \begin{theorem} Let $u$ be a minimizer of $E$ on $B_1$. Then for $x \in B_{1/2}$, \[ |\nabla u(x) - \nabla u(0)|\leq C |x| (1 + |\log |x||), \] where $C = C(n, \lambda_\pm, \int_{B_{1}}u^2)$. \end{theorem} \begin{proof} Applying Lemma \ref{lem:log2reg}, we know that \[ |\nabla u(x) - \nabla u(0)| \leq C |x|(1 + |\log |x||)^2 \] on $B_{1/2}$. This implies the conclusion for $|x| \geq \frac{1}{8}$, so we only need to consider $|x|\leq \frac{1}{8}$ below. We first consider the case of $u(0) = 0$. Set $s\in [0, \frac{1}{2}]$ to be the smallest value of $r$ for which \[ |\nabla u_r(0)| = \frac{1}{r(1 - 2\log r)} |\nabla u(0)| \leq \frac{1}{4}. \] Then for $r \in [s, 1/2]$, we may apply Corollary \ref{cor:optimalgrowth} to obtain that \[ \sup_{B_1} |u_r| \leq C. \] Applying Propositions \ref{prop:degiorgi} and \ref{prop:C1a} to $u_r$ gives \[ \sup_{B_{1/2}} |\nabla u_r| \leq C. \] If $|x|\geq \frac{s}{2}$, then using $r = 2|x|$ gives \begin{align*} |\nabla u(x) - \nabla u(0)| &\leq |\nabla u(x)| + |\nabla u(0)| \\ &\leq r(1 - 2\log r)|\nabla u_r(x/r)| + \frac{1}{4}r(1 - 2\log r)\\ & \leq Cr(1 - 2\log r)\\ & \leq C |x|(1 + |\log |x||).
\end{align*} On the other hand, if $|x| \leq \frac{s}{2}$, we apply Lemma \ref{lem:biggrad} to $u_s$, to get \begin{align*} |\nabla u(x) - \nabla u(0)| &\leq s(1 - 2\log s)|\nabla u_s(x/s) - \nabla u_s(0)|\\ & \leq Cs|\log s| \frac{|x|}{s} (1 + \frac{|\log\frac{|x|}{s}|}{|\log s|}) \\ &\leq C |x|(1 + |\log |x||). \end{align*} Now we turn to the case of $u(0) \neq 0$. We proceed similarly to the above, except now set $s\in [0, \frac{1}{2}]$ to be the smallest value of $r$ for which \begin{equation}\label{eq:final1} |\nabla u_r(0)| \leq \frac{1}{2} \qquad |u_r(0)| \leq \varepsilon, \end{equation} with $\varepsilon$ to be chosen. For $r \in [s, 1/2]$, we may apply Corollary \ref{cor:optimalgrowth} to obtain \[ \|\nabla u_r\|_{C^{0,\alpha}(B_{1/2})} \leq C \] as before, and this implies the conclusion so long as $|x|\geq c_0 s$ for any fixed $c_0$, also to be chosen. At this point there are two cases to consider, depending on which of the two criteria in \eqref{eq:final1} failed to be satisfied first. Consider first the case that $|\nabla u_s(0)| = \frac{1}{2}$ and $|u_s(0)| \leq \varepsilon$. We know that $\|\nabla u_s\|_{C^{0,\alpha}(B_{1/2})} \leq C$, so \[ |u_s(y) - y \cdot \nabla u_s(0)| \leq |u_s(0)| + C|y|^{1 + \alpha} \leq \varepsilon + C|y|^{1 + \alpha}. \] Set $y = \pm t \frac{\nabla u_s(0)}{|\nabla u_s(0)|}$; then \[ |u_s(y) \mp \frac{1}{2} t| \leq \varepsilon + Ct^{1+\alpha} \leq \varepsilon + \frac{1}{8}t \] if we fix $ t$ with $8 Ct^\alpha \leq 1$ and $t \leq \frac{1}{16}$. Choose $\varepsilon = \frac{t}{8}$; then $u_s(y)$ has opposite signs at the two choices of $y$, and so there must be a point $z \in B_{t}$ with $u_s(z) = 0$. From the same estimates, we have that $|\nabla u_s(z)| \geq \frac{1}{2} - Ct^\alpha \geq \frac{1}{4}$. Apply Lemma \ref{lem:biggrad} to $u_s$ on $B_{1}(z)$ to obtain that \[ |\nabla u_s(x) - \nabla u_s(y)| \leq C |x - y|(1 + \frac{|\log |x - y||}{|\log s|}) \] for $x, y \in B_{1/2}(z)$.
In particular this holds with $y = 0$ and $ x\in B_{1/4} \subseteq B_{1/2}(z)$, which leads to \[ |\nabla u(x) - \nabla u(0)| \leq C |x| (1 + |\log |x||) \] for $ x\in B_{s/4}$ after rescaling. Now for the opposite case: $|u_s(0)| = \varepsilon$, with $\varepsilon$ chosen above (changing the sign of $u$ if $u_s(0)$ is negative, without loss of generality) while $|\nabla u_s(0)|\leq \frac{1}{2}$. We still have that $\|\nabla u_s\|_{C^{0,\alpha}(B_{1/2})} \leq C$, so on $B_{c \varepsilon}$, this implies $u_s \geq \frac{\varepsilon}{2}$. Using only that $u_s \in [\frac{\varepsilon}{2}, C]$, $|\nabla u_s|\leq C$, and the PDE $\Delta u_s = f_s(u_s)$ allows us to estimate directly that \[ |\Delta u_s| \leq C \qquad |\nabla \Delta u_s| \leq C. \] From standard Schauder estimates, \[ |\nabla u_s(x) - \nabla u_s(0)|\leq C |x| \] for $x \in B_{c \varepsilon/2}$, and rescaling this gives \[ |\nabla u(x) - \nabla u(0)| = s (1 - 2 \log s) |\nabla u_s(x/s) - \nabla u_s(0)| \leq C s (1 - 2 \log s) \frac{|x|}{s} \leq C |x| (1 + |\log |x||) \] for $|x|\leq cs\varepsilon/2$. Set $c_0 = \min\{ c \varepsilon/2, \frac{1}{4}\}$ above to obtain the conclusion. \end{proof} \section{Nondegeneracy} \label{sec:nondegeneracy} Unlike the maximal growth estimate of the previous section, showing a minimal growth rate for the solution away from free boundary points can be done with elementary modifications of standard arguments (see e.g. \cite{PSU} for the classical case). \begin{lemma}\label{lem:lb} Let $u$ be an $E$ minimizer on $B_1$ with $u(0) = 0$. Then there is a $c = c(n, \lambda_\pm, \|u\|_{H^1(B_1)})$ such that, for $r \leq \frac{1}{2}$, either \[ \sup_{B_r} u^+ \geq c r^2 (1 + |\log r|) \] or $\{u > 0\}\cap B_{r/2} = \emptyset$, and either \[ \sup_{B_r} u^- \geq c r^2 (1 + |\log r|) \] or $\{u < 0\}\cap B_{r/2} = \emptyset$. \end{lemma} \begin{proof} We note that on $B_r$, $|u|\leq C r$ from Propositions \ref{prop:degiorgi} and \ref{prop:C1a}.
This gives that, on $\{u>0\}$, \[ \Delta u = - \lambda_+ \log u \geq - \frac{1}{2}\lambda_+ \log r \] for $r < r_0(C)$ small enough. Then select a point $x_0 \in B_{r/2}\cap \{u>0\}$; if there is no such point, the conclusion follows directly. If there is, $v = u - \frac{\lambda_+|\log r|}{4n}|x - x_0|^2$ is subharmonic on $\{u > 0\}\cap B_r$. At $x_0$, $v>0$, while on $\partial \{u > 0\} \cap B_r$, $v < 0$; it follows from the maximum principle that there must be a point $y \in \partial B_r$ with $v(y) \geq 0$. This gives \[ u(y) \geq \frac{\lambda_+}{16n} |\log r| r^2, \] implying the conclusion. If, on the other hand, $r > r_0$, one may instead use the point found for $r = r_0$ above, adjusting the constant by a factor depending on $r_0$ only to obtain the conclusion. \end{proof} \section*{Acknowledgments} DK was partially supported by the NSF MSPRF fellowship DMS-1502852. Much of this work was conducted during his visit to the KTH Royal Institute of Technology, and he is grateful to it for making this possible. HS was supported by the Swedish Research Council. \bibliographystyle{plain}
\section{Introduction} Partly motivated by the Gubser-Mitra conjecture on the relationship between the dynamic instability and thermodynamic instability of black branes in AdS spacetimes\cite{GM,GM2}, Hollands and Wald established a criterion for the dynamic stability of black holes as well as its close connection with thermodynamic stability in \cite{HW} by working with the Wald formalism\cite{LW,Wald,IW}. Such a strategy was further extended successfully to relativistic neutral perfect fluid stars\cite{GSW}, where not only were the dynamic and thermodynamic stability analysed in a transparent and unified manner, demonstrating its advantage over case by case explicit calculations\cite{SWZ,Roupas,Fang2017,Fang2018}, but also some new observations were made for the axisymmetric perturbations, compared to the previous explorations\cite{C1970,FS1975,FS1978,F1978,LH}. Although most real-life stars are believed to be nearly electrically neutral\footnote{Among other possible exceptions, quark stars are suspected to be generically charged\cite{GL}.}, it is at least of academic interest to ask whether the above results for neutral perfect fluid stars also hold for charged perfect fluid stars. Moreover, it is noteworthy that such a charged perfect fluid star in an AdS spacetime also plays an important role in addressing holographic metallic criticality\cite{HT}. Nevertheless, there is no a priori answer to this question, since the electromagnetic field as well as the Lorentz force acting on the charged perfect fluid kicks in. The purpose of this paper is to provide a definite answer by performing a comprehensive and detailed analysis of the dynamic and thermodynamic stability of the charged perfect fluid star in an asymptotically flat spacetime.
As a result, we find that neither the presence of the electromagnetic field nor the electromagnetic force experienced by the charged fluid poses any obstruction to the key steps towards the results obtained in \cite{GSW} for the neutral star. To this end, we are especially required to derive an explicit expression for the canonical energy at null infinity for the electromagnetic part and argue for its non-negativity as well as its vanishing for physically stationary perturbations. Among other things, we are also required to show that the angular momentum per particle, vorticity, and circulation share the same properties as those for the neutral perfect fluid stars, although their expressions get modified in the presence of the electromagnetic field. Thus we are able to obtain exactly the same results for the dynamic and thermodynamic stability of our charged stars as those presented in \cite{GSW} for the neutral stars. In passing, not only do we provide more detailed derivations and arguments in some places where they are only sketched or completely skipped in \cite{GSW}, but we also present a thorough alternative derivation of Eq. (\ref{zero}) to that suggested in \cite{GSW}. The rest of this paper is structured as follows. In the next section, we provide a brief review of the dynamics of the charged perfect fluid, the definition of the resulting star in dynamic equilibrium, and the criterion for its thermodynamic equilibrium as well as the corresponding first law of thermodynamics. In Section \ref{main}, we devote ourselves to developing the Lagrangian formulation of the Einstein-Maxwell-charged fluid system. To this end, we first set up the dynamical fields for our system, pin down the associated redundancies, define the Eulerian and Lagrangian variations, and point out their relations in subsection \ref{subdynamical}.
Then we propose the Lagrangian for our system and apply the Wald formalism to it in subsection \ref{sublagrangian}, where we particularly introduce the symplectic form and define the three Noether-current-related charges for both the diffeomorphisms and $U(1)$ gauge transformations. In subsection \ref{subphase}, we figure out the phase space of our system, resort to the canonical variables to introduce an inner product on the subspace of perturbations, on which the symplectic complement is defined and the double symplectic complement of a subspace is shown to be itself. Finally, we introduce the canonical energy for the linear on-shell perturbations in subsection \ref{subcanonical}, where not only do we present the explicit expression of the canonical energy at null infinity for both the electromagnetic and gravitational parts, but we also show the relation of the canonical energy to the second order perturbations. With such a preparation, we take advantage of the physically stationary perturbations to establish the criterion for the dynamic stability of our charged star within some subspace of linear on-shell perturbations in Section \ref{DS}, where the implications of such dynamic stability for both non-axisymmetric and axisymmetric perturbations are further explored, respectively. In Section \ref{TS}, we further establish the criterion for the thermodynamic stability of our charged star against generic perturbations as well as axisymmetric perturbations, and show the equivalence of the dynamic and thermodynamic stability with respect to the spherically symmetric perturbations of the static, spherically symmetric isentropic charged star. We conclude our paper with some discussions. The notations and conventions of \cite{GRbook} will be followed, except that the indices are not required to be balanced in our equations if no confusion arises.
In particular, early Latin letters $(a,b,c,...)$ denote abstract spacetime indices while middle Latin letters $(i,j,k,...)$ denote concrete spatial indices on a spacelike Cauchy surface unless specified otherwise. In addition, differential forms with their indices omitted are indicated in bold typeface, and a vector contracted with the first index of a differential form is denoted by a dot between them. \section{Charged perfect fluid star, dynamic equilibrium, and thermodynamic equilibrium} By a charged perfect fluid in gravitational and electromagnetic fields, we mean that its energy momentum and charge current satisfy the following equations of motion \begin{equation}\label{conservationlaw} \nabla_aT_m^{ab}=F^{bc}J_c, \quad \nabla_a J^a=0 \end{equation} with $\bm{F}=d\bm{A}$ the electromagnetic field strength and \begin{equation} T_m^{ab}=(\rho+p)u^au^b+pg^{ab},\quad J^a=nu^a. \end{equation} Here the charge per particle has been set to unity such that the charge density and number density are conflated, and $u^a$ is the $4$-velocity satisfying $u_au^a=-1$. In addition, the energy density is determined by the equation of state $\rho(n,s)$, and the pressure can be obtained as \begin{equation} p=n\frac{\partial\rho}{\partial n}-\rho, \end{equation} where we have used Euler's equation \begin{equation} \rho+p=(Ts+\mu)n \end{equation} with $s$ the entropy per particle and $\mu$ the chemical potential, as well as the local version of the first law of thermodynamics \begin{equation} d\rho=(Ts+\mu)dn+Tnds. \end{equation} Furthermore, one can show that the entropy current is also conserved, i.e., $\nabla_a (sJ^a)=0$, which is equivalent to the statement that the entropy per particle is constant along the $4$-velocity.
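The equivalence asserted in the last sentence is a one-line computation using charge current conservation:

```latex
\nabla_a(sJ^a) = s\,\nabla_a J^a + J^a\nabla_a s = n\,u^a\nabla_a s ,
% so, since \nabla_a J^a = 0, the entropy current is conserved if and
% only if u^a \nabla_a s = 0 wherever n > 0, i.e. the entropy per
% particle is constant along the 4-velocity.
```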
The resulting charged star is said to be in dynamic equilibrium if \begin{equation} \mathscr{L}_tA_a=\mathscr{L}_\varphi A_a=\mathscr{L}_tu^a=\mathscr{L}_\varphi u^a=\mathscr{L}_t n=\mathscr{L}_\varphi n=\mathscr{L}_t s=\mathscr{L}_\varphi s=0 \end{equation} for the timelike and axial Killing vector fields $t^a$ and $\varphi^a$, as well as the $4$-velocity $u^a$ taking the following circular form \begin{equation} u^a=(t^a+\Omega \varphi^a)/|v| \end{equation} with $\Omega$ the angular velocity and \begin{equation} |v|^2=-g_{ab}(t^a+\Omega\varphi^a)(t^b+\Omega\varphi^b). \end{equation} Noting that $v^a\nabla_a(|v|^2)=-2v^av^b\nabla_av_b=0$ with $v^a=t^a+\Omega \varphi^a$, we have $\nabla_au^a=0$. Namely, for the charged star in dynamic equilibrium, the corresponding $4$-velocity is divergence free, which further implies that the particle number density along the $4$-velocity is also constant. As proven in \cite{our}, the above charged star in dynamic equilibrium is in thermodynamic equilibrium if and only if \begin{equation} \tilde{T}\equiv |v|T=\text{const.},\quad \tilde{\mu}\equiv |v|\mu=\text{const.},\quad \Omega=\text{const.} \end{equation} throughout the whole star\footnote{Note that we have $\nabla_au^a=0$, $\sigma^{ab}\equiv q^{ac}q^{bd}(\nabla_{(c}u_{d)}-\frac{1}{4}g_{cd}\nabla_eu^e)=0$, and $\nabla_a(\frac{\mu}{T})=0$ for a star in thermodynamic equilibrium, so its energy momentum tensor, charge current, and entropy current receive no correction individually from the bulk viscosity, shear viscosity, and electric conductivity in the first order gradient expansion of dissipative hydrodynamics as it should be\cite{Kovtun}.}. 
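The divergence free property of the $4$-velocity can be checked explicitly as follows. Since $u^a$ is invariant under both Killing vector fields, so is the angular velocity $\Omega$, i.e., $\mathscr{L}_t\Omega=\mathscr{L}_\varphi\Omega=0$, whereby the Killing equations $\nabla_{(a}t_{b)}=\nabla_{(a}\varphi_{b)}=0$ imply
\begin{equation}
\nabla_av^a=\varphi^a\nabla_a\Omega=0,\quad v^av^b\nabla_av_b=v^av^b\varphi_{(b}\nabla_{a)}\Omega=(v^b\varphi_b)v^a\nabla_a\Omega=0.
\end{equation}
Thus we end up with
\begin{equation}
\nabla_au^a=\frac{1}{|v|}\nabla_av^a-\frac{1}{2|v|^3}v^a\nabla_a(|v|^2)=0.
\end{equation}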
Moreover, the charged star in thermodynamic equilibrium satisfies the first law of thermodynamics as \begin{equation} \delta \mathcal{M}=\tilde{T}\delta S+\tilde{\mu}\delta N+\Omega \delta \mathcal{J} \end{equation} with $\mathcal{M}$ the ADM mass, $\mathcal{J}$ the ADM angular momentum, $S$ the total entropy, and $N$ the total particle number. \section{Lagrangian formulation of the Einstein-Maxwell-charged fluid system}\label{main} \subsection{Dynamical fields, redundancies, and variations}\label{subdynamical} To develop a Lagrangian description of the Einstein-Maxwell-charged fluid system, not only are we required to have the spacetime manifold $M$, on which the metric $g_{ab}$ and the electromagnetic potential $A_a$ are defined, but we also need to introduce a fiducial manifold $M'$, called fluid spacetime, which is diffeomorphic to $M$. Then with a fixed scalar field $s'$ and a fixed $3$-form $\bm{\mathcal{N}}'$ on $M'$, satisfying \begin{equation} d\bm{\mathcal{N}}'=d(s'\bm{\mathcal{N}}')=0, \end{equation} one can define the physical fluid fields on $M$ by pushing forward with a diffeomorphism $\chi$ as \begin{equation} nu\cdot\bm{\epsilon}\equiv\bm{\mathcal{N}}=\chi_*\bm{\mathcal{N}}',\quad s=\chi_*s', \end{equation} where $\bm{\epsilon}$ is the associated spacetime volume element. Accordingly, both the charge current conservation and the entropy current conservation are automatically satisfied, since the pushforward commutes with the exterior derivative such that $d\bm{\mathcal{N}}=d(s\bm{\mathcal{N}})=0$, which amounts to $\nabla_aJ^a=\nabla_a(sJ^a)=0$. With the above prescription, we can take $\phi=(g_{ab},A_a,\chi)$ as the dynamical fields for our Einstein-Maxwell-charged fluid system, where $\chi$ can be understood equivalently as a collection of 4 scalar fields $x'^{\mu}\circ\chi^{-1}$ on $M$ with $x'$ the local coordinates on $M'$. Nevertheless, it is noteworthy that such a description has an additional redundancy besides the usual diffeomorphism and $U(1)$ gauge redundancies. 
Namely, two field configurations $\phi=(g_{ab},A_a,\chi)$ and $\tilde{\phi}=(g_{ab},A_a,\tilde{\chi})$ are physically equivalent to each other, or trivially related if $\chi_*\bm{\mathcal{N}}'=\tilde{\chi}_*\bm{\mathcal{N}}'$ and $\chi_*s'=\tilde{\chi}_*s'$, which amounts to saying that $\bm{\mathcal{N'}}$ and $s'$ are invariant under $\tilde{\chi}^{-1}\circ\chi$, or equivalently $\bm{\mathcal{N}}$ and $s$ are invariant under $\tilde{\chi}\circ\chi^{-1}$. The variation around an arbitrary field configuration $\phi$ can be formulated by introducing a one-parameter family of dynamical fields $\phi(\lambda)=(g_{ab}(\lambda),A_a(\lambda),\chi_\lambda)$ with $\phi(0)=\phi$. Note that $\varphi_\lambda\equiv\chi_\lambda\circ\chi_0^{-1}$ gives rise to a one-parameter family of diffeomorphisms on $M$, so the first order perturbation is completely specified by a triple \begin{equation} \delta\phi \equiv(\delta g_{ab}\equiv\frac{d g_{ab}(\lambda)}{d\lambda}|_{\lambda=0},\delta A_a\equiv\frac{dA_a(\lambda)}{d\lambda}|_{\lambda=0},\xi^a\equiv\frac{dx^\mu\circ\varphi_\lambda}{d\lambda}(\frac{\partial}{\partial x^\mu})^a|_{\lambda=0}), \end{equation} where the vector field $\xi^a$ is known as the Lagrangian displacement with $x$ the local coordinate on $M$. A first order perturbation $\delta\mathcal{Q}$ of an arbitrary tensor field $\mathcal{Q}$ induced by $\delta\phi$ is usually called the Eulerian perturbation. More generally, the $k^\text{th}$-order Eulerian perturbation of $\mathcal{Q}$ is defined as \begin{equation} \delta^k\mathcal{Q}\equiv\frac{d^k\mathcal{Q}(\lambda)}{d\lambda^k}|_{\lambda=0}. \end{equation} However, it turns out to be convenient to consider the so called Lagrangian perturbation, in which the variation is performed onto the diffeomorphism equivalent field configuration $\hat{\phi}(\lambda)=(\varphi_\lambda^*g_{ab}(\lambda),\varphi_\lambda^*A_a(\lambda),\chi_0)$. 
Put another way, the $k^\text{th}$-order Lagrangian perturbation of $\mathcal{Q}$ is defined as \begin{equation} \Delta^k\mathcal{Q}\equiv \frac{d^k\hat{\mathcal{Q}}(\lambda)}{d\lambda^k}|_{\lambda=0} \end{equation} with $\hat{\mathcal{Q}}(\lambda)=\varphi_\lambda^*\mathcal{Q}(\lambda)$. Whence we obviously have \begin{equation} \Delta^k\phi=(\Delta^kg_{ab},\Delta^kA_a,0), \quad \Delta^k\bm{\mathcal{N}}=0, \quad \Delta^k s=0 \end{equation} for any $k\geq 1$. In addition, it is easy to show \begin{equation} \Delta\mathcal{Q}=\delta\mathcal{Q}+\mathscr{L}_\xi\mathcal{Q} \end{equation} for the first order perturbation, whereby we further have \begin{equation} \delta \bm{\mathcal{N}}=-\mathscr{L}_\xi\bm{\mathcal{N}}, \quad \delta s=-\mathscr{L}_\xi s. \end{equation} The displacement $\xi^a$ is called a trivial displacement if $\delta\bm{\mathcal{N}}=0$ and $\delta s=0$. In what follows, we shall denote such a trivial displacement as $\eta^a$. It is not hard to show that $\eta^a$ can always be written as a superposition of two trivial displacements as follows\cite{GSW} \begin{equation} \eta^a=fu^a+\tilde{\eta}^a, \end{equation} where $fu^a$ is called the flowline trivial with $f$ an arbitrary function and \begin{equation} \tilde{\eta}^a=\frac{1}{n^2}\mathcal{N}^{abc}\nabla_bZ_c, \end{equation} with $Z_c$ satisfying \begin{equation} u^cZ_c=0,\quad \mathscr{L}_uZ_c=0,\quad \nabla_{[a}s\nabla_bZ_{c]}=0. \end{equation} Furthermore, one can show that $\bm{Z}$ can be expressed as \begin{equation}\label{special} \bm{Z}=Fds \end{equation} with $\mathscr{L}_u F=0$ for $\nabla_as\neq0$ and as a sum of terms of the form $F_1dF_2$ with $\mathscr{L}_uF_1=\mathscr{L}_uF_2=0$ for $\nabla_as=0$. 
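As a quick consistency check, the flowline trivial $fu^a$ indeed gives rise to a trivial displacement. Since $u\cdot\bm{\mathcal{N}}=nu\cdot(u\cdot\bm{\epsilon})=0$ by the antisymmetry of $\bm{\epsilon}$, the Cartan identity together with $d\bm{\mathcal{N}}=0$ yields
\begin{equation}
\delta\bm{\mathcal{N}}=-\mathscr{L}_{fu}\bm{\mathcal{N}}=-fu\cdot d\bm{\mathcal{N}}-d(fu\cdot\bm{\mathcal{N}})=0,\quad \delta s=-fu^a\nabla_as=0,
\end{equation}
where the last equality follows from the constancy of the entropy per particle along the $4$-velocity.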
\subsection{Lagrangian, symplectic form, and Noether current related charges}\label{sublagrangian} Next we like to apply the Wald formalism to the Einstein-Maxwell-charged fluid system with the Lagrangian $4$-form given by \begin{equation} \bm{L}=R\bm{\epsilon}-\frac{1}{4}F_{ab}F^{ab}\bm{\epsilon}+A_aJ^a\bm{\epsilon}-\rho\bm{\epsilon} \end{equation} with $J^a=nu^a$. Note that \begin{equation} \begin{aligned} \delta (R\bm{\epsilon})&=-G^{ab}\delta g_{ab}\bm{\epsilon}+ d\bm{\theta}_{g},\\ \delta (-\frac{1}{4}F_{ab}F^{ab}\bm{\epsilon})&=\nabla_bF^{ba}\delta A_a\bm{\epsilon}+\frac{1}{2}T^{ab}_{EM}\delta g_{ab}\bm{\epsilon}+d\bm{\theta}_{EM} \end{aligned} \end{equation} with the energy momentum tensor for the electromagnetic field given by \begin{equation} T_{EM}^{ab}=F^{a}{}_cF^{bc}-\frac{1}{4}F^{cd}F_{cd}g^{ab} \end{equation} and \begin{equation} \begin{aligned} \bm{\theta}_g&=g^{ab}g^{cd}(\nabla_d\delta g_{bc}-\nabla_b\delta g_{cd})\epsilon_{aefg}, \\ \bm{\theta}_{EM}&=-F^{ab}\delta A_b\epsilon_{aefg}. \end{aligned} \end{equation} In addition, we can derive \begin{equation} \begin{aligned} \delta(A_aJ^a\bm{\epsilon})&=(\Delta-\mathscr{L}_\xi)(J^aA_a\bm{\epsilon})\\ &=\Delta A_aJ^a\bm{\epsilon}-\mathscr{L}_\xi(A_aJ^a\bm{\epsilon})\\ &=J^a\delta A_a\bm{\epsilon}+J^a\mathscr{L}_\xi A_a\bm{\epsilon}-d(J^aA_a\xi\cdot\bm{\epsilon})\\ &=J^a\delta A_a\bm{\epsilon}+J^a\xi^bF_{ba}\bm{\epsilon}+\nabla_a(\xi^bA_bJ^a)\bm{\epsilon}-d(A_aJ^a\xi\cdot\bm{\epsilon})\\ &=J^a\delta A_a \bm{\epsilon}+J^a\xi^b F_{ba}\bm{\epsilon}+d\bm{\theta}_{m1} \end{aligned} \end{equation} with \begin{equation} \bm{\theta}_{m1}=\xi^aA_aJ\cdot\bm{\epsilon}-A_aJ^a\xi\cdot\bm{\epsilon}, \end{equation} where we have used the Cartan identity $\mathscr{L}_w\bm{F}=w\cdot d\bm{F}+d(w\cdot\bm{F})$ as well as the fact that \begin{equation} \begin{aligned} \Delta \bm{\epsilon}&=\frac{1}{2}g^{ab}\Delta g_{ab}\bm{\epsilon}\\ \Delta u^a&=\frac{1}{2}u^au^bu^c\Delta g_{bc}\\ \Delta n&=-\frac{1}{2}nq^{bc}\Delta g_{bc}. 
\end{aligned} \end{equation} in the second step with $q^{bc}=g^{bc}+u^bu^c$ and $\nabla_aJ^a=0$ in the fourth step. With the help of $\Delta\rho=\frac{\rho+p}{n}\Delta n$ and by the same token, one can show \begin{equation} \begin{aligned} \delta (-\rho\bm{\epsilon})&=\frac{1}{2}T^{ab}_m\delta g_{ab}\bm{\epsilon}-\xi_b\nabla_aT_m^{ab}\bm{\epsilon}+d\bm{\theta}_{m2} \end{aligned} \end{equation} with the energy momentum tensor for our fluid taking the familiar form \begin{equation} T_m^{ab}=(\rho+p)u^au^b+pg^{ab} \end{equation} and \begin{equation} \bm{\theta}_{m2}=(\xi_bT_m^{ba}+\rho\xi^a)\epsilon_{aefg}. \end{equation} Therefore the variation of the full Lagrangian $4$-form can be encapsulated as \begin{equation} \delta\bm{L}=\bm{E}(\phi)\delta\phi+d\bm{\theta}(\phi;\delta\phi). \end{equation} Here $\bm{E}=0$ gives rise to the following equations of motion \begin{equation} \begin{aligned} E^{ab}&\equiv -G^{ab}+\frac{1}{2}T^{ab}=0,\\ E_{EM}^a&\equiv \nabla_bF^{ba}+J^a=0,\\ E_{mb}&\equiv - \nabla^aT_{mab}+F_{ba}J^a=0 \end{aligned} \end{equation} with the total energy momentum tensor $T^{ab}=T_{EM}^{ab}+T_m^{ab}$, and the total pre-symplectic potential $3$-form is given by \begin{equation} \bm{\theta}=\bm{\theta}_g+\bm{\theta}_{EM}+\bm{\theta}_{m} \end{equation} with \begin{equation} \bm{\theta}_{m}=\bm{\theta}_{m1}+\bm{\theta}_{m2}=\xi^aP_{abcd}, \end{equation} where we have defined \begin{equation} P_{abcd}=[(\rho+p)q_a{}^e-A_fJ^f \delta_a{}^e +A_fJ^e\delta_a{}^f]\epsilon_{ebcd}=[(\rho+p)q_a{}^e-A_fJ^f q_a{}^e +A_fJ^eq_a{}^f]\epsilon_{ebcd}. 
\end{equation} Now with $\delta\phi$ formally viewed as a vector at the point $\phi$ in the configuration space $\mathcal{F}$, denoted as $\delta\phi^A$, one can define a $1$-form field $\Theta_A$ and the pre-symplectic form on $\mathcal{F}$ as follows \begin{equation} \Theta_A\delta\phi^A=\int_\Sigma\bm{\theta}(\phi;\delta\phi),\quad \Omega_{AB}=(D\Theta)_{AB}, \end{equation} where $\Sigma$ denotes a Cauchy surface in $M$ and $D$ represents the exterior derivative on $\mathcal{F}$. A convenient way to evaluate $\Omega_{AB}\delta_1\phi^A\delta_2\phi^B$ at the point $\phi\in\mathcal{F}$ is to extend $\delta_1\phi^A$ and $\delta_2\phi^A$ off of $\phi$ in an arbitrary manner and use the following formula \begin{equation} \Omega_{AB}\delta_1\phi^A\delta_2\phi^B=\mathscr{L}_{\delta_1\phi}(\Theta_A\delta_2\phi^A)-\mathscr{L}_{\delta_2\phi}(\Theta_A\delta_1\phi^A)-\Theta_A[\delta_1\phi,\delta_2\phi]^A. \end{equation} This amounts to saying \begin{equation} \Omega_{AB}\delta_1\phi^A\delta_2\phi^B=\int_\Sigma \bm{\omega}(\phi;\delta_1\phi,\delta_2\phi), \end{equation} with the pre-symplectic current $3$-form on $M$ defined as \begin{equation} \bm{\omega}(\phi;\delta_1\phi,\delta_2\phi)=\delta_1\bm{\theta}(\phi;\delta_2\phi)-\delta_2\bm{\theta}(\phi;\delta_1\phi)-\bm{\theta}(\phi;\delta_1\delta_2\phi-\delta_2\delta_1\phi). \end{equation} For our purpose, we choose $(\delta_1g_{ab},\delta_1A_a)$ and $(\delta_2g_{ab},\delta_2A_a)$ as variations along a two-parameter family of metrics and gauge fields $(g_{ab}(\lambda_1,\lambda_2),A_a(\lambda_1,\lambda_2))$, which implies \begin{equation} \delta_1\delta_2g_{ab}=\delta_2\delta_1g_{ab},\quad \delta_1\delta_2A_a=\delta_2\delta_1A_a. 
\end{equation} On the other hand, we would like to keep $\xi_1^a$ and $\xi_2^a$ fixed, i.e., \begin{equation} \delta_1\xi_2^a(x)=\delta_2\xi_1^a(x)=0, \end{equation} which implies \begin{equation} \delta_1\delta_2\chi=\delta_1\xi_2^a(\chi)=\xi_1^b\partial_b\xi_2^a(\chi),\quad \delta_2\delta_1\chi=\delta_2\xi_1^a(\chi)=\xi_2^b\partial_b\xi_1^a(\chi). \end{equation} Whence we have \begin{equation} \delta_1\delta_2\chi-\delta_2\delta_1\chi=[\xi_1,\xi_2]^a. \end{equation} As a result, the pre-symplectic form can be written explicitly as \begin{eqnarray}\label{sf} \Omega_{AB}\delta_1\phi^A\delta_2\phi^B&=&\int_\Sigma (\delta_2h_{ij}\delta_1\bm{\pi}^{ij}-\delta_1h_{ij}\delta_2\bm{\pi}^{ij})+\int_\Sigma(\delta_2A_i\delta_1\bm{\pi}^i-\delta_1A_i\delta_2\bm{\pi}^i)\nonumber\\ &&+\int_\Sigma(\xi_2^a\delta_1P_{aefg}-\xi_1^a\delta_2P_{aefg}-[\xi_1,\xi_2]^aP_{aefg}), \end{eqnarray} where \begin{equation} \bm{\pi}^{ij}=(K^{ij}-Kh^{ij})\hat{\bm{\epsilon}}, \quad \bm{\pi}^i=\nu_aF^{ai}\hat{\bm{\epsilon}} \end{equation} with $K_{ij}$ the extrinsic curvature and $\hat{\bm{\epsilon}}=\nu\cdot\bm{\epsilon}$ the induced volume on $\Sigma$ by the future directed normalized normal vector $\nu^a$. Nevertheless, no matter whether $\delta_1\delta_2\phi-\delta_2\delta_1\phi$ vanishes or not, one can show that we always have \begin{equation} d\bm{\omega}(\phi;\delta_1\phi,\delta_2\phi)=\delta_2\bm{E}\delta_1\phi-\delta_1\bm{E}\delta_2\phi, \end{equation} which means that $\bm{\omega}$ is closed if both $\delta_1\phi$ and $\delta_2\phi$ satisfy the linearized equations of motion $\delta_1\bm{E}=\delta_2\bm{E}=0$. Accordingly, $\Omega_{AB}\delta_1\phi^A\delta_2\phi^B$ does not depend on the choice of the Cauchy surface $\Sigma$ if not only do $\delta_1\phi$ and $\delta_2\phi$ satisfy the linearized equations of motion but they also have an appropriate fall-off behavior at the spatial infinity. 
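The above closedness relation can be justified by applying $\delta_1$ to $\delta_2\bm{L}=\bm{E}\delta_2\phi+d\bm{\theta}(\phi;\delta_2\phi)$ and antisymmetrizing in the two variations, which gives
\begin{equation}
(\delta_1\delta_2-\delta_2\delta_1)\bm{L}=\delta_1\bm{E}\delta_2\phi-\delta_2\bm{E}\delta_1\phi+\bm{E}(\delta_1\delta_2\phi-\delta_2\delta_1\phi)+d[\delta_1\bm{\theta}(\phi;\delta_2\phi)-\delta_2\bm{\theta}(\phi;\delta_1\phi)].
\end{equation}
On the other hand, the left side is simply the first order variation of $\bm{L}$ induced by $\delta_1\delta_2\phi-\delta_2\delta_1\phi$, i.e., $\bm{E}(\delta_1\delta_2\phi-\delta_2\delta_1\phi)+d\bm{\theta}(\phi;\delta_1\delta_2\phi-\delta_2\delta_1\phi)$. Equating the two expressions then reproduces exactly $d\bm{\omega}(\phi;\delta_1\phi,\delta_2\phi)=\delta_2\bm{E}\delta_1\phi-\delta_1\bm{E}\delta_2\phi$.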
In addition, for the local symmetry associated with the diffeomorphisms generated by an arbitrary vector field $X^a$, we can define the corresponding Noether current $3$-form as \begin{equation} \bm{J}_X=\bm{\theta}(\phi;\mathscr{L}_X\phi)-X\cdot\bm{L}, \end{equation} with $\mathscr{L}_X\phi=(\mathscr{L}_Xg_{ab},\mathscr{L}_XA_a,-X^a)$, whereby we have \begin{equation} d\bm{J}_X=-\bm{E}\mathscr{L}_X\phi. \end{equation} Therefore the corresponding Noether charge vanishes, i.e., \begin{equation} Q_X=\int_\Sigma \bm{J}_X=0 \end{equation} for the on-shell field configurations provided that $X^a$ vanishes sufficiently rapidly at the spatial infinity. Furthermore, one can show that \begin{equation} \bm{\omega}(\phi;\delta\phi,\mathscr{L}_X\phi)=\delta\bm{J}_X+X\cdot\bm{E}(\phi)\delta\phi-d(X\cdot\bm{\theta}(\phi,\delta\phi)). \end{equation} Note that the Noether current can be written as follows \begin{equation} \bm{J}_X=X^a\bm{C}_a+d\bm{Q}_X \end{equation} for our Einstein-Maxwell-charged fluid system with \begin{equation} \bm{C}_a=(-2E^b{}_a-E_{EM}^bA_a)\epsilon_{befg},\quad \bm{Q}_X=-*d\bm{X}-*\bm{F}A_cX^c, \end{equation} where the star denotes the Hodge dual. So we end up with the following fundamental identity \begin{equation}\label{fi} \bm{\omega}(\phi;\delta\phi,\mathscr{L}_X\phi)=X^b\delta\bm{C}_b+X\cdot\bm{E}(\phi)\delta\phi+d(\delta \bm{Q}_X-X\cdot\bm{\theta}(\phi,\delta\phi)). \end{equation} One immediate implication of this identity is the diffeomorphism invariance of the symplectic form. Speaking specifically, if $\phi$ is on-shell, $\delta\phi$ satisfies the linearized constraint equations $\delta\bm{C}_a=0$, and $X^a$ vanishes sufficiently rapidly as before, we have \begin{equation} \Omega_{AB}\delta\phi^A\mathscr{L}_X\phi^B=0, \end{equation} which amounts to saying that $\Omega_{AB}\delta_1\phi^A\delta_2\phi^B$ remains invariant under the gauge shift $\delta\phi\rightarrow \delta\phi+\mathscr{L}_X\phi$. 
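Note that the formula $d\bm{J}_X=-\bm{E}\mathscr{L}_X\phi$ is simply a consequence of the diffeomorphism covariance of our Lagrangian. Speaking specifically, for the variation $\delta\phi=\mathscr{L}_X\phi$ we have $\delta\bm{L}=\mathscr{L}_X\bm{L}=X\cdot d\bm{L}+d(X\cdot\bm{L})=d(X\cdot\bm{L})$ by the Cartan identity, as $\bm{L}$ is a top form on $M$, whereby
\begin{equation}
d\bm{J}_X=d\bm{\theta}(\phi;\mathscr{L}_X\phi)-d(X\cdot\bm{L})=[\mathscr{L}_X\bm{L}-\bm{E}\mathscr{L}_X\phi]-\mathscr{L}_X\bm{L}=-\bm{E}\mathscr{L}_X\phi.
\end{equation}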
On the other hand, for a large gauge transformation, where $X^a$ approaches a nontrivial asymptotic symmetry, we instead have \begin{equation}\label{si} \Omega_{AB}\delta\phi^A\mathscr{L}_X\phi^B=\int_\Sigma(X\cdot\bm{E}\delta\phi+ X^a\delta\bm{C}_a)+\int_{S_\infty}(\delta\bm{Q}_X-X\cdot\delta \bm{B}), \end{equation} where we have assumed that \begin{equation} \int_{S_\infty}X\cdot\bm{\theta}(\phi;\delta\phi)=\int_{S_\infty}X\cdot\delta\bm{B}(\phi) \end{equation} for some $3$-form $\bm{B}$. By evaluating Eq. (\ref{si}) at an on-shell $\phi$, we have \begin{equation} \Omega_{AB}\delta\phi^A\mathscr{L}_X\phi^B=\delta H_X=\delta\phi^AD_AH_X \end{equation} with the Hamiltonian conjugate to $X^a$ defined as \begin{equation} H_X=\int_\Sigma X^a\bm{C}_a+\int_{S_\infty}(\bm{Q}_X-X\cdot\bm{B}). \end{equation} By further restricting ourselves onto the phase space $\mathcal{P}$, which is obtained by factoring out the degeneracy orbits of $\Omega_{AB}$ in the configuration space $\mathcal{F}$, we wind up with a non-degenerate $\Omega_{AB}$. Thus we have the familiar form of Hamilton's equation on the phase space as \begin{equation} \mathscr{L}_X\phi^A=\Omega^{AB}D_BH_X \end{equation} with $\Omega^{AB}$ the inverse of $\Omega_{AB}$, satisfying $\Omega^{AB}\Omega_{BC}=\delta^A{}_C$. The ADM charge $\mathcal{Q}_X$ is defined as the surface term in $H_X$, which is equal to $H_X$ when evaluated at an on-shell $\phi$. In particular, for the asymptotic time translation $t^a$ and rotation $\varphi^a$, the corresponding ADM mass and angular momentum are defined as \begin{equation} \mathcal{M}=\int_{S_\infty}(\bm{Q}_t-t\cdot\bm{B})=\int_{S_\infty}(-*d\bm{t}-t\cdot \bm{B}),\quad \mathcal{J}=-\int_{S_\infty}\bm{Q}_\varphi=\int_{S_\infty}*d\bm{\varphi}, \end{equation} where not only have we chosen $S_\infty$ such that $\varphi^a$ is tangent to it but also have taken the gauge such that $A_a|_{S_\infty}=0$. 
If the rotation $\varphi^a$ is a background symmetry generator, the ADM angular momentum can be expressed as follows \begin{equation} \begin{aligned} \mathcal{J}&=-2\int_\Sigma \nabla_a\nabla^{[a}\varphi^{b]}\epsilon_ {befg}=-2\int_\Sigma \nabla_a\nabla^a\varphi^b\epsilon_{befg}=2\int_\Sigma R^{ab}\varphi_a\epsilon_{befg}=\int_\Sigma T^{ab}\varphi_a\epsilon_{befg}\\ &=\int_\Sigma [\frac{\rho+p}{n}\varphi\cdot\bm{u}\bm{\mathcal{N}}+F^{cb}\nabla_c(\varphi\cdot\bm{A})\epsilon_{befg}]=\int_\Sigma [\varphi\cdot(\frac{\rho+p}{n}\bm{u}+\bm{A})\bm{\mathcal{N}}-d*\bm{F}(\varphi\cdot\bm{A})]\\ &=\int_\Sigma \varphi\cdot(\frac{\rho+p}{n}\bm{u}+\bm{A})\bm{\mathcal{N}}=\int_\Sigma j\bm{\mathcal{N}} \end{aligned} \end{equation} with $\Sigma$ so chosen that $\varphi^a$ is tangent to it and $j\equiv\varphi\cdot(\frac{\rho+p}{n}\bm{u}+\bm{A})$ interpreted as the angular momentum per particle. Here we have used $d*\bm{\Lambda}=-\nabla_a\Lambda^{ab}\epsilon_{befg}$ for any $2$-form $\bm{\Lambda}$ in the first step, $\nabla_a\nabla_b\kappa_c=R_{cba}{}^d\kappa_d$ for any Killing vector field $\kappa^a$ in the third step, and $\mathscr{L}_\varphi\bm{A}=0$ as well as the Cartan identity in the fifth step. 
Similarly, associated with the local symmetry under the field configuration independent $U(1)$ gauge transformation $\delta_\vartheta\phi=(0,\nabla_a\vartheta,0)$, we can define the corresponding Noether current as \begin{equation} \bm{J}_\vartheta=\bm{\theta}(\phi;\delta_\vartheta\phi)-\vartheta\bm{\mathcal{N}}, \end{equation} whereby we have \begin{equation} \begin{aligned} \delta\bm{J}_\vartheta&=\bm{\omega}(\phi;\delta\phi,\delta_\vartheta\phi)+\xi^a\delta_\vartheta\bm{P}_a-\vartheta\delta\bm{\mathcal{N}}\\ &=\bm{\omega}(\phi;\delta\phi,\delta_\vartheta\phi)+\xi^a[-\nabla_f(\vartheta J^f)\epsilon_{abcd}+\nabla_a\vartheta\bm{\mathcal{N}}]+\vartheta\mathscr{L}_\xi\bm{\mathcal{N}}\\ &=\bm{\omega}(\phi;\delta\phi,\delta_\vartheta\phi)-\xi\cdot d(\vartheta\bm{\mathcal{N}})+\mathscr{L}_\xi(\vartheta\bm{\mathcal{N}})\\ &=\bm{\omega}(\phi;\delta\phi,\delta_\vartheta\phi)+d(\vartheta\xi\cdot\bm{\mathcal{N}}) \end{aligned} \end{equation} In addition, a straightforward calculation yields \begin{equation} \bm{J}_\vartheta=\vartheta\bm{C}+d\bm{Q}_\vartheta \end{equation} with \begin{equation} \bm{C}=-E^{EM}\cdot\bm{\epsilon},\quad \bm{Q}_\vartheta=-*\bm{F}\vartheta. \end{equation} Thus we obtain \begin{equation} \bm{\omega}(\phi;\delta\phi,\delta_\vartheta\phi)=\vartheta\delta\bm{C}+d(\delta\bm{Q}_\vartheta-\vartheta\xi\cdot\bm{\mathcal{N}}), \end{equation} whereby one can obviously introduce the analogous charges and make the analogous statements to those following Eq. (\ref{fi}). In particular, note that $d*\bm{F}=\bm{\mathcal{N}}$, so we have the on-shell variation $\delta H_\vartheta=0$ automatically for those gauge transformations which preserve the gauge condition $A_a=0$ at $S_\infty$. \subsection{Phase space, inner product, and symplectic complement}\label{subphase} As alluded to in the previous subsection, the phase space is obtained by factoring out the degeneracy orbits in the configuration space. 
Put another way, $\delta\phi_\text{p}=0$ is the only degeneracy of the symplectic form if and only if one can parameterize the phase space by $\phi_\text{p}$. To proceed, we would like first to introduce the space of fiducial flowlines $\Sigma'$, namely the space of the integral curves of a non-vanishing $u'^a$ with $u'\cdot\bm{\mathcal{N}}'=0$. Then we can further introduce the diffeomorphism $\psi$ from $\Sigma'$ to $\Sigma$ obtained by intersecting with $\Sigma$ the images of the fiducial flowlines under $\chi$. Next we shall show that one can parameterize the phase space for our Einstein-Maxwell-charged fluid system by $\phi_\text{p}=(h_{ij},\bm{\pi}^{ij},A_i,\bm{\pi}^i,\psi, u^i)$ on $\Sigma$ with $u^i=h^{ia}u_a$ the fluid $3$-velocity, namely we are required to show that $\Omega_{AB}\delta_1\phi^A\delta_2\phi^B=0$ for any $\delta_2\phi$ necessitates $\delta_1\phi_\text{p}=0$. To this end, we would like to express \begin{equation}\label{sfd} \begin{aligned} \Omega_{AB}\delta_1\phi^A\delta_2\phi^B&=\Omega_{AB}\delta_1\phi^A\Delta_2\phi^B-\Omega_{AB}\delta_1\phi^A\mathscr{L}_{\xi_2}\phi^B\\ &=\int_\Sigma[(\Delta_2h_{ij}\delta_1\bm{\pi}^{ij}-\delta_1h_{ij}\Delta_2\bm{\pi}^{ij}+\Delta_2A_i\delta_1\bm{\pi}^i-\delta_1A_i\Delta_2\bm{\pi}^i-\xi_1^a\Delta_2\bm{P}_a)- (\xi_2\cdot\bm{E}\delta_1\phi+\xi_2^a\delta_1\bm{C}_a)], \end{aligned} \end{equation} where we have used Eq. (\ref{sf}) to evaluate the first term and Eq. (\ref{si}) together with the Lagrangian displacement of spatial compact support for the second term. 
By $\Delta q_a{}^e=u^eq_a{}^{(b}u^{c)}\Delta g_{bc}$, one can show \begin{equation} \Delta \bm{P}_a=\bm{A}_a{}^{bc}\Delta g_{bc}+\bm{B}_a{}^f\Delta A_f \end{equation} with \begin{equation} \bm{A}_a{}^{bc}=-\frac{1}{2}(\rho+p)u^bu^cq_a{}^d\epsilon_{defg}-\frac{1}{2}c_s^2(\rho+p)q^{bc}q_a{}^d\epsilon_{defg}+\frac{\rho+p}{n}q_a{}^{(b}u^{c)}\bm{\mathcal{N}},\quad \bm{B}_{a}{}^{f}=2q_{a}{}^{[f}J^{e]}\epsilon_{ebcd}, \end{equation} where $c_s$ is the sound speed, defined as $c_s^2\equiv(\frac{\partial p}{\partial\rho})_s=\frac{\Delta p}{\Delta \rho}$ and assumed to satisfy $0\le c^2_s\le1$. Furthermore, with the $3+1$ decomposition in the coordinates $(\tau,x^i)$ with $\Sigma$ given by $\tau=0$, we have \begin{equation} ds^2=-\alpha^2d\tau^2+h_{ij}(dx^i+\beta^id\tau)(dx^j+\beta^jd\tau),\quad \bm{A}=Ad\tau+A_i(dx^i+\beta^id\tau), \end{equation} whereby we can express \begin{equation} \Delta g_{ab}=-\frac{2}{\alpha}\nu_a\nu_b\Delta\alpha-\frac{1}{\alpha}\nu_a\Delta N_b-\frac{1}{\alpha}\nu_b\Delta N_a+\Delta\gamma_{ab},\quad \Delta A_a=-\frac{1}{\alpha}\nu_a\Delta\mathcal{A}+\Delta \mathcal{A}_a \end{equation} with $\Delta N_a=\Delta N_i(dx^i+\beta^id\tau)_a\equiv h_{ij}\Delta\beta^j(dx^i+\beta^id\tau)_a$, $\Delta \gamma_{ab}\equiv \Delta h_{ij}(dx^i+\beta^id\tau)_a(dx^j+\beta^jd\tau)_b$, $\Delta\mathcal{A}\equiv\Delta A+A_i\Delta\beta^i$, and $\Delta\mathcal{A}_a\equiv\Delta A_i(dx^i+\beta^id\tau)_a$. The degeneracy of our symplectic form is obtained by requiring the coefficients of $\Delta_2 \bm{\pi}^{ij}$, $\Delta_2\alpha$, $\Delta_2 N_i$, $\Delta_2 h_{ij}$, $\Delta_2\bm{\pi}^i$, $\Delta_2\mathcal{A}$, $\Delta_2 A_i$, and $\xi_2^a$ in Eq. 
(\ref{sfd}) vanish when pulled back onto $\Sigma$, which gives rise to \begin{equation}\label{degeneracy} \begin{aligned} \delta_1h_{ij}&=0,\\ \xi_1^a\bm{A}_a{}^{bc}\nu_b\nu_c&=0,\\ \xi_1^a\bm{A}_a{}^{bi}\nu_b&=0,\\ \delta_1\bm{\pi}^{ij}-\xi_1^a\bm{A}_a{}^{ij}&=0,\\ \delta_1A_i&=0,\\ \xi_1^a\bm{B}_a{}^b\nu_b&=0,\\ \delta_1\bm{\pi}^i-\xi_1^a\bm{B}_a{}^i&=0,\\ \delta_1\bm{C}_a-\nu\cdot\bm{E}\delta_1\phi\nu_a&=0. \end{aligned} \end{equation} Here the second and third equations can be encapsulated as \begin{equation} 0=\xi_1^a\bm{A}_a{}^{bc}\nu_b=-\frac{1}{2}(\rho+p)[(u^b\nu_b)^2\delta^c{}_d-c_s^2\nu_bq^{bc}\nu_d]\xi_1^aq_a{}^d\hat{\bm{\epsilon}}. \end{equation} The contraction with $\nu_c$ implies $\nu_d\xi_1^aq_a{}^d=0$, since the resulting prefactor $(u^b\nu_b)^2-c_s^2\nu_bq^{bc}\nu_c=(1-c_s^2)(u^b\nu_b)^2+c_s^2$ is positive for $0\le c_s^2\le1$, whereby one can further obtain $\xi_1^aq_a{}^d=0$ by the above equation. That is to say $\xi_1^a\propto u^a$, namely $\delta_1\psi=0$. In addition, by $u^a\bm{A}_a{}^{bc}=u^a\bm{B}_a{}^b=0$, the fourth and seventh equations in Eq. (\ref{degeneracy}) yield $\delta_1\bm{\pi}^{ij}=\delta_1\bm{\pi}^i=0$. So we are now only left with $\delta_1u^i=0$ to show. To achieve this, we first decompose $\delta_1\phi$ as follows \begin{equation} (\delta_1g_{ab}, \delta_1A_a, \xi_1^a)=(0, 0, fu^a)+(\delta_1g_{ab}, \delta_1A_a, \tau\zeta^a) \end{equation} with $u^a\zeta_a=0$. Note that the last equation in Eq. (\ref{degeneracy}) is automatically satisfied by $(0, 0, fu^a)$ since $u^aE_{ma}=0$ due to the built-in conservation of the charge current and entropy current in our Lagrangian description. Furthermore, $\delta_1h_{ij}=\delta_1\bm{\pi}^{ij}=\delta_1A_i=\delta_1\bm{\pi}^i=0$ implies that the pure gravitational and electromagnetic parts do not contribute to the left side of the last equation in Eq. (\ref{degeneracy}). 
Accordingly, for $(\delta_1g_{ab},\delta_1A_a,\tau\zeta^a)$, we have \begin{equation}\label{lengthy} \begin{aligned} 0&=-\delta_1(T^b_{ma}\epsilon_{befg})-\frac{1}{2}T_m^{bc}\delta_1g_{bc}\nu_a\hat{\bm{\epsilon}}-\delta_1(A_a\bm{\mathcal{N}})-J^b\delta_1A_b\nu_a\hat{\bm{\epsilon}}\\ &=-\delta_1(\frac{\rho+p}{n}u_a\bm{\mathcal{N}}+p\epsilon_{aefg})-\frac{1}{2}[(\rho+p)u^bu^c\delta_1g_{bc}+pg^{bc}\delta_1g_{bc}]\nu_a\hat{\bm{\epsilon}}-\delta_1(A_a\bm{\mathcal{N}})-J^b\delta_1A_b\nu_a\hat{\bm{\epsilon}}\\ &=-\delta_1(\frac{\rho+p}{n}u_a)\bm{\mathcal{N}}+\delta_1p\nu_a\hat{\bm{\epsilon}}+\frac{1}{2}(\rho+p)u_bu_c\delta_1g^{bc}\nu_a\hat{\bm{\epsilon}}-\delta_1A_a\bm{\mathcal{N}}-J^b\delta_1A_b\nu_a\hat{\bm{\epsilon}}\\ &=[nu_b\nu^b\Delta_1(\frac{\rho+p}{n}u_a)+\Delta_1p\nu_a-(\rho+p)u_b\nu^bu_c\delta_1\nu^c\nu_a]\hat{\bm{\epsilon}}\\ &=-(\rho+p)(\frac{1}{2}c_s^2u_b\nu^bq^{cd}\Delta_1g_{cd}u_a-u_b\nu^b\Delta_1u_a+\frac{1}{2}c_s^2q^{cd}\Delta_1g_{cd}\nu_a+u_b\nu^bu_c\delta_1\nu^c\nu_a)\hat{\bm{\epsilon}}\\ &=-(\rho+p)(\frac{1}{2}c_s^2q^{cd}\Delta_1g_{cd}q_{ab}\nu^b-u_b\nu^b\Delta_1u_a+u_b\nu^bu_c\delta_1\nu^c\nu_a)\hat{\bm{\epsilon}}\\ &=-(\rho+p)[c_s^2(\nu_c\delta_1\nu^c+u_c\nu^cu_d\delta_1\nu^d-\frac{1}{\alpha}\zeta_c\nu^c)q_{ab}\nu^b-u_b\nu^b\Delta_1u_a+u_b\nu^bu_c\delta_1\nu^c\nu_a]\hat{\bm{\epsilon}}, \end{aligned} \end{equation} where we have used $\delta_1\bm{\mathcal{N}}=-\mathscr{L}_{\tau\zeta}\bm{\mathcal{N}}=-d\tau\wedge\zeta\cdot\bm{\mathcal{N}}=0$ when restricted onto $\Sigma$ in the third step, $\delta_1h^{ab}=0$ as well as $\delta_1A_a\propto\nu_a$ in the fourth step, and $\Delta_1g_{ab}=\delta_1g_{ab}-\frac{1}{\alpha}(\nu_a\zeta_b+\nu_b\zeta_a)$ in the seventh step. 
To proceed, we compute \begin{equation}\label{key1} \begin{aligned} \Delta_1u_a&=\nu_au_d\delta_1\nu^d+u_d\nu^dg_{ac}\delta_1\nu^c-\frac{1}{\alpha}\zeta_au_d\nu^d+u_au_c\nu^cu_d\delta_1\nu^d\\ &=h_{ab}\delta_1(h^{bc}u_c)+\nu_a[u_d\delta_1\nu^d-u_d\nu^d\nu_c\delta_1\nu^c+\frac{1}{\alpha}\zeta_c\nu^cu_d\nu^d-(u_c\nu^c)^2u_d\delta_1\nu^d], \end{aligned} \end{equation} whereby we further have \begin{equation}\label{key2} u_b\delta_1(h^{bc}u_c)=u^ah_{ab}\delta_1(h^{bc}u_c)=(u_c\nu^c)^2(\nu_d\delta_1\nu^d-\frac{1}{\alpha}\zeta_d\nu^d+u_b\nu^bu_d\delta_1\nu^d). \end{equation} Then plugging Eq. (\ref{key1}) into Eq. (\ref{lengthy}) and using Eq. (\ref{key2}), we end up with \begin{equation}\label{zero} 0=-(\rho+p)B_{ab}\delta_1(h^{bc}u_c)\hat{\bm{\epsilon}}, \end{equation} where \begin{equation} B_{ab}=\frac{c_s^2}{(u_d\nu^d)^2}q_{ac}\nu^cu_b-2u^ch_{b[a}\nu_{c]}. \end{equation} By contracting Eq. (\ref{zero}) with $\nu^a$, we obtain $u_b\delta_1(h^{bc}u_c)=0$, whereby Eq. (\ref{zero}) further implies $\delta_1(h^{ab}u_b)=0$. Thus we have accomplished the proof that the phase space for our Einstein-Maxwell-charged fluid system is described exactly by $\phi_\text{p}$ on $\Sigma$. However, the variables $(\psi,u^i)$ are not canonically conjugate. To construct the canonically conjugate variables for our charged fluid, we like to choose the coordinates in $M'$ such that the fiducial flowlines are given by the integral curves of $(\frac{\partial}{\partial x'^0})^a$. Then by $\delta x'^\mu=-\partial_a x'^\mu\xi^a$, we can write \begin{equation} \bm{\theta}_m=-\delta x'^\mu \bm{P}'_\mu=-\delta x'^i\bm{P}'_i \end{equation} with $\bm{P}'_\mu=(\frac{\partial}{\partial x'^\mu})^a\bm{P}_a$. Whence we further have \begin{equation} \bm{\omega}_m=-(\delta_2x'^i\delta_1\bm{P}'_i-\delta_1x'^i\delta_2\bm{P}'_i), \end{equation} which tells us that $(x'^i, -\bm{P}'_i)$ can be regarded as the canonically conjugate variables for our charged fluid. 
Accordingly, the symplectic form for our Einstein-Maxwell-charged fluid system can be cast into the following canonical form \begin{equation} \Omega_{AB}\delta_1\phi^A\delta_2\phi^B=\int_\Sigma(\delta_2q^\alpha\delta_1p_\alpha-\delta_1q^\alpha\delta_2p_\alpha) \end{equation} with $q^\alpha=(h_{ij},A_i, x'^i)$ and $p_\alpha=(\bm{\pi}^{ij},\bm{\pi}^i,-\bm{P}'_i)$. Then by working with the coordinates in which $h\equiv\det(h_{ij})=1$, we can introduce an inner product \begin{equation} \langle\delta_1\phi,\delta_2\phi\rangle=\int_\Sigma(h_{\alpha\beta}\delta_1q^\alpha\delta_2q^\beta+h^{\alpha\beta}\delta_1p_\alpha\delta_2p_\beta), \end{equation} where $h_{\alpha\beta}$ and $h^{\alpha\beta}$ should be understood as either $h_{ij}$ or $h^{ij}$ depending on the tensor indices of $(q^\alpha, p_\alpha)$. Associated with this inner product, we have the Hilbert space $\mathcal{H}$ as a subspace of perturbations. As pointed out in \cite{GSW}, the square integrability indicates that $\mathcal{H}$ does not include those perturbations with $\delta \mathcal{M}\neq0$, but includes all the perturbations of interest with $\delta\mathcal{M}=0$. Then it is not hard to see that our symplectic form can be expressed as a bounded linear operator $\hat{\Omega}$ on $\mathcal{H}$ as follows \begin{equation} \Omega_{AB}\delta_1\phi^A\delta_2\phi^B=\langle \delta_1\phi,\hat{\Omega}\delta_2\phi\rangle \end{equation} with \begin{equation} \hat{\Omega}(\delta q^\alpha,\delta p_\alpha)=(-h^{\alpha\beta}\delta p_\beta,h_{\alpha\beta}\delta q^\beta). \end{equation} Whence we know that $\hat{\Omega}$ is an orthogonal map, since $\hat{\Omega}^2=-1$ and $\hat{\Omega}^\dagger=-\hat{\Omega}$. The symplectic complement of a subspace $\mathcal{S}$ in $\mathcal{H}$ is defined as follows \begin{equation} \mathcal{S}^{\perp_\text{s}}=\{\delta'\phi\in \mathcal{H}|\langle \delta'\phi,\hat{\Omega}\delta\phi\rangle=0, \forall \delta\phi\in \mathcal{S}\}. 
\end{equation} Then it is not hard to show that $\mathcal{S}^{\perp_\text{s}}=(\hat{\Omega}[\mathcal{S}])^\perp=\hat{\Omega}[\mathcal{S}^\perp]$, which further gives \begin{equation} (\mathcal{S}^{\perp_\text{s}})^{\perp_\text{s}}=(\hat{\Omega}[\mathcal{S}^\perp])^{\perp_\text{s}}=\hat{\Omega}[(\hat{\Omega}[\mathcal{S}^\perp])^\perp]=\hat{\Omega}[\hat{\Omega}[(\mathcal{S}^\perp)^\perp]]=(\mathcal{S}^\perp)^\perp=\bar{\mathcal{S}}, \end{equation} where the bar represents the closure in $\mathcal{H}$. Since any subspace is dense in its closure, below we shall simply say that the double symplectic complement of any subspace is itself. \subsection{Canonical energy, explicit expression at null infinity, and relation to second order perturbations}\label{subcanonical} Finally, associated with an arbitrary background symmetry generated by $\kappa^a$, we would like to introduce the corresponding canonical energy, which is a bilinear form on the space of linear on-shell perturbations defined as \begin{equation} \mathcal{E}_\kappa(\delta_1\phi,\delta_2\phi)=\Omega_{AB}\delta_1\phi^A\mathscr{L}_\kappa\delta_2\phi^B. \end{equation} It is easy to show that not only is the canonical energy conserved and symmetric, but it is also gauge invariant. To obtain the expression of the canonical energy $\mathcal{E}_t(\delta_1\phi,\delta_2\phi)$ at the future null infinity $\mathscr{I}$ for later use, we would like to introduce the unphysical metric $\tilde{g}_{ab}=\Omega^2g_{ab}$ with the conformal factor $\Omega=0$ corresponding to the location of $\mathscr{I}$ and work in the gauge such that $\tilde{\nabla}_an_b|_\mathscr{I}=n^bA_b|_\mathscr{I}=0$ with $n_b=\tilde{\nabla}_b\Omega$. Here the indices are raised or lowered by our unphysical metric, which is smooth across $\mathscr{I}$. 
We further assume that $\Omega$ near $\mathscr{I}$ and $\tilde{g}_{ab}$ at $\mathscr{I}$ are universal quantities in the sense that $\delta\Omega=0$ near $\mathscr{I}$ and $\delta\tilde{g}_{ab}|_\mathscr{I}=0$. Whence the symplectic potential at $\mathscr{I}$ for the electromagnetic part can be written as \begin{equation} \bm{\theta}_{EM}=-F^{ab}\delta A_b\tilde{\epsilon}_{aefg}=\Pi^b\delta A_b\hat{\tilde{\bm{\epsilon}}} \end{equation} with $\Pi^b=n_aF^{ab}$ and the induced volume $3$-form given by $\tilde{\bm{\epsilon}}=-\bm{n}\wedge\hat{\tilde{\bm{\epsilon}}}$. Whence the symplectic current reads \begin{equation} \bm{\omega}_{EM}=(\delta_2A_b\delta_1\Pi^b-\delta_1A_b\delta_2\Pi^b)\hat{\tilde{\bm{\epsilon}}}, \end{equation} where one should keep in mind that $\delta \Pi^b=n_a\tilde{g}^{ac}\tilde{g}^{bd}\delta F_{cd}$. Note that $t^a=cn^a$ at $\mathscr{I}$ with $c$ a positive constant for the future directed timelike Killing vector field $t^a$, so we have $\mathscr{L}_t\delta \bm{A}=d(t\cdot\delta\bm{A})+t\cdot d\delta \bm{A}=\sigma \bm{n}+cn\cdot \delta \bm{F}$ at $\mathscr{I}$ for some function $\sigma$. In addition, $0=\tilde{\nabla}_an^a\tilde{\bm{\epsilon}}=\mathscr{L}_n\tilde{\bm{\epsilon}}=-\bm{n}\wedge\mathscr{L}_n\hat{\tilde{\bm{\epsilon}}}$ at $\mathscr{I}$ implies that $\mathscr{L}_n\hat{\tilde{\bm{\epsilon}}}=0$ when restricted onto $\mathscr{I}$. 
As a result, we have \begin{equation}\label{emnull} \begin{aligned} \mathcal{E}^{EM}_t(\delta_1\phi,\delta_2\phi)&=\int_{\mathscr{I}_{12}} c[2\delta_1\Pi_b\delta_2\Pi^b-\mathscr{L}_n(\delta_1 A_b\delta_2\Pi^b)]\hat{\tilde{\bm{\epsilon}}}\\ &=2c\int_{\mathscr{I}_{12}} \delta_1\Pi_b\delta_2\Pi^b\hat{\tilde{\bm{\epsilon}}}-c\int_{\mathscr{I}_{12}}\mathscr{L}_n(\delta_1 A_b\delta_2\Pi^b\hat{\tilde{\bm{\epsilon}}})\\ &=2c\int_{\mathscr{I}_{12}} \delta_1\Pi_b\delta_2\Pi^b\hat{\tilde{\bm{\epsilon}}}+c(\int_{\mathcal{B}_2}\delta_1 A_b\delta_2\Pi^b\bm{\varepsilon}-\int_{\mathcal{B}_1}\delta_1 A_b\delta_2\Pi^b\bm{\varepsilon})\\ &=2c\int_{\mathscr{I}_{12}} \delta_1\Pi_b\delta_2\Pi^b\hat{\tilde{\bm{\epsilon}}}+c(\int_{\mathcal{B}_2}\delta_1 A^b\mathscr{L}_n\delta_2 A_b\bm{\varepsilon}-\int_{\mathcal{B}_1}\delta_1 A^b\mathscr{L}_n\delta_2 A_b\bm{\varepsilon})\\ \end{aligned} \end{equation} where the portion of the null infinity $\mathscr{I}_{12}$, as depicted in Fig. \ref{null}, is bounded by its two cross sections $\mathcal{B}_1$ and $\mathcal{B}_2$ with $\bm{\varepsilon}\equiv-n\cdot\hat{\tilde{\bm{\epsilon}}}$ the induced volume $2$-form on them. 
Similarly, the canonical energy for the gravitational part can be obtained as \cite{HW} \begin{equation}\label{grnull} \begin{aligned} \mathcal{E}^g_t(\delta_1\phi,\delta_2\phi)&=c\int_{\mathscr{I}_{12}} \delta_1 N_{ab}\delta_2 N^{ab}\hat{\tilde{\bm{\epsilon}}}-\frac{c}{2}(\int_{\mathcal{B}_2} \tau_1^{ab}\delta_2 N_{ab}\bm{\varepsilon}-\int_{\mathcal{B}_1} \tau_1^{ab}\delta_2 N_{ab}\bm{\varepsilon})\\ &=c\int_{\mathscr{I}_{12}} \delta_1 N_{ab}\delta_2 N^{ab}\hat{\tilde{\bm{\epsilon}}}+\frac{c}{2}(\int_{\mathcal{B}_2} \tau_1^{ab}\mathscr{L}_n\tau_{2ab}\bm{\varepsilon}-\int_{\mathcal{B}_1} \tau_1^{ab}\mathscr{L}_n\tau_{2ab}\bm{\varepsilon}), \end{aligned} \end{equation} where $N_{ab}$ is the well-known Bondi news tensor and its variation at $\mathscr{I}$ satisfies \begin{equation} \delta N_{ab}=-\mathscr{L}_n\tau_{ab}+4\Omega^{-1}n_{(a}\tau_{b)c}n^c-\Omega^{-1}n^cn_c\tau_{ab} \end{equation} with $\tau_{ab}\equiv\Omega\delta g_{ab}$. The canonical energy for the charged fluid part is automatically zero at null infinity since our charged star has spatially compact support. \begin{figure} \centering \includegraphics[width=15cm]{figure} \caption{Penrose diagram for a perturbed relativistic star in an asymptotically flat spacetime. $\Sigma$ and $\Sigma'$ are spacelike Cauchy surfaces terminating at the spatial infinity $i^0$, and the boundary of the red region is given by spacelike hypersurfaces $\mathscr{S}_1$ and $\mathscr{S}_2$ as well as the portion of the null infinity $\mathscr{I}_{12}$, which is bounded by its two cross-sections $\mathcal{B}_1$ and $\mathcal{B}_2$.} \label{null} \end{figure} On the other hand, to develop the relation of the canonical energy with the second order perturbations, we would like to work in the gauge with the Lagrangian displacement vanishing to all orders.
Accordingly, we have \begin{equation}\label{relationtosecond} \begin{aligned} \mathcal{E}_\kappa(\delta\phi,\delta\phi)&=\int_\Sigma[\bm{\omega}^g(g_{ab};\Delta g_{ab},\mathscr{L}_\kappa\Delta g_{ab})+\bm{\omega}^{EM}(A_a;\Delta A_a,\mathscr{L}_\kappa \Delta A_a)]\\ &=\int_\Sigma [\Delta\bm{\omega}^g(g_{ab};\Delta g_{ab},\mathscr{L}_\kappa g_{ab})+\Delta \bm{\omega}^{EM}(A_a;\Delta A_a,\mathscr{L}_\kappa A_a)]\\ &=\Delta^2\mathcal{Q}_\kappa+\int_\Sigma \Delta \{\kappa^a \Delta[ (2G^b{}_a-T_{EM}^b{}_a-\nabla_c F^{cb}A_a)\epsilon_{befg}]+\kappa^a [(-G^{bc}+\frac{1}{2}T_{EM}^{bc})\Delta g_{bc}+\nabla_bF^{bc}\Delta A_c]\epsilon_{aefg}\}\\ &=\delta^2\mathcal{Q}_\kappa +\int_\Sigma\kappa^a\Delta \{[\Delta (T_m^b{}_a\epsilon_{befg})-\frac{1}{2}T_m^{bc}\Delta g_{bc}\epsilon_{aefg}]+[\Delta (J^bA_a\epsilon_{befg})-J^c\Delta A_c\epsilon_{aefg}]\}. \end{aligned} \end{equation} Here we have used Eq. (\ref{fi}) in the third step by replacing $\bm{C}_a$ and $\bm{E}$ with the corresponding terms purely from the gravitational and electromagnetic fields. In addition, we have also used the fact that the ADM charge is gauge invariant in the last step. \section{Physically stationary perturbations and dynamical stability }\label{DS} The dynamical stability we are concerned with is mode stability. That is to say, our charged star in dynamic equilibrium is mode stable if there does not exist any non-pure-gauge linearized solution with time dependence of the form $e^{\Lambda t}$ with $\Lambda>0$. Otherwise, it is mode unstable. Compared to a complete analysis of the linearized perturbation equations, one favorable way of proving mode stability is to construct a positive-definite conserved norm on the space of linear on-shell perturbations, because this precludes those perturbations with exponential growth. A nice candidate for such a norm is the canonical energy $\mathcal{E}_t$. To be more specific, let us first introduce the physically stationary perturbations.
A linear on-shell perturbation $\delta\phi$ off of a stationary background is called physically stationary if \begin{equation} \begin{aligned} \mathscr{L}_t(\delta g_{ab}+\mathscr{L}_X g_{ab})&=0,\\ \mathscr{L}_t(\delta A_a+\mathscr{L}_X A_a+\nabla_a\vartheta)&=0,\\ \mathscr{L}_t(\delta \bm{\mathcal{N}}+\mathscr{L}_X\bm{\mathcal{N}})=-\mathscr{L}_{[t,\xi-X]}\bm{\mathcal{N}}&=0,\\ \mathscr{L}_t(\delta s+\mathscr{L}_Xs)=-\mathscr{L}_{[t,\xi-X]}s&=0 \end{aligned} \end{equation} with $X^a$ the asymptotic symmetry generator and $\vartheta$ satisfying $n^a\tilde{\nabla}_a\vartheta|_\mathscr{I}=0$, or equivalently \begin{equation}\label{stationary} \mathscr{L}_t\delta \phi=-\mathscr{L}_{[t,X]}\phi-\delta_{\mathscr{L}_t\vartheta}\phi+\text{trivial}. \end{equation} For the gravitational part, we first note that the news tensor vanishes for all stationary spacetimes\cite{Geroch,WZ}, so the perturbed news tensor induced by $\delta g_{ab}+\mathscr{L}_X g_{ab}$ vanishes. In addition, the news tensor keeps vanishing under the asymptotic symmetry transformation on a stationary spacetime, so the perturbed news tensor induced by $\mathscr{L}_X g_{ab}$ also vanishes. As a result, the perturbed news tensor vanishes for a physically stationary perturbation. For the electromagnetic part, we first note that $\mathscr{L}_t \bm{A}=0$ implies $n\cdot\bm{F}\propto \bm{n}$ at $\mathscr{I}$. Moreover, the commutator $[t,X]^a$, at most, leads to an asymptotic spatial translation, which is proportional to $n^a$ at the null infinity. So we have $\mathscr{L}_{[t,X]}\bm{A}\propto \bm{n}$ at $\mathscr{I}$. On the other hand, $\mathscr{L}_t\vartheta|_\mathscr{I}=cn^a\tilde{\nabla}_a\vartheta=0$, so we have $\tilde{\nabla}_a\mathscr{L}_t\vartheta\propto n_a$ at $\mathscr{I}$. Then by Eq. (\ref{stationary}), for a physically stationary perturbation, we have $\mathscr{L}_n \delta A_a|_\mathscr{I}\propto n_a$, which vanishes when restricted onto $\mathscr{I}$. Accordingly, by Eq.
(\ref{emnull}) and Eq. (\ref{grnull}), we see that the canonical energy $\mathcal{E}_t(\delta\phi,\delta\phi)$ across the portion of the null infinity is manifestly non-negative for both the gravitational part and the electromagnetic one if $\mathcal{B}_2$ is where a physically stationary perturbation solution settles down and $\mathcal{B}_1$ is chosen at the spatial infinity where the boundary term is supposed to vanish. In addition, the canonical energy $\mathcal{E}_t(\delta\phi,\delta_\text{ps}\phi)$ across the null infinity vanishes for any physically stationary perturbation $\delta_\text{ps}\phi$, which implies \begin{equation}\label{misspoint} \mathcal{E}_t(\delta\phi,\delta_\text{ps}\phi)= \mathcal{E}_t(\mathscr{S}_1;\delta\phi,\delta_\text{ps}\phi)=\mathcal{E}_t(\mathscr{S}_2;\delta\phi,\delta_\text{ps}\phi) \end{equation} with the canonical energy evaluated on $\mathscr{S}_i$ denoted by $\mathcal{E}_t(\mathscr{S}_i;\delta\phi,\delta_\text{ps}\phi)$. Next let us define the subspace $\mathcal{V}\subset\mathcal{H}$ of the linear on-shell perturbations such that $\delta\phi\in \mathcal{V}$ satisfies \begin{equation} \mathcal{E}_t(\delta\phi,\delta_{\text{ps}}\phi)=0 \end{equation} for any physically stationary perturbation $\delta_{\text{ps}}\phi$. This amounts to saying that $\mathcal{V}$ is the symplectic complement of the subspace $\mathcal{W}$ spanned by the perturbations of the form $\mathscr{L}_t\delta_\text{ps}\phi$, i.e., \begin{equation}\label{conditions} \delta H_{\mathscr{L}_t\vartheta}=\langle \delta \phi,\hat{\Omega}\delta_{\mathscr{L}_t\vartheta}\phi\rangle=0,\quad \delta H_{[t,X]}=\langle \delta\phi, \hat{\Omega}\mathscr{L}_{[t,X]}\phi\rangle=0, \quad \langle \delta\phi,\hat{\Omega}[\text{trivial}]\rangle=0, \end{equation} where the first requirement is trivial since it is automatically satisfied, and the second one requires that the perturbation not change the ADM $3$-momentum.
Note that the double symplectic complement of $\mathcal{W}$ is itself, thus the symplectic complement of $\mathcal{V}$ in itself is $\mathcal{V}\cap \mathcal{W}$, which implies that the degeneracy of the canonical energy is given precisely by the physically stationary perturbations when restricted onto $\mathcal{V}$. With this observation, we have the following criterion for the dynamical stability of our charged star. \textbf{Criterion for dynamical stability}: If $\mathcal{E}_t(\delta\phi,\delta\phi)\ge 0$ for each $\delta\phi\in \mathcal{V}$, then our charged star has mode stability with respect to the perturbations within $\mathcal{V}$. On the other hand, if there exists some $\delta\phi\in \mathcal{V}$ such that $\mathcal{E}_t(\delta\phi,\delta\phi)<0$, then our charged star has instability in the sense that such a perturbation cannot approach a physically stationary perturbation at late times. It is noteworthy that the physically stationary perturbations play a vital role in the above criterion. For the stability criterion, $\mathcal{E}_t(\delta\phi,\delta\phi)=0$ only for the physically stationary perturbations in $\mathcal{V}$. This can be argued by contradiction. Suppose that we have $\mathcal{E}_t(\delta\phi,\delta\phi)=0$ for a perturbation $\delta\phi\in\mathcal{V}$, which is not a physically stationary one. Then there exists another perturbation $\delta'\phi\in\mathcal{V}$ such that $\mathcal{E}_t(\delta'\phi,\delta\phi)\neq 0$. Whence we have \begin{equation} \mathcal{E}_t(\delta'\phi+z\delta\phi,\delta'\phi+z\delta\phi)=\mathcal{E}_t(\delta'\phi,\delta'\phi)+2z\mathcal{E}_t(\delta'\phi,\delta\phi), \end{equation} which can become negative by an appropriate choice of $z$. But this contradicts the assumption that the canonical energy $\mathcal{E}_t$ is non-negative in $\mathcal{V}$.
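For definiteness, one admissible choice of $z$ in the above argument (using $\mathcal{E}_t(\delta'\phi,\delta\phi)\neq0$, as guaranteed above) is \begin{equation} z=-\frac{1+\mathcal{E}_t(\delta'\phi,\delta'\phi)}{2\mathcal{E}_t(\delta'\phi,\delta\phi)}, \end{equation} for which the right hand side of the preceding equation evaluates to $\mathcal{E}_t(\delta'\phi,\delta'\phi)-[1+\mathcal{E}_t(\delta'\phi,\delta'\phi)]=-1<0$.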
Thus for those perturbations which are not physically stationary ones, the canonical energy $\mathcal{E}_t$ provides a positive definite conserved norm during the evolution from one Cauchy surface $\Sigma$ to a later one $\Sigma'$, as depicted in Fig. \ref{null}, guaranteeing the mode stability. On the other hand, a physically stationary perturbation is stable by definition, despite its zero canonical energy. For the instability criterion, we can likewise argue by contradiction. Suppose that the negative canonical energy mode $\delta\phi$ approaches a physically stationary solution $\delta_\text{ps}\phi$ at late times, say $\mathscr{S}_2$ in Fig. \ref{null}, then we have \begin{equation} \mathcal{E}_t(\mathscr{S}_2;\delta_\text{ps}\phi,\delta_\text{ps}\phi)\le \mathcal{E}_t(\delta\phi,\delta\phi)<0 \end{equation} due to the non-negative net flux across the null infinity. On the other hand, Eq. (\ref{misspoint}) gives rise to \begin{equation} \mathcal{E}_t(\mathscr{S}_2;\delta_\text{ps}\phi,\delta_\text{ps}\phi)=\mathcal{E}_t(\delta\phi,\delta_\text{ps}\phi)=0 \end{equation} according to the definition of $\mathcal{V}$. This leads to an obvious contradiction. So we are done. However, such a criterion has an apparent shortcoming because one can only talk about the stability of the perturbations within the subspace $\mathcal{V}$. Fortunately, the second condition in Eq. (\ref{conditions}) imposed on $\mathcal{V}$ can always be achieved by adding a perturbation generated by the Lorentz boosts on the background. Since such an added perturbation is stable, the above criterion for the stability of the perturbations in $\mathcal{V}$ also implies the criterion for the stability of the perturbations in the larger subspace with the second condition in Eq. (\ref{conditions}) disregarded. On the other hand, regarding the third condition in Eq.
(\ref{conditions}), we also have good news for both non-axisymmetric and axisymmetric perturbations, on which we shall elaborate separately below. \subsection{Nonaxisymmetric perturbations} As to the neutral star, Friedman shows that the aforementioned third condition does not lead to a real physical restriction for nonaxisymmetric perturbations, because it can be achieved in the suitable background solution by adding a trivial perturbation\cite{F1978}. Here we focus only on the background in which $\nabla_as\neq0$ to show the key strategy developed in \cite{F1978} for the neutral star as well as its equal applicability to our charged star. First, by working in the gauge $\delta\phi=(\Delta g_{ab}, \Delta A_a,0)$, we have \begin{equation} \begin{aligned} \langle \delta\phi,\hat{\Omega}(0,0,\tilde{\eta})\rangle&=\int_\Sigma\tilde{\eta}^a\Delta P_{aefg}\\ &=\int_\Sigma \frac{1}{n^2}\mathcal{N}^{abc}\nabla_bZ_c\Delta P_{aefg}\\ &=\int_\Sigma\nabla_bZ_c\Delta(\frac{1}{n^2}\mathcal{N}^{abc}P_{aefg})\\ &=6\int_\Sigma \nabla_{[e}Z_f\Delta(\frac{\rho+p}{n}u_{g]}+A_{g]})\\ &=\int_\Sigma d\bm{Z}\wedge\Delta (\frac{\rho+p}{n}\bm{u}+\bm{A})\\ &=\int_\Sigma \bm{Z}\wedge \Delta d(\frac{\rho+p}{n}\bm{u}+\bm{A})\\ &=\int_\Sigma F\Delta [ds\wedge d(\frac{\rho+p}{n}\bm{u}+\bm{A})], \end{aligned} \end{equation} where we have used $\Delta(\frac{1}{n^2}\mathcal{N}^{abc})= u^{[a}\Lambda^{bc]}$ for some antisymmetric tensor $\Lambda^{bc}$ and $u\cdot d\bm{Z}=u^a\bm{P}_a=0$ in the third step. We define the vorticity and circulation in the presence of the electromagnetic field as $\bm{\omega}=d(\frac{\rho+p}{n}\bm{u}+\bm{A})$ and $\bm{\Gamma}=ds\wedge\bm{\omega}$, respectively.
Then we have \begin{equation} \begin{aligned} u\cdot\bm{\omega}&=u\cdot[d(\frac{\rho+p}{n})\wedge \bm{u}+\frac{\rho+p}{n}d\bm{u}+\bm{F}]\\ &=d(\frac{\rho+p}{n})+\frac{\rho+p}{n}u^a\nabla_au_b+u^aF_{ab}\\ &=d(\frac{\rho+p}{n})-\frac{dp}{n}=Tds, \end{aligned} \end{equation} where we have used the first equation of Eq. (\ref{conservationlaw}) as well as the fact that both the particle number density and the entropy per particle along the divergence-free $4$-velocity are constant for our charged star in dynamic equilibrium. Since $\bm{\omega}$ is exact and hence closed, Cartan's formula $\mathscr{L}_u\bm{\omega}=u\cdot d\bm{\omega}+d(u\cdot\bm{\omega})=d(Tds)$ then yields Ertel's theorem \begin{equation} \mathscr{L}_u\bm{\omega}=dT\wedge ds \end{equation} as well as $u\cdot\bm{\Gamma}= \mathscr{L}_u\bm{\Gamma}=0$ in the presence of the electromagnetic field, which implies $\bm{\Gamma}=\gamma \bm{\mathcal{N}}$ and $\mathscr{L}_u\gamma=0$. On the other hand, note that \begin{equation} \langle (0,0,\tilde{\eta}'),\hat{\Omega}(0,0,\tilde{\eta})\rangle=\int_\Sigma F\mathscr{L}_{\tilde{\eta}'}\bm{\Gamma} \end{equation} for any trivial displacement $\tilde{\eta}'^a=\frac{1}{2n^2}\mathcal{N}^{abc}(dF'\wedge ds)_{bc}$.
So we are required to ask whether there exists a trivial displacement such that $\mathscr{L}_{\tilde{\eta}'}\bm{\Gamma}=-\Delta \bm{\Gamma}$ on $\Sigma$, i.e., \begin{equation} \begin{aligned} -\Delta\gamma&=\mathscr{L}_{\tilde{\eta}'}\gamma=\frac{1}{n^2}\mathcal{N}^{abc}(dF')_b (ds)_c (d\gamma)_a=\frac{1}{n}\epsilon^{abcd}(d\gamma)_a(ds)_b(dF')_cu_d\\ &=\frac{\frac{\partial F'}{\partial\varphi}}{n}\epsilon^{abcd}(d\gamma)_a(ds)_b[(d\varphi)_c-\Omega(dt)_c]u_d=\frac{\frac{\partial F'}{\partial\varphi}}{n}\epsilon^{abcd}(d\gamma)_a(ds)_b(d\varphi)_c(dt)_d(u_t+\Omega u_\varphi)\\ &=\frac{\frac{\partial F'}{\partial\varphi}}{n}\epsilon^{abcd}(d\gamma)_a(ds)_b(d\varphi)_c(dt)_d(g_{tt}+\Omega g_{t\varphi}+\Omega g_{\varphi t}+\Omega^2g_{\varphi\varphi})/|v|=-\frac{|v|\frac{\partial F'}{\partial\varphi}}{n}\epsilon^{abcd}(d\gamma)_a(ds)_b(d\varphi)_c(dt)_d, \end{aligned} \end{equation} where we have assumed that $(\gamma, s, t, \varphi)$ can offer a local atlas for our charged star with $t^a(dt)_a=\varphi^a(d\varphi)_a=1$ and used $\mathscr{L}_uF'=0$. With this, the answer is definitely yes, because one can always find a $F'$ on $\Sigma$ such that \begin{equation} \frac{\partial F'}{\partial\varphi}=\frac{n\Delta \gamma}{|v|\epsilon^{abcd}(d\gamma)_a(ds)_b(d\varphi)_c(dt)_d}. \end{equation} The upshot is that the third condition in Eq. (\ref{conditions}) does not correspond to a real physical restriction on the nonaxisymmetric perturbations in most of the cases we are interested in, since it can be fulfilled by supplementing a trivial perturbation. 
\subsection{Axisymmetric perturbations} For an arbitrary axisymmetric perturbation $\delta \mathcal{Q}=(\delta g_{ab},\delta A_a,\delta\bm{\mathcal{N}},\delta s)$, which may not be described by our Lagrangian formulation, one can follow exactly the same construction developed in \cite{GSW} for the neutral star to show\footnote{The derivation presented in \cite{GSW} has used $u^a\nabla_a s=0$ and $u^a\nabla_aj=0$ for axisymmetric perturbations. The validity of the former is obvious since it holds actually for arbitrary perturbations. The validity of the latter also for our charged star can be seen through $0=\varphi_b(\nabla_aT_m^{ab}-F^{bc}J_c)=\nabla_a(jJ^a+p\varphi^a)=\nabla_a(jJ^a)=J^a\nabla_aj$.} \begin{equation} \mathscr{L}_t\delta\bm{\mathcal{N}}=-\mathscr{L}_\xi\bm{\mathcal{N}},\quad \mathscr{L}_t\delta s=-\mathscr{L}_\xi s,\quad \mathscr{L}_t\delta j=-\mathscr{L}_\xi j, \end{equation} where $\xi^a=|v|\delta u^a+\beta \varphi^a$ with an arbitrary axisymmetric scalar $\beta$ satisfying $u^a\nabla_a\beta=\delta u^a\nabla_a\Omega$. This means that $\mathscr{L}_t\delta\mathcal{Q}$ can be realized in our Lagrangian description as $\hat{\delta}\phi=(\mathscr{L}_t \delta g_{ab},\mathscr{L}_t\delta A_a,\xi^a)$ with $\Delta j=0$. 
Now for an arbitrary axisymmetric trivial displacement $\tilde{\eta}^a$, we have \begin{equation}\label{v1} \begin{aligned} \langle \mathscr{L}_t\hat{\delta}\phi,\hat{\Omega}(0,0,\tilde{\eta})\rangle&= -\langle \hat{\delta}\phi,\hat{\Omega}(0,0,\mathscr{L}_t\tilde{\eta})\rangle =-\langle \hat{\delta}\phi,\hat{\Omega}(0,0,\gamma\varphi)\rangle =\int_\Sigma \gamma\varphi^a\Delta P_{abcd}\\ &=\int_\Sigma \gamma\varphi^a\Delta[\frac{\rho+p}{n}(n\epsilon_{abcd}+u_a\mathcal{N}_{bcd})-A_fJ^f\epsilon_{abcd}+A_a\mathcal{N}_{bcd}]\\ &=\int_\Sigma \gamma \mathcal{N}_{bcd}\Delta[(\frac{\rho+p}{n}u_a+A_a)\varphi^a]=\int_\Sigma \gamma \bm{\mathcal{N}}\Delta j=0, \end{aligned} \end{equation} where we have used $\mathscr{L}_t\tilde{\eta}^a=\gamma \varphi^a$ for some function $\gamma$\cite{GSW}. Similarly, one can also have \begin{equation}\label{v2} \langle \mathscr{L}_t\hat{\delta}\phi,\hat{\Omega}\mathscr{L}_{[t,X]}\phi\rangle=-\langle \hat{\delta}\phi,\hat{\Omega}\mathscr{L}_Y\phi\rangle=0, \end{equation} where we have used the fact that $Y^a=[t,[t,X]]^a$ vanishes at infinity for any asymptotic symmetry generator $X^a$. Eq. (\ref{v1}) and Eq. (\ref{v2}) imply that $\mathscr{L}_t^2\delta\mathcal{Q}\in \mathcal{V}$ for an arbitrary axisymmetric perturbation $\delta\mathcal{Q}$. On the other hand, the mode stability for $\delta\mathcal{Q}$ is equivalent to the mode stability for $\mathscr{L}_t^2\delta\mathcal{Q}$. Thus the mode stability for perturbations within $\mathcal{V}$ implies the mode stability for all perturbations in the axisymmetric case. \section{Thermodynamic stability and axisymmetric perturbations}\label{TS} Our charged star in thermodynamic equilibrium is said to be linearly thermodynamically stable if $\delta^2S$ is negative for all the linear on-shell perturbations with $\delta\mathcal{M}=\delta N=\delta \mathcal{J}=\delta^2\mathcal{M}=\delta^2N=\delta^2\mathcal{J}=0$.
Note that the variation of the first law of thermodynamics gives \begin{equation} \delta^2 \mathcal{M}-\tilde{T}\delta^2S-\tilde{\mu}\delta^2N-\Omega\delta^2\mathcal{J}=\int_\Sigma (\delta \tilde{T}\delta(s\bm{\mathcal{N}})+\delta \tilde{\mu}\delta \bm{ \mathcal{N}}-\delta\Omega d\delta\bm{Q}_\varphi), \end{equation} which implies that the left side, denoted by $\mathcal{C}$ later on, depends solely on the first order variation of our system. Thus the criterion for the thermodynamic stability of our charged star is the positivity of $\mathcal{C}$ for all the linear on-shell perturbations with $\delta\mathcal{M}=\delta N=\delta\mathcal{J}=0$. In our Lagrangian description, we have $\delta N=\delta S=\delta^2 N=\delta^2 S=0$ automatically\footnote{We also have $\delta\mathcal{M}=0$ since we are always working within the subspace $\mathcal{H}$.}. Then by Eq. (\ref{relationtosecond}), we obtain \begin{equation} \mathcal{C}=\mathcal{E}_v-\int_\Sigma v^a\Delta \{[\Delta (T_m^b{}_a\epsilon_{befg})-\frac{1}{2}T_m^{bc}\Delta g_{bc}\epsilon_{aefg}]+[\Delta (J^bA_a\epsilon_{befg})-J^c\Delta A_c\epsilon_{aefg}]\} \end{equation} for the Killing field $v^a$, where $\mathcal{E}_v$ can be understood as the canonical energy in the comoving frame. We borrow directly from \cite{our} the following two identities \begin{equation} \begin{aligned} u^a[J^c\Delta A_c \epsilon_{aefg}-\Delta(J^bA_a\epsilon_{befg})]&=-u\cdot\bm{ A} \Delta \bm{\mathcal{N}},\\ u^a[\frac{1}{2}T_m^{bc}\Delta g_{bc}\epsilon_{aefg}-\Delta(T_m^b{}_a\epsilon_{befg})]&=\mu \Delta \bm{\mathcal{N}}+T\Delta (s\bm{\mathcal{N}}). 
\\ \end{aligned} \end{equation} By the Lagrangian variation of these two identities, we further have \begin{equation} \begin{aligned} u^a\Delta [J^c\Delta A_c \epsilon_{aefg}-\Delta(J^bA_a\epsilon_{befg})]&=0,\\ u^a\Delta [\frac{1}{2}T_m^{bc}\Delta g_{bc}\epsilon_{aefg}-\Delta(T_m^b{}_a\epsilon_{befg})]&=0,\\ \end{aligned} \end{equation} where we have used $\Delta u^a\propto u^a$ as well as the fact that any order of the Lagrangian variation of $\bm{\mathcal{N}}$ and $s\bm{\mathcal{N}}$ vanishes. Accordingly, we obtain \begin{equation} \mathcal{C}=\mathcal{E}_v, \end{equation} whereby we further have \begin{equation} \mathcal{C}=\mathcal{E}_t \end{equation} for axisymmetric perturbations. With this observation, we obtain the following criterion for the thermodynamic stability of our charged star. \textbf{Criterion for thermodynamic stability}: For our charged star in thermodynamic equilibrium, the necessary condition for its thermodynamic stability is the positivity of $\mathcal{E}_v$ for all the linear on-shell perturbations with $\delta\mathcal{J}=0$, and the necessary condition for its thermodynamic stability with respect to all the linear on-shell axisymmetric perturbations with $\delta\mathcal{J}=0$ is the positivity of the corresponding $\mathcal{E}_t$. Here the necessary condition can be replaced by the necessary and sufficient condition if and only if $\frac{\delta s}{|D_as|}$ is bounded for all allowable perturbations of $s$, since only in this case can the perturbations be implemented within our Lagrangian description\cite{F1978}. One such application scenario is provided by the so-called isentropic star, which is generally regarded as a good approximation at least for the convective part of a real-life star\cite{Weinberg}. For such an isentropic star, we have a uniform entropy per particle throughout the whole star. As a result, $\frac{\delta s}{|D_as|}$ is bounded in the sense that not only do we have $D_as=0$ but also the allowable $\delta s=0$.
In particular, let us consider the linear on-shell spherically symmetric perturbations of a static, spherically symmetric isentropic charged star, for which we have $\delta \mathcal{J}=0$ automatically. On the other hand, we also have no spherically symmetric trivial displacement because of \begin{equation} 0=\mathscr{L}_{\xi\frac{\partial}{\partial r}}\bm{\mathcal{N}}=d(\xi\frac{\partial}{\partial r}\cdot\bm{\mathcal{N}})=d(n\sqrt{h}\xi d\theta\wedge d\varphi)=\frac{\partial(n\sqrt{h}\xi)}{\partial r} dr\wedge d\theta\wedge d\varphi \end{equation} subject to the boundary condition $\xi=0$ at $r=0$ in the spherical coordinates. Together with the ADM $3$-momentum unchanged under the spherically symmetric perturbations, we know that there is no restriction on $\mathcal{V}$ for the spherically symmetric perturbations. Thus we end up with the corollary that the dynamic stability is equivalent to the thermodynamic stability for the spherically symmetric perturbations of the static, spherically symmetric isentropic charged star, which has also been obtained recently in \cite{YFJ2021} by performing explicit calculations in the spherical coordinates. \section{Discussions} As promised, we have accomplished a thorough analysis of the dynamic and thermodynamic stability for the charged star. As we show explicitly in our paper, neither the presence of the electromagnetic field nor the Lorentz force experienced by the charged fluid poses any obstruction to the key steps towards the results obtained previously in \cite{GSW} for the neutral star. Thus our main results for the charged star are in close parallel with those presented in \cite{GSW} for the neutral star, but with various improvements scattered throughout our paper. We conclude our paper with two interesting issues worthy of further investigation. The first one is related to the Gubser-Mitra conjecture.
As alluded to in the introduction section, Hollands and Wald proved the Gubser-Mitra conjecture for black branes in AdS spacetimes by resorting to the criterion they established for the dynamic and thermodynamic stability of black holes\cite{HW}. Thus it is tempting to ask whether one can obtain a similar equivalence between the dynamic instability and the thermodynamic instability for the AdS planar star by the same token. On the other hand, new progress has recently been made towards the Lagrangian formulation of dissipative fluids and its various implications\cite{CGL,GCL,LG,GLR,GGL}, so it is also interesting to explore what new insights one can gain by applying the Wald formalism to it. We hope to report some progress along these two lines in the future. \section*{Acknowledgements} We are grateful to Bob Wald for his helpful communications regarding his work. We also thank Hu Zhu for his inspiring discussions on some issues related to this work. This work is partly supported by the National Key Research and Development Program of China Grant No. 2021YFC2203001 as well as the NSFC under Grant Nos. 11975239, 12005088, 12035016, 12075026, and 12275350. JZ is also supported by the Beijing Research Fund for Talented Undergraduates.
\section{Technical details and numerical considerations\label{sec:details}} \subsection{Hamiltonian and basis representation} In order to describe charge-density wave (CDW) states in one dimension, we introduce an extended unit cell containing two lattice sites, as sketched in Fig.~\ref{fig:unitcell}. \begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{Fig5} \caption{Illustration of the choice of the non-primitive unit cell (lattice constant $a^\prime = 2 a$) in one dimension. \label{fig:unitcell}} \end{figure} In what follows, let us denote the unit cells by indices $i,j,\dots$, while the sites within each unit cell are labelled by $\alpha,\beta$. In general, CDW order is driven by two (possibly coexisting) mechanisms: electron-electron (e--e) interactions and electron-phonon (e--ph) interactions. In order to take both of them into account, we extend the Hubbard model considered in the main text to the Hubbard-Holstein model. Using the above convention, the Hamiltonian is expressed as \begin{align} \label{eq:ham0} \hat{H}(t) = \hat{H}_\mathrm{0}(t) + \hat{H}_{\mathrm{e-e}} + \hat{H}_{\mathrm{e-ph}} + \hat{H}_\mathrm{ph} \ , \end{align} where \begin{align} \label{eq:ham1} \hat{H}_\mathrm{0} (t) = \sum_{\langle i,j\rangle} \sum_{\alpha \beta} \sum_{\sigma }h^{\alpha \beta}_{ij}(t) \hat{d}^\dagger_{i \alpha \sigma} \hat{d}_{j \beta \sigma} \end{align} denotes the time-dependent electron Hamiltonian. The e--e interaction is modeled by the Hubbard interaction \begin{align} \hat{H}_\mathrm{e-e} &= \frac{U}{2} \sum_{i,\alpha,\sigma} \left(\hat{d}^\dagger_{i \alpha \sigma} \hat{d}_{i\alpha \sigma} - \frac12 \right) \left(\hat{d}^\dagger_{i \alpha \bar{\sigma}} \hat{d}_{i\alpha \bar{\sigma}} - \frac12 \right) \nonumber \\ &\equiv \frac{U}{2} \sum_{i,\alpha,\sigma} \left(\hat{n}_{i \alpha \sigma} - \frac12 \right) \left(\hat{n}_{i \alpha \bar{\sigma}} - \frac12 \right)\ .
\end{align} As usual, $\bar{\sigma} \!= \downarrow$ ($\bar{\sigma} \!= \uparrow$) for $\sigma \!= \uparrow$ ($\sigma \!= \downarrow$). Furthermore, we account for Holstein-type phonons described by \begin{align} \hat{H}_\mathrm{ph} = \omega_\mathrm{ph}\sum_{i,\alpha} \hat{b}^\dagger_{i\alpha} \hat{b}_{i\alpha} \ , \end{align} while the electron-phonon interaction is of the form \begin{align} \hat{H}_\mathrm{e-ph} = g \sum_{i,\alpha,\sigma} \hat{n}_{i\alpha \sigma} \hat{X}_{i\alpha} \ . \end{align} Here, $\hat{X}_{i\alpha}=[\hat{b}^\dagger_{i\alpha} + \hat{b}_{i\alpha} ]/\sqrt{2}$ represents the phonon distortion. The two limiting cases of (i) purely correlation-driven and (ii) purely phonon-driven charge order can be obtained by (i) setting $g=0$, or (ii) $U=0$, respectively. Transforming to momentum space via \begin{align} \hat{d}_{k\alpha \sigma} = \frac{1}{\sqrt{N}} \sum_m e^{-\mathrm{i} m k a^\prime} \hat{d}_{m\alpha \sigma} \ , \end{align} where $N\rightarrow \infty$ denotes the number of cells, the Hamiltonian~\eqref{eq:ham1} reads \begin{align} \hat{H}_\mathrm{0} (t) = \sum_{k \in \mathrm{BZ}}\sum_{\alpha \beta} \sum_\sigma h_{\alpha \beta}(k;t) \hat{d}^\dagger_{k \alpha \sigma} \hat{d}_{k \beta \sigma} \ . \end{align} Here, the one-body Hamiltonian (in matrix notation) is given by $\vec{h}(k;t) = \vec{h}^{(0)}(k - A_F(t))$ with \begin{align} \vec{h}^{(0)}(k) = \begin{pmatrix} -\mu & -J_0 \left[ 1 + \exp(-\mathrm{i} k a^\prime) \right] \\ -J_0 \left[ 1 + \exp(\mathrm{i} k a^\prime) \right] & - \mu \end{pmatrix} \ , \end{align} which has eigenvalues $\varepsilon_{\pm}(k) = -\mu \pm 2 J_0 \cos(k a)$. Here, $A_F(t)$ denotes the electromagnetic vector potential along the chain direction. Note that the Brillouin zone refers to the extended unit cell, i.\,e. $\mathrm{BZ}=[-\pi/a^\prime,\pi/a^\prime]$. We assume half filling ($\mu=0$). 
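As a quick numerical sanity check of the quoted dispersion $\varepsilon_{\pm}(k) = -\mu \pm 2 J_0 \cos(k a)$, one can diagonalize $\vec{h}^{(0)}(k)$ directly. The following is a minimal sketch; the parameter values $J_0=1$, $\mu=0$, $a=1$ are arbitrary illustrative choices, not fixed by the text:

```python
import numpy as np

# Illustrative parameters (hypothetical values, not fixed by the text):
J0, mu, a = 1.0, 0.0, 1.0      # hopping J_0, chemical potential (half filling), lattice constant
ap = 2 * a                     # extended-cell lattice constant a' = 2a

def h0(k):
    """One-body Bloch Hamiltonian h^(0)(k) in the sublattice (ss) basis."""
    off = -J0 * (1.0 + np.exp(-1j * k * ap))
    return np.array([[-mu, off], [np.conj(off), -mu]])

# Numerical eigenvalues agree with eps_pm(k) = -mu +/- 2 J_0 cos(k a)
# across the reduced Brillouin zone BZ = [-pi/a', pi/a'].
for k in np.linspace(-np.pi / ap, np.pi / ap, 9):
    ev = np.linalg.eigvalsh(h0(k))   # ascending order
    expected = np.sort([-mu - 2 * J0 * np.cos(k * a), -mu + 2 * J0 * np.cos(k * a)])
    assert np.allclose(ev, expected)
```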
The basis with respect to the sublattice sites (ss) in the extended unit cell is particularly convenient for introducing approximations to the e--e interaction, as discussed below. Therefore, all calculations have been performed in the sublattice representation. An equivalent basis -- which has been used in the main text -- is given by the momentum pair $(k,k+Q)$ with the nesting vector $Q=\pi/a$. The momentum-pair (mp) representation is most useful for defining pseudospin operators, as introduced in the main text. Any $k$-dependent matrix in the ss representation ($\vec{A}_k$) can be transformed to the mp basis by the unitary transformation \begin{align} \vec{A}^{(\mathrm{mp})}_k = \vec{R}_k \vec{A}^{(\mathrm{ss})}_k \vec{R}^\dagger_k \end{align} with \begin{align} \vec{R}_k = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{\mathrm{i} k a/2} & e^{-\mathrm{i} k a/2} \\ e^{\mathrm{i} k a/2} & -e^{-\mathrm{i} k a/2} \end{pmatrix} \ . \end{align} \subsection{Mean-field treatment} Within the mean-field (MF) approximation, the time-dependent Hamiltonian~\eqref{eq:ham0} is replaced by \begin{align} \hat{H}^\mathrm{MF}(t) = \sum_{k \in \mathrm{BZ}}\sum_{\alpha \beta} \sum_\sigma h^{\mathrm{MF}}_{\alpha \beta}(k-\mathrm{A}_F(t)) \hat{d}^\dagger_{k \alpha \sigma} \hat{d}_{k \beta \sigma} \ , \end{align} where the one-body MF Hamiltonian reads \begin{align} \label{eq:ham_mf} \vec{h}^{\mathrm{MF}}(k) &= \vec{h}^{(0)}(k) + U \begin{pmatrix} \langle \hat{n}_{\mathrm{A}} \rangle -\frac12 & 0\\ 0 & \langle \hat{n}_\mathrm{B} \rangle - \frac12 \end{pmatrix} \nonumber \\ &\quad + g \begin{pmatrix} \langle \hat{X}_\mathrm{A} \rangle & 0 \\ 0 & \langle \hat{X}_\mathrm{B} \rangle \end{pmatrix} \ . \end{align} Here, $\langle \hat{n}_\mathrm{A} \rangle$ ($\langle \hat{n}_\mathrm{B} \rangle$) denotes the occupation on sublattice site A (B). 
We assume the paramagnetic case $\langle \hat{n}_\mathrm{A,B} \rangle = \langle \hat{n}_{\mathrm{A,B},\uparrow} \rangle = \langle \hat{n}_{\mathrm{A,B},\downarrow} \rangle $ and therefore drop the spin indices. $\hat{X}_{\mathrm{A,B}}$ measures the lattice distortion (corresponding to $k=0$) at the respective sublattice sites. We introduce the CDW order parameter \begin{align} \Delta n = \langle\hat{n}_\mathrm{A}\rangle-\langle\hat{n}_\mathrm{B}\rangle \end{align} and the distortion parameter $\Delta X = \langle \hat{X}_\mathrm{A} \rangle -\langle \hat{X}_\mathrm{B} \rangle$. Expressing the MF Hamiltonian~\eqref{eq:ham_mf} with these definitions and transforming to the mp basis then gives the MF Hamiltonian \begin{align} \hat{H}^{\mathrm{MF}}(t) = \sum_{k \in \mathrm{BZ}} \sum_{\sigma} \hat{\vec{c}}^\dagger_{k \sigma} \,\widetilde{\vec{h}}^{\mathrm{MF}}(k,t) \hat{\vec{c}}_{k \sigma} \end{align} with $\hat{\vec{c}}_{k \sigma} = (\hat{c}_{k\sigma},\hat{c}_{k+Q,\sigma})$ and \begin{align} \label{eq:ham_mf_mp} \widetilde{\vec{h}}^{\mathrm{MF}}(k,t) = \begin{pmatrix} \varepsilon_k(t) & \frac{g}{2} \Delta X(t) + \frac{U}{2} \Delta n(t) \\ \frac{g}{2} \Delta X(t) + \frac{U}{2} \Delta n(t) & \varepsilon_{k+Q}(t) \end{pmatrix}\ . \end{align} Here, $\varepsilon_k(t) = \varepsilon_{k-A_F(t)}$ with $\varepsilon_k=-2 J_0 \cos(k a)$ denoting the free dispersion. The pseudospin representation can now be introduced in terms of the Pauli matrices $\gvec{\sigma}^\alpha$ by \begin{align} \hat{S}^\alpha_{k \sigma} = \frac12 \begin{pmatrix} \hat{c}^\dagger_{k\sigma} & \hat{c}^\dagger_{k+Q,\sigma} \end{pmatrix} \gvec{\sigma}^\alpha \begin{pmatrix} \hat{c}_{k\sigma} \\ \hat{c}_{k+Q,\sigma} \end{pmatrix}\ . 
\end{align} Exploiting $\varepsilon_{k+Q}=-\varepsilon_k$, the Hamiltonian can be written as \begin{align} \hat{H}^{\mathrm{MF}}(t) =\sum_{k\in\mathrm{BZ}} \sum_\sigma \left( B^x_{k}(t)\hat{S}^x_{k\sigma} + B^z_{k}(t)\hat{S}^z_{k\sigma} \right)\ , \end{align} where the effective magnetic fields are given by \begin{align} B^x_k(t) = g \Delta X(t) + U \Delta n(t) \ , \ B^z_k(t) = \varepsilon_k(t) - \varepsilon_{k+Q}(t) \ . \end{align} The time-dependent MF equations are solved in two steps. First, the one-body density matrix in thermal equilibrium $\gvec{\rho}_\mathrm{eq} (k)$ is self-consistently computed by diagonalizing Eq.~\eqref{eq:ham_mf} and calculating the order $\Delta n$ and distortion $\Delta X = -(2g/\omega_\mathrm{ph})\Delta n$ parameters, until convergence is reached. Using $\gvec{\rho}(k,t=0)=\gvec{\rho}_\mathrm{eq} (k)$ as initial condition the time evolution is determined by the time stepping \begin{align} \gvec{\rho}(k,t+\Delta t) = \vec{U}_k(t+\Delta t,t) \gvec{\rho}(k,t) \vec{U}^\dagger_k(t+\Delta t,t) \ . \end{align} Here, $\vec{U}_k(t+\Delta t,t)$ denotes the time evolution operator of the MF Hamiltonian, which is computed by fourth-order commutator-free matrix exponentials~\cite{alvermann_high-order_2011}. To determine the self-consistent MF Hamiltonian, the phonon amplitudes $\langle \hat{X}_{\mathrm{A,B}}(t) \rangle$ also need to be propagated. Combining their respective equations of motion, the distortion parameter is obtained from the equation \begin{align} \label{eq:osceq} \frac{1}{2\omega_\mathrm{ph}}\left[\frac{\mathrm{d}^2}{\mathrm{d} t^2} + \omega^2_\mathrm{ph}\right] \Delta X(t) = - g \Delta n(t) \ , \end{align} which we solve using the fourth-order Numerov method with the initial condition $\frac{\mathrm{d}}{\mathrm{d} t}\Delta X(t) = 0$ for $t=0$. 
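A Numerov step for Eq.~\eqref{eq:osceq} can be sketched as follows (a simplified stand-in for the actual implementation; parameter values are illustrative and the function name is ours). Equation~\eqref{eq:osceq} is rewritten as $\ddot{\Delta X} = -\omega_\mathrm{ph}^2 \Delta X - 2\omega_\mathrm{ph} g\, \Delta n(t)$, and the scheme is checked on the free oscillation $\Delta n \equiv 0$:

```python
import numpy as np

def numerov_dX(dn, omega=0.2, g=0.3, h=0.02, dX0=0.0):
    """Integrate (1/(2*omega)) [d^2/dt^2 + omega^2] dX = -g*dn(t),
    i.e. dX'' = -omega**2 * dX + s(t) with s = -2*omega*g*dn,
    using the fourth-order Numerov scheme; initial slope dX'(0) = 0."""
    s = -2.0 * omega * g * np.asarray(dn, dtype=float)
    n = len(s)
    y = np.zeros(n)
    y[0] = dX0
    # second-order Taylor start, consistent with dX'(0) = 0
    y[1] = y[0] + 0.5 * h**2 * (-omega**2 * y[0] + s[0])
    c = h**2 * omega**2 / 12.0
    for i in range(1, n - 1):
        y[i + 1] = (2.0 * y[i] * (1.0 - 5.0 * c)
                    - y[i - 1] * (1.0 + c)
                    + h**2 * (s[i + 1] + 10.0 * s[i] + s[i - 1]) / 12.0
                    ) / (1.0 + c)
    return y

# free oscillation (dn = 0): dX(t) = dX0 * cos(omega * t)
t = np.arange(0, 50, 0.02)
y = numerov_dX(np.zeros_like(t), omega=0.2, g=0.3, h=0.02, dX0=1.0)
assert np.max(np.abs(y - np.cos(0.2 * t))) < 1e-5
```

For a constant $\Delta n$, the scheme reproduces the stationary relation $\Delta X = -(2g/\omega_\mathrm{ph})\Delta n$ used in the equilibrium self-consistency.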
Since the values $\Delta X(t+\Delta t)$ and $\Delta n(t+\Delta t)$ are needed to carry out the step from $t$ to $t+\Delta t$, the time propagation is performed in a predictor-corrector fashion. \subsection{Solution of the Kadanoff-Baym equations \label{subsec:kbe}} Treating the electron-electron interactions up to second order in $U$ is accomplished by the second-Born (2B) approximation to the self-energy. In the original lattice representation, the 2B self-energy reads \begin{align} \label{eq:2b} \Sigma_{ij\sigma}(z,z^\prime) = U^2 G_{ij \sigma}(z,z^\prime) G_{ij\bar{\sigma}}(z,z^\prime) G_{ji \bar{\sigma}}(z^\prime, z) \ . \end{align} Here, $z,z^\prime$ denote the arguments on the Matsubara-Keldysh contour $\mathcal{C}$, and $G_{ij\sigma}$ stands for the one-body Green's function \begin{align} G_{ij\sigma}(z,z^\prime) = -\mathrm{i} \langle \mathcal{T} \hat{d}_{i\sigma}(z) \hat{d}^\dagger_{j\sigma}(z^\prime) \rangle \ , \end{align} with $\mathcal{T}$ the contour-ordering operator. More details on the formalism can be found, for instance, in Ref.~\cite{stefanucci_nonequilibrium_2013}. As discussed in the main text, we employ the local second-Born (2Bloc) approximation, which is obtained from Eq.~\eqref{eq:2b} by replacing the index pair $(ij)$ by the diagonal $(ii)$. Switching to $k$-space, the 2Bloc is cast into a momentum-independent self-energy, \begin{align} \Sigma_{\alpha \alpha \sigma}(z,z^\prime) = U^2 \mathcal{G}_{\alpha\alpha\sigma}(z,z^\prime) \mathcal{G}_{\alpha\alpha\bar\sigma}(z,z^\prime) \mathcal{G}_{\alpha\alpha\bar\sigma}(z^\prime,z) \ , \end{align} where the index $\alpha$ refers to the sublattice site basis and \begin{align} \mathcal{G}_{\alpha\beta\sigma}(z,z^\prime) = \frac{1}{|\mathrm{BZ}|} \int_{\mathrm{BZ}} \mathrm{d} k \, G_{\alpha \beta\sigma}(k; z,z^\prime) \end{align} defines the local Green's function. 
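Schematically, on a discretized two-time grid the 2Bloc self-energy is a pointwise product of local propagators. The following toy sketch (our helper names; paramagnetic case, so the spin-up and spin-down local propagators coincide, and a single sublattice component is considered; random arrays stand in for the contour Green's functions):

```python
import numpy as np

def local_gf(G_k):
    """BZ average of G(k; z, z'): G_k has shape (Nk, Nz, Nz)."""
    return G_k.mean(axis=0)

def sigma_2bloc(G_loc, U):
    """Local second-Born self-energy for one sublattice component:
    Sigma(z, z') = U^2 G(z, z') G(z, z') G(z', z).
    The plain transpose (no conjugation) realizes the argument swap."""
    return U**2 * G_loc * G_loc * G_loc.T

# toy check on random "contour" data
rng = np.random.default_rng(1)
G_k = rng.standard_normal((8, 4, 4)) + 1j * rng.standard_normal((8, 4, 4))
G = local_gf(G_k)
S = sigma_2bloc(G, U=-2.0)
assert S.shape == (4, 4)
assert np.allclose(S[0, 1], 4.0 * G[0, 1]**2 * G[1, 0])
```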
Switching to a matrix notation for the sublattice indices (and dropping the spin index), the equation of motion for the Green's function assumes the standard form \begin{align} \left[\mathrm{i} \partial_z - \vec{h}^\mathrm{MF}(z) \right] \vec{G}(k;z,z^\prime) = \int_\mathcal{C}\!\mathrm{d} z^\dprime \, \gvec{\Sigma}(z,z^\dprime) \vec{G}(k; z^\dprime,z^\prime) \ . \end{align} Projecting onto imaginary and real times and invoking the Langreth rules then yields the Kadanoff-Baym equations (KBEs). The KBEs are solved using an in-house massively parallel computer code based on a fourth-order implicit predictor-corrector algorithm. For the results presented in the main text, the time interval was split into $N_t=3000$ equidistant points, while the imaginary branch (for the nonequilibrium calculations) was represented by $N_\tau=800$ grid points. The Green's function for every $k$-point has to be propagated simultaneously, which is accomplished by a distributed-memory layout. For obtaining converged results, we used $N_k=256$ points in the Brillouin zone. \subsection{Equilibrium spectral function and mean-field fit} \begin{figure}[t] \includegraphics[width=\columnwidth]{Fig6} \caption{Left panel: spectral function $A(k,\omega)$ (summed over bands) within the 2Bloc approximation. The dashed lines represent the fit by the MF model with renormalized parameters $\widetilde{J}_0$ and $\widetilde{U}$. Right panel: band-resolved (green and purple filled curves) and total (gray filled curve) density of states within the 2Bloc approximation. The parameters are, as in the main text, $U=-2$, $\beta=40$. \label{fig:spectral}} \end{figure} Before the KBEs can be solved (see subsec.~\ref{subsec:kbe}), the equilibrium (Matsubara) Green's function needs to be computed. To this end, we solve the corresponding Dyson equation \begin{align} \label{eq:dyson} \vec{G}(k;\tau) &= \vec{g}(k;\tau) \nonumber \\ &\quad + \int^\beta_0\! \mathrm{d} \tau^\prime \int^\beta_0\! 
\mathrm{d} \tau^\dprime \, \vec{g}(k;\tau-\tau^\prime) \gvec{\Sigma}(\tau^\prime-\tau^\dprime) \vec{G}(k;\tau^\dprime) \ . \end{align} Here, $\vec{g}(k,\tau)$ denotes the MF Green's function, while $\gvec{\Sigma}(\tau)$ is the self-energy in the 2Bloc approximation. The Dyson equation~\eqref{eq:dyson} is solved by a combination of Fourier transformation and fifth-order fixed-point iteration to improve the accuracy. A description of the method can be found in Ref.~\cite{schuler_spectral_2017}. As for the nonequilibrium calculations, we use $N_k=256$ $k$-points, whereas $N_\tau=4096$ points on the Matsubara axis were needed for converging the results. For the nonequilibrium calculations, the Matsubara Green's function is defined on the reduced imaginary grid by interpolation. The spectral function $\vec{A}(k,\omega)$ in real-frequency space is obtained by Pad\'e analytic continuation as in Ref.~\cite{schuler_spectral_2017}. The band-integrated spectral function $A(k,\omega)=\sum_\alpha A_{\alpha \alpha}(k;\omega)$ is shown in Fig.~\ref{fig:spectral}. In accordance with the Luttinger-Ward theorem, the broadening due to many-body effects is least pronounced in the vicinity of the chemical potential $\mu =0$, while significant broadening is apparent at the band top and bottom. Since we are in the weak-coupling regime, the main effect of the electronic correlations is a renormalization of the bands. In order to be able to directly compare the dynamics within the MF and 2Bloc approximation, the band renormalization is taken into account in the effective parameters of the MF Hamiltonian~\eqref{eq:ham_mf}, replacing $J_0 \rightarrow \widetilde{J}_0$ and $U \rightarrow \widetilde{U}$. These parameters are determined by fitting the MF band structure to the maximum (with respect to $\omega$) of $A(k,\omega)$, while requiring the order parameter to be identical (see Fig.~\ref{fig:spectral}). The result is $\widetilde{J}_0=0.89 J_0$ and $\widetilde{U}=0.625 U$. 
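The band fit can be sketched as follows (illustrative only: synthetic peak positions stand in for the maxima of $A(k,\omega)$, SciPy's \texttt{curve\_fit} replaces the actual fitting routine, and the gap value is an assumed placeholder). The MF quasiparticle band follows from the eigenvalues of Eq.~\eqref{eq:ham_mf_mp}, $E_\pm(k) = \pm\sqrt{\varepsilon_k^2 + \Delta^2}$:

```python
import numpy as np
from scipy.optimize import curve_fit

def mf_band(k, Jt, Delta, a=1.0):
    """Upper MF quasiparticle band E_+(k) = sqrt(eps_k^2 + Delta^2)
    with eps_k = -2*Jt*cos(k a) (mp basis, where eps_{k+Q} = -eps_k)."""
    eps = -2.0 * Jt * np.cos(k * a)
    return np.sqrt(eps**2 + Delta**2)

# synthetic "peak positions" standing in for the maxima of A(k, w)
k = np.linspace(-np.pi / 2, np.pi / 2, 40)
Jt_true, Delta_true = 0.89, 0.31   # illustrative values
E_data = mf_band(k, Jt_true, Delta_true)

popt, _ = curve_fit(mf_band, k, E_data, p0=[1.0, 0.2])
# the fit recovers the renormalized parameters (up to trivial signs)
assert np.allclose(np.abs(popt), [Jt_true, Delta_true], atol=1e-6)
```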
The good quality of the fit is supported by the almost identical short-time dynamics within the MF and 2Bloc approximation, ensuring that applying the QOCT on the MF level provides optimal pulses for the correlated dynamics, as well. \subsection{Quantum optimal control\label{subsec:qoct}} The optimization of the laser pulses by quantum optimal control theory (QOCT) is performed on the MF level. Since the MF dynamics is described by a nonlinear equation of motion for the one-body density matrix, the usual approach based on an (effective) Schr\"odinger equation (Krotov algorithm) is not applicable. In fact, one has to resort to gradient-free optimization methods because the derivative with respect to the driving field cannot be obtained analytically. One can expect that pulses containing a minimal amount of time-averaged field energy $E_\mathrm{p}=(\epsilon_0/T_\mathrm{p}) \int \mathrm{d} t \, |E_F(t)|^2 = (\epsilon_0/T_\mathrm{p}) \int \mathrm{d} t \, |\dot{A}_F(t)|^2$ -- as required to minimize heating effects -- are relatively smooth functions without strong variations. On the other hand, the search space has to be large enough to find good approximations to the optimal fields. To fulfil these objectives, we parameterize the vector potential by \begin{align} A_F(t) = \sum^{N_b}_{i=1} c_i B_i(t) \ , \end{align} where $B_i(t)$ are fourth-order B-splines with respect to the time interval $[t_0,t_0+T_\mathrm{p}]$. To ensure that the corresponding electric field $E_F(t) = - \dot{A}_F(t)$ vanishes at the end points of the interval and that no net momentum is transferred to the system ($A_F(t_0+T_\mathrm{p})=A_F(t_0)=0$), the boundary coefficients are fixed by $c_1=c_2=c_{N_b-1}=c_{N_b}=0$. For the switching scenario, we are interested in long-time stable dynamics of $\Delta n(t)$. As known from the analysis of the single-cycle pulses, amplitude mode oscillations are expected to be present around a switched value of the order parameter after time $t_1$. 
We thus perform a linear fit $\Delta n_\mathrm{fit}(t) = a (t-t_1) +b$ to the dynamics of $\Delta n(t)$ after the pulse. We then require that (i) the mean value of the order, encoded in $b$, is maximal, while (ii) the average slope $|a|$ should be minimal. The condition (ii) is necessary for the long-time stability of the switched state to ensure no drift can occur at longer time scales. Similarly, for coherent destruction one requires $|b|$ to be minimal. Gathering the B-spline coefficients in the vector $\vec{c}$, the target functional for switching the order from $\Delta n(t=0)<0$ is given by \begin{align} \label{eq:func1} J_\mathrm{switch}[\vec{c}] = -b + \epsilon_1 |a| + \epsilon_2 E_\mathrm{abs} \ , \end{align} while we use \begin{align} \label{eq:func2} J_\mathrm{CD}[\vec{c}] = |b| + \epsilon_1 |a| + \epsilon_2 E_\mathrm{abs} \end{align} for achieving coherent destruction. Here, $\epsilon_1$ is a penalty parameter for the slope, whereas $\epsilon_2$ denotes the penalty with respect to the absorbed energy $E_\mathrm{abs}$. In order to evaluate the functionals~\eqref{eq:func1} or \eqref{eq:func2}, one has to perform the time propagation up to a sufficiently large time $T_\mathrm{max}$ (we set $T_\mathrm{max}=500$) and compute the fitting parameters $a$, $b$ and the absorbed energy. As mentioned above, the gradient with respect to $\vec{c}$ cannot be calculated directly. Therefore, we minimize the functionals by a combination of the Pikaia genetic algorithm~\cite{charbonneau_genetic_1995} for the global search and the NEWUOA algorithm~\cite{pillo_large-scale_2006} for the local refinement of candidate minima. This procedure depends on the parameters $N_b$, $\epsilon_1$, $\epsilon_2$ and the pulse duration $T_\mathrm{p}$. 
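The B-spline parameterization and its boundary conditions can be sketched with SciPy (a hypothetical helper; the clamped knot layout is our choice and the actual code may differ):

```python
import numpy as np
from scipy.interpolate import BSpline

def pulse_vector_potential(c_free, t0=0.0, Tp=10.0, Nb=28, k=3):
    """Vector potential A_F(t) = sum_i c_i B_i(t) built from fourth-order
    (cubic, degree k = 3) B-splines on [t0, t0 + Tp].  The boundary
    coefficients c_1 = c_2 = c_{Nb-1} = c_{Nb} = 0 enforce A_F = 0 and
    dA_F/dt = 0 at both end points."""
    assert len(c_free) == Nb - 4
    # clamped knot vector yielding exactly Nb basis functions of degree k
    interior = np.linspace(t0, t0 + Tp, Nb - k + 1)[1:-1]
    knots = np.concatenate(([t0] * (k + 1), interior, [t0 + Tp] * (k + 1)))
    c = np.concatenate(([0.0, 0.0], c_free, [0.0, 0.0]))
    return BSpline(knots, c, k)

rng = np.random.default_rng(0)
A = pulse_vector_potential(rng.standard_normal(24))
E = A.derivative()   # dA/dt; the sign is irrelevant for the boundary check
for t in (0.0, 10.0):
    assert abs(A(t)) < 1e-12 and abs(E(t)) < 1e-12
```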
\section{Dynamics of phonon-driven charge order} \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{Fig7} \caption{Dynamics of the order parameter $\Delta n(t)$ induced by single-cycle pulses with strength $F_0$ (analogous to Fig.~1(b) in the main text), for different ratios of the contribution to the CDW from e--ph interactions (we fix $\omega_\mathrm{ph}=0.2$).\label{fig:phonon} } \end{figure} \begin{figure*}[t] \includegraphics[width=0.9\textwidth]{Fig8} \caption{Switching dynamics of the CDW order parameter induced by selected optimized pulses in the MF (light blue) and 2Bloc (dark blue) approximation. The panels (a)--(e) correspond to increasing energy penalty $\epsilon_2$; in (a) $\epsilon_2=0$. The black dashed lines indicate the initial and flipped value $\pm\Delta n_0$, while the purple short-dashed lines represent the mean value $\Delta \bar{n}$ within the MF approximation.\label{fig:switching} } \end{figure*} As discussed in the main text, the distinct regimes in the nonequilibrium phase diagram are tightly connected to the driving mechanism of the CDW. In order to corroborate this behavior and, moreover, test the robustness of nonequilibrium features with respect to lattice distortions, we included e--ph couplings. Both the e--e and the e--ph coupling are responsible for the formation of the CDW. The relative contribution from either effect can be captured by the parameter \begin{align} \label{eq:vcdw} V_\mathrm{CDW} \equiv V_\mathrm{ph} + V_\mathrm{e} = \frac{g^2}{\omega_\mathrm{ph}} - \frac{U}{2} \ . \end{align} The parameter~\eqref{eq:vcdw} is related to the interaction energy, as can be seen by computing the total energy in the MF approximation: \begin{align*} E_\mathrm{tot} &= 2\sum_{k} \mathrm{Tr} \left[ \vec{h}^{(0)}(k) \gvec{\rho}(k)\right] + \sum_{k} \mathrm{Tr} \left[ (\vec{h}^{\mathrm{MF}}(k)-\vec{h}^{(0)}(k)) \gvec{\rho}(k)\right] \\ &\equiv E_0 + E_\mathrm{int} \ . 
\end{align*} Expressing the interaction energy by the order and distortion parameters in equilibrium, one finds $E_\mathrm{int} = V_\mathrm{CDW} \Delta n ^2$. Note that an identical value of $V_\mathrm{CDW}$, regardless of the individual contributions of the e--e or e--ph interactions, gives rise to the same value of $\Delta n$ and the gap size. Fixing $V_\mathrm{CDW}=0.625$ (corresponding to the results of the main text), we now vary the ratio $V_\mathrm{ph}/V_\mathrm{CDW}$ and study how the increased contribution of e--ph interactions to the order affects the pulse-induced dynamics. Figure~\ref{fig:phonon} shows the nonequilibrium phase diagram (MF approximation) analogous to Fig.~1(b) in the main text for $V_\mathrm{ph}/V_\mathrm{CDW} = 0.1$ (top), $V_\mathrm{ph}/V_\mathrm{CDW}=0.2$ (middle) and $V_\mathrm{ph}/V_\mathrm{CDW}=1$ (bottom). For a CDW dominated by e--e correlation effects, the different regimes of amplitude mode oscillations, coherent destruction and switching are qualitatively still present, but superimposed with coherent phonon oscillations. It is interesting to see that the lower boundary of the coherent destruction regime represents the fastest way to destroy the order, whereas the ``sweet spot'' exhibits more oscillations. In general, the amplitude of the phonon oscillations increases under stronger driving. The qualitative behavior of the laser-driven nonequilibrium regimes is still present for $V_\mathrm{ph}/V_\mathrm{CDW}=0.2$, although the boundaries are smeared out by the phonon oscillations. For an even larger contribution of the electron-phonon coupling, the dynamics is dominated by the phonons and thus displays the generic behavior of the purely phonon-driven case (bottom panel in Fig.~\ref{fig:phonon}). In this scenario, the persistent oscillations of $\Delta n$ with frequency $\omega_\mathrm{ph}$ are the dominating feature. Neither destruction nor switching of the CDW is possible anymore. 
We conclude that a qualitatively different dynamics of the order parameter for different pulse amplitudes is a clear signature of a predominantly correlation-driven CDW formation. Small e--ph coupling leads to small additional coherent phonon oscillations, but does not suppress the characteristic features discussed in the main text. A larger contribution of the phonons, on the other hand, suppresses any switching behavior. \section{Switching and coherent destruction by optimized pulses} As explained in subsec.~\ref{subsec:qoct}, the pulse optimization with the aim of switching the CDW is performed on the MF level and depends on the number of B-spline coefficients $N_b$, the slope penalty $\epsilon_1$, the penalty with respect to the absorbed energy $\epsilon_2$ and the pulse duration $T_\mathrm{p}$. We performed the pulse optimization for various combinations of these parameters and found that $N_b=28$ is enough to find the optimal pulse shapes. Increasing $N_b$ yields essentially the same pulses with extra oscillations on top. Furthermore, the value of $\epsilon_1$ affects the pulses only weakly since most of the optimized pulses yield a vanishing average slope of $\Delta n(t)$. The pulse length $T_\mathrm{p}$ was varied from $T_\mathrm{p} = 5.0$ to $T_\mathrm{p} =20.0$; we select the best pulses in this range for a fixed value of $\epsilon_2$. \subsection{Switching dynamics} The energy penalty $\epsilon_2$ is the most crucial parameter. Choosing $\epsilon_2=0$ results in very strong pulses, leading to almost perfect switching on the MF level (Fig.~\ref{fig:switching}(a)). However, within the 2Bloc approximation, the huge amount of absorbed energy rapidly destroys the order. Further analysis shows that the system thermalizes at a very high effective temperature shortly after the pulse. Gradually increasing $\epsilon_2$ decreases the switching efficiency (Fig.~\ref{fig:switching}(b--e)) while reducing the energy absorption. 
This leads to a longer lifetime of the switched state within the 2Bloc dynamics. Interestingly, the shape of the vector potential $A_F(t)$ looks quite similar in Fig.~\ref{fig:switching}(c)--(e). It corresponds to the minimization of dephasing, which is explained in the main text. The best compromise between energy absorption and switching is provided by the pulse in Fig.~\ref{fig:switching}(e). We found that applying a smoothing low-pass filter further reduces $E_\mathrm{abs}$, while the short-time dynamics is not altered. This optimal pulse is the one presented and discussed in the main text. Note that increasing $\epsilon_2$ further leads to a suppression of switching, since the requirement to minimize the absorbed energy -- which is zero if no pulse is applied -- starts to dominate. \subsection{Coherent destruction dynamics} An analogous analysis was carried out for the coherent destruction of the CDW order. However, we found that the optimal pulse and the resulting dynamics are very robust against changes of $\epsilon_1$ and $\epsilon_2$. The pulse with the smallest $E_\mathrm{abs}$ is shown in Fig.~\ref{fig:cd}(b) and compared to the dynamics driven by the single-cycle pulse at the ``sweet spot'' (Fig.~\ref{fig:cd}(a)) discussed in the main text. It is interesting to note that the simple single-cycle pulse results in perfect suppression of the order while injecting only a small amount of energy into the system. Correspondingly, the optimized field $A_F(t)$ is qualitatively almost the same as the single-cycle pulse. However, the absorbed energy is reduced, such that the thermalization (Fig.~\ref{fig:cd}(d)) is slower than for the single-cycle pulse (Fig.~\ref{fig:cd}(c)). \begin{figure}[b] \includegraphics[width=\columnwidth]{Fig9} \caption{Dynamics on the 2Bloc level at the sweet spot of coherent destruction: (a) single-cycle pulse, (b) optimized pulse. The insets show the laser fields. 
The corresponding fluctuation measure $\mathcal{F}(t)$ is shown in panels (c) and (d), respectively.\label{fig:cd} } \end{figure} \section{Anisotropic two-dimensional lattice} \begin{figure}[t] \includegraphics[width=\columnwidth]{Fig10} \caption{(a) Sketch of the checkerboard CDW order in an anisotropic 2D square lattice. The shaded area shows the unit cell chosen for the calculations. (b) Full (white square) and reduced (gray shaded) Brillouin zone. (c) Dependence of the order parameter $\Delta n$ on the anisotropy $\delta$. (d) Dynamics induced by the single-cycle pulse at the point of coherent destruction for different values of $\delta$. \label{fig:aniso} } \end{figure} In the main text, we consider a one-dimensional configuration of the lattice. Note that the local approximation to the self-energy leads to generic features of a higher-dimensional system, while the 1D character primarily enters via the free band structure. Most CDW orders observed in materials are, in fact, two-dimensional (typically stripe or checkerboard order). In this section we confirm that our results based on the 1D system are also valid for the 2D case with anisotropic hopping. To be concrete, we consider a square lattice with hopping $J_0$ in the $x$-direction and $(1-\delta) J_0$ in the $y$-direction (see Fig.~\ref{fig:aniso}(a)); $\delta=0$ corresponds to the isotropic 2D system, while $\delta =1$ recovers the 1D limit. The CDW forming in this configuration follows a checkerboard order, corresponding to a nesting vector $\vec{Q}=(\pi/a,\pi/a)$. The derivations in Section~\ref{sec:details} are applicable to the 2D case, as well, after (i) replacing the 1D wave vector $k$ by a vector $\vec{k}$ from the reduced Brillouin zone shown in Fig.~\ref{fig:aniso}(b), and (ii) modifying the MF Hamiltonian (mp basis) to \begin{align} \vec{h}^{(0)}(\vec{k}) = \begin{pmatrix} \varepsilon(\vec{k}) & \frac{U}{2}\Delta n \\ \frac{U}{2}\Delta n & \varepsilon(\vec{k}+\vec{Q}) \end{pmatrix} \ . 
\end{align} Here, \begin{align} \varepsilon(\vec{k}) = -2 J_0 \left(\cos(k_x a) + (1-\delta)\cos(k_y a) \right) \end{align} denotes the original free band structure. We have performed equilibrium calculations with the 2Bloc approximation for different values of $\delta$ (see Fig.~\ref{fig:aniso}(c)). The order parameter $\Delta n$ deviates by less than 10\% from the 1D value in the regime of anisotropy $\delta = 0.7\dots 1$. This relatively large span shows that small deviations from our 1D setup have almost no influence on the results discussed in the main text. Furthermore, we have analyzed the pulse-induced dynamics in the 2D scenario. As an example, we show the dynamics of the order parameter at the ``sweet spot" of coherent destruction within the MF approximation in Fig.~\ref{fig:aniso}(d). We applied the same pulse as for the 1D case (polarization along the $x$ direction). One observes similar behavior as for the equilibrium properties: for moderately strong anisotropy, the time evolution of $\Delta n$ is very close to the 1D case. Therefore, the different regimes of the nonequilibrium phase diagram discussed in the main text are also relevant for the anisotropic 2D system.
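The perfect-nesting property underlying the checkerboard order, $\varepsilon(\vec{k}+\vec{Q}) = -\varepsilon(\vec{k})$ for $\vec{Q}=(\pi/a,\pi/a)$, holds for any anisotropy $\delta$ and can be verified directly (illustrative NumPy sketch; function name and parameter values are ours):

```python
import numpy as np

def eps2d(kx, ky, J0=1.0, delta=0.8, a=1.0):
    """Anisotropic square-lattice dispersion:
    hopping J0 along x and (1 - delta) * J0 along y."""
    return -2.0 * J0 * (np.cos(kx * a) + (1.0 - delta) * np.cos(ky * a))

# perfect nesting at half filling: eps(k + Q) = -eps(k) for Q = (pi, pi),
# independently of the anisotropy delta (here a = 1)
Q = np.pi
for delta in (0.0, 0.5, 1.0):
    kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, 21),
                         np.linspace(-np.pi, np.pi, 21))
    assert np.allclose(eps2d(kx + Q, ky + Q, delta=delta),
                       -eps2d(kx, ky, delta=delta))
```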
\section{Introduction} A critical transition (or tipping point) is a rapid, sudden change of a time-dependent system. For the introduction we shall rely on this intuitive notion; the mathematical development starts in Section \ref{sec:fast_slow}. Typical examples of critical transitions are drastic changes in the climate \cite{Lentonetal,Alleyetal}, in ecological systems \cite{Clarketal,Carpenteretal}, in medical conditions \cite{ElgerLehnertz,Venegasetal} or in economics \cite{HongStein,HuangWang}. Reviews of recent progress in developing early-warning signals for these critical transitions from an applied perspective can be found in \cite{Schefferetal,Scheffer}. The goal of a mathematical theory should be to provide qualitative and quantitative conditions to check whether a drastic change in a dynamical system can be predicted before it occurs; note that certain transitions are obviously very difficult to predict, for example, due to large noise effects \cite{DitlevsenJohnsen} or non-smooth transitions \cite{HastingsWysham}. A basic assumption in many applications is that the underlying process is deterministic but is subject to small random fluctuations. Furthermore, one often assumes that the change occurs rapidly in comparison to the current state dynamics. Elementary remarks on how the mathematical theory of stochastic fast-slow systems can be used to encapsulate these hypotheses can be found in \cite{KuehnCT1}. In particular, several one-parameter normal form models were studied and a more detailed link between rising variance \cite{CarpenterBrock}, rising autocorrelation \cite{Dakosetal}, time series analysis and dynamical models was pointed out. For additional references on critical transitions we refer the reader to \cite{Schefferetal} and \cite{KuehnCT1}.\\ We outline our results without stating detailed technical assumptions. It will be assumed that the main dynamics near a critical transition is governed by an ordinary differential equation (ODE). 
A classification of which bifurcations are critical transitions, based on a definition suggested in \cite{KuehnCT1}, is explained. In a suitable singular limit this classification is a simple exercise dealing with all bifurcations up to codimension two. Some of the details for this classification are explained since one has to determine, at least once, which conditions on the fast and slow subsystems of a multi-scale system near higher-codimension bifurcations lead to trajectories that resemble critical transitions observed in applications. To model the random fluctuations, stochastic differential equations (SDEs) with sufficiently small white noise are used. We calculate asymptotic formulas for all possible covariance matrices associated to sample paths approaching a critical transition. The setup for the calculations is straightforward and is based on normal form assumptions, approximation by Ornstein-Uhlenbeck processes and the solution of a few algebraic equations. An error estimate for the asymptotic expansions is proven for the fold bifurcation by analyzing stochastic difference processes and applying elementary moment estimates, thereby avoiding more advanced techniques \cite{BerglundGentz} for a certain regime. The focus on the fold bifurcation is justified as it is one of the most frequently encountered critical transitions \cite{ThompsonSieber2,GuttalJayaprakash2}. For the same reason, we also provide higher-order asymptotic expansions for the variance as doubly singular limit expansions with small noise and small time scale separation for the approach towards a fold point. Then we use the mathematical results in a wide variety of models. For each application the theoretical predictions are compared with numerical results. We briefly describe which important results are obtained within the examples. 
In a box-model of atmospheric and ocean circulation we test different approaches to estimate the variance from a given time series and suggest a new method motivated by fast-slow systems. In a discrete epidemic spreading model a moment expansion is used to simplify an adaptive network and to analyze the onset of an epidemic. It is shown that predictability in adaptive networks can be improved by focusing on link dynamics instead of node dynamics. A model from systems biology is used to explain the effect of two critical transitions linked in a three-time-scale system. A predator-prey model illustrates the effect of multiple system parameters which can potentially hide early-warning signals that are usually expected to occur in ecology. The last example treats buckling of a spring in the context of a biomechanics experiment. The model for this experiment shows how parameter-dependent non-additive noise influences, and systematically changes, observed early-warning signs. The examples from epidemics, biomechanics and systems biology also seem to be among the first (or even the first) ones where early-warning signs for critical transitions have been applied in the respective fields.\\ In summary, our theoretical results combine well-known elementary mathematical tools from bifurcation theory, fast-slow systems, real analysis, stochastic differential equations, probability, asymptotic analysis, numerical continuation/integration and time series analysis to systematize some of the aspects of critical transitions. In this way, we make progress towards the major open question to develop a unified critical transitions theory \cite{Schefferetal}. Although no complicated technical steps are treated, we hope that our work forms a starting point to motivate new mathematical insights into predictability for dynamical systems; see also Section \ref{sec:conclusions}. 
Our second contribution is to show that abstract critical transitions theory can yield very useful conclusions with immediate value for applications.\\ The paper is structured as follows. Section \ref{sec:fast_slow} describes the background from deterministic fast-slow systems. Section \ref{sec:nforms} contains the classification results. Section \ref{sec:SDE_fs} reviews theory for stochastic fast-slow systems based on which we prove the error estimate for asymptotic moment results near folds. In Section \ref{sec:variance} the leading-order asymptotic scaling laws for the covariance are obtained and in Section \ref{sec:fold_asymp} these results are refined for the fold. Section \ref{sec:applications} contains the five important examples. Section \ref{sec:conclusions} provides an outlook how the framework developed here could be extended.\\ \textit{Convention:} Whenever a citation with detailed page numbers at the beginning of a result (Theorem, Lemma, etc.) is given, then the statement and proof can be found in the reference. \section{Brief Review of Fast-Slow Systems} \label{sec:fast_slow} We recall the necessary definitions and results from multiple time scale dynamics \cite{Desrochesetal,Jones,MisRoz,Grasman} that are required to define critical transitions. A \texttt{fast-slow system} of ODEs is given by \be \label{eq:gen1} \begin{array}{rcrcl} \epsilon \frac{dx}{ds}&=&\epsilon \dot{x}&=&f(x,y,\epsilon),\\ \frac{dy}{ds}&=& \dot{y}&=& g(x,y,\epsilon),\\ \end{array} \ee where $0<\epsilon\ll1$, $x\in\R^m$ are \texttt{fast variables} and $y\in\R^n$ are \texttt{slow variables}. The maps $f:\R^{m+n+1}\ra \R^m$ and $g:\R^{m+n+1}\ra \R^n$ are assumed to be sufficiently smooth. If $f,g$ do not depend on $\epsilon$ we omit the $\epsilon$-argument and write, e.g., $f(x,y)$ instead of $f(x,y,\epsilon)$. 
Changing in \eqref{eq:gen1} from the \texttt{slow time} $s$ to the \texttt{fast time} $t=s/\epsilon$ gives \be \label{eq:gen2} \begin{array}{rcrcr} \frac{dx}{dt}&=&x'&=&f(x,y,\epsilon),\\ \frac{dy}{dt}&=&y'&=& \epsilon g(x,y,\epsilon).\\ \end{array} \ee Henceforth $(\dot{~})$ will denote differentiation with respect to the slow time $s$ and prime differentiation with respect to the fast time $t$. The \texttt{singular limit} $\epsilon\ra 0$ in \eqref{eq:gen1} yields the \texttt{slow subsystem} \be \label{eq:sss} \begin{array}{rcl} 0&=& f(x,y,0),\\ \dot{y}&=&g(x,y,0),\\ \end{array} \ee which is a \texttt{differential-algebraic equation} restricted to the \texttt{critical manifold} $C_0:=\{(x,y)\in\R^{m+n}:f(x,y,0)=0\}$. The \texttt{fast subsystem} is obtained as the singular limit of \eqref{eq:gen2} \be \label{eq:fss} \begin{array}{rcl} x'&=& f(x,y,0),\\ y'&=& 0,\\ \end{array} \ee where the slow variables can be viewed as parameters. The flows generated by \eqref{eq:sss} and \eqref{eq:fss} are called the \texttt{slow flow} and the \texttt{fast flow} respectively. A point $p\in C_0$ is an equilibrium point of the fast subsystem. The critical manifold is \texttt{normally hyperbolic} at $p\in C_0$ if the $m\times m$ matrix $D_xf(p)$ has no eigenvalues with zero real parts. In this case, the implicit function theorem provides a map $h_0:\R^n\ra \R^m$ that describes $C_0$, locally near $p$, as a graph $C_0=\{(x,y)\in\R^{m+n}:x=h_0(y)\}$. Then the slow subsystem \eqref{eq:sss} can be written more concisely as $\dot{y}=g(h_0(y),y)$. If all eigenvalues of $D_xf(p)$ are negative (positive) then $C_0$ is \texttt{attracting} (\texttt{repelling}); other normally hyperbolic critical manifolds are of \texttt{saddle-type}. Observe that $C_0$ is attracting at $p$ if and only if the fast subsystem has a stable hyperbolic equilibrium at $p$. 
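To make these singular-limit objects concrete, the following minimal numerical sketch (our illustration, not taken from the cited references) verifies them for the planar fold example $\epsilon\dot{x}=y-x^2$, $\dot{y}=-1$ of Figure \ref{fig:fig1}: the attracting branch of $C_0$, its normal hyperbolicity, and the reduced slow flow $\dot{y}=g(h_0(y),y)$.

```python
import numpy as np

# Fold example of Figure 1: eps*dx/ds = y - x^2, dy/ds = -1 (m = n = 1).
f = lambda x, y: y - x**2          # fast vector field
Dxf = lambda x, y: -2.0 * x        # the 1x1 "matrix" D_x f

# Attracting branch of the critical manifold C_0: x = h0(y) = sqrt(y), y > 0.
h0 = lambda y: np.sqrt(y)

ys = np.linspace(0.1, 4.0, 50)
assert np.allclose(f(h0(ys), ys), 0.0)   # the graph lies in C_0
assert np.all(Dxf(h0(ys), ys) < 0)       # normally hyperbolic and attracting

# Reduced slow subsystem on C_0: dy/ds = g(h0(y), y) = -1, so y(s) = y(0) - s;
# the slow flow drives y toward the fold point at y = 0.
y_of_s = lambda s, y0: y0 - s
assert np.isclose(y_of_s(1.5, 4.0), 2.5)
```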
Fenichel's Theorem provides a complete description of the dynamics for normally hyperbolic invariant manifolds for sufficiently smooth vector fields $(f,g)$. To state the result, we recall that the \texttt{Hausdorff distance} between two sets $V,W\subset \R^{m+n}$ is given by \benn d_H(V,W)=\max\left\{\sup_{v\in V} \inf_{w\in W}\|v-w\|,\sup_{w\in W} \inf_{v\in V}\|v-w\|\right\} \eenn where $\|\cdot\|$ denotes the Euclidean norm. \begin{thm}[\texttt{Fenichel's Theorem}~\cite{Fenichel4}] \label{thm:fenichel1} Suppose $S = S_0$ is a compact normally hyperbolic submanifold (possibly with boundary) of the critical manifold $C_0$. Then for $\epsilon > 0$ sufficiently small there exists a locally invariant manifold $S_\epsilon$ diffeomorphic to $S_0$. $S_\epsilon$ has a Hausdorff distance of $\cO(\epsilon)$ from $S_0$ and the flow on $S_\epsilon$ converges to the slow flow as $\epsilon \to 0$. $S_\epsilon$ is normally hyperbolic and has the same stability properties with respect to the fast variables as $S_0$ (attracting, repelling or saddle type). \end{thm} $S_\epsilon$ is called a \texttt{slow manifold} and is usually not unique. In regions that remain at a fixed distance from the boundary of $S_\epsilon$, all manifolds satisfying Theorem \ref{thm:fenichel1} lie at a Hausdorff distance $\cO(e^{-K/\epsilon})$ from each other for some $K > 0$ with $K = \cO(1)$. The choice of representative will be irrelevant for the asymptotic analysis we are interested in here; see also \cite{MKKR}.
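The Hausdorff distance is straightforward to evaluate on sampled sets. The sketch below (our illustration; the shift of size $\epsilon$ in the fast direction is an artificial stand-in for the $\cO(\epsilon)$ gap between $S_0$ and $S_\epsilon$) computes $d_H$ directly from the definition.

```python
import numpy as np

def hausdorff(V, W):
    """Hausdorff distance between finite point sets (rows are points)."""
    D = np.linalg.norm(V[:, None, :] - W[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Sampled critical manifold C_0 = {x = y^2} and a surrogate "slow manifold"
# shifted by eps in the fast direction (stand-in for the O(eps) distance).
y = np.linspace(-1.0, 1.0, 400)
eps = 0.01
C0 = np.column_stack([y**2, y])
Ceps = np.column_stack([y**2 + eps, y])
d = hausdorff(C0, Ceps)
assert abs(d - eps) < 1e-6   # d_H equals the shift size for this example
```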
If the choice of subset $S_0$ is understood then we also write $C_\epsilon$ for the slow manifold associated to $C_0$ and refer to $C_\epsilon$ as ``the'' \texttt{slow manifold}.\\ \begin{figure}[htbp] \centering \psfrag{R1}{(R2)} \psfrag{R2}{(R1)} \psfrag{x}{$x$} \psfrag{y}{$y$} \psfrag{C0}{$C_0$} \psfrag{g0}{$\gamma_0$} \psfrag{fold}{$\label{eq:fold} \left\{\begin{array}{rcl}\epsilon \dot{x}&=&y-x^2\\ \dot{y}&=&-1 \\ \end{array}\right.$} \includegraphics[width=0.85\textwidth]{./fig1.eps} \caption{\label{fig:fig1}Critical transition at a fold bifurcation of the fast subsystem. The critical manifold $C_0$ splits into a repelling part (dashed grey) and an attracting part (solid black). A typical candidate trajectory $\gamma_0$ (red) is shown. We have also sketched the two different regions: (R1, blue) where normal hyperbolicity of the critical manifold holds and (R2, green) near the bifurcation point.} \end{figure} A \texttt{candidate trajectory} $\gamma_0$ is a concatenation of slow and fast subsystem trajectories; see Figure \ref{fig:fig1}. More precisely we define a candidate as a homeomorphic image $\gamma_0(t)$ of a partitioned interval $a = t_0 < t_1 < \cdots < t_m = b$, where the image of each subinterval $\gamma_0(t_{j-1},t_j)$ is a trajectory of either the fast or the slow subsystem and the image $\gamma_0(a,b)$ has an orientation that is consistent with the orientations on each subinterval $\gamma_0(t_{j-1},t_j)$ induced by the fast and slow flows. Note that the intervals $(t_j,t_{j+1})$ do not necessarily correspond to the time parametrizations of the fast or slow subsystem; to achieve such a parametrization one has to pick a convention such as using the slow time and compactifying infinite time intervals as necessary. If consecutive images $\gamma_0(t_{j-1},t_j)$ and $\gamma_0(t_{j},t_{j+1})$ are trajectories for different subsystems then we say that $\gamma_0(t_j)$ is a \texttt{transition point}. 
\begin{defn} \label{defn:ct} Let $p=(x_p,y_p)\in C_0$ be a point where the critical manifold $C_0$ is not normally hyperbolic. We say that $p$ is a \texttt{critical transition} if there is a candidate $\gamma_0$ so that \begin{itemize} \item [(C1)] $\gamma_0(t_{j-1},t_j)$ is contained in a normally hyperbolic attracting submanifold of $C_0$, \item [(C2)] $p=\gamma_0(t_j)$ is a transition point, \item [(C3)] and $\gamma_0(t_{j-1},t_j)$ is oriented from $\gamma_0(t_{j-1})$ to $\gamma_0(t_j)$. \end{itemize} \end{defn} Definition \ref{defn:ct} was suggested in \cite{KuehnCT1}. It is related to the concept of ``hard'' or ``catastrophic'' loss of stability (\cite{Kuznetsov}, p.87 or \cite{ArnoldEncy}, p.36) but does not coincide with it. Note that Definition \ref{defn:ct} is entirely based upon the singular limit $\epsilon=0$. Definition \ref{defn:ct} is simple, easy to verify for a system, concretely includes the focus on the candidate orbit occurring in an actual time series and also seems to represent all the requirements laid out in \cite{Schefferetal}. Note carefully that (C1) excludes slow canard orbit segments in repelling parts of the critical manifold but see Section \ref{sec:conclusions} for possible extensions. \begin{prop} \label{prop:negative} Suppose $p=(x_p,y_p)$ is Lyapunov stable with respect to the fast subsystem, then there is no critical transition at $p$. \end{prop} \begin{proof} Suppose $\gamma_0$ is a candidate that satisfies (C1) of Definition \ref{defn:ct}. If $p= \gamma_0(t_j)$ is a transition point then the orientation condition (C3) implies that $\gamma_0(t_{j},t_{j+1})$ is an orbit segment in the fast subsystem starting from $p$. Since $p$ is Lyapunov stable we have reached a contradiction. \end{proof} In this paper, we are interested in the approach of trajectories to critical transitions as illustrated in Figure \ref{fig:fig1}.
This approach towards a critical transition can be subdivided into two main regions: (R1) Fenichel's Theorem applies near a normally hyperbolic critical manifold and (R2) Fenichel's Theorem fails near the bifurcation point. Note that Fenichel's Theorem implies that near a local bifurcation point $(x,y)=(x_p,y_p)$ of the fast subsystem the region (R2) shrinks to $(x_p,y_p)$ as $\epsilon \ra 0$. By making $\epsilon$ sufficiently small, we can therefore focus on (R1). Whenever we consider decreasing $y\ra y_p$ we shall make the assumption that $\epsilon$ has been chosen small enough so that we stay inside region (R1). For example, for a \texttt{fold point} \cite{KruSzm1} it is known that the size of (R2) scales like \benn (x,y)\sim(\cO(\epsilon^{1/3}),\cO(\epsilon^{2/3}))\in \R^2 \eenn as $\epsilon \ra 0$; see also Lemma \ref{lem:asympCeps}. Therefore we would assume that $\epsilon^{1/3}\ll y$ as $y$ gets small. We formalize our assumptions by restricting the analysis to a compact domain contained in the region (R1): \begin{itemize} \item[(A0)] Fast-slow systems will be considered on a compact domain $\cD(\epsilon)=\cD=\cD_x\times \cD_y\subset \R^m\times \R^n$ that depends smoothly on $\epsilon$. $\cD(\epsilon)$ is chosen so that an attracting slow manifold $C^a_\epsilon$ is contained in $\cD(\epsilon)$, the intersection $\partial \cD(\epsilon)\cap C^a_\epsilon$ is transverse and $\cD(\epsilon)$ is contained in the basin of attraction of $C^a_\epsilon$; the slow manifold $C^a_\epsilon$ is given locally as a graph $C^a_\epsilon=\{(x,y)\in \cD(\epsilon):x=h_\epsilon(y)\}, \text{ for $h_\epsilon:\cD_y\ra \cD_x$.}$ Furthermore, fast subsystem local bifurcation points will lie on $\partial \cD(0)$ and asymptotics with respect to $y\ra y_p$ are chosen depending on $\epsilon$ so that normal hyperbolicity holds; see Figure \ref{fig:fig1}. \end{itemize} Note that this means that all scaling estimates we derive are restricted to a bounded domain.
Within this bounded domain no other attracting critical manifold perturbs to a slow manifold. \section{Fast Subsystem Normal Forms and Critical Transitions} \label{sec:nforms} We assume familiarity with the normal form approach to bifurcation theory (\cite{GH}, p.138) and apply it in the singular limit to the fast subsystem viewing $y\in\R^n$ as parameters. The number of slow variables $y\in\R^n$ is chosen as the codimension of the bifurcation. We are going to check which bifurcations are critical transitions in the sense of Definition \ref{defn:ct}. Note carefully that this classification, although complete on the singular limit level $\epsilon=0$, has interesting possible extensions which are discussed in Section \ref{sec:conclusions}. Assume without loss of generality that the bifurcation point is at $x=(0,\ldots,0)=:0$ and $y=(0,\ldots,0)=:0$. To reduce the analysis to normal forms we will assume that all the necessary genericity conditions (non-degeneracy and transversality) are satisfied \cite{Kuznetsov}: \begin{itemize} \item[(A1)] $f\in C^{r}(\R^{m+n+1},\R^m)$ where $r\geq 1$ is chosen according to the differentiability required by normal form theory. We also assume that $g\in C^2(\R^{m+n+1},\R^n)$. \item[(A2)] The genericity conditions for bifurcations hold so that normal form theory applies. \end{itemize} The only generic codimension one bifurcation with $x\in \R^1$ is the \texttt{fold bifurcation} with normal form (\cite{Kuznetsov}, p.84) \be \label{eq:nf_fold} f(x,y)=-y-x^2. \ee Considering the dynamics of the slow variable as $g(x,y)$ \cite{KuehnCT1} we get the fast-slow system \be \label{eq:nf_fold1} \begin{array}{rcl} x'&=&-y-x^2,\\ y'&=&\epsilon g(x,y),\\ \end{array} \ee where $y\in\R^1$ since we have a codimension one bifurcation. The critical manifold for \eqref{eq:nf_fold1} is $C_0=\{(x,y)\in\R^2:x=\pm \sqrt{-y}=:h_0^\pm(y),y\leq 0\}$. $C_0\cap\{x=\sqrt{-y},y<0\}$ is attracting and $C_0\cap \{x=-\sqrt{-y},y<0\}$ is repelling. 
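A quick numerical check of \eqref{eq:nf_fold1} (our sketch, with the concrete choice $g\equiv 1$) confirms the stability assignment of the two branches, exhibits the escape at the fold in the sense of Definition \ref{defn:ct}, and recovers the $\cO(\epsilon^{2/3})$ size of region (R2) from Section \ref{sec:fast_slow}.

```python
import numpy as np

# Normal form (eq. nf_fold1) with the concrete choice g = 1:
#   x' = -y - x^2,  y' = eps.
f = lambda x, y: -y - x**2

# Branches of C_0: x = +/- sqrt(-y) for y <= 0; D_xf = -2x on the branch.
yb = np.linspace(-4.0, -0.01, 100)
assert np.all(-2.0 * np.sqrt(-yb) < 0)   # x = +sqrt(-y): attracting
assert np.all(+2.0 * np.sqrt(-yb) > 0)   # x = -sqrt(-y): repelling

def y_at_escape(eps, ds=1e-5):
    """Forward Euler; return y when x first crosses 0 past the fold."""
    x, y = 2.0, -4.0                     # start on the attracting branch
    while x > 0.0:
        x += ds * f(x, y) / eps
        y += ds                          # slow drift g = 1 pushes y to 0
    return y

y1, y2 = y_at_escape(1e-3), y_at_escape(1e-4)
assert y1 > 0 and y2 > 0                 # escape occurs just past the fold
# (R2) scaling |y| ~ eps^(2/3): the ratio should be close to 10^(2/3) = 4.64
assert 3.5 < y1 / y2 < 6.0
```

The asserted ratio bound is deliberately loose since the $\epsilon^{2/3}$ law holds only to leading order.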
The linearization $D_xf(h_0^+(y),y)$ around the attracting branch of the critical manifold will be crucial for the calculations in Section \ref{sec:variance} as it describes the dynamics in the region (R1); therefore we will calculate/record this linearization for each critical transition. \begin{lem} \label{lem:fold} If $g(0,0)>0$ then \eqref{eq:nf_fold1} has a critical transition at $(x,y)=(0,0)$. \end{lem} The proof is obvious and similar results hold for the pitchfork and transcritical normal forms \bea f(x,y)&=&yx+sx^3,\quad D_xf(h_0(y),y)=y,\quad \text{for $s=\pm 1$, (\cite{Kuznetsov}, p.284)}, \label{eq:nf_pitchfork}\\ f(x,y)&=&yx-x^2,\quad D_xf(h_0(y),y)=y. \quad \text{(\cite{GH}, p.149)} \label{eq:nf_transcritical}. \eea \begin{lem} If $g(0,0)>0$ then \eqref{eq:nf_pitchfork} has a critical transition at $(x,y)=(0,0)$ if and only if the pitchfork bifurcation is subcritical ($s=1$). If $g(0,0)\neq0$ then \eqref{eq:nf_transcritical} has a critical transition at $(x,y)=(0,0)$. \end{lem} The remaining one-dimensional fast subsystem bifurcation is the codimension-two \texttt{cusp bifurcation}. The normal form is (\cite{Kuznetsov}, p.304-305) \be \label{eq:nf_cusp} f(x,y)=y_1+y_2x+sx^3, \qquad \text{for $s=\pm 1$} \ee where the fast dynamics $x'=f(x,y)$ is augmented with two-dimensional slow dynamics $y'=\epsilon g(x,y)$, $y=(y_1,y_2)\in\R^2$. The critical manifold for \eqref{eq:nf_cusp} is $C_0=\{(x,y)\in\R^3:0=y_1+y_2x+sx^3\}$. Due to the two-dimensional slow flow it is slightly less obvious to determine under which conditions the cusp bifurcation is a critical transition. \begin{lem} There is no critical transition for \eqref{eq:nf_cusp} at $(x,y)=(0,0)$ if $s=-1$. If $s=1$ then \eqref{eq:nf_cusp} has a critical transition at $(x,y)=(0,0)$ if and only if $g_2(0,0)>0$ and $g_1(0,0)=0$. \end{lem} \begin{proof} First consider the case $s=-1$. At $y_1=0=y_2$ the fast subsystem is $x'=-x^3$.
It is easy to see that $x=0$ is asymptotically stable and Proposition \ref{prop:negative} implies that there cannot be a critical transition at $(x,y)=(0,0)$. For $s=1$ the fast subsystem is $x'=x^3$ so that a candidate orbit $\gamma_0$ can have a segment $\gamma_0(t_j,t_{j+1})$ in the fast subsystem oriented from $\gamma_0(t_j)$ to $\gamma_0(t_{j+1})$. To see that there exists an attracting critical manifold connecting to $(x,y)=(0,0)$ we need the unfolding of a cusp bifurcation. The linearization of \eqref{eq:nf_cusp} is $D_xf(x,y)=y_2+3x^2$ and the stability of the slow manifold changes at fold points when $D_xf|_{C_0}=0$. Given the two equations \benn \begin{array}{lcl} 0 &=& y_1+y_2x+x^3\\ 0 &=& y_2+3x^2\\ \end{array} \eenn the variable $x$ can be eliminated which yields the classical cusp curve. After a projection into the $(y_1,y_2)$-plane it is given by $\Gamma:=\{(y_1,y_2)\in\R^2:4y_2^3+27y_1^2=0\}$. A repelling subset of the critical manifold is $C^r_0:=C_0\cap\{4y_2^3+27y_1^2>0\}$. The set $C_0\cap\{4y_2^3+27y_1^2<0\}$ splits into three branches corresponding to the three solutions of $f(x,y)=0$ where two branches $C^{r\pm}_0$ are repelling and one branch $C^a_0$ is attracting; observe that $y_2<0$ for any of the three branches. Now consider a candidate $\gamma_0$ with $\gamma_0(t_{j-1},t_j)\subset C^a_0=\{x=h_0(y)\}$. The slow flow on $C^a_0$ is given by \be \label{eq:sf_cusp} \begin{array}{lcl} \dot{y}_1&=&g_1(h_0(y),y),\\ \dot{y}_2&=&g_2(h_0(y),y).\\ \end{array} \ee Since the $y_2$-component of $\gamma_0(t_{j-1})$ is negative we must have $g_2(0,0)>0$. Furthermore, we know that trajectories of \eqref{eq:sf_cusp} on $C^a_0$ reach $(y_1,y_2)=(0,0)$ if and only if $4y_2^3+27y_1^2<0$ holds for the $y$-components of $\gamma_0(t_{j-1},t_j)$. Considering the branches of $\Gamma$ (for $y_2<0$) we have \benn y_1=\Gamma_\pm(y_2)=\pm\sqrt{-\frac{4}{27}y_2^3}\qquad\Rightarrow \quad \frac{d}{dy_2}\Gamma_\pm(y_2)=\mp\frac{y_2^2}{\sqrt{-3y_2^3}}=\Gamma_\pm'(y_2).
\eenn In particular, we find that $\lim_{y_2\ra 0}\Gamma_\pm'(y_2)=0$ which implies that $g_1(0,0)=0$ if the candidate $\gamma_0$ reaches $(x,y)=(0,0)$. Hence there exists $\gamma_0$ as required by Definition \ref{defn:ct} if and only if $g_2(0,0)>0$ and $g_1(0,0)=0$. \end{proof} The attracting part of the critical manifold $C^a_0$ (as introduced in the previous proof) is $C^a_0=\{(x,y_1,y_2)\in\R^3:x=h_0(y)\}$. The linearization is \be \label{eq:A0_cusp} D_xf(h_0(y),y)=y_2+3h_0(y)^2=\cO_y(y_2) \quad \text{as $y\ra 0$.} \ee where the notation $\cO_y(\cdot)$ indicates asymptotic scaling as $y\ra 0$ under the assumption that Fenichel Theory is still valid; see also Section \ref{sec:fast_slow} where this is referred to as region (R1). The asymptotic scaling in \eqref{eq:A0_cusp} holds since points on $C^a_0$ satisfy $y_2+3x^2<0$ because on $C^a_0$ we have \benn y_2<0,\qquad x\in[-(-y_2/3)^{1/2},(-y_2/3)^{1/2}],\qquad y_1=-xy_2-x^3. \eenn Therefore, $x=h_0(y)$ grows at most like $\sqrt{-y_2}$ as $y\ra 0$ and the scaling law in \eqref{eq:A0_cusp} follows. This concludes our discussion of one-dimensional fast subsystem bifurcations.\\ For two fast subsystem variables consider the codimension-one \texttt{Hopf bifurcation} normal form (\cite{Kuznetsov}, p.98) \be \label{eq:nf_Hopf} \begin{array}{lcl} f_1(x,y)&=&yx_1-x_2+l_1x_1(x_1^2+x_2^2),\\ f_2(x,y)&=&x_1+yx_2+l_1x_2(x_1^2+x_2^2),\\ \end{array} \ee where $l_1$ is the \texttt{first Lyapunov coefficient}. The critical manifold for \eqref{eq:nf_Hopf} is $C_0=\{(x,y)\in\R^3:x=0\}$ where $C_0\cap \{y<0\}$ is attracting and $C_0\cap \{y>0\}$ is repelling and the linearization is \be \label{eq:A0_Hopf} D_xf(0,y)=\left(\begin{array}{cc}y & -1 \\ 1 & y \\ \end{array}\right). \ee \begin{lem} If $g(0,0)>0$ then \eqref{eq:nf_Hopf} has a critical transition at $(x,y)=(0,0)$ if and only if the Hopf bifurcation is subcritical ($l_1>0$).
\end{lem} For vanishing first Lyapunov coefficient ($l_1=0$) a codimension-two \texttt{generalized Hopf} (or \texttt{Bautin}) bifurcation occurs with normal form (\cite{Kuznetsov}, p.313) \be \label{eq:nf_Bautin} \begin{array}{lcl} f_1(x,y)&=&y_1x_1-x_2+y_2x_1(x_1^2+x_2^2) +l_2x_1(x_1^2+x_2^2)^2,\\ f_2(x,y)&=&x_1+y_1x_2+y_2x_2(x_1^2+x_2^2) +l_2x_2(x_1^2+x_2^2)^2,\\ \end{array} \ee where $l_2=\pm 1$ is the \texttt{second Lyapunov coefficient}. The critical manifold is $C_0=\{(x,y)\in\R^4:x=0=:h_0(y)\}$. The linearization $D_xf(h_0(y),y)$ coincides with the linearization \eqref{eq:A0_Hopf} for the Hopf bifurcation upon replacing $y$ by $y_1$. \begin{lem} \label{eq:Baut_lem1} The Bautin bifurcation is not a critical transition if $l_2<0$. \end{lem} \begin{proof} Without loss of generality let $l_2=-1$ then the fast subsystem at $(y_1,y_2)=(0,0)$ is \be \label{eq:Bautin_neg_concl} \begin{array}{lcl} x_1'&=&-x_1(x_1^2+x_2^2)^2=:\tilde{f}_1(x),\\ x_2'&=&-x_2(x_1^2+x_2^2)^2=:\tilde{f}_2(x).\\ \end{array} \ee where $\tilde{f}=(\tilde{f}_1,\tilde{f}_2)$. Define a function $V:\R^2\ra \R$ by $V(x_1,x_2):=x_1^2+x_2^2$. Observe that $V(x)>0$ for $x\neq 0$ and \benn \frac{d}{dt}V(x)=D_xV(\tilde{f}(x))=-(2x_1^2+2x_2^2)(x_1^2+x_2^2)^2<0 \eenn for $x\neq 0$. Therefore $V(x)$ is a Lyapunov function and $x=0$ is asymptotically stable as an equilibrium point of \eqref{eq:Bautin_neg_concl} and Proposition \ref{prop:negative} finishes the proof. \end{proof} \begin{lem} \label{lem:Bautin} If $l_2>0$ then the Bautin bifurcation is a critical transition if and only if $g_1(0,0)>0$ and either (a) $g_2(0,0)\neq 0$ or (b) $g_2(0,0)=0$ and $\frac{\partial g_2}{\partial y_2}(0,0)<\frac12$. \end{lem} \begin{proof} The critical manifold splits into two 2-dimensional planes $C^a_0=C_0\cap \{y_1<0\}$ and $C^r_0=C_0\cap \{y_1>0\}$ where $C^a_0$ is attracting and $C^r_0$ is repelling.
The condition $g_1(0,0)>0$ implies that the slow flow $\dot{y}=g(0,y)$ has trajectories that start in $C^a_0$ and reach $(x,y)=(0,0)$ in finite time. This guarantees the existence of a candidate $\gamma_0$ satisfying (C1) and (C3) of Definition \ref{defn:ct}. The proof of Lemma \ref{eq:Baut_lem1} shows, upon reversal of time in equation \eqref{eq:Bautin_neg_concl}, that $x=0$ is an unstable equilibrium point of the fast subsystem at $y=0$ when $l_2=1$. We note from the unfolding of a Bautin bifurcation (see \cite{Kuznetsov}, p.314) that saddle-node bifurcations of limit cycles for the fast subsystem occur on the curve $\text{LPC}:=\{(y_1,y_2)\in\R^2:y_1=\frac14y_2^2,y_2<0\}$. The conditions on the slow flow guarantee that the candidate orbit enters the fast subsystem region without limit cycles and with an unstable equilibrium point; this region is given by \benn \left\{(y_1,y_2)\in\R^2:y_2<0, y_1>\frac14y_2^2\right\}\cup\left\{(y_1,y_2)\in\R^2:y_2>0,y_1>0\right\}. \qedhere \eenn \end{proof} The last codimension two bifurcation with two fast variables is the \texttt{Bogdanov-Takens bifurcation} with normal form (\cite{GH}, p.365) \be \label{eq:BT_nform} \begin{array}{lcl} f_1(x,y)&=&x_2,\\ f_2(x,y)&=&y_1+y_2x_2+x_1^2+sx_1x_2,\\ \end{array} \ee where $s=\pm 1$. The critical manifold is $C_0=\left\{(x,y)\in\R^4: x_2=0, x_1=\pm \sqrt{-y_1}\right\}$ so that we always require $y_1\leq 0$. \begin{lem} \label{lem:BT} The Bogdanov-Takens bifurcation \eqref{eq:BT_nform} is a critical transition for $s=-1$ if and only if $g_2(0,0)>0$ and $g_1(0,0)=0$ and for $s=1$ if and only if (a) $g_1(0,0)>0$ or (b) $g_1(0,0)=0$, $g_2(0,0)>0$, $\frac{\partial g_2}{\partial y_2}(0,0)<-2$.
\end{lem} \begin{proof} As usual we consider the fast subsystem at $y=0$ \be \label{eq:csup_point} \begin{array}{lcl} x_1'&=&x_2,\\ x_2'&=&x_1^2+sx_1x_2.\\ \end{array} \ee The theory of non-hyperbolic equilibria in planar analytic vector fields (\cite{Perko}, p.151; see also \cite{AndronovLeontovichGordonMaier}) implies that $x=0$ is a cusp point. Hence there exists a candidate $\gamma_0$ with a fast-subsystem orbit segment $\gamma_0(t_j,t_{j+1})$ oriented from $\gamma_0(t_j)=(0,0)$ to $\gamma_0(t_{j+1})$. It remains to show when we can approach $(x,y)=(0,0)$ via the slow flow on an attracting critical manifold. The linearization around the critical manifold is \be \label{eq:A0_BT} D_xf|_{C_0}=\left(\begin{array}{cc} 0 & 1 \\ \pm2\sqrt{-y_1} & y_2\pm s\sqrt{-y_1}\\\end{array}\right) \ee with $\text{Tr}(D_xf|_{C_0})=y_2\pm s\sqrt{-y_1}$ and $\det(D_xf|_{C_0})=-2(\pm\sqrt{-y_1})$. Hence the critical manifold is attracting if and only if $y_2\pm s\sqrt{-y_1}<0$ and $2(\pm\sqrt{-y_1})<0$. For $s=-1$ this yields the set \benn C^a_0=\{(x_1,x_2,y_1,y_2)\in\R^4:y_1<0,x_2=0,x_1=-\sqrt{-y_1},y_2<-\sqrt{-y_1}\}. \eenn A candidate orbit starting in $C^a_0$ will reach $(x,y)=(0,0)$ if and only if $g_2(0,0)>0$ and $g_1(0,0)=0$ where the second condition is required since the curve $\{y_2=-\sqrt{-y_1}\}$ approaches the origin tangentially i.e. $\frac{d}{dy_2}(-y_2^2)|_{y=0}=0$. The second case for $s=1$ is similar and follows using the symmetry $(x_1,x_2,y_1,y_2,t)\mapsto (x_1,-x_2,y_1,-y_2,-t)$. \end{proof} The linearization for the Bogdanov-Takens bifurcation has already been recorded in \eqref{eq:A0_BT} but notice that we must have the condition $y_2<\mp s\sqrt{-y_1}$ to be on $C^a_0$. The asymptotic result as $y\ra 0$ for the linearization is \be \label{eq:A0_BT1} D_xf|_{C_0\cap C^a_0}=\left(\begin{array}{cc} 0 & 1 \\ \pm2\sqrt{-y_1} & \cO_y(\sqrt{-y_1})\\\end{array}\right). 
\ee The two remaining codimension-two bifurcations (fold-Hopf and Hopf-Hopf) require three and four fast dimensions. The \texttt{fold-Hopf} (or \texttt{Gavrilov-Guckenheimer}) bifurcation has normal form (\cite{Kuznetsov}, p.338) \be \label{eq:fH_nform} \begin{array}{lcl} f_1(x,y)&=& y_1+x_1^2+s(x_2^2+x_3^2),\\ f_2(x,y)&=& y_2x_2-\omega x_3+\theta x_1x_2-x_1x_3+x_1^2x_2,\\ f_3(x,y)&=& \omega x_2+y_2x_3+x_1x_2+\theta x_1x_3+x_1^2x_3,\\ \end{array} \ee where $s=\pm 1$ and $\theta=\theta(y)$ satisfies $\theta(0)\neq 0$ and $\omega\neq 0$. The critical manifold is given by \benn C_0=\{(x,y)\in\R^5:x_2=0=x_3,x_1=\pm\sqrt{-y_1},y_1\leq 0\}. \eenn \begin{lem} \label{lem:fH} The fold-Hopf bifurcation is a critical transition \begin{itemize} \item for $\theta(0)>0$, $s=1$ if and only if (a) $g_1(0,0)>0$ or (b) $g_1(0,0)=0$ and $g_2(0,0)>0$, \item for $\theta(0)>0$, $s=-1$ if and only if $g_1(0,0)>0$ and $g_2(0,0)<J_2(0,0)$ where $J_2(0,0)$ is the $y_2$-component of the tangent vector to the ``cycle blow-up curve'' (cf. \cite{Kuznetsov}, p.343), \item for $\theta(0)<0$, $s=1$ if and only if $g_1(0,0)=0$ and $g_2(0,0)>0$, \item for $\theta(0)<0$, $s=-1$ if and only if $g_1(0,0)=0$ and $g_2(0,0)>0$. \end{itemize} \end{lem} \begin{proof} The same techniques as before will apply so we just sketch the proof. 
The fast subsystem at $y=0$ is \be \label{eq:fH_fss} \begin{array}{lcl} x_1'&=& x_1^2+s(x_2^2+x_3^2),\\ x_2'&=& -\omega x_3+\theta x_1x_2-x_1x_3+x_1^2x_2,\\ x_3'&=& \omega x_2+x_1x_2+\theta x_1x_3+x_1^2x_3.\\ \end{array} \ee Changing to cylindrical coordinates $(x_2,x_3)=(r\cos\phi,r\sin \phi)$ in \eqref{eq:fH_fss} and neglecting the angular component $\phi$, since it is always a neutral direction with respect to attraction and repulsion for the critical manifold of equilibrium points, we get a two-dimensional system \be \label{eq:fH_fss1} \begin{array}{lcl} x_1'&=& x_1^2+sr^2,\\ r'&=& r(\theta x_1+x_1^2).\\ \end{array} \ee It can be checked that the origin $(x_1,r)=(0,0)$ is unstable for \eqref{eq:fH_fss1}. Therefore we can find a candidate that leaves the bifurcation point in a fast direction. The attracting part of the critical manifold is computed from the linearization \be \label{eq:A0_fH} D_xf|_{C_0}=\left(\begin{array}{ccc} \pm 2\sqrt{-y_1} & 0 & 0 \\ 0 & y_2\pm\theta\sqrt{-y_1} & -\omega \mp\sqrt{-y_1} \\ 0 & \omega\pm \sqrt{-y_1} & y_2\pm\theta\sqrt{-y_1}\\ \end{array}\right) \ee and is given by $C^a_0=C_0\cap \{x=-\sqrt{-y_1},y_2<\theta\sqrt{-y_1}\}$. The conditions on the slow flow $\dot{y}=g=(g_1,g_2)$ can be derived from the unfolding of the fold-Hopf bifurcation (see \cite{Kuznetsov}, p.339-345). \end{proof} The linearization is given by \eqref{eq:A0_fH}; we note that the condition of the approach via $C^a_0$ means that the leading order approximation to $D_xf|_{C_0\cap C_0^a}$ as $(y_1,y_2)\ra (0^-,0)$ is given by \be \label{eq:A0_fH1} D_xf|_{C_0\cap C_0^a}=\left(\begin{array}{ccc} \pm 2\sqrt{-y_1} & 0 & 0 \\ 0 & \cO_y(\sqrt{-y_1}) & -\omega \\ 0 & \omega & \cO_y(\sqrt{-y_1})\\ \end{array}\right). \ee The last case we are going to consider is the \texttt{Hopf-Hopf bifurcation}. We shall not discuss the complicated unfolding (\cite{Kuznetsov}, p.351-370; \cite{GH}, p.396-411) in detail to show when the Hopf-Hopf bifurcation is a critical transition.
A normal form in polar coordinates $(r_1,r_2,\theta_1,\theta_2)=(r,\theta)$ is (\cite{Kuznetsov}, p.358) \be \label{eq:HH_nform} \begin{array}{lcl} f_1(r,\theta,y)&=& r_1(y_1+p_{11}r_1^2+p_{12}r_2^2+s_1r_2^4),\\ f_2(r,\theta,y)&=& r_2(y_2+p_{21}r_1^2+p_{22}r_2^2+s_2r_1^4),\\ f_3(r,\theta,y)&=& \omega_1,\\ f_4(r,\theta,y)&=& \omega_2,\\ \end{array} \ee where $\omega_{1,2}$ are the imaginary parts of the eigenvalues at the bifurcation point $y=0$; $p_{ij}$ and $s_{1,2}$ are further parameters. We note that all parameters also depend on the slow variables $y$ but are usually assumed to be non-zero at the bifurcation point. Observe from \eqref{eq:HH_nform} that the critical manifold is \benn C_0=\{(r,\theta,y)\in\R^4\times \R^2:r_1=0=r_2\}=\{(x,y)\in\R^4\times \R^2:x_j=0\text{ for $j=1,2,3,4$}\}. \eenn To study whether candidate orbits can leave the fast subsystem for $y=0$ we would have to study the nonlinear stability of the origin depending on the parameters. It is not difficult to see that the Hopf-Hopf bifurcation is not always a critical transition depending on parameter values but there are cases when it is a critical transition. Instead of providing this detailed study (which can be inferred from the unfoldings in \cite{Kuznetsov}) we shall only state one important linearization \be \label{eq:A0_HH} D_xf(x,y)|_{C_0\cap C_0^a}= \left(\begin{array}{cccc}y_1 & -\omega_1 & 0 & 0 \\ \omega_1 & y_1 & 0 & 0 \\0 & 0 & y_2 & -\omega_2 \\ 0 & 0 & \omega_2 & y_2\\\end{array}\right). \ee It will always be assumed for \eqref{eq:A0_HH} that $\omega_{1,2}\neq 0$; note that we also exclude resonances $k\omega_1=l\omega_2$ for $k+l\leq3$ as non-resonance conditions are non-degeneracy conditions for the Hopf-Hopf bifurcation ({cf.} assumption (A2)). We remark that the main bifurcation phenomena of interest near a Hopf-Hopf bifurcation are global orbits (limit cycles and tori) which would not be captured by our local analysis anyway. 
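Since the attraction rates enter the analysis only through the spectra of these linearizations, a short check (our sketch) confirms the eigenvalues of \eqref{eq:A0_Hopf} and \eqref{eq:A0_HH}: real parts given by the slow variables, imaginary parts by the rotation frequencies.

```python
import numpy as np

def A_hopf(y):
    # Linearization (eq. A0_Hopf) on the critical manifold x = 0
    return np.array([[y, -1.0], [1.0, y]])

def A_hopfhopf(y1, y2, w1, w2):
    # Block-diagonal linearization (eq. A0_HH)
    A = np.zeros((4, 4))
    A[:2, :2] = [[y1, -w1], [w1, y1]]
    A[2:, 2:] = [[y2, -w2], [w2, y2]]
    return A

ev = np.linalg.eigvals(A_hopf(-0.3))           # eigenvalues y +/- i
assert np.allclose(np.sort(ev.real), [-0.3, -0.3])
assert np.allclose(np.sort(ev.imag), [-1.0, 1.0])

ev = np.linalg.eigvals(A_hopfhopf(-0.2, -0.1, 1.0, np.sqrt(2.0)))
assert np.allclose(np.sort(ev.real), [-0.2, -0.2, -0.1, -0.1])
assert np.all(ev.real < 0)   # attracting for y1, y2 < 0; criticality at y = 0
```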
\begin{table}[htbp] \centering \begin{tabular}{|l|l|l|c|} \hline \textbf{Name} & \textbf{N.-Form} & \textbf{Critical, if...} & \textbf{$D_xf|_{C_0\cap C^a_0}=:A_0(y)$} \\ \hline Fold & Eq. \eqref{eq:nf_fold} & $g>0$ & $ -2\sqrt{-y}$ \\ \hline Pitchfork & Eq. \eqref{eq:nf_pitchfork} & $s=1$, $g>0$ & $y$ \\ \hline Transcr. & Eq. \eqref{eq:nf_transcritical} & $g\neq 0$ & $y$ \\ \hline Hopf & Eq. \eqref{eq:nf_Hopf} & $l_1>0$, $g>0$ & $\left(\begin{array}{cc}y & -1 \\ 1 & y \\ \end{array}\right)$ \\ \hline Cusp & Eq. \eqref{eq:nf_cusp} & $s=1$, $g_1=0$, $g_2>0$ & $\cO_y(y_2)$ \\ \hline Bautin & Eq. \eqref{eq:nf_Bautin} & $\begin{array}{l} l_2>0,~ g_1>0 \\ \text{and (a) } g_2\neq 0 \text{ or}\\ \text{(b) }g_2=0,~\partial_{y_2}g_2<1/2\\ \end{array}$ & $\left(\begin{array}{cc}y_1 & -1 \\ 1 & y_1 \\ \end{array}\right)$ \\ \hline Bog.-Tak. & Eq. \eqref{eq:BT_nform} & $\begin{array}{l}s=-1, ~g_1=0,~ g_2>0 \\ s=1 \text{ and (a) }g_1>0 \text{ or}\\\text{(b) }g_1=0,g_2>0,\partial_{y_2}g_2<-2 \\ \end{array}$& $\left(\begin{array}{cc} 0 & 1 \\ \pm2\sqrt{-y_1} & \cO_y(\sqrt{-y_1})\\ \end{array}\right)$ \\ \hline Fold-Hopf & Eq. \eqref{eq:fH_nform} & $\begin{array}{l}\theta<0, ~g_1=0,~ g_2>0 \\ \theta>0,s=1 \text{ and (a) } g_1>0, \\ \text{or (b) }g_1=0,g_2>0\\ \theta>0,s=-1,g_1>0,g_2<J_2\\ \end{array}$ & $\left(\begin{array}{ccc} \pm 2\sqrt{-y_1} & 0 & 0 \\ 0 & \cO_y(\sqrt{-y_1}) & -\omega \\ 0 & \omega & \cO_y(\sqrt{-y_1})\\ \end{array}\right)$ \\ \hline Hopf-Hopf & Eq. \eqref{eq:HH_nform} & $\begin{array}{l}\text{special case only} \\ \end{array}$ & $\left(\begin{array}{cccc}y_1 & -\omega_1 & 0 & 0 \\ \omega_1 & y_1 & 0 & 0 \\0 & 0 & y_2 & -\omega_2 \\ 0 & 0 & \omega_2 & y_2\\\end{array}\right)$ \\ \hline \end{tabular} \caption{\label{tab:det_res}Results for fast subsystem bifurcations. The additional hypotheses on the slow flow $\dot{y}=g(x,y)$ at $(x,y)=(0,0)$ are abbreviated and we always understand $g_j$ as $g_j(0,0)$ for $j=1,2$ and $g$ as $g(0,0)$ in this table.
The Hopf-Hopf bifurcation has not been analyzed in detail and only a particular case is stated. The last column records the linearization around the attracting branch of the critical manifold.} \end{table} Having finished this exercise, it is now clear which \emph{local} fast subsystem bifurcation points are critical transitions under suitable slow flow conditions. We record the results developed in Lemmas \ref{lem:fold}-\ref{lem:fH} as well as the resulting linearizations $D_xf|_{C_0\cap C^a_0}$ in Table \ref{tab:det_res} where we introduced the shorthand notation $A_0(y):=D_xf(h_0(y),y)=D_xf|_{C_0\cap C^a_0}$. Let us point out again that the classification results are for the singular limit $\epsilon=0$. Detailed unfoldings for the deterministic case for $\epsilon>0$ are known for the fold, pitchfork, transcritical and Hopf bifurcations \cite{KruSzm1,KruSzm3,KruSzm4,Neishtadt1}. Partial results are available for the Bogdanov-Takens bifurcation \cite{Chiba1} and the cusp \cite{BroerKaperKrupa} is work in progress. Section \ref{sec:conclusions} provides an overview of where future work is needed. \section{Sample Paths and Moments for Stochastic Fast-Slow Systems} \label{sec:SDE_fs} Let $\{W_s\}_{s\geq 0}$ be a $k$-dimensional Brownian motion on a probability space $(\Omega,\mathcal{F},\P)$. Consider the \texttt{fast-slow stochastic differential equation (fast-slow SDE)} \be \label{eq:gen_SDE} \begin{array}{lcl} dx_s&=&\frac1\epsilon f(x_s,y_s)ds+\frac{\sigma}{\sqrt{\epsilon}}F(x_s,y_s)dW_s,\\ dy_s&=& g(x_s,y_s)ds.\\ \end{array} \ee which is understood as an It\^{o}-SDE \cite{Oksendal}. Noise acting on the slow variables $y$ will not be considered explicitly but it is implicitly included in all of our results as it appears as a higher-order term (\cite{BerglundGentz}, p.145; \cite{KuehnCT1}, p.1026).
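For orientation, a sample path of \eqref{eq:gen_SDE} can be generated with a basic Euler--Maruyama scheme. The sketch below (our illustration, not part of the cited theory; fold drift $f(x,y)=-y-x^2$, slow drift $g\equiv 1$, additive noise $F\equiv 1$, fixed $\epsilon$ and $\sigma$) shows the path fluctuating around the attracting branch $h_0(y)=\sqrt{-y}$ while $y$ is still well away from the fold.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path(eps, sigma, ds=1e-4, y0=-1.0, s_end=0.9):
    """Euler-Maruyama for dx = f/eps ds + sigma/sqrt(eps) dW, dy = g ds,
    with f(x,y) = -y - x^2, g = 1, F = 1 (additive noise)."""
    n = int(s_end / ds)
    x, y = np.sqrt(-y0), y0                 # start on the attracting branch
    xs = np.empty(n)
    for i in range(n):
        x += ds * (-y - x**2) / eps + sigma * np.sqrt(ds / eps) * rng.standard_normal()
        y += ds
        xs[i] = x
    return xs

xs = sample_path(eps=0.01, sigma=0.005)
ys = -1.0 + 1e-4 * (np.arange(xs.size) + 1)
half = xs.size // 2                         # y in [-1, -0.55]: far from fold
dev = xs[:half] - np.sqrt(-ys[:half])
assert np.max(np.abs(dev)) < 0.2            # path hugs the slow manifold
```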
In addition to the assumptions (A0)-(A1) that hold for the deterministic part of \eqref{eq:gen_SDE} we require the following hypothesis: \begin{itemize} \item[(A3)] $F\in C^2(\R^{m+n},\R^{m\times k})$ and the \texttt{noise level} $\sigma=\sigma(\epsilon)$ depends continuously on $\epsilon$. \item[(A4)] We consider \texttt{small noise} with $\lim_{\epsilon \ra 0}\sigma(\epsilon)=0$. \end{itemize} To understand the effect of a deterministic smooth invertible normal form transformation (coordinate change) $u(x,y)=(X,Y)\in\R^m\times \R^n$, with $u\in C^r(\R^m\times \R^n,\R^m\times \R^n)$, we need the following result which follows directly from It\^{o}'s formula (\cite{Oksendal}, p.44). \begin{lem} Consider the fast variable equation for \eqref{eq:gen_SDE} \benn dx_s=\frac1\epsilon f(x_s,y_s)ds+\frac{\sigma}{\sqrt{\epsilon}}F(x_s,y_s)dW_s \eenn then, using the notations $z_s=(x_s,y_s)$ and $Z_s=(X_s,Y_s)$, we have \bea \label{eq:stoch_normal} dX^{(i)}_s&=&\left[ \frac{1}{\epsilon}\sum_{j=1}^{m} \frac{\partial u^{(i)}}{\partial x_j}f^{(j)}(u^{-1}(Z_s))+\sum_{j=1}^n \frac{\partial u^{(i)}}{\partial y_j} g^{(j)}(u^{-1}(Z_s))+\cO\left(\frac{\sigma^2}{\epsilon}\right)\right]ds+\frac{\sigma}{\sqrt\epsilon}F^{(i)}(u^{-1}(Z_s))dW_s\nonumber\\ &=:& \left[\frac{1}{\epsilon}\tilde{f}^{(i)}(X_s,Y_s)+\cO(1)+\cO\left(\frac{\sigma^2}{\epsilon}\right)\right]ds+\frac{\sigma}{\sqrt\epsilon}F^{(i)}(X_s,Y_s)dW_s \eea where the superscripts $(i), (j)$ denote the $i$-th resp. $j$-th row/component. \end{lem} Since $g(x_s,y_s)=\cO(1)$ is smaller than the first term, the only term that could be of leading order and obstruct the transformation to normal form for the deterministic part of \eqref{eq:gen_SDE} is of order $\cO(\sigma^2/\epsilon)$. By (A4) we have $\sigma^2(\epsilon)/\epsilon\ll 1/\epsilon$ which implies that the third term is also of higher-order after the normal form transformation in comparison to the deterministic $\cO(1/\epsilon)$-term.
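The order counting above can be reproduced symbolically. The sketch below (ours; the quadratic coordinate change $u(x)=x+x^2$ is a hypothetical example of a smooth invertible transformation near $x=0$, combined with the fold drift) isolates the It\^{o} correction term and confirms it is of order $\sigma^2/\epsilon$, hence negligible against the $\cO(1/\epsilon)$ drift under (A4).

```python
import sympy as sp

x, y, eps, sigma = sp.symbols('x y epsilon sigma', positive=True)
f = -y - x**2            # fast drift (fold normal form), scaled by 1/eps below
u = x + x**2             # hypothetical smooth invertible change near x = 0

# Ito's formula for X = u(x) with dx = f/eps ds + (sigma/sqrt(eps)) dW gives
# drift(X) = u'(x) f/eps + (1/2) u''(x) sigma^2/eps.
ito_drift = sp.diff(u, x) * f / eps + sp.Rational(1, 2) * sp.diff(u, x, 2) * sigma**2 / eps
correction = sp.expand(ito_drift - sp.diff(u, x) * f / eps)
assert sp.simplify(correction - sigma**2 / eps) == 0   # the O(sigma^2/eps) term
```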
We now \emph{formally truncate} \eqref{eq:stoch_normal} by discarding the two terms of order lower than $\cO(1/\epsilon)$ as well as the polynomial terms appearing in $\tilde{f}^{(i)}(X_s,Y_s)$ which are of higher-order than the leading normal form terms ({e.g.}~for the fold $X^2-Y+\cO(Y^2,XY,X^3)+\cO(\epsilon)+\cO(\sigma^2/\epsilon)\approx X^2-Y$). On the basis of this formal truncation we now work with \eqref{eq:gen_SDE} where $f$ is chosen from the set of deterministic normal forms discussed in Section \ref{sec:nforms}. There are several interesting remarks regarding the formal truncation; see also Section 8.\\ \textit{Remark 1:} In the deterministic case on the fast time scale, discarding higher-order polynomial and $\cO(\epsilon)$ terms is well understood for \emph{generic} fast subsystem bifurcations as shown {e.g.}~in (\cite{SzmolyanWechselberger1}, Proposition 2.1, Section 4.1; \cite{KruSzm3}, equation (2.5), Section 2.4). Intuitively this is also clear since small perturbations do not change the unfolding of 1- or 2-parameter generic bifurcations for general smooth vector fields \cite{Wiggins}.\\ \textit{Remark 2:} For the deterministic ($\sigma=0$) pitchfork and transcritical bifurcations, which are not generic for general smooth vector fields, the $\cO(1)$-term in \eqref{eq:stoch_normal} is relevant as shown {e.g.}~in (\cite{KruSzm4}, Lemma 2.1, Theorem 2.1) in a region of the type (R2) near the singularity. The stochastic early-warning signs in a normally hyperbolic attracting region, such as (R1), are not expected to depend upon these terms (see the discussion of the attracting regime in \cite{BerglundGentz6}) but we do not verify this here and work with the formal truncation.\\ \textit{Remark 3:} It might be possible to weaken the assumption (A4) and to give a rigorous proof for a suitable ``equivalence'' of a given SDE and its normal form.
For the deterministic case, topological equivalence is known but for the stochastic case one needs different concepts such as \texttt{random normal form transformations} as suggested by Arnold and co-workers \cite{ArnoldRDS}.\\ Once the SDE \eqref{eq:gen_SDE} has been transformed into normal form we study \texttt{sample paths} $(x_s,y_s)$ that solve \eqref{eq:gen_SDE} as suggested in \cite{BerglundGentz}. By (A0) there exists a deterministic attracting slow manifold \benn C^a_\epsilon=\{(x,y)\in\R^m\times \R^n:x=h_\epsilon(y)\} \eenn for $h_\epsilon:\cD_y\ra \cD_x$. The deviation of sample paths from this deterministic slow manifold is $\xi_s=x_s-h_\epsilon(y_s)$ and the variational SDE for $\xi_s$ is \bea \label{eq:deviate_SDE} d\xi_s&=&dx_s-D_yh_\epsilon(y_s)~dy_s \\ &=& \frac1\epsilon\left[ f(h_\epsilon(y_s)+\xi_s,y_s)-\epsilon D_yh_\epsilon(y_s)~g(h_\epsilon(y_s)+\xi_s,y_s)\right]ds+\frac{\sigma}{\sqrt\epsilon}F(h_\epsilon(y_s)+\xi_s,y_s) dW_s. \nonumber \eea Let $y_s^{\det}$ denote the deterministic solution of \eqref{eq:gen_SDE} (i.e. a solution for $\sigma=0$). For $\xi_s=0$ the drift term in \eqref{eq:deviate_SDE} satisfies the \texttt{invariance equation} \cite{ZagarisKaperKaper} for a slow manifold \be \label{eq:inv2} f(h_\epsilon(y^{\det}_s),y^{\det}_s)-\epsilon D_yh_\epsilon(y^{\det}_s)~g(h_\epsilon(y^{\det}_s),y^{\det}_s)=0. \ee Linearizing \eqref{eq:deviate_SDE} around $\xi_s=0$ and replacing $y_s$ by $y_s^{\det}$ yields a lowest-order system for the process $(\xi^l_s,y_s)$ given by \be \label{eq:lin_SDE} \begin{array}{lcl} d\xi^l_s &=& \frac1\epsilon [D_xf(h_\epsilon(y^{\det}_s),y^{\det}_s)-\epsilon D_yh_\epsilon(y^{\det}_s)~D_xg(h_\epsilon(y^{\det}_s),y^{\det}_s)]\xi^l_s ds\\ &&+\frac{\sigma}{\sqrt\epsilon} F(h_\epsilon(y^{\det}_s),y^{\det}_s)dW_s,\\ dy^{\det}_s&=&g(h_\epsilon(y^{\det}_s),y^{\det}_s)ds. 
\end{array} \ee For notational simplicity we let \bea A_\epsilon(y^{\det}_s)&:=&D_xf(h_\epsilon(y^{\det}_s),y^{\det}_s)-\epsilon D_yh_\epsilon(y^{\det}_s)~D_xg(h_\epsilon(y^{\det}_s),y^{\det}_s)\label{eq:Aeps_def},\\ F_\epsilon(y^{\det}_s)&:=&F(h_\epsilon(y^{\det}_s),y^{\det}_s)\label{eq:Feps_def}. \eea Note carefully that for $\epsilon=0$ we get the matrix $A_0(y)=D_xf(h_0(y),y)$ which is precisely the linearization recorded in Table \ref{tab:det_res}. We shall always assume that initial conditions for \eqref{eq:lin_SDE} are deterministic and given by $(\xi^l_0,y_0)=(0,y_0)$ which corresponds to starting on the deterministic slow manifold. Now we can state an important result about the covariance $\text{Cov}(\xi^l_s)$ of the linearized process. \begin{lem}[\cite{BerglundGentz}, p.146-147] \label{lem:BG_lem} Let $X_s:=\sigma^{-2}\text{Cov}(\xi^l_s)$. Then $X_s$ satisfies a fast-slow ODE \be \label{eq:lin_fs_ODE} \begin{array}{rcl} \epsilon \dot{X}&=&A_\epsilon(y)X+XA_\epsilon(y)^T+F_\epsilon(y)F_\epsilon(y)^T,\\ \dot{y}&=& g(h_\epsilon(y),y). \end{array} \ee Furthermore, for $0<\epsilon\ll 1$, the critical manifold for \eqref{eq:lin_fs_ODE} is attracting for $y\in \cD_y$ and is given by \benn \cC_0=\{X\in \R^{m\times m}:A_0(y)X+XA_0(y)^T+F_0(y)F_0(y)^T=0\}. \eenn \end{lem} Fenichel's Theorem provides an associated slow manifold $\cC_\epsilon=\{X=H_\epsilon(y)\}$ for $H_\epsilon:\R^n\ra \R^{m\times m}$. Assuming that the matrix $H_\epsilon(y)$ is invertible and that the operator norm $\|H_\epsilon^{-1}(y)\|$ is uniformly bounded for $y\in \cD_y$ one can define the \texttt{covariance neighborhood} \benn \cB(r):=\left\{(x,y)\in\cD:[x-h_\epsilon(y)]^T\cdot H_\epsilon(y)^{-1}[x-h_\epsilon(y)]<r^2\right\}.
\eenn Define the \texttt{first-exit time} of the original process $(x_s,y_s)$, starting at $s=s_0$, from a set $\cA$ as \benn \tau_\cA:=\inf\{s\in[s_0,\infty):(x_s,y_s)\notin \mathcal{A},(x_0,y_0)\in \mathcal{A}\} \eenn where $\cA$ is chosen so that $\tau_\cA$ is a stopping time with respect to the filtration generated by $\{(x_s,y_s)\}_{s\geq s_0}$. \begin{thm}[\cite{BerglundGentz1}, p.149-150] \label{thm:BG} Sample paths stay inside $\cB(r)$ with high probability. More precisely, there exist $K(s,\epsilon,\sigma)$ and $\kappa>0$ such that $\P\left\{\tau_{\cB(r)}<\min(s,\tau_{\cD_y})\right\}\leq K(s,\epsilon,\sigma)e^{-\kappa r^2/(2\sigma^2)}$, where the pre-factor $K(s,\epsilon,\sigma)$ grows at most polynomially in its arguments as $(\epsilon,\sigma)\ra (0,0)$ and $s\ra \I$. \end{thm} The main conclusion of Theorem \ref{thm:BG} is that sample paths near normally hyperbolic attracting slow manifolds are \texttt{metastable}, {i.e.}~they stay near the manifold for exponentially long times except when the slow dynamics moves the system near a fast subsystem bifurcation point so that the stopping time $\tau_{\cD_y}$ is reached. Theorem \ref{thm:BG} does not immediately guarantee that we can use moments from the linearized process $\xi^l_s$ to approximate the moments of the nonlinear process $\xi_s$. For an approach to this problem, re-consider the general fast-slow SDE \eqref{eq:gen_SDE}. The associated slow flow ODE is $dy^0_s=g(h_0(y^0_s),y^0_s)ds$. Define $x^0_s:=h_0(y^0_s)$ and observe that the solutions $(x_s,y_s)$ of \eqref{eq:gen_SDE} depend implicitly on $\epsilon$. A complementary result to Theorem \ref{thm:BG} by Kabanov and Pergamenshchikov provides a convenient convergence in probability of the process $(x_s,y_s)$ to $(x^0_s,y^0_s)$ as $\epsilon \ra 0$. \begin{thm}[\cite{KabanovPergamenshchikov}, p.45-46] \label{thm:KP} Suppose (A0)-(A2) and (A4) hold. We start at $s=s_0$ and consider a final time $S>0$ such that $(x_s,y_s)$ has not left $\cD$.
Then for any $s\in[s_0,S]$ \benn \sup_{0\leq s\leq S}|x_s-x^0_s|\stackrel{\P}{\ra} 0\qquad \text{and}\qquad \sup_{0\leq s\leq S}|y_s-y^0_s|\stackrel{\P}{\ra} 0 \eenn as $\epsilon\ra 0$ where $\stackrel{\P}{\ra}$ indicates convergence in probability. \end{thm} As a direct corollary to this result, the linearized process $\xi^l_s$ also approximates $\xi_s$ in probability as both processes tend to the same deterministic limit as $\epsilon\ra 0$. \begin{prop} \label{prop:conv} Under the assumptions (A0)-(A2) and (A4) we have $\sup_{0\leq s\leq S}|\xi_s-\xi^l_s|\stackrel{\P}{\ra} 0$, as $\epsilon \ra 0$. In particular, we have convergence in distribution $\xi_s \stackrel{d}{\ra} \xi^l_s$ as $\epsilon \ra 0$. \end{prop} \begin{proof} Observe that Theorem \ref{thm:KP} can also be applied to the processes $\xi_s$ and $\xi_s^l$ instead of $x_s$ with $\tilde{h}_0(y^0_s)=\xi^0_s\equiv 0$. This yields \benn \sup_{0\leq s\leq S}|\xi_s-\xi^l_s|=\sup_{0\leq s\leq S}|\xi_s-\xi^0_s+\xi^0_s-\xi^l_s|\leq \sup_{0\leq s\leq S}|\xi_s-0|+\sup_{0\leq s\leq S}|\xi^l_s-0| \stackrel{\P}{\ra} 0.\qedhere \eenn \end{proof} Proposition \ref{prop:conv} only states that the two stochastic processes converge to the same deterministic process as $\epsilon\ra 0$. However, for a metastable approximation one must check how the $k$-th moment approximation depends on the \textit{time} $s$ and the time scale separation $\epsilon$. In particular, we are interested in the first and second moments and let \benn \delta_1(s,\epsilon):=\E[\xi_s]-\E[\xi_s^l],\qquad \delta_2(s,\epsilon):=\text{Cov}(\xi_s)-\text{Cov}(\xi_s^l). \eenn It is certainly possible to adapt previous results such as the work by Berglund and Gentz \cite{BerglundGentz} to achieve explicit moment bounds.
However, the techniques are rather complicated: they are based upon martingale methods, Bernstein-type inequalities, subdivision of suitable time intervals and the calculation of new explicit moment bounds, and they aim to control probabilities \emph{path-wise}. Here we are going to develop a very short and essentially ``algorithmic'' argument for moment bounds for truncated normal forms. The technique is elementary and only uses a suitable difference process, well-known even-moment bounds and the Cauchy-Schwarz inequality; this approach may even have the potential to simplify calculations for path-wise control such as (\cite{BerglundGentz8}, Section 4). We shall only discuss moment approximation for the fold bifurcation, which provides an outline of how moments can be controlled in the general case. The simplest normal form model for a fold bifurcation with additive noise is \be \label{eq:fold_SDE} \begin{array}{rcl} dx_s&=&\frac{1}{\epsilon}(-y_s-x_s^2)ds + \frac{\sigma}{\sqrt\epsilon} dW_s,\\ dy_s&=&1 ~ds,\\ \end{array} \ee where we can also view $y_s=(s-s_0)+y_{s_0}$ as a time variable. The attracting critical manifold is $C^a_0=\{(x,y)\in\R^2:x=\sqrt{-y}=h_0(y)\}$ with an associated slow manifold $C^a_\epsilon=\{x=h_\epsilon(y)=h_0(y)+\cO(\epsilon)\}$. Note that for \eqref{eq:fold_SDE} we have $y_s=y^{\det}_s$. Therefore we get that \eqref{eq:deviate_SDE} is given by \be \label{eq:fold_SDE1} \begin{array}{rcl} d\xi_s&=&\frac{1}{\epsilon}(-2\sqrt{-y_s}\xi_s-\xi_s^2+\cO(\epsilon))ds + \frac{\sigma}{\sqrt\epsilon} dW_s,\\ dy_s&=&1 ~ds,\\ \end{array} \ee where we are going to formally drop the higher-order $\cO(\epsilon)$-term from now on. The linearized problem \eqref{eq:lin_SDE} is \be \label{eq:fold_SDE2} \begin{array}{rcl} d\xi^l_s&=&\frac{1}{\epsilon}(-2\sqrt{-y_s})\xi_s^l ds + \frac{\sigma}{\sqrt\epsilon} dW_s,\\ dy_s&=&1 ~ds.\\ \end{array} \ee To analyze the transient behavior we consider the \texttt{difference process} $v_s:=\xi_s-\xi^l_s$.
It satisfies the differential equation \be \label{eq:diff_process} \begin{array}{lcl} dv_s&=&\frac{1}{\epsilon}[-y_s-h_\epsilon(y_s)^2-2h_\epsilon(y_s)v_s -\xi_s^2-\epsilon D_yh_\epsilon(y_s)]ds\\ &=&\frac{1}{\epsilon}[-2\sqrt{-y_s}v_s -\xi_s^2+\cO(\epsilon)]ds,\\ \end{array} \ee where the $\cO(\epsilon)$-term will again be dropped. We always consider an initial condition $y_0$ at time $s_0=0$ such that $y_s=s+y_0$ and $y_{s}<0$ for $s\in[0,s^*]$ for some $s^*>0$ such that $y_s$ remains in the compact region $\cD$. \begin{lem} The expected value of the difference process satisfies the ODE \be \label{eq:moment_equation1} \frac{d}{ds}\E[v_s]=\frac{1}{\epsilon}\left(-2\sqrt{-y_s}~\E[v_s]-\E[\xi_s^2]\right). \ee \end{lem} \begin{proof} Subtract \eqref{eq:fold_SDE2} from \eqref{eq:fold_SDE1} and take the expectation. \end{proof} By the variation of constants formula (\cite{Hale}, p.82) the solution of \eqref{eq:moment_equation1} is given by \be \label{eq:fold_ex_fund} \E[v_s]=\E[v_0]~X(s,s_0)-\int_{s_0}^s \frac{\E[\xi_r^2]}{\epsilon}X(s,r) dr, \ee where $X(s,r)$ is the fundamental solution of $\frac{d}{ds}\E[v_s]=\frac1\epsilon (-2\sqrt{-y_s} ~\E[v_s])$. If we can show that $\E[\xi_s^2]$ is ``small'' then \eqref{eq:fold_ex_fund} provides a way to show that the mean of $v_s$ remains small as well. \begin{lem}[\cite{KabanovPergamenshchikov}, p.20-25] \label{lem:moment_estimates} Suppose the stochastic differential equation $dX_s=\alpha(X_s,s)ds+\beta(s)dW_s$ with $X\in\R^m$ and $\beta(s)\in \R^{m\times k}$ satisfies for $s\in[s_1,s_2]$ the stability condition $X^T\alpha(X,s)\leq -\kappa \|X\|^2$ and has a uniformly bounded noise term $\sup_{s\in[s_1,s_2]}\|\beta(s)\|\leq M$; then \be \label{eq:KP_mom} \E[X_s^{2p}]\leq p!\left(\frac{M^2}{\kappa}\right)^p. \ee \end{lem} Applying Lemma \ref{lem:moment_estimates} to $\xi_s=X_s$ and equation \eqref{eq:fold_SDE1} we see that $\kappa=\cO(1/\epsilon)$ and $M=\sigma/\sqrt{\epsilon}$.
Therefore using \eqref{eq:KP_mom} with $p=1$ yields \be \label{eq:variance_xis} \frac{\E[\xi_s^2]}{\epsilon}\leq \frac{\sigma^2}{\epsilon^2}\cO(\epsilon)= \cO\left(\frac{\sigma^2}{\epsilon}\right)=\cO\left(\frac{\sigma(\epsilon)^2}{\epsilon}\right) \ee where $s\in[0,s^*]$ to ensure normal hyperbolicity with $\kappa=\cO(1/\epsilon)$. Using the estimate \eqref{eq:variance_xis} in \eqref{eq:fold_ex_fund} and assuming that $\E[v_0]=\E[\xi_0-\xi_0^l]=0$ we get the inequality \be \label{eq:fold_ex_fund1} |\E[v_s]|\leq \int_{0}^s \left|\cO\left(\frac{\sigma(\epsilon)^2}{\epsilon}\right)X(s,r) \right|dr =\delta_1(s,\epsilon). \ee In particular, we can use the linearized process to approximate the mean \benn |\E[v_s]|=|\E[\xi_s]-\E[\xi_s^l]|\leq \delta_1(s,\epsilon). \eenn Next, we define $V_s:=\text{Var}(\xi_s)-\text{Var}(\xi_s^l)$. \begin{lem} \label{lem:var_est_final} The difference process $V_s$ for the variance satisfies the ODE \be \label{eq:moment_equation2} \frac{d}{ds}V_s=\frac2\epsilon \left( -2\sqrt{-y_s}~V_s -\E[\xi_s^3] +\E[\xi_s^2]\E[\xi_s]\right). \ee \end{lem} \begin{proof} A direct calculation using It\^{o}'s formula (\cite{Socha}, p.87) shows that \be \begin{array}{lcl} \frac{d}{ds}\text{Var}(\xi_s)&=&2\E\left[\frac1\epsilon \left(-2\sqrt{-y_s}\xi_s-\xi_s^2\right)(\xi_s-\E[\xi_s])\right]+\frac{\sigma^2}{\epsilon},\\ \frac{d}{ds}\text{Var}(\xi^l_s)&=&2\E\left[\frac1\epsilon \left(-2\sqrt{-y_s}\xi^l_s\right)(\xi^l_s-\E[\xi_s^l])\right]+\frac{\sigma^2}{\epsilon}.\\ \end{array} \ee Then using $\text{Var}(\xi_s)=\E[\xi_s^2]-\E[\xi_s]^2$ and $\text{Var}(\xi_s^l)=\E[(\xi^l_s)^2]-\E[\xi^l_s]^2$ gives \eqref{eq:moment_equation2}. \end{proof} \begin{lem} \label{lem:third_moment} $|\E[\xi_s^3]|\leq \cO(\sigma^3)$ and $|\E[\xi_s^2]\E[\xi_s]|\leq \cO(\sigma^3)$. \end{lem} \begin{proof} By a combination of Lemma \ref{lem:moment_estimates} and the Cauchy-Schwarz inequality it follows that \benn |\E[\xi_s^3]|\leq \E[\xi_s^4]^{1/2}\E[\xi_s^2]^{1/2}\leq \cO(\sigma^2)\cO(\sigma)=\cO(\sigma^3). \eenn The second result is proven similarly. \end{proof} As in the derivation of the bound \eqref{eq:fold_ex_fund1} we now use Lemma \ref{lem:var_est_final} and Lemma \ref{lem:third_moment} to conclude that \be \label{eq:fold_ex_fund2} |\text{Var}(\xi_s)-\text{Var}(\xi_s^l)|=|V_s|\leq \int_{0}^s \left|\cO\left(\frac{\sigma(\epsilon)^3}{\epsilon}\right)\tilde{X}(s,r) \right|dr =\delta_2(s,\epsilon), \ee where $\tilde{X}(s,r)$ is the fundamental solution of $\frac{d}{ds}V_s=\frac1\epsilon (-4\sqrt{-y_s} ~V_s)$. The estimates \eqref{eq:fold_ex_fund1} and \eqref{eq:fold_ex_fund2} require the fundamental solutions of systems of the form \be \label{eq:fund_sol_1D} \frac{d}{ds}w(s,r)=-\frac{\kappa}{\epsilon}\sqrt{-s-y_0}~w(s,r),\quad w(r,r)=1\quad \Rightarrow \quad w(s,r)=e^{\frac{2\kappa}{3\epsilon}\left[(-s-y_0)^{3/2}-(-r-y_0)^{3/2}\right]}, \ee where $\kappa>0$ is a constant; here $\kappa=2,4$ for the first and second moment estimates. We remark that in \eqref{eq:fund_sol_1D} the formal condition $(-s-y_0)^{3/2}\sim \epsilon$ with $s=0$ yields the critical scaling $y\sim \epsilon^{2/3}$ as expected from the loss of normal hyperbolicity near a fold (\cite{KruSzm3}, p.291; \cite{BerglundGentz}, p.87). To estimate $\delta_1(s,\epsilon)$ and $\delta_2(s,\epsilon)$ one must consider the integral \be \label{eq:Laplace_integral} \int_0^s e^{\varphi(r)/\epsilon}dr \quad\text{ with }\varphi(r):=\frac{2\kappa}{3}\left[(-s-y_0)^{3/2}-(-r-y_0)^{3/2}\right]. \ee Note carefully that \eqref{eq:Laplace_integral} has asymptotics that can be determined via Laplace's method (see \cite{BenderOrszag}, p.265-267).
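The Laplace-type asymptotics can be checked by direct quadrature; the following sketch (not from the paper; $y_0=-1$ and the values of $s$, $\kappa$, $\epsilon$ are illustrative choices) compares \eqref{eq:Laplace_integral} with the endpoint approximation $\epsilon\, e^{\varphi(s)/\epsilon}/\varphi'(s)$:

```python
import numpy as np

# phi(r) = (2k/3)*[(-s-y0)^(3/2) - (-r-y0)^(3/2)] with y0 = -1, so that
# -r-y0 = 1-r; phi is increasing on [0, s] and phi(s) = 0.
def phi(r, s, k):
    return (2.0 * k / 3.0) * ((1.0 - s) ** 1.5 - (1.0 - r) ** 1.5)

def laplace_check(s=0.5, k=2.0, eps=1e-3, n=2_000_000):
    """Trapezoidal quadrature of int_0^s exp(phi(r)/eps) dr versus the
    endpoint Laplace approximation eps*e^{phi(s)/eps}/phi'(s)."""
    r = np.linspace(0.0, s, n)
    vals = np.exp(phi(r, s, k) / eps)
    integral = np.sum(0.5 * (vals[1:] + vals[:-1])) * (r[1] - r[0])
    endpoint = eps / (k * np.sqrt(1.0 - s))   # phi'(s) = k*sqrt(1-s)
    return integral, endpoint

integral, endpoint = laplace_check()
```

The ratio of the two values tends to one as $\epsilon\ra 0$, in line with the endpoint Laplace approximation used in the proof of Proposition \ref{prop:Laplace}.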
If no formal truncation, {e.g.}~in \eqref{eq:fold_SDE1}, is applied, much more detailed results are available in \cite{BerglundGentz8} using explicit calculations where Laplace-type integrals still appear \cite{BerglundGentz}. However, it seems that the ideas used here, utilizing the difference process, the direct moment estimates from Lemma \ref{lem:moment_estimates} and the Cauchy-Schwarz inequality, are a simple, and essentially algorithmic, shortcut leading to a Laplace-type integral. \begin{prop} \label{prop:Laplace} Suppose $(-s-y_0)=\cO(\epsilon^{2\alpha})$ with $\alpha<1/3$; then $\int_0^s e^{\varphi(r)/\epsilon}dr\sim\epsilon^{1-\alpha}$ as $\epsilon\ra 0$. \end{prop} \begin{proof} One calculates that $\varphi'(r)>0$ for $r\in[0,s]$ if $s<s^*$. Then applying the standard asymptotic Laplace approximation at the endpoint $s$ (see \cite{BenderOrszag}, p.266, (6.4.19b)) yields \benn \int_0^s e^{\varphi(r)/\epsilon}dr\sim \epsilon\frac{e^{\varphi(s)/\epsilon}}{\varphi'(s)}=\epsilon\frac{1}{\kappa(-s-y_0)^{1/2}}=\epsilon^{1-\alpha},\qquad \text{as $\epsilon\ra 0$.}\qedhere \eenn \end{proof} Hence we obtain from \eqref{eq:fold_ex_fund1}, \eqref{eq:fold_ex_fund2} and Proposition \ref{prop:Laplace} that in the normally hyperbolic regime $\cD$ with $y=\cO(\epsilon^{2\alpha})$ and $\alpha<1/3$ the moment estimates are \benn \delta_1(s,\epsilon)=\cO(\sigma^2\epsilon^{-\alpha}) \qquad \text{and} \qquad \delta_2(s,\epsilon)=\cO(\sigma^3\epsilon^{-\alpha}).
\eenn Lemma \ref{lem:BG_lem} gives for the fold bifurcation the desired moment approximation for the linearized process, $\text{Var}(\xi^l_s)=\sigma^2 [H_\epsilon(y)]$, so that the approximation result for the variance is \be \label{eq:var_fold_final} \text{Var}(\xi_s)=\sigma(\epsilon)^2 [H_\epsilon(y)]+\cO\left(\sigma(\epsilon)^3\epsilon^{-\alpha}\right)\qquad \text{as $\epsilon\ra 0$.} \ee For $\alpha=0$ the process is at an $\cO(1)$-distance from the critical transition point at the fold and $\text{Var}(\xi_s)=\sigma(\epsilon)^2 [H_\epsilon(y)]+\cO\left(\sigma(\epsilon)^3\right)$. As expected, the estimate of the variance becomes less accurate the closer sample paths move towards $(x_p,y_p)=(0,0)$. For $\alpha>0$, the error term in formula \eqref{eq:var_fold_final} is asymptotic if and only if $\sigma^2\gg \sigma^3\epsilon^{-\alpha}$, {i.e.}~if $\epsilon^\alpha\gg \sigma$. Note that if $\epsilon^{k+\alpha}H_{k}(y)=\cO(\sigma)$ for all $k\geq k_0>0$ then \benn \text{Var}(\xi_s)=\sigma^2 \left[H_0(y)+\sum_{k=1}^{k_0-1}H_k(y)\epsilon^k\right]+\cO\left(\sigma^3\epsilon^{-\alpha}\right) \eenn since we can absorb the correction terms for the slow manifold of the variance into $\cO(\sigma^3\epsilon^{-\alpha})$. In particular, if $k_0=1$ then it follows that \be \label{eq:move_everything} \text{Var}(\xi_s)=\sigma^2 H_0(y)+\cO\left(\sigma^3\epsilon^{-\alpha}\right). \ee In principle, we could also calculate higher-order corrections to the slow manifold defined by $X=H_\epsilon(Y)$; see (\cite{BerglundGentz}, p. 147) and Section \ref{sec:fold_asymp}. For simplicity, we shall only consider the lowest-order approximation for a general codimension-two fast subsystem bifurcation.\\ For other fast subsystem bifurcations we will obtain different moment approximations, since we used the fold-specific relation $x=\sqrt{-y}$ for the slow manifold above. However, we still expect that \be \label{eq:estimate_var_general} \text{Cov}(\xi_s)=\sigma^2 [H_\epsilon(y)]+\delta_2(s,\epsilon).
\ee where $H_\epsilon(y)=\sum_{k=0}^\I \epsilon^kH_k(y)$ and $\delta_2(s,\epsilon)$ denotes a small $\epsilon$-dependent error term for the second moments. In fact, the methods we use here, based upon moment equations, integral estimates and direct asymptotics, all generalize to higher-dimensional phase space and higher-codimension bifurcations. Hence we conjecture that \eqref{eq:estimate_var_general} is still valid for these cases. Although we do not calculate the asymptotic relations here, our approach provides a direct computational method for the relevant scalings. It is very important to recall that the result is still only local around the attracting slow manifold in a compact set $\cD=\cD(\epsilon)$. Although $(x_p,y_p)\in\partial \cD(0)$, one always has to use the moment approximations by a linearized process outside of an $(\epsilon,\sigma(\epsilon))$-dependent neighborhood of the critical transition point $(x_p,y_p)$. Small $(\epsilon,\sigma(\epsilon))$-dependent neighborhoods (R2) near the bifurcation point have to be considered separately \cite{BerglundGentzKuehn,Kuske,BerglundGentz6}. For early-warning signs it is very reasonable to ask for the earliest possible statistical indicators. Once a sample path reaches (R2), it is extremely close to a fast jump, so a warning sign may be difficult to utilize in applications. \section{Covariance Scaling Laws near Critical Transitions} \label{sec:variance} To calculate $H_0(y)$ we have to solve the algebraic equation \be \label{eq:ma1} 0=A_0(y)X+XA_0(y)^T+F_0(y)F_0(y)^T, \ee where the matrices $A_0(y)$ are chosen according to normal form theory from Table \ref{tab:det_res} (see also \eqref{eq:Aeps_def}-\eqref{eq:Feps_def} for definitions). It will be convenient to introduce a notation for the symmetric matrix $F_0(y)F_0(y)^T$ that describes the noise term \be \label{eq:defineN} (N_{ij}(y))=N(y):=F_0(y)F_0(y)^T \ee for $i,j\in\{1,2,\ldots,m\}$.
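Equation \eqref{eq:ma1} is a matrix Lyapunov equation; numerically it can be solved by rewriting it as a linear system using Kronecker products. A minimal sketch (not from the paper; the helper name and all values are illustrative):

```python
import numpy as np

def solve_lyapunov(A0, N):
    """Solve A0 X + X A0^T + N = 0 for symmetric N by rewriting it as
    the linear system (I kron A0 + A0 kron I) vec(X) = -vec(N)."""
    m = A0.shape[0]
    K = np.kron(np.eye(m), A0) + np.kron(A0, np.eye(m))
    return np.linalg.solve(K, -N.reshape(m * m)).reshape(m, m)

# Scalar sanity check: on the attracting fold branch A0 = -2*sqrt(|y|),
# so the solution is X = -N/(2*A0) = N/(4*sqrt(|y|)).
y = -0.25
H0 = solve_lyapunov(np.array([[-2.0 * np.sqrt(-y)]]), np.array([[1.0]]))
```

In the scalar case this reproduces $X=-N/(2A_0)$; the same routine handles the two-, three- and four-dimensional normal form linearizations appearing below.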
If $N(y)$ is a constant matrix then we deal with purely additive noise, while dependence on $y$ indicates multiplicative (or slowly parameter-dependent) noise. To distinguish between the small noise asymptotics \benn \epsilon\ra 0\qquad \Rightarrow \quad \sigma=\sigma(\epsilon)\ra 0 \eenn and the approach towards the fast subsystem bifurcation point, $y$ tending to the origin, we use the order notation $\cO^*_y$ for $y\ra 0$. Recall that (A0) specifies what type of double asymptotics we allow and that all results are constrained to a bounded domain, {i.e.}~a result $w(y)=\cO_y^*(W(y))$ is to be understood as follows: for a given sufficiently small $\epsilon>0$, and hence a given $\sigma(\epsilon)>0$, there exists a compact non-empty domain $\cD_y(\epsilon)\subset \R^n$ with $0\in\partial \cD_y(0)$ but $0\not\in\cD_y(\epsilon)$ ({cf.}~(R1) in Figure \ref{fig:fig1}) and constants $K_i$, $i=1,2$, such that \benn K_1W(y)\leq w(y)\leq K_2W(y) \eenn for all $y\in \cD_y(\epsilon)$. In particular, the early-warning signs we are going to derive are for fixed $(\epsilon,\sigma(\epsilon))$ sufficiently small and a fixed domain $\cD(\epsilon)=\cD_x(\epsilon)\times \cD_y(\epsilon)$ chosen around a slow manifold. The approximation is good as $y$ tends to the transition point but will eventually break down in a small region near the critical transition; this region scales with $\epsilon$ and $\sigma$ and includes the critical transition point in its boundary for $\epsilon=0=\sigma$. Small $(\epsilon,\sigma)$-dependent regions containing a critical transition point require a special analysis and will not be considered; see the remarks on additional literature in Section 8. Furthermore, we agree to the convention that any limit as $y\ra 0$ is always understood as the natural one-sided limit if necessary, {e.g.}~$\cO_y^*(\sqrt{-y})$ means $y\ra 0^-$.
\begin{thm} \label{thm:CK1} Suppose (A0)-(A4) hold for a fast subsystem bifurcation with one fast variable (fold, transcritical, pitchfork, cusp) and that $\epsilon>0$ is sufficiently small. Then the variance of the process $\xi_s$ near an attracting slow manifold approaching the bifurcation satisfies \benn \text{Var}(\xi_s)=\sigma^2 [H_\epsilon(y)]+\delta_2(s,\epsilon), \eenn where $H_\epsilon(y)=H_0(y)+\cO(\epsilon)$ and \begin{enumerate} \item[(V1)] (fold) $H_0(y)=\cO_y^*\left(\frac{N(y)}{\sqrt{y}}\right)$, \item[(V2)] (transcritical, pitchfork) $H_0(y)=\cO_y^*\left(\frac{N(y)}{y}\right)$, \item[(V3)] (cusp) $H_0(y)=\cO_{y_2}^*\left(\frac{N(y)}{y_2}\right)$; where the slow variable $y_2$ multiplies the linear term in the fast subsystem normal form \eqref{eq:nf_cusp}. \end{enumerate} In particular, if $\delta_2(s,\epsilon)\ll \sigma^2$ and $N(y)$ is constant then the variance scales, to lowest order, as $\sigma^2/\sqrt{y}$ for the fold, as $\sigma^2/y$ for the transcritical/pitchfork and as $\sigma^2/y_2$ for the cusp transition. \end{thm} \begin{proof} We can approximate the variance of the process $\xi_s$ by that of its linearization $\xi_s^l$ if $\epsilon$ is sufficiently small, so that \be \label{eq:var_asymp_easy1} \text{Var}(\xi_s)=\sigma^2 (H_0(y)+\cO(\epsilon))+\delta_2(s,\epsilon), \ee where $X=H_0(y)\in \R^+$ is the solution of \be \label{eq:proof_1D} 0=2A_0(y)X+N(y),\qquad \Rightarrow \quad X=H_0(y)=-\frac{N(y)}{2A_0(y)}. \ee The non-degeneracy assumptions of the four bifurcations considered are satisfied. By using normal forms, we know from Table \ref{tab:det_res} that $A_0(y)=\cO_y^*(\sqrt{y})$ for the fold transition, $A_0(y)=\cO_y^*(y)$ for the transcritical and pitchfork transitions while $A_0(y)=\cO_{y_2}^*(y_2)$ for the cusp transition. Direct substitution of these results for $A_0(y)$ into \eqref{eq:proof_1D} gives the result.
\end{proof} The codimension-one fold and the transcritical/pitchfork case in Theorem \ref{thm:CK1} can also be inferred from previous works; see {e.g.}~\cite{BerglundGentz}. In fact, rigorous proofs without formal truncation are available. In these results the higher-order terms do not seem to influence the scaling law in region (R1); this is one of the motivations to consider a formal truncation. The stochastic cusp, and all the following codimension-two results, have not been considered previously. It should be noted that for fast subsystems with dimension greater than one the stochastic scaling effects are much more interesting, as the next result shows. \begin{thm} \label{thm:CK2} Suppose (A0)-(A4) hold for a fast subsystem bifurcation with two fast variables (Hopf, Bogdanov-Takens, Bautin) and that $\epsilon>0$ is sufficiently small. Then the covariance matrix of the process $\xi_s$ near an attracting slow manifold approaching the bifurcation satisfies \benn \text{Cov}(\xi_s)=\sigma^2 [H_\epsilon(y)]+\delta_2(s,\epsilon), \eenn where $H_\epsilon(y)=H_0(y)+\cO(\epsilon)$ and \begin{enumerate} \item[(V4)] (Hopf, Bautin) \benn H_0(y)=\left(\begin{array}{cc} -\frac{2N_{11}(y)y^2+2N_{12}(y)y+N_{11}(y)+N_{22}(y)}{4y(y^2+1)} & \frac{N_{11}(y)-N_{22}(y)-2N_{12}(y)y}{4(y^2+1)} \\ \frac{N_{11}(y)-N_{22}(y)-2N_{12}(y)y}{4(y^2+1)} & -\frac{2N_{22}(y)y^2-2N_{12}(y)y+N_{11}(y)+N_{22}(y)}{4y(y^2+1)}\\ \end{array} \right). \eenn In particular, if $N$ is a constant matrix with $N_{11}+N_{22} \neq 0$ then \benn H_0(y)=\left(\begin{array}{cc} \cO_y^*\left(\frac1y\right) & \frac{N_{11}-N_{22}}{4} +\cO_y^*(y)\\ \frac{N_{11}-N_{22}}{4} +\cO_y^*(y) & \cO_y^*\left(\frac1y\right) \\ \end{array} \right).
\eenn \item[(V5)] (Bogdanov-Takens; we set $\cO_y^*(\sqrt{-y_1})=k\sqrt{-y_1}$ for $A_0(y)$) \benn H_0(y)=\left(\begin{array}{cc} \frac{-N_{22}(y)+2kN_{12}(y)\sqrt{-y_1}\pm 2N_{11}(y)\sqrt{-y_1}+N_{11}k^2y_1}{\pm 4ky_1} & -\frac{N_{11}(y)}{2} \\ -\frac{N_{11}(y)}{2} & \frac{\pm 2N_{11}(y)+N_{22}(y)y_1/(-y_1)^{3/2}}{2k} \\ \end{array} \right). \eenn In particular, if $N$ is a constant matrix and $N_{22}\neq 0$ then \benn H_0(y)=\left(\begin{array}{cc} \cO_y^*\left(\frac{1}{y_1}\right) & -\frac{N_{11}}{2} \\ -\frac{N_{11}}{2} & \pm \frac{N_{11}}{k}+\cO_y^*\left(\frac{1}{\sqrt{-y_1}}\right) \\ \end{array} \right). \eenn \end{enumerate} \end{thm} \begin{proof} The proof follows the same outline as the proof of Theorem \ref{thm:CK1}. Therefore we shall only detail the calculations for the proof of (V4). We can again reduce to the linearized process and apply the formula \be \label{eq:var_asymp_easy2} \text{Cov}(\xi_s)=\sigma^2 (H_0(y)+\cO(\epsilon))+\delta_2(s,\epsilon). \ee Denote the elements of the scaled covariance matrix as follows \benn X=\sigma^{-2}\text{Cov}(\xi^l_s):=\left(\begin{array}{cc}v_{11} & v_{12} \\ v_{12} & v_{22} \\\end{array}\right). \eenn Using the normal form matrix $A_0(y)$ from Table \ref{tab:det_res} we calculate \bea \label{eq:Hopf_p} 0&\stackrel{!}{=}&A_0(y)X+XA_0(y)^T+N(y) \nonumber\\ &=&\left( \begin{array}{cc} -v_{12}+v_{11}y & v_{12}y-v_{22} \\ v_{11}+v_{12}y & v_{12}+v_{22}y\\\end{array} \right)+ \left( \begin{array}{cc} -v_{12}+v_{11}y & v_{11}+v_{12}y \\ v_{12}y-v_{22} & v_{12}+v_{22}y\\\end{array} \right)+ \left( \begin{array}{cc} N_{11}(y) & N_{12}(y) \\ N_{12}(y) & N_{22}(y)\\\end{array} \right) \nonumber\\ &=& \left(\begin{array}{cc} N_{11}(y)-2v_{12}+2v_{11}y & N_{12}(y)+v_{11}-v_{22}+2v_{12}y \\ N_{12}(y)+v_{11}-v_{22}+2v_{12}y & N_{22}(y)+2v_{12}+2v_{22}y\end{array}\right). \eea Equation \eqref{eq:Hopf_p} yields three independent conditions. 
Hence we get a linear system \benn \left(\begin{array}{ccc}2y & 0 & -2\\ 0 & 2y & 2 \\ 1 & -1 & 2y \\\end{array}\right) \left(\begin{array}{c}v_{11} \\ v_{22} \\ v_{12}\\\end{array}\right) = \left(\begin{array}{c}-N_{11}(y) \\ -N_{22}(y) \\ -N_{12}(y)\\\end{array}\right) \eenn that can be solved for $(v_{11},v_{22},v_{12})$. This result can be substituted as $X=H_0(y)$ in \eqref{eq:var_asymp_easy2} and this yields the first part of (V4). If $N$ is a constant matrix then direct asymptotics shows that \benn v_{11}\sim-\frac{N_{11}+N_{22}}{4y}=\cO_y^*\left(\frac{1}{y}\right), \qquad v_{22}\sim-\frac{N_{11}+N_{22}}{4y}=\cO_y^*\left(\frac{1}{y}\right) \eenn where $y\ra 0^-$ as we approach the critical transition via the attracting slow manifold. For the covariance we get \benn v_{12}\sim \frac{N_{11}-N_{22}}{4}-\frac{2N_{12}}{4}y=\frac{N_{11}-N_{22}}{4}+\cO_y^*\left(y\right). \eenn The result for the Bogdanov-Takens transition follows by the same techniques. \end{proof} Before we continue to codimension-two bifurcations in $\R^3$ and $\R^4$, let us interpret the results of Theorem \ref{thm:CK2} for $\delta_2(s,\epsilon)\ll \sigma^2$. For the Hopf transition with a fixed noise level $\sigma>0$ we have found that the variance of the coordinates increases as $\cO_y^*(1/y)$ as the bifurcation point is approached with $y\ra 0$. This result is expected, as we already saw an increase in variance for the one-dimensional fast subsystem bifurcations. However, for the covariance the additive noise case with a constant matrix $N$ yields \benn \frac{N_{11}-N_{22}}{4} +\cO_y^*(y). \eenn This implies that the covariance tends to a constant as $y\ra 0$; even more surprisingly, for the reasonable assumption of equal individual diffusion $N_{11}=N_{22}$ we get that the covariance tends to zero as the bifurcation is approached. Hence we can already conclude that measuring covariances can also provide important information to predict critical transitions.
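The scalings in (V4) can be cross-checked numerically; the sketch below (not from the paper; parameter values illustrative) solves the Lyapunov equation with the Hopf normal form linearization used in \eqref{eq:Hopf_p}:

```python
import numpy as np

def H0_hopf(y, N):
    """Leading-order covariance for the Hopf case: solve
    A0 X + X A0^T + N = 0 with A0 = [[y, -1], [1, y]]."""
    A0 = np.array([[y, -1.0], [1.0, y]])
    K = np.kron(np.eye(2), A0) + np.kron(A0, np.eye(2))
    return np.linalg.solve(K, -N.reshape(4)).reshape(2, 2)

# Equal additive diffusion N11 = N22, no correlation; y close to 0^-.
H = H0_hopf(-1e-3, np.eye(2))
# Variances grow like -(N11+N22)/(4y) while the off-diagonal entry
# (N11-N22)/4 + O(y) vanishes for N11 = N22.
```

For $N_{11}=N_{22}$ the off-diagonal entry is numerically zero while the variances grow like $1/|y|$, matching the asymptotics above.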
For the Hopf transition with multiplicative noise, let us just consider the simplest case of linear multiplicative noise without correlation $N_{11}(y)=c_1y$, $N_{22}(y)=c_2y$, $N_{12}(y)=0$. Then we find \beann \text{Var}(\xi_{1,s})&\sim& -\frac{(c_1+c_2)y}{4y}-\frac{2c_1y^3}{4y}=\cO_y^*(1)+\cO_y^*(y^2), \quad \text{if $c_1\neq -c_2$,}\\ \text{Var}(\xi_{2,s})&\sim& -\frac{(c_1+c_2)y}{4y}-\frac{2c_2y^3}{4y}=\cO_y^*(1)+\cO_y^*(y^2), \quad \text{if $c_1\neq -c_2$,}\\ \text{Cov}(\xi_{1,s},\xi_{2,s})&\sim& \frac{(c_1-c_2)y}{4}=\cO_y^*(y), \quad \text{if $c_1\neq c_2$.} \eeann Therefore measuring the variance alone is not expected to yield valuable information; indeed, a variance tending to a constant could be interpreted as a normally hyperbolic regime without critical transitions for additive noise \cite{KuehnCT1}. Obviously one could discuss further interesting scalings depending on the matrix $N(y)$. It should be clear from the formulas (V1)-(V5) and the previous discussion how to approach these situations as long as the system is in normal form near the bifurcation point. We proceed to look at some results for the remaining codimension-two bifurcations. \begin{thm} \label{thm:CK3} Suppose (A0)-(A4) hold for a codimension-two fast subsystem bifurcation with at least three fast variables (Gavrilov-Guckenheimer, Hopf-Hopf). Assume that $\epsilon$ is sufficiently small and that $N=N(y)$ is a constant matrix. Then the covariance matrix of the process $\xi_s$ near an attracting slow manifold approaching the bifurcation satisfies \benn \text{Cov}(\xi_s)=\sigma^2 [H_\epsilon(y)]+\delta_2(s,\epsilon).
\eenn where $H_\epsilon(y)=H_0(y)+\cO(\epsilon)$ and \begin{enumerate} \item[(V6)] (Gavrilov-Guckenheimer) if $N_{11}\neq 0$ and $N_{22}+N_{33}\neq 0$ then \benn H_0(y)=\cO_y^*\left(\begin{array}{ccc} \frac{1}{\sqrt{y_1}} & \frac{N_{13}}{\omega} &\frac{N_{12}}{\omega} \\ \frac{N_{13}}{\omega} & \frac{1}{y_2} & \frac{N_{22}-N_{33}}{4\omega}\\ \frac{N_{12}}{\omega}&\frac{N_{22}-N_{33}}{4\omega} & \frac{1}{y_2} \\ \end{array} \right). \eenn \item[(V7)] (Hopf-Hopf, special case: $A_0(y)$ given by \eqref{eq:A0_HH}) if $N_{11}+N_{22}\neq 0$ and $N_{33}+N_{44}\neq 0$ then \benn H_0(y)=\cO_y^*\left(\begin{array}{cccc} \frac{1}{y_1}& \frac{N_{11}-N_{22}}{4\omega_1} & \frac{N_{14}\omega_2-N_{23}\omega_1}{\omega_1^2-\omega_2^2} & -\frac{N_{24}\omega_1-N_{13}\omega_2}{\omega_1^2-\omega_2^2}\\ \frac{N_{11}-N_{22}}{4\omega_1} & \frac{1}{y_1} & \frac{N_{13}\omega_1+N_{24}\omega_2}{\omega_1^2-\omega_2^2} & \frac{N_{14}\omega_1-N_{23}\omega_2}{\omega_1^2-\omega_2^2}\\ \frac{N_{14}\omega_2-N_{23}\omega_1}{\omega_1^2-\omega_2^2} & \frac{N_{13}\omega_1+N_{24}\omega_2}{\omega_1^2-\omega_2^2} & \frac{1}{y_2} & \frac{N_{33}-N_{44}}{4\omega_2} \\ -\frac{N_{24}\omega_1-N_{13}\omega_2}{\omega_1^2-\omega_2^2}& \frac{N_{14}\omega_1-N_{23}\omega_2}{\omega_1^2-\omega_2^2} & \frac{N_{33}-N_{44}}{4\omega_2} & \frac{1}{y_2}\\ \end{array}\right). \eenn \end{enumerate} \end{thm} We shall omit the calculations for the proof of Theorem \ref{thm:CK3} as it follows the same steps as the proofs of Theorems \ref{thm:CK1}-\ref{thm:CK2}. We remark that the solution of the algebraic equation \eqref{eq:ma1} becomes much more cumbersome for systems in $\R^3$ and $\R^4$ and we compared our solution to the results obtained by a computer algebra system \cite{Mathematica}. Another important note on Theorem \ref{thm:CK3} is that we do not have to assume explicitly that $\omega_1\neq \omega_2$ for the Hopf-Hopf bifurcation since this is included in assumption (A2). 
The 1:1 resonance case at a Hopf-Hopf bifurcation $\omega_1=\omega_2$ (see \cite{vanGilsKrupaLangford,GH}) naturally appears as a special case in our analysis. In particular, when $|\omega_1^2-\omega_2^2|$ is small, the covariances of the two-by-two off-diagonal blocks in (V7) can also get large near a Hopf-Hopf critical transition. \section{Double-Singular Variance Asymptotics for the Fold} \label{sec:fold_asymp} In the previous section we computed the leading-order term for the covariance near a critical transition for all fast subsystem bifurcations up to codimension two. In current applications of critical transitions one frequently encounters the fold bifurcation. From a mathematical viewpoint, the fold bifurcation is, without further assumptions, the generic bifurcation with the lowest slow codimension and the lowest fast dimension. Both reasons warrant a more detailed asymptotic study to determine higher-order correction terms to the formula \benn \text{Var}(\xi_s)=\sigma^2 \left[\cO_y^*\left(\frac{1}{\sqrt{y}}\right)+\cO(\epsilon)\right]+\cO\left(\frac{\sigma^3}{\epsilon^{\alpha}}\right) \eenn from Theorem \ref{thm:CK1} for additive noise. We can always assume a preliminary normal form transformation \cite{MisRoz,Kuznetsov} and consider on the slow time scale \be \label{eq:fold1} \begin{array}{rcl} dx&=& \frac1\epsilon (y-x^2) ds+ \frac{\sigma}{\sqrt\epsilon }dW_s,\\ dy&=& -1~ds,\\ \end{array} \ee where we assume additive noise to simplify the algebraic manipulations to follow. We also refer to Figure \ref{fig:fig1} and consider the attracting branch $C^a_0=\{(x,y)\in\R^2:x=\sqrt{y}=h_0(y)\}$ of the critical manifold. Fenichel's Theorem provides an attracting slow manifold $C^a_\epsilon=\{(x,y)\in\R^2:x=h_\epsilon(y)=h_0(y)+\cO(\epsilon)\}$. An asymptotic series expansion for $C^a_\epsilon$ can easily be derived by direct regular asymptotics, and the validity of the expansion is guaranteed by Fenichel's Theorem.
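The leading-order statement can already be seen at frozen $y$: linearizing the fast subsystem about $x=\sqrt{y}$ gives an Ornstein-Uhlenbeck process $d\xi=-2\sqrt{y}\,\xi\,dt+\sigma\,dW$ with stationary variance $\sigma^2/(4\sqrt{y})=\sigma^2H_0(y)$. A minimal simulation sketch (numpy; the values of $y$, $\sigma$ and the step size are illustrative):

```python
import numpy as np

# Euler-Maruyama for the frozen-y linearization of the fold fast subsystem:
# d xi = -2*sqrt(y)*xi dt + sigma dW, whose stationary variance is
# sigma^2/(4*sqrt(y)) = sigma^2 * H_0(y).
rng = np.random.default_rng(42)
y, sigma, dt, n = 0.25, 0.1, 0.01, 200_000
xi = np.zeros(n)
for k in range(n - 1):
    xi[k + 1] = xi[k] - 2.0 * np.sqrt(y) * xi[k] * dt \
                + sigma * np.sqrt(dt) * rng.standard_normal()
sample_var = xi[n // 10:].var()          # discard a burn-in transient
predicted = sigma**2 / (4.0 * np.sqrt(y))
print(sample_var, predicted)
```

Repeating this for a decreasing sequence of $y$-values reproduces the $1/\sqrt{y}$ blow-up; the corrections computed below quantify how the slow drift in $y$ modifies this frozen-$y$ picture.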
\begin{lem} \label{lem:asympCeps} The attracting slow manifold $C_\epsilon^a$ for the fold bifurcation normal form is given by \be \label{eq:fold_asymp_mani} h_\epsilon(y)=\sqrt{y}-\epsilon\frac{1}{4y}-\epsilon^2 \frac{5}{32y^{5/2}}-\epsilon^3 \frac{15}{64y^{4}}-\epsilon^4 \frac{1105}{2048y^{11/2}}+\cO(\epsilon^5). \ee Terms of order $\cO(\epsilon^5)$ or higher are omitted but can easily be calculated from a recursive solution of algebraic equations. \end{lem} Setting $\xi_s=x_s-h_\epsilon(y)$ and using Lemma \ref{lem:BG_lem} for equation \eqref{eq:fold1} we find that the scaled variance $X_s=\sigma^{-2}\text{Var}(\xi_s)$ satisfies the ODE \be \label{eq:var_fold_asymp_full} \begin{array}{rcl} \epsilon \dot{X}&=&-4h_\epsilon(y)X+1,\\ \dot{y}&=&-1.\\ \end{array} \ee The attracting critical manifold of \eqref{eq:var_fold_asymp_full} is given by $\cC_0=\{(X,y)\in \R^2:X=H_0(y)\}$. Fenichel's Theorem yields an associated attracting slow manifold $\cC_\epsilon=\{(X,y)\in \R^2:X=H_\epsilon(y)=H_0(y)+\cO(\epsilon)\}$. We already know from the proof of Theorem \ref{thm:CK1} that $H_0(y)=1/(4\sqrt{y})$. \begin{prop} \label{prop:var_fold} The attracting slow manifold $\cC_\epsilon^a$ associated to \eqref{eq:var_fold_asymp_full} has an asymptotic expansion given by \be \label{eq:sol_var_fold} H_\epsilon(y)=\frac{1}{4\sqrt{y}}+\epsilon\frac{3}{32y^2}+\epsilon^2\frac{7}{64y^{7/2}}+\epsilon^3\frac{201}{1024 y^5}+\epsilon^4\frac{3837}{8192y^{13/2}}+\cO(\epsilon^5). \ee Terms of order $\cO(\epsilon^5)$ or higher are omitted but can easily be calculated from a recursive solution of algebraic equations. \end{prop} \begin{proof} We make the ansatz $H_\epsilon(y)=H_0(y)+\epsilon H_1(y)+\epsilon^2H_2(y)+\cdots$. Using this ansatz and the result from Lemma \ref{lem:asympCeps} in \eqref{eq:var_fold_asymp_full} we get a hierarchy of algebraic equations at different orders \beann 0&=& 1-4h_0(y)H_0(y)\\ \frac{dH_{k-1}}{dy}&=& -4\sum_{i,j:~i+j=k}H_i(y)h_j(y) \eeann where $k\in\{1,2,\ldots\}$. 
The result \eqref{eq:sol_var_fold} follows by direct calculation. \end{proof} We expect that the expansion up to fourth order of $H_\epsilon$ is sufficient for all practical purposes. Recall from the end of Section \ref{sec:SDE_fs} that the condition $\epsilon^{k_0+\alpha}H_{k_0}(y)=\cO( \sigma)$ determines whether terms of the expansion for $H_\epsilon$ can be moved to the higher-order correction $\cO(\sigma^3/\epsilon^{\alpha})$. Proposition \ref{prop:var_fold} yields the conditions \be \label{eq:hide_const} \frac{\epsilon^{k_0+\alpha}}{y^{(3k_0+1)/2}}=\cO(\sigma) \qquad \text{for $k_0\in\{1,2,\ldots\}$}. \ee For the fold bifurcation, we know that the critical scaling of $y$ to stay inside the normally hyperbolic regime is $y\sim \epsilon^{2/3}$; see Section \ref{sec:fast_slow} and assumption (A0) as well as Lemma \ref{lem:asympCeps}. Suppose $y\sim \epsilon^{2\alpha}$ for some $\alpha<1/3$ and use the scaling in \eqref{eq:hide_const} for $k_0=1$; then $\epsilon^{1-3\alpha}=\cO(\sigma)$ is the condition to move all slow manifold correction terms into the higher-order correction for the variance estimate; for $\alpha =0$ this condition obviously reduces to the known fact $\epsilon=\cO(\sigma)$ from equation \eqref{eq:move_everything}. Using the critical scaling $2\alpha=2/3$ in \eqref{eq:hide_const} we get the condition $\epsilon^{1-1}=1= \cO(\sigma)$, which can never hold since $\sigma(\epsilon)\ra 0$ as $\epsilon \ra 0$ by assumption (A4). Therefore, the slow manifold approximation in (R1) obtained by the linearized process $\xi_s$ for the moments is not valid in (R2). \section{Applications} \label{sec:applications} We are going to present five applications to illustrate the previous results. We also indicate how novel conclusions about the applications follow from the theory.
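Incidentally, the coefficients in \eqref{eq:fold_asymp_mani} and \eqref{eq:sol_var_fold} are easy to confirm with a computer algebra system: feeding the $h$-coefficients of Lemma \ref{lem:asympCeps} into the hierarchy $0=1-4h_0H_0$, $dH_{k-1}/dy=-4\sum_{i+j=k}H_ih_j$ from the proof of Proposition \ref{prop:var_fold} determines the $H_k$ recursively. A small sympy sketch:

```python
import sympy as sp

# Recursively solve the hierarchy dH_{k-1}/dy = -4 * sum_{i+j=k} H_i h_j
# for H_k, using the slow manifold coefficients h_k of Lemma asympCeps.
y = sp.symbols('y', positive=True)
h = [sp.sqrt(y),
     -sp.Rational(1, 4) / y,
     -sp.Rational(5, 32) / y**sp.Rational(5, 2),
     -sp.Rational(15, 64) / y**4,
     -sp.Rational(1105, 2048) / y**sp.Rational(11, 2)]
H = [1 / (4 * sp.sqrt(y))]               # H_0 from 0 = 1 - 4 h_0 H_0
for k in range(1, 5):
    Hk = (-sp.diff(H[k - 1], y) / 4
          - sum(H[i] * h[k - i] for i in range(k))) / h[0]
    H.append(sp.simplify(Hk))
expected = [1 / (4 * sp.sqrt(y)),
            sp.Rational(3, 32) / y**2,
            sp.Rational(7, 64) / y**sp.Rational(7, 2),
            sp.Rational(201, 1024) / y**5,
            sp.Rational(3837, 8192) / y**sp.Rational(13, 2)]
print([sp.simplify(a - b) == 0 for a, b in zip(H, expected)])
# prints [True, True, True, True, True]
```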
\subsection{A Climate Box-Circulation Model} \label{ssec:climate} The Stommel model \cite{Stommel1} describes the \texttt{North Atlantic Thermohaline Circulation (THC)} by two boxes $B_1$ and $B_2$ representing low and high latitudes, respectively. An atmospheric freshwater flux and differences in insolation can induce temperature and salinity differences $\Delta T = T_1-T_2$ and $\Delta S=S_1-S_2$. The resulting system has an Atlantic northward surface current and an Atlantic southward bottom current. For a version of Stommel's box model \cite{Cessi} it can be shown that $(\Delta T,\Delta S)$ obey a two-dimensional fast-slow system where the temperature difference represents the fast variable \cite{BerglundGentz5}. After reduction to an attracting slow manifold and a re-scaling of the variables, the dynamics reduces to \be \label{eq:SDE_climate} \dot{Y}=\mu-Y\left(1+\eta^2(1-Y)^2\right) \ee where $Y$ represents the salinity difference, we fix $\eta^2=7.5$ and $\mu$ is a parameter proportional to the atmospheric freshwater flux. Obviously the freshwater flux can also be viewed as a dynamical variable and we assume that it changes more slowly than $Y$. Furthermore, we assume that \eqref{eq:SDE_climate} is subject to small stochastic perturbations, which is reasonable if we decide not to model the system in more detail. Setting $x:=Y$ and $y:=\mu$ we get another two-dimensional fast-slow system \be \label{eq:main_box_model} \begin{array}{rcl} dx_s &=& \frac1\epsilon\left[y_s-x_s(1+7.5(1-x_s)^2)\right]ds +\frac{\sigma}{\sqrt\epsilon} F(y_s)dW_s,\\ dy_s &=& g(x_s,y_s) ds.\\ \end{array} \ee The deterministic critical manifold is $C_0=\{(x,y)\in\R^2:y=x(1+7.5(1-x)^2)=:h_0(x)\}$, which is immediately recognized as a classical \texttt{S-shaped} (or \texttt{cubic}) fast subsystem nonlinearity.
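The folds of $C_0$ are the zeros of $h_0'(x)=22.5x^2-30x+8.5$; a quick numerical cross-check of the closed-form fold points stated below (numpy):

```python
import numpy as np

# Fold points of the Stommel-Cessi critical manifold y = h0(x):
# h0(x) = x*(1 + 7.5*(1-x)^2) = 8.5x - 15x^2 + 7.5x^3, so the folds are
# the roots of h0'(x) = 22.5x^2 - 30x + 8.5.
def h0(x):
    return x * (1.0 + 7.5 * (1.0 - x)**2)

x_minus, x_plus = np.sort(np.roots([22.5, -30.0, 8.5]).real)
y_minus, y_plus = h0(x_minus), h0(x_plus)
print(x_minus, y_minus)   # (10 - sqrt(15))/15 and 11/9 + 1/sqrt(15)
print(x_plus, y_plus)     # (10 + sqrt(15))/15 and 11/9 - 1/sqrt(15)
```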
There are two fold points (fast subsystem fold bifurcations) at \benn (x^-,y^-)=\left(\frac{1}{15}(10-\sqrt{15}),\frac{11}{9}+\frac{1}{\sqrt{15}}\right)\qquad \text{and} \qquad (x^+,y^+)=\left(\frac{1}{15}(10+\sqrt{15}),\frac{11}{9}-\frac{1}{\sqrt{15}}\right). \eenn The critical manifold splits into three parts $C_0^{a,-}:=C_0\cap \{x<x^-\}$, $C_0^{r}:=C_0\cap \{x^-<x<x^+\}$, and $C_0^{a,+}:=C_0\cap \{x>x^+\}$, where $C_0^{a,\pm}$ are attracting and $C_0^r$ is repelling. The lower branch $C_0^{a,-}$ represents a small salinity difference, which corresponds to a weak THC. The upper branch $C_0^{a,+}$ corresponds to a strong THC, which can be viewed as the present state of the climate. A critical transition from a strong to a weak THC would mean a significant cooling of the mild European climate. Therefore, we shall focus on the critical transition at $(x^+,y^+)$ with initial conditions on $C^{a,+}_0$. The initial condition will be fixed at $(x_0,y_0)=(x_0,3/2)\in C_0^{a,+}$, which roughly corresponds to the \texttt{drop point} \cite{MKKR} on the upper attracting critical manifold after a transition at $(x^-,y^-)$. \begin{figure}[htbp] \centering \psfrag{x}{$x$} \psfrag{xd}{$xd$} \psfrag{y}{$y$} \psfrag{t}{$t$} \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{c}{(c)} \psfrag{d}{(d)} \includegraphics[width=1\textwidth]{./fig5.eps} \caption{\label{fig:fig5}Illustration of the different techniques (M1)-(M4) to approximate the variance $\text{Var}(x(y))$; we use the Stommel-Cessi model \eqref{eq:main_box_model} with parameters given in \eqref{eq:para_Cessi}. (a) Typical time series (black) near the attracting critical manifold $C^{a,+}_0$ (red) up to the fold point $(x^+,y^+)$ (black dot). We also show two sliding windows (green) where the dashed green line is a linear trend and the solid green line is given by $C^{a,+}_0$, i.e.\ the green curves are used for linear and CM detrending, respectively. (b) Detrended time series $xd$ from (a) corresponding to the two (green) sliding windows.
(c) Zoom near $y=1.05$; five sample paths are shown. The dots (magenta) mark the five points of the paths at $y=1.05$. To calculate the variance $\text{Var}(x(y=1.05))$ one simulates many paths. (d) Simulation of the fast subsystem of \eqref{eq:main_box_model} with $y=1.05$ over the fast time interval $t\in[0,100]$.} \end{figure} We start by simulating \eqref{eq:main_box_model} using an Euler-Maruyama method \cite{Higham} with the parameters \be \label{eq:para_Cessi} \epsilon=0.01,\qquad \sigma=0.01, \qquad F(y)\equiv 1, \qquad g(x,y)\equiv-1, \ee where the assumptions on $g$ mean that one may also interpret $y$ as a time variable. A typical sample path is shown in Figure \ref{fig:fig5}(a); the path is stopped at a final value $y=0.95$. We want to estimate the variance $\text{Var}(x_s)$ from a time series \be \label{eq:tseries} y_0=y_{s_0},y_{s_1},\ldots,y_{s_N}=0.95, \qquad x_{s_0}, x_{s_1}, \ldots,x_{s_N}. \ee The values $x_{s_j}=:x_j$ can be viewed as functions of $y$ since $y_s=y_0-(s-s_0)$ and we indicate this by writing $\text{Var}(x(y)):=\text{Var}(x_s)$. There are several possibilities to extract an approximation: \begin{enumerate} \item[(M1)] Consider a single time series. Select a \texttt{moving window} of fixed length $M$ and compute the sample variance for $M+1$ consecutive points $x_j,\ldots,x_{j+M}$; see Figure \ref{fig:fig5}(a)-(b). This provides an estimate for the variance $\text{Var}(x(y))$ roughly at the midpoint $\frac{1}{M+1}\sum_{k=0}^M y_{j+k}$ of the moving window. The idea is that if the window is sufficiently small and we have sufficiently many data points inside each window, then we can calculate a good approximation to $\text{Var}(x(y))$ for each $y$. \item[(M2)] Consider a single time series as for (M1). We can remove a given \texttt{trend} from \eqref{eq:tseries} before calculating the variance.
For example, interpolating \eqref{eq:tseries} linearly and subtracting the resulting linear function from the time series yields a variance estimate with \texttt{linear detrending}. Another natural possibility is to remove the critical manifold as a trend; we call this \texttt{critical manifold (CM) detrending}. See also Figure \ref{fig:fig5}(a)-(b). \item[(M3)] Another possibility is to consider a large number $R$ of time series $x^{(r)}_{0}, x^{(r)}_{1}, \ldots,x^{(r)}_{N}$ for $r\in\{1,2,\ldots,R\}$ and then calculate the variance $\text{Var}(x(y_j))$ at $y_j$ as the sample variance of $\{x^{(1)}_{j}, x^{(2)}_{j}, \ldots,x^{(R)}_{j}\}$. This idea is illustrated in Figure \ref{fig:fig5}(c) and avoids the moving window technique. However, it does require multiple time series passing near the same critical point. \item[(M4)] Instead of simulating the entire SDE \eqref{eq:main_box_model} we can also assume that $y=y_j$ is constant, simulate the fast subsystem for a sufficiently long time and then calculate $\text{Var}(x(y_j))$ from this fast subsystem time series; see Figure \ref{fig:fig5}(d). \end{enumerate} \begin{figure}[htbp] \centering \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{c}{(c)} \psfrag{d}{(d)} \psfrag{e}{(e)} \psfrag{f}{(f)} \psfrag{x}{$x$} \psfrag{y}{$y$} \psfrag{V}{$V$} \includegraphics[width=1\textwidth]{./fig6.eps} \caption{\label{fig:fig6}Comparison of different methods to estimate the variance $V=\text{Var}(x(y))$ for the Stommel-Cessi model \eqref{eq:main_box_model} with parameters given in \eqref{eq:para_Cessi}. The black curves in (a)-(e) indicate the variance estimate and the green curves are obtained by least squares fit of \eqref{eq:ls_fit}. (a) Sliding window technique (M1) without detrending, average over 1000 sample paths. (b) Sample paths ``pointwise variance'' (M3), average over 1000 sample paths. (c) Sliding window with linear detrending (M2), average over 100 sample paths. 
(d) Sliding window with CM detrending, average over 100 sample paths. (e) Fast subsystem simulation (M4) for a fast time $t\in[0,100]$. (f) The critical manifold (red/blue) with fold point (black) is shown. The green markers indicate the estimators for $y_c$ from a least squares fit of \eqref{eq:ls_fit} plotted at the same x-value as the fold point; the green star ``*'' is the lower bound estimate for $y_c$ from (a) and (e), the green circle ``o'' marks $y_c$ for (b), the green plus ``+'' corresponds to (c) and the green ``x'' marks $y_c$ for (d).} \end{figure} Each of the methods (M1)-(M4) has different advantages and disadvantages. A direct sample variance measurement using the sliding window technique (M1) does include the curvature of the critical manifold in the estimate as demonstrated in \cite{KuehnCT1}. Linear detrending requires no a priori knowledge about the dynamics but can obviously not remove curvature near the fold point. CM detrending corresponds to the change of variable $\tilde{\xi}_s:=x_s-h_0(y_s)$ which is closest to the theoretical situation discussed in Sections \ref{sec:SDE_fs}-\ref{sec:fold_asymp}. However, this requires a priori knowledge of the critical manifold. The method (M3) requires many sample paths which is a restriction while the method (M4) requires the ability to simulate/measure the fast subsystem for a long time. In Figure \ref{fig:fig6} we compare the different methods for the Stommel-Cessi model. Figures \ref{fig:fig6}(a)-(e) provide the variance estimates together with a least squares fit of \be \label{eq:ls_fit} \text{Var}(x(y))=\frac{A}{\sqrt{y-y_c}} \ee with fitting parameters $A$ and $y_c$. The results in Figure \ref{fig:fig6} show that all methods can capture the variance increase as predicted by the theory. 
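A stripped-down version of the ensemble method (M3), together with the least squares fit \eqref{eq:ls_fit}, can be sketched as follows (numpy/scipy; the ensemble size, step size and fit window are illustrative choices and smaller than those used for Figure \ref{fig:fig6}):

```python
import numpy as np
from scipy.optimize import curve_fit

# Ensemble of Euler-Maruyama paths for the Stommel-Cessi model; the variance
# Var(x(y)) is taken across paths at each y (method (M3)) and then fitted
# with the square-root law A/sqrt(y - y_c).
rng = np.random.default_rng(0)
eps, sigma, ds, R = 0.01, 0.01, 1e-4, 200
y = np.arange(1.5, 0.95, -ds)            # slow sweep towards the fold point
x = np.full(R, 1.19)                     # ensemble started near C_0^{a,+}
var = np.empty(y.size)
for j, yj in enumerate(y):
    var[j] = x.var()
    drift = (yj - x * (1.0 + 7.5 * (1.0 - x)**2)) / eps
    x += drift * ds + sigma * np.sqrt(ds / eps) * rng.standard_normal(R)

mask = y >= 0.98                         # fit before the deterministic fold
(A, y_c), _ = curve_fit(lambda yy, A, yc: A / np.sqrt(yy - yc),
                        y[mask], var[mask], p0=(1e-3, 0.9),
                        bounds=([0.0, 0.3], [1.0, 0.9799]))
print(A, y_c)
```

The across-path variance grows visibly along the sweep, and the fit provides an estimate of the transition point analogous to the green curves in Figure \ref{fig:fig6}.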
The sliding window technique seems to deviate the most from the theory among the different methods, but it requires the least amount of data, as one basically produces a plot similar to Figure \ref{fig:fig6}(a) from just a single time series. By fitting \eqref{eq:ls_fit} we also obtain an estimate for the critical transition point $y_c$, which is slightly delayed due to the positive value of $\epsilon$. All techniques capture this effect. We obtain the estimate $y_c\in[0.92,0.95]$, which agrees very well with direct simulations. Overall one may conclude that the theoretical predictions of variance increase near a fold point apply very well in the context of the Stommel-Cessi model \eqref{eq:main_box_model} and that the different time series analysis methods all have advantages as well as disadvantages depending on the situation. Obviously we do not make any claims about the real THC with our calculations as this requires the analysis of temperature data sets. \subsection{Epidemics on Complex Adaptive Networks} \label{ssec:epidemics} Consider a network of social contacts and a disease that can be spread via these contacts as described in \cite{GrossDLimaBlasius}. Individuals of the population correspond to \texttt{nodes (or vertices)} and social contacts correspond to undirected \texttt{links (or edges)}. Denote the total number of nodes by $N$ and the number of links by $K$ and assume that $N$ and $K$ are constant; define the \texttt{mean degree} $\mu:=2K/N$. The dynamical states of the nodes are either \texttt{susceptible} or \texttt{infected}, giving classical \texttt{SIS dynamics}. If a link between an infected and a susceptible node exists, then the susceptible node becomes infected with probability $p$ at each time step. Infected nodes recover to susceptible status with probability $r$. Susceptibles might try to change their connection from an infected node to a susceptible one.
To model this effect we assume that the network is \texttt{adaptive} so that the topology of the network influences the dynamics of the nodes and vice versa. We use the following dynamical variables to describe the SIS adaptive network \benn \begin{array}{rclrl} x_1&:=&\frac{\text{\# $\{$infected$\}$} }{N}&=& \text{``density of infected individuals''},\\ x_2&:=&\frac{\text{\# $\{$links between infected and infected$\}$} }{N}&=& \text{``per capita density of II-links''},\\ x_3&:=&\frac{\text{\# $\{$links between susceptible and susceptible$\}$} }{N}&=& \text{``per capita density of SS-links''}.\\ \end{array} \eenn Note that the density of susceptible individuals is $(1-x_1)$ and the per capita density of SI-links is $(\mu/2-x_2-x_3)$. To capture the full adaptive network dynamics one would also have to take into account triples (triangle subgraphs) and all other higher-order \texttt{(network) moments}. We use the \texttt{moment closure pair approximation} \cite{KeelingRandMorris} to express the higher-order moments in terms of $x$, which yields \cite{GrossDLimaBlasius} \be \label{eq:momc_Gross} \begin{array}{rcl} x_1'&=&p(\frac\mu2-x_2-x_3)-rx_1,\\ x_2'&=&p(\frac\mu2-x_2-x_3)\left(\frac{\frac\mu2-x_2-x_3}{1-x_1}+1\right)-2rx_2,\\ x_3'&=&(r+w)(\frac\mu2-x_2-x_3)-\frac{2p(\frac\mu2-x_2-x_3)x_3}{1-x_1}.\\ \end{array} \ee For our analysis we fix the following parameters \be \label{eq:para_Gross} r=0.002,\qquad w=0.4, \qquad N=10^5, \qquad K=10^6 \quad \Rightarrow \mu=20. \ee Assume that $p$ is a slow variable and increases over time. For example, we could think of a virus that evolves towards a more infectious variant in time. Using the standard notation for slow variables we let $y:=p$ and assume $y'=\epsilon$. It is also reasonable to consider the scenario that the density of infected nodes and the link densities can exhibit stochastic fluctuations; in particular, this might lead to a model that is more realistic than the moment closure ODEs.
Combining this assumption, the slow equation and \eqref{eq:momc_Gross} we get \be \label{eq:momc_Gross1} \begin{array}{rcl} dx_1&=&\frac1\epsilon\left[y(\frac\mu2-x_2-x_3)-rx_1\right] ds + \frac{\sigma_1}{\sqrt\epsilon}dW^{(1)} ,\\ dx_2&=&\frac1\epsilon\left[y(\frac\mu2-x_2-x_3)\left(\frac{\frac\mu2-x_2-x_3}{1-x_1}+1\right)-2rx_2\right]ds +\frac{\sigma_2}{\sqrt\epsilon}dW^{(2)},\\ dx_3&=&\frac1\epsilon\left[(r+w)(\frac\mu2-x_2-x_3)-\frac{2y(\frac\mu2-x_2-x_3)x_3}{1-x_1}\right]ds+\frac{\sigma_3}{\sqrt\epsilon} dW^{(3)},\\ dy&=& 1~ds,\\ \end{array} \ee where we omit the subscript $s$ for $x_s$ and $W_s=(W^{(1)}_s,W^{(2)}_s,W^{(3)}_s)^T$ for notational convenience. Although the algebraic expression for the deterministic critical manifold $C_0$ of \eqref{eq:momc_Gross1} can be computed, we shall only focus on the subset \benn C^*_0:=\left\{(x,y)\in\left([0,1]\times [0,\mu/2]^2\right)\times [0,1]:x_1=0=x_2,x_3=\frac\mu2 \right\}\subset C_0. \eenn The solution $x_1=0=x_2$ and $x_3=\mu/2$ corresponds to an equilibrium point of \eqref{eq:momc_Gross} with no infected nodes; it can also be obtained by considering the initialization of the network as a random graph \cite{GrossDLimaBlasius}. The fast subsystem linearization around $C_0^*$ is given by \be \label{eq:lin_Gross} D_xf|_{C_0^*}=\left( \begin{array}{ccc} -r & -y & -y \\ 0 & -y-2r & -y\\ 0 & y\mu-r-w & y\mu-r-w\\ \end{array} \right). \ee \begin{figure}[htbp] \centering \psfrag{Cr}{$C_0^{*r}$} \psfrag{Ca}{$C_0^{*a}$} \psfrag{BP}{BP} \psfrag{LP}{LP} \psfrag{mH}{H} \psfrag{x1}{$x_1$} \psfrag{x2}{$x_2$} \psfrag{x3}{$x_3$} \psfrag{y}{$y$} \includegraphics[width=1\textwidth]{./fig2.eps} \caption{\label{fig:fig2}Parts of the critical manifold $C_0$ for the SIS-model \eqref{eq:momc_Gross1} where attracting branches are red and repelling branches are blue; parameters are given by \eqref{eq:para_Gross}. The manifolds (fast subsystem equilibrium branches) have been computed using numerical continuation \cite{MatCont}.
A transcritical bifurcation (branch point, [BP]) is detected at $y=y_c=0.0201$. For the number of infected nodes we show the continuation of $C_0$ away from the branch point; it undergoes a fold bifurcation (limit point, [LP]) and stabilizes at a supercritical Hopf bifurcation [H].} \end{figure} Using the parameter values \eqref{eq:para_Gross} and \eqref{eq:lin_Gross} we can easily calculate that a single eigenvalue of \eqref{eq:lin_Gross} crosses the imaginary axis at $y=y_c=0.0201$. Another direct calculation shows that $C_0^*$ splits into two subsets $C_0^{*a}=\{y<y_c\}\cap C_0^*$ and $C_0^{*r}=\{y>y_c\}\cap C_0^*$ where $C_0^{*a}$ is normally hyperbolic attracting and $C_0^{*r}$ is normally hyperbolic repelling. Note that the fast subsystem bifurcation of the trivial solution $C_0^*$ to \eqref{eq:momc_Gross} suggests a transcritical or a pitchfork bifurcation. In Figure \ref{fig:fig2} we show part of the critical manifold $C_0$ including the trivial solution $C_0^*$; the computation has been carried out using numerical continuation \cite{MatCont}. Figure \ref{fig:fig2} shows that the bifurcation is transcritical and $y=y_c$ is the infection probability threshold. \begin{figure}[htbp] \centering \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{c}{(c)} \psfrag{d}{(d)} \psfrag{e}{(e)} \psfrag{f}{(f)} \psfrag{x1}{$x_1$} \psfrag{x2}{$x_2$} \psfrag{x3}{$x_3$} \psfrag{y}{$y$} \psfrag{V}{$V$} \psfrag{V-}{$\frac{1}{\bar{V}}$} \psfrag{Vb}{$\bar{V}$} \includegraphics[width=1\textwidth]{./fig3.eps} \caption{\label{fig:fig3}Simulation results for \eqref{eq:momc_Gross1} with boundary conditions to constrain $x=(x_1,x_2,x_3)\in[0,1]\times[0,\mu/2]^2$; parameter values are given in \eqref{eq:para_Gross} and $(\sigma_1,\sigma_2,\sigma_3,\epsilon)=(0.01,0.01,0.01,0.005)$. (a)-(c) show a time series and (d) shows the associated variance of this series calculated by a sliding window technique. 
(e) Average $\bar{V}$ of the sliding-window variance for 1000 sample paths; we see that $\bar{V}_3$ shows an increase near the bifurcation. (f) Inverse of averaged variance $1/\bar{V}$ where we clearly see that $\bar{V}_3$ scales like $1/(y_c^\epsilon-y)$ up to a delayed epidemic threshold $y_c^\epsilon$. We also show two linear fits to $(\bar{V}_3)^{-1}$, one before the threshold (early-warning regime) and one after the threshold (start of critical transition). The actual full epidemic outbreak is not shown in the plot and occurs roughly between $y=0.05$ and $y=0.07$.} \end{figure} For direct simulation of \eqref{eq:momc_Gross1} one has to ensure that $x\in[0,1]\times[0,\mu/2]^2$ as the densities are constrained. Therefore, we set a point that lands outside of the domain at a time step to its associated boundary value, e.g.\ if $x_1(s_j)<0$ for some numerical time step $s_j$, then we set $x_1(s_j)=0$. This simulation is formally outside of the theory developed in Sections \ref{sec:fast_slow}-\ref{sec:fold_asymp}. Nevertheless, Figure \ref{fig:fig3} shows that the theoretical results are useful. Figure \ref{fig:fig3}(a)-(c) shows a typical sample path and we see that the $x_3$-coordinate in (c) starts to decrease beyond the singular limit critical point whereas the other two variables do not show any recognizable trend in (a)-(b). It is interesting to note that the density of infected individuals does not seem to play a role as an early-warning sign for the epidemic outbreak. Figure \ref{fig:fig3}(d) shows the variance $V=(V_1,V_2,V_3)=(\text{Var}(x_1),\text{Var}(x_2),\text{Var}(x_3))$ associated with the sample path in (a)-(c) by using a sliding window technique; the size of the sliding window corresponds to the gap in the curves near $y=0$. Figure \ref{fig:fig3}(e) shows an average variance $\bar{V}_i$ for $i\in\{1,2,3\}$ over 1000 sample paths.
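The threshold $y_c=0.0201=(r+w)/\mu$ can also be obtained directly from \eqref{eq:momc_Gross} without writing down \eqref{eq:lin_Gross} by hand; a sketch using a finite-difference Jacobian of the right-hand side (numpy/scipy):

```python
import numpy as np
from scipy.optimize import brentq

# The disease-free state (x1,x2,x3) = (0,0,mu/2) is an equilibrium of the
# moment-closure ODEs for every infection probability y = p; the epidemic
# threshold is the y-value where the largest real part of the Jacobian
# eigenvalues crosses zero.
r, w, mu = 0.002, 0.4, 20.0

def f(x, y):
    x1, x2, x3 = x
    si = mu / 2 - x2 - x3                    # per capita SI-link density
    return np.array([y * si - r * x1,
                     y * si * (si / (1 - x1) + 1) - 2 * r * x2,
                     (r + w) * si - 2 * y * si * x3 / (1 - x1)])

def spectral_bound(y, h=1e-7):
    x0 = np.array([0.0, 0.0, mu / 2])
    J = np.empty((3, 3))
    for i in range(3):                       # finite-difference Jacobian
        e = np.zeros(3)
        e[i] = h
        J[:, i] = (f(x0 + e, y) - f(x0 - e, y)) / (2 * h)
    return np.linalg.eigvals(J).real.max()

y_c = brentq(spectral_bound, 0.01, 0.03)
print(y_c)                                   # (r + w)/mu = 0.0201
```

The bisection lands at $(r+w)/\mu=0.0201$, the branch point in Figure \ref{fig:fig2}.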
Observe that $x_3$ is the best predictor variable and this leads to the conjecture that the increase in variance should scale like the inverse of the distance to the critical transition; see Figure \ref{fig:fig3}(f). Note that the critical transition at $y=y_c$ for $\epsilon=0$ is delayed due to the time scale separation \cite{KuehnCT1}. We conclude from our results that it is crucial what property of a complex system we actually \emph{measure} to make predictions. Indeed, the SIS-epidemic model suggests that measuring the variance in links can be much more important than just the number of infected individuals. Furthermore, the technique we developed here can also be applied to adaptive networks in completely different contexts \cite{GrossSayama,KuehnZschalerGross}. \subsection{A Switch in Systems Biology} \label{ssec:SysBio} To understand complex molecular networks one often seeks to construct models of simpler building blocks of the network. These building blocks are composed of genes and proteins and can often act as various kinds of ``switches'' inside a more complex system. Dynamical systems methods for these systems biology questions are a highly active research area \cite{Brackleyetal}. Low-dimensional dynamical systems have been proposed to model the smallest units in a molecular network. A typical example is the \texttt{activator-inhibitor} system. Suppose the \texttt{activator} species $R$ is produced in an autocatalytic reaction but rising $R$ also promotes the production of an \texttt{inhibitor} species $X$. More concretely, one may think of both species $(R,X)$ as concentrations of proteins. Activator-inhibitor systems incorporate positive and negative feedback which can lead to oscillations. 
One model proposed for activator-inhibitor oscillators \cite{TysonChenNovak} is \be \label{eq:ActInh} \begin{array}{lcl} R'&=& k_0G(k_3R,k_4,J_1,J_2)+k_1S-k_2R-k_7XR\\ X'&=& k_5R-k_6X\\ \end{array} \ee where the \texttt{Goldbeter-Koshland function} $G$ \cite{GoldbeterKoshland,NovakPatakiCilibertoTyson} is \benn G(u,v,J,K)=\frac{2uK}{v-u+vJ+uK+\sqrt{(v-u+vJ+uK)^2-4(v-u)uK}} \eenn and $k_j$ for $j\in\{0,1,\ldots,7\}$, $J_i$ for $i\in\{1,2\}$ and $S$ are parameters. The main bifurcation parameter is the \texttt{signal strength} $S$, which can be viewed as an external input to the system \eqref{eq:ActInh}. We are going to fix the other parameters following \cite{TysonChenNovak} as \benn k_0=4, \quad k_1=k_2=k_3=k_4=k_7=1, \quad k_5=0.1, \quad k_6=0.075, \quad J_1=J_2=0.3. \eenn \begin{figure}[htbp] \centering \psfrag{x1}{$x_1$} \psfrag{x2}{$x_2$} \psfrag{y}{$y$} \psfrag{C0}{$C_0$} \psfrag{LPC}{LPC} \psfrag{H}{H} \includegraphics[width=1\textwidth]{./fig7.eps} \caption{\label{fig:fig7}Dynamics for $\epsilon=0$ for the deterministic version of the activator-inhibitor system \eqref{eq:Tyson}. The critical manifold $C_0$ is the red-blue curve, which loses normal hyperbolicity at a fast subsystem subcritical Hopf bifurcation (black dot, [H]) at $y\approx 0.09146$. The generated small limit cycles (blue) are first repelling and then undergo a fold (or saddle-node, or limit point [LPC]) bifurcation; the large fast subsystem limit cycles (red) are attracting. A critical transition occurs near the Hopf bifurcation as trajectories leave the critical manifold and jump to a large limit cycle.
See also Figure \ref{fig:fig8} for the fast subsystem phase portraits.} \end{figure} \begin{figure}[htbp] \centering \psfrag{x1}{$x_1$} \psfrag{x2}{$x_2$} \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{c}{(c)} \includegraphics[width=1\textwidth]{./fig8.eps} \caption{\label{fig:fig8}Illustration of the subcritical Hopf bifurcation for the deterministic fast subsystem of \eqref{eq:Tyson}; equivalently the results apply to \eqref{eq:ActInh} with $(R,X)=(x_1,x_2)$. Nullclines are shown in magenta for $x_1$ and in orange for $x_2$. Trajectories are black and invariant sets are red (stable) and blue (unstable). (a) $y=0.01$: The system has a stable spiral sink. (b) $y=0.085$: In addition to the spiral sink there exist a small unstable and large stable limit cycle. (c) $y=0.12$: The equilibrium point is a spiral source and only the large stable limit cycle exists.} \end{figure} Let us consider the case when the external input $S$ is a slow signal that starts out sufficiently low so that no oscillations occur for \eqref{eq:ActInh}. Then we let $S=:y$ increase until a transition to large oscillations is observed. It is reasonable to assume that the variables $(R,X)=:(x_1,x_2)$ are stochastic with correlated noise. Under these assumptions we can write \eqref{eq:ActInh} as the SDE \be \label{eq:Tyson} \begin{array}{lcl} dx_1&=&\frac1\epsilon\left[ 4G(x_1,1,0.3,0.3)+y-x_1-x_1x_2 \right]ds +\frac{\sigma}{\sqrt\epsilon}\left(F_{11}dW^{(1)}+F_{12}dW^{(2)}\right),\\ dx_2&=& \frac1\epsilon\left[0.1x_1-0.075x_2\right]ds+\frac{\sigma}{\sqrt\epsilon}\left(F_{21}dW^{(1)}+F_{22}dW^{(2)}\right),\\ dy&=& 1 ~ds.\\ \end{array} \ee The critical manifold $C_0$ for the deterministic part of \eqref{eq:Tyson} is given by \benn C_0=\left\{(x_1,x_2,y)\in\R^{3}:x_2=\frac43 x_1,y=x_1+\frac43 x_1^2-4G(x_1,1,0.3,0.3)\right\}. 
\eenn It is easy to check that the critical manifold is attracting for $y<y_{H,1}$ and $y>y_{H,2}$ and repelling for $y_{H,1}<y<y_{H,2}$ where $y_{H,1}\approx 0.091462$ and $y_{H,2}\approx 0.440903$ are fast subsystem Hopf bifurcation points. We focus on the subcritical Hopf bifurcation at $y=y_{H,1}$. Figure \ref{fig:fig7} shows an illustration of the singular limit dynamics near this Hopf bifurcation point. Repelling fast subsystem limit cycles are generated at the Hopf bifurcation. These cycles undergo a further fold (or saddle-node, or limit point) bifurcation to attracting cycles which grow rapidly. By looking at the phase plane of the fast subsystem in Figure \ref{fig:fig8} we observe that the $x_1$-nullcline can also be viewed as another critical manifold of the two-dimensional system $(x_1,x_2)$ where $x_2$ would be fast and $x_1$ even faster, which yields a three-scale system with canard explosion \cite{KrupaPopovicKopell}. The important outcome of this mechanism is that passing from the attracting critical manifold \benn C^a_0:=\{(x_1,x_2,y)\in\R^3:y<y_{H,1}\}\cap C_0 \eenn through the Hopf bifurcation produces a critical transition to large limit cycle oscillations. The critical transition can be viewed as an almost instantaneous switch to sustained oscillations. For the stochastic simulation we recall from the definition in equation \eqref{eq:defineN} that \benn N=\left(\begin{array}{cc} N_{11} & N_{12} \\ N_{12} & N_{22} \\\end{array}\right)= \left(\begin{array}{cc} F_{11}^2+F_{12}^2 & F_{11}F_{21}+F_{12}F_{22} \\ F_{11}F_{21}+F_{12}F_{22} & F_{21}^2+F_{22}^2 \\\end{array}\right). \eenn \begin{figure}[htbp] \centering \psfrag{y}{$y$} \psfrag{V1}{$V_1$} \psfrag{V2}{$V_2$} \psfrag{V3}{$C_{12}$} \psfrag{AI}{Act.-Inh. \eqref{eq:Tyson}} \psfrag{Hopf}{Hopf n.f.
\eqref{eq:Hopf_ex_nf}} \psfrag{a1}{(a1)} \psfrag{a2}{(a2)} \psfrag{a3}{(a3)} \psfrag{b1}{(b1)} \psfrag{b2}{(b2)} \psfrag{b3}{(b3)} \includegraphics[width=1\textwidth]{./fig9.eps} \caption{\label{fig:fig9}The row labels denote $V_1=\text{Var}(x_1)$, $V_2=\text{Var}(x_2)$ and $C_{12}=\text{Cov}(x_1,x_2)$. (a1)-(a3) Parameter values are $\epsilon=10^{-5}$ and $\sigma=10^{-3}$ for \eqref{eq:Tyson}. (b1)-(b3) Parameter values are $\epsilon=5\times10^{-4}$ and $\sigma=10^{-3}$ for \eqref{eq:Hopf_ex_nf}. All figures have been computed from 100 sample paths by a sliding window technique (black curves). The variances have been fitted using \eqref{eq:fit_Hopf} and the covariance has been fitted linearly (green curves). We observe that the normal form corresponds perfectly to the theory but that the three-time scale structure of the activator-inhibitor system becomes visible in the variance and covariance measurements.} \end{figure} For the numerical simulations we fix $N_{11}=1=N_{22}$ and $N_{12}=0.2$. In Figure \ref{fig:fig9}(a1)-(a3) the variance and covariance near the subcritical Hopf bifurcation at $y=y_{H,1}$ for the activator-inhibitor system \eqref{eq:Tyson} are shown. The variances $\text{Var}(x_{1,2})$ have been fitted using \be \label{eq:fit_Hopf} \text{Var}(x_{j}(y))=\frac{A}{y-y_c}, \qquad \text{for $j\in\{1,2\}$} \ee with fit parameters $A$ and $y_c$. The covariance has been fitted linearly. The variance of the fastest variable $\text{Var}(x_1(y))$ behaves approximately as predicted near the critical transition as $\cO(1/y)$. However, the variance $\text{Var}(x_2(y))$ of the slower variable $x_2$ does not show a clear increase and the covariance near the critical transition is not constant. This shows that the three-time scale structure requires a very careful analysis and a transformation to normal form would be needed to apply Theorem \ref{thm:CK2}.
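Since $1/\text{Var}$ is linear in $y$ under the model \eqref{eq:fit_Hopf}, the nonlinear fit reduces to linear least squares. The following Python sketch illustrates this reduction on synthetic variance data; the values of $A$ and $y_c$ below are illustrative, not the ones obtained from \eqref{eq:Tyson}.

```python
import numpy as np

# Fit Var(x(y)) = A/(y - y_c) by noting that 1/Var = (y - y_c)/A is
# linear in y: slope = 1/A and intercept = -y_c/A.
A_true, yc_true = -2e-6, 0.0915            # synthetic "ground truth" (illustrative)
y = np.linspace(0.02, 0.08, 50)
var = A_true / (y - yc_true)               # noiseless synthetic variance data

slope, intercept = np.polyfit(y, 1.0 / var, 1)
A_fit = 1.0 / slope
yc_fit = -intercept * A_fit
print(A_fit, yc_fit)
```

On real variance measurements the same reduction applies after discarding windows with non-positive variance estimates.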
A prediction of the critical transition point can still work; {e.g.}~using $\text{Var}(x_1(y))$ produces the estimate $y_c\approx 0.094$. We have also compared the activator-inhibitor results to a Hopf bifurcation normal form system \be \label{eq:Hopf_ex_nf} \begin{array}{lcl} dx_1&=&\frac1\epsilon\left[ yx_1-x_2+x_1(x_1^2+x_2^2)\right]ds +\frac{\sigma}{\sqrt\epsilon}\left(F_{11}dW^{(1)}+F_{12}dW^{(2)}\right),\\ dx_2&=& \frac1\epsilon\left[x_1+yx_2+x_2(x_1^2+x_2^2)\right]ds+\frac{\sigma}{\sqrt\epsilon}\left(F_{21}dW^{(1)}+F_{22}dW^{(2)}\right),\\ dy&=& 1 ~ds.\\ \end{array} \ee Figure \ref{fig:fig9}(b1)-(b3) shows the results, which match Theorem \ref{thm:CK2} as expected. For the covariance there is a clear difference between \eqref{eq:Hopf_ex_nf} and the activator-inhibitor system; compare Figures \ref{fig:fig9}(a3) and \ref{fig:fig9}(b3). The increase of the covariance near the critical transition is not expected and might be related to deterministic rotation around the slow manifold $C^a_\epsilon$, {i.e.}~the manifold is attracting but also a spiral sink of the fast subsystem; see also Section \ref{ssec:ecology} where another possible explanation is given.\\ To conclude, observe that the bifurcation structure displayed by \eqref{eq:Tyson} has a fast subsystem with an S-shaped (cubic) critical manifold which makes the results applicable also to typical neuroscience models such as bursting neurons \cite{Izhikevich,Rinzel}. Therefore, we have shown that subunits of molecular networks as well as neurons in neural networks do have information available that allows them to predict a future state without previous knowledge of the exact position of this state. Whether this predictive potential is actually used in a real molecular or neural network is far beyond the scope of this paper but certainly constitutes a fascinating question. For a recent application to excitable neuron models and epileptic seizures see \cite{MeiselKuehn}.
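A minimal Euler--Maruyama discretization of \eqref{eq:Hopf_ex_nf} can be sketched as follows; the diffusion matrix $F$ is obtained as the Cholesky factor of $N$ (so that $FF^T=N$ for $N_{11}=1=N_{22}$, $N_{12}=0.2$), while the step size and the stopping value of $y$ are illustrative choices rather than the settings used for Figure \ref{fig:fig9}.

```python
import numpy as np

# Euler-Maruyama sketch for the Hopf normal form SDE with correlated noise.
eps, sigma = 5e-4, 1e-3
N = np.array([[1.0, 0.2], [0.2, 1.0]])
F = np.linalg.cholesky(N)          # lower-triangular factor with F @ F.T == N
dt = 1e-4
rng = np.random.default_rng(0)

x = np.zeros(2)
y = -1.0
path = []
while y < -0.1:                    # stay below the subcritical Hopf point at y = 0
    r2 = x[0]**2 + x[1]**2
    drift = np.array([y * x[0] - x[1] + x[0] * r2,
                      x[0] + y * x[1] + x[1] * r2]) / eps
    dW = rng.normal(size=2) * np.sqrt(dt)
    x = x + drift * dt + (sigma / np.sqrt(eps)) * (F @ dW)
    y += dt
    path.append((y, x[0], x[1]))
```

While the fast subsystem equilibrium is attracting, the path stays in a small noise-dominated neighborhood of $x=0$; the sliding-window statistics of Figure \ref{fig:fig9} are computed from such paths.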
\subsection{A Predator-Prey System near a Codimension-Two Bifurcation} \label{ssec:ecology} Sudden shifts in ecosystems have been a primary motivation to develop the theory of critical transitions \cite{SchefferCarpenter}. Recently, experimental evidence has also been provided \cite{DrakeGriffen}. However, many studies seem to view fold critical transitions as the only relevant transition \cite{vanNesScheffer}. This viewpoint does not seem to be appropriate as codimension two (and higher codimension) bifurcations occur very frequently in ecological models \cite{BazykinKhibnikKrauskopf}. Here we focus on the analysis of a classical \texttt{predator-prey model} \cite{BazykinKhibnikKrauskopf} \be \label{eq:Kuznetsov_Baz} \begin{array}{lcl} x_1'&=& x_1-\frac{x_1x_2}{1+\alpha x_1}-\xi x_1^2,\\ x_2'&=&-\gamma x_2+\frac{x_1x_2}{1+\alpha x_1}-\delta x_2^2,\\ \end{array} \ee where $x_1$ represents prey, $x_2$ represents predators and $\alpha$, $\delta$, $\xi$, $\gamma$ are positive parameters. The bifurcation analysis of \eqref{eq:Kuznetsov_Baz} in the $(\alpha,\delta)$-parameter plane has been nicely described by Kuznetsov (see \cite{Kuznetsov}, p.327-332) under the assumptions $\gamma=1$ and $0<\xi\ll1$. For numerical simulations we fix $\gamma=1$ and $\xi=0.01$. \begin{figure}[htbp] \centering \psfrag{y1}{$y_1$} \psfrag{y2}{$y_2$} \psfrag{R1}{$Q_1$} \psfrag{R2}{$Q_2$} \psfrag{R3}{$Q_3$} \psfrag{LP}{LP} \psfrag{H}{H} \psfrag{BT}{BT} \includegraphics[width=0.85\textwidth]{./fig10.eps} \caption{\label{fig:fig10}Partial bifurcation diagram of the Bazykin predator-prey model \eqref{eq:Kuznetsov_Baz} with $\gamma=1$ and $\xi=0.01$. The parameters $(y_1,y_2)$ can be viewed as slow variables. The main organizing center in the diagram is the codimension-two Bogdanov-Takens (black dot, [BT]) point that occurs at a tangency of Hopf (red, [H]) and fold (blue, [LP]) bifurcation curves.
Phase space diagrams for the different regions $Q_1$, $Q_2$ and $Q_3$ are shown in Figure \ref{fig:fig11}; note that $Q_3$ splits into two sub-regions by a homoclinic bifurcation curve which we do not show here. The dashed curve (green) shows a slow subsystem trajectory that approaches the BT point.} \end{figure} \begin{figure}[htbp] \centering \psfrag{y1}{$x_1$} \psfrag{y2}{$x_2$} \psfrag{R1}{$Q_1$} \psfrag{R2}{$Q_2$} \psfrag{R3}{$Q_3$} \includegraphics[width=0.98\textwidth]{./fig11.eps} \caption{\label{fig:fig11}Phase space diagrams for different parameter regions in Figure \ref{fig:fig10}; black curves are trajectories. $Q_1$: $(y_1,y_2)=(0.45,0.35)$; $Q_2$: $(y_1,y_2)=(0.35,0.3)$; $Q_3$: $(y_1,y_2)=(0.45,0.15)$. In $Q_1$ there is a stable spiral sink outside of the chosen range at $(x_1,x_2)\approx(92.12,3.34)$. A spiral sink equilibrium point also exists in $Q_2$ and $Q_3$ outside of the displayed ranges. In $Q_2$ we have a spiral sink (red dot) and a saddle point (blue dot) that correspond to attracting and saddle-type branches of the critical manifold. In $Q_3$ we have a spiral source and a saddle point corresponding to unstable and saddle-type critical manifolds.} \end{figure} We set $y_1:=\alpha$ and $y_2:=\delta$ to indicate that these parameters will be viewed as slow variables. Part of the bifurcation diagram for \eqref{eq:Kuznetsov_Baz} is shown in Figure \ref{fig:fig10}. Figure \ref{fig:fig10} shows two curves of fold bifurcations, which actually form a closed curve $c_{LP}$ (``isola'') in parameter space. This curve has a tangency with a supercritical Hopf bifurcation curve $c_H$ at a codimension-two Bogdanov-Takens (BT) point. We do not show the homoclinic bifurcation curve originating at the BT point in Figure \ref{fig:fig10}. The curves $c_{LP}$ and $c_{H}$ can be calculated explicitly \cite{Kuznetsov}. 
One simply uses the linearization $D_xF(x^*)$ of \eqref{eq:Kuznetsov_Baz} at an equilibrium point $(x_1,x_2)=x^*$ and applies the conditions $\det(D_xF(x^*))=0$ and $\text{Tr}(D_xF(x^*))=0$; this gives \benn \begin{array}{llll} c_{LP}&=&\{y\in\R^2:&4\xi(y_1-1)^3+((y_1^2-20y_1-8)\xi^2+2y_1\xi(y_1^2-11y_1+10)\\ &&&+y_1^2(y_1-1)^2)y_2-4(y_1+\xi)^3y_2^2=0\},\\ c_{H}&=&\{y\in\R^2:&4\xi(y_1(y_1-1)+\xi(y_1+1))+(2(\xi+1)y_1^2+(3\xi^2-2\xi-1)y_1\\ &&&+\xi(\xi^2-2\xi+5))y_2+(y_1+\xi-1)^2y_2^2=0\}.\\ \end{array} \eenn The Bogdanov-Takens point satisfies all genericity conditions required by assumption (A2) so that Lemma \ref{lem:BT} and Theorem \ref{thm:CK2} apply. The normal form coefficient is $s=-1$ in equation \eqref{eq:BT_nform}. Hence the only stable equilibrium point near the BT-point can be found between the Hopf and fold curves in region $Q_2$ in Figure \ref{fig:fig10}. Phase portraits for different regions are shown in Figure \ref{fig:fig11}. A stable limit cycle can occur between the Hopf and homoclinic bifurcation curves in region $Q_3$ but we ignore this possibility and restrict ourselves to critical transitions via fast subsystem stable equilibrium points. It is natural to assume that the populations $(x_1,x_2)$ are subject to stochastic fluctuations and to view $(y_1,y_2)$ as slow dynamic variables, changing slowly due to evolutionary or environmental effects. This converts \eqref{eq:Kuznetsov_Baz} into the SDE \be \label{eq:Bazykin} \begin{array}{lcl} dx_1&=& \frac1\epsilon\left[x_1-\frac{x_1x_2}{1+\alpha x_1}-\xi x_1^2\right] ds+\frac{\sigma_1}{\sqrt\epsilon}dW^{(1)},\\ dx_2&=&\frac1\epsilon \left[-\gamma x_2+\frac{x_1x_2}{1+\alpha x_1}-\delta x_2^2\right]ds +\frac{\sigma_2}{\sqrt\epsilon}dW^{(2)},\\ dy_1&=&g_1(x,y)ds,\\ dy_2&=&g_2(x,y)ds,\\ \end{array} \ee where we have assumed uncorrelated noise in the fast variables.
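As a quick numerical cross-check of the phase portrait in region $Q_2$ (see Figure \ref{fig:fig11}), one can locate the small equilibrium of the deterministic fast subsystem and inspect its linearization; the sketch below uses a plain Newton iteration with a finite-difference Jacobian, and the initial guess $(4,2)$ is an assumption based on the displayed range.

```python
import numpy as np

# Fast subsystem of the Bazykin model with gamma=1, xi=0.01 at
# (y1,y2) = (alpha,delta) = (0.35,0.3), a point in region Q2.
alpha, delta, gamma, xi = 0.35, 0.3, 1.0, 0.01

def f(v):
    x1, x2 = v
    h = x1 * x2 / (1.0 + alpha * x1)
    return np.array([x1 - h - xi * x1**2,
                     -gamma * x2 + h - delta * x2**2])

def jac(v, step=1e-7):
    # forward-difference approximation of the Jacobian D_x F
    J = np.zeros((2, 2))
    f0 = f(v)
    for k in range(2):
        dv = np.zeros(2)
        dv[k] = step
        J[:, k] = (f(v + dv) - f0) / step
    return J

v = np.array([4.0, 2.0])          # initial guess near the small equilibrium
for _ in range(50):
    v = v - np.linalg.solve(jac(v), f(v))

eigs = np.linalg.eigvals(jac(v))  # complex pair with negative real parts: spiral sink
print(v, eigs)
```

The complex conjugate eigenvalues with negative real part confirm the spiral sink on the attracting branch $C^a_0$ in $Q_2$.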
The critical manifold $C_0$ of the deterministic version of \eqref{eq:Bazykin} has an attracting branch $C^a_0$ in the region $Q_2$ (see Figures \ref{fig:fig10} and \ref{fig:fig11}) corresponding to a spiral sink of the fast subsystem \eqref{eq:Kuznetsov_Baz}. We want to approach the Bogdanov-Takens critical transition via a slow flow inside the region $Q_2$. Figure \ref{fig:fig11} shows a dashed curve (green) which is a possible slow subsystem trajectory. It is part of a candidate $\gamma_0$ that undergoes a critical transition according to Lemma \ref{lem:BT}. In principle, we could try to embed such a candidate into an explicit slow flow $\dot{y}=g(x,y)=(g_1(x,y),g_2(x,y))^T$. For numerical simulations of \eqref{eq:Bazykin} it will suffice to define a single trajectory $\gamma_0$ along which we approach the BT transition. We can obtain $\gamma_0$, for example, by polynomial interpolation of a suitable set of points lying in $Q_2$ and the BT point. The initial condition for our numerical simulation is chosen as $(x_1,x_2,y_1,y_2)\approx(3.1544,1.8849,0.3,0.3293)$, where the $y$-coordinates lie on the dashed curve indicated in Figure \ref{fig:fig10} and the $x$-coordinates are on the attracting critical manifold $C^a_0$. Calculations have been carried out for 50 sample paths and the variance has been calculated via a moving window method for each path (see the gap in Figure \ref{fig:fig12}(a) for the window size) with linear detrending. Then the results are averaged over the 50 paths. Figure \ref{fig:fig12}(a) compares the variances $V_i=\text{Var}(x_i(y))$ for $i\in\{1,2\}$.
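The moving window estimator with linear detrending can be sketched as follows; window length and input data are placeholders (the actual window size is indicated by the gap in Figure \ref{fig:fig12}(a)).

```python
import numpy as np

def sliding_window_variance(t, x, window):
    """Variance of the fluctuations of x over a sliding window,
    with a linear trend removed from each window."""
    out = np.full(len(x), np.nan)
    for i in range(window, len(x) + 1):
        seg_t = t[i - window:i]
        seg_x = x[i - window:i]
        coef = np.polyfit(seg_t, seg_x, 1)        # linear detrending
        resid = seg_x - np.polyval(coef, seg_t)   # fluctuations around the trend
        out[i - 1] = np.var(resid)
    return out
```

Detrending matters here because the slow drift of the path along $C^a_0$ would otherwise inflate the variance estimate.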
\begin{figure}[htbp] \centering \psfrag{Vi}{$V_i$} \psfrag{y1}{$y_1$} \psfrag{V1}{$V_1$} \psfrag{V2}{$V_2$} \psfrag{Vari}{$V_i=\text{Var}(x_i)$ $\downarrow$} \psfrag{Fits}{Fits of $V_i$ $\rightarrow$} \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{c}{(c)} \includegraphics[width=1\textwidth]{./fig12.eps} \caption{\label{fig:fig12}Simulations averaged over 50 sample paths of the Bazykin predator-prey model \eqref{eq:Bazykin} with $\gamma=1$, $\xi=0.01$ and $(\epsilon,\sigma)=(3\times 10^{-5},1\times 10^{-3})$. (a) Variance curves $V_i=\text{Var}(x_i(y))$ for $i\in\{1,2\}$; the red curve corresponds to $V_1$ and the black curve to $V_2$. In (b) and (c) we repeat these curves and show different fits. The green curves correspond to \eqref{eq:fit_BT1} and the blue curve to \eqref{eq:fit_BT2}.} \end{figure} Figures \ref{fig:fig12}(b)-(c) show fits (green curves) of the variances \be \label{eq:fit_BT1} \text{Var}(x_{i}(y))=\frac{A}{y_{1,c}-y_1}, \qquad \text{for $i\in\{1,2\}$} \ee and also an inverse square-root fit (blue curve) \be \label{eq:fit_BT2} \text{Var}(x_{2}(y))=\frac{A}{\sqrt{y_{1,c}-y_1}} \ee where $A$, $y_{1,c}$ are the fitting parameters. Note that \emph{both} variances increase like $\cO_y^*(1/(y_{1,c}-y_1))$ near the critical transition and that \eqref{eq:fit_BT1} is a good fit for $V_2$ while \eqref{eq:fit_BT2} is not. At first, this might look unexpected since the normal form analysis predicts one variance to increase like $\cO_y^*(1/\sqrt{y_{1,c}-y_1})$. However, equation \eqref{eq:Bazykin} is not in normal form.
To explain the effect let us consider the Bogdanov-Takens normal form \be \label{eq:BT_nform_new} \begin{array}{lcl} d\tilde{x}_1&=&\frac1\epsilon[\tilde{x}_2]ds+\frac{\sigma}{\sqrt\epsilon}F_1dW^{(1)},\\ d\tilde{x}_2&=&\frac1\epsilon[y_1+y_2\tilde{x}_2+\tilde{x}_1^2+s\tilde{x}_1\tilde{x}_2]ds+\frac{\sigma}{\sqrt\epsilon}F_2dW^{(2)},\\ \end{array} \ee with suitable slow variables $y=(y_1,y_2)$ so that we approach the critical BT-transition at $(\tilde{x},y)=(0,0)$. Consider a linear map \benn \left(\begin{array}{c} x_1 \\ x_2 \\\end{array}\right)= \left(\begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22}\\ \end{array}\right) \left(\begin{array}{c} \tilde{x}_1 \\ \tilde{x}_2 \\\end{array}\right)=B\left(\begin{array}{c} \tilde{x}_1 \\ \tilde{x}_2 \\\end{array}\right) \eenn where $B\in\R^{2\times 2}$ is invertible. We know that \benn \text{Var}(\tilde{x}_1(y))=\cO_y^*\left( \frac{1}{y_1}\right),\qquad \text{Var}(\tilde{x}_2(y))=\cO_y^*\left( \frac{1}{\sqrt{y_1}}\right), \qquad \text{Cov}(\tilde{x}_1(y),\tilde{x}_2(y))=\cO_y^*(1) \eenn as $y_1\ra 0$. After applying the transformation $B$ a formal calculation yields \beann \text{Var}(x_1)&=&\text{Var}(b_{11}\tilde{x}_1+b_{12}\tilde{x}_2)=b_{11}^2\text{Var}(\tilde{x}_1)+b_{12}^2\text{Var}(\tilde{x}_2)+2b_{11}b_{12}\text{Cov}(\tilde{x}_1,\tilde{x}_2)\\ &=& \cO_y^*\left(\frac{1}{y_1}\right)+\cO_y^*\left(\frac{1}{\sqrt{y_1}}\right)+\cO_y^*(1)=\cO_y^*\left(\frac{1}{y_1}\right),\\ \text{Var}(x_2)&=&\text{Var}(b_{21}\tilde{x}_1+b_{22}\tilde{x}_2)=b_{21}^2\text{Var}(\tilde{x}_1)+b_{22}^2\text{Var}(\tilde{x}_2)+2b_{21}b_{22}\text{Cov}(\tilde{x}_1,\tilde{x}_2)\\ &=& \cO_y^*\left(\frac{1}{y_1}\right)+\cO_y^*\left(\frac{1}{\sqrt{y_1}}\right)+\cO_y^*(1)=\cO_y^*\left(\frac{1}{y_1}\right). \eeann This explains why both variances $V_i$ increase like $\cO_y^*(1/(y_{1,c}-y_1))$ in Figure \ref{fig:fig12}.
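The dominance argument can also be checked with a small Monte Carlo experiment; the entries $b_{11},b_{12}$ below are arbitrary illustrative values and, for simplicity, the $\cO_y^*(1)$ covariance term is omitted (independent $\tilde{x}_1,\tilde{x}_2$ already exhibit the effect).

```python
import numpy as np

rng = np.random.default_rng(1)
b11, b12 = 0.7, 1.3                  # illustrative entries of the transformation B
ratios = []
for y1 in [1e-2, 1e-3, 1e-4]:
    n = 10**6
    xt1 = rng.normal(0.0, np.sqrt(1.0 / y1), n)    # Var(x~1) = 1/y1
    xt2 = rng.normal(0.0, (1.0 / y1) ** 0.25, n)   # Var(x~2) = 1/sqrt(y1)
    x1 = b11 * xt1 + b12 * xt2
    ratios.append(np.var(x1) * y1)   # tends to b11**2 as y1 -> 0
print(ratios)
```

The rescaled variance $\text{Var}(x_1)\,y_1$ approaches $b_{11}^2$, i.e.\ the $1/y_1$ term dominates the mixture, exactly as in the formal calculation above.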
The scaling law from the Hopf bifurcation dominates the scaling law from the saddle-node bifurcation near a codimension-two Bogdanov-Takens point when the system is not in normal form. We conclude this section with some potential implications for ecological modeling and ecosystem management. Once we have passed the BT-point the system transitions with high probability to a far-away equilibrium (see Figure \ref{fig:fig11}). In particular, the density of the prey population increases dramatically. In this scenario it will be very difficult to reverse the system to the original state as the region $Q_2$ of slow variables/parameters is very narrow near the BT-point. The most interesting aspect of the BT-transition in the Bazykin model \eqref{eq:Bazykin} is that just measuring the variances, without a preliminary normal form transformation, can be misleading. Measurement and fitting indicate a variance increase governed by $\cO_y^*(1/y)$, which could just indicate a supercritical Hopf transition from region $Q_2$ to $Q_3$, {i.e.}~passing the (red) Hopf curve in Figure \ref{fig:fig10}. This transition would not be critical and can easily be reversed. The slower variance increase of the critical fold transition is \emph{hidden} near the BT-point! \subsection{Biomechanics and Control near Instability} \label{ssec:mechanics} \begin{figure}[htbp] \centering \psfrag{A}{A} \psfrag{B}{B} \psfrag{LP}{LP} \psfrag{BP}{BP} \psfrag{th}{$\theta$} \psfrag{Fs}{$\cF_s$} \includegraphics[width=1\textwidth]{./fig13.eps} \caption{\label{fig:fig13}Panels A and B show a sketch of the Euler buckling experiment as considered in \cite{VenkadesanGuckenheimerValero-Cuevas}. The force $\cF_s$ compresses the spring which should stay in the upright/vertical position as shown in $B$. The bifurcation diagram on the right shows the subcritical pitchfork \eqref{eq:pitch1} with parameter values \eqref{eq:pvals_pitch}.
The pitchfork (branch point [BP]) from the attracting equilibrium branch (thick red line) occurs at $F_s=3.3$. The unstable branches (dashed blue) undergo a further fold bifurcation (limit point [LP]). In $A$ we see what happens when the spring buckles and leaves the vertical position.} \end{figure} \begin{figure}[htbp] \centering \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{c}{(c)} \psfrag{d}{(d)} \psfrag{x}{$x$} \psfrag{y}{$y$} \psfrag{V}{$V$} \includegraphics[width=1\textwidth]{./fig14.eps} \caption{\label{fig:fig14}(b)-(d) Sample paths for \eqref{eq:Madu} with $F(y)=1$ (red), $F(y)=\sqrt{y_c-y}$ (green), $F(y)=y_c-y$ (blue) and $g(x,y)=1$; fixed parameter values are given in \eqref{eq:pvals_pitch} and $(\epsilon,\sigma)=(0.005,0.01)$. The initial condition is $(x_0,y_0)=(0,2)$. The realization of the noise $W=W_s$ is the same for all three paths. In (a) we calculate the variance $V=\text{Var}(x(y))$ for each path using a sliding window technique. Note that we can already spot in the time series that the variance increases for $F(y)=1$, stays roughly constant for $F(y)=\sqrt{y_c-y}$ and decays to zero for $F(y)=y_c-y$ as $y$ tends towards the pitchfork critical transition at $y_c=3.3$.} \end{figure} \begin{figure}[htbp] \centering \psfrag{F1}{$F=1$} \psfrag{F2}{$F=\sqrt{y_c-y}$} \psfrag{F3}{$F=y_c-y$} \psfrag{y}{$y$} \psfrag{V}{$V$} \includegraphics[width=1\textwidth]{./fig15.eps} \caption{\label{fig:fig15}Average variance $V=\text{Var}(x(y))$ for \eqref{eq:Madu} for $F(y)=1$ (red), $F(y)=\sqrt{y_c-y}$ (green), $F(y)=y_c-y$ (blue) and $g(x,y)=1$ over 100 sample paths; fixed parameter values are given in \eqref{eq:pvals_pitch} and $(\epsilon,\sigma)=(0.005,0.007)$. The initial condition is $(x_0,y_0)=(0,2)$. The point where the three variances cross corresponds approximately to $y^*=2.3$.
This is expected since the three functions in \eqref{eq:pitch_noise} are equal at $y=y^*$.} \end{figure} In \cite{VenkadesanGuckenheimerValero-Cuevas} the authors investigate how humans control a spring near instability. The experimental setup asks participants to use their thumbs to compress the spring near the threshold of the classical \texttt{Euler buckling instability}; see Figure \ref{fig:fig13}. A mathematical model for this problem is provided by a subcritical pitchfork bifurcation with quintic non-linearity given by \be \label{eq:pitch1} \theta'=p_1(\cF_s-p_2) \theta+p_3\theta^3-p_4\theta^5 \ee where $\cF_s$, $p_j$ for $j\in\{1,2,3,4\}$ are parameters and $\theta$ represents the angle of the spring with respect to its vertical/upright position \cite{VenkadesanGuckenheimerValero-Cuevas}. The parameter $\cF_s$ is viewed as the force applied to the spring. The bifurcation diagram of \eqref{eq:pitch1} is shown in Figure \ref{fig:fig13}. To stay within the framework of \cite{VenkadesanGuckenheimerValero-Cuevas} we have chosen fixed parameter values \be \label{eq:pvals_pitch} p_1=2.639, \qquad p_2=3.3, \qquad p_3=106.512, \qquad p_4=385. \ee The experiment in \cite{VenkadesanGuckenheimerValero-Cuevas} asked participants to slowly compress the spring so that it does not buckle but also comes as close as possible to the pitchfork bifurcation. In Figure \ref{fig:fig13} this corresponds to moving along the stable equilibrium branch $\{(\theta,F_s)\in\R^2:F_s<3.3\}$. The experimental data do contain quite a bit of noise so that it is very reasonable to consider the system \be \label{eq:Madu} \begin{array}{lcl} dx&=&\frac1\epsilon\left[p_1(y-p_2) x+p_3x^3-p_4x^5\right]ds+\frac{\sigma}{\sqrt\epsilon}F(y)dW,\\ dy&=& 1~ds. \end{array} \ee The deterministic critical manifold of \eqref{eq:Madu} is $C_0=\{(x,y)\in\R^2:p_1(y-p_2) x+p_3x^3-p_4x^5=0\}$. We focus on the trivial branch $C_0^*=\{x=0\}$ and the attracting subset $C_0^a:=C_0^*\cap \{y<3.3\}$. 
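On the nontrivial branches of $C_0$ the equilibrium condition is a quadratic in $x^2$, so the branch point and the fold of Figure \ref{fig:fig13} follow from the quadratic formula; the short computation below uses the rounded values \eqref{eq:pvals_pitch} and is only as accurate as those parameters.

```python
import numpy as np

p1, p2, p3, p4 = 2.639, 3.3, 106.512, 385.0

# Nontrivial equilibria satisfy p1*(y-p2) + p3*x**2 - p4*x**4 = 0,
# a quadratic in x**2 with roots
#   x**2 = (p3 +- sqrt(p3**2 + 4*p4*p1*(y - p2))) / (2*p4).
y_bp = p2                                  # branch point: trivial branch x=0 loses stability
y_lp = p2 - p3**2 / (4.0 * p4 * p1)        # fold: the discriminant vanishes
x_lp = np.sqrt(p3 / (2.0 * p4))            # double root in x at the fold

# Residual of the full equilibrium equation at the fold point:
resid = p1 * (y_lp - p2) * x_lp + p3 * x_lp**3 - p4 * x_lp**5
print(y_bp, y_lp, x_lp, resid)
```

The vanishing residual confirms that $(x_{LP},y_{LP})$ lies on the critical manifold, and the fold sits below the branch point as in the bifurcation diagram.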
In the previous applications we usually assumed that $F(y)=\text{const.}$, which corresponds to additive noise. For the spring compression experiment this does not seem reasonable since participants could try to minimize the noisy fluctuations once they are very close to the subcritical pitchfork bifurcation; in fact, they know that a noise-induced critical transition could occur before the bifurcation point. Figure \ref{fig:fig14}(b)-(d) shows sample paths for different types of noise \be \label{eq:pitch_noise} F(y)=1,\qquad F(y)=\sqrt{y_c-y},\qquad F(y)=y_c-y. \ee We used the same realization for $dW$ for all three paths. It can already be observed that we have three different behaviors (``increase, constant, decay'') for the variance $V=\text{Var}(x(y))$. Figure \ref{fig:fig15} confirms this behavior as it shows the average variance over 100 sample paths for the different types of noise given in \eqref{eq:pitch_noise}. We can calculate from Theorem \ref{thm:CK1} that to leading order in the approach towards the pitchfork, but not in a small neighbourhood near it, we have the scaling laws \benn \text{Var}(x(y))=\cO_y^*\left(\frac{N(y)}{y_c-y}\right)=\cO_y^*\left(\frac{F^2(y)}{y_c-y}\right)=\left\{\begin{array}{lcl} \cO_y^*\left(\frac{1}{y_c-y}\right) & & \text{if $F(y)=1$,}\\ \cO_y^*\left(1\right) & & \text{if $F(y)=\sqrt{y_c-y}$,}\\ \cO_y^*\left(y_c-y\right) & & \text{if $F(y)=y_c-y$.}\\ \end{array}\right. \eenn This explains precisely what is shown in Figure \ref{fig:fig15} and shows that multiplicative noise can yield a wide variety of different early-warning signals or even no visible trend of the variance near a critical transition. Hence we can conjecture that balancing/controlling objects near an instability involves suitable noisy perturbations and the quick processing of a time series history to generate the appropriate control.
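The three qualitative behaviors can be reproduced with a short ensemble simulation of \eqref{eq:Madu}; here we compare ensemble variances at a single value of $y$ instead of a sliding-window estimate, and the step size, path count and stopping point are illustrative choices.

```python
import numpy as np

# Euler-Maruyama ensemble for the pitchfork SDE with the parameter
# values from (eq:pvals_pitch) and the three noise choices (eq:pitch_noise).
p1, p2, p3, p4 = 2.639, 3.3, 106.512, 385.0
eps, sigma, yc = 0.005, 0.01, 3.3
dt, paths = 1e-3, 200
rng = np.random.default_rng(2)

noises = {"F=1": lambda y: 1.0,
          "F=sqrt": lambda y: np.sqrt(yc - y),
          "F=lin": lambda y: yc - y}

final_var = {}
for name, F in noises.items():
    x = np.zeros(paths)
    y = 2.0
    while y < 3.0:                          # stop well before the transition at yc = 3.3
        drift = (p1 * (y - p2) * x + p3 * x**3 - p4 * x**5) / eps
        x = (x + drift * dt
             + (sigma / np.sqrt(eps)) * F(y) * np.sqrt(dt) * rng.normal(size=paths))
        y += dt
    final_var[name] = np.var(x)             # ensemble variance at y = 3.0
print(final_var)
```

The ensemble variances come out ordered as predicted by the scaling laws: additive noise gives the largest fluctuations, the square-root scaling an intermediate level, and the linear scaling the smallest.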
\section{Discussion and Outlook} \label{sec:conclusions} This paper has only started to develop a mathematical framework for critical transitions and prediction. Here we briefly outline the main steps and how this framework can be extended to address future problems. The first part of this paper, motivated by Definition \ref{defn:ct}, only covers the singular limit $\epsilon=0$, $\sigma=0$. We derive slow flow conditions to reach a critical transition and record the relevant linearizations to develop stochastic scaling laws. Although this is the most precise starting point, one could consider extensions. In fact, the sample paths viewpoint of Definition \ref{defn:ct} naturally extends. Let \benn \gamma_{\epsilon,\sigma}=\gamma_{\epsilon,\sigma}(t):[0,T]\ra \R^{m+n},\qquad \gamma_{\epsilon,\sigma}(0)=\gamma(0)=(x(0),y(0)) \eenn be a sample path of \eqref{eq:gen_SDE}. The first deterministic extension is to consider $\gamma_{0,0}$ but remove the requirement from Definition \ref{defn:ct} that the transition point $p$ is normally hyperbolic and to change (C1) so that a candidate $\gamma_{0,0}(t_{j-1},t_j)$ can lie in any part of the critical manifold. This allows for canard orbits and delay as shown in Figure \ref{fig:fig4}(b). As an example consider the pitchfork bifurcation \eqref{eq:nf_pitchfork} with $y(0)<0$; then the point $(x,y)=(0,\min(-y(0),y_b))$ becomes a critical transition where $y_b>0$ is the buffer point \cite{Neishtadt1,Neishtadt2}. For delays and canards the problem of critical transitions becomes \emph{global} in at least \emph{two} ways: \begin{itemize} \item[(G1)] The initial condition matters to determine which points are critical transitions. \item[(G2)] The global distance between critical manifolds becomes relevant. \end{itemize} To understand (G2) consider the supercritical pitchfork bifurcation \eqref{eq:nf_pitchfork}. Depending on the initial condition there may be a jump at $p=(x_p,y_p)$ for $0<y_p\ll1$ or $0<y_p=1$.
The length of the fast segment to the next attracting critical manifold $y=x^2$ from $p$ is $\sqrt{y_p}$; see Figure \ref{fig:fig4}. Applications clearly \emph{require} a case distinction between $\sqrt{y_p}\ll1$, which is usually not viewed as critical, and $\sqrt{y_p}=1$, which should probably be called critical. Hence an extension to canards must \emph{append} a global distance measure to the sample path space, {e.g.}~the minimum or maximum distance from the transition point to the fast subsystem attractor; see Figure \ref{fig:fig4}(b) where canards with or without head usually yield two different distances. Since this paper is entirely restricted to a \emph{local} theory we do not discuss this aspect further. \begin{figure}[htbp] \centering \psfrag{x}{$x$} \psfrag{xl}{\scriptsize{$x$}} \psfrag{y}{$y$} \psfrag{g1}{$\gamma_{\epsilon,0}$} \psfrag{g2}{$\gamma_{\epsilon,\sigma}$} \psfrag{g3}{$\gamma_{0,0}$} \psfrag{g4}{$\gamma_{0,\sigma}$} \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{C0}{$C_0$} \includegraphics[width=0.9\textwidth]{./fig4.eps} \caption{\label{fig:fig4}Illustration of possible extensions to Definition \ref{defn:ct}. (a) Phase space for \eqref{eq:gen_SDE} with $f(x,y)=yx-x^3$, $F(x,y)=1$ and $g(x,y)=1$. The critical manifold $C_0$ (grey) and two sample paths $\gamma_{\epsilon,0}$ and $\gamma_{\epsilon,\sigma}$ (black) for $\epsilon=0.01=\sigma$ with initial condition $(x(0),y(0))=(0.9,-0.9)$ are shown. (b) Phase space for \eqref{eq:gen_SDE} with $f(x,y)=y-\frac{x^3}{3}-x$, $g(x,y)=1-x$ and $F(x,y)=1$ with a non-generic fold at $(x,y)=(1,-2/3)$. Again we show two sample paths (black) and the critical manifold (grey). Note that the path $\gamma_{0,\sigma}$ for $\sigma=0.5$ cannot drift in $y$ but will switch, on exponentially long time scales, between the two attracting branches of the critical manifold.
The inset shows a time series for this path on a subexponential time scale.} \end{figure} Another possible extension is to consider sample paths $\gamma_{\epsilon,0}$ for $0<\epsilon\ll1$; see Figure \ref{fig:fig4}(a). In this case, the extension can just be defined by requiring that $d_H(\gamma_{\epsilon,0},\gamma_{0,0})\ra 0$ as $\epsilon\ra0$, {i.e.}~by checking whether candidates that have a critical transition in the singular limit perturb. The perturbation results are known for the fold, pitchfork, transcritical and Hopf bifurcations \cite{KruSzm1,KruSzm3,KruSzm4,Neishtadt1}. Partial results are available for the Bogdanov-Takens bifurcation \cite{Chiba1}, work on the cusp \cite{BroerKaperKrupa} is in progress, and the remaining codimension-two problems are expected to be solvable with similar ideas. One could also add generic cases for higher-dimensional and non-minimal slow variables such as folded singularities in $\R^3$ \cite{SzmolyanWechselberger1}. The case $\gamma_{\epsilon,0}$ is primarily of mathematical interest since for $\gamma_{\epsilon,\sigma}$ and $\sigma>0$ a delay/canard effect is shortened substantially by noise in applications (see {e.g.}~Theorem 2.11 of \cite{BerglundGentz6}) as long as the noise is not exponentially small \cite{Sowers}. A typical delay time is of order $\sqrt{\epsilon|\ln\sigma|}$ so the local singular limit results are relevant; see also Figure \ref{fig:fig4}(a). However, there is a major open issue for applications we do not address here corresponding to transitions driven purely by noise or a combination of noise and bifurcations. The case $\gamma_{0,\sigma}$ for a sample path starting near an attracting critical manifold $C^{a1}_0$ is covered by the theory of large deviations \cite{FreidlinWentzell} and purely noise-induced transitions can occur to an attracting critical manifold $C^{a2}_0$; see Figure \ref{fig:fig4}(b) where the upper path will eventually escape.
Again, the sample path viewpoint is well-suited as we ask for an estimate of probabilities, {e.g.} \be \label{eq:prob_est} \P\left(\left[\inf_{[0,T]}t:d_H(\gamma_{0,\sigma}(t),C^{a2}_0)<\delta_2,d_H(\gamma_{0,\sigma}(0),C^{a1}_0)<\delta_1\right]>t^*\right) \ee for suitable small constants $\delta_{1,2}$ and a given time $t^*>0$. Hence one can again use paths and Definition \ref{defn:ct} as a basis but then has to add for each point on $C^{a1}_0$ a probabilistic description of how likely the escape is, which usually yields exponentially long time scales to escape. This is again a \emph{global} problem. For cases with one fast variable and $\epsilon=0$ it is often possible to obtain explicit solutions using Fokker-Planck equations, {e.g.}~see \cite{Gardiner,ArnoldSDE,LindnerSchimansky-Geier,ThompsonSieber2,KuehnCT1}.\\ \textit{Remark:} After the suggestion of Definition \ref{defn:ct} in \cite{KuehnCT1}, recent work of Ashwin et al.~\cite{AshwinWieczorekVitoloCox} suggested a related applied classification of critical transitions distinguishing between B-tipping ('bifurcation-induced'), N-tipping ('noise-induced') and R-tipping ('rate-induced'). Basically B-tipping aims to cover paths $\gamma_{\epsilon,0}$ for $\epsilon\ra 0$ and N-tipping considers paths $\gamma_{0,\sigma}$; it is currently work in progress to understand R-tipping better.\\ The most general case is to consider $\gamma_{\epsilon,\sigma}$ for $\sigma,\epsilon>0$ where noise-induced escape \emph{shortly before} a fast subsystem bifurcation point on \emph{non-exponential time scales} becomes relevant. One of the key goals of the mathematical framework presented in this paper was to also allow for a natural extension of the methods and definitions to this case. It is future work to combine the ideas from Definition \ref{defn:ct} by adding to it pathwise probability estimates of the form \eqref{eq:prob_est}.
This should yield the full mathematical framework based upon sample paths with all parameters: $\sigma>0$, $\epsilon>0$, distance to the next attractor and escape probability during $[0,T]$. For the local theory of codimension-one bifurcations several studies on various regimes with $\sigma,\epsilon>0$ near bifurcation points exist. Overall, the fold \cite{BerglundGentz8,Sowers}, pitchfork/transcritical \cite{Kuske,BerglundGentz6} and Hopf bifurcations \cite{BerglundGentz2,SuRubinTerman,BerglundGentz3} are quite well understood. One basic insight is that scaling regimes are identified under which noise-induced effects or deterministic drift dominate. Another important conclusion is the derivation of probabilistic estimates for certain distinct dynamical regimes to occur. For higher codimension phenomena not many results are known but see {e.g.}~\cite{BerglundGentzKuehn,BerglundLandon}. As far as the stochastic scaling laws for codimension-two cases considered in this paper are concerned, there does not seem to be any prior work in this direction.\\ The second contribution of this paper is to understand fluctuations and scaling laws of paths better before fast subsystem bifurcations to determine early-warning signs. In particular, leading-order scaling behaviour for covariance matrices has been derived. We have only covered the basic case of local bifurcations up to codimension two with white noise in the region (R1) with a suitable scaling of noise and time scale separation which makes early escapes unlikely. Large fluctuations before the bifurcation and scaling results near bifurcations are certainly not well-studied for all bifurcations up to codimension two. Early-warning signs for other types of noise (colored noise, shot/burst noise \cite{Gardiner}), for degenerate noise terms \cite{TouboulWainrib} and for more general stochastic processes ({e.g.}~L{\'{e}}vy processes \cite{Kallenberg,ImkellerPavlyukevich}) are interesting directions.
As before, sample paths and singular limits are still available, even for very general high-dimensional bifurcations and stochastic processes. Global bifurcations \cite{Kuznetsov} have not been considered and would be an interesting direction for future analysis. There is work in progress to understand these bifurcations and their warning signs in models as well as in a normal form setup. Another possible extension is early-warning signs for spatially-extended problems; see \cite{Dakosetal1,DonangeloFortDakosSchefferNes,Dakosetal2} for models from ecology. In this context, it is well-known that many classes of pattern-forming partial differential equations (PDEs) and stochastic partial differential equations (SPDEs) can be written as evolution equations with well-defined paths or stochastic sample paths \cite{Henry,DaPratoZabczyk}. Several relevant PDEs, such as excitable systems \cite{Nagumo,Barkley2} with diffusion, are often already in a natural fast-slow form. Presumably one should find many other interesting early-warning signs for spatial systems but these could also be more difficult to measure and apply in practical applications since much larger data sets have to be collected and analyzed; a typical area where this has already proved to be very difficult is epileptic seizures \cite{MormannAndzejakElgerLehnertz,MeiselKuehn}.\\ The third contribution of the current work is a collection of examples, several of them in application domains (epidemics, systems biology and biomechanics) where the new techniques for early-warning signs have not been considered. Furthermore, the examples provide illustrations of the theory and also show its limitations where prediction becomes impossible or misleading if one relies on the scaling of the variance.
There are many important directions for making the theory more applicable, {e.g.}~detailed statistical tests such as receiver operating characteristic curves \cite{HallerbergKantz,KuehnZschalerGross,BoettingerHastings}, analysis of limited data and its interpretation \cite{DitlevsenJohnsen}, linking critical transitions to experiments \cite{DrakeGriffen,Veraartetal}, desirable tipping points in applications and their control \cite{Jensen,KuehnNetworks}, as well as networks and deterministic metastability \cite{KuehnZschalerGross}.\\ \textbf{Acknowledgments:} I would like to thank Martin Zumsande for suggesting the model from systems biology in Section \ref{ssec:SysBio} and Thilo Gross for insightful discussions about network dynamics. I also would like to thank two anonymous referees and the editor for many helpful comments that helped to improve the manuscript. Part of this work was supported by the European Commission (EC/REA) via a Marie-Curie International Re-integration Grant. 
\section{Introduction and motivation} The Lauricella hypergeometric function $F_C^{(n)}$ of $n$ variables is defined by \begin{align*} &F_C^{(n)}(a,b;c_1,\dots, c_n;z_1, \dots, z_n) \\ =& \sum_{m_1, \dots, m_n\in \bold Z_{\geq 0}}\dfrac{(a,m_1+\cdots +m_n)(b,m_1+\cdots +m_n)z_1^{m_1}\cdots z_n^{m_n}} {(c_1,m_1)\cdots (c_n,m_n)m_1!\cdots m_n!}, \end{align*} and has the following integral expression (\cite{G}): $$ (\text{const.})\int \prod_{k=1}^n t_k^{-c_k}\cdot (1-\sum_{k=1}^n t_k)^{\sum c_k-a-n}\cdot (1-\sum_{k=1}^n\dfrac{z_k}{t_k})^{-b} dt_1\cdots dt_n. $$ Using the Cayley trick \cite{GKZ}, the function $F_C^{(n)}$ is locally holomorphic at $(z_i)\in (\bold C^{\times})^{n}$ if the toric hypersurface $$ \{((t_i)_i,\lambda)\in (\bold C^{\times})^{n+1} \mid\lambda(1-\sum_{k=1}^n t_k)+(1-\sum_{k=1}^n \dfrac{z_k}{t_k})=0\} $$ is non-degenerate with respect to its Newton polyhedron. For the non-degeneracy condition, see \cite{T}. The non-degeneracy condition for a proper face of the Newton polyhedron is equal to the smoothness of the varieties \begin{align*} &\{\lambda(1+\sum_{i\in I}t_i)+\sum_{j\in J}\dfrac{z_j}{t_j}=0\}, \\ &\{\lambda(\sum_{i\in I}t_i)+1+\sum_{j\in J}\dfrac{z_j}{t_j}=0\} \end{align*} for $I, J\subset \{1, \dots, n\}$ with $I\cap J=\emptyset$. Therefore the non-degeneracy condition is equivalent to the smoothness on each open face. Using the Jacobian criterion, the singular locus is defined by \begin{align*} \begin{cases} 1-\sum_{k=1}^n t_k=0, \\ \lambda-\dfrac{z_i}{t_i^2}=0, \\ \lambda(1-\sum_{k=1}^n t_k)+(1-\sum_{k=1}^n \dfrac{z_k}{t_k})=0. \end{cases} \end{align*} By setting $\mu^2=\lambda$, $x_i^2=z_i$, and using the first and the second equations, $\mu$ is obtained by $$t_i\mu=\epsilon_ix_i,\quad \mu-\sum_{i=1}^n \epsilon_ix_i= \mu(1-\sum_{i=1}^n t_i)=0. $$ Here $\epsilon_i\in \{-1,1\}$. 
Again, using the first and the second equations, the third equation becomes \begin{align*} 0&=1-\sum_{i=1}^n \dfrac{x_i^2}{t_i} =1-\lambda \sum_{i=1}^n t_i=1-\lambda=(1+\mu)(1-\mu) \\ & =(1+\sum_{i=1}^n \epsilon_ix_i) (1-\sum_{i=1}^n \epsilon_ix_i). \end{align*} Therefore under the $\mu_2^n$-covering map $$ \bold C^n=\{(x_1,\dots, x_n)\}\ni (x_i)_i \mapsto (x_i^2)_i=(z_i)_i\in \bold C^n=\{(z_1,\dots, z_n)\}, $$ the pull back $Y_n$ of $\overline{Y_n}$ is given by $$ Y_n=\{(x_i)_i\mid \prod_{k=1}^n x_k \prod_{\epsilon_i \in \{-1,1\}}(1-\sum_{i=1}^n \epsilon_ix_i)\neq 0\}. $$ See also \cite{HT}. Therefore $\overline{Y_n}\subset \{(z_1, \dots, z_n)\}$ is isomorphic to $Y_n/\mu_2^n$. In the study of the monodromy of hypergeometric functions of type $F_C$, it is a basic problem to give an expression of the fundamental group of $\overline{Y_n}$. Generators and relations of the fundamental group for $n=2$ and $n=3$ are determined in \cite{GK}. We prove the following presentation of the fundamental group, which was conjectured in \cite{GK}. \begin{theorem}[Main Theorem, see Theorem \ref{main theorem} and Proposition \ref{restatement of commutativity rel}] \label{main theorem introduction} The fundamental group of $\overline{Y_n}$ is generated by elements $\Gamma_0, \Gamma_1,\dots, \Gamma_n$ with the relations \begin{align*} [\Gamma_i,\Gamma_j]=1, \quad (1\leq i,j\leq n), \quad (\Gamma_0\Gamma_i)^2=(\Gamma_i\Gamma_0)^2, \quad (1\leq i\leq n), \end{align*} and \begin{equation*} [M(I)^{-1}\Gamma_0 M(I),M(J)^{-1}\Gamma_0 M(J)]=1 \end{equation*} for all subsets $I$ and $J$ of $\{1, \dots, n\}$ satisfying $I\cap J=\emptyset, I\neq \emptyset, J\neq \emptyset$ and $\#I+\#J\leq n-1$. Here we set $M(I)=\prod_{i\in I}\Gamma_i$. \end{theorem} For the proof of this theorem, we use a cell complex constructed by Salvetti \cite{S}, which is homotopy equivalent to the complement of a hyperplane arrangement in $\bold C^N$ and stable under a group action. The author is grateful for discussions with Y. 
Goto and K. Matsumoto in ``Workshop on Special Varieties in Tambara, 2017'', in Tambara International Seminar House. \section{Recall of a result of Salvetti} We recall a construction of the $2$-skeleton of a cell complex which is homotopy equivalent to the complement of a real hyperplane arrangement. A finite set $\Cal H=\{H_i\}_{i\in I}$ of complex hyperplanes in $\bold C^n$ is called a hyperplane arrangement. In this paper, we are interested in the topological space $$ Y=Y(\Cal H)=\bold C^n-\cup_{i\in I}H_i. $$ A hyperplane arrangement is called a real hyperplane arrangement if the defining equations of the $H_i$ are defined over $\bold R$ for all $i\in I$. For a real hyperplane arrangement $\Cal H$, we set $H_{i,\bold R}=H_i\cap \bold R^n$. The set $\{H_{i,\bold R}\}_{i\in I}$ is denoted by $\Cal H_{\bold R}$. A subset of $\bold R^n$ which can be obtained as the intersection of a finite number of the $H_{i,\bold R}$'s is simply called a linear subset of $\Cal H_{\bold R}$. As a special case, the total space $\bold R^n$ is an $n$-dimensional linear subset. Let $L$ be an $i$-dimensional linear subset of $\Cal H_\bold R$. A connected component of the complement in $L$ of the union of the proper linear subsets of $L$ is called an $i$-chamber of $\Cal H_{\bold R}$, and the set of $i$-chambers is denoted by $\Ch_i=\Ch_i(\Cal H_\bold R)$. Each $i$-chamber is a convex set. We define the dual cell complex of $\Cal H_{\bold R}$ as follows. For each $i$-dimensional chamber $\sigma$, we choose a vertex $v_{\sigma}$ in the interior of $\sigma$. The set of $0$-cells of the dual cell complex is given by $D_{\sigma}=v_{\sigma}$, where $\sigma$ runs through the $n$-chambers. Let $\tau$ be an $(n-1)$-chamber. Then there exist exactly two $n$-chambers $\tau_1$ and $\tau_2$ such that $\overline{\tau_i}\supset \overline{\tau}$ for $i=1,2$. Here $\overline{\tau}$ is the closure of $\tau$ in $\bold R^n$. We define the $1$-cell $D_{\tau}$ as the union of the segments $\Delta(v_{\tau_1},v_{\tau})$ and $\Delta(v_{\tau_2},v_{\tau})$. 
We continue this procedure to define the $2$-cell $D_{\sigma}$ attached to an $(n-2)$-dimensional chamber as follows. Let $\sigma_1, \sigma_2$ and $\sigma$ be $n$-, $(n-1)$- and $(n-2)$-chambers such that $$ \overline{\sigma_1}\supset \overline{\sigma_2}\supset \overline{\sigma}. $$ A sequence $F=F(\sigma_1,\sigma_2,\sigma)$ as above is called a (descending) flag of length $3$. The triangle $\Delta(v_{\sigma_1},v_{\sigma_2},v_{\sigma})$ is called the dual flag $F^*$ of $F$. The union of the dual flags containing $v_{\sigma}$ is called the $2$-dimensional dual cell $D_{\sigma}$ of $\sigma$. \begin{figure}[htbp] \includegraphics[width=4.5cm]{dual-cell.eps} \caption{Dual cell} \label{dual cell} \end{figure} We recall the construction of the $2$-skeleton $X_2$ of the cell complex $X$ after Salvetti \cite{S}, which is homotopy equivalent to the space $Y=Y(\Cal H)$. The set $C_0(X)$ of $0$-cells in $X$ is the set $\{\widetilde{D_\sigma}\}_{\sigma\in \Ch_n}$ of copies $\widetilde{D_{\sigma}}$ of $D_{\sigma}$. The set $C_1(X)$ of $1$-cells consists of $\widetilde D_{\sigma,\tau}$ for $\sigma \in \Ch_n, \tau \in \Ch_{n-1}$ such that $\overline{\sigma}\supset \overline{\tau}$. The $n$-chamber lying on the opposite side of $\sigma$ with respect to the $(n-1)$-chamber $\tau$ is denoted by $\rho_{\tau}(\sigma)$. The attaching map $\partial \widetilde D_{\sigma,\tau}\to X_0$ is given by connecting the two points corresponding to $\sigma$ and $\rho_{\tau}(\sigma)$. The $1$-cell $\widetilde D_{\sigma,\tau}$ is called an arrow from $\sigma$ to $\rho_{\tau}(\sigma)$. The composite of several arrows compatible with the directions is called an oriented path. The set $C_2(X)$ of $2$-cells consists of $\widetilde D_{\sigma,\tau}$ for $\sigma \in \Ch_n, \tau \in \Ch_{n-2}$ such that $\overline{\sigma}\supset \overline{\tau}$. 
The $n$-chamber lying on the opposite side of $\sigma$ with respect to the $(n-2)$-chamber $\tau$ is denoted by $\rho_{\tau}(\sigma)$ and the vertex in $\rho_{\tau}(\sigma)$ is denoted by $\rho_{\tau}(v_\sigma)$ (see Figure \ref{bounding disc}). \begin{figure}[htbp] \includegraphics[width=4cm]{attaching-disc.eps} \caption{Relations} \label{bounding disc} \end{figure} Then there exist exactly two shortest paths from $v_\sigma$ to $\rho_{\tau}(v_\sigma)$. The attaching map $\partial \widetilde D_{\sigma,\tau}\to X_1$ is given by bounding the two shortest paths from $v_\sigma$ to $\rho_{\tau}(v_\sigma)$ (see Figure \ref{bounding disc}). \begin{proposition}[Salvetti] The natural inclusion $X_2 \to Y$ induces an isomorphism of fundamental groups $$ \pi_1(X_2) \to \pi_1(Y). $$ As a consequence, the fundamental groupoid is generated by $\widetilde D_{\tau_1,\tau_2}$ for $\tau_1\in \Ch_n, \tau_2\in \Ch_{n-1}, \overline{\tau_1}\supset\overline{\tau_2}$, and the relations are given by $\widetilde D_{\sigma_1,\sigma_2}$ for $\sigma_1\in \Ch_n, \sigma_2\in \Ch_{n-2}, \overline{\sigma_1}\supset\overline{\sigma_2}$. \end{proposition} \section{$F_C$-hyperplane arrangement} \subsection{The arrangement $\Cal H_n$} For an element $\epsilon=(\epsilon_1,\dots, \epsilon_n)\in \{-1,1\}^n$, we define a hyperplane $H_{\epsilon}$ by $$ H_{\epsilon}:\epsilon_1x_1+\cdots +\epsilon_nx_n=1. $$ We define the $n$-dimensional $F_C$-arrangement $\Cal H_n$ as the union of the set of hyperplanes $\{H_\epsilon\}$ $(\epsilon\in \{-1,1\}^n)$ and the set of coordinate hyperplanes $$ L_i:x_i=0, \quad (i=1, \dots, n). $$ The following proposition is used to classify the $(n-2)$-chambers in $\Cal H_n$. \begin{proposition} \label{codimension 2 linear space} \begin{enumerate} \item Let $\epsilon,\epsilon'$ be elements in $\{-1,1\}^n$ such that $\#\{i\mid \epsilon_i\neq \epsilon'_i\}\geq 2$ and set $$ H_{\epsilon,\epsilon'}=H_{\epsilon}\cap H_{\epsilon'}. 
$$ A hyperplane in $\Cal H_n$ containing $H_{\epsilon,\epsilon'}$ is equal to $H_{\epsilon}$ or $H_{\epsilon'}$. \item For an element $\epsilon$ in $\{-1,1\}^n$ and an integer $i$ with $1\leq i\leq n$, we set $$ H_{\epsilon,i}=H_{\epsilon}\cap L_i. $$ A hyperplane in $\Cal H_n$ containing $H_{\epsilon,i}$ is equal to $L_i$, $H_{\epsilon}$ or $H_{g^{(i)}(\epsilon)}$. Here \begin{equation} \label{reflection gi} g^{(i)}(\epsilon_1, \dots, \epsilon_n)=(\epsilon_1,\dots,\overset{i}{-\epsilon_i},\dots, \epsilon_n). \end{equation} \item Let $i,j$ be distinct integers such that $1\leq i,j \leq n$ and set $$ H_{i,j}=L_i\cap L_j. $$ A hyperplane in $\Cal H_n$ containing $H_{i,j}$ is equal to $L_i$ or $L_j$. \end{enumerate} \end{proposition} \subsection{Group action} On the space $Y$, the group $\mu_2^n=\{1,-1\}^n$ acts by $$ g:\bold C^n\to \bold C^n:(x_1, \dots, x_n)\mapsto (g_1x_1,\dots, g_nx_n) $$ for $g=(g_1, \dots, g_n)\in \mu_2^n$. The group $\mu_2^n$ acts on the sets $\Ch_i$. We can choose the set of vertices $\{v_{\sigma}\}_{\sigma\in \Ch_i}$ so that it is stable under the action of $\mu_2^n$. \begin{lemma} The action of the group $\mu_2^n$ on the topological space $X_2$ is cellular and fixed-point free. \end{lemma} \begin{proof} The group acts freely on $\Ch_n$. Therefore it acts freely on the sets of $0$-, $1$- and $2$-cells. \end{proof} \subsection{Cell complex for the quotient space} We consider the topological space $\overline{X_2}=X_2/\mu_2^n$. Then $\overline{X_2}$ is a cell complex. We have the following proposition. \begin{proposition} The natural map $\pi_1(\overline{X_2}) \to \pi_1(Y/\mu_2^n)$ is an isomorphism. \end{proposition} We describe the cell complex $\overline{X_2}$ in this subsection. We set $$ \bold R_{>0}=\{x\in \bold R\mid x> 0\},\quad \bold R_{\geq 0}=\{x\in \bold R\mid x\geq 0\}. $$ The subset of $i$-chambers in $\Ch_i$ contained in $\bold R^n_{\geq 0}$ is denoted by $\overline{\Ch}_{i}$. 
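For $n=2$ the chamber structure of $\Cal H_2$ in the open positive quadrant can be enumerated numerically. The following sketch (illustrative only, not part of the paper) samples a grid and records the sign vector of each point with respect to the four hyperplanes $H_\epsilon$; the number of hyperplanes $H_\epsilon$ separating a chamber from the origin is exactly the height introduced below.

```python
import itertools

# the four hyperplanes H_eps : e1*x1 + e2*x2 = 1 for eps in {-1,1}^2
eps_list = list(itertools.product((1, -1), repeat=2))

def sign_vector(p):
    # sign of e1*x1 + e2*x2 - 1 for each hyperplane H_eps
    return tuple(1 if e[0] * p[0] + e[1] * p[1] > 1 else -1 for e in eps_list)

# sample the open positive quadrant on a fine grid
pts = [(0.01 + 0.02 * i, 0.01 + 0.02 * j)
       for i in range(200) for j in range(200)]
chambers = {sign_vector(p) for p in pts}

# height of a chamber = number of hyperplanes H_eps separating it from the
# origin, i.e. the number of +1 entries in its sign vector
heights = sorted(sv.count(1) for sv in chambers)
print(len(chambers), heights)
```

Only the hyperplanes $x_1+x_2=1$, $x_1-x_2=1$ and $-x_1+x_2=1$ meet the open quadrant, which is cut into four chambers of heights $0,1,2,2$.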
The set $C_0(\overline{X})$ of $0$-cells in $\overline{X}$ is identified with $\{\widetilde D_\sigma\mid \sigma\in \overline{\Ch}_n\}$. There are the following two kinds of $1$-cells in $\overline{X}$: the image of $\widetilde D_{\sigma,\tau}$, $(\sigma \in \overline{\Ch}_{n}, \tau \in \overline{\Ch}_{n-1})$ such that \begin{enumerate} \item(type 1, non-closed one cell) $\tau\subset H_{\epsilon}$, $(\epsilon\in \{-1,1\}^n)$. \item(type 2, closed one cell) $\tau\subset L_i$, $(1\leq i \leq n)$. \end{enumerate} There are three kinds of $2$-cells in $\overline{X_2}$: the image of $\widetilde D_{\sigma,\tau}$, $(\sigma \in \overline{\Ch}_{n}, \tau \in \overline{\Ch}_{n-2})$ such that \begin{enumerate} \item(type 1, interior disc) $\tau\subset H_{\epsilon}\cap H_{\epsilon'}$, \item(type 2, boundary disc) $\tau\subset H_{\epsilon}\cap L_i$, \item(type 3, coordinate disc) $\tau\subset L_i\cap L_j$. \end{enumerate} \begin{definition} Let $\sigma$ be an element in $\overline{\Ch}_n$. We define the height $h(v_{\sigma})$ of $v_{\sigma}=\widetilde D_{\sigma}$ as the number of hyperplanes of the form $H_{\epsilon}$ $(\epsilon \in \{-1,1\}^n)$ separating $\bold 0$ and $v_{\sigma}$. The number $h(v_{\sigma})$ is also denoted by $h(\sigma)$. \end{definition} \begin{proposition} \begin{enumerate} \item An interior disc (type 1) is attached to four $1$-cells and contains four $0$-cells. \item A boundary disc (type 2) is attached to six $1$-cells and contains three $0$-cells. \item A coordinate disc (type 3) is attached to two $1$-cells and contains one $0$-cell. \end{enumerate} \end{proposition} We define the spanning complex, which is a slight generalization of a spanning tree. A $1$-cell $\widetilde D_{\sigma,\tau}$ is called a spanning $1$-cell if it is of type 1 and $h(\sigma)+1=h(\rho_{\tau}(\sigma))$, i.e. $\rho_{\tau}(\sigma)$ is farther from the origin than $\sigma$. 
A $2$-cell $\widetilde D_{\sigma,\tau}$ is called a spanning $2$-cell if it is of type 1 and $h(\sigma)$ is the smallest among the vertices contained in $D_{\tau}$. The union of the spanning $1$- and $2$-cells forms a subcomplex $\Cal S$ of $\overline{X_2}$. The complex $\Cal S$ is called the spanning complex of $\overline{X_2}$. \begin{lemma} The spanning complex $\Cal S$ is simply connected. \end{lemma} \begin{proof} It is identified with a $2$-skeleton of the dual cell complex of $\bold R^n_{>0}$, which is simply connected. \end{proof} We define $\overline{X_2}^{(s)}$ as the space obtained by contracting the subcomplex $\Cal S\subset \overline{X_2}$ to a point $s$. By the above lemma, we have \begin{proposition} The natural map $$ \pi_1(\overline{X_2}) \to \pi_1(\overline{X_2}^{(s)},s) $$ is an isomorphism. \end{proposition} \begin{definition} A $1$-cell $\widetilde D_{\sigma,\tau}$ in $\overline{X_2}$ is called a generator if it is \begin{enumerate} \item of type 1 and not spanning, or \item of type 2. \end{enumerate} \end{definition} A generator defines a closed path in $\overline{X_2}^{(s)}$. The set of generators generates the group $\pi_1(\overline{X_2}^{(s)})$. \subsection{Relations for type 1 and type 2} \subsubsection{Type 1 relation} First, we consider a type 1 $2$-cell $\widetilde D_{\sigma,\tau}$ in $\overline{X}$ with $\tau\subset H_{\epsilon}\cap H_{\epsilon'}$. \begin{figure}[htb] \includegraphics[width=5cm]{relation02.eps} \caption{Type 1 relation} \label{type 1 rel} \end{figure} The arrows $\overline{a}, \overline{b}, \overline{c}$ and $\overline{d}$ are spanning $1$-cells and $a,b,c$ and $d$ define elements in $\pi_1(\overline{X_2}^{(s)},s)$. Their relations are given as $$ a=c,\quad b=d,\quad ab=ba. $$ \subsubsection{Type 2 relation} Next we consider a type 2 $2$-cell $\widetilde D_{\sigma,\tau}$ in $\overline{X}$ as in Figure \ref{type 2 rel}. 
\begin{figure}[htb] \includegraphics[width=6cm]{relation01.eps} \caption{Type 2 relation} \label{type 2 rel} \end{figure} The arrows $\overline{b}$ and $\overline{c}$ are spanning $1$-cells and are reduced to one point in $\overline{X_2}^{(s)}$. We consider an $(n-2)$-chamber $\tau$ contained in $L_i$. The relations beginning from $v_1=v_{\sigma_1}, v_2=v_{\sigma_2}$ and $v_3=v_{\sigma_3}$ are the following: \begin{align*} &\partial\widetilde D_{\sigma_1,\tau}:a=d, \\ &\partial\widetilde D_{\sigma_2,\tau}:ba=dc, \\ &\partial\widetilde D_{\sigma_3,\tau}:cba=dcb. \end{align*} We can easily check the following proposition. \begin{proposition} The above relations are equivalent to \begin{equation} \label{type 2 braid relation} d=a, \quad c=a^{-1}ba, \quad (ab)^2=(ba)^2. \end{equation} \end{proposition} We consider the above situation and set $\epsilon=(\epsilon_1, \dots, \epsilon_n)$ and $\epsilon'=(\epsilon'_1, \dots, \epsilon'_n)$. Here $\epsilon'=g^{(i)}(\epsilon)$, where $g^{(i)}$ is defined as in (\ref{reflection gi}). By the definition of the height, $v_1$ is the closest vertex to the origin. Therefore we have $\epsilon_j=\epsilon'_j$ if $j\neq i$, and $\epsilon_i=1$ and $\epsilon'_i=-1$. \subsection{Definition of $\gamma_i$ and their relations} In this subsection, we define $\gamma_i$ and study their relations. \begin{definition} We define $\gamma_i=\widetilde D_{\sigma_0,\tau_i}$ for $i=0,1,\dots,n$, where \begin{align*} & \sigma_0=\{(x_j)\in \bold R^n\mid x_j>0 \quad (1\leq j\leq n), \quad \sum_j x_j<1\}, \\ & \tau_0=\{(x_j)\in \bold R^n\mid x_j>0 \quad (1\leq j\leq n), \quad \sum_j x_j=1\}, \\ & \tau_i=\{(x_j)\in \bold R^n\mid x_j>0 \quad (1\leq j\leq n,\ j\neq i), \quad \sum_j x_j<1, \quad x_i=0\}. \end{align*} Actually $\sigma_0$ is a chamber, since if $x=(x_1, \dots, x_n)\in \sigma_0$ and $\epsilon\neq (1, \dots, 1)$ then $$ \sum_i \epsilon_i x_i <\sum_i x_i <1, $$ and the point $x$ is not contained in $H_{\epsilon,\bold R}$. 
\end{definition} By the previous subsection, we have \begin{align} \label{fundamental 1} &[\gamma_i,\gamma_j]=1, \quad (1\leq i,j\leq n), \\ \nonumber &(\gamma_0\gamma_i)^2=(\gamma_i\gamma_0)^2, \quad (1\leq i\leq n). \end{align} \begin{proposition} \label{equalities in pi 1} \begin{enumerate} \item Under the notation of Figure 1, we have $a=\gamma_i$ in $\pi_1(\overline{X_2}^{(s)})$. \item Let $\tau_1$ and $\tau_2$ be two elements in $\overline{\Ch}_{n-1}$ contained in a common hyperplane $H_{\epsilon,\bold R}$. Suppose that the $1$-cells $\widetilde D_{\sigma_1,\tau_1}$ and $\widetilde D_{\sigma_2,\tau_2}$ are generators. Then the paths obtained from them are homotopic to each other. These paths define a common element in $\pi_1(\overline{X_2}^{(s)},s)$, which is denoted by $\gamma_{\epsilon}$. \item For $\epsilon \in \{-1,1\}^n, \epsilon\neq (-1, \dots, -1)$, we set $$ S(\epsilon)=\{i \mid 1\leq i\leq n, \epsilon_i=-1 \}, \quad m_{\epsilon}=\prod_{i\in S(\epsilon)}\gamma_i. $$ Then we have \begin{equation} \label{expression of inner path} \gamma_{\epsilon}={m_\epsilon}^{-1}\gamma_0 m_\epsilon. \end{equation} \end{enumerate} \end{proposition} \begin{proof} (1) We use the first relation of (\ref{type 2 braid relation}) iteratively and obtain the statement. (2) This follows from the relations obtained from type 1 $2$-cells. (3) Let $\epsilon$ be an element in $\{-1,1\}^n$ with $\epsilon\neq (-1, \dots, -1)$. We set $S(\epsilon)=\{i_1,\dots, i_k\}$. We consider a chain $\epsilon^{(0)},\dots, \epsilon^{(k)} \in \{-1,1\}^n$ defined by $$ \epsilon^{(0)}=(1,\dots, 1), \epsilon^{(1)}=g^{(i_1)}(\epsilon^{(0)}), \epsilon^{(2)}=g^{(i_2)}(\epsilon^{(1)}), \dots, \epsilon^{(k)}=g^{(i_k)}(\epsilon^{(k-1)}). $$ Then $\epsilon^{(k)}=\epsilon$. 
\begin{lemma} \label{desending ind lemme} $H_{\epsilon^{(j)}}\cap \bold R^n_{>0}\neq \emptyset$, $(j=0, \dots, k)$ and $$ H_{\epsilon^{(j)}}\cap H_{\epsilon^{(j-1)}}\cap \{(x_l)\in \bold R^n\mid x_l>0 \text{ for }l\neq i_j \}\neq \emptyset \quad (j=1, \dots, k). $$ \end{lemma} \begin{proof}[Proof of Lemma \ref{desending ind lemme}] By descending induction, it is enough to prove the lemma for $j=k$. We set $\epsilon^{(k)}=(\epsilon_1, \dots, \epsilon_n)$. The first statement holds since $\epsilon_j=1$ for some $j$. We prove the second statement. Since $\epsilon_{i_k}=-1$, there exists $j\neq i_k$ such that $\epsilon_j=1$. Therefore the system of equations $$ x_{i_k}=0, \quad \epsilon_1 x_1+\cdots +\widehat{\epsilon_{i_k}x_{i_k}}+\cdots +\epsilon_nx_n=1 $$ has a solution satisfying $x_l>0$ for $l\neq i_k$. \end{proof} By applying the second relation in (\ref{type 2 braid relation}) iteratively, we obtain statement (3). \end{proof} Using Proposition \ref{equalities in pi 1} (3) and the relations of type 1, we have the following theorem. \begin{theorem} \label{satisfy type 2 relation in pi1} We have \begin{equation} \label{fundamental 2} [{m_\epsilon}^{-1}\gamma_{0}{m_\epsilon}, {m_{\epsilon'}}^{-1}\gamma_{0}{m_{\epsilon'}}]=1 \end{equation} for $\epsilon, \epsilon' \in \{-1,1\}^n$ with $H_{\epsilon,\bold R}\cap H_{\epsilon',\bold R}\cap \bold R_{>0}^n\neq \emptyset$. \end{theorem} \section{Fundamental relation} \subsection{Main theorem} In this section, we prove the following theorem. \begin{theorem} \label{main theorem} The relations (\ref{fundamental 1}) and (\ref{fundamental 2}) are fundamental relations for $\pi_1(\overline{X_2}^{(s)},s)$ with generators $\gamma_0$ and $\gamma_i$ $(1\leq i\leq n)$. 
\end{theorem} We define $G$ as the group generated by $\Gamma_0$ and $\Gamma_i$ $(1\leq i \leq n)$ with the relations \begin{align} \label{model fundamental 1} &[\Gamma_i,\Gamma_j]=1, \quad (1\leq i,j\leq n), \\ \nonumber &(\Gamma_0\Gamma_i)^2=(\Gamma_i\Gamma_0)^2 \quad (1\leq i\leq n), \end{align} and \begin{equation} \label{model fundamental 2} [{M_\epsilon}^{-1}\Gamma_{0}{M_\epsilon}, {M_{\epsilon'}}^{-1}\Gamma_{0}{M_{\epsilon'}}]=1 \end{equation} for $H_{\epsilon,\bold R}\cap H_{\epsilon',\bold R}\cap \bold R_{>0}^n\neq \emptyset$. Here we set $M_{\epsilon}=\prod_{i\in S(\epsilon)}\Gamma_i$. We define group homomorphisms $$ \varphi:G\to \pi_1(\overline{X_2}^{(s)},s) \text{ and } \psi:\pi_1(\overline{X_2}^{(s)},s)\to G, $$ which are inverse to each other. \subsubsection{The definition of $\varphi$} We define $\varphi$ by $\varphi(\Gamma_i)=\gamma_i$ for $i=0,1,\dots, n$. We check that the fundamental relations of $G$ are satisfied in $\pi_1(\overline{X_2}^{(s)},s)$. The relation (\ref{model fundamental 1}) is satisfied by the definition of $\varphi$. By the definition of $\varphi$, we have $$ \varphi({M_\epsilon}^{-1}\Gamma_{0}{M_\epsilon}) ={m_\epsilon}^{-1}\gamma_{0}{m_\epsilon}. $$ Thus the relation (\ref{model fundamental 2}) is satisfied by Theorem \ref{satisfy type 2 relation in pi1}. \subsubsection{The definition of $\psi$} The group $\pi_1(\overline{X_2}^{(s)},s)$ is generated by the type 1 non-spanning arrows $\gamma_{\epsilon,\tau}$ and the type 2 generators $\gamma_{i,\tau}$, with the type 1, type 2 and type 3 relations. We set $$ \psi(\gamma_{\epsilon,\tau})=M_{\epsilon}^{-1}\Gamma_0M_{\epsilon},\quad \psi(\gamma_{i,\tau})=\Gamma_i. $$ The type 1 and type 3 relations are satisfied by the fundamental relations of $G$. The first relation of (\ref{type 2 braid relation}) is easy to check. The second relation is obtained by the relation between $\epsilon$ and $g^{(i)}(\epsilon)$. 
We check the third relation of (\ref{type 2 braid relation}) by using $\psi(a)=\Gamma_i, \psi(b)=M_{\epsilon}^{-1}\Gamma_0M_{\epsilon}$. Since $\Gamma_i$ and $M_{\epsilon}$ commute in $G$, we have \begin{align*} \psi(abab)= \Gamma_i\cdot M_{\epsilon}^{-1}\Gamma_0M_{\epsilon}\cdot \Gamma_i\cdot M_{\epsilon}^{-1}\Gamma_0M_{\epsilon} = M_{\epsilon}^{-1}\Gamma_i\Gamma_0 \Gamma_i\Gamma_0M_{\epsilon} \end{align*} and \begin{align*} \psi(baba)= M_{\epsilon}^{-1}\Gamma_0M_{\epsilon}\cdot \Gamma_i\cdot M_{\epsilon}^{-1}\Gamma_0M_{\epsilon}\cdot \Gamma_i =M_{\epsilon}^{-1}\Gamma_0 \Gamma_i\Gamma_0\Gamma_iM_{\epsilon}. \end{align*} Thus we have the equality $\psi(abab)=\psi(baba)$, and the homomorphism $\psi$ is well defined. \begin{proof}[Proof of Theorem \ref{main theorem}] By the definition of $\varphi$ and $\psi$, we see that the homomorphisms $\psi$ and $\varphi$ are inverse to each other. \end{proof} \subsection{Simplification} We modify the relation (\ref{model fundamental 2}) to obtain the simpler form cited in \cite{GK}. By Theorem \ref{main theorem} and the following proposition, we get Main Theorem \ref{main theorem introduction}. \begin{proposition} \label{restatement of commutativity rel} For a subset $I$ of $\{1, \dots, n\}$ we set $M(I)=\prod_{i\in I}\Gamma_i$. Under the relations (\ref{model fundamental 1}), the relation (\ref{model fundamental 2}) for $H_{\epsilon,\bold R}\cap H_{\epsilon',\bold R}\cap \bold R_{>0}^n\neq \emptyset$ is equivalent to the following set of relations: \begin{quote} \begin{equation} \label{simplification} [M(I)^{-1}\Gamma_0 M(I),M(J)^{-1}\Gamma_0 M(J)]=1 \end{equation} for all $I,J$ satisfying $I\cap J=\emptyset, I\neq \emptyset, J\neq \emptyset$ and $\#I+\#J\leq n-1$. \end{quote} \end{proposition} \begin{proof} Throughout this proof we assume the commutativity of $\Gamma_1, \dots, \Gamma_n$. First we assume the condition (\ref{simplification}) and prove the relation (\ref{model fundamental 2}). 
We set $K=S(\epsilon)\cap S(\epsilon')$. By the definition of $M_{\epsilon}$ and the commutativity of the $\Gamma_i$, the condition (\ref{model fundamental 2}) can be rewritten as \begin{equation} \label{reduction from cell complex} [{M^*_\epsilon}^{-1}\Gamma_{0}{M^*_\epsilon}, {M^*_{\epsilon'}}^{-1}\Gamma_{0}{M^*_{\epsilon'}}]=1, \end{equation} where $M_{\epsilon}^*=\prod_{i\in S(\epsilon)-K}\Gamma_i$ and $M_{\epsilon'}^*=\prod_{i\in S(\epsilon')-K}\Gamma_i$. This is one of the conditions in (\ref{simplification}), obtained by setting $I=S(\epsilon)-K$ and $J=S(\epsilon')-K$. We check that $I$ and $J$ satisfy the required conditions. The condition $I\cap J=\emptyset$ is clear. If $I=\emptyset$, then $S(\epsilon)\subset S(\epsilon')$, and this contradicts the condition $H_{\epsilon,\bold R}\cap H_{\epsilon',\bold R}\cap \bold R_{>0}^n\neq \emptyset$. If $\#I+\#J=n$, then $\epsilon'=-\epsilon$. This also contradicts the condition on $\epsilon$ and $\epsilon'$, since $H_{\epsilon,\bold R}\cap H_{-\epsilon,\bold R}=\emptyset$. Next we assume the condition (\ref{model fundamental 2}) and prove the relation (\ref{simplification}). Let $I$ and $J$ be subsets of $\{1, \dots, n\}$ satisfying the condition of (\ref{simplification}). We define $\epsilon=(\epsilon_1,\cdots, \epsilon_n)$ and $\epsilon'=(\epsilon'_1,\cdots, \epsilon'_n)$ by $$ \epsilon_i=\begin{cases} 1\quad (i\notin I), \\ -1\quad (i\in I), \end{cases}\quad \epsilon'_i=\begin{cases} 1\quad (i\notin J), \\ -1\quad (i\in J). \end{cases} $$ Then the relation (\ref{reduction from cell complex}) becomes the relation (\ref{simplification}). We check the condition $H_{\epsilon,\bold R}\cap H_{\epsilon',\bold R}\cap \bold R^n_{>0}\neq \emptyset$. We set $K=\{1, \dots, n\}-(I\cup J)$. Then we have $K\neq \emptyset$. 
The system of equations $$ \begin{cases} \sum_{i\notin I}x_i-\sum_{i\in I}x_i=1, \\ \sum_{j\notin J}x_j-\sum_{j\in J}x_j=1 \end{cases} $$ is equivalent to $$ \begin{cases} \sum_{i\in K}x_i=1, \\ \sum_{i\in I}x_i=\sum_{j\in J}x_j. \end{cases} $$ Thus it has a solution $x=(x_i)\in \bold R_{>0}^n$. \end{proof}
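As a quick numerical sanity check of the series defining $F_C^{(n)}$ in the introduction (illustrative only, not part of the paper; the helper names `poch` and `f_c` are ours), one can truncate the sum and test two classical degenerations: for $n=1$ and $c_1=b$ the series collapses to the binomial series $(1-z)^{-a}$, and setting $z_n=0$ reduces $F_C^{(n)}$ to $F_C^{(n-1)}$.

```python
import itertools
from math import factorial, prod

def poch(x, m):
    # Pochhammer symbol (x, m) = x (x + 1) ... (x + m - 1)
    return prod(x + i for i in range(m))

def f_c(a, b, c, z, N=40):
    # truncated Lauricella F_C series; c and z are tuples of length n
    total = 0.0
    for m in itertools.product(range(N), repeat=len(z)):
        s = sum(m)
        term = poch(a, s) * poch(b, s)
        for ci, mi, zi in zip(c, m, z):
            term *= zi ** mi / (poch(ci, mi) * factorial(mi))
        total += term
    return total

a, b, z = 0.5, 0.7, 0.2

# c_1 = b collapses the n = 1 series to the binomial series (1 - z)**(-a)
print(abs(f_c(a, b, (b,), (z,)) - (1.0 - z) ** (-a)))

# setting z_2 = 0 reduces F_C^{(2)} to F_C^{(1)} (the value of c_2 is irrelevant)
print(abs(f_c(a, b, (0.9, 1.3), (z, 0.0)) - f_c(a, b, (0.9,), (z,))))
```

Both printed differences are at the level of the truncation and rounding error, i.e. essentially zero.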
\section{Introduction} A partial order $(P;\leq)$ is called \emph{semilinear} iff for all $a,b \in P$ there exists $c \in P$ such that $a \leq c$ and $b \leq c$, and for every $a \in P$ the set $\{ b \in P : a \leq b\}$ is linearly ordered, that is, contains no incomparable pair of elements. Finite semilinear orders are closely related to rooted trees: the transitive closure of a tree (viewed as a directed graph with the edges oriented towards the root) is a semilinear order, and the transitive reduction of any finite semilinear order is a rooted tree. It follows from basic facts in model theory (e.g.~Theorem 8.2.3. in~\cite{Hodges}) that there exists a countable semilinear order $(\mathbb S_2;\leq)$ which is \emph{existentially closed} in the class of all countable semilinear orders, that is, for every embedding $e$ of $(\mathbb S_2;\leq)$ into a countable semilinear order $(P;\leq)$, every existential formula $\phi(x_1,\dots,x_n)$, and all $p_1,\dots,p_n \in \mathbb S_2$ such that $\phi(e(p_1),\dots,e(p_n))$ holds in $(P;\leq)$ we have that $\phi(p_1,\dots,p_n)$ holds in $(\mathbb S_2;\leq)$. We write $x < y$ for $(x \leq y \wedge x \neq y)$ and $x \perp y$ for $\neg (x \leq y) \wedge \neg (y \leq x)$, that is, for incomparability with respect to $\leq$. 
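The correspondence between rooted trees and finite semilinear orders described above can be checked mechanically. The following minimal sketch (the five-node tree is a hypothetical example of ours, not from the paper) takes the transitive closure of a tree with edges oriented towards the root and verifies both defining conditions of a semilinear order.

```python
# A small rooted tree given by its parent map; edges are oriented towards 'r'.
parent = {'a': 'c', 'b': 'c', 'c': 'r', 'd': 'r'}
nodes = set(parent) | set(parent.values())

def up_set(x):
    # {y : x <= y} = x together with all of its ancestors
    chain = [x]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

# transitive (and reflexive) closure of the tree
leq = {(x, y) for x in nodes for y in up_set(x)}

# Semilinearity, part 1: any two elements have a common upper bound
# (here the root works for every pair).
assert all(any((x, c) in leq and (y, c) in leq for c in nodes)
           for x in nodes for y in nodes)

# Semilinearity, part 2: every up-set {y : x <= y} is linearly ordered.
assert all((u, v) in leq or (v, u) in leq
           for x in nodes for u in up_set(x) for v in up_set(x))

print("transitive closure of the tree is a semilinear order")
```

The converse direction, recovering the tree as the transitive reduction, works analogously on finite orders.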
Clearly, $(\mathbb S_2;\leq)$ is \begin{itemize} \item \emph{dense}: for all $x,y \in \mathbb S_2$ such that $x < y$ there exists $z \in \mathbb S_2$ such that $x < z < y$; \item \emph{unbounded}: for every $x \in \mathbb S_2$ there are $y,z \in \mathbb S_2$ such that $y < x < z$; \item \emph{binary branching}: (a) for all $x,y \in \mathbb S_2$ such that $x < y$ there exists $u \in \mathbb S_2$ such that $u< y$ and $u \perp x$, and (b) for any three incomparable elements of $\mathbb S_2$ there is an element in $\mathbb S_2$ that is larger than two out of the three, and incomparable to the third; \item \emph{nice} (following terminology from~\cite{DrosteHollandMacpherson}): for every $x,y \in \mathbb S_2$ such that $x \perp y$ there exists $z \in \mathbb S_2$ such that $z > x$ and $z \perp y$. \item \emph{without joins}: for all $x,y,z \in \mathbb S_2$ with $x,y \leq z$ and $x,y$ incomparable, there exists a $u \in \mathbb S_2$ such that $x,y \leq u$ and $u < z$. \end{itemize} It can be shown by a back-and-forth argument (Proposition~\ref{prop:backnforth}) that all countable, dense, unbounded, nice, and binary branching semilinear orders without joins are isomorphic to $(\mathbb S_2;\leq)$. Since all these properties of $(\mathbb S_2;\leq)$ can be expressed by first-order sentences, it follows that $(\mathbb S_2;\leq)$ is \emph{$\omega$-categorical}: it is, up to isomorphism, the unique countable model of its first-order theory. It also follows from general principles that the first-order theory $T$ of $({\mathbb S}_2;\leq)$ is \emph{model-complete}, that is, embeddings between models of $T$ preserve all first-order formulas, and that $T$ is the \emph{model-companion} of the theory of semilinear orders; again, we refer to~\cite{Hodges} (Theorem 8.3.6). 
For $k \in \mathbb{N}$, a relational structure $\Delta$ is \emph{$k$-set-homogeneous} if whenever $A$ and $B$ are isomorphic $k$-element substructures of $\Delta$, there is an automorphism $g$ of $\Delta$ such that $g[A]=B$. In~\cite{Droste}, Droste studies $2$- and $3$-set-homogeneous semilinear orders. Of particular relevance here, Droste proved that $(\mathbb S_2;\leq)$ is the unique countably infinite, non-linear, $3$-set-homogeneous semilinear order (see Theorem 6.22 of~\cite{Droste}). The structure $(\mathbb S_2;\leq)$ plays an important role in the study of a natural class of \emph{constraint satisfaction problems (CSPs)} in theoretical computer science. CSPs from this class have been studied in artificial intelligence for qualitative reasoning about branching time~\cite{Duentsch,Hirsch,BroxvallJonsson}, and, independently, in computational linguistics~\cite{Cornell,BodirskyKutz} under the name \emph{tree description} or \emph{dominance} constraints. A \emph{reduct} of a relational structure $\Delta$ is a relational structure $\Gamma$ with the same domain as $\Delta$ such that every relation of $\Gamma$ has a first-order definition over $\Delta$ without parameters. All reducts of a countable $\omega$-categorical structure are again $\omega$-categorical~\cite{HodgesLong}. In this article we study the reducts of $(\mathbb S_2;\leq)$. Two structures $\Gamma$ and $\Gamma'$ with the same domain are called \emph{(first-order) interdefinable} when $\Gamma$ is a reduct of $\Gamma'$, and $\Gamma'$ is a reduct of $\Gamma$. We show that the reducts $\Gamma$ of $(\mathbb S_2;\leq)$ fall into three equivalence classes with respect to interdefinability: either $\Gamma$ is interdefinable with $(\mathbb S_2;=)$, with $(\mathbb S_2;\leq)$, or with $(\mathbb S_2;B)$, where $B$ is the ternary \emph{Betweenness relation}. 
The latter relation is defined by $$B(x,y,z) \; \Leftrightarrow \; (x < y < z) \vee (z < y < x) \vee (x < y \wedge y \perp z) \vee (z < y \wedge y \perp x) \; .$$ We also classify the \emph{model-complete cores} of the reducts of $(\mathbb S_2;\leq)$. A structure $\Gamma$ is called \emph{model-complete} iff its first-order theory is model-complete. A structure $\Delta$ is a \emph{core} iff all endomorphisms of $\Delta$ are embeddings. It is known that every $\omega$-categorical structure is \emph{homomorphically equivalent} to a model-complete core $\Delta$ (that is, there is a homomorphism from $\Gamma$ to $\Delta$ and vice versa; see~\cite{Cores-journal,BodHilsMartin}). The structure $\Delta$ is unique up to isomorphism, $\omega$-categorical, and called the \emph{model-complete core} of $\Gamma$. We show that for every reduct $\Gamma$ of $(\mathbb S_2;\leq)$, the model-complete core of $\Gamma$ is interdefinable with precisely one out of a list of ten structures (Corollary~\ref{cor:mc-cores}). The concept of model-complete cores is important for the aforementioned applications in constraint satisfaction, and implicitly used in complete complexity classifications for the CSPs of reducts of $({\mathbb Q};<)$ and the CSPs of reducts of the random graph~\cite{tcsps-journal,BodPin-Schaefer-both}; also see~\cite{Bodirsky-HDR}. Our results have applications in this context which will be described in Section~\ref{sect:csp}. There are alternative formulations of our results in the language of permutation groups and transformation monoids, which also plays an important role in the proofs. By the theorem of Ryll-Nardzewski, two $\omega$-categorical structures are first-order interdefinable if and only if they have the same automorphisms. 
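To illustrate one direction of this correspondence: since $B$ is first-order definable over $(\mathbb S_2;\leq)$, every automorphism of $(\mathbb S_2;\leq)$ preserves $B$, so that
$$\Aut(\mathbb S_2;\leq) \; \subseteq \; \Aut(\mathbb S_2;B) \; \subseteq \; \Aut(\mathbb S_2;=) \; = \; \Sym(\mathbb S_2) \; .$$
Both inclusions are proper; for the first, the surjective rerootings of Corollary~\ref{cor:rerooting} below provide automorphisms of $(\mathbb S_2;B)$ that do not preserve $\leq$.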
Our result about the reducts of $(\mathbb S_2;\leq)$ up to first-order interdefinability is equivalent to the statement that there are precisely three permutation groups that contain the automorphism group of $(\mathbb S_2;\leq)$ and that are closed in the full symmetric group $\Sym(\mathbb S_2)$ with respect to the \emph{topology of pointwise convergence}, i.e., the product topology on $(\mathbb S_2)^{\mathbb S_2}$ where $\mathbb S_2$ is taken to be discrete. The link to transformation monoids comes from the fact that a countable $\omega$-categorical structure $\Gamma$ is model-complete if and only if $\Aut(\Gamma)$ is dense in the monoid $\Emb(\Gamma)$ of self-embeddings of $\Gamma$, i.e., the closure $\overline{\Aut(\Gamma)}$ of $\Aut(\Gamma)$ in $(\mathbb S_2)^{\mathbb S_2}$ equals $\Emb(\Gamma)$~\cite{RandomMinOps}. Consequently, $\Gamma$ is a model-complete core if and only if $\Aut(\Gamma)$ is dense in the endomorphism monoid $\End(\Gamma)$ of $\Gamma$, i.e., $\overline{\Aut(\Gamma)}=\End(\Gamma)$. The proof method for showing our results relies on an analysis of the endomorphism monoids of reducts of $(\mathbb S_2;\leq)$. For that, we use a Ramsey-type statement for semilattices, due to Leeb~\cite{Lee-vorlesungen-ueber-pascaltheorie} (cf.~also~\cite{GR-some-recent-developments}). By results from~\cite{BP-reductsRamsey,BPT-decidability-of-definability}, that statement implies that if a reduct of $(\mathbb S_2;\leq)$ has an endomorphism that does not preserve a relation $R$, then it also has an endomorphism that does not preserve $R$ and that behaves \emph{canonically} in a formal sense defined in Section~\ref{sect:prelims}. Canonicity allows us to break the argument into finitely many cases. We also mention a conjecture of Thomas, which states that every countable homogeneous structure $\Delta$ with a finite relational signature has only finitely many reducts up to interdefinability~\cite{RandomReducts}. 
By \emph{homogeneous} we mean here that every isomorphism between finite substructures of $\Delta$ can be extended to an automorphism of $\Delta$. Thomas' conjecture has been confirmed for various fundamental homogeneous structures, with particular activity in recent years~\cite{Cameron5,RandomReducts,Thomas96,Bennett-thesis,JunkerZiegler,Pon11,Poset-Reducts,42,LinmanPinsker}. The structure $(\mathbb S_2;\leq)$ is not homogeneous, but interdefinable with a homogeneous structure with a finite relational signature, so it falls into the scope of Thomas' conjecture. \ignore{ To prove Thomas' conjecture, it is necessary and sufficient to prove the following three statements. \begin{itemize} \item All reducts $\Gamma$ of $\Delta$ are interdefinable with a structure that has a finite relational signature (note that this is weaker than requiring that $\Gamma$ is \emph{homogeneous} in a finite relational signature, which is false; see the discussion in~\cite{RandomReducts}). Indeed, this is equivalent to requiring that there are no infinite ascending chains of reducts of $\Delta$: if $\Gamma_1,\Gamma_2,\dots$ would be such an infinite ascending sequence, then the reduct of $\Delta$ whose relations are precisely the relations that appear in one of the structures $\Gamma_i$, is \emph{not} interdefinable with a structure that has a finite relational signature. Conversely, from any reduct $(D;R_1,R_2,\dots)$ of $\Gamma$ that is not interdefinable with a structure that has a finite relational signature, we can obtain an infinite ascending chain $(D;R_1)$, $(D;R_1,R_2), (D;R_1,R_2,R_3), \dots$ of reducts. \item For every reduct $\Gamma$ of $\Delta$ there are finitely many closed permutation groups that contain $\Aut(\Gamma)$ and that are inclusion-wise \emph{minimal} with this property. \item There are no infinite ascending chains of closed permutation groups that contain $\Aut(\Delta)$. \end{itemize} All three steps are open. 
The step that potentially might be attacked in general with the method we use here is step number two. What \emph{can} be shown with this method is that there are finitely many minimal closed transformation monoids $M$ that contain $\End(\Delta)$ (see~\cite{BP-reductsRamsey}); assuming step one, this even holds for all reducts $\Gamma$ of $\Delta$. The difficulty in proving step number two is precisely the transfer from the existence of certain functions in $\End(\Gamma)$ back to the \emph{automorphisms} of $\Gamma$. } \section{Main results} \label{sect:results} To state our classification result, we need to introduce some homogeneous structures that appear in it. We have mentioned that $(\mathbb S_2;\leq)$ is not homogeneous, but interdefinable with a homogeneous structure with finite relational signature. Indeed, to obtain a homogeneous structure we can add a single first-order definable ternary relation $C$ to $(\mathbb S_2;\leq)$, defined as \begin{align} C(z,xy) \quad :\Leftrightarrow \quad x \perp y \; \wedge \;\exists u (x < u \wedge y < u \wedge u \perp z) \; . \label{C} \end{align} See Figure~\ref{fig:c}. \begin{figure} \begin{center} \includegraphics[scale=.4]{C.pdf} \end{center} \caption{Illustration of $C(z,xy)$.} \label{fig:c} \end{figure} We omit the comma between the last two arguments of $C$ on purpose, since it increases readability, pointing out the symmetry $\forall x,y,z \; (C(z,xy) \Leftrightarrow C(z,yx))$. By a back-and-forth argument (Proposition~\ref{prop:backnforth}) one can show that $(\mathbb S_2;\leq,C)$ is homogeneous, and clearly $(\mathbb S_2;\leq)$ and $(\mathbb S_2;\leq,C)$ are interdefinable. We write $(\mathbb L_2;C)$ for the structure induced in $(\mathbb S_2;C)$ by any maximal antichain of $(\mathbb S_2;\leq)$; the reducts of $(\mathbb L_2;C)$, the \emph{homogeneous binary branching $C$-relation on leaves}, were classified in~\cite{BodJonsPham}. 
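Note that for distinct elements $x,y$ of an antichain the conjunct $x \perp y$ in (\ref{C}) holds automatically, so that on a maximal antichain the relation $C$ is simply given by
$$C(z,xy) \; \Leftrightarrow \; \exists u \, (x < u \, \wedge \, y < u \, \wedge \, u \perp z) \; ;$$
informally, $C(z,xy)$ holds for leaves $x,y,z$ iff $x$ and $y$ have a common upper bound that is incomparable to $z$, i.e., iff $x$ and $y$ branch off from $z$ before they branch off from each other.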
We mention in passing that the structure $(\mathbb L_2;C')$, where $C'(x,y,z) \Leftrightarrow \big(C(x,yz) \vee (y=z \wedge x \neq y)\big)$, is a so-called \emph{C-relation}; we refer to~\cite{AdelekeNeumann} for the definition since we will not make further use of it. It is known that two $\omega$-categorical structures have the same endomorphisms if and only if they are existentially positively interdefinable, that is, if and only if each relation in one of the structures can be defined by an existential positive formula in the other structure~\cite{RandomMinOps}. We can now state one of our main results. \begin{theorem}\label{thm:4case} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$. Then at least one of the following cases applies. \begin{itemize} \item[(1)] $\End(\Gamma)$ contains a function whose range induces a chain in $(\mathbb S_2;\leq)$, and $\Gamma$ is homomorphically equivalent to a reduct of the order of the rationals $(\mathbb Q; <)$. \item[(2)] $\End(\Gamma)$ contains a function whose range induces an antichain in $(\mathbb S_2;\leq)$, and $\Gamma$ is homomorphically equivalent to a reduct of $(\mathbb L_2; C)$. \item[(3)] $\End(\Gamma)$ equals $\overline{\Aut(\mathbb S_2; B)}$; equivalently, $\Gamma$ is existentially positively interdefinable with $(\mathbb S_2;B)$. \item[(4)] $\End(\Gamma)$ equals $\overline{\Aut(\mathbb S_2; \leq)}$; equivalently, $\Gamma$ is existentially positively interdefinable with $(\mathbb S_2;<,\bot)$. \end{itemize} \end{theorem} The reducts of $(\mathbb L_2;C)$ have been classified in~\cite{BodJonsPham}. Each reduct of $(\mathbb L_2;C)$ is interdefinable with either \begin{itemize} \item $(\mathbb L_2;C)$ itself, \item $(\mathbb L_2;D)$ where $D(x,y,u,v)$ has the first-order definition $(C(u,xy) \wedge C(v,xy)) \vee (C(x,uv) \wedge C(y,uv))$ over $(\mathbb L_2;C)$, or \item $(\mathbb L_2;=)$. \end{itemize} The reducts of $(\mathbb Q;<)$ have been classified in~\cite{Cameron5}. 
To describe them, it is convenient to write $\overrightarrow{x_1\cdots x_n}$ whenever $x_1,\ldots,x_n\in \mathbb{Q}$ are such that $x_1<\cdots<x_n$. Each reduct of $(\mathbb Q;<)$ is interdefinable with either \begin{itemize} \item the dense linear order $(\mathbb{Q};<)$ itself, \item the structure $(\mathbb{Q}; \Betw)$, where $\Betw$ is the ternary relation $$\big \{(x,y,z) \in {\mathbb Q}^3 : \overrightarrow{xyz} \, \vee \, \overrightarrow{zyx} \big \} \, ,$$ \item the structure $(\mathbb{Q}; \Cyc)$, where $\Cyc$ is the ternary relation $$\big \{(x,y,z) : \overrightarrow{xyz} \vee \overrightarrow{yzx} \vee \overrightarrow{zxy} \big \} \, ,$$ \item the structure $(\mathbb{Q}; \Sep)$, where $\Sep$ is the 4-ary relation \begin{align*} \big \{(x_1,y_1,x_2,y_2) : \; & \overrightarrow{x_1x_2y_1y_2} \vee \overrightarrow{x_1y_2y_1x_2} \vee \overrightarrow{y_1x_2x_1y_2} \vee \overrightarrow{y_1y_2x_1x_2} \\ \vee \; & \overrightarrow{x_2x_1y_2y_1} \vee \overrightarrow{x_2y_1y_2x_1} \vee \overrightarrow{y_2x_1x_2y_1} \vee \overrightarrow{y_2y_1x_2x_1} \big \} \, , \text{ or } \end{align*} \item the structure $(\mathbb{Q};=)$. \end{itemize} \begin{corollary}\label{cor:mc-cores} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$. Then its model-complete core has only one element, or is isomorphic to a structure which is interdefinable with either $(\mathbb S_2;<,\bot)$, $(\mathbb S_2;B)$, $(\mathbb L_2;C)$, $(\mathbb L_2;D)$, $(\mathbb Q;<)$, $(\mathbb Q;\Betw)$, $(\mathbb Q;\Cyc)$, $(\mathbb Q;\Sep)$, or $(\mathbb Q;\neq)$. \end{corollary} \begin{theorem}\label{thm:groups} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$. Then $\Gamma$ is first-order interdefinable with either $(\mathbb S_2; \leq)$, $(\mathbb S_2; B)$, or $(\mathbb S_2; =)$. Equivalently, $\Aut(\Gamma)$ equals either $\Aut(\mathbb S_2; \leq)$, $\Aut(\mathbb S_2; B)$, or $\Aut(\mathbb S_2; =)$. 
\end{theorem} The permutation groups on $\mathbb S_2$ that are closed within $\Sym(\mathbb S_2)$ are precisely the automorphism groups of structures with domain $\mathbb S_2$. Moreover, the closed permutation groups on $\mathbb S_2$ that contain $\Aut(\mathbb S_2;\leq)$ are precisely the automorphism groups of reducts of $(\mathbb S_2;\leq)$. Therefore, the following is an immediate consequence of Theorem~\ref{thm:groups}. \begin{corollary} The closed subgroups of $\Sym(\mathbb S_2)$ containing $\Aut(\mathbb S_2; \leq)$ are precisely the permutation groups $\Aut(\mathbb S_2;\leq)$, $\Aut(\mathbb S_2; B)$, and $\Aut(\mathbb S_2; =)$. \end{corollary} \section{Preliminaries} \label{sect:prelims} For lack of a reference for the first-order axiomatization of $(\mathbb S_2;\leq)$ that we mentioned in the introduction, we prove it here for the convenience of the reader. We also prove the claim about the homogeneity of $(\mathbb S_2;\leq,C)$ made in Section~\ref{sect:results}. We then review the Ramsey properties of $(\mathbb S_2;\leq)$ after the expansion with a suitable linear order in Section~\ref{sect:ramsey}. The Ramsey property will be used in our proof via the concept of \emph{canonical functions}; they will be introduced in Section~\ref{sect:canonical}. \subsection{Homogeneity of $({\mathbb S}_2;\leq,C)$} \label{sect:s2} We show that all countable semilinear orders that are dense, unbounded, binary branching, nice, and without joins are isomorphic. As a by-product, we establish the homogeneity of $(\mathbb S_2;\leq,C)$. We write $U < V$ for $U,V \subseteq \mathbb S_2$ when $u<v$ holds for all $u \in U$ and $v \in V$. The notation $U \leq V$ and $U \perp V$ is defined analogously. We also write $u < V$ for $\{u\} < V$ and $u \perp V$ for $\{u\} \perp V$. \begin{lemma} \label{lem:key} Let $(P;\leq)$ be a dense, nice, and binary branching semilinear order without joins. 
Let $U,V,W \subset P$ be finite subsets such that $U$ is non-empty, $U < V$, and $C(w,u_1u_2)$ for all $w \in W$ and all incomparable $u_1,u_2 \in U$. Then there exists an $x \in P$ such that $U < x$, $x<V$, and $x \perp W$. \end{lemma} \begin{proof} For $p,q \in V \cup W$, define $p \lhd q$ if \begin{itemize} \item $p < q$, \item $p \perp q$ and $u < p$ for all $u \in U$, or \item $C(q,pu)$ for all $u \in U$. \end{itemize} Note that $\lhd$ is transitive and irreflexive, and hence a strict partial order on $V \cup W$. Let $m \in V \cup W$ be a minimal element with respect to $\lhd$. We prove the statement by induction on the number of elements in $U$ that are maximal with respect to $\leq$. Since $U$ is non-empty and finite, there exists such a maximal element $u_0$. If there is just one such element, we distinguish whether $m \in V$ or $m \in W$. If $m \in V$ then we choose $x \in P$ such that $u_0 < x < m$; such an $x$ exists by density of $(P;\leq)$. If $m \in W$ then we choose $x \in P$ such that $u_0 < x$ and $x \perp m$; such an $x$ exists since $(P;\leq)$ is nice. Now consider the case that there are two maximal elements $u_0,u_1 \in U$. Again we distinguish two cases. If $m \in V$ then there exists an element $x \in P$ such that $u_0,u_1 < x$ and $x<m$, since $(P;\leq)$ is without joins. Otherwise, $m \in W$. Since we have $C(m,u_0u_1)$ by assumption, there exists an element $x \in P$ such that $x > u_0,u_1$ and $x \perp m$, and this element $x$ satisfies the required conditions. Now suppose that there are at least three maximal elements $u_0,u_1,u_2$ in $U$. Since $(P;\leq)$ is binary branching, there is an $s \in P$ larger than two out of $u_0,u_1,u_2$ and incomparable to the third; without loss of generality say that $s > u_0$, $s > u_1$, and $s \perp u_2$. Then we apply the inductive assumption for the set $U' := (U \cup \{s\}) \setminus \{u_0,u_1\}$ instead of $U$, which has one fewer maximal element. 
The element $x \in P$ that we obtain for $U'$ also satisfies the requirements that we have for $U$. \end{proof} \begin{proposition}\label{prop:backnforth} All countable semilinear orders that are dense, unbounded, binary branching, nice, and without joins are isomorphic to $({\mathbb S}_2;\leq)$. The structure $({\mathbb S}_2;\leq,C)$ is homogeneous. \end{proposition} \begin{proof} Let $(P;\leq)$ and $(Q;\leq)$ be two semilinear orders with the properties given in the statement, and let $\Gamma$ and $\Delta$ be the expansions of those structures with the signature $\{\leq,C\}$ where $C$ denotes the relation as defined in (\ref{C}) at the beginning of Section~\ref{sect:results}. We fix enumerations $(p_i)_{i \in \omega}$ and $(q_j)_{j \in \omega}$ of $P$ and $Q$, respectively. Assume that $D \subset P$ is a finite subset of $P$ and that $\rho \colon D \rightarrow E$ is an isomorphism between the substructure induced by $D$ in $\Gamma$ and the substructure induced by $E$ in $\Delta$. Let $k \in \omega$ be smallest such that $p_k \in P \setminus D$. To go forth we need to extend the domain of the partial isomorphism $\rho$ to $D \cup \{p_k\}$. Let $D_> := \{a \in D : a > p_k\}$ and $D_< := \{a \in D : a < p_k\}$ and $D_\perp := \{a \in D : a \perp p_k\}$. In each case we describe the element $q \in Q$ such that $\rho(p_k) := q$ defines an extension of $\rho$ which is a partial isomorphism between $(P;\leq,C)$ and $(Q;\leq,C)$. \paragraph{\textbf{Case 1: $D_<$ is empty.}} Suppose first that there is an element $v \in D_>$ such that $v \perp w$ for all $w \in D_\perp$. In this case we can choose $q \in Q$ such that $q < \rho(v)$ by the unboundedness of $(Q;\leq)$. Otherwise, there exists an element $w_0 \in D_\perp$ such that $w_0 < v$ for all $v \in D_>$. From all those, choose $w_0$ such that $C(w,p_kw_0)$ or $C(p_k,w_0w)$ for all other $w \in D_\perp$. 
By Lemma~\ref{lem:key} applied to $U := \{\rho(u) \mid u \in D_\perp \text{ and } C(p_k,uw_0)\}$, $V := \rho[D_>]$, and $W := \rho[D_\perp] \setminus U$ we obtain an element $x \in Q$ such that $U < x$, $x < V$, and $x \perp W$. Another application of this lemma gives us an element $x' \in Q$ with the same properties and $x' < x$. Since $(Q;\leq)$ is binary branching there exists an element $q \in Q$ with $q < x$ and $q \perp x'$, and this element has the desired properties. \paragraph{\textbf{Case 2: $D_<$ is non-empty.}} We apply Lemma~\ref{lem:key} to $U:= \rho[D_<]$, $V := \rho[D_>]$, and $W := \rho[D_\perp]$. The element $x$ from the statement of Lemma~\ref{lem:key} has the properties that we require for $q$. This allows us to take the step going forth. To take the step going back, we need to extend the range of $\rho$ to $E \cup \{q_k\}$ where $k$ is smallest such that $q_k \in Q \setminus E$. The argument is analogous to the argument given above for going forth. This concludes the back-and-forth and the result follows. \end{proof} \subsection{The convex linear Ramsey extension} \label{sect:ramsey} Let $(S;\leq)$ be a semilinear order. A linear order $\prec$ on $S$ is called a \emph{convex linear extension of $\leq$} iff the following three conditions hold; here, the relations $<$, $B$, and $C$ are defined over $(S;\leq)$ as they were defined over $(\mathbb S_2;\leq)$. \begin{itemize} \item $\prec$ is an extension, i.e., $x< y$ implies $x\prec y$ for all $x,y\in S$; \item for all $x,y,z\in S$, if $B(x,y,z)$, then $y$ also lies between $x$ and $z$ with respect to $\prec$, i.e., $(x\prec y\prec z)\vee (z\prec y\prec x)$; \item for all $x,y,z\in S$ we have that $C(x,yz)$ implies that $x$ cannot lie between $y$ and $z$ with respect to $\prec$, i.e., $(x \prec y\wedge x\prec z)\vee (y\prec x\wedge z\prec x)$. 
\end{itemize} For finite semilinear orders $(S;\leq)$, the convex linear extensions are precisely those linear orders obtained by first defining $\prec$ arbitrarily on the second-largest elements of $(S;\leq)$, then ordering the elements just below those elements, and so on. Subject to these choices, $\prec$ is uniquely determined by the above convexity extension rules. Using Fra\"{i}ss\'{e}'s theorem~\cite{HodgesLong} one can show that in the case of $(\mathbb S_2;\leq)$, there exists a convex linear extension $\prec$ of $\leq$ such that $(\mathbb S_2;\leq,C,\prec)$ is homogeneous and such that $(\mathbb S_2;\leq,\prec)$ is \emph{universal} in the sense that it contains all isomorphism types of convex linear extensions of finite semilinear orders; this extension is unique in the sense that all expansions of $(\mathbb S_2;\leq,C)$ by a convex linear extension with the above properties are isomorphic. We henceforth fix any such extension $\prec$. It follows from Lindstr\"{o}m's Test (\cite{HodgesLong} Theorem 8.3.4) that $(\mathbb S_2;\leq,C,\prec)$ is model complete. The structure $(\mathbb S_2;\leq,C,\prec)$ is combinatorially well-behaved in the following sense. For structures $\Sigma, \Pi$ in the same language, we write $\mix \Sigma \Pi$ for the set of all embeddings of $\Pi$ into $\Sigma$. \begin{defn}\label{def:ramseystructure} A countable homogeneous relational structure $\Delta$ is called a \emph{Ramsey structure} iff for all finite substructures $\Omega$ of $\Delta$, all substructures $\Gamma$ of $\Omega$, and all $\chi\colon\mix \Delta \Gamma\rightarrow 2$ there exists an $e_1 \in \mix \Delta \Omega$ such that $\chi$ is constant on $e_1 \circ \mix {\Omega} \Gamma$. An $\omega$-categorical structure is called \emph{Ramsey} if its (homogeneous) expansion by all first-order definable relations is Ramsey. 
\end{defn} The following theorem is a special case of a Ramsey-type statement for semilinearly ordered semilattices due to Leeb~\cite{Lee-vorlesungen-ueber-pascaltheorie} (also see~\cite{GR-some-recent-developments}, page 276). A \emph{semilinearly ordered semilattice} $(S;\vee,\leq)$ is a semilinear order $(S;\leq)$ which is closed under the binary function $\vee$, the \emph{join} function, such that for all $x$ and $y$, $x \vee y$ is the least upper bound of $\{x,y\}$ with respect to $\leq$. If $\prec$ is a convex linear extension of $\leq$, then we call $(S;\vee,\leq,\prec)$ a convex linear extension of the semilinearly ordered semilattice $(S;\vee,\leq)$. By Fra\"{i}ss\'{e}'s Theorem~\cite{HodgesLong} and a back-and-forth argument, there is a countably infinite homogeneous structure $(\mathbb{T};\vee,\leq,\prec)$ which is the Fra\"{i}ss\'{e} limit of the class of finite, semilinearly ordered semilattices with a convex linear extension. \begin{theorem}[Leeb] \label{thm:Leeb} $(\mathbb{T};\vee,\leq,\prec)$ is a Ramsey structure. \end{theorem} \begin{corollary} $(\mathbb S_2;\leq,C,\prec)$ is a Ramsey structure. \end{corollary} \begin{proof} Let $(\mathbb{T};\leq,C,\prec)$ be the structure obtained from $(\mathbb{T};\vee,\leq,\prec)$ by restricting to the relations $\leq$ and $\prec$ and making a definitional expansion with the ternary relation $C$ (with formal definition as in (\ref{C}) in Section~\ref{sect:results}). Note that $\Aut(\mathbb{T};\vee,\leq,\prec)= \Aut(\mathbb{T};\leq,C,\prec)$. Theorem \ref{thm:Leeb} above then implies that $(\mathbb{T};\leq,C,\prec)$ is a Ramsey structure. Every finite substructure of $(\mathbb S_2;\leq,C,\prec)$ is isomorphic to a substructure of $(\mathbb{T};\leq,C,\prec)$ and vice versa, so they have the same age. As $(\mathbb S_2;\leq,C,\prec)$ is model-complete, it is the model companion of $(\mathbb{T};\leq,C,\prec)$. Using Theorem 3.15 of \cite{BodirskyRamsey} we conclude that $(\mathbb S_2;\leq,C,\prec)$ is a Ramsey structure. 
\end{proof} \subsection{Canonical functions} \label{sect:canonical} The fact that $(\mathbb S_2;\leq,C,\prec)$ is a relational homogeneous Ramsey structure implies that endomorphism monoids of reducts of this structure, and hence also of $(\mathbb S_2;\leq,C)$, can be distinguished by so-called \emph{canonical functions}. \begin{defn} Let $\Delta$ be a structure, and let $a$ be an $n$-tuple of elements in $\Delta$. The \emph{type} of $a$ in $\Delta$ is the set of first-order formulas with free variables $x_1,\ldots,x_n$ that hold for $a$ in $\Delta$. \end{defn} \begin{defn} Let $\Delta$ and $\Gamma$ be structures. A \emph{type condition} between $\Delta$ and $\Gamma$ is a pair $(t,s)$, such that $t$ is the type of an $n$-tuple in $\Delta$ and $s$ is the type of an $n$-tuple in $\Gamma$, for some $n\geq 1$. A function $f\colon\Delta\to\Gamma$ \emph{satisfies} a type condition $(t,s)$ iff the type of $(f(a_1),\ldots,f(a_n))$ in $\Gamma$ equals $s$ for all $n$-tuples $(a_1,\ldots,a_n)$ in $\Delta$ of type $t$. A \emph{behaviour} is a set of type conditions between $\Delta$ and $\Gamma$. We say that a function $f\colon\Delta\to\Gamma$ has a given behaviour iff it satisfies all of its type conditions. \end{defn} \begin{defn} Let $\Delta$ and $\Gamma$ be structures. A function $f\colon\Delta\to\Gamma$ is \emph{canonical} iff for every type $t$ of an $n$-tuple in $\Delta$ there is a type $s$ of an $n$-tuple in $\Gamma$ such that $f$ satisfies the type condition $(t,s)$. That is, canonical functions send $n$-tuples of the same type to $n$-tuples of the same type, for all $n\geq 1$. \end{defn} Note that any canonical function induces a function from the types over $\Delta$ to the types over $\Gamma$. \begin{defn} Let $\mathcal{F} \subseteq (\mathbb S_2)^{\mathbb S_2}$. We say that $\mathcal{F}$ \emph{generates} a function $g\colon \mathbb S_2\to \mathbb S_2$ iff $g$ is contained in the smallest closed submonoid of $(\mathbb S_2)^{\mathbb S_2}$ which contains $\mathcal F$. 
This is the case iff for every finite subset $A\subset \mathbb S_2$ there exists an $n\geq 1$ and $f_1,\ldots, f_n\in\mathcal{F}$ such that $f_1\circ\cdots\circ f_n$ agrees with $g$ on $A$. \end{defn} Our proof relies on the following proposition which is a consequence of~\cite{BP-reductsRamsey, BPT-decidability-of-definability} and the fact that $(\mathbb S_2;\leq,C,\prec)$ is a homogeneous Ramsey structure. For a structure $\Delta$ and elements $c_1,\ldots,c_n$ in that structure, let $(\Delta,c_1,\ldots,c_n)$ denote the structure obtained from $\Delta$ by adding the constants $c_1,\ldots,c_n$ to the language. \begin{prop}\label{prop:canfcts} Let $f\colon \mathbb S_2\rightarrow \mathbb S_2$ be any injective function, and let $c_1,\ldots,c_n\in \mathbb S_2$. Then $\{f\}\cup\Aut(\mathbb S_2;\leq,\prec)$ generates an injective function $g\colon \mathbb S_2 \rightarrow \mathbb S_2$ such that \begin{itemize} \item $g$ agrees with $f$ on $\{c_1,\ldots,c_n\}$; \item $g$ is canonical as a function from $(\mathbb S_2;\leq,C,\prec,c_1,\ldots,c_n)$ to $(\mathbb S_2;\leq,C,\prec)$. \end{itemize} \end{prop} \section{The Proof} \subsection{Rerootings and betweenness} We start by examining what the self-embeddings, automorphisms, and endomorphisms of $(\mathbb S_2;B)$ look like. \begin{defn}\label{defn:rerooting} A \emph{rerooting} of $(\mathbb S_2;<)$ is an injective function $f\colon\mathbb S_2\rightarrow\mathbb S_2$ for which there exists a set $S\subseteq \mathbb S_2$ such that \begin{itemize} \item $S$ contains no incomparable elements and is upward closed with respect to $<$; \item $f$ reverses the order $<$ on $S$; \item $f$ preserves $<$ and $\perp$ on $\mathbb S_2\setminus S$; \item whenever $x\in \mathbb S_2\setminus S$ and $y\in S$, then $x<y$ implies $f(x)\perp f(y)$ and $x\perp y$ implies $f(x)<f(y)$. \end{itemize} We then say that $f$ is a \emph{rerooting with respect to $S$}. 
\end{defn} It is not hard to see that whenever $S\subseteq \mathbb S_2$ is as above, then there is a rerooting with respect to $S$. A rerooting with respect to $S$ is a self-embedding of $(\mathbb S_2;<)$ if and only if $S$ is empty, and the image of any rerooting with respect to $S$ is isomorphic to $(\mathbb S_2;<)$ if and only if $S$ is a maximal chain or empty. In particular, there exist rerootings which are permutations of $\mathbb S_2$ and which are not self-embeddings of $(\mathbb S_2;<)$. \begin{proposition}\label{prop:rerooting} $\Emb(\mathbb S_2;B)$ consists precisely of the rerootings of $(\mathbb S_2;<)$. \end{proposition} \begin{proof} It is easy to check that rerootings preserve $B$ and its negation. Let $f\in \Emb(\mathbb S_2;B)$. We first claim that either $f\in\Emb(\mathbb S_2;<)$, or there exist $x,y\in\mathbb S_2$ such that $x<y$ and $f(x)>f(y)$. To see this, suppose first that $f$ violates $\perp$. Pick $a,b\in\mathbb S_2$ with $a\perp b$ and such that $f(a)<f(b)$. There exists $c\in\mathbb S_2$ such that $c>b$ and such that $B(a,c,b)$. Since $f$ preserves $B$ we then must have $f({c})<f(b)$, and our claim follows. Now suppose $f$ violates $<$, and pick $a,b\in\mathbb S_2$ with $a<b$ witnessing this. Then for any $c\in\mathbb S_2$ with $c>b$ we have $f({c})<f(b)$, proving the claim. Let $S:=\{x\in\mathbb S_2\mid \exists y\in\mathbb S_2 (x<y\wedge f(y)<f(x))\}$. By the above, $S$ is non-empty. Since $f$ preserves $B$, it follows easily that whenever $x\in S$, $y\in \mathbb S_2$ and $x<y$, then $f(y)>f(x)$. From this and again because $f$ preserves $B$ it follows that $S$ is upward closed, i.e., if $x\in S$ and $y\in\mathbb S_2$ satisfy $y>x$, then $y\in S$. Hence, $S$ cannot contain incomparable elements $x,y$, as otherwise for any $z\in S$ with $x<z$ and $y<z$ we would have $f(x)>f(z)$ and $f(y)>f(z)$, and so $f(x)$ and $f(y)$ would have to be comparable. But then $f$ would violate $\neg B$ on $\{x,y,z\}$. 
Consider $a\in\mathbb S_2\setminus S$ and $b\in S$ with $a<b$. Pick $c\in S$ with $c>b$. Then $f({c})<f(b)$ and $B(a,b,c)$ imply that $f(a)>f(b)$ or $f(a)\perp f(b)$. The first case is impossible by the definition of $S$, and so $f(a)\perp f(b)$. Next consider $a\in\mathbb S_2\setminus S$ and $b\in S$ with $a\perp b$. Picking $c\in S$ with $B(a,c,b)$, we derive that $f(a)< f(b)$. Let $x,y\in \mathbb S_2\setminus S$ with $x<y$. Pick $z\in S$ such that $y<z$. Then $B(f(x),f(y),f(z))$, $f(x)\perp f(z)$ and $f(y)\perp f(z)$ imply that $f(x)<f(y)$. Finally, given $x,y\in \mathbb S_2\setminus S$ with $x\perp y$, we can pick $z\in S$ such that $x<z$ and $y<z$. Then $f(x)\perp f(z)$, $f(y)\perp f(z)$, $\neg B(f(x),f(y),f(z))$, and $\neg B(f(y),f(x),f(z))$ together imply $f(x)\perp f(y)$. \end{proof} \begin{cor}\label{cor:rerooting} $\Aut(\mathbb S_2;B)$ consists precisely of the surjective rerootings with respect to a maximal chain or with respect to the empty set. \end{cor} \begin{cor}\label{cor:rerootinggenerate} $\Emb(\mathbb S_2;B)$ is generated by any of its functions which do not preserve $<$. \end{cor} \begin{proof} By homogeneity of $(\mathbb S_2;\leq,C)$ and topological closure. \end{proof} \begin{proposition}\label{prop:endb} Any function in $(\mathbb S_2)^{\mathbb S_2}$ that preserves $B$ is injective and preserves $\lnot B$. Consequently, $\End(\mathbb S_2; B)=\Emb(\mathbb S_2; B)=\overline{\Aut(\mathbb S_2; B)}$. \end{proposition} \begin{proof} The existential positive formula $$(a=b)\vee (b=c)\vee (c=a)\vee \exists x (B(a,x,b)\wedge B(b,x,c)) $$ is equivalent to $\lnot B(a,b,c)$. Therefore $B$ and $\lnot B$ are existentially positively interdefinable, and hence preserved by the same unary functions on $\mathbb S_2$ (cf.~ the discussion in the introduction). 
Moreover, for all $a,b\in \mathbb S_2$ we have that $a\neq b$ iff there exists $c\in\mathbb S_2$ such that $B(a,b,c)$, so inequality has an existential positive definition from $B$, and functions preserving $B$ must be injective. Hence, every endomorphism of $(\mathbb S_2; B)$ is an embedding. From Proposition~\ref{prop:rerooting} and Corollary~\ref{cor:rerooting} it follows that the restriction of any self-embedding of $(\mathbb S_2; B)$ to a finite subset of $\mathbb S_2$ extends to an automorphism, and hence $\Emb(\mathbb S_2; B)=\overline{\Aut(\mathbb S_2; B)}$. \end{proof} \subsection{Ramsey-theoretic analysis} \subsubsection{Canonical functions without constants} Every canonical function $f\colon (\mathbb S_2; \leq, C, \prec)\rightarrow (\mathbb S_2; \leq, C,\prec)$ induces a function on the 3-types of $(\mathbb S_2; \leq, C,\prec)$. Our first lemma shows that only a few functions on those 3-types are induced by canonical functions, i.e., there are only a few behaviours of canonical functions. \begin{defn} We call a function $f\colon \mathbb S_2\rightarrow \mathbb S_2$ \begin{itemize} \item \emph{flat} iff its image induces an antichain in $(\mathbb S_2; \leq)$; \item \emph{thin} iff its image induces a chain in $(\mathbb S_2; \leq)$. \end{itemize} \end{defn} \begin{lemma}\label{lem:nocnst} Let $f\colon (\mathbb S_2; \leq, C, \prec)\rightarrow (\mathbb S_2; \leq, C,\prec)$ be an injective canonical function. Then either $f$ is flat, or $f$ is thin, or $f\in\End(\mathbb S_2;<,\perp)$. \end{lemma} \begin{proof} Let $u_1,u_2,v_1,v_2\in\mathbb S_2$ be such that $u_1<u_2$, $v_1\perp v_2$, and $v_1\prec v_2$. If $f(u_1)\perp f(u_2)$ and $f(v_1)\perp f(v_2)$, then $f$ is flat by canonicity. If $f(u_1)\not\perp f(u_2)$ and $f(v_1)\not\perp f(v_2)$, then $f$ is thin. It remains to check the following cases. {\emph {Case 1: $f(u_1)\perp f(u_2)$ and $f(v_1)< f(v_2)$.}} Let $x,y,z\in \mathbb S_2$ be such that $x<y$, $x\perp z$, $y\perp z$, $z\prec x$, and $z\prec y$. 
Then $f(x)\perp f(y)$, $f(x)>f(z)$, and $f(y)> f(z)$, in contradiction with the axioms of the semilinear order. {\emph {Case 2: $f(u_1)\perp f(u_2)$ and $f(v_1)> f(v_2)$.}} Let $x,y,z\in \mathbb S_2$ be such that $x<y$, $x\perp z$, $y\perp z$, $x\prec z$, and $y\prec z$. Then $f(x)\perp f(y)$, $f(x)>f(z)$, and $f(y)>f(z)$, in contradiction with the axioms of the semilinear order. {\emph {Case 3: $f(u_1)<f(u_2)$ and $f(v_1)\perp f(v_2)$.}} Then $f$ preserves $<$ and $\perp$. {\emph {Case 4: $f(u_1)>f(u_2)$ and $f(v_1)\perp f(v_2)$.}} Let $x,y,z\in \mathbb S_2$ be such that $x\perp y$, $x\prec y$, $x<z$, and $y< z$. Then $f(x)\perp f(y)$, $f(x)>f(z)$, and $f(y)> f(z)$, in contradiction with the axioms of the semilinear order. \end{proof} \subsubsection{Canonical functions with constants} \begin{lem}\label{lem:generateflat} Let $f\colon \mathbb S_2\rightarrow \mathbb S_2$ be a function. If $f$ preserves incomparability but not comparability in $(\mathbb S_2;\leq)$, then $\{f\}\cup\Aut(\mathbb S_2;\leq)$ generates a flat function. If $f$ preserves comparability but not incomparability in $(\mathbb S_2;\leq)$, then $\{f\}\cup\Aut(\mathbb S_2;\leq)$ generates a thin function. \end{lem} \begin{proof} We show the first statement; the proof of the second statement is analogous. We first claim that for any finite set $A\subseteq\mathbb S_2$, $f$ generates a function which sends $A$ to an antichain. To see this, let $A$ be given, and pick $a,b\in\mathbb S_2$ such that $a<b$ and $f(a)\perp f(b)$. If $A$ contains elements $u,v$ with $u<v$, then let $\alpha\in\Aut(\mathbb S_2;\leq)$ be so that $\alpha(u)=a$ and $\alpha(v)=b$. The function $f\circ \alpha$ sends $A$ to a set which has fewer pairs $(u,v)$ satisfying $u<v$ than $A$. Repeating this procedure on the image of $A$ and so forth and composing functions we obtain a function which sends $A$ to an antichain. 
Now let $\{s_0,s_1,\ldots\}$ be an enumeration of $\mathbb S_2$, and pick for every $n\geq 0$ a function $g_n$ generated by $\{f\}\cup\Aut(\mathbb S_2;\leq)$ which sends $\{s_0,\ldots,s_n\}$ to an antichain. Since $(\mathbb S_2;\leq)$ is $\omega$-categorical, by thinning out the sequence we may assume that for all $n\geq 0$ and all $i,j\geq n$ the type of the tuple $(g_i(s_0),\ldots,g_i(s_n))$ equals the type of $(g_j(s_0),\ldots,g_j(s_n))$ in $(\mathbb S_2;\leq)$. By composing with automorphisms of $(\mathbb S_2;\leq)$ from the left, we may even assume that these tuples are equal. But then the sequence $(g_n)_{n\in\omega}$ converges to a flat function. \end{proof} \begin{defn} When $n\geq 1$ and $R\subseteq \mathbb S_2^n$ is an $n$-ary relation, then we say that $R(X_1,\ldots,X_n)$ holds for sets $X_1,\ldots,X_n\subseteq \mathbb S_2$ iff $R(x_1,\ldots,x_n)$ holds whenever $x_i\in X_i$ for all $1\leq i\leq n$. We also use this notation when some of the $X_i$ are elements of $\mathbb S_2$ rather than subsets, in which case we treat them as singleton subsets. \end{defn} \begin{defn} For $a\in \mathbb S_2$, we set \begin{itemize} \item $U_<^a:=\{p\in \mathbb S_2\mid p<a\}$; \item $U_>^a:=\{p\in \mathbb S_2\mid p>a\}$; \item $U_{\perp,\prec}^a:=\{p\in \mathbb S_2\mid p\perp a \wedge p\prec a\}$; \item $U_{\perp,\succ}^a:=\{p\in \mathbb S_2\mid p\perp a \wedge a\prec p\}$; \item $U_{\perp}^a:=U_{\perp,\succ}^a\cup U_{\perp,\prec}^a$. \end{itemize} \end{defn} The first four sets defined above are precisely the infinite orbits of $\Aut(\mathbb S_2; \leq,\prec, a)$. \begin{lemma}\label{lem:onecnst} Let $a\in \mathbb S_2$, and let $f\colon (\mathbb S_2; \leq, C, \prec, a)\rightarrow (\mathbb S_2; \leq, C,\prec)$ be an injective canonical function. 
Then one of the following holds: \begin{enumerate} \item $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat or a thin function; \item $f\in\End(\mathbb S_2;<,\perp)$; \item $f{\upharpoonright}_{\mathbb S_2\setminus\{a\}}$ behaves like a rerooting function with respect to $U_>^a$, and $f(a)\not < f[U_{>}^a]$. \end{enumerate} Moreover, if $f(a)\not > f[U_<^a]$ and $f(a)\not > f[U_{>}^a]$, then $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat or a thin function. \end{lemma} \begin{proof} The set $U_<^a$ induces an isomorphic copy of $(\mathbb S_2; \leq, C, \prec)$, and the restriction of $f$ to this copy is canonical. By Lemma~\ref{lem:nocnst} we may assume that $f$ preserves $<$ and $\perp$ on $U_<^a$, as otherwise $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat or a thin function. If $u,v\in U_{\perp,\prec}^a$ satisfy $u<v$, then there exists a subset of $U_{\perp,\prec}^a$ containing $u$ and $v$ which induces an isomorphic copy of $(\mathbb S_2; \leq, C, \prec)$. As above, we may assume that $f$ preserves $<$ and $\perp$ on this subset, and hence $f(u)<f(v)$. If $u,v\in U_{\perp,\prec}^a$ satisfy $u\perp v$, then there exist subsets $R,S$ of $U_{\perp,\prec}^a$ containing $u$ and $v$, respectively, such that both $R$ and $S$ induce isomorphic copies of $(\mathbb S_2; \leq, C, \prec)$ and such that for all $r\in R$ and $s\in S$ the type of $(r,s)$ equals the type of $(u,v)$ in $(\mathbb S_2; \leq, C, \prec)$. Assuming as above that $f$ preserves $<$ and $\perp$ on both copies, $f(u)<f(v)$ would imply $f[R]<f[S]$, which is in contradiction with the axioms of a semilinear order. Hence, we may assume that $f$ preserves $<$ and $\perp$ on $U_{\perp,\prec}^a$, and by a similar argument also on $U_{\perp,\succ}^a$. The sets $U_{\perp,\prec}^a$, $U_{\perp,\succ}^a$, and $U_<^a$ are pairwise incomparable, and the relation $\perp$ between them cannot be violated by $f$, as this would contradict the axioms of the semilinear order. 
Thus we may assume that $f$ preserves $<$ and $\perp$ on $U_{\perp}^a\cup U_<^a$. Moreover, for no $p\in\{a\}\cup U_{>}^a$ do we have $f({p})<f[U_{\perp,\prec}^a]$, $f({p})<f[U_{\perp,\succ}^a]$, or $f({p})<f[U_<^a]$, again by the properties of semilinear orders. Assume that $U_{>}^a$ is mapped to an antichain by $f$. Then canonicity of $f$ implies that $f[U_{>}^a]\perp f[U_{\perp}^a\cup U_<^a]$, as all other possibilities are in contradiction with the axioms of the semilinear order. In particular, $f$ then preserves $\perp$ on $\mathbb S_2\setminus\{a\}$. Given a finite $A\subseteq \mathbb S_2$ which is not an antichain, there exists $\alpha\in\Aut(\mathbb S_2; \leq)$ such that $\alpha[A]\subseteq \mathbb S_2\setminus\{a\}$, and two comparable points are mapped into $U_{>}^a$ by $\alpha$. Thus $f\circ \alpha$ preserves $\perp$ on $A$, and it maps at least one comparable pair in $A$ to an incomparable one. As in Lemma~\ref{lem:generateflat}, we see that $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat function. So we may assume that the order on $U_{>}^a$ is either preserved or reversed by $f$. The rest of the proof is an analysis of the possible behaviours of $f$ in these two cases. In order to talk about the behaviour of $f$, we choose elements $u_1\in U_{\perp,\prec}^a$, $u_2\in U_{\perp,\succ}^a$ and $z_1, z_2\in U_{>}^a$ such that $z_1<z_2$, $u_i\perp z_1$, and $u_i< z_2$ for $i\in\{1,2\}$. {\emph {Case 1: $f$ preserves the order on $U_{>}^a$.}} If $f(u_1)<f(z_1)$, then by transitivity of $<$ and canonicity of $f$ we have that $f[U_{\perp,\prec}^a]<f[U_{>}^a]$. Given a finite $A\subseteq \mathbb S_2$ which is not a chain, there exists $\alpha\in\Aut(\mathbb S_2; \leq)$ such that $\alpha[A]\subseteq U_{\perp,\prec}^a\cup U_{>}^a$ and such that $\alpha(x)\in U_{\perp,\prec}^a$ and $\alpha(y)\in U_{>}^a$ for some elements $x,y\in A$ with $x\perp y$. Thus $f\circ \alpha$ preserves $<$ on $A$, and it maps at least one incomparable pair in $A$ to a comparable one. 
As in Lemma~\ref{lem:generateflat}, we conclude that $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin function. We can argue similarly when $f(u_2)<f(z_1)$. Thus we may assume that $f(u_i)\perp f(z_1)$ for $i\in \{1,2\}$. If $f(u_i)\perp f(z_2)$ for some $i\in \{1,2\}$, then a similar argument shows that $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat function. Hence, we may assume that $f(u_i)<f(z_2)$ for $i\in \{1,2\}$, and so $f$ preserves $<$ and $\perp$ on $U_\perp^a\cup U_>^a$. Assume that $f[U_<^a]\perp f[U_{>}^a]$. Given a finite $A\subseteq \mathbb S_2$ which is not an antichain, there exists $\alpha\in\Aut(\mathbb S_2; \leq)$ such that $\alpha[A]\subseteq \mathbb S_2\setminus\{a\}$ and such that $\alpha(x)\in U_<^a$ and $\alpha(y)\in U_{>}^a$ for some $x,y\in A$ with $x<y$. Thus $f\circ \alpha$ preserves $\perp$ on $A$, and it maps at least one comparable pair in $A$ to an incomparable one. The proof of Lemma~\ref{lem:generateflat} shows that $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat function. So we may assume that $f[U_<^a]< f[U_{>}^a]$, and consequently, $f$ preserves $<$ and $\perp$ on $\mathbb S_2\setminus \{a\}$. If $f(a)>f[U_{>}^a]$, then by transitivity of $<$ we have $f(a)>f[\mathbb S_2\setminus \{a\}]$, and we can easily show that $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin function. Similarly, if $f(a)\perp f[U_{>}^a]$, then by the axioms of the semilinear order we have $f(a)\perp f[\mathbb S_2\setminus \{a\}]$, and $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat function. Thus we may assume that $f(a)<f[U_{>}^a]$. If $f(a)> f[U_{\perp,\prec}^a]$ or $f(a)> f[U_{\perp,\succ}^a]$, then by transitivity of $<$ we have $f[U_{\perp,\prec}^a]<f[U_{>}^a]$ or $f[U_{\perp,\succ}^a]<f[U_{>}^a]$, a contradiction. Hence, $f(a)\perp f[U_{\perp}^a]$. Finally, if $f(a)\perp f[U_<^a]$, then $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat function. 
Thus we may assume that $f(a)>f[U_<^a]$, and so $f$ preserves $<$ and $\perp$, proving the lemma. {\emph {Case 2: $f$ reverses the order on $U_{>}^a$.}} If $f(u_1)\perp f(z_1)$, then by $f(z_2)<f(z_1)$ and the axioms of the semilinear order we have that $f(u_1)\perp f(z_2)$. Moreover, $f{\upharpoonright}_{U_{\perp,\prec}^a \cup U_{>}^a}$ preserves $\perp$. Since the comparable elements $u_1, z_2$ are sent to incomparable ones, the standard iterative argument shows that $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat function. An analogous argument works if $f(u_2)\perp f(z_1)$. Thus we may assume that $f(u_i)<f(z_1)$ for $i\in \{1,2\}$. If $f(u_i)<f(z_2)$ for some $i\in \{1,2\}$, then a similar argument shows that $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin function. Thus we may assume that $f(u_i)\perp f(z_2)$ for $i\in \{1,2\}$, and $f{\upharpoonright}_{U_\perp^a\cup U_>^a}$ behaves like a rerooting. Assume that $f[U_<^a]<f[U_{>}^a]$. Let $A\subseteq \mathbb S_2$ be finite. Pick a minimal element $b \in A$, and let $C\subseteq A$ be those elements $c\in A$ with $b\leq c$. Let $\alpha\in\Aut(\mathbb S_2; \leq)$ be such that $\alpha(b)\in U_<^a$, $\alpha[C\setminus\{b\}]\subseteq U_{>}^a$ and $\alpha[A\setminus C]\subseteq U_{\perp}^a$. Then there exists $\beta\in\Aut(\mathbb S_2; \leq)$ such that $\beta\circ f\circ \alpha[C]\subseteq U_{>}^a$ and $\beta\circ f\circ \alpha[A\setminus C]\subseteq U_{\perp}^a$. Let $g:=f\circ \beta\circ f\circ \alpha$. Then $g{\upharpoonright}_{A\setminus\{b\}}$ preserves $<$ and $\perp$, and $g(b)\geq g[A]$. By iterating such steps, $A$ can be mapped to a chain. Hence, as in Lemma~\ref{lem:generateflat}, $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin function. Thus we may assume that $f[U_<^a]\perp f[U_{>}^a]$. By replacing $U_<^a$ with $\{a\}$ in this argument, one can show that if $f(a)<f[U_{>}^a]$, then $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin function. 
Thus we may assume that $f(a)\not < f[U_{>}^a]$, and so Item~(3) applies. To show the second part of the lemma, suppose that $f(a)\not > f[U_<^a]$ and $f(a)\not > f[U_{>}^a]$. Then $f$ violates $<$, and thus Item (2) cannot hold for $f$. Hence, either $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat or a thin function, or the conditions in Item~(3) hold for $f$. We assume the latter. In particular, $f(a)\perp f[U_\perp^a]$, by the axioms of the semilinear order, and hence $f(a)\perp f[U_{>}^a]$. Let $A\subseteq \mathbb S_2$ be finite such that $A$ is not an antichain. Pick some $x\in A$ which is maximal in $A$ with respect to $\leq$ and such that there exists $y\in A$ with $y<x$. Let $\alpha\in\Aut(\mathbb S_2; \leq)$ be such that $\alpha(x)=a$. Then $f\circ \alpha$ preserves $\perp$ on $A$, and $f(\alpha(y))\perp f(\alpha(x))$. Hence, iterating such steps, $A$ can be mapped to an antichain, and $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat function. \end{proof} \subsubsection{Applying canonicity} \begin{lemma}\label{lem:climb1end} Let $f\colon\mathbb S_2\rightarrow \mathbb S_2$ be an injective function that violates $<$. Then either $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat or a thin function, or $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates $\End(\mathbb S_2; B)$. \end{lemma} \begin{proof} If $f$ preserves comparability and incomparability, then $f$ cannot violate $<$. If $f$ preserves comparability and violates incomparability, then $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin function by Lemma~\ref{lem:generateflat}. Thus we may assume that $f$ violates comparability. Let $a,b\in\mathbb S_2$ be such that $a<b$ and $f(a)\perp f(b)$. According to Proposition~\ref{prop:canfcts}, there exists a canonical function $g\colon(\mathbb S_2; \leq, C, \prec, a, b)\rightarrow (\mathbb S_2; \leq, C, \prec)$ that is generated by $\{f\}\cup \Aut(\mathbb S_2; \leq)$ such that $g(a)\perp g(b)$. 
The set $U_<^b$ induces in $(\mathbb S_2; \leq, C, \prec, a)$ a structure isomorphic to $(\mathbb S_2; \leq, C, \prec, a)$, and the restriction of $g$ to this set is canonical. By Lemma~\ref{lem:onecnst} either $\{g\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin or a flat function, or a rerooting, or $g$ preserves $<$ and $\perp$ on $U_<^b$. We may assume the latter. By a similar argument, either $\{g\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin or a flat function, or a rerooting, or $g$ preserves $<$ and $\perp$ on $U_<^a\cup U_\perp^b\cup U_>^b\cup\{b\}$. However, the latter is impossible as it would imply that $g(t)<g(a)$ and $g(t)< g(b)$ for all $t\in U_<^a$ while $g(a)\perp g(b)$, which is in contradiction with the axioms of the semilinear order. \end{proof} \begin{lemma}\label{lem:climb2end} Let $f\colon\mathbb S_2\rightarrow \mathbb S_2$ be an injective function that violates $B$. Then $\{f\}\cup\Aut(\mathbb S_2; B)$ generates a flat or a thin function. \end{lemma} \begin{proof} Let $a,b,c\in \mathbb S_2$ be such that $B(a,b,c)$ and $\lnot B(f(a), f(b), f(c))$. Then it follows from Corollary~\ref{cor:rerootinggenerate} that there exist $\alpha,\beta\in \Aut(\mathbb S_2; B)$ such that $\alpha(a)<\alpha(b)<\alpha({c})$ and such that $\{\beta(f(a)), \beta(f(b)),\beta(f({c}))\}$ induces an antichain. Replacing $f$ by $\beta\circ f\circ\alpha^{-1}$, we may assume that there are $a,b,c\in \mathbb S_2$ such that $a<b<c$ and such that $\{f(a), f(b), f({c})\}$ induces an antichain. By Proposition~\ref{prop:canfcts}, there exists a canonical function $g\colon (\mathbb S_2; \leq, C, \prec, a, b, c)\rightarrow (\mathbb S_2; \leq, C, \prec)$ that is generated by $\{f\}\cup \Aut(\mathbb S_2; \leq)$ such that $\{g(a), g(b),g({c})\}$ induces an antichain. By the axioms of the semilinear order, at most one $y\in\{g(a), g(b), g({c})\}$ can satisfy $y>g[U_<^a]$ and at most one such element can satisfy $y>g[U_>^c]$. 
Hence, there exists an $x\in \{a, b, c\}$ such that $g(x)\not > g[U_<^a]$ and $g(x)\not> g[U_>^c]$. The set $X:=U_<^a\cup \{x\}\cup U_>^c\cup U_{\perp}^c$ induces in $(\mathbb S_2; \leq, C, \prec)$ a structure isomorphic to $(\mathbb S_2; \leq, C, \prec)$, and $g{\upharpoonright}_{X}$ is canonical as a function from $(\mathbb S_2; \leq, C, \prec,x)$ to $(\mathbb S_2; \leq, C, \prec)$. According to the second part of Lemma~\ref{lem:onecnst}, $\{g\}\cup \Aut(\mathbb S_2; \leq)$ generates a flat or a thin function. \end{proof} \subsection{Endomorphisms and the proof of Theorem~\ref{thm:4case}} \begin{prop}\label{prop:embeddings} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$. Then one of the following holds. \begin{enumerate} \item $\End(\Gamma)$ contains a flat or a thin function. \item $\End(\Gamma)= \overline{\Aut(\mathbb S_2; \leq)}$. \item $\End(\Gamma)= \overline{\Aut(\mathbb S_2; B)}$. \end{enumerate} \end{prop} \begin{proof} Assume that there exist $x,y\in\mathbb S_2$ with $x<y$ and $f\in \End(\Gamma)$ such that $f(x)=f(y)$. By collapsing comparable pairs one by one using $f$ and automorphisms of $(\mathbb S_2; \leq)$, it is possible to generate a flat function. Similarly, if there exists a pair of elements $x\perp y$ and $f\in \End(\Gamma)$ such that $f(x)=f(y)$, then $\{f\}\cup \Aut(\mathbb S_2; \leq)$ generates a thin function. Hence, we may assume that every endomorphism of $\Gamma$ is injective. If $\End(\Gamma)$ preserves $<$ and $\perp$, then $\End(\Gamma)= \Emb(\mathbb S_2; \leq)= \overline{\Aut(\mathbb S_2; \leq)}$. If $\End(\Gamma)$ preserves $<$ and violates $\perp$, then $\End(\Gamma)$ contains a thin function. Thus we may assume that some $f\in \End(\Gamma)$ violates $<$. By Lemma~\ref{lem:climb1end} either $\End(\Gamma)$ contains a flat or a thin function, or $\Emb(\mathbb S_2; B)\subseteq \End(\Gamma)$. 
Since $\Emb(\mathbb S_2; B)= \overline{\Aut(\mathbb S_2; B)}$, we may assume that $\Emb(\mathbb S_2; B)\subsetneq \End(\Gamma)$, as otherwise Item (1) or (3) holds. Hence, there exists a function $f\in \End(\Gamma)$ that violates either $B$ or $\lnot B$. By Proposition~\ref{prop:endb}, $f$ violates $B$, and then $\End(\Gamma)$ contains a flat or a thin function by Lemma~\ref{lem:climb2end}. \end{proof} \begin{lemma}\label{lem:canac} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$ which has a flat endomorphism. Then $\Gamma$ is homomorphically equivalent to a reduct of $(\mathbb L_2; C)$. \end{lemma} \begin{proof} Let $f$ be such an endomorphism. By Zorn's lemma, there exists a maximal antichain $M$ in $\mathbb S_2$ that contains the image of $f$. By definition, $M$ induces in $(\mathbb S_2;C)$ a structure $\Sigma$ which is isomorphic to $(\mathbb L_2; C)$. The structure $\Delta$ with domain $M$ and all relations that are restrictions of the relations of $\Gamma$ to $M$ is a reduct of $\Sigma$, as $(\mathbb S_2; \leq, C)$ has quantifier elimination. The inclusion map of $M$ into $\mathbb S_2$ is a homomorphism from $\Delta$ to $\Gamma$, and the function $f$ is a homomorphism from $\Gamma$ to $\Delta$. \end{proof} \begin{lemma}\label{lem:canch} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$ which has a thin endomorphism. Then $\Gamma$ is homomorphically equivalent to a reduct of the dense linear order. \end{lemma} \begin{proof} Analogous to the proof of Lemma~\ref{lem:canac}, using the obvious fact that maximal chains in $(\mathbb S_2; \leq)$ are isomorphic to $(\mathbb{Q}; \leq)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:4case}] Follows directly from Propositions~\ref{prop:endb} and \ref{prop:embeddings}, Lemmas~\ref{lem:canac} and \ref{lem:canch}, and the easily verifiable fact that $\End(\mathbb S_2;<,\perp)=\overline{\Aut(\mathbb S_2;\leq)}$. 
\end{proof} \subsection{Embeddings and the proof of Theorem~\ref{thm:groups}} \begin{lemma}\label{lem:thinemb} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$ with a thin self-embedding. Then $\Gamma$ is isomorphic to a reduct of $(\mathbb Q;<)$. \end{lemma} \begin{proof} By Proposition~\ref{prop:canfcts} there is a thin canonical function $g \colon (\mathbb S_2; \leq, C, \prec)\rightarrow (\mathbb S_2; \leq, C,\prec)$ such that $g\in \Emb(\Gamma)$. There are four possible behaviours of $g$, as it can preserve or reverse $<$, and independently, it can preserve or reverse $\prec$ on incomparable pairs. In all four of these cases, the structure $\Sigma$ induced by the image of $g$ in $(\mathbb S_2; \leq)$ is isomorphic to $(\mathbb Q;\leq)$. The structure $\Delta$ on this image whose relations are the restrictions of the relations of $\Gamma$ to $g[\mathbb S_2]$ is a reduct of $\Sigma$, as $(\mathbb S_2; \leq, C)$ has quantifier elimination. The claim follows as $g$ is an isomorphism between $\Gamma$ and $\Delta$. \end{proof} \begin{lemma}\label{lem:redq} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$ which is isomorphic to a reduct of $(\mathbb Q;<)$. Then $\Gamma$ is existentially interdefinable with $(\mathbb S_2; =)$. \end{lemma} \begin{proof} Pick any pairwise incomparable elements $a_1,\ldots,a_5\in \mathbb S_2$. Then there exist distinct $i,j\in\{1,\ldots,5\}$ and an automorphism of $(\mathbb S_2; \leq)$ which flips $a_i,a_j$ and fixes the other three elements. From Cameron's classification of the reducts of $(\mathbb Q;<)$ (\cite{Cameron5}, cf.~the description in Section~\ref{sect:results}) we know that the only automorphism group of such a reduct which can perform this is the full symmetric group: every permutation in any of the other groups fixes either at most one or all of any five elements on which it acts, whereas our automorphism fixes exactly three of the five. Hence, $\Aut(\Gamma)$ contains all permutations of $\mathbb S_2$. Thus, all injections of $\mathbb S_2$ are self-embeddings of $\Gamma$, and the lemma follows. 
\end{proof} \begin{definition} Let $R(x,y,z)$ be the ternary relation on $\mathbb S_2$ defined by the formula $$ C(z,xy)\vee (x<z\wedge y<z)\vee (x\perp z\wedge y\perp z\wedge (x<y\vee y<x)). $$ \end{definition} \begin{proposition}\label{prop:notmc} $(\mathbb S_2;R)$ and $(\mathbb S_2;\leq)$ are interdefinable. However, $(\mathbb S_2;R)$ is not model-complete, i.e., it has a self-embedding which is not an element of $\overline{\Aut(\mathbb S_2;R)}$. \end{proposition} \begin{proof} By definition, $R$ has a first-order definition in $(\mathbb S_2;\leq)$. To see the converse, observe that for $a,b\in\mathbb S_2$ we have that $a\leq b$ if and only if there exists no $c\in\mathbb S_2$ such that $R(b,c,a)$. Hence, $(\mathbb S_2;R)$ and $(\mathbb S_2;\leq)$ are interdefinable, and in particular, $\Aut(\mathbb S_2;R)=\Aut(\mathbb S_2;\leq)$. To show that $(\mathbb S_2;R)$ is not model-complete, let $f\in (\mathbb S_2)^{\mathbb S_2}$ map $\mathbb S_2$ to an antichain in $(\mathbb S_2;\leq)$ in such a way that $R(a,b,c)$ if and only if $C(f({c}),f(a)f(b))$ for all $a,b,c\in\mathbb S_2$. An easy induction shows that such a mapping exists. Clearly, $f$ is not an element of $\overline{\Aut(\mathbb S_2;R)}$, since it does not preserve comparability. \end{proof} The previous proposition is the reason for the special case concerning $R$ in the following lemma. \begin{lemma}\label{lem:flatemb} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$ with a flat self-embedding. Then $\Gamma$ is isomorphic to a reduct of $(\mathbb Q;<)$, or it has a flat self-embedding that preserves $R$. \end{lemma} \begin{proof} Let $f$ be the flat self-embedding. By Proposition~\ref{prop:canfcts} we may assume that $f$ is canonical as a function from $(\mathbb S_2; \leq, C,\prec)$ to $(\mathbb S_2; \leq, C,\prec)$. 
By composing $f$, if necessary, from the left with an automorphism $\alpha$ of $(\mathbb S_2; \leq, C)$ which reverses the order $\prec$ on incomparable pairs, we may assume that $f$ is canonical as a function from $(\mathbb S_2; \prec)$ to $(\mathbb S_2; \prec)$; that is, it either preserves or reverses the order $\prec$. In the latter case, $\alpha\circ f$ preserves $\prec$, so in any case we may assume that $f$ preserves $\prec$. To simplify notation, we shall write $x'$ instead of $f(x)$ for all $x\in \mathbb S_2$, and we write $xy|z$ or $z|xy$ instead of $C(z,xy)$ for all $x,y,z\in\mathbb S_2$. Let $a_1,\ldots,a_5\in \mathbb S_2$ be so that $a_1\prec\cdots\prec a_5$ and so that $a_1\perp a_2$, $a_1,a_2<a_3$, $a_3\perp a_4$, and $a_1,\ldots,a_4<a_5$. We shall analyse the possible behaviours of $f$ on these elements. Since $f$ preserves $\prec$, we have that either $a_1'a_2'|a_3'$ or $a_1'|a_2'a_3'$. We claim that in the first case, $a_2'a_3'|a_4'$. Otherwise, pick $x>a_2$ such that $a_1x|a_4$. Since $a_1'a_2'|a_3'$, we must have $a_1'a_2'|a_4'$ by the properties of $\prec$, and so $a_1'x'|a_4'$ by canonicity. But then $a_2'x'|a_4'$ since $a_1'\prec a_2'\prec x'$, and hence indeed $a_2'a_3'|a_4'$ by canonicity. This together with $a_1'a_2'|a_3'$ implies $a_1'a_3'|a_4'$. Since $a_1'a_2'|a_3'$, we have $a_1'a_4'|a_5'$ by canonicity, leaving us with the following possibility which uniquely determines the type of the tuple $(a_1',\ldots,a_5')$ in $(\mathbb S_2; \leq, C,\prec)$: \begin{itemize} \item[(A1)] $a_1'a_2'|a_3'$, $a_1'a_3'|a_4'$, $a_1'a_4'|a_5'$. \end{itemize} Now assume $a_1'|a_2'a_3'$; then $a_1'|a_2'a_5'$ by canonicity. The latter implies $a_1'|a_3'a_4'$, and thus $a_2'|a_3'a_4'$ again by canonicity. Taking into account that $a_1'|a_2'a_3'$ and canonicity imply $a_3'|a_4'a_5'$, this leaves us with the following possibility: \begin{itemize} \item[(A2)] $a_1'|a_2'a_5'$, $a_2'|a_3'a_5'$, $a_3'|a_4'a_5'$. 
\end{itemize} Next let $b_1,\ldots,b_5\in \mathbb S_2$ be so that $b_1\prec\cdots\prec b_5$ and so that $b_1\perp b_4$, $b_2,b_3<b_4$, $b_2\perp b_3$, and $b_1,\ldots,b_4<b_5$. If $b_2'|b_3'b_4'$, then canonicity implies $b_1'|b_2'b_5'$ and $b_2'|b_3'b_5'$, leaving us with only two non-isomorphic possibilities, namely $b_3'|b_4'b_5'$ and $b_3'b_4'|b_5'$. \begin{itemize} \item[(B1)] $b_1'|b_2'b_5'$, $b_2'|b_3'b_5'$, $b_3'|b_4'b_5'$; \item[(B2)] $b_1'|b_2'b_5'$, $b_2'|b_3'b_5'$, $b_3'b_4'|b_5'$. \end{itemize} If on the other hand $b_2'b_3'|b_4'$, then canonicity tells us that $b_1'b_4'|b_5'$. One possibility here is that $b_1'b_2'|b_3'$, which together with $b_2'b_3'|b_4'$ implies $b_1'b_3'|b_4'$, and so we have: \begin{itemize} \item[(B3)] $b_1'b_4'|b_5'$, $b_1'b_3'|b_4'$, $b_1'b_2'|b_3'$. \end{itemize} Finally, suppose that $b_2'b_3'|b_4'$ and $b_1'|b_2'b_3'$. Pick $x>b_3$ such that $b_2\perp x$. Then $b_1'|b_2'x'$ by canonicity, and hence $b_2'\prec b_3'\prec x'$ implies that we must have $b_1'|b_3'x'$. But then canonicity gives us $b_1'|b_2'b_4'$, and hence the following: \begin{itemize} \item[(B4)] $b_1'b_4'|b_5'$, $b_1'|b_2'b_4'$, $b_2'b_3'|b_4'$. \end{itemize} We now consider all possible combinations of these situations. Assume first that (A1) holds; then neither (B1) nor (B2) hold because otherwise $a_1'a_4'|a_5'$ and $b_1'|b_4'b_5'$ together would contradict canonicity. If we have (B3), then for all $a,b,c$ in the range of $f$ we have that $ab|c$ iff $a,b\prec c$. Hence, the formula $a\prec c\wedge b\prec c$ defines the relation $C$ on the image. It is clear that the structure induced by $f[\mathbb S_2]$ in $(\mathbb S_2;\prec)$ is isomorphic to $(\mathbb Q;<)$, since $(\mathbb S_2;\prec)$ is isomorphic to it and since $f$ preserves $\prec$. Thus $\Gamma$ is isomorphic to a reduct of $(\mathbb Q;<)$. If we have (B4), then $f$ is a flat self-embedding of $\Gamma$ that preserves $R$. Now assume that (A2) holds. 
Then $a_1'|a_4'a_5'$ and canonicity imply that (B1) or (B2) is the case. However, (B2) is in fact impossible by virtue of $a_1'|a_3'a_5'$ and $b_3'b_4'|b_5'$, leaving us with (B1). Here, we argue that $\Gamma$ is isomorphic to a reduct of $(\mathbb Q;<)$ precisely as in the case (A1)+(B3). \end{proof} \begin{lemma}\label{lem:funny} Let $\Gamma$ be a reduct of $(\mathbb S_2; \leq)$. Assume that there is a flat function in $\overline{\Aut(\Gamma)}$ that preserves $R$. Then $\Gamma$ is isomorphic to a reduct of $(\mathbb Q;<)$. \end{lemma} \begin{proof} Let $f$ be that function. We use induction to show that the action of $\Aut(\Gamma)$ is $n$-set transitive for all $n\geq 1$, i.e., if two subsets of $\mathbb S_2$ have the same finite cardinality $n$, then there exists an automorphism of $\Gamma$ sending one set to the other. The statement is obvious for $n=1, 2$. Assume that the claim holds for some $n\in \mathbb{N}$, and let $A_1, A_2$ be $(n+1)$-element subsets with $a_i\in A_i$ for $i\in \{1,2\}$. By the induction hypothesis, for all $i\in \{1,2\}$ there exists an $\alpha_i\in \Aut(\Gamma)$ such that $\alpha_i[A_i\setminus \{a_i\}]$ is a chain. Using the fact that $f$ preserves $R$, we then get that $(f\circ \alpha_1)[A_1]$ and $(f\circ \alpha_2)[A_2]$ induce isomorphic substructures in $(\mathbb S_2; \leq, C)$: namely, for both $i\in \{1,2\}$ there exists a linear order $\sqsubseteq_i$ on $(f\circ \alpha_i)[A_i]$ such that for all pairwise distinct $a,b,c\in (f\circ \alpha_i)[A_i]$ the relation $C(c,ab)$ holds if and only if $a\sqsubseteq_i c$ and $b\sqsubseteq_i c$. Thus there exist $\beta_1, \beta_2, \gamma\in \Aut(\Gamma)$ such that $\beta_i{\upharpoonright}_{A_i}= (f\circ \alpha_i){\upharpoonright}_{A_i}$ for $i\in \{1,2\}$ and $\gamma[(f\circ \alpha_1)[A_1]]=(f\circ \alpha_2)[A_2]$. Hence, $\beta_2^{-1}\circ \gamma\circ \beta_1[A_1]= A_2$. As $\Aut(\Gamma)$ is $n$-set transitive for all $n\geq 1$, the assertion follows from Cameron's theorem in \cite{Cameron5}. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:groups}] Let $\Gamma'$ be the structure that we obtain from $\Gamma$ by adding all relations that are first-order definable in $\Gamma$. Then $\Aut(\Gamma)= \Aut(\Gamma')$ and $\overline{\Aut(\Gamma')}= \Emb(\Gamma')=\End(\Gamma')$. The theorem follows by applying Proposition~\ref{prop:embeddings} and Lemmas~\ref{lem:thinemb}, \ref{lem:redq}, \ref{lem:flatemb}, and \ref{lem:funny} to the structure $\Gamma'$. \end{proof} \section{Applications in Constraint Satisfaction}\label{sect:csp} Let $\Gamma$ be a structure with a finite relational signature $\tau$. Then $\Csp(\Gamma)$, the \emph{constraint satisfaction problem} for $\Gamma$, is the computational problem of deciding for a given finite $\tau$-structure whether there exists a homomorphism to $\Gamma$. There are several computational problems in the literature that can be formulated as CSPs for reducts of $(\mathbb S_2;\leq)$. Let $\Gamma_b$ be the reduct of $(\mathbb S_2;\leq)$ that contains precisely the \emph{binary} relations with a first-order definition in $(\mathbb S_2;\leq)$. Then $\Csp(\Gamma_b)$ has been studied under the name ``network consistency problem for the branching-time relation algebra'' by Hirsch~\cite{Hirsch}; it is shown there that the problem can be solved in polynomial time. For concreteness, we mention that in particular the problem $\Csp(\mathbb S_2;<,\perp)$ can be solved in polynomial time, since it can be seen as a special case of $\Csp(\Gamma_b)$. Broxvall and Jonsson~\cite{BroxvallJonsson} found a better algorithm for $\Csp(\Gamma_b)$ which improves the running time from $O(n^5)$ to $O(n^{3.326})$, where $n$ is the number of variables in the input. Yet another algorithm with a running time that is quadratic in the input size has been described in~\cite{BodirskyKutz}. 
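To make the definition of $\Csp(\Gamma)$ concrete, the following is a small illustrative sketch of ours (not taken from the cited literature; all identifiers are invented for the example): for \emph{finite} structures, the existence of a homomorphism can be decided by brute force. For an infinite template such as $\Gamma_b$ this direct search is unavailable, which is one motivation for the structural analysis of templates carried out in this paper.

```python
from itertools import product

def has_homomorphism(dom_a, rels_a, dom_b, rels_b):
    """Decide whether the finite structure A maps homomorphically to B.

    rels_a, rels_b: dicts mapping relation symbols to sets of tuples.
    A map h is a homomorphism iff t in R^A implies h(t) in R^B for every R.
    """
    for values in product(dom_b, repeat=len(dom_a)):
        h = dict(zip(dom_a, values))
        if all(tuple(h[x] for x in t) in rels_b[r]
               for r, tuples in rels_a.items() for t in tuples):
            return True
    return False

# Toy template: the strict order of a 2-element chain.
template_dom = [0, 1]
template = {"<": {(0, 1)}}

# Two toy instances over variables x, y, z.
instance_dom = ["x", "y", "z"]
sat = {"<": {("x", "y")}}                 # x < y : satisfiable
unsat = {"<": {("x", "y"), ("y", "x")}}   # x < y and y < x : unsatisfiable
```

Here `has_homomorphism(instance_dom, sat, template_dom, template)` returns `True`, while the cyclic instance `unsat` admits no homomorphism.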
The complexity of the CSP of \emph{disjunctive reducts} of $(\mathbb S_2;\leq,\prec)$ has been determined in~\cite{BroxvallJonsson}; a \emph{disjunctive reduct} is a reduct each of whose relations can be defined by a disjunction of the basic relations in such a way that the disjuncts do not share common variables. Independently from this line of research, motivated by research in computational linguistics, Cornell~\cite{Cornell} studied the reduct $\Gamma_c$ of $(\mathbb S_2;\leq,\prec)$ containing all binary relations that are first-order definable over $(\mathbb S_2;\leq,\prec)$. Contrary to a conjecture of Cornell, it has been shown that $\Csp(\Gamma_c)$ (and in fact already $\Csp(\mathbb S_2;<,\perp)$) cannot be solved by establishing \emph{path consistency}~\cite{phylo-long}. However, $\Csp(\Gamma_c)$ can be solved in polynomial time~\cite{BodirskyKutzAI}. It is a natural but challenging research question to ask for a classification of the complexity of $\Csp(\Gamma)$ for all reducts of $(\mathbb S_2;\leq)$. In this context, we call the reducts of $(\mathbb S_2;\leq)$ \emph{tree description constraint languages}. Such classifications have been obtained for the reducts of $({\mathbb Q};\leq)$ and the reducts of the random graph~\cite{tcsps-journal,BodPin-Schaefer-both}. In both these previous classifications, the classification of the model-complete cores of the reducts played a central role. Our Theorem~\ref{thm:4case} shows that every tree description language belongs to at least one out of four cases; in cases one and two, the CSP has already been classified. It is easy to show (and this will appear in forthcoming work) that the CSP is NP-hard when case three of Theorem~\ref{thm:4case} applies. It is also easy to see (again we have to refer to forthcoming work) that in case four of Theorem~\ref{thm:4case}, adding the relations $<$ and $\perp$ to $\Gamma$ does not change the computational complexity of the CSP. 
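For illustration, here is a sketch of ours of the path-consistency procedure mentioned above, in the simplest setting of the point algebra over a linear order (the branching-time algebra has a larger set of base relations and a different composition table; the names below are invented for the example). The procedure repeatedly refines each relation $R_{ij}$ to $R_{ij}\cap(R_{ik}\circ R_{kj})$ until a fixed point is reached, reporting an inconsistency when some relation becomes empty; as noted above, for $\Csp(\mathbb S_2;<,\perp)$ this procedure is provably incomplete.

```python
from itertools import product

# Basic relations of the point algebra over a linear order.
BASIC = {"<", "=", ">"}
CONVERSE = {"<": ">", "=": "=", ">": "<"}

def compose_basic(r, s):
    """Composition of two basic point-algebra relations."""
    if r == "=":
        return {s}
    if s == "=":
        return {r}
    if r == s:
        return {r}        # "<" after "<" forces "<"; likewise ">"
    return set(BASIC)     # "<" followed by ">" gives no information

def path_consistency(n, constraints):
    """Refine an n-variable network; return the matrix, or None if inconsistent.

    constraints: dict mapping ordered pairs (i, j) to sets of basic relations;
    unmentioned pairs are unconstrained.
    """
    R = {(i, j): set(BASIC) for i in range(n) for j in range(n) if i != j}
    for (i, j), rel in constraints.items():
        R[(i, j)] &= rel
        R[(j, i)] &= {CONVERSE[r] for r in rel}
    changed = True
    while changed:
        changed = False
        for i, j, k in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            comp = set()
            for r in R[(i, k)]:
                for s in R[(k, j)]:
                    comp |= compose_basic(r, s)
            refined = R[(i, j)] & comp
            if not refined:
                return None        # empty relation: inconsistent network
            if refined != R[(i, j)]:
                R[(i, j)] = refined
                R[(j, i)] = {CONVERSE[r] for r in refined}
                changed = True
    return R
```

For the point algebra this procedure is a decision procedure: the cyclic network $x_0<x_1$, $x_1<x_2$, $x_2<x_0$ is detected as inconsistent, and from $x_0<x_1$, $x_1<x_2$ the relation $x_0<x_2$ is inferred.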
The corresponding fact for the reducts of $({\mathbb Q};\leq)$ and the reducts of the random graph has been extremely useful in the subsequent classification. Therefore, the present paper and in particular Theorem~\ref{thm:4case} are highly relevant for the study of the CSP for tree description constraint languages.
\section{Introduction}\label{intro} Variational calculus is a natural language for describing the statics of mechanical systems. All mathematical objects that are used in statics have direct physical interpretations. Similar mathematical tools are also widely used in other theories, such as the dynamics of particles or field theories; however, in these cases the links between the mathematical language and the physical system are more sophisticated. In classical mechanics, variational calculus was first used for deriving the equations of motion of mechanical systems in the configuration space, i.e. the Euler-Lagrange equations. In most frameworks (e.g. Klein's approach), deriving the Euler-Lagrange equations is the main objective. On the other hand, in numerous works by W. M. Tulczyjew (for example in the book \cite{Tu3} and the papers \cite{Tu6,Tu7,Tu9,Tu10}) one may find another philosophy of using variational calculus in mechanics and field theories, in which the phase dynamics plays a fundamental role. This philosophy, leading to the geometrical structure known as the {\it Tulczyjew triple}, is gaining more and more recognition among theoretical physicists and mathematicians. The Tulczyjew triple has proved to be very useful in describing mechanical systems, including those with singular Lagrangians or subject to constraints \cite{TU1}. Starting from basic concepts of variational calculus, we will construct the Tulczyjew triple for first-order field theory. An important feature of our approach is that we do not postulate the ingredients of the theory {\it ad hoc}, but obtain them as unavoidable consequences of the variational calculus. This picture of field theory is covariant and complete, containing not only the Lagrangian formalism and the Euler-Lagrange equations but also the phase space, the phase dynamics and the Hamiltonian formalism.
Since the configuration space turns out to be an affine bundle, we have to use affine geometry, in particular the notion of affine duality and affine phase space. In our formulation, the two maps $\alpha$ and $\beta$ which constitute the Tulczyjew triple are morphisms of double structures of affine-vector bundles. We discuss also the Legendre transformation, i.e. the transition between the Lagrangian and the Hamiltonian formulation of the first-order field theory. In this survey, based on \cite{G}, we will present the Tulczyjew triple for first-order field theories in a very general setting, i.e. in the case where fields are sections of some differential fibration, with no additional structure assumed. Our paper is organized as follows. Since we have to use some affine geometry, we start, in section \ref{sec:1}, with a short sketch of some affine constructions and theorem \ref{th:1} describing a canonical isomorphism of certain phase spaces. Then, we present variational calculus in statics (section \ref{sec:3}) and its application to mechanics (sections \ref{sec:3a} and \ref{sec:4}). In section \ref{sec:7}, we describe the classical Tulczyjew triple for mechanics, and in section \ref{sec:7a} for mechanics on algebroids. In the next step, we pass to field theory. Like in mechanics, we have to start from some field theoretical construction for a bounded domain to find correct mathematical representations for certain physical quantities (section \ref{sec:9}). Lagrangian and Hamiltonian sides of the field-theoretical triple are constructed in sections \ref{sec:11} and \ref{sec:12} respectively. The remaining sections are devoted to examples. There is also an appendix containing the proof of theorem \ref{th:1}. Note finally that classical field theory is usually associated with the concept of a multisymplectic structure. The multisymplectic approach appeared first in the papers of the `Polish school' \cite{Ga,KS,KT,Tu5}. 
Then, it was developed by Gotay, Isenberg, Marsden, and others in \cite{GIMa,GIMb}. The original idea of the multisymplectic structure has been thoroughly investigated and developed by many contemporary authors, see e.g. \cite{CIL,CCI1,EM,FP1,FP2}. The Tulczyjew triple in the context of multisymplectic field theories appeared recently in \cite{LMS} and \cite{CGM}. A similar picture, however with differences on the Hamiltonian side, can be found in \cite{GM} (see also \cite{GMS,Kr}). \section{Affine phase spaces}\label{sec:1} Affine geometry turned out to be an important tool in mechanics and field theory. Let us begin with a short review of affine structures that will be needed later on. Details and further fundamental observations can be found e.g. in \cite{GGU1}. Let $A$ be an affine space modeled on a vector space ${\scriptscriptstyle v}(A)$. This means that the commutative group ${\scriptscriptstyle v}(A)$ acts freely and transitively on $A$ by addition $$A\times{\scriptscriptstyle v}(A)\ni (a,v)\mapsto a+v\,.$$ In other words, the naturally defined differences $a_1-a_2$ of points of $A$ belong to ${\scriptscriptstyle v}(A)$. On affine spaces there are defined \emph{affine combinations} of points, $ta_1+(1-t)a_2$, for all $a_1,a_2\in A$ and $t\in\mathbb R$. Note that {\it convex combinations} are those affine combinations $ta_1+(1-t)a_2$ for which $0\le t\le 1$. All this can be extended to affine bundles $\zt:A\to N$ modelled on a vector bundle ${\scriptscriptstyle v}(\zt):{\scriptscriptstyle v}(A)\to N$.
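As a toy illustration of these axioms (a minimal sketch with hypothetical class names, modelling an affine space over $\mathbb{R}^n$), one can implement points that admit differences and affine combinations, but no absolute addition of points:

```python
# A minimal model of an affine space: points allow differences (yielding
# model-space vectors) and translations, but no origin is distinguished.
class AffinePoint:
    def __init__(self, coords):
        self.coords = tuple(coords)

    def __sub__(self, other):
        # point - point = vector in the model vector space v(A)
        return tuple(a - b for a, b in zip(self.coords, other.coords))

    def translate(self, v):
        # point + vector = point (the free and transitive action of v(A))
        return AffinePoint(a + x for a, x in zip(self.coords, v))

def affine_combination(t, a1, a2):
    # t*a1 + (1-t)*a2, defined via a2 + t*(a1 - a2), so no origin is needed
    return a2.translate(tuple(t * d for d in (a1 - a2)))

a1, a2 = AffinePoint([1.0, 0.0]), AffinePoint([0.0, 2.0])
mid = affine_combination(0.5, a1, a2)
assert mid.coords == (0.5, 1.0)
```

For $0\le t\le 1$ the same helper produces exactly the convex combinations mentioned above.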
Any vector bundle is an affine bundle and fixing a section $a_0$ of $A$ induces an isomorphism of affine bundles $A$ and ${\scriptscriptstyle v}(A)$, $${\scriptscriptstyle v}(A)\ni v\mapsto a_0+v\in A\,.$$ Using coordinates $(x^i)$ in the open set $\mathcal{O}\subset N$, a local section $a_0: \mathcal{O}\rightarrow A$, and local base of sections $e_a:\mathcal{O}\rightarrow {\scriptscriptstyle v}(A)$, we can construct an adapted coordinate system $(x^i, y^a)$ in $\tau^{-1}(\mathcal{O})$. An element $a\in A$ can be written as $a=a_0(\tau(a))+y^ae_a(\tau(a))$. \begin{definition}\label{def:av} An \emph{AV-bundle} is an affine bundle $\zz:{\scriptscriptstyle Z}\to \mathcal{M}$ modeled on a trivial one-dimensional vector bundle $\mathcal{M}\times Y$, where $Y$ is a one-dimensional vector space. In applications $Y$ will be either $\mathbb R$ or the one dimensional vector space of top forms on a manifold. \end{definition} For the affine space $A_q=\tau^{-1}(q)$, we consider its {\it affine dual}, i.e. the space $A_q^\dag(Y)$ of all affine maps from $A_q$ to the one-dimensional vector space $Y$. \begin{definition}\label{def:affdual} The bundle $\zt^\dag:A^\dag(Y) \longrightarrow N$, where $A^\dag(Y)=\sss{Aff}(A,Y)$ is the set of all affine maps on fibres of $\zt$, is called the \emph{affine dual bundle} with values in $Y$. Instead of $A^\dag(\mathbb R)$ we will write simply $A^\dag$. \end{definition} Every affine map $\phi:A_1\to A_2$ has a well-defined {\it linear part}, ${\scriptscriptstyle v}(\phi):{\scriptscriptstyle v}(A_1)\to{\scriptscriptstyle v}(A_2)$, therefore there is a projection \begin{equation}\label{theta}\zvy: A^\dag(Y)\longrightarrow {\scriptscriptstyle v}(A)^\ast\otimes_N (N\times Y)=\sss{Hom}({\scriptscriptstyle v}(A),Y).\end{equation} The above bundle is a canonical example of an AV-bundle which is modeled on \begin{equation}\label{hb} ({\scriptscriptstyle v}(A)^\ast\otimes_N (N\times Y))\times_NY\,. 
\end{equation} In the following we shall write ${\scriptscriptstyle v}(A)^\ast\otimes_N Y$ instead of ${\scriptscriptstyle v}(A)^\ast\otimes_N (N\times Y)$ to simplify the notation. The fibre of ${\scriptscriptstyle v}(A)^\ast\otimes_N Y$ over a point $x\in N$ is ${\scriptscriptstyle v}(A_x)^\ast\otimes Y$. Using the dual base sections $\ze^a:\mathcal{O}\rightarrow {\scriptscriptstyle v}(A)^\ast$ and a base element $u$ of $Y$, we construct an adapted coordinate system $(x^i, p_a, r)$ on $(\zt^\dag)^{-1}(\mathcal{O})$. An affine map $\varphi$ on $A_q$ can be written as $\varphi(a)=(p_a\ze^a(a-a_0(q))+r)u$. The map $\zvy$ in coordinates reads $(x^i,p_a,r)\mapsto(x^i,p_a)$. In many constructions functions on a manifold can be replaced by sections of an AV-bundle over that manifold. We can also obtain an affine analog of the differential of a function and an affine version of the cotangent bundle as follows. Given an AV-bundle $\zz:{\scriptscriptstyle Z}\to \mathcal{M}$ and $F_1,F_2\in\sss{Sec}({\scriptscriptstyle Z})$, $F_1-F_2$ may be seen as a map $$F_1-F_2:\mathcal{M}\to Y\,,$$ so the differential $$\mathrm{d}(F_1-F_2)(m)\in \textsf{T}^\ast_m\mathcal{M}\otimes Y$$ is well defined.
\begin{definition}\label{def:phase} The \emph{phase bundle} ${\scriptscriptstyle P}{\scriptscriptstyle Z}$ of an AV-bundle ${\scriptscriptstyle Z}$ is the affine bundle of cosets ${\underline{\mathrm{d}}} F(m)=[(m,F)]$ (`affine differentials') of the equivalence relation \begin{equation*}(m_1,F_1)\sim(m_2,F_2)\ \Leftrightarrow\ m_1=m_2\,,\quad \mathrm{d}(F_1-F_2)(m_1)=0\,.\end{equation*} \end{definition} Fixing a section $F_0:\mathcal{M}\to {\scriptscriptstyle Z}$ and a basic vector $u^\ast\in Y^\ast$, we get a diffeomorphism \begin{equation*}\psi:{\scriptscriptstyle P}{\scriptscriptstyle Z}\to\textsf{T}^*\mathcal{M}\,, \quad {\underline{\mathrm{d}}} F(m)\mapsto \mathrm{d}(u^\ast(F-F_0))(m)\,.\end{equation*} As the canonical symplectic form on $\textsf{T}^*\mathcal{M}$ is linear and invariant with respect to translations by closed 1-forms, its pull-back does not depend on the choice of $F_0$ or $u^\ast$, and turns ${\scriptscriptstyle P}{\scriptscriptstyle Z}$ into a canonically symplectic manifold. Now, let us consider a finite-dimensional vector bundle $V$ over a manifold $N$ and choose a vector subbundle $W$ over $N$. The bundle $\tau: V\rightarrow V\slash W$, where $\tau$ is the canonical projection from $V$ onto the quotient bundle $V\slash W$, is an affine bundle modeled on the trivial bundle $${\scriptscriptstyle v}(\tau)=pr_1:V\slash W\times_N W\rightarrow V\slash W\,.$$ We can therefore consider its affine dual $V^\dag_W\rightarrow V\slash W$. We observe that the bundle $$V^\dag_W\rightarrow V\slash W\times_N W^\ast$$ is an AV-bundle. The corresponding phase bundle ${\scriptscriptstyle P} V^\dag_W$ is then canonically a symplectic manifold which, somewhat unexpectedly, can be identified as follows. \begin{theorem}\label{th:1} There is a canonical symplectomorphism ${\scriptscriptstyle P} V^\dag_W\simeq \textsf{T}^*V$. In particular, if \ $W\subset V$ are just vector spaces, i.e.
vector bundles over single points, we have ${\scriptscriptstyle P} V^\dag_W\simeq V\times V^\ast$. \end{theorem} The proof of the theorem can be found in the Appendix. \section{Variational calculus} \label{sec:2} \subsection{Statics} \label{sec:3} Variational calculus used in mechanics and field theory is based on ideas from statics. We assume that the set of configurations of the static system we describe is a differential manifold $Q$. \begin{pict}[ht] \begin{floatrow}[2] \floatbox{pict}[0.47\textwidth] {\caption{Static system...}\label{fig:1}} {\centering\includegraphics[width=0.46\textwidth]{rys1.pdf}} \floatbox{pict}[0.47\textwidth] {\caption{and its mathematical model}\label{fig:2}} {\centering\includegraphics[width=0.35\textwidth]{rys2.pdf}} \end{floatrow} \end{pict} We are usually interested in equilibrium configurations of an isolated system, as well as of a system interacting with other static systems. The system, alone or in interaction, is examined by performing processes and calculating the cost of each process. We assume that all the processes are {\it quasi-static}, i.e. they are slow enough to produce negligible dynamical effects. Every process can be represented by a one-dimensional smooth oriented submanifold with boundary (Fig.\ref{fig:3}). \begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.47\textwidth] {\caption{Quasistatic processes}\label{fig:3}} {\centering\includegraphics[width=.4\textwidth]{rys3.pdf}} \end{floatrow} \end{pict} It may happen that, for some reason, not all the processes are admissible, i.e. the system is constrained. All the information about the system is therefore encoded in three objects: the configuration manifold $Q$, the set of all admissible processes, and the cost function that assigns a real number to every process. The cost function should fulfill some additional conditions, e.g.
it should be additive in the sense that if we break a process into two subprocesses, then the cost of the whole process should be equal to the sum of the costs of the two subprocesses. Usually we assume that the cost function is local, i.e. for each process it is an integral of a certain positively homogeneous function $W$ on $\textsf{T} Q$ or, in the case of a constrained system, on some subset $\Delta\subset\textsf{T} Q$. Vectors tangent to admissible processes are called {\it admissible virtual displacements} (Fig.\ref{fig:4}). \begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.47\textwidth] {\caption{Virtual displacements}\label{fig:4}} {\centering\includegraphics[width=.4\textwidth]{rys5.pdf}} \end{floatrow} \end{pict} \begin{definition}\label{def:equilibrium} A point $q\in Q$ is an {\it equilibrium point} of the system if for all processes starting at $q$ the cost function is non-negative, at least initially. \end{definition} The first-order necessary condition for the equilibrium says that a point $q$ is an equilibrium point of the system if $$W(\delta q)\geq 0$$ for all vectors $\delta q\in \Delta$. Interactions between systems are described by composite systems. We can compose systems that have the same configuration space $Q$. The composite system (Fig.\ref{fig:5}) is described by the intersection of the sets of admissible processes and the sum of the cost functions $W=W_1+W_2$. From now on, the subscript $1$ will denote `our system' and the subscript $2$ an external system we use for collecting information about our system. The interaction with an external system is usually described in terms of forces $\varphi\in\textsf{T}^*Q$.
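As a one-dimensional toy illustration (a minimal symbolic sketch; the energy functions are arbitrary choices), consider the composite of two systems whose cost functions are differentials of energies, in the sense of the regular systems introduced next. Since all processes are admissible, the equilibrium points are exactly the points where $\mathrm{d}(U_1+U_2)$ vanishes:

```python
import sympy as sp

q = sp.symbols('q')

# Internal energies of two regular systems on Q = R (illustrative choices);
# the composite system has cost function W = W1 + W2 = d(U1 + U2).
U1 = sp.Rational(1, 2) * q**2
U2 = -q

# Equilibrium points: the total differential vanishes, i.e. the force
# phi = -dU2 exerted by the second system balances dU1.
equilibria = sp.solve(sp.diff(U1 + U2, q), q)
assert equilibria == [1]
```

In the language below: the force $\varphi=-\mathrm{d} U_2=1$ lies in the constitutive set $\mathrm{d} U_1(Q)$ precisely at $q=1$.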
\begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.47\textwidth] {\caption{Composite system}\label{fig:5}} {\centering\includegraphics[width=.4\textwidth]{rys4.pdf}} \end{floatrow} \end{pict} The forces can be understood as distinguished systems, called {\it regular}, for which all the processes are admissible and the function $W_2$ is the differential of a certain function $U:Q\rightarrow \mathbb R$. A regular system at a point $q$ is represented by $\varphi=-\mathrm{d} U(q)$. The `minus' sign comes from the fact that the system is external and changing its configuration has just the opposite cost for us. \begin{definition}\label{def:constitutive} The subset $\mathcal{C}\subset\textsf{T}^\ast Q$ of all external forces which are in equilibrium with our system is called \emph{the constitutive set}. \end{definition} If our system is regular, i.e. $W_1(\zd q)=\langle\mathrm{d} U,\zd q\rangle$ for a function $U:Q\to\mathbb R$, then the constitutive set is $\mathcal{C}=\mathrm{d} U(Q)$. In mechanics and field theory the most important types of systems are the analogs of regular systems and of regular systems with constraints. To apply the ideas coming from statics to other theories, we shall specify for every theory \begin{itemize} \item configurations $Q$, \item processes (or at least infinitesimal processes), \item functions on $Q$ (to define regular systems), \item covectors $\textsf{T}^\ast Q$ (to define constitutive sets). \end{itemize} \subsection{Mechanics for finite time interval} \label{sec:3a} Let $M$ be a manifold of positions of a mechanical system. We will use smooth paths in $M$ and first-order Lagrangians $L:\textsf{T} M\to\mathbb R$. We also fix the time interval $[t_0,t_1]$.
{\it Configurations} $q$ are pieces of smooth paths (Fig.\ref{fig:6}) $$q:[t_0, t_1]\rightarrow M.$$ \begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.47\textwidth] {\caption{A configuration}\label{fig:6}} {\centering\includegraphics[width=.35\textwidth]{rys6.pdf}} \end{floatrow} \end{pict}\noindent The set of all configurations will be denoted by $Q$. Since $Q$ is not a standard manifold, we have to introduce `by hand' the concepts of a process in $Q$, a smooth function on $Q$, and a vector tangent to $Q$. Smooth functions will be associated with Lagrangians, i.e. for any function $L:\textsf{T} M\to\mathbb R$ we define an action functional $S:Q\rightarrow \mathbb R$ by $$S(q)=\int_{t_0}^{t_1} L(\dot q)\,dt.$$ Parameterized processes (Fig.\ref{fig:7}) in $Q$ come from homotopies $q_s(t)=\chi(t,s)$, i.e. smooth maps $$\chi:\mathbb R^2\supset J\times I\rightarrow M\,,$$ where $I$ is some neighborhood of zero in $\mathbb R$ and $J$ contains $[t_0, t_1]$. \begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.4\textwidth] {\caption{A curve in configurations}\label{fig:7}} {\centering\includegraphics[width=.2\textwidth]{rys7.pdf}} \end{floatrow} \end{pict} Smooth curves and smooth functions are defined in such a way that the composition of a curve with a function is a real function, smooth in the usual sense. We can therefore employ the standard definition of tangent vectors and covectors as equivalence classes of curves and equivalence classes of functions respectively.
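For concreteness, the action functional $S$ can be evaluated symbolically on a specific path; a sketch where the Lagrangian and the path are arbitrary illustrative choices:

```python
import sympy as sp

t = sp.symbols('t')
t0, t1 = 0, sp.pi

# An illustrative free Lagrangian L(v) = v**2 / 2 evaluated along q(t) = sin(t).
q = sp.sin(t)
L = sp.diff(q, t)**2 / 2

# The action S(q) = int_{t0}^{t1} L(qdot) dt.
S = sp.integrate(L, (t, t0, t1))
assert S == sp.pi / 4
```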
\begin{definition}\label{def:vectors} A {\it vector tangent to $Q$} is an equivalence class of smooth curves with respect to the equivalence relation which says that two curves $q_s$ and $q'_s$ are equivalent if, for $t\in[t_0,t_1]$, $q_0(t)=q'_0(t)$ and, for all smooth functions $S$, $$\frac{\mathrm{d}}{\mathrm{d} s}_{|s=0}S(q_s)=\frac{\mathrm{d}}{\mathrm{d} s}_{|s=0}S(q'_s).$$ \end{definition} \begin{definition}\label{def:functions} A {\it covector tangent to $Q$} is an equivalence class of pairs $(q,S)$ with respect to the equivalence relation which says that two pairs $(q,S)$ and $(q',S')$ are equivalent if, for $t\in[t_0,t_1]$, $q(t)=q'(t)$ and, for all smooth curves $s\mapsto q_s$ such that $q_0=q$, we have $$\frac{\mathrm{d}}{\mathrm{d} s}_{|s=0}S(q_s)=\frac{\mathrm{d}}{\mathrm{d} s}_{|s=0}S'(q_s).$$ \end{definition} Since tangent vectors and covectors defined as equivalence classes are abstract objects hard to work with, we need some convenient representations for them. Performing integration by parts, as when deriving the Euler-Lagrange equations, we get \begin{equation}\label{eq:convenient} \left.\frac{d}{ds}\right|_{s=0}S(q_s)= \int_{t_0}^{t_1}\langle\mathcal{E}L(\ddot q), \delta q\rangle dt+ \left.\frac{}{}\langle\,\mathcal{P} L(\dot q),\delta q\,\rangle\right|^{t_1}_{t_0}\,, \end{equation} where $\mathcal{E}L:\textsf{T}^2M\to\textsf{T}^*M$ and $\mathcal{P} L=\mathrm{d}^\sv L:\textsf{T} M\to\textsf{T}^*M$ are bundle maps and $\delta q:[t_0, t_1]\rightarrow \textsf{T} M$ is a curve in $\textsf{T} M$ whose value at $t$ is a vector tangent to the curve $s\mapsto q_s(t)$ (Fig.\ref{fig:8}). It is easy to see that tangent vectors are in a one-to-one correspondence with paths $\zd q$ in $\textsf{T} M$, and covectors are in a one-to-one correspondence with triples $(f,p_0,p_1)$, $f:[t_0,t_1]\rightarrow\textsf{T}^\ast M$, $p_i\in\textsf{T}^\ast_{q(t_i)}M$ (Fig.\ref{fig:9}).
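The bundle maps $\mathcal{E}L$ and $\mathcal{P} L$ appearing in the boundary formula can be computed symbolically; a sketch for the harmonic oscillator (an illustrative choice of Lagrangian, not taken from the text):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)

# Illustrative first-order Lagrangian: L = m*xdot**2/2 - k*x**2/2.
L = sp.Rational(1, 2)*m*sp.diff(x, t)**2 - sp.Rational(1, 2)*k*x**2

# P L = dL/d(xdot): the momentum along the curve t -> x(t).
p = sp.diff(L, sp.diff(x, t))

# E L = dL/dx - d/dt dL/d(xdot): the Euler-Lagrange (force) term.
EL = sp.diff(L, x) - sp.diff(p, t)

assert p == m*sp.diff(x, t)
assert sp.simplify(EL + k*x + m*sp.diff(x, t, 2)) == 0
```

Setting the Euler-Lagrange term to zero reproduces the expected equation of motion $m\ddot x=-kx$.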
\begin{pict}[ht] \begin{floatrow}[2] \floatbox{pict}[0.47\textwidth] {\caption{Tangent vectors}\label{fig:8}} {\centering\includegraphics[width=0.2\textwidth]{rys8.pdf}} \floatbox{pict}[0.47\textwidth] {\caption{Covectors}\label{fig:9}} {\centering\includegraphics[width=0.2\textwidth]{rys9.pdf}} \end{floatrow} \end{pict} We have found another representation of covectors which is referred to by Tulczyjew and coworkers \cite{TU2} as a {\it Liouville structure}, $$\alpha: \mathbb{P}Q=\{(f,p_0,p_1)\}\longrightarrow \textsf{T}^\ast Q\,. $$ The mechanical system with Lagrangian $L$ is, from a statical point of view, a regular system with cost function given by $\mathrm{d} S$. The constitutive set is therefore $\mathcal{C}=\mathrm{d} S(Q)$. We prefer, however, to use convenient representations and to call the appropriate set the dynamics of a system. \begin{definition}\label{def:dynamic} The {\it (phase) dynamics} of a mechanical system is a subset $\mathcal{D}$ of $\mathbb{P}Q=\{(f,p_0,p_1)\}$ defined by $$\mathcal{D}=\alpha^{-1}(dS(Q)),$$ i.e., $$\mathcal{D}=\left\{(f,p_0,p_1):\;\; f(t)=\mathcal{E}L(\ddot q(t)),\quad p_a=\mathcal{P} L(\dot q(t_a))\,,\ a=0,1\right\}\,.$$ \end{definition} \noindent Explicitly, writing in coordinates, $q=(x^i(t))$, $\dot q=(x^i(t),\dot x^j(t))$, we have $$f_i(t)=\frac{\partial L}{\partial x^i}(\dot q(t))-\frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{\partial L}{\partial \dot{x}^i}(\dot q(t))\right)\,,\quad (p_a)_i=\frac{\partial L}{\partial \dot{x}^i}(\dot q(t_a))\,,\ a=0,1\,.$$ \subsection{Mechanics: Infinitesimal version} \label{sec:4} Mechanics for finite time interval is very useful for creating intuitions but not particularly convenient for analyzing the behavior of the system. For the latter purpose, we need actual differential equations for curves in the phase space and in the configuration space. 
We can get them by passing to the infinitesimal formulation of mechanics, in which $M$ will stand for the manifold of positions of a mechanical system and Lagrangians will be of the first order. As configurations we now choose `infinitesimal pieces of paths', i.e. $Q=\textsf{T} M$: if $x:\mathbb R\rightarrow M$ is a smooth curve, then $q=\dot x(0)$. \begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.4\textwidth] {\caption{Infinitesimal configuration}} {\centering\includegraphics[width=.3\textwidth]{rys10.pdf}} \end{floatrow} \end{pict} This time, the configuration space is a finite-dimensional manifold, therefore we know what smooth curves and functions are, as well as tangent and cotangent spaces. If the system is regular, its cost function is given by $\mathrm{d} L$ and its constitutive set is just $\mathcal{C}=\mathrm{d} L(\textsf{T} M)\subset\textsf{T}^\ast\textsf{T} M$. However, it is interesting to see what becomes of the convenient representatives of vectors and covectors when we pass to the infinitesimal formulation. Let us start with tangent vectors. Vectors tangent to the space of infinitesimal configurations are elements $\delta q$ of $\textsf{T}\sT M$, i.e. vectors tangent to curves in $\textsf{T} M$. Starting from a homotopy $(t,s)\mapsto \chi(t,s)$, we get a configuration $q=\dot\chi(0,0)$ which is the vector tangent to the curve $t\mapsto \chi(t, 0)$ at $t=0$, and the curve $s\mapsto \dot\chi(0,s)$, where $\dot\chi(0,s)$ is the vector tangent to the curve $t\mapsto \chi(t, s)$ at $t=0$ (Fig.\ref{fig:12}). \begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.5\textwidth] {\caption{A curve in infinitesimal configuration}} {\centering\includegraphics[width=.2\textwidth]{rys11.pdf}} \end{floatrow} \end{pict} But in (\ref{eq:convenient}) we went the other way around, i.e. we first differentiated with respect to $s$, obtaining the curve $t\mapsto \delta\chi(t, 0)$ with values in $\textsf{T} M$ (Fig.\ref{fig:13}), and then with respect to $t$.
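The interchange of the two orders of differentiation is legitimate because mixed partial derivatives of a smooth homotopy commute; a quick symbolic check on a concrete homotopy (an illustrative choice):

```python
import sympy as sp

t, s = sp.symbols('t s')
chi = sp.sin(t) * sp.exp(s)   # an illustrative smooth homotopy chi(t, s)

# d/ds of the t-velocity versus d/dt of the s-variation: both produce
# the same mixed derivative, the coordinate fact behind the swap just described.
ds_of_velocity = sp.diff(sp.diff(chi, t), s)
dt_of_variation = sp.diff(sp.diff(chi, s), t)
assert sp.simplify(ds_of_velocity - dt_of_variation) == 0
```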
What we have just described is the well-known canonical involution \begin{equation}\label{eq:kappa} \kappa_M:\textsf{T}\sT M\longrightarrow\textsf{T}\sT M,\qquad \delta\dot\chi(0,0)\longmapsto\left(\delta\chi\right)^\cdot(0,0)\,. \end{equation} A convenient representation for a tangent vector $\delta q=\delta\dot\chi(0,0)$ (infinitesimal variation) is another element of $\textsf{T}\sT M$, namely $\kappa_M(\delta q)=\left(\delta\chi\right)^\cdot(0,0)$. \begin{pict}[ht] \begin{floatrow}[2] \floatbox{pict}[0.47\textwidth] {\caption{A curve in configuration and...}\label{fig:12}} {\centering\includegraphics[width=0.2\textwidth]{rys11.pdf}} \floatbox{pict}[0.47\textwidth] {\caption{...a curve in variation.}\label{fig:13}} {\centering\includegraphics[width=0.2\textwidth]{rys8.pdf}} \end{floatrow} \end{pict} Covectors are of course elements of $\textsf{T}^\ast\textsf{T} M$ and the constitutive set $\mathcal{C}$ is a subset of $\textsf{T}^\ast\textsf{T} M$ given by the differential of a Lagrangian, provided the system is not constrained. However, we can find a better description of $\mathcal{C}$ in terms of convenient representations of covectors. For infinitesimal time interval, the formula (\ref{eq:convenient}) reads \begin{equation}\label{eq:infinitesimal} \langle\mathrm{d} L, \delta q\rangle=\langle \mathcal{E}L(\ddot\chi(0,0)), \delta \chi(0,0)\rangle+ \frac{\mathrm{d}}{\mathrm{d} t}_{|t=0}\langle\mathcal{P}L(\dot\chi(t,0)),\delta \chi(t,0)\rangle\,. \end{equation} On the left hand side there is an evaluation of the covector $\mathrm{d} L$ on the variation $\delta q$ of the infinitesimal configuration $q$, while on the right hand side we have the external force $f(0)=\mathcal{E}L(\ddot\chi(0,0))$ evaluated on the variation of the position $\delta \chi(0,0)$ and the second term involving the momentum. Let us assume that there are no external forces. 
The curve $p: t\mapsto \mathcal{P}L(\dot\chi(t,0))$ takes values in $\textsf{T}^\ast M$, while the curve $\gamma: t\mapsto \delta \chi(t,0)$ takes values in $\textsf{T} M$. The value of $\frac{\mathrm{d}}{\mathrm{d} t}_{|t=0}\langle\mathcal{P}L(\dot\chi(t,0)),\delta \chi(t,0)\rangle$ depends on the values of the curves $\gamma$ and $p$ at $0$ and on the vectors tangent to those curves. It can be understood as a coupling between two vector bundles $\textsf{T}\tau_M: \textsf{T}\sT M\rightarrow \textsf{T} M$ and $\textsf{T}\pi_M: \textsf{T}\sT^\ast M\rightarrow \textsf{T} M$, \begin{equation}\label{eq:eval} \langle\!\langle \dot p, (\delta \chi)^{\cdot}\rangle\!\rangle=\left.\frac{d}{dt}\right|_{t=0}\langle p(t), \delta\chi(t,0)\rangle. \end{equation} With respect to this pairing, the bundle $\textsf{T}\pi_M$ is dual to $\textsf{T}\tau_M$. Since $\kappa_M$ is an isomorphism between $\textsf{T}\tau_M$ and $\tau_{\textsf{T} M}$, we can find the dual isomorphism between the appropriate dual bundles, namely \begin{equation}\label{eq:alpha} \alpha_M:\textsf{T}\sT^\ast M \longrightarrow \textsf{T}^\ast\textsf{T} M\,. \end{equation} Formula (\ref{eq:infinitesimal}) says that a covector $\mathrm{d} L(\dot\gamma)$ can be conveniently represented by a pair $(f,\dot p)$. If the external force is zero, then $\dot p$ and $\mathrm{d} L(\dot\gamma)$ are related by $\alpha_M$. The infinitesimal dynamics of a system with no external forces is therefore $$\textsf{T}\sT^\ast M\supset \mathcal{D}=\alpha_M^{-1}(\mathrm{d} L(\textsf{T} M)).$$ Since $\mathcal{D}$ is a subset of the tangent bundle $\textsf{T}\sT^\ast M$, it can be regarded as an (implicit) first-order differential equation for curves in the phase space $\textsf{T}^\ast M$. \section{Tulczyjew triples} \label{sec:5} In sections \ref{sec:3a} and \ref{sec:4} we have discussed the results of using statical ideas in mechanics. The mechanics for finite time interval provided us with concepts of convenient representations of vectors and covectors.
For the infinitesimal time interval, we have obtained a way of generating differential equations describing the dynamics of a system in the phase space. The results of section \ref{sec:4} can be formulated in an elegant way as the Lagrangian side of the Tulczyjew triple. The Tulczyjew triple for mechanics is built out of maps between $\textsf{T}\sT^\ast M$, $\textsf{T}^\ast\textsf{T}^\ast M$, and $\textsf{T}^\ast\textsf{T} M$, which are examples of double vector bundles. \subsection{Double vector bundles} \label{sec:6} The following geometric definition (cf. \cite{GR,GR1}) is a simplification of the original categorical concept of a double vector bundle due to Pradines \cite{Pr1}, see also \cite{KU,Ma}. \begin{definition} A \emph{double vector bundle} is a manifold with two compatible vector bundle structures. Compatibility means that the Euler vector fields (generators of homotheties), associated with the two structures, commute. \end{definition} This definition implies that, with every double vector bundle, we can associate the following diagram of vector bundles in which both pairs of parallel arrows form vector bundle morphisms: \begin{equation}\label{F1.3x}\xymatrix{ & K\ar[dr]^{\tau_2}\ar[dl]_{\tau_1} & \\ K_1\ar[dr]^{\tau'_2} & & K_2\ar[dl]_{\tau'_1} \\ & M & } \end{equation} The first example of a double vector bundle mentioned in these notes is $\textsf{T}\sT M$ with two projections over $\textsf{T} M$: the canonical one, $\zt_{\textsf{T} M}$, which associates to a vector tangent to $\textsf{T} M$ the point in $\textsf{T} M$ where the vector is attached, and the tangent one, $\textsf{T}\tau_M$, which associates to a vector tangent to $\textsf{T} M$ its tangent projection on $\textsf{T} M$. Local coordinates $(x^i)$ in an open subset $\mathcal{O}\subset M$ can be used to define adapted coordinates $(x^i,\dot x^j)$ in $\tau_M^{-1}(\mathcal{O})$ and $(x^i,\dot x^j,\zd x^k,\delta\dot{x}^l)$ in an appropriate subset of $\textsf{T}\sT M$.
The vector fields $\nabla_1$ and $\nabla_2$ are the two commuting Euler vector fields for the two compatible vector bundle structures in $\textsf{T}\sT M$. \begin{minipage}[c]{0.45\linewidth} $$\xymatrix@R+5pt{ & \textsf{T}\sT M\ar[dr]^{\textsf{T}\zt_M}\ar[dl]_{\zt_{\textsf{T} M}} & \\ \textsf{T} M\ar[dr]^{\zt_M} & & \textsf{T} M\ar[dl]^{\zt_M} \\ & M & }$$ \end{minipage} \hspace{0.5cm} \begin{minipage}[c]{0.45\linewidth} \begin{align*} \nabla_1&=\zd{x}^i\partial_{\zd{x}^i}+\delta\dot{x}^j\partial_{\delta\dot{x}^j}\,, \\ \nabla_2&=\dot{x}^i\partial_{\dot{x}^i}+\delta\dot{x}^j\partial_{\delta\dot{x}^j}\,. \end{align*} \end{minipage} The diffeomorphism $\zk_M:\textsf{T}\sT M\to\textsf{T}\sT M$ interchanges the two vector bundle structures, $$(x^i,\dot{x}^j,\zd x^k,\delta\dot{x}^l)\mapsto(x^i,\zd x^k,\dot{x}^j,\delta\dot{x}^l)\,.$$ It contains also the information about the bracket of vector fields. More general examples of double vector bundles are: the tangent bundle $\textsf{T} E$ and the cotangent bundle $\textsf{T}^\ast E$ for a vector bundle $\zt: E\rightarrow M$. In coordinates $(x^i, y^a)$ in $E$ and $(x^i, y^a, \dot x^j, \dot y^b)$ in $\textsf{T} E$, we get \begin{minipage}[c]{0.45\linewidth} $$\xymatrix@R+5pt{ & \textsf{T} E\ar[dr]^{\textsf{T}\zt}\ar[dl]_{\tau_{E}} & \\ E\ar[dr]^{\zt} & & \textsf{T} M\ar[dl]^{\tau_M} \\ & M & }$$ \end{minipage} \hspace{0.5cm} \begin{minipage}[c]{0.45\linewidth} \begin{align*} \tau_{E}:\textsf{T} E&\longrightarrow E \\ (x^i,y^a, \dot x^j,\dot y^b)&\longmapsto (x^i,y^a) \\ \quad\\ \textsf{T}\tau: \textsf{T} E&\longrightarrow \textsf{T} M \\ (x^i,y^a, \dot x^j,\dot y^b)&\longmapsto (x^i,\dot x^j) \end{align*} \end{minipage} \medskip\noindent For the cotangent bundle $\textsf{T}^\ast E$, we use coordinates $(x^i,y^a,p_j,\tx{i}_b)$. 
The structure of $\textsf{T}^\ast E$ as a double vector bundle is the following: \begin{minipage}[c]{0.45\linewidth} $$\xymatrix@R+5pt{ & \textsf{T}^\ast E\ar[dr]^{\zz_E}\ar[dl]_{\zt_{E}} & \\ E\ar[dr]^{\zt} & & E^*\ar[dl]^{\pi} \\ & M & }$$ \end{minipage} \hspace{0.5cm} \begin{minipage}[c]{0.45\linewidth} \begin{align*} \pi_E: \textsf{T}^\ast E&\longrightarrow E \\ (x^i,y^a,p_j,\tx{i}_b)&\longmapsto (x^i,y^a)\\ \quad \\ \zz_E: \textsf{T}^\ast E&\longrightarrow E^* \\ (x^i,y^a,p_j,\tx{i}_b)&\longmapsto (x^i,\tx{i}_a) \end{align*} \end{minipage} \medskip\noindent The projection $\zz_E$ is constructed as follows. Let us observe that vectors tangent to the fibre of a vector bundle can be identified with elements of the fibre itself, because they are just vectors tangent to a vector space. Every covector $\varphi\in\textsf{T}^\ast E$ restricted to vectors tangent to the fibre defines an element of the space dual to the fibre. The projection $\zeta_E$ associates to a covector $\varphi$ its restriction to vectors tangent to the fibre. Replacing the bundle $\tau$ with its dual $\pi: E^\ast\rightarrow M$ in the diagram for $\textsf{T} E$, we get an appropriate diagram for $\textsf{T} E^\ast$ with projections $\textsf{T}\pi$ on $\textsf{T} M$ and $\zt_{E^\ast}$ on $E^\ast$. Replacing the bundle $\tau$ with its dual $\pi$ in the diagram for $\textsf{T}^\ast E$, we get an appropriate diagram for $\textsf{T}^\ast E^\ast$ with projections $\pi_{E^\ast}$ on $E^\ast$ and $\zz_{E^\ast}$ on $E$. 
Let us notice, that the diagrams for $\textsf{T}^\ast E$ and $\textsf{T}^\ast E^\ast$ are very similar: \medskip \begin{minipage}[c]{0.45\linewidth} $$\xymatrix@R+5pt{ & \textsf{T}^\ast E^\ast\ar[dr]^{\zz_{E^*}}\ar[dl]_{\pi_{E^\ast}} & \\ E^\ast\ar[dr]^{\pi} & & E\ar[dl]^{\tau} \\ & M & }$$ \end{minipage} \hspace{0.5cm} \begin{minipage}[c]{0.45\linewidth} $$\xymatrix@R+5pt{ & \textsf{T}^\ast E\ar[dr]^{\pi_E}\ar[dl]_{\zz_E} & \\ E^\ast\ar[dr]^{\pi} & & E\ar[dl]^{\tau} \\ & M & }$$ \end{minipage} \medskip \noindent Actually, the double vector bundles $\textsf{T}^\ast E$ and $\textsf{T}^\ast E^\ast$ are canonically isomorphic. The isomorphism \begin{equation}\label{eq:izoR} \mathcal{R}_E:\textsf{T}^\ast E\to\textsf{T}^* E^* \end{equation} is also an anti-symplectomorphism and an isomorphism of double vector bundles. The graph of $\mathcal{R}_E$ is the Lagrangian submanifold in $(\textsf{T}^\ast E\times\textsf{T}^\ast E^\ast, \omega_E+\omega_{E^\ast})$ generated by the evaluation of covectors and vectors $$E\times_M E^\ast\ni(e,p)\longmapsto p(e)\in\mathbb R\,.$$ In coordinates $(x^i,y^a,p_j,\tx{i}_a)$ in $\textsf{T}^\ast E$ and $(x^i,\tx{i}_a,p_j,y^b)$ in $\textsf{T}^\ast E^\ast$, the isomorphism $\mathcal{R}_E$ reads $$\mathcal{R}_E: (x^i, y^a, p_j,\xi_b)\longmapsto (x^i,\xi_b,-p_j,y^a)\,.$$ \subsection{The classical Tulczyjew triple} \label{sec:7} Now we are ready to present the Lagrangian part of the Tulczyjew triple. It consists of the map $\alpha_M$ defined in section \ref{sec:4}. The map $\alpha_M$ is an isomorphism of double vector bundles $\textsf{T}\sT^\ast M$ and $\textsf{T}^\ast\textsf{T} M$. Both total spaces are symplectic manifolds. The map $\alpha_M$ is also a symplectomorphism. 
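The claim that $\mathcal{R}_E$ is an anti-symplectomorphism can be checked numerically from its coordinate formula. Writing both canonical symplectic forms as constant matrices in the coordinates $(x,y,p,\xi)$ on $\textsf{T}^\ast E$ and $(x,\xi,p,y)$ on $\textsf{T}^\ast E^\ast$, and taking $J$ to be the (constant) Jacobian of $\mathcal{R}_E$, the pullback condition reads $J^T\Omega' J=-\Omega$. A sketch for one-dimensional $x$ and $y$:

```python
import numpy as np

# coordinate order (x, y, p, xi) on T*E; omega_E = dp^dx + dxi^dy
Omega = np.zeros((4, 4))
Omega[2, 0], Omega[0, 2] = 1, -1   # dp ^ dx
Omega[3, 1], Omega[1, 3] = 1, -1   # dxi ^ dy

# coordinate order (x, xi, p, y) on T*E*; omega_{E*} = dp^dx + dy^dxi
OmegaStar = np.zeros((4, 4))
OmegaStar[2, 0], OmegaStar[0, 2] = 1, -1   # dp ^ dx
OmegaStar[3, 1], OmegaStar[1, 3] = 1, -1   # dy ^ dxi

# Jacobian of R_E : (x, y, p, xi) -> (x, xi, -p, y)
J = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0],
              [0, 1, 0, 0]])

# the pullback of omega_{E*} along R_E equals minus omega_E
assert np.array_equal(J.T @ OmegaStar @ J, -Omega)
```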
In the following diagram we can see the structure of $\alpha_M$ as a double vector bundle morphism: $$\xymatrix@C-10pt@R-5pt{ & \textsf{T}\sT^\ast M \ar[rrr]^{\alpha_M} \ar[dr]^{\textsf{T}\pi_M}\ar[ddl]_{\tau_{\textsf{T}^\ast M}} & & & \textsf{T}^\ast\textsf{T} M\ar[dr]^{\pi_{\textsf{T} M}}\ar[ddl]_/-10pt/{\tx{i}} & \\ & & \textsf{T} M\ar[rrr]^/-15pt/{id_{\textsf{T} M}}\ar[ddl]_/-10pt/{\tau_M} & & & \textsf{T} M \ar[ddl]_{\tau_M}\\ \textsf{T}^\ast M\ar[rrr]^/+25pt/{id_{\textsf{T}^\ast M}}\ar[dr]^{\pi_M} & & & \textsf{T}^\ast M\ar[dr]^{\pi_M} & & \\ & M\ar[rrr]^{id_M}& & & M & }$$ Recall that in infinitesimal mechanics, $M$ denoted the manifold of positions, $\textsf{T} M$ the manifold of infinitesimal (kinematic) configurations, and $\textsf{T}^\ast M$ the phase space. The constitutive set was a subset of $\textsf{T}^\ast\textsf{T} M$ given by $\mathcal{C}=\mathrm{d} L(\textsf{T} M)$, while the dynamics was defined as $\mathcal{D}=\alpha^{-1}_M(\mathcal{C})$. We can therefore complete the diagram: $$\xymatrix@C-10pt@R-5pt{ { \mathcal{D}}\ar@{ (->}[r]& \textsf{T}\sT^\ast M \ar[rrr]^{\alpha_M} \ar[dr]\ar[ddl] & & & \textsf{T}^\ast\textsf{T} M\ar[dr]\ar[ddl] & \\ & & \textsf{T} M\ar[rrr]\ar[ddl] & & & \textsf{T} M \ar[ddl]\ar@/_1pc/[ul]_{\mathrm{d} L}\ar[dll]_{\mathcal{P} L}\\ \textsf{T}^\ast M\ar[rrr]\ar[dr] & & & \textsf{T}^\ast M\ar[dr] & & \\ & M\ar[rrr]& & & M & }$$ The map $\mathcal{P} L:\textsf{T} M\rightarrow \textsf{T}^\ast M$ is the {\it Legendre map} that associates momenta to velocities and is defined as $\mathcal{P} L=\tx{i}\circ\mathrm{d} L$. 
In coordinates $(x^i, \dot x^j)$ in $\textsf{T} M$ and $(x^i, p_j)$ in $\textsf{T}^\ast M$, the Legendre map reads $$\mathcal{P} L(x^i, \dot x^j)=(x^i, \frac{\partial L}{\partial \dot x^j} )\,,$$ therefore $\mathcal{D}$ is given as $$\mathcal{D}=\left\{(x^i,p_j,\dot x^k,\dot p_l):\;\; p_j=\frac{\partial L}{\partial \dot x^j},\quad \dot p_l=\frac{\partial L}{\partial x^l}\right\}\,.$$ The dynamics $\mathcal{D}\subset \textsf{T}\sT^\ast M$ is a Lagrangian submanifold with respect to the symplectic form $\mathrm{d}_\textsf{T}\omega_M$, i.e. the tangent lift of the canonical symplectic form of $\textsf{T}^\ast M$. In some cases (e.g. for hyperregular Lagrangians) the dynamics is the image of a Hamiltonian vector field. In such a case, we can look for a Hamiltonian function that generates the field. We observe, however, that even if $\mathcal{D}$ is not the image of a vector field, it is still worth looking for a Hamiltonian generating object ({\it Morse family}), even if it is not as simple as just one function on $\textsf{T}^\ast M$. In section \ref{sec:6} we have observed that two manifolds $\textsf{T}^\ast E$ and $\textsf{T}^\ast E^\ast$ are isomorphic as double vector bundles and as symplectic manifolds. Taking $E=\textsf{T} M$, we get, according to (\ref{eq:izoR}), the canonical isomorphism $\mathcal{R}_{TM}$ between $\textsf{T}^\ast\textsf{T} M$ and $\textsf{T}^\ast\textsf{T}^\ast M$. As a symplectic relation the isomorphism is generated by a function $(p, v)\rightarrow \langle p,\,v\rangle$ defined on the submanifold $\textsf{T}^\ast M\times_M\textsf{T} M$ of $\textsf{T}^\ast M\times \textsf{T} M$. 
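For a concrete, hyperregular instance of the coordinate description of $\mathcal{D}$ above, one can take the harmonic oscillator (an illustrative choice, not taken from the text) and read off the Legendre map and the dynamics symbolically:

```python
import sympy as sp

m, k, x, xdot = sp.symbols('m k x xdot')

# illustrative hyperregular Lagrangian: harmonic oscillator on M = R
L = sp.Rational(1, 2) * m * xdot**2 - sp.Rational(1, 2) * k * x**2

# Legendre map  PL(x, xdot) = (x, dL/dxdot)
p = sp.diff(L, xdot)

# the dynamics D:  p = dL/dxdot,  pdot = dL/dx
pdot = sp.diff(L, x)

assert p == m * xdot and pdot == -k * x
```

Here $\mathcal{D}$ is the image of a vector field, and eliminating $p$ gives the familiar equation $m\ddot x=-kx$.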
Following the rules of composing symplectic relations \cite{LM}, we get that $\mathcal{R}_{TM}(\mathrm{d} L(\textsf{T} M))$ is generated by a family of functions on $\textsf{T}^\ast M$ parameterized by elements of $\textsf{T} M$, $$\textsf{T}^\ast M\times_M \textsf{T} M\ni (p,v)\longmapsto L(v)-\langle p,\,v\rangle\in\mathbb R.$$ This most general generating object can sometimes be reduced to a simpler one, but not always to just one Hamiltonian function. The composition of double bundle morphisms $\mathcal{R}_{TM}$ and $\alpha_M$ gives the morphism $\beta_M$, the musical isomorphism associated with the canonical symplectic structure on $\textsf{T}^\ast M$, which constitutes the Hamiltonian side of the Tulczyjew triple: $$\xymatrix@C-10pt@R-5pt{ & \textsf{T}^\ast\textsf{T}^\ast M \ar[dr]_{\zeta} \ar[ddl]^{\pi_{\textsf{T}^\ast M}} & & & \textsf{T}\sT^\ast M\ar[dr]_{\textsf{T}\pi_{M}}\ar[ddl]_/-10pt/{\tau_{\textsf{T}^\ast M}} \ar[lll]_{\beta_M}& { \mathcal{D}}\ar@{ (->}[l] \\ & & \textsf{T} M\ar[ddl]_/-10pt/{\tau_M} & & & \textsf{T} M \ar[ddl]_{\tau_M}\ar[lll]\\ \textsf{T}^\ast M\ar[dr]^{\pi_M} \ar@/^1pc/[uur]^{\mathrm{d} H} & & & \textsf{T}^\ast M\ar[dr]^{\pi_M}\ar[lll] & & \\ & M& & & M\ar[lll] & }$$ It is well known that the map $\beta_M$ is also associated with the canonical symplectic structure $\omega_M$ on $\textsf{T}^\ast M$. For any $X\in\textsf{T}\sT^\ast M$, we have $\beta_M(X)=\omega_M(\cdot, X)$. 
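For a hyperregular Lagrangian the reduction of the family $(p,v)\mapsto L(v)-\langle p,v\rangle$ can be carried out explicitly. With the illustrative free-particle Lagrangian $L(v)=\frac12 mv^2$, eliminating $v$ at the critical point of the family yields a single function of $p$ (up to the sign conventions of the triple, minus the usual free-particle Hamiltonian):

```python
import sympy as sp

m = sp.symbols('m', positive=True)
p, v = sp.symbols('p v')

# the generating family  (p, v) -> L(v) - <p, v>  for L(v) = m v^2 / 2
F = sp.Rational(1, 2) * m * v**2 - p * v

# hyperregularity: dF/dv = 0 determines v uniquely
vcrit = sp.solve(sp.diff(F, v), v)[0]      # v = p/m
Fred = sp.simplify(F.subs(v, vcrit))       # reduced generating function

# the family reduces to a single function, -p^2/(2m)
assert sp.simplify(Fred + p**2 / (2 * m)) == 0
```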
If the Hamiltonian generating object reduces to one Hamiltonian function, then $\mathcal{D}$ is the image of the Hamiltonian vector field $X_H$ according to the formula $$\mathrm{d} H=\omega_M(\cdot, X_H).$$ The same can be written as $$\mathcal{D}=\beta_M^{-1}(\mathrm{d} H(\textsf{T}^\ast M))\,;$$ in coordinates: $$\mathcal{D}=\left\{(x,p,\dot x,\dot p):\;\; \dot p=-\frac{\partial H}{\partial x},\quad \dot x=\frac{\partial H}{\partial p}\right\}\,.$$ The full Tulczyjew triple in mechanics is the diagram $$\xymatrix@C-20pt@R-5pt{ &&&& { \mathcal{D}}\ar@{ (->}[d]&&&&\\ & \textsf{T}^\ast\textsf{T}^\ast M \ar[dr]_{} \ar[ddl]_{} & & & \textsf{T}\sT^\ast M \ar[rrr]^{\alpha_M} \ar[dr]_{} \ar[ddl]_{}\ar[lll]_{\beta_M} & & & \textsf{T}^\ast\textsf{T} M\ar[dr]^{}\ar[ddl]_/-10pt/{} & \\ & & \textsf{T} M\ar[ddl]_/-10pt/{} & & & \textsf{T} M\ar[rrr]\ar[ddl]_/-10pt/{}\ar[lll] & & & \textsf{T} M \ar[ddl]_{}\ar[ddl]_{}\ar@/_1pc/[ul]_{\mathrm{d} L}\\ \textsf{T}^\ast M\ar[dr]^{}\ar@/^1pc/[uur]^{\mathrm{d} H} & & & \textsf{T}^\ast M\ar[rrr]\ar[dr]^{}\ar[lll] & & & \textsf{T}^\ast M\ar[dr]^{} & & \\ & M & & & M\ar[rrr]\ar[lll]& & & M & }$$ Using the structure encoded in the Tulczyjew triple, we can describe more complicated mechanical systems than those known in the traditional Lagrangian and Hamiltonian mechanics. In geometrical optics, for example, there are systems for which we need a more general generating object on the Lagrangian side. In relativistic mechanics, we need generating families on the Hamiltonian side. The above diagram also shows that, from the mathematical point of view, Hamiltonian and Lagrangian mechanics are equivalent only if we agree to use these more general generating objects. We should, however, keep in mind that Lagrangian mechanics has variational origins and descends from the Lagrangian formalism for a finite time interval. 
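The Hamiltonian coordinate description of $\mathcal{D}$ can be checked against the Lagrangian one on the harmonic oscillator (again an illustrative choice): the Hamiltonian obtained from $L=\frac12 m\dot x^2-\frac12 kx^2$ by the Legendre transform generates the same dynamics.

```python
import sympy as sp

m, k = sp.symbols('m k', positive=True)
x, p = sp.symbols('x p')

# Legendre transform of L = m xdot^2/2 - k x^2/2 (illustrative)
H = p**2 / (2 * m) + sp.Rational(1, 2) * k * x**2

# Hamilton equations:  xdot = dH/dp,  pdot = -dH/dx
xdot = sp.diff(H, p)
pdot = -sp.diff(H, x)

assert xdot == p / m and pdot == -k * x
# eliminating p = m*xdot reproduces pdot = dL/dx = -k*x
# obtained on the Lagrangian side
```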
We understand Hamiltonian mechanics as an alternative way of generating the dynamics of a system: the image of the dynamics by the map $\beta_M$ is a Lagrangian submanifold of the cotangent bundle $\textsf{T}^\ast\textsf{T}^\ast M$, therefore we can consider its generating objects. This works only in the infinitesimal formulation, so the Hamiltonian formalism is genuinely infinitesimal. For a finite time interval, the only isomorphism of $\mathbb{P}Q$ with a cotangent bundle is $\alpha$, associated with Lagrangian mechanics, and no Hamiltonian description is available. \subsection{Mechanics on algebroids} \label{sec:7a} We can generalize the classical Tulczyjew triple {\it mutatis mutandis} to mechanics on algebroids \cite{GG,GGU,LMM}. The starting point is the diagram in which a vector bundle $E$ over $M$ replaces $\textsf{T} M$ and $\za$ and $\zb$ are replaced by morphisms in the reverse directions: $$\xymatrix@C-10pt@R-5pt{ &&&& { \mathcal{D}}\ar@{ (->}[d]&&&&\\ &\textsf{T}^\ast E^\ast \ar[rrr]^{\ze\circ \mathcal{R}_E^{-1}} \ar[ddl] \ar[dr] & & & \textsf{T} E^\ast \ar[ddl] \ar[dr] & & & \textsf{T}^\ast E \ar[ddl] \ar[dr] \ar[lll]_{\varepsilon} & \\ & & E \ar[rrr]^/-20pt/{\zr}\ar[ddl] & & & \textsf{T} M\ar[ddl] & & & E\ar[lll]_/+20pt/{\zr}\ar[ddl]\ar[ddl]_{}\ar[ddl]_{}\ar@/_1pc/[ul]_{\mathrm{d} L} \\ E^\ast\ar[rrr]^/-20pt/{id} \ar[dr]\ar[dr]^{}\ar@/^1pc/[uur]^{\mathrm{d} H} & & & E^\ast\ar[dr] & & & E^\ast\ar[dr]\ar[lll]_/-20pt/{id} & & \\ & M\ar[rrr]^{id} & & & M & & & M\ar[lll]_{id} & }$$ Thus, $\ze:\textsf{T}^* E\to\textsf{T} E^*$ represents the structure of an algebroid with the {\it anchor map} $\zr:E\to\textsf{T} M$. The connection with the standard definition of an algebroid by means of a bracket of sections can be found in \cite{GU,Gr2}. 
The construction of dynamics is the same: the left-hand side is Hamiltonian with Hamiltonians being functions $H:E^*\to\mathbb R$, the right-hand side is Lagrangian with Lagrangians being functions $L:E\to\mathbb R$, and the phase dynamics lives in the middle. Note finally that the above formalisms can still be generalized to include constraints (cf. \cite{GG1}) and that a rigorous optimal control theory on Lie algebroids can be developed as well \cite{CM,GJ}. \section{The geometry of classical field theories} \label{sec:8} Following the ideas developed for statics, we can pass to classical field theory. As in the case of mechanics, we start with the finite-domain formulation to find geometric models for physical quantities. We need configurations, processes, cost functions, and constitutive sets. In the following we will not go into the details of all these constructions. Since the general rules are already known, we can just list the main results. \subsection{Classical fields for bounded domains} \label{sec:9} Configurations in $Q$ (fields) are smooth sections $q:D\to E$ of a locally trivial fibration $\zz:E\to M$ over a manifold $M$ of dimension $m$, supported on compact discs $D\subset M$ with smooth boundary $\partial D$. We will use the coordinates $(x^i,y^a)$ in $E$. Parameterized processes $s\mapsto q_s$ come from vertical homotopies $\chi:\mathbb R\times M\supset I\times \mathcal{O}\to E$, $D\subset\mathcal{O}$, so that infinitesimal processes (tangent vectors) in convenient representation are vertical vector fields $\zd q$ over $D$, $\zd q:D\to{\scriptscriptstyle V} E$ (Fig.\ref{fig:14}). \begin{pict}[ht] \begin{floatrow}[1] \floatbox{pict}[0.4\textwidth] {\caption{Infinitesimal process}\label{fig:14}} {\centering\includegraphics[width=.3\textwidth]{rys20.pdf}} \end{floatrow} \end{pict} Functions are associated with first-order Lagrangians, i.e. 
bundle maps $L:{\scriptscriptstyle J}^1E\to\zW^m$, where we denote $\zW^k=\wedge^k\textsf{T}^*M$, $$S(q)=\int_DL(j^1q)\,.$$ Covectors are equivalence classes of functions. It was very easy to guess convenient representatives for tangent vectors. To do the same for covectors, we need some calculations. According to the Stokes theorem, \begin{equation}\label{eq:stokes} \frac{\mathrm{d} }{\mathrm{d} s}_{|s=0}S(q_s)= \int_D\langle\mathcal{E}L\circ j^2 q,\zd q\rangle+\int_{\partial D} \langle\,\mathcal{P} L\circ j^1q,\delta q\,\rangle\,. \end{equation} Here \begin{align} \mathcal{E}L: {\scriptscriptstyle J}^2E&\longrightarrow {\scriptscriptstyle V}^* E\otimes_E\zW^m \\ \intertext{is the \emph{Euler-Lagrange operator} and} \mathcal{P} L=\xi\circ\mathrm{d}^\sv L:J^1E&\longrightarrow{\scriptscriptstyle V}^* E\otimes_E\zW^{m-1}\\ \intertext{is the \emph{Legendre map}, where $\xi$ is a certain canonical map} \xi:{\scriptscriptstyle V}^*{\scriptscriptstyle J}^1E\otimes_E\zW^m&\longrightarrow{\scriptscriptstyle V}^* E\otimes_E\zW^{m-1}\,. \end{align} The map $\xi$ is analogous to the second projection $\zz_E$ in the structure of the double vector bundle $\textsf{T}^\ast E$ (see section \ref{sec:6}), although ${\scriptscriptstyle V}^*{\scriptscriptstyle J}^1E\otimes_E\zW^m$ is not a double vector bundle, but a double vector-affine bundle \cite{GRU}. It follows that covectors are represented by pairs of sections $(f,p)$, where \begin{align} f:D&\longrightarrow {\scriptscriptstyle V}^+E={\scriptscriptstyle V}^* E\otimes_E\zW^m\,, \\ p:\partial D&\longrightarrow\mathcal{P} E={\scriptscriptstyle V}^* E\otimes_E\zW^{m-1}\,. \end{align} We have found additionally that the phase space (in mechanics the space of momenta) is $\mathcal{P} E$. Since we are again interested in differential equations describing the fields and phase sections, we pass immediately to the infinitesimal formulation. The section $f$ plays the role of the source of a field. 
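The integration-by-parts computation behind (\ref{eq:stokes}) can be verified symbolically in the simplest case $m=1$ with one fibre coordinate and an illustrative quadratic Lagrangian density: the variation of $L$ along $\zd q$ splits pointwise into the Euler-Lagrange term and a total derivative of the Legendre term.

```python
import sympy as sp

x = sp.symbols('x')
# fibre coordinates (y, y_x) on J^1 E for m = 1
y, yx = sp.symbols('y y_x')

# an illustrative first-order Lagrangian density
L = sp.Rational(1, 2) * yx**2 - sp.Rational(1, 2) * y**2

phi = sp.Function('phi')(x)        # a field
dq = sp.Function('deltaq')(x)      # a virtual displacement

jet = {y: phi, yx: sp.diff(phi, x)}
dLdy = sp.diff(L, y).subs(jet)
dLdyx = sp.diff(L, yx).subs(jet)

# the variation of L along delta q
lhs = dLdy * dq + dLdyx * sp.diff(dq, x)

# Euler-Lagrange operator on j^2 phi and Legendre map on j^1 phi
EL = dLdy - sp.diff(dLdyx, x)
PL = dLdyx
rhs = EL * dq + sp.diff(PL * dq, x)

assert sp.simplify(lhs - rhs) == 0
```

Integrating both sides over $D$ then gives exactly the bulk and boundary terms of (\ref{eq:stokes}).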
In the infinitesimal approach we shall put sources equal to zero for simplicity; in principle, however, they can be added to the picture. \subsection{Classical fields: Infinitesimal version} \label{sec:10} The infinitesimal formulation arises when we, informally speaking, integrate over an infinitesimal domain of $M$. In such a case configurations are elements of $Q={\scriptscriptstyle J}^1E$. In ${\scriptscriptstyle J}^1 E$, we will use coordinates $(x^i, y^a, y^c_k)$ coming from coordinates in $E$. Now, the space of configurations is again a manifold, however we have to use density-valued functions $L:{\scriptscriptstyle J}^1E\to\zW^m$ instead of real-valued functions. We assume that $M$ is oriented, so that we can identify densities with $m$-forms. Since $Q$ is a manifold, we know what tangent vectors are, but we keep in mind that in the finite-domain formulation we used only vertical homotopies. Starting with a vertical homotopy $\chi:I\times \mathcal{O}\rightarrow E$ and taking first the infinitesimal part with respect to $M$ ($s\mapsto{\scriptscriptstyle j}^1\chi(s,x)$) and then the tangent vector in the vertical direction ($\delta{\scriptscriptstyle j}^1\chi(0,x)$), we get $\textsf{T} Q={\scriptscriptstyle V}{\scriptscriptstyle J}^1 E$. Convenient representatives of tangent vectors in the finite-domain formulation were obtained by taking the tangent vector in the vertical direction first ($\delta\chi(0,\cdot)$). We now need an infinitesimal part of that vertical vector field, i.e. ${\scriptscriptstyle j}^1\delta\chi(0,x)$. 
The correspondence between vectors tangent to $Q$ and their convenient representatives is now expressed as an isomorphism $\kappa: {\scriptscriptstyle V}{\scriptscriptstyle J}^1E\rightarrow {\scriptscriptstyle J}^1{\scriptscriptstyle V} E$ of double vector-affine bundles, $$\xymatrix@C-20pt@R-8pt{ & {\scriptscriptstyle V}{\scriptscriptstyle J}^1 E\simeq{\scriptscriptstyle J}^1{\scriptscriptstyle V} E\ar[dr]\ar[dl]_{} & \\ {\scriptscriptstyle J}^1E\ar[dr]^{} & & {\scriptscriptstyle V} E\ar[dl]^{} \\ & E & }$$ \medskip Covectors on $Q$ come from classifying functions with respect to vertical curves in $Q$. This means that we get ${\scriptscriptstyle V}^\ast{\scriptscriptstyle J}^1E\otimes_{{\scriptscriptstyle J}^1E}\zW^m$. Since the notation here is becoming heavy, we choose another symbol for the space of covectors. From now on ${\scriptscriptstyle V}^+{\scriptscriptstyle J}^1E$ will denote ${\scriptscriptstyle V}^\ast{\scriptscriptstyle J}^1E\otimes_{{\scriptscriptstyle J}^1E}\zW^m$. The infinitesimal version of the application of the Stokes theorem (\ref{eq:stokes}) reads \begin{equation}\label{eq:infstokes} \langle\mathrm{d} L, \delta{\scriptscriptstyle j}^1\chi\rangle=\langle\mathcal{E}L({\scriptscriptstyle j}^2\chi),\delta\chi\rangle+ \mathrm{d}(\langle\mathcal{P}L({\scriptscriptstyle j}^1\chi), \delta\chi\rangle). \end{equation} Since the evaluation of a section of $\mathcal{P} E\rightarrow M$ with a section of ${\scriptscriptstyle V} E$ is an $(m-1)$-form on $M$, we can differentiate it and evaluate at a point $x\in M$. The result depends on the first jet of the section of $\mathcal{P} E\rightarrow M$ and a first jet of vertical vector field. 
In this way, we have obtained a bilinear evaluation, $$ \langle\!\langle\cdot,\cdot\rangle\!\rangle: {\scriptscriptstyle J}^1\mathcal{P}E\times_{{\scriptscriptstyle J}^1 E}{\scriptscriptstyle J}^1{\scriptscriptstyle V} E\longrightarrow \Omega^m\,, $$ defined on ${\scriptscriptstyle j}^1 p(x_0)$ and ${\scriptscriptstyle j}^1\delta\sigma(x_0)$ by the formula \begin{equation}\label{eq:evalfield} \langle\!\langle\,{\scriptscriptstyle j}^1 p(x_0),{\scriptscriptstyle j}^1\delta\sigma(x_0)\,\rangle\!\rangle= \mathrm{d} \langle p,\delta\sigma\rangle(x_0)\,. \end{equation} The identification of $\sss{Hom}({\scriptscriptstyle V}{\scriptscriptstyle J}^1E,\zW^m)$ with ${\scriptscriptstyle V}^+{\scriptscriptstyle J}^1E$ defines the map \begin{equation}\label{eq:alphafield} \alpha:{\scriptscriptstyle J}^1\mathcal{P} E \longrightarrow {\scriptscriptstyle V}^+{\scriptscriptstyle J}^1E \end{equation} which is dual to $\kappa$. In the adapted coordinates $(x^i,y^a,p^j_b,y^c_k,p^l_{dm})$ in ${\scriptscriptstyle J}^1\mathcal{P} E$ and $(x^i,y^a,y^c_k,\pi_d, \pi^l_e)$ in ${\scriptscriptstyle V}^+{\scriptscriptstyle J}^1 E$, we have $$\za(x^i,y^a,p^j_b,y^c_k,p^l_{dm})=(x^i,y^a,y^c_k,\sum_lp^l_{dl}, p^j_b)\,.$$ We have used here the same letters $\kappa$ and $\alpha$ that already appeared in the context of mechanics (\ref{sec:3a}). In mechanics for a finite time interval they denoted the correspondence between vectors and covectors on configurations and their convenient representations. In field theory they play exactly the same role. 
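The coordinate formula for $\za$ is consistent with the bilinear evaluation (\ref{eq:evalfield}): expanding $\mathrm{d}\langle p,\delta\sigma\rangle$ by the Leibniz rule produces exactly the divergence term $\sum_l p^l_{dl}$ paired with $\zd y$ and the term $p^j_b$ paired with $\partial_j\zd y$. A sympy check for $m=2$ and one fibre coordinate:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# a section of P E (one fibre coordinate y, base dimension m = 2)
p1 = sp.Function('p1')(x1, x2)
p2 = sp.Function('p2')(x1, x2)
# a vertical vector field (virtual displacement)
dy = sp.Function('dy')(x1, x2)

# <p, delta sigma> = p^i dy (x) eta_i is an (m-1)-form; its exterior
# derivative has the single coefficient  sum_i d_i (p^i dy)  at eta
lhs = sp.diff(p1 * dy, x1) + sp.diff(p2 * dy, x2)

# the two terms produced by alpha: the divergence of p against dy,
# plus p against the first derivatives of dy
rhs = (sp.diff(p1, x1) + sp.diff(p2, x2)) * dy \
      + p1 * sp.diff(dy, x1) + p2 * sp.diff(dy, x2)

assert sp.expand(lhs - rhs) == 0
```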
The Lagrangian side can be written in a form of a diagram $$\xymatrix@C-15pt@R-5pt{ { \mathcal{D}}\ar@{ (->}[r]& {\scriptscriptstyle J}^1\mathcal{P} E \ar[rrr]^{\alpha} \ar[dr]_{} \ar[ddl]_{} & & & {\scriptscriptstyle V}^+ {\scriptscriptstyle J}^1E\ar[dr]^{}\ar[ddl]_/-10pt/{} & \\ & & {\scriptscriptstyle J}^1E\ar[rrr]\ar[ddl]_/-10pt/{} & & & {\scriptscriptstyle J}^1E \ar[ddl]_{}\ar@/_1pc/[ul]_{\mathrm{d}^\sv L}\ar[dll]_{\mathcal{P} L}\\ \mathcal{P} E\ar[rrr]\ar[dr]^{} & & & \mathcal{P} E\ar[dr]^{} & & \\ & E\ar[rrr]& & & E & }$$ According to the rules we developed while analyzing the mechanical triple, the dynamics of the field consists of the convenient representatives of elements of the constitutive set, i.e. \begin{equation}\label{eq:dynamicfield} \mathcal{D}=\alpha^{-1}(\mathrm{d}^\sv L({\scriptscriptstyle J}^1E)). \end{equation} There is also the Legendre map, $$\mathcal{P} L:{\scriptscriptstyle J}^1E\rightarrow\mathcal{P} E, \quad \mathcal{P} L=\tx{i}\circ\mathrm{d}^\sv L\,,$$ that associates phase elements to configuration elements. In coordinates, the dynamics reads $$\mathcal{D}=\left\{(x^i,y^a,p^j_b,y^c_k,p^l_{dm}):\;\; p^j_b=\frac{\partial L}{\partial y^b_j},\quad \sum_lp^l_{dl}=\frac{\partial L}{\partial y^d}\right\}\,,$$ while the Legendre map is $$\mathcal{P} L(x^i,y^a,y^b_j)= \left(x^i,y^a,\frac{\partial L}{\partial y^b_j}\right)\,.$$ We get also the Euler-Lagrange equations $$\frac{\partial L}{\partial y^a}=\frac{\partial}{\partial x^i}\frac{\partial L}{\partial y^a_i}\,.$$ The manifolds ${\scriptscriptstyle J}^1\mathcal{P} E$ and ${\scriptscriptstyle V}^+ {\scriptscriptstyle J}^1E$ are both double vector-affine bundles with affine structure over $\mathcal{P} E$ and linear structure over ${\scriptscriptstyle J}^1 E$. They both carry some sort of a symplectic structure. Every fibre of ${\scriptscriptstyle V}^+ {\scriptscriptstyle J}^1E$ over $M$ is a manifold equipped with a symplectic form with values in $\zW^m$. 
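As a concrete instance of these coordinate formulas, consider (an illustrative choice) the Lagrangian density of the free wave equation on a trivial bundle over two-dimensional space-time; the dynamics $\mathcal{D}$ then reproduces the wave equation:

```python
import sympy as sp

t, xs = sp.symbols('t x')
# fibre coordinates (y, y_t, y_x) on J^1 E, base coordinates (t, x)
y, yt, yx = sp.symbols('y y_t y_x')

# illustrative Lagrangian density of the free wave equation
L = sp.Rational(1, 2) * (yt**2 - yx**2)

phi = sp.Function('phi')(t, xs)
jet = {y: phi, yt: sp.diff(phi, t), yx: sp.diff(phi, xs)}

# Legendre map: p^t = dL/dy_t, p^x = dL/dy_x
pt = sp.diff(L, yt).subs(jet)
px = sp.diff(L, yx).subs(jet)

# Euler-Lagrange:  sum_l d_l p^l = dL/dy
el = (sp.diff(pt, t) + sp.diff(px, xs)) - sp.diff(L, y).subs(jet)

# el vanishes exactly on solutions of the wave equation
assert sp.simplify(el - (sp.diff(phi, t, 2) - sp.diff(phi, xs, 2))) == 0
```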
Every fibre of ${\scriptscriptstyle J}^1\mathcal{P} E$ over $M$ is a manifold equipped with a presymplectic form with values in $\zW^m$. The map $\alpha$ is a morphism of double bundle structures and symplectic structures. This time, however, it is not an isomorphism. It is possible to reduce the space ${\scriptscriptstyle J}^1\mathcal{P} E$ to get an isomorphism, but then we lose the natural interpretation of the dynamics as a first-order partial differential equation, because it is no longer a subset of a first jet bundle. \subsection{Tulczyjew triple: Hamiltonian side for field theory} \label{sec:12} Defining the Hamiltonian side of the triple in field theory means looking for another generating object for the dynamics. In mechanics, we could use two structures: one was the duality between momenta and velocities that allowed us to define the symplectic relation $\mathcal{R}_{TM}$ (see (\ref{eq:izoR})), the other was the canonical structure on the phase space. Both ways can be followed in field theory, though not in an easy way. We shall concentrate on the analog of the duality between momenta and velocities. First of all, let us notice that there is no duality between the configuration space ${\scriptscriptstyle J}^1 E$ and the phase space $\mathcal{P} E$. The phase elements are naturally evaluated on virtual displacements, not on configurations. Actually, in mechanics we have the same: from the construction of momenta we get that they are to be evaluated on virtual displacements. In mechanics, virtual displacements are represented by the same geometrical objects as infinitesimal configurations, therefore we can write $\langle p,\dot x\rangle$ as well as $\langle p,\delta x\rangle$. This is not the case in field theory. To find objects dual to infinitesimal configurations, we have to use affine geometry. Let us fix a point $x$ in $M$. 
Having in mind the contents of section \ref{sec:1} we can take the affine bundle $A={\scriptscriptstyle J}^1_xE$ over $N=E_x$, and the vector space $Y=\zW^m_x$. Let ${\scriptscriptstyle J}^\dag_xE=A^\dag(Y)=\sss{Aff}(A,Y)$ be the affine-dual bundle which is an AV-bundle over $\textsf{T}_x M\otimes{\scriptscriptstyle V}^*_xE\otimes\zW^m_x\simeq \mathcal{P}_xE$, $$\zvy_x:{\scriptscriptstyle J}^\dag_xE\to\mathcal{P}_xE\,,\quad (y^a,p^j_b,r)\mapsto(y^a,p^j_b)\,.$$ Let ${\scriptscriptstyle P}{\scriptscriptstyle J}^\dag_x E$ be the corresponding affine phase bundle of affine differentials ${\underline{\mathrm{d}}} H_x$ of sections $H_x:\mathcal{P}_xE\to{\scriptscriptstyle J}^\dag_xE$ (see \ref{def:phase}). Collecting the affine phase bundles ${\scriptscriptstyle P}{\scriptscriptstyle J}^\dag_x E$ point by point in $M$, we obtain the affine phase bundle ${\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E$ which is the bundle of `vertical affine differentials' ${\underline{\mathrm{d}}}^v H$ of sections $H$ of the bundle $\zvy:{\scriptscriptstyle J}^\dag E\to\mathcal{P} E$, $${\scriptscriptstyle P}\zvy:{\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E\to\mathcal{P} E\,,$$ In natural coordinates in ${\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E$, the projection reads $$\quad(x^i,y^a,p^j_b,p_c,y^d_l)\mapsto(x^i,y^a,p^j_b)\,.$$ The bundle ${\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E$ is actually a double affine-vector bundle isomorphic canonically with ${\scriptscriptstyle V}^+{\scriptscriptstyle J}^1E$, $$\xymatrix@C-15pt@R-8pt{ & {\scriptscriptstyle V}^+{\scriptscriptstyle J}^1E \ar[rrr]^{\mathcal{R}} \ar[dr]^{} \ar[ddl]_{\xi} & & & {\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E\ar[dr]^{}\ar[ddl]^/10pt/{{\scriptscriptstyle P}\zvy} & \\ & & {\scriptscriptstyle J}^1E\ar[rrr]^/-20pt/{}\ar[ddl]_/-20pt/{} & & & {\scriptscriptstyle J}^1E \ar[ddl]_{}\\ \mathcal{P} E\ar[rrr]^/-20pt/{id}\ar[dr]^{} & & & \mathcal{P} E\ar[dr]^{} & & \\ & E\ar[rrr]^{id}& & & E & } 
$$ The map $\mathcal{R}$ is generated analogously to $\mathcal{R}_E$ by evaluation between elements of ${\scriptscriptstyle J}^1 E$ and ${\scriptscriptstyle J}^\dag E$ over $E$. The construction involves some affine geometry and symplectic reduction. The details can be found in \cite{G}. Composing $\za$ with $\mathcal{R}$, we get a map $\zb=\mathcal{R}\circ\za$ constituting the Hamiltonian side of the triple $$\zb:{\scriptscriptstyle J}^1\mathcal{P} E\to {\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E\,.$$ In adapted coordinates, $$\zb(x^i,y^a,p^j_b,y^c_k,p^l_{dm})=(x^i,y^a,p^j_b,-\sum_lp^l_{dl},y^c_k)\,.$$ The Hamiltonian side of the Tulczyjew triple for field theory can be written in the form of a diagram $$\xymatrix@C-10pt@R-5pt{ & {\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E \ar[dr]_{} \ar[ddl]^{{\scriptscriptstyle P}\zvy} & & & {\scriptscriptstyle J}^1\mathcal{P} E\ar[dr]^{}\ar[ddl]_/-10pt/{} \ar[lll]_{\beta}& { \mathcal{D}}\ar@{ (->}[l] \\ & & {\scriptscriptstyle J}^1E\ar[ddl]_/-10pt/{} & & & {\scriptscriptstyle J}^1E \ar[ddl]_{}\ar[lll]\\ \mathcal{P} E\ar[dr]^{} \ar@/^1pc/[uur]^{{\underline{\mathrm{d}}}^v H} & & & \mathcal{P} E\ar[dr]^{}\ar[lll] & & \\ & E& & & E\ar[lll] & }$$ Hamiltonians are sections of the one-dimensional affine bundle $\zvy:{\scriptscriptstyle J}^\dag E\to\mathcal{P} E$. The dynamics is generated by means of $\beta$ by $$\mathcal{D}=\beta^{-1}({\underline{\mathrm{d}}}^v H(\mathcal{P} E))\,.$$ In adapted coordinates, we get $$\mathcal{D}=\left\{(x^i,y^a,p^j_b,y^c_k,p^l_{dm}):\;\; \sum_l p^l_{dl}=-\frac{\partial H}{\partial y^d},\quad y^c_k=\frac{\partial H}{\partial p^k_c}\right\}\,.$$ Of course, on the Hamiltonian side we can encounter all the problems we have in mechanics, concerning the fact that the dynamics is not always generated by one section. We can always get a generating family of sections by combining the generating objects of the constitutive set and of the relation $\mathcal{R}$. 
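Continuing the free wave field example under the same illustrative choices: the Hamiltonian obtained from the wave Lagrangian by the fibrewise Legendre transform generates, through the coordinate formula above, the same field equations.

```python
import sympy as sp

y, pt, px = sp.symbols('y p_t p_x')   # fibre coordinates on P E (m = 2)

# Hamiltonian of the free wave field: H = p^t y_t + p^x y_x - L along
# the Legendre map p^t = y_t, p^x = -y_x (an illustrative computation)
H = sp.Rational(1, 2) * (pt**2 - px**2)

# Hamilton equations of the triple:
#   sum_l d_l p^l = -dH/dy ,   y_k = dH/dp^k
assert -sp.diff(H, y) == 0       # the divergence of p vanishes
assert sp.diff(H, pt) == pt      # y_t =  p^t
assert sp.diff(H, px) == -px     # y_x = -p^x
# substituting back: d_t y_t - d_x y_x = d_t p^t + d_x p^x = 0,
# i.e. the wave equation again
```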
\subsection{The Tulczyjew triple for field theory} \label{sec:13} The complete Tulczyjew triple for first order field theory has the form of the following diagram. $$ \xymatrix@C-15pt{ &&&& { \mathcal{D}}\ar@{ (->}[d]&&&&\\ & {\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E\ar[dl]_{}\ar[ddr]^/-10pt/{} & & & {\scriptscriptstyle J}^1\mathcal{P}E\ar[lll]_\beta\ar[rrr]^\alpha\ar[dl]_{}\ar[ddr]^/-10pt/{}& & & {\scriptscriptstyle V}^+{\scriptscriptstyle J}^1E \ar[dl]\ar[ddr]^/-10pt/{}& \\ {\mathcal{P}E\ }\ar[ddr]^/-10pt/{}\ar@/^1pc/[ur]^{{\underline{\mathrm{d}}}^v H} & & & {\mathcal{P}E\ }\ar[rrr]\ar[lll]\ar[ddr]^/-10pt/{} & & & {\ \mathcal{P}}E\ar[ddr]^/-10pt/{} & & \\ & & {\scriptscriptstyle J}^1 E\ar[dl]_{} & & & {\scriptscriptstyle J}^1 E\ar[rrr]\ar[lll]\ar[dl]_{} & & & {\scriptscriptstyle J}^1 E\ar[dl]_{}\ar@/_1pc/[uul]_{\mathrm{d}^\sv L} \\ & E & & & E\ar[rrr]\ar[lll] & & & E & } $$ \medskip\noindent All the three double bundles are double vector-affine bundles. The structure over $\mathcal{P} E$ is affine, while the structure over ${\scriptscriptstyle J}^1 E$ is linear. The Lagrangian and Hamiltonian bundles are isomorphic. They are both, fiber by fiber over $M$, equipped with canonical symplectic forms with values in $\zW^m$. The canonical structure of the phase bundle is the tautological form $\vartheta_\mathcal{P}$ which, in adapted coordinates, reads $$\vartheta_\mathcal{P}=p^i_a\mathrm{d} y^a\otimes\eta_i,$$ where $\eta_i=\imath(\partial_i)\eta$ with $\eta=\mathrm{d} x^1\wedge\cdots\wedge\mathrm{d} x^m$. The tautological form, differentiated vertically, gives the form $$\omega_\mathcal{P}= (\mathrm{d} p^i_a\wedge\mathrm{d} y^a)\otimes\eta_i$$ with values in $\zW^{m-1}$. The latter, lifted to ${\scriptscriptstyle J}^1\mathcal{P} E$, is, fiber by fiber over $M$, a presymplectic form with values in $\zW^m$. We have observed that Hamiltonians are sections of the one-dimensional affine bundle ${\scriptscriptstyle J}^\dag E\rightarrow \mathcal{P} E$. 
The space ${\scriptscriptstyle J}^\dag E$, i.e. the space of values of Hamiltonians, can be identified with the subspace of $m$-forms on $E$ that vanish when evaluated on two vertical vectors \cite{V}. We denote this space by $\wedge^m_1\textsf{T}^\ast E$. To see the identification, we have to be able to evaluate an element of $\wedge^m_1\textsf{T}^\ast E$ on the first jet of a section of $\zz$. Let us use coordinates to simplify the presentation. An element $\varphi$ of $\wedge^m_1\textsf{T}^\ast E$ can be written locally as $$\varphi=A\,\eta+B^i_a dy^a\wedge\eta_i\,.$$ Note that here we have used a different $dy^a$ than previously. The difference is that $\mathrm{d} y^a$ is an element of ${\scriptscriptstyle V}^\ast E$, while $dy^a$ is an element of $\textsf{T}^\ast E$. Using the first jet given in coordinates by $(x^i, y^a, y^b_j)$, we can split the space $\textsf{T} E$ at the point $(x^i,y^a)$ into vertical vectors ${\scriptscriptstyle V} E$ and horizontal vectors. The vertical space is spanned by the vectors $(\partial_a)$, the horizontal one by $(\partial_i+y^a_i\partial_a)$. The dual space $\textsf{T}^\ast E$ is also split. The annihilator of the space of vertical vectors is spanned by $(\mathrm{d} x^i)$, and the annihilator of the space of horizontal vectors is spanned by $(dy^a-y^a_i\mathrm{d} x^i)$. We can therefore identify vertical differentials $\mathrm{d} y^a$ with $dy^a-y^a_i\mathrm{d} x^i$. Let us look at elements of $\wedge^m_1\textsf{T}^\ast E$ in the basis induced by the jet: \begin{equation}\label{eq:split} \varphi=A\,\eta+B^i_a dy^a\wedge\eta_i= A\,\eta+B^i_a (\mathrm{d} y^a+y^a_j\mathrm{d} x^j)\wedge\eta_i= (A+B^j_ay^a_j)\eta+B^i_a\mathrm{d} y^a\wedge\eta_i\,. \end{equation} We see in (\ref{eq:split}) that, using the jet, we can also split the space $\wedge^m_1\textsf{T}^\ast E$ into purely horizontal forms and forms that have one vertical factor. 
The value of an affine map corresponding to $\varphi$ is just the horizontal part of the form $\varphi$ under the splitting induced by the jet. The part with one vertical factor can now be written as $B^i_a\mathrm{d} y^a\otimes\eta_i$ and identified with the projection of an element of $\wedge^m_1\textsf{T}^\ast E$ on $\mathcal{P} E$. \subsection{Example: Tulczyjew triple for time-dependent systems} \label{sec:14} Our first example will be the Tulczyjew triple for a time-dependent system for a fixed observer, i.e. when we can write the space of positions in the form of a Cartesian product with time. We put here $\zz:E=Q\times\mathbb R\to\mathbb R=M$ and get the following identifications: \begin{align*} {\scriptscriptstyle J}^1E&\simeq\textsf{T} Q\times\mathbb R\,,\\ \mathcal{P} E&\simeq \textsf{T}^*Q\times\mathbb R\,,\\ {\scriptscriptstyle V}^+{\scriptscriptstyle J}^1E&\simeq \textsf{T}^*\textsf{T} Q\times\mathbb R\,,\\ {\scriptscriptstyle J}^1\mathcal{P} E&\simeq \textsf{T}\sT^* Q\times\mathbb R\,,\\ {\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E&\simeq \textsf{T}^*\textsf{T}^* Q\times\mathbb R\,. 
\end{align*} Thus, the Tulczyjew triple takes the form $$\xymatrix@C-35pt@R-5pt{ &&&& { \mathcal{D}}\ar@{ (->}[d]&&&&\\ & \textsf{T}^\ast\textsf{T}^\ast Q\times\mathbb R \ar[dr]^{} \ar[ddl]_{} & & & \textsf{T}\sT^\ast Q\times\mathbb R\ar[lll]_{{\beta}}\ar[rrr]^{\alpha}\ar[dr]^{}\ar[ddl]_/-20pt/{} & & & \textsf{T}^\ast\textsf{T} Q\times\mathbb R \ar[ddl]_/-25pt/{} \ar[dr]^{} & \\ & & \textsf{T} Q\times\mathbb R\ar[ddl] & & & \textsf{T} Q\times\mathbb R \ar[lll]_/+10pt/{} \ar[rrr]^/-10pt/{}\ar[ddl]_{} & & & \textsf{T} Q\times\mathbb R\ar[ddl]_{}\ar@/_1pc/[ul]_/-5pt/{\mathrm{d}^\sv L} \\ \textsf{T}^\ast Q\times\mathbb R\ar[dr]^{}\ar@/^1pc/[uur]^{{\underline{\mathrm{d}}}^v H} & & & \textsf{T}^\ast Q\times\mathbb R\ar[rrr]^{}\ar[dr]^{}\ar[lll]_{} & & & \textsf{T}^\ast Q\times\mathbb R \ar[dr]^{} & & \\ & Q\times\mathbb R& & & Q\times\mathbb R\ar[lll]_{id}\ar[rrr]^{} & & & Q\times\mathbb R& } $$ \subsection{Example: Scalar fields} \label{sec:15} The theory of a scalar field is based on the fibration $E=M\times\mathbb R\rightarrow M$. On the Lagrangian side, we get the following identifications: $$\begin{aligned} {\scriptscriptstyle J}^1 E & \simeq \mathbb R\times\textsf{T}^\ast M\,, & (\varphi,f)\,,\\ \mathcal{P}E &\simeq \mathbb R\times\Omega^{m-1}\,,& (\varphi,p)\,, \\ {\scriptscriptstyle V}^+{\scriptscriptstyle J}^1 E& \simeq \mathbb R\times\textsf{T}^\ast M\times_M\Omega^{m}\times_M\Omega^{m-1}\,, & (\varphi,f,a,p)\,, \\ {\scriptscriptstyle J}^1\mathcal{P} & \simeq \mathbb R\times\textsf{T}^\ast M\times_M{\scriptscriptstyle J}^1\Omega^{m-1}\,, & (\varphi, f, {\scriptscriptstyle j}^1 p)\,. \end{aligned}$$ The map $\alpha: {\scriptscriptstyle J}^1\mathcal{P}\longrightarrow {\scriptscriptstyle V}^+{\scriptscriptstyle J}^1 E$ reads $$\alpha(\varphi, f, {\scriptscriptstyle j}^1 p)=(\varphi,f,\mathrm{d} p(x), p(x)),$$ where $x\mapsto p(x)$ is any representative of ${\scriptscriptstyle j}^1 p$. Let $g$ be a metric tensor on $M$. 
Denote by $G:\textsf{T} M\rightarrow \textsf{T}^\ast M$ the corresponding isomorphism, by $\omega$ the volume form associated with the metric, and by $\star f=G^{-1}(f)\,\lrcorner\,\omega$ the Hodge operator. For the Lagrangian $$L(\varphi,f)=\frac12 f\wedge\star f\,,$$ we get $$\mathrm{d}^{{\scriptscriptstyle v}}L(\varphi,f)=(\varphi,\; f,\; 0,\; \star f)\quad{ \in\mathbb R\times\textsf{T}^\ast M\times_M\Omega^{m}\times_M\Omega^{m-1}}.$$ A section $M\ni x\longmapsto (\varphi(x), p(x))\in\mathcal{P} E$ is a solution of the Lagrange equations if $${\scriptscriptstyle j}^1(\varphi,p)(x)\in\alpha^{-1}(\mathrm{d}^{\scriptscriptstyle v} L(\varphi(x),\mathrm{d} \varphi(x)))\,,$$ i.e. \begin{align*} \mathrm{d} \varphi(x)&=f\,, \\ p&=\star f\,, \\ \mathrm{d} p&=0\,.\end{align*} The corresponding Euler-Lagrange equation is therefore $$\mathrm{d} \star\mathrm{d} \varphi=0,\quad\text{i.e.}\quad \Delta \varphi=0\,.$$ Since ${\scriptscriptstyle J}^1E=\mathbb R\times\textsf{T}^\ast M\rightarrow M\times\mathbb R=E$ is a vector bundle, the Hamiltonian side is simplified. The fibre of the bundle ${\scriptscriptstyle J}^1E\rightarrow E$ over $(\varphi, x)$ is equal to $\textsf{T}_x^\ast M$, so any affine map $A_{\varphi,x}:\textsf{T}_x^\ast M\rightarrow \Omega_x^m$ on the fibre takes the form $$A_{\varphi,x}(f)=f\wedge p+a\,,$$ where $p\in\Omega_x^{m-1}$ and $a\in\Omega_x^m$. We have therefore $$\begin{aligned} {\scriptscriptstyle J}^\dag E & \simeq \mathbb R\times\Omega^{m-1}\times_M\Omega^{m}\,, & (\varphi, p, a)\,, \\ {\scriptscriptstyle P} {\scriptscriptstyle J}^\dag E & \simeq \mathbb R\times\Omega^{m-1}\times_M\textsf{T}^\ast M\times_M \Omega^m\,, & (\varphi,p, f, a)\,.
\\ \end{aligned}$$ The map $\beta: {\scriptscriptstyle J}^1\mathcal{P}\longrightarrow {\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E$ reads $$\beta(\varphi, f, {\scriptscriptstyle j}^1 p)=(\varphi,p(x),f,\mathrm{d} p(x)),$$ where $x\mapsto p(x)$ is any representative of ${\scriptscriptstyle j}^1 p$. Since the bundle $\theta: {\scriptscriptstyle J}^\dag E\rightarrow \mathcal{P}$ is trivial, Hamiltonians are maps $H:\mathcal{P}\rightarrow \Omega^m$. For the Hamiltonian $$H(\varphi, p)=\frac12p\wedge\star p\,,$$ we get $$\mathrm{d}^{{\scriptscriptstyle v}}H(\varphi, p)=(\varphi, p, \star p, 0).$$ The Hamilton equations for a section $M\ni x\longmapsto (\varphi(x), p(x))$ read \begin{align} \mathrm{d} \varphi(x)&=\star p, \\ \mathrm{d} p&=0.\end{align} The above equations lead to the following equation for the field $x\mapsto \varphi(x)$: $$\mathrm{d}\star\mathrm{d}\varphi=0,\quad\text{i.e.}\quad \Delta \varphi=0.$$ \subsection{Example: Vector fields} \label{sec:16} Let us suppose that the bundle $\zz: E\rightarrow M$ is a vector bundle. In such a case, the bundle ${\scriptscriptstyle J}^1 E\rightarrow M$ is also a vector bundle with a distinguished subbundle $W=\{{\scriptscriptstyle j}^1\sigma(x):\; \sigma(x)=0_x\}$. The space $\textsf{T}_{0_x}E$ is a direct sum of the space of vertical vectors ${\scriptscriptstyle V}_{0_x}E\simeq E_x$ and the space of vectors tangent to the zero-section, which can be identified with $\textsf{T}_xM$. It follows that $W\simeq \textsf{T}^\ast M\otimes E$. The projection ${\scriptscriptstyle J}^1E\rightarrow E$ coincides with the projection ${\scriptscriptstyle J}^1 E\rightarrow ({\scriptscriptstyle J}^1 E\slash W)\simeq E$. Let us denote by ${\scriptscriptstyle J}^\ast E\rightarrow M$ the bundle dual to ${\scriptscriptstyle J}^1 E\rightarrow M$. There is a projection ${\scriptscriptstyle J}^\ast E\rightarrow W^\ast\simeq \textsf{T} M\otimes E^\ast$.
We have the identifications \begin{align*} {\scriptscriptstyle V}^+{\scriptscriptstyle J}^1 E&\simeq {\scriptscriptstyle J}^1E\times_M({\scriptscriptstyle J}^\ast E\otimes\Omega^m)\,, \\ \mathcal{P} E&\simeq E\times_M (W^\ast\otimes\Omega^m)\,, \\ {\scriptscriptstyle J}^1\mathcal{P}&\simeq {\scriptscriptstyle J}^1E\times_M {\scriptscriptstyle J}^1(W^\ast\otimes\Omega^m)\,. \end{align*} The map $$\alpha: {\scriptscriptstyle J}^1E\times_M {\scriptscriptstyle J}^1(W^\ast\otimes\Omega^m) \longrightarrow {\scriptscriptstyle J}^1E\times_M{\scriptscriptstyle J}^\ast E\otimes\Omega^m $$ separates into two maps $\alpha=\alpha_1\times\alpha_2$. The first factor is the identity on ${\scriptscriptstyle J}^1E$, and the second is a bundle morphism $$\xymatrix{ {\scriptscriptstyle J}^1(W^\ast\otimes\Omega^m)\ar[r]^{\alpha_2}\ar[d] & {\scriptscriptstyle J}^\ast E\otimes\Omega^m \ar[d] \\ W^\ast\otimes\Omega^m\ar[r]^{=}& W^\ast\otimes\Omega^m}\,.$$ Starting from coordinates $(x^i, y^a)$ linear in fibres of $\zeta$, we get coordinates $(x^i, \varphi_a, \varphi^j_b)$ in ${\scriptscriptstyle J}^\ast E\otimes\Omega^m$ linear in fibres over $M$ and $(x^i, p^i_a, p^j_{bl})$ in ${\scriptscriptstyle J}^1(W^\ast\otimes\Omega^m)$. The map $\alpha_2$ reads $$\alpha_2(x^i, p^j_a, p^k_{bl})=(x^i, \sum_j p^j_{aj}, p^k_b).$$ On the Hamiltonian side, in view of theorem \ref{th:1} in its version for vector spaces, there is a canonical identification, $${\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E\simeq{\scriptscriptstyle J}^1 E\times_M{\scriptscriptstyle J}^\ast E \otimes\Omega^m\,,$$ with the two projections: $pr_1$ on ${\scriptscriptstyle J}^1 E$ and $\tx{i}$ on $E\times_M W^\ast\otimes \Omega^m$. 
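The coordinate formula for $\alpha_2$ above is a pointwise contraction of jet indices. As a minimal sketch (the array shapes and names below are our own illustration, not part of the construction), it can be coded as:

```python
import numpy as np

def alpha2(p, dp):
    """Coordinate form of the map alpha_2 at a single point x.

    p  : shape (m, n) array of momenta p^j_a
    dp : shape (m, n, m) array of jet coordinates p^j_{al}
    Returns (phi, phi_jet) with phi_a = sum_j p^j_{aj} and phi^j_b = p^j_b.
    """
    phi = np.einsum('jaj->a', dp)  # divergence-like trace over the two jet indices
    return phi, p
```

The trace $\sum_j p^j_{aj}$ is exactly the divergence of the momentum that reappears later in the field equations.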
Out of coordinates $(x^i, y^a, p^i_b, r)$ in ${\scriptscriptstyle J}^\dag E$, we get coordinates $(x^i, y^a, p^j_b, \pi_b, \pi^c_k)$ in ${\scriptscriptstyle P}{\scriptscriptstyle J}^\dag E$ with projections \begin{align*} (x^i, y^a, p^j_b, \pi_b, \pi^c_k)& \longmapsto (x^i, y^a, \pi^c_k)\in{\scriptscriptstyle J}^1 E \\ \intertext{and} (x^i, y^a, p^j_b, \pi_b, \pi^c_k)& \longmapsto (x^i, y^a, p^j_b)\in E\times_M W^\ast\otimes\Omega^m. \end{align*} The map $\beta$ reads \begin{align*} \beta: {\scriptscriptstyle J}^1E\times_M {\scriptscriptstyle J}^1(W^\ast\otimes\Omega^m)&\longrightarrow {\scriptscriptstyle J}^1 E\times_M{\scriptscriptstyle J}^\ast E \otimes\Omega^m\,;\\ \intertext{in coordinates:} (x^i, y^a, y^b_j, p^k_c, p^l_{dm})&\longmapsto (x^i, y^a, p^j_b, -\sum_kp^k_{ck}, y^b_j). \end{align*} The Lagrangian and Hamiltonian spaces are, as usual, isomorphic. Here, it is even more visible, because of theorem \ref{th:1}. However, we have to remember that, on the Lagrangian side, the dynamics is generated out of the map $L:{\scriptscriptstyle J}^1E\rightarrow\Omega^m$, and on the Hamiltonian side out of the section $H:E\times W^\ast\otimes\Omega^m\rightarrow {\scriptscriptstyle J}^\dag E$. In some sense, the Lagrangian side is associated with the projection $pr_1$ on ${\scriptscriptstyle J}^1E$, while the Hamiltonian side with the projection $\tx{i}$ on $E\times W^\ast\otimes\Omega^m$. \subsection{Example: Electromagnetics} \label{sec:20} Now let us check how electromagnetism, i.e.\ a true physical theory, fits into the general scheme. In electrodynamics, the fields (electromagnetic potentials $A$) are one-forms on the four-dimensional manifold $M$ equipped with a metric of Lorentz signature, but we can work with $M$ of arbitrary dimension $m>1$. In our model, $E=\textsf{T}^\ast M$ and $\zz=\pi_M$. The symbols $\vee$ and $\wedge$ denote the symmetrized and antisymmetrized tensor product, respectively.
The canonical density associated to the metric is $\omega$, while $\omega_M$ stands, as usual, for the canonical symplectic form on $\textsf{T}^\ast M$. Let us take a closer look at the structure of the vector space ${\scriptscriptstyle J}^1_x\textsf{T}^\ast M$ for a fixed $x\in M$. As in the general case of a vector field, the space ${\scriptscriptstyle J}^1_x\textsf{T}^\ast M$ is a vector space with the distinguished subspace $W_x$ of jets of one-forms on $M$ which take the value $0$ at $x$. It follows from the general considerations concerning vector fields that $W_x\simeq \textsf{T}^\ast_xM\otimes\textsf{T}^\ast_xM$. Using the canonical splitting of 2-tensors into symmetric and antisymmetric parts, we get that $W_x\simeq \vee^2\textsf{T}^\ast_xM\oplus\wedge^2\textsf{T}^\ast_x M$. In ${\scriptscriptstyle J}^1_x\textsf{T}^\ast M$ there is another vector subspace $S_x$ of jets of closed forms. Since everything is local here, we can consider them as jets of differentials of local functions on $M$. It is easy to see that $\vee^2\textsf{T}^\ast_xM=W_x\cap S_x$ and ${\scriptscriptstyle J}^1_x\textsf{T}^\ast M=W_x+S_x$. Moreover, there is an isomorphism ${\scriptscriptstyle J}^1_x\textsf{T}^\ast M\slash S_x\simeq \wedge^2\textsf{T}^\ast_x M$. Note also the canonical maps: $$\zg:{\scriptscriptstyle J}^1\textsf{T}^\ast M\to \wedge^2\textsf{T}^\ast M\,,\quad \zg(j^1A(x))=\mathrm{d} A(x)\,,$$ and $$\bar L:\wedge^2\textsf{T}^\ast M\to\zW^m\,,\quad \bar L(F)=\frac{1}{2}F\wedge\star F\,,$$ where $\star$ is the `Hodge star' associated with the metric. {Taking now $L=\bar L\circ\zg:{\scriptscriptstyle J}^1\textsf{T}^\ast M\to\zW^m$ as our Lagrangian, we get $$\mathrm{d}^{\scriptscriptstyle v} L(j^1A)(j^1B)=\frac12(\mathrm{d} B\wedge\star F+F\wedge\star\mathrm{d} B)=\mathrm{d} B\wedge\star F\,,$$ where $F=\mathrm{d} A$.} The Lagrangian is constant on fibres of the projection $\zg$.
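For orientation, it may help to spell out $\bar L$ in components. Assuming the convention that $F\wedge\star F=\langle F,F\rangle\,\omega$ for a two-form $F$ (signs depend on the signature and Hodge-star conventions), we have
$$F=\mathrm{d} A=\tfrac12 F_{\mu\nu}\,\mathrm{d} x^\mu\wedge\mathrm{d} x^\nu\,,\qquad F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu\,,\qquad \bar L(F)=\frac12 F\wedge\star F=\frac14 F_{\mu\nu}F^{\mu\nu}\,\omega\,,$$
which agrees with the familiar Maxwell density $-\frac14 F_{\mu\nu}F^{\mu\nu}$ up to a convention-dependent sign.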
As the phase space we get $\mathcal{P}\simeq \textsf{T}^\ast M\times_M W^\ast\otimes\Omega^m$. Since $W^\ast$ can also be split into symmetric and antisymmetric parts, we have $\mathcal{P}\simeq \textsf{T}^\ast M\times_M (\vee^2\textsf{T} M\oplus\wedge^2\textsf{T} M)\otimes\Omega^m$. The Legendre map $\lambda: {\scriptscriptstyle J}^1 \textsf{T}^\ast M\rightarrow\mathcal{P}$ associated with the electromagnetic Lagrangian reads $$\lambda({\scriptscriptstyle j}^1 A(x))=(\;A(x),\; 0,\; G(\mathrm{d} A(x))\otimes\eta\;).$$ The symmetric part of the momentum vanishes as a consequence of the fact that the Lagrangian depends only on the antisymmetric part of the jet. The only nontrivial part of the momentum is then a bivector density or (according to Weyl duality) an odd two-form. The constitutive set $\mathcal{D}_L$ on the Lagrangian side is a subset of ${\scriptscriptstyle J}^1\textsf{T}^\ast M\times{\scriptscriptstyle J}^\ast\textsf{T}^\ast M\otimes\Omega^m$ given by $$\mathcal{D}_L=\alpha^{-1}(\mathrm{d}^\sv L({\scriptscriptstyle J}^1\textsf{T}^\ast M))=\{(j^1A(x),\Phi(x)): \bar\za(\Phi(x))=\mathrm{d}^{\scriptscriptstyle v} L(j^1A(x))\}\,.$$ {A 1-form $A$ satisfies the Euler-Lagrange equation if $\mathrm{d}^{\scriptscriptstyle v} L(j^1(A))$ is $\za$-related to the first jet $j^1(\chi)$ of a section $\chi=X^k\otimes\zvy_k$ of $\mathcal{P}$, i.e., for all 1-forms $B$, $$\mathrm{d} B\wedge\star F=\mathrm{d}(B\wedge\star F)+B\wedge\mathrm{d}\star F=\mathrm{d}\left(\langle X^k,B\rangle\right)\zvy_k\,.$$ } {It is easy to see that $\mathrm{d}(B\wedge\star F)$ is always of the required form, while $B\wedge\mathrm{d}\star F$ never is, except in the case $\mathrm{d}\star F=0$.} {In this way we have obtained the Maxwell equations (without sources).} \medskip On the Hamiltonian side of the triple, we need a generating object of the constitutive set in the form of a section of the bundle ${\scriptscriptstyle J}^+\textsf{T}^\ast M\rightarrow \textsf{T}^\ast M\times_M W^\ast$ (or,
more generally, a section supported on a submanifold or a family of sections). Out of the general theory we know that $\mathcal{D}_L$ is certainly generated by the family of sections corresponding to the following family of density valued functions: $$H:{\scriptscriptstyle J}^+ \textsf{T}^\ast M\times_{\textsf{T}^\ast M}{\scriptscriptstyle J}^1\textsf{T}^\ast M\rightarrow \Omega^m,\qquad H(\varphi, {\scriptscriptstyle j}^1A)=\varphi({\scriptscriptstyle j}^1 A)-L({\scriptscriptstyle j}^1 A).$$ Critical points of this family are given by the Legendre map, i.e. $(\varphi,{\scriptscriptstyle j}^1 A)$ is critical if $\lambda({\scriptscriptstyle j}^1 A)$ equals the linear part of $\varphi$. It follows that the generating family $H$ can be replaced by a simpler generating object, namely one section $h$ supported on the submanifold $\lambda({\scriptscriptstyle J}^1\textsf{T}^\ast M)$, $$\lambda({\scriptscriptstyle J}^1\textsf{T}^\ast M)=\{(A, r,p)\in\textsf{T}^\ast M\times_M \vee^2\textsf{T}^\ast M\times_M\wedge^2\textsf{T}^\ast M:\quad r=0\}.$$ The value of $h$ at $(A, 0, p)$ is an affine map on the fibre of ${\scriptscriptstyle J}^1\textsf{T}^\ast M$ over $A\in\textsf{T}^\ast M$. To know $h(A,0,p)$ we have to know how it acts on the jet ${\scriptscriptstyle j}^1\alpha(x)$, where $\alpha(x)=A$. For $h$, we get the following formula: $$h(A, 0, p)({\scriptscriptstyle j}^1\alpha(x))=\langle\; p,\; \mathrm{d}\alpha(x)\;\rangle\omega-\frac12 p\wedge\star p\,.$$ \section*{Appendix: The proof of theorem \ref{th:1}} \label{sec:17} \begin{proof} Let $U\subset V$ be a vector subbundle of $V$ such that $V\simeq W\oplus_N U$. As a set then, $V\simeq W\times_N U$.
Once we have chosen $U$, we can get the following identifications: $$ V\simeq U\times_N W,\qquad V^\ast\simeq U\times_N W^*, \qquad V^\dag_W\simeq (U\times_N W^*)\times\mathbb R,$$ and finally \begin{equation}\label{eq:pv} {\scriptscriptstyle P} V^\dag_W\simeq \textsf{T}^*(U\times_N W^\ast)\simeq \textsf{T}^*U\times_{\textsf{T}^*N} \textsf{T}^*W^*\,. \end{equation} On the other hand, $$\textsf{T}^*V=\textsf{T}^*(U\times_N W)\simeq \textsf{T}^*U\times_{\textsf{T}^*N} \textsf{T}^*W $$ and we obtain a symplectomorphism, using the identity on the first factor and the canonical double vector bundle morphism $\mathcal{R}_W:\textsf{T}^\ast W\rightarrow\textsf{T}^\ast W^\ast$ (see (\ref{eq:izoR})) composed with minus the identity on the second factor. Of course, the identifications we have used depend on the choice of $U$. We now have to show that the isomorphism between ${\scriptscriptstyle P} V^\dag$ and $\textsf{T}^*V$ is canonical, even if we pass through two non-canonical maps. The vector bundle $V$ and its subbundle $W$ give rise to the following canonical structures. We have the affine bundle $\zt:V\rightarrow V\slash W$, the subbundle $W^0$ of $V^\ast$, the affine bundle $\pi:V^\ast\rightarrow V^\ast\slash W^0$ and canonical isomorphisms $(V\slash W)^\ast\simeq W^0$, $W^\ast\simeq V^\ast\slash W^0$. The choice of $U$ gives rise to two isomorphisms: \begin{align*} F:V\slash W &\longrightarrow U \subset V\,, \\ G:V^\ast\slash W^0\simeq W^\ast &\longrightarrow U^0 \subset V^\ast\,, \end{align*} where, clearly, $W^*=U^0, U^*=W^0\subset V^*$ are the annihilators of the subbundles $U,W\subset V$, respectively. Choosing $U'$ instead of $U$, we get $F'$ and $G'$. Choosing an appropriate linear map $A:V\slash W\rightarrow W$, we can write \begin{align*} F'(q)&=F(q)+A(q)\,,\\ G'(a)&=G(a)-A^\ast(a)\,.
\end{align*} For any $v\in\tau^{-1}(q)$ and $\alpha\in\pi^{-1}(a)$, we get two decompositions: \begin{equation}\label{eq:decompositons} v=w+F(q)=w'+F'(q)\,, \qquad \alpha=G(a)+b=G'(a)+b'\,, \end{equation} with \begin{equation}\label{eq:decompositons2}w'=w-A(q)\quad\text{and}\quad b'= b+A^\ast(a)\,.\end{equation} Using $U$ and $U'$, we get also two decompositions: $$V^\dag_W\;\simeq\; U\times_N U^0\times\mathbb R\;\simeq\; U'\times_N (U')^0\times\mathbb R.$$ As our considerations are local, we can assume that the bundles are trivial and ignore the basic coordinates of $N$. An element $\varphi\in V^\dag_W$ over $q$, with the linear part equal to $a$, is represented by $(F(q), G(a), r)$ or $(F'(q), G'(a), r')$, where \begin{equation}\label{eq:equal}r'=r+\langle G(a), A(q)\rangle.\end{equation} A section $\sigma$ of the bundle $V^\dag_W\rightarrow V\slash W\times_N W^\ast$ in the neighbourhood of a point $(q_0, a_0)\in V\slash W\times_N W^\ast$ can be written as $$\sigma(q_0+\delta q, a_0+\delta a)=(F(q_0+\delta q), G(a_0+\delta a), r(q_0+\delta q,a_0+\delta a))\,.$$ For the purpose of studying ${\scriptscriptstyle P} V^\dag_W$, it is enough to consider sections that are affine with respect to $\delta q$ and $\delta a$. Such sections (in the decomposition given by $U$) are defined by two elements $w\in W$ and $b\in W^0$, $$r(q_0+\delta q,a_0+\delta a)=\langle b, F(q_0+\delta q)\rangle-\langle G(a_0+\delta a), w\rangle\,.$$ For the pair $(v_0, \alpha_0)\in V\times V^* $ such that $\tau(v_0)=q_0$ and $\pi(\alpha_0)=a_0$, we get two decompositions as in (\ref{eq:decompositons}) and (\ref{eq:decompositons2}). Now we should check whether the two sections $\sigma$ for $(w_0, b_0)$ and $\sigma'$ for $(w_0', b_0')$ are equivalent, i.e. whether they have the same affine differential. 
We have: \begin{align*} &\sigma(q_0+\delta q, a_0+\delta a)=(F(q_0+\delta q), G(a_0+\delta a), r(q_0+\delta q,a_0+\delta a))\,,\\ \intertext{with} &r(q_0+\delta q,a_0+\delta a)=\langle b_0, F(q_0+\delta q)\rangle-\langle G(a_0+\delta a), w_0\rangle\,, \\ \intertext{and} &\sigma'(q_0+\delta q, a_0+\delta a)=(F'(q_0+\delta q), G'(a_0+\delta a), r'(q_0+\delta q,a_0+\delta a))\,,\\ \intertext{with} &r'(q_0+\delta q,a_0+\delta a)=\langle b'_0, F'(q_0+\delta q)\rangle-\langle G'(a_0+\delta a), w'_0\rangle\,. \end{align*} The difference between those two sections is a function on $V\slash W\times W^\ast$ (we have used (\ref{eq:equal}) here) which reads as \begin{multline*}\sigma'(q_0+\delta q, a_0+\delta a)-\sigma(q_0+\delta q, a_0+\delta a)=\\ r'(q_0+\delta q,a_0+\delta a)-r(q_0+\delta q,a_0+\delta a)-\langle G(a_0+\delta a), A(q_0+\delta q)\rangle=\\ \langle G(a_0), A(q_0)\rangle-\langle G(\delta a),A(\delta q)\rangle. \end{multline*} The differential of this function at $(q_0, a_0)$ is equal to zero, since the first term is constant and the second is quadratic. This means that the affine differentials $\underline{\mathrm{d}}\sigma(q_0, a_0)$ and $\underline{\mathrm{d}}\sigma'(q_0, a_0)$ are equal. The differential is therefore given by $(v_0, \alpha_0)$\,. \end{proof} \begin{remark} Note that a side result of the above theorem is the following `exotic' double vector-affine bundle structure on the bundle ${\scriptscriptstyle P}(V^\dag_W)\simeq\textsf{T}^\ast V$: $$\xymatrix@C-10pt{ & {\scriptscriptstyle P}(V^\dag_W)\simeq\textsf{T}^\ast V\ar[dl]_{\tau\times\pi}\ar[dr]^{pr_1} & \\ V\slash W\times_N W^\ast & & V \,.\\ }$$ The isomorphisms ${\scriptscriptstyle P}(V^\dag_W)\simeq\textsf{T}^\ast V$, where $W$ runs through all vector subbundles of $V$, yield in particular $id_{\textsf{T}^\ast V}$ for $W=\{ 0\}$ and $-\mathcal{R}_V:\textsf{T}^\ast V\to\textsf{T}^\ast V^\ast$, for $W=V$ (see (\ref{eq:izoR})).
In theorem \ref{th:1}, the canonical symplectomorphism can be replaced by a canonical anti-symplectomorphism, its negative, which is often used in physical theories. \end{remark} \begin{remark} An alternative `symplectic' proof of theorem \ref{th:1} is also possible. As in the case of the canonical isomorphism $\textsf{T}^\ast V\simeq\textsf{T}^\ast V^\ast$, there is a symplectic relation $S\subset\textsf{T}^\ast V\times\textsf{T}^\ast V^\dag_W$ generated by the evaluation $V^\dag_W\times_{V\slash W } V\ni(\varphi, v)\mapsto\varphi(v)\in \mathbb R$. The relation, composed with symplectic reduction with respect to a certain coisotropic submanifold in $\textsf{T}^\ast V^\dag_W$, gives the isomorphism between $\textsf{T}^\ast V$ and ${\scriptscriptstyle P} V^\dag_W$. It is clear from the construction that the isomorphism is a symplectomorphism and a double vector-affine bundle morphism. \end{remark}
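The bookkeeping in the proof of theorem \ref{th:1} can be checked in a finite-dimensional toy model. In the sketch below (our own conventions, not part of the original text) we take $V=\mathbb{R}^2\oplus\mathbb{R}^2$ with $W$ the first and $U$ the second summand, identify $V/W\simeq U\simeq\mathbb{R}^2$ and $W^\ast\simeq\mathbb{R}^2$, and verify numerically that the difference of the two section representatives equals $\langle G(a_0),A(q_0)\rangle-\langle G(\delta a),A(\delta q)\rangle$:

```python
import numpy as np

# V = R^2 (+) R^2, W = first summand, U = second; V/W and W* both ~ R^2.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))                 # the map A : V/W -> W
q0, dq = rng.normal(size=2), rng.normal(size=2)
a0, da = rng.normal(size=2), rng.normal(size=2)
w0, b0 = rng.normal(size=2), rng.normal(size=2)

def r(q, a):
    # third component of sigma in the splitting U:
    # r = <b0, F(q)> - <G(a), w0>, with F(q) = (0, q) and G(a) = (a, 0)
    return b0 @ q - a @ w0

def r_prime(q, a):
    # the same section written in the splitting U' (F' = F + A, G' = G - A*),
    # with b0' = b0 + A*(a0) and w0' = w0 - A(q0)
    b0p = b0 + A.T @ a0
    w0p = w0 - A @ q0
    return b0p @ q - a @ w0p

q, a = q0 + dq, a0 + da
diff = r_prime(q, a) - r(q, a) - a @ (A @ q)   # subtract <G(a), A(q)>
assert np.isclose(diff, a0 @ (A @ q0) - da @ (A @ dq))
```

The assertion reproduces the closed-form difference from the proof: a constant term plus a term quadratic in $(\delta q,\delta a)$, so the affine differentials at $(q_0,a_0)$ coincide.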
\section{Introduction} With the discovery of a Higgs-like boson at the LHC \cite{Aad:2012tfa,Chatrchyan:2012ufa}, the question of the Standard Model (SM) vacuum stability has received renewed attention, with several high-precision analyses on the subject \cite{Holthausen:2011aa,EliasMiro:2011aa,Bezrukov:2012sa,Degrassi:2012ry,Alekhin:2012py, Masina:2012tz,Buttazzo:2013uya,Antipin:2013sga,Branchina:2013jra} (see also \cite{Lindner:1988ww,Arnold:1989cb,Sher:1988mj,Sher:1993mf,Ford:1992mv,Altarelli:1994rb,Casas:1994qy, Espinosa:1995se,Casas:1996aq,Isidori:2001bm,Isidori:2007vm,Ellis:2009tp} for earlier works). Absolute vacuum stability bounds are usually obtained by requiring that the electroweak vacuum is the absolute minimum of the effective potential, at least up to some cutoff scale, $\Lambda_{\rm{SM}}$, where the SM is no longer valid and new physics is required in order to modify the shape of the effective potential.\footnote{Such a requirement can be relaxed if the tunnelling probability of the electroweak vacuum is small enough to comply with the age of the universe.} It would be tempting (as is often done) to identify the physical threshold, $\Lambda_{\rm{SM}}$, with the SM vacuum instability scale, $\Lambda$, which is operationally defined by the field value at which the effective potential becomes deeper than the electroweak minimum. However, due to the gauge dependence of the effective potential, $\Lambda$ suffers from an irreducible gauge ambiguity which makes its identification with $\Lambda_{\rm{SM}}$ problematic. The gauge dependence of the effective potential has been known for a long time. Soon after the seminal work of Coleman and Weinberg \cite{Coleman:1973jx}, it was realized by Jackiw \cite{Jackiw:1974cv} that the effective potential is actually gauge dependent, thus raising the question of its physical significance.
Since then, many authors have dealt with this subject \cite{Dolan:1974gu,Kang:1974yj,Fischler:1974ue,Frere:1974ia,Nielsen:1975fs,Fukuda:1975di, Aitchison:1983ns,Johnston:1984sc,Thompson:1985hp,Kobes:1990dc,Ramaswamy:1995np,Metaxas:1995ab,DelCima:1999gg,Gambino:1999ai,Alexander:2008hd} and it is now a well-established practice to extract the physical content of the effective potential by means of the so-called Nielsen identities \cite{Nielsen:1975fs}. \\ In particular, the issue of the gauge dependence of the effective potential in the analysis of the SM vacuum stability was already pointed out at the end of the 90's by Loinaz and Willey \cite{Loinaz:1997td}, who challenged the possibility of setting gauge-independent lower bounds on the Higgs boson mass from vacuum stability constraints. More recently, the problematic identification between the cutoff scale of the SM and the instability scale $\Lambda$ was mentioned again in Ref.~\cite{Gonderinger:2012rd}. The aim of this paper is to clarify some issues related to the gauge dependence of the quantities entering the vacuum stability analysis. While the critical value of the Higgs boson mass, marking the transition between the stable and unstable phases of the SM, can be formally proven to be gauge independent, the SM instability scale is actually gauge dependent. This is explicitly shown by a direct calculation of the gauge-dependent one-loop effective potential in the SM. \\ The SM effective potential has long been known in the Landau gauge at one \cite{Coleman:1973jx} and two loops \cite{Ford:1992pn,Martin:2001vx}. Recently, even the three-loop QCD and top-Yukawa corrections have been included \cite{Martin:2013gka}. On the other hand, calculations of the SM effective potential beyond the Landau gauge are less explored.
Barring a few exceptions, like for instance Ref.~\cite{Patel:2011th}, where a background-field-dependent gauge fixing with a single gauge-fixing parameter was employed, the gauge dependence of the SM effective potential is usually not taken into consideration. The paper is organized as follows: in \sect{SMoneloopEP} we provide a pedagogical derivation of the SM one-loop effective potential in the Fermi gauge (generalized Lorentz gauge) and consider its renormalization group (RG) improvement. In \sect{physobsvacstab} we discuss the physical observables entering the vacuum stability analysis. In particular, by using the Nielsen identity \cite{Nielsen:1975fs}, we formally prove that the lower bound on the Higgs boson mass derived from the electroweak-vacuum-stability condition is gauge independent. On the other hand, the extrema of the effective potential and, in particular, the instability scale are in general gauge dependent. In \sect{gaugedepSMinst} we numerically quantify at next-to-leading order (NLO) accuracy the gauge dependence of $\Lambda$ in the Fermi gauge by varying the gauge-fixing parameters in their perturbative domain and comment on the gauge-fixing scheme dependence of $\Lambda$. The interpretation and the physical implications of the gauge dependence of $\Lambda$ are discussed in \sect{conclusions}. The two-loop renormalization group equations (RGEs) of the SM parameters in the Fermi gauge are collected in \app{RGEapp}, while in \app{BCKGgaugefull} we report on the calculation of the SM one-loop effective potential in a background $R_\xi$ gauge with the most general set of gauge-fixing parameters. As a by-product we also obtain the SM one-loop effective potential in the standard $R_\xi$ gauge, whose expression might be useful for broken-phase calculations.
\section{The SM effective potential at one loop} \label{SMoneloopEP} In order to set the notation, let us split the classical Lagrangian density of the electroweak sector of the SM in a gauge, Higgs and fermion part \begin{equation} \label{Lclassical} \mathcal{L}_{\rm{C}} = \mathcal{L}_{\rm{YM}} + \mathcal{L}_{\rm{H}} + \mathcal{L}_{\rm{F}} \, , \end{equation} with \begin{align} \label{LYM} \mathcal{L}_{\rm{YM}} &= -\frac{1}{4} \left( \partial_\mu W^a_\nu - \partial_\nu W^a_\mu + g \epsilon^{abc} W^b_\mu W^c_\nu \right)^2 -\frac{1}{4} \left( \partial_\mu B_\nu - \partial_\nu B_\mu \right)^2 \, , \\ \label{LH} \mathcal{L}_{\rm{H}} &= \left( D_\mu H \right)^\dag \left( D^\mu H \right) - V(H) \, , \\ \label{LF} \mathcal{L}_{\rm{F}} &= \overline{Q}_L i \gamma_\mu D^\mu Q_L + \overline{t}_R i \gamma_\mu D^\mu t_R + \left(- y_t \overline{Q}_L (i\sigma^2) H^* t_R + \rm{h.c.}\right) + \ldots \, , \end{align} where $W^a_\mu$ ($a=1,2,3$) and $B_\mu$ are the SU(2) and U(1) gauge fields, $H$ is the SM Higgs doublet with hypercharge $Y=1$ and $Q_L^T = (t_L, b_L)$ is the left-handed third generation quark doublet. Only the top quark is retained among the fermions and the QCD indices are suppressed in the quark sector. The covariant derivative is defined as \begin{equation} \label{defcovder} D_\mu = \partial_\mu - i g \frac{\sigma^a}{2} W^a_\mu + i g' \frac{Y}{2} B_\mu \, , \end{equation} where $\sigma^a$ ($a=1,2,3$) are the usual Pauli matrices and with the term involving $g$ being absent for right-handed fermions. The Higgs potential is \begin{equation} V (H) = -m^2 H^\dag H + \lambda (H^\dag H)^2 \, . \end{equation} The effective potential can be conveniently computed by means of the background field method of Jackiw \cite{Jackiw:1974cv}. 
After homogeneously shifting the scalar fields of the theory by a background (spacetime independent) field $\phi$, the one-loop effective potential is obtained by directly evaluating the path integral expression of the effective action in the Gaussian approximation. After some standard manipulations (see e.g.~also \cite{Delaunay:2007wb,Patel:2011th}), the one-loop effective potential \begin{equation} \label{1loopEP} V_{\rm{eff}}^{\rm{1-loop}} (\phi) = V^{(0)}_{\rm{eff}} (\phi) + V^{(1)}_{\rm{eff}} (\phi) \, , \end{equation} can be recast in terms of the well-known formulas \cite{Jackiw:1974cv} \begin{align} \label{treelevelEP} V^{(0)}_{\rm{eff}} (\phi) &= V(\phi) \, , \\ \label{1loopEPclosed} V^{(1)}_{\rm{eff}} (\phi) &= i \sum_{n \, = \, \text{SM fields}} \eta \int \frac{d^4 k}{(2\pi)^4} \log \mbox{det}\, i \tilde{\mathcal{D}}^{-1}_n \{ \phi; k \} \, . \end{align} The matrix $i \tilde{\mathcal{D}}^{-1}_n \{ \phi; k \}$ denotes the $\phi$-dependent inverse propagators of the SM fields in momentum space, the determinant acts on all the internal indices and $\eta = -1/2 \ (1)$ for bosons (fermions/ghosts) is the power of the functional determinant due to the Gaussian path integral. Gauge invariance allows us to perform the shift of the Higgs doublet in a specific direction of the $\rm{SU(2)} \otimes \rm{U(1)}$ space: \begin{equation} \label{Hshift} H(x) \rightarrow \frac{1}{\sqrt{2}} \left( \begin{array}{c} \chi^1(x) + i \chi^2(x) \\ \phi + h(x) + i \chi^3(x) \end{array} \right) \, , \end{equation} where $h$ denotes the Higgs field and $\chi^a$ ($a=1,2,3$) the Goldstone boson fields. At tree level, the effective potential reads \begin{equation} \label{V0} V^{(0)}_{\rm{eff}} (\phi) = -\frac{m^2}{2} \phi^2 + \frac{\lambda}{4} \phi^4 \, , \end{equation} while in order to compute the quantum correction, $V^{(1)}_{\rm{eff}}$, one needs to work out the inverse propagators of the dynamical fields in the shifted SM Lagrangian. 
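For concreteness, the textbook Landau-gauge, $\overline{\rm MS}$ evaluation of \eq{1loopEPclosed} reduces to a sum of Coleman-Weinberg terms over the field-dependent masses. The sketch below is only illustrative: the couplings are rough benchmark values at the top-mass scale (assumptions, not fitted inputs), absolute values are used as a crude regulator for negative mass-squared arguments, there is no RG improvement, and it is not the Fermi-gauge expression derived in this section.

```python
import numpy as np

def V1_loop(phi, mu=173.0, lam=0.13, m2=88.4**2, g=0.65, gp=0.36, yt=0.94):
    """Illustrative Landau-gauge, MS-bar one-loop correction V^(1)(phi).

    Each field contributes n * m^4/(64 pi^2) * (log(|m^2|/mu^2) - c),
    with c = 3/2 for scalars and fermions and c = 5/6 for gauge bosons.
    All dimensionful quantities in GeV.
    """
    mh2   = -m2 + 3.0*lam*phi**2          # Higgs, 1 dof
    mchi2 = -m2 + lam*phi**2              # Goldstones, 3 dof
    mW2   = (g*phi/2.0)**2                # W bosons, 2 x 3 dof
    mZ2   = (g**2 + gp**2)*phi**2/4.0     # Z boson, 3 dof
    mt2   = (yt*phi)**2/2.0               # top quark, -12 dof (fermion)

    def cw(msq, n, c):
        return n*msq**2/(64.0*np.pi**2)*(np.log(np.abs(msq)/mu**2) - c)

    return (cw(mh2, 1, 1.5) + cw(mchi2, 3, 1.5)
            + cw(mW2, 6, 5.0/6.0) + cw(mZ2, 3, 5.0/6.0)
            + cw(mt2, -12, 1.5))
```

At large field values the negative top-quark contribution dominates, which is the origin of the instability discussed in the text.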
As an illustration, we consider in the next section the computation of the one-loop SM effective potential in the Fermi gauge. The calculation of the SM effective potential in a background-field-dependent $R_\xi$ gauge and in the standard $R_\xi$ gauge is instead presented in \app{BCKGgaugefull}. \subsection{Fermi gauge} \label{Fermigauge} As long as we are interested in the high-energy behaviour of the effective potential, we can directly work in the unbroken phase of the SM. Then, the most convenient way to fix the gauge is by means of the Fermi gauge (generalized Lorentz gauge): \begin{equation} \label{gflagFermi} \mathcal{L}^{\rm{Fermi}}_{\rm{g.f.}} = -\frac{1}{2 \xi_W} \left( \partial^\mu W^a_\mu \right)^2 -\frac{1}{2 \xi_B} \left( \partial^\mu B_\mu \right)^2 \, . \end{equation} We are thus interested in the determination of the quadratic ($\phi$-dependent) part of the Lagrangian, $\mathcal{L}_{\rm{C}} + \mathcal{L}^{\rm{Fermi}}_{\rm{g.f.}}$, after the shift in \eq{Hshift}.\footnote{One can easily see that the bilinear ghost terms are $\phi$-independent.
Hence, in the Fermi gauge the ghost contribution decouples from the one-loop effective potential.} A straightforward calculation yields \begin{align} \label{LYMquadFermi} \mathcal{L}^{\rm{quad}}_{\rm{YM}} &= \tfrac{1}{2} W^a_\mu \left( \Box \, g^{\mu\nu} - \partial^\mu \partial^\nu \right) \delta^{ab} W^b_\nu + \tfrac{1}{2} B_\mu \left( \Box \, g^{\mu\nu} - \partial^\mu \partial^\nu \right) B_\nu \, , \\ \label{LHquadFermi} \mathcal{L}^{\rm{quad}}_{\rm{H}} &= \tfrac{1}{2} h \left( - \Box - \bar{m}_h^2 \right) h + \tfrac{1}{2} \chi^a \left( - \Box - \bar{m}_\chi^2 \right) \delta^{ab} \chi^b + \tfrac{1}{2} \bar{m}_W^2 W^a_\mu W^{a\mu} + \tfrac{1}{2} \bar{m}_B^2 B_\mu B^{\mu} \nonumber \\ & + \bar{m}_W \bar{m}_B W^3_\mu B^{\mu} - \bar{m}_W \partial_\mu \chi^1 W^{2\mu} - \bar{m}_W \partial_\mu \chi^2 W^{1\mu} + \bar{m}_W \partial_\mu \chi^3 W^{3\mu} + \bar{m}_B \partial_\mu \chi^3 B^{\mu} \, , \\ \label{LFquadFermi} \mathcal{L}^{\rm{quad}}_{\rm{F}} &= \overline{t} \left( i \slashed{\partial} - \bar{m}_t \right) t + \ldots \, , \end{align} where $\Box \equiv \partial_\mu \partial^\mu$ and we defined the $\phi$-dependent masses \begin{align} \label{mhphiFermi} \bar{m}_h^2 &= -m^2 + 3 \lambda \phi^2 \, , \\ \label{mchiphi} \bar{m}_{\chi}^2 &= -m^2 + \lambda \phi^2 \, , \\ \label{defmwFermi} \bar{m}_W &= \tfrac{1}{2} g \phi \, , \\ \label{defmbFermi} \bar{m}_B &= \tfrac{1}{2} g' \phi \, , \\ \label{mtphiFermi} \bar{m}_t &= \frac{y_t}{\sqrt{2}} \phi \, , \end{align} while $\mathcal{L}^{\rm{Fermi}}_{\rm{g.f.}}$ is already quadratic in the gauge boson fields. The only technical complication in the Fermi gauge is the presence of a Goldstone--gauge boson mixing already at tree level (cf.~\eq{LHquadFermi}). 
The latter can be treated by defining an extended field vector \begin{equation} \label{extX} X^T = \left(V^T_\mu , \chi^T\right) \, , \end{equation} where \begin{equation} \label{defVchiFermi} V^T_\mu = \left( W^1_\mu, W^2_\mu, W^3_\mu, B_\mu \right) \, , \qquad \chi^T = \left( \chi^1, \chi^2, \chi^3 \right) \, . \end{equation} Then the quadratic part of the Goldstone--gauge sector can be rewritten as \begin{equation} \label{quadgoldgauge} \frac{1}{2} X^T \left( i \mathcal{D}_X^{-1} \right) X = \frac{1}{2} \left( V_\mu^T, \chi^T \right) \left( \begin{array}{cc} i \left( \mathcal{D}^{-1}_V \right)^\mu_{\nu} & \bar{m}^T_{\text{mix}} \, \partial^\mu \\ - \bar{m}_{\text{mix}} \, \partial_\nu & i \mathcal{D}^{-1}_\chi \end{array} \right) \left( \begin{array}{c} V^\nu \\ \chi \end{array} \right) \, , \end{equation} with \begin{equation} \label{defmmix} \bar{m}_{\text{mix}} = \left( \begin{array}{cccc} 0 & - \bar{m}_W & 0 & 0 \\ - \bar{m}_W & 0 & 0 & 0 \\ 0 & 0 & \bar{m}_W & \bar{m}_B \end{array} \right) \, . 
\end{equation} After Fourier transformation, $\partial_\mu \rightarrow i k_\mu$, the mixed inverse propagator matrix becomes \begin{equation} \label{invpropX} i \tilde{\mathcal{D}}_X^{-1} = \left( \begin{array}{cc} i ( \tilde{\mathcal{D}}^{-1}_V )^\mu_{\nu} & i k^\mu \bar{m}^T_{\text{mix}} \\ - i k_\nu \bar{m}_{\text{mix}} & i \tilde{\mathcal{D}}^{-1}_\chi \end{array} \right) \, , \end{equation} where $(\tilde{\mathcal{D}}^{-1}_V)^\mu_{\nu}$ is conveniently split into a transversal and a longitudinal part \begin{equation} \label{invpropgaugeFermi} ( \tilde{\mathcal{D}}^{-1}_V )^\mu_\nu = i \tilde{\mathcal{D}}^{-1}_{T} \,(\Pi_{T})^\mu_\nu + i \tilde{\mathcal{D}}^{-1}_{L} \,(\Pi_{L})^\mu_\nu \, , \end{equation} with \begin{equation} \label{defPiTLFermi} (\Pi_{T})^\mu_\nu = g^\mu_\nu - \frac{k^\mu k_\nu}{k^2} \, , \qquad (\Pi_{L})^\mu_\nu = \frac{k^\mu k_\nu}{k^2} \, , \end{equation} and \begin{align} \label{invpropTFermi} i \tilde{\mathcal{D}}^{-1}_{T} &= \left( \begin{array}{cccc} - k^2 + \bar{m}_W^2 & 0 & 0 & 0 \\ 0 & - k^2 + \bar{m}_W^2 & 0 & 0 \\ 0 & 0 & - k^2 + \bar{m}_W^2 & \bar{m}_W \bar{m}_B \\ 0 & 0 & \bar{m}_W \bar{m}_B & - k^2 + \bar{m}_B^2 \end{array} \right) \, , \\ \label{invpropLFermi} i \tilde{\mathcal{D}}^{-1}_{L} &= \left( \begin{array}{cccc} - \xi_W^{-1} k^2 + \bar{m}_W^2 & 0 & 0 & 0 \\ 0 & - \xi_W^{-1} k^2 + \bar{m}_W^2 & 0 & 0 \\ 0 & 0 & - \xi_W^{-1} k^2 + \bar{m}_W^2 & \bar{m}_W \bar{m}_B \\ 0 & 0 & \bar{m}_W \bar{m}_B & -\xi_B^{-1} k^2 + \bar{m}_B^2 \end{array} \right) \, . 
\end{align} The Goldstone boson inverse propagator reads \begin{equation} \label{invpropchiFermi} i \tilde{\mathcal{D}}^{-1}_{\chi} = \left( \begin{array}{ccc} k^2 - \bar{m}_\chi^2 & 0 & 0 \\ 0 & k^2 - \bar{m}_\chi^2 & 0 \\ 0 & 0 & k^2 - \bar{m}_\chi^2 \end{array} \right) \, , \end{equation} while those of the Higgs and top quark fields are \begin{align} \label{invprophFermi} i \tilde{\mathcal{D}}^{-1}_{h} &= k^2 - \bar{m}_h^2 \, , \\ \label{invproptFermi} i \tilde{\mathcal{D}}^{-1}_{t} &= \slashed{k} - \bar{m}_t \, . \end{align} The next step (see \eq{1loopEPclosed}) is the evaluation of $\log\mbox{det}\, i \tilde{\mathcal{D}}_n^{-1}$, for $n = X, h, t$. Only the cases $n = X$ and $n = t$ involve non-trivial steps. Let us start by expressing the determinant of the block matrix in \eq{invpropX} as \begin{align} \label{evaluationdetX} \mbox{det}\, i \tilde{\mathcal{D}}_X^{-1} &= \mbox{det}\, i \tilde{\mathcal{D}}_\chi^{-1} \mbox{det}\, \left( i ( \tilde{\mathcal{D}}^{-1}_V )^\mu_{\nu} - k^\mu k_\nu \bar{m}^T_{\text{mix}} \left( i \tilde{\mathcal{D}}^{-1}_{\chi} \right)^{-1} \bar{m}_{\text{mix}} \right) \nonumber \, , \\ &= \mbox{det}\, i \tilde{\mathcal{D}}_\chi^{-1} \mbox{det}\, \left( i \tilde{\mathcal{D}}^{-1}_{T} (\Pi_{T})^\mu_\nu + \left( i \tilde{\mathcal{D}}^{-1}_{L} - k^2 \bar{m}^T_{\text{mix}} \left( i \tilde{\mathcal{D}}^{-1}_{\chi} \right)^{-1} \bar{m}_{\text{mix}} \right) (\Pi_{L})^\mu_\nu \right) \, , \end{align} where in the last step we used \eq{invpropgaugeFermi}. We then perform a Lorentz transformation in $d$ spacetime dimensions,\footnote{We already anticipate that we are going to regulate the divergent integrals in dimensional regularization.} $k_\mu \rightarrow (k_0, 0, 0, 0, \ldots)$, such that $(\Pi_{L})^\mu_\nu \rightarrow \mbox{diag}(1,0,0,0,\ldots)$ and $(\Pi_{T})^\mu_\nu \rightarrow \mbox{diag}(0,1,1,1,\ldots)$.
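The first equality in \eq{evaluationdetX} is the standard block-matrix (Schur complement) identity, $\det \left(\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right) = \det D \, \det \left( A - B D^{-1} C \right)$. A quick numerical sketch in Python (illustrative only, with generic random blocks of the same $4+3$ dimensions as $V_\mu$ and $\chi$) confirms it:

```python
import random

def det(M):
    # determinant by Laplace expansion along the first row (fine for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def inv(M):
    # matrix inverse by Gauss-Jordan elimination with partial pivoting
    n = len(M)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

def mul(X, Y):
    return [[sum(x * Y[k][j] for k, x in enumerate(row)) for j in range(len(Y[0]))]
            for row in X]

random.seed(1)
nV, nchi = 4, 3                        # dim(V_mu) = 4, dim(chi) = 3
A = [[random.uniform(-1, 1) for _ in range(nV)] for _ in range(nV)]
B = [[random.uniform(-1, 1) for _ in range(nchi)] for _ in range(nV)]
C = [[random.uniform(-1, 1) for _ in range(nV)] for _ in range(nchi)]
D = [[random.uniform(-1, 1) for _ in range(nchi)] for _ in range(nchi)]

full = [A[i] + B[i] for i in range(nV)] + [C[i] + D[i] for i in range(nchi)]
BDC = mul(mul(B, inv(D)), C)
schur = [[A[i][j] - BDC[i][j] for j in range(nV)] for i in range(nV)]

lhs = det(full)
rhs = det(D) * det(schur)
print(lhs, rhs)   # the two values agree up to rounding
```

In \eq{evaluationdetX} the off-diagonal blocks carry the factors $\pm i k_\mu \bar{m}_{\text{mix}}$, whose product yields the $k^\mu k_\nu$ structure in the Schur complement.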
Using the Lorentz invariance of the determinant, we obtain \begin{equation} \label{evaluationlogdetX} \log\mbox{det}\, i \tilde{\mathcal{D}}_X^{-1} = (d-1) \log \mbox{det}\, i \tilde{\mathcal{D}}^{-1}_{T} + \log \mbox{det}\, i \tilde{\mathcal{D}}_\chi^{-1} \mbox{det}\, \left( i \tilde{\mathcal{D}}^{-1}_{L} - k^2 \bar{m}^T_{\text{mix}} \left( i \tilde{\mathcal{D}}^{-1}_{\chi} \right)^{-1} \bar{m}_{\text{mix}} \right) \, . \end{equation} The explicit evaluation of the two summands on the right-hand side of \eq{evaluationlogdetX} yields \begin{equation} \label{logdetTFermi} \log \mbox{det}\, i \tilde{\mathcal{D}}^{-1}_T = 2 \log \left( -k^2 + \bar{m}_W^2 \right) + \log \left( -k^2 + \bar{m}_{Z}^2 \right) + \ldots \, , \end{equation} and \begin{align} \label{rhsexpl} & \log \mbox{det}\, i \tilde{\mathcal{D}}_\chi^{-1} \mbox{det}\, \left( i \tilde{\mathcal{D}}^{-1}_{L} - k^2 \bar{m}^T_{\text{mix}} \left( i \tilde{\mathcal{D}}^{-1}_{\chi} \right)^{-1} \bar{m}_{\text{mix}} \right) \nonumber \\ & = 2 \log \left( k^4 -k^2 \bar{m}_\chi^2 + \bar{m}_\chi^2 \xi_W \bar{m}_W^2 \right) + \log \left( k^4 -k^2 \bar{m}_\chi^2 + \bar{m}_\chi^2 (\xi_W \bar{m}_W^2 + \xi_B \bar{m}_B^2) \right) + \ldots \nonumber \\ & = 2 \log \left( k^2 - \bar{m}_{A^{+}}^2 \right) + 2 \log \left( k^2 - \bar{m}_{A^{-}}^2 \right) + \log \left( k^2 - \bar{m}_{B^{+}}^2 \right) + \log \left( k^2 - \bar{m}_{B^{-}}^2 \right) + \ldots \, , \end{align} where the ellipses stand for $\phi$-independent terms and we defined the $\phi$-dependent masses \begin{align} \label{defmassZFermi} \bar{m}_{Z}^2 &= \bar{m}_W^2 + \bar{m}_B^2 \, , \\ \label{defmassApm} \bar{m}_{A^{\pm}}^2 &= \frac{1}{2} \bar{m}_\chi \left( \bar{m}_\chi \pm \sqrt{ \bar{m}_\chi^2 - 4 \xi_W \bar{m}_W^2} \right) \, , \\ \label{defmassBpm} \bar{m}_{B^{\pm}}^2 &= \frac{1}{2} \bar{m}_\chi \left( \bar{m}_\chi \pm \sqrt{ \bar{m}_\chi^2 - 4 (\xi_W \bar{m}_W^2 + \xi_B \bar{m}_B^2) } \right) \, .
\end{align} For the evaluation of the fermionic determinant of \eq{invproptFermi} we employ a naive treatment of $\gamma_5$ in dimensional regularization (i.e.~$\{ \gamma_5, \gamma_\mu \} = 0$ in $d$ dimensions) and make the standard choice $\mbox{Tr}\, \mathbf{1}_\text{Dirac} = 4$ in $d$ dimensions.\footnote{A different choice, e.g.~$\mbox{Tr}\, \mathbf{1}_\text{Dirac} = 2^{d/2}$, would just lead to a different renormalization scheme \cite{Collins:1984xc}.} Explicitly, one has \begin{align} \log \mbox{det}\, \left( \slashed{k} - \bar{m}_t \right) &= \mbox{Tr}\, \log \left( \slashed{k} - \bar{m}_t \right) = \mbox{Tr}\, \log \gamma^5 \left( \slashed{k} - \bar{m}_t \right) \gamma^5 = \mbox{Tr}\, \log \left( - \slashed{k} - \bar{m}_t \right) \nonumber \\ &= \frac{1}{2} \left[ \mbox{Tr}\, \log \left( \slashed{k} - \bar{m}_t \right) + \mbox{Tr}\, \log \left( - \slashed{k} - \bar{m}_t \right) \right] = \frac{1}{2} \mbox{Tr}\, \log \left( - k^2 + \bar{m}_t^2 \right) \nonumber \\ &= \frac{1}{2} \times 4 \times 3 \log \left( -k^2 + \bar{m}_t^2 \right) \, , \end{align} where the extra factors in the last step are due to the trace in the Dirac and color spaces. Including all the relevant degrees of freedom and working in dimensional regularization with $d = 4 - 2 \epsilon$, the one-loop contribution to the effective potential (cf.~again \eq{1loopEPclosed}) can be cast in the following form: \begin{align} \label{EP1loopSMdrexplFermi} V^{(1)}_{\rm{eff}}(\phi)|^{\rm{Fermi}} &= -\frac{i}{2} \mu^{2\epsilon} \int \frac{d^d k}{(2\pi)^d} \left[ - 12 \log \left( - k^2 + \bar{m}_t^2 \right) + (d-1) \left( 2 \log \left( -k^2 + \bar{m}_W^2 \right) \right. \right. \nonumber \\ & \left. + \log \left( -k^2 + \bar{m}_Z^2 \right) \right) + \log \left( k^2 - \bar{m}_h^2 \right) + 2 \log \left( k^2 - \bar{m}_{A^{+}}^2 \right) + 2 \log \left( k^2 - \bar{m}_{A^{-}}^2 \right) \nonumber \\ & \left.
+ \log \left( k^2 - \bar{m}_{B^{+}}^2 \right) + \log \left( k^2 - \bar{m}_{B^{-}}^2 \right) + \ \text{$\phi$-independent} \right] \, . \end{align} The integrals are easily evaluated after Wick rotation, yielding \begin{equation} \label{integralFermi} -\frac{i}{2} \mu^{2\epsilon} \int \frac{d^d k}{(2\pi)^d} \log (-k^2 + m^2) = \frac{1}{4} \frac{m^4}{(4\pi)^2} \left( \log \frac{m^2}{\mu^2} -\frac{3}{2} -\Delta_\epsilon \right) \, , \end{equation} where we introduced the modified minimal subtraction ($\overline{\rm{MS}}$) term \cite{Bardeen:1978yd} \begin{equation} \label{defDeltaepsFermi} \Delta_\epsilon = \frac{1}{\epsilon} - \gamma_E + \log 4 \pi \, . \end{equation} After the $\epsilon$-expansion the one-loop contribution to the effective potential is given by \begin{align} \label{1loopEPbareFermi} & V^{(1)}_{\rm{eff}}|_{\rm{bare}}^{\rm{Fermi}} = \frac{1}{4 (4 \pi)^2} \left[ -12 \bar{m}_t^4 \left( \log\frac{\bar{m}_t^2}{\mu^2} - \frac{3}{2} - \Delta_\epsilon \right) +6 \bar{m}_W^4 \left( \log\frac{\bar{m}_W^2}{\mu^2} - \frac{5}{6} - \Delta_\epsilon \right) \right. \\ & \left.+3 \bar{m}_Z^4 \left( \log\frac{\bar{m}_Z^2}{\mu^2} - \frac{5}{6} - \Delta_\epsilon \right) +\bar{m}_h^4 \left( \log\frac{\bar{m}_h^2}{\mu^2} - \frac{3}{2} - \Delta_\epsilon \right) +2 \bar{m}_{A^+}^4 \left( \log\frac{\bar{m}_{A^+}^2}{\mu^2} - \frac{3}{2} - \Delta_\epsilon \right) \right. \nonumber \\ & \left. +2 \bar{m}_{A^-}^4 \left( \log\frac{\bar{m}_{A^-}^2}{\mu^2} - \frac{3}{2} - \Delta_\epsilon \right) + \bar{m}_{B^+}^4 \left( \log\frac{\bar{m}_{B^+}^2}{\mu^2} - \frac{3}{2} - \Delta_\epsilon \right) + \bar{m}_{B^-}^4 \left( \log\frac{\bar{m}_{B^-}^2}{\mu^2} - \frac{3}{2} - \Delta_\epsilon \right) \right] \, . 
\nonumber \end{align} In particular, in terms of the SM couplings the divergent part of \eq{1loopEPbareFermi} reads \begin{align} \label{1loopdivFermi} V^{(1)}_{\rm{eff}}|_{\rm{bare-pole}}^{\rm{Fermi}} &= \frac{\Delta_\epsilon}{(4\pi)^2} \left[ -m^4 + \left(3 \lambda - \frac{1}{8} \xi_B g'^2 - \frac{3}{8} \xi_W g^2 \right) m^2 \phi^2 \right. \nonumber \\ & \left. + \left( -\frac{3}{64} g'^4 -\frac{3}{32} g'^2 g^2 -\frac{9}{64} g^4 +\frac{3}{4} y_t^4 -3 \lambda^2 +\frac{1}{8} \xi_B g'^2 \lambda +\frac{3}{8} \xi_W g^2 \lambda \right) \phi^4 \right] \, . \end{align} While the $m^4$-dependent pole in \eq{1loopdivFermi} can always be subtracted by a constant shift in the effective potential,\footnote{A constant shift in the effective potential does not affect the equations of motion, as long as gravity is ignored.} the remaining divergences are canceled by the multiplicative renormalization of the bare field and couplings appearing in $V^{(0)}_{\rm{eff}}$ (cf.~\eq{V0}): \begin{equation} \label{renV0parFermi} \phi_0 = Z_{\phi}^{1/2}|^{\rm{Fermi}} \phi \, , \qquad m^2_0 = Z_{m^2} m^2 \, , \qquad \lambda_0 = Z_{\lambda} \lambda \, , \end{equation} where the renormalization constants can be conveniently computed in the unbroken phase of the SM.
Their expressions at one loop in the $\overline{\rm{MS}}$ scheme read (see e.g.~\cite{Chetyrkin:2012rz,Mihaila:2012pz}): \begin{align} \label{ZphiFermi} Z_{\phi}^{1/2}|^{\rm{Fermi}} &= 1 + \frac{\Delta_\epsilon}{(4\pi)^2} \left( \frac{3}{8} g'^2 + \frac{9}{8} g^2 -\frac{3}{2} y_t^2 -\frac{1}{8} \xi _B g'^2-\frac{3}{8} \xi _W g^2 \right) \, , \\ \label{Zm2Fermi} Z_{m^2} &= 1+ \frac{\Delta_\epsilon}{(4\pi)^2} \left( -\frac{3}{4} g'^2 -\frac{9}{4} g^2 +3 y_t^2 +6 \lambda \right) \, , \\ \label{ZlamFermi} Z_{\lambda} &= 1+ \frac{\Delta_\epsilon}{(4\pi)^2} \left( -\frac{3}{2} g'^2 -\frac{9}{2} g^2 +6 y_t^2 +12 \lambda +\frac{3 }{16} \frac{g'^4}{\lambda } +\frac{3}{8} \frac{g'^2 g^2}{\lambda } +\frac{9}{16}\frac{g^4}{\lambda } -3 \frac{y_t^4}{\lambda } \right) \, . \end{align} It is a simple exercise to check that the renormalization of the tree-level potential, via the renormalization constants in \eqs{ZphiFermi}{ZlamFermi}, cancels the $\phi$-dependent poles in \eq{1loopdivFermi}. Let us point out that in the Fermi gauge the field $\phi$ gets only multiplicatively renormalized by the wavefunction of the Higgs field. This feature is due to the invariance of the complete SM Lagrangian (including the gauge-fixing term in \eq{gflagFermi}) under the transformation $h \rightarrow h + a$ and $\phi \rightarrow \phi - a$, as shown in \cite{Pilaftsis:1997fe,Binosi:2005yk}. As we will see in \app{BCKGgaugefull}, this property does not hold anymore in the background $R_\xi$ gauge. Hence, after the renormalization procedure, the one-loop contribution to the effective potential in the $\overline{\rm{MS}}$ scheme reads \begin{align} \label{1loopEPFermi} V^{(1)}_{\rm{eff}}|^{\rm{Fermi}} &= \frac{1}{4 (4 \pi)^2} \left[ -12 \bar{m}_t^4 \left( \log\frac{\bar{m}_t^2}{\mu^2} - \frac{3}{2} \right) +6 \bar{m}_W^4 \left( \log\frac{\bar{m}_W^2}{\mu^2} - \frac{5}{6} \right) \right. 
\nonumber \\ & \left.+3 \bar{m}_Z^4 \left( \log\frac{\bar{m}_Z^2}{\mu^2} - \frac{5}{6} \right) +\bar{m}_h^4 \left( \log\frac{\bar{m}_h^2}{\mu^2} - \frac{3}{2} \right) +2 \bar{m}_{A^+}^4 \left( \log\frac{\bar{m}_{A^+}^2}{\mu^2} - \frac{3}{2} \right) \right. \nonumber \\ & \left. +2 \bar{m}_{A^-}^4 \left( \log\frac{\bar{m}_{A^-}^2}{\mu^2} - \frac{3}{2} \right) + \bar{m}_{B^+}^4 \left( \log\frac{\bar{m}_{B^+}^2}{\mu^2} - \frac{3}{2} \right) + \bar{m}_{B^-}^4 \left( \log\frac{\bar{m}_{B^-}^2}{\mu^2} - \frac{3}{2} \right) \right] \, , \end{align} where the definitions of the $\phi$-dependent mass terms are given in \eqs{mhphiFermi}{mtphiFermi} and \eqs{defmassZFermi}{defmassBpm}. In particular, for $\xi_W = \xi_B = 0$ one has $\bar{m}_{A^+} = \bar{m}_{B^+} = \bar{m}_{\chi}$ and $\bar{m}_{A^-} = \bar{m}_{B^-} = 0$, so that \eq{1loopEPFermi} reproduces the standard one-loop result in the Landau gauge \cite{Coleman:1973jx}. Let us stress that the gauge dependence of $V^{(1)}_{\rm{eff}}$ cannot be removed by a suitable choice of the renormalization scheme, as can be verified by adding finite terms to \eqs{ZphiFermi}{ZlamFermi}. Notice, however, that on the tree-level minimum, $m^2 = \lambda \phi^2$ (hence $\bar{m}_{\chi} = 0$ and $\bar{m}_{A^\pm} = \bar{m}_{B^\pm} = 0$), the gauge dependence drops out of $V^{(1)}_{\rm{eff}}|^{\rm{Fermi}}$. We will discuss this aspect in more detail in \sect{physobsvacstab}. \subsection{Renormalization group improvement} \label{rengroup} In applications where the behavior of $V_{\rm{eff}}(\phi)$ at large $\phi$ is needed, such as the vacuum stability analysis, one has to deal with potentially large logarithms of the type $\log (\phi/\mu)$, which may spoil the applicability of perturbation theory. The standard way to resum such logarithms is by means of the RGEs.
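As an aside, the ``simple exercise'' of checking that the renormalization constants in \eqs{ZphiFermi}{ZlamFermi} cancel the $\phi$-dependent poles of \eq{1loopdivFermi} can be automated. The Python sketch below (purely illustrative) evaluates, for random couplings and in units of $\Delta_\epsilon/(4\pi)^2$, the $m^2\phi^2$ and $\phi^4$ pole coefficients together with the counterterm combinations $-\tfrac{1}{2}(\delta Z_{m^2} + 2\,\delta Z_{\phi}^{1/2})$ and $\tfrac{\lambda}{4}(\delta Z_{\lambda} + 4\,\delta Z_{\phi}^{1/2})$ generated by \eq{renV0parFermi}:

```python
import random

def pole_coeffs(gp2, g2, yt2, lam, xiB, xiW):
    # coefficients of m^2 phi^2 and phi^4 in eq. (1loopdivFermi),
    # in units of Delta_eps / (4 pi)^2
    c_m2phi2 = 3*lam - xiB*gp2/8 - 3*xiW*g2/8
    c_phi4 = (-3/64*gp2**2 - 3/32*gp2*g2 - 9/64*g2**2 + 3/4*yt2**2
              - 3*lam**2 + xiB*gp2*lam/8 + 3*xiW*g2*lam/8)
    return c_m2phi2, c_phi4

def counterterms(gp2, g2, yt2, lam, xiB, xiW):
    # one-loop pole parts of Z_phi^{1/2}, Z_{m^2}, Z_lambda, eqs. (ZphiFermi)-(ZlamFermi)
    dZphi = 3/8*gp2 + 9/8*g2 - 3/2*yt2 - xiB*gp2/8 - 3*xiW*g2/8
    dZm2 = -3/4*gp2 - 9/4*g2 + 3*yt2 + 6*lam
    dZlam = (-3/2*gp2 - 9/2*g2 + 6*yt2 + 12*lam
             + 3/16*gp2**2/lam + 3/8*gp2*g2/lam + 9/16*g2**2/lam - 3*yt2**2/lam)
    # counterterm contributions from -m0^2 phi0^2 / 2 and lam0 phi0^4 / 4
    ct_m2phi2 = -(dZm2 + 2*dZphi) / 2
    ct_phi4 = lam/4 * (dZlam + 4*dZphi)
    return ct_m2phi2, ct_phi4

random.seed(7)
couplings = [random.uniform(0.1, 1.0) for _ in range(6)]
(p1, p2), (c1, c2) = pole_coeffs(*couplings), counterterms(*couplings)
print(p1 + c1, p2 + c2)   # both residuals vanish (up to rounding)
```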
Since $V_{\rm{eff}}$ is independent of the renormalization scale $\mu$ for fixed values of the bare parameters, one obtains the RGE \begin{equation} \label{RGE} \left( \mu \frac{\partial}{\partial \mu} + \beta_i \frac{\partial}{\partial \lambda_i} - \gamma \phi \frac{\partial}{\partial \phi} \right) V_{\rm{eff}} = 0 \, , \end{equation} where the beta functions \begin{equation} \label{defbf} \beta_i = \mu \frac{d \lambda_i}{d \mu} \, , \end{equation} correspond to each of the SM couplings $\lambda_i$ (including the gauge-fixing parameters) and the anomalous dimension of the background field is defined by \begin{equation} \label{defanomdim} \gamma = - \frac{\mu}{\phi} \frac{d \phi}{d \mu} \, . \end{equation} The formal solution of the RGE in \eq{RGE} can be obtained by applying the method of characteristics \cite{Ford:1992mv}: \begin{equation} \label{solRGE} V_{\rm{eff}} (\mu, \lambda_i, \phi ) = V_{\rm{eff}}(\mu (t), \lambda_i (t), \phi (t) ) \, , \end{equation} where \begin{align} \label{mut} \mu(t) &= \mu e^t \, , \\ \label{phit} \phi(t) &= e^{\Gamma(t)} \phi \, , \end{align} with \begin{equation} \label{Gammat} \Gamma(t) = - \int_0^t \gamma(\lambda(t')) \, dt' \, , \end{equation} while $\lambda_i (t)$ are the SM running couplings, determined by the equation \begin{equation} \label{betat} \frac{d \lambda_i (t)}{dt} = \beta_i (\lambda_i(t)) \, , \end{equation} subject to the boundary condition $\lambda_i (0) = \lambda_i$. The usefulness of the RG is that $t$ can be chosen so as to improve the convergence of perturbation theory. For instance, a standard choice in vacuum stability analyses is $\mu (t) = \phi$ (see e.g.~Ref.~\cite{Degrassi:2012ry}).
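In practice \eq{betat} is integrated numerically. As a minimal sketch (not the full coupled two-loop system used later in the text), the one-loop running of the $SU(2)$ coupling, $\beta_g = -\tfrac{19}{6}\, g^3/(4\pi)^2$, can be integrated with a fourth-order Runge--Kutta step and checked against the exact one-loop solution $g(t) = g(0)/\sqrt{1 + \tfrac{19}{3}\, g(0)^2\, t/(4\pi)^2}$:

```python
import math

B = (19.0 / 6.0) / (16.0 * math.pi ** 2)   # one-loop SM SU(2) coefficient

def beta(g):
    return -B * g ** 3

def run(g0, t_end, steps=2000):
    # classic 4th-order Runge-Kutta integration of dg/dt = beta(g)
    g, h = g0, t_end / steps
    for _ in range(steps):
        k1 = beta(g)
        k2 = beta(g + 0.5 * h * k1)
        k3 = beta(g + 0.5 * h * k2)
        k4 = beta(g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return g

g0, t = 0.6483, 38.0          # g(M_t) from the text; t ~ log(M_Pl / M_t)
g_num = run(g0, t)
g_exact = g0 / math.sqrt(1.0 + 2.0 * B * g0 ** 2 * t)
print(g_num, g_exact)          # the two values agree to high accuracy
```

The same integrator, applied to the full system of couplings, also yields $\Gamma(t)$ of \eq{Gammat} by quadrature of the anomalous dimension.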
Without committing, for the time being, to any specific choice of scale, the RG improved effective potential can be rewritten as \begin{equation} \label{1loopEPimproved} V_{\rm{eff}} (\phi,t) = \Omega_{\rm{eff}}(\phi,t) -\frac{m_{\rm{eff}}^2(\phi,t)}{2} \phi^2 + \frac{\lambda_{\rm{eff}}(\phi,t)}{4} \phi^4 \, , \end{equation} where the functional form of the effective couplings in \eq{1loopEPimproved} depends on the chosen gauge. In particular, in the limit $\phi \gg m$ the effective potential takes the universal form \begin{equation} \label{1loopEPimprovedapprox} V_{\rm{eff}} (\phi,t) \approx \frac{\lambda_{\rm{eff}}(\phi,t)}{4} \phi^4 \, , \end{equation} with \begin{equation} \label{lameffapprox} \lambda_{\rm{eff}}(\phi,t) \approx e^{4 \Gamma(t)} \left[ \lambda(t) + \frac{1}{(4\pi)^2} \sum_p N_p \kappa_p^2 (t) \left( \log \frac{\kappa_p(t) e^{2 \Gamma(t)} \phi^2}{\mu(t)^2} - C_p \right) \right] \, , \end{equation} since $\phi$ is then the only mass scale. The coefficients $N_p$, $C_p$ and $\kappa_p$ appearing in \eq{lameffapprox} are explicitly listed in \Table{tab:pvaluesFermi} for the Fermi gauge and in \Table{tab:pvaluesBCKG} of \app{BCKGgaugefull} for the background $R_\xi$ gauge. \begin{table*}[h] \begin{center} \begin{tabular}{|c|cccccc|} \hline $p$ & $t$ & $W$ & $Z$ & $h$ & $A^{\pm}$ & $B^{\pm}$ \\ \hline $N_p$ & $-12$ & $6$ & $3$ & $1$ & $2$ & $1$ \\ $C_p$ & $\frac{3}{2}$ & $\frac{5}{6}$ & $\frac{5}{6}$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\frac{3}{2}$ \\ $\kappa_p$ & $\frac{y_t^2}{2}$ & $\frac{g^2}{4}$ & $\frac{g^2+g'^2}{4}$ & $3 \lambda$ & $\frac{1}{2} \left( \lambda \pm \sqrt{\lambda^2 - \lambda \xi_W g^2} \right)$ & $\frac{1}{2} \left( \lambda \pm \sqrt{\lambda^2 - \lambda (\xi_W g^2 + \xi_B g'^2)} \right)$ \\ \hline \end{tabular} \caption{\label{tab:pvaluesFermi} The $p$-coefficients entering the expression of $\lambda_{\rm{eff}}$ in \eq{lameffapprox} for the Fermi gauge.
} \end{center} \end{table*} Let us finally note that the gauge dependence of the RG improved effective potential is twofold. The gauge fixing parameters appear both in the couplings $\kappa_p$ (cf.~\Table{tab:pvaluesFermi}) and in the anomalous dimension of $\phi$ (cf.~\eq{RGE2lphi} in \app{RGEapp}), and hence in its integral $\Gamma$. \section{Physical observables in the vacuum stability analysis} \label{physobsvacstab} The present Section is devoted to a general discussion on the gauge dependence/independence of the quantities entering the vacuum stability analysis. To fix ideas, let us assume that all the parameters of the SM are exactly determined, except for the Higgs boson mass. After choosing the renormalization scale $t$, the RG improved effective potential, $V_{\rm{eff}} (\phi, M_h; \xi)$, is a function of $\phi$, the Higgs pole mass $M_h$, and the gauge fixing parameters, which are collectively denoted by $\xi$. One can think of $M_h$ as an order parameter, whose variation modifies the shape of the effective potential, as for instance sketched in \fig{Mchplot}. \begin{figure}[h] \centering \includegraphics[angle=0,width=12cm]{Mhc_plot2} \caption{\label{Mchplot} Schematic representation of the SM effective potential for different values of the Higgs boson mass. For $M_h < M_h^c$, the electroweak vacuum is unstable.} \end{figure} The absolute stability bound on the Higgs boson mass can be obtained by defining a ``critical'' mass, $M_h^c$, for which the values of the effective potential at the electroweak minimum, $\phi_{\rm{ew}}$, and at a second minimum, $\tilde{\phi} > \phi_{\rm{ew}}$, are the same. Analytically, this translates into the three conditions: \begin{align} \label{absvacstab1} & V_{\rm{eff}} (\phi_{\rm{ew}}, M_h^c; \xi) - V_{\rm{eff}} (\tilde{\phi}, M_h^c; \xi) = 0 \, , \\ \label{absvacstab2} & \left. \frac{\partial V_{\rm{eff}}}{\partial \phi} \right|_{\phi_{\rm{ew}}, M_h^c} = \left.
\frac{\partial V_{\rm{eff}}}{\partial \phi} \right|_{\tilde{\phi}, M_h^c} = 0 \, . \end{align} In the $\phi \gg \phi_{\rm{ew}}$ limit, the RG improved SM effective potential is well approximated by \begin{equation} \label{Veffapprox} V_{\rm{eff}} (\phi) = \left( \frac{\Omega_{\rm{eff}}(\phi)}{\phi^4} - \frac{1}{2} \frac{m_{\rm{eff}}^2 (\phi)}{\phi^2} + \frac{1}{4} \lambda_{\rm{eff}}(\phi) \right) \phi^4 \approx \frac{1}{4} \lambda_{\rm{eff}}(\phi) \phi^4 \, . \end{equation} Indeed, at the leading order in the $m^2/\phi^2$ expansion, where $m^2 \sim \phi_{\rm{ew}}^2$ is the electroweak mass parameter of the Higgs potential, the effective couplings $\Omega_{\rm{eff}}$ and $m^2_{\rm{eff}}$ turn out to be proportional to $m^4$ and $m^2$, respectively.\footnote{ Moreover, since the beta function of $m$ is proportional to $m$ itself, the value of $m$ does not change much even after a scale running of many orders of magnitude.} Hence, the absolute stability condition in \eqs{absvacstab1}{absvacstab2} can be equivalently rewritten in the following way \cite{Bezrukov:2012sa}: \begin{align} \label{absvacstab1mod} \lambda_{\rm{eff}} (\tilde{\phi}, M_h^c; \xi) = 0 \, , \\ \label{absvacstab2mod} \left. \frac{\partial \lambda_{\rm{eff}}}{\partial \phi} \right|_{\tilde{\phi}, M_h^c} = 0 \, , \end{align} up to $\phi_{\rm{ew}}^2 / \tilde{\phi}^2 \ll 1$ corrections. On the other hand, due to the explicit presence of $\xi$ in the vacuum stability condition, it is not obvious a priori which of the quantities entering the vacuum stability analysis are physical (gauge-independent) observables.
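For completeness, the equivalence between \eqs{absvacstab1}{absvacstab2} and \eqs{absvacstab1mod}{absvacstab2mod} follows in one line from \eq{Veffapprox}: since $V_{\rm{eff}} (\phi_{\rm{ew}}, M_h^c) \sim m^4 \ll \tilde{\phi}^4$, the degeneracy condition gives $\lambda_{\rm{eff}} (\tilde{\phi}) \, \tilde{\phi}^4 / 4 \approx 0$, i.e.~\eq{absvacstab1mod}, while \begin{equation} 0 = \left. \frac{\partial V_{\rm{eff}}}{\partial \phi} \right|_{\tilde{\phi}} \approx \frac{1}{4} \left. \frac{\partial \lambda_{\rm{eff}}}{\partial \phi} \right|_{\tilde{\phi}} \tilde{\phi}^4 + \lambda_{\rm{eff}} (\tilde{\phi}) \, \tilde{\phi}^3 \end{equation} then implies \eq{absvacstab2mod}.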
The basic tool for capturing the gauge-invariant content of the effective potential is the Nielsen identity \cite{Nielsen:1975fs} \begin{equation} \label{NielsenIdbis} \frac{\partial}{\partial \xi} V_{\text{eff}}(\phi,\xi) = - C(\phi, \xi) \frac{\partial}{\partial \phi} V_{\text{eff}}(\phi,\xi) \, , \end{equation} where $C(\phi, \xi)$ is a correlator involving the ghost fields and the gauge-fixing functional, whose explicit expression will not be needed for our argument. \eq{NielsenIdbis} is valid for the class of linear gauges and can be derived from the BRST non-invariance of a composite operator involving the ghost field and the gauge fixing functional (see e.g.~\cite{Metaxas:1995ab} for a concise derivation). The identity in \eq{NielsenIdbis} carries the following interpretation: the effective potential is gauge independent where it is stationary, and hence spontaneous symmetry breaking is a gauge-invariant statement. In the rest of this section we will use the Nielsen identity, in combination with the vacuum stability condition in \eqs{absvacstab1}{absvacstab2}, in order to formally prove that the critical Higgs boson mass, $M_h^c$, is a gauge-independent quantity, while the positions of the extrema of the effective potential (e.g.~$\tilde{\phi}$) or the point where $V_{\rm{eff}}$ takes a special value (for instance zero) are essentially gauge dependent. Our arguments are similar to those presented in Ref.~\cite{Patel:2011th} concerning the gauge independence of the critical temperature of a first order phase transition in the context of the finite temperature effective potential. \subsection{Gauge independence of the critical Higgs boson mass} \label{gaugeindepMHc} Let us assume that simultaneously inverting \eqs{absvacstab1}{absvacstab2} yields gauge-dependent field values and a gauge-dependent critical Higgs boson mass: $\phi_{\rm{ew}} = \phi_{\rm{ew}} (\xi)$, $\tilde{\phi} = \tilde{\phi} (\xi)$ and $M_h^c = M_h^c (\xi)$.
The total differential of \eq{absvacstab1} with respect to $\xi$ then reads \begin{multline} \label{totaldeiffxiA} \left. \frac{\partial V_{\rm{eff}}}{\partial \phi} \right|_{\phi_{\rm{ew}}, M_h^c} \frac{\partial \phi_{\rm{ew}}}{\partial \xi} + \left. \frac{\partial V_{\rm{eff}}}{\partial M_h} \right|_{\phi_{\rm{ew}}, M_h^c} \frac{\partial M_h^c}{\partial \xi} + \left. \frac{\partial V_{\rm{eff}}}{\partial \xi} \right|_{\phi_{\rm{ew}}, M_h^c} = \\ \left. \frac{\partial V_{\rm{eff}}}{\partial \phi} \right|_{\tilde{\phi}, M_h^c} \frac{\partial \tilde{\phi}}{\partial \xi} + \left. \frac{\partial V_{\rm{eff}}}{\partial M_h} \right|_{\tilde{\phi}, M_h^c} \frac{\partial M_h^c}{\partial \xi} + \left. \frac{\partial V_{\rm{eff}}}{\partial \xi} \right|_{\tilde{\phi}, M_h^c} \, . \end{multline} The first term in both the left-hand side (lhs) and the right-hand side (rhs) of \eq{totaldeiffxiA} vanishes because of the stationary conditions in \eq{absvacstab2}. The third term in both the lhs and the rhs of \eq{totaldeiffxiA} vanishes for the same reason, after using the Nielsen identity. Hence, we are left with \begin{equation} \label{leftwith1} \left( \left. \frac{\partial V_{\rm{eff}}}{\partial M_h} \right|_{\phi_{\rm{ew}}, M_h^c} - \left. \frac{\partial V_{\rm{eff}}}{\partial M_h} \right|_{\tilde{\phi}, M_h^c} \right) \frac{\partial M_h^c}{\partial \xi} = 0 \, . \end{equation} Since the expression in brackets in \eq{leftwith1} is in general different from zero, one concludes that \begin{equation} \label{leftwith} \frac{\partial M_h^c}{\partial \xi} = 0 \, , \end{equation} namely, the critical Higgs boson mass is gauge independent. Let us notice that the statement above holds, formally, at all orders in perturbation theory. \subsection{Gauge dependence of the extrema of the effective potential} Let us consider now the total differential with respect to $\xi$ of the second expression in \eq{absvacstab2} \begin{equation} \label{totaldeiffxiB} \left.
\frac{\partial^2 V_{\rm{eff}}}{\partial \phi^2} \right|_{\tilde{\phi}, M_h^c} \frac{\partial \tilde{\phi}}{\partial \xi} + \left. \frac{\partial^2 V_{\rm{eff}}}{\partial M_h \ \partial \phi} \right|_{\tilde{\phi}, M_h^c} \frac{\partial M_h^c}{\partial \xi} + \left. \frac{\partial^2 V_{\rm{eff}}}{\partial \xi \ \partial \phi} \right|_{\tilde{\phi}, M_h^c} = 0 \, . \end{equation} The second term is zero due to \eq{leftwith}. By differentiating the Nielsen identity with respect to $\phi$, and evaluating it at the point $(\tilde{\phi}, M_h^c)$, we get \begin{equation} \label{diffNI} \left. \frac{\partial^2 V_{\text{eff}}}{\partial \phi \ \partial \xi} \right|_{\tilde{\phi}, M_h^c} = - \left. \frac{\partial C}{\partial \phi} \right|_{\tilde{\phi}, M_h^c} \left. \frac{\partial V_{\text{eff}}}{\partial \phi} \right|_{\tilde{\phi}, M_h^c} - C(\tilde{\phi}, \xi) \left. \frac{\partial^2 V_{\text{eff}}}{\partial \phi^2} \right|_{\tilde{\phi}, M_h^c} \, . \end{equation} The first term in the rhs of \eq{diffNI} vanishes because of the stationary condition in \eq{absvacstab2}. Hence, we can substitute the third term in \eq{totaldeiffxiB}, by means of \eq{diffNI}, and get: \begin{equation} \label{totaldeiffxiBsub} \left( \frac{\partial \tilde{\phi}}{\partial \xi} - C(\tilde{\phi}, \xi) \right) \left. \frac{\partial^2 V_{\text{eff}}}{\partial \phi^2} \right|_{\tilde{\phi}, M_h^c} = 0 \, . \end{equation} Since the curvature at the extremum is in general different from zero, \eq{totaldeiffxiBsub} implies \begin{equation} \label{gaugedepextr} \frac{\partial \tilde{\phi}}{\partial \xi} = C(\tilde{\phi}, \xi) \, . \end{equation} The same holds for any extremum of the effective potential, like e.g.~the maximum in \fig{Mchplot} or the electroweak minimum $\phi_{\rm{ew}}$. This latter fact should not actually come as a surprise. 
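A toy example makes \eq{gaugedepextr} concrete. Suppose, purely for illustration, that the $\xi$ dependence enters only through a field reparametrization, $V(\phi; \xi) = U_M\big(\phi\,(1 + \xi/10)\big)$ with $U_M(x) = (x^2-1)^2 + M x$. Such a $V$ satisfies a Nielsen-type identity with $C(\phi, \xi) = -\phi/(10+\xi)$; the values of $V$ at its extrema are then $\xi$-independent (so the ``critical'' $M$, here $M_c = 0$, is too), while the positions of the extrema obey $\partial \tilde{\phi}/\partial \xi = C(\tilde{\phi}, \xi)$ exactly. A brute-force Python check:

```python
def V(phi, M, xi):
    x = phi * (1.0 + xi / 10.0)       # xi enters only via a field reparametrization
    return (x * x - 1.0) ** 2 + M * x

def local_minima(M, xi, lo=-3.0, hi=3.0, n=60001):
    # brute-force grid scan for the local minima of V(phi, M, xi)
    h = (hi - lo) / (n - 1)
    xs = [lo + i * h for i in range(n)]
    vs = [V(x, M, xi) for x in xs]
    return [(xs[i], vs[i]) for i in range(1, n - 1)
            if vs[i] < vs[i - 1] and vs[i] < vs[i + 1]]

for xi in (0.0, 2.0):
    mins = local_minima(0.0, xi)       # M = M_c = 0: the two minima are degenerate
    (x1, v1), (x2, v2) = mins
    # positions scale as 1/(1 + xi/10)  -> gauge dependent,
    # depths are equal and xi-independent -> gauge independent
    print(xi, x1, x2, v1 - v2)
```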
The explicit gauge dependence of the unrenormalized $\phi_{\rm{ew}}$ in the $R_\xi$ gauge was discussed for instance in \cite{Appelquist:1973ms}, and in the case of the SM it can be found in \cite{Sirlin:1985ux}. A renormalized gauge-invariant $\phi_{\rm{ew}}$ can always be defined by subtracting the divergent and gauge-dependent contributions at on-shell points, so that it is expressed in terms of physical quantities. \subsection{Gauge dependence of the SM vacuum instability scale} The SM vacuum instability scale is operationally defined as the field value $\phi = \Lambda$ at which the effective potential has the same depth as the electroweak minimum (see e.g.~\fig{Mchplot}). This is analytically expressed by \begin{equation} \label{definstscale2} V_{\text{eff}} (\Lambda; \xi) = V_{\text{eff}} (\phi_{\rm{ew}}; \xi) \, . \end{equation} The rhs of \eq{definstscale2} is a gauge-independent quantity, since $\phi_{\rm{ew}}$ is by definition a minimum and we can apply the Nielsen identity. Hence, solving \eq{definstscale2} gives in general $\Lambda = \Lambda (\xi)$. In particular, by taking the total differential of \eq{definstscale2} with respect to $\xi$, we get \begin{equation} \label{totdiffdefinstscale2} \left. \frac{\partial V_{\text{eff}}}{\partial \phi} \right|_{\Lambda} \frac{\partial \Lambda}{\partial \xi} + \left. \frac{\partial V_{\text{eff}}}{\partial \xi} \right|_{\Lambda} = 0 \, . \end{equation} By using the Nielsen identity, we can substitute back the second term in \eq{totdiffdefinstscale2}, thus obtaining \begin{equation} \label{totdiffdefinstscale2bis} \left( \frac{\partial \Lambda}{\partial \xi} - C(\Lambda, \xi) \right) \left. \frac{\partial V_{\text{eff}}}{\partial \phi} \right|_{\Lambda} = 0 \, . \end{equation} Since, in general, $\Lambda$ is not an extremum of the effective potential, \eq{totdiffdefinstscale2bis} yields \begin{equation} \label{totdiffdefinstscale2bisyields} \frac{\partial \Lambda}{\partial \xi} = C(\Lambda, \xi) \, .
\end{equation} \clearpage \section{Numerical analysis} \label{gaugedepSMinst} In this Section we numerically estimate the gauge dependence of the SM vacuum instability scale $\Lambda$. Let us first focus on the case of the Fermi gauge. Since in the SM $\Lambda \gg \phi_{\rm{ew}}$, the condition in \eq{definstscale2} is well approximated by (see also \eq{Veffapprox}) \begin{equation} \label{instLambda} \lambda_{\rm{eff}} (\Lambda) = 0 \, , \end{equation} up to corrections of ${\cal O}(\phi_{\rm{ew}}^2 / \Lambda^2)$. As the starting point of the RG running, we choose $\mu(0) = M_t$ (hence $\mu(t) = M_t e^t$), where $M_t = 173.35 \ \rm{GeV}$ is the pole mass of the top quark, and we consider the central values of the SM parameters taken from \cite{Buttazzo:2013uya}:\footnote{Notice that these values are extracted from experimental data with two-loop accuracy. However, we will not perform an NNLO analysis, since the issue of the gauge dependence of the instability scale already arises at the NLO level.} \begin{align} \label{lamMt} \lambda (M_t) &= 0.12710 \, , \\ \label{ytMt} y_t (M_t) &= 0.93697 \, , \\ \label{g3Mt} g_3 (M_t) &= 1.1666 \, , \\ \label{gMt} g (M_t) &= 0.6483 \, , \\ \label{gpMt} g' (M_t) &= 0.3587 \, . \end{align} In order to resum possibly large logarithms in \eq{lameffapprox} due to the growth of the anomalous dimension, we make the scale choice \begin{equation} \label{scalechoice} \mu(\overline{t}) = e^{\Gamma(\overline{t})} \phi \, , \end{equation} which implicitly defines $\overline{t}$ as a function of $\phi$. Then the effective quartic coupling can be written as \begin{equation} \label{lameffapproxscalechoice} \lambda_{\rm{eff}}(\phi) = e^{4 \Gamma(\overline{t}(\phi))} \left[ \lambda(\overline{t}(\phi)) + \frac{1}{(4\pi)^2} \sum_p N_p \kappa_p^2 (\overline{t}(\phi)) \left( \log \kappa_p(\overline{t}(\phi)) - C_p \right) \right] \, .
\end{equation} Since the overall exponential factor in \eq{lameffapproxscalechoice} never changes the zeros of $\lambda_{\rm{eff}} (\phi)$, in order to find the instability scale, $\Lambda$, it is equivalent (and also numerically more convenient) to seek directly the zeros of $\lambda_{\rm{eff}} (\phi) e^{-4 \Gamma(\overline{t}(\phi))}$ in terms of the parameter $\overline{t}_\Lambda \equiv \overline{t}(\Lambda)$, defined by\footnote{It may actually happen that $\lambda$ turns negative before approaching the instability scale. In such a case, $\log \kappa_p$ develops an imaginary part for $p=h,A^{\pm},B^{\pm}$ (see \Table{tab:pvaluesFermi}). Though the imaginary part of the effective potential might have an interpretation in terms of a decay rate of an unstable state \cite{Weinberg:1987vp}, the role of such an imaginary component in the determination of the instability scale is not clear. Hence, we pragmatically require only the real part of \eq{zerossimplified} to be zero and notice that this problem has nothing to do with the issue of the gauge dependence, since it occurs also in the standard analysis in the Landau gauge.} \begin{equation} \label{zerossimplified} \lambda(\overline{t}_\Lambda) + \frac{1}{(4\pi)^2} \sum_p N_p \kappa_p^2 (\overline{t}_\Lambda) \left( \log \kappa_p(\overline{t}_\Lambda) - C_p \right) = 0 \, , \end{equation} and then relate it to the instability scale by inverting \eq{scalechoice} \begin{equation} \label{explicitlysolv} \Lambda = \mu(\overline{t}_\Lambda) e^{-\Gamma(\overline{t}_\Lambda)} = M_t e^{\overline{t}_\Lambda - \Gamma(\overline{t}_\Lambda)} \, , \end{equation} where we recall the definition (see \eq{Gammat}) \begin{equation} \label{GammatLambda} \Gamma(\overline{t}_\Lambda) = - \int_0^{\overline{t}_\Lambda} \gamma(t) \, dt \, . \end{equation} Before discussing in more detail the gauge dependence of $\Lambda$, let us turn to the issue of the UV behaviour of the gauge fixing parameters $\xi_W$ and $\xi_B$ for the Fermi gauge. 
Their RGEs are collected in \app{RGEapp} and can be easily integrated at one loop (see \app{UVbehaviorxiBW}). While the running of the Abelian gauge-fixing parameter $\xi_B$ is very simple ($\xi_B g'^2$ is actually constant under the RG flow, as a consequence of a Ward identity), two peculiar RG behaviours can be identified for $\xi_W$. For $\xi_W (M_t) \gg \frac{1}{6}$ one has a quasi-fixed point in the UV (cf.~left panel in \fig{runxi}), while, for $\xi_W (M_t) < 0$, the running can easily generate a Landau pole (cf.~right panel in \fig{runxi}). \begin{figure}[htb] \centering \begin{tabular}{@{}cccc@{}} \includegraphics[width=.48\textwidth]{runxi20} & \includegraphics[width=.48\textwidth]{runxi5m} & \end{tabular} \caption{\label{runxi} Two-loop running of the gauge-fixing parameters $\xi_W$ and $\xi_B$ in the Fermi gauge, for different values of $\xi \equiv \xi_W (M_t) = \xi_B (M_t)$: $\xi = 20$ (left panel) and $\xi = -5$ (right panel).} \end{figure} The gauge dependence of $\Lambda$ (cf.~\eq{explicitlysolv}) comes both from $\overline{t}_\Lambda$ and $\Gamma (\overline{t}_\Lambda)$. The former is due to the couplings $\kappa_p$, when $p$ runs over $A^{\pm}$ and $B^{\pm}$ (cf.~\eq{zerossimplified} and \Table{tab:pvaluesFermi}), while the latter stems from the gauge dependence of the anomalous dimension. The running of the anomalous dimension and its integral, $\Gamma$, are shown in \fig{rungGammaxi0205m} for three different initial values of $\xi \equiv \xi_B (M_t) = \xi_W (M_t)$.
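The stated RG invariance of $\xi_B g'^2$ fixes the beta function of $\xi_B$ in terms of that of $g'$, namely $\beta_{\xi_B} = -2\,\xi_B\,\beta_{g'}/g'$. A minimal numerical sketch of this relation at one loop (the boundary value $\xi_B(M_t)=20$ is chosen purely for illustration):

```python
import math

K = 1.0 / (16 * math.pi**2)

def beta_gp(gp):
    # one-loop SM hypercharge beta function: (16 pi^2) dg'/dt = (41/6) g'^3
    return K * (41.0 / 6) * gp**3

def beta_xiB(xiB, gp):
    # implied by d(xi_B g'^2)/dt = 0 (the Ward identity quoted above)
    return -2.0 * xiB * beta_gp(gp) / gp

gp, xiB = 0.3587, 20.0     # g'(M_t) from the inputs above; xi_B(M_t) illustrative
invariant0 = xiB * gp**2
h, steps = 0.01, 3900      # Euler steps in t = log(mu/M_t), up to ~ the Planck scale
for _ in range(steps):
    gp, xiB = gp + h * beta_gp(gp), xiB + h * beta_xiB(xiB, gp)

print(f"xi_B g'^2: {invariant0:.4f} -> {xiB * gp**2:.4f}")
```

The product stays constant along the flow (up to the discretisation error of the naive Euler integrator), while $\xi_B$ itself decreases as $g'$ grows towards the UV.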
\begin{figure}[htb] \centering \begin{tabular}{@{}cccc@{}} \includegraphics[width=.48\textwidth]{runmgammaxi0205m} & \includegraphics[width=.48\textwidth]{runGammaxi0205m} & \end{tabular} \caption{\label{rungGammaxi0205m} Two-loop running of $-\gamma$ (left panel) and $\Gamma$ (right panel) for different values of $\xi \equiv \xi_W (M_t) = \xi_B (M_t)$.} \end{figure} From the right panel in \fig{rungGammaxi0205m} one can see that if $\abs{\xi}$ is large enough, $\Gamma$ can easily be of $\mathcal{O} (1)$ at intermediate scales below the Planck mass. This justifies the choice of scale made in \eq{scalechoice}, which resums the potentially large logs in \eq{lameffapprox}. The gauge dependence of the instability scale is shown in \fig{instscalevsxi}. For simplicity, we set $\xi_W (M_t) = \xi_B (M_t) \equiv \xi$. In addition, we employ two-loop RGEs for all the parameters in \eq{zerossimplified} and \eq{GammatLambda} that determine $\Lambda$. The higher-order RGEs allow us to resum the leading and next-to-leading logarithms implicitly contained in \eq{explicitlysolv}. For illustration, we depict with a dashed line in \fig{instscalevsxi} the gauge dependence of the instability scale obtained without running the gauge-fixing parameters ($\beta_{\xi} = 0$ case). As can be read from the figure, the difference between the resummed (full line) and non-resummed (dashed line) results amounts to more than three orders of magnitude. However, even after performing the resummation, the instability scale in the Fermi gauge increases by almost an order of magnitude when the gauge-fixing parameters are varied in the interval $[0,300]$. Let us also mention that by varying the SM parameters within their experimental uncertainties (e.g.~for a lower top mass) the gauge dependence of the scale $\Lambda$ is always found to be of about one order of magnitude.
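For orientation on the numbers entering this analysis, the gauge-independent core of the problem can be sketched by integrating the one-loop SM RGEs from the boundary values in \eq{lamMt}--\eq{gpMt} and locating the scale where $\lambda$ crosses zero. This is only a rough proxy for \eq{instLambda}: it neglects the $\kappa_p$ corrections in \eq{zerossimplified}, the $\Gamma$ factor and all higher-loop effects, so the crossing scale comes out lower than the NLO one:

```python
import math

def betas(y):
    # one-loop SM beta functions; y = (lam, yt, g3, g, gp), t = log(mu / M_t)
    lam, yt, g3, g, gp = y
    k = 1.0 / (16 * math.pi**2)
    return (
        k * (24 * lam**2 - 6 * yt**4
             + 0.375 * (2 * g**4 + (g**2 + gp**2)**2)
             + lam * (12 * yt**2 - 9 * g**2 - 3 * gp**2)),
        k * yt * (4.5 * yt**2 - 8 * g3**2 - 2.25 * g**2 - (17.0 / 12) * gp**2),
        k * (-7.0) * g3**3,
        k * (-19.0 / 6) * g**3,
        k * (41.0 / 6) * gp**3,
    )

def rk4_step(y, h):
    # classical fourth-order Runge-Kutta step for the coupled RGEs
    k1 = betas(y)
    k2 = betas(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
    k3 = betas(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
    k4 = betas(tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + (h / 6) * (b1 + 2 * b2 + 2 * b3 + b4)
                 for a, b1, b2, b3, b4 in zip(y, k1, k2, k3, k4))

Mt = 173.35  # GeV
y = (0.12710, 0.93697, 1.1666, 0.6483, 0.3587)  # (lam, yt, g3, g, g') at M_t
t, h = 0.0, 0.01
while y[0] > 0.0 and t < 45.0:
    y = rk4_step(y, h)
    t += h
Lambda = Mt * math.exp(t)
print(f"one-loop lambda crosses zero near mu ~ {Lambda:.1e} GeV")
```

The crossing lands well below the Planck scale at this order, illustrating why the higher-order resummation (and the gauge-dependent ingredients discussed above) matter quantitatively.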
\begin{figure}[h] \centering \includegraphics[angle=0,width=11cm]{Instxi_improved_Fermi4} \caption{\label{instscalevsxi} Instability scale as a function of $\xi \equiv \xi_W (M_t) = \xi_B (M_t)$ for the Fermi gauge. The dashed line corresponds to the case where the gauge-fixing parameters are not run. The full line encodes the resummation of the next-to-leading logs by means of two-loop RGEs. } \end{figure} Another important aspect for the analysis of the gauge dependence of $\Lambda$ is the determination of the perturbativity domain of the gauge-fixing parameters $\xi_{W,B}$. For instance, for the gauge-fixing parameter $\xi_W$ one can require that the two-loop correction to its beta function is smaller than the one-loop contribution, thus obtaining (cf.~\eq{RGE2lxiW} in \app{RGEapp}): \begin{equation} \label{pertdomxiW} \left| \frac{\xi_W^2 \alpha_2^2}{(4 \pi)^2} \right| < \left| \frac{\xi_W \alpha_2}{4 \pi} \right| \, , \end{equation} which sets the absolute upper bound \begin{equation} \label{pertdomxiWabsolute} |\xi_W| < \frac{4 \pi}{\alpha_2} \, . \end{equation} Taking $\alpha_2 (M_t) \approx 0.033$,\footnote{For $\alpha_2 (\mu > M_t)$ the bound becomes less stringent, due to the asymptotic freedom of $\alpha_2$ in the SM.} one gets $|\xi_W (M_t)| < 376$. Notice, however, that this estimate does not take into account the running of $\xi_W$. For $\xi_W (M_t) \lesssim - 5$ a Landau pole can develop before the Planck scale (cf.~right panel in \fig{runxi}), and perturbation theory soon starts to break down. This is why we do not show the negative branch of the plot in \fig{instscalevsxi}. On the contrary, the running behaviour for $\xi \gg 0$ is smoother, with a quasi-fixed point in the UV for $\xi_W$ (cf.~left panel in \fig{runxi}). By studying the evolution of the gauge-dependent anomalous dimension at one, two and three loops we verified, for instance, that $\xi \approx 300$ is still in the perturbative regime.
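The numerical bound follows directly from the quoted input $g(M_t) = 0.6483$ in \eq{gMt}; a one-line check:

```python
import math

g_Mt = 0.6483                      # SU(2) gauge coupling at M_t, from the inputs above
alpha2 = g_Mt**2 / (4 * math.pi)   # alpha_2(M_t) ~ 0.033
bound = 4 * math.pi / alpha2       # |xi_W(M_t)| < 4 pi / alpha_2 = (4 pi / g)^2
print(f"alpha_2(M_t) = {alpha2:.4f}, |xi_W(M_t)| < {bound:.0f}")
```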
Nonetheless, for a more solid statement about the perturbative domain of $\xi$, one should inspect the gauge-dependent two-loop effective potential, whose calculation goes beyond the scope of the present paper and is postponed to future work. One can imagine, however, that a condition similar to \eq{pertdomxiW} will be at play, since the gauge-fixing parameters are always associated with the square of the gauge couplings, both in the propagators and in the vertices of the theory. Finally, for a comprehensive analysis one should also vary the gauge-fixing condition itself. In \app{BCKGgaugefull} we report on the calculation of the SM one-loop effective potential in a background $R_\xi$ gauge. A numerical study, similar to the one presented in this Section, shows that the instability scale decreases by another order of magnitude when the gauge-fixing parameters are varied in their perturbative domain. Such a qualitatively different behaviour in the background $R_\xi$ gauge can be understood by noticing the sign flip (with respect to the case of the Fermi gauge) in the contribution of the gauge-fixing parameters to the one-loop anomalous dimension of $\phi$ in \eq{RGE2lphiBCKD}. We can thus conclude that the gauge dependence of the instability scale materializes in a variation of about two orders of magnitude, depending on the choice of the gauge condition and of the gauge-fixing parameters. This strengthens our statement that the instability scale $\Lambda$ as defined in \eq{instLambda} should not be interpreted as a physical quantity.
\clearpage \section{Discussion and conclusions} \label{conclusions} Once a calculable UV completion of the SM is specified (for instance, the SM itself extrapolated at extremely high energies\footnote{Under the assumption that Planck-scale physics decouples from the SM even at energies beyond the Planck mass and that the Landau pole of the hypercharge does not pose any conceptual problem.}) the fate of the electroweak vacuum, whether it is absolutely stable or not, is a physical statement which does not depend on the choice of the gauge. This is equivalent to saying that the critical Higgs boson mass (or, in general, the critical values of the SM parameters) distinguishing between the stable and unstable phase of the SM is a gauge-independent quantity, as we formally proved in \sect{gaugeindepMHc}. In this respect, it is worth recalling that the tunnelling probability of the electroweak vacuum is formally gauge independent as well \cite{Einhorn:1980ik,Metaxas:1995ab,Isidori:2001bm}. On the other hand, the absolute stability condition is sometimes formulated by requiring that the electroweak minimum, $\phi_{\rm{ew}}$, is the global minimum of the effective potential over the range of validity of the SM \begin{equation} \label{vacstabcond} V_{\rm{eff}} (\phi_{\rm{ew}}) < V_{\rm{eff}} (\phi) \quad \rm{for} \quad \phi < \Lambda_{\rm{SM}} \, , \end{equation} where $\Lambda_{\rm{SM}}$ is a physical threshold (e.g.~the Planck scale). Above this scale new physics is supposed to alter the shape of the effective potential. However, since $V_{\rm{eff}} (\phi)$ is gauge dependent (unless $\phi$ is an extremum), the condition in \eq{vacstabcond} is clearly gauge dependent too. \\ From a low-energy point of view, it is a relevant question to seek a connection between the instability scale, $\Lambda$, and the scale of new physics, $\Lambda_{\rm{SM}}$, the latter being, of course, of utmost importance for experiments.
The irreducible gauge dependence of $\Lambda$, however, makes its identification with $\Lambda_{\rm{SM}}$ ambiguous, since we are not comparing two physical quantities. Though the gauge dependence of $\Lambda$ amounts to about one order of magnitude in the case of the Fermi gauge (cf.~\fig{instscalevsxi}), this result \emph{cannot} be used to give an absolute upper bound on the gauge dependence of $\Lambda$. The reason is that, on the one hand, different gauge-fixing schemes generally lead to different results (as, for instance, in the case of the background $R_\xi$ gauge discussed in \app{BCKGgaugefull}) and, on the other hand, we cannot say much beyond perturbation theory. Notice, indeed, that there is no physical principle that restricts the range of the gauge-fixing parameters. Hence, we rather stick to the conclusion that $\Lambda_{\rm{SM}}$ is a model-dependent parameter which cannot be determined by just extrapolating the SM parameters at high energies.\footnote{Even without considering the issue of the gauge dependence, the connection between $\Lambda$ and the maximum allowed value of the scale of new physics required to stabilize the electroweak vacuum is anyway not so direct, due to the presence of extra parameters (e.g.~couplings and masses) in any UV completion of the SM \cite{Hung:1995in,Casas:2000mn}.} Let us finally recall that, given the central values of the SM parameters and assuming that new physics at e.g.~the Planck scale does not affect the tunnelling computation \cite{Branchina:2013jra}, the lifetime of the electroweak vacuum turns out to be much longer than the age of the universe \cite{Buttazzo:2013uya}. A metastable electroweak vacuum can comply with the data and new physics is not necessarily implied. Hence, the problem of the gauge dependence of the SM vacuum instability scale and its connection with the scale of new physics might seem an academic one. However, this need not be the case.
For instance, we would like to mention the recent measurement of the primordial tensor fluctuations in the cosmic microwave background by the BICEP2 collaboration \cite{Ade:2014xna} which suggests a high inflationary scale of about $10^{14}$ GeV. As pointed out in \cite{Espinosa:2007qp,Kobakhidze:2013tn,Fairbairn:2014zia,Enqvist:2014bua,Kobakhidze:2014xda,Hook:2014uia} the Higgs field might be subject to quantum fluctuations generated during the primordial stage of inflation which can easily destabilize the electroweak vacuum. In particular, since the quantity $\Lambda$ (or, more precisely, the field value where the effective potential reaches its maximum) enters in the calculation of the electroweak vacuum survival probability, its physical identification should be addressed with care. \section*{Acknowledgments} We thank Stefano Bertolini, Ramona Gr{\"o}ber and Marco Nardecchia for useful discussions. This work was supported by the DFG through the SFB/TR 9 ``Computational Particle Physics''.
\section*{Supplementary information} \section{Zero-temperature electronic structure calculations} \subsection{Preparation and optimisation of quantum Monte Carlo (QMC) wave function} As described in the Methods Section, before running finite-temperature calculations, we optimise a QMC variational wave function $| \Psi_{\mathbf{q}} \rangle$ at zero temperature, written as a Jastrow Antisymmetrised Geminal Power (JAGP) ansatz\cite{Casula2004}. During the finite-temperature molecular dynamics, energy and forces are computed at the variational Monte Carlo (VMC) level, based on the variational optimisation of the QMC wave function. The Jastrow and AGP expansions are developed over primitive O(3s2p1d) H(2s1p) and O(5s5p2d) H(4s2p) Gaussian basis sets, respectively. The primitive basis sets are then contracted using the geminal embedded orbitals (GEOs) scheme\cite{Sorella2015}. Previous works on the Zundel ion\cite{Dagrada2014,Mouhat2017} found that the optimal balance between accuracy and computational cost for the determinantal part is reached by the O$[8]$H$[2]$ contracted GEO basis, in self-explanatory notation. As the protonated water hexamer is a very similar system, in this work we used the same O$[8]$H$[2]$ GEO contraction for the AGP part. Moreover, we further simplified the variational wave function previously developed for the Zundel ion, by also contracting the Jastrow basis set, using the same GEO embedding scheme. We tried different contraction sets, and tested them on the water dimer dissociation energy curve, as reported in Fig.~\ref{figure:jas}. The water dimer is a stringent benchmark for the quality of our wave function, as it has a chemical complexity similar to that of the Zundel ion, with the main difference of being charge neutral. Charge neutrality allows us to directly probe the capability of the Jastrow factor to control charge fluctuations in the system, a fundamental property when coupled with the AGP determinantal part\cite{eric_size_cons}.
\begin{figure}[!htb] \centering \includegraphics[width=0.8\textwidth]{Jas_contractions.pdf} \caption{\label{figure:jas} Water dimer dissociation energy curve as a function of the oxygen-oxygen distance obtained by VMC, using different contracted basis sets in the Jastrow factor of a Jastrow-Slater wave function. Each trial wave function is built using the same basis set for the determinantal part, which is optimised together with the various Jastrow factors tested here. The black curve indicates the reference CCSD(T) result.} \end{figure} As shown in Fig.~\ref{figure:jas}, we find a systematic improvement as the number of GEOs orbitals increases, with the O$[6]$H$[2]$ set yielding energies very close to the Jastrow primitive basis set reference at all oxygen-oxygen distances. As reported in Tab.~\ref{table:Jparameters}, this is obtained with a number $p$ of variational parameters significantly smaller than that of the primitive basis set expansion. Thus, we used the O$[6]$H$[2]$ GEO basis set for the Jastrow factor, and the O$[8]$H$[2]$ GEO basis for the AGP part in all our subsequent molecular dynamics (MD) simulations. For the protonated water hexamer, this results in a total number of 6418 variational parameters, comprising $g_{\mu,\nu}^{a,b}$, $\lambda_{\mu,\nu}^{a,b}$, the parameters of the homogeneous one-body and two-body Jastrow factors, and the linear coefficients of the Jastrow and determinantal basis sets (see Methods Section for a detailed description of the wave function parameters).
\begin{table}[!htp] \centering \hspace*{-10mm} \begin{tabular}{|c|c|c|} \hline Basis set & $p$ & E$_{\text{bind}}$ (kcal/mol) \\ \hline \hline Primitive Jastrow and primitive determinant & $6303$ & $ 4.46(8)$ \\ Primitive Jastrow and O$[8]$H$[2]$ GEO determinant & $2089$ & $ 4.40(8)$ \\ O$[6]$H$[2]$ GEO Jastrow and O$[8]$H$[2]$ GEO determinant & $1283$ & $4.26(8)$ \\ \hline \end{tabular} \caption {\label{table:Jparameters} Water dimer binding energies for QMC variational wave functions obtained with different types of basis set contractions. The corresponding number $p$ of variational parameters is also reported.} \end{table} A more extended description of the variational wave function can be found in Ref.~\cite{Dagrada2014}. \subsection{Comparison between QMC and other electronic structure methods} To probe the microscopic rearrangement which triggers proton transfer (PT) processes in the protonated water hexamer at finite temperature, it is necessary to determine the potential energy surface (PES), in particular around the equilibrium geometry of the cluster. At variance with the Zundel cation, the protonated hexamer minimum energy configuration is asymmetric, implying that the hydrated proton is not equally shared between the two central water molecules. In Fig.~\ref{figure:staticgeohex}, we report the equilibrium position of the hydrated proton H$^+$ - expressed as distance d$_{{\text{\scriptsize H$^+$}}{\text{\scriptsize O}}_1}$ (d$_{{\text{\scriptsize H$^+$}}{\text{\scriptsize O}}_2}$) from the flanking oxygen atom O$_1$ (O$_2$) - as a function of the distance d$_{{\text{\scriptsize O}}_1{\text{\scriptsize O}}_2}$ between the two central oxygen atoms. 
Fig.~\ref{figure:staticgeohex} shows the equilibrium geometries at constrained d$_{{\text{\scriptsize O}}_1{\text{\scriptsize O}}_2}$ obtained by various methods: density functional theory (DFT) with the PBE functional (dark green), DFT modified to include dispersive van der Waals (vdW) interactions in the DF2 implementation\cite{Lee2010} (magenta), variational Monte Carlo (blue circles) and M\o ller-Plesset (MP2) (red triangles). The d$_{{\text{\scriptsize O}}_1{\text{\scriptsize O}}_2}$ distance corresponding to the global minimum is indicated by a vertical dashed line for each method. \begin{figure}[!htb] \centering \includegraphics[width=0.8\textwidth]{hexa_geo_beautiful.png} \caption { Equilibrium position of the excess proton H - expressed as distance d$_{{\text{\scriptsize H}}{\text{\scriptsize O}}_1}$ (d$_{{\text{\scriptsize H}}{\text{\scriptsize O}}_2}$) from the flanking oxygen atom O$_1$ (O$_2$) - as a function of the separation d$_{{\text{\scriptsize O}}_1{\text{\scriptsize O}}_2}$ between the two central oxygen atoms, reported for different computational methods. Vertical dashed lines indicate the equilibrium d$_{\text{O}_1\text{O}_2}$ for each method. \label{figure:staticgeohex} } \end{figure} The PBE functional predicts a short-Zundel-like symmetric global minimum, which is erroneous. More generally, this functional gives a poor description of the proton location when one stretches the d$_{{\text{\scriptsize O}}_1{\text{\scriptsize O}}_2}$ distance. After inclusion of dispersion effects in the DF2 functional, the geometric properties of the system are significantly improved, displaying a better agreement with both QMC and MP2. Nevertheless, the predicted equilibrium d$_{{\text{\scriptsize O}}_1{\text{\scriptsize O}}_2}$ is too large, which leads to an overly asymmetric cluster. Therefore, we expect the DF2 static barriers to be inaccurate, which is problematic in the perspective of studying PT at finite temperature by this method. 
Instead, MP2 and QMC are in excellent agreement, especially around d$_{\text{O}_1\text{O}_2} = 2.4$ \AA. In Tab.~\ref{table_geometry_hex_core_dist} we report the equilibrium geometries yielded by PBE, DF2, MP2 and VMC, by showing the relevant distances involving the Zundel core of the protonated water hexamer. The VMC minimum geometry is in a very good agreement with the MP2 one, with an accurate description of the excess proton localisation, as previously seen in Fig.~\ref{figure:staticgeohex}. The largest discrepancy between VMC and MP2 is for the $\overline{OH}$ intramolecular distances, which are slightly shorter in VMC. However, the overall evolution of equilibrium position of the hydrated proton H$^{+}$ as a function of the oxygen-oxygen distance predicted by QMC follows well the one obtained by MP2, as one can see from Fig.~\ref{figure:staticgeohex}. \begin{table}[!htp] \centering \hspace*{-8mm} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Theory} & \multicolumn{1}{c|}{$\overline{O_1O_2}$} & \multicolumn{1}{c|}{$\overline{O_1H^+}$} & \multicolumn{1}{c|}{$\overline{H^+O_2}$} & \multicolumn{1}{c|}{$\overline{O_1H_1}$} & \multicolumn{1}{c|}{$\overline{O_1H_2}$} & \multicolumn{1}{c|}{$\overline{O_2H_3}$} & \multicolumn{1}{c|}{$\overline{O_2H_4}$} \\ \hline DFT-PBE & 2.4156 & 1.2078 & 1.2078 & 0.9935 & 0.9935 & 0.9935 & 0.9935 \\ DFT-DF2 & 2.4541 & 1.1196 & 1.3346 & 0.9913 & 0.9911 & 0.9797 & 0.9797 \\ MP2 & 2.3867 & 1.1690 & 1.2188 & 0.9877 & 0.9878 & 0.9847 & 0.9848 \\ VMC & 2.3930(5) & 1.1555(5) & 1.2375(5) & 0.9800(8) & 0.9798(8) & 0.9752(8) & 0.9748(8) \\ \hline \end{tabular} \caption {Geometric properties (distances in \AA) of the core of the protonated water hexamer minimum. Comparison between different computational methods. 
\label{table_geometry_hex_core_dist} } \end{table} \vspace*{3mm} Due to the central distorted H-bond, the hydrated proton motion from one flanking water molecule to another is conditioned by the energy it must acquire to cross the PT static barrier. This quantity is defined as the energy difference between the equilibrium structure (asymmetric) and the symmetrised one, where the proton is located at the mid-point of the considered $\text{d}_{\text{O}_1\text{O}_2}$ distance. PT static barriers have been estimated at $\text{d}_{\text{O}_1\text{O}_2} = 2.45$ \AA \, and $2.5$ \AA. The barriers, converted into effective temperatures, are reported in Tab.~\ref{table:staticbis} for different levels of theory. \begin{table}[!htp] \centering \begin{tabular}{|c|c|c|} \hline $\text{d}_{\text{O}_1\text{O}_2}$ (\AA) & 2.45 \AA & 2.50 \AA \\ \hline DFT-DF2 & 39 & 483 \\ CCSD(T) & 141 & 431 \\ MP2 & 85 & 327 \\ VMC & $195 \pm 25$ & $562 \pm 27$ \\ \hline \end{tabular} \caption {\label{table:staticbis} Static symmetrisation barriers (in Kelvin) of the H$_{13}$O$_{6}^+$ cation at different $\text{d}_{\text{O}_1\text{O}_2}$ distances for various computational methods.} \end{table} DFT-DF2 seems unable to predict accurate PT barriers due to the misplacement of the global energy minimum. This further motivates the use of correlated methods to describe the electronic PES, such as coupled cluster CCSD(T) and MP2 theories. Their values are closer to the barriers predicted by VMC. They differ from each other by 0.1-0.2 kcal/mol, a difference within chemical accuracy. However, the QMC method exhibits a milder scaling with the system size than these quantum chemistry approaches. Therefore, the QMC approach is certainly the best candidate to perform fully \emph{ab initio} MD simulations of small protonated water clusters at an affordable computational cost.
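Since the barriers in Tab.~\ref{table:staticbis} are quoted in Kelvin while method-to-method differences are discussed in kcal/mol, the conversion $T = E/(N_A k_B)$ is useful; a minimal sketch (function name illustrative):

```python
K_B = 1.380649e-23     # Boltzmann constant, J/K
N_A = 6.02214076e23    # Avogadro constant, 1/mol
J_PER_KCAL = 4184.0

def kcal_per_mol_to_kelvin(e_kcal_mol):
    # effective temperature T = E / (N_A * k_B)
    return e_kcal_mol * J_PER_KCAL / (N_A * K_B)

print(kcal_per_mol_to_kelvin(1.0))   # ~503 K per kcal/mol
print(kcal_per_mol_to_kelvin(0.1))   # ~50 K, the scale of the MP2 vs CCSD(T) spread
```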
\section{Finite-temperature calculations: QMC-driven classical and quantum Langevin dynamics} \label{simulations} We simulated the protonated water hexamer by treating the electrons at the VMC level (JAGP wave function) and the nuclei as quantum particles, within a path integral (PI) formalism. To understand the impact of quantum effects, simulations with classical nuclei have also been performed. The Langevin dynamics (LD) algorithms used have been described in the Methods Section. In Tab.~\ref{simulations_table}, we report the list of VMC+PILD simulations performed. Given their importance, particularly long simulations were performed for the quantum case at temperatures of 50, 100, 200, and 300 K. \begin{table}[h] \caption{Summary of the simulations carried out in this work. In both classical and quantum calculations, a time step $\delta t$ of 1 fs is used for all temperatures. } \label{simulations_table} \begin{tabular}{| c | c | c | c |} \hline & \multicolumn{2}{c|}{\makebox[100pt][r]{quantum simulations}} & \makebox[100pt][r]{classical simulations} \\ \hline \makebox[50pt][c]{$T$(K)} & \makebox[120pt][c]{$N_{\textrm{beads}}$} & \makebox[120pt][c]{$N_{\textrm{iterations}}$} & \makebox[120pt][c]{$N_{\textrm{iterations}}$} \\ \hline 50 & 128 & 35282 & - \\ 100 & 128 & 52184 & 21454 \\ 150 & 64 & 11218 & - \\ 200 & 64 & 32553 & 20478 \\ 250 & 32 & 23912 & 24154 \\ 300 & 32 & 31929 & 22656 \\ 350 & 32 & 18489 & 26481 \\ 400 & 32 & 23026 & 27517 \\ \hline \end{tabular} \end{table} \section{Structural properties of protonated water clusters} \subsection{Role of solvation: Zundel ion versus protonated water hexamer} As mentioned in the main text of the paper, the protonated water hexamer is the smallest water cluster including the full first-solvation shell, due to the presence of a Zundel core surrounded by 4 solvating H$_2$O molecules.
To quantify the impact of the solvation shell on the Zundel core, we compare in Fig.~\ref{figure:concludesolv} the $\text{O}_1$-$\text{O}_2$ potential, $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ (left), and the corresponding classical equilibrium geometry (right) of the two clusters at various d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ (distance between the 2 central oxygen atoms). At short d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, the slope of the protonated hexamer $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ is slightly larger than the Zundel one, due to the greater electrostatic repulsion caused by steric hindrance. At large d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, the protonated hexamer PES is softer than the Zundel one, because the solvating H$_2$O molecules enhance the polarisability of the core atoms. As explained in the paper, the balance between short- and long-range repulsion, once supplemented with the zero-point energy (ZPE), is key to quantifying the relative abundance of short-Zundel and distorted-Eigen configurations, and thus allows for a quantitative understanding of the PT mechanism. \begin{figure}[!htb] \centering \begin{tabular}{cc} \includegraphics[width=0.495\textwidth]{plot_energy_cl.pdf} & \includegraphics[width=0.505\textwidth]{plot_geometry_cl.pdf} \end{tabular} \caption{\label{figure:concludesolv} Comparison of the protonated water dimer and hexamer $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ potential (left) and equilibrium geometry (right) as a function of the (central) oxygen-oxygen distance d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$. Vertical dashed lines indicate the corresponding equilibrium d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$.
Notice that VMC and lattice regularised diffusion Monte Carlo (LRDMC)\cite{Casula2005} energies are in nice statistical agreement for d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ $\in [2.3, 2.6]$ \AA, the phase-space range explored by our MD simulations.} \end{figure} We also find the H$_{13}$O$_6^+$ equilibrium d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, represented by a vertical dashed line in Fig.~\ref{figure:concludesolv}, to be $\sim 0.05$ \AA \, larger than the H$_{5}$O$_2^+$ one. More importantly, at variance with the Zundel cation, which is centrosymmetric, the protonated water hexamer equilibrium geometry is \emph{asymmetric} with classical ions. This fundamental symmetry modification of the PES is induced by solvation effects, which tend to stabilize the hexamer into its elongated-Zundel configuration. This can rationalise some THz/FTIR absorption spectroscopy fingerprints of the solvated proton\cite{Decka2015}, which have been related to a fast inter-conversion between the (distorted-)Eigen and (short-)Zundel forms. Furthermore, it is noteworthy that at fixed d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, the predicted barriers are larger in H$_{13}$O$_{6}^+$ (Tab.~\ref{table:staticbis}) than in the Zundel cation\cite{Dagrada2014}. This is clearly due to the additional price that must be paid for the rearrangement of the molecules in the solvation shell during the PT process. \subsection{H$_{13}$O$_6^+$ bidimensional distribution functions} We report some finite-temperature properties of the H$_{13}$O$_6^+$ system, studied by looking at the bidimensional distribution functions $\rho_{2D}$, which correlate the distance between the central proton and the neighbouring oxygen atoms ($d_\textrm{H$^+$O$_1$}$, $d_\textrm{H$^+$O$_2$}$) with $d_\textrm{O$_1$O$_2$}$. They are shown in Figs.~\ref{g2d_quantum} and \ref{g2d_classical}, for quantum and classical simulations, respectively.
\newpage \begin{figure}[!h] \centering \begin{tabular}{lr} \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_100Kq.png} & \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_200Kq.png} \\ \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_250Kq.png} & \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_300Kq.png} \\ \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_350Kq.png} & \\ \end{tabular} \caption{$\rho_{2D}$ computed from VMC-PIMD simulations at different temperatures. } \label{g2d_quantum} \end{figure} \newpage \begin{figure}[!h] \centering \begin{tabular}{lr} \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_100K.png} & \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_200K.png} \\ \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_250K.png} & \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_300K.png} \\ \includegraphics[width=0.5\columnwidth]{2Dgrr_plot_350K.png} & \\ \end{tabular} \caption{$\rho_{2D}$ computed from VMC-MD simulations, with classical nuclei, at different temperatures. } \label{g2d_classical} \end{figure} The difference between quantum and classical distributions is striking. The impact of nuclear quantum effects (NQEs) on the temperature-dependence of H$_{13}$O$_6^+$ structural properties and PT mechanism is analysed and detailed in the main text, based on the proton density distributions shown in Figs.~\ref{g2d_quantum} and \ref{g2d_classical}. \section{Towards an accurate modeling of the potential energy surface (PES)} \label{towards_PES} We exploit the calculation of VMC forces not only to perform QMC-driven classical and quantum LD, but also to extract the best PES fitting functional form for the excess proton and for the water-water interaction in the Zundel core.
The final goal is to derive the two-dimensional (2D) model potential $V_\textrm{2D}=V_\textrm{2D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2},\delta)$, where d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ is the distance between the two central oxygen atoms and $\delta$ is the proton sharing coordinate, referenced to the midpoint of the $\text{O}_1\text{H}^+\text{O}_2$ complex: $\delta \equiv \tilde{d}_{\text{\scriptsize O}_{1/2}\text{\scriptsize H}^+} -\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}/2$, with $\tilde{d}_{\text{\scriptsize O}_{1/2}\text{\scriptsize H}^+}$ the $\text{O}_{1/2}$-$\text{H}^+$ distance projected onto the O$_1$O$_2$ direction. The projection of the full interatomic potential on the restricted 2D manifold is done by integrating the other degrees of freedom over the thermal partition function, sampled during the MD dynamics, i.e. $V_\textrm{2D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2},\delta) \equiv \langle V(q_1, q_2, \mathbf{q}_{N-2}) \delta(q_1-\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}) \delta(q_2-\delta) \rangle $, where $\langle \ldots \rangle$ is the average over the partition function of the classical/quantum statistical ensemble at fixed temperature, and $V$ is the 3$N$-dimensional potential depending on the generalised nuclear coordinates of the full system. Analogously, one can define the one-dimensional (1D) potential acting between O$_1$ and O$_2$ as $V_\textrm{1D}=V_\textrm{1D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}) \equiv \langle V(q_1, q_2, \mathbf{q}_{N-2}) \delta(q_1-\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}) \rangle $, according to previous notations. Derivatives of the previous potentials with respect to $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ and/or $\delta$ can be defined in the same way. 
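To make the proton-sharing coordinate concrete, a minimal sketch of how $\delta$ can be evaluated from Cartesian positions (helper name and sample coordinates are illustrative):

```python
import math

def proton_sharing_delta(o1, o2, h):
    """delta = (O1-H+ separation projected onto the O1O2 axis) - d_O1O2/2;
    delta = 0 when the proton sits midway between the two oxygens."""
    axis = [b - a for a, b in zip(o1, o2)]
    d_oo = math.sqrt(sum(c * c for c in axis))
    u = [c / d_oo for c in axis]                        # unit vector O1 -> O2
    proj = sum((hc - oc) * uc for hc, oc, uc in zip(h, o1, u))
    return proj - 0.5 * d_oo

# proton exactly midway (off-axis displacement does not matter): delta = 0
print(proton_sharing_delta((0, 0, 0), (2.4, 0, 0), (1.2, 0.1, 0)))
# VMC equilibrium values from the table above: d_O1O2 = 2.3930, d_O1H+ = 1.1555
print(proton_sharing_delta((0, 0, 0), (2.3930, 0, 0), (1.1555, 0, 0)))
```

With the VMC equilibrium distances, $\delta \simeq -0.04$ \AA, reflecting the asymmetric location of the excess proton discussed earlier.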
For instance, $\partial V_\textrm{2D}/ \partial \delta \equiv \langle \partial V(q_1, q_2, \mathbf{q}_{N-2})/\partial q_2 ~ \delta(q_1-\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}) \delta(q_2-\delta) \rangle $, and $\partial V_\textrm{1D}/ \partial \text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \equiv \langle \partial V(q_1, q_2, \mathbf{q}_{N-2})/\partial q_1 ~ \delta(q_1-\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}) \rangle $. Given these definitions, we can proceed with the calculation of the corresponding quantities with the aim of modeling the potentials $V_\textrm{1D}$ and $V_\textrm{2D}$. To do so, we will integrate the other degrees of freedom using the classical Boltzmann distribution in $\langle \ldots \rangle$, as generated by the QMC-driven classical Langevin dynamics at 100, 250 and 350 K. Employing the classical partition function has the advantage that the potentials \emph{sampled} in this way will tend to the original PES of the system as $\beta \rightarrow \infty$, while the quantum partition function will lead to averaged potentials biased by quantum fluctuations even in the zero temperature limit. To compute these quantities from an MD sampling, the $\delta$-functions in their definitions above are replaced by bins, whose size is given by the spacing between neighbouring points. In Fig.~\ref{forceOO_classical}, we study the $V_\textrm{1D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2})$ potential depending on the water-water distance $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ (left column), and its derivative $\partial V_\textrm{1D}/ \partial \text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ (right column). As one can see, the energy profile, on the left-hand side, is much noisier than its gradient, from which we can extract a precise value of the equilibrium $d_\textrm{O$_1$O$_2$}$ distance, and the evolution of the potential around the minimum.
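The bin replacement just described amounts to a plain conditional average over MD samples. A minimal sketch on synthetic data (the harmonic toy gradient, sample sizes and names are illustrative, not taken from the actual simulations):

```python
import random

def binned_average(q, vals, nbins=30):
    """Replace the delta-function in <v * delta(q - d)> / <delta(q - d)>
    by a histogram: mean of `vals` in each bin of the coordinate `q`."""
    lo, hi = min(q), max(q)
    width = (hi - lo) / nbins
    sums, counts = [0.0] * nbins, [0] * nbins
    for qi, vi in zip(q, vals):
        i = min(int((qi - lo) / width), nbins - 1)
        sums[i] += vi
        counts[i] += 1
    return [(lo + (i + 0.5) * width, sums[i] / counts[i])
            for i in range(nbins) if counts[i] > 0]

# Synthetic check: thermal sampling around d0 with a linear restoring
# gradient dV/dd = k (d - d0) plus noise; the binned gradient profile
# should vanish at the equilibrium distance.
random.seed(0)
d0, k = 2.42, 80.0
q = [random.gauss(d0, 0.05) for _ in range(20000)]
dV = [k * (qi - d0) + random.gauss(0.0, 1.0) for qi in q]
profile = binned_average(q, dV)
d_eq = min(profile, key=lambda cv: abs(cv[1]))[0]
print(f"equilibrium distance from the binned gradient: {d_eq:.3f}")
```

Fitting and integrating such binned gradients, rather than the much noisier binned energies, is the strategy adopted in the following.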
This shows the advantage of computing QMC forces in order to determine the PES, and suggests that a robust way of deriving the $V_\textrm{1D}$ potential is by fitting and integrating its derivatives, rather than by directly fitting the energies. \begin{figure}[!h] \centering \begin{tabular}{lr} \includegraphics[width=0.4\columnwidth]{pesOO_energy_new_100K.png} & \includegraphics[width=0.4\columnwidth]{pesOO_force_new_100K.png} \\ \includegraphics[width=0.4\columnwidth]{pesOO_energy_new_250K.png} & \includegraphics[width=0.4\columnwidth]{pesOO_force_new_250K.png} \\ \includegraphics[width=0.4\columnwidth]{pesOO_energy_new_350K.png} & \includegraphics[width=0.4\columnwidth]{pesOO_force_new_350K.png} \\ \end{tabular} \caption{Left-hand side: total energy variation of the cluster as a function of the $d_\textrm{O$_1$O$_2$}$ distance ($V_\textrm{1D}$). Right-hand side: sum of the energy gradients with respect to $\mathbf{q}_\textrm{O$_1$}$ and $\mathbf{q}_\textrm{O$_2$}$ variations projected along the O$_1$O$_2$ direction, for classical simulations at different temperatures. This corresponds to $\partial V_\textrm{1D}/ \partial \text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, resulting in the force that drives the O$_1$-O$_2$ stretching mode. } \label{forceOO_classical} \end{figure} \newpage In Fig.~\ref{2DforceH_classical}, we study the $V_\textrm{2D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2},\delta)$ potential depending on the proton coordinate $\delta$, at various (fixed) $d_\textrm{O$_1$O$_2$}$ distances. In the left column, we show $\partial V_\textrm{2D}/ \partial \delta$ in a contour plot as a function of both $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ and $\delta$. Positive (negative) values of $\partial V_\textrm{2D}/ \partial \delta$ are coloured in red (blue). The white region indicates the extrema of the 2D-PES. 
The classical proton distribution is clearly asymmetric for $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \gtrsim 2.37$ \AA, with a minimum departing from the $\delta=0$ axis. In the right column, the same information is provided by superposing $\partial V_\textrm{2D}/ \partial \delta$ plotted as a function of $\delta$ and taken at fixed $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ distances. \begin{figure}[!h] \centering \begin{tabular}{lr} \includegraphics[width=0.45\columnwidth]{pes_force_symm_100K.png} & \includegraphics[width=0.45\columnwidth]{pes_layer_force_symm_denser2_new_100K.png} \\ \includegraphics[width=0.45\columnwidth]{pes_force_symm_250K.png} & \includegraphics[width=0.45\columnwidth]{pes_layer_force_symm_denser2_new_250K.png} \\ \includegraphics[width=0.45\columnwidth]{pes_force_symm_350K.png} & \includegraphics[width=0.45\columnwidth]{pes_layer_force_symm_denser2_new_350K.png} \\ \end{tabular} \caption{Left column: contour plot of $\partial V_\textrm{2D}/ \partial \delta$ as a function of both $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ and $\delta$. Right column: superposition of $\partial V_\textrm{2D}/ \partial \delta$, plotted as a function of $\delta$ at various (fixed) $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ values. The force acting on H$^+$ projected along the O$_1$O$_2$ direction is given by $-\partial V_\textrm{2D}/ \partial \delta$. Notice that the size of the $(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2},\delta)$ space accessible by MD to sample these quantities increases as a function of the temperature. } \label{2DforceH_classical} \end{figure} \newpage \section{Projected two-dimensional PES} \label{2D-PES} Using the data obtained in Sec.~\ref{towards_PES}, let us determine an analytic form for the $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ potential, which depends on both the $d_\textrm{O$_1$O$_2$}$ and $\delta$ coordinates.
This will take into account the variation of the proton-oxygen potential along the proton shuttling mode as the distance between the two inner water molecules varies. We first derive the $V_\textrm{1D}$ potential between the two water molecules, which depends only on the $d_\textrm{O$_1$O$_2$}$ stretching coordinate, by fitting the derivatives shown in Fig.~\ref{forceOO_classical}, for the simulation at 100 K, which yields less noisy datapoints than the simulations at higher temperatures. As fitting function, we choose the Morse potential: \begin{equation} V_\textrm{1D}(x)=b_\textrm{Morse} \left(\exp(-2 c_\textrm{Morse} (x-d_\textrm{Morse}))-2 \exp(-c_\textrm{Morse} (x-d_\textrm{Morse}))\right) + b_\textrm{Morse}, \label{Morse_pot} \end{equation} where we have chosen the constant shift so as to set the zero of energy at the potential minimum. The results of the fit are plotted in Fig.~\ref{Morse_fit}, together with the potential derivatives evaluated by classical MD driven by QMC forces at 100 K, 250 K and 350 K. \begin{figure}[!htb] \centering \includegraphics[width=0.5\columnwidth]{pesOO_force_morse_new.png} \caption{Fit of the QMC estimates of $\partial V_\textrm{1D}/ \partial \text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ at 100 K. As fitting function we have used the derivative of the $V_\textrm{1D}$ Morse potential, as defined in Eq.~\ref{Morse_pot}.} \label{Morse_fit} \end{figure} From this analysis, the estimated equilibrium distance between two water molecules in the Zundel core of the protonated water hexamer is 2.408 \AA\ at 100 K, in good agreement with the analysis based on the radial distribution function reported in Fig.~2 of the main text. While the data derived from QMC-MD simulations are less noisy at 100 K, they explore a smaller phase space, due to a probability density distribution more localised in the $(d_\textrm{O$_1$O$_2$},\delta)$ space at lower temperatures.
This turns out to be a problem if one aims at estimating the behavior of the $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ potential not only around its equilibrium geometry but also over its tails. A way to overcome this issue within the projection framework described in Sec.~\ref{towards_PES} is to sample the projected potential from QMC-MD simulations carried out at higher temperatures. As clearly shown in Fig.~\ref{2DforceH_classical}, at 250 K and 350 K the $V_\textrm{2D}$ behavior can be evaluated on a much larger window in both the $\delta$ and $d_\textrm{O$_1$O$_2$}$ directions. Moreover, we can increase the statistics of the higher-temperature datapoints by averaging the 250 K and 350 K estimates. \begin{figure}[!htb] \centering \includegraphics[width=0.5\columnwidth]{pesOO_force_morse_avT250T350_new.png} \caption{Fit of the QMC forces using $\partial V_\textrm{1D}/\partial x$ as fitting function, where the Morse potential $V_\textrm{1D}(x)$ is defined in Eq.~\ref{Morse_pot} and the fitted dataset is taken by averaging the outcome of the 250 K and 350 K simulations.} \label{Morse_fit_av} \end{figure} We then fit the $V_\textrm{1D}$ potential in Eq.~\ref{Morse_pot} by using a dataset averaged over 250 K and 350 K. The corresponding Morse potential fit is reported in Fig.~\ref{Morse_fit_av}. From this analysis, the equilibrium distance between two water molecules in the Zundel core of the protonated water hexamer is 2.425 \AA\ at $\approx$ 300 K, again in satisfactory agreement with the analysis based on the radial distribution function reported in Fig.~2 of the main text. Fitting over these datapoints leads to an increase of the equilibrium distance by 0.017 \AA\ as the temperature is raised from 100 K to $\approx$ 300 K. The average based on the radial distribution function of the full VMC-MD simulations yields a cluster expansion of $\approx$ 0.025 \AA.
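The fit-the-derivative strategy used here can be illustrated with a short script; the Morse parameters and noise level below are placeholders chosen for the example, not the published fit values:

```python
import numpy as np
from scipy.optimize import curve_fit

def dmorse_dx(x, b, c, d):
    """Analytic derivative of the Morse potential of Eq. (Morse_pot),
    V(x) = b*(exp(-2c(x-d)) - 2*exp(-c(x-d))) + const."""
    return 2.0 * b * c * (np.exp(-c * (x - d)) - np.exp(-2.0 * c * (x - d)))

# Synthetic "binned QMC force" data (placeholder parameters, e.g. energies
# in Kelvin and distances in Angstrom).
b_true, c_true, d_true = 2000.0, 2.0, 2.408
rng = np.random.default_rng(1)
x = np.linspace(2.30, 2.55, 26)
y = dmorse_dx(x, b_true, c_true, d_true) + rng.normal(0.0, 20.0, x.size)

popt, _ = curve_fit(dmorse_dx, x, y, p0=(1000.0, 1.0, 2.4))
b_fit, c_fit, d_fit = popt
# d_fit gives the equilibrium distance directly from the force data, without
# ever fitting the (much noisier) energies themselves.
```

The equilibrium distance is read off as the fitted $d_\textrm{Morse}$, i.e. the zero of the fitted force.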
The Morse potential determined from points computed at 100 K and the one from points averaged over 250 K and 350 K are plotted in Fig.~\ref{Morse_compT}, where the two fitting functions are superimposed. The ZPE analysis described in the main text of the paper is carried out using the dataset averaged over 250 K and 350 K. As one can see in Fig.~\ref{Morse_compT}, in the energy range below 1000 K, the two curves are just shifted from one another, having nearly the same curvature around the minimum. Therefore, the conclusions on the ZPE effect reached by using a model potential projected at larger temperatures would not be different from the ones one could reach using the potential derived at 100 K. \begin{figure}[!htb] \centering \includegraphics[width=0.5\columnwidth]{plot_PES_avT250T350_new.png} \caption{ $V_\textrm{1D}$ determined from classical MD at 100 K and from the averaged dataset of classical MD at 250 K and 350 K. The energy is expressed in Kelvin. The horizontal and vertical dashed lines are guides for the eye. } \label{Morse_compT} \end{figure} As a second step, we derive the $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ potential. For every $d_\textrm{O$_1$O$_2$}$ slice, in Fig.~\ref{Landau_fit_av} we plot the estimated values of the $\partial V_\textrm{2D}/\partial \delta$ derivative as a function of $\delta$, computed by averaging over the 250 K and 350 K classical MD samples, as we did for the $V_\textrm{1D}$ potential. 
\newpage \begin{figure}[!h] \centering \begin{tabular}{lr} \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.315_new.png} & \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.345_new.png} \\ \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.375_new.png} & \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.405_new.png} \\ \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.435_new.png} & \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.465_new.png} \\ \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.495_new.png} & \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.525_new.png} \\ \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.555_new.png} & \includegraphics[width=0.32\columnwidth]{pesOH_dOO2.585_new.png} \\ \end{tabular} \caption{Fit of the QMC forces using $\partial V_{d_\textrm{O$_1$O$_2$}}(x)/\partial x$ as fitting function for different $d_\textrm{O$_1$O$_2$}$, where the potential $V_{d_\textrm{O$_1$O$_2$}}(x)$ is defined in Eq.~\ref{Landau_pot}. The points are calculated as averages over the two temperatures, 250 K and 350 K. } \label{Landau_fit_av} \end{figure} \newpage At every $d_\textrm{O$_1$O$_2$}$, we fit the derivative of the energy with respect to $\delta$ by using a symmetric quartic function, i.e. a Landau potential, as fitting model for the energy dependence: \begin{equation} V_{d_\textrm{O$_1$O$_2$}}(x)=a_\textrm{Landau} + b_\textrm{Landau} x^2 + c_\textrm{Landau} x^4 ~~~~~\textrm{at fixed $d_\textrm{O$_1$O$_2$}$ distance.} \label{Landau_pot} \end{equation} The fits for selected $d_\textrm{O$_1$O$_2$}$ values are also reported in Fig.~\ref{Landau_fit_av}. The parameters $a_\textrm{Landau}$, $b_\textrm{Landau}$ and $c_\textrm{Landau}$ thus have an implicit dependence on $d_\textrm{O$_1$O$_2$}$, which needs to be included in the full $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ function.
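Because the derivative of Eq.~\ref{Landau_pot}, $2 b_\textrm{Landau}\,x + 4 c_\textrm{Landau}\,x^3$, is linear in the parameters, each fixed-$d_\textrm{O$_1$O$_2$}$ slice can be fitted by ordinary least squares. A minimal sketch with synthetic force data (all parameter values are illustrative only):

```python
import numpy as np

# Fit dV/dx = 2*b*x + 4*c*x^3 (derivative of the Landau form) to force data
# at one fixed dOO slice. Linear in (b, c), so plain least squares suffices.
b_true, c_true = -3000.0, 40000.0            # illustrative values only
rng = np.random.default_rng(2)
delta = np.linspace(-0.35, 0.35, 29)
force = 2*b_true*delta + 4*c_true*delta**3 + rng.normal(0.0, 50.0, delta.size)

A = np.column_stack([2.0 * delta, 4.0 * delta**3])   # design matrix
(b_fit, c_fit), *_ = np.linalg.lstsq(A, force, rcond=None)

# b < 0 with c > 0 signals a double well, with minima at +/- sqrt(-b/(2c)).
delta_min = np.sqrt(-b_fit / (2.0 * c_fit))
```

Repeating this per slice yields the $b_\textrm{Landau}(d_\textrm{O$_1$O$_2$})$ and $c_\textrm{Landau}(d_\textrm{O$_1$O$_2$})$ trends fitted next.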
We found that a good parametrisation for $b_\textrm{Landau}$ is given by: \begin{equation} b_\textrm{Landau}(d_\textrm{O$_1$O$_2$})=\alpha + \beta d_\textrm{O$_1$O$_2$}, \label{b_dep} \end{equation} while that for $c_\textrm{Landau}$ is: \begin{equation} c_\textrm{Landau}(d_\textrm{O$_1$O$_2$})=\epsilon \exp(-\gamma d_\textrm{O$_1$O$_2$}). \label{c_dep} \end{equation} Note that the $d_\textrm{O$_1$O$_2$}$-dependence in Eq.~\ref{c_dep} guarantees that the potential in Eq.~\ref{Landau_pot} remains confining for $\epsilon > 0$. Fig.~\ref{Lparameters_vs_dOO_av} demonstrates that the evolution of the parameters with $d_\textrm{O$_1$O$_2$}$ is well captured by the functional forms in Eqs.~\ref{b_dep} and \ref{c_dep}. \begin{figure}[!h] \centering \begin{tabular}{l r} \includegraphics[width=0.5\columnwidth]{landau_bpar_evolution_new.png} & \includegraphics[width=0.5\columnwidth]{landau_cpar_evolution_new.png} \\ \end{tabular} \caption{Fit of the $b_\textrm{Landau}$ and $c_\textrm{Landau}$ dependence on the $d_\textrm{O$_1$O$_2$}$ distance, based on the functional forms in Eqs.~\ref{b_dep} and \ref{c_dep}. } \label{Lparameters_vs_dOO_av} \end{figure} From Eq.~\ref{b_dep} and its fitting parameters, the bifurcation point turns out to be located at $d_\textrm{symm} \approx 2.37$ \AA, in quite good agreement with the relaxed ground-state geometry.
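For completeness, we spell out the geometry of the quartic form in Eq.~\ref{Landau_pot} when $b_\textrm{Landau} < 0$. Imposing stationarity, $\partial V_{d_\textrm{O$_1$O$_2$}}/\partial x = 2 b_\textrm{Landau}\, x + 4 c_\textrm{Landau}\, x^3 = 0$, yields the central maximum at $x=0$ and the two minima at $x_\textrm{min} = \pm \sqrt{-b_\textrm{Landau}/(2 c_\textrm{Landau})}$, so that the height of the central barrier is
\begin{equation}
V_{d_\textrm{O$_1$O$_2$}}(0) - V_{d_\textrm{O$_1$O$_2$}}(x_\textrm{min}) = -\,b_\textrm{Landau}\, x_\textrm{min}^2 - c_\textrm{Landau}\, x_\textrm{min}^4 = \frac{b^2_\textrm{Landau}}{4 c_\textrm{Landau}},
\end{equation}
while the bifurcation point follows from $b_\textrm{Landau}(d_\textrm{symm}) = 0$, i.e. $d_\textrm{symm} = -\alpha/\beta$ in the parametrisation of Eq.~\ref{b_dep}.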
The final 2D potential $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ is thus fully determined by the following function: \begin{equation} V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta) = a_\textrm{Landau}(d_\textrm{O$_1$O$_2$})+b_\textrm{Landau}(d_\textrm{O$_1$O$_2$}) \delta^2 + c_\textrm{Landau}(d_\textrm{O$_1$O$_2$}) \delta^4, \label{2D_pot} \end{equation} with $b_\textrm{Landau}(d_\textrm{O$_1$O$_2$})$ and $c_\textrm{Landau}(d_\textrm{O$_1$O$_2$})$ already defined in Eqs.~\ref{b_dep} and \ref{c_dep}, respectively, while $a_\textrm{Landau}(d_\textrm{O$_1$O$_2$})$ is defined as follows: \begin{equation} a_\textrm{Landau}(d_\textrm{O$_1$O$_2$})=V_\textrm{1D}(d_\textrm{O$_1$O$_2$})+\Delta(d_\textrm{O$_1$O$_2$}). \label{a_dep} \end{equation} In the above Equation, $\Delta(d_\textrm{O$_1$O$_2$})$ is the proton barrier of the Landau potential $V_{d_\textrm{O$_1$O$_2$}}(x)$ in Eq.~\ref{Landau_pot}, such that the bottom of $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ at a given $d_\textrm{O$_1$O$_2$}$ distance follows exactly the Morse potential $V_\textrm{1D}$ in Eq.~\ref{Morse_pot}. In particular, $\Delta(d_\textrm{O$_1$O$_2$})$ reads: \begin{equation} \Delta(d_\textrm{O$_1$O$_2$})= \begin{cases} 0, & \text{if } d_\textrm{O$_1$O$_2$} \leq d_\textrm{symm} \\ \frac{b^2_\textrm{Landau}(d_\textrm{O$_1$O$_2$})}{4 c_\textrm{Landau}(d_\textrm{O$_1$O$_2$})}, & \text{otherwise}. \end{cases} \label{barrier} \end{equation} The resulting 2D potential $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ and its derivative with respect to $\delta$ are drawn in the contour plot of Fig.~\ref{fig_2D_pot_avOO_avOH}. \begin{figure}[!hbt] \centering \begin{tabular}{l r} \includegraphics[width=0.5\columnwidth]{pes_energy_Morse_new.png} & \includegraphics[width=0.5\columnwidth]{pes_force_new.png} \\ \end{tabular} \caption{Left panel: contour plot of the $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ 2D model potential. 
Right panel: contour plot of the model-potential derivative $\frac{\partial}{\partial\delta} V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$, to be compared with the contour plot of the mid- and bottom-left panels of Fig.~\ref{2DforceH_classical}, directly obtained from MD-sampled datapoints. } \label{fig_2D_pot_avOO_avOH} \end{figure} $\frac{\partial}{\partial\delta} V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ compares very well with the same quantity directly evaluated by QMC-driven classical MD at both 250 K and 350 K, as shown in Fig.~\ref{2DforceH_classical}, mid- and bottom-left panels. This is an \emph{a posteriori} check of the quality of our 2D-PES determination. As we have seen, there is a residual temperature dependence in the determination of $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$, due to the projection scheme employed. Using a range of temperatures $T \in [250, 350]$ K guarantees an optimal sampling of the configuration space during the MD, allowing for a more extended determination of the 2D-PES model. In the main text of the paper, we used the function plotted in Fig.~\ref{fig_2D_pot_avOO_avOH} to carry out an anharmonic vibrational analysis of the shuttling mode. The range of temperatures at which the model potential $V_\textrm{2D}(d_\textrm{O$_1$O$_2$},\delta)$ has been derived is consistent with the temperatures where the PT shows a ``sweet spot'', supporting the outcome of our analysis. \section{Proton transfer: adiabatic events versus quantum tunneling} The picture unveiled in this work proves that the synergy of NQEs and thermal effects is fundamental to understanding PT in an aqueous environment. Besides the ZPE, NQEs can contribute to the proton diffusion by means of instantaneous tunneling, which can further accelerate the PT dynamics. Both adiabatic crossing, boosted by the ZPE, and instantaneous tunneling are plausible and non-competing scenarios in our system.
As we have seen, the former is certainly favoured by the shortness of d$_{{\text{\scriptsize O}}_1{\text{\scriptsize O}}_2}$ in the short-Zundel configurations, where the barrier is absent along the shuttling trajectory $\delta$, while the latter could sustain PT events even in the case of high barriers, such as in Eigen-like configurations. In order to assess the role played by quantum tunneling, we evaluate the localisation level of the excess proton during its dynamics, by computing the root-mean-square (RMS) displacement correlation functions\cite{Chandler1994}, $\mathcal{R}_{\textrm{d}_{\textrm{{\scriptsize OH}}^+}}(\tau) = \langle |\textrm{d}_{\textrm{\scriptsize OH}^+}(0) - \textrm{d}_{\textrm{{\scriptsize OH}}^+}(\tau)|^2 \rangle$, in imaginary time $\tau \in [0, \beta \hbar]$, with $\beta = 1/k_BT$ and $\textrm{d}_{\textrm{\scriptsize OH}^+} = |\mathbf{q}_\text{\scriptsize O}-\mathbf{q}_{\textrm{\scriptsize H}^+}|$. For localised states, trapped in potential energy minima, $\mathcal{R}_{\textrm{d}_{\textrm{\scriptsize OH}^+}}(\tau)$ is flat in the intermediate $\tau$ range, while, for delocalised states, $\mathcal{R}_{\textrm{d}_{\textrm{\scriptsize OH}^+}}(\tau)$ is roughly parabolic \cite{Chandler1994}. A quantum particle undergoing a tunneling event looks like a free particle, with a parabolic behavior, as it is unaffected by the potential energy barrier it tunnels through. \begin{figure}[!htp] \centering \includegraphics[width=\columnwidth]{tunneling.pdf} \caption {\label{figure:correl} Root mean square displacement $R(\tau)$ in quantum imaginary time $\tau^* \in [0,\beta \hbar/M] \equiv [0,1]$ for the proton displacement with respect to the side oxygen atoms, $\textrm{d}_{\textrm{\scriptsize OH}^+} = |\mathbf{q}_\text{\scriptsize O}-\mathbf{q}_{\textrm{\scriptsize H}^+}|$, in the core of the H$_{13}$O$_6^+$ cation. Calculations are performed at 100 K, 250 K and 350 K, reported in panels a), b) and c), respectively.
Samples are taken from the instanton population, and averages are performed in a species-selected way: short-Zundel (gray points), elongated-Zundel (yellow points) and Eigen-like species (green points) are separated, according to the $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ distance at which they occur. As reference, the root-mean-square displacement of the free proton at the given temperature is also reported in red solid lines.} \end{figure} The evolution of the RMS displacement correlation functions $\mathcal{R}_{\textrm{d}_{\textrm{\scriptsize OH}^+}}(\tau)$ with the temperature is depicted in Fig.~\ref{figure:correl} for the instanton configurations, the most relevant for the PT description. Also in this case, we distinguish instantons belonging to the three different regimes. A striking difference is observed among the three temperatures investigated, i.e. 100 K, 250 K and 350 K. For the lowest temperature (Fig.~\ref{figure:correl}(a)), the TS correlation function $\mathcal{R}_{\textrm{d}_{\textrm{\scriptsize OH}^+,\textrm{\scriptsize TS}}}(\tau)$ is flat, \emph{i.e.} more localised, in all regimes. At intermediate temperatures (Fig.~\ref{figure:correl}(b)), within the ``sweet spot'' range, the $\mathcal{R}_{\textrm{d}_{\textrm{\scriptsize OH}^+,\textrm{\scriptsize traj}}}(\tau)$ measured in the distorted-Eigen configurations is the closest to the free-particle behavior, while the short-Zundel averages are clearly bound by the confining symmetric potential. This suggests that some PT events in the distorted-Eigen configurations, which need to overcome taller barriers, could be enhanced by quantum tunneling as an additional PT channel, besides the very effective short-Zundel TS mechanism. At the highest temperature (Fig.~\ref{figure:correl}(c)), all $\mathcal{R}_{\textrm{d}_{\textrm{\scriptsize OH}^+,\textrm{\scriptsize traj}}}(\tau)$ are very close to the free-particle behavior.
However, in this case, thermal effects, rather than quantum tunneling, make the instantons behave like free particles. \section{Introduction} \label{intro} For more than 200 years, since the seminal work of von Grotthus, the properties of the hydrated proton H$^+_{(\textrm{aq})}$ have intrigued the scientific community \cite{200years_after,CUKIERMAN2006876}. Despite significant advances, the exact role of the solvated proton in proton transfer (PT) reactions in chemical and biological systems has not been fully elucidated yet. The textbook picture is that the hydrated proton exists as a classical hydronium cation H$_{3}$O$^+$, but it is more appropriately described as a delocalised charge defect shared by multiple molecules. The spread of this charge defect blurs the identity of the excess proton between two limiting structures, namely the Zundel\cite{Zundel1968} and the Eigen\cite{Eigen1964} ions. Indeed, the hydrated proton Infrared (IR) spectrum displays a combination of a few discrete absorption bands on top of an absorption continuum, broadly extended over the entire spectrum. Neither the symmetrically solvated hydronium H$_{9}$O$_4^+$ (Eigen) ion, nor the equally shared proton in the H$_{5}$O$_2^+$ (Zundel) ion, can individually rationalise this characteristic IR fingerprint. Models involving fast inter-conversions between these two ionic species have also been shown to fail\cite{Vuilleumier1999}. To deal with these issues, Stoyanov \emph{et al.}\cite{Stoyanov2010} have introduced the stable H$_{13}$O$_6^+$ species, the protonated water hexamer, which is Zundel-type in the sense that the excess proton is equally shared between two water molecules. The core of the cluster is characterised by a central oxygen-oxygen distance $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ that is, however, more elongated than in the Zundel cation\cite{Bell1975}.
On the other hand, recent Molecular Dynamics (MD) simulations suggest the existence of a distorted, nonsymmetric Eigen-type cation, remaining at the heart of a dynamical charge defect spanning multiple water molecules \cite{Knight2012}. Thus, the protonated water hexamer represents the smallest protonated water cluster for which both of these characteristic binding motifs coexist\cite{Jiang2000,Mizuse2011}. It is the simplest one that can provide a meaningful description of the hydrated proton in water. The main protonated hexamer configurations are represented in Fig.~\ref{fig:config}, for both Zundel- and Eigen-like forms. The protonated hexamer Potential Energy Surface (PES) has been partially explored by IR spectroscopy \cite{Wang2003,zundel_vibrational,Heine2013} and electronic structure calculations performed within Density Functional Theory (DFT), M\o{}ller-Plesset (MP2), and Coupled Cluster (CC) approaches, also supplemented by Machine Learning techniques, \cite{Wei1994,zundel_vibrational,Jiang2000,Wang2003,heindel2018,schran2021} confirming that the two structures introduced above are the lowest energy isomers. The quantum nature of the proton however induces a delocalised structure on this PES. In this work, we apply MD simulations fully retaining the nuclear quantum nature of the atoms. In particular, the quantum proton, described within the Feynman path integral (PI) approach, evolves in a very accurate PES estimated by means of Quantum Monte Carlo (QMC). This stochastic technique introduces an intrinsic noise, which affects the forces driving the ion dynamics and, consequently, the simulation temperature. Relying on the generalised fluctuation-dissipation theorem, a Langevin-based approach has been developed to address this issue for classical \cite{Attaccalite2008} and quantum \cite{Mouhat2017} ions. It allows one to sample microscopic configurations in the canonical ensemble with an unprecedented level of quantum accuracy. 
Details of the method are provided in the Methods Section and in Secs.~S1 and S2 of the Supplementary Information (SI). We find that the hydrogen bond (H-bond) mediated by the hydrated proton shows a remarkably low thermal expansion from zero temperature up to $300$ K, with a nearly temperature-independent length that becomes \emph{shorter} than the classical-ion counterpart in the [$200$-$350$] K temperature range. A non-trivial behaviour of the H-bond has also been found in H-rich crystals and ferroelectric materials, such as the potassium dihydrogen phosphate (KDP)\cite{koval2002ferroelectricity}, first detected by Ubbelohde in 1939 upon isotopic substitution\cite{robertson1939structure}. In the latter case, the lighter the hydrogen, the shorter the H-bond. This was interpreted as a quantum manifestation of proton delocalisation, strengthening the H-bond. In the present situation, the strength of the H-bond results from a non-trivial cooperation of nuclear quantum effects (NQEs) and thermal activation, as we will show in this work. Indeed, NQEs strongly affect the vibrational levels of the proton shuttling mode bridging the central $\text{O}_1$ and $\text{O}_2$ oxygen atoms. These levels are then thermally occupied according to the $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ distance of a given configuration. 
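The thermal-occupation argument can be made concrete with a toy calculation; the vibrational level spacings below are hypothetical placeholders, not the computed spectrum of the shuttling mode:

```python
import numpy as np

# Boltzmann occupation of the shuttling-mode vibrational levels.
# The level energies are hypothetical placeholders, in Kelvin (k_B = 1).
def occupations(levels_K, T):
    """Normalised thermal populations p_n ~ exp(-E_n / k_B T)."""
    e = np.asarray(levels_K, dtype=float)
    w = np.exp(-(e - e.min()) / T)
    return w / w.sum()

levels = [0.0, 300.0, 900.0]        # hypothetical E_n in Kelvin
p_100 = occupations(levels, 100.0)
p_300 = occupations(levels, 300.0)
# Raising T from 100 K to 300 K transfers weight to the excited levels.
```

This illustrates why, at fixed level structure, warming the cluster populates the higher shuttling states that help the proton cross or ignore the barrier.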
We can thus distinguish three regimes (see Fig.~\ref{fig:config}): (i) ``short-Zundel'' configurations with the shortest $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, where the shuttling proton feels a quadratic potential and is perfectly shared between the two central water molecules; (ii) ``elongated-Zundel'' configurations for intermediate $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, comprising the equilibrium distance, where a potential energy barrier starts to develop between $\text{O}_1$ and $\text{O}_2$ and the proton is delocalised only due to NQEs; (iii) ``distorted-Eigen'' configurations at even larger $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, where the central barrier is large enough that the hydrated proton is localised on one of the two flanking water molecules, forming an Eigen-like complex. We will see that the occurrence of short-Zundel configurations is key to understanding the thermal robustness of the H-bond and to enhancing the proton transfer dynamics. Despite being energetically disfavoured by the short $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ distances at the classical level, these configurations are populated thanks to the synergistic action of NQEs and temperature, yielding a sweet spot for proton transfer in the [$250$-$300$] K temperature range. \section{Results and discussion} \label{results} \paragraph*{Thermal expansion of the H-bond} To rationalise our main outcome, we first study the zero-temperature classical geometry and PES of the protonated water hexamer, and compare them with those of the Zundel cation. While the latter system misses a large part of water solvation effects, the former includes the full contribution of the first and second shells of the solvated proton. The zero-temperature results are reported in Sec.~S3.1 of the SI.
We simply highlight here that the Variational Monte Carlo (VMC) equilibrium O$_1$-O$_2$ distance is found to be $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} = \text{d}_\text{min} = 2.3930(5)$ \AA, in good agreement with MP2 calculations, the most widely used post-Hartree-Fock theory to study water clusters (see Sec.~S1.2 of the SI for a more extended comparison between VMC and MP2). VMC exhibits a milder scaling with system size than MP2, allowing one to perform extensive calculations of the protonated hexamer. At variance with the Zundel cation\cite{JONES1989283,Stoyanov2006}, the protonated hexamer equilibrium geometry is asymmetric, implying that the global minimum is split into a double well, separated by an energy barrier between the two central water molecules. This barrier vanishes at $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} = \text{d}_\text{symm} \simeq 2.38$ \AA, a distance that separates the short-Zundel configurations below from the elongated-Zundel configurations above. The height of the barrier is less than $100$ K (in $k_B$ units) at $\text{d}_{\text{min}}$, rapidly increasing as a function of $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$. We therefore expect several consequences on the hydrated proton distribution and on its mobility at finite temperature, once the NQEs are taken into account. To understand how the dynamics of the hydrated proton evolves with temperature, QMC-driven \emph{ab initio} MD simulations are relevant. Such calculations are carried out for both classical and quantum nuclei of the H$_{13}$O$_{6}^+$ ion, within the temperature interval T $\in [50-350]$ K, thanks to the methodological developments detailed in Ref.\cite{Mouhat2017} and in the Methods Section. Under these conditions, the clusters are stable during the simulated time frame ($\approx 30$ ps), allowing us to access the thermal properties of the hydrated proton and the $\text{O}_1\text{H}^+\text{O}_2$ bond over an extended temperature range.
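For a single tagged O$_1$-O$_2$ pair, the pair correlation analysis discussed below reduces to a histogram of the core distance over the trajectory. A schematic version with synthetic coordinates (all values are illustrative stand-ins for actual MD frames):

```python
import numpy as np

# Distribution of the core O1-O2 distance over MD frames, i.e. the single-pair
# analogue of g_O1O2. The trajectory arrays are synthetic placeholders.
rng = np.random.default_rng(3)
o1 = rng.normal(0.0, 0.03, size=(30000, 3))
o2 = np.array([2.41, 0.0, 0.0]) + rng.normal(0.0, 0.03, size=(30000, 3))

d = np.linalg.norm(o1 - o2, axis=1)          # d_O1O2 per frame
hist, edges = np.histogram(d, bins=60, range=(2.2, 2.7), density=True)
peak = 0.5 * (edges[:-1] + edges[1:])[np.argmax(hist)]
mean_d = d.mean()                            # <d_O1O2> at this temperature
```

The peak position and the mean of `d` correspond, respectively, to the maximum of the distribution and to the average distance tracked as a function of temperature.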
From our QMC-MD simulations, we extract the normalised Pair Correlation Functions (PCFs) g$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ for the two oxygen atoms O$_1$ and O$_2$ of the cluster core (Fig.~\ref{figure:radial}). The expected broadening of the PCFs due to nuclear quantisation is significant over the whole temperature range (Fig.~\ref{figure:radial}(b)). Only at temperatures as high as 350 K does the classical g$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ (Fig.~\ref{figure:radial}(a)) start to resemble the quantum distribution. This implies that the NQEs cannot be neglected for temperatures up to this value, above ambient conditions. We also notice that, when compared to the Zundel ion results\cite{Suzuki2013}, the peak position is shifted up by at least $\sim 0.01$ \AA. Thus, it appears that the H$_{13}$O$_{6}^+$ cluster frequently adopts elongated-Zundel configurations\cite{Markovitch2008,kreuer2004transport,asthagiri2005ab} at the lowest temperatures considered here. This is at variance with the protonated water dimer, where the hydrated proton lives in a single minimum symmetrically located between the two water molecules. Focusing our attention on $\langle \textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \rangle $ (Fig.~\ref{figure:radial}(c)), its classical and quantum behaviours are remarkably different as a function of temperature. On the one hand, the classical $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ keeps increasing with temperature, as more energy is given to the intermolecular vibration modes. On the other hand, the quantum $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ displays a nearly flat behaviour with the cluster temperature, up to 300 K. This very low thermal expansion over a wide temperature range leads to a temperature regime where $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ for the quantum system becomes shorter than the classical values at the same temperatures.
This is clearly seen in Fig.~\ref{figure:radial}(c). We will come back to this point later. Finally, as the temperature further increases, the reduction of NQEs weakens the central H-bond strength. Consequently, $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ spreads out, due to stochastic fluctuations of the core and the solvent, and a more classical regime is reached when the averaged $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ values for classical and quantum nuclei meet again. The PCF distributions display longer tails, with more configurations covering regions with $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \in [2.5-2.7]$ \AA, and the peak position rapidly shifts to larger values. Configurations with such a large $\langle \textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \rangle$ are of distorted-Eigen type\cite{Markovitch2008,thecomputer}. \paragraph*{A cooperative thermal-quantum species: the short-Zundel ion} To refine our structural analysis, we compute the bidimensional distribution function $\rho_{2D}$, which correlates the oxygen-oxygen (O$_1$O$_2$) and the oxygen-proton (O$_{1/2}$H$^+$) distances, and study its temperature dependence $\rho_{2D}=\rho_{2D}(T)$. In Fig.~\ref{figure:2Dgrrnucvst}, we show the contour plot of the temperature-driven $\rho_{2D}$ variation (see also Sec.~S3.2 of the SI). By taking $\rho_{2D}(250 \,\, \text{K})$ as reference, four temperature variations are explored: $100$ K, $200$ K, room temperature (RT), and $350$ K (from the top to the bottom of Fig.~\ref{figure:2Dgrrnucvst}).
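The temperature-differenced density maps of Fig.~\ref{figure:2Dgrrnucvst} amount to subtracting normalised two-dimensional histograms. A schematic with synthetic ensembles (the distributions and shifts below are invented for illustration, not simulation output):

```python
import numpy as np

# Temperature-driven variation of rho_2D(d_OO, d_OH+) as a difference of
# normalised 2D histograms between two ensembles (synthetic stand-ins).
rng = np.random.default_rng(4)
def fake_ensemble(center_oo, n=20000):
    d_oo = rng.normal(center_oo, 0.05, n)
    d_oh = d_oo / 2 + rng.normal(0.0, 0.04, n)   # proton near the midpoint
    return d_oo, d_oh

oo_ref, oh_ref = fake_ensemble(2.41)             # reference ensemble
oo_T, oh_T = fake_ensemble(2.45)                 # hotter, more elongated

bins = [np.linspace(2.2, 2.7, 51), np.linspace(0.9, 1.6, 71)]
h_ref, _, _ = np.histogram2d(oo_ref, oh_ref, bins=bins, density=True)
h_T, _, _ = np.histogram2d(oo_T, oh_T, bins=bins, density=True)
delta_rho = h_T - h_ref       # positive/negative regions of the contour plot
```

Positive entries of `delta_rho` correspond to the red (populated) regions and negative entries to the blue (depleted) ones.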
In the \emph{classical} protonated hexamer (Fig.~\ref{figure:2Dgrrnucvst}, left column), raising the temperature from $250$ K up to $350$ K tends to stretch $\langle \text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \rangle$, by promoting configurations from the elongated Zundel (blue central distribution with d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \in [2.38,2.5]$ \AA\ in Fig.~\ref{figure:2Dgrrnucvst}) to an Eigen-like arrangement with larger d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ and a proton much more localised on one of the two central oxygen atoms (red wings). The situation is reversed at lower temperatures (100 K and 200 K) compared to the 250 K reference, with positive (red) variations in the elongated Zundel and negative (blue) variations in the wings. Thus, for classical nuclei, there is a progressive depletion of the elongated Zundel and a corresponding population of the distorted-Eigen wings upon temperature rise. Short-Zundel configurations, highlighted in Fig.~\ref{figure:2Dgrrnucvst} by a gray background, seem to play a very marginal role in the temperature-driven density distribution shift. The scenario is strikingly different with \emph{quantum} nuclei (right column), particularly at the lowest temperatures (100 K and 200 K). In this regime, distorted-Eigen configurations are barely populated or depleted, and the density shift upon raising the temperature takes place between the elongated-Zundel region and the short-Zundel sector. The latter is significantly more populated at 250 K than at lower temperatures at the expense of the elongated Zundel, which instead loses density with respect to the classical counterpart at the same temperature. In the higher-temperature limit, at 350 K, NQEs are less relevant and, as a consequence, the classical and quantum variations behave in a qualitatively similar way. 
In both the classical and the quantum case, we notice the presence of red wings at large oxygen-oxygen distances (d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \in [2.5,2.7]$ \AA), which are the signature of thermally activated Eigen-like states, with a strongly localised proton. This is related to less frequent elongated-Zundel configurations, indicated by the depleted distribution for d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2} < 2.5$ \AA, confirming that the distorted-Eigen configurations are indeed promoted by high temperature. For quantum nuclei, the corresponding depletion extends well below the elongated-Zundel region, also affecting short-Zundel configurations, down to d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \sim 2.3$ \AA, at variance with the classical case, where the short-Zundel configurations are not involved. To interpret these results, we first construct an accurate effective potential by projecting the full PES, computed during QMC-driven classical MD calculations, onto the degrees of freedom most relevant to understanding the dynamics of the hydrated proton. These are the d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ distance and the proton sharing coordinate $\delta$, referenced to the midpoint of the $\text{O}_1\text{H}^+\text{O}_2$ complex: $\delta \equiv \tilde{d}_{\text{\scriptsize O}_{1/2}\text{\scriptsize H}^+} -\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}/2$, with $\tilde{d}_{\text{\scriptsize O}_{1/2}\text{\scriptsize H}^+}$ the $\text{O}_{1/2}$-$\text{H}^+$ distance projected onto the O$_1$O$_2$ direction. The resulting two-dimensional (2D) potential is $V_\textrm{2D}=V_\textrm{2D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2},\delta)$. We refer the reader to Secs.~S4 and S5 of the SI for technical details about the PES projection. We highlight that the potential $V_\textrm{2D}$ is derived here at VMC quality. 
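For concreteness, the two collective coordinates defined above can be extracted from Cartesian positions in a few lines; the sketch below is our own illustration of the definitions (function and variable names are ours):

```python
import numpy as np

def core_coordinates(r_o1, r_o2, r_h):
    """Return (d_O1O2, delta) for one configuration.

    delta = d~_{O1 H+} - d_O1O2 / 2, with d~ the O1-H+ distance
    projected onto the O1->O2 direction, so delta = 0 when the proton
    sits exactly at the midpoint between the two oxygen atoms.
    """
    axis = np.asarray(r_o2, float) - np.asarray(r_o1, float)
    d_oo = np.linalg.norm(axis)
    # projected O1-H+ distance along the O1->O2 unit vector
    proj = np.dot(np.asarray(r_h, float) - np.asarray(r_o1, float),
                  axis / d_oo)
    return d_oo, proj - 0.5 * d_oo
```

With this sign convention, $\delta < 0$ ($\delta > 0$) corresponds to a proton closer to O$_1$ (O$_2$).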
We also notice that $\delta$ is the vibrational coordinate of the proton shuttling mode, while d$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ is related to the stretching mode of the two water molecules in the cluster core. Given $V_\textrm{2D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2},\delta)$, we then proceed to quantise the variable $\delta$. Indeed, while $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ can be taken as classical, for it is related to the motion of the heavier oxygen atoms of mass $m_\text{O}$, the $\delta$ coordinate must be quantised, owing to the light mass ($m_\text{H}$) of the hydrated proton. At the leading order in $2m_\text{H}/(m_\text{O}+m_\text{H})$, we separate the stretching mode from the shuttling one, by invoking a Born-Oppenheimer type of approximation for the two species. We finally solve quantum-mechanically the Hamiltonian of a proton in the potential $\text{V}_\delta \equiv V_\textrm{2D}(\alpha,\delta)|_{\alpha=\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}}$ at fixed $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ value. In Fig.~\ref{figure:ZPE}(a-c) we plot the ground state distribution and eigenvalues obtained for three distances: $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$=2.375 \AA, in the short-Zundel region close to the boundary between the short and the elongated Zundel; $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$= 2.495 \AA, in the elongated-Zundel region close to the frontier between the elongated Zundel and the distorted Eigen; and finally $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$= 2.585 \AA, deep into the distorted-Eigen regime. One can notice three different quantum behaviours of the vibrational shuttling mode, which put the three-regime distinction made at the beginning on a more quantitative footing. 
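Quantising the shuttling coordinate at fixed $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ amounts to diagonalising a one-dimensional Hamiltonian in $\delta$. A minimal finite-difference version of such a solver is sketched below (our own illustration in atomic units; the potential array passed in would stand in for the VMC-derived $\text{V}_\delta$, which we do not reproduce here):

```python
import numpy as np

M_PROTON = 1836.15  # proton mass in units of the electron mass (a.u.)

def delta_mode_levels(grid, v_delta, mass=M_PROTON, n_levels=5):
    """Lowest eigenvalues and ground state of H = p^2/(2m) + V(delta),
    discretised with a three-point finite-difference Laplacian and
    hard-wall (Dirichlet) boundaries at the grid edges."""
    h = grid[1] - grid[0]
    n = grid.size
    kinetic = (np.diag(np.full(n, 2.0))
               - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / (2.0 * mass * h * h)
    evals, evecs = np.linalg.eigh(kinetic + np.diag(v_delta))
    return evals[:n_levels], evecs[:, 0]
```

The ZPE is the lowest eigenvalue, and the uni-modal or bimodal character of the proton distribution can be read off directly from the returned ground-state vector.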
In the short Zundel, $\text{V}_\delta$ is a quadratic potential with a single minimum at the core center, which widens as $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ gets close to $\text{d}_\text{symm} \simeq 2.38 \text{\AA}$, a distance where it becomes quartic because its curvature changes sign. The ground state energy, i.e. the zero point energy (ZPE) of the shuttling mode, decreases as the potential widens, as reported in Fig.~\ref{figure:ZPE}(d). In the elongated Zundel, a central barrier starts to develop, with a ground-state proton distribution that stays uni-modal thanks to a ZPE larger than its height, until $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \simeq 2.5$ \AA, where the ZPE equals the barrier height. In this regime, for $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \in [\text{d}_\text{symm},2.5 \text{\AA}]$, the ZPE is particularly small, due to the quartic nature of $\text{V}_\delta$, and weakly $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$-dependent, as shown in Fig.~\ref{figure:ZPE}(d). Finally, for $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} > 2.5$ \AA, we enter the distorted-Eigen regime, with an even larger central barrier ($> 1000$ K), such that the quantum proton is instantaneously localised in one of the two wells, and its distribution is then bimodal. The ZPE starts to rise again as $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ is stretched, with a slope that is steeper in absolute value than the ZPE decrease in the short Zundel, because it is now set by the much deeper lateral minima of the double-well potential. This can be seen again in Fig.~\ref{figure:ZPE}(d). 
We can now correct the classical $\text{O}_1$-$\text{O}_2$ potential, defined as $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \equiv V_\textrm{2D}(\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2},\delta)|_{\delta=\delta_\text{min}}$, where $\delta_\text{min}$ is the $V_\textrm{2D}$ minimum at fixed $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ value, by adding, for every $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$, the ZPE obtained from the quantisation of the shuttling mode $\delta$. The resulting potential is plotted in Fig.~\ref{figure:ZPE}(e). Remarkably, the anharmonic classical $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ potential becomes harmonic after the ZPE correction. This is a consequence of the much larger ZPE in the distorted-Eigen configurations than in the short Zundel, which compensates for the underlying $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ anharmonicity. This rationalises two main features. On the one hand, it explains the very low thermal expansion of $\langle \textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \rangle$, since the average position in a harmonic potential is temperature-independent. On the other hand, it proves that NQEs enhance the occurrence of short-Zundel configurations upon heating, while the distorted Eigen is penalised by its large ZPE with respect to the classical counterpart. Above RT, the distorted-Eigen configurations will eventually become dominant again. This can be understood within this framework as well. Indeed, thermal excitations are energetically more available in the distorted Eigen, where the spacing between the ZPE and the first-excited state shrinks, and higher excited states are piled up more densely than in the short and elongated Zundel (see Fig.~\ref{figure:ZPE}(a-c)). 
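The compensation mechanism can be made explicit with a deliberately simple toy model: a classical well softened by a cubic term, plus a ZPE whose steep rise on the stretched side cancels that term exactly. All functional forms and numbers below are invented for illustration only; they mimic the qualitative behaviour of the ZPE correction, not the actual VMC data.

```python
import numpy as np

# Toy classical O1-O2 potential (energies in kelvin, distances in angstrom):
# a harmonic well softened by a cubic term on the stretched side.
A2, A3, D0 = 40000.0, -30000.0, 2.45          # invented parameters
v_classical = lambda d: A2 * (d - D0) ** 2 + A3 * (d - D0) ** 3

# Toy shuttling-mode ZPE: roughly flat near the minimum and rising
# steeply at large d, chosen here to cancel the cubic anharmonicity.
zpe = lambda d: 600.0 - A3 * (d - D0) ** 3

d = np.linspace(2.30, 2.60, 121)
v_quantum = v_classical(d) + zpe(d)           # ZPE-corrected potential

# Cubic fits expose the anharmonic content before and after correction.
c3_classical = np.polyfit(d - D0, v_classical(d), 3)[0]
c3_quantum = np.polyfit(d - D0, v_quantum, 3)[0]
```

In this toy setting the corrected potential is exactly quadratic (up to a constant), so its fitted cubic coefficient vanishes while the classical one does not, which is the essence of the harmonisation seen in Fig.~\ref{figure:ZPE}(e).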
\paragraph*{Optimal proton transfer from instantons statistics} The analysis made so far highlights the paramount importance of the NQEs in setting the non-trivial temperature behaviour of the H$_{13}$O$_6^+$ cluster. At this stage, direct information about the excess proton dynamics along the QMC-PIMD trajectory is necessary to estimate more quantitatively its impact on the PT processes occurring in the system. One way to achieve this goal is by analysing the statistics of selected transition-state (TS) configurations, defined by means of instanton theory. Within the PI formalism, the instanton path seamlessly connects the reactant and product minima, along the minimal action trajectory, periodic in the quantum imaginary time $\tau = \beta \hbar$\cite{Richardson2009}. It provides a generalisation of TS theory for anharmonic quantum systems\cite{Vanshten1982}, and it has been very recently applied in a QMC framework\cite{Jiang2017,Mazzola2017}, efficiently recovering the proper scaling of ground-state tunneling rates. TS configurations are therefore identified as those, sampled during the QMC-PIMD dynamics, where each half of the instanton path is located on either side of the central O$_1$O$_2$ midpoint. With the aim of resolving the contribution of the three different regimes to the PT dynamics, we collect the instanton events and compute their statistical distribution as a function of $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$. We plot the instanton density distribution function in Fig.~\ref{figure:instantons}(a) at various temperatures. To deepen our analysis, we also compute the cumulative density distribution function in Fig.~\ref{figure:instantons}(b), after normalising it based on the algorithmic frequency of the instanton occurrences, as counted during our QMC-PIMD simulations. 
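In practice, this TS criterion can be applied bead-wise along the trajectory. The sketch below is our own simplified reading of the criterion (array shapes and names are hypothetical): it flags ring-polymer configurations whose two path halves sit on opposite sides of the midpoint, and collects the corresponding $\text{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ values for histogramming.

```python
import numpy as np

def is_instanton(delta_beads):
    """True if the two halves of the ring polymer straddle the O1-O2
    midpoint (delta = 0): a simplified transition-state criterion."""
    m = delta_beads.size // 2
    return delta_beads[:m].mean() * delta_beads[m:].mean() < 0.0

def instanton_d_oo(delta_traj, d_oo_traj):
    """d_O1O2 values of the flagged TS events along a PIMD trajectory.

    delta_traj: (n_steps, M) per-bead proton-sharing coordinates.
    d_oo_traj:  (n_steps,) O1-O2 distances of the same configurations.
    """
    mask = np.fromiter((is_instanton(db) for db in delta_traj), dtype=bool)
    return np.asarray(d_oo_traj)[mask]
```

Histogramming the returned distances per temperature then yields distributions of the kind shown in Fig.~\ref{figure:instantons}(a).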
Although this does not give direct access to real-time quantities, ring-polymer MD with a Langevin thermostat has been shown to yield physically reliable information on frequencies and frequency variations\cite{hele2016}. Note that the coupling with the Langevin thermostat is kept constant across the full temperature range analysed here\cite{hele2016}. The fully integrated frequency distribution gives the total proton hopping frequency, plotted in Fig.~\ref{figure:instantons}(c) as a function of temperature. This shows a clear maximum located in the $[250,300]$ K temperature range. Consequently, we expect the hydrated proton mobility to be optimal in a near-RT window, with a maximised Grotthuss diffusion. To understand the source of this temperature ``sweet spot'', in the same panel (c) we plot the contribution to the total frequency of instanton events occurring in the short-Zundel region. This is obtained from the cumulative frequency distribution of panel (b) evaluated at the boundary between the short and the elongated Zundel, i.e. at $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} = \textrm{d}_\textrm{symm}$. The short-Zundel contribution to the total frequency shows a peak of the same intensity as the total one in the same temperature range, clearly pointing to the key role played by thermally activated short-Zundel configurations in the PT dynamics. The short-Zundel arrangement enables instantaneous proton jumps between the two sides of the cation, since there is no barrier to cross. Thus, the ``sweet spot'' constitutes the best compromise between acquiring enough thermal energy to access short-distance configurations, boosted by NQEs, and controlling the amplitude of the chemical (covalent or H-) bond fluctuations, which might trap the proton into an asymmetric well. 
Indeed, at larger temperatures ($> 300$ K), the onset of distorted-Eigen and the corresponding fall of short-Zundel configurations localise the hydrated proton around its closest oxygen atom, thus reducing its shuttling probability. Besides this PT mechanism, which is adiabatic in nature and driven by the synergy of ZPE and thermal effects, NQEs could also contribute to the proton diffusion by means of instantaneous tunneling, which can further accelerate the PT dynamics. By computing the root-mean-square (RMS) displacement correlation functions\cite{Chandler1994} over the instanton population, we verified that tunneling events could take place only in the distorted-Eigen regime and in the intermediate temperature range (see Sec.~S6 of the SI for a detailed analysis). This additional PT channel has, however, a marginal effect with respect to the main mechanism unveiled here. Indeed, Fig.~\ref{figure:instantons}(c) shows that the ``sweet spot'' is mainly due to PT events originating in short-Zundel configurations, where quantum tunneling is not relevant. \section{Conclusions} \label{conclusions} Using unprecedentedly accurate QMC-PIMD simulations of the H$_{13}$O$_6^+$ cation at finite temperature, we found a remarkably low thermal expansion of the protonated water hexamer core. It stems from the cooperative action of both NQEs and thermal effects, which leads to the emergent behaviour of the short-Zundel species as a PT booster, where the excess proton is perfectly shared between two neighbouring water molecules. The relevance of short-Zundel configurations is enhanced by NQEs, which instead penalise the distorted-Eigen states, having a larger ZPE. In the intermediate temperature range, comprising RT, the occurrence of short-Zundel events is maximised by thermal population, leading to a ``sweet spot'' in the PT dynamics. Around these temperatures, distorted-Eigen states can still contribute to PT with quantum tunneling processes, although occurring at much lower rates. 
The cluster core spreads out again at larger temperatures, as soon as stronger thermal fluctuations favour the formation of more classical distorted-Eigen structures, where the proton gets strongly localised on one of the flanking molecules. The short-Zundel quantum species is crucial for an efficient proton diffusion, as the shortness of its structure enables a fast charge redistribution during the adiabatic PT process. Very recent progress in ultrafast broadband two-dimensional (2D) IR spectroscopy\cite{Dahms2017,Fournier2018} has made it possible to probe the vibrational properties of protonated water at vibrational frequencies around the hydrated proton stretching mode, by measuring the lowest-lying excitations in the mid-infrared continuum\cite{Fournier2018}. These state-of-the-art experiments revealed a strongly inhomogeneous behaviour of the pump-probe spectra, implying large structural distributions in proton asymmetry and $\text{O}_1\text{O}_2$ distance. Therefore, the traditional ``Zundel limit''\cite{Zundel1968} needs to be revisited and extended, in order to cover the broad range of structures detected experimentally\cite{daly2017decomposition,carpenter2020}. In particular, the occurrence of qualitatively different short hydrogen-bond configurations, straightforwardly connected with the short-Zundel species described here, has been detected and highlighted through femtosecond 2D IR spectroscopy in a recent experiment on the fully solvated (HF$_2$)$^-$(H$_2$O)$_6$ complex\cite{dereka2021}. The present work crucially extends those findings by providing a temperature-resolved analysis of the short hydrogen-bond events and by revealing their fundamental relation with the PT dynamics. While proton transfer and proton transport occur in a variety of environments, from solutions to membrane proteins and fuel-cell membranes, the protonated water hexamer is the smallest cluster to incorporate most of the PT experimental features and solvation effects at the leading order. 
Our findings thus call for further efforts to explore the temperature behaviour of the proton dynamics and transport both in aqueous systems and in other environments, more specifically for biological systems around life conditions. \section{Methods} \label{methods} \subsection{Zero-temperature electronic structure calculations} Before running finite-temperature calculations, we build a quantum Monte Carlo (QMC) variational wave function. The molecular dynamics (MD) then evolves on the potential energy surface (PES) generated by the variational energy of this wave function. All zero- and finite-temperature calculations have been carried out with the TurboRVB code\cite{nakano2020}. We choose the variational ansatz such that chemical accuracy in the binding energies of the Zundel ion and of the water dimer is reached. We highlight the fact that benchmarking the binding energy is a much stricter test than taking energy differences between geometries around the minimum, because the configurations involved in the binding energy are very different. The variational wave function $| \Psi_{\mathbf{q}} \rangle$ used in our work is written as a Jastrow Antisymmetrised Geminal Power (JAGP) product\cite{Casula2004} \begin{equation} \Psi_{\mathbf{q}}(\mathbf{x}_1,\dots,\mathbf{x}_{N_\mathrm{el}}) = J_{\mathbf{q}}(\mathbf{r}_1,\dots,\mathbf{r}_{N_\mathrm{el}}) \Psi_{AGP,\mathbf{q}}(\mathbf{x}_1,\dots,\mathbf{x}_{N_\mathrm{el}}). \end{equation} The set $\left\{ \mathbf{x}_i = (\mathbf{r}_i,\sigma_i) \right\}_{i=1,\dots,N_\mathrm{el}}$ represents the spatial and spin coordinates of the $N_\mathrm{el}$ electrons. The JAGP wave function has been described extensively elsewhere\cite{Casula2005,Sorella2007,michele_agp2,Dagrada2014}. Here, we report its main ingredients. The bosonic Jastrow factor is written as a product of one-body, two-body and three/four-body terms $J_{\mathbf{q}} = J_{1,\mathbf{q}}J_{2,\mathbf{q}}J_{3,\mathbf{q}}$. 
The one-body term reads \begin{eqnarray} J_{1,\mathbf{q}}& = \exp \left ( - \sum\limits_i^{N_\mathrm{el}}\sum\limits_j^{N}(2Z_j)^{3/4}u\left((2Z_j)^{1/4}|\mathbf{r}_i - \mathbf{q}_j |\right) \right ) \label{1body} \end{eqnarray} with $u(|\mathbf{r}-\mathbf{q}|) = \frac{1-e^{-b|\mathbf{r}-\mathbf{q}|}}{2b}$, where $b$ is a variational parameter; this form satisfies the electron-ion Kato cusp conditions. $N$ is the number of atoms and $Z_j$ the electric charge of the $j$-th atom. In the protonated hexamer Hamiltonian, the hydrogen atoms keep the bare Coulomb potential, while the oxygen atoms are described by the Burkatzki-Filippi-Dolg (BFD) pseudopotential\cite{filippi_pseudo}, which is smooth at the electron-ion coalescence points. Thus, $J_{1,\mathbf{q}}$ is applied only to the hydrogen atoms. Electron-electron correlations are dealt with by the higher-order Jastrow factors. The two-body Jastrow factor is defined as \begin{equation} J_{2,\mathbf{q}} = \exp \left ( \sum\limits_{i < j}^{N_\mathrm{el}}u(|\mathbf{r}_i-\mathbf{r}_j|) \right ), \label{2body} \end{equation} where $u$ is a function of the same form as in Eq.~(\ref{1body}), but with a different variational parameter. The three/four-body Jastrow factor is \begin{equation} J_{3,\mathbf{q}} = \exp \left( \sum\limits_{i<j}^{N_\mathrm{el}} \Phi_{J_{\mathbf{q}}}(\mathbf{r}_i,\mathbf{r}_{j}) \right), \end{equation} with \begin{equation} \Phi_{J_{\mathbf{q}}}(\mathbf{r}_i,\mathbf{r}_j) = \sum\limits_{a,b}^{N}\sum\limits_{\mu,\nu}^{N^J_\mathrm{basis}}g_{\mu,\nu}^{a,b}\Psi_{a,\mu}^{J}(\mathbf{r}_i-\mathbf{q}_a)\Psi_{b,\nu}^{J}(\mathbf{r}_j-\mathbf{q}_b), \label{jastrow_pairing} \end{equation} where $N^J_\mathrm{basis}$ is the number of basis set functions $\Psi_{a,\mu}^{J}$. We used optimally contracted geminal embedded orbitals (GEOs)\cite{Sorella2015} as basis set, expanded over a primitive O(3s,2p,1d) H(2s,1p) Gaussian basis. 
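The kernel $u$ entering the one- and two-body Jastrow factors has a simple closed form whose short-range slope fixes the cusp and whose long-range limit saturates. A quick standalone numerical check (our own sketch, not TurboRVB code):

```python
import numpy as np

def u_jastrow(r, b):
    """Jastrow kernel u(r) = (1 - exp(-b r)) / (2 b): slope 1/2 at the
    coalescence point r = 0 (cusp condition), saturating to 1/(2b)."""
    r = np.asarray(r, float)
    return (1.0 - np.exp(-b * r)) / (2.0 * b)
```

The value of $b$ controls only how fast the kernel saturates, while the cusp slope $u'(0)=1/2$ is independent of $b$, which is what makes it a safe variational parameter.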
The convergence study of the water dimer binding energy as a function of the Jastrow GEO expansion is reported in Sec.~S1.1 of the SI. The fermionic part of the wave function is expressed as an antisymmetrised product of the spin-singlet geminal or pairing (AGP) functions $\Phi_{\mathbf{q}}(\mathbf{x}_i,\mathbf{x}_j)$: \begin{equation} \Psi_{AGP,\mathbf{q}}(\mathbf{x}_1,\dots,\mathbf{x}_{N_\mathrm{el}}) = \hat{A}\left[\Phi_{\mathbf{q}}(\mathbf{x}_1,\mathbf{x}_2),\dots,\Phi_{\mathbf{q}}(\mathbf{x}_{N_\mathrm{el}-1},\mathbf{x}_{N_\mathrm{el}})\right]. \end{equation} The spatial part $\phi_{\mathbf{q}}(\mathbf{r}_i,\mathbf{r}_j)$ of the spin singlets $\Phi_{\mathbf{q}}(\mathbf{x}_i,\mathbf{x}_j)$ is expanded over $N^{AGP}_\mathrm{basis}$ optimally contracted GEOs, such that \begin{equation} \phi_{\mathbf{q}}(\mathbf{r}_i,\mathbf{r}_j) = \sum\limits_{a,b}^{N}\sum\limits_{\mu,\nu}^{N^{AGP}_\mathrm{basis}}\lambda_{\mu,\nu}^{a,b}\bar{\Psi}_{a,\mu}^{AGP}(\mathbf{r}_i-\mathbf{q}_a)\bar{\Psi}_{b,\nu}^{AGP}(\mathbf{r}_j-\mathbf{q}_b). \label{agp} \end{equation} The AGP GEOs are linear combinations of primitive O(5s5p2d) H(4s2p) Gaussian basis functions. We highlight that the fermionic part is fully optimised at the QMC level for each MD step, and not generated by on-the-fly DFT calculations. The GEOs are very effective in reducing the total number of variational parameters, while keeping a high level of accuracy. This makes the wave function optimisation\cite{Sorella2001,Casula2004,Umrigar2007} much more efficient. Dealing with a compact wave function is very important if one wants to use it as a variational ansatz in an MD simulation, because in an MD framework the wave function needs to be optimised at every MD iteration. Indeed, the wave function optimisation is by far the most time-consuming step of our QMC-driven MD approach. 
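For an unpolarised system ($N_\uparrow = N_\downarrow$), the antisymmetrised product of singlet geminals reduces to a single determinant of the pairing function evaluated on all up-down electron pairs. A toy sketch of this reduction (one-dimensional ``electrons'' and an invented geminal, for illustration only):

```python
import numpy as np

def agp_value(geminal, r_up, r_down):
    """Psi_AGP = det[ phi(r_i^up, r_j^down) ] for N_up = N_down
    electrons: the determinant carries the antisymmetry."""
    mat = np.array([[geminal(ri, rj) for rj in r_down] for ri in r_up])
    return np.linalg.det(mat)
```

Exchanging two same-spin coordinates swaps two rows (or columns) of the matrix, so the value changes sign, as required for fermions.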
Previous works on the Zundel ion\cite{Dagrada2014,Mouhat2017} found that the optimal balance between accuracy and computational cost for the determinantal part is reached by the O$[8]$H$[2]$ contracted GEO basis, in self-explanatory notation. As the protonated water hexamer is a very similar system, in this work we used the same O$[8]$H$[2]$ GEO contraction for the AGP part. Moreover, we further simplified the variational wave function previously developed for the Zundel ion, by contracting also the Jastrow basis set, using the same GEO embedding scheme. We found that the O$[6]$H$[2]$ GEO basis set for the Jastrow factor is a very good compromise between accuracy and number of parameters, as we checked in the water dimer (see Sec.~S1.1 of the SI). The final accuracy is about 1 kcal/mol in the dissociation energy of the water dimer, and it is expected to be even higher around the stable geometries of water clusters. Thus, we used the O$[6]$H$[2]$ GEO basis set for the Jastrow factor, and the O$[8]$H$[2]$ GEO basis for the AGP part in all our subsequent MD simulations. For the protonated water hexamer, this results in a total of 6418 variational parameters, comprising $g_{\mu,\nu}^{a,b}$ in Eq.~\ref{jastrow_pairing}, $\lambda_{\mu,\nu}^{a,b}$ in Eq.~\ref{agp}, the parameters of the homogeneous one-body (Eq.~\ref{1body}) and two-body (Eq.~\ref{2body}) Jastrow factors, and the linear coefficients of the Jastrow and determinantal basis sets. \subsection{Finite-temperature calculations} \subsubsection{Path Integral Langevin Dynamics} At finite temperature, we carried out path integral Langevin dynamics simulations to include NQEs. To do so, we used the recently developed algorithm published in Ref.~\cite{Mouhat2017}, which combines a path integral approach with very accurate Born-Oppenheimer (BO) forces computed by QMC. 
It is an efficient approach, alternative to the coupled electron-ion Monte Carlo (CEIMC) method developed by Ceperley and coworkers\cite{morales2010,liberatore2011,rillo2018}. The intrinsic noise of the QMC force estimator is treated by the noise correction scheme developed in Refs.~\cite{Attaccalite2008,Mazzola2014,Mouhat2017}, which is based on the fulfilment of the fluctuation-dissipation theorem. This implies that the friction matrix $\gamma$ governing the damped dynamics is related to the random force ${\boldsymbol \eta}$ via the $\alpha$ matrix: \begin{eqnarray} \alpha \left( {\mathbf{q}} \right) &=& 2 k_B T\gamma \left( {\mathbf{q}} \right), \label{fluctuation_dissipation}\\ \left\langle {{\eta _i}\left( t \right){\eta _j}\left( {t'} \right)} \right\rangle &=& {\alpha _{i,j}}\left( {\mathbf{q}} \right)\delta \left( {t - t'} \right), \label{stochastic_force} \end{eqnarray} with $\mathbf{q}$ the vector of nuclear coordinates. The $\mathbf{q}$-dependence comes from the QMC noise correction, implemented through the relations: \begin{eqnarray} \alpha \left( {\mathbf{q}} \right) & = & 2 k_B T \gamma_{BO} + {\Delta _0}{\alpha ^{{\text{QMC}}}}\left( {\mathbf{q}} \right) \label{alpha_matrix} \\ \alpha _{i,j}^{^{{\text{QMC}}}}\left( {\mathbf{q}} \right) & = & \left\langle \left( {{\mathbf{f}_i}\left( {\mathbf{q}} \right) - \left\langle {{\mathbf{f}_i}\left( {\mathbf{q}} \right)} \right\rangle } \right) \left( {{\mathbf{f}_j}\left( {\mathbf{q}} \right) - \left\langle {{\mathbf{f}_j}\left( {\mathbf{q}} \right)} \right\rangle } \right) \right\rangle, \label{qmc_force_cov} \end{eqnarray} where $\alpha{^{{\text{QMC}}}}$ in Eq.~\ref{qmc_force_cov} is the covariance matrix of the QMC ionic forces, measuring their stochastic fluctuations, and $\gamma_{BO}$ and $\Delta_0$ are parameters tuned for an optimal Langevin dynamics. 
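The force covariance matrix $\alpha^{\text{QMC}}$ is straightforward to estimate from the Monte Carlo force samples; a minimal estimator, written here as our own illustration with flattened force components, reads:

```python
import numpy as np

def force_covariance(force_samples):
    """alpha^QMC_ij = <(f_i - <f_i>)(f_j - <f_j>)>, estimated over QMC
    samples; force_samples has shape (n_samples, 3N)."""
    f = np.asarray(force_samples, float)
    df = f - f.mean(axis=0)          # fluctuations about the mean force
    return df.T @ df / f.shape[0]
```

By construction this matrix is symmetric and positive semi-definite, which is what allows it to be absorbed into a Langevin friction while preserving the fluctuation-dissipation balance.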
The random force ${\boldsymbol \eta}$ is then used to thermalise the system to a target temperature, according to the Langevin thermostat of Eq.~\ref{fluctuation_dissipation}. The quantum particles are described by necklaces extended in the imaginary time interval $[0,\hbar \beta]$, with $\beta=1/(k_B T)$, following the quantum-to-classical isomorphism. This imaginary time interval is divided into $M$ slices, the so-called ``beads'', leading to an effective classical system of $NM$ particles. The path integral Langevin dynamics we developed in Ref.~\cite{Mouhat2017} is very efficient, because the quantum harmonic forces and the Langevin thermostat (thermalising the quantum degrees of freedom) are evolved together by means of an exact propagator, without Trotter breakup. The evolution of quantum particles in a thermal bath \`a la Langevin represents a \emph{quantum} Ornstein-Uhlenbeck dynamics. Therefore, we dubbed our integration scheme ``path integral Ornstein-Uhlenbeck dynamics (PIOUD)''. The algorithm is detailed in Ref.~\cite{Mouhat2017}. In order to evolve the system and to sample the thermal quantum partition function, nuclear forces must be estimated at each iteration by computing the gradients $\mathbf{f}=-\boldsymbol{\nabla}_{\mathbf{q}}V(\mathbf{q})$. The potential energy surface $V(\mathbf{q})$ is evaluated by VMC, namely: \begin{equation} V(\mathbf{q}) = \frac{\langle \Psi_{\mathbf{q}}|H(\mathbf{q})|\Psi_{\mathbf{q}}\rangle}{\langle \Psi_{\mathbf{q}}|\Psi_{\mathbf{q}}\rangle}, \label{V_wave_function} \end{equation} where $|\Psi_{\mathbf{q}}\rangle$ is the QMC wave function, which minimises the expectation value of $H(\mathbf{q})$ for each bead configuration $\mathbf{q}^{(k)}$, according to the Born-Oppenheimer approximation. The electronic variational wave function depends on the coordinates $\mathbf{q}^{(k)}$ of the $k$-th bead in two ways. 
Directly, through the explicit dependence on the ion positions provided by the localised basis set; and indirectly, through the wave function parameters optimised at each $\mathbf{q}^{(k)}$, in compliance with the Born-Oppenheimer approximation. While the former dependence is of leading order, the latter can be neglected in a first approximation. A clever way to exploit this is to average the optimal parameters across different beads, thereby gaining a significant amount of statistics in the QMC energy minimisation. More precisely, the beads are gathered in groups of $N_\textrm{groups}$ members each, sharing the same set of wave function parameters. This is the so-called ``bead grouping approximation'', introduced in Ref.~\cite{Mouhat2017}. Once the number of bead groups $N_\textrm{groups}$ is set, the electronic wave function is optimised at each new ionic position generated by the dynamics. This is done by energy minimisation via the most advanced optimisation techniques\cite{Sorella2007,Umrigar2007}. Between two consecutive steps of ion dynamics, one needs to perform $N_\textrm{opt}$ steps of energy minimisation, in an iterative fashion. $N_\textrm{opt}$ must be large enough to converge the wave function for each new ionic configuration. Thus, this parameter is tuned such that the BO approximation is fulfilled, and the dynamics follows the correct PES along the PIOUD trajectories. During the dynamics, the GTO exponents in both the Jastrow and the AGP part of the wave function are kept frozen to make the simulation stable. Due to the continuity of the nuclear trajectories, the number of energy minimisation steps is significantly smaller than the one required for a wave function optimisation from scratch. There is a set of sensitive parameters one needs to tune to have stable and unbiased simulations. They are: \begin{itemize} \item[(i)] Convergence of quantum effects, set by $M$. 
In our calculations, we used $M=32$ for $T=300-400$~K and $M=64$ for $T=200$~K. The bead grouping approximation is made with $N_\textrm{groups}=1$. Therefore, the whole ring shares the same wave function parameters, except for the nuclear coordinates; \item[(ii)] Time step $\delta t$ for the integration of the equations of motion. We used $\delta t = 1$ fs for a controlled time integration error, yielding a difference between the virial and primitive estimators of the quantum kinetic energy below a $25$ mHa threshold along all the trajectories; \item[(iii)] A stable target temperature is reached despite the QMC noise by setting the parameters $\gamma_{BO}$ and $\Delta_0$, which define the $\alpha$ matrix in Eq.~\ref{alpha_matrix}. We used $\gamma_{BO}=0$ and $\Delta_0 = 0.5 ~ \delta t$, optimal values for other similar systems, such as the Zundel ion\cite{Mouhat2017}. Indeed, the simulation is efficient when the damping in the BO sector is minimised. The thermalisation of the system is guaranteed thanks to the optimal damping condition\cite{Ceriotti2010} applied to the internal modes of the ring polymer, with the damping parameter of the center-of-mass translational modes set to $0.231$ fs$^{-1}$; \item[(iv)] To enforce the fulfilment of the BO approximation, we allowed for $N_\textrm{opt}=5$ iterative wave function optimisation steps at each MD iteration, such that the electronic energy minimum is reached within the statistical accuracy for every ionic configuration, and the PES is sampled without stochastic bias. \end{itemize} \subsubsection{Classical Langevin Dynamics} For classical MD calculations, we used an improved variant of the original algorithm developed by Attaccalite and Sorella\cite{Attaccalite2008}. This variant has been detailed in Refs.~\cite{Mazzola2014,Mouhat2017}. It includes a better integration scheme, involving a Langevin noise acting on both coordinates and momenta, which are therefore correlated. 
As in the PIOUD calculations, the relevant parameters are the time step $\delta t = 1$ fs, the QMC noise correction parameters $\gamma_{BO} =0.2$ fs$^{-1}$ and $\Delta_0 = \delta t$, and the number of QMC optimisation steps per nuclear iteration $N_\textrm{opt}=5$. \section*{Acknowledgments} {\small FM, RV, AMS and MC acknowledge Dominik Marx for useful discussions. MP, RV, AMS and MC thank the CNRS for the 80PRIME doctoral grant allocation. FM and MC acknowledge computational resources provided by the PRACE projects number 2016133322 and 2017153936. MC thanks GENCI for providing computational resources at TGCC and IDRIS under the grant number 0906493, and the Grands Challenges DARI for allowing calculations on the Joliot-Curie Rome HPC cluster under the project number gch0420. TM and MC are grateful to the European Centre of Excellence in Exascale Computing TREX-Targeting Real Chemical Accuracy at the Exascale, which partially supported this work. This project has received funding from the European Union's Horizon 2020 Research and Innovation program under Grant Agreement No. 952165. } \clearpage \begin{figure} \begin{tabular}{c|c|c} \includegraphics[width=0.33\linewidth]{short_zundel_new.png} & \includegraphics[width=0.33\linewidth]{quantum_zundel_label.png} & \includegraphics[width=0.33\linewidth]{distorted_eigen_new.png} \\ short Zundel & elongated Zundel & distorted Eigen \end{tabular} \caption{\label{fig:config} Different regimes of the protonated water hexamer H$_{13}$O$_{6}^+$. Left panel: short-Zundel configuration with a Zundel center (H$_{5}$O$_{2}^+$) in colors and its first solvation shell (4 H$_{2}$O) in gray shades. Central panel: elongated Zundel with the quantum nature of hydrogen atoms highlighted by the full representation of its imaginary-time positions in a PI configuration. Right panel: distorted-Eigen configuration with an Eigen cation (H$_{9}$O$_{4}^+$) in colors accompanied by two solvating water molecules (2 H$_{2}$O) in gray shades. 
The O$_1$, O$_2$ and H$^+$ labels are used throughout the paper to refer to the corresponding atoms, as indicated here.} \end{figure} \begin{figure}[!htp] \centering \includegraphics[width=1.0\textwidth]{figure2.pdf} \caption {\label{figure:radial} Classical (panel a)) and quantum (panel b)) oxygen-oxygen g$_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ pair correlation functions of the H$_{13}$O$_{6}^+$ ion as a function of temperature. The dashed vertical lines indicate the average $\langle \textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \rangle $ distance for each simulation, at the corresponding temperature. The dotted vertical line is located at the classical equilibrium geometry. Panel c) shows the T-dependence of the $\langle \textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} \rangle $ average distance. The classical equilibrium geometry is represented by a short-dashed horizontal black line. At 250 K and 300 K the oxygen-oxygen distance is \emph{shortened} by NQEs with respect to the classical counterpart.} \end{figure} \begin{figure}[!htp] \centering \includegraphics[width=1.0\textwidth]{fig3.pdf} \caption {\label{figure:2Dgrrnucvst} Difference between bidimensional oxygen-oxygen/oxygen-proton distributions $\rho_{2D}$ obtained by QMC-driven LD simulations for classical (left panels) and quantum (right panels) particles, computed at different temperatures. The bidimensional distribution computed at 250 K is taken as reference. Positive (negative) regions are in red (blue) color. The black filled circles correspond to the zero-temperature equilibrium geometries of the H$_{13}$O$_{6}^+$ ion at a fixed $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ distance. 
The coloured background highlights the three different regimes explained in the paper: the short Zundel (gray), the elongated Zundel (yellow), and the distorted Eigen (green) species.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{figure5_new2.pdf} \caption{NQEs on the shuttling mode, and their impact on the $\textrm{O}_1$-$\textrm{O}_2$ interatomic potential $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$. We quantize the proton shuttling mode $\delta$, defined as the displacement along the segment connecting the two oxygen atoms in the core of the cluster from its mid-point position. We study the ground-state wave function and the first 5 eigenvalues for the confining potential $V_\delta$, as a function of $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$. Panels a), b) and c) report the ground state wave function and the lowest 5 energy levels for $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$= 2.375, 2.495 and 2.585 \AA, respectively. In panel d), the variation of the zero-point (ground-state) energy (ZPE) as a function of $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ is explicitly plotted. While the ZPE dependence is very flat in the elongated-Zundel region (depicted by the yellow shaded area), the ZPE increases in both short-Zundel (gray shaded area) and distorted-Eigen (green shaded area) regions, with a much steeper slope in the latter. In panel e), the ZPE is added to the classical interatomic potential $V_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ (solid blue line) to yield the quantum-corrected effective interatomic potential (solid dark-pink line) between the two inner oxygen atoms. } \label{figure:ZPE} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{figure4.pdf} \caption{a) Instanton distribution resolved as a function of the $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2}$ distance for different temperatures. 
b) Cumulative distribution of a) normalised by the occurrence frequency of the instanton (proton hopping) events during the PIMD simulations. c) Proton hopping frequency as a function of temperature, together with the contribution coming from the short Zundel configurations, with $\textrm{d}_{\text{\scriptsize O}_1\text{\scriptsize O}_2} < \textrm{d}_\textrm{symm} = 2.38$ \AA. The $\textrm{d}_\textrm{symm} $ value is reported as vertical dashed line in panels a) and b). Here, we report simulations performed also at 400 K, a temperature at which the cluster is still stable or meta-stable. } \label{figure:instantons} \end{figure} \clearpage \setcounter{figure}{0} \setcounter{page}{1} \setcounter{section}{0} \setcounter{table}{0} \renewcommand{\thepage}{S\arabic{page}} \renewcommand{\thesection}{S\arabic{section}} \renewcommand{\thetable}{S\arabic{table}} \renewcommand{\thefigure}{S\arabic{figure}} \include{SI} \setcounter{page}{1} \renewcommand{\thepage}{R\arabic{page}}
\section{Introduction} Graphs are ubiquitous structures that are used, among other things, for modelling \emph{conflicts} between pairs of elements. Such conflicts arise naturally in resource allocation. An independent set in a graph corresponds to a subset of non-conflicting elements, while a vertex coloring of the graph implies a \emph{schedule} of the elements in groups of non-conflicting sets. Such pairwise conflicts are, however, only the simplest form of \emph{constraints}. An example of a more general constraint on resource usage: ``at most two out of these three elements can be active simultaneously''. Such constraints are captured with \emph{hypergraphs}, whose hyperedges correspond to not-all-active-simultaneously constraints. The concepts of independent sets and colorings carry over to hypergraphs as well. The downside of this generalization is that hypergraphs have proven to be much less amenable to efficient or effective solutions. They are also harder to reason about, with less powerful theoretic tools available. This paper proposes a way to finesse the hardness of working with hypergraphs, by reducing them to graphs. We form a \emph{sketch} of a given hypergraph that conservatively captures the essential constraints. The sketch is an ordinary graph with the property that the solution of an optimization problem on the graph is also a valid solution in the hypergraph. Necessarily, the other direction need not hold exactly, but the big question is how much of a loss in precision is sacrificed by sketching. The obvious benefit of sketching is that the rich theory of graph algorithmics can be brought to bear, with commensurate conceptual simplifications. The objects of study in this work are certain geometrically defined hypergraphs that capture interference in wireless systems. Our main result is that they can be sketched at a low cost. This implies major improvements for a large family of such scheduling problems. 
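To make the constraint language concrete, the ``at most two out of these three elements can be active simultaneously'' constraint from above is a single hyperedge, and independence is the absence of any fully contained hyperedge. A minimal sketch (Python; the element names are purely illustrative):

```python
def is_independent(subset, hyperedges):
    """A vertex set is independent iff it contains no hyperedge entirely."""
    s = set(subset)
    return not any(e <= s for e in hyperedges)

# "At most two of a, b, c active simultaneously" is the hyperedge {a, b, c};
# a pairwise conflict between c and d is the ordinary 2-edge {c, d}.
hyperedges = [frozenset("abc"), frozenset("cd")]

print(is_independent("ab", hyperedges))   # True: no hyperedge contained
print(is_independent("abc", hyperedges))  # False: violates {a, b, c}
print(is_independent("cd", hyperedges))   # False: violates the 2-edge {c, d}
```

Pairwise conflicts remain ordinary two-element edges, so graphs are exactly the special case in which every hyperedge has size 2.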
\mypar{Wireless scheduling} The effective use of wireless networks revolves around fully utilizing all available diversity. This can include power control, scheduling, routing, channel assignment and transmission rate control on the communication links. At the heart of this large space of optimization problems are certain fundamental problems, which involve either maximizing throughput within a time frame or minimizing the number of time slots. Consider the following prototypical problem, known as Max Weighted Independent Set of Links (\textsc{Mwisl}): We are given a set of links, each of which is a pair of sender and receiver nodes, and a positive weight associated with each link. Underlying them is a system of constraints that stipulates which subsets of links can be simultaneously active due to the unavoidable \emph{interference} between links. The objective is to find a maximum weight subset of links that can be simultaneously active. To capture interference, the model of choice for analytic studies of wireless systems is the \emph{physical} or SINR (\emph{Signal to Interference and Noise Ratio}) model. Each node is located in a metric space and each active transmission incurs \emph{fractional} interference on every other link, which is a function of the relative positions of the nodes of the two links. A transmission is successful as long as the total interference from the other links does not exceed a given threshold. This model is provably more accurate than binary (or graph-based) models. It is not without its weaknesses in fully capturing the reality of wireless systems, which we will address later in the paper. However, it is arguably the measuring stick with which we compare other models, and forms the basis of more refined models. All the scheduling problems of interest here are NP-hard. Our objective is to give efficient algorithms that provide good performance guarantees. 
When constant-approximations are out of reach, we seek slow-growing functions of the key parameters: $n$, the number of links, and $\Delta$, the diversity in link lengths (i.e., the ratio of the length of the longest link to that of the shortest). A secondary objective is to derive \emph{simple} algorithms based on \emph{local} rules, as such methods are most likely to be applicable or informative in constrained system settings, e.g., distributed ones. Our approach is to produce two graphs, $G_{lo}$ and $G_{hi}$, that \emph{sandwich} the input hypergraph ${\cal H}$ in the following sense: every independent set of $G_{hi}$ is also an independent set of ${\cal H}$, and every independent set of ${\cal H}$ is also an independent set of $G_{lo}$. These graphs belong to a new class that generalizes the intersection graphs of disks, and they share the desirable properties of constant-approximability of (weighted) maximum independent set and graph coloring problems, among others. For instance, to solve the {\textsc{Mwisl}} problem on ${\cal H}$, we simply run a weighted independent set algorithm on $G_{hi}$ and output the solution. The ``price'' of the graph abstraction is given by the difference between the upper and the lower sandwich graphs. Technically, it is bounded by taking an independent set in $G_{lo}$ and considering its chromatic number in $G_{hi}$. This factor is either\footnote{All logarithms in this paper are base-2.} $O(\log^* \Delta)$ or $O(\log\log \Delta)$, depending on the setting. We show that this is actually the best possible price that can be achieved with any conflict graph representation. \subsection{Our Results} We develop a general approximation framework that can tackle nearly all wireless scheduling problems, such as TDMA scheduling, joint routing and scheduling and others. The problems handled can additionally involve path or flow selection, multiple channels and radios, and packet scheduling. 
The approximation factors are \emph{double-logarithmic} (in link and rate diversity) for these problems, exponentially improving on the previously known logarithmic approximations, and, importantly, extending them to incorporate \emph{different fixed data rates and rate control}. Our approach also finesses the task of selecting optimum power settings by using \emph{oblivious} power assignment, one that depends only on the properties of the link itself and not on other links. The performance bounds are however in comparison with the optimum solution that can use arbitrary power settings. In the special case of fixed uniform rates (where all links require the same data rate), our approach yields an even better $O(\log^* \Delta)$-approximation, if we are willing to forego the advantage of oblivious power assignments. We show that this is actually the best possible, not only for our construction, but for any formulation involving conflict graph abstractions. The same holds for the double-logarithmic factor involving non-uniform data rates. \mypar{Assumptions} We make some undemanding assumptions about the settings. We assume that nodes can adjust their transmission power. We assume that the networks are \emph{interference-constrained}, in that interference, rather than the ambient noise, is the determining factor of proper reception. This assumption is common and is particularly natural in settings with rate control, since the impact of noise can always be made negligible by avoiding the highest rates, losing only a small factor in performance. We also assume that nodes are (arbitrarily) located in a doubling metric, which generalizes Euclidean space, allowing the modeling of some of the non-geometric effects seen in practice. We show that all of our assumptions are necessary (to obtain results of the form given here). We have not attempted to minimize the constant factors involved in the analysis. 
\mypar{Paper Organization} We first introduce our sandwiching technique in Sec.~\ref{s:sandwiching} and outline the necessary properties of applicable problems. We then describe in detail (in Sec.~\ref{s:problems}) a large class of scheduling problems and explain why our results apply to them. The conflict graph construction is given in Sec.~\ref{s:conflict}, where we then proceed to bound in general terms the quality of the sandwiching attained. We also derive the key graph-theoretic properties that allow for constant approximability. The most technical material is in Sec.~\ref{s:feas}, where we finally introduce the physical model of interference. The main effort is in showing that independent sets in the conflict graphs correspond to feasible sets of links (as per the hypergraph formulation). This is shown separately for general fixed rates with oblivious power control, and for fixed uniform rates with arbitrary power control. In Sec.~\ref{s:limitations}, we show that our formulations are best possible, both by showing that no better bounds can be achieved with our types of conflict graphs, and by arguing that every conflict graph formulation essentially matches one of our conflict graphs. We also show that our assumptions are all necessary, including power control, metric space, and interference-limited setting. Finally, we provide some context in Sec.~\ref{s:context}, first describing related work that did not fall purely under one of the problems studied (Sec. \ref{s:problems}). We then address the issue of strengths and weaknesses of models of interference. \section{Sandwiching Hypergraphs with Graphs} \label{s:sandwiching} \mypar{Independence systems} A \emph{hypergraph} ${\cal F} = (V,{\cal E})$ consists of a collection ${\cal E}$ of \emph{hyperedges}, which are subsets of a finite set $V$. A graph is a hypergraph with edges only of size 2. 
In our context, the vertices of the hypergraph correspond to \emph{communication links} and the hyperedges encode constraints caused by interference: if a set of concurrently transmitting links contains one of the hyperedges, then some of the transmissions fail. A subset of vertices is \emph{independent} if it contains no hyperedge. The \emph{independence system} ${\cal I}_{\cal F}$ consists of all the independent sets in the hypergraph ${\cal F}$. \mypar{Sandwiching} We seek a pair of graphs: a graph ${G_{hi}}$, which constrains the hypergraph from above, and ${G_{lo}}$, which constrains it from below, satisfying: \[ {\cal I}_{{G_{hi}}} \subseteq {\cal I}_{\cal F} \subseteq {\cal I}_{{G_{lo}}}\ . \] Sandwiching by itself is trivial (using the empty and the complete graph) but we seek graphs with not-too-different independence systems. Specifically, the pair of graphs is a \emph{$\rho$-sandwich} if, for every $S \subseteq V$, \[ \chi({G_{hi}}[S]) \le \rho \cdot \chi({G_{lo}}[S])\ , \] where $\chi(G)$ is the (vertex) chromatic number of $G$. In other words, every independent set in ${G_{lo}}$ can then be partitioned into at most $\rho$ independent sets in ${G_{hi}}$. We refer to the smallest such $\rho$ as the \emph{tightness} of the sandwiching, which determines the quality of the sandwiching. The graph ${G_{lo}}$ used will simply consist of the 2-edges of ${\cal F}$, namely the incompatible pairs of links: $E({G_{lo}}) = \{e \in {\cal E} : |e|=2\}$. We will generally omit the mention of ${G_{lo}}$ and refer to ${G_{hi}}$ as the hypergraph sketch, as well as referring to the \emph{tightness} of ${G_{hi}}$. The idea behind sandwiching is to obtain efficient approximations of an optimization problem involving independence constraints given by a hypergraph ${\cal F}$ by simply solving the same problem with a modified independence system given by the graph ${G_{hi}}$. 
This always gives a feasible solution, and if the problem at hand is ``nice'' (as discussed below), then the tightness of sandwiching gives an upper bound on the efficiency of approximation. \mypar{Properties of problems for which sandwiching applies} Sandwiching can be applied to a wide variety of optimization problems that involve constraints in the form of a hypergraph ${\cal F}$. The problems can, e.g., involve various other data outside of the scope of ${\cal F}$. It suffices that three properties hold: \begin{description} \item[Monotonicity] If ${\cal F}, {\cal F}'$ are hypergraphs with ${\cal I}_{\cal F} \subseteq {\cal I}_{{\cal F}'}$, then $OPT({\cal F}) \ge OPT({\cal F}')$ for a minimization problem, and $OPT({\cal F}) \le OPT({\cal F}')$ for a maximization problem, where $OPT$ is the optimum measure of the problem. \item[Tightness] The increase (or decrease) in the objective function between the graphs in a $\rho$-sandwich is at most proportional to the tightness of the sandwiching, on every induced subgraph. Namely, $OPT({G_{hi}}[S])/OPT({G_{lo}}[S]) = O(\rho)$, for every $S \subseteq V$ (for minimization problems). \item[Approximability] The problem admits a $c$-approximation algorithm on the class of sandwich graphs from which ${G_{hi}}$ is chosen, for a parameter $c$. \end{description} Given these properties, the strategy is simply to solve the problem at hand over the constraints given by the graph ${G_{hi}}$. We have a $c$-approximation for this restricted form, due to the approximability property, and the tightness and monotonicity properties ensure that, e.g., for a minimization problem, $OPT({G_{hi}}) = O(\rho)\cdot OPT({G_{lo}}) = O(\rho)\cdot OPT({\cal F})$. Hence, we have a $O(c\rho)$-approximation for the problem in ${\cal F}$. We show in Sec.~\ref{s:problems} how most wireless scheduling problems can be handled with this strategy. 
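As a toy, brute-force illustration of this strategy for a maximization problem such as {\textsc{Mwisl}} (the 4-link instance and the sketch graph below are hypothetical, not the construction of later sections):

```python
from itertools import combinations

def independent_in_hypergraph(s, hyperedges):
    return not any(e <= s for e in hyperedges)

def independent_in_graph(s, edges):
    return not any(u in s and v in s for u, v in edges)

def best(weights, feasible):
    """Brute-force maximum-weight feasible subset (fine for tiny instances)."""
    verts = list(weights)
    subsets = (frozenset(c) for r in range(len(verts) + 1)
               for c in combinations(verts, r))
    return max((s for s in subsets if feasible(s)),
               key=lambda s: sum(weights[v] for v in s))

# Hypothetical 4-link instance: hypergraph F and a conservative sketch G_hi.
weights = {1: 3, 2: 2, 3: 2, 4: 1}
F = [frozenset({1, 2}), frozenset({2, 3, 4})]   # hypergraph constraints
G_hi = [(1, 2), (3, 4)]                         # graph sketch of F

# Solve the problem on G_hi and return that solution for F.
S_hi = best(weights, lambda s: independent_in_graph(s, G_hi))
assert independent_in_hypergraph(S_hi, F)       # I_{G_hi} subset of I_F

# The loss against the hypergraph optimum is what tightness bounds.
S_F = best(weights, lambda s: independent_in_hypergraph(s, F))
print(sum(weights[v] for v in S_hi), sum(weights[v] for v in S_F))  # 5 6
```

On this toy instance the sketch loses a little weight against the hypergraph optimum (5 versus 6); the point of the framework is that for the hypergraphs in question this loss is bounded by the tightness $\rho$.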
This approximation allows us to bring to bear the large body of theory of graph algorithms, simplifying both the exposition and the analysis. We also present several problems that do not fall under this framework, but can nevertheless be solved using sandwiching in a more customized manner. \section{Wireless Scheduling Problems} \label{s:problems} In many wireless scheduling problems, the basic object of study is a set $L$ of $n$ (potential) communication links, where each link $i\in L$ represents a single-hop communication request between two wireless nodes -- a sender node $s_i$ and a receiver node $r_i$. Transmissions on links cause interference to other links. The transmission rate of a link that is scheduled in a given slot depends on its signal to interference ratio (SIR). We consider two kinds of scheduling problems. In \emph{fixed-rate} problems, every link $i$ has a fixed SIR threshold $\beta_i$, and the only requirement is that it achieve the rate associated with this threshold: a link is \emph{successful} if and only if it is scheduled so that its SIR is at least $\beta_i$. Such fixed thresholds give rise to a feasibility formulation that is described in terms of a hypergraph ${\cal F}=(L,{\cal E})$ on the links: if $S \subseteq L$ is the set of links transmitting (in a given time/frequency slot), then all the links in $S$ are successful if and only if $S \in {\cal I}_{\cal F}$. We say then that $S$ is a \emph{feasible} set of links. We also consider problems involving \emph{rate control}. Here, the goal is not to achieve a fixed minimum rate, but to optimize some function of achieved data rates, e.g., the total rate over all links. Hence, this case is not described directly with the hypergraph formulation above, but we can reduce such problems to their fixed-rate variants (essentially) preserving the approximation factor. The property of our conflict graphs that provides the approximability property is that they are \emph{$O(1)$-inductive independent}. 
More strongly, they are \emph{$O(1)$-simplicial}, as defined below (See Sec.~\ref{s:graphalgo} for proofs). A \emph{$k$-simplicial elimination order} is one where the \emph{post-neighbors} of each vertex, or the neighbors appearing to its right, can be covered with $k$ cliques. A graph is $k$-simplicial if it has a $k$-simplicial elimination order. In \emph{$k$-inductive independence} graphs, the set of post-neighbors of each vertex is only required to have independence number bounded by $k$ (hence, a $k$-simplicial graph is also $k$-inductive independent). These graph classes have been well studied, and it is known that among others, vertex coloring and maximum weight independent set problems are $k$-approximable in $k$-inductive independent and $k$-simplicial graphs~\cite{akcoglu, kammertholey, yeborodin}. \subsection{Fixed-Rate Problems} \label{ss:fixedrate} These problems can be classified as \emph{covering} or \emph{packing} problems, where in the former we seek to minimize the number of time slots, while in the latter to maximize a weighted feasible selection of links. Various other objectives might also apply, such as the sum of completion times (i.e.\ indices of time slots), that we do not address here. Monotonicity of all these problems is easy to check. They also have efficient approximations on our conflict graphs (and more generally on $O(1)$-inductive independent graphs). So we only need to demonstrate their tightness (for a given $\rho$-sandwich), when not obvious. In some special cases, we need an ad-hoc approach for obtaining the approximation. It should also be mentioned that fixed-rate problems can be considered in two regimes: \emph{Uniform thresholds}, where the thresholds $\beta_i$ are equal for all links, and \emph{general thresholds}, where there is no restriction. The only difference in our results concerning these two regimes is that the tightness $\rho$ of sandwiching is significantly better in the case of uniform thresholds. 
However, the analysis of the problems below does not depend on the particular regime, and assumes a general $\rho$-sandwich is given. \subparagraph*{Max (Weight) Independent Set of Links ({\textsc{Mwisl}})} Find a feasible set of links of maximum cardinality or weight. A local-ratio algorithm gives constant-approximation in constant-simplicial graphs \cite{yeborodin}. \subparagraph*{Admission Control} The \emph{online} {\textsc{Mwisl}}, or \emph{admission control} problem, is defined as follows: the links arrive one-by-one, and the algorithm is to irrevocably admit or reject the current link in the feasible set. The quality of a solution is evaluated via the \emph{competitive ratio}, that is, the ratio between the solution value obtained by the online algorithm and that of the optimum offline solution. It is known that deterministic online algorithms perform rather poorly, when compared with the offline optimum~\cite{FanghanelGHV13}. Hence,~\cite{GHKSV14} considers algorithms on \emph{stochastic input models}, such as the \emph{secretary model}, in which an adversarial graph is presented in a random order, and the \emph{prophet-inequality model}, in which a random graph is presented in an adversarial order. They present expected constant-competitive ($O(\log n)$-competitive) algorithms for unweighted (weighted, resp.) variants of the problem on constant-inductive independent graphs. Applying this to ${G_{hi}}$ and using sandwiching, we obtain expected competitive ratios $O(\rho)$ and $O(\rho \log n)$, respectively, compared with the optimum offline solution in the hypergraph ${\cal F}$. \subparagraph*{(TDMA) Link Scheduling} Partition the input set of links into the minimum number of feasible subsets. A simple \emph{first-fit} style greedy algorithm gives constant factor approximation to vertex coloring in constant-simplicial graphs \cite{yeborodin}. \subparagraph*{Online Link Scheduling} The online variant of Link Scheduling we consider is as follows. 
The links arrive one by one, in an online manner, and the algorithm should assign each arriving link to a time slot, so that the set of links in each slot is feasible, and the number of slots is minimal. Once a link is assigned to a slot, it cannot be moved to another one, but its power level can be adjusted with newly arriving links, to reinforce feasibility. In order to approximate the online scheduling problem, we simply apply an online vertex coloring algorithm to the graph ${G_{hi}}$. A graph $G$ is \emph{$d$-inductive} if there is an ordering of the vertices, such that each vertex has at most $d$ post-neighbors in the ordering. It is well known that a simple greedy online algorithm colors $d$-inductive graphs using $O(d\log{n})$ colors~\cite{iranionline}, where $n$ is the number of vertices. It is a simple observation that every constant-inductive independent graph $G$ is $O(\chi(G))$-inductive. Hence, we have an algorithm that colors ${G_{hi}}$ with $O(\chi({G_{hi}})\log n)$ colors. By sandwiching, $\chi({G_{hi}})\le \rho \chi({G_{lo}})$, implying that the obtained algorithm is $O(\rho \log n)$-competitive, compared with the optimum offline solution in the hypergraph ${\cal F}$. \subparagraph*{Multi-Channel Selection} Given a natural number $c$ -- the number of channels -- select a maximum number (or weight) of links that can be partitioned into $c$ feasible subsets (a subset for each channel). There is a constant-factor approximation algorithm for constant-simplicial graphs \cite{yeborodin}. \subparagraph*{Fractional Scheduling} In this fractional variant of Link Scheduling, we are additionally given a real-valued demand $d(i)$ on each link $i$, indicating the amount of time that each link needs to be scheduled. 
A \emph{fractional schedule} of the links is a collection of feasible sets with rational values ${\cal S}=\{(I_k,t_k) : k=1,2\dots,q\}\subseteq {\cal I}_{{\cal F}}\times\mathbb{R}_+$, where ${\cal I}_{{\cal F}}$ is the set of all feasible subsets of $L$. The sum $\sum_{k=1}^q{t_k}$ is the \emph{length} of the schedule ${\cal S}$. The \emph{link capacity vector} $c_{{\cal S}}:L\rightarrow \mathbb{R}_+$ associated with the schedule ${\cal S}$ is given by $c_{{\cal S}}(i) = \sum_{(I,t)\in {\cal S}: I\ni i}t$, indicating how much scheduling time the link gets. The \emph{fractional scheduling problem} is a covering problem, where given a demand vector $d$, the goal is to compute a minimum length schedule that serves the demands, namely, for each link $i\in L$, $c_{{\cal S}}(i)\ge d(i)$. A greedy algorithm presented in~\cite{wan13} achieves constant-approximation on constant inductive independent graphs. \subparagraph*{Joint Routing and Scheduling} Consider a set of source-destination node pairs (multihop communication requests) $(u_i,v_i)$, $i=1,2,\dots,p,$ with associated weights/utilities $\omega_i>0$. The nodes are located in a multihop network given by a directed graph $G$, where the \emph{edges} of the graph are the transmission links. Let ${\cal P}_i$ denote the set of directed $(u_i,v_i)$ paths in $G$ and let ${\cal P}=\cup_i {\cal P}_i$. A \emph{path flow} for the given set of requests is a set $F=\{(P_k,\delta_k): k=1,2,\dots\}\subseteq {\cal P} \times \mathbb{R}_+$. The \emph{link flow vector} $f_{F}$ corresponding to path flow $F$, with $f_{F}(i)=\sum_{(P,\delta)\in F: P\ni i}{\delta},$ gives the flow along each link $i$. 
The \emph{multiflow routing and scheduling problem} is a covering problem, where given source-destination pairs with associated utilities, the goal is to find a path flow $F$ together with a fractional link schedule ${\cal S}$ of length $1$, such that\footnote{Essentially, the schedule here gives a probability distribution over the feasible sets of links.} for each link $i$, the link flow is at most the link capacity provided by the schedule, $f_{F}(i) \le c_{{\cal S}}(i)$, and the \emph{flow value} \[ W=\sum_{i=1}^p \omega_i\cdot \sum_{(P_k,\delta_k)\in F, P_k\in {\cal P}_i}{\delta_k} \] is maximized. A constant-approximation algorithm of~\cite{wan14} (the result holds with unit utilities) for constant-inductive independent graphs applies here. It should also be noted that the fractional scheduling and routing and scheduling problems can be reduced to the {\textsc{Mwisl}} problem using linear programming techniques (described e.g.\ in~\cite{jansen03}), as shown in~\cite{wan09}. We will further discuss this in Sec.~\ref{ss:ratecontrol}. Let us verify that this problem satisfies the tightness property. Consider a feasible solution in ${G_{lo}}$ that consists of a path flow $F=\{(P_k,\delta_k): k=1,2,\dots\}$ and a schedule ${\cal S}=\{(I_k,t_k) : k=1,2,\dots\}$ of length $\sum_{k\ge 1}t_k=1$, such that $f_{F}(i) \le c_{{\cal S}}(i)$. By the sandwiching property, the schedule ${\cal S}$ can be refined into a schedule ${\cal S}'=\{(I_k^s,t_k)\}_{k,s}$ in ${G_{hi}}$, where ${\cal S}'$ serves the same demand vector as ${\cal S}$, and ${\cal S}'$ has length at most $\rho$ times the length of ${\cal S}$. We then scale the refined schedule to have length 1, so that the scaled path flow $F'=\{(P_k,\delta_k/\rho): k=1,2,\dots\}$ together with the new schedule will be feasible in ${G_{hi}}$, as all link demands will be served. Clearly, the value of $F'$ is at least a $1/\rho$ fraction of the value of $F$, so we achieve a tightness of $\rho$. 
\subparagraph*{Multi-Channel Multi-Antenna Extensions} All the problems above can be naturally generalized to the case when there are multiple channels (e.g.\ frequency bands) available and moreover, wireless nodes are equipped with multiple antennas and can operate in different channels simultaneously (MC-MA). Each node $u$ is equipped with $a(u)$ antennas numbered from $1$ to $a(u)$ and can (only) use a subset ${\cal C}(u)$ of channels. For each link $i = (s_i, r_i)$, we form a collection of $a(s_i)a(r_i)|{\cal C}(s_i)\cap {\cal C}(r_i)|$ \emph{virtual} links, that correspond to each selection of an antenna of the sender node $s_i$, an antenna of receiver node $r_i$ and a channel $c\in {\cal C}(s_i)\cap {\cal C}(r_i)$ available to both nodes. We call link $i$ \emph{the original} of its virtual links. A set of virtual links $S$ is feasible in MC-MA if and only if no two links in $S$ share an antenna (i.e., they do not use the same antenna of the same node), and the set of originals of links in $S$ using each channel is feasible (in ${\cal F}$). We show that the conflict graphs ${G_{hi}}$ and ${G_{lo}}$ can be extended to this setting, preserving their properties. Let $L$ denote the set of virtual links and $L_o$ the corresponding originals. We define the conflict graphs ${G_{hi}^M}(L)$ and ${G_{lo}^M}(L)$ that have a node for each virtual link, with two virtual links adjacent if at least one of the following holds: 1. they share an antenna, or 2. they share a channel \emph{and} their originals are adjacent in ${G_{hi}}(L_o)$ or respectively in ${G_{lo}}(L_o)$, i.e., in the single channel setting. In particular, the replicas of the same original link form an independent set in both graphs. We prove that if ${G_{hi}}$ is $k$-simplicial, then ${G_{hi}^M}$ is $(k+2)$-simplicial. The other properties follow by similar arguments. To this end, consider a virtual link $i\in L$, and let us see which links are in the neighborhood of $i$. 
The neighborhood of $i$ can be partitioned into three sets: 1. The virtual links that share the channel with $i$, denoted $O$, 2. The links that use the sender antenna of $i$, denoted $S$, and 3. The links that use the receiver antenna of $i$, denoted $R$. Note that $O$ consists of replicas of \emph{distinct} links in $L_o$, which are all adjacent to the original of $i$ in ${G_{hi}}$. Also, note that $S$ and $R$ form cliques in ${G_{hi}^M}$. It is now easy to see that ${G_{hi}^M}$ is $(k+2)$-simplicial, where the simplicial ordering is induced by the simplicial ordering of ${G_{hi}}$. \subparagraph*{Spectrum Auctions With Sub-Modular Valuations} The \emph{spectrum auction problem} is a packing problem that can be considered a generalization of {\textsc{Mwisl}}, where there are multiple channels and a not-necessarily-additive weight function. Given a set $L$ of links, a natural number $c$ (number of available channels) and a valuation function $\omega:L\times 2^{[c]}\rightarrow \mathbb{N}$, find a \emph{feasible} allocation $A:L\rightarrow 2^{[c]}$ that maximizes the sum of valuations $\omega(A)=\sum_{i\in L}{\omega_{i,A(i)}}$, where $[c]=\{1,2,\dots,c\}$. Note that each feasible allocation is a collection of $c$ feasible sets, each corresponding to a channel. Note also that the problem is reduced to solving a number of {\textsc{Mwisl}} problems when the valuation function is additive, i.e.\ $\omega_{i,T}=\sum_{j\in T}\omega_{i,j}$, leading to a $O(\rho)$-approximation. In the more general case when the valuation function $\omega_{i,T}$ is a \emph{submodular} function of $T$ for each link $i$, i.e.\ for any sets $T,T'$ of channels, $\omega_{i,T\cup T'} + \omega_{i,T\cap T'}\le \omega_{i,T} + \omega_{i,T'}$, randomized algorithms presented in~\cite{HoeferK15} give constant-factor approximation for constant-inductive independent graphs and $O(\log n)$-approximation for the physical model, in expectation. 
These approximations hold for a particular kind of submodular functions called \emph{matroid rank sum functions}. Thus, in order to obtain an (expected) $O(\rho)$-approximation for matroid rank sum functions, we only need to verify the tightness property. First, note that any non-negative submodular function $f$ is subadditive, i.e.\ for each set $S$, $f(S)\le \sum_{e\in S} f(e)$. Consider a feasible allocation $S_1,S_2,\cdots,S_c$ in ${G_{lo}}$. Using sandwiching, we can split each $S_t$ into $\rho$ independent sets $S_t^1,S_t^2,\cdots,S_t^\rho$ in ${G_{hi}}$ (where some of the subsets may be empty). Consider the (at most) $\rho$ tentative allocations $\{S_1^j,S_2^j,\cdots,S_c^j\}$ for $j=1,2,\cdots,\rho$ and consider the sum of total valuations of these allocations. Let $i$ be any fixed link. In each of the obtained allocations, link $i$ gets a subset of channels, and the subsets corresponding to different allocations are disjoint and sum up to the set of channels allocated to $i$ in the original allocation. This observation and the fact that the valuation function for each link is subadditive imply that the sum of total valuations is at least the total valuation of the original allocation. Since there are at most $\rho$ tentative allocations, the best one of them achieves total valuation at least a $1/\rho$ fraction of that of the original allocation. \subparagraph*{Spectrum Auctions with General Valuations} When the valuation functions $\omega_{i,T}$ are unrestricted, our framework may not be applied directly, as we cannot guarantee the tightness property. However, in this case we can take advantage of a particular solution proposed in \cite{HoeferKV14}, where a linear programming approach is developed. Using this approach, an $O(\sqrt{c})$-approximation is obtained for constant-inductive independent graphs, where $c$ is, as before, the number of channels.
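The allocation-splitting step above admits a direct illustration. The following sketch (purely illustrative names and a toy subadditive valuation; the sandwich split is supplied as an input, standing in for the actual decomposition into ${G_{hi}}$-independent sets) forms the $\rho$ tentative allocations and verifies that the best of them retains at least a $1/\rho$ fraction of the original total valuation:

```python
import math
from itertools import chain

def split_allocation(alloc, split, rho):
    """alloc: dict link -> set of channels.  split(c, users) returns rho
    groups of the links using channel c (a stand-in for the sandwich split
    into G_hi-independent sets).  Returns rho tentative allocations."""
    tentative = [dict() for _ in range(rho)]
    for c in set(chain.from_iterable(alloc.values())):
        users = {i for i, T in alloc.items() if c in T}
        for j, group in enumerate(split(c, users)):
            for i in group:
                tentative[j].setdefault(i, set()).add(c)
    return tentative

def total_value(alloc, omega):
    return sum(omega(i, T) for i, T in alloc.items())

# Toy subadditive valuation: omega(i, T) scales with sqrt(|T|).
def omega(i, T):
    return (i + 1) * math.sqrt(len(T))

# Toy instance: 3 links, 2 channels, rho = 2; the "split" simply
# alternates the links of each channel between the two groups.
alloc = {0: {0, 1}, 1: {0}, 2: {1}}
rho = 2
def split(c, users):
    users = sorted(users)
    return [users[0::2], users[1::2]]

tentative = split_allocation(alloc, split, rho)
best = max(total_value(t, omega) for t in tentative)
assert best >= total_value(alloc, omega) / rho
```

Per link, the channel sets across the tentative allocations are disjoint and union to the originally allocated set, so subadditivity yields the $1/\rho$ guarantee used in the argument above.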
For a vertex $v$ in a $k$-inductive independent graph $G$, let $N^+_G(v)$ denote the set of post-neighbors in the inductive independence order. The linear program (which is a relaxation of the corresponding integer linear program (ILP)) for $k$-inductive independent graphs presented in~\cite{HoeferKV14} is as follows. \begin{align} \nonumber \text{Maximize } &\sum_{v\in V}{\sum_{T\subseteq[c]}{\omega_{v,T}x_{v,T}}} &&\\ \nonumber \text{s.t. } &\sum_{u\in N^+_G(v)}{\sum_{T\subseteq [c],T\ni j}{x_{u,T}}} \le k && v\in V, j\in [c]\\ \label{E:ilp} & \sum_{T\subseteq [c]}{x_{v,T}} \le 1 && v\in V\\ \nonumber & x_{v,T}\ge 0 && v\in V,T\subseteq[c] \end{align} The first constraint corresponds to $k$-inductive independence: the number of post-neighbors of a vertex $v$ that are assigned the same channel $j$ must be bounded by $k$. The second constraint states that each vertex is assigned at most one set of channels. An algorithm based on randomized rounding of the linear program solution is presented in~\cite{HoeferKV14}, giving an $O(k\sqrt{c})=O(\sqrt{c})$-approximate solution in expectation. Again, the problem with this solution is that, in the absence of the tightness property, we do not know how the ILP optimum compares with the optimal solution in ${G_{lo}}$. However, we can ``plant'' the tightness property in the ILP, as follows. The key observation is that the only constraint that really depends on the underlying graph is the inductive independence constraint, so by simply replacing the right-hand side of the constraint with $k'=k\cdot \rho$, we obtain, due to sandwiching, that every solution in ${G_{lo}}$ is a feasible solution in the ILP\footnote{Here we also use the fact that ${G_{hi}}$ and ${G_{lo}}$ have the same inductive independence order.}, even though the ILP is formulated in terms of ${G_{hi}}$. This means that the ILP optimum is at least the optimum in ${G_{lo}}$.
Hence, the randomized rounding algorithm of~\cite{HoeferKV14} gives us an $O(\rho\sqrt{c})$-approximation of the optimum in ${\cal F}$. A similar approach can be used to obtain an expected $O(\rho)$-approximation for another special case considered in~\cite{HoeferK15}, when the valuation function for each link is \emph{symmetric}, i.e.\ the valuation depends only on the number of channels rather than on specific subsets: for each link $i$ and subsets $T,T'$ of channels, $\omega_{i,T}=\omega_{i,T'}$ if $|T|=|T'|$. \subsection{Rate Control and Scheduling} \label{ss:ratecontrol} Most of the fixed-rate problems also have variants where choosing the data rates is part of the problem. We describe here how these problems can be reduced to fixed-rate problems with minimal overhead. Again, the approximability of the problems in the physical model depends linearly on the tightness of the sandwiching.\footnote{In general, the tightness parameter depends on rates, and since the rates are not fixed in this case, tightness will depend on the maximum/minimum rates.} It is important to stress that the reduction is done to fixed-rate problems with \emph{non-uniform} thresholds, which means that the approximation guarantees for non-uniform thresholds apply here. \subparagraph*{{\textsc{Mwisl}} with Rate Control} By Shannon's theorem, given a set $S$ of links simultaneously transmitting in the same channel, the transmission data (bit-)rate $r(S,i)$ of a link $i$ is a non-decreasing function of the SIR, $SIR(S,i)$, of link $i$ (with other parameters, e.g.\ frequency, fixed). Thus, we consider the {\textsc{Mwisl}} problem where each link $i$ has an associated non-decreasing \emph{utility function} $u^i:\mathbb{R}_+\rightarrow \mathbb{R}_+$, and the weight of link $i$ is the value of $u^i$ at $SIR(S,i)$ if link $i$ is selected in the set, and $0$ otherwise. The goal is, given the links with utility functions, to find a subset $S$ that maximizes the total utility $\sum_{i\in S} u^i(r(S,i))$.
We assume that $u^i(r(S,i))=0$ if $SIR(S,i)<1$, namely when the signal is weaker than the interference. An $O(\log n)$-approximation for this variant of {\textsc{Mwisl}} was obtained in~\cite{KesselheimESA12}. We show, by reducing the problem to {\textsc{Mwisl}} in a modified fixed-rate instance, that this ratio can be replaced with $O(\rho)$, where $\rho$ is the tightness of the fixed-rate instance. Let us fix a utility function $u$. First, assume that the possible set of weights for each link is a discrete set $u_{min}=u_1<u_2<\cdots<u_{t}=u_{max}$. Then, we can replace each link $i$ with $t$ copies $i_1,i_2,\cdots,i_t$ with different thresholds and fixed weights, where $\omega_{i_k}=u_k$ and $\beta_{i_k}=\min\{x: u^{i}(x) \ge u_k\}$, but $\omega_{i_k}=0$ if $\beta_{i_k}< 1$ (the latter is justified by our assumption that $SIR<1 \Rightarrow u=0$). Now, the problem becomes a {\textsc{Mwisl}} problem for the modified instance $L'$ with link replicas and fixed weights. Observe that no feasible set in $L'$ contains more than a single copy of the same link,\footnote{This follows from the definition of the physical model (Sec.~\ref{s:model}), the assumption that $\beta_i\ge 1$ for each link $i$, and that the copies occupy the same geometric place.} implying that each feasible set of the modified instance corresponds to a feasible set of the original instance, with an obvious transformation. For the case when the number of possible utility values is too large or the set is continuous, a standard trick can be applied. Let $u^i_{min},u^i_{max}$ be the minimum and maximum possible utility values for the given link $i$. The modified instance $L'$ is constructed by replacing each link $i$ with $O(\log (u^i_{max}/u^i_{min}))$ copies $i_1,i_2,\dots$ of itself, assigning each replica $i_k$ the weight $\omega_k=2^{k-1}$ and the threshold $\beta_k=\min\{x : 2^{k-1} \le u^i(x) \le 2^{k}\}$ if $\beta_k\ge 1$, and setting $\omega_k=0$ otherwise.
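The replica construction just described can be sketched as follows (a minimal sketch: the utility function, its range, and the numeric search bounds are illustrative assumptions, and each threshold is located by bisection as the smallest $x$ with $u(x)\ge 2^{k-1}$, which for a non-decreasing continuous $u$ agrees with the definition above):

```python
import math

def make_replicas(u, u_min, u_max, x_hi=1e9, eps=1e-9):
    """Replace one link, with non-decreasing utility u ranging over
    [u_min, u_max], by copies (weight, threshold): the k-th copy gets
    weight 2^(k-1) and threshold min{x : u(x) >= 2^(k-1)}.  A copy whose
    threshold falls below 1 gets weight 0 (SIR < 1 yields no utility)."""
    n_copies = int(math.ceil(math.log2(u_max / u_min))) + 1
    replicas = []
    for k in range(1, n_copies + 1):
        target = 2 ** (k - 1)
        lo, hi = 0.0, x_hi  # bisection for the smallest x with u(x) >= target
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if u(mid) >= target:
                hi = mid
            else:
                lo = mid
        beta = hi
        replicas.append((target if beta >= 1 else 0, beta))
    return replicas

# Toy utility u(x) = x on [1, 8]: four copies with weights 1, 2, 4, 8
# and thresholds (numerically) 1, 2, 4, 8.
reps = make_replicas(lambda x: x, 1, 8)
```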
If the value $\log (u^i_{max}/u^i_{min})$ is still too large, it may be inefficient to have $O(\log (u^i_{max}/u^i_{min}))$ copies for each link. It is another standard observation that only the last $O(\log n)$ copies of each link really matter, as restricting to only those links degrades the approximation by at most a factor of 2. \subparagraph*{Fractional Scheduling with Rate Control} In this formulation, we redefine a fractional schedule to be a set ${\cal S}=\{(I_k,t_k) : k=1,2\dots,q\}\subseteq 2^L\times\mathbb{R}_+$, namely, $I_k$ are arbitrary subsets rather than independent ones. We redefine the link capacity vector ${\hat c}_{{\cal S}}$ to incorporate the data rates as follows: \begin{equation}\label{E:ratecap} {\hat c}_{{\cal S}}(i) = \sum_{(I,t)\in {\cal S}: I\ni i}t\cdot r(I,i). \end{equation} The \emph{fractional scheduling with rate control} problem is to find a minimum length schedule ${\cal S}$ that serves a given demand vector $d$, namely, such that for each link $i\in L$, $ {\hat c}_{{\cal S}}(i)\ge d(i). $ The problem can be formulated as an exponential size linear program $LP_1$, as follows. \begin{align} \nonumber \text{Minimize } &\sum_{I\subseteq L}{t_I} &&\\ \nonumber \text{s.t. } & \sum_{I\subseteq L : I\ni i}t_I \cdot r(I,i) \ge d(i) && \forall i\in L\\ \nonumber & t_I\ge 0 && \forall I\subseteq L \end{align} The dual program $LP_2$ is then: \begin{align} \nonumber \text{Maximize } &\sum_{i\in L}{d(i) y_i} &&\\ \nonumber \text{s.t. } & \sum_{i\in I}y_i\cdot r(I,i) \le 1 && \forall I\subseteq L\\ \nonumber & y_i\ge 0 && \forall i\in L \end{align} As~\cite[Thm. 5.1]{jansen03} states, if there is an approximation algorithm that finds a set $\hat I$ such that $\sum_{i\in \hat I}y_i r(\hat I,i)\ge \frac{1}{a}\max_{I\subseteq L}\sum_{i\in I}y_i r(I,i)$, then there is an $a$-approximation algorithm for $LP_1$, where the former algorithm acts as an approximate \emph{separation oracle} for $LP_1$.
This auxiliary problem is simply a special case of {\textsc{Mwisl}} with rate control. Thus, there is an approximation preserving reduction from fractional scheduling with rate control to {\textsc{Mwisl}} with rate control. \subparagraph*{Routing, Scheduling and Rate Control} The rate-control variant of the routing and scheduling problem is formulated in the same way as for the fixed rate setting, with only the capacity constraints modified to involve the modified link capacity vector ${\hat c}_{{\cal S}}$ of (\ref{E:ratecap}) incorporating the data rates on the links, instead of $c_{{\cal S}}$. This problem can also be reduced to {\textsc{Mwisl}} with rate control, using methods similar to those for the fractional scheduling problem. The reduction is nearly identical to the reduction of the fixed rate versions of these problems to {\textsc{Mwisl}}, presented in~\cite[Thm. 4.1]{wan09}. \section{Conflict Graphs} \label{s:conflict} Consider a set $L$ of links, whose nodes are represented as points in a metric space with distance function $d$. We denote $d_{ij}=d(s_i,r_j)$\label{G:asymdistance} and denote by $l_i=d(s_i,r_i)$\label{G:li} the \emph{length} of link $i$, where $s_i$ ($r_i$) denotes the sender node (receiver node, resp.) of link $i$. Let further $d(i,j)$ denote the minimum distance between the nodes of links $i$ and $j$. Each link $i$ has an associated \emph{sensitivity} ${\mathfrak l}_i$, which indicates how sensitive it is to interference. This depends linearly on the strength of the transmission on the link, which depends on the length of the link, but can also depend on the coding. Higher data rates mean higher sensitivity. For technical reasons, we shall require that ${\mathfrak l}_i \ge 4l_i$; we show in Sec.~\ref{s:model} why this is appropriate. Let ${\Delta}(S)=\max_{i,j\in S}\{{\mathfrak l}_i/{\mathfrak l}_j\}$ denote the \emph{sensitivity diversity} of a set $S\subseteq L$ of links. The conflict graphs are parameterized by a positive function $f$.
The graph $G_f = (L,E)$ is defined by \begin{equation} (i,j) \in E \quad \Leftrightarrow \quad d_{ij}d_{ji} < {\mathfrak l}_i{\mathfrak l}_j f\left({\mathfrak l}_{max}/{\mathfrak l}_{min}\right)\ , \label{eq:gengraphdef} \end{equation} where ${\mathfrak l}_{min}=\min\{{\mathfrak l}_i,{\mathfrak l}_j\}$ and ${\mathfrak l}_{max}=\max\{{\mathfrak l}_i,{\mathfrak l}_j\}$. When the sensitivity of links is proportional to the length, i.e.,\ ${\mathfrak l}_i\sim l_i$ for all $i\in L$ (the ``uniform thresholds'' case considered below), the graph definition can be simplified to: \begin{equation} (i,j) \in E \quad \Leftrightarrow \quad d(i,j) < {\mathfrak l}_{min} f\left({\mathfrak l}_{max}/{\mathfrak l}_{min}\right)\ . \label{eq:unifgraphdef} \end{equation} We will generally assume that $f$ is \emph{sub-linear}, i.e., $f(x)=o(x)$. We will choose the graph ${G_{lo}}$ to be $G_1$ (i.e., $f(x)\equiv 1$), and ${G_{hi}}$ to be $G_f$ for an appropriate non-decreasing sublinear function $f$, depending on the setting, as discussed in Sec.~\ref{s:feas}. It follows easily from the properties of the physical model (Sec.~\ref{s:model}) that the edge set of ${G_{lo}}$ is precisely the set of $2$-edges of the hypergraph corresponding to the physical model. We next show the key properties of these conflict graphs: their tightness and the efficiency of (approximate) computation. We first need to discuss the metric under consideration. \emph{Metric}. We will assume that the metric in which the nodes are located shares some of the aspects of the Euclidean plane. Specifically, the number of unit balls that can fit without overlap in an $R$-ball is bounded by a polynomial in $R$.
Formally, we consider metric spaces of \emph{bounded doubling dimension}, where the doubling dimension $m$ of the metric space is the infimum of all numbers $\delta > 0$ such that for every $\epsilon \in (0,1]$, every ball of radius $r>0$ has at most $C\epsilon^{-\delta}$ points of mutual distance at least $\epsilon r$, where $C\geq 1$ is an absolute constant. This generalizes Euclidean spaces, as the $m$-dimensional Euclidean space has doubling dimension $m$~\cite{heinonen}. \subsection{Tightness} We begin by bounding the number of independent sets in ${G_{hi}}=G_f$ that are necessary to cover a feasible set. We show that this number is $O(f^*({\Delta}(S)))$ for any feasible set $S$, where $f^*$, the \emph{iterated} $f$, is defined for every sub-linear function, as follows. For each integer $c\geq 1$, the function $f^{(c)}(x)$ is defined inductively by: $f^{(1)}(x)=f(x)$ and $f^{(c)}(x)=f(f^{(c-1)}(x))$\label{G:frepeated}. Let $x_0=\sup\{x\geq 1 : f(x) \ge x\} +1$; such a point exists for every $f(x)=o(x)$. The function $f^*(x)$\label{G:fstar} is defined by: $ f^*(x)=\min\{c\ge 1 : f^{(c)}(x)\le x_0\} $ for arguments $x> x_0$, and $f^*(x)=1$ for the rest. Note that for a function $f(x)=\gamma x^\delta$ with constants $\gamma>0$ and $\delta\in (0,1)$, $f^*({\Delta})=\Theta(\log{\log{{\Delta}}})$, while for $f(x)=\gamma \log^{t} x$ with constants $\gamma>0$ and $t\ge 1$, $f^*({\Delta})=\Theta(\log^*{{\Delta}})$. Those will be the functions we are most interested in. \begin{theorem}\label{T:sandwich} Our conflict graphs are $O(f^*({\Delta}(S)))$-tight, for any non-decreasing sub-linear function $f$. That is, if $S$ is independent in ${G_{lo}}$, then $\chi({G_{hi}}[S]) = O(f^*({\Delta}(S)))$. \end{theorem} Fix a non-decreasing sub-linear function $f$. The proof requires the following three lemmas, which encapsulate the technicalities of dealing with our conflict graphs.
\begin{lemma}\label{L:adjacent} Let links $i,j$, ${\mathfrak l}_i\le {\mathfrak l}_j$, be adjacent in ${G_{hi}}$. Then, $d_{ij} + d_{ji} \le l_i + l_j + 2\sqrt{{\mathfrak l}_i{\mathfrak l}_j f({\mathfrak l}_j/{\mathfrak l}_i)}$. \end{lemma} \begin{proof} Denote $m_{ij}=\min(d_{ji}, d_{ij})$ and $m'_{ij} = \max(d_{ji}, d_{ij})$. Since $i$ and $j$ are adjacent, $m_{ij}m'_{ij}\le {\mathfrak l}_i{\mathfrak l}_jf({\mathfrak l}_j/{\mathfrak l}_i)$, which implies that $m_{ij} \le \sqrt{{\mathfrak l}_i{\mathfrak l}_j f({\mathfrak l}_j/{\mathfrak l}_i)}$. By the triangle inequality, $m'_{ij} \le l_i + l_j + m_{ij}$. Summing, $d_{ij}+d_{ji}=m_{ij}+m'_{ij}\le l_i+l_j+2m_{ij}\le l_i + l_j + 2\sqrt{{\mathfrak l}_i{\mathfrak l}_j f({\mathfrak l}_j/{\mathfrak l}_i)}$. \end{proof} \begin{lemma}\label{L:triangles} Let $i,j,k$ be links and $c>0$ be a number, such that $c\cdot {\mathfrak l}_i \le {\mathfrak l}_j\le {\mathfrak l}_k$, $f(x)\le x/22$ for all $x\ge c$, and $i$ is adjacent to both $j$ and $k$ in ${G_{hi}}$. Then, $ d_{jk}d_{kj} < 3{\mathfrak l}_i{\mathfrak l}_kf({\mathfrak l}_k/{\mathfrak l}_i) + (2/3){\mathfrak l}_j{\mathfrak l}_k. $ \end{lemma} \begin{proof} The triangle inequality and the assumptions ${\mathfrak l}_t\ge 4l_t$ (for all $t$) and ${\mathfrak l}_j\ge {\mathfrak l}_i$ imply that \begin{align*} d_{jk}&\le \min(d_{ik} + l_i+d_{ji}, d_{ik} + l_j+d_{ij})\le {\mathfrak l}_j/4 + m_{ij}+ d_{ik},\\ d_{kj}&\le \min(d_{ki} + l_i+d_{ij}, d_{ki} + l_j+d_{ji}) \le {\mathfrak l}_j/4 + m_{ij} + d_{ki}, \end{align*} where we denote $m_{ij}=\min(d_{ji}, d_{ij})$. Multiplying these inequalities gives us: \begin{equation}\label{E:trianglesmain} d_{jk}d_{kj} \le {\mathfrak l}_j^2/16 + m_{ij}^2 + {\mathfrak l}_{j}m_{ij}/2 + ({\mathfrak l}_j/4 + m_{ij})(d_{ik} + d_{ki}) + d_{ik}d_{ki}. \end{equation} Denote $g_{j} = {\mathfrak l}_i {\mathfrak l}_j f({\mathfrak l}_j/{\mathfrak l}_i)$ and $g_k={\mathfrak l}_i {\mathfrak l}_k f({\mathfrak l}_k/{\mathfrak l}_i)$.
Since links $j,k$ are adjacent to link $i$, we have that $m_{ij}\le \sqrt{d_{ij} d_{ji}} \le \sqrt{g_j}$ and $d_{ik}d_{ki}\le g_k$, and using Lemma~\ref{L:adjacent}, we also have that $d_{ik} + d_{ki}\le {\mathfrak l}_i/4 + {\mathfrak l}_k/4 + 2\sqrt{g_k}$. By plugging these bounds into (\ref{E:trianglesmain}) and simplifying, we obtain \begin{align*} d_{jk}d_{kj} &< \frac{3{\mathfrak l}_j{\mathfrak l}_k}{16} + g_j + \frac{{\mathfrak l}_j}{2}\sqrt{g_j} + {\mathfrak l}_j\sqrt{g_k} + \frac{{\mathfrak l}_k}{2}\sqrt{g_j} + 2\sqrt{g_jg_k} + g_k\\ &\le \frac{3{\mathfrak l}_j{\mathfrak l}_k}{16} + \frac{{\mathfrak l}_j^2}{22} + \frac{{\mathfrak l}_j^2}{2\sqrt{22}} + \frac{{\mathfrak l}_j{\mathfrak l}_k}{\sqrt{22}} + \frac{{\mathfrak l}_j{\mathfrak l}_k}{2\sqrt{22}} + 2g_k + g_k\\ &< 3g_k + (2/3){\mathfrak l}_j{\mathfrak l}_k, \end{align*} where to obtain the second inequality, we used the assumption that $f({\mathfrak l}_j/{\mathfrak l}_i)\le {\mathfrak l}_j/(22{\mathfrak l}_i)$ and $f({\mathfrak l}_k/{\mathfrak l}_i)\le {\mathfrak l}_k/(22{\mathfrak l}_i)$, and the inequality $g_j\le g_k$, which follows from the assumption that $f$ is non-decreasing and that ${\mathfrak l}_k\ge {\mathfrak l}_j$. \end{proof} \begin{lemma}\label{P:simpleset} Let $c>0$ be a constant, $i$ be a link, and $S$ be a set of neighbors of $i$ in ${G_{hi}}$ such that ${\mathfrak l}_i \le {\mathfrak l}_j\le c\cdot {\mathfrak l}_i$ holds for all $j\in S$. Then, $S$ can be partitioned into $O(1)$ cliques in ${G_{lo}}$. \end{lemma} \begin{proof} Observe that any two links whose receivers are within distance $3{\mathfrak l}_i/4$ of each other are adjacent in ${G_{lo}}$. Indeed, if $j,k$ are such that $d(r_j,r_k)\le 3{\mathfrak l}_i/4$, then by the triangle inequality, $d_{jk}\le l_j+3{\mathfrak l}_i/4 \le {\mathfrak l}_j$ (recall that ${\mathfrak l}_j\ge 4l_j$), and similarly, $d_{kj}\le {\mathfrak l}_k$, implying that $d_{jk}d_{kj}\le{\mathfrak l}_j{\mathfrak l}_k$.
Now, consider the following partitioning of $S$ into cliques: 1. Pick an arbitrary link $k\in S$, and let $K_k$ be the set of all links $j\in S$ with $d(r_j,r_k)\le 3{\mathfrak l}_i/8$. 2. Remove $K_k$ from $S$ and repeat, until $S$ is empty. This procedure partitions $S$ into subsets $K_{i_t}$ indexed by links $i_t$. By the discussion above, each subset $K_{i_t}$ is a clique in ${G_{lo}}$. Moreover, by construction, the receiver nodes of $i_t$ and $i_{t'}$ for $t\neq t'$ are at mutual distance greater than $3{\mathfrak l}_i/8$. On the other hand, as assumed, each link $j$ in $S$ satisfies $d_{ji} d_{ij} \le {\mathfrak l}_i {\mathfrak l}_j f({\mathfrak l}_j/{\mathfrak l}_i) \le c f(c){\mathfrak l}_i^2 $, so $d(i,j) \le \sqrt{c \cdot f(c)} {\mathfrak l}_i$. Then, it is easy to see that all receiver nodes $r_j$ of links $j\in S$ are located within a ball of radius $c' {\mathfrak l}_i$, with $r_i$ as center, where $c'=c+1+\sqrt{cf(c)}$. Hence, the number of cliques obtained is at most $(8c'/3)^m=O(1)$, where $m$ is the doubling dimension of the space. \end{proof} \begin{proof}[Proof of Theorem~\ref{T:sandwich}] Recall that a graph is $d$-inductive (or $d$-degenerate) if there is an ordering of the vertices so that each vertex has at most $d$ post-neighbors. It is well known that a greedy algorithm uses at most $d+1$ colors on $d$-inductive graphs. Thus, it suffices for us to show that each independent set $S$ in ${G_{lo}}$ induces an $O(f^*(\Delta))$-\emph{inductive} subgraph in ${G_{hi}}$, with respect to a non-decreasing order of links by sensitivity. Namely, for every link $i\in S$, the number of links $j\in S$ with ${\mathfrak l}_j\ge {\mathfrak l}_i$ that are adjacent to $i$ is in $O(f^*(\Delta(S)))$. To show inductiveness, consider a link $i\in S$, and let $S^+$ denote the set of links $j\in S$ with ${\mathfrak l}_j\ge {\mathfrak l}_i$ that are adjacent to $i$ in ${G_{hi}}$.
Further, partition $S^+$ into subsets $S_{\ge c}^+$ and $S_{<c}^+$, containing the links $j$ with sensitivity at least (respectively, less than) $c{\mathfrak l}_i$, where $c>0$ is the constant described in Lemma~\ref{L:triangles}. Lemma~\ref{P:simpleset} implies that $S_{<c}^+$ can be partitioned into $O(1)$ cliques in ${G_{lo}}$; since $S$ is independent in ${G_{lo}}$, each of these cliques contains at most one link of $S$, so $|S_{<c}^+|=O(1)$. It remains to show that $|S_{\ge c}^+|=O(f^*(\Delta(S)))$. Let $j,k\in S^+_{\ge c}$. Lemma~\ref{L:triangles} implies that $d_{jk}d_{kj} < 3{\mathfrak l}_i{\mathfrak l}_kf({\mathfrak l}_k/{\mathfrak l}_i) + (2/3){\mathfrak l}_j{\mathfrak l}_k$. On the other hand, we have $d_{jk}d_{kj}>{\mathfrak l}_j{\mathfrak l}_k$, since $j$ and $k$ are not adjacent in ${G_{lo}}$. Combining, we obtain that $(1/3) {\mathfrak l}_j {\mathfrak l}_k < 3{\mathfrak l}_i {\mathfrak l}_k f({\mathfrak l}_k/{\mathfrak l}_i)$, which leads to $\lambda_j < 9f(\lambda_k)$, using the notation $\lambda_t={\mathfrak l}_t/{\mathfrak l}_i$. Assume, w.l.o.g., that $S^+_{\ge c}=\{1,2,\dots,h\}$, and ${\mathfrak l}_j\le {\mathfrak l}_k$ for $j<k$. Then, denoting $g(x)\equiv 9f(x)$, we have: \[ c\le \lambda_1 <g(\lambda_2)<g(g(\lambda_3))<\dots<g^{(h-1)}(\lambda_h), \] which, together with the assumption that $f(x)<x$ for all $x\ge c$ (see Lemma~\ref{L:triangles}), implies that $h-1\le g^*(\lambda_h)=O(f^*(\Delta(S)))$. \end{proof} \subsection{Algorithmic Properties of the Conflict Graphs}\label{s:graphalgo} Computability of our conflict graph construction is demonstrated through the notion of \emph{$k$-simplicial graphs}, which generalize chordal graphs. In particular, we show that every conflict graph $G_f$ has a constant-simplicial elimination ordering, where the set of post-neighbors of every vertex can be covered with $O(1)$ cliques. A function $f$ is \emph{strongly sublinear} if for each constant $c\ge 1$, there is a constant $c'$ such that $cf(x)/x\le f(y)/y$ for all $x,y\ge 1$ with $x\ge c'y$. For example, the functions $f(x)=x^{\delta}$, $\delta<1$, and $f(x)=\log{x}$ are strongly sublinear.
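For concreteness, the iterated function $f^*$ that drives the tightness bound can be computed directly. A minimal sketch (the choice $f(x)=\sqrt{x}$, for which $x_0=2$, is illustrative):

```python
import math

def f_star(f, x, x0):
    """f*(x) = min{c >= 1 : f^(c)(x) <= x0} for x > x0, and 1 otherwise.
    The loop terminates whenever f is sublinear, since iterating f then
    eventually drops below x0."""
    if x <= x0:
        return 1
    c = 0
    while x > x0:
        x = f(x)
        c += 1
    return c

# For f(x) = sqrt(x) we have x0 = 2, and f* grows as Theta(log log x):
# 65536 -> 256 -> 16 -> 4 -> 2 takes four iterations.
assert f_star(math.sqrt, 2 ** 16, 2) == 4
```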
\begin{theorem}\label{T:inductiveindep} Let $f$ be a strongly sublinear function with $f(x)\ge 1$ for all $x\ge 1$. For every set $L$, the graph $G_f(L)$ is $O(1)$-simplicial. The corresponding ordering is given by non-decreasing sensitivity. \end{theorem} \begin{proof} Order the links by non-decreasing sensitivity (ties broken arbitrarily). Fix a link $i\in L$, and let $T$ be the set of its neighbors $j$ in $G_f$ with ${\mathfrak l}_j\ge{\mathfrak l}_i$. We shall show that the links in $T$ of sensitivity greater than $c {\mathfrak l}_i$ form a single clique, for some constant $c=c_f$ (specified below). The links $j\in T$ of sensitivity ${\mathfrak l}_j\le c {\mathfrak l}_i$ can be covered with a constant number of cliques, by Lemma~\ref{P:simpleset}. Let $j,k$ be two links in $T$, with ${\mathfrak l}_k\ge {\mathfrak l}_j\ge c{\mathfrak l}_i$. It suffices to show that $j$ and $k$ are adjacent in $G_f$. By Lemma~\ref{L:triangles}, we have that $ d_{jk}d_{kj} \le 3{\mathfrak l}_i{\mathfrak l}_kf({\mathfrak l}_k/{\mathfrak l}_i) + (2/3){\mathfrak l}_j{\mathfrak l}_k. $ Since $f(x)$ is strongly sublinear, we can choose the constant $c$ such that $f(x)/x\le (1/9)f(y)/y$ for all $x,y$ with $x\ge c y$. Hence, $f({\mathfrak l}_k/{\mathfrak l}_i)/({\mathfrak l}_k/{\mathfrak l}_i) \le (1/9)f({\mathfrak l}_k/{\mathfrak l}_j)/({\mathfrak l}_k/{\mathfrak l}_j)$, provided that ${\mathfrak l}_k/{\mathfrak l}_i\ge c{\mathfrak l}_k/{\mathfrak l}_j$, i.e., ${\mathfrak l}_j\ge c{\mathfrak l}_i$. Thus, \[ d_{jk}d_{kj} \le (1/3){\mathfrak l}_j{\mathfrak l}_kf({\mathfrak l}_k/{\mathfrak l}_j) + (2/3){\mathfrak l}_j{\mathfrak l}_k\le {\mathfrak l}_j{\mathfrak l}_kf({\mathfrak l}_k/{\mathfrak l}_j), \] since $f({\mathfrak l}_k/{\mathfrak l}_j)\ge 1$. Hence $j$ and $k$ are adjacent in $G_f$, as claimed. \end{proof} \section{Feasibility in the Physical Model} \label{s:feas} In this section, we derive the graphs ${G_{hi}}$ that achieve the sandwiching property for the physical model.
The discussion is split into two parts: the case of general thresholds, and the special case of uniform thresholds. The threshold $\beta_i$ of link $i$ (formally defined below) is the factor by which the interference must be smaller than the signal, in order to have a successful transmission on $i$, and ``uniform'' refers to the assumption that all links have equal thresholds. \begin{theorem} The graph ${G_{hi}}=G_{\gamma x^\delta}$, for appropriate constants $\gamma>1$ and $\delta\in (0,1)$, together with ${G_{lo}}$ defined in Sec.~\ref{s:conflict}, gives an $O(\log\log{\Delta})$-sandwich of the physical model with general thresholds. \end{theorem} As a bonus, this approach uses only \emph{oblivious power assignments}, namely assignments where the power of every link depends only on its length (and global parameters, such as the maximum link length). Moreover, the selected graph ${G_{hi}}$ guarantees \emph{bi-directional} feasibility, meaning that the feasible sets remain so even after one reverses the directions of a subset of links. For uniform thresholds, we consider the graph ${G_{hi}}=G_{\gamma\widehat{\log}}$, for a constant $\gamma>1$, where $\widehat{\log}(x)=\max(\log^{t}(x), 1)$, for a constant $t>1$ (specified in Sec.~\ref{s:unithresholds}). \begin{theorem} The graph $G_{\gamma\widehat{\log}}$, for an appropriate constant $\gamma>1$, together with ${G_{lo}}$ defined in Sec.~\ref{s:conflict}, gives an $O(\log^*{\Delta})$-sandwich of the physical model with uniform thresholds. \end{theorem} In this case too, independent sets in ${G_{hi}}$ are bidirectionally feasible (potentially using different power assignments for different orientations of links). Note that by the results of Sec.~\ref{s:conflict}, the tightness provided by the graph $G_{\gamma x^\delta}$ is $O(\log\log{\Delta})$, while the tightness provided by $G_{\gamma \widehat{\log}}$ is $O(\log^*\Delta)$.
Therefore, the main goal towards the proof of the theorems above is to choose the parameters of the proposed conflict graphs so as to guarantee feasibility. Thms.~\ref{T:obliviouspowers} and~\ref{T:globalmain} provide these results. Before proceeding to the proofs, we give the formal definitions of the physical model. \subsection{Model} \label{s:model} \textbf{Feasibility.} The nodes have adjustable transmission power levels. A \emph{power assignment} for the set $L$ is a function $P:L\rightarrow \mathbb{R}_+$. For each link $i$, $P(i)$\label{G:power} defines the power level used by the sender node $s_i$. In the \emph{physical model} of communication, when using a power assignment $P$, a transmission of a link $i$ is successful if and only if \begin{equation}\label{E:sinr} SIR(S,i)=\frac{P(i)/l_i^{\alpha}}{\sum_{j\in S\setminus \{i\}} P(j)/d_{ji}^{\alpha}}> \beta_i, \end{equation} where $\beta_i\ge 1$\label{G:beta} denotes the minimum signal-to-interference ratio required for link $i$, $\alpha$\label{G:alpha} is the so-called path loss exponent and $S$ is the set of links transmitting concurrently with link $i$. Here, $P(i)/d^\alpha$ is the power of the sender node $s_i$ received at a distance $d$ from it; hence, the left-hand side of (\ref{E:sinr}) is in fact the ratio of the intended signal over the accumulated interference at the receiver $r_i$. Note that we omit the noise term in the formula above, since we focus on interference-limited networks. This can be justified by the fact that one can simply slightly decrease the data rates to make the effect of the noise negligible, and then restore the rates by paying only constant factors in approximation. This is further elaborated in the last paragraph of this subsection. A set $S$ of links is called $P$-\emph{feasible} if the condition~(\ref{E:sinr}) holds for each link $i\in S$ when using power assignment $P$. We say $S$ is \emph{feasible} if there exists a power assignment $P$ for which $S$ is $P$-feasible.
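The feasibility condition~(\ref{E:sinr}) translates directly into a check. A minimal sketch in the Euclidean plane (the instance, the unit powers, and the parameters $\alpha=3$, $\beta_i=2$ are illustrative assumptions):

```python
def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def sir(links, power, alpha, i, S):
    """SIR of link i against the other links of S; links[i] = (sender, receiver)."""
    s_i, r_i = links[i]
    signal = power[i] / dist(s_i, r_i) ** alpha
    interference = sum(power[j] / dist(links[j][0], r_i) ** alpha
                       for j in S if j != i)
    return float('inf') if interference == 0 else signal / interference

def feasible(links, power, alpha, beta, S):
    """P-feasibility: the SIR condition must hold for every link of S."""
    return all(sir(links, power, alpha, i, S) > beta[i] for i in S)

alpha, beta, power = 3, {0: 2, 1: 2}, {0: 1.0, 1: 1.0}
far = {0: ((0, 0), (1, 0)), 1: ((100, 0), (101, 0))}   # well separated
near = {0: ((0, 0), (1, 0)), 1: ((2, 0), (3, 0))}      # strong mutual interference
assert feasible(far, power, alpha, beta, {0, 1})
assert not feasible(near, power, alpha, beta, {0, 1})
```

The same check underlies $P$-feasibility for any fixed power assignment; plain feasibility then asks whether some assignment $P$ passes it.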
We assume that $\alpha>m$, where $m$ is the doubling dimension of the space; this corresponds to the standard assumption $\alpha >2$ in the Euclidean plane, which is necessary to ensure a degree of locality in the communications. \textbf{Sensitivity.} We define the sensitivity of link $i$ as ${\mathfrak l}_i=\beta_i^{1/\alpha} l_i$. We call a set $S$ of links \emph{equilength} if for every two links $i,j\in S$, ${\mathfrak l}_i \le 2{\mathfrak l}_j$, i.e., ${\Delta}(S) \le 2$. Note that with the introduction of sensitivity, the feasibility constraint~(\ref{E:sinr}) becomes: \[ \frac{P(i)}{{\mathfrak l}_i^\alpha} > \sum_{j\in S\setminus \{i\}}\frac{P(j)}{d_{ji}^{\alpha}}\ . \] \textbf{Power Control.} Different power control regimes give different notions of feasibility. Guaranteeing just feasibility might require \emph{global power control}, i.e., optimizing the power assignment based on the whole network state. Another option is \emph{oblivious power assignments}, where the power level of a link depends only on local parameters. We will work with a family of oblivious power assignments $P_\tau$ parameterized by $\tau\in (0,1)$, where $P_\tau(i)\sim {\mathfrak l}_i^{\tau\alpha}$ for each link $i$. Oblivious power assignments are preferable because they are simple and robust to link churn, but they may give worse performance than global power control. \textbf{Bi-directional Feasibility.} Our positive results hold with a stronger notion of \emph{bi-directional feasibility}, where a set $S$ of links is called bi-directionally feasible if it is feasible and remains so even if we reverse the directions of a subset of links in $S$ (i.e., switch the roles of senders and receivers). Note that this might require different power assignments for different orientations. For an oblivious power assignment $P_\tau$, we say a set $S$ is bi-directionally $P_\tau$-feasible if it is $P_\tau$-feasible with every orientation of links.
Note also that bi-directional feasibility is merely a ``bonus'' from our approach; we still compete with the optima of the original model. \textbf{Ambient Noise and Sensitivity.} The complete condition for signal reception in the physical model is as follows: $P(i)/{\mathfrak l}_i^{\alpha} > \sum_{j\in S\setminus \{i\}} P(j)/d_{ji}^{\alpha} + N_i$, where $N_i\ge 0$ is the \emph{ambient noise} at the receiver $r_i$. We assume, as done in the majority of the related work, that $P(i)\ge cN_i{\mathfrak l}_i^\alpha$ holds for a constant $c>1$ and every link $i$. The rationale behind this assumption is to exclude \emph{weak} or \emph{noise-limited} links from consideration, namely links that can tolerate very little interference. Note that $P(i)\ge N_i{\mathfrak l}_i^\alpha$ is necessary for the link to be usable even without interference. Scheduling weak links can be considered a separate problem, and is further discussed in Sec.~\ref{ss:weaklb}. On the other hand, the assumption $P(i)\ge cN_i{\mathfrak l}_i^\alpha$ can be used to suppress the noise term, as follows. Given a power assignment $P$ and a number $t>1$, let us call a set $S$ of links \emph{$t$-strong} if, for each link $i\in S$, $P(i)/{\mathfrak l}_i^{\alpha} > t\cdot \sum_{j\in S\setminus \{i\}} P(j)/d_{ji}^{\alpha}+N_i$. Note that a $1/(1-1/c)$-strong set with zero noise ($N_i=0$) is feasible with non-zero noise: $$P(i)/{\mathfrak l}_i^{\alpha} -N_i\ge (1-1/c)P(i)/{\mathfrak l}_i^{\alpha}> \sum_{j\in S\setminus \{i\}} P(j)/d_{ji}^{\alpha}.$$ Hence, instead of working with a non-zero noise term, we can work with $N_i=0$ and ${\mathfrak l}_i'={\mathfrak l}_i/(1-1/c)^{1/\alpha}$. The following result (a reformulation of \cite[Cor. 2]{HB15}) shows that this only affects the constant factors in our approximations. It also justifies our assumption in Sec.~\ref{s:conflict} that ${\mathfrak l}_i\ge 4l_i$: if the latter does not hold, we simply scale ${\mathfrak l}_i$ by a factor of $4$, for all $i$.
\begin{theorem}\cite{HB15}\label{T:signalstrengthening} For any $t\ge 1$, every $1$-strong set can be partitioned into $\left\lceil 2t\right\rceil$ sets that are $t$-strong. \end{theorem} \subsection{Feasibility of Independent Sets: General Thresholds} The main technical task is showing feasibility: that each independent set $S$ in $G_f$ corresponds to a feasible set of links. We break the task into bounding the interference on a given link $i$ in $S$ from links in $S$ with less (more) sensitivity than $i$ in Lemma~\ref{P:mainlemma1} (Lemma~\ref{P:mainlemma2}), respectively. Both of those lemmas are based on splitting $S$ into sets of roughly equal lengths and bounding the resulting interference as a geometric sum. The bound for the interference from an equilength set is given in Lemma~\ref{P:oblcore}, whose proof again bounds a geometric sum, this time over the contributions of links at different distances from the link $i$. The actual bound of that inner geometric sum is given in Lemma \ref{L:summation}, which is a variation of a standard argument involving concentric annuli around the link $i$. Additionally, Lemma \ref{L:distreduction} bounds from below the distance between links that are non-adjacent in $G_f$. The argument for uniform thresholds (Sec.~\ref{s:unithresholds}) follows the same pattern, but is somewhat simpler. Now to the formal arguments. Our goal is to identify constants $\gamma,\delta$ such that each independent set in ${G_{hi}}=G_{\gamma x^\delta}$ is feasible. We show that this can be achieved by using an oblivious power assignment $P_\tau$, for an appropriate $\tau \in (0,1)$. Moreover, this even holds for bi-directional feasibility. \begin{theorem}\label{T:obliviouspowers} Let $\delta_0=\frac{\alpha-m+1}{2(\alpha-m) + 1}$. If $\delta\in (\delta_0,1)$ and the constant $\gamma>1$ is large enough, there is a value $\tau \in (0,1)$ such that each independent set in $G_{\gamma x^\delta}$ is bidirectionally $P_{\tau}$-feasible.
\end{theorem} First, we introduce a measure of interference under power $P_\tau$. The interference of link $j$ on link $i$ is \[ I_{\tau}(j,i)= \frac{{\mathfrak l}_j^{\tau\alpha}{\mathfrak l}_i^{(1-\tau)\alpha}}{d_{ji}^\alpha} \] when $j\ne i$, and $I_{\tau}(i,i)=0$. The interference of the other links in a set $S$ on $i$ is $I_{\tau}(S,i) = \sum_{j\in S\setminus \{i\}} I_{\tau}(j,i)$. Showing feasibility of a set $S$ under $P_\tau$ is equivalent to showing that $I_{\tau}(S,i) \le 1$ for each link $i\in S$. We bound this in Lemma~\ref{P:mainlemma1} (Lemma \ref{P:mainlemma2}) for links with less (more) sensitivity than $i$, respectively, and derive the choices for $\tau$ based on those bounds. At a high level, the argument proceeds by splitting $S$ into groups of roughly equal length and equal distance from $i$ and bounding the size of these groups as well as their interference on $i$. This is then combined into a double geometric sum that converges to a constant that can be made smaller than one. \begin{proof}[Proof of Thm.~\ref{T:obliviouspowers}] Consider any $\delta\in (\delta_0,1)$. Lemmas~\ref{P:mainlemma1} and \ref{P:mainlemma2} bound the interference ($I_\tau$) from links with less and more (resp.) sensitivity than $i$ that form an independent set of links in $G_{\gamma x^\delta}$. If $\gamma$ is sufficiently large, the two bounds add up to less than one. The lemmas require different constraints on $\tau$, but it can be checked that when $\delta\in (\delta_0, 1)$, $b:=1-\delta\cdot \frac{\alpha-m}{\alpha} < e:=1-(1-\delta)\cdot\frac{\alpha - m + 1}{\alpha}$, and hence $\tau$ can be chosen to be any point in the interval $(b,e)$. \end{proof} \begin{lemma}\label{P:mainlemma1} Let $\gamma>2$ and $ \tau > 1- \delta (\alpha-m)/\alpha$. If $S$ is a set of links that is independent in $G_{\gamma x^\delta}$, and $i$ is a link in $S$ satisfying ${\mathfrak l}_i\ge\max_{j\in S}{\mathfrak l}_j$, then $ I_{\tau} (S, i)= O\left(\gamma^{-\alpha/2}\right).
$ \end{lemma} \begin{proof} We partition $S$ into equilength subsets $ L_t=\{j\in S: 2^{t-1}{\mathfrak l}_0 \leq {\mathfrak l}_j<2^t {\mathfrak l}_0\}, $ $t=1,2,\ldots,$ where ${\mathfrak l}_0=\min_{j\in S}\{{\mathfrak l}_j\}$, and bound the contribution of each $L_t$ separately. The independence condition between $i$ and any other link $j\in S$ is $d_{ij}d_{ji} > \gamma {\mathfrak l}_i^{1+\delta}{\mathfrak l}_j^{1-\delta}$. Using Lemma~\ref{L:distreduction}, we obtain the more convenient bound $d(i,j)>\gamma' {\mathfrak l}_i^{\delta}{\mathfrak l}_j^{1-\delta}$, where $\gamma'=\frac{\gamma}{\sqrt{\gamma+1}+1}-1=\Theta(\sqrt{\gamma})$. Fix a set $L_t$ and let ${\mathfrak m}_t=\min_{j\in L_t}{\mathfrak l}_j$. We similarly observe that $d(j,k) > \gamma'{\mathfrak m}_t$ for all distinct $j,k\in L_t$. Hence, Lemma~\ref{P:oblcore} applies to $L_t$ with parameters $\gamma=\gamma'$ and $\mu=\delta$, giving us the bound \[ {I_{\tau}(L_t,i)} = O\left(\gamma^{-\alpha/2}\left(\frac{{\mathfrak m}_t}{{\mathfrak l}_i}\right)^{\mu(\alpha-m) - (1-\tau)\alpha }\right). \] We combine the bounds for $L_t$ into a geometric series for $S$: \[ {I_{\tau}(S,i)} = \sum_{t=1}^{\infty}{I_{\tau}(L_t,i)} \le \frac{O(\gamma^{-\alpha/2})}{{\mathfrak l}_i^{\mu(\alpha-m) - (1-\tau)\alpha}}\sum_{t=0}^{\lceil\log{{\mathfrak l}_i/{\mathfrak l}_0}\rceil}{(2^{t}{\mathfrak l}_0)^{\mu(\alpha-m) - (1-\tau)\alpha}}. \] Recall that we assumed $\tau > 1- \delta (\alpha-m)/\alpha$; hence, $\mu(\alpha-m) - (1-\tau)\alpha> 0$. Thus, the last sum is dominated by its largest term and is bounded by $O({\mathfrak l}_i^{\mu(\alpha-m) - (1-\tau)\alpha})$, which implies the claim. \end{proof} \begin{lemma}\label{P:mainlemma2} Let $\gamma>2$ and $\tau < 1- (1-\delta)(\alpha - m + 1)/\alpha$. If $S$ is a set of links that is independent in $G_{\gamma x^\delta}$, and $i$ is a link in $S$ satisfying ${\mathfrak l}_i = \min_{j\in S}{\mathfrak l}_j$, then $ I_{\tau} (S, i)= O\left(\gamma^{-\alpha/2}\right). $ \end{lemma} \begin{proof} We proceed as in Lemma~\ref{P:mainlemma1}.
Let us split $S$ into equilength subsets $L_1, L_2,\dots$, where $ L_t=\{j\in S: 2^{t-1}{\mathfrak l}_i \leq {\mathfrak l}_j<2^t {\mathfrak l}_i\}. $ Let ${\mathfrak m}_t=\min_{j\in L_t}{\mathfrak l}_j \ge 2^{t-1}{\mathfrak l}_i$. Using independence and applying Lemma~\ref{L:distreduction}, we have $d(i,j) > \gamma'{\mathfrak l}_j^{\delta}{\mathfrak l}_i^{1-\delta}$ for each $j\in S\setminus\{i\}$ (recall that ${\mathfrak l}_i\le {\mathfrak l}_j$), and $d(j,k)>\gamma' {\mathfrak m}_t$ for all distinct $j,k\in L_t$, where $\gamma'=\frac{\gamma}{\sqrt{\gamma+1}+1}-1=\Theta(\sqrt{\gamma})$. We apply Lemma~\ref{P:oblcore} with $\gamma = \gamma'$ and $\mu=1-\delta$ to the set $L_t$ and link $i$ to obtain: \[ {I_{\tau}(L_t,i)} = O\left(\gamma^{-\alpha/2}\left(\frac{{\mathfrak l}_i}{{\mathfrak m}_t}\right)^{(1-\tau)\alpha - \mu(\alpha-m+1)}\right) =O\left(\gamma^{-\alpha/2}\left(\frac{1}{2^{t-1}}\right)^{(1-\tau)\alpha - \mu(\alpha-m+1)}\right). \] Our assumption on $\tau$ implies that $\eta :=(1-\tau)\alpha - \mu(\alpha-m+1) > 0$. Thus, we have: $ {I_{\tau}(S,i)}= \sum_{t=1}^{\infty}{{I_{\tau}(L_t,i)}} = O\left(\gamma^{-\alpha/2}\right)\cdot \sum_{t=0}^{\infty}{\frac{1}{2^{\eta t}}} = O\left(\gamma^{-\alpha/2}\right). $ \end{proof} The next lemma shows that when two links are independent in the conflict graph $G_f$, they must also be well separated in space. \begin{lemma}\label{L:distreduction} Let $f$ be a non-decreasing function, such that $f(x)\le \gamma x$, for a constant $\gamma>2$ and for all $x\ge 1$. Let links $i,j$ be independent in $G_f$, and such that ${\mathfrak l}_i\ge {\mathfrak l}_j$. Then $d(i,j)>\frac{1}{\sqrt{\gamma+1}+1}{\mathfrak l}_j f({\mathfrak l}_i/{\mathfrak l}_j)-l_j$. \end{lemma} \begin{proof} Let $D=\max\{d_{ij},d_{ji}\}$ and $d=\min\{d_{ij},d_{ji}\}$. Let $z>2$ be a parameter. Consider the following two cases: \begin{enumerate} \item $D > z{\mathfrak l}_i$.
The triangle inequality and the assumption $\beta_i,\beta_j\ge 1$ imply that \[d(i,j)\ge D - l_i - l_j > (z-2){\mathfrak l}_i \ge \frac{z-2}{\gamma}\cdot {\mathfrak l}_j \cdot \gamma({\mathfrak l}_i/{\mathfrak l}_j)\ge \frac{z-2}{\gamma}\cdot {\mathfrak l}_j f({\mathfrak l}_i/{\mathfrak l}_j).\] \item $D\le z{\mathfrak l}_i$. The independence condition implies that $d_{ij}d_{ji}>{\mathfrak l}_i{\mathfrak l}_j f({\mathfrak l}_i/{\mathfrak l}_j)$. Hence, in this case, $d>\frac{1}{z}\cdot {\mathfrak l}_j f({\mathfrak l}_i/{\mathfrak l}_j)$. By the triangle inequality, $d(i,j)\ge d-l_j$. \end{enumerate} Choosing $z=\sqrt{\gamma+1}+1$ implies the claim. \end{proof} The next lemma is the common part of Lemmas~\ref{P:mainlemma1} and~\ref{P:mainlemma2}: It bounds the interference from a group of equilength links that are both well separated internally and sufficiently far from the link $i$, and covers both the cases of links more and less sensitive than link $i$. We separate its core technical part into Lemma~\ref{L:summation}, which will be reused later in Sec.~\ref{s:unithresholds}. \begin{lemma}\label{P:oblcore} Let $\mu, \tau \in (0,1)$ and $\gamma \ge 1$ be parameters, let $S$ be a set of equilength links such that for all distinct $j,k\in S$, $d(j,k) > \gamma {\mathfrak l}_0$, where ${\mathfrak l}_0=\min_{j\in S}{\mathfrak l}_j$, and let $i$ be a link satisfying $d(i,j) > \gamma {\mathfrak l}_i^\mu{\mathfrak l}_j^{1-\mu}$ for all $j\in S\setminus\{i\}$. Then, \[ \displaystyle I_{\tau}(S,i)= O\left(\gamma^{-\alpha} \left(\frac{{\mathfrak l}_i}{{\mathfrak l}_0}\right)^{(1-\tau)\alpha - \mu(\alpha-m)}\cdot \min\left\{1,\frac{{\mathfrak l}_i}{{\mathfrak l}_0}\right\}^{-\mu} \right). \] \end{lemma} \begin{proof} Consider first the subset $S' \subseteq S$ of links that are closer to $r_i$ than to $s_i$, \[ S'=\{j\in S: \min\{d(s_j,r_i), d(r_j,r_i)\} \leq \min\{d(s_j,s_i), d(r_j,s_i)\}\}.
\] For each link $j\in S'$, let $p_j$ denote the endpoint of $j$ that is closest to $i$'s receiver, $r_i$. Denote $q= ({\mathfrak l}_i/{\mathfrak l}_0)^{\mu}$. Consider the nested subsets $S_1,S_2,\dots$ of $S'$, where \[ S_r=\{j\in S': d(j,i)=d(p_j,r_i)\leq \gamma(q {\mathfrak l}_0+(r-1){\mathfrak l}_0)\}. \] Note that $S_1=\emptyset$: for every $j\in S'$, $d(j,i)>\gamma q{\mathfrak l}_0$. Fix $r>1$. For every $j,k\in S_r$, we have that $d(p_j,p_k) \ge d(j,k) > \gamma{\mathfrak l}_0$ and that $d(p_j,r_i)\leq \gamma (q {\mathfrak l}_0+(r-1){\mathfrak l}_0)$ for each $j\in S_r$ (by the definition of $S_r$). By the doubling property of the metric space, \begin{equation} |S_r|=|\{p_j\}_{j\in S_r}|\leq C\cdot \left(\frac{\gamma(q{\mathfrak l}_0+(r-1){\mathfrak l}_0 )}{\gamma{\mathfrak l}_0}\right)^{m} = C \left(q+r-1\right)^{m}.\label{E:strs} \end{equation} Note also that ${\mathfrak l}_j \leq 2{\mathfrak l}_0$ and $d(i,j) \ge \gamma(q{\mathfrak l}_0+(r-2){\mathfrak l}_0)$ for every link $j\in S_r\setminus S_{r-1}$ with $r>1$; hence, by the definition of $I_\tau$, \begin{equation} I_{\tau} (j, i) \le \frac{{\mathfrak l}_j^{\tau\alpha} {\mathfrak l}_i^{(1 - \tau)\alpha}}{d(i,j)^\alpha} \leq \left(\frac{{\mathfrak l}_i}{{\mathfrak l}_0}\right)^{(1-\tau)\alpha}\left(\frac{2{\mathfrak l}_0}{\gamma(q{\mathfrak l}_0+(r-2){\mathfrak l}_0)}\right)^\alpha =\frac{Z_i}{\left(q+r-2\right)^\alpha},\label{E:feqs} \end{equation} where $Z_i=(2/\gamma)^\alpha ({\mathfrak l}_i/{\mathfrak l}_0)^{(1-\tau)\alpha}$. Applying Lemma~\ref{L:summation} to the set $S'=\cup_{r\ge 2}S_r$, with parameters $q$, $h=C$, and function $A=I_\tau(\cdot,i)/Z_i$, we get that ${I_{\tau}(S',i)} = O(Z_i)\cdot q^{m-\alpha}\cdot O(\min(1,q)^{-1})$, which implies the desired bound for the set $S'$ by plugging in the values of $q$ and $Z_i$. The proof holds symmetrically for the set $S \setminus S'$ of links closer to the sender $s_i$ than to the receiver $r_i$.
We can define the set $\{p_j\}_{j\in S\setminus S'}$, where $p_j$ is the endpoint of link $j$ that is closest to $s_i$, for each $j\in S\setminus S'$. The rest of the proof is identical, with $r_i$ replaced by $s_i$ in the formulas. \end{proof} The following lemma is used to combine the interference contributions of the groups at different distances from $i$, represented by the sets $S_r$. Intuitively, if the number of links within a given distance grows polynomially more slowly (degree $m$) than the rate at which individual interference contributions decay (degree $\alpha$), then the total interference converges as a geometric-like sum. \begin{lemma}\label{L:summation} Let $q,h>0$ be parameters. Consider sets $\emptyset=S_1\subseteq S_2\subseteq S_3\subseteq\dots$ of links and let $S = \cup_{r\ge 1} S_r$. Suppose $|S_r| \le h\cdot (q+r-1)^m$ and let $A:S\rightarrow \mathbb{R}_+$ be a function such that $A(j)\le 1/(q+r-2)^\alpha$, for all $r>1$ and $j\in S_r\setminus S_{r-1}$. Then, $\sum_{j\in S}A(j) = O(h)\cdot q^{m-\alpha}\cdot \min(1,q)^{-1}$. \end{lemma} \begin{proof} First, using the assumptions and sum rearrangements we have (explanations below): \begin{align*} \sum_{S}{A(j)} & = \sum_{S_1}A(j) + \sum_{r\geq 2}{\sum_{S_r\setminus S_{r-1}}{A(j)}} \nonumber \\ & \le \sum_{r\geq 2}\frac{|S_r\setminus S_{r-1}|}{(q+r-2)^\alpha} \nonumber \\ & = \sum_{r\geq 2} \frac{|S_r|- |S_{r-1}|}{(q+r-2)^\alpha} \nonumber \\ & = \sum_{r\geq 2}|S_r| \left(\frac{1}{(q+r-2)^\alpha} - \frac{1}{(q+r-1)^\alpha}\right) \nonumber, \end{align*} where the inequality uses $S_1=\emptyset$ and the upper bounds on $A$, the next equality holds because $\{S_r\}$ is a nested sequence, and the last equality follows by summation by parts, using again that $S_1=\emptyset$.
The convexity of the function $f(x)=x^{-\alpha}$ implies\footnote{For every convex differentiable function $f$ and $x,y\in \textbf{dom}\ f$, $f(x)-f(y)\ge f'(y)(x-y)$.} that $1/(x-1)^\alpha - 1/x^\alpha \le \alpha/(x-1)^{\alpha+1}$. Thus, continuing and using the bound on $|S_r|$, \[ \sum_{S}{A(j)} \le \alpha \sum_{r\geq 2} \frac{|S_r|}{(q+r-2)^{\alpha+1}} \le \alpha h\cdot \sum_{r\geq 0}{(q+r)^{m-\alpha-1}} \ . \] To complete the proof, note that $m-\alpha-1<-1$, so the sum converges and can be bounded by an integral, as follows: \[ \sum_{r\geq 0}{(q+r)^{m-\alpha-1}} \le q^{m-\alpha-1} + \int_{q}^\infty x^{m-\alpha-1} \, dx = q^{m-\alpha-1} + \frac{q^{m-\alpha}}{\alpha-m} = q^{m-\alpha}\cdot O(\min(1,q)^{-1})\ . \] \end{proof} \subsection{Feasibility of Independent Sets: Uniform Thresholds}\label{s:unithresholds} Now let us consider the special case when all thresholds $\beta_i$ are fixed at a uniform value $\beta\ge 1$, which, however, may be arbitrary, i.e., it may depend on network size or link lengths. We show that in this case feasibility can be guaranteed with much smaller tightness, namely $O(\log^*\Delta)$. This is achieved by choosing a slow-growing function $f$ in the definition of the conflict graphs, and by using global power control. For dealing with global power control, we use the convenient sufficient condition for feasibility due to Kesselheim \cite{kesselheimconstantfactor}, which defines a purely geometric constraint on the set of links that implies feasibility. Consider an additive \emph{interference operator} $I$ defined as follows. For links $i,j$, let $I(i,j)=\frac{\beta l_i^\alpha}{d(i,j)^\alpha}=\frac{{\mathfrak l}_i^\alpha}{d(i,j)^\alpha}$ and define $I(i,i)=0$ for simplicity of notation. The operator $I$ is additively expanded: for a set $S$ of links and a link $i$, let $I(S,i)=\sum_{j\in S}I(j,i)$ and $I(i,S)=\sum_{j\in S}I(i,j)$. 
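For intuition, the operator $I$ and the sufficient condition of Thm.~\ref{T:kesselheimconstant} can be sketched as follows (an illustrative sketch with hypothetical helpers, assuming plane geometry; $\alpha=3$ and $\beta=1$ are example values):

```python
import math

# Illustrative sketch of the additive interference operator I(j,i) =
# beta * l_j^alpha / d(i,j)^alpha and of Kesselheim's sufficient condition
# I(L) < 1/(12 * 3^alpha). A link is a pair ((sx, sy), (rx, ry)).

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def length(link):
    return dist(link[0], link[1])

def link_dist(a, b):
    # d(a,b): minimum distance between endpoints of the two links
    return min(dist(p, q) for p in a for q in b)

def I_pair(j, i, beta, alpha):
    return beta * length(j) ** alpha / link_dist(i, j) ** alpha

def kesselheim_sufficient(links, beta=1.0, alpha=3.0):
    """True if I(L) < 1/(12*3^alpha), i.e., the maximum interference on any
    link from the links no longer than it stays below the threshold."""
    I_L = max(sum(I_pair(j, i, beta, alpha)
                  for j in links if j is not i and length(j) <= length(i))
              for i in links)
    return I_L < 1.0 / (12 * 3 ** alpha)
```

Note that the condition only involves shorter links interfering with longer ones, which is what makes it a purely geometric (and sender/receiver-symmetric) criterion.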
We will use the notation $I(L)=\max_{i\in L}{I(\{j\in L: l_j\le l_i\},i)}$, which denotes the maximum influence on any link by shorter links in $L$. \begin{theorem}\cite{kesselheimconstantfactor}\label{T:kesselheimconstant} For any set $L$ of links in a metric space, if $I(L)<\frac{1}{12\cdot 3^{\alpha}}$, then $L$ is feasible. \end{theorem} We consider the conflict graph ${G_{hi}}=G_f$ with $f(x)=\gamma\widehat{\log}(x)=\gamma \max(\log^{2/(\alpha - m)}(x), 1)$ for an appropriate constant $\gamma>0$. We show that for a large enough constant $\gamma> 0$, independence in $G_{\gamma\widehat{\log}}$ implies feasibility. In particular, we show that if a set $S$ is independent then $I(S)= O(\gamma^{-\alpha/2})$. An appropriate choice of $\gamma$ yields feasibility via Thm.\ \ref{T:kesselheimconstant}. \begin{theorem}\label{T:globalmain} There is a constant $\gamma > 2$ such that every independent set in $G_{\gamma\widehat{\log}}$ is feasible. \end{theorem} The idea behind the proof is similar to the one for general thresholds. The main difference is that the absence of arbitrary (non-geometric) thresholds $\beta_i$ allows us to obtain feasibility with much less independence than before. Again, in order to bound the interference of an independent set of links on a (longer) link $i$, we split the whole set into equilength subsets, bound the interference of each equilength subset using Lemma~\ref{L:pcequilength}, and combine them into a series that converges when we choose $f(x)=\gamma\widehat{\log}(x)$. All this is done in the following lemma. \begin{lemma}\label{L:globalmain} Let $S$ be an independent set in $G_{\gamma\widehat{\log}}$ with $\gamma> 2$. Then $ I(S) = O\left(\gamma^{-\alpha/2} \right). $ \end{lemma} \begin{proof} Fix an arbitrary link $i\in S$, and denote $S_i^-=\{j\in S: l_j\le l_i\}$. It suffices to show that $I(S_i^-,i)=O\left(\gamma^{-\alpha/2} \right)$. Let $\gamma'$ be such that $\gamma\widehat{\log}(x) \le \gamma' x$, for every $x\ge 1$. 
Consider any link $j\in S_i^-$. Using independence and applying Lemma~\ref{L:distreduction}, we obtain that $d(i,j) > \gamma'' {\mathfrak l}_{j}\widehat{\log}(l_i/l_j)$, where $\gamma''=\frac{\gamma}{\sqrt{\gamma'+1}+1}-1=\Theta(\sqrt{\gamma})$. Similarly, for every pair of links $j,k\in S_i^-$ with $l_j\ge l_k$, we have $d(j,k)>\gamma'' {\mathfrak l}_{k}\widehat{\log}(l_j/l_k)\ge \gamma''\widehat{\log}(1) \cdot {\mathfrak l}_{k}$. Partition $S_i^-$ into equilength subsets $S_t = \{j\in S_i^- : l_i/2^{t} < l_j \leq l_i/2^{t-1}\},$ $t=1,2,\ldots$ Let $\ell_t$ denote the largest link length in $S_t$. The conditions of Lemma~\ref{L:pcequilength} hold for each $S_t$, so applying the lemma gives us the bound: \[ I(S_t,i) = O(1)\cdot (\gamma''\widehat{\log}(l_i/\ell_t))^{m-\alpha}/(\gamma'')^{m}=O(\gamma^{-\alpha/2})\cdot \widehat{\log}(l_i/\ell_t)^{m-\alpha}. \] Observe that $\log (l_i/\ell_t) \ge t-1$, so $\widehat{\log}(l_i/\ell_t)^{m-\alpha} \le \max(t-1,1)^{-2}$. Thus, \[ I(S_i^-,i)= \sum_{t\ge 1}{I(S_t,i)}= O(\gamma^{-\alpha/2})\cdot\left(1+ \sum_{t\ge 2}{(t-1)^{-2}}\right) = O(\gamma^{-\alpha/2}) \ , \] which completes the proof. \end{proof} The following lemma is the analogue of Lemma~\ref{P:oblcore}: It bounds the interference of an equilength independent set $S$ on a longer link $i$ that is independent from the links in $S$. \begin{lemma}\label{L:pcequilength} Let $f$ be a non-decreasing function such that $f(x)\geq 1$ whenever $x\geq 1$. Let $S$ be an equilength set of links such that $d(j,k)>f(1) \cdot \min\{{\mathfrak l}_j,{\mathfrak l}_k\}$ for all distinct $j,k\in S$, and let $i$ be a link such that, for each $j\in S\setminus\{i\}$, $l_i\ge l_j$ and $d(i,j)>{\mathfrak l}_j f(l_i/l_j)$. Then $ I(S,i) = O(1)\cdot f(l_i/\ell)^{m - \alpha}/f(1)^m, $ where $\ell$ denotes the largest link length in $S$. \end{lemma} \begin{proof} \def \hat \ell {\hat \ell} We proceed as in the proof of Lemma~\ref{P:oblcore}.
We treat the subset $S'$ of links in $S$ that are closer to $r_i$ than to $s_i$ and bound $I(S',i)$, while the symmetric case of $S\setminus S'$ is omitted. Let us denote $q=f(l_i/\ell)$ and $\hat \ell = \beta^{1/\alpha}\ell$. Note that $q\geq 1$ because $l_i/\ell\ge 1$. For a link $j\in S'$, let $p_j$ denote the endpoint of link $j$ that is closest to $r_i$, i.e., $d(i,j)=d(p_j,r_i)$. Consider the nested sequence of subsets $S_1\subseteq S_2\subseteq \dots\subseteq S'$, where \[ S_r=\{j\in S': d(j,i)=d(p_j,r_i)\leq q \hat \ell/2+(r-1)\hat \ell/2\}. \] We will apply Lemma~\ref{L:summation}. First, note that $S_1=\emptyset$: $S'$ is an equilength set with maximum link length $\ell$ and $f$ is non-decreasing, so by our independence assumption, $d(i,j)>{\mathfrak l}_j f(l_i/l_j)\ge q \hat \ell/2$. Next, let us bound $|S_r|$ using the doubling dimension of the metric space. Consider any $j,k\in S_r$ such that $l_j\geq l_k$. By our assumption, $d(p_j,p_k) \ge d(j,k)> f(1) \cdot \min\{{\mathfrak l}_j,{\mathfrak l}_k\}\geq f(1)\hat \ell/2.$ By the definition of $S_r$, $d(p_j,r_i)\leq q \hat \ell/2+(r-1)\hat \ell/2$ for each $j\in S_r$. Thus, \[ |S_r|=|\{p_j\}_{j\in S_r}|< C\cdot \left(\frac{ q\hat \ell/2+(r-1)\hat \ell/2 }{f(1)\hat \ell/2}\right)^{m} = \frac{C}{f(1)^m} \left(q+r-1\right)^{m}. \] Next, let us bound $\max_{j\in S_{r}\setminus S_{r-1}}\{I(j,i)\}$. For each $r>1$ and for any link $j\in S_r\setminus S_{r-1}$, we have that $l_j \leq \ell$ and $d(i,j) > q\hat \ell/2+(r-2)\hat \ell/2$; hence, $ I (j, i) = \frac{{\mathfrak l}_j^{\alpha}}{d(i,j)^\alpha} < \left(\frac{\hat \ell}{q\hat \ell/2+(r-2)\hat \ell/2}\right)^\alpha =\frac{2^\alpha}{\left(q+r-2\right)^\alpha}. $ Thus, we apply Lemma~\ref{L:summation} to the set $S'=\cup_{r\ge 2}S_r$ with parameters $q$, $h=C/f(1)^m$, and function $A=I(\cdot,i)/2^\alpha$, to obtain $I(S',i) =O(q^{m-\alpha}/f(1)^m)$, where we also use the fact that $q\ge 1$.
\end{proof} \section{Limitations of the Conflict Graph Method} \label{s:limitations} Our results are best possible (up to constant factors) in several different ways. Specifically, conflict graphs for the physical model: \begin{itemize}[noitemsep] \item are necessarily of the form we consider (for uniform data rates) (Sec.~\ref{ss:necessity}), \item incur a cost of an $\Omega(\log\log \Delta)$ factor for general data rates (Sec.~\ref{ss:genlb}), \item incur a cost of an $\Omega(\log^* \Delta)$ factor for uniform data rates (Sec.~\ref{ss:uniflb}), \item give only the trivial approximation (of $n$) as a function of the number of links (Sec.~\ref{ss:uniflb}), \item give no approximation (in terms of $\Delta$) in some non-doubling metrics (Sec.~\ref{ss:genmetrics-lb}), \item cannot represent uniform power (with non-trivial tightness) (Sec.~\ref{ss:uniformlb}), and \item cannot represent noise-limited networks (Sec.~\ref{ss:weaklb}). \end{itemize} Note that the instances we construct are embedded on the real line, i.e., in one-dimensional space. \subsection{Necessity} \label{ss:necessity} What kinds of graphs are conflict graphs? By a ``conflict graph formulation'' we mean a deterministic rule for forming graphs on top of a set of links. For it to be meaningful as a general-purpose mechanism, such a formulation cannot be too context-sensitive. We shall postulate some axioms (that by nature should be self-evident) that lead to a compact description of the space of possible conflict graph formulations. For simplicity, we focus on the case of uniform thresholds, also because this case lies at the heart of all our limitation results. \begin{axiom} A conflict graph formulation is defined in terms of the \emph{pairwise relationship} of links. \label{axiom:pairwise} \end{axiom} By nature, graphs represent pairwise relationships; conflict graph formulations are boolean predicates of pairs of links.
More specifically, though, we expect the conflict graph to be defined in terms of the relative standings of the link pairs. That is, the existence of an edge between link $i$ and link $j$ should depend only on the properties of the two links, not on other links in the instance. The only properties of note are the $\binom{4}{2} = 6$ distances between the nodes in the links. We refer to a \emph{conflict} between two links if the formulation specifies them to be adjacent in the conflict graph; otherwise, they are \emph{conflict-free}. \begin{axiom} A conflict graph formulation is \emph{invariant to translation and scaling}. Translating or scaling links by a fixed factor does not change the conflict relationship. \label{axiom:scale-free} \end{axiom} An essential feature of the SINR formula -- that distinguishes it from other formulations, like unit-disc graphs -- is that only relative distances matter. Thus, the positions of the nodes should not matter, only the pairwise distances, and only the relative factors among the distances. There is a practical limit to which links can truly grow, due to the ambient noise term. However, that only matters when lengths are very close to that limit; we will treat that case separately. \begin{axiom} A conflict formulation is \emph{monotonic} with increasing distances. \label{axiom:monotonicity} \end{axiom} The reasoning is that a conflict formulation should represent the degree of conflict between pairs of links, or their relative ``nearness''. Specifically, if two links conflict and their separation (i.e., one of the distances between endpoints on distinct links) decreases while the links stay of the same length, then the links still conflict. Similarly, if two links are conflict-free and the length of one of them decreases (while their separation stays unchanged), the links stay conflict-free. \begin{axiom} A conflict formulation should respect pairwise incompatibility. 
That is, if two links cannot coexist in a feasible solution, they should be adjacent in the conflict graph. \label{axiom:incompatible-pair} \end{axiom} \smallskip In the case of conflict graphs for links in the SINR model with arbitrary power control, we propose an additional axiom. \begin{axiom} A conflict formulation for links under arbitrary power control is symmetric with respect to senders and receivers. \label{axiom:symmetry} \end{axiom} Namely, it should not matter which endpoint of a link is the sender and which is the receiver when determining conflicts. The key rationale for this comes from Kesselheim's sufficient condition for feasibility, given here as Thm.~\ref{T:kesselheimconstant}. As we show in Sec.\ \ref{ss:uniflb}, this condition is also necessary in doubling metrics, up to constant factors. Thus, feasibility is fully characterized by a symmetric rule (modulo constant factors). As we shall see, the axioms and the properties of doubling metrics imply that only two distances really matter in the formulation of conflict graphs: the length of the longer link, and the distance between the nearest nodes on the two links (both scaled by the length of the shorter link). We now argue that all conflict formulations satisfying the above axioms are essentially of the form $G_f$, for a function $f$ (as defined in (\ref{eq:unifgraphdef})). They can only differ from $G_f$ by what can be accounted for by an appropriate constant factor in the definition of $f$. \begin{proposition} Every conflict graph formulation ${\cal K}$ is captured by $G_f$, for some non-decreasing function $f$. Namely, ${\cal K}$ is sandwiched between $G_g$ and $G_f$, i.e., $G_g(L) \subseteq {\cal K}(L) \subseteq G_{f}(L)$, for every link set $L$, where $g(x)=c_1f(c_2x)$, for some constants $c_1,c_2>0$.
\label{prop:gf-suffices} \end{proposition} \begin{proof} By Axiom \ref{axiom:pairwise}, ${\cal K}$ is a function of link pairs, more specifically, the distances among the four points. By Axiom \ref{axiom:scale-free}, we can use normalized distances, and will choose to factor out the length of the shorter link. By Axiom \ref{axiom:symmetry}, it does not matter which of them involve senders and which involve receivers. Now, consider two links $i = (s_i,r_i)$ and $j = (s_j,r_j)$, where $l_i \le l_j$. Let us denote for short $d = d(i,j)$. We aim to show that decisions regarding adjacency in ${\cal K}$ can be determined in terms of constant multiples of $d$ and $l_j$. First, recall that by Axiom \ref{axiom:incompatible-pair}, pairwise incompatible links must be adjacent in any conflict graph. As observed in Sec.~\ref{s:conflict}, this is encoded in ${G_{lo}}$, so we may restrict attention to independent sets in ${G_{lo}}$, where we have $d_{ij}d_{ji} \ge {\mathfrak l}_i{\mathfrak l}_j\ge 16l_il_j$. Let us first show that in this case, $d \ge 3l_i$. Indeed, assuming the opposite, and assuming w.l.o.g.\ $d_{ij} \ge d_{ji}$, we obtain, by the triangle inequality, that either $d_{ji}= d <3l_i$ and $d_{ij}\le d + l_i + l_j < 5l_j$, or $d_{ji}\le d+l_i<4l_i$ and $d_{ij}\le d+l_j<4l_j$, both cases contradicting the independence assumption. We can relate the other distances to a combination of $d(i,j)$ and $l_j$, using the triangle inequality. Assume w.l.o.g.\ that $d=d(s_i,s_j)$. First, observe that the distance $d(r_i,s_j)$ is at most a constant times the distance from $i$ to $j$, i.e., $d(s_i,s_j) \le d(r_i,s_j) \le d + l_i \le (1 + 1/3) d(s_i,s_j)$. Next, we claim that $d(s_i,r_j)$ and $d(r_i,r_j)$ are within a constant multiple of $q = \max(d, l_j/2)$. We have: $d(s_i,r_j) \ge l_j - d(s_i,s_j) = l_j - d$, and hence $d(s_i,r_j) \ge \max(d,l_j-d) \ge q$, and also $d(s_i,r_j) \le d + l_j \le 3q$. Similarly, $d(r_i,r_j)\ge q$ and $d(r_i,r_j) \le d + l_i + l_j \le 4q$.
It follows that all four distances between endpoints are within constant multiples of $d(i,j)$ and $l_j$. Hence, by monotonicity (Axiom \ref{axiom:monotonicity}), ${\cal K}$ is dominated by a conflict graph formulation ${\cal H}$ defined by a monotone boolean predicate of two variables: the length of the longer link $l_j$, and the distance $d(i,j)$ between the links (both scaled by the shorter link). However, an arbitrary monotone boolean predicate of two variables $x, y$ can be represented by a relationship of the form $y > f(x)$, for some monotonic function $f$. Thus, ${\cal K}$ is dominated by $G_f$, for some non-decreasing function $f$. Also, by the same arguments, ${\cal K}$ dominates $G_{c_1f(c_2x)}$ for constants $c_1,c_2>0$. \end{proof} Finally, we can observe that sub-linearity is necessary if one seeks non-trivial approximations. Namely, linear functions correspond to disc graphs, and Moscibroda and Wattenhofer \cite{moscibrodaconnectivity} gave an instance of a feasible set of links that induces a clique in disc graphs. The length diversity $\Delta$ in their construction is $2^n$; thus, the best approximation one can hope for is logarithmic in $\Delta$. This holds equally for any super-linear function, by the same construction. \subsection{Optimality of Tightness Bounds} We begin with a general result showing that the tightness bound of $O(f^*(\Delta))$ in Thm.~\ref{T:sandwich} cannot be improved. The result further shows that for the physical model, there is no choice of the lower bound graph ${G_{lo}}$ that can improve the tightness bound. Finally, this result shows that the number of links $n$ alone is not a meaningful measure for tightness. The result holds even for the special case when the links have uniform thresholds, and the nodes are arranged on the real line. The construction is based on the following technical observations.
On the one hand, it follows from Thm.\ \ref{T:kesselheimconstant} that any set of exponentially growing links arranged sequentially on the real line in order of length is (almost) feasible. On the other hand, given such a set $S$ of links on the line, a new link $j$ can be formed so that $j$ is adjacent (in $G_f$) to all the links in $S$ while the set $S\cup \{j\}$ stays feasible; the only requirement is that $j$ be long enough. Our construction then builds recursively on these ideas. \begin{theorem}\label{T:ndependence} Let $f(x)=\omega(1)$, and assume uniform threshold $\beta\ge 1$. For any integer $n > 0$, there is a \emph{feasible} set $L$ of $n$ links arranged on the real line, such that $G_f(L)$ is a clique, i.e., $\chi(G_f(L))=n$. Moreover, if $f(x)\ge g(x)$ ($x\ge 1$) for a strongly sub-linear increasing function $g(x)$ with $g(x)=\omega(1)$, then $n = \Omega(g^*(\Delta))$. \end{theorem} \begin{proof} Let us assume, for simplicity, that $\beta=1$; the argument extends straightforwardly to any constant $\beta> 1$. Consider a set of $2n$ links, $\{1,2,\dots,2n\}$, arranged consecutively from left to right on the real line. For each $i=1,2,\dots,2n-1$, the node $r_i$ is to the right of $s_i$, and shares the same location on the line with $s_{i+1}$, i.e., $r_i=s_i+l_i=s_{i+1}$. See Figure~\ref{fig:notcomplicated}. The lengths of the links are defined inductively, as follows. We set $l_1=1$, and for $i\ge 1$, we choose $l_{i+1}$ to be the minimum value satisfying: \begin{align} \label{E:feasibility} l_{i+1}&\ge cl_i \\ \label{E:clique}2d(i+1,j)=2d_{i+1,j}&\le l_jf(l_{i+1}/l_j)\mbox{ for all } j\leq i, \end{align} where $c\ge 2$ is a large enough constant, specified below. Such a value of $l_{i+1}$ can be chosen as follows. By the inductive hypothesis, we have $l_j\ge cl_{j-1}\ge 2l_{j-1}$ for $j=2,3,\dots,i$, which implies that $l_i> \sum_{j=1}^{i-1}{l_j}$. Then, we have that $d_{i+1,j}=\sum_{t=j+1}^i{l_t}< 2l_i$ for all $j\le i$.
Thus, it is enough to choose $l_{i+1}$ such that $l_{i+1}\ge cl_i$ and $4l_i \le l_jf(l_{i+1}/l_j)$ for all $j\le i$, which can be done using the assumption $f=\omega(1)$ and the fact that the values of $l_j$ for $j\le i$ are already fixed at this point. This completes the construction. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\textwidth]{notcomplicatedinstance} \caption{The construction in Thm.\ \ref{T:ndependence}.} \label{fig:notcomplicated} \end{center} \end{figure} First, observe that (\ref{E:clique}) implies that $G_f(L)$ is a clique. Indeed, consider two links $i,j$, such that $i>j$. Then $d_{ji}\le 2l_i$, which, combined with (\ref{E:clique}), shows that links $i,j$ are adjacent in $G_f$ (recall that $\beta=1$). Next, we prove feasibility. Consider the odd-numbered links $S=\{1,3,\dots,2n-1\}$. Let us fix a link $2k+1\in S$. Let $T=\{j\in S: l_j< l_{2k+1}\}$. Note that for each $j\in T$, $d(j,2k+1)\ge l_{2k}\ge c^{2k-j-1}$. We have that \[ I(T,2k+1)=\sum_{j\in T}{\frac{l_j^{\alpha}}{d(j,2k+1)^{\alpha}}}< \sum_{t=1}^\infty {c^{-t\alpha}}, \] where the last sum is a geometric series that can be made smaller than any constant, by choosing the constant $c$ appropriately. Thus, there is a choice of the constant $c$ for which $S$ is feasible, as per Thm.\ \ref{T:signalstrengthening}. This proves the first part of the theorem. Now, let us assume that $f(x)\ge g(x)$ for a strongly sub-linear function $g(x)$ with $g(x)=\omega(1)$. Then, there is a constant $x_0$ such that $g(x) < x$ for all $x \ge x_0$ (because $g(x)=o(x)$) and there is a constant $c'$ such that $2g(x)/x\le g(y)/y$ whenever $x\ge c'y$ (strong sub-linearity). In this case, we repeat the construction above with a few modifications. We set $l_1=\max(c',x_0)$ and set $l_{i+1}$ to be the minimum value s.t. $g(l_{i+1}) \ge cl_i$, for $i=1,2,\dots$ (such a value exists because $g(x)=\omega(1)$), where $c$ is the constant from (\ref{E:feasibility}).
Let us show that the conditions (\ref{E:feasibility}-\ref{E:clique}) hold for these links. Since $l_{i+1}\ge x_0$, we have that $l_{i+1} > g(l_{i+1}) \ge cl_i$, which implies (\ref{E:feasibility}). This in turn implies, as observed in the first part of the proof, that $d(i+1,j) < 2l_i$ for all $j\le i$. Let us denote $x=l_{i+1}$ and $y=l_{i+1}/l_j$. Note that $x/y=l_j \ge l_1\ge c'$, so we have, by strong sub-linearity of $g$, that $g(y)/y \ge 2g(x)/x$, or equivalently, that $l_j\cdot g(l_{i+1}/l_{j}) \ge 2\cdot g(l_{i+1})$; hence $l_j\cdot g(l_{i+1}/l_j) \ge 2cl_i\ge 4l_i > 2d(i+1,j)$ for all $j \le i$, and (\ref{E:clique}) holds. It remains to prove the lower bound on $n$. Recall that the value of $l_{i+1}$ is the minimum satisfying $g(l_{i+1})\ge cl_i$, for $i=1,2,\dots,n-1$. Then, by the minimality of $l_{i+1}$ and the monotonicity of $g$, we have $g(l_{i+1}/2) < cl_i$ or, equivalently, $h(l_{i+1}/2) < l_i/2$, where $h(x)=g(x)/(2c)$. Thus, \[ l_1/2 > h(l_2/2) > h(h(l_3/2))> \dots > h^{(n-1)}(l_n/2)\ge h^{(n-1)}(\Delta/2), \] which, since $l_1$ and $c$ are constants, implies that $n =\Omega( h^*(\Delta/2))=\Omega(g^*(\Delta))$. \end{proof} \subsubsection{Optimality for General Thresholds} \label{ss:genlb} Here we show that the obtained tightness is essentially best possible, by demonstrating that every reasonable conflict graph formulation must incur an $\Omega(\log\log{\Delta})$ factor. First, since the feasibility of a set of links is precisely determined by the values ${\mathfrak l}_i$ and $d_{ij}$, we can assume, by reasoning similar to that in Sec.~\ref{ss:necessity}, that the conflict relation is a function of $\frac{{\mathfrak l}_{max}}{{\mathfrak l}_{min}}, \frac{d_{ij}}{{\mathfrak l}_{min}}, \frac{d_{ji}}{{\mathfrak l}_{min}}$, where ${\mathfrak l}_{min}$ and ${\mathfrak l}_{max}$ are the smaller and larger values of ${\mathfrak l}_i,{\mathfrak l}_j$, respectively. Our construction will consist of only \emph{unit-length} links (i.e.\ $l_i=1$) of mutual distance at least 3.
In this case, we can further reduce the number of variables by observing that in such instances, $d_{ij}=\Theta( d_{ji})=\Theta(d(i,j))$. Thus, the conflict relation is essentially determined by two variables: $\frac{d(i,j)}{{\mathfrak l}_{min}}$ and $\frac{{\mathfrak l}_{max}}{{\mathfrak l}_{min}}$. By separating the variables, the conflict predicate boils down to a relation $ \frac{d(i,j)}{{\mathfrak l}_{min}} > f(\frac{{\mathfrak l}_{max}}{{\mathfrak l}_{min}}) $ for a function $f$. Let us show that feasibility of independent sets requires that $f(x)=\Omega(\sqrt{x})$ in such a graph. Let us fix a function $f:[1,\infty)\rightarrow [1,\infty)$. Let $i,j$ be unit-length links with $\beta_j=1$ and $\beta_i=X^\alpha >1$, where $X$ is a parameter. Further assume that the links $i,j$ are placed on the plane so that $d(i,j)=f(X)+3=f({\mathfrak l}_i/{\mathfrak l}_j)+3$, which implies that the links are non-adjacent in $G_f$. Thus, $i,j$ must form a feasible set: $\frac{P(i)}{{\mathfrak l}_i^\alpha} > \frac{P(j)}{d_{ji}^\alpha}$ and $\frac{P(j)}{{\mathfrak l}_j^\alpha} > \frac{P(i)}{d_{ij}^\alpha}$. Multiplying these inequalities together and canceling $P(i)$ and $P(j)$ out, gives: $d_{ij}d_{ji} > {\mathfrak l}_i{\mathfrak l}_j=X$. Since the links have unit lengths, while $d(i,j)>2\max(l_i,l_j)$, the triangle inequality implies that $d(i,j)= \Theta(\max(d_{ij},d_{ji}))=\Theta(\sqrt{d_{ij}d_{ji}})=\Omega(\sqrt{X})$, which in turn implies that $f(X)=d(i,j)-3=\Omega(\sqrt{X})$. Now, the main claim of this section, that is, the tightness must be at least $\Omega(\log\log{\Delta})$, follows from Thm.~\ref{T:ndependence}, because for $f(x)=\Omega(\sqrt{x})$, we have $f^*(x)=\Omega(\log\log x)$. \subsubsection{Optimality for Uniform Thresholds} \label{ss:uniflb} The strategy of proving a lower bound on $f$ for which $G_f$ is a ``working'' conflict graph, used in the case of general thresholds, seems difficult to apply for the uniform thresholds case. 
Instead, our strategy here is as follows. First, we observe that for $f(x)=\Omega(\log^{(c)}x)$ with any constant $c\ge 1$, the lower bound $\Theta(\log^*{\Delta})$ on tightness follows from Thm.~\ref{T:ndependence}. In particular, our analysis of $G_{\gamma\widehat{\log}}$ is tight. This, however, leaves the possibility that a slower-growing function $f$ could give better tightness. To close this gap, we prove the following theorem. \begin{theorem}\label{T:hardinstance} Let $f(x)=O(\log^{1/\alpha}x)$. Assume uniform and fixed thresholds $\beta\ge 1$. For each $\Delta>0$, there is a set $L$ of links on the real line with $\Delta(L)=\Omega(\Delta)$, such that $G_f(L)$ has no edges, but $L$ cannot be partitioned into fewer than $\Theta(\log^*{\Delta(L)})$ feasible subsets. \end{theorem} The construction follows the general structure of a lower bound for scheduling the edges of a minimum spanning tree of a set of points in the plane~\cite[Thm.\ 7]{SODA12}. In order to prove the theorem, we need a necessary condition for feasibility, which we present first. We show in the following result that the sufficient condition for feasibility stated in Thm.\ \ref{T:kesselheimconstant} is essentially necessary in doubling metric spaces. This result is of independent interest, as it may prove useful for improved analysis of various problems. It should be noted that this theorem does not hold in general metric spaces (as opposed to Thm.\ \ref{T:kesselheimconstant}). \begin{theorem}\label{T:necessary} Let $\beta\ge 3^\alpha$, and $L$ be a feasible set of links. Then, $I(L) = O(1)$. \end{theorem} \begin{proof} The proof consists of two parts: we bound the interference on a link $i$ from faraway links (i.e., links that are highly independent from link $i$) by adapting the proof of Lemma~\ref{L:globalmain}, and from near links (the rest) by simple manipulations of the SINR condition. Let us fix a link $i\in L$ and denote $S=\{j\in L : l_j\le l_i\}$.
Let constant $c$ be such that $(c-1)x\ge \beta^{1/\alpha}\widehat{\log}(x)$, for each $x\ge 1$. We split $S$ into two subsets, $S_1=\{j\in S: \max\{d_{ij},d_{ji}\} > cl_i\}$ and $S_2=S\setminus S_1$. We bound the interference on $i$ from $S_1$ and $S_2$ separately. For $S_1$, we adapt the proof of Lemma~\ref{L:globalmain}. Feasibility implies independence in ${G_{lo}}$: For each pair $j,k\in S_1$, $d_{kj}d_{jk} > \beta^{2/\alpha}l_jl_k\ge 9l_jl_k$. We claim that the latter implies that $d(k,j)>2\min(l_k,l_j)=2\widehat{\log}(1)\cdot \min(l_k,l_j)$. Assume, w.l.o.g.,\ that $l_j\le l_k$. Assume, for contradiction, that $d(j,k)\le 2l_j$. Then by the triangle inequality, we have $\min(d_{jk},d_{kj})\le l_j + d(j,k)\le 3l_j$, and $\max(d_{jk},d_{kj})\le l_k + d(j,k)\le 3l_k$, contradicting independence. On the other hand, $d(i,j)\ge \max\{d_{ij},d_{ji}\}-l_i> (c-1)l_i\ge {\mathfrak l}_j\widehat{\log}({\mathfrak l}_i/{\mathfrak l}_j)$ holds for each $j\in S_1$, by the definition of $S_1$. We can proceed now as in the proof of Lemma~\ref{L:globalmain}: Partition $S_1$ into equilength subsets, apply Lemma~\ref{L:pcequilength} to each of those (we have just shown that the assumptions of the lemma hold), and combine the obtained bounds into a convergent series. We omit the technical details. Now, consider the set $S_2$. Let $P$ be a power assignment for which $L$ is $P$-feasible. By the definition of SINR feasibility, \[ \frac{P(i)}{l_i^\alpha} > 3^{\alpha}\sum_{j\in S_2}{\frac{P(j)}{d_{ji}^\alpha}},\mbox{ and }\frac{P(j)}{l_j^\alpha} > 3^{\alpha}\frac{P(i)}{d_{ij}^\alpha}\mbox{ for all }j\in S_2. \] By replacing $P(j)$ with $3^{\alpha}\frac{P(i)l_j^{\alpha}}{d_{ij}^\alpha}$ in the first inequality and simplifying the expression, we get: \begin{equation}\label{E:equation2} \sum_{j\in S_2}\frac{l_i^\alpha l_j^\alpha}{d_{ij}^\alpha d_{ji}^\alpha} \le 9^{-\alpha}. 
\end{equation} In order to extract a bound on $I(S_2,i)$ from (\ref{E:equation2}), it suffices to show that for each $j\in S_2$, $\max\{d_{ij},d_{ji}\}=\Theta(l_i)$ and $\min\{d_{ij},d_{ji}\}=\Theta(d(i,j))$. Assume, w.l.o.g., that $d_{ij}\ge d_{ji}$. First, as observed above, feasibility implies that $d_{ji}\ge d(i,j) > 2l_j$. Hence, $d_{ji}\ge d(i,j)\ge d_{ji} - l_j > d_{ji}/2$. Next, consider $d_{ij}$. Recall that $d_{ij} \le cl_i$, by the definition of $S_2$. To prove $d_{ij}=\Omega(l_i)$, consider two cases: If $l_j\ge l_i/2$, then $d_{ij} \ge d(i,j)> 2l_j\ge l_i$, and otherwise the triangle inequality implies $d_{ij}\ge l_i-d(i,j)>l_i/2$. \end{proof} \noindent \emph{Remark.} Note that Thm.\ \ref{T:signalstrengthening} implies that any feasible set can be refined into a constant number of $3^\alpha$-feasible subsets. Thus, the interference function $I$ fully captures feasibility in doubling metrics, modulo constant factors. \begin{proof}[Proof of Thm.~\ref{T:hardinstance}] For a set $S$ of links, we will use $diam(S)$ to denote the \emph{diameter} of $S$, i.e., the maximum distance between nodes in $S$. We assume, for simplicity, that $\beta=3^\alpha$. The argument can easily be extended to any other value $\beta\ge 1$. We will construct a set of links that cannot be scheduled in fewer than $\Theta(\log^*{\Delta})$ feasible slots, relying on the necessary condition for feasibility (Thm.\ \ref{T:necessary}). Let us fix a function $f$. Note that since $f=O(\log^{1/\alpha})$, there is a constant $C\geq 1$ s.t. $f(x)\le C\log^{1/\alpha}x$. We construct sets $L_t$ of links, $t=1,2,\dots$, recursively. The construction is illustrated in Figure~\ref{fig:complicated}. All the links are arranged on the real line and the receiver of each link is to the right of the sender. Initially, we have a set $L_1$ consisting of a single link of length $1$, for which a single slot is sufficient and necessary.
Suppose that we have already constructed $L_{t}$ with the property that at least $t$ slots are required for scheduling $L_{t}$. The instance $L_{t+1}$ is constructed as follows, using $k$ scaled copies of $L_t$, where $k$ is to be determined. First, we place a single very long link $j_{t+1}$ on the line. We then add, in order from left to right, copies $L_t^1,L_t^2,\dots,L_t^{k}$ of $L_t$ to the right of $j_{t+1}$, where $L_t^s$ is the copy of $L_t$ scaled by a factor $8^s$. The aim is to ensure the following properties: \begin{enumerate}[noitemsep,label=\roman*] \setlength{\parskip}{0cm}% \item {$L_{t+1}$ is $f$-independent,}\label{EN:independence} \item {$t=\Omega(\log^*{\Delta(L_t)})$,} \label{EN:lowerbound} \item {for any set $S=\{i_1,i_2,\dots,i_k\}$ with $i_s\in L_t^s$, $s=1,2,\dots,k$, we have that $I(S,j_{t+1})>c_0$, for a constant $c_0$ of our choice.}\label{EN:inconsistency2} \end{enumerate} The last property ensures that each partitioning of $L_{t+1}$ into feasible subsets must put a complete copy $L_t^s$ in a slot separate from $j_{t+1}$. Indeed, the existence of a partitioning that placed at least one link from each copy $L_t^s$ in the same slot with $j_{t+1}$ would contradict (\ref{EN:inconsistency2}): we would have $I(S,j_{t+1}) =O(1)$ for some $S$ as above, due to Thm.\ \ref{T:necessary}. Recall that $L_t$ needs at least $t$ slots to be scheduled, and so does each copy of it. It follows that $L_{t+1}$ needs at least $t+1$ slots to be scheduled: one for $j_{t+1}$ and at least $t$ for scheduling the copies of $L_t$. Proving the properties (\ref{EN:independence}-\ref{EN:inconsistency2}) will complete the proof of the theorem. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.85\textwidth]{complicatedinstance} \caption{The recursive construction of $L_{t+1}$.} \label{fig:complicated} \end{center} \end{figure} Now let us describe the inductive step of the construction in detail.
Let $\ell_t=diam(L_t)$ denote the diameter of $L_t$. The number of copies of $L_t$ is $k=2^{c\ell_t}$, for a large enough constant $c$. The length of link $j_{t+1}$ is set to $l_{j_{t+1}}=8^{k+1}\ell_t$. It remains to specify the placement of each copy $L_t^s$ so as to guarantee the desired properties of $L_{t+1}$. By induction, the links within each copy of $L_t$ are $f$-independent. We place the copies $L_t^s$ so that the links between any two copies are $f$-independent and are $f$-independent from $j_{t+1}$. Let $\ell_t^{s}=diam(L_t^s)=8^s\ell_t$ denote the diameter of $L_t^s$. Let $g(x)=C\log^{1/\alpha}x$. We place each copy $L_t^s$ at a distance $d(L_t^s,j_{t+1})=9\ell_t^sg(l_{j_{t+1}}/\ell_t^s)$ from $j_{t+1}$. This completes the description of the construction. We first prove property (\ref{EN:independence}). \begin{claim} With the distances defined as above, the set $L_{t+1}$ is $f$-independent. \end{claim} \begin{proof} Since the links are arranged linearly, the maximum of $d_{ij}, d_{ji}$ for every pair of links $i,j$ is at least $\max\{l_i,l_j\}$. Hence, it suffices to prove that for any pair of links $i,j$ with $l_i\ge l_j$, $d(i,j)>9l_j f(l_i/l_j)$ (recalling that $\beta^{1/\alpha}=3$). Consider any link $i\in L_t^s$. We have that \[ d(i,j_{t+1})\ge d(L_t^s,j_{t+1})=9\ell_t^sg(l_{j_{t+1}}/\ell_t^s)\ge 9l_ig(l_{j_{t+1}}/l_i)\ge 9l_if(l_{j_{t+1}}/l_i), \] where the second inequality follows from the fact that $xg(a/x)$ is an increasing function of $x$ for any fixed $a$, and that $l_i<\ell_t^s$, and the third inequality follows because $f(x)\leq g(x)$ for all $x$. Thus, all the links in $L_t^s$ are $f$-independent from $j_{t+1}$. Now, let us show that any two links $i,k$ with $l_i\leq l_k$ from different copies $L_t^s$ and $L_t^r$ with $s > r$ are $f$-independent (no matter which link is from which copy). Since $f(x)\leq g(x)$, it will be enough to show that \begin{equation}\label{E:pessdist} d(i,k)> 9l_ig(l_k/l_i). \end{equation} Recall that $xg(a/x)$ is an increasing function of $x$ for fixed $a$.
Then, for a fixed $k$, the right-hand side of (\ref{E:pessdist}) is maximized when $l_i$ is maximum. On the other hand, for a fixed $i$, the value $g(l_k/l_i)$ is maximized when $l_k$ is maximum, because $g$ is an increasing function. Let $j_t$ denote the maximum length link in $L_t$. Then, the maximum link length in $L_t^s$ (in $L_t^r$) is $8^sl_{j_{t}}$ ($8^rl_{j_{t}}$, resp.). Therefore, it is enough to show that \[ d(i,k)>9\ell_t^rg(8^sl_{j_t}/(8^rl_{j_t}))=9\ell_t^rg(8^{s-r})=9C(3(s-r))^{1/\alpha}\ell_t^r. \] We have that \[ d(i,k) \geq d(L_t^s,L_t^r)=d(L_t^s,j_{t+1}) - d(L_t^r,j_{t+1}) - \ell_t^r\ge 9\ell_t^sg(l_{j_{t+1}}/\ell_t^s) - 10\ell_t^rg(l_{j_{t+1}}/\ell_t^r). \] The term $g(l_{j_{t+1}}/\ell_t^r)$ can be bounded as (using $\alpha\ge 1$) \[ g(l_{j_{t+1}}/\ell_t^r)=g(8^{s-r}l_{j_{t+1}}/\ell_t^s) < 3(s-r)\cdot g(l_{j_{t+1}}/\ell_t^s), \mbox{hence} \] \begin{align*} d(i,k) &\ge 9\ell_t^s g(l_{j_{t+1}}/\ell_t^s) - 30(s-r)\ell_t^rg(l_{j_{t+1}}/\ell_t^s)\\ & > C(9\cdot 8^{s-r} - 30(s-r))\ell_t^r \\ & > 27C(s-r)\cdot\ell_t^r. \end{align*} \end{proof} Next, observe that (the first line follows because the links are arranged linearly) \begin{align} \nonumber \ell_{t+1}&=l_{j_{t+1}}+d(L_t^k,j_{t+1}) + \ell_t^k\\ \nonumber &\leq l_{j_{t+1}}+9\ell_t^kg(l_{j_{t+1}}/\ell_t^k) + \ell_t^k\\ \nonumber &=8^{k+1}\ell_t + 9\cdot 8^kg(8)\ell_t + 8^k\ell_t\\ &=O(8^{ 2^{c\ell_t}}). \end{align} Since the minimum link-length in $L_{t+1}$ is $1$, we can conclude that $\Delta(L_{t})\le\ell_t\leq 2\uparrow (c_1t)$ for a constant $c_1$ and for each $t$, where $\uparrow$ denotes the tower function. This implies that $t=\Omega(\log^*{\Delta(L_t)})$. The property (\ref{EN:lowerbound}) is now proven. It remains to check (\ref{EN:inconsistency2}). Consider a link $i_s$ from $L_{t}^s$ where $i_s$ is the copy of a link $i\in L_{t}$. 
We have that \[ d(i_s,j_{t+1})\leq \ell_t^s+d(L_t^s,j_{t+1})= \ell_t^s + 9C\ell_t^s\log^{1/\alpha}{(l_{j_{t+1}}/\ell_t^s)}\leq c_2\ell_t^s(k-s+1)^{1/\alpha}, \] for a constant $c_2$. This implies: \[ I(i_s,j_{t+1})=\left(\frac{l_{i_s}}{d(i_s,j_{t+1})}\right)^\alpha \geq \left(\frac{l_{i_s}}{ c_2(k-s+1)^{1/\alpha}\ell_{t}^s} \right)^\alpha \geq \frac{1}{c_3(k-s+1)\ell_{t}}, \] where we used the fact that $l_{i_s}/\ell_t^s=l_i/\ell_t\geq 1/\ell_t$. Now, let $S=\{i_s : s=1,2,\dots,k\}$ be a set of links with $i_s\in L_{t}^s$, not necessarily copies of the same link of $L_{t}$. Then, \[ I(S,j_{t+1}) = \sum_{s=1}^{k}I(i_s,j_{t+1}) > \sum_{s=1}^{k}{\frac{1}{c_3(k-s+1)\ell_{t}}} = \Omega\left(\frac{\log{k}}{\ell_{t}}\right). \] Recall that $k=2^{c\ell_t}$. By choosing a large enough constant $c$, we can thus guarantee the property (\ref{EN:inconsistency2}). This completes the proof of all the properties of $L_t$ and the proof of the theorem. \end{proof} \subsection{Conflict Graphs without Power Control} \label{ss:uniformlb} Our results thus far show that conflict graphs can be used to obtain good approximations for scheduling problems that allow power control. That turns out not to be the case when power control is not available, that is, when we have the fixed uniform power assignment $P_0$: in this case, there is no conflict graph sandwich with tightness smaller than $\Theta(\log\Delta/\log\log\Delta)$. This claim is in contrast with the special case of unit length links (and uniform thresholds), where simple disk-graphs provide constant-tightness sandwiching~\cite{us:talg12}. We prove the claim for linear (i.e., 1-dimensional) instances with $\alpha=2$ and uniform thresholds $\beta=1$. It is not hard to show that uniform power scheduling is equivalent to its bidirectional variant (up to constant factors)~\cite{tonoyan11}, where we replace the distances $d_{ij},d_{ji}$ in the SINR formula with $d(i,j)$.
Hence, we consider any conflict graph formulation ${\cal G}$ that is, in view of the observations made in Sec.~\ref{ss:necessity}, of the following form: For every pair $i,j$ of links, they are \emph{independent in ${\cal G}$} if $d(i,j)\ge c_1f(l_i,l_j)$, and are \emph{adjacent in ${\cal G}$} if $d(i,j)<c_2f(l_i,l_j)$, where $c_1,c_2$ are constants and $f$ is an arbitrary function. The values of constants $c_1,c_2$ will not be important, so assume, for simplicity, that $c_1=c_2=1$. First, let us show that there is a constant $h>0$, such that for every $\ell>0$, $f(\ell, \ell) \le h\ell$. It is an easy special case of Lemma~\ref{P:oblcore} that for some constant $h'>0$, every set of links of length $\ell$ arranged linearly with distance $d(i,j)=h'\ell$ between consecutive links is $P_0$-feasible. Hence, if $f(\ell, \ell) > h\ell$ for a number $h>h'$, then the tightness of the graph formulation ${\cal G}$ is at least $\lfloor h/h'\rfloor$. On the other hand, we have $\Delta=1$ for the described instances, which means that the tightness must be bounded by a constant (in the context of our main claim), which implies that $h$ is bounded by a constant. Next, we bound from below $f(\ell_0,\ell_1)$, for any $\ell_0>\ell_1$. Consider a link $0$ of length $l_0=\ell_0$ and a large number of links $\{1,2,\dots,k\}$ of length $\ell_1< \ell_0$ arranged on the line such that $s_0$ is at the origin, $r_0$ is at coordinate $\ell_0$, and for $i=1,2,\dots,k$, $s_i$ is at $r_0+f(\ell_0,\ell_1)+(i-1)(h+1)\ell_1$, and $r_i$ is at $s_i+\ell_1$. Thus, $d(0,i)=f(\ell_0,\ell_1)+(i-1)(h+1)\ell_1$, and the spacing between any two links of length $\ell_1$ is at least $h\ell_1$, so the constructed set is independent in ${\cal G}$, and must be feasible. 
The total interference-to-signal ratio on link $0$ is \begin{align*} I_0(\{1,2,\dots,k\}, 0) &=\sum_{i\ge 1} \frac{\ell_0^2}{(f(\ell_0,\ell_1)+(i-1)(h+1)\ell_1)^2} \\ &\ge \int_{0}^\infty \frac{\ell_0^2}{(f(\ell_0,\ell_1)+(h+1)\ell_1 x)^2}\ dx \\ &= \frac{\ell_0^2}{(h+1)\ell_1} \cdot \frac{1}{f(\ell_0,\ell_1)+(h+1)\ell_1}\ . \end{align*} For feasibility, this ratio must be less than 1, i.e., $f(\ell_0,\ell_1) > \frac{\ell_0^2}{(h+1)\ell_1}-(h+1)\ell_1$ must hold for every pair of lengths $\ell_0,\ell_1$. Now, we can use the obtained bound on $f(\ell_0,\ell_1)$ to construct an instance that is a clique in ${\cal G}$, but is feasible with uniform power. For a number $n>4(h+1)^2$, consider the set $\{1,2,\dots,k\}$ of $k=n/\log n$ links, arranged on the line in the order $1,2,\dots,k$, such that $l_i=n^i$, and the minimum distance between links $i$ and $i+1$ is $$d(i,i+1)=\frac{l_{i+1}^2}{(h+1)l_i}-(h+1)l_i<n^{i+2}/(h+1).$$ Now, let us show that the obtained conflict graph has large chromatic number. Due to symmetry, it suffices to consider the conflicts with the longest link $k$. The link $k-1$ is adjacent to $k$, by the definition of the distances above. For each $i<k-1$, assuming $n$ is sufficiently large, we have $$d(i,k)<\sum_{t=1}^{k-1}\left(\frac{n^{t+2}}{h+1} + n^{t}\right)<\frac{2n^{k+1}}{h+1} + 2n^{k-1}<\frac{3n^{k+1}}{h+1}<\frac{l_{k}^2}{(h+1)l_i}-(h+1)l_i,$$ where in the last two inequalities we used $n>4(h+1)^2$. This means that the obtained conflict graph is a clique of size $k$. On the other hand, it is easy to see that the instance is feasible: For every pair of links $i,j$ with $i>j$, the interference to signal ratio is at most $$\frac{l_i^2}{d(i,j)^2}\le \frac{l_i^2}{d(i,i-1)^2}\le \frac{n^{2i}}{(n^{i+1}/(h+1)-(h+1)n^{i-1})^2}\le \frac{1}{n},$$ since $n>4(h+1)^2$. Finally, note that for the constructed instance, $\Delta=2^{\Theta(n)}$, which means that the approximation ratio provided by any conflict graph is at least $\Omega(\log\Delta/\log\log\Delta)$.
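The arithmetic of this construction can be checked numerically. The following sketch (the concrete values $h=1$, $n=100$, $k=8$ are our own illustrative choices, picked so that $n>4(h+1)^2$ and all divisions are exact) builds the link lengths and gaps as above, then verifies that every pairwise distance stays below the adjacency bound that $f$ must exceed (so the instance is a clique in ${\cal G}$), while every pairwise interference-to-signal ratio is at most $1/n$ (so the instance is feasible with uniform power, $\alpha=2$):

```python
# Sanity check of the uniform-power lower-bound instance (illustrative
# only; h, n, k are hypothetical choices satisfying n > 4*(h+1)**2).
h = 1
n = 100
k = 8

lengths = [n ** i for i in range(1, k + 1)]  # l_i = n^i

# Left endpoints: consecutive gap d(i,i+1) = l_{i+1}^2/((h+1) l_i) - (h+1) l_i.
# All integer divisions below are exact for these parameter choices.
pos = [0]
for i in range(k - 1):
    gap = lengths[i + 1] ** 2 // ((h + 1) * lengths[i]) - (h + 1) * lengths[i]
    pos.append(pos[i] + lengths[i] + gap)

def dist(i, j):
    """Distance d(i, j) between links i < j on the line."""
    return pos[j] - (pos[i] + lengths[i])

for i in range(k):
    for j in range(i + 1, k):
        # Clique: d(i,j) does not exceed the bound that f must exceed,
        # so every pair is adjacent in the conflict graph.
        bound = lengths[j] ** 2 // ((h + 1) * lengths[i]) - (h + 1) * lengths[i]
        assert dist(i, j) <= bound
        # Feasibility: l_j^2 / d(i,j)^2 <= 1/n, cross-multiplied to
        # stay in integer arithmetic.
        assert n * lengths[j] ** 2 <= dist(i, j) ** 2
print("conflict-graph clique of size", k, "is feasible with uniform power")
```

Note that for consecutive links the clique check holds with equality, matching the fact that $f(\ell_0,\ell_1)$ must strictly exceed $\frac{\ell_0^2}{(h+1)\ell_1}-(h+1)\ell_1$.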
\subsection{General Metric Spaces} \label{ss:genmetrics-lb} The following proposition shows that conflict graphs can be an arbitrarily poor approximation of the SINR model in general metric spaces. Given a function $f$, the construction consists of an $f$-independent set of \emph{unit length} links. Since all links have length $1$, $f$-independence is equivalent to $f(1)$-independence (that is, $g$-independence, where $g(x)\equiv f(1)$). The separation between the links is just enough to ensure $f(1)$-independence. However, since all the links are equally ($f(1)$-) separated from any given link, their interference accumulates and only a constant number of links can be scheduled in the same slot. This leads to schedules of length $\Theta(n)$. \begin{proposition} For every positive function $f$ and any $n\geq 1$, there is an $f$-independent set of $n$ unit length links (i.e., $\Delta=1$) that cannot be partitioned into fewer than $\Theta(n)$ feasible subsets, under uniform thresholds $\beta\ge 1$. \end{proposition} \begin{proof} Let $L=\{1,2,\dots,n\}$ be a set of links of unit length. The distance between every two senders of links is the same: $d(s_i, s_j) = 2\beta f(1)$. Distances to and between receivers are then induced by these distances and lengths; e.g., the distance between receivers is $d(r_i, r_j) = d(s_i,s_j) + l_i + l_j = 2\beta f(1)+2$. The set $L$ is $f$-independent, since $d(i,j)=2\beta f(1)>\beta^{1/\alpha} f(1)={\mathfrak l}_if({\mathfrak l}_j/{\mathfrak l}_i)$, using $\beta\ge 1$. Consider any $P$-feasible subset $S$ of $k$ links for a power assignment $P$, and fix a link $i\in S$. The SINR condition implies that $P(i) > \beta \sum_{j\in S\setminus\{i\}}\frac{P(j)l_{i}^\alpha}{d_{ji}^\alpha}$ and $P(j) > \beta \frac{P(i)l_j^\alpha}{d_{ij}^\alpha}$ for all $j\in S\setminus \{i\}$.
Substituting for $P(j)$ in the first inequality and canceling the term $P(i)$, we obtain: \[ 1 >\sum_{j\in S\setminus\{i\}} \beta^2 {\frac{l_i^{\alpha }l_j^\alpha}{d_{ij}^\alpha d_{ji}^\alpha}}=\beta^2 \sum_{j\in S\setminus\{i\}}{\frac{1}{(2\beta f(1)+1)^{2\alpha}}}=\frac{(|S|-1)\beta^2}{(2\beta f(1)+1)^{2\alpha}}, \] which implies that $|S| < \left(2\beta f(1)+1\right)^{2\alpha}/\beta^2+1=O(1)$. Since $S$ was an arbitrary feasible subset of $L$, we conclude that $L$ cannot be split into fewer than $\Theta(n)$ feasible subsets. \end{proof} \subsection{Noise-Limited Networks} \label{ss:weaklb} Recall that in order to obtain our approximations, we assumed in Sec.~\ref{s:model} that there is a constant $c>1$, such that for each link $i$, $P(i)\ge c N_i {\mathfrak l}_i^\alpha$. However, this is not always achievable when nodes have limited power. Suppose that each sender node has maximum power $P_{max}$. For concreteness, we assume that $c=2$, $N_i=N>0$, $\beta_i=1$, for all links $i$, and the links are in a Euclidean space. Thus, a link $i$ is \emph{weak} if $P_{max} \le 2 N l_i^\alpha$. Note that a link is weak precisely when it is too long for its maximum power, i.e.\ $l_i \ge l_{max}/2^{1/\alpha}$, where $l_{max}=(P_{max}/ N)^{1/\alpha}$ is the maximum length a link can have to be able to overcome the noise when using maximum power. Scheduling weak links may be considered a separate problem. Let $\tau$-{\textsf{WScheduling}} denote the problem of scheduling weak links using power assignment $P_{\tau}$. We show here that the problem of scheduling (not necessarily weak) links with uniform power assignment (i.e., $P_0$), denoted {\textsf{UScheduling}}, can be reduced to $\tau$-{\textsf{WScheduling}} for any given $\tau\in [0,1]$, modulo constant approximation factors.
Namely, a $\mu$-approximation algorithm for $\tau$-{\textsf{WScheduling}} can be turned into a $O(\mu)$-approximation algorithm for {\textsf{UScheduling}}. To our knowledge, there is no known approximation algorithm for {\textsf{UScheduling}} with ratio in $o(\min(\log\Delta, \log n))$. \begin{theorem}\label{T:weakhard} There is a polynomial-time reduction from {\textsf{UScheduling}} to $\tau$-{\textsf{WScheduling}} for any $\tau\in [0,1]$, that preserves approximation ratios up to constant factors. \end{theorem} The proof directly follows from the two Lemmas below. \begin{lemma} There is a polynomial-time reduction from $0$-{\textsf{WScheduling}} to $\tau$-{\textsf{WScheduling}}, with any given $\tau\in [0,1]$, that preserves approximation ratios up to constant factors. \end{lemma} \begin{proof} Consider a $P_{\tau}$-feasible set $S$ of weak links. It is enough to show that $S$ can be partitioned into a constant number of $P_{max}$-feasible subsets. Recall that $S$ is $P_{\tau}$-feasible if for each link $i\in S$, $\frac{P_{\tau}(i)}{l_i^\alpha} > \sum_{j\in S\setminus i} \frac{P_\tau(j)}{d_{ji}^\alpha} + N$, or equivalently, \[ \frac{P_{max}}{l_i^\alpha} > \sum_{j\in S\setminus i} \frac{P_\tau(j)}{P_\tau(i)}\cdot \frac{P_{max}}{d_{ji}^\alpha} + \frac{P_{max} N}{P_\tau(i)}. \] Since the links are weak, we have $l_j/l_i\le 2^{1/\alpha}$, implying that $P_{\tau}(j)/P_{\tau}(i)\le 2^\tau$, and have $P_{max}\le 2Nl_i^\alpha$, implying $\frac{P_{max}}{P_{\tau}(i)}\le \frac{2Nl_{i}^\alpha}{Nl_i^{\alpha}}=2$, where we also used the fact that $P_{\tau}(i)\ge Nl_i^\alpha$, as $S$ is $P_{\tau}$-feasible. Hence, a $2$-strong subset of $S$, w.r.t. $P_\tau$, is $P_{max}$-feasible. The proof is completed by recalling (Thm.~\ref{T:signalstrengthening}) that each $P_{\tau}$-feasible set can be partitioned into four $2$-strong subsets w.r.t. $P_\tau$. 
\end{proof} \begin{lemma} There is a polynomial-time reduction from {\textsf{UScheduling}} to $0$-{\textsf{WScheduling}} that preserves approximation ratios up to constant factors. \end{lemma} \begin{proof} We show that a given $P_{max}$-feasible set $S$ of non-weak links can be transformed into a set $S'$ of weak links that can be partitioned into $O(1)$ subsets, each $P_{max}$-feasible. Recall that set $S$ is $P_{max}$-feasible if and only if $\sum_{j\in S\setminus i}\left(\frac{g_i}{d_{ji}}\right)^\alpha < 1$ holds for every link $i\in S$, where $g_i=\frac{l_i}{(1-\beta N l_i^\alpha/P_{max})^{1/\alpha}}=\frac{l_i}{(1-(l_i/l_{max})^\alpha)^{1/\alpha}}$. The idea is to apply a geometric transformation on the set $S$, so that every link becomes weak, while the ratios $\frac{g_i}{d_{ji}}$ change by no more than constant factors. To this end, we first scale the set of sender nodes in $S$ (taken as points in the space) by a factor $X>0$, then ``stretch'' each link separately, by moving only its receiver node. Let $l_{min}$ denote the smallest link length in $S$, and $\hat{l}=l_{max}/2^{1/\alpha}$ denote the border link length between weak and non-weak links. We want to map the links with length in the range $[l_{min}, l_{max})$ to the range $[\hat{l}, l_{max})$, as described above. Let $g(x)=\frac{x}{(1-(x/l_{max})^\alpha)^{1/\alpha}}$ denote the function that ``generates'' the coefficients $g_i=g(l_i)$. Since $g(x):(0,l_{max})\rightarrow (0,\infty)$ is a continuous and monotonically increasing function, so is its inverse $f=g^{-1}:(0,\infty)\rightarrow (0,l_{max})$. Now, the set $S'$ of links is constructed as follows. To each link $i\in S$ corresponds a single link $i'\in S'$. The sender node $s_{i'}$ is located at the point $s_{i'}=X\cdot s_i$, where $X=l_{max}/l_{min}$. The receiver node $r_{i'}$ is located at $r_{i'}=s_{i'}+f(Xl_i)\cdot(r_i-s_i)/l_i$. Thus, $l_{i'}=f(Xl_i)<l_{max}$.
Also, the facts that $g(\hat{l})=2^{1/\alpha}\hat{l} =l_{max}\le Xl_i$ and that $f$ is an increasing function imply that $l_{i'} = f(X l_i) \ge f(g(\hat{l})) = \hat{l}$, that is, $i'$ is indeed a weak link. In order to complete the proof, we need to show that $S'$ can be split into a constant number of feasible subsets. To this end, we first use Thm.~\ref{T:signalstrengthening} to split $S$ into at most $\lceil 2\cdot 4^\alpha\rceil$ subsets, each $4^\alpha$-strong. Let $T$ be one of those. It suffices to show that $T'\subseteq S'$, the image of $T$ under our mapping, is feasible. Let $i,j\in T$ be any pair of links. First, note that since $i$ is a non-weak link, $g_i\in [l_i,2^{1/\alpha}l_i]$, and by the choice of the length transformations, $g_{i'}=g(f(Xl_i))=X l_i\le 2^{1/\alpha}X g_i$. Next, we show that $d_{j'i'}\ge Xd_{ji}/2$. Since $T$ is $4^\alpha$-strong, it is easy to show that $d_{ji}>4l_i$, which implies that $d(s_i,s_j)\ge d_{ji}-l_i> 3d_{ji}/4>3l_i$. By construction, $d(s_{i'},s_{j'})=X\cdot d(s_i,s_j)>3Xl_i\ge 3f(Xl_i)=3l_{i'}$, where we also used $f(x)\le x$ for all $x\in (0,l_{max})$, which follows from the fact that $g(x)\ge x$. Again, by the triangle inequality, \[ d_{j'i'}\ge d(s_{i'},s_{j'})-l_{i'}>2d(s_{i'},s_{j'})/3=2Xd(s_{i},s_{j})/3>Xd_{ji}/2. \] Putting it all together, we see that $(g_{i'}/d_{j'i'})^\alpha \le (2\cdot 2^{1/\alpha}\cdot g_{i}/d_{ji})^\alpha\le 4^\alpha \cdot (g_{i}/d_{ji})^\alpha$. Since $T$ is a $4^\alpha$-strong set, this easily implies that $T'$ is feasible. \end{proof} \section{Context} \label{s:context} \subsection{Related Work} Gupta and Kumar introduced the SINR model of interference/communication in their influential paper~\cite{kumar00}. Moscibroda and Wattenhofer \cite{moscibrodaconnectivity} initiated worst-case analysis of scheduling problems in networks of arbitrary topology, which is also the setting of interest in this paper.
There is a huge literature on wireless scheduling problems, but we focus here on algorithms with performance guarantees. There has been significant progress during the past decade in understanding scheduling problems with fixed uniform data rates. NP-completeness results have been given for different variants \cite{goussevskaiacomplexity, katz2010energy,lin2012complexity}. Early work on approximation algorithms involves (directly or indirectly) partitioning links into length groups, which results in performance guarantees that are at least logarithmic in $\Delta$, the link length diversity: TDMA scheduling and uniform weights {\textsc{Mwisl}} \cite{goussevskaiacomplexity,dinitz,us:talg12}, non-preemptive scheduling \cite{fu2009power}, joint power control, scheduling and routing \cite{chafekarcrosslayer}, and joint power control, routing and throughput scheduling in multiple channels \cite{AG10}, to name a few. Constant-factor approximations are known for uniform weight {\textsc{Mwisl}}, with uniform power \cite{GoussevskaiaHW14}, oblivious power \cite{SODA11}, and (general) power control \cite{kesselheimconstantfactor}. The characterization of feasibility under general power in \cite{kesselheimconstantfactor} is essential for our results. Standard approaches translate the constant-factor approximations for the uniform weight {\textsc{Mwisl}} into $O(\log n)$-approximations for TDMA link scheduling and general {\textsc{Mwisl}}. On the other hand, \cite{wan09,jansen03} present approximation-preserving (up to constant factors) reductions from the fractional scheduling and the routing and scheduling problems to {\textsc{Mwisl}}, which in combination with the results above gives us $O(\log n)$-approximations for those problems (note, however, that this reduction uses computationally heavy linear programming techniques). 
The observation that extending inductive independent graphs to the multi-channel multi-radio case essentially preserves inductive independence (Sec.~\ref{s:problems}) has been made in~\cite{WanCWY11}. The $O(\log n)$-approximation results do not require the assumption we make regarding interference-constrained networks, and some also work in general metrics. Algorithms for the graph-based variants of flow routing and scheduling problems have initially been addressed in \cite{KodialamN03,Kumar2005,AlicherryBL06,WanCWY11,wan14}, among others. Algorithms (based on the primal-dual method) with performance guarantees in terms of inductive independence are presented in \cite{wan14}. Algorithms with performance guarantees for the graph-based variant of combinatorial auctions are presented in~\cite{HoeferKV14,HoeferK15}. Those works also present algorithms for the SINR model, with an extra $O(\log n)$ approximation factor. Many problems become easier in the regime of linear power assignments, and constant factor approximations are known for {\textsc{Mwisl}} \cite{wang2011constant,halmitcognitive} and TDMA link scheduling \cite{fangkeslinear,tonoyanlinear}. The communication ability of packet networks is characterized by the capacity region, i.e.\ the set of data rates that can be supported by any scheduling policy. In order to achieve \emph{low delays} (i.e.,\ polynomially bounded queues) and \emph{optimal throughput}, the classic result of Tassiulas and Ephremides \cite{TE92} and followup work in the area (e.g.~\cite{LinShroff04}) imply that {\wcapacity} is a core optimization problem that lies at the heart of such questions. This reduction applies to very general settings involving single-hop and multi-hop, as well as fixed and controlled transmission rate networks. Moreover, approximating {\textsc{Mwisl}} within any factor implies achieving the corresponding fraction of the capacity region. 
In general, even approximating the capacity region in polynomial time within a non-trivial bound, while keeping the delays low, is hard under standard complexity-theoretic assumptions~\cite{ShahTT11}. Methods with good performance, such as Carrier Sense Multiple Access (CSMA) \cite{jiang2010distributed}, necessarily require exponential time in the worst case \cite{JiangLNSW12}. Very few results on performance guarantees are known for problems involving rate control. The constant-factor approximation for {\textsc{Mwisl}} with uniform weights and arbitrary but fixed data rates proposed in \cite{KesselheimESA12} can be used to obtain $O(\log n)$-approximations for TDMA link scheduling and {\textsc{Mwisl}} with rate control, where $n$ is the number of links. A recent work~\cite{goussevskaia2016wireless} handles the TDMA scheduling problem with fixed but different rates, obtaining an approximation independent of the number of links $n$, but the ratio is polynomial in ${\Delta}$. The idea of modeling SINR with graphs arose early. Disc graphs were shown to be insufficient in general \cite{moscibrodaconnectivity}. However, it was observed rather early that equilength sets of links can be captured with unit disc graphs \cite{goussevskaiacomplexity}. In fact, a sandwiching result with constant tightness holds for equilength links \cite{us:talg12}. For links of widely varying lengths, less was known: an $O(\log\log \Delta \log n)$-tight construction was given in \cite{us:talg12}. \subsection{Modeling Issues} The SINR model has been an object of intense study given its closer fidelity to reality than binary models. Indeed, the additivity of interference and the near-threshold nature of signal reception have been well established in experiments. The model is far from perfect, though: the assumption of signal strength decreasing inversely polynomially with distance can be far off \cite{son2006,MaheshwariJD08,sevani2012sir,us:MSWiM14}. 
We discuss here the various proposed alternatives and explain why analysis in the pure SINR model is of fundamental importance. In stochastic analysis (see e.g.\ \cite{haenggi2009}), as well as in simulation studies, the canonical approach is to assume stochastic \emph{fading} or \emph{shadowing}, where signal strengths include a multiplicative random component. Such stochastic components are a natural fit for stochastic analysis, but less so for the every-case analysis aimed for here. The stochastic models \emph{seem} to generate instances that are similar to real ones, but little rigorous validation exists. The question is then what we can say about the instance at hand, rather than some distribution. One can distinguish between two types of stochastic effects: time-varying \emph{fading} and time-invariant \emph{shadowing}. It is typically assumed that shadowing is independent across space, and fading is independent across time. The only work we are aware of involving performance guarantee analysis in the presence of shadowing is our recent work \cite{us:mobihoc17}. It suggests that the usual assumptions about independence of the random variables across space have a major effect, as they can lead to counter-intuitive improvements in the size of optimal solutions. Much more remains to be considered on this front, though. For time-varying effects under Rayleigh fading, it has been shown that applying algorithms based on the deterministic formula results in nearly equally good results \cite{dams2015}. In fact, for {\wcapacity}, this only affects the constant factor \cite{us:mobihoc17}. Thus, asymptotic results in the standard non-fading model carry fully over to settings with Rayleigh fading, including our approximation ratios. For every-case analysis, a natural generalization of the SINR model would be to shed the geometry and allow for an arbitrary signal-quality matrix. 
One could in practice obtain this in the form of facts-on-the-ground signal strength measurements \cite{us:MSWiM14,us:PODC14}. This generalization is, however, too expensive as it runs into the computational intractability monster: with such a formulation one can encode the coloring problem in general graphs \cite{GoussevskaiaHW14}, which is known to be famously hard to approximate \cite{FeigeKilian}. A more moderate approach is to relax the Euclidean assumption to more general metric spaces, as first proposed in \cite{fangkeslinear}. We assume here doubling metrics \cite{us:talg12}, which has been a standard assumption when dealing with problems beyond unweighted throughput. Alternatively one can analyze algorithms in terms of some parameters of the signal-quality matrix. Such results then apply directly to the SINR model, but do not depend on the exact features of the model. The most successful such effort has involved the so-called \emph{inductive independence} number, proposed in \cite{HoeferK15}, which has been applied for spectrum auctions \cite{HoeferK15,HoeferKV14}, dynamic packet scheduling \cite{kesselheimStability}, online independent sets \cite{GHKSV14}, and connectivity \cite{HHMW17}. The parameter is known to be a constant in SINR settings with power control \cite{HHMW17}. Another parameter is \emph{$C$-independence}, proposed in \cite{dams2014jamming}, based on a formulation in \cite{infocom11}. It is constant-bounded in SINR models with uniform power \cite{infocom11}. It has been applied to the distributed optimization of (uniform weights) {\wcapacity} via learning \cite{infocom11}, and its extensions involving jamming \cite{dams2014jamming} and channel availabilities \cite{dams2013sleeping}. Both parameters, inductive independence and $C$-independence, however, are useful for unweighted throughput maximization, but have failed to give sublogarithmic bounds for weighted throughput or scheduling latency minimization thus far. 
Ultimately, the pure SINR model lies at the core of all these models. It is exact in free space, forms the base case under stochastic fading, and is the starting point for any of the worst-case models. It is essential to understand properly how this fundamental case works, and then relax the assumptions as much as possible. It does appear that the doubling metrics we use are necessary for the results of the kind that we obtain. It remains to be seen what can be done in other settings. \section{Conclusions} Our results suggest that a reassessment of the role of graphs as wireless models might be in order. By paying a small factor (recalling, as well, that $\log^*(x) \le 5$ in this universe), we can work at higher levels of abstraction, with all the algorithmic and analytical benefits that accrue. At the same time, hopes for fully constant-factor approximation algorithms for core scheduling problems may have somewhat receded. It would be interesting to see if other natural classes of hypergraphs admit efficient sketches. It would also be interesting to explore further properties of generalized disk graphs. \section*{Acknowledgements} We are grateful to Eyj\'olfur Ingi \'Asgeirsson for collaborations and experimentation. We thank Allan Borodin, Guy Even, Stephan Holzer, Calvin Newport and Roger Wattenhofer for helpful discussions. \bibliographystyle{abbrv}
\section{Introduction} In this paper we study the following {\it semilinear} partial differential equation \begin{equation}\label{u01} u_t+\frac{1}{2} {\sigma }^2 x^2 u_{xx}+ r x u_x +f(u)=0 \end{equation} which can be considered as a weakly nonlinear generalization of the celebrated Black--Scholes--Merton equation \begin{equation}\label{u02} u_t+\frac{1}{2} {\sigma }^2 x^2 u_{xx}+ r x u_x -r u =0 \end{equation} that plays a remarkable role in financial Mathematics. Usually the terminal condition \begin{equation}\label{tc} u(x,T)=1, \end{equation} is considered; it describes the evolution of standard or ``vanilla" products: the price of a zero-coupon bond (or of a financial option), $u(x,t)$, which is exercised when $t=T$ \cite{BlaScho72,BlaScho73,Me74,Ugur2k8}. Furthermore, other kinds of options, like the barrier option, may also satisfy Eq.~\eqref{u02}. A barrier option can be considered an exotic option and as such has features that make it more complex than the ``vanilla" option, \cite{ha2k11, Kwo2k8, HaSoLea2k13}. The underlying idea is that now a barrier $H(t)$ exists and when the asset price $x$ crosses it, the barrier option $u(x,t)$ becomes extinguished or comes into existence. Those two types are also known as \emph{down-and-out} and \emph{down-and-in} respectively. Often a rebate, $R(t)$, is paid if the option is extinguished. In what follows we shall consider the down-and-out type. In the context of the Black--Scholes--Merton equation, the barrier option is expressed by the conditions \begin{subequations}\label{barrierCond} \begin{align} &u(H(t),t)=R(t),\label{barrierCond1}\\ &u(x,T)=\max(x-K,0), \end{align} \end{subequations} where the barrier option $u(x,t)$ satisfies Eq.~\eqref{u02} for $x>H(t),\ t<T$, $T$ again is the terminal time at which the barrier option is exercised and $K$ is the strike price. 
A common assumption for the barrier function $H$ is to have the exponential form \begin{equation}\label{H0} H(t) = bKe^{-\alpha t}, \end{equation} where $\alpha\ge0$ and $0\le b\le1$ \cite[p. 187]{Kwo2k8}. Although Eq.~\eqref{u01} is just a mathematical abstraction, it can also be derived from the same stochastic argument as the Black--Scholes--Merton equation \eqref{u02}: in particular, by assuming that the wealth of the portfolio involves a nonlinear function of the value of an option $u(x,t)$ instead of a linear function, and utilizing the same boundary conditions \eqref{tc}, \eqref{barrierCond}. Finally, the constants $r$ and ${\sigma }^2$ represent the risk-free interest rate and the variance of the rate of the return on $u$ respectively. The main purpose of this paper is to carry out a complete group classification of a generalized Black--Scholes--Merton equation of type (\ref{u01}). Recall that to perform a complete group classification of a differential equation (or a system of differential equations) involving arbitrary functions or/and parameters means to find the Lie point symmetry group $\mathcal{G}$ for the most general case, and then to find specific forms of the differential equation for which $\mathcal{G}$ can be enlarged, \cite[p. 178]{Olver2k}. Quite often, there is good physical or geometrical motivation to study such cases. Moreover, for the Black--Scholes--Merton equation \eqref{u02} the motivation is a financial one: for more than three decades this equation has shown its value and gained the trust of the market. But as banks and hedge funds relied more and more on this equation and its siblings, they became more and more vulnerable to mistakes or over-simplifications in the mathematics involved in deriving them; it is a linear equation and as such cannot follow the dynamic nature and the intricacies of the modern market. 
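Before turning to the nonlinear cases, a quick symbolic check of the linear benchmark may be instructive. The snippet below (our own illustration, using the sympy library; it is not part of the paper) verifies that the zero-coupon bond price $u(x,t)=e^{-r(T-t)}$ satisfies Eq.~\eqref{u02} together with the terminal condition \eqref{tc}:

```python
import sympy as sp

# Our own sanity check (not part of the paper): the zero-coupon bond price
# u(x, t) = exp(-r (T - t)) solves the linear equation (u02) and satisfies
# the terminal condition u(x, T) = 1.
x, t, r, sigma, T = sp.symbols('x t r sigma T', positive=True)
u = sp.exp(-r * (T - t))

lhs = (sp.diff(u, t) + sp.Rational(1, 2) * sigma**2 * x**2 * sp.diff(u, x, 2)
       + r * x * sp.diff(u, x) - r * u)
assert sp.simplify(lhs) == 0    # u solves the Black--Scholes--Merton equation
assert u.subs(t, T) == 1        # and meets the terminal condition (tc)
```

Here $u$ is independent of $x$, so the diffusion and drift terms vanish and $u_t = ru$ balances the discounting term exactly.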
In \cite{GazIbr98,SinLeaHara2k8,DiAndTsLe2k9} the Black--Scholes--Merton equation (\ref{u02}) as well as other linear evolutionary equations which appear in financial Mathematics have been investigated from the point of view of the S. Lie symmetry theory. Also, the barrier option was studied recently for Eq.~\eqref{u02} under the same prism \cite{HaSoLea2k13}. The present work can be considered as an extension of that research. As such, the linear case $f(u)=\alpha u +\beta$, to avoid a repetition of already known and established facts, will not be treated explicitly in the present paper. It will only be briefly accounted for, for the purpose of completeness. Afterwards, for each nonlinear case of the obtained group classification we look for group invariant solutions, taking also into account the boundary conditions \eqref{tc}, \eqref{barrierCond}. Before we proceed with the group classification, we would like to recall some facts concerning the group analysis of differential equations. The symmetry analysis of differential equations is a method first developed in the 19th century by Sophus Lie. One of the main benefits of this method is that by following a completely algorithmic procedure one is able to determine the symmetries of a differential equation or systems of differential equations. \textit{Grosso modo}, the symmetries of a differential equation transform solutions of the equation to other solutions. The Lie point symmetries comprise a structural property of the equation; in essence, they are the DNA of an equation. The knowledge of the symmetries of an equation enables one to utilize them for a variety of purposes, from obtaining analytical solutions and reducing its order to finding integrating factors and conservation laws. In fact, many, if not all, of the different empirical methods for solving ordinary differential equations (ODEs) we have learned from standard courses at the undergraduate level emerge from a symmetry. 
For instance, having at our disposal a Lie point symmetry of a first order ODE, we can immediately get explicitly an integrating factor. Furthermore, even the knowledge of a trivial solution of the equation can be used for creating nontrivial solutions by using the equation's symmetries. And all these are due to the rich underlying algebraic structure of the Lie groups and algebras with which we give flesh to the symmetries of a differential equation. Another important characteristic of the symmetry method is that in some situations the symmetries of an equation may indicate that it can be transformed to a linear equation. In addition, its symmetries provide the means to construct the needed transformation. A strong indication for that is the existence of an infinite dimensional Lie algebra, \cite{bk}. Recently, it has been shown, by using only the algebraic properties of the symmetries of the equations involved, that the majority of the well-known differential equations used in economics are linked via an invertible transformation to the heat equation, \cite{DiAndTsLe2k9}. Although for some of the studied equations a transformation between them and the heat equation was already known by other means, the authors of \cite{DiAndTsLe2k9} used the algebraic properties of the symmetries of the equations and followed a straightforward and algorithmic approach. In fact, over the last forty years there has been considerable development in the mathematical analysis of partial differential equations which arise in financial Mathematics. However, when one reads various papers devoted to the resolution of evolution partial differential equations which arise as, more or less, the final stage of the mathematical modeling of some financial process, one can see some \textit{ad hoc} and naive procedures used to bring the considered equation under control. A viable alternative is the employment of the symmetry analysis of differential equations. 
Moreover, one valuable tool when considering classes of equations is the use of \emph{equivalence} or \emph{admissible transformations}, \cite{Ovsiannikov82,Ibra2k9,PoEshra2k5}. Equivalence transformations of a class of differential equations are point transformations that keep this class invariant; in other words, they map an equation from this class to another member of the same class. In recent years equivalence transformations have found much application, either as a stand-alone analytic tool for the group classification of differential equations, \cite{RoTo99}, or at the core of the \emph{enhanced group analysis}, \cite{CheSeRa2k8,CaBiPo2k11,IvaPoSo2k10,VaPoSo2k9}. One of the advantages of this approach, as already emphasized, is that it provides a well defined algorithmic procedure which essentially enables one to find the involved linearizing transformations, conservation laws, invariant solutions, etc. On the other hand, the calculations involved are usually very difficult and extensive even for the simplest equations. Thus, the process may become very tedious and error-prone. For this reason the real progress in this area occurred in the last few decades with the advances in computer technology and the development of computer algebra systems like {\it Mathematica, Maple, Reduce}, etc. Based on these systems, a handful of symbolic packages for determining the symmetries of differential equations exists, \cite{Head93,Nucci1,Nucci2,Baumann2k}. One such symbolic package, based on {\it Mathematica}, \cite{Wolfram2k10}, has been devised and developed by S. Dimas as part of his PhD thesis, \cite{Dimas2k8}. The package, named SYM, \cite{Dimas2k8,DiTs2k5a,DiTs2k6}, was developed from the ground up using the symbolic manipulation power of {\it Mathematica} and the artificial intelligence capabilities which it offers. It is being used by many researchers around the world. 
It was extensively used for all the results in the present paper, both for the interactive manipulation of the found symmetries and for the classification of the equations employing the symbolic tools provided by it. This paper is organized as follows. In section 2 we present the basic concepts of the Lie point symmetry approach to differential equations used in the paper. In section 3 we obtain the complete group classification of generalized Black--Scholes--Merton equations of type (\ref{u01}). In section 4 we provide invariant solutions for each nonlinear case found by the group classification under the two specific boundary problems studied, the ``vanilla" option and the barrier option. Finally, in section 5 we discuss the obtained results as well as possible applications. \section{Preliminaries} In this section we expose some notions of modern group analysis that will be encountered in the main sections of the article, suitably adapted to the article's needs. For a full treatment of the subject there is a wealth of classical texts that encompass all aspects of the theory, \cite{Olver2k,bk,Ibr85,Ovsiannikov82,Hydon2k,Stephani90}. A Lie point symmetry of Eq.~\eqref{u01} is a one-parameter transformation of the independent and dependent variables \begin{align}\label{LG} \bar x &= \bar x(x,t,u,\epsilon),\notag\\ \bar t &= \bar t(x,t,u,\epsilon),\\ \bar u &= \bar u(x,t,u,\epsilon),\notag \end{align} that keeps \eqref{u01} invariant: \begin{equation}\label{SC} \bar u_{\bar t}+\frac{1}{2} {\sigma }^2 \bar x^2 \bar u_{\bar x\bar x}+ r \bar x \bar u_{\bar x} +f(\bar u)=0. \end{equation} By substituting in \eqref{SC} the point transformations \eqref{LG} and using \eqref{u01} we obtain an equation called the \emph{symmetry condition}. From it the exact form of the point transformations is obtained. However, trying to determine the transformation \eqref{LG} from \eqref{SC} is more challenging than solving Eq.~\eqref{u01} itself! The novelty behind S. 
Lie's idea resides in \textit{linearizing} the problem of determining the symmetries of an equation. To do that, the notion of the \textit{infinitesimal generator} is introduced. Namely, the infinitesimal generator is a differential operator $$ X={\xi }^1\frac{\partial }{\partial x}+{\xi }^2\frac{\partial }{\partial t}+ \eta \frac{\partial }{\partial u}$$ with the functions ${\xi }^i={\xi}^i(x,t,u)$ and $\eta=\eta(x,t,u)$, called infinitesimals, defined as \begin{align}\label{infinitesimals} \xi^1 = \left.\frac{\partial \bar x}{\partial \varepsilon } \right\rvert_{\varepsilon =0}&,& \xi^2 = \left.\frac{\partial \bar t}{\partial \varepsilon } \right\rvert_{\varepsilon =0}&,& \eta = \left.\frac{\partial \bar u}{\partial \varepsilon }\right\rvert_{\varepsilon =0}. \end{align} An infinitesimal generator of this type determines a Lie point symmetry of Eq.~\eqref{u01} if and only if its action on the equation is, modulo the equation itself, identically zero, that is: \begin{equation}\label{lsc} \left.X^{(2)}\left[u_t+\frac{1}{2} {\sigma }^2 x^2 u_{xx}+ r x u_x +f(u)\right]\right\rvert_\eqref{u01} \equiv 0, \end{equation} where $X^{(2)}$ is the second order prolongation of the operator $X$ given by \begin{equation}\label{prolong} X^{(2)}= X + {\eta }^{(1)}_{i} \frac{\partial }{\partial u_i}+ {\eta }^{(2) }_{i_1 i_2} \frac{\partial }{\partial u_{i_1 i_2}},\ i,i_j=1,2 \end{equation} and \begin{equation}\label{prolongedCoefficients} {\eta }^{(1)}_{i}=D_i\eta - (D_i{\xi }^j)u_j,\,\text{and\ }\, {\eta }^{(2)}_{i_1 i_2} =D_{i_2} {\eta }^{(1)}_{i_1} - (D_{i_2}{\xi }^{j})u_{i_1 j},\ i,i_k,j=1,2 \end{equation} where $ u_i = \frac{\partial u}{\partial x^i},\ u_{i_1i_2} = \frac{\partial^2 u}{\partial x^{i_1}\partial x^{i_2}},\ i,i_j=1,2$ and $(x^1,x^2)=(x,t)$, with the Einstein summation convention implied over the indices. From Eq.~\eqref{lsc}, called the \emph{linearized symmetry condition}, an overdetermined system of linear partial differential equations emerges. 
By solving this system, called the \emph{determining equations}, we find the infinitesimals and hence we obtain the point symmetries of the equation. The group classification occurs at that phase. The determining equations also contain the parameters $\sigma, r$ and the function $f(u)$. The group classification is performed by investigating each case where specific relations among the unknown elements remove equations from the set of determining equations and hence expand the solution space. This set of symmetries, represented as differential operators, forms a Lie algebra. The system of ODEs \eqref{infinitesimals} with the addition of the conditions $\bar x\rvert_{\varepsilon=0}=x,\ \bar t\rvert_{\varepsilon=0}=t,\ \bar u\rvert_{\varepsilon=0}=u$ forms a well posed initial value problem. By solving it, we can obtain the corresponding local continuous transformations, which form a Lie group. This process is called \emph{exponentiation}. Henceforth, we shall identify a Lie point symmetry with its infinitesimal generator. Having the symmetries, as a Lie algebra, there is a wealth of things that can be done with them. In the present paper, we use them to obtain \emph{invariant} or \textit{similarity solutions} of Eq.~\eqref{u01}. By invariant solutions we mean solutions of \eqref{u01} that are invariant under one of the found symmetries $\mathfrak{X}$, e.g. \begin{equation}\label{isc} \left.\mathfrak{X}[u-\varphi (x,t)]\right\rvert_{u=\varphi (x,t)}\equiv0. \end{equation} Eq.~\eqref{isc} is a linear PDE called the \textit{invariant surface condition} and by solving it we obtain a way to reduce Eq.~\eqref{u01}. For example, the symmetry $\pd t$ yields the invariant surface condition $u_t=0$. Solving it, we get the invariant solution $u(x,t)=\phi(x)$ which, in turn, can be used to reduce Eq.~\eqref{u01}, effectively turning it from a PDE into an ODE. 
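The reduction induced by the time-translation symmetry $\partial_t$ can be carried out mechanically; a small sympy sketch (our own illustration, not from the paper) substitutes the invariant ansatz $u=\phi(x)$ into Eq.~\eqref{u01} and confirms that only an ODE in $x$ remains:

```python
import sympy as sp

# Illustrative sketch (not from the paper): the symmetry d/dt yields the
# invariant solution u(x, t) = phi(x); substituting it into Eq. (u01)
# leaves the ODE (1/2) sigma^2 x^2 phi'' + r x phi' + f(phi) = 0.
x, t, r, sigma = sp.symbols('x t r sigma')
phi = sp.Function('phi')
f = sp.Function('f')

u = phi(x)                              # invariant ansatz, so u_t = 0
pde = (sp.diff(u, t) + sp.Rational(1, 2) * sigma**2 * x**2 * sp.diff(u, x, 2)
       + r * x * sp.diff(u, x) + f(u))

# The time variable has dropped out entirely: what is left is an ODE in x.
assert sp.diff(u, t) == 0
assert t not in pde.free_symbols
```

The same mechanical substitution applies to any other one-dimensional subalgebra found in the classification; only the invariant ansatz changes.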
Similarly, when we look for a similarity solution of Eq.~\eqref{u01} along with an initial/boundary condition, we have to choose the subalgebra that also leaves invariant that condition and its boundary: \begin{equation}\label{tca} \left.X(t-T)\right\rvert_{t=T}\equiv0 \end{equation} and \begin{equation}\label{tcb} \left.X(u-1)\right\rvert_{t=T}\equiv0. \end{equation} for the boundary condition \eqref{tc}. And \begin{equation}\label{boa} \left.X(x-H(t))\right\rvert_{x=H(t)}\equiv0 \end{equation} and \begin{equation}\label{bob} \left.X(u-R(t))\right\rvert_{x=H(t)}\equiv0. \end{equation} for the boundary condition \eqref{barrierCond1}. For obtaining the equivalence transformations there are two possible roads. The first one is a direct search for the equivalence transformations, an approach that gives in theory the most general set of equivalence transformations, the equivalence group. But it has the same pitfalls as trying to obtain the symmetry group for the equation, as briefly discussed already. The second approach is through the calculation of the \emph{equivalence algebra} from which the continuous equivalence group can be obtained. In the present work the second road will be realized by complementing the usual prolongation of the infinitesimal generator with a \emph{secondary prolongation}, \cite{Ibr85}. 
To calculate the equivalence algebra, an extension of Eq.~\eqref{u01} must be considered with the arbitrary elements $\sigma, r, f$, now functions of $x,t,u$, and by including the following constraints on them, $$ \sigma_x=\sigma_t=\sigma_u=r_x=r_t=r_u=f_x=f_t=0.$$ For this extended system the infinitesimal generator is \begin{equation} \label{eig} X={\xi }^1\frac{\partial }{\partial x}+{\xi }^2\frac{\partial }{\partial t}+ \eta \frac{\partial }{\partial u}+ \phi^1 \frac{\partial }{\partial \sigma}+ \phi^2 \frac{\partial }{\partial r}+ \phi^3 \frac{\partial }{\partial f}, \end{equation} where now the coefficients of this operator depend on the extended space: ${\xi }^i={\xi}^i(x,t,u,\sigma,r,f)$, $\eta=\eta(x,t,u,\sigma,r,f)$ and $\phi^i=\phi^i(x,t,u,\sigma,r,f)$. The second prolongation needed to obtain the determining equation now becomes \begin{multline}\label{prolong2} X^{(2)}= X + {\eta }^{(1)}_{i} \frac{\partial }{\partial u_i}+ {\eta }^{(2) }_{i_1 i_2} \frac{\partial }{\partial u_{i_1 i_2}}+ {\phi}^{1,(1)}_{j} \frac{\partial }{\partial \sigma_j}+ {\phi}^{2,(1)}_{j} \frac{\partial }{\partial r_j}\\ + {\phi}^{3,(1)}_{j} \frac{\partial }{\partial f_j},\ i,i_k=1,2,\ j=1,2,3 \end{multline} where the coefficients $ {\eta }^{(1)}_{i}, {\eta }^{(2) }_{i_1 i_2}$ are calculated as usual by the formula \eqref{prolongedCoefficients} while for the coefficients ${\phi}^{i,(1)}_{j} $ with the secondary prolongation, $$ {\phi }^{i, (1)}_{j}=\tilde D_j\phi^i - (\tilde D_j{\xi }^1)p^i_x- (\tilde D_j{\xi }^2)p^i_t- (\tilde D_j\eta )p^i_u,\ , i,j=1,2,3 $$ where $(p^1,p^2,p^3 )=(\sigma,r,f)$, $(x^1,x^2,x^3)=(x,t,u)$ and $$ \tilde D_j = \frac{\partial}{\partial x^j}+ p^i_{x^j} \frac{\partial }{\partial p^i}, $$ again the Einstein summation convention is used for the index $i$. After that point we follow Lie's algorithm as usual. Having the equivalence algebra by exponentiation one can obtain the continuous part of the equivalence group. By using the method proposed in \cite[pp. 
187 c.f.]{Hydon2k}, \cite{Hy2k} one can also obtain the discrete part and hence retrieve the whole set of equivalence transformations permissible by this class of equations. Another useful notion is that of the \emph{additional equivalence transformation}. An additional equivalence transformation is a point transformation that connects inequivalent classes of differential equations. The knowledge of such transformations greatly helps the classification. \section{Group classification} In this section we proceed with the group classification of the generalized Black--Scholes--Merton equation \eqref{u01}. First, the best representative for the class of equations \eqref{u01} is obtained utilizing its equivalence algebra. To do that, the continuous part of the equivalence group is constructed and with its help as many arbitrary elements as possible are set to zero. \begin{thm} The equivalence algebra $\hat{\mathcal{L}}_\mathcal{E}$ of class \eqref{u01} is generated by the following vector fields \begin{gather*} \partial _t,\ \partial _u,\ x\partial _x,\ \partial _r+t x\partial _x,\ f\partial _f+u\partial _u,\\ x \left(2 r t-t \sigma ^2+2 \log \lvert x\rvert\right)\partial _x+4 t\partial _t-4 f\partial _f, \\ x \left(\left(2 r+\sigma ^2\right)t-2 \log\lvert x\rvert\right)\partial _x -2 \sigma \partial _{\sigma }. 
\end{gather*} \end{thm} \begin{proof} By applying the second order prolongation \eqref{prolong2} of the infinitesimal generator \eqref{eig} to the extended system \begin{gather*} u_t+\frac{1}{2} {\sigma }(x,t,u)^2 x^2 u_{xx}+ r(x,t,u) x u_x +f(x,t,u)=0,\\ \sigma_x=\sigma_t=\sigma_u=r_x=r_t=r_u=f_x=f_t=0, \end{gather*} modulo the extended system itself, we get the system of determining equations: \begin{gather*} {\eta_3}_{f}=0,\ {\eta_4}_{f}=0,\ \xi ^2_{f}=0,\ \xi ^2_{f}=0,\ \xi ^2_{f}=0,\ \xi ^2_{ff}=0,\ {\eta_3}_{u}=0,\ {\eta_4}_{u}=0,\\ \xi ^2_{u}=0,\ \xi ^2_{uf}=0,\ \xi ^2_{uu}=0,\ {\eta_1}_{t}=0,\ {\eta_2}_{t}=0,\ {\eta_3}_{t}=0,\ {\eta_4}_{t}=0,\ {\eta_1}_{x}=0,\\ {\eta_2}_{x}=0,\ {\eta_3}_{x}=0,\ {\eta_4}_{x}=0,\ \xi ^2_{x}=0,\ {\eta_1}_{f}+f \xi ^2_{f}=0,\ {\eta_1}_{f}+f \xi ^2_{f}=0,\\ \xi ^1_{f}-r x \xi ^2_{f}=0,\ \xi ^1_{ff}-r x \xi ^2_{ff}=0,\ \xi ^1_{uf}-r x \xi ^2_{uf}=0,\ \xi ^1_{uu}-r x \xi ^2_{uu}=0,\\ 2 \xi ^2_{f}+{\eta_1}_{ff}+f \xi ^2_{ff}=0,\ 2 \xi ^1_{f}-x \left(2 \left(r+\sigma ^2\right) \xi ^2_{f}+x \sigma ^2 \xi ^2_{xf}\right)=0,\\ 2 \xi ^1_{u}-x \left(2 \left(r+\sigma ^2\right) \xi ^2_{u}+x \sigma ^2 \xi ^2_{xu}\right)=0,\\ 2 r \xi ^2_{u}+{\eta_1}_{uu}+f \xi ^2_{uu}-2 \xi ^1_{xu}+2 r x \xi ^2_{xu}=0, \\ r \xi ^2_{f}+\xi ^2_{u}+{\eta_1}_{uf}+f \xi ^2_{uf}-\xi ^1_{xf}+r x \xi ^2_{xf}=0,\\ f \xi ^1_{f}+x \left(-f r \xi ^2_{f}+x \sigma ^2 \left(\xi ^2_{x}+{\eta_1}_{xf}+f \xi ^2_{xf}\right)\right)=0,\\ 4 x {\eta_3}+\sigma \left(4 \xi ^1+x \left(-2 f \xi ^2_{u}+2 \xi ^2_{t}-4 \xi ^1_{x}+6 r x \xi ^2_{x}+4 x \sigma ^2 \xi ^2_{x}+x^2 \sigma ^2 \xi ^2_{xx}\right)\right)=0, \\ \begin{split} 2 {\eta_2}-2 f {\eta_1}_{u}-2 f^2 \xi ^2_{u}+2 {\eta_1}_{t}+2 f \xi ^2_{t}+2 r x {\eta_1}_{x}+2 f r x \xi ^2_{x}\\ +\sigma ^2 x^2( {\eta_1}_{xx}+f \xi ^2_{xx})=0 \end{split}\\ \begin{split} 2 x {\eta_4}+2 r \xi ^1+2 f \xi ^1_{u}-2 f r x \xi ^2_{u}-2 \xi ^1_{t}+2 r x \xi ^2_{t}-2 r x \xi ^1_{x}+2 r^2 x^2 \xi ^2_{x}+2 r x^2 \sigma ^2 \xi ^2_{x}\\ +2 x^2 \sigma ^2 
{\eta_1}_{xu}+2 f x^2 \sigma ^2 \xi ^2_{xu}-x^2 \sigma ^2 \xi ^1_{xx}+r x^3 \sigma ^2 \xi ^2_{xx}=0. \end{split} \end{gather*} Solving the above system, the equivalence algebra $\hat{\mathcal{L}}_\mathcal{E}$ is obtained. \end{proof} \begin{lem} The continuous part of the equivalence group, $\hat{\mathcal{E}}_\mathcal{C}$, consists of the transformations \begin{align*} \tilde x &= e^{\frac{1}{2} t \delta _6 \left(\left(\sigma ^2-2 r\right) \delta _7-\sigma ^2 \delta _7^2 \delta _6+2 \delta _6 (r+\delta_5)\right)} \lvert x\rvert^{\delta _6 \delta _7} \delta _4,\\ \tilde t &= \delta _1+ \delta _6^2t,\quad\tilde u = \delta _2+ \delta _3u,\quad\tilde r = r+\delta _5,\quad\tilde\sigma = \delta _7\sigma ,\quad\tilde f = \frac{ \delta _3}{\delta _6^2}f, \end{align*} where $\delta_i$ are arbitrary constants and $\delta_3,\delta_4,\delta_6,\delta_7\ne0$. \end{lem} \begin{rem} Since the transformation for $x$ depends also on the arbitrary elements of Eq.~\eqref{u01}, $\hat{\mathcal{E}}_\mathcal{C}$ is also called the continuous part of the \emph{generalized equivalence group}. If one had chosen to assume that the equivalence transformations for $x,t,u$ do not depend on the arbitrary elements $r,\sigma,f$, i.e. $\xi^i = \xi^i(x,t,u),\ \eta=\eta(x,t,u)$ in \eqref{eig}, then the continuous part of the \emph{usual equivalence group} $\mathcal{E}_\mathcal{C}$ would be obtained. \end{rem} Readily, using $\hat{\mathcal{E}}_\mathcal{C}$ one can find an equivalence transformation such that $\tilde\sigma\rightarrow\sqrt{2}$ and $\tilde r\rightarrow0$: \begin{equation}\label{equivTrans} \tilde x = e^{\left(\frac{\sigma ^2-2 r}{\sqrt{2} \sigma }-1\right)t} \lvert x\rvert^{\frac{\sqrt{2}}{\sigma }},\quad\tilde t = t,\quad\tilde u = u,\quad\tilde r = 0,\quad\tilde\sigma = \sqrt2,\quad\tilde f = f.
\end{equation} Using \eqref{equivTrans}, Eq.~\eqref{u01} turns into the equation \begin{equation}\label{u01a} \tilde u_{\tilde t}+ \tilde x^2 \tilde u_{\tilde x\tilde x} +\tilde f(\tilde u)=0. \end{equation} Next, using the additional equivalence transformation \begin{equation} \hat x = \log\lvert \tilde x\rvert,\quad \hat t=t,\quad\hat u = \lvert\tilde x\rvert^{-1/2} \tilde u, \end{equation} the heat equation with a nonlinear source is obtained (for clarity henceforth the hats are dropped) \begin{equation}\label{heat} u_{ t}+ u_{ x x}+e^{- x/2}f(e^{ x/2} u)-\frac{1}{4} u=0. \end{equation} Therefore, without any loss of generality, Eq.~\eqref{heat} will be classified instead. In addition, the terminal condition \eqref{tc} is transformed to the condition \begin{equation}\label{tc2} u(x,T) = \exp(-x/2) \end{equation} and for the barrier option condition \eqref{barrierCond1} we have: \begin{equation}\label{barrierCond1a} e^{\frac{1}{2}\left(\frac{\sigma^2-2r}{\sqrt{2}\sigma}t-1\right)}\lvert H(t)\rvert^\frac{\sqrt{2}}{2\sigma}u\left(\left(\frac{\sigma^2-2r}{\sqrt{2}\sigma}-1 \right)t+\frac{\log\lvert H(t) \rvert}{\sqrt{2}\sigma},t\right)=R(t) \end{equation} Finally, the equivalence group for Eq.~\eqref{heat} is given by the following theorem \begin{thm} The equivalence group, $\mathcal{E}$, of Eq.~\eqref{heat} consists of the transformations \begin{align*} \tilde x &=\delta_5\beta x + (\beta-\delta_5)\delta_5 t + 2 \delta_4,\quad\tilde t = \delta _5 t+\delta _1,\\ \tilde u &=e^{-\frac{1}{2}\left(\delta_5 x-(\delta_5-1)\delta_5t+2\delta _4\right)}\left(\alpha \delta_3 e^{\frac{x-(\beta-1)(x+t)}{2}} u+\delta_2 \right),\quad\tilde f =\alpha \frac{ \delta _3}{\delta _5^2}f, \end{align*} where $\delta_i$ are arbitrary constants, $\delta_3,\delta_5\ne0$ and $\alpha,\beta=\pm1$. \end{thm} \begin{proof} The process is analogous to the one for Eq.~\eqref{u01} with the only difference that now we have only one arbitrary element, the function $f$. 
In addition, using the process described in \cite{Hy2k} we find the four discrete equivalence transformations $$ (x,t,u,f)\rightarrow(\beta x+(\beta-1)t,t,\alpha e^{-\frac{1}{2}(\beta-1)(x+t)} u,\alpha f), $$ where $\alpha,\beta=\pm1$. Together the two sets of transformations comprise the usual equivalence group of transformations $\mathcal{E}$. \end{proof} Equation \eqref{heat} belongs to the class \begin{equation*} u_t = u_{xx} + f(t,x,u,u_x) \end{equation*} that describes nonlinear heat conductivity processes. The above class was completely classified in \cite{ZhdaLa99}; hence, apart from presenting the classification equation \begin{multline*} \frac{1}{2} \left(u f^\prime\left(e^{x/2} u\right)-e^{-x/2} f\left(e^{x/2} u\right)\right) \left(\mathcal{F}_3(t)+\frac{x \mathcal{F}_2^\prime(t)}{2}\right)+u \mathcal{F}^\prime_4(t)\\ +\left(e^{-x/2} f\left(e^{x/2} u\right)-\frac{u}{4}\right) \left(\mathcal{F}^\prime_2(t)-\mathcal{F}_4(t)-\frac{1}{8} x \left(4 \mathcal{F}^\prime_3(t)+x \mathcal{F}_2^{\prime \prime}(t)\right)\right)\\ +\left(f^\prime\left(e^{x/2} u\right)-\frac{1}{4}\right) \left(\mathcal{F}_1(x,t)+\frac{1}{8} u \left(8 \mathcal{F}_4(t)+x \left(4 \mathcal{F}_3^\prime(t)+x \mathcal{F}_2{}^{\prime \prime }(t)\right)\right)\right)\\ +\frac{1}{8} u \left(2 \mathcal{F}_2^{\prime \prime }(t)+x \left(4 \mathcal{F}_3^{\prime \prime }(t)+x \mathcal{F}_2^{\prime \prime \prime}(t)\right)\right)+{\mathcal{F}_1}_{t}(x,t)+{\mathcal{F}_1}_{xx}(x,t)=0 \end{multline*} no further details of the calculations involved will be shown. We proceed with presenting the resulting classification. \begin{enumerate} \item For an arbitrary $f$ the Lie point symmetries of (\ref{heat}) are determined by the infinitesimal generators: \begin{align}\label{u03} \mathfrak{X}_1&=\frac{\partial}{\partial t},& \mathfrak{X}_2&=2 \frac{\partial}{\partial x}-u\frac{\partial}{\partial u}.
\end{align} \item For $f(\zeta) =-\frac{\gamma}{\beta}(\alpha +\beta \zeta)\left(\delta +\log\lvert\alpha+\beta\zeta\rvert\right),\,\beta,\gamma\neq 0$, in addition to the symmetries \eqref{u03} we get also the symmetries \begin{subequations}\label{u03a} \begin{align} \mathfrak{X}_3 &= \frac{\alpha+\beta e^{x/2}u}{\beta}e^{-\frac{x}{2}+\gamma t}\frac{\partial}{\partial u},\\ \mathfrak{X}_4 &= \frac{2}{\gamma}e^{\gamma t}\frac{\partial}{\partial x} + \frac{\beta \gamma (t+x)e^{x/2}u + \alpha (1+\gamma (t+x) )}{\beta\gamma}e^{-\frac{x}{2}+\gamma t}\frac{\partial}{\partial u}. \end{align} \end{subequations} \item For $f(\zeta)=\alpha(\zeta-\beta)^2,\,\alpha\ne0$, in addition to the symmetries \eqref{u03} we have the symmetry \begin{align}\label{u03b} X_3&=2(x-t)\pd{x}+4t\pd{t}+((t-x-4)u+4\beta e^{-x/2})\pd{u}. \end{align} \item For completeness, we present also the additional symmetries for the linear cases: \begin{itemize} \item $f(\zeta)=\beta \zeta+\alpha,\, \beta\ne0$: \begin{align*} X_3&=(u+\frac{\alpha}{\beta}e^{-x/2})\pd u,\\ X_4&=2t\pd x+\left(u x+\frac{ \alpha(t+x)e^{-x/2} }{\beta }\right)\pd u,\\ X_5&=2x\pd x+4t\pd t+\frac{\left(\alpha x- (4 \beta -1) \left(\alpha +\beta e^{x/2} u \right)t\right)}{\beta }e^{-x/2} \pd u,\\ X_6&=4xt\pd x+4t^2\pd t+\frac{1}{\beta}\left(\alpha \left(2 (x-1)t+x^2+(1-4 \beta )t^2 \right)e^{-x/2}\right.\\ &\left.+\beta \left(x^2+(t-4\beta t -2)\right)tu\right)\pd u,\\ X_\infty&= \mathcal{F}(x,t)\pd u, \end{align*} where the smooth function $ \mathcal{F} =\mathcal{F}(x,t)$ satisfies the linear equation $$\mathcal{F}_{t}+ \mathcal{F}_{xx}+(\alpha-\frac{1}{4})\mathcal{F}=0$$ and, \item $f(\zeta)=\alpha$: \begin{align*} X_3&=(u+\alpha x e^{-x/2})\pd u,\\ X_4&=4t\pd x+\left(2 e^{x/2} x u+\alpha\left(t^2-x (x+2)\right) \right)e^{-x/2} \pd u,\\ X_5&=4x\pd x+8t\pd t+ \left(2 e^{x/2} t u+\alpha\left(t^2-(x-6) x\right) \right)e^{-x/2}\pd u,\\ X_6&=12xt\pd x+12t^2\pd t+\left(3 \left((t-2) t+x^2\right)u\right.\\ &\left.+\alpha \left(t^3-12 
t^2-3 t x^2-2 x (x (x+3)+6)\right)e^{-x/2} \right)\pd u,\\ X_\infty&= \mathcal{F}(x,t)\pd u, \end{align*} \end{itemize} where the smooth function $ \mathcal{F} =\mathcal{F}(x,t)$ satisfies the linear equation $$\mathcal{F}_{t}+ \mathcal{F}_{xx}+(\alpha-\frac{1}{4})\mathcal{F}=0.$$ \end{enumerate} \begin{rem} Looking at the corresponding Lie algebras for the two linear cases, it is evident that they are linked to the heat equation via a point transformation, an additional equivalence transformation \cite{DiAndTsLe2k9}, a fact well established in the literature \cite{GazIbr98,SinLeaHara2k8}. \end{rem} \section{Invariant solutions} Having obtained the complete group classification for Eq.~\eqref{heat}, and consequently for Eq.~\eqref{u01}, we can look for invariant solutions under the terminal condition \eqref{tc2} and the barrier option condition \eqref{barrierCond1a}: for each of the two nonlinear cases the appropriate subalgebra of symmetries also admitted by each problem is found using the two required conditions \eqref{tca}, \eqref{tcb} and \eqref{boa}, \eqref{bob} adapted now to Eq.~\eqref{heat}. Finally, by using the subalgebra obtained for every subcase that surfaced from the two conditions a similarity solution is constructed. \subsection{The terminal condition} \begin{case}$f(\zeta)=\alpha(\zeta-\beta)^2,\,\alpha\ne0$ \end{case} Let the arbitrary element of the Lie algebra spanned by \eqref{u03} and \eqref{u03b} be $\mathfrak X = c_1 \mathfrak X_1+c_2 \mathfrak X_2+c_3 \mathfrak X_3$. Using \eqref{tca}, \eqref{tcb} and \eqref{tc2} we obtain the conditions: $$ c_1=-4Tc_3 $$ and $$ (\beta-1)c_3=0. $$ From the above conditions two specific subcases occur: \begin{subcase} $c_1=c_3=0,\ c_2\ne0$ \end{subcase} For this case the only symmetry that keeps invariant both Eq.~\eqref{heat} and \eqref{tc2} is $ \mathfrak{X}_2=2 \partial_x-u\partial_u$.
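This invariance can be checked independently of the determining-equation machinery. The following SymPy sketch (an illustration added here, not part of the original derivation) verifies that the finite transformation generated by $\mathfrak{X}_2=2\partial_x-u\partial_u$, namely $x\to x+2\varepsilon$, $u\to e^{-\varepsilon}u$, maps solutions of \eqref{heat} to solutions for an arbitrary source $f$, and that it preserves the terminal condition \eqref{tc2}:

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon', real=True)
u = sp.Function('u')
f = sp.Function('f')  # arbitrary source term of the transformed equation

def heat_op(w):
    # left-hand side of u_t + u_xx + exp(-x/2)*f(exp(x/2)*u) - u/4 = 0
    return (sp.diff(w, t) + sp.diff(w, x, 2)
            + sp.exp(-x/2)*f(sp.exp(x/2)*w) - w/4)

# finite transformation generated by X2: x -> x + 2*eps, u -> exp(-eps)*u
v = sp.exp(-eps) * u(x - 2*eps, t)

# the transformed function satisfies the same equation, up to the overall
# factor exp(-eps), whenever u does
residual = heat_op(v).subs(x, x + 2*eps).doit() - sp.exp(-eps)*heat_op(u(x, t))
assert sp.simplify(residual) == 0

# the terminal condition u(x, T) = exp(-x/2) is preserved as well:
# exp(-eps) * exp(-(x - 2*eps)/2) == exp(-x/2)
vT = sp.exp(-eps) * sp.exp(-(x - 2*eps)/2)
assert sp.simplify(vT - sp.exp(-x/2)) == 0
```

The same computation with the generators $\mathfrak X_3$, $\mathfrak X_4$ of the nonlinear cases would confirm why only $\mathfrak X_2$ survives the condition $c_1=c_3=0$.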
\begin{subcase} $\beta=1,\ c_1=-4Tc_3$ \end{subcase} For this case the only symmetries that keep invariant both Eq.~\eqref{heat} and \eqref{tc2} are \begin{align*} Z_1&=2 \partial_x-u\partial_u,\\ Z_2&=2(x-t)\partial_x+4(t-T)\partial_t+((t-x-4)u+4 e^{-x/2})\partial_u. \end{align*} \begin{case} $f(\zeta) =-\frac{\gamma}{\beta}(\alpha +\beta \zeta)\left(\delta +\log\lvert\alpha+\beta\zeta\rvert\right),\,\beta,\gamma\neq 0$ \end{case} Let the arbitrary element of the Lie algebra spanned by \eqref{u03} and \eqref{u03a} be $\mathfrak X = c_1 \mathfrak X_1+c_2 \mathfrak X_2+c_3 \mathfrak X_3+c_4\mathfrak X_4$. Using \eqref{tca}, \eqref{tcb} and \eqref{tc2} the conditions are $$ c_1=0 $$ and $$ (\alpha +\beta) \left(c_4+\gamma \left(c_3+(t+x)c_4\right)\right)=0. $$ From the above conditions two specific subcases occur: \begin{subcase} $c_1=c_3=c_4=0,\ c_2\ne0$ \end{subcase} For this case the only symmetry that keeps invariant both Eq.~\eqref{heat} and \eqref{tc2} is $ \mathfrak{X}_2=2 \partial_x-u\partial_u$. \begin{subcase} $\beta=-\alpha\ne0,\ \gamma\ne0,\ c_1=0$ \end{subcase} For this case the only symmetries that keep invariant both Eq.~\eqref{heat} and \eqref{tc2} are \begin{align*} Z_1&=2 \partial_x-u\partial_u,\\ Z_2&=(1-e^{x/2}u)e^{-\frac{x}{2}+\gamma t}\partial_u,\\ Z_3&= \frac{2}{\gamma}e^{\gamma t}\partial_x + \frac{\gamma (t+x)e^{x/2}u - (1+\gamma (t+x) )}{\gamma}e^{-\frac{x}{2}+\gamma t}\partial_u. \end{align*} The common denominator for all the above cases is the symmetry $2 \partial_x-u\partial_u$. This symmetry gives the invariant solution $$ u(x,t)= C(t) e^{-x/2}. $$ Going back to Eq.~\eqref{u01} we see that it corresponds to the \emph{trivial} assumption $$ u(x,t) = C(t), $$ i.e. solutions that do not depend on the variable $x$.
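For a concrete illustration (a sketch added here, not part of the original text), one can confirm with SymPy that the ansatz $u=C(t)e^{-x/2}$ reduces Eq.~\eqref{heat} to a first-order ODE for $C$ alone, taking the quadratic source $f(\zeta)=\alpha(\zeta-\beta)^2$ of the first case as an example:

```python
import sympy as sp

x, t, alpha, beta = sp.symbols('x t alpha beta', real=True)
C = sp.Function('C')

# ansatz invariant under 2*d_x - u*d_u, with the quadratic source of Case 1
u = C(t) * sp.exp(-x/2)
f = lambda z: alpha * (z - beta)**2

residual = (sp.diff(u, t) + sp.diff(u, x, 2)
            + sp.exp(-x/2) * f(sp.exp(x/2) * u) - u/4)

# all x-dependence factors out as exp(-x/2); what remains is an ODE for C(t)
ode = sp.simplify(residual * sp.exp(x/2))
assert sp.simplify(ode - (sp.Derivative(C(t), t) + alpha*(C(t) - beta)**2)) == 0
```

The resulting ODE $C'+\alpha(C-\beta)^2=0$, solved by $C(t)=\beta+1/(\alpha(t-t_0))$, is precisely the $x$-independent reduction $u_t+f(u)=0$ of Eq.~\eqref{u01}, in line with the trivial character of these solutions.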
\subsection{The condition for barrier option} \begin{case}$f(\zeta)=\alpha(\zeta-\beta)^2,\,\alpha\ne0$ \end{case} Let the arbitrary element of the Lie algebra spanned by \eqref{u03} and \eqref{u03b} be $\mathfrak X = c_1 \mathfrak X_1+c_2 \mathfrak X_2+c_3 \mathfrak X_3$. Using \eqref{boa} and the equivalence transformations the first condition turns into the ODE \begin{multline}\label{ode1} \left(2 \sqrt{2} r \left(c_1+2 t c_3\right)+\sigma \left(\left(2-\sqrt{2} \sigma \right) c_1+4 c_2-2 \sqrt{2} t\sigma c_3\right)\right)H\\ +4 \sqrt{2} c_3 H\log\lvert H\rvert -2 \sqrt{2} \left(c_1+4 t c_3\right) H^\prime=0 \end{multline} and two cases are discerned, $c_3\ne0$ and $c_3=0$: \begin{subcase} $c_3\ne0$ \end{subcase} The solution of \eqref{ode1} is \begin{equation}\label{H1a} H(t)=e^{-\frac{\sigma}{\sqrt{2} }\lambda+(r- \frac{1}{2} \sigma ^2)\left(t+\mathcal A \sqrt{\kappa+t}\right)}, \end{equation} where $\kappa = \frac{c_1}{4c_3},\ \lambda=\frac{c_1+2c_2}{2c_3}$ and $\mathcal A$ is the constant of integration. Using this solution together with conditions \eqref{bob} and \eqref{barrierCond1a} we get the ODE \begin{equation}\label{ode2} \beta -R-\left(\kappa+t \right) R^\prime=0. \end{equation} Its solution is \begin{equation}\label{R1a} R(t)=\frac{\mathcal B+ \beta t}{\kappa+t}, \end{equation} where $\mathcal B$ is the constant of integration. Having found the functions $H,R$ admitted by the symmetries we proceed with the reduction of Eq.~\eqref{heat}. From the invariant surface condition \eqref{isc} the invariant solution is \begin{equation}\label{u1a} u(x,t)=\frac{e^{-\frac{1}{2} \left(x+\lambda\right)} \left(4 e^{\frac{\lambda}{2}} \beta t +F\left(\frac{x + t+\lambda}{\sqrt{\kappa +t}}\right)\right)}{4(\kappa +t)}.
\end{equation} Substituting \eqref{u1a} into \eqref{heat} for this particular $f$ we arrive at the reduction \begin{equation*} 16 e^{\frac{1}{2} \lambda} \beta \kappa (\beta \kappa +\frac{1}{ \alpha} )-8( \frac{1}{ 2\alpha}+ \beta \kappa ) F(\zeta)+e^{-\frac{1}{2} \lambda} F(\zeta)^2-\frac{2}{ \alpha} \zeta F^\prime+\frac{1}{ \alpha} F^{\prime \prime }=0, \end{equation*} where $\zeta = \frac{x + t+\lambda}{\sqrt{\kappa +t}}$. Although the general solution of this equation cannot be found in closed form, it has the special solution $$ F(\zeta) = 4e^{\frac{1}{2} \lambda } \left(\beta \kappa-\frac{3 }{2\alpha \zeta ^2}\right). $$ Using it in combination with \eqref{u1a} we arrive at the invariant solution $$ u(x,t) = \frac{ \left(\beta (t+x+\lambda)^2-\frac{6}{\alpha}\right)}{(t+x+\lambda )^2}e^{-x/2}. $$ Finally, using the equivalence transformations and the boundary condition \eqref{barrierCond1} we arrive at the similarity solution for Eq.~\eqref{u01} with $f(\zeta)=\alpha(\zeta-\beta)^2$ $$ u(x,t) = \beta -\frac{24 \sigma ^2}{\alpha \left(2 \lambda \sigma +\sqrt{2}( \sigma ^2-2 r)t+2 \sqrt{2} \log\lvert x\rvert\right)^2} $$ with $$ H(t) = e^{-\frac{\sigma}{\sqrt{2} }\lambda+(r- \frac{1}{2} \sigma ^2)\left(t+\mathcal A \sqrt{\kappa+t}\right)} $$ and $$ R(t)=\beta -\frac{12 \sigma ^2}{\mathcal A^2 \alpha (t+\kappa ) \left(\sigma ^2-2r\right)^2}. $$ \begin{subcase} $c_3=0$ \end{subcase} For $c_3=0$ the solution of \eqref{ode1} becomes \begin{equation}\label{H1b} H(t)=\mathcal Ae^{\frac{1}{2}(2r - \sigma^2)t+ \lambda t}, \end{equation} where $\lambda=\frac{\left(c_1+2 c_2\right)}{\sqrt{2}c_1}\sigma$\footnote{$c_1\ne0$, otherwise $c_2$ must also be zero.} and $\mathcal A\ne0$ is the constant of integration. Using this solution together with conditions \eqref{bob} and \eqref{barrierCond1a} we get the ODE \begin{equation}\label{ode3} c_1 R^\prime=0.
\end{equation} Hence \begin{equation}\label{R1b} R(t)= \mathcal B, \end{equation} where $\mathcal B$ is constant. Having found the functions $H,R$ admitted by the symmetries we proceed with the reduction of Eq.~\eqref{heat}. From the invariant surface condition \eqref{isc} the invariant solution is \begin{equation}\label{u1b} u(x,t)=e^{-x/2} F\left(\frac{x+t \left(1-\sqrt{2} \lambda \right)}{1-\sqrt{2} \lambda }\right). \end{equation} Substituting \eqref{u1b} into \eqref{heat} for this particular $f$ we arrive at the reduction \begin{equation*} \left(\sqrt{2} \lambda -1\right) \left(\alpha\left(1-\sqrt{2} \lambda \right) (\beta -F(\zeta))^2-\sqrt{2} \lambda F^\prime\right)- F^{\prime \prime }=0, \end{equation*} where $\zeta = \frac{x+t \left(1-\sqrt{2} \lambda \right)}{1-\sqrt{2} \lambda }$. Although this equation cannot be solved analytically, for $\lambda=0$\footnote{Actually, for $\lambda=0$ its general solution can be given, but in implicit form containing transcendental functions, hence not suitable for our analysis.} it has the special solution $$ F(\zeta) =\beta-\frac{6}{\alpha\zeta^2} . $$ Using it in combination with \eqref{u1b} we arrive at the invariant solution $$ u(x,t) = e^{-x/2} \left(\beta-\frac{6}{\alpha(t+x)^2} \right).
$$ Finally, using the equivalence transformations and the boundary condition \eqref{barrierCond1} we obtain the similarity solution for Eq.~\eqref{u01} with $f(\zeta)=\alpha(\zeta-\beta)^2$ $$ u(x,t) = \beta -\frac{12 \sigma ^2}{ \alpha \left(t \left(\sigma ^2-2 r\right)+2 \log\lvert x\rvert\right)^2} $$ with $$ H(t)=\mathcal Ae^{\frac{1}{2}(2r - \sigma^2)t} $$ and $$ R(t)=\beta -\frac{3 \sigma ^2}{\alpha \log^2\lvert\mathcal A\rvert}. $$ \begin{case} $f(\zeta) =-\frac{\gamma}{\beta}(\alpha +\beta \zeta)\left(\delta +\log\lvert\alpha+\beta\zeta\rvert\right),\,\beta,\gamma\neq 0$ \end{case} Let the arbitrary element of the Lie algebra spanned by \eqref{u03} and \eqref{u03a} be $\mathfrak X = c_1 \mathfrak X_1+c_2 \mathfrak X_2+c_3 \mathfrak X_3+c_4\mathfrak X_4$. Using \eqref{boa} and the equivalence transformations the first condition turns into the ODE \begin{equation}\label{ode4} 2 \left(c_3+\frac{c_4}{\gamma}e^{t \gamma }\right)+c_1 \left(1+\frac{\sqrt{2} r}{\sigma }-\frac{\sigma }{\sqrt{2}}-\frac{\sqrt{2} H'}{\sigma H}\right)=0, \end{equation} where $c_1\ne0$. The solution of \eqref{ode4} is \begin{equation}\label{H2a} H(t)=\mathcal A e^{\frac{1}{2}(2r -\sigma^2)t+\frac{1}{2} \sigma \left(\lambda t+\mu e^{\gamma t}\right)}, \end{equation} where $\mu = \frac{2\sqrt{2}c_4}{\gamma^2c_1},\ \lambda=\sqrt{2}(1+2\frac{c_3}{c_1})$ and $\mathcal A$ is the constant of integration. Using the above solution together with conditions \eqref{bob} and \eqref{barrierCond1a} we get the ODE \begin{multline}\label{ode5} e^{t \gamma } \left(\kappa +2 \gamma \mu \left(\sqrt{2} \sigma+\sigma\gamma \lambda t +2 \gamma \log\mathcal A\right)\right) (\alpha +\beta R)\\ +2 e^{2 \gamma t } \gamma ^2 \mu ^2 \sigma (\alpha +\beta R)-8 \beta \sigma R'=0.
\end{multline} Its solution is \begin{equation}\label{R2a} R(t)=-\frac{\alpha }{\beta }+\mathcal Be^{\frac{\left(\kappa +\gamma \mu \left(2 \sqrt{2}-2 \lambda +2 \gamma \lambda t +\gamma \mu e^{t \gamma } \right) \sigma +4 \gamma ^2 \mu \log\mathcal A\right)}{8 \gamma \sigma }e^{\gamma t} }, \end{equation} where $\kappa = 8\frac{c_2}{\sigma c_1}$ and $\mathcal B$ is the constant of integration. Having found the functions $H,R$ admitted by the symmetries we proceed with the reduction of Eq.~\eqref{heat}. From the invariant surface condition \eqref{isc} the invariant solution is \begin{multline}\label{u2a} u(x,t)=-\frac{ \alpha }{\beta }e^{-x/2}+e^{\frac{1}{8} \left(\frac{\left(2 \gamma \left(\sqrt{2} \gamma(t+x) -\lambda \right) \mu +\kappa \sigma \right)}{\gamma }e^{t \gamma } -2 \left(\sqrt{2} \lambda -2\right)t-\gamma \mu ^2e^{2 t \gamma } \right)}\\ F\left(t+x-\frac{ \lambda }{\sqrt{2}}t-\frac{\mu }{\sqrt{2}}e^{t \gamma } \right). \end{multline} Substituting \eqref{u2a} into \eqref{heat} for this particular $f$ we arrive at the reduction \begin{multline*} F(\zeta ) \left(2 \gamma (2 \delta +\zeta )-1+\sqrt{2} \lambda +4 \gamma (\log\beta +\log F(\zeta ))\right)+2 \left(\sqrt{2} \lambda -2\right) F^\prime\\ -4 F^{\prime \prime }=0, \end{multline*} where $\zeta = t+x-\frac{ \lambda }{\sqrt{2}}t-\frac{\mu }{\sqrt{2}}e^{t \gamma }$. Although the general solution of this equation cannot be found in closed form, two particular solutions can be obtained, one when $\lambda\ne0$ and one when $\lambda=0$. Each of them is considered separately: \begin{itemize} \item $\lambda\ne0$ \begin{equation}\label{specialSol1} F(\zeta) = \Delta_1 e^{ -\frac{1}{2} \zeta -\log\beta} \end{equation} with $\Delta_1=e^{-\delta}$.
Using it in combination with \eqref{u2a} we arrive at the invariant solution $$ u(x,t) = \frac{e^{-\frac{x}{2}} }{\beta}\left(\Delta_1e^{-\frac{\gamma ^2 \mu ^2e^{2 \gamma t} -e^{ \gamma t} \left(2 \gamma \left(\sqrt{2}+\sqrt{2} (t+x) \gamma -\lambda \right) \mu +\kappa \sigma \right)}{8 \gamma }}- \alpha \right). $$ Again, using the equivalence transformations and the boundary condition \eqref{barrierCond1} we arrive at the similarity solution for Eq.~\eqref{u01} with $f(\zeta)=-\frac{\gamma}{\beta}(\alpha +\beta \zeta)\left(\delta +\log\lvert\alpha+\beta\zeta\rvert\right)$ $$ u(x,t) = \frac{\Delta_1}{\beta}e^{\frac{e^{\gamma t} \left(\sigma \left(\kappa \sigma +\gamma \mu \left(2 \sqrt{2}-2 \lambda -\gamma \mu e^{\gamma t} +2 \gamma \sigma t\right)\right)-4 r \gamma ^2 \mu t\right)}{8 \gamma \sigma }}\lvert x\rvert^{\frac{\gamma\mu}{2\sigma}e^{\gamma t}}-\frac{\alpha }{\beta } $$ with $$ H(t) = \mathcal A e^{\frac{1}{2}(2r -\sigma^2)t+\frac{1}{2} \sigma \left(\lambda t+\mu e^{\gamma t}\right)} $$ and $$ R(t)=\frac{\Delta_1}{\beta}e^{\frac{e^{\gamma t} \left(\sigma \left(\gamma \mu \left(2 \sqrt{2}-2 \lambda +2 \gamma \lambda t+ \gamma \mu e^{\gamma t}\right)+\kappa \sigma \right)+4 \gamma ^2 \mu \log\mathcal A\right)}{8 \gamma \sigma }}-\frac{\alpha }{\beta }. $$ \item $\lambda=0$ \begin{equation}\label{specialSol2} F(\zeta) = \Delta_2 e^{\frac{1}{4} \gamma \zeta ^2+\mathcal C \zeta} \end{equation} where $\mathcal C$ is a constant and $\Delta_2 = e^{\frac{\left(\gamma (2-4 \delta )+(1+2\mathcal C)^2-4 \gamma \log\beta\right)}{4\gamma }}$. Using it in combination with \eqref{u2a} we arrive at the invariant solution \begin{multline*} u(x,t) = e^{-x/2} \left(\Delta_2 e^{\frac{1}{8} \left(2 (t+x) (2+(t+x) \gamma +4 \mathcal C)+\frac{e^{\gamma t} \left(\kappa \sigma -4 \sqrt{2} \gamma \mu \mathcal C\right)}{\gamma }\right)} -\frac{\alpha}{\beta }\right).
\end{multline*} Again, using the equivalence transformations and the boundary condition \eqref{barrierCond1} we obtain the similarity solution for Eq.~\eqref{u01} with $f(\zeta)=-\frac{\gamma}{\beta}(\alpha +\beta \zeta)\left(\delta +\log\lvert\alpha+\beta\zeta\rvert\right)$ \begin{multline*} u(x,t) = \\ \Delta_2 e^{\frac{\sigma ^2 \left(\kappa \sigma -4 \sqrt{2} \gamma \mu \mathcal C\right)e^{t \gamma } +\gamma \left(\sigma ^2-2 r\right) \left(\sigma \left(\gamma \sigma t+2 \sqrt{2} (1+2 \mathcal C)\right)-2 r \gamma t\right)t +4 \gamma ^2 \log^2\lvert x\rvert}{8\gamma \sigma ^2}}\times\\ \lvert x\rvert^{\gamma \left(\frac{1}{2}-\frac{r}{\sigma ^2}\right)t+\frac{1+2\mathcal C}{\sqrt{2} \sigma }}-\frac{\alpha }{\beta } \end{multline*} with $$ H(t) = \mathcal A e^{\frac{1}{2}(2r -\sigma^2)t+\frac{1}{2} \sigma \mu e^{\gamma t}} $$ and $$ R(t)= \Delta_2 e^{\frac{1}{8} \left(e^{2 t \gamma } \gamma \mu ^2+\frac{e^{t \gamma } \left(2 \sqrt{2} \gamma \mu +\kappa \sigma \right)}{\gamma }+\frac{4 \gamma \log^2\lvert\mathcal A\rvert}{\sigma ^2}\right)} \mathcal A^{\frac{ \gamma \mu e^{\gamma t}+\sqrt{2} (1+2\mathcal C)}{2 \sigma }}-\frac{\alpha }{\beta }. $$ \end{itemize} \section{Conclusion} In the present paper a generalization of the celebrated Black--Scholes--Merton equation \eqref{u02} was proposed and studied through the prism of modern group analysis, or the symmetry method. To that end, we harnessed the advantage that equivalence transformations offer when studying classes of differential equations: the knowledge of the best representative of the class. This fact substantially simplifies the task of classifying the class and obtaining its point symmetries. Through this classification interesting cases, from the point of view of symmetries, arise. Nonlinear equations in general have few or no symmetries, so cases that augment the set of symmetries at our disposal are like an oasis in the desert.
Quite commonly, a dynamical system possessing an ample number of symmetries is more likely to correspond to a physical system or to model a more realistic process. Furthermore, when we wish to study a boundary problem, some of the symmetries will be excluded, since not all of them leave the boundary and its condition invariant. Hence, the larger the set of symmetries, the higher the probability that some will survive the scrutiny of the boundary conditions and yield an invariant solution for the problem in its entirety. For the equation studied here both sides of this fact were revealed. The terminal condition was too strict and gave only trivial solutions. On the other hand, the boundary condition imposed for the barrier option allowed us to obtain non-trivial invariant solutions; undoubtedly, the arbitrary functions involved in the boundary condition helped in that direction. The insight provided by the above symmetry analysis might prove practical to anyone looking for a more realistic economic model that does not depart from the reasoning behind the Black--Scholes--Merton equation which made it such a successful model in the first place. It may also prove useful in the study of more exotic kinds of options, which gain ground in the Asian markets that in turn play an ever-increasing role in the world market. Last but not least, although the Black--Scholes--Merton model is a standard way to price traditional options, it encounters difficulties with exotic ones. The nonlinear variants of the traditional model given here, along with the analytical solutions found, might turn the tables in that respect. We leave the possible economic interpretation and use of the obtained results to the interested reader. \section*{Acknowledgements} Y. Bozhkov would like to thank FAPESP and CNPq, Brasil, for partial financial support. S. Dimas is grateful to FAPESP (Proc. \#2011/05855-9) for the financial support and IMECC-UNICAMP for their gracious hospitality.
\bibliographystyle{model3-num-names}
\section{#1}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\uparrow}{\uparrow} \newcommand{\downarrow}{\downarrow} \newcommand{\langle}{\langle} \newcommand{\rangle}{\rangle} \newcommand{\dagger}{\dagger} \newcommand{{\rm i}}{{\rm i}} \newcommand{{\rm e}}{{\rm e}} \newcommand{{\rm d}}{{\rm d}} \newcommand{\partial}{\partial} \newcommand \lab {\label} \newcommand{\varepsilon}{\varepsilon} \newcommand{\hspace{0.7cm}}{\hspace{0.7cm}} \newcommand{\sigma}{\sigma} \newcommand{\longrightarrow}{\rightarrow} \newcommand{\bar{z}}{\bar{z}} \def\theta{\theta} \def\longrightarrow{\longrightarrow} \setlength{\textwidth}{160mm} \setlength{\textheight}{230mm} \setlength{\headsep}{0in} \setlength{\baselineskip}{0.375in} \setlength{\oddsidemargin}{0cm} \setlength{\evensidemargin}{0cm} \newcommand{\Cblu}[1]{\textcolor{blue}{#1}} \newcommand{\Cgre}[1]{\textcolor{green}{#1}} \newcommand{\Cred}[1]{\textcolor{red}{#1}} \begin{document} \setcounter{page}{0} \topmargin 0pt \oddsidemargin 5mm \renewcommand{\thefootnote}{\arabic{footnote}} \newpage \setcounter{page}{0} \topmargin 0pt \oddsidemargin 5mm \renewcommand{\thefootnote}{\arabic{footnote}} \newpage \begin{titlepage} \begin{flushright} SISSA 07/2013/FISI \\ \end{flushright} \vspace{0.5cm} \begin{center} {\large {\bf Interfaces and wetting transition on the half plane.}\\ {\bf Exact results from field theory}}\\ \vspace{1.8cm} {\large Gesualdo Delfino and Alessio Squarcini}\\ \vspace{0.5cm} {\em SISSA -- Via Bonomea 265, 34136 Trieste, Italy}\\ {\em INFN sezione di Trieste}\\ \end{center} \vspace{1.2cm} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \begin{abstract} \noindent We consider the scaling limit of a generic ferromagnetic system with a continuous phase 
transition, on the half plane with boundary conditions leading to the equilibrium of two different phases below criticality. We use general properties of low energy two-dimensional field theory to determine exact asymptotics of the magnetization profile perpendicularly to the boundary, to show the presence of an interface with endpoints pinned to the boundary, and to determine its passage probability. The midpoint average distance of the interface from the boundary grows as the square root of the distance between the endpoints, unless the reflection amplitude of the bulk excitations on the boundary possesses a stable bound state pole. The contact angle of the phenomenological wetting theory is exactly related to the location of this pole. Results available from the lattice solution of the Ising model are recovered as a particular case. \end{abstract} \end{titlepage} \newpage \section{Introduction} Interfacial phenomena at boundaries are a subject of considerable interest for both theory and applications. On the theoretical side, the one this paper is concerned with, the effects of the boundary on an interface separating different phases of a statistical system have been extensively studied using phenomenological, mean field, renormalization group and other approximation methods ([1-8] is a certainly incomplete list of review articles). The only exact result that has been available concerns the Ising model on the half plane \cite{Abraham_wetting,Abraham}, a circumstance that, while confirming a specificity of the two-dimensional case, raises the question about the role of Ising solvability in these exact findings. We show in this paper that exact results including those of \cite{Abraham} as a particular case are obtained quite generally for any two-dimensional model exhibiting a continuous phase transition. This is done by extending to the half plane the non-perturbative field theoretical approach recently used in \cite{DV} to study phase separation on the whole plane.
As in that case, general exact results emerge because, when its end-to-end distance $R$ is much larger than the correlation length, the interface is described by a single particle (domain wall) state, in a low energy limit leading to a general solution. In this way, the fluctuations of the interface turn out to be ruled by the low energy singularity of the matrix element of the order parameter field (as for the whole plane), with the fields pinning the interface endpoints to the boundary producing boundary reflection and an average midpoint distance from the boundary of order $\sqrt{R}$. The result changes qualitatively if boundary and domain wall excitation admit a stable bound state, which becomes dominant in the spectral sum at low energies and bounds the interface to the boundary. The contact angle and the spreading coefficient of the phenomenological theory of wetting then emerge in a completely natural way within the field theoretical formalism. The paper is organized as follows. In the next section we illustrate the field theoretical setting and derive the results for the unbound interface. Section~3 is then devoted to the effects produced by the bound state and to the characterization of the wetting transition, while section~4 contains some final remarks. \section{Interfaces on the half plane} Consider a ferromagnetic spin model of two-dimensional classical statistical mechanics in which spins take discrete values labelled by an index $a=1,2,\ldots,n$. The energy of the system is invariant under global transformations of the spins according to a symmetry whose spontaneous breaking below a critical temperature $T_c$ is responsible for the presence on the infinite plane of $n$ translation invariant pure phases; we denote $\langle\cdots\rangle_{a}$ statistical averages in the phase $a$. 
Assuming a continuous transition, we consider the scaling limit below $T_c$, corresponding to a Euclidean field theory defined on the plane with coordinates $(x,y)$, which can be seen as the analytic continuation to imaginary time of a (1+1)-dimensional relativistic field theory with space coordinate $x$ and time coordinate $t=iy$. If $H$ and $P$ are the Hamiltonian and momentum operators and $\Phi$ a field of the theory, translation invariance on the plane yields the relation \begin{equation} \Phi(x,y)=e^{ixP+yH}\Phi(0,0)e^{-ixP-yH}\,. \label{translations} \end{equation} The (1+1)-dimensional theory possesses degenerate vacua $|0\rangle_a$ associated to the pure phases of the system. The elementary excitations correspond to stable kink states $|K_{ab}(\theta)\rangle$ interpolating between different vacua $|0\rangle_a$ and $|0\rangle_b$. We introduce the rapidity variable $\theta$ which conveniently parameterizes the energy and momentum of the kinks as $(E,p)=(m\cosh\theta,m\sinh\theta)$, $m$ being the kink mass or inverse correlation length. The trajectory of the kink on the Euclidean plane corresponds to a domain wall between the phases $a$ and $b$. Multi-kink excitations take the form $|K_{aa_1}(\theta_1)K_{a_1a_2}(\theta_2)\ldots K_{a_{n-1}b}(\theta_n)\rangle$. Within the scattering framework \cite{ELOP} that we consider, these are asymptotic states, incoming if considered long before the collisions among the kinks, outgoing if considered long after, and their energy is simply $\sum_{i=1}^nm\cosh\theta_i$. \begin{figure}[t] \begin{center} \includegraphics[width=9cm]{wet_merge.eps} \caption{Elastic scattering (reflection) of a kink off the boundary (a), and interface pinned at the boundary (b).} \label{wet_merge} \end{center} \end{figure} Consider now the system on the half-plane $x\geq 0$.
We denote by $B_a$ a boundary condition at $x=0$ which is $y$-independent and breaks the symmetry of the bulk in the direction $a$ in order parameter space; this can be realized by applying a constant boundary magnetic field pointing in the direction $a$. We denote by $\langle\cdots\rangle_{B_a}$ statistical averages in the presence of the boundary condition $B_a$. Preservation of translation invariance in the $y$ direction yields energy conservation in the $(1+1)$-dimensional picture. The bulk excitations are still the kink states described for the full plane case, but now they are restricted to $x>0$; we indicate this restriction by a subscript $B_a$. Hence $|0\rangle_{B_a}$ denotes the vacuum (no excitations in the bulk) on the half-plane with the boundary condition $B_a$. If $\sigma$ is the spin field, the magnetization $\langle\sigma(x,y)\rangle_{B_a}={}_{B_a}\langle 0|\sigma(x,y)|0\rangle_{B_a}$ points in the direction $a$ and depends only on the distance $x$ from the boundary; in particular \begin{equation} \lim_{x\to\infty}\langle\sigma(x,y)\rangle_{B_a}=\langle\sigma\rangle_a\,, \label{bulk_magnetization} \end{equation} where $\langle\sigma\rangle_a$ is the constant magnetization in phase $a$ on the full plane. The state $|0\rangle_{B_a}$ is an eigenstate of the Hamiltonian $H_{B_a}$ of the system on the half line. We consider the case in which boundary conditions $B_a$ and $B_b$ are related by the symmetry, so that $|0\rangle_{B_a}$ and $|0\rangle_{B_b}$ have the same energy $E_B$. The asymptotic scattering state $|K_{ba}(\theta)\rangle_{B_a}$ corresponds to an incoming kink (travelling towards the boundary) if its momentum is negative, i.e. if $\theta<0$. 
If its energy is lower than the energy $2m$ needed to produce two kinks upon interaction with the boundary, it will simply be reflected into an outgoing kink\footnote{As emphasized in \cite{GZ}, the analogies between bulk and boundary scattering become evident thinking of the boundary as the propagation of an infinitely heavy particle sitting at $x=0$.} with rapidity $-\theta$ (Fig.~1a). The state $|K_{ba}(\theta)\rangle_{B_a}$ is an eigenstate of $H_{B_a}$ with eigenvalue $E_{B}+m\cosh\theta$. We are now ready to set up the configuration we want to study, namely a boundary condition which is of type $B_a$ if $|y|>R/2$ and of type $B_b$ if $|y|<R/2$. The interest of such a boundary condition, that we denote $B_{aba}$, is easily understood by observing that the limit for $x\to\infty$ of the magnetization profile $\langle\sigma(x,0)\rangle_{B_{aba}}$ has to tend to $\langle\sigma\rangle_a$ if $R$ is finite, and to $\langle\sigma\rangle_b$ if $R$ is infinite. The natural way to account for this situation is to expect the formation of an interface pinned at $R/2$ and $-R/2$ on the boundary, separating an inner phase $b$ from an outer phase $a$ (Fig.~1b), and whose average distance from the boundary at $y=0$ diverges with $R$. The remainder of this section is devoted to seeing how such a picture indeed emerges within our general field theoretical framework. Technically the change from the boundary condition $B_a$ to $B_b$ at a point $y$ is realized by starting with $B_a$ and inserting on the boundary a field $\mu_{ab}(0,y)$ which, acting on the vacuum $|0\rangle_{B_a}$, creates kink states interpolating between phase $a$ and phase $b$. Hence the simplest non-vanishing matrix element of the boundary field $\mu_{ab}$ is \begin{equation} {}_{B_a}\langle 0|\mu_{ab}(0,y)|K_{ba}(\theta)\rangle_{B_a}=e^{-ym\cosh\theta} {}_{B_a}\langle 0|\mu_{ab}(0,0)|K_{ba}(\theta)\rangle_{B_a}\equiv e^{-ym\cosh\theta}{\cal F}_\mu(\theta)\,. 
\label{bff} \end{equation} The partition function of the system with boundary condition $B_{aba}$ reads \begin{equation} Z={}_{B_a}\langle 0|\mu_{ab}(0,R/2)\mu_{ba}(0,-R/2)|0\rangle_{B_a}=\int_0^\infty\frac{d\theta}{2\pi}|{\cal F}_\mu(\theta)|^2 e^{-mR\cosh\theta}+O(e^{-2mR})\,, \end{equation} where the last expression is obtained expanding over an intermediate set of outgoing kink states and retaining only the lightest (single kink) contribution which is leading in the large $mR$ limit we will consider from now on. Since the above integral is dominated by small rapidities and ${\cal F}_\mu$ is expected to behave as\footnote{Linear behavior of matrix elements at small rapidities in two-dimensional theories is well known. Within the framework of integrable boundary field theory \cite{GZ} exact examples can be found in \cite{BPT}. More generally, see \cite{Smirnov} about matrix elements in integrable theories.} \begin{equation} {\cal F}_\mu(\theta)=a\,\theta+O(\theta^2)\,, \label{fmu} \end{equation} the partition function becomes \begin{equation} Z\sim |a|^2\int_0^\infty\frac{d\theta}{2\pi}\,\theta^2\,e^{-mR(1+\theta^2/2)}=\frac{|a|^2\,e^{-mR}}{2\sqrt{2\pi}\,(mR)^{3/2}}\,. 
\label{Z} \end{equation} The magnetization profile along the $x$ axis is given by \begin{eqnarray} && \langle\sigma(x,0)\rangle_{B_{aba}}=\frac{1}{Z}\,{}_{B_a}\langle 0|\mu_{ab}(0,R/2)\sigma(x,0)\mu_{ba}(0,-R/2)|0\rangle_{B_a} \label{profile1}\\ && \sim \frac{1}{Z}\int_{-\infty}^{+\infty}\frac{d\theta_1}{2\pi}\frac{d\theta_2}{2\pi}{\cal F}_\mu(\theta_1)\langle K_{ab}(\theta_1)|\sigma(0,0)|K_{ba}(\theta_2)\rangle{\cal F}_\mu^*(\theta_2) e^{m[i(\sinh\theta_1-\sinh\theta_2)x-(\cosh\theta_1+\cosh\theta_2)\frac{R}{2}]}\,,\nonumber \end{eqnarray} where in the last line we have taken $mR\gg 1$ to project on the one-kink intermediate states, but also $mx\gg 1$ to be able to treat $\sigma(x,0)$ as a bulk field which satisfies (\ref{translations}) and is evaluated on bulk kink states (whose rapidities take both positive and negative values). In other words, for $mx$ large the only effect of the boundary on the magnetization comes from the boundary changing fields at $(0,\pm R/2)$; in their absence one would simply observe the constant value $\langle\sigma\rangle_a$. The bulk matrix element of the spin field between one-kink states is related by the crossing relation\footnote{Crossing a particle from the initial to the final state (or vice versa) involves reversing the sign of its energy and momentum \cite{ELOP}, namely an $i\pi$ rapidity shift. The delta function term in (\ref{crossing}) is a disconnected part arising from annihilation of the two kinks.} \begin{equation} \langle K_{ab}(\theta_1)|\sigma(0,0)|K_{ba}(\theta_2)\rangle={ F}_{\sigma}(\theta_1+i\pi-\theta_2)+2\pi\delta(\theta_1-\theta_2)\langle\sigma\rangle_{a}\,, \label{crossing} \end{equation} to the form factor \begin{equation} {F}_{\sigma}(\theta_1-\theta_2)\equiv{}_a\langle 0|\sigma(0,0)|K_{ab}(\theta_1)K_{ba}(\theta_2)\rangle\,. 
\label{ff} \end{equation} As already observed in \cite{DV} for the case of phase separation on the whole plane, it is crucial that quite generally, due to non-locality of the kinks with respect to the spin field, ${F}_{\sigma}(\theta)$ possesses an annihilation pole at $\theta=i\pi$ with residue \cite{DC98} \begin{equation} -i\,\text{Res}_{\theta=i\pi}{F}_{\sigma}(\theta)=\langle\sigma\rangle_{a}-\langle\sigma\rangle_{b}\equiv\Delta\langle\sigma\rangle\,. \label{residue} \end{equation} Since $mR$ is large (\ref{profile1}) is dominated by small rapidities and (\ref{fmu}), (\ref{crossing}) and (\ref{residue}) lead to \begin{equation} \langle\sigma(x,0)\rangle_{B_{aba}}\sim 2\langle\sigma\rangle_{a}+i\,\Delta\langle\sigma\rangle\ \frac{|a|^2}{Z}e^{-mR}\int_{-\infty}^{+\infty}\frac{d\theta_1}{2\pi}\frac{d\theta_2}{2\pi}\,\frac{\theta_1\theta_2}{\theta_1-\theta_2} e^{m[i(\theta_1-\theta_2)x-(\theta_1^2+\theta_2^2)\frac{R}{4}]}\,. \label{profile2} \end{equation} Differentiation removes the singularity of the integrand and gives \begin{eqnarray} \partial_{mx}\langle\sigma(x,0)\rangle_{B_{aba}}&\sim & -\Delta\langle\sigma\rangle\,\frac{|a|^2 e^{-mR}}{(2\pi)^2 Z}\,g(x)g(-x)\nonumber\\ &=& \Delta\langle\sigma\rangle\,\frac{4\sqrt{2}}{\sqrt{\pi\,mR}}\,z^2\,e^{-z^2}\,,\hspace{2cm}z\equiv\sqrt{\frac{2m}{R}}\,x \label{derivative} \end{eqnarray} where we used (\ref{Z}) and \begin{equation} g(x)=\int_{-\infty}^{+\infty}d\theta\,\theta\,e^{-mR\theta^2/4+imx\theta}=\frac{2i\sqrt{2\pi}}{mR}\,z\,e^{-z^2/2}\,. \end{equation} Integrating (\ref{derivative}) with the asymptotic condition $\langle\sigma(\infty,0)\rangle_{B_{aba}}=\langle\sigma\rangle_{a}$ gives \begin{equation} \langle\sigma(x,0)\rangle_{B_{aba}}\sim \langle\sigma\rangle_{b}-\frac{2}{\sqrt{\pi}}\,\Delta\langle\sigma\rangle\left(z\,e^{-z^2}-\int_0^z du\,e^{-u^2}\right),\hspace{1cm}mx\gg 1\,. 
\label{profile} \end{equation} From this result we can compute exactly $\lim_{R\to\infty}\langle\sigma(\frac{\alpha}{m}\,(mR)^\delta,0)\rangle_{B_{aba}}$, obtaining $\langle\sigma\rangle_b$ for $0<\delta<1/2$, $\langle\sigma\rangle_a$ for $\delta>1/2$, and the r.h.s. of (\ref{profile}) with $z=\alpha\sqrt{2}$ for $\delta=1/2$. For $\langle\sigma\rangle_a=-\langle\sigma\rangle_b=\langle\sigma\rangle_+$ these are precisely the limits obtained from the lattice in \cite{AI,Abraham_wetting} for the Ising model on the half plane with boundary spins fixed to be positive for $|y|>R/2$ and negative for $|y|<R/2$. The derivative (\ref{derivative}) of the magnetization profile is peaked around $z=1$, confirming the presence of an interface whose average distance from the boundary increases as $\sqrt{R/m}$. It is also easy to see that the result for the magnetization profile is consistent with a simple probabilistic interpretation. Since we are computing the magnetization on a scale $R$ much larger than the correlation length and far away from the boundary, we can think of the interface as a sharp separation between pure phases\footnote{It has been shown in \cite{DV} how the internal structure of the interface arises from subleading terms in the large $mR$ expansion.}, and write \begin{equation} \langle\sigma(x,0)\rangle_{B_{aba}}\sim\langle\sigma\rangle_a\int^{x}_{0}du~p(u)+\langle\sigma\rangle_b\int_{x}^{\infty}du~p(u),\hspace{1cm}mx\gg 1\,, \end{equation} where $p(u)du$ is the probability that the interface intersects the $x$-axis in the interval $(u,u+du)$, so that the two integrals are the left and right passage probabilities with respect to $x$. Differentiating and comparing with (\ref{derivative}) gives the passage probability density \begin{equation} p(x)=4\sqrt{\frac{2m}{\pi R}}\,z^2\,e^{-z^2}\,, \label{probability} \end{equation} which correctly satisfies $\int_{0}^{\infty}dx~p(x)=1$. 
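The Gaussian integrals behind (\ref{Z}), the function $g(x)$ and the passage probability density (\ref{probability}) are elementary, and their closed forms can be checked by direct quadrature. The following sketch is illustrative only (it is not part of the original derivation; the function names and the sample values of $m$, $R$ are our own choices):

```python
import math
import cmath

def Z_closed(mR, a2=1.0):
    # closed form (Z): |a|^2 e^{-mR} / (2 sqrt(2 pi) (mR)^{3/2})
    return a2 * math.exp(-mR) / (2.0 * math.sqrt(2.0 * math.pi) * mR ** 1.5)

def Z_quad(mR, a2=1.0, n=200_000):
    # midpoint-rule quadrature of |a|^2/(2 pi) * int_0^inf t^2 e^{-mR(1 + t^2/2)} dt
    cut = 12.0 / math.sqrt(mR)          # integrand is negligible beyond this rapidity
    h = cut / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += t * t * math.exp(-mR * (1.0 + 0.5 * t * t))
    return a2 * s * h / (2.0 * math.pi)

def g_closed(m, R, x):
    # g(x) = (2 i sqrt(2 pi) / (mR)) z e^{-z^2/2},  z = sqrt(2m/R) x
    z = math.sqrt(2.0 * m / R) * x
    return 2j * math.sqrt(2.0 * math.pi) / (m * R) * z * math.exp(-z * z / 2.0)

def g_quad(m, R, x, n=200_000):
    # int_{-inf}^{+inf} t exp(-mR t^2/4 + i m x t) dt by the midpoint rule
    L = 12.0 / math.sqrt(m * R / 4.0)   # truncation where the Gaussian is negligible
    h = 2.0 * L / n
    s = 0j
    for i in range(n):
        t = -L + (i + 0.5) * h
        s += t * cmath.exp(-m * R * t * t / 4.0 + 1j * m * x * t)
    return s * h

def p_density(x, m, R):
    # passage probability density (probability): 4 sqrt(2m/(pi R)) z^2 e^{-z^2}
    z = math.sqrt(2.0 * m / R) * x
    return 4.0 * math.sqrt(2.0 * m / (math.pi * R)) * z * z * math.exp(-z * z)
```

One checks that `Z_quad` agrees with `Z_closed` to quadrature accuracy, that `g_quad` reproduces `g_closed`, and that `p_density` integrates to $1$ on $(0,\infty)$ with its maximum at $z=1$, i.e. at $x=\sqrt{R/2m}$, consistent with the average midpoint distance of order $\sqrt{R}$ discussed in the text.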
\section{Wetting transition} The results of the previous section are modified if the kink-boundary system associated to the asymptotic state $|K_{ab}(\theta)\rangle_{B_b}$ admits a stable bound state $|0\rangle_{B'_{a}}$, corresponding to the binding of the kink $K_{ab}$ on the boundary ${B_{b}}$. As usual for stable bound states \cite{ELOP}, such a binding will correspond to a ``virtual'' value $\theta_0$ of the kink rapidity leading to a bound state energy $E_{B}+m\cosh\theta_0$ real and smaller than the unbinding energy $E_{B}+m$. This amounts to taking $\theta_0=iu$ with $0<u<\pi$, so that \begin{equation} E_{B'}=E_{B}+m\cos u\,. \label{bs} \end{equation} The existence of the bound state manifests itself in particular through a simple pole in the elastic scattering amplitude of the kink off the boundary, which reads ${\cal R}(\theta)\sim ig^2/(\theta-iu)$ for $\theta\to iu$, with $g$ a kink-boundary coupling constant (Fig.~2a). This pole is inherited by the matrix element (\ref{bff}), for which we have\footnote{Exact solutions exhibiting boundary bound state poles can be found in \cite{GZ} for scattering amplitudes and in \cite{BPT} for matrix elements.} (Fig.~2b) \begin{equation} {\cal F}_\mu(\theta)={}_{B_a}\langle 0|\mu_{ab}(0,0)|K_{ba}(\theta)\rangle_{B_a}\sim\frac{ig}{\theta-iu}\,{}_{B_a}\langle 0|\mu_{ab}(0,0)|0\rangle_{B'_{a}}\,,\hspace{1cm}\theta\to iu\,. \label{ff_pole} \end{equation} \begin{figure}[t] \begin{center} \includegraphics[width=12cm]{wet_bound.eps} \caption{The boundary bound state (double line) originating in kink-boundary scattering (a), and a pictorial representation of equation (\ref{ff_pole}) (b).} \label{wet_bound} \end{center} \end{figure} The boundary bound state affects the results of the previous section for the boundary condition $B_{aba}$ because the leading low-energy contribution in the expansion over intermediate states now comes from $|0\rangle_{B'_{a}}$ rather than from $|K_{ba}(\theta)\rangle_{B_a}$. 
So the partition function becomes \begin{equation} Z={}_{B_a}\langle 0|\mu_{ab}(0,R/2)\mu_{ba}(0,-R/2)|0\rangle_{B_a}= \left|{}_{B_a}\langle 0|\mu_{ab}(0,0)|0\rangle_{B'_a}\right|^2e^{-mR\cos u}+O(e^{-mR})\,, \label{Z1} \end{equation} and the magnetization profile \begin{eqnarray} \langle\sigma(x,0)\rangle_{B_{aba}}&\sim &\frac{1}{Z}\,{}_{B_a}\langle 0|\mu_{ab}(0,R/2)|0\rangle_{B'_a}\,{}_{B'_a}\langle 0|\sigma(x,0)|0\rangle_{B'_a}\,{}_{B'_a}\langle 0|\mu_{ba}(0,-R/2)|0\rangle_{B_a} \nonumber\\ &=&\langle\sigma(x,0)\rangle_{B'_a}\,. \label{profile3} \end{eqnarray} We see then that, as a consequence of (\ref{bulk_magnetization}), the magnetization profile now tends to $\langle\sigma\rangle_{a}$ at large $mx$, in contrast to what was obtained in the previous section, where it tended to $\langle\sigma\rangle_b$ for $R$ large enough. This corresponds to the fact that now the asymptotic behavior is determined by the state in which the interface, and with it the phase $b$, are bound to the boundary, while before the dominant state was that in which phase $b$ extended to an average midpoint distance of order $\sqrt{R}$ from the boundary. \begin{figure}[t] \begin{center} \includegraphics[width=2.5cm]{wet_contact.eps} \caption{Splitting and recombination of the boundary bound state $B_a'$ corresponds to ``partial wetting'', in which a drop of phase $b$ makes an equilibrium contact angle $\theta_e$ with the boundary. Equation (\ref{bs}) with $u=\theta_e$ gives the surface tension balance condition at the contact points.} \label{wet_contact} \end{center} \end{figure} Consistency of the asymptotic expansion requires that the corrections to (\ref{profile3}) vanish as $R\to\infty$. For $mx$ large, the first of these corrections is that due to the $|K_{ba}(\theta)\rangle$ intermediate states given in (\ref{profile1}). The $Z$ in the denominator, however, is now (\ref{Z1}) rather than (\ref{Z}), so that the correction behaves as $e^{mR(\cos u-1)}$ at large $R$. 
Hence, since $\cos u-1\sim -u^2/2$ for small $u$, if $u$ approaches $0$, i.e. if the interface approaches the unbinding point, consistency requires that $R$ diverges faster than $1/u^2$. If we adopt a vocabulary in which $b$ is a liquid phase and $a$ a vapor phase, we can say that as $u\to 0$ a thin layer of the liquid phase spreads all over the boundary. The relation with the usual characterization of interfacial phenomena at boundaries becomes more transparent if we consider the situation usually referred to as ``partial wetting'', corresponding to a drop of liquid surrounded by a thin layer of liquid adsorbed on the rest of the boundary (see e.g. \cite{BEIMR}). In our formalism this amounts to splitting and recombination of the boundary bound state $B_a'$ (Fig.~3). Considering that the kink mass $m$ is the surface tension of the interface \cite{DV}, that $E_B$ is the surface tension between the boundary and the drop, and that $E_{B'}$ is the surface tension between the boundary away from the drop and phase $a$, we recognize in (\ref{bs}) the Young equilibrium condition at contact points (see e.g. \cite{deGennes} and references therein), with $u$ playing the role of the equilibrium contact angle $\theta_e$ (Fig.~3). In addition, the combination $m(\cos u-1)$ encountered a moment ago is recognized as the so-called ``equilibrium spreading coefficient'' (see \cite{BEIMR}). We also see that interface unbinding at $u=0$ corresponds to the vanishing of the contact angle, namely to the usual characterization of the wetting transition point (passage from partial to complete wetting). The boundary bound state is a property of the theory with translationally invariant boundary condition $B_b$. Parameters of this theory are the temperature, related to the kink mass as $m\propto(T_c-T)^\nu$, and a coupling $\lambda$ entering the boundary term $\lambda\int dy\,\phi(0,y)$ of the classical reduced Hamiltonian. 
If $X$ is the scaling dimension\footnote{The exponents $\nu$ and $X$ are known exactly from bulk \cite{BPZ} and boundary \cite{Cardy} conformal field theory, respectively.} of the boundary field $\phi(0,y)$, $u$ is a function of the dimensionless combination $\lambda/m^{1-X}$. If $\lambda$ is kept fixed, the condition $u=0$ determines a wetting transition temperature $T_w(\lambda)<T_c$. The results (\ref{Z}), (\ref{Z1}) and (\ref{profile3}) account for those reported in \cite{Abraham_wetting,Abraham} for the particular case of an Ising model with boundary condition $B_{+-+}$ and coupling between the boundary spins and their nearest neighbors different from the coupling within the rest of the lattice; this modified coupling corresponds to the boundary parameter $\lambda$ in this case. The generality of our results also explains why approximate treatments of other models resulted in findings similar to the Ising ones (see \cite{Abraham} and references therein). \section{Conclusion} In this paper we studied the scaling limit of a generic ferromagnetic system with a continuous phase transition, below criticality and on the half plane, with boundary conditions favoring one of the phases along an interval of length $R$, and a different phase outside this interval. We used field theory to determine exact large $R$ asymptotics of the magnetization profile perpendicular to the boundary at the middle of the interval. We showed that, generically, the large $R$ asymptotic behavior corresponds to the presence of an interface pinned at the boundary condition changing points, with an average midpoint distance from the boundary which grows as $\sqrt{R}$. The passage probability density of the interface has the Gaussian form found in \cite{DV} for the whole plane, modified by a quadratic factor which accounts for the presence of the boundary. 
These results are modified if the scattering on the boundary admits a stable bound state, which then becomes leading at low energies and corresponds to the binding of the interface to the boundary. In this case we showed how field theory accounts at a fundamental level for the contact angle and spreading coefficient of the phenomenological wetting theory. These results follow from general low energy properties of two-dimensional field theory. In particular, the annihilation singularity of the spin field matrix element on one-kink states and the boundary-kink bound state pole play a key role in determining the asymptotics of the magnetization profile in the unbound and bound regimes, respectively. Additional interfacial properties, such as the internal structure arising from subleading terms of the large $R$ expansion or double interfaces appearing in some models for particular choices of boundary conditions, can be analyzed in the same way as was done in \cite{DV} on the whole plane; we refer the reader to that paper for these points.
\tableofcontents \section{Introduction} \label{Intro} In our previous article \cite{bks1}, we showed that a natural theory of zeta elements associated to the multiplicative group $\mathbb{G}_m$ over finite abelian extensions of number fields shed light on the equivariant Tamagawa number conjecture (or eTNC for short in the remainder of this introduction) in the setting of untwisted Tate motives. In particular, in this way we derived a wide range of explicit results and predictions concerning, amongst other things, families of fine integral congruence relations between Rubin-Stark elements of different ranks and also several aspects of the detailed Galois module structures of ideal class groups and of the natural Selmer groups (and their homotopy-theoretic transposes) that are associated to $\mathbb{G}_m$. For details see \cite{bks1}. The main aims of the present article are now to develop an explicit Iwasawa theory for these zeta elements, to use this theory to describe a new approach to proving some important special cases of the eTNC and to describe some initial concrete applications of this approach. In the next two subsections we discuss briefly the main results that we shall obtain in this direction. \vspace{-3mm} \subsection{Iwasawa main conjectures for general number fields}\label{iwc intro} The first key aspect of our approach is the formulation of an explicit main conjecture of Iwasawa theory for abelian extensions of {\it general} number fields (we refer to this conjecture as a `higher rank main conjecture' since the rank of any associated Euler system would in most cases be greater than one). To give a little more detail we fix a finite abelian extension $K/k$ of general number fields and a $\ZZ_{p}$-extension $k_{\infty}$ of $k$ and set $K_{\infty}=Kk_{\infty}$. In this introduction, we suppose that $k_{\infty}/k$ is the cyclotomic $\ZZ_{p}$-extension but this is purely for simplicity. 
Our {\it higher rank main conjecture} is first stated (as Conjecture \ref{IMC}, which for pedagogical reasons we refer to as (hIMC) in this introduction) in terms of the existence of an Iwasawa-theoretic zeta element which effectively plays the role of $p$-adic $L$-functions for general number fields and has precise prescribed interpolation properties in terms of the values at zero of the higher derivatives of abelian $L$-series. We then subsequently reinterpret this conjecture, occasionally under suitable hypotheses, in several more explicit ways: firstly, in terms of the properties of Iwasawa-theoretic `Rubin-Stark elements' (see Conjecture \ref{IMC rubinstark}), then in terms of the existence of natural Iwasawa-theoretic measures (see Conjecture \ref{measure conjecture}), then in terms of the explicit generation, after localization at height one prime ideals, of the higher exterior powers of Iwasawa-theoretic unit groups (see Conjecture \ref{IMCexplicit}) and finally in a very classical way in terms of the characteristic ideals of suitable ideal class groups and concrete torsion modules constructed from Rubin-Stark elements (see Conjecture \ref{char IMC}; in fact, readers who are familiar with the classical formulation of main conjectures may wish to look at this formulation of the conjecture first). In particular, for the minus part of a CM-abelian extension of a totally real field, we show that (hIMC) is equivalent to the usual main conjecture involving the $p$-adic $L$-function of Deligne-Ribet (see the proof of Theorem \ref{CM theorem}(i)) and that for a real abelian field over $\QQ$ it is equivalent to the standard formulation of a main conjecture involving cyclotomic units. 
In general, the conjecture (hIMC) is implied by the validity of the relevant case of the eTNC for extensions $K'/k$ as $K'$ runs over all (sufficiently large) finite extensions of $K$ in $K_{\infty}$ but is usually very much weaker than the eTNC (and hence also than the main conjecture formulated by Fukaya and Kato in \cite{FukayaKato}). For example, if any $p$-adic prime of $k$ splits completely in $K$, then our conjectured zeta element encodes no information concerning the $L$-values of characters of $\Gal(K/k)$. More precisely, if for any character $\chi$ of $\Gal(K/k)$ there exists a $p$-adic prime of $k$ whose decomposition subgroup in $\Gal(K/k)$ is contained in the kernel of $\chi$ (in which case one says that the $p$-adic $L$-function for $\chi$ has a trivial zero at $s=0$), then the zeta element that we predict to exist has no conjectured interpolation property involving the leading term of $L_{k,S,T}(\chi,s)$ at $s=0$. \vspace{-2mm} \subsection{eTNC and congruences between Rubin-Stark elements}\label{rs intro} We now turn to discuss how (hIMC) leads to a concrete strategy to prove some interesting new cases of the eTNC. Here, a key role is played by a detailed Iwasawa-theoretic study of the fine congruence relations between Rubin-Stark elements of differing ranks that were independently formulated in the context of finite abelian extensions by Mazur and Rubin in \cite{MRGm} (where the congruences are referred to as a `refined class number formula for $\mathbb{G}_m$') and by the third author in \cite{sano}. In particular, working in the setting of the extension $K_{\infty}/k$ we formulate an explicit conjecture, denoted for convenience (MRS) here, which, roughly speaking, describes the precise relation between the natural Rubin-Stark elements for $K_{\infty}/k$ and for $K/k$. (For full details see Conjectures \ref{mrs1} and \ref{mrs2}). 
To better understand the context of this conjecture we prove in Theorem \ref{GS thm} that it constitutes a natural generalization of the so-called `Gross-Stark conjecture' formulated by Gross in \cite{Gp}. It is easy to see that, as already observed above, the eTNC implies the validity of (hIMC) and, in addition, one of the main results of our previous work \cite{bks1} allows us to prove in a straightforward way that it also implies the validity of (MRS). One of the key observations of the present article is that, much more significantly, one can prove under certain natural hypotheses a powerful converse to these implications. To be a little more precise, we shall prove a result of the following sort (for a detailed statement of which see Theorem \ref{mainthm}). \begin{theorem}\label{IntroTh} If the Galois coinvariants of a certain natural Iwasawa module are finite (as has been conjectured to be the case by Gross), then the validity of both (hIMC) and (MRS) for the extension $K_{\infty}/k$ implies the validity of the $p$-component of the eTNC for every finite subextension $F/k$ of $K_\infty/k$. \end{theorem} To give a first indication of the usefulness of the above theorem, we apply it in the case that $k$ is totally real and $K$ is CM and consider the `minus component' of the $p$-part of the eTNC. In this context we write $K^+$ for the maximal totally real subfield of $K$. We recall that if no $p$-adic place splits in $K/K^{+}$ and the Iwasawa-theoretic $\mu$-invariant of $K_\infty/K$ vanishes, then the validity of the minus component of the $p$-part of the eTNC is already known (as far as we are aware, such a result was first implicitly discussed in the survey article of Flach \cite{flachsurvey}). 
However, by combining Theorems \ref{GS thm} and \ref{mainthm} with recent work of Darmon, Dasgupta and Pollack \cite{DDP} and of Ventullo \cite{ventullo} on the Gross-Stark conjecture, we can now prove the following result (for a precise statement of which see Corollary \ref{MC1}). \begin{corollary}\label{IntroCor} Let $K$ be a finite CM-extension of a totally real field $k$. Let $p$ be an odd prime for which the Iwasawa-theoretic $\mu$-invariant of $K_\infty/K$ vanishes and at most one $p$-adic place of $k$ splits in $K/K^{+}$. Then the minus component of the $p$-part of the eTNC for $K/k$ is (unconditionally) valid. \end{corollary} We remark that this result gives {\it the first verifications} of the (minus component of the $p$-part of the) eTNC for the untwisted Tate motive over abelian CM-extensions of a totally real field that is not equal to $\mathbb{Q}$ and for which {\it the relevant $p$-adic $L$-series possess trivial zeroes}. For details of some concrete applications of Corollary \ref{IntroCor} in this regard, see Examples \ref{RemarkExample}. In another direction, Corollary \ref{IntroCor} also leads directly to a strong refinement of one of the main results of Greither and Popescu in \cite{GreitherPopescu} (for details of which see Corollary \ref{CMunconditional4}). \vspace{-0.5mm} \subsection{Further developments} Finally we would like to point out that the ideas presented in this article extend naturally in at least two different directions and that we intend to discuss these developments elsewhere. Firstly, one can formulate a natural generalization of the theory discussed here in the context of arbitrary Tate motives. 
In this setting our theory is related to natural generalizations of both the notion of Rubin-Stark element (which specializes in the case of Tate motives of strictly positive weight over abelian extensions of $\QQ$ to recover Soul\'e's construction of cyclotomic elements in higher algebraic $K$-theory) and of the Rubin-Stark conjecture itself. In particular, our approach leads in this context to the formulation of precise conjectural congruence relations between Rubin-Stark elements of differing `weights' which can be seen to constitute a wide-ranging (conjectural) generalization of the classical Kummer congruences involving Bernoulli numbers. For more details in this regard see \S\ref{negative}. Secondly, many of the constructions, conjectures and results that are discussed here extend naturally to the setting of non-commutative Iwasawa theory and can then be used to prove the same case of the eTNC that we consider here over natural families of Galois extensions that are both non-abelian and of degree divisible by a prime $p$ at which the relevant $p$-adic $L$-series possess trivial zeroes. \begin{notation} For the reader's convenience we start by collecting some basic notation. For any (profinite) group $G$ we write $\widehat{G}$ for the group of homomorphisms $G \to \CC^\times$ of finite order. Let $k$ be a number field. For a place $v$ of $k$, the residue field of $v$ is denoted by $\kappa(v)$, and its order is denoted by ${\N}v$. We denote the set of places of $k$ which lie above the infinite place $\infty$ of $\QQ$ (resp. a prime number $p$) by $S_\infty(k)$ (resp. $S_p(k)$). For a Galois extension $L/k$, the set of places of $k$ which ramify in $L$ is denoted by $S_{\rm ram}(L/k)$. For any set $\Sigma$ of places of $k$, we denote by $\Sigma_L$ the set of places of $L$ which lie above places in $\Sigma$. Let $L/k$ be an abelian extension with Galois group $G$. For a place $v$ of $k$, the decomposition group at $v$ in $G$ is denoted by $G_v$. 
If $v$ is unramified in $L$, the Frobenius automorphism at $v$ is denoted by $\Fr_v$. Let $E$ be either a field of characteristic $0$ or $\ZZ_p$. For an abelian group $A$, we denote $E\otimes_\ZZ A$ by $EA$ or $A_E$. For a $\ZZ_p$-module $A$ and an extension field $E$ of $\QQ_p$, we also write $EA$ or $A_E$ for $E\otimes_{\ZZ_p} A$. (This abuse of notation should not cause any confusion.) We use similar notation for complexes. For example, if $C$ is a complex of abelian groups, then we denote $E\otimes_\ZZ^{\mathbb{L}} C$ by $EC$ or $C_E$. Let $R$ be a commutative ring, and let $M$ be an $R$-module. Let $r$ and $s$ be non-negative integers with $r\leq s$. There is a canonical pairing $$\bigwedge_R^s M \times \bigwedge_R^r \Hom_R(M,R) \to \bigwedge_R^{s-r}M$$ defined by $$(a_1\wedge\cdots\wedge a_s,\varphi_1\wedge\cdots\wedge \varphi_r)\mapsto \sum_{\sigma \in {\mathfrak{S}_{s,r}} }{\rm{sgn}}(\sigma)\det(\varphi_i(a_{\sigma(j)}))_{1\leq i,j\leq r} a_{\sigma(r+1)}\wedge\cdots\wedge a_{\sigma(s)},$$ where $$\mathfrak{S}_{s,r}:=\{ \sigma \in \mathfrak{S}_s \mid \sigma(1) < \cdots < \sigma(r) \text{ and } \sigma(r+1) <\cdots<\sigma(s) \}.$$ (See \cite[Proposition 4.1]{bks1}.) We denote the image of $(a,\Phi)$ under the above pairing by $\Phi(a)$. For any $R$-module $M$, we denote the linear dual $\Hom_R(M,R)$ by $M^\ast$. The total quotient ring of $R$ is denoted by $Q(R)$. \end{notation} \section{Zeta elements for $\GG_m$} In this section, we review the zeta elements for $\GG_m$ that were studied in \cite{bks1}. \subsection{The Rubin-Stark conjecture} \label{rssection} We review the formulation of the Rubin-Stark conjecture \cite[Conjecture B$'$]{R}. Let $L/k$ be a finite abelian extension of number fields with Galois group $G$. Let $S$ be a finite set of places of $k$ which contains $S_\infty(k)\cup S_{\rm ram}(L/k)$. We fix a labeling $S=\{v_0,\ldots,v_n\}$. Take $r\in \ZZ$ so that $v_1,\ldots,v_r$ split completely in $L$. We put $V:=\{v_1,\ldots , v_r\}$. 
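The exterior-power pairing recalled in the notation above is completely explicit and can be experimented with for free modules of small rank. The following sketch is illustrative only (all function names are our own; we take $M=R^n$ free, represent elements of $\bigwedge^k M$ by their coordinates in the basis $\{e_I\}$ indexed by increasing tuples, and sum over the $(r,s-r)$-shuffles $\mathfrak{S}_{s,r}$):

```python
import itertools
from fractions import Fraction

def perm_sign(images):
    """Sign of a permutation given by its tuple of images, via inversion count."""
    inv = sum(1 for i in range(len(images)) for j in range(i + 1, len(images))
              if images[i] > images[j])
    return -1 if inv % 2 else 1

def det(mat):
    """Determinant by Laplace expansion (fine for the small sizes used here)."""
    if not mat:
        return Fraction(1)          # empty matrix: det = 1 (empty wedge)
    if len(mat) == 1:
        return Fraction(mat[0][0])
    return sum((-1) ** j * Fraction(mat[0][j])
               * det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(len(mat)))

def wedge_coords(vecs, n):
    """Coordinates of v_1 ^ ... ^ v_k in the basis {e_I : I increasing} of /\\^k R^n."""
    k = len(vecs)
    return {I: det([[vecs[i][I[j]] for j in range(k)] for i in range(k)])
            for I in itertools.combinations(range(n), k)}

def pairing(a_vecs, phi_vecs, n):
    """(a_1^...^a_s, phi_1^...^phi_r) |-> sum over shuffles sigma in S_{s,r} of
    sgn(sigma) det(phi_i(a_{sigma(j)}))_{i,j<=r} a_{sigma(r+1)} ^ ... ^ a_{sigma(s)}."""
    s, r = len(a_vecs), len(phi_vecs)
    out = {I: Fraction(0) for I in itertools.combinations(range(n), s - r)}
    for first in itertools.combinations(range(s), r):   # sigma(1) < ... < sigma(r)
        rest = tuple(i for i in range(s) if i not in first)
        sgn = perm_sign(first + rest)
        # the r x r matrix phi_i(a_{sigma(j)})
        mat = [[sum(phi_vecs[i][t] * a_vecs[first[j]][t] for t in range(n))
                for j in range(r)] for i in range(r)]
        coeff = sgn * det(mat)
        for I, c in wedge_coords([a_vecs[i] for i in rest], n).items():
            out[I] += coeff * c
    return out
```

For instance, for $M=R^2$, $a=e_1\wedge e_2$ and $\Phi=e_1^\ast$ one gets $\Phi(a)=\varphi(a_1)a_2-\varphi(a_2)a_1=e_2$, while for $s=r$ the pairing returns the full contraction $\det(\varphi_i(a_j))$ as a scalar.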
For each place $v$ of $k$, we fix a place $w$ of $L$ lying above $v$. In particular, for each $i$ with $0\leq i \leq n$, we fix a place $w_i$ of $L$ lying above $v_i$. These conventions will be used frequently throughout this paper. For $\chi \in \widehat G$, let $L_{k,S}(\chi,s)$ denote the usual $S$-truncated $L$-function for $\chi$. We put $$r_{\chi,S}:=\ord_{s=0}L_{k,S}(\chi,s).$$ Let $\cO_{L,S}$ be the ring of $S_L$-integers of $L$. For any set $\Sigma$ of places of $k$, put $Y_{L,\Sigma}:=\bigoplus_{w\in \Sigma_L}\ZZ w$, the free abelian group on $\Sigma_L$. We define $$X_{L,\Sigma}:=\{ \sum_{w\in \Sigma_L} a_w w\in Y_{L,\Sigma} \mid \sum_{w\in \Sigma_L}a_w=0\}.$$ By Dirichlet's unit theorem, we know that the homomorphism of $\RR[G]$-modules $$\lambda_{L,S}: \RR \cO_{L,S}^\times \stackrel{\sim}{\to} \RR X_{L,S}; \ a\mapsto -\sum_{w\in S_L}\log|a|_w w$$ is an isomorphism. By \cite[Chap. I, Proposition 3.4]{tate} we know that \begin{eqnarray} r_{\chi,S}&=&\dim_\CC(e_\chi \CC \cO_{L,S}^\times)=\dim_\CC(e_\chi \CC X_{L,S}) \nonumber \\ &=& \begin{cases} \# \{ v\in S \mid \chi(G_v)=1\} &\text{ if $\chi \neq 1$}, \\ n(=\#S-1) &\text{ if $\chi=1$}, \end{cases} \nonumber \end{eqnarray} where $e_\chi:=\frac{1}{\#G}\sum_{\sigma\in G}\chi(\sigma)\sigma^{-1}$. From this fact, we see that $r \leq r_{\chi,S}$. Let $T$ be a finite set of places of $k$ which is disjoint from $S$. The $S$-truncated $T$-modified $L$-function is defined by $$L_{k,S,T}(\chi,s):=(\prod_{v\in T}(1-\chi(\Fr_v){\N}v^{1-s}))L_{k,S}(\chi,s).$$ The $(S,T)$-unit group of $L$ is defined by $$\cO_{L,S,T}^\times:=\ker(\cO_{L,S}^\times \to \bigoplus_{w \in T_L}\kappa(w)^\times).$$ Note that $\cO_{L,S,T}^\times$ is a subgroup of $\cO_{L,S}^\times$ of finite index. 
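\begin{remark}
As a simple illustration of these definitions (included only for orientation), let $k=\QQ$, let $L$ be a real quadratic field with non-trivial character $\chi$, and take $S=S_\infty(\QQ)\cup S_{\rm ram}(L/\QQ)$. Since $L$ is real, the archimedean place splits completely in $L$ and so $\chi(G_\infty)=1$, whereas $\chi(G_v)\neq 1$ for each ramified place $v$. Hence
$$r_{\chi,S}=\#\{v\in S\mid \chi(G_v)=1\}=1,$$
so one may take $r=1$ and $V=\{\infty\}$. This is the classical rank one setting of Stark's conjecture, in which the Rubin-Stark element defined below is (up to normalization) a Stark unit.
\end{remark}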
We have $$r \leq r_{\chi,S}=\ord_{s=0}L_{k,S,T}(\chi,s)=\dim_\CC(e_\chi \CC \cO_{L,S,T}^\times).$$ We put $$L_{k,S,T}^{(r)}(\chi,0):=\lim_{s\to 0}s^{-r}L_{k,S,T}(\chi,s).$$ We define the $r$-th order Stickelberger element by $$\theta_{L/k,S,T}^{(r)}:=\sum_{\chi \in \widehat G}L_{k,S,T}^{(r)}(\chi^{-1},0)e_\chi \in \RR[G]. $$ The ($r$-th order) Rubin-Stark element $$\epsilon_{L/k,S,T}^V \in \RR \bigwedge_{\ZZ[G]}^r \cO_{L,S,T}^\times$$ is defined to be the element which corresponds to $$\theta_{L/k,S,T}^{(r)}\cdot (w_1-w_0)\wedge\cdots\wedge(w_r-w_0) \in \RR\bigwedge_{\ZZ[G]}^r X_{L,S}$$ under the isomorphism $$\RR \bigwedge_{\ZZ[G]}^r \cO_{L,S,T}^\times \stackrel{\sim}{\to} \RR\bigwedge_{\ZZ[G]}^r X_{L,S}$$ induced by $\lambda_{L,S}$. Now assume that $\cO_{L,S,T}^\times$ is $\ZZ$-free. Then, the Rubin-Stark conjecture predicts that the Rubin-Stark element $\epsilon_{L/k,S,T}^V$ lies in the lattice $$\bigcap_{\ZZ[G]}^r \cO_{L,S,T}^\times:=\{a\in \QQ \bigwedge_{\ZZ[G]}^r \cO_{L,S,T}^\times \mid \Phi(a)\in\ZZ[G]\text{ for all }\Phi\in\bigwedge_{\ZZ[G]}^r\Hom_{\ZZ[G]}(\cO_{L,S,T}^\times,\ZZ[G])\}.$$ (See \cite[Conjecture B$'$]{R}.) In this paper, we consider the `$p$-part' of the Rubin-Stark conjecture for a fixed prime number $p$. We put $$U_{L,S,T}:=\ZZ_p \cO_{L,S,T}^\times.$$ We also fix an isomorphism $\CC \simeq \CC_p$. From this, we regard $$\epsilon_{L/k,S,T}^V \in \CC_p \bigwedge_{\ZZ_p[G]}^r U_{L,S,T}.$$ We define $$\bigcap_{\ZZ_p[G]}^r U_{L,S,T}:=\{a \in \QQ_p\bigwedge_{\ZZ_p[G]}^r U_{L,S,T} \mid \Phi(a) \in \ZZ_p[G] \text{ for all $\Phi \in \bigwedge_{\ZZ_p[G]}^r \Hom_{\ZZ_p[G]}(U_{L,S,T},\ZZ_p[G])$}\}.$$ We easily see that there is a natural isomorphism $\ZZ_p\bigcap_{\ZZ[G]}^r \cO_{L,S,T}^\times\simeq \bigcap_{\ZZ_p[G]}^r U_{L,S,T}$. We often denote $\bigwedge_{\ZZ_p[G]}^r$ and $\bigcap_{\ZZ_p[G]}^r $ simply by $\bigwedge^r$ and $\bigcap^r $ respectively. We propose the `$p$-component version' of the Rubin-Stark conjecture as follows. 
\begin{conjecture}[{${\rm RS}(L/k,S,T,V)_p$}] $$\epsilon_{L/k,S,T}^V \in \bigcap^r U_{L,S,T}.$$ \end{conjecture} \begin{remark} Concerning known results on the Rubin-Stark conjecture, see \cite[Remark 5.3]{bks1} for example. Note that the Rubin-Stark conjecture is a consequence of the eTNC. This result was first proved by the first author in \cite[Corollary 4.1]{burns}, and a simpler proof was subsequently given by the present authors in \cite[Theorem 5.13]{bks1}. \end{remark} \subsection{The eTNC for the untwisted Tate motive} In this subsection, we review the formulation of the eTNC for the untwisted Tate motive. Let $L/k,G,S,T$ be as in the previous subsection. Fix a prime number $p$. We assume that $S_p(k) \subset S$. Consider the complex $$C_{L,S}:=R\Hom_{\ZZ_p}(R\Gamma_c(\cO_{L,S},\ZZ_p),\ZZ_p)[-2].$$ It is known that $C_{L,S}$ is a perfect complex of $\ZZ_p[G]$-modules, acyclic outside degrees zero and one. We have a canonical isomorphism $$H^0(C_{L,S})\simeq U_{L,S}(:=\ZZ_p\cO_{L,S}^\times),$$ and a canonical exact sequence $$0 \to A_S(L) \to H^1(C_{L,S}) \to \cX_{L,S}\to 0,$$ where $A_S(L):=\ZZ_p\Pic(\cO_{L,S})$ and $\cX_{L,S}:=\ZZ_pX_{L,S}$. The complex $C_{L,S}$ is identified with the $p$-completion of the complex obtained from the classical `Tate sequence' (if $S$ is large enough), and also identified with $\ZZ_p R\Gamma((\cO_{L,S})_{\mathcal{W}},\GG_m)$, where $R\Gamma((\cO_{L,S})_{\mathcal{W}},\GG_m)$ is the `Weil-\'etale cohomology complex' constructed in \cite[\S 2.2]{bks1} (see \cite[Proposition 3.3]{BFongal} and \cite[Proposition 3.5(e)]{pnp}). By a construction similar to that of \cite[Proposition 2.4]{bks1}, we construct a canonical complex $C_{L,S,T}$ which lies in the distinguished triangle $$C_{L,S,T} \to C_{L,S} \to \bigoplus_{w\in T_L}\ZZ_p\kappa(w)^\times[0].$$ (In the terminology of \cite{bks1}, one can simply define $C_{L,S,T}$ to be $\ZZ_p R\Gamma_T((\cO_{L,S})_{\mathcal{W}},\GG_m)$.) 
We have $$H^0(C_{L,S,T})=U_{L,S,T}$$ and the exact sequence $$0 \to A_S^T(L) \to H^1(C_{L,S,T}) \to \cX_{L,S} \to 0,$$ where $A_S^T(L)$ is the $p$-part of the ray class group of $\cO_{L,S}$ with modulus $\prod_{w\in T_L} w$. We define the leading term of $L_{k,S,T}(\chi,s)$ at $s=0$ by $$L_{k,S,T}^\ast(\chi,0):=\lim_{s\to 0}s^{-r_{\chi,S}}L_{k,S,T}(\chi,s).$$ The leading term at $s=0$ of the equivariant $L$-function $$\theta_{L/k,S,T}(s):=\sum_{\chi \in \widehat G}L_{k,S,T}(\chi^{-1},s)e_\chi$$ is defined by $$\theta_{L/k,S,T}^\ast(0):=\sum_{\chi \in \widehat G}L_{k,S,T}^\ast(\chi^{-1},0)e_\chi \in \RR[G]^\times.$$ As in the previous subsection we fix an isomorphism $\CC \simeq \CC_p$. We regard $\theta_{L/k,S,T}^\ast(0)\in \CC_p[G]^\times$. The zeta element for $\GG_m$ $$z_{L/k,S,T} \in \CC_p{\det}_{\ZZ_p[G]}(C_{L,S,T})$$ is defined to be the element which corresponds to $\theta_{L/k,S,T}^\ast(0)$ under the isomorphism \begin{eqnarray} \CC_p{\det}_{\ZZ_p[G]}(C_{L,S,T}) &\simeq& {\det}_{\CC_p[G]}(\CC_pU_{L,S,T}) \otimes_{\CC_p[G]} {\det}_{\CC_p[G]}^{-1}(\CC_p \cX_{L,S}) \nonumber \\ &\stackrel{\sim}{\to}& {\det}_{\CC_p[G]}(\CC_p \cX_{L,S})\otimes_{\CC_p[G]} {\det}_{\CC_p[G]}^{-1}(\CC_p \cX_{L,S}) \nonumber \\ &\stackrel{\sim}{\to}& \CC_p[G], \nonumber \end{eqnarray} where the second isomorphism is induced by $\lambda_{L,S}$, and the last isomorphism is the evaluation map. Note that determinant modules must strictly be regarded as graded invertible modules but, as in \cite{bks1}, we suppress the grading throughout. The eTNC for the pair $(h^0(\Spec L),\ZZ_p[G])$ is formulated as follows. 
\begin{conjecture}[{${\rm eTNC}(h^0(\Spec L),\ZZ_p[G])$}] $$\ZZ_p[G]\cdot z_{L/k,S,T}={\det}_{\ZZ_p[G]}(C_{L,S,T}).$$ \end{conjecture} \begin{remark} When $p$ is odd, $k$ is totally real, and $L$ is CM, we say that the minus part of the eTNC (which we denote by ${\rm eTNC}(h^0(\Spec L),\ZZ_p[G]^-)$) is valid if we have the equality $$e^-\ZZ_p[G]\cdot z_{L/k,S,T}=e^-{\det}_{\ZZ_p[G]}(C_{L,S,T}),$$ where $e^-:=\frac{1-c}{2}$ and $c \in G$ is the complex conjugation. \end{remark} \subsection{The eTNC and Rubin-Stark elements} In this subsection, we interpret the eTNC, using Rubin-Stark elements. The result in this subsection will be used in \S \ref{descent section}. We continue to use the notation in the previous subsection. Take $\chi \in \widehat G$, and suppose that $r_{\chi,S} <\# S $. Put $L_\chi:=L^{\ker \chi}$ and $G_\chi:=\Gal(L_\chi/k)$. Take $V_{\chi,S}\subset S$ so that all $v\in V_{\chi,S}$ split completely in $L_\chi$ (i.e. $\chi(G_v)=1$) and $\#V_{\chi,S}=r_{\chi,S}$. Note that, if $\chi\neq 1$, we have $$V_{\chi,S}=\{v\in S \mid \chi(G_v)=1\}.$$ Consider the Rubin-Stark element $$\epsilon_{L_\chi/k,S,T}^{V_{\chi,S}}\in \CC_p \bigwedge^{r_{\chi,S}}U_{L_\chi,S,T}.$$ Note that a Rubin-Stark element depends on a fixed labeling of $S$, so in this case a labeling of $S$ such that $S=\{ v_0,\ldots,v_n\}$ and $V_{\chi,S}=\{v_1,\ldots,v_{r_{\chi,S}}\}$ is understood to be chosen. For a set $\Sigma$ of places of $k$ and a finite extension $F/k$, put $\cY_{F,\Sigma}:=\ZZ_p Y_{F,\Sigma}=\bigoplus_{w \in \Sigma_F}\ZZ_p w$ and $\cX_{F,\Sigma}:=\ZZ_pX_{F,\Sigma}=\ker(\cY_{F,\Sigma} \to \ZZ_p)$. The natural surjection $$\cX_{L_\chi,S} \to \cY_{L_\chi,V_{\chi,S}}$$ induces an injection $$\cY_{L_\chi,V_{\chi,S}}^\ast \to \cX_{L_\chi,S}^\ast,$$ where $(\cdot)^\ast:=\Hom_{\ZZ_p[G_\chi]}(\cdot, \ZZ_p[G_\chi])$. 
Since $\cY_{L_\chi,V_{\chi,S}} \simeq \ZZ_p[G_\chi]^{\oplus r_{\chi,S}}$ and $\dim_{\CC_p}(e_\chi \CC_p\cX_{L,S})=r_{\chi,S}$, the above map induces an isomorphism $$e_\chi\CC_p\cY_{L_\chi,V_{\chi,S}}^\ast \stackrel{\sim}{\to} e_\chi \CC_p \cX_{L,S}^\ast.$$ From this, we have a canonical identification $$e_\chi\CC_p( \bigwedge^{r_{\chi,S}}U_{L_\chi,S,T} \otimes\bigwedge^{r_{\chi,S}} \cY_{L_\chi,V_{\chi,S}}^\ast )= e_\chi({\det}_{\CC_p[G]}(\CC_pU_{L,S,T}) \otimes_{\CC_p[G]} {\det}_{\CC_p[G]}^{-1}(\CC_p\cX_{L,S})).$$ Since $\{w_1,\ldots , w_{r_{\chi,S}}\}$ is a basis of $\cY_{L_\chi,V_{\chi,S}}$, we have the (non-canonical) isomorphism $$\bigwedge^{r_{\chi,S}}U_{L_\chi,S,T} \stackrel{\sim}{\to} \bigwedge^{r_{\chi,S}}U_{L_\chi,S,T}\otimes \bigwedge^{r_{\chi,S}}\cY_{L_\chi,V_{\chi,S}}^\ast; \ a \mapsto a\otimes w_1^\ast \wedge \cdots \wedge w_{r_{\chi,S}}^\ast,$$ where $w_i^\ast$ is the dual of $w_i$. Hence, we have the (non-canonical) isomorphism $$e_\chi \CC_p \bigwedge^{r_{\chi,S}}U_{L_\chi,S,T} \simeq e_\chi({\det}_{\CC_p[G]}(\CC_pU_{L,S,T}) \otimes_{\CC_p[G]} {\det}_{\CC_p[G]}^{-1}(\CC_p\cX_{L,S})).$$ \begin{proposition} \label{etncbyrs} Suppose that $r_{\chi,S} <\# S $ for every $\chi \in \widehat G$. Then, ${\rm eTNC}(h^0(\Spec L),\ZZ_p[G])$ holds if and only if there exists a $\ZZ_p[G]$-basis $\mathcal{L}_{L/k,S,T}$ of ${\det}_{\ZZ_p[G]}(C_{L,S,T})$ such that, for every $\chi \in \widehat G$, the image of $e_\chi \mathcal{L}_{L/k,S,T}$ under the isomorphism $$e_\chi \CC_p{\det}_{\ZZ_p[G]}(C_{L,S,T}) \simeq e_\chi({\det}_{\CC_p[G]}(\CC_pU_{L,S,T}) \otimes_{\CC_p[G]} {\det}_{\CC_p[G]}^{-1}(\CC_p\cX_{L,S})) \simeq e_\chi \CC_p \bigwedge^{r_{\chi,S}}U_{L_\chi,S,T}$$ coincides with $e_\chi \epsilon_{L_\chi/k,S,T}^{V_{\chi,S}}$. 
\end{proposition} \begin{proof} By the definition of Rubin-Stark elements, we see that the image of $e_\chi\epsilon_{L_\chi/k,S,T}^{V_{\chi,S}}$ under the isomorphism \begin{eqnarray} e_\chi \CC_p \bigwedge^{r_{\chi,S}}U_{L_\chi,S,T} &\simeq & e_\chi({\det}_{\CC_p[G]}(\CC_pU_{L,S,T}) \otimes_{\CC_p[G]} {\det}_{\CC_p[G]}^{-1}(\CC_p\cX_{L,S})) \nonumber \\ &\simeq & e_\chi({\det}_{\CC_p[G]}(\CC_p\cX_{L,S}) \otimes_{\CC_p[G]} {\det}_{\CC_p[G]}^{-1}(\CC_p\cX_{L,S})) \nonumber\\ &\simeq& e_\chi \CC_p[G] \nonumber \end{eqnarray} is equal to $e_\chi L_{k,S,T}^\ast(\chi^{-1},0)$. The `only if part' follows by putting $\mathcal{L}_{L/k,S,T}:=z_{L/k,S,T}$. The `if part' follows by noting that $\mathcal{L}_{L/k,S,T}$ must be equal to $z_{L/k,S,T}$. \end{proof} \subsection{The canonical projection maps} \label{section explicit} Let $L/k,G,S,T,V,r$ be as in \S \ref{rssection}. We put $$e_r:=\sum_{\chi \in \widehat G, \ r_{\chi,S}=r} e_\chi \in \QQ[G].$$ As in Proposition \ref{etncbyrs}, we construct the (non-canonical) isomorphism $$e_r\CC_p{\det}_{\ZZ_p[G]}(C_{L,S,T}) \simeq e_r \CC_p \bigwedge^{r}U_{L,S,T}.$$ In this subsection, we give an explicit description of the map $$\pi_{L/k,S,T}^V:{\det}_{\ZZ_p[G]}(C_{L,S,T}) \stackrel{e_r \CC_p \otimes}{\to} e_r\CC_p{\det}_{\ZZ_p[G]}(C_{L,S,T}) \simeq e_r \CC_p \bigwedge^{r}U_{L,S,T} \subset \CC_p \bigwedge^{r}U_{L,S,T}.$$ This map is important since the image of the zeta element $z_{L/k,S,T}$ under this map is the Rubin-Stark element $\epsilon_{L/k,S,T}^V$. Firstly, we choose a representative of $C_{L,S,T}$ $$\Pi \stackrel{\psi}{\to} \Pi,$$ where the first term is placed in degree zero, such that $\Pi$ is a free $\ZZ_p[G]$-module with basis $\{ b_1,\ldots, b_d\}$ ($d$ is sufficiently large), and that the natural surjection $$\Pi \to H^1(C_{L,S,T}) \to \cX_{L,S}$$ sends $b_i$ to $w_i-w_0$ for each $i$ with $1\leq i \leq r$. For the details of this construction, see \cite[\S 5.4]{bks1}. 
Note that the representative of $R\Gamma_T((\mathcal{O}_{K,S})_{\mathcal{W}},\GG_m)$ chosen in \cite[\S 5.4]{bks1} is of the form $$P\to F,$$ where $P$ is projective and $F$ is free. By Swan's theorem \cite[(32.1)]{curtisr}, we have an isomorphism $\ZZ_p P\simeq \ZZ_p F$. This shows that we can take the representative of $C_{L,S,T}$ as above. We define $\psi_i \in \Hom_{\ZZ_p[G]}(\Pi,\ZZ_p[G])$ by $$\psi_i:=b_i^\ast\circ \psi,$$ where $b_i^\ast$ is the dual of $b_i$. Note that $\bigwedge_{r<i\leq d}\psi_i \in \bigwedge^{d-r} \Hom_{\ZZ_p[G]}(\Pi,\ZZ_p[G])$ defines the homomorphism $$\bigwedge_{r<i\leq d}\psi_i :\bigwedge^d \Pi \to \bigwedge^r \Pi$$ given by $$(\bigwedge_{r<i\leq d}\psi_i) (b_1\wedge \cdots \wedge b_d)=\sum_{\sigma \in \mathfrak{S}_{d,r}}{\rm sgn}(\sigma) \det(\psi_i(b_{\sigma(j)}))_{r< i,j \leq d} b_{\sigma(1)} \wedge \cdots \wedge b_{\sigma(r)}$$ (see Notation). \begin{proposition} \label{explicit projector}\ \begin{itemize} \item[(i)] We have $$\bigcap^r U_{L,S,T}=(\QQ_p\bigwedge^r U_{L,S,T})\cap \bigwedge^r \Pi,$$ where we regard $U_{L,S,T}\subset \Pi$ via the natural inclusion $$U_{L,S,T}=H^0(C_{L,S,T}) =\ker \psi \hookrightarrow \Pi.$$ \item[(ii)] If we regard $\bigcap^r U_{L,S,T} \subset \bigwedge^r \Pi$ by (i), then we have $$\im (\bigwedge_{r<i\leq d}\psi_i :\bigwedge^d \Pi \to \bigwedge^r \Pi) \subset \bigcap^r U_{L,S,T} .$$ \item[(iii)] The map $${\det}_{\ZZ_p[G]}(C_{L,S,T}) = \bigwedge^d \Pi \otimes \bigwedge^d \Pi^\ast \to \bigcap^r U_{L,S,T}; \ b_1\wedge \cdots \wedge b_d\otimes b_1^\ast \wedge \cdots \wedge b_d^\ast \mapsto (\bigwedge_{r<i\leq d}\psi_i) (b_1\wedge \cdots \wedge b_d)$$ coincides with $(-1)^{r(d-r)}\pi_{L/k,S,T}^V$. 
In particular, we have $$\pi_{L/k,S,T}^V(b_1\wedge \cdots \wedge b_d\otimes b_1^\ast \wedge \cdots \wedge b_d^\ast)=(-1)^{r(d-r)}\sum_{\sigma \in \mathfrak{S}_{d,r}}{\rm sgn}(\sigma) \det(\psi_i(b_{\sigma(j)}))_{r < i,j \leq d} b_{\sigma(1)} \wedge \cdots \wedge b_{\sigma(r)}$$ and $$\im \pi_{L/k,S,T}^V \subset \{ a\in \bigcap^r U_{L,S,T} \mid e_r a =a\}.$$ \end{itemize} \end{proposition} \begin{proof} For (i), see \cite[Lemma 4.7(ii)]{bks1}. For (ii) and (iii), see \cite[Lemma 4.3]{bks1}. \end{proof} \section{Higher rank Iwasawa theory}\label{hrit sec} \subsection{Notation} We fix a prime number $p$. We use the following notation: \begin{itemize} \item $k$: number field; \item $K_\infty/k$: Galois extension such that $\G:=\Gal(K_\infty/k)\simeq \Delta \times \Gamma$, where $\Delta$ is a finite abelian group and $\Gamma\simeq \ZZ_p$; \item $\Lambda:=\ZZ_p[[\G]]$; \item Fix an isomorphism $\CC\simeq \CC_p$, and identify $\widehat \Delta$ with $\Hom_\ZZ(\Delta,\overline \QQ_p^\times)$. For $\chi \in \widehat \Delta$, put $\Lambda_\chi:=\ZZ_p[\im \chi][[\Gamma]].$ \end{itemize} Note that the total quotient ring $Q(\Lambda)$ has the decomposition $$Q(\Lambda)\simeq \bigoplus_{\chi \in \widehat \Delta/{\sim}_{\QQ_p}} Q(\Lambda_\chi),$$ where the equivalence relation $\sim_{\QQ_p}$ is defined by $$\chi \sim_{\QQ_p} \chi' \Leftrightarrow \text{there exists $\sigma \in G_{\QQ_p}$ such that $\chi=\sigma \circ \chi'$}.$$ \begin{itemize} \item $K:=K_\infty^\Gamma$ (so $\Gal(K/k)=\Delta$); \item $k_\infty:=K_\infty^{\Delta}$ (so $k_\infty/k$ is a $\ZZ_p$-extension with Galois group $\Gamma$); \item $k_n$: the $n$-th layer of $k_\infty/k$; \item $K_n$: the $n$-th layer of $K_\infty/K$; \item $\G_n:=\Gal(K_n/k)$. 
\end{itemize} For each character $\chi \in \widehat{\G}$ we also set \begin{itemize} \item $L_\chi:=K_{\infty}^{\ker \chi}$; \item $L_{\chi,\infty}:=L_\chi \cdot k_\infty$; \item $L_{\chi,n}$: the $n$-th layer of $L_{\chi,\infty}/L_\chi$; \item $\G_\chi:=\Gal(L_{\chi,\infty}/k)$; \item $\G_{\chi,n}:=\Gal(L_{\chi,n}/k)$; \item $G_\chi:=\Gal(L_\chi/k)$; \item $\Gamma_\chi:=\Gal(L_{\chi,\infty}/L_\chi)$; \item $\Gamma_{\chi,n}:=\Gal(L_{\chi,n}/L_\chi)$; \item $S$: a finite set of places of $k$ which contains $S_\infty(k)\cup S_{\rm ram}(K_\infty/k) \cup S_p(k)$; \item $T$: a finite set of places of $k$ which is disjoint from $S$; \item $V_\chi:=\{ v\in S \mid v \text{ splits completely in } L_{\chi,\infty} \}$ (this is a proper subset of $S$); \item $r_\chi:=\# V_\chi.$ \end{itemize} For any intermediate field $L$ of $K_\infty/k$, we denote $\varprojlim_{F}U_{F,S,T}$ by $U_{L,S,T}$, where $F$ runs over all intermediate fields of $L/k$ which are finite over $k$ and the inverse limit is taken with respect to norm maps. Similarly, $C_{L,S,T}$ is defined to be the inverse limit of $C_{F,S,T}$. We denote $\varprojlim_F \cY_{F,S}$ by $\cY_{L,S}$, where the inverse limit is taken with respect to the maps $$\cY_{F',S}\to \cY_{F,S}; \ w_{F'} \mapsto w_F, $$ where $F \subset F'$, $w_{F'} \in S_{F'}$, and $w_F\in S_F$ is the place lying under $w_{F'}$. We use similar notation for $\cX_{L,S}$ etc. \subsection{Iwasawa main conjecture I} In this section we formulate the main conjecture of Iwasawa theory for general number fields, which is a key to our study. 
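\begin{remark}
Before formulating the conjecture, we recall the prototypical example (included here only for orientation). Let $k=\QQ$ and $K_\infty=\QQ(\mu_{p^\infty})$ with $p$ odd, so that $k_\infty$ is the cyclotomic $\ZZ_p$-extension of $\QQ$. If $\chi \in \widehat \Delta$ is an odd character, then no place in $S$ splits completely in $L_{\chi,\infty}$ (no finite place splits completely in $k_\infty$, and the archimedean place does not split in $L_\chi$), so $V_\chi=\emptyset$ and $r_\chi=0$. In this case the $\chi$-part of the conjectural basis element introduced below is, in essence, a $T$-modified form of the Kubota-Leopoldt $p$-adic $L$-function, and the conjecture recovers the classical main conjecture proved by Mazur and Wiles.
\end{remark}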
\subsubsection{}For any character $\chi$ in $\widehat{\G}$ there is a natural composite homomorphism \begin{eqnarray} \lambda_\chi :{\det}_\Lambda(C_{K_\infty,S,T}) &\to& {\det}_{\ZZ_p[G_\chi]}(C_{L_\chi,S,T}) \nonumber \\ &\hookrightarrow& {\det}_{\CC_p[G_{\chi}]}(\CC_pC_{L_{\chi},S,T}) \nonumber \\ &\stackrel{\sim}{\to}&{\det}_{\CC_p[G_{\chi}]}(\CC_p U_{L_{\chi},S,T}) \otimes_{\CC_p[G_{\chi}]}{\det}_{\CC_p[G_{\chi}]}^{-1}(\CC_p \cX_{L_{\chi},S}) \nonumber \\ & \stackrel{\sim}{\to}&{\det}_{\CC_p[G_{\chi}]}(\CC_p \cX_{L_\chi,S}) \otimes_{\CC_p[G_{\chi}]}{\det}_{\CC_p[G_{\chi}]}^{-1}(\CC_p \cX_{L_{\chi},S}) \nonumber \\ &\simeq& \CC_p[G_{\chi}] \nonumber\\ &\stackrel{\chi}{\to } &\CC_p, \nonumber \end{eqnarray} where the fourth map is induced by $\lambda_{L_\chi,S}$, the fifth map is the evaluation, and the last map is induced by $\chi$. We can now state our higher rank main conjecture of Iwasawa theory in its first form. \begin{conjecture}[{${\rm IMC}(K_\infty/k,S,T,p)$}] \label{IMC} There exists a $\Lambda$-basis $\mathcal{L}_{K_\infty/k,S,T}$ of the module ${\det}_\Lambda(C_{K_\infty,S,T})$ such that, for every $\chi \in \widehat \Delta$ and every $\psi\in \widehat{\G_{\chi}}$ with $r_{\psi,S} = r_\chi$, one has $\lambda_\psi(\mathcal{L}_{K_\infty/k,S,T})=L_{k,S,T}^{(r_\chi)}(\psi^{-1},0).$ \end{conjecture} \begin{remark} It is important to note that this conjecture is much weaker than the (relevant case of the) equivariant Tamagawa number conjecture. For example, if $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension, then for any $\psi$ that is trivial on the decomposition group in $\mathcal{G}_\chi$ of any $p$-adic place of $k$ one has $r_{\psi,S} > r_\chi$ and so there is no interpolation condition at $\psi$ specified above. When $r_{\chi}=0$, (the $\chi$-component of) the element $\mathcal{L}_{K_\infty/k,S,T}$ is the $p$-adic $L$-function, and in the general case $r_{\chi}>0$ it plays the role of a $p$-adic $L$-function. 
We show later that Conjecture \ref{IMC} can also be naturally interpreted in terms of the existence of suitable Iwasawa-theoretic measures (see Proposition \ref{measureIMC}).\end{remark} \subsubsection{}\label{negative} In this subsection we assume that $K_{\infty}/K$ is the cyclotomic $\ZZ_{p}$-extension and that $K$ contains a primitive $p$-th root of unity and we briefly discuss how in this case the element $\mathcal{L}_{K_\infty/k,S,T}$ predicted by Conjecture ${\rm IMC}(K_\infty/k,S,T,p)$ should encode information about the $L$-values at $s=n$ for arbitrary integers $n$. To do this we use the twisting map $${\rm tw}: \Lambda \to \Lambda$$ defined by setting ${\rm tw}(\sigma):=\chi_{\rm cyc}(\sigma)\sigma$, where $\sigma \in \G$ and $\chi_{\rm cyc} : \G \to \ZZ_p^\times$ is the cyclotomic character. For an integer $n$, the ring $\Lambda$, which is regarded as a $\Lambda$-algebra via ${\rm tw}^n$, is denoted by $\Lambda(n)$. For a finite extension $L/k$ and a set of places $\Sigma$ of $k$ which contains $S_\infty(k)\cup S_{\rm ram}(L/k) \cup S_p(k)$, we put $$C_{L,\Sigma}(n):=R\Hom_{\ZZ_p}(R\Gamma_c(\mathcal{O}_{L,\Sigma},\ZZ_p(n)),\ZZ_p)[-2].$$ For a set of places $T$ of $k$ which is disjoint from $\Sigma$, one can construct a canonical complex $C_{L,\Sigma,T}(n)$ which lies in the exact triangle $$C_{L,\Sigma,T}(n) \to C_{L,\Sigma}(n) \to \bigoplus_{w \in T_L} H^1(\kappa(w),\ZZ_p(1-n))[0]$$ and is such that there exists a canonical isomorphism $${\det}_\Lambda(C_{K_\infty,S,T})\otimes_{\Lambda} \Lambda(n) \simeq {\det}_\Lambda (C_{K_\infty,S,T}(n)),$$ where $C_{K_\infty,S,T}(n)$ is defined by taking the inverse limit of the complexes $C_{K_m,S,T}(n)$ (see \cite[Proposition 1.6.5(3)]{FukayaKato}). 
Assuming the validity of ${\rm IMC}(K_\infty/k,S,T,p)$, we then define $\mathcal{L}_{K_\infty/k,S,T}(n)$ to be the element of ${\det}_\Lambda (C_{K_\infty,S,T}(n))$ which corresponds to $\mathcal{L}_{K_\infty/k,S,T}\otimes 1$ under the above isomorphism and we denote the image of $\mathcal{L}_{K_\infty/k,S,T}(n)$ under the canonical surjection ${\det}_\Lambda(C_{K_\infty,S,T}(n)) \to {\det}_{\ZZ_p[\Delta]}(C_{K,S,T}(n))$ by $\mathcal{L}_{K/k,S,T}(n)$. Then the conjecture of Fukaya-Kato \cite[Conjecture 2.3.2]{FukayaKato} suggests that $\mathcal{L}_{K/k,S,T}(n)$ is the zeta element for $(h^0(\Spec K)(n),\ZZ_p[\Delta])$, namely, the element which corresponds to the leading term $$\theta_{K/k,S,T}^\ast(n)=\sum_{\chi \in \widehat \Delta} L_{k,S,T}^\ast(\chi^{-1},n)e_\chi \in \RR[\Delta]^\times \subset \CC_p[\Delta]^\times$$ under the canonical isomorphism $$\CC_p {\det}_{\ZZ_p[\Delta]}(C_{K,S,T}(n)) \simeq \CC_p\otimes_\QQ \Xi(h^0(\Spec K)(n)) \simeq \CC_p[\Delta],$$ where $\Xi(h^0(\Spec K)(n))$ is the fundamental line for $(h^0(\Spec K)(n),\QQ[\Delta])$ (see \cite[(29)]{BF Tamagawa}), and the first (resp. second) isomorphism is the $p$-adic regulator isomorphism $\tilde \vartheta_p$ in \cite[p.479]{BF Tamagawa2} (resp. the regulator isomorphism $\vartheta_\infty$ in \cite[p.529]{BF Tamagawa}). In a subsequent paper we shall study these subjects thoroughly. In particular, we generalize the Rubin-Stark conjecture to the setting of Tate motives $h^0(\Spec K)(n)$ for an arbitrary integer $n$ (the original Rubin-Stark conjecture being regarded as the special case of this conjecture in the case $n=0$). 
We also show that the corresponding Rubin-Stark elements for twisted Tate motives are generalizations of Soul\'e's cyclotomic elements and we find that the above conjectural property of $\mathcal{L}_{K_\infty/k,S,T}$ predicts the existence of precise congruence relations between the Rubin-Stark elements for $h^0(\Spec K)(n)$ and $h^0(\Spec K)(n')$ for arbitrary integers $n$ and $n'$ which constitute a natural extension of the classical congruences of Kummer. \subsubsection{}In the next result we record a useful invariance property of Conjecture \ref{IMC}. In the proof of this result we set \[ \delta_T := \prod_{v \in T}(1-{\rm Fr}_v^{-1}{\rm N}v) \in\Lambda\] where ${\rm Fr}_v$ denotes the arithmetic Frobenius in $\mathcal{G}$ of any place $w$ of $K_\infty$ that lies above $v$. This element belongs to $Q(\Lambda)^\times$ since for each $\chi \in \widehat \Delta$ and each $v\in T$ the image of $1-{\rm Fr}_v^{-1}{\N}v$ under the map $\Lambda \stackrel{\chi}{\to} \Lambda_\chi$ is non-zero. \begin{lemma}\label{independent} The validity of Conjecture \ref{IMC} is independent of the choice of $T$. \end{lemma} \begin{proof} It is enough to consider replacing $T$ by a larger set $T'$. Set $T'' := T'\setminus T$. Then, one finds that there exists an exact triangle \[ C_{K_\infty,S,T} \to C_{K_\infty,S,T'} \to \bigoplus_{w \in T''_{K_\infty}}(\ZZ_p\kappa(w)^\times )[0]\] (where $\kappa(w)^\times$ is defined as an inverse limit) and hence an equality \[{\rm det}_\Lambda(C_{K_\infty,S,T'}) = {\det}_\Lambda(C_{K_\infty,S,T})\prod_{w \in T''_{K_\infty}}{\rm Fitt}^0_\Lambda(\ZZ_p\kappa(w)^\times) = {\det}_\Lambda(C_{K_\infty,S,T})\delta_{T''},\] where ${\rm Fitt}^0$ denotes the (initial) Fitting ideal (see \cite{North}). Given this, it is straightforward to check that an element $\mathcal{L}_{K_\infty/k,S,T}$ validates Conjecture \ref{IMC} with respect to $T$ if and only if the element $\delta_{T''}\cdot \mathcal{L}_{K_\infty/k,S,T}$ validates Conjecture \ref{IMC} with respect to $T'$. 
\end{proof} Following Lemma \ref{independent} we shall assume in the sequel that $T$ contains two places of unequal residue characteristics and hence that each group $U_{L,S,T}$ is torsion-free. \subsubsection{} For each $\Phi$ in $\bigwedge^{r_\chi}\Hom_{\ZZ_p[\G_{\chi,n}]}(U_{L_{\chi,n},S,T}, \ZZ_p[\G_{\chi,n}])$, Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ implies only that $\Phi(\epsilon_{L_{\chi,n}/k,S,T}^{V_\chi})$ belongs to $\ZZ_p[\G_{\chi,n}]$. By contrast, if Conjecture \ref{IMC} is valid, then the following result shows that the elements $\Phi(\epsilon_{L_{\chi,n}/k,S,T}^{V_\chi})$ encode significant arithmetic information. In this result we write $\Fitt^a$ for the $a$-th Fitting ideal (see \cite{North}). \begin{theorem}\label{imcrs} Assume that the Iwasawa main conjecture (Conjecture \ref{IMC}) is valid for $(K_\infty/k,S,T)$. Then, for each $\chi \in \widehat \Delta$ and each positive integer $n$, we have $$\Fitt_{\ZZ_p[\G_{\chi,n}]}^{r_\chi}(H^1(C_{L_{\chi,n},S,T}))= \{ \Phi(\epsilon_{L_{\chi,n}/k,S,T}^{V_\chi}) \mid \Phi \in \bigwedge^{r_\chi}\Hom_{\ZZ_p[\G_{\chi,n}]}(U_{L_{\chi,n},S,T}, \ZZ_p[\G_{\chi,n}])\}.$$ In particular, Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ is valid. \end{theorem} \begin{proof} The explicit definition of the elements $\epsilon_{L_{\chi,n}/k,S,T}^{V_\chi}$ implies directly that the assertion of Conjecture \ref{IMC} is valid if and only if there is a $\Lambda$-basis $\mathcal{L}_{K_\infty/k,S,T}$ of ${\det}_\Lambda(C_{K_\infty,S,T})$ for which, for every character $\chi \in \widehat \Delta$ and every positive integer $n$, the image of $\mathcal{L}_{K_\infty/k,S,T}$ under the map \begin{eqnarray} {\det}_{\Lambda}(C_{K_\infty,S,T}) \to {\det}_{\ZZ_p[\G_{\chi,n}]}(C_{L_{\chi,n},S,T}) \stackrel{\pi_{L_{\chi,n}/k,S,T}^{V_\chi}}{\to} e_{r_\chi}\CC_p\bigwedge^{r_\chi}U_{L_{\chi,n},S,T} \nonumber \end{eqnarray} is equal to $\epsilon_{L_{\chi,n}/k,S,T}^{V_\chi}$. 
Given this equivalence, the claimed result follows directly from Proposition \ref{explicit projector}(iii) and the same argument used to prove \cite[Theorem 7.5]{bks1}. \end{proof} \subsubsection{} For each character $\chi \in \widehat \Delta$, there is a natural ring homomorphism $$\ZZ_p[[\G_\chi]] = \ZZ_p[[G_\chi \times \Gamma]] \stackrel{\chi}{\to} \ZZ_p[\im \chi][[\Gamma]] =\Lambda_\chi \subset Q(\Lambda_\chi).$$ In the sequel we use this homomorphism to regard $Q(\Lambda_\chi)$ as a $\ZZ_p[[\G_\chi]]$-algebra. In the next result we describe an important connection between the element $\mathcal{L}_{K_\infty/k,S,T}$ that is predicted to exist by Conjecture \ref{IMC} and the inverse limit (over $n$) of the Rubin-Stark elements $\epsilon_{L_{\chi,n}/k,S,T}^{V_\chi}$. This result shows, in particular, that the element $\mathcal{L}_{K_\infty/k,S,T}$ in Conjecture \ref{IMC} is unique (if it exists). In the sequel we set \[ \bigcap^{r_\chi} U_{L_{\chi,\infty},S,T}:= \varprojlim_n \bigcap^{r_\chi} U_{L_{\chi,n},S,T},\] where the inverse limit is taken with respect to the map $$\bigcap^{r_\chi}U_{L_{\chi,m},S,T} \to \bigcap^{r_\chi} U_{L_{\chi,n},S,T}$$ induced by the norm map $U_{L_{\chi,m},S,T} \to U_{L_{\chi,n},S,T}$, where $n \leq m$. 
Note that Rubin-Stark elements are norm compatible (see \cite[Proposition 6.1]{R} or \cite[Proposition 3.5]{sano}), so if we know that Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ is valid for all sufficiently large $n$, then we can define the element $$\epsilon_{L_{\chi,\infty}/k,S,T}^{V_\chi}:=\varprojlim_n \epsilon_{L_{\chi,n}/k,S,T}^{V_\chi} \in \bigcap^{r_\chi} U_{L_{\chi,\infty},S,T}.$$ \begin{theorem} \label{lemisom} \ \begin{itemize} \item[(i)] For each $\chi \in \widehat \Delta$, the homomorphism $${\det}_{\Lambda}(C_{K_\infty,S,T}) \to {\det}_{\ZZ_p[\G_{\chi,n}]}(C_{L_{\chi,n},S,T}) \stackrel{\pi_{L_{\chi,n}/k,S,T}^{V_\chi}}{\to} \bigcap^{r_\chi}U_{L_{\chi,n},S,T} $$ (see Proposition \ref{explicit projector}(iii)) induces an isomorphism of $Q(\Lambda_\chi)$-modules $$\pi_{L_{\chi,\infty}/k,S,T}^{V_\chi}:{\det}_{\Lambda}(C_{K_\infty,S,T}) \otimes_\Lambda Q(\Lambda_\chi) \simeq (\bigcap^{r_\chi}U_{L_{\chi,\infty},S,T})\otimes_{\ZZ_p[[\G_\chi]]}Q(\Lambda_\chi).$$ \item[(ii)] If Conjecture \ref{IMC} is valid, then we have $$\pi_{L_{\chi,\infty}/k,S,T}^{V_\chi}(\mathcal{L}_{K_\infty/k,S,T})=\epsilon_{L_{\chi,\infty}/k,S,T}^{V_\chi}.$$ (Note that in this case Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ is valid for all $n$ by Theorem \ref{imcrs}.) 
\end{itemize} \end{theorem} \begin{proof} Since the module $A_{S}^T(K_\infty)\otimes_\Lambda Q(\Lambda_\chi)$ vanishes, there are canonical isomorphisms \begin{eqnarray}\label{first step} & &{\det}_{\Lambda}(C_{K_\infty,S,T}) \otimes_\Lambda Q(\Lambda_\chi) \\ &\simeq& {\det}_{Q(\Lambda_\chi)}(C_{K_\infty,S,T}\otimes_\Lambda Q(\Lambda_\chi))\nonumber \\ &\simeq& {\det}_{Q(\Lambda_\chi)}(U_{K_\infty,S,T} \otimes_{\Lambda}Q(\Lambda_\chi)) \otimes_{Q(\Lambda_\chi)} {\det}_{Q(\Lambda_\chi)}^{-1}(\cX_{K_\infty,S}\otimes_\Lambda Q(\Lambda_\chi)).\nonumber \end{eqnarray} It is also easy to check that there are natural isomorphisms $$U_{K_\infty,S,T}\otimes_\Lambda Q(\Lambda_\chi)\simeq U_{L_{\chi,\infty},S,T} \otimes_{\ZZ_p[[\G_\chi]]}Q(\Lambda_\chi)$$ and $$\cX_{K_\infty,S}\otimes_\Lambda Q(\Lambda_\chi) \simeq \cX_{L_{\chi,\infty},S}\otimes_{\ZZ_p[[\G_\chi]]}Q(\Lambda_\chi) \simeq \cY_{L_{\chi,\infty},V_\chi}\otimes_{\ZZ_p[[\G_\chi]]}Q(\Lambda_\chi),$$ and that these are $Q(\Lambda_\chi)$-vector spaces of dimension $r:=r_\chi(=\# V_\chi)$. The isomorphism (\ref{first step}) is therefore a canonical isomorphism of the form \[{\det}_{\Lambda}(C_{K_\infty,S,T}) \otimes_\Lambda Q(\Lambda_\chi) \simeq (\bigwedge^r U_{L_{\chi,\infty},S,T}\otimes \bigwedge^r \cY^*_{L_{\chi,\infty},V_\chi})\otimes_{\ZZ_p[[\G_\chi]]} Q(\Lambda_\chi).\] Composing this isomorphism with the map induced by the non-canonical isomorphism $$\bigwedge^r \cY_{L_{\chi,\infty},V_\chi}^\ast \stackrel{\sim}{\to} \ZZ_p[[\G_\chi]]; w_1^\ast \wedge \cdots \wedge w_r^\ast \mapsto 1,$$ we have $${\det}_{\Lambda}(C_{K_\infty,S,T}) \otimes_\Lambda Q(\Lambda_\chi) \simeq (\bigwedge^rU_{L_{\chi,\infty},S,T})\otimes_{\ZZ_p[[\G_\chi]] }Q(\Lambda_\chi).$$ As in the proofs of Proposition \ref{explicit projector}(iii) and of \cite[Lemma 4.3]{bks1}, this isomorphism is induced by $\varprojlim_n \pi_{L_{\chi,n}/k,S,T}^{V_\chi}$. 
Now the isomorphism in claim (i) is thus obtained directly from Lemma \ref{technical limit} below. Claim (ii) follows by noting that the image of $\mathcal{L}_{K_\infty/k,S,T}$ under the map \begin{eqnarray} {\det}_{\Lambda}(C_{K_\infty,S,T}) \to {\det}_{\ZZ_p[\G_{\chi,n}]}(C_{L_{\chi,n},S,T}) \stackrel{\pi_{L_{\chi,n}/k,S,T}^{V_\chi}}{\to} \bigcap^{r_\chi}U_{L_{\chi,n},S,T} \nonumber \end{eqnarray} is equal to $\epsilon_{L_{\chi,n}/k,S,T}^{V_\chi}$ (see the proof of Theorem \ref{imcrs}). \end{proof} \begin{lemma}\label{technical limit} With notation as above, there is a canonical identification \[ (\bigcap^{r} U_{L_{\chi,\infty},S,T}) \otimes_{\ZZ_p[[\G_\chi]]} Q(\Lambda_\chi) = (\bigwedge^r U_{L_{\chi,\infty},S,T})\otimes_{\ZZ_p[[\G_\chi]]} Q(\Lambda_\chi).\]\end{lemma} \begin{proof} Take a representative of $C_{L_{\chi,\infty},S,T}$ $$\Pi_{\infty} \to \Pi_{\infty}$$ as in \S \ref{section explicit}. Put $\Pi_n:=\Pi_{\infty}\otimes_{\ZZ_p[[\G_\chi]]} \ZZ_p[\G_{\chi,n}]$. We have $$\bigcap^r U_{L_{\chi,n},S,T}=(\QQ_p \bigwedge^r U_{L_{\chi,n},S,T} ) \cap \bigwedge^r \Pi_{n}$$ (see Proposition \ref{explicit projector}(i)) and so $\varprojlim_n {\bigcap}_{\ZZ_p[\G_{\chi,n}]}^r U_{L_{\chi,n},S,T}$ can be regarded as a submodule of the free $\ZZ_p[[\G_\chi]]$-module \[ \varprojlim_n \bigwedge^r\Pi_{n}= \bigwedge^r \Pi_{\infty}.\] For simplicity, we set \begin{itemize} \item $G_n:=\G_{\chi,n}$, \item $G:=\G_\chi$, \item $U_n:=U_{L_{\chi,n},S,T}$, \item $U_\infty:= U_{L_{\chi,\infty},S,T}$, \item $Q:=Q(\Lambda_\chi)$. \end{itemize} We show the equality $$((\varprojlim_n \QQ_p \bigwedge^r U_n)\cap \bigwedge^r \Pi_\infty)\otimes_{\ZZ_p[[G]]}Q = (\bigwedge^r U_\infty)\otimes_{\ZZ_p[[G]]} Q$$ of the submodules of $(\bigwedge^r \Pi_\infty)\otimes_{\ZZ_p[[G]]} Q$. 
It is easy to see that $$(\bigwedge^r U_\infty)\otimes_{\ZZ_p[[G]]} Q \subset ((\varprojlim_n \QQ_p \bigwedge^r U_n)\cap \bigwedge^r \Pi_\infty)\otimes_{\ZZ_p[[G]]}Q.$$ Conversely, take $a \in (\varprojlim_n \QQ_p \bigwedge^r U_n)\cap \bigwedge^r \Pi_\infty$ and set $$M_n:=\coker (U_n \to \Pi_n).$$ Then we have $$\varprojlim_n M_n \simeq \coker (U_\infty \to \Pi_\infty)=:M_\infty.$$ Since $$\Pi_\infty \otimes_{\ZZ_p[[G]]}Q \simeq (U_\infty\otimes_{\ZZ_p[[G]]}Q)\oplus (M_\infty\otimes_{\ZZ_p[[G]]}Q),$$ we have the decomposition $$(\bigwedge^r \Pi_\infty)\otimes_{\ZZ_p[[G]]} Q \simeq \bigoplus_{i=0}^r ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty )\otimes_{\ZZ_p[[G]]}Q.$$ Write $$a=(a_i)_i \in \bigoplus_{i=0}^r ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty )\otimes_{\ZZ_p[[G]]}Q.$$ It is sufficient to show that $a_i=0$ for all $i>0$. We may assume that $$a_i \in \im ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty \to ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty )\otimes_{\ZZ_p[[G]]}Q )$$ for every $i$. Since $a \in \bigwedge^r \Pi_\infty$, we can also write $$a=(a_{(n)})_n \in \varprojlim_n \bigwedge^r \Pi_n.$$ For each $n$, we have a decomposition $$\QQ_p \bigwedge^r \Pi_n \simeq \bigoplus_{i=0}^r ( \QQ_p \bigwedge^{r-i}U_n \otimes_{\QQ_p[G_n]} \QQ_p \bigwedge^i M_n ),$$ and we write $$a_{(n)}=(a_{(n),i})_i \in\bigoplus_{i=0}^r ( \QQ_p \bigwedge^{r-i}U_n \otimes_{\QQ_p[G_n]} \QQ_p \bigwedge^i M_n ).$$ Since $a \in \varprojlim_n \QQ_p \bigwedge^r U_n$, we must have $a_{(n),i}=0$ for all $i>0$. To prove $a_i=0$ for all $i>0$, it is sufficient to show that the natural map \begin{multline} \label{injective map} \im ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty \to ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty )\otimes_{\ZZ_p[[G]]}Q ) \\ \to \varprojlim_n ( \QQ_p \bigwedge^{r-i}U_n \otimes_{\QQ_p[G_n]} \QQ_p \bigwedge^i M_n ) \end{multline} is injective.
Note that $M_\infty$ is isomorphic to a submodule of $\Pi_\infty$, since $M_\infty \simeq \ker (\Pi_\infty \to H^1(C_{L_{\chi,\infty},S,T}))$. Hence both $U_\infty$ and $M_\infty$ are embedded in $\Pi_\infty$, and we have \begin{multline*} \ker( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty \to ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty )\otimes_{\ZZ_p[[G]]}Q )\\ = \ker(\bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty \stackrel{\alpha}{\to} (\bigwedge^r(\Pi_\infty\oplus\Pi_\infty))\otimes_{\ZZ_p[[G]]} \Lambda_\chi). \end{multline*} Set $\Lambda_{\chi,n}:=\ZZ_p[\im \chi][\Gamma_{\chi,n}]$. The commutative diagram $$\xymatrix{ \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty \ar[r]^{\alpha} \ar[d]_{\beta} & (\bigwedge^r(\Pi_\infty\oplus\Pi_\infty))\otimes_{\ZZ_p[[G]]} \Lambda_\chi \ar[d]^{f} \\ \varprojlim_n \QQ_p((\bigwedge^{r-i} U_n \otimes \bigwedge^i M_n)\otimes_{\ZZ_p[G_n]}\Lambda_{\chi,n}) \ar[r]_{g}& \varprojlim_n \QQ_p((\bigwedge^r(\Pi_n\oplus\Pi_n))\otimes_{\ZZ_p[G_n]}\Lambda_{\chi,n}) } $$ and the injectivity of $f$ and $g$ implies $\ker \alpha =\ker \beta$. Hence we have \begin{eqnarray*} &&\ker( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty \to ( \bigwedge^{r-i}U_\infty \otimes \bigwedge^i M_\infty )\otimes_{\ZZ_p[[G]]}Q )\\ &=&\ker \alpha \\ &=& \ker \beta. \end{eqnarray*} This shows the injectivity of (\ref{injective map}). \end{proof} By Theorem \ref{lemisom}, we can formulate the following conjecture, which is equivalent to Conjecture \ref{IMC} under the assumption that Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ is valid for all $\chi \in \widehat \Delta$ and $n$. \begin{conjecture} \label{IMC rubinstark} Assume that Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ is valid for all $\chi \in \widehat \Delta$ and $n$. 
Define $\mathcal{L}_{K_\infty/k,S,T}\in {\det}_\Lambda(C_{K_\infty,S,T})\otimes_\Lambda Q(\Lambda)$ by \begin{eqnarray} \mathcal{L}_{K_\infty/k,S,T}&:=&(\pi_{L_{\chi,\infty}/k,S,T}^{V_\chi,-1}(\epsilon_{L_{\chi,\infty}/k,S,T}^{V_\chi}))_\chi \nonumber \\ & \in& \bigoplus_{\chi \in {\widehat \Delta}/{\sim}_{\QQ_p}}({\det}_\Lambda(C_{K_\infty,S,T})\otimes_\Lambda Q(\Lambda_\chi)) \nonumber \\ &=&{\det}_\Lambda(C_{K_\infty,S,T})\otimes_\Lambda Q(\Lambda). \nonumber \end{eqnarray} Then, we have $$\Lambda\cdot \mathcal{L}_{K_\infty/k,S,T} = {\det}_\Lambda(C_{K_\infty,S,T}).$$ \end{conjecture} \subsection{Iwasawa main conjecture II} In this section we reinterpret Conjecture \ref{IMC} in terms of the existence of suitable Iwasawa-theoretic measures. To do this we assume to be given, for each $\chi$ in $\widehat \Delta/{\sim}_{\QQ_p}$, a homomorphism of $\ZZ_p[[\mathcal{G}_\chi]]$-modules \[ \varphi_\chi: \bigwedge^{r_\chi}\cX_{L_{\chi,\infty},S} \to \bigcap^{r_\chi} U_{L_{\chi,\infty},S,T}\] for which $\ker(\varphi_\chi)$ is a torsion $\ZZ_p[[\mathcal{G}_\chi]]$-module. The Rubin-Stark Conjecture implies the existence of a canonical such homomorphism $\varphi_\chi$. Indeed, if we assume Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ for all $n$, then we can define a homomorphism $$\bigwedge^{r_\chi}\cX_{L_{\chi,\infty},S} \to \bigwedge^{r_\chi}\cY_{L_{\chi,\infty},V_\chi} \to \bigcap^{r_\chi}U_{L_{\chi,\infty},S,T},$$ where the first map is the natural surjection, and the second is given by $$w_1\wedge\cdots \wedge w_{r_\chi} \mapsto \epsilon_{L_{\chi,\infty}/k,S,T}^{V_\chi}.$$ Using Lemma \ref{technical limit}, one sees that the kernel of this homomorphism is torsion, and that this homomorphism is canonical. For each character $\psi\in \widehat{\mathcal{G}_\chi}$ this homomorphism induces, upon taking coinvariants, a homomorphism of $\ZZ_p[G_\psi]$-modules \[\varphi_{(\psi)}: \bigwedge^{r_\chi} \cX_{L_\psi,S} \to \bigcap^{r_\chi}U_{L_{\psi},S,T}.
\] Consider the endomorphism $$e_\psi\CC_p \bigwedge^{r_\chi}\cX_{L_\psi,S} \stackrel{\varphi_{(\psi)}}{\to} e_\psi\CC_p \bigwedge^{r_\chi}U_{L_\psi,S,T} \stackrel{\lambda_{L_\psi,S}^{-1}}{\to} e_\psi\CC_p\bigwedge^{r_\chi}\cX_{L_\psi,S}.$$ We denote the determinant of this endomorphism by $\mathcal{L}_\varphi(\psi)$. In addition, since each $\Lambda$-module $\ker(\varphi_\chi)$ is torsion, the collection $\varphi = (\varphi_\chi)_\chi$ combines with the canonical isomorphisms (\ref{first step}) and the result of Lemma \ref{technical limit} to give a composite isomorphism of $Q(\Lambda)$-modules \[ \mu_\varphi: {\det}_{\Lambda}(C_{K_\infty,S,T}) \otimes_\Lambda Q(\Lambda) \simeq Q(\Lambda).\] For each $\psi$ in $\widehat{\mathcal{G}}$ we write $\mathfrak{q}_\psi$ for the kernel of the ring homomorphism $\psi: \Lambda \to \CC_p$. This is a height one prime ideal of $\Lambda$ and for any element $\lambda$ of the localization $\Lambda_{\mathfrak{q}_\psi}$ there exists a non-zero divisor $\lambda'$ of $\Lambda$ for which both $\lambda'\lambda\in \Lambda$ and $\psi(\lambda')\not= 0$. In particular, in any such case the value of the quotient $\psi(\lambda'\lambda)/\psi(\lambda')$ is independent of the choice of $\lambda'$ and will be denoted in the sequel by $\int\!\psi\,{\rm d}\lambda$. \begin{conjecture}\label{measure conjecture} For any collection $\varphi = (\varphi_\chi)_\chi$ as above there exists a $\Lambda$-basis $\lambda_\varphi\in Q(\Lambda)$ of $\mu_\varphi({\rm det}_{\Lambda}(C_{K^\infty,S,T}))$ such that for every $\chi\in \widehat \Delta/{\sim}_{\QQ_p}$ and every $\psi\in \widehat{\mathcal{G}_\chi}$ for which both $r_{\psi,S} = r_\chi$ and $\mathcal{L}_\varphi(\psi)\not= 0$ one has $\lambda_\varphi\in \Lambda_{\mathfrak{q}_\psi}$ and \begin{equation}\label{integral} \int\! 
\psi{\rm d}\lambda_\varphi = \mathcal{L}_\varphi(\psi)\cdot L^{(r_\chi)}_{k,S,T}(\psi^{-1},0).\end{equation} \end{conjecture} \begin{proposition}\label{measureIMC} Conjecture \ref{IMC} is equivalent to Conjecture \ref{measure conjecture}. \end{proposition} \begin{proof} Fix a $\Lambda$-basis $\mathcal{L}$ of ${\rm det}_{\Lambda}(C_{K^\infty,S,T})$. Then it is enough to prove that this element satisfies the interpolation conditions of Conjecture \ref{IMC} if and only if for any choice of data $\varphi$ as above, the element $\lambda_\varphi := \mu_\varphi(\mathcal{L})$ belongs to $\Lambda_{\mathfrak{q}_\psi}$ and satisfies the interpolation property (\ref{integral}). Set $r := r_\chi$ and $V:=V_\chi$. Then it is enough for us to fix a character $\psi\in \widehat{\G_\chi}$ for which $r_{\psi,S} = r$ and to show both that there exists a homomorphism $\varphi_\chi$ for which the map $$e_\psi\CC_p \bigwedge^r\cX_{L_\psi,S} \stackrel{\varphi_{(\psi)}}{\to} e_\psi \CC_p \bigwedge^r U_{L_\psi,S,T}$$ is injective (and hence $\mathcal{L}_\varphi(\psi)\not= 0$) and also that for any such $\varphi_\chi$ there exists a commutative diagram of the form \begin{equation}\label{key CD}\begin{CD} {\rm det}_{\Lambda}(C_{K^\infty,S,T}) @> \mu_\varphi >> \mu_\varphi({\rm det}_{\Lambda}(C_{K^\infty,S,T}))\\ @V \lambda_\psi VV @VV {x\mapsto \int\! \psi {\rm d}x} V \\ \CC_p @> \times \mathcal{L}_\varphi(\psi) >> \CC_p.\end{CD}\end{equation} Set $\mathfrak{q}:= \mathfrak{q}_\psi$. Then, since $\mathfrak{q}$ is a height one prime ideal of $\Lambda$ which does not contain $p$, the localization $\Lambda_\mathfrak{q}$ is a discrete valuation ring.
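To illustrate this standard fact in the simplest possible case (the hypotheses of this example are assumptions made only for orientation, and play no role in the argument): if $\mathcal{G}=\Gamma$, then a choice of topological generator $\gamma$ of $\Gamma$ identifies $\Lambda$ with the power series ring $\ZZ_p[[t]]$ via $\gamma \mapsto 1+t$, and for $\psi$ of finite order one then has $$\mathfrak{q}_\psi=(g_\psi(t)),$$ where $g_\psi$ is the minimal polynomial over $\QQ_p$ of $\psi(\gamma)-1$, an irreducible distinguished polynomial (for example, $g_\psi(t)=t$ for trivial $\psi$). The localization of the unique factorization domain $\ZZ_p[[t]]$ at such a height one prime is then visibly a discrete valuation ring.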
In addition, $\psi$ induces isomorphisms $\Lambda/\mathfrak{q} \simeq \ZZ_p[\im \psi]$ and $\Lambda_\mathfrak{q}/\mathfrak{q}\Lambda_\mathfrak{q} \simeq \QQ_p(\psi):=\QQ_p(\im \psi)$ and, if for each $\ZZ_p[G_\psi]$-module $M$ we set $M_\psi := e_\psi(\QQ_p(\psi)\otimes_{\ZZ_p} M)$, then we have an isomorphism of $\QQ_p(\psi)$-vector spaces \begin{multline}\label{key iso0} H^1(C_{L_{\chi,\infty},S,T})_\mathfrak{q}/\mathfrak{q}\Lambda_\mathfrak{q} = \QQ_p\otimes_{\ZZ_p}H^1(C_{L_{\chi,\infty},S,T})/\mathfrak{q} \simeq H^1(C_{L_\psi,S,T})_\psi \\ = (\cX_{L_\psi,S})_\psi = (\cY_{L_\psi,V})_\psi.\end{multline} Here the second equality follows from the exact sequence \begin{equation}\label{loc exact} 0 \to A_{S}^T(L_{\chi,\infty}) \to H^1(C_{L_{\chi,\infty},S,T}) \stackrel{\pi}{\to} \cX_{L_{\chi, \infty},S}\to 0 \end{equation} and the third from the assumption $r_{\psi,S} = r$. Since $\cY_{L_{\chi,\infty},V,\mathfrak{q}}$ is a free $\Lambda_\mathfrak{q}$-module there is a direct sum decomposition $\cX_{L_{\chi, \infty},S,\mathfrak{q}} = \cX_{L_{\chi, \infty},S\setminus V,\mathfrak{q}}\oplus \cY_{L_{\chi,\infty},V,\mathfrak{q}}$. Thus, since the composite isomorphism (\ref{key iso0}) factors through the map $\pi$ in (\ref{loc exact}), the module $\cX_{L_{\chi, \infty},S\setminus V,\mathfrak{q}}/\mathfrak{q}\Lambda_\mathfrak{q}$ vanishes, and hence also (by Nakayama's Lemma) the module $\cX_{L_{\chi, \infty},S\setminus V,\mathfrak{q}}$ vanishes. The $\Lambda_\mathfrak{q}$-module $\cX_{L_{\chi,\infty},S,\mathfrak{q}} = \cY_{L_{\chi,\infty},V,\mathfrak{q}}$ is therefore free of rank $r$ and, given this, the $\mathfrak{q}$-localisation of the tautological exact sequence \begin{eqnarray} 0 \to U_{L_{\chi,\infty},S,T} \to \Pi_{\infty} \to \Pi_{\infty} \to H^1(C_{L_{\chi,\infty},S,T})\to 0 \nonumber \end{eqnarray} implies $U_{L_{\chi,\infty},S,T,\mathfrak{q}}$ is also a free $\Lambda_\mathfrak{q}$-module of rank $r$. 
In particular, since the $\Lambda_\mathfrak{q}$-modules $\cX_{L_{\chi,\infty},S,\mathfrak{q}}$ and $U_{L_{\chi,\infty},S,T,\mathfrak{q}}$ are isomorphic we may choose a homomorphism of $\Lambda$-modules $\varphi'_\chi: \cX_{L_{\chi,\infty},S} \to U_{L_{\chi,\infty},S,T}$ with the property that $\ker(\varphi'_\chi)_\mathfrak{q}$ vanishes. It is then easily checked that the $r$-th exterior power of $\varphi'_\chi$ induces a homomorphism of the required sort $\varphi_\chi$ for which the induced map $$e_\psi\CC_p \bigwedge^r\cX_{L_\psi,S} \stackrel{\varphi_{(\psi)}}{\to} e_\psi \CC_p \bigwedge^r U_{L_\psi,S,T}$$ is injective. To prove the existence of a commutative diagram (\ref{key CD}) we note first that, since the $\Lambda_\mathfrak{q}$-module $\cX_{L_{\chi,\infty},S,\mathfrak{q}}=\cY_{L_{\chi,\infty},V,\mathfrak{q}}$ is free, the exact sequence (\ref{loc exact}) splits and so the isomorphism (\ref{key iso0}) combines with Nakayama's Lemma to imply $A_{S}^T(L_{\chi,\infty})_\mathfrak{q}$ vanishes. It follows that $H^0(C_{L_{\chi,\infty},S,T})_\mathfrak{q} = U_{L_{\chi,\infty},S,T,\mathfrak{q}}$ and $H^1(C_{L_{\chi,\infty},S,T})_\mathfrak{q} = \cX_{L_{\chi, \infty},S,\mathfrak{q}}$ are both free $\Lambda_\mathfrak{q}$-modules of rank $r$. 
This gives a canonical isomorphism of $\Lambda_\mathfrak{q}$-modules \[ {\rm det}_{\Lambda}(C_{K^\infty,S,T})_\mathfrak{q} \simeq (\bigwedge^{r}_{\Lambda_\mathfrak{q}}U_{L_{\chi,\infty},S,T,\mathfrak{q}})\otimes_{\Lambda_\mathfrak{q}} (\bigwedge^{r}_{\Lambda_\mathfrak{q}}\cX_{L_{\chi,\infty},S,\mathfrak{q}})^*\] and by combining this isomorphism with the natural projection map \[ (\bigwedge^{r}_{\Lambda_\mathfrak{q}}U_{L_{\chi,\infty},S,T,\mathfrak{q}})\otimes_{\Lambda_\mathfrak{q}} (\bigwedge^{r}_{\Lambda_\mathfrak{q}}\cX_{L_{\chi,\infty},S,\mathfrak{q}})^* \to (\bigwedge^{r}_{\QQ_p(\psi)}(U_{L_{\psi},S,T})_\psi) \otimes_{\QQ_p(\psi)} (\bigwedge^{r}_{\QQ_p(\psi)}(\cX_{L_\psi,S})_\psi)^*\] we obtain the horizontal arrow in the following diagram. \begin{equation*} \xymatrix{& & \CC_p \\ \\ {\rm det}_{\Lambda}(C_{K^\infty,S,T}) \ar[r] \ar[rruu]^{(x\mapsto \int \! \psi{\rm d}x)\circ\mu_\varphi\,\,} \ar[rrdd]_{\lambda_\psi} & ({\bigwedge}^{r}_{\QQ_p(\psi)}(U_{L_{\psi},S,T})_\psi)\otimes_{\QQ_p(\psi)}({\bigwedge}^{r}_{\QQ_p(\psi)}(\cX_{L_\psi,S})_\psi)^* \ar[ruu]_{\theta_1} \ar[rdd]^{\theta_2}\\ \\ & & \CC_p\ar[uuuu]_{\times\mathcal{L}_\varphi(\psi)}} \end{equation*} Here $\theta_1$ and $\theta_2$ are the maps induced by $\varphi_{(\psi)}^{-1}$ and $\lambda_{L_\psi,S}$ and the respective evaluation maps. The commutativity of the right-hand triangle is then clear, and the commutativity of the two remaining triangles follows by an explicit comparison of the definitions of the maps involved.
Since the whole diagram commutes this then gives a commutative diagram of the form (\ref{key CD}), as required.\end{proof} \subsection{Iwasawa main conjecture III}\label{IMC3} In this subsection, we work under the following simplifying assumptions: $$ (\ast) \begin{cases} p \text{ is odd}, \\ \text{for every $\chi \in \widehat \Delta$, $V_\chi$ contains no finite places.} \end{cases} $$ We note that the second assumption here is satisfied whenever $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension. \subsubsection{}We start by quickly reviewing some basic facts concerning the height one prime ideals of $\Lambda$. We say that a height one prime ideal $\frp$ of $\Lambda$ is `regular' (resp. `singular') if one has $p \notin \frp$ (resp. $p \in \frp$). We will often abbreviate `height one regular (resp. singular) prime ideal' to `regular (resp. singular) prime'. If $\frp$ is regular, then $\Lambda_\frp$ is identified with the localization of $\Lambda[1/p]$ at $\frp \Lambda[1/p]$. Since we have the decomposition \begin{eqnarray} \Lambda\left[\frac1p\right]=\bigoplus_{\chi \in\widehat \Delta/{\sim}_{\QQ_p}} \Lambda_\chi\left[\frac 1p\right],\label{decomposition} \end{eqnarray} we see that $\Lambda_\frp$ is equal to the localization of some $\Lambda_\chi[1/p]$ at $\frp \Lambda_\chi[1/p]$. This shows that $Q(\Lambda_\frp)=Q(\Lambda_\chi)$. This $\chi \in {\widehat \Delta}/{\sim}_{\QQ_p}$ is uniquely determined by $\frp$, so we denote it by $\chi_\frp$. Since $\Lambda_\chi[1/p]$ is a regular local ring, we also see that $\Lambda_\frp$ is a one-dimensional regular local ring i.e. discrete valuation ring. Next, suppose that $\frp$ is a singular prime. We have the decomposition $$\Lambda=\bigoplus_{\chi \in \widehat \Delta'/{\sim}_{\QQ_p}} \ZZ_p[\im \chi][\Delta_p][[\Gamma]],$$ where $\Delta_p$ is the Sylow $p$-subgroup of $\Delta$, and $\Delta'$ is the unique subgroup of $\Delta$ which is isomorphic to $\Delta/\Delta_p$. 
From this, we see that $\Lambda_\frp$ is identified with the localization of some $\ZZ_p[\im \chi][\Delta_p][[\Gamma]]$ at $\frp \ZZ_p[\im \chi][\Delta_p][[\Gamma]]$. By \cite[Lemma 6.2(i)]{bg}, we have $$\frp \ZZ_p[\im \chi][\Delta_p][[\Gamma]]=(\sqrt{p\ZZ_p[\im\chi][\Delta_p]}),$$ where we denote the radical of an ideal $I$ by $\sqrt{I}$. This shows that there is a one-to-one correspondence between the set of all singular primes of $\Lambda$ and the set $\widehat \Delta'/{\sim}_{\QQ_p}$. We denote by $\chi_\frp \in \widehat \Delta'/{\sim}_{\QQ_p}$ the character corresponding to $\frp$. The next lemma shows that $$Q(\Lambda_\frp)=\bigoplus_{\chi \in \widehat \Delta/{\sim}_{\QQ_p}, \chi \mid_{\Delta'}=\chi_\frp} Q(\Lambda_\chi). $$ \begin{lemma} \label{lemmasingular} Let $E/\QQ_p$ be a finite unramified extension, and $\cO$ be its ring of integers. Let $P$ be a finite abelian group whose order is a power of $p$. Put $\Lambda:=\cO[P][[\Gamma]]$ and $\frp:=\sqrt{p \cO[P]}\Lambda$. ($\frp$ is the unique singular prime of $\Lambda$.) Then we have $$Q(\Lambda_\frp)=Q(\Lambda)=\bigoplus_{\chi \in \widehat P/{\sim}_{E}} Q(\cO[\im \chi][[\Gamma]]).$$ \end{lemma} \begin{proof} Note that $p$ is not a zero divisor of $\Lambda$, so we have $$Q(\Lambda_\frp)=Q\left(\Lambda_\frp \left[\frac1p\right]\right).$$ We have the decomposition $$\Lambda_\frp \left[\frac1p\right]=\bigoplus_{\chi \in \widehat P/{\sim}_{E}}e_\chi \Lambda_\frp \left[\frac1p\right],$$ where $e_\chi:=\sum_{\chi' \sim_E \chi}e_{\chi'}$. It is easy to see that each $e_\chi \Lambda_\frp [1/p]$ is a domain. Therefore we have $$Q\left(\Lambda_\frp \left[\frac1p\right]\right)=\bigoplus_{\chi \in \widehat P/{\sim}_{E} }Q\left( e_\chi \Lambda_\frp \left[\frac1p\right]\right).$$ For $\chi \in \widehat P/{\sim}_E$, put $\mathfrak{q}_\chi:=\ker(\Lambda \stackrel{\chi}{\to} \cO[\im \chi][[\Gamma]])$. Note that $\sqrt{p \cO[P]}=(p, I_\cO(P))$, where $I_\cO(P)$ is the kernel of the augmentation map $\cO[P] \to \cO$. 
This can be shown as follows. Note that any prime ideal of $\cO/p\cO[P]$ is the kernel of some surjection $f: \cO/p\cO[P]\to R$ with some finite domain $R$. It is well-known that every finite domain is a field, so we must have $R\simeq \cO/p\cO$, and $f$ is the augmentation map $\cO/p\cO[P] \to \cO/p\cO\simeq R$. This shows that $\ker f$ is the unique prime ideal of $\cO/p\cO[P]$. Hence we have $\sqrt{p \cO[P]}=(p, I_\cO(P))$. From this, we also see that $$\sqrt{p \cO[P]}=\ker(\cO[P] \stackrel{\chi}{\to} \cO[\im \chi] \to \cO[\im \chi]/\pi_\chi\cO[\im\chi]\simeq \cO/p\cO)$$ holds for any $\chi \in \widehat P/{\sim}_E$, where $\pi_\chi \in \cO[\im \chi]$ is a uniformizer. This shows that $\mathfrak{q}_\chi \subset \frp$. Hence, we know that $\Lambda_{\mathfrak{q}_\chi}$ is the localization of $\Lambda_{\mathfrak{p}}[1/p]$ at $\mathfrak{q}_\chi \Lambda_{\frp}[1/p]$. One can check that $\Lambda_{\mathfrak{q}_\chi}=Q(e_\chi \Lambda_\frp[1/p])$. Since we have $\Lambda_{\mathfrak{q}_\chi}=Q(\cO[\im \chi][[\Gamma]])$, the lemma follows. \end{proof} For a height one prime ideal $\frp$ of $\Lambda$, define a subset $\Upsilon_\frp \subset \widehat \Delta/{\sim}_{\QQ_p}$ by $$\Upsilon_\frp:=\begin{cases} \{ \chi_\frp \} &\text{ if $\frp$ is regular,}\\ \{ \chi \in \widehat \Delta/{\sim}_{\QQ_p} \mid \chi\mid_{\Delta'}=\chi_\frp \} &\text{ if $\frp$ is singular.} \end{cases} $$ The above argument shows that $$Q(\Lambda_\frp)=\bigoplus_{\chi \in \Upsilon_\frp}Q(\Lambda_\chi).$$ To end this section we recall a useful result concerning $\mu$-invariants. \begin{lemma} \label{lemmamu} Let $M$ be a finitely generated torsion $\Lambda$-module. Let $\frp$ be a singular prime of $\Lambda$. Then the following are equivalent: \begin{itemize} \item[(i)] The $\mu$-invariant of the $\ZZ_p[[\Gamma]]$-module $e_{\chi_\frp} M$ vanishes. 
\item[(ii)] For any $\chi \in \Upsilon_\frp$, the $\mu$-invariant of the $\ZZ_p[\im \chi][[\Gamma]]$-module $M\otimes_{\ZZ_p[\Delta']} \ZZ_p[\im \chi]$ vanishes. \item[(iii)] $M_\frp=0$. \end{itemize} \end{lemma} \begin{proof} See \cite[Lemma 5.6]{flachsurvey}. \end{proof} \subsubsection{}In the rest of this subsection we assume the condition $(\ast)$. \begin{lemma} Let $\frp$ be a singular prime of $\Lambda$. Then $V_\chi$ is independent of $\chi \in \Upsilon_\frp$. In particular, for any $\chi \in \Upsilon_\frp$, the $Q(\Lambda_\frp)$-module $U_{K_\infty,S,T}\otimes_\Lambda Q(\Lambda_\frp)$ is free of rank $r_\chi$. \end{lemma} \begin{proof} It is sufficient to show that $V_\chi=V_{\chi_\frp}$ for any $\chi \in \Upsilon_\frp$. Note that the extension degree $[L_{\chi,\infty}:L_{\chi_\frp,\infty}]=[L_\chi:L_{\chi_\frp}]$ is a power of $p$. Since $p$ is odd by the assumption $(\ast)$, we see that an infinite place of $k$ which splits completely in $L_{\chi_\frp,\infty}$ also splits completely in $L_{\chi,\infty}$. By the assumption $(\ast)$, we know that every place in $V_{\chi_\frp}$ is infinite. Hence we have $V_\chi=V_{\chi_\frp}$. \end{proof} The above result motivates us, for any height one prime ideal $\frp$ of $\Lambda$, to define $V_\frp:=V_\chi$ and $r_\frp:=r_\chi$ by choosing some $\chi \in \Upsilon_\frp$. Assume that Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ holds for all $\chi \in \widehat \Delta$ and $n$.
We then define the `$\frp$-part' of the Rubin-Stark element $$\epsilon_{K_\infty/k,S,T}^\frp \in (\bigwedge^{r_\frp} U_{K_\infty,S,T})\otimes_\Lambda Q(\Lambda_\frp)$$ as the image of $$(\epsilon_{L_{\chi,\infty}/k,S,T}^{V_\chi})_{\chi \in \Upsilon_\frp} \in \bigoplus_{\chi \in \Upsilon_\frp} \bigcap^{r_\frp }U_{L_{\chi,\infty},S,T}$$ under the natural map $$\bigoplus_{\chi \in \Upsilon_\frp} \bigcap^{r_\frp }U_{L_{\chi,\infty},S,T} \to \bigoplus_{\chi \in \Upsilon_\frp} (\bigcap^{r_\frp }U_{L_{\chi,\infty},S,T})\otimes_{\ZZ_p[[\G_\chi]]}Q(\Lambda_\chi) = (\bigwedge^{r_\frp} U_{K_\infty,S,T})\otimes_\Lambda Q(\Lambda_\frp).$$ (see Lemma \ref{technical limit}.) We can now formulate a much more explicit main conjecture. \begin{conjecture} \label{IMCexplicit} If condition ($*$) is valid, then for every height one prime ideal $\frp$ of $\Lambda$ there is an equality $$\Lambda_\frp\cdot\epsilon^\frp_{K_{\infty}/k,S,T}={\rm Fitt}_\Lambda^0(A_S^T(K_\infty)){\rm Fitt}_\Lambda^0(\cX_{K_\infty,S\setminus V_\frp})\cdot ({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T} )_\frp.$$ \end{conjecture} \begin{remark} \label{remarkIMC} At every height one prime ideal $\frp$ there is an equality $${\rm Fitt}_\Lambda^0(A_S^T(K_\infty))_\frp {\rm Fitt}_\Lambda^0(\cX_{K_\infty,S\setminus V_\frp})_\frp={\rm Fitt}_\Lambda^{r_\frp}(H^1(C_{K_\infty,S,T}))_\frp.$$ If $\frp$ is regular, then $\Lambda_\frp$ is a discrete valuation ring and this equality follows directly from the exact sequence $$0 \to A_S^T(K_\infty) \to H^1(C_{K_\infty,S,T}) \to \cX_{K_\infty,S}\to 0.$$ If $\frp$ is singular, then the equality is valid since the result of Lemma \ref{lemmamu} implies $(\cX_{K_\infty,S\setminus V_\frp})_\frp$ vanishes and so $H^1(C_{K_\infty,S,T})_\frp$ is isomorphic to the direct sum $A_S^T(K_\infty)_\frp \oplus (\cY_{K_\infty,V_\frp})_\frp$. 
Conjecture \ref{IMCexplicit} is thus valid if and only if for every height one prime $\frp$ one has $$\Lambda_\frp\cdot\epsilon^\frp_{K_{\infty}/k,S,T}={\rm Fitt}_\Lambda^{r_\frp}(H^1(C_{K_\infty,S,T}))\cdot ({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T} )_\frp .$$ \end{remark} \begin{remark} \label{remarkIMCsingular} If the prime $\frp$ is singular, then $(\cX_{K_\infty,S\setminus V_\frp})_\frp$ vanishes and one has ${\rm Fitt}_\Lambda^0(A_S^T(K_\infty))_\frp = \Lambda_\frp$ if and only if the $\mu$-invariant of the $\ZZ_p[[\Gamma]]$-module $ e_{\chi_\frp} A_{S}^T(K_\infty)$ vanishes (see Lemma \ref{lemmamu}). Thus, for any such $\frp$ Conjecture \ref{IMCexplicit} implies that the $\mu$-invariant of $ e_{\chi_\frp} A_{S}^T(K_\infty)$ vanishes if and only if one has \begin{equation}\label{easy singular} \Lambda_\frp\cdot\epsilon^\frp_{K_{\infty}/k,S,T}=({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T} )_\frp.\end{equation} In a similar way, one finds that the vanishing of the $\mu$-invariant of $e_{\chi_\frp} A_{S}^T(K_\infty)$ implies that Conjecture \ref{IMCexplicit} for $\frp$ is itself equivalent to the equality (\ref{easy singular}). \end{remark} \begin{remark}\label{noT version} For every height one prime ideal $\frp$ of $\Lambda$, put $\epsilon_{K_\infty/k,S}^\frp:=\delta_T^{-1} \cdot \epsilon_{K_\infty/k,S,T}^\frp$. Then Lemma \ref{independent} implies that the equality of Conjecture \ref{IMCexplicit} is valid at $\frp$ if and only if one has \begin{eqnarray*} \Lambda_\frp \cdot \epsilon_{K_\infty/k,S}^\frp=\Fitt_\Lambda^0(A_S(K_\infty))\Fitt_\Lambda^0(\cX_{K_\infty,S\setminus S_\infty}) \cdot ({\bigwedge}_{\Lambda}^{r_\frp} U_{K_\infty,S})_\frp. \label{withoutT} \end{eqnarray*} \end{remark} \subsubsection{}Before comparing Conjecture \ref{IMCexplicit} to the more general Conjecture \ref{IMC} we show that the assumed validity of the $p$-part of the Rubin-Stark conjecture already gives strong evidence in favour of Conjecture \ref{IMCexplicit}.
We note, in particular, that if $\frp$ is a singular prime of $\Lambda$ (and an appropriate $\mu$-invariant vanishes), then the inclusion proved in the following result constitutes `one half' of the equality (\ref{easy singular}) that is equivalent in this case to Conjecture \ref{IMCexplicit}. \begin{proposition} \label{propositionfree} Let $\frp$ be a height one prime ideal of $\Lambda$. When $\frp$ is singular, assume that the $\mu$-invariant of $e_{\chi_\frp} A_{S}^T(K_\infty)$ (as $\ZZ_p[[\Gamma]]$-module) vanishes. Then the following claims are valid. \begin{itemize} \item[(i)] The $\Lambda_\frp$-module $(U_{K_\infty,S,T})_\frp$ is free of rank $r_\frp$. \item[(ii)] If Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ is valid for every $\chi$ in $\widehat \Delta$ and every natural number $n$, then there is an inclusion $$\Lambda_\frp\cdot \epsilon^\frp_{K_{\infty}/k,S,T} \subset ({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T})_\frp.$$ \end{itemize} \end{proposition} \begin{proof} As in the proof of Lemma \ref{technical limit}, we choose a representative of $C_{K_\infty,S,T}$ $$\Pi_\infty \stackrel{\psi_\infty}{\to} \Pi_\infty. $$ We have the exact sequence \begin{eqnarray} 0 \to U_{K_\infty,S,T} \to \Pi_\infty \stackrel{\psi_\infty}{\to} \Pi_\infty \to H^1(C_{K_\infty,S,T}) \to 0. \label{tate infty} \end{eqnarray} If $\frp$ is regular, then $\Lambda_\frp$ is a discrete valuation ring and the exact sequence (\ref{tate infty}) implies that the $\Lambda_\frp$-modules $(U_{K_\infty,S,T})_\frp$ and $\im(\psi_{\infty})_\frp$ are free. Since $U_{K_\infty,S,T}\otimes_\Lambda Q(\Lambda_\frp)$ is isomorphic to $\cY_{K_\infty,V_\frp} \otimes_\Lambda Q(\Lambda_\frp)$, we also know that the rank of $(U_{K_\infty,S,T})_\frp$ is $r_\frp$. Suppose next that $\frp$ is singular. Since the $\mu$-invariant of $e_{\chi_\frp}\cX_{K_\infty,S\setminus V_\frp}$ vanishes, we apply Lemma \ref{lemmamu} to deduce that $(\cX_{K_\infty,S})_\frp=(\cY_{K_\infty,V_\frp})_\frp$. 
In a similar way, the assumption that the $\mu$-invariant of $e_{\chi_\frp} A_{S}^T(K_\infty)$ vanishes implies that $A_S^T(K_\infty)_\frp=0$. Hence we have $H^1(C_{K_\infty,S,T})_\frp=(\cY_{K_\infty,V_\frp})_\frp$. By assumption $(\ast)$, we know that $\cY_{K_\infty,V_\frp}$ is projective as $\Lambda$-module. This implies that $H^1(C_{K_\infty,S,T})_\frp=(\cY_{K_\infty,V_\frp})_\frp$ is a free $\Lambda_\frp$-module of rank $r_\frp$. By choosing splittings of the sequence (\ref{tate infty}), we then easily deduce that the $\Lambda_\frp$-modules $(U_{K_\infty,S,T})_\frp$ and $\im(\psi_{\infty})_\frp$ are free and that the rank of $(U_{K_\infty,S,T})_\frp$ is equal to $r_\frp$. At this stage we have proved that, for any height one prime ideal $\frp$ of $\Lambda$, the $\Lambda_\frp$-module $(U_{K_\infty,S,T})_\frp$ is both free of rank $r_\frp$ (as required to prove claim (i)) and also a direct summand of $(\Pi_\infty)_\frp$, and hence that \begin{equation}\label{intersect}({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T})_\frp=({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T}\otimes_\Lambda Q(\Lambda_\frp)) \cap ({\bigwedge}_\Lambda^{r_\frp} \Pi_\infty)_\frp.\end{equation} Now we make the stated assumption concerning the validity of the $p$-part of the Rubin-Stark conjecture. This implies, by the proof of Theorem \ref{lemisom}(i), that for each $\frp$ the element $\epsilon^\frp_{K_\infty/k,S,T}$ lies in both $({\bigwedge}_\Lambda^{r_\frp} \Pi_\infty)_\frp$ and $$\bigoplus_{\chi\in\Upsilon_\frp}({\bigwedge}_\Lambda^{r_\chi}U_{K_\infty,S,T} )\otimes_\Lambda Q(\Lambda_\chi)=({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T} )\otimes_\Lambda Q(\Lambda_\frp),$$ and hence, by (\ref{intersect}) that it belongs to $({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T})_\frp$, as required to prove claim (ii). \end{proof} In the next result we compare Conjecture \ref{IMCexplicit} to the more general Conjecture \ref{IMC}. 
\begin{proposition}\label{equiv prop} Assume that Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ holds for all characters $\chi$ in $\widehat \Delta$ and all sufficiently large $n$ and that for each character $\chi$ in $\widehat \Delta'/{\sim}_{\QQ_p}$ the $\mu$-invariant of the $\ZZ_p[[\Gamma]]$-module $e_\chi A_{S}^T(K_\infty)$ vanishes. Then Conjectures \ref{IMC} and \ref{IMCexplicit} are equivalent. \end{proposition} \begin{proof} Since ${\det}_\Lambda(C_{K_\infty,S,T})$ is an invertible $\Lambda$-module the equality $\Lambda\cdot \mathcal{L}_{K_\infty/k,S,T}={\det}_\Lambda(C_{K_\infty,S,T})$ in Conjecture \ref{IMC} is valid if and only if at every height one prime ideal $\frp$ of $\Lambda$ one has \begin{eqnarray} \Lambda_\frp \cdot \mathcal{L}_{K_\infty/k,S,T}={\det}_\Lambda(C_{K_\infty,S,T})_\frp \label{localimc} \end{eqnarray} (see \cite[Lemma 6.1]{bg}). If $\frp$ is regular, then one easily sees that this equality is valid if and only if the equality $$\Lambda_\frp\cdot\epsilon^\frp_{K_{\infty}/k,S,T}={\rm Fitt}_\Lambda^{r_\frp}(H^1(C_{K_\infty,S,T}))\cdot ({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T} )_\frp$$ is valid, by using Theorem \ref{lemisom}(ii). If $\frp$ is singular, then the assumption of vanishing $\mu$-invariants and the argument in the proof of Proposition \ref{propositionfree}(i) shows that the $\Lambda_\frp$-modules $(U_{K_\infty,S,T})_\frp$ and $H^1(C_{K_\infty,S,T})_\frp$ are both free of rank $r_\frp$. Noting this, we see that (\ref{localimc}) holds if and only if one has $$\Lambda_\frp\cdot\epsilon^\frp_{K_{\infty}/k,S,T}=({\bigwedge}_\Lambda^{r_\frp}U_{K_\infty,S,T} )_\frp$$ and so in this case the claimed result follows from Remark \ref{remarkIMCsingular}. \end{proof} \subsubsection{} In our earlier paper \cite{bks1} we defined canonical Selmer modules $\mathcal{S}_{S,T}(\GG_{m/F})$ and $\mathcal{S}^{{\rm tr}}_{S,T}(\GG_{m/F})$ for $\mathbb{G}_m$ over number fields $F$ that are of finite degree over $\QQ$. 
For any intermediate field $L$ of $K_{\infty}/k$, we now set $$ \mathcal{S}_{p,S,T}(\GG_{m/L}) :=\varprojlim_{F} \mathcal{S}_{S,T}(\GG_{m/F}) \otimes \ZZ_{p}, \hspace{3mm} \mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/L}):=\varprojlim_{F} \mathcal{S}^{{\rm tr}}_{S,T}(\GG_{m/F})\otimes \ZZ_{p} $$ where in both limits $F$ runs over all finite extensions of $k$ in $L$ and the transition morphisms are the natural corestriction maps. We note in particular that, by its very definition, $\mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/L})$ coincides with $H^{1}(C_{L,S,T})$. In addition, this definition implies that for any subset $V$ of $S$ comprising places that split completely in $L$ the kernel of the natural (composite) projection map $$ \mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/L})_{V}:=\ker(\mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/L}) \to \cX_{L,S} \to \cY_{L,V}) $$ lies in a canonical exact sequence of the form \begin{equation}\label{canonical it ses} 0 \to A_S^T(L) \to \mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/L})_{V} \to \cX_{L,S \setminus V} \to 0.\end{equation} We now interpret our Iwasawa main conjecture in terms of characteristic ideals. \begin{conjecture} \label{char IMC} Assume Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ holds for all $\chi \in \widehat \Delta$ and all non-negative integers $n$. Then for any $\chi \in \widehat \Delta$ there are equalities \begin{align}\label{IMC41} {\rm char}_{\Lambda_{\chi}}\! ( (\bigcap^{r_\chi}U_{L_{\chi,\infty},S,T}/\langle \epsilon_{L_{\chi,\infty}/k,S,T}^{V_{\chi}} \rangle)^{\chi} ) &\!=\! {\rm char}_{\Lambda_{\chi}}\!(\mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/ L_{\chi,\infty}})^{\chi}_{V_{\chi}}) \\ &\!=\! 
{\rm char}_{\Lambda_{\chi}}\!(A_S^T(L_{\chi, \infty})^{\chi}) {\rm char}_{\Lambda_{\chi}}\!((\cX_{L_{\chi, \infty},S\setminus V_\chi})^{\chi}).\notag \end{align} Here, for any $\ZZ_{p}[[\G_\chi]]$-module $M$ we write $M^{\chi}$ for the $\Lambda_{\chi}$-module $M \otimes_{\ZZ_{p}[G_\chi]} \ZZ_{p}[\im \chi]$ and ${\rm char}_{\Lambda_{\chi}}(M^{\chi})$ for its characteristic ideal in $\Lambda_{\chi}$. In addition, the second displayed equality is a direct consequence of the appropriate case of the exact sequence (\ref{canonical it ses}). \end{conjecture} \begin{proposition}\label{IMC4} Assume that Conjecture ${\rm RS}(L_{\chi,n}/k,S,T,V_\chi)_p$ is valid for all characters $\chi$ in $\widehat \Delta$ and all sufficiently large natural numbers $n$ and that for each character $\chi \in \widehat \Delta'/{\sim}_{\QQ_p}$ the $\mu$-invariant of the $\ZZ_p[[\Gamma]]$-module $e_\chi A_{S}^T(K_\infty)$ vanishes. Then Conjectures \ref{IMC} is equivalent to Conjecture \ref{char IMC}. \end{proposition} \begin{proof} Note that by our assumption $\mu=0$ we have $({\bigcap}^{r_\frp}U_{K_\infty,S,T})_\frp= ({\bigwedge}^{r_\frp}U_{K_\infty,S,T})_\frp$ for any height one prime $\frp$, using (\ref{intersect}). Therefore, Conjecture \ref{IMCexplicit} implies the equality (\ref{IMC41}) for any $\chi$. On the other hand, for a height one regular prime $\frp$, we can regard $\frp$ to be a prime of $\Lambda_{\chi}$ for some $\chi$, so the equality (\ref{IMC41}) implies the equality in Conjecture \ref{IMCexplicit}. For a singular prime $\frp$, by Lemma \ref{lemmamu}, (\ref{IMC41}) for any $\chi$ implies $({\bigwedge}^{r_\frp}U_{K_\infty,S,T})_\frp/ \langle \epsilon^{\frp}_{K_{\infty}/k,S,T} \rangle=0$, thus Conjecture \ref{IMCexplicit}. The proposition therefore follows from Proposition \ref{equiv prop}. \end{proof} \subsection{The case of CM-fields}\label{CM field subsec} In this section, we use the following strengthening of the condition ($\ast$) used above. 
$$ (\ast\ast) \begin{cases} p \text{ is odd}, \\ k \text{ is totally real and $K$ is either totally real or a CM-field},\\ k_\infty/k \text{ is the cyclotomic $\ZZ_p$-extension.} \end{cases} $$ Under this hypothesis Iwasawa has conjectured that for every $\chi \in \widehat \Delta'/{\sim}_{\QQ_p}$ the $\mu$-invariant of the $\ZZ_p[[\Gamma]]$-module $e_\chi A_{S}^T(K_\infty)$ vanishes and, if this is true, then Proposition \ref{equiv prop} implies that the Conjectures \ref{IMC} and \ref{IMCexplicit} are equivalent. In addition, in this case we can use the main results of Wiles \cite{Wiles} and of B\"uy\"ukboduk in \cite{buyuk} to give the following concrete evidence in support of these conjectures. In the following we denote $S_\infty(k)$ and $S_p(k)$ simply by $S_\infty$ and $S_p$ respectively. \begin{theorem}\label{CM theorem} Assume the condition ($\ast\ast$). \begin{itemize} \item[(i)] If $K$ is a CM-field and the $\mu$-invariant of $K_\infty/K$ vanishes, then the minus part of Conjecture \ref{IMC} is valid for $(K_\infty/k,S,T)$. \item[(ii)] Suppose that $\chi$ is an even character. Then the equality of Conjecture \ref{char IMC} is valid for $\chi$ whenever all of the following conditions are satisfied: \begin{itemize} \item[(a)] all $v \in S_p$ are unramified in $L_{\chi}$, \item[(b)] $k/\QQ$ is unramified at $p$, \item[(c)] every $v \in S \setminus S_\infty$ satisfies $\chi(G_v)\neq 1$, \item[(d)] the order of $\chi$ is prime to $p$, \item[(e)] with $T$ chosen as in \cite[Remark 3.1]{buyuk}, the Rubin-Stark conjecture holds for $(F/k,S,T,S_\infty)$ for all $F$ in the set $\mathcal{K}$ of finite abelian extensions of $k$ that is defined in \cite[\S 3]{buyuk} (where our $L_\chi$ corresponds to the field $L$), \item[(f)] the Leopoldt conjecture holds for $L_{\chi,n}$ for all positive integer $n$. 
\end{itemize} \end{itemize} \end{theorem} \subsubsection{}We obtain Theorem \ref{CM theorem}(i) as a straightforward consequence of the main conjecture proved by Wiles \cite{Wiles}. In fact, for an odd character $\chi$, one has $r_{\chi}=0$ and the Rubin-Stark elements are Stickelberger elements. Therefore, $\epsilon_{L_{\chi,\infty}/k,S,T}^{V_{\chi}}$ is the $p$-adic $L$-function of Deligne-Ribet. We shall prove the equality (\ref{IMC41}) in Conjecture \ref{char IMC} for each odd $\chi \in \widehat \Delta$. We fix such a character $\chi$, and may take $K=L_{\chi}$ and $S=S_{\infty}(k) \cup S_{{\rm ram}}(K_{\infty}/k) \cup S_{p}(k)$. Let $S'_{p}$ be the set of $p$-adic primes which split completely in $K$. If $v \in S\setminus V_\chi$ is prime to $p$, it is ramified in $L_{\chi}=K$, so we have ${\rm char}_{\Lambda_{\chi}}(\cX_{L_{\chi, \infty},S\setminus V_\chi}^{\chi}) = {\rm char}_{\Lambda_{\chi}}(\cY_{L_{\chi, \infty},S'_{p}}^{\chi})$. Let $A^T(L_{\chi, \infty})$ be the inverse limit of the $p$-component of the $T$-ray class group of the full integer ring of $L_{\chi,n}$. By sending the prime $w$ above $v$ in $S'_{p}$ to the class of $w$, we obtain a homomorphism $\cY_{L_{\chi, \infty},S'_{p}}^{\chi} \longrightarrow A^T(L_{\chi, \infty})^{\chi}$, which is known to be injective. Since the sequence \[ \cY_{L_{\chi, \infty},S}^{\chi} \longrightarrow A^T(L_{\chi, \infty})^{\chi} \longrightarrow A_{S}^T(L_{\chi, \infty})^{\chi} \longrightarrow 0\] is exact and the kernel of $\cY_{L_{\chi, \infty},S}^{\chi} \longrightarrow \cY_{L_{\chi, \infty},S'_{p}}^{\chi}$ is finite, we have $$ {\rm char}_{\Lambda_{\chi}}(A_S^T(L_{\chi, \infty})^{\chi}) {\rm char}_{\Lambda_{\chi}}((\cY_{L_{\chi, \infty},S})^{\chi}) = {\rm char}_{\Lambda_{\chi}}(A^T(L_{\chi, \infty})^{\chi}). 
$$ Therefore, by noting $\chi \neq 1$, the equality (\ref{IMC41}) in Conjecture \ref{char IMC} becomes $${\rm char}_{\Lambda_{\chi}}(A^T(L_{\chi, \infty})^{\chi})=\theta_{L_{\chi, \infty}/k,S,T}^{\chi}(0) \Lambda_{\chi}, $$ where $\theta_{L_{\chi, \infty}/k,S,T}^{\chi}(0)$ is the $\chi$-component of $\epsilon_{L_{\chi,\infty}/k,S,T}^{\emptyset}$, which is the Stickelberger element in this case. The above equality is nothing but the usual main conjecture proved by Wiles \cite{Wiles}, so we have proved (i). \subsubsection{}We now derive Theorem \ref{CM theorem}(ii) from the main result of B\"uy\"ukboduk in \cite{buyuk}. To do this we assume condition ($\ast\ast$) and (without loss of generality) that $K$ is totally real. Set $r:=[k:\QQ]=\# S_\infty$. Since $K$ is totally real, one has $V_\chi=S_\infty$ and $r_\chi=r$. By our assumptions (c) and (d), the $\chi$-component of $\cX_{L_{\chi, \infty},S\setminus S_\infty}$ vanishes. Therefore, the equality (\ref{IMC41}) becomes $${\rm char}_{\Lambda_{\chi}} ( (\bigcap^{r}U_{L_{\chi,\infty},S,T}/\langle \epsilon_{L_{\chi,\infty}/k,S,T}^{V_{\chi}} \rangle)^{\chi} ) = {\rm char}_{\Lambda_{\chi}}(A_S^T(L_{\chi, \infty})^{\chi}).$$ Since $K$ is totally real and $p$ is odd, we may assume that $T$ is empty. Note that, since $L_{\chi,\infty}/L_{\chi}$ is the cyclotomic $\ZZ_p$-extension, the weak Leopoldt conjecture holds, and we have the canonical exact sequence \begin{eqnarray} 0 \to U_{L_{\chi,\infty}} \to U_{L_{\chi,\infty}}^{\rm sl} \to \Gal(M/L_{\chi,\infty}) \to A(L_{\chi,\infty}) \to 0, \label{semilocal} \end{eqnarray} where $U_{L_{\chi,\infty}}^{\rm sl}$ is the semi-local unit of $L_{\chi,\infty}$ at $p$, and $M$ is the maximal abelian $p$-extension of $L_{\chi,\infty}$ unramified outside $p$. By our assumptions (c) and (d) again, $U_{L_{\chi,\infty},S}^{\chi}= U_{L_{\chi,\infty}}^{\chi}$ and $A(L_{\chi,\infty})^{\chi}=A_{S}(L_{\chi,\infty})^{\chi}$. 
Therefore, what we have to prove is $${\rm char}_{\Lambda_{\chi}} ( ({\bigwedge}^r U_{L_{\chi,\infty}}^{\rm sl}/\langle \epsilon_{L_{\chi,\infty}/k,S}^{V_{\chi}} \rangle)^{\chi}) ={\rm char}_{\Lambda_{\chi}} (\Gal(M/L_{\chi,\infty})^{\chi}).$$ This is nothing but \cite[Theorem A]{buyuk}. Note that all of the hypotheses (a)-(f) occur as assumptions in the latter result. Indeed, (a) and (b) are (A1) and (A2) in \cite{buyuk} respectively, (c) is (A3) and the assumption on $S$ in \cite{buyuk}, and (d)-(f) are assumed in his main result. This completes the proof of Theorem \ref{CM theorem}(ii). \subsection{Consequences for number fields of finite degree} In this subsection we assume the condition ($\ast\ast$) stated at the beginning of \S\ref{CM field subsec} and also that $K$ is a CM-field of finite degree over $\QQ$. We shall describe unconditional results for $K$ which follow the validity of Theorem \ref{CM theorem}(i). To do this we set $\Lambda:=\ZZ_{p}[[\Gal(K_{\infty}/k)]]$ and for any $\Lambda$-module $M$ we denote by $M^-$ the minus part consisting of elements on which the complex conjugation acts as $-1$ (namely, $M^-=e^-M$). We note, in particular, that $\theta_{K_{\infty}/k,S,T}(0)$ belongs to $\Lambda^-$. We also write $x \mapsto x^{\#}$ for the $\ZZ_{p}$-linear involutions of both $\Lambda$ and the group rings $\ZZ_p[G]$ for finite quotients $G$ of $\Gal(K_{\infty}/k)$ which is induced by inverting elements of $\Gal(K_{\infty}/k)$. 
\begin{corollary} \label{CMunconditional1} If the $p$-adic $\mu$-invariant of $K_{\infty}/K$ vanishes, then one has $$\Fitt_{\Lambda^-}(\mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/K_{\infty}})^-) = \Lambda\cdot \theta_{K_{\infty}/k,S,T}(0)$$ and $$\Fitt_{\Lambda^-}(\mathcal{S}_{p,S,T}(\GG_{m/K_{\infty}})^-) = \Lambda\cdot \theta_{K_{\infty}/k,S,T}(0)^{\#}.$$ \end{corollary} \begin{proof} Since one has $r_{\chi}=0$ for any odd character $\chi$, the first displayed equality is equivalent to Conjecture \ref{IMC} in this case and is therefore valid as a consequence of Theorem \ref{CM theorem}. The second displayed equality is then obtained directly by applying the general result of \cite[Lemma 2.8]{bks1} to the first equality. \end{proof} \begin{corollary} \label{CMunconditional2} Let $L$ be an intermediate CM-field of $K_\infty/k$ which is finite over $k$, and set $G:=\Gal(L/k)$. If the $p$-adic $\mu$-invariant of $K_{\infty}/K$ vanishes, then there are equalities $$\Fitt_{\ZZ_{p}[G]^-}(\mathcal{S}^{{\rm tr}}_{p,S,T}(\GG_{m/L})^-) = \ZZ_{p}[G]\cdot\theta_{L/k,S,T}(0)$$ and $$\Fitt_{\ZZ_{p}[G]^-}(\mathcal{S}_{p,S,T}(\GG_{m/L})^-) =\ZZ_{p}[G]\cdot\theta_{L/k,S,T}(0)^{\#}.$$ \end{corollary} \begin{proof} This follows by combining Corollary \ref{CMunconditional1} with the general result of Lemma \ref{fitt descent} below and standard properties of Fitting ideals. \end{proof} \begin{lemma}\label{fitt descent} Suppose that $L/k$ is a Galois extension of finite number fields with Galois group $G$. 
Then there are natural isomorphisms $$ \mathcal{S}^{{\rm tr}}_{S,T}(\GG_{m/L})_{G} \stackrel{\sim}{\rightarrow} \mathcal{S}^{{\rm tr}}_{S,T}(\GG_{m/k})$$ and $$ \mathcal{S}_{S,T}(\GG_{m/L})_{G} \stackrel{\sim}{\rightarrow} \mathcal{S}_{S,T}(\GG_{m/k}).$$ \end{lemma} \begin{proof} The `Weil-\'etale cohomology complex' $R\Gamma_T((\mathcal{O}_{L,S})_{\mathcal{W}},\GG_m)$ is perfect and so there exist projective $\ZZ[G]$-modules $P_1$ and $P_2$, and a homomorphism of $G$-modules $P_1 \to P_2$ whose cokernel identifies with $\mathcal{S}^{{\rm tr}}_{S,T}(\GG_{m/L})$ and is such that the cokernel of the induced map $P_1^G \to P_2^G$ identifies with $\mathcal{S}^{{\rm tr}}_{S,T}(\GG_{m/k})$ (see \cite[\S 5.4]{bks1}). The first isomorphism is then obtained by noting that the norm map induces an isomorphism of modules $(P_{2})_{G} \stackrel{\sim}{\rightarrow} P_{2}^G$. The second claimed isomorphism can also be obtained in a similar way, noting that $\mathcal{S}_{S,T}(\GG_{m/L})$ is obtained as the cohomology in the highest (non-zero) degree of a perfect complex (see \cite[Proposition 2.4]{bks1}).\end{proof} We write $\mathcal{O}_L$ for the ring of integers of $L$ and ${\rm Cl}^{T}(L)$ for the ray class group of $\mathcal{O}_{L}$ with modulus $\Pi_{w \in T_{L}} w$. We denote the Sylow $p$-subgroup of ${\rm Cl}^{T}(L)$ by $A^{T}(L)$ and write $(A^{T}(L)^-)^\vee$ for the Pontrjagin dual of the minus part of $A^{T}(L)$. The next corollary of Theorem \ref{CM theorem}(i) that we record coincides with one of the main results of Greither and Popescu in \cite{GreitherPopescu}. \begin{corollary} \label{CMunconditional3} Let $L$ be an intermediate CM-field of $K_\infty/k$ which is finite over $k$, and set $G := \Gal(L/k)$. 
If the $p$-adic $\mu$-invariant for $K_{\infty}/K$ vanishes, then one has $$\theta_{L/k,S,T}(0)^{\#} \in \Fitt_{\ZZ_{p}[G]^-}((A^{T}(L)^-)^\vee).$$ \end{corollary} \begin{proof} The canonical exact sequence $$0 \to {\rm Cl}^{T}(L)^{\vee} \to \mathcal{S}_{S_{\infty}(k),T}(\GG_{m/L}) \to \Hom(\mathcal{O}_{L}^{\times}, \ZZ) \to 0$$ from \cite[Proposition 2.2]{bks1} implies that the natural map $$\mathcal{S}_{p,S_{\infty}(k),T}(\GG_{m/L})^{-} \simeq (A^{T}(L)^-)^\vee$$ is bijective. In addition, from \cite[Proposition 2.4(ii)]{bks1}, we know that the canonical homomorphism $$\mathcal{S}_{S,T}(\GG_{m/L}) \rightarrow \mathcal{S}_{S_{\infty}(k),T}(\GG_{m/L})$$ is surjective. The stated claim therefore follows directly from the second equality in Corollary \ref{CMunconditional2}. \end{proof} \begin{remark}\ \noindent{}(i) Our derivation of the equality in Corollary \ref{CMunconditional3} differs from that given in \cite{GreitherPopescu} in that we avoid any use of the Galois modules related to 1-motives that are constructed in loc. cit. \noindent{}(ii) The Brumer-Stark conjecture predicts $\theta_{L/k,S_{\rm ram}(L/k),T}(0)$ belongs to the annihilator $\Ann_{\ZZ_{p}[G]^-}(A^{T}(L))$ and if no $p$-adic place of $L^{+}$ splits in $L$, then Corollary \ref{CMunconditional3} implies a stronger version of this conjecture. \end{remark} We have assumed throughout \S\ref{hrit sec} that the set $S$ contains all $p$-adic places of $k$ and so the Stickelberger element $\theta_{L/k,S,T}(0)$ in Corollary \ref{CMunconditional3} can be imprimitive. In particular, if any $p$-adic prime of $k$ splits completely in $L$, then $\theta_{L/k,S,T}(0)$ vanishes and the assertion of Corollary \ref{CMunconditional3} is trivially valid. However, by applying Corollary \ref{IntroCor} in this context, we can now prove the following non-trivial result. \begin{corollary} \label{CMunconditional4} Let $L$ be an intermediate CM-field of $K_\infty/k$ which is finite over $k$, and set $G := \Gal(L/k)$. 
If the $p$-adic $\mu$-invariant for $K_{\infty}/K$ vanishes and at most one $p$-adic place of $k$ splits in $L/L^{+}$, then one has $$\theta_{L/k,S_{\rm ram}(L/k),T}(0) \in \Fitt_{\ZZ_{p}[G]^-}((A^{T}(L)^-)^\vee).$$ \end{corollary} \begin{proof} This follows immediately by combining \cite[Corollary 1.14]{bks1} with Corollary \ref{IntroCor}. \end{proof} \section{Iwasawa-theoretic Rubin-Stark congruences} In this section, we formulate an Iwasawa-theoretic version of the conjecture proposed by Mazur and Rubin \cite{MRGm} and by the third author \cite{sano} (see also \cite[Conjecture 5.4]{bks1}). This conjecture is a natural generalization of the Gross-Stark conjecture \cite{Gp}, and plays a key role in the descent argument that we present in the next section. We use the notation as in the previous section. \subsection{Statement of the congruences} \label{formulate mrs} We first recall the formulation of the conjecture of Mazur and Rubin and of the third author. Take a character $\chi \in \widehat \G$. Take a proper subset $V'\subset S$ so that all $v\in V'$ splits completely in $L_\chi$ (i.e. $\chi(G_v)=1$) and that $V_\chi\subset V'$. Put $r':=\#V'$. We recall the formulation of the conjecture of Mazur and Rubin and of the third author for $(L_{\chi,n}/L_\chi/k,S,T,V_\chi,V')$. For simplicity, put \begin{itemize} \item $L_n:=L_{\chi,n}$; \item $L:=L_\chi$; \item $\G_{n}:=\G_{\chi,n}=\Gal(L_{\chi,n}/k)$; \item $G:=G_\chi=\Gal(L_\chi/k)$; \item $\Gamma_{n}:=\Gamma_{\chi,n}=\Gal(L_{\chi,n}/L_\chi)$; \item $V:=V_\chi=\{ v\in S \mid \text{$v$ splits completely in $L_{\chi,\infty}$}\}$; \item $r:=r_\chi=\# V_\chi$. \end{itemize} Put $e:=r'-r$. Let $I(\Gamma_n)$ denote the augmentation ideal of $\ZZ_p[\Gamma_n]$. 
It is shown in \cite[Lemma 2.11]{sano} that there exists a canonical injection $$\bigcap^{r} U_{L,S,T} \hookrightarrow \bigcap^r U_{L_n,S,T}$$ which induces the injection $$\nu_n: (\bigcap^r U_{L,S,T})\otimes_{\ZZ_p} I(\Gamma_n)^e/I(\Gamma_n)^{e+1} \hookrightarrow (\bigcap^r U_{L_n,S,T})\otimes_{\ZZ_p} \ZZ_p[\Gamma_n]/I(\Gamma_n)^{e+1}.$$ Note that this injection does not coincides with the map induced by the inclusion $U_{L,S,T}\hookrightarrow U_{L_n,S,T}$, and we have $$\nu_n({\N_{L_n/L}^r(a)})=\N_{L_n/L} a$$ for all $a \in \bigcap^rU_{L_n,S,T}$ (see \cite[Remark 2.12]{sano}). Let $I_n$ be the kernel of the natural map $\ZZ_p[\G_n]\to \ZZ_p[G]$. For $v\in V'\setminus V$, let ${\rm rec}_w: L^\times \to \Gamma_n$ denote the local reciprocity map at $w$ (recall that $w$ is the fixed place lying above $v$). Define $${\rm Rec}_w:=\sum_{\sigma \in G} ({\rm rec}_w(\sigma(\cdot))-1)\sigma^{-1} \in \Hom_{\ZZ[G]}(L^\times, I_n/I_n^2).$$ It is shown in \cite[Proposition 2.7]{sano} that $\bigwedge_{v \in V'\setminus V}{\rm Rec}_w$ induces a homomorphism $${\rm Rec}_n:\bigcap^{r'} U_{L,S,T} \to \bigcap^{r} U_{L,S,T} \otimes_{\ZZ_p} I(\Gamma_n)^e/I(\Gamma_n)^{e+1}. $$ Finally, define $$\mathcal{N}_n : \bigcap^r U_{L_n,S,T} \to \bigcap^{r} U_{L_n,S,T}\otimes_{\ZZ_p} \ZZ_p[\Gamma_n]/I(\Gamma_n)^{e+1}$$ by $$\mathcal{N}_n(a):=\sum_{\sigma \in \Gamma_n}\sigma a\otimes \sigma^{-1}.$$ We now state the formulation of \cite[Conjecture 3]{sano} (or \cite[Conjecture 5.2]{MRGm}). \begin{conjecture}[{${\rm MRS}(L_n/L/k,S,T,V,V')_p$}] \label{mrs1} Assume Conjectures ${\rm RS}(L_n/k,S,T,V)_p$ and ${\rm RS}(L/k,S,T,V')_p$. Then we have \begin{eqnarray} \mathcal{N}_n(\epsilon_{L_n/k,S,T}^V)=(-1)^{re} \nu_n({\rm Rec}_n(\epsilon_{L/k,S,T}^{V'})) \text{ in }\bigcap^{r} U_{L_n,S,T}\otimes_{\ZZ_p} \ZZ_p[\Gamma_n]/I(\Gamma_n)^{e+1}.\nonumber \label{mrs eq}\end{eqnarray} (Note that the sign in the right hand side depends on the labeling of $S$. We follow the convention in \cite[\S 5.3]{bks1}. 
) \end{conjecture} Note that \cite[Conjecture ${\rm MRS}(K/L/k,S,T,V,V')$]{bks1} is slightly stronger than the above conjecture (see \cite[Remark 5.7]{bks1}). We shall next give an Iwasawa theoretic version of the above conjecture. Note that, since the inverse limit $\varprojlim_n I(\Gamma_{n})^e/I(\Gamma_{n})^{e+1}$ is isomorphic to $\ZZ_p$, the map $$\varprojlim_n {\rm Rec}_n: \bigcap^{r'}U_{L,S,T} \to \bigcap^{r}U_{L,S,T} \otimes_{\ZZ_p}\varprojlim_n I(\Gamma_{n})^e/I(\Gamma_{n})^{e+1} $$ uniquely extends to give a $\CC_p$-linear map $$\CC_p \bigwedge^{r'}U_{L,S,T} \to \CC_p(\bigwedge^{r}U_{L,S,T} \otimes_{\ZZ_p}\varprojlim_n I(\Gamma_{n})^e/I(\Gamma_{n})^{e+1}) $$ which we denote by ${\rm Rec}_{\infty}$. \begin{conjecture}[{${\rm MRS}(K_\infty/k,S,T,\chi,V')$}] \label{mrs2} Assume that Conjecture ${\rm RS}(L_n/k,S,T,V)_p$ is valid for all $n$. Then, there exists a (unique) $$\kappa=(\kappa_n)_n \in \bigcap^r U_{L,S,T} \otimes_{\ZZ_p} \varprojlim_n I(\Gamma_n)^e/I(\Gamma_n)^{e+1}$$ such that $$\nu_n(\kappa_n)=\mathcal{N}_n(\epsilon_{L_n/k,S,T}^V)$$ for all $n$ and that $$e_\chi \kappa=(-1)^{re}e_\chi{\rm Rec}_\infty(\epsilon_{L/k,S,T}^{V'})\text{ in }\CC_p(\bigwedge^{r}U_{L,S,T} \otimes_{\ZZ_p}\varprojlim_n I(\Gamma_{n})^e/I(\Gamma_{n})^{e+1}).$$ \end{conjecture} \begin{remark} Clearly the validity of Conjecture ${\rm MRS}(L_n/L/k,S,T,V,V')_p$ for all $n$ implies the validity of ${\rm MRS}(K_\infty/k,S,T,\chi,V')$. A significant advantage of the above formulation of Conjecture ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is that we do not need to assume that Conjecture ${\rm RS}(L/k,S,T,V')_p$ is valid. \end{remark} \begin{proposition} \label{prop mrs}\ \begin{itemize} \item[(i)] If $V=V'$, then ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is valid. \item[(ii)] If $V \subset V'' \subset V'$, then ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ implies ${\rm MRS}(K_\infty/k,S,T,\chi,V'')$. \item[(iii)] Suppose that $\chi(G_v)=1$ for all $v\in S$ and $\# V'=\# S-1$. 
Then, for any $V''\subset S$ with $V \subset V''$ and $\# V''=\#S -1$, ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ and ${\rm MRS}(K_\infty/k,S,T,\chi,V'')$ are equivalent. \item[(iv)] If $v \in V'\setminus V$ is a finite place which is unramified in $L_\infty$, then ${\rm MRS}(K_\infty/k,S\setminus \{v\},T,\chi,V'\setminus \{v\})$ implies ${\rm MRS}(K_\infty/k,S,T,\chi,V')$. \item[(v)] If $\#V'\neq \# S-1$ and $v\in S \setminus V'$ is a finite place which is unramified in $L_\infty$, then ${\rm MRS}(K_\infty/k,S\setminus\{v\},T,\chi,V')$ implies ${\rm MRS}(K_\infty/k,S,T,\chi,V')$. \end{itemize} \end{proposition} \begin{proof} Claim (i) follows from the `norm relation' of Rubin-Stark elements, see \cite[Remark 3.9]{sano} or \cite[Proposition 5.7]{MRGm}. Claim (ii) follows from \cite[Proposition 3.12]{sano}. Claim (iii) follows from \cite[Lemma 5.1]{sanotjm}. Claim (iv) follows from the proof of \cite[Proposition 3.13]{sano}. Claim (v) follows by noting that $$\epsilon_{L_n/k,S,T}^V=(1-{\rm Fr}_v^{-1})\epsilon_{L_n/k,S\setminus\{v\},T}^V$$ and $$\epsilon_{L/k,S,T}^{V'}=(1-{\rm Fr}_v^{-1})\epsilon_{L/k,S\setminus\{v\},T}^{V'}.$$ \end{proof} \begin{corollary} \label{cor unram} If every place $v$ in $V'\setminus V$ is both non-archimedean and unramified in $L_\infty$, then ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is valid. \end{corollary} \begin{proof} By Proposition \ref{prop mrs}(iv), we may assume $V=V'$. By Proposition \ref{prop mrs}(i), ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is valid in this case. \end{proof} Consider the following condition: $${\rm NTZ}(K_\infty/k,\chi) \quad \text{$\chi(G_\mathfrak{p})\neq 1$ for all $\mathfrak{p} \in S_p(k)$ which ramify in $L_{\chi,\infty}$}.$$ This condition is usually called `no trivial zeros'. \begin{corollary} \label{nontrivzeros} Assume that $\chi$ satisfies ${\rm NTZ}(K_\infty/k,\chi)$. Then ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is valid. 
\end{corollary} \begin{proof} In this case we see that every $v \in V'\setminus V$ is finite and unramified in $L_{\infty}$. \end{proof} \subsection{Connection to the Gross-Stark conjecture} \label{GS} In this subsection we help set the context for Conjecture ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ by showing that it specializes to recover the Gross-Stark Conjecture (as stated in Conjecture \ref{gross stark conj} below). To do this we assume throughout that $k$ is totally real, $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension and $\chi$ is totally odd. We also set $V':=\{v \in S \mid \chi(G_v)=1\}$ (and note that this is a proper subset of $S$ since $\chi$ is totally odd) and we assume that every $v\in V'$ lies above $p$ (noting that this assumption is not restrictive as a consequence of Proposition \ref{prop mrs}(iv)). We shall now show that this case of ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is equivalent to the Gross-Stark conjecture. As a first step, we note that in this case $V$ is empty (that is, $r=0$) and so one knows that Conjecture ${\rm RS}(L_{n}/k,S,T,V)_p$ is valid for all $n$ (by \cite[Theorem 3.3]{R}). 
In fact, one has $\epsilon_{L_{n}/k,S,T}^{V}=\theta_{L_{n}/k,S,T}(0) \in \ZZ_p[\G_n]$ and, by \cite[Proposition 5.4]{MRGm}, the assertion of Conjecture ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is equivalent to the following claims: one has \begin{eqnarray} \theta_{L_n/k,S,T}(0) \in I_n^{r'} \label{vanishing order stickelberger} \end{eqnarray} for all $n$ and \begin{eqnarray} e_\chi\theta_{L_\infty/k,S,T}(0)= e_\chi {\rm Rec}_\infty(\epsilon_{L/k,S,T}^{V'}) \text{ in }\CC_p[G]\otimes_{\ZZ_p} \varprojlim_n I(\Gamma_n)^{r'}/I(\Gamma_n)^{r'+1}, \label{mrsgs} \end{eqnarray} where we set $$\theta_{L_\infty/k,S,T}(0):=\varprojlim_n \theta_{L_n/k,S,T}(0) \in \varprojlim_n I_n^{r'}/I_n^{r'+1} \simeq \ZZ_p[G]\otimes_{\ZZ_p} \varprojlim_n I(\Gamma_n)^{r'}/I(\Gamma_n)^{r'+1}.$$ We also note that the validity of (\ref{vanishing order stickelberger}) follows as a consequence of our Iwasawa main conjecture (Conjecture \ref{IMC}) by using Proposition \ref{explicit projector}(iii) and the result of \cite[Lemma 5.19]{bks1} (see the argument in \S \ref{proof main result}). To study (\ref{mrsgs}) we set $\chi_1:=\chi |_\Delta \in \widehat \Delta$ and regard (as we may) the product $\chi_2:=\chi \chi_1^{-1}$ as a character of $\Gamma=\Gal(k_\infty/k)$. Note that $\Gal(L_\infty/k)=G_{\chi_1}\times \Gamma_{\chi_1}$. Fix a topological generator $\gamma \in \Gamma_{\chi_1}$, and identify $\ZZ_p[{\rm im}(\chi_1)][[\Gamma_{\chi_1}]]$ with the ring of power series $\ZZ_p[{\rm im}(\chi_1)][[t]]$ via the correspondence $\gamma=1+t$. We then define $g_{L_\infty/k,S,T}^{\chi_1}(t)$ to be the image of $\theta_{L_\infty/k,S,T}(0)$ under the map $$\ZZ_p[[\Gal(L_\infty/k)]]=\ZZ_p[G_{\chi_1}][[ \Gamma_{\chi_1}]] \to \ZZ_p[{\rm im}(\chi_1)][[\Gamma_{\chi_1}]]=\ZZ_p[{\rm im}(\chi_1)][[t]]$$ induced by $\chi_1$. 
We recall that the $p$-adic $L$-function of Deligne-Ribet is defined by $$L_{k,S,T,p}(\chi^{-1}\omega,s):=g_{L_\infty/k,S,T}^{\chi_1}(\chi_2(\gamma)\chi_{\rm cyc}(\gamma)^s-1),$$ where $\chi_{\rm cyc}$ is the cyclotomic character, and we note that one can show $L_{k,S,T,p}(\chi^{-1}\omega,s)$ to be independent of the choice of $\gamma$. The validity of (\ref{vanishing order stickelberger}) implies an inequality \begin{eqnarray} \ord_{s=0}L_{k,S,T,p}(\chi^{-1}\omega,s) \geq r'. \label{p adic L order} \end{eqnarray} It is known that (\ref{p adic L order}) is a consequence of the Iwasawa main conjecture (in the sense of Wiles \cite{Wiles}), which is itself known to be valid when $p$ is odd. In addition, Spiess has recently proved that (\ref{p adic L order}) is valid, including the case $p=2$, by using Shintani cocycles \cite{spiess}. In all cases, therefore, we can define $$L_{k,S,T,p}^{(r')}(\chi^{-1}\omega,0):=\lim_{s\to 0} s^{-r'}L_{k,S,T,p}(\chi^{-1}\omega,s) \in \CC_p. $$ For $v \in V'$, define $${\rm Log}_w: L^\times \to \ZZ_p[G]$$ by $${\rm Log}_w(a):=-\sum_{\sigma\in G}\log_p({\N}_{L_w/\QQ_p}(\sigma a))\sigma^{-1},$$ where $\log_p: \QQ_p^\times \to \ZZ_p$ is Iwasawa's logarithm (in the sense that $\log_{p}(p)=0$). We set $${\rm Log}_{V'}:=\bigwedge_{v\in V'}{\rm Log}_w: \CC_p \bigwedge^{r'} U_{L,S,T} \to \CC_p[G].$$ We shall denote the map $\CC_p[G]\to \CC_p$ induced by $\chi$ also by $\chi$. For $v\in V'$, we define $${\rm Ord}_w: L^\times \to \ZZ[G]$$ by $${\rm Ord}_w(a):=\sum_{\sigma\in G}\ord_w(\sigma a)\sigma^{-1},$$ and set $${\rm Ord}_{V'}:= \bigwedge_{v\in V'}{\rm Ord}_w: \CC_p \bigwedge^{r'} U_{L,S,T} \to \CC_p[G].$$ On the $\chi$-component, ${\rm Ord}_{V'}$ induces an isomorphism $$\chi\circ {\rm Ord}_{V'}: e_\chi \CC_p \bigwedge^{r'} U_{L,S,T} \stackrel{\sim}{\to} \CC_p. 
$$ Taking a non-zero element $x\in e_\chi \CC_p \bigwedge^{r'} U_{L,S,T}$, we define the $\mathcal{L}$-invariant by $$\mathcal{L}(\chi):=\frac{\chi(\Log_{V'}(x))}{\chi(\Ord_{V'}(x))} \in \CC_p. $$ Since $e_\chi \CC_p \bigwedge^{r'} U_{L,S,T}$ is a one dimensional $\CC_p$-vector space, we see that $\mathcal{L}(\chi)$ does not depend on the choice of $x$. Then the Gross-Stark conjecture is stated as follows. \begin{conjecture}[{${\rm GS}(L/k,S,T,\chi)$}]\label{gross stark conj} $$L_{k,S,T,p}^{(r')}(\chi^{-1}\omega,0)=\mathcal{L}(\chi) L_{k,S\setminus V',T}(\chi^{-1},0).$$ \end{conjecture} \begin{remark} This formulation constitutes a natural higher rank generalization of the form of the Gross-Stark conjecture that is considered by Darmon, Dasgupta and Pollack (see \cite[Conjecture 1]{DDP}). \end{remark} Letting $x=e_\chi \epsilon_{L/k,S,T}^{V'}$, we obtain $$\chi({\rm Log}_{V'}(\epsilon_{L/k,S,T}^{V'}))=\mathcal{L}(\chi) L_{k,S\setminus V',T}(\chi^{-1},0).$$ Thus we see that Conjecture ${\rm GS}(L/k,S,T,\chi)$ is equivalent to the equality $$L_{k,S,T,p}^{(r')}(\chi^{-1}\omega,0)=\chi({\rm Log}_{V'}(\epsilon_{L/k,S,T}^{V'})).$$ Concerning the relation between ${\rm Rec}_\infty$ and ${\rm Log}_{V'}$, we note the fact $$\chi_{\rm cyc}({\rm rec}_w(a))={\N}_{L_w/\QQ_p}(a)^{-1},$$ where $v \in V'$ and $a\in L^\times$. Given this fact, it is straightforward to check (under the validity of (\ref{vanishing order stickelberger})) that Conjecture ${\rm GS}(L/k,S,T,\chi)$ is equivalent to (\ref{mrsgs}). At this stage we have therefore proved the following result. \begin{theorem}\label{GS thm} Suppose that $k$ is totally real, $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension, and $\chi$ is totally odd. Set $V':=\{v\in S \mid \chi(G_v)=1\}$ and assume that every $v\in V'$ lies above $p$. Assume also that (\ref{vanishing order stickelberger}) is valid. Then Conjecture ${\rm GS}(L/k,S,T,\chi)$ is equivalent to Conjecture ${\rm MRS}(K_\infty/k,S,T,\chi,V')$. 
\end{theorem} \subsection{A proof in the case $k=\QQ$} In \cite[Corollary 1.2]{bks1} the known validity of the eTNC for Tate motives over abelian fields is used to prove that Conjecture ${\rm MRS}(K/L/k,S,T,V,V')$ is valid in the case $k=\QQ$. In this subsection, we shall give a much simpler proof of the latter result which uses only Theorem \ref{GS thm}, the known validity of the Gross-Stark conjecture over abelian fields and a classical result of Solomon \cite{solomon}. We note that for any $\chi$ and $n$ the Rubin-Stark conjecture is known to be true for $(L_{\chi,n}/\QQ,S,T,V_\chi)$. (In this setting the Rubin-Stark element is given by a cyclotomic unit (resp. the Stickelberger element) when $r_\chi=1$ (resp. $r_\chi=0$).) \begin{theorem} \label{solomonFG} Suppose that $k=\QQ$. Then, ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ is valid. \end{theorem} \begin{proof} By Proposition \ref{prop mrs}(ii), we may assume that $V'$ is maximal, namely, $$r'= {\rm min} \{\#\{ v\in S \mid \chi(G_v)=1\}, \# S-1\}.$$ By Corollary \ref{nontrivzeros}, we may assume that $\chi(p)=1$. Suppose first that $\chi$ is odd. Since Conjecture ${\rm GS}(L/\QQ,S,T,\chi)$ is valid (see \cite[\S4]{Gp}), Conjecture ${\rm MRS}(K_\infty/\QQ,S,T,\chi,V')$ follows from Theorem \ref{GS thm}. Suppose next that $\chi=1$. In this case we have $r'=\#S-1$. We may assume $p \notin V'$ by Proposition \ref{prop mrs}(iii). In this case every $v \in V'\setminus V$ is unramified in $L_{\infty}$. Hence, the theorem follows from Corollary \ref{cor unram}. Finally, suppose that $\chi \neq 1$ is even. By Proposition \ref{prop mrs}(iv) and (v), we may assume $$S=\{\infty,p\}\cup S_{\rm ram}(L/\QQ) \text{ and }V'=\{\infty,p\}.$$ We label $S=\{v_0,v_1,\ldots\}$ so that $v_1=\infty$ and $v_2=p$. Fix a topological generator $\gamma$ of $\Gamma=\Gal(L_\infty/L)$. Then we construct an element $\kappa(L,\gamma)\in \varprojlim_n L^\times/(L^\times)^{p^n}$ as follows. 
Note that $\N_{L_n/L}(\epsilon_{L_n/\QQ,S,T}^V)$ vanishes since $\chi(p)=1$. So we can take $\beta_n\in L_n^\times$ such that $\beta_n^{\gamma-1}=\epsilon_{L_n/\QQ,S,T}^V$ (Hilbert's theorem 90). Define $$\kappa_n:=\N_{L_n/L}(\beta_n) \in L^\times/(L^\times)^{p^n}.$$ This element is independent of the choice of $\beta_n$, and for any $m>n$ the natural map $$L^\times/(L^\times)^{p^m} \to L^\times/(L^\times)^{p^n}$$ sends $\kappa_m$ to $\kappa_n$. We define \[ \kappa(L,\gamma):=(\kappa_n)_n\in \varprojlim_n L^\times/(L^\times)^{p^n}.\] Then, by Solomon \cite[Proposition 2.3(i)]{solomon}, we know that $$\kappa(L,\gamma)\in \ZZ_p \otimes_\ZZ \mathcal{O}_{L}\left[\frac1p\right]^\times \hookrightarrow \varprojlim_n L^\times/(L^\times)^{p^n}.$$ Fix a prime $\mathfrak{p}$ of $L$ lying above $p$. Define $${\rm Ord}_\frp : L^\times \to \ZZ_p[G]$$ by ${\rm Ord}_\frp(a):=\sum_{\sigma \in G}{\rm ord}_\frp(\sigma a)\sigma^{-1}$. Similarly, define $${\rm Log}_\frp: L^\times \to \ZZ_p[G]$$ by ${\rm Log}_\frp(a):=-\sum_{\sigma \in G}\log_p(\iota_\frp (\sigma a))\sigma^{-1}$, where $\iota_\frp: L \hookrightarrow L_\frp =\QQ_p$ is the natural embedding. Then by the result of Solomon \cite[Theorem 2.1 and Remark 2.4]{solomon}, one deduces \begin{eqnarray} {\rm Ord}_\frp(\kappa(L,\gamma))=-\frac{1}{\log_p(\chi_{\rm cyc}(\gamma))} {\rm Log}_\frp(\epsilon_{L/\QQ,S\setminus \{p\},T}^V). \nonumber \end{eqnarray} From this, we have \begin{eqnarray} {\rm Ord}_\frp(\kappa(L,\gamma))\otimes (\gamma-1)=-{\rm Rec}_\frp(\epsilon_{L/\QQ,S\setminus \{ p\} ,T}^V) \text{ in }\ZZ_p[G]\otimes_{\ZZ_p}I(\Gamma)/I(\Gamma)^2,\label{solomon eq} \end{eqnarray} where $I(\Gamma)$ is the augmentation ideal of $\ZZ_p[[\Gamma]]$. We know that $e_\chi \CC_p U_{L,S}$ is a two-dimensional $\CC_p$-vector space. Lemma \ref{lemma basis} below shows that $\{ e_\chi \epsilon_{L/\QQ,S\setminus\{p\},T}^V, e_\chi \kappa(L,\gamma)\}$ is a $\CC_p$-basis of this space. 
For simplicity, set $\epsilon_L^V:=\epsilon_{L/\QQ,S\setminus\{p\},T}^V$. Note that the isomorphism $${\rm Ord}_\frp: e_\chi \CC_p \bigwedge^2 U_{L,S} \stackrel{\sim}{\to} e_\chi \CC_p U_L$$ sends $e_\chi \epsilon_L^V\wedge \kappa(L,\gamma)$ to $-\chi({\rm Ord}_\frp(\kappa(L,\gamma)))e_\chi \epsilon_L^V$. Since we have $${\rm Ord}_\frp(e_\chi \epsilon_{L/\QQ,S,T}^{V'})=- e_\chi \epsilon_L^V$$ (see \cite[Proposition 5.2]{R} or \cite[Proposition 3.6]{sano}), we have $$e_\chi \epsilon_{L/\QQ,S,T}^{V'}=- \chi({\rm Ord}_\frp(\kappa(L,\gamma)))^{-1} e_\chi \epsilon_L^V \wedge \kappa(L,\gamma).$$ Hence we have \begin{eqnarray} {\rm Rec}_\frp(e_\chi \epsilon_{L/\QQ,S,T}^{V'})&=& \chi({\rm Ord}_\frp(\kappa(L,\gamma)))^{-1}e_\chi \kappa(L,\gamma)\cdot {\rm Rec}_\frp(\epsilon_L^V) \nonumber \\ &=&- e_\chi \kappa(L,\gamma)\otimes (\gamma-1), \nonumber \end{eqnarray} where the first equality follows by noting that ${\rm Rec}_\frp(\kappa(L,\gamma))=0$ (since $\kappa(L,\gamma)$ lies in the universal norm by definition), and the second by (\ref{solomon eq}). Now, noting that $$\nu_n: U_{L,S,T}\otimes_{\ZZ_p}I(\Gamma_n)/I(\Gamma_n)^2 \hookrightarrow U_{L_n,S,T}\otimes_{\ZZ_p} \ZZ_p[\Gamma_n]/I(\Gamma_n)^2$$ is induced by the inclusion map $L\hookrightarrow L_n$, and that $$\mathcal{N}_n(\epsilon_{L_n/\QQ,S,T}^V)=\kappa_n \otimes (\gamma-1),$$ it is easy to see that the element $\kappa:=\kappa(L,\gamma)\otimes(\gamma-1)$ has the properties in the statement of Conjecture ${\rm MRS}(K_\infty/\QQ,S,T,\chi,V')$. This completes the proof of the claimed result. \end{proof} \begin{lemma} \label{lemma basis} Assume that $k=\QQ$ and $\chi \neq 1$ is even such that $\chi(p)=1$. Assume also that $S=\{\infty,p\}\cup S_{\rm ram}(L/\QQ)$. Then, $\{ e_\chi \epsilon_{L/\QQ,S\setminus\{p\},T}^V, e_\chi \kappa(L,\gamma)\}$ is a $\CC_p$-basis of $e_\chi \CC_p U_{L,S}$. \end{lemma} \begin{proof} This result follows from \cite[Remark 4.4]{solomon2}.
But we give a sketch of another proof, which is essentially given by Flach in \cite{flachsurvey}. In the next section, we define the `Bockstein map' $$\beta: e_\chi \CC_p U_{L,S} \to e_\chi \CC_p(\cX_{L,S}\otimes_{\ZZ_p}I(\Gamma)/I(\Gamma)^2).$$ We see that $\beta$ is injective on $e_\chi \CC_p U_L$, and that $$\ker \beta\simeq U_{L_\infty,S} \otimes_\Lambda \CC_p,$$ where we put $\Lambda:=\ZZ_p[[\G]]$ and $\CC_p$ is regarded as a $\Lambda$-algebra via $\chi$. Hence we have $$e_\chi \CC_p U_{L,S}= e_\chi \CC_p U_{L} \oplus (U_{L_\infty,S}\otimes_\Lambda \CC_p).$$ Since $e_\chi\epsilon_{L/\QQ,S\setminus\{p\},T}^V$ is non-zero, it is a basis of $e_\chi \CC_p U_{L,S\setminus\{p\}}=e_\chi \CC_p U_{L}$. We prove that $e_\chi \kappa(L,\gamma)$ is a basis of $U_{L_\infty,S}\otimes_\Lambda \CC_p$. By using the exact sequence $$0 \to U_{L_\infty,S} \stackrel{\gamma-1}{\to} U_{L_\infty,S} \to U_{L,S},$$ we see that there exists a unique element $\alpha \in U_{L_\infty,S}$ such that $(\gamma-1)\alpha=\epsilon_{L_\infty/\QQ,S,T}^V$. By the cyclotomic Iwasawa main conjecture over $\QQ$, we see that $\alpha$ is a basis of $U_{L_\infty,S}\otimes_\Lambda \Lambda_{\frp_\chi}$, where $\frp_\chi:=\ker(\chi:\Lambda \to \CC_p)$. The image of $\alpha$ under the map $$U_{L_\infty,S}\otimes_\Lambda \Lambda_{\frp_\chi} \stackrel{\chi}{\to} U_{L_\infty,S}\otimes_\Lambda \CC_p \hookrightarrow e_\chi \CC_p U_{L,S}$$ is equal to $e_\chi \kappa(L,\gamma)$.\end{proof} \section{A strategy for proving the eTNC} \label{descent section} \subsection{Statement of the main result and applications} In the sequel we fix an intermediate field $L$ of $K_\infty/k$ which is finite over $k$ and set $G:=\Gal(L/k)$. In this section we always assume the following conditions to be satisfied: \begin{itemize} \item[(R)] for every $\chi \in \widehat G$, one has $r_{\chi,S}<\# S$; \item[(S)] no finite place of $k$ splits completely in $k_\infty$.
\end{itemize} \begin{remark} Before proceeding we note that condition (R) is very mild since it is automatically satisfied when the class number of $k$ is equal to one and, for any $k$, is satisfied when $S$ is large enough. We also note that condition (S) is satisfied when, for example, $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension. \end{remark} The following result is one of the main results of this article and, as we will see, it provides an effective strategy for proving the special case of the eTNC that we are considering here. \begin{theorem} \label{mainthm} Assume the following conditions: \begin{itemize} \item[(hIMC)] The main conjecture ${\rm IMC}(K_\infty/k,S,T)$ is valid; \item[(F)] for every $\chi$ in $\widehat G$, the module of $\Gamma_{\chi}$-coinvariants of $A_S^T(L_{\chi,\infty})$ is finite; \item[(MRS)] for every $\chi$ in $\widehat G$, Conjecture ${\rm MRS}(K_\infty/k,S,T,\chi,V_\chi')$ is valid for a maximal set $V_\chi'$ (so that $\# V_\chi'={\rm min}\{ \#\{ v\in S \mid \chi(G_v)=1\}, \#S-1\}).$ \end{itemize} Then, the conjecture ${\rm eTNC}(h^0(\Spec L),\ZZ_p[G])$ is valid. \end{theorem} \begin{remark} We note that the set $V_\chi'$ in condition (MRS) is not uniquely determined when every place $v$ in $S$ satisfies $\chi(G_v)=1$, but that the validity of the conjecture ${\rm MRS}(K_\infty/k,S,T,\chi,V_\chi')$ is independent of the choice of $V_\chi'$ (by Proposition \ref{prop mrs}(iii)). \end{remark} \begin{remark} One checks easily that the condition (F) is equivalent to the finiteness of the module of $\Gamma_\chi$-coinvariants of $A_S(L_{\chi,\infty})$. Hence, taking account of an observation of Kolster in \cite[Theorem 1.14]{kolster}, condition (F) can be regarded as a natural generalization of the Gross conjecture \cite[Conjecture 1.15]{Gp}.
In particular, we recall that condition (F) is satisfied in each of the following cases: \begin{itemize} \item $L$ is abelian over $\QQ$ (due to Greenberg, see \cite{greenberg}), \item $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension and $L$ has a unique $p$-adic place (in this case `$\delta_L=0$' holds obviously, see \cite{kolster}), \item $L$ is totally real and the Leopoldt conjecture is valid for $L$ at $p$ (see \cite[Corollary 1.3]{kolster}). \end{itemize} \end{remark} \begin{remark} The condition (MRS) is satisfied for $\chi$ in $\widehat G$ when the condition ${\rm NTZ}(K_\infty/k,\chi)$ is satisfied (see Corollary \ref{nontrivzeros}). \end{remark} As an immediate corollary of Theorem \ref{mainthm}, we obtain a new proof of a theorem that was first proved by Greither and the first author \cite{bg} for $p$ odd, and by Flach \cite{fg} for $p=2$. \begin{corollary} \label{burnsgreither} If $k=\QQ$, then the conjecture ${\rm eTNC}(h^0(\Spec L),\ZZ_p[G])$ is valid. \end{corollary} \begin{proof} As we mentioned above, the conditions (R), (S) and (F) are all satisfied in this case. In addition, the condition (hIMC) is a direct consequence of the classical Iwasawa main conjecture solved by Mazur and Wiles (see \cite{bg} and \cite{fg}) and the condition (MRS) is satisfied by Theorem \ref{solomonFG}. \end{proof} We also obtain a result over totally real fields. \begin{corollary} Suppose that $p$ is odd, $k$ is totally real, $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension, and $K$ is CM. Assume that (F) is satisfied, that the $\mu$-invariant of $K_\infty/K$ vanishes, and that for every odd character $\chi \in \widehat G$ Conjecture ${\rm GS}(L_\chi/k,S,T,\chi)$ is valid. Then, Conjecture ${\rm eTNC}(h^0(\Spec L), \ZZ_p[G]^-)$ is valid. \end{corollary} \begin{proof} We take $S$ so that condition (R) is satisfied. Then the minus part of condition (hIMC) is satisfied by Theorem \ref{CM theorem}(i) and the minus part of condition (MRS) by Theorem \ref{GS}.
\end{proof} When at most one $p$-adic place $\mathfrak{p}$ of $k$ satisfies $\chi(G_\mathfrak{p})=1$, Dasgupta, Darmon and Pollack proved the validity of Conjecture ${\rm GS}(L_\chi/k,S,T,\chi)$ under some assumptions including Leopoldt's conjecture (see \cite{DDP}). Recently in the same case Ventullo asserts in \cite{ventullo} that Conjecture ${\rm GS}(L_\chi/k,S,T,\chi)$ is unconditionally valid. In this case condition (F) is also valid by the argument of Gross in \cite[Proposition 2.13]{Gp}. Hence we get the following \begin{corollary} \label{MC1} Suppose that $p$ is odd, $k$ is totally real, $k_\infty/k$ is the cyclotomic $\ZZ_p$-extension, and $K$ is CM. Assume that the $\mu$-invariant of $K_\infty/K$ vanishes, and that for each odd character $\chi \in \widehat G$ there is at most one $p$-adic place $\frp$ of $k$ which satisfies $\chi(G_\frp)=1$. Then, Conjecture ${\rm eTNC}(h^0(\Spec L), \ZZ_p[G]^-)$ is valid. \end{corollary} \begin{examples} \label{RemarkExample} It is not difficult to find many concrete families of examples which satisfy all of the hypotheses of Corollary \ref{MC1} and hence to deduce the validity of ${\rm eTNC}(h^0(\Spec L), \ZZ_p[G]^-)$ in some new and interesting cases. In particular, we shall now describe several families of examples in which the extension $k/\QQ$ is not abelian (noting that if $L/\QQ$ is abelian and $k \subset L$, then ${\rm eTNC}(h^0(\Spec L), \ZZ_p[G])$ is already known to be valid). \ \noindent{}(i) The case $p=3$. As a simple example, we consider the case that $k/\QQ$ is a $S_{3}$-extension. To do this we fix an irreducible cubic polynomial $f(x)$ in $\ZZ[x]$ with discriminant $27d$ where $d$ is strictly positive and congruent to $2$ modulo $3$. (For example, one can take $f(x)$ to be $x^3-6x-3$, $x^3-15x-3$, etc.) The minimal splitting field $k$ of $f(x)$ over $\QQ$ is then totally real (since $27d>0$) and an $S_{3}$-extension of $\QQ$ (since $27d$ is not a square). 
Also, since the discriminant of $f(x)$ is divisible by $27$ but not $81$, the prime $3$ is totally ramified in $k$. Now set $p := 3$ and $K:=k(\mu_{p})=k(\sqrt{-p})=k(\sqrt{-d})$. Then the prime above $p$ splits in $K/k$ because $-d \equiv 1$ (mod $3$). In addition, as $K/\QQ(\sqrt{d}, \sqrt{-p})$ is a cyclic cubic extension, the $\mu$-invariant of $K_{\infty}/K$ vanishes and so the extension $K/k$ satisfies all the conditions of Corollary \ref{MC1} (with $p=3$). \ \noindent{}(ii) The case $p>3$. In this case one can construct a suitable field $K$ in the following way. Fix a primitive $p$-th root of unity $\zeta$, an integer $i$ such that $1 \leq i \leq (p-3)/2$ and an integer $b$ which is prime to $p$, and then set \[ a:=(1+b(\zeta-1)^{2i+1})/(1+b(\zeta^{-1}-1)^{2i+1}).\] Write $\ord_{\pi}$ for the normalized additive valuation of $\QQ(\mu_{p})$ associated to the prime element $\pi=\zeta-1$. Then, since $\ord_{\pi}(a-1)=2i+1<p$, $(\pi)$ is totally ramified in $\QQ(\mu_{p}, \sqrt[p]{a})/\QQ(\mu_{p})$. Also, since $\rho(a)=a^{-1}$ where $\rho$ is the complex conjugation, $\QQ(\mu_{p}, \sqrt[p]{a})$ is the composite of a cyclic extension of $\QQ(\mu_{p})^{+}$ of degree $p$ and $\QQ(\mu_{p})$. This shows that $\QQ(\mu_{p}, \sqrt[p]{a})$ is a CM-field and, since $1 < 2i+1 <p$, the extension $\QQ(\mu_{p}, \sqrt[p]{a})^{+}/\QQ$ is non-abelian. We now take a negative integer $-d$ which is a quadratic residue modulo $p$, let $K$ denote the CM-field $\QQ(\mu_{p}, \sqrt[p]{a}, \sqrt{-d})$ and set $k:=K^+$. Then $p$ is totally ramified in $k/\QQ$ and the $p$-adic prime of $k$ splits in $K$. In addition, $k/\QQ$ is not abelian and the $\mu$-invariant of $K_{\infty}/K$ vanishes since $K/\QQ(\mu_{p}, \sqrt{-d})$ is cyclic of degree $p$. This shows that the extension $K/k$ satisfies all of the hypotheses of Corollary \ref{MC1}.
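The valuation claim in case (ii) can be checked numerically for the smallest admissible parameters. The sketch below is our own (using sympy); it reads the element as $a=(1+b(\zeta-1)^{2i+1})/(1+b(\zeta^{-1}-1)^{2i+1})$, the form for which $\rho(a)=a^{-1}$ holds, and uses that $\pi$ is the unique prime of $\QQ(\mu_p)$ above $p$ with $N(\pi)=p$, so ${\rm ord}_\pi(y)={\rm ord}_p({\rm Res}(\Phi_p,y))$ for $y\in\ZZ[\zeta]$.

```python
from sympy import symbols, expand, rem, resultant, cyclotomic_poly

x = symbols('x')
p, b, i = 5, 1, 1            # smallest admissible case: 1 <= i <= (p-3)/2
e = 2 * i + 1
Phi = cyclotomic_poly(p, x)  # minimal polynomial of zeta = zeta_p

# zeta^{-1} = zeta^{p-1} in Z[zeta] = Z[x]/(Phi_p)
num = rem(expand(1 + b * (x - 1) ** e), Phi, x)
den = rem(expand(1 + b * (x ** (p - 1) - 1) ** e), Phi, x)

# a - 1 = (num - den)/den and den is a pi-unit, so
# ord_pi(a - 1) = ord_pi(num - den)
d = expand(num - den)

# ord_pi via the norm: N(d(zeta)) = +/- Res(Phi_p, d)
nrm = abs(int(resultant(Phi, d, x)))
v = 0
while nrm % p == 0:
    nrm //= p
    v += 1
print(v)  # 3 = 2*i + 1, as claimed
```

For these parameters $a-1=(\zeta-1)^3(1+\zeta^{-3})/den$, and $1+\zeta^{-3}$ has norm $1$, so the valuation is exactly $3$.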
\ \noindent{}(iii) In both of the cases (i) and (ii) described above, $p$ is totally ramified in the extension $k_{\infty}/\QQ$ and so Corollary \ref{MC1} implies that ${\rm eTNC}(h^0(\Spec K_{n}), \ZZ_p[G]^-)$ is valid for any non-negative integer $n$. In addition, if $F$ is any real abelian field of degree prime to $[k: \QQ]$ in which $p$ is totally ramified, the minus component of the $p$-part of eTNC for $FK_{n}/k$ holds for any non-negative integer $n$. \end{examples} \begin{remark} Finally we note that, by using similar methods to the proofs of the above corollaries it is also possible to deduce the main result of Bley \cite{bley} as a consequence of Theorem \ref{mainthm}. In this case $k$ is imaginary quadratic, the validity of (hIMC) can be derived from Rubin's result in \cite{rubinIMC} (as explained in \cite{bley}), and the conjecture (MRS) from Bley's result \cite{bleysolomon}, which is itself an analogue of Solomon's theorem \cite{solomon} for elliptic units, by using the same argument as Theorem \ref{solomonFG}. \end{remark} \subsection{A computation of Bockstein maps} Fix a character $\chi \in \widehat G$. For simplicity, we set \begin{itemize} \item $L_n:=L_{\chi,n}$; \item $L:=L_\chi$; \item $V:=V_\chi=\{ v\in S \mid \text{$v$ splits completely in $L_{\chi,\infty}$}\}$; \item $r:=r_\chi=\# V_\chi$; \item $V':=V_{\chi}'$ (as in (MRS) in Theorem \ref{mainthm}); \item $r':=r_{\chi,S}=\# V' $; \item $e:=r'-r$. \end{itemize} As in \S \ref{formulate mrs}, we label $S=\{v_0,v_1,\ldots\}$ so that $V=\{v_1,\ldots,v_r\}$ and $V'=\{ v_1,\ldots ,v_{r'}\}$, and fix a place $w$ lying above each $v \in S$. Also, as in \S \ref{section explicit}, it will be useful to fix a representative of $C_{K_\infty,S,T}$: $$\Pi_{K_\infty} \to \Pi_{K_\infty},$$ where the first term is placed in degree zero, and $\Pi_{K_\infty}$ is a free $\Lambda$-module with basis $\{b_1,\ldots,b_d\}$. 
This representative is chosen so that the natural surjection $$\Pi_{K_\infty} \to H^1(C_{K_\infty,S,T}) \to \cX_{K_\infty,S}$$ sends $b_i$ to $w_i-w_0$ for every $i$ with $1\leq i \leq r'$. We define a height one regular prime ideal of $\Lambda$ by setting \[ \mathfrak{p}:=\ker(\Lambda \stackrel{\chi}{\to} \QQ_p(\chi):=\QQ_p(\im \chi)).\] Then the localization $R:=\Lambda_{\mathfrak{p}}$ is a discrete valuation ring and we write $P$ for its maximal ideal. We see that $\chi$ induces an isomorphism $$E:=R/P \stackrel{\sim}{\to} \QQ_p(\chi).$$ We set $C:=C_{K_\infty,S,T}\otimes_\Lambda R$ and $\Pi:=\Pi_{K_\infty}\otimes_{\Lambda}R$. \begin{lemma} \label{uniformizer} Let $\gamma$ be a topological generator of $\Gamma=\Gal(K_\infty/K)$. Let $n$ be an integer which satisfies $\gamma^{p^n} \in \Gal(K_\infty/L)$. Then $\gamma^{p^n}-1$ is a uniformizer of $R$. \end{lemma} \begin{proof} Regard $\chi \in \widehat \G$, and put $\chi_1:=\chi|_\Delta \in \widehat \Delta$. We identify $R$ with the localization of $\Lambda_{\chi_1}[1/p]=\ZZ_p[\im \chi_1][[\Gamma]][1/p]$ at $\mathfrak{q}:=\ker(\Lambda_{\chi_1}[1/p] \stackrel{\chi|_\Gamma}{\to} \QQ_p(\chi))$. Then the lemma follows by noting that the localization of $\Lambda_{\chi_1}[1/p]/(\gamma^{p^n}-1)=\ZZ_p[\im \chi_1][\Gamma_n][1/p]$ at $\mathfrak{q}$ is identified with $\QQ_p(\chi)$. \end{proof} \begin{lemma} \label{keylemma} Assume that the condition (F) is satisfied. \begin{itemize} \item[(i)] $H^0(C)$ is isomorphic to $U_{K_\infty,S,T}\otimes_\Lambda R$, and $R$-free of rank $r$. \item[(ii)] $H^1(C)$ is isomorphic to $\cX_{K_\infty,S}\otimes_\Lambda R$. \item[(iii)] The maximal $R$-torsion submodule $H^1(C)_{\rm tors}$ of $H^1(C)$ is isomorphic to $\cX_{K_\infty,S\setminus V}\otimes_\Lambda R$, and annihilated by $P$. (So $H^1(C)_{\rm tors}$ is an $E$-vector space.) \item[(iv)] $H^1(C)_{\rm tf}:=H^1(C)/H^1(C)_{\rm tors}$ is isomorphic to $\cY_{K_\infty,V}\otimes_\Lambda R$ and is therefore $R$-free of rank $r$. 
\item[(v)] $\dim_E(H^1(C)_{\rm tors})=e$. \end{itemize} \begin{proof} Since $U_{K_\infty,S,T}\otimes_{\Lambda}R=H^0(C)$ is regarded as a submodule of $\Pi$, we see that $U_{K_\infty,S,T}\otimes_{\Lambda}R$ is $R$-free. Put $\chi_1:=\chi |_\Delta \in \widehat \Delta$. Note that $L_{\infty}:=L_{\chi,\infty}=L_{\chi_1,\infty}$, and that the quotient field of $R$ is $Q(\Lambda_{\chi_1})$. As in the proof of Theorem \ref{lemisom}, we have $$U_{K_\infty,S,T}\otimes_\Lambda Q(\Lambda_{\chi_1}) \simeq \cY_{L_\infty,V}\otimes_{\ZZ_p[[\G_\chi]]} Q(\Lambda_{\chi_1}).$$ These are $r$-dimensional $Q(\Lambda_{\chi_1})$-vector spaces. This proves (i). To prove (ii), it is sufficient to show that $A_S^T(K_\infty)\otimes_{\Lambda}R=0$. Fix a topological generator $\gamma$ of $\Gamma$, and regard $\ZZ_p[[\Gamma]]$ as the ring of power series $\ZZ_p[[T]]$ via the identification $\gamma=1+T$. Let $f$ be the characteristic polynomial of the $\ZZ_p[[T]]$-module $A_S^T(L_\infty)$. By Lemma \ref{uniformizer}, for sufficiently large $n$, $\gamma^{p^n}-1$ is a uniformizer of $R$. On the other hand, by the assumption (F), we see that $f$ is prime to $\gamma^{p^n}-1$. This implies (ii). We prove (iii). To prove that $H^1(C)_{\rm tors}$ is isomorphic to $\cX_{K_\infty,S\setminus V}\otimes_\Lambda R$, it is sufficient to show that $$\cX_{K_\infty,S}\otimes_\Lambda Q(\Lambda_{\chi_1}) \simeq \cY_{K_\infty,V}\otimes_{\Lambda}Q(\Lambda_{\chi_1}),$$ by (ii). This has been shown in the proof of Theorem \ref{lemisom}. We prove that $\cX_{K_\infty,S\setminus V}\otimes_\Lambda R$ is annihilated by $P$. Note that $$\cX_{K_\infty,S\setminus V}\otimes_\Lambda R=\cX_{K_\infty,S \setminus (V\cup S_\infty)}\otimes_\Lambda R,$$ since the complex conjugation $c$ at $v \in S_\infty\setminus (V\cap S_\infty)$ is non-trivial in $G_{\chi_1}$, and hence $c-1\in R^\times$.
Hence, it is sufficient to show that, for every $v \in S\setminus (V\cup S_\infty)$, there exists $\sigma \in G_v \cap \Gamma$ such that $\sigma -1$ is a uniformizer of $R$, where $G_v\subset \G$ is the decomposition group at a place of $K_\infty$ lying above $v$. Thanks to the assumption (S), we find such $\sigma$ by Lemma \ref{uniformizer}. The assertion (iv) is immediate from the above argument. The assertion (v) follows from (iii), (iv), and that $$\cX_{K_\infty,S}\otimes_\Lambda E\simeq \cX_{L,S}\otimes_{\ZZ_p[G_\chi]}\QQ_p(\chi)\simeq e_\chi\QQ_p(\chi)\cX_{L,S}\simeq e_\chi\QQ_p(\chi)\cY_{L,V'}$$ is an $r'$-dimensional $E$-vector space. \end{proof} In the following for any $R$-module $M$ we often denote $M\otimes_R E$ by $M_E$. Also, we assume that (F) is satisfied. \begin{definition} The `Bockstein map' is the map \begin{eqnarray} \beta: H^0(C_E) &\to& H^1(C\otimes_R P) \nonumber \\ &=& H^1(C)\otimes_R P \nonumber \\ &\to & H^1(C_E)\otimes_E P/P^2\nonumber \end{eqnarray} induced by the exact triangle $$C\otimes_R P \to C \to C_E.$$ \end{definition} Note that there are canonical isomorphisms $$H^0(C_E)\simeq U_{L,S,T}\otimes_{\ZZ_p[G_\chi]}\QQ_p(\chi) \simeq e_\chi \QQ_p(\chi) U_{L,S,T},$$ $$H^1(C_E)\simeq \cX_{L,S}\otimes_{\ZZ_p[G_\chi]}\QQ_p(\chi) \simeq e_\chi \QQ_p(\chi)\cX_{L,S}\simeq e_\chi \QQ_p(\chi) \cY_{L,V'},$$ where $\QQ_p(\chi)$ is regarded as a $\ZZ_p[G_\chi]$-algebra via $\chi$. Note also that $P$ is generated by $\gamma^{p^n}-1$ with sufficiently large $n$, where $\gamma$ is a fixed topological generator of $\Gamma$ (see Lemma \ref{uniformizer}). There is a canonical isomorphism $$I(\Gamma_\chi)/I(\Gamma_\chi)^2\otimes_{\ZZ_p}\QQ_p(\chi) \simeq P/P^2,$$ where $I(\Gamma_\chi)$ denotes the augmentation ideal of $\ZZ_p[[\Gamma_\chi]].$ (Note that $\Gamma=\Gal(K_\infty/K)$ and $\Gamma_\chi=\Gal(L_\infty/L)$.) 
Thus, the Bockstein map is regarded as the map $$\beta: e_\chi\QQ_p(\chi)U_{L,S,T} \to e_\chi \QQ_p(\chi)(\cX_{L,S} \otimes_{\ZZ_p}I(\Gamma_\chi)/I(\Gamma_\chi)^2)\simeq e_\chi \QQ_p(\chi)(\cY_{L,V'} \otimes_{\ZZ_p}I(\Gamma_\chi)/I(\Gamma_\chi)^2).$$ \begin{proposition} \label{proprec} The Bockstein map $\beta$ is induced by the map $$U_{L,S,T} \to \cX_{L,S}\otimes_{\ZZ_p} I(\Gamma_\chi)/I(\Gamma_\chi)^2$$ given by $a \mapsto \sum_{w \in S_L} w \otimes ({\rm rec}_w(a)-1)$. \end{proposition} \begin{proof} The proof is the same as that of \cite[Lemma 5.8]{flachsurvey}. We sketch the proof given in loc. cit. Take $n$ so that the image of $\gamma^{p^n} \in \Gal(K_\infty/L)$ in $\Gal(L_\infty/L)=\Gamma_\chi$ is a generator. We regard $\gamma^{p^n} \in \Gamma_\chi$. Define $\theta\in H^1(L,\ZZ_p)=\Hom(G_L,\ZZ_p)$ by $\gamma^{p^n}\mapsto 1$. Define $$\beta' : e_\chi\QQ_p(\chi)U_{L,S,T} \to e_\chi \QQ_p(\chi)(\cX_{L,S} \otimes_{\ZZ_p}I(\Gamma_\chi)/I(\Gamma_\chi)^2) \stackrel{\sim}{\to} e_\chi \QQ_p(\chi)\cX_{L,S}$$ by $\beta(a)=\beta'(a)\otimes (\gamma^{p^n}-1)$. Then, $\beta'$ is induced by the cup product $$\cdot \cup \theta : \QQ_p U_{L,S}\simeq H^1(\mathcal{O}_{L,S},\QQ_p(1)) \to H^2(\mathcal{O}_{L,S},\QQ_p(1))\simeq \QQ_p \cX_{L,S\setminus S_\infty}.$$ By class field theory we see that $\beta$ is induced by the map $a \mapsto \sum_{w \in S_L\setminus S_\infty(L)} w \otimes ({\rm rec}_w(a)-1)$. Since ${\rm rec}_w(a)=1\in \Gamma_\chi$ for all $w \in S_\infty(L)$, the proposition follows. \end{proof} \begin{proposition} \label{ker coker} We have canonical isomorphisms $$\ker \beta\simeq H^0(C)_E$$ and $$\coker \beta\simeq H^1(C)_{\rm tf}\otimes_R P/P^2.$$ \end{proposition} \begin{proof} Let $\delta$ be the boundary map $H^0(C_E) \to H^1(C\otimes_R P)=H^1(C) \otimes_R P$.
We have $$\ker \delta\simeq \coker (H^0(C\otimes_R P)\to H^0(C)) = H^0(C)_E$$ and $$\im \delta =\ker (H^1(C)\otimes_R P \to H^1(C))=H^1(C)[P] \otimes_R P,$$ where $H^1(C)[P]$ is the submodule of $H^1(C)$ which is annihilated by $P$. By Lemma \ref{keylemma}(iii), we know $H^1(C)[P]=H^1(C)_{\rm tors}$. Hence, the natural map $$H^1(C)\otimes_R P \to H^1(C) \otimes_R P/P^2 \simeq H^1(C)_E \otimes_E P/P^2\simeq H^1(C_E) \otimes_E P/P^2$$ is injective on $H^1(C)_{\rm tors}\otimes_R P$. From this we see that $\ker \beta \simeq H^0(C)_E$. We also have $$\coker \beta \simeq \coker (H^1(C)_{\rm tors} \otimes_R P \to H^1(C)\otimes_R P/P^2)\simeq H^1(C)_{\rm tf} \otimes_R P/P^2.$$ Hence we have completed the proof. \end{proof} By Lemma \ref{keylemma}, we see that there are canonical isomorphisms $$H^0(C)_E \simeq U_{K_\infty,S,T} \otimes_\Lambda \QQ_p(\chi),$$ $$H^1(C)_E\simeq \cX_{K_\infty,S} \otimes_\Lambda \QQ_p(\chi),$$ $$H^1(C)_{{\rm tf}, E} \simeq \cY_{K_\infty,V} \otimes_\Lambda \QQ_p(\chi).$$ Hence, by Proposition \ref{ker coker}, we have the exact sequence \begin{multline*} 0 \to U_{K_\infty,S,T}\otimes_{\Lambda}\QQ_p(\chi) \to e_\chi\QQ_p(\chi)U_{L,S,T} \\ \stackrel{\beta}{\to} e_\chi \QQ_p(\chi) (\cY_{L,V'}\otimes_{\ZZ_p}I(\Gamma_\chi)/I(\Gamma_\chi)^2)\to \cY_{K_\infty,V} \otimes_\Lambda P/P^2 \to 0.
\end{multline*} This induces an isomorphism $$\widetilde \beta: e_\chi \QQ_p(\chi)(\bigwedge^{r'}U_{L,S,T} \otimes \bigwedge^{r'} \cY_{L,V'}^\ast )\stackrel{\sim}{\to} \bigwedge^r (U_{K_\infty,S,T}\otimes_\Lambda \QQ_p(\chi)) \otimes \bigwedge^r (\cY_{K_\infty,V}^\ast \otimes_{\Lambda}\QQ_p(\chi))\otimes P^e/P^{e+1}.$$ We have isomorphisms $$\bigwedge^{r'}\cY_{L,V'}^\ast \stackrel{\sim}{\to} \ZZ_p[G_\chi]; \ w_1^\ast \wedge \cdots \wedge w_{r'}^\ast \mapsto 1,$$ $$\bigwedge^r (\cY_{K_\infty,V}^\ast \otimes_\Lambda \QQ_p(\chi)) \stackrel{\sim}{\to} \QQ_p(\chi); \ w_1^\ast \wedge \cdots \wedge w_r^\ast \mapsto 1.$$ By these isomorphisms, we see that $\widetilde \beta$ induces an isomorphism $$e_\chi \QQ_p(\chi) \bigwedge^{r'} U_{L,S,T} \stackrel{\sim}{\to} \bigwedge^r(U_{K_\infty,S,T}\otimes_{\Lambda} \QQ_p(\chi)) \otimes P^e /P^{e+1},$$ which we denote also by $\widetilde \beta$. Note that we have a natural injection $$\bigwedge^r(U_{K_\infty,S,T}\otimes_{\Lambda} \QQ_p(\chi)) \otimes P^e /P^{e+1} \hookrightarrow e_\chi \QQ_p(\chi) (\bigwedge^r U_{L,S,T} \otimes_{\ZZ_p}I(\Gamma_\chi)^e/I(\Gamma_\chi)^{e+1}).$$ Composing this with $\widetilde \beta$, we have an injection $$\widetilde \beta: e_\chi \QQ_p(\chi) \bigwedge^{r'} U_{L,S,T} \hookrightarrow e_\chi \QQ_p(\chi) (\bigwedge^r U_{L,S,T} \otimes_{\ZZ_p}I(\Gamma_\chi)^e/I(\Gamma_\chi)^{e+1}).$$ By Proposition \ref{proprec}, we obtain the following \begin{proposition} \label{bock rec} Let $${\rm Rec}_\infty: \CC_p \bigwedge^{r'} U_{L,S,T} \rightarrow \CC_p(\bigwedge^r U_{L,S,T} \otimes_{\ZZ_p}I(\Gamma_\chi)^e/I(\Gamma_\chi)^{e+1})$$ be the map defined in \S \ref{formulate mrs}. Then we have $$(-1)^{re}e_\chi{\rm Rec}_\infty=\widetilde \beta.$$ In particular, $e_\chi{\rm Rec}_\infty$ is injective. \end{proposition} \subsection{The proof of the main result} \label{proof main result} In this section we prove Theorem \ref{mainthm}. We start with an important technical observation. 
Let $\Pi_n$ denote the free $\ZZ_p[\G_{\chi,n}]$-module $\Pi_{K_\infty}\otimes_{\Lambda} \ZZ_p[\G_{\chi,n}]$, and $I(\Gamma_{\chi,n})$ denote the augmentation ideal of $\ZZ_p[\Gamma_{\chi,n}]$. We recall from \cite[Lemma 5.19]{bks1} that the image of $$\pi_{L_n/k,S,T}^V: {\det}_{\ZZ_p[\G_{\chi,n}]}(C_{L_n,S,T})\to \bigwedge^r \Pi_n$$ is contained in $I(\Gamma_{\chi,n})^e\cdot \bigwedge^r \Pi_n$ (see Proposition \ref{explicit projector}(iii)) and also from \cite[Proposition 4.17]{bks1} that $\nu_n^{-1} \circ \mathcal{N}_n$ induces the map $$I(\Gamma_{\chi,n})^e \cdot \bigwedge^r \Pi_n \to \bigwedge^r\Pi_0 \otimes_{\ZZ_p}I(\Gamma_{\chi,n})^e/I(\Gamma_{\chi,n})^{e+1} .$$ \begin{lemma} \label{lemcomm1} There exists a commutative diagram $$\xymatrix{ {\det}_{\ZZ_p[\G_{\chi,n}]}(C_{L_n,S,T}) \ar[r] \ar[d]_{\pi_{L_n/k,S,T}^V} & {\det}_{\ZZ_p[G_\chi]}(C_{L,S,T}) \ar[d]^{\pi_{L/k,S,T}^{V'}} \\ I(\Gamma_{\chi,n})^e \cdot\bigwedge^r \Pi_n \ar[d]_{\nu_{n}^{-1} \circ \cN_{n}} & \bigcap^{r'} U_{L,S,T} \ar[d]^{(-1)^{re}{\rm Rec}_{n}} \\ \bigwedge^r\Pi_0 \otimes_{\ZZ_p}I(\Gamma_{\chi,n})^e/I(\Gamma_{\chi,n})^{e+1} & \bigcap^r U_{L,S,T}\otimes_{\ZZ}I(\Gamma_{\chi,n})^e/I(\Gamma_{\chi,n})^{e+1}. \ar[l]_\supset} $$ \end{lemma} \begin{proof} This follows from Proposition \ref{explicit projector}(iii) and \cite[Lemma 5.21]{bks1}. \end{proof} For any intermediate field $F$ of $K_\infty/k$, we denote by $\mathcal{L}_{F/k,S,T}$ the image of the (conjectured) element $\mathcal{L}_{K_\infty/k,S,T}$ of $ {\det}_\Lambda(C_{K_\infty,S,T})$ under the isomorphism $$\ZZ_p[[\Gal(F/k)]]\otimes_{\La}{\det}_\Lambda(C_{K_\infty,S,T}) \simeq {\det}_{\ZZ_p[[\Gal(F/k)]]}(C_{F,S,T}). 
$$ Note that, by the proof of Theorem \ref{imcrs}, we have $$\pi_{L_n/k,S,T}^{V}(\mathcal{L}_{L_n/k,S,T})=\epsilon_{L_n/k,S,T}^V.$$ Hence, Lemma \ref{lemcomm1} implies that $$(-1)^{re}{\rm Rec}_{n}(\pi_{L/k,S,T}^{V'}(\mathcal{L}_{L/k,S,T}))=\nu_{n}^{-1}\circ \cN_{n}(\epsilon_{L_{n}/k,S,T}^V)=:\kappa_n.$$ We set \[ \kappa:=(\kappa_n)_n \in \bigcap^r U_{L,S,T} \otimes_{\ZZ_p} \varprojlim_n I(\Gamma_{\chi,n})^e/I(\Gamma_{\chi,n})^{e+1}.\] Then the validity of Conjecture ${\rm MRS}(K_\infty/k,S,T,\chi,V')$ implies that $$e_\chi \kappa= (-1)^{re} e_\chi{\rm Rec}_\infty( \epsilon_{L/k,S,T}^{V'}).$$ In addition, by Proposition \ref{bock rec}, we know that $e_\chi{\rm Rec}_\infty$ is injective, and so $$\pi_{L/k,S,T}^{V'}(e_\chi \mathcal{L}_{L/k,S,T})=e_\chi \epsilon_{L/k,S,T}^{V'}.$$ Hence, by Proposition \ref{etncbyrs}, we see that Conjecture ${\rm eTNC}(h^0(\Spec L),\ZZ_p[G])$ is valid, as claimed.
\section{Introduction} \label{S:Introduction} \begin{table}[!tb] \caption{Nomenclature} \begin{center} {\footnotesize \begin{tabular}{ll} \hline $\mathbb{N}^{\ast}$; $\mathbb{N}_{a}^{\ast}$ & non-zero natural numbers; $\{1, \dots ,a\}, a \in N^{\ast}$\\ $i,k$; $j,l$; $n,m$ & class counters; subclass counters; observation counters \\ $\mathbf{x} \in \mathbb{R}^{F}$ & observation in $F$-dimensional real vector measurement space \\ $\mathbf{x}_{i}^n$ & $n$-th training observation of $i$-th class \\ $\mathbf{x}_{i,j}^n$ & $n$-th training observation of $j$-th subclass in class $i$\\ $\mathbf{m} \in \mathbb{R}^{F}$ & estimated total sample mean\\ $\mathbf{m}_{i}$, $\mathbf{m}_{i,j}$ & estimated sample mean of class $i$ and sublcass ($i,j$)\\ ${N}$; ${N}_i$; ${N}_{i,j}$ & total number of training observations; observations \\ & in $i$-th class; observations in $j$-th subclass of class $i$ \\ $p_i$, $p_{i,j}$ & estimated prior of $i$-th class and ($i,j$) subclass \\ $\mathbf{n}_{L} \in \mathbb{R}^{L}$ & vector with all elements equal to $L^{-1}$\\ $\mathbf{0}_{L} \in \mathbb{R}^{L}$ & vector with all elements equal to zero\\ $\mathbf{1}_{L} \in \mathbb{R}^{L}$ & vector with all elements equal to one\\ $\mathbf{J}_{L \times L} \in \mathbb{R}^{L \times L}$ & matrix with all elements equal to one\\ $\mathbf{X}_{i} \in \mathbb{R}^{F \times N_{i}}$ & matrix of observations in class $i$\\ $\mathbf{X}_{i,j} \in \mathbb{R}^{F \times N_{i,j}}$ & matrix of observations in subclass $(i,j)$\\ $\mathbf{X} \in \mathbb{R}^{F \times N}$ & block matrix of all training observations\\ $\mathbf{S}_t \in \mathbb{R}^{F \times F}$ & total scatter matrix\\ $\mathbf{S}_w \in \mathbb{R}^{F \times F}$ & within-class scatter matrix\\ $\mathbf{S}_b \in \mathbb{R}^{F \times F}$ & between-class scatter matrix\\ $\mathbf{S}_{bsb} \in \mathbb{R}^{F \times F}$ & inter-between-subclass scatter matrix\\ $\mathbf{B}_t \in \mathbb{R}^{N \times N}$ & matrix satisfying $\mathbf{S}_t = \mathbf{X} \mathbf{B}_t 
\mathbf{X}^T$\\ $\mathbf{B}_w \in \mathbb{R}^{N \times N}$ & matrix satisfying $\mathbf{S}_w = \mathbf{X} \mathbf{B}_w \mathbf{X}^T$\\ $\mathbf{B}_b \in \mathbb{R}^{N \times N}$ & matrix satisfying $\mathbf{S}_b = \mathbf{X} \mathbf{B}_b \mathbf{X}^T$\\ $\mathbf{B}_{bsb} \in \mathbb{R}^{N \times N}$ & matrix satisfying $\mathbf{S}_{bsb} = \mathbf{X} \mathbf{B}_{bsb} \mathbf{X}^T$\\ $[\mathbf{B}]_{i,n;k,m}$ & element of matrix $\mathbf{B}$ corresponding to $\mathbf{x}_{i}^n$, $\mathbf{x}_{k}^m$\\ $[\mathbf{B}]_{i,j,n;k,l,m}$ & element of matrix $\mathbf{B}$ corresponding to $\mathbf{x}_{i,j}^n$, $\mathbf{x}_{k,l}^m$\\ $\mathcal{R}(\mathbf{B})$ & range space of matrix $\mathbf{B}$\\ $\mathcal{N}(\mathbf{B})$ & null space of matrix $\mathbf{B}$\\ \hline \end{tabular} } \end{center} \label{tbl:SpeedupRates} \end{table} Nonlinear discriminant analysis (NDA) using kernel functions is a fundamental dimensionality reduction (DR) technique with applications in several domains such as machine learning, information retrieval, pattern detection, visualization and others. The NDA optimization problem is reformulated as a generalized eigenproblem (GEP). The mathematical treatment of GEP is one of the most aesthetically pleasing problems in all numerical algebra. This fact contributed to the increasing interest in NDA-based techniques. The major advantage over linear DA (LDA) is that it captures data nonlinearities, thus offering a more informative feature representation in most real-world problems. Moreover, in comparison to other popular component analysis (CA) techniques (e.g. PCA, ICA, LLE) it offers a much more compact representation by projecting the data to a very low-dimensional space while preserving the clustering structure of the specific task.
For instance, in classification tasks the final classifier in the projection subspace deals more effectively with the curse of dimensionality\footnote{The curse of dimensionality problem states that phenomena described in high-dimensional spaces require many more parameters to capture their properties.}, thus increasing the generalization performance and speeding up the classifier. The latter is very important in big data problems. In visualization tasks it offers more informative representations, preserving valuable information through its extreme dimensionality reduction capabilities even in two or three dimensions. However, the computational cost of computing the eigenpairs of NDA has prohibited this technology from widespread use in today's big data problems. Recently, a number of methods have been proposed to speed up NDA techniques. Here we provide a general framework accelerating the generalized eigenvalue decomposition of NDA approaches. Moreover, we show that the proposed method provides improved detection rates and computational efficiency when combined with LSVM, over the use of LSVM and KSVM alone. \section{Fundamentals of nonlinear discriminant analysis} \label{S:FUNDAMENTALS_NDA} A training set in the input space may be described using a block data matrix \begin{equation} \mathbf{X} = [\mathbf{X}_1, \dots, \mathbf{X}_C] \in \mathbb{R}^{L \times N}, \end{equation} which represents an annotated training set of $C$ classes and $N$ observations $\mathbf{x}_n$, $n \in \mathbb{N}_{N}^{\ast}$, in the input space $\mathbb{R}^L$, where the $i$th block $\mathbf{X}_i$, $i \in \mathbb{N}_C^{\ast}$, contains the $N_i$ observations $\mathbf{x}_n, n \in Y_i \subseteq \mathbb{N}_{N}^{\ast}$ of class $i$, and $Y_i$ is the respective set of indices.
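The block-matrix notation above can be made concrete with a few lines of numpy (a toy sketch of our own; the dimension $L=3$ and the class sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-class observation matrices X_i in R^{L x N_i}, with L = 3
X_blocks = [rng.normal(size=(3, n_i)) for n_i in (4, 2, 5)]

# Annotated training set as one block matrix X = [X_1, ..., X_C]
X = np.hstack(X_blocks)
print(X.shape)  # (3, 11): L x N with N = 4 + 2 + 5

# Index sets Y_i recording which columns of X belong to class i
offsets = np.cumsum([0] + [b.shape[1] for b in X_blocks])
Y = [list(range(offsets[i], offsets[i + 1])) for i in range(3)]
print(Y[1])  # columns of the second class: [4, 5]
```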
In many real-world applications, class distributions are not linearly separable and the direct application of a linear classifier in the input space may provide poor performance. Kernel-based techniques address this problem by utilizing a vector-valued function $\bg{\phi} (\cdot)$ to map observations non-linearly from the input space into some feature space $\mathcal{F} \subseteq \mathbb{R}^F$ where the data are expected to be linearly separable \cite{Muller01} \begin{eqnarray} \bg{\phi} (\cdot) : \mathbb{R}^L &\rightarrow& \mathcal{F} \subseteq \mathbb{R}^F, \\ \mathbf{x} &\rightarrow& \bg{\phi} = \bg{\phi} (\mathbf{x}). \end{eqnarray} The problem is then reformulated in terms of dot products, which in turn are replaced with kernel function evaluations \begin{equation} \bg{\phi}_{n}^T \bg{\phi}_{s} = k (\mathbf{x}_{n}, \mathbf{x}_{s}) = k_{n,s}, \label{E:KernelFunction} \end{equation} where $k(\cdot, \cdot)$ is a Mercer kernel \cite{Mercer_09}. That is, we additionally require that $\bg{\phi} (\cdot)$ satisfies (\ref{E:KernelFunction}). This allows the use of traditional linear solvers in problems where the classes are nonlinearly separable in the input space, even when working directly in the feature space is intractable (e.g., when $F$ is large or even infinite). Moreover, by exploiting the kernel trick, the mapping remains implicit and the problem can be solved in the feature space without ever knowing the actual mapping.
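To make (\ref{E:KernelFunction}) concrete, the following NumPy sketch (our own helper, using an RBF kernel as one example of a Mercer kernel) evaluates the Gram matrix on toy data and checks the symmetry and positive semidefiniteness that the Mercer condition guarantees:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).

    X: (L, N) data matrix with observations as columns, Y: (L, M).
    Returns the (N, M) matrix of kernel evaluations.
    """
    # squared Euclidean distances via ||x||^2 + ||y||^2 - 2 x^T y
    sq_x = np.sum(X**2, axis=0)[:, None]   # (N, 1)
    sq_y = np.sum(Y**2, axis=0)[None, :]   # (1, M)
    d2 = np.maximum(sq_x + sq_y - 2.0 * X.T @ Y, 0.0)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))            # L = 5 features, N = 8 samples
K = rbf_kernel(X, X)                       # Gram matrix K = Phi^T Phi
assert K.shape == (8, 8)
assert np.allclose(K, K.T)                      # symmetric
assert np.all(np.linalg.eigvalsh(K) > -1e-10)   # PSD, as a Mercer kernel requires
```

The classifier never touches $\bg{\phi}(\mathbf{x})$ itself; all subsequent computations are driven by $\mathbf{K}$.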
Assuming that the block matrix representing the training set in the feature space is \begin{equation} \mathbf{\Phi} = [\mathbf{\Phi}_1, \dots, \mathbf{\Phi}_C] \in \mathbb{R}^{F \times N}, \; \mathbf{\Phi}_i \in \mathbb{R}^{F \times N_i}, \label{E:TRAIN_SET_FEA} \end{equation} nonlinear DA methods seek the linear transformation $\mathbf{\Psi}$ that simultaneously optimizes the following criteria in the feature space \begin{equation} \begin{array}{l} \displaystyle \underset{\mathbf{\Psi}}{\argmin} \trace(\mathbf{\Psi}^T \breve{\mathbf{S}} \mathbf{\Psi}), \quad \underset{\mathbf{\Psi}}{\argmax} \trace(\mathbf{\Psi}^T \mathbf{S}_b \mathbf{\Psi}), \label{E:NDA_MIN_MAX_CRIT} \end{array} \end{equation} where $\mathbf{S}_b$ is the between-class scatter matrix and $\breve{\mathbf{S}}$ is either the within-class scatter matrix $\mathbf{S}_w$ or the total scatter matrix $\mathbf{S}_t$. For the reformulation of (\ref{E:NDA_MIN_MAX_CRIT}) using dot products, we need an appropriate factorization of $\mathbf{\Psi}$, $\mathbf{S}_b$ and $\breve{\mathbf{S}}$ (i.e. $\mathbf{S}_w$ or $\mathbf{S}_t$). Starting with $\mathbf{\Psi}$, its solution space in $\mathbb{R}^F$ is restricted to $\spanvs(\mathbf{\Phi})$ \cite{Mika00,Park05_J}. This allows us to express each column of $\mathbf{\Psi}$ as a linear combination of the mapped training data \begin{equation} \mathbf{\Psi} = \mathbf{\Phi} \mathbf{W}, \label{E:G_linearCombination} \end{equation} where $\mathbf{W} \in \mathbb{R}^{N \times D}$ contains the expansion coefficients.
Substituting (\ref{E:G_linearCombination}) in (\ref{E:NDA_MIN_MAX_CRIT}), the optimization criterion becomes \begin{equation} \begin{array}{l} \displaystyle \underset{\mathbf{W}}{\argmin} \trace(\mathbf{W}^T \check{\mathbf{S}} \mathbf{W}), \quad \underset{\mathbf{W}}{\argmax} \trace(\mathbf{W}^T \check{\mathbf{S}}_b \mathbf{W}), \label{E:NDA_MIN_MAX_CRIT_FEA_SPACE} \end{array} \end{equation} where \begin{eqnarray} \check{\mathbf{S}}_b &=& \mathbf{\Phi}^T \mathbf{S}_b \mathbf{\Phi}, \label{E:Sb_KER}\\ \check{\mathbf{S}}_w &=& \mathbf{\Phi}^T \mathbf{S}_w \mathbf{\Phi}, \label{E:Sw_KER}\\ \check{\mathbf{S}}_t &=& \mathbf{\Phi}^T \mathbf{S}_t \mathbf{\Phi}, \label{E:St_KER} \end{eqnarray} and $\check{\mathbf{S}}$ is replaced by $\check{\mathbf{S}}_w$ or $\check{\mathbf{S}}_t$. The above matrices can be entirely expressed in terms of dot products. By replacing dot products with kernel evaluations, $\check{\mathbf{S}}_b$, $\check{\mathbf{S}}_w$ and $\check{\mathbf{S}}_t$ (called hereafter kernel scatter matrices) can be viewed as the scatter matrices of the Gram matrix \begin{equation} \mathbf{K} = \mathbf{\Phi}^T \mathbf{\Phi}, \label{E:GramMat} \end{equation} where each column of $\mathbf{K}$ is considered as a data point in $\mathbb{R}^N$ \cite{Park05_J}. The matrix $\check{\mathbf{S}}$ is positive semidefinite (PSD) \cite{Park05_J} (as shown also in Sections \ref{S:AKDA} and \ref{S:AGDA}), and, thus, the optimal solution of (\ref{E:NDA_MIN_MAX_CRIT_FEA_SPACE}) is commonly approximated using pseudoinverse criteria \cite{Zhang10}.
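The view of the kernel scatter matrices as ordinary scatter matrices of the columns of $\mathbf{K}$ can be sketched as follows (a NumPy toy example under our own naming; the identity $\check{\mathbf{S}}_t = \check{\mathbf{S}}_b + \check{\mathbf{S}}_w$ is verified numerically):

```python
import numpy as np

def kernel_scatter_matrices(K, labels):
    """Scatter matrices of the Gram matrix K (N x N), treating each
    column of K as a data point in R^N (labels: length-N class ids)."""
    N = K.shape[0]
    m = K.mean(axis=1, keepdims=True)              # global mean column
    Sb = np.zeros((N, N)); Sw = np.zeros((N, N))
    for c in np.unique(labels):
        Kc = K[:, labels == c]                     # columns of class c
        mc = Kc.mean(axis=1, keepdims=True)
        Sb += Kc.shape[1] * (mc - m) @ (mc - m).T  # between-class part
        Sw += (Kc - mc) @ (Kc - mc).T              # within-class part
    St = (K - m) @ (K - m).T                       # total scatter
    return Sb, Sw, St

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 10))
# RBF Gram matrix on toy data
K = np.exp(-0.5 * np.sum((X[:, :, None] - X[:, None, :])**2, axis=0))
labels = np.array([0]*4 + [1]*6)
Sb, Sw, St = kernel_scatter_matrices(K, labels)
assert np.allclose(St, Sb + Sw)                    # S_t = S_b + S_w
assert np.all(np.linalg.eigvalsh(Sw) > -1e-8)      # SPSD
```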
That is, we compute the expansion-coefficient matrix $\mathbf{W} \in \mathbb{R}^{N \times D}$ that maximizes \cite{Zhang10} \begin{equation} \underset{\mathbf{W}}{\argmax} \trace( (\mathbf{W}^T \check{\mathbf{S}} \mathbf{W})^{+} \mathbf{W}^T \check{\mathbf{S}}_b \mathbf{W}). \label{E:NDA_CRIT_KER_PSEUDO} \end{equation} Considering that all kernel scatter matrices are symmetric PSD (SPSD), the above optimization problem is equivalent to solving the symmetric-semidefinite generalized eigenproblem (GEP) \begin{equation} \check{\mathbf{S}}_b \mathbf{W} = \check{\mathbf{S}} \mathbf{W} \mathbf{\Lambda}, \end{equation} where $\mathbf{\Lambda} = \diag(\lambda_1, \dots, \lambda_D)$, $\mathbf{W} = [\mathbf{w}_1, \dots, \mathbf{w}_D] \in \mathbb{R}^{N \times D}$ and $(\lambda_i, \mathbf{w}_i)$ are the $D$ nonzero eigenpairs (EPs) of the SPSD matrix pencil $(\check{\mathbf{S}}_b, \check{\mathbf{S}})$ with eigenvalue and eigenvector sets \begin{equation} \begin{array}{l} \lambda(\check{\mathbf{S}}_b, \check{\mathbf{S}}) = \{ \lambda_i \in \mathbb{R}_+^* | \det(\check{\mathbf{S}}_b - \lambda_i \check{\mathbf{S}})= 0; \lambda_i > \lambda_j \, \mbox{if} \, i < j \} \\ g(\check{\mathbf{S}}_b, \check{\mathbf{S}}) = \{ \mathbf{w}_i \in \mathbb{R}^N | \check{\mathbf{S}}_b \mathbf{w}_i = \lambda_i \check{\mathbf{S}} \mathbf{w}_i \}. \label{E:LDA_PENCIL} \end{array} \end{equation} Simultaneous diagonalization techniques are used to solve the problem for both an SPD and an SPSD matrix $\check{\mathbf{S}}$, as explained in the following. Firstly, we note that it can be easily shown that $\check{\mathbf{S}}_t$ is SPSD and \cite{Huang02,Ye06_J} \begin{equation} \begin{array}{l} \displaystyle \qquad \quad \;\; \check{\mathbf{S}}_t = \check{\mathbf{S}}_b + \check{\mathbf{S}}_w, \\ \displaystyle \nullsp(\check{\mathbf{S}}_t) = \nullsp(\check{\mathbf{S}}_b) \cap \nullsp(\check{\mathbf{S}}_w).
\end{array} \end{equation} Then, according to Theorem 8.7.1 in \cite{Golub13}, there exists a matrix $\mathbf{W}$ that simultaneously diagonalizes the above scatter matrices. It has been shown that the matrix optimizing the above criterion is given by \begin{equation} \mathbf{W} = \mathbf{\Gamma} \mathbf{\Upsilon} \end{equation} where $\mathbf{\Upsilon} \in \mathbb{R}^{D \times D}$ is any nonsingular matrix and $\mathbf{\Gamma} \in \mathbb{R}^{N \times D}$ provides the following diagonalization \begin{eqnarray} \mathbf{\Gamma}^T \check{\mathbf{S}}_b \mathbf{\Gamma} &=& \diag( \mathbf{I}_{r \times r}, \mathbf{D}_{b} )\\ \mathbf{\Gamma}^T \check{\mathbf{S}}_w \mathbf{\Gamma} &=& \diag( \mathbf{0}_{r \times r}, \mathbf{D}_{w} )\\ \mathbf{\Gamma}^T \check{\mathbf{S}}_t \mathbf{\Gamma} &=& \mathbf{I}_{D \times D} \\ \mathbf{D}_{b} &=& \diag(b_{r+1}, \dots, b_{r+s} )\\ \mathbf{D}_{w} &=& \diag(w_{r+1}, \dots, w_{r+s} ) \end{eqnarray} and $1 > b_{r+1} \geq \dots \geq b_{r+s} > 0$, $0 < w_{r+1} \leq \dots \leq w_{r+s} < 1$, $b_i + w_i = 1$, $r = r_t - r_w$, $s = D + r_w - r_t$, $r_w = \rank(\check{\mathbf{S}}_w)$, $r_t = \rank(\check{\mathbf{S}}_t)$.
Thus, the optimal transformation is provided by \begin{equation} \mathbf{\Psi} = \mathbf{\Phi} \mathbf{\Gamma} \mathbf{\Upsilon}. \end{equation} Different constraints on the transformation matrix characterize a set of different algorithms \begin{eqnarray} \mathbf{\Psi}^T \mathbf{S}_w \mathbf{\Psi} &=& \mathbf{W}^T \check{\mathbf{S}}_w \mathbf{W} = \diag( \mathbf{0}_{r \times r}, \mathbf{I}_{s \times s} ) \label{E:KDA}\\ \mathbf{\Psi}^T \mathbf{S}_t \mathbf{\Psi} &=& \mathbf{W}^T \check{\mathbf{S}}_t \mathbf{W} = \mathbf{I}_{D \times D} \label{E:UKDA}\\ \mathbf{\Psi}^T \mathbf{\Psi} &=& \mathbf{W}^T \mathbf{K} \mathbf{W} = \mathbf{I}_{D \times D} \label{E:OKDA} \end{eqnarray} where (\ref{E:KDA}) corresponds to conventional KDA, (\ref{E:UKDA}) to uncorrelated kernel discriminant analysis (UKDA), and (\ref{E:OKDA}) to orthogonal kernel discriminant analysis (OKDA). To this end, many authors exploit the special structure of the kernel scatter matrices and derive suitable factorizations of them. We should note that all the above methods (KDA, UKDA, OKDA) are equivalent when the following property holds \begin{equation} \rank(\mathbf{S}_t) = \rank(\mathbf{S}_w) + \rank(\mathbf{S}_b), \end{equation} which is usually the case for high-dimensional feature vectors. Indeed, when the above is valid, $r = r_b$, $s = 0$, and \begin{eqnarray} \mathbf{\Sigma}_{b} &=& \diag( \mathbf{I}_{r_b \times r_b}, \mathbf{0}_{r_t -r - s \times r_t -r - s})\\ \mathbf{\Sigma}_{w} &=& \diag( \mathbf{0}_{r_b \times r_b}, \mathbf{I}_{r_t -r - s \times r_t -r - s}). \end{eqnarray} This explains why the above methods provide equivalent performance in experimental evaluations with high-dimensional data.
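A minimal NumPy sketch of the simultaneous diagonalization above (our own helper, not the paper's implementation): it whitens with respect to $\check{\mathbf{S}}_t = \check{\mathbf{S}}_b + \check{\mathbf{S}}_w$ and then diagonalizes $\check{\mathbf{S}}_b$, so the resulting diagonals satisfy $b_i + w_i = 1$:

```python
import numpy as np

def simultaneous_diagonalization(Sb, Sw, tol=1e-10):
    """Find Gamma with Gamma^T St Gamma = I and Gamma^T Sb Gamma diagonal
    (hence Gamma^T Sw Gamma diagonal as well), where St = Sb + Sw."""
    St = Sb + Sw
    lam, U = np.linalg.eigh(St)
    keep = lam > tol * lam.max()                # drop the null space of St
    W1 = U[:, keep] / np.sqrt(lam[keep])        # whitening: W1^T St W1 = I
    mu, V = np.linalg.eigh(W1.T @ Sb @ W1)      # symmetric, so V is orthogonal
    order = np.argsort(mu)[::-1]                # descending b_i
    return W1 @ V[:, order]

# toy SPSD pencil built from random factors
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4)); B = rng.standard_normal((6, 5))
Sb, Sw = A @ A.T, B @ B.T
G = simultaneous_diagonalization(Sb, Sw)
Db = G.T @ Sb @ G; Dw = G.T @ Sw @ G
assert np.allclose(G.T @ (Sb + Sw) @ G, np.eye(G.shape[1]), atol=1e-8)
assert np.allclose(Db, np.diag(np.diag(Db)), atol=1e-8)
assert np.allclose(Db + Dw, np.eye(G.shape[1]), atol=1e-8)   # b_i + w_i = 1
```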
During classification, a test sample $\mathbf{x}$ is projected to the discriminant subspace as $\mathbf{\Psi}^T \bg{\phi}(\mathbf{x}) = \mathbf{W}^T [k(\mathbf{x}_1, \mathbf{x}), \dots, k(\mathbf{x}_N, \mathbf{x})]^T$, i.e., using only kernel evaluations against the training set. \subsection{Regularization} One popular way to deal with the singularity of $\check{\mathbf{S}}$ is to apply a ridge-type regularization operator \begin{equation} \check{\mathbf{S}} \leftarrow \check{\mathbf{S}} + \epsilon \mathbf{I}_{N \times N} \label{E:REGU} \end{equation} where $\epsilon \in \mathbb{R}_{+}^{\ast}$ is a regularization constant; or to remove the null space of $\check{\mathbf{S}}_w$. Then, one way to identify the optimal transformation is to use the inverse of $\check{\mathbf{S}}$ and identify the nonzero EPs of the pencil $(\check{\mathbf{S}}^{-1} \check{\mathbf{S}}_b, \mathbf{I})$. However, exploiting the inverse breaks the symmetry and definiteness of the pencil, and thus standard unsymmetric methods are necessary, which are slower and more susceptible to round-off errors in comparison to symmetric ones \cite{Golub13}. Moreover, when $\check{\mathbf{S}} = \check{\mathbf{S}}_w$, the null space of $\check{\mathbf{S}}_w$ may contain significant discriminant information. When $\check{\mathbf{S}}$ is SPD, a common way to solve the problem is by using the Cholesky factorization and the symmetric QR algorithm (Algorithm 8.7.2 in \cite{Golub13}). Specifically, perform the Cholesky factorization $\check{\mathbf{S}} = \mathbf{L} \mathbf{L}^T$, use the symmetric QR algorithm on $\mathbf{C} = \mathbf{L}^{-1} \check{\mathbf{S}}_b \mathbf{L}^{-T}$ to compute the orthogonal eigenvector matrix $\mathbf{G}$, and set $\mathbf{W} = \mathbf{L}^{-T} \mathbf{G}$. It can be easily shown that $\mathbf{W}$ provides the desired simultaneous diagonalization \begin{eqnarray} \mathbf{W}^T \check{\mathbf{S}}_b \mathbf{W} &=& \mathbf{\Lambda}, \\ \mathbf{W}^T \check{\mathbf{S}} \mathbf{W} &=& \mathbf{I}. \end{eqnarray} The above techniques may suffer from large round-off errors and may be time-consuming.
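The Cholesky-based procedure can be sketched as follows (a NumPy/SciPy toy version under our own naming; SciPy's `eigh` plays the role of the symmetric QR algorithm, and a small ridge term keeps the factorization well posed):

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

def gep_via_cholesky(Sb, S, eps=1e-6):
    """Solve Sb W = S W Lambda for an SPSD pencil by ridge regularization
    of S followed by Cholesky + a symmetric eigendecomposition."""
    N = S.shape[0]
    L = cholesky(S + eps * np.eye(N), lower=True)       # S + eps I = L L^T
    # C = L^{-1} Sb L^{-T}, via triangular solves (no explicit inverse)
    C = solve_triangular(L, solve_triangular(L, Sb, lower=True).T,
                         lower=True).T
    lam, G = eigh(C)                                    # C = G diag(lam) G^T
    W = solve_triangular(L, G, lower=True, trans='T')   # W = L^{-T} G
    return lam[::-1], W[:, ::-1]                        # descending order

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)); B = rng.standard_normal((5, 5))
Sb, S = A @ A.T, B @ B.T
lam, W = gep_via_cholesky(Sb, S)
Se = S + 1e-6 * np.eye(5)                               # regularized S
assert np.allclose(W.T @ Se @ W, np.eye(5), atol=1e-6)        # W^T S W = I
assert np.allclose(Sb @ W, Se @ W @ np.diag(lam), atol=1e-6)  # the GEP
```

Note that $\mathbf{W}$ is not orthogonal in general; only $\mathbf{G}$ is.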
\subsection{Factorization $\mathbf{H} \mathbf{H}^T$} In \cite{Park05_J,Xiong06}, the scatter matrices in the feature space are factorized as \begin{eqnarray} \mathbf{S}_b &=& \mathbf{H}_b \mathbf{H}_b^T, \label{E:Sb_Hb_FAC}\\ \mathbf{S}_w &=& \mathbf{H}_w \mathbf{H}_w^T, \label{E:Sw_Hw_FAC}\\ \mathbf{S}_t &=& \mathbf{H}_t \mathbf{H}_t^T, \label{E:St_Ht_FAC} \end{eqnarray} where $\mathbf{H}_b \in \mathbb{R}^{F \times C}$ and $\mathbf{H}_w, \mathbf{H}_t \in \mathbb{R}^{F \times N}$. Then, the problem can be reformulated using the following factorization \begin{eqnarray} \check{\mathbf{S}}_b &=& \mathbf{\Phi}^T \mathbf{S}_b \mathbf{\Phi} = \mathbf{K}_b \mathbf{K}_b^T, \label{E:Sb_Hb_FAC_KER}\\ \check{\mathbf{S}}_w &=& \mathbf{\Phi}^T \mathbf{S}_w \mathbf{\Phi} = \mathbf{K}_w \mathbf{K}_w^T, \label{E:Sw_Hw_FAC_KER}\\ \check{\mathbf{S}}_t &=& \mathbf{\Phi}^T \mathbf{S}_t \mathbf{\Phi} = \mathbf{K}_t \mathbf{K}_t^T, \label{E:St_Ht_FAC_KER} \end{eqnarray} where \begin{eqnarray} \mathbf{K}_b &=& \mathbf{\Phi}^T \mathbf{H}_b \in \mathbb{R}^{N \times C}, \\ \mathbf{K}_w &=& \mathbf{\Phi}^T \mathbf{H}_w \in \mathbb{R}^{N \times N}, \\ \mathbf{K}_t &=& \mathbf{\Phi}^T \mathbf{H}_t \in \mathbb{R}^{N \times N}. \end{eqnarray} Then, the GEP is reformulated as \begin{equation} \mathbf{K}_b \mathbf{K}_b^T \mathbf{W} = \check{\mathbf{K}} \check{\mathbf{K}}^T \mathbf{W} \mathbf{\Lambda} \label{E:GEP_GSVD_FAC} \end{equation} where $\check{\mathbf{K}}$ equals $\mathbf{K}_w$ or $\mathbf{K}_t$. The above problem is connected with the computation of the generalized singular value decomposition (GSVD) of the matrix pair $(\check{\mathbf{K}}^T, \mathbf{K}_b^T)$.
That is, the right generalized singular vectors and the squares of the generalized singular values of the matrix pair $(\check{\mathbf{K}}^T, \mathbf{K}_b^T)$ are the generalized eigenvectors and generalized eigenvalues of the matrix pencil $(\mathbf{K}_b \mathbf{K}_b^T, \check{\mathbf{K}} \check{\mathbf{K}}^T) = (\check{\mathbf{S}}_b, \check{\mathbf{S}})$ (Theorem 8.7.4 in \cite{Golub13}). These are computed using the complete orthogonal decomposition of $[\mathbf{K}_b, \mathbf{K}_w]^T$ and the SVD of the derived orthogonal matrix \cite{Park05_J}. The solution of the GEP in (\ref{E:GEP_GSVD_FAC}) can also be computed using the SVD of $\mathbf{K}_t$ \cite{Xiong06}. Differently from the above methods, in \cite{Lu03,Lu_05} the cross-product algorithm is used to exploit the above factorization and provide efficient solutions. Specifically, the EVD of $\mathbf{H}_b^T \mathbf{H}_b$ is first computed to derive the eigenpairs $\mathbf{V}_b, \mathbf{\Lambda}_b$. Then, the EVD of $\mathbf{U}_b^T \mathbf{S}_w \mathbf{U}_b$ is computed to derive the eigenvector matrix $\mathbf{P}$, where $\mathbf{U}_b = \mathbf{H}_b \mathbf{V}_b \mathbf{\Lambda}_b^{-1}$ satisfies $\mathbf{U}_b^T \mathbf{S}_b \mathbf{U}_b = \mathbf{I}$. Using the projection matrix $\mathbf{Q} = \mathbf{U}_b \mathbf{P}$ we obtain $\mathbf{Q}^T \mathbf{S}_w \mathbf{Q} = \mathbf{\Lambda}_w$. The final projection matrix is then defined as $\mathbf{\Gamma} = \mathbf{Q} \mathbf{\Lambda}_w^{-1/2}$. Note that $\mathbf{\Gamma}^T \mathbf{S}_w \mathbf{\Gamma} = \mathbf{I}$ and $\mathbf{\Gamma}^T \mathbf{S}_b \mathbf{\Gamma} = \mathbf{\Lambda}_w^{-1}$. The disadvantage of this method is that $\mathbf{\Lambda}_w$ may contain zero or near-zero values, making it difficult to select the correct eigenvectors. This effect may be alleviated by regularizing the within-class and total scatter matrices, at the cost of additional time spent tuning the regularization parameter.
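The cross-product steps can be sketched as follows (a NumPy toy version with our own naming, stated for given $\mathbf{H}_b$ and $\mathbf{S}_w$; we use the normalization $\mathbf{U}_b = \mathbf{H}_b \mathbf{V}_b \mathbf{\Lambda}_b^{-1}$, chosen so that $\mathbf{U}_b^T \mathbf{S}_b \mathbf{U}_b = \mathbf{I}$):

```python
import numpy as np

def cross_product_da(Hb, Sw, tol=1e-10):
    """Cross-product-style algorithm: diagonalize S_b = Hb Hb^T through the
    small matrix Hb^T Hb, then diagonalize S_w in the resulting subspace."""
    lam_b, Vb = np.linalg.eigh(Hb.T @ Hb)        # EVD of the C x C cross-product
    keep = lam_b > tol * lam_b.max()
    Ub = Hb @ Vb[:, keep] / lam_b[keep]          # U_b = Hb Vb Lam_b^{-1}
    lam_w, P = np.linalg.eigh(Ub.T @ Sw @ Ub)    # EVD in the reduced space
    Q = Ub @ P                                   # Q^T Sw Q = Lam_w
    Gamma = Q / np.sqrt(lam_w)                   # Gamma = Q Lam_w^{-1/2}
    return Gamma, lam_w

rng = np.random.default_rng(6)
Hb = rng.standard_normal((8, 3))
Bw = rng.standard_normal((8, 8))
Sw = Bw @ Bw.T                                   # SPD within-class scatter
Gamma, lam_w = cross_product_da(Hb, Sw)
Sb = Hb @ Hb.T
assert np.allclose(Gamma.T @ Sw @ Gamma, np.eye(Gamma.shape[1]), atol=1e-6)
assert np.allclose(Gamma.T @ Sb @ Gamma, np.diag(1.0 / lam_w), atol=1e-6)
```

The assertions check the two claimed properties, $\mathbf{\Gamma}^T \mathbf{S}_w \mathbf{\Gamma} = \mathbf{I}$ and $\mathbf{\Gamma}^T \mathbf{S}_b \mathbf{\Gamma} = \mathbf{\Lambda}_w^{-1}$; near-zero entries of $\mathbf{\Lambda}_w$ would make the last step ill-conditioned, which is exactly the weakness discussed in the text.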
However, the major disadvantage is that the cross-product algorithm in the first step is very susceptible to round-off errors. \subsection{Factorization $\mathbf{K} \mathbf{B} \mathbf{K}$} Generalized discriminant analysis (GDA) is based on the assumption that the data are centered in the feature space. Under this setting, the between- and total-scatter matrices are factorized as follows \begin{eqnarray} \bar{\mathbf{S}}_{b} &=& \bar{\mathbf{K}} \bar{\mathbf{B}}_b \bar{\mathbf{K}}, \\ \bar{\mathbf{S}}_{t} &=& \bar{\mathbf{K}} \bar{\mathbf{K}}, \end{eqnarray} where $\bar{\mathbf{B}}_b \in \mathbb{R}^{N \times N}$ is a symmetric block diagonal matrix of the form \begin{eqnarray} \bar{\mathbf{B}}_b &=& \diag(\bar{\mathbf{B}}_{b_1}, \dots, \bar{\mathbf{B}}_{b_C}) \\ \bar{\mathbf{B}}_{b_i} &=& \frac{1}{N_i} \mathbf{1}_{N_i \times N_i} \end{eqnarray} and $\mathbf{1}_{N_i \times N_i}$ denotes the $N_i \times N_i$ all-ones matrix. GDA computes the SVD of $\bar{\mathbf{K}}$ and solves an equivalent eigenproblem. To speed up the above computation, SRKDA first computes the following EVD, identifying the orthonormal eigenvector matrix $\mathbf{V}_b \in \mathbb{R}^{N \times (C-1)}$ \begin{equation} \bar{\mathbf{B}}_b \mathbf{V}_b = \mathbf{V}_b \bar{\mathbf{\Lambda}}_b. \label{E:Bb_SRLDA_EVD} \end{equation} Next, the transformation matrix $\mathbf{G} \in \mathbb{R}^{N \times D}$ is identified by solving the following linear system for $\mathbf{G}$ \begin{equation} \bar{\mathbf{K}} \mathbf{G} = \mathbf{V}_b. \label{E:LS_SRLDA} \end{equation} It can be easily shown that the computed transformation matrix has the required properties \begin{eqnarray} \mathbf{G}^T \bar{\mathbf{S}}_b \mathbf{G} &=& \bar{\mathbf{\Lambda}}_b, \\ \mathbf{G}^T \bar{\mathbf{S}}_t \mathbf{G} &=& \mathbf{I}_{D \times D}. \end{eqnarray} The eigenvectors of $\bar{\mathbf{B}}_b$ can be efficiently obtained by inspection of this matrix.
Particularly, observing that $\mathbf{1}_{N_j}$ is an eigenvector of $\bar{\mathbf{B}}_{b_j}$, a set of $C$ orthogonal eigenvectors of $\bar{\mathbf{B}}_{b}$ is obtained as $\mathbf{V} = [\mathbf{v}_{1}, \dots, \mathbf{v}_{C}]$, where the $j$th block of $\mathbf{v}_i$ is \begin{equation}\label{E:EM_OPT} \mathbf{v}_{i,j} = \left \{ \begin{array}{lll} \mathbf{0}_{N_j} & \mbox{if} & i \neq j, \\ \mathbf{1}_{N_j} & \mbox{if} & i = j, \end{array} \right. \end{equation} with repeated eigenvalue $1$. However, note that the vector $\mathbf{1}_N$ lies in the null space of the centered training set and thus in the null space of the centered Gram matrix, i.e., $\bar{\mathbf{\Phi}} \mathbf{1}_N = \mathbf{0}$ and $\bar{\mathbf{K}} \mathbf{1}_N = \mathbf{0}$. Therefore, the all-ones vector corresponds to the trivial solution of the linear system and is thus useless. Since $1$ is a repeated eigenvalue, we are free to pick $\mathbf{1}_N$ as the first eigenvector and choose the rest as any other orthogonal vectors in the space spanned by $\{\mathbf{v}_c \}$; the Gram--Schmidt process is used to orthogonalize the set of eigenvectors $\{ \mathbf{v}_c \,|\, c=1,\dots, C \}$ against $\mathbf{1}_N$. The derived vectors are then denoted as \begin{equation} \{\tilde{\mathbf{v}}_c, c=1, \dots, C-1 \,|\, \tilde{\mathbf{v}}_c^T \mathbf{1}_N = 0, \tilde{\mathbf{v}}_i^T \tilde{\mathbf{v}}_i = 1, \tilde{\mathbf{v}}_i^T \tilde{\mathbf{v}}_j = 0, i \neq j \}. \end{equation} Note that the Gram--Schmidt process is applied to the initial set of eigenvectors $\{\mathbf{v}_c \}$ and not to random vectors in $\mathbb{R}^N$. Concerning the solution of the linear system, this can be computed efficiently, e.g., using the Cholesky factorization. Thus, the computational cost of identifying $\mathbf{V}$ is negligible, and the overall procedure is extremely fast in comparison to other nonlinear DA algorithms. \section{Conclusions} \label{S:CONCL} In this paper we have proposed an efficient generalized eigendecomposition framework and showed how it can be applied to accelerate fundamental NDA techniques.
Experimental results showed that the combination of LSVM with the accelerated methods achieves state-of-the-art performance in terms of both accuracy and training time. \bibliographystyle{IEEEtran}